
AI integrated Rendering


twk

Question


23 hours ago, twk said:

Ahh looks like Archicad beat VW to the punch..


They are basically just wrapping a version of the Stable Diffusion API with ControlNet. I don't think it'll be hard for the Vectorworks team to figure this out.

Good move by ArchiCAD to be a first mover, though I can't imagine they will be exclusive on this for long. The models will still struggle to make good landscape scenes until someone trains custom models on quality landscape-specific material.
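For anyone curious what "wrapping Stable Diffusion with ControlNet" means in practice: the CAD viewport is reduced to a conditioning image (edges, depth, etc.) that steers generation so the output keeps the model's geometry. Below is a minimal sketch of the conditioning step using a crude pure-NumPy edge map as a stand-in for Canny; the Hugging Face diffusers calls in the comments are illustrative, and none of the model IDs are claims about what ArchiCAD actually ships.

```python
import numpy as np

def edge_map(img: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Crude gradient-magnitude edge map (a stand-in for Canny) from a
    grayscale image in [0, 1]. ControlNet's canny variant consumes an
    image like this as its conditioning input."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.uint8) * 255

# A synthetic "viewport export": a dark building mass on a light background.
view = np.ones((64, 64))
view[16:48, 16:48] = 0.0
cond = edge_map(view)  # edges appear only along the square's outline

# With the conditioning image in hand, the generation step with Hugging Face
# diffusers looks roughly like this (model IDs are examples, not what any
# vendor actually uses):
#
#   from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
#   controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
#   pipe = StableDiffusionControlNetPipeline.from_pretrained(
#       "runwayml/stable-diffusion-v1-5", controlnet=controlnet)
#   image = pipe("timber house at dusk, photoreal", image=cond).images[0]
```

The point is that the hard part (the diffusion model and ControlNet weights) already exists off the shelf; the CAD vendor's work is mostly exporting the viewport, building the conditioning image, and calling the pipeline.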

4 hours ago, Poot said:


They are basically just wrapping a version of the Stable Diffusion API with ControlNet. I don't think it'll be hard for the Vectorworks team to figure this out.

Good move by ArchiCAD to be a first mover, though I can't imagine they will be exclusive on this for long. The models will still struggle to make good landscape scenes until someone trains custom models on quality landscape-specific material.

. . . sourced from creators that gave permission, were paid for their effort, and credited for helping train the models

On 11/18/2023 at 8:36 PM, BartHays said:

. . . sourced from creators that gave permission, were paid for their effort, and credited for helping train the models


Where did you read that ArchiCAD sourced material from creators? Stable Diffusion is an open-source model, and people also train smaller models on any imaginable type of source material, not specifically Creative Commons or licensed, to cater to specific styles (realistic, cartoon, architecture, etc.).

There is some likelihood that people at ArchiCAD trained their own model within SD tailored to architecture renders, but I am skeptical they built it using only freely available material.

In any case, the base Stable Diffusion model was trained on scraped internet content, so it does not credit creators. That doesn't mean every use of it raises a crediting issue, since a lot of the general-purpose material on the internet has no commercial or artistic relevance.
 

While copying a specific artist's style is wrong and sh** behaviour, I think a model can still be used fairly even having been trained on varied material, so long as what you are prompting/creating is not ripping someone off. We do this with music, food, culture, etc., so fair usage is very achievable.

8 hours ago, Poot said:


[…] While copying a specific artist's style is wrong and sh** behaviour, I think a model can still be used fairly even having been trained on varied material, so long as what you are prompting/creating is not ripping someone off. We do this with music, food, culture, etc., so fair usage is very achievable.

To me, the interesting bit kicks in when you can overlay your own style on the generated output of the model.

https://blog.metaphysic.ai/custom-styles-in-stable-diffusion-without-retraining-or-high-computing-resources/

 

I could see us creating many styles in the years to come as this sort of function becomes more mainstream: a suburb style that pushes the context of the image to match the site surroundings, company presentation styles based on the hand-drawn work of people in the company, interiors based on photos of our own work.

On 11/16/2023 at 4:46 PM, Your Name Here said:

Have we become so lazy that we have outsourced our regurgitation of others' designs to AI too?

I spent some time this past semester as a guest juror for a senior landscape studio.

It's sad how much time these students spend learning how to prompt AI instead of learning to solve basic design problems.

I think we are doomed.

 

Students (us included) need to learn both. Ignoring AI is like ignoring electricity or the internet. The best strategy is to adopt the new tools into your workflow.

2 hours ago, Bill_Rios said:

Students (us included) need to learn both. Ignoring AI is like ignoring electricity or the internet. The best strategy is to adopt the new tools into your workflow.

Totally agree with you *if* you add the word "ethically" to the end of your sentence.

 

And, to the point @Poot made above, I am not saying ArchiCAD has acted unethically, but I also can't say Stable Diffusion, the tool underneath ArchiCAD's AI engine, was created ethically. My point is that we all have a choice in how we use (or avoid) these tools. Like a hammer, it can be used for its intended purpose, or even creatively in unexpected ways, but to use it for harm, even through ignorance, is a path we should all try to avoid.

 

To @Matt Overton above: I agree, this is how I am hoping to use AI text-to-image or image-to-image generation. If I can train a model on my sketch style, on various rendering styles that I created, or on legitimately acquired reference images, it would be a huge time saver.

 

I have been experimenting with InvokeAI, a standalone Stable Diffusion app for the desktop, and with training LoRAs to add my style to the model. There are thousands of add-ons out there to "tweak" the app to give you the results you like. To my eye, most of these add-ons are the real problem, clearly scraping copyrighted material from the internet. But even the underlying technology, the Stable Diffusion model, as well as the other big names out there, is facing real criticism and court challenges all the way up to the US Supreme Court. Until the issues of fair use, copyright, and credit/compensation are resolved, I am proposing we keep the use of AI experimental and out of our professional practice.

 

Bart

 

 

 

 

3 hours ago, BenjaminGuler said:

We're working on the Veras integration for Vectorworks. We have the public beta available, and it would be great if we could have more users test it on Mac and Windows.

 

Great! I will definitely test it out. I'm a landscape architect and have worked a lot with SD/ControlNet/etc., but I've been too lazy to do my own LoRA training for landscape. It can often be hard to get good results because of the complexity of outdoor scenes (where building geometry is absent or of minimal importance), which involve a variety of complicated/nested/overlapping geometries, materials, and objects.

VW has a large user base in the landscape architecture world, so it will be interesting to see how it works for landscape, as the scope of image training is a bit different.

I will give it a shot over the next week and post some feedback.

7 hours ago, BenjaminGuler said:

We're working on the Veras integration for Vectorworks. We have the public beta available, and it would be great if we could have more users test it on Mac and Windows.

 

Here's a link to the release post: https://forum.evolvelab.io/t/veras-vectorworks-release-1-5-1-0-beta/5641

[Image: modern design with large windows, interior lights, timber building, during winter, ((snow)), blizzar]

 

Great! @BenjaminGuler

Have installed and tried running it, but I just get a blank window when trying to run it from the web palette (as per the instructions).

 

[Screenshot]

5 hours ago, twk said:

@Poot, yes, I have saved views in the file; still no luck. Have you got it to work?

Have you tried using the saved view and then opening the palette? Maybe it's not necessary.

 

Yes, I got it to work fine.


My MacBook is complaining about the "VerasPalette2024.vwlibrary" file.

 

Tried right-clicking and opening it; still doesn't help...

I also allowed it in the Mac system settings, but the message reappears again and again 😞

 

Last chance: "sudo spctl --master-disable" ?

(don't want to, feels unsafe)

(tried VW 2023 SP8 and 2024 update 3)


[Screenshot]

 

16 minutes ago, matteoluigi said:

got the vwlibrary to stop complaining; however, white window


That seems to have been a symptom of overriding Apple's security. I believe Veras is being recognized as the wrong type of file at that point. Try restarting Vectorworks; does Veras work afterwards?

More details on that issue can be found in a post I made in this thread here: https://forum.evolvelab.io/t/getting-started-with-veras-for-vectorworks/5370/6

For most people experiencing this issue on Mac, I believe the cause is what I stated above. It has also happened for a single Windows user I'm aware of; for that problem, a patch should be coming soon.
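For anyone hitting the Gatekeeper warning who would rather not run `sudo spctl --master-disable` (which turns Gatekeeper off system-wide), a gentler alternative is to clear the quarantine attribute macOS attaches to downloaded files, on just that one file. The path below is an example only; point it at wherever the plug-in actually lives on your machine.

```shell
# Example path only -- adjust to your actual Veras plug-in location.
PLUGIN="$HOME/Library/Application Support/Vectorworks/2024/Plug-ins/VerasPalette2024.vwlibrary"

if [ -e "$PLUGIN" ]; then
    # Remove the com.apple.quarantine flag that triggers the Gatekeeper dialog.
    xattr -d com.apple.quarantine "$PLUGIN"
else
    echo "Plug-in not found at: $PLUGIN"
fi
```

After clearing the flag, restart Vectorworks and try opening the palette again.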

