
AI integrated Rendering


twk

Question


9 hours ago, VIRTUALENVIRONS said:

That is so cool.  How fast are the changes after a text prompt?

Midjourney and most of the ones I've seen cap render time at 1 minute of GPU time for either a full image or a sample sheet of options. You can set it longer with self-hosting or higher account tiers, but the results are very good in that time for mood work.


Fake it till we make it....

Any SDK python gurus want to point us in the right direction?

 

Can I get a script that triggers on completion of rendering of a viewport and then grabs the produced output?

 

I suspect the hooks exist, as Image Effects would need to do something similar?

Indeed, they'd need to handle the return as well. If we could leverage that, then we are part of the way to making our own.
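For the "trigger on completion of a viewport render" part: I don't know of a documented render-complete callback, but a pragmatic workaround is to export to a known folder and watch it until a new image file appears and stops growing. This is a generic stdlib sketch of that idea, nothing here is a Vectorworks API call:

```python
import time
from pathlib import Path

def wait_for_render(watch_dir, already_seen=None,
                    exts=(".png", ".jpg", ".tif"),
                    timeout=600, poll=2.0):
    """Poll watch_dir until a new image file appears and its size is
    stable (i.e. the export has finished writing), then return its path.
    Returns None on timeout."""
    watch = Path(watch_dir)
    # snapshot of files that existed before the render started
    seen = ({p.name for p in watch.glob("*")}
            if already_seen is None else set(already_seen))
    deadline = time.time() + timeout
    while time.time() < deadline:
        for p in watch.glob("*"):
            if p.name in seen or p.suffix.lower() not in exts:
                continue
            # wait until the file stops growing before grabbing it
            size = -1
            while p.stat().st_size != size:
                size = p.stat().st_size
                time.sleep(poll)
            return p
        time.sleep(poll)
    return None
```

Whatever returns from this would then be handed to the AI platform, and its response written back into the document.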

3 hours ago, Matt Overton said:

Any SDK python gurus want to point us in the right direction?

😄 That's more one for the SDK guys. However, you won't get far without access to an AI platform like Veras, and creating the platform yourself is quite a bit more difficult...

 

So, whoever wants to implement something similar to Veras needs a connection to a similar platform,

then they need to know how to code the interface...

 

Doesn't sound that easy. Sounds more like a job for a big company than for a single person...


Apple has been optimising one as a command-line tool on macOS, with support for both Apple Silicon and Intel.
 

There are others that allow API access in Python, so to me we could at least build a proof of concept and get it working with VW. Building our own is way beyond that, but
 

As I read it, if you self-host you can start doing dedicated training against your own image library and start producing your own style.
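The usual first step for training on your own library is pairing each image with a same-named `.txt` caption file, which is the folder layout most LoRA trainers (kohya_ss and similar) expect. A small sketch, with the trigger word as an assumed placeholder:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def prepare_captions(library_dir, trigger_word="mystudio style"):
    """For every image in library_dir, ensure a same-named .txt caption
    file exists. Missing captions are seeded with the trigger word so
    the trained style can later be invoked by it in a prompt.
    Returns the list of (image, caption) pairs."""
    pairs = []
    for img in sorted(Path(library_dir).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue
        cap = img.with_suffix(".txt")
        if not cap.exists():
            cap.write_text(trigger_word + "\n")
        pairs.append((img, cap))
    return pairs
```

From there you would hand-edit the seeded captions to describe each image, then point the trainer at the folder.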

4 hours ago, twk said:

If you're up for following along with some video tutorials, here are some good ones, as I believe Veras uses a custom-built ControlNet on top of Stable Diffusion:

there's no ControlNet model view in my txt2img section, why? 😉 Maybe I am using the wrong weights? Or the wrong Stable Diffusion version?

6 hours ago, matteoluigi said:

there's no ControlNet model view in my txt2img section, why? 😉 Maybe I am using the wrong weights? Or the wrong Stable Diffusion version?

I should've prefaced my post by saying I haven't actually tried any of this 😂😂. Ever since reading and liking a few posts on Twitter, my feed has been flooded with videos and tweets like the ones I posted here.

 

I should also mention that everything is moving incredibly fast, too fast to follow what's happening. The number of offshoot diffusion models, GPT models, and locally run GPT/diffusion models being released daily is quite mind-blowing. It's like seeing an avalanche: from a distance it looks beautiful, the sheer mass of it so very difficult to comprehend, but at the back of your mind you know a lot is getting destroyed in the process.

 

To get the tabs at the top of an Automatic1111 installation, go to Settings → User Interface, and in the Quick Settings list type the following: sd_model_checkpoint, CLIP_stop_at_last_layers, sd_lora, sd_vae

ControlNet Web-Ui URL Link: https://github.com/Mikubill/sd-webui-...

ControlNet Models Link: https://huggingface.co/webui/ControlN...

I hope you guys like this video. There are lots of things yet to explore in Stable Diffusion. Do let me know in the comment section below what Stable Diffusion videos I should make next.

 


Here I will mention the three C's again.

I think AI has great possibilities for creative work. 

 

I have been using InvokeAI. It is great fun and can lead to some very useful images.

However, keep in mind that many (if not all) of the AI models (checkpoints, LoRAs, safetensor models, etc.) have been built using stolen IP from other creatives.

I sometimes get watermarks from iStock or Getty Images showing up in my tests, suggesting the model was built by scraping the internet for unlicensed images.

 

Here is one example generated using InvokeAI Stable Diffusion and a Vector Art style LoRA:

[attached example image]

 

We need a way to verify that AI models have followed these rules before we begin using AI for our professional work. Have the creators whose work the models are built on:

A) been given Credit for their creations,

B) given Consent for their IP to be used, and

C) been Compensated for their efforts?

 

Bart

 

