
Personal Information

  • Occupation
    Exhibit Designer
  • Homepage
  • Location
    United States


  1. No (and kind of yes). The AI model used by the AI Visualizer (I believe it was stated to be SDXL) is trained on 1024 x 1024 pixel images. It can work well with images that are not square as long as they have a similar total number of pixels (e.g. 1365 x 768). The AI Visualizer will resample the input image down to something it can work with before processing. If you use a more sophisticated generative AI tool, you can adjust the thresholds of how the ControlNet converts an image to lines, and you can have that conversion happen on a higher-resolution image before it gets downsampled to the working size. So you can tweak the level of detail that comes out of the ControlNet to be more or less strict with your starting image. For the black and white image of your sub above, I had to lower the thresholds to get the detail in the rear of the sub. Now, if we used a Hidden Line drawing of the sub instead of a rendering, you would have a much more precise ControlNet to work with. Can you post that same view as a Hidden Line drawing?
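Since SDXL works at roughly a one-megapixel budget, you can estimate what that resample step does to any input. Here is a minimal sketch (the function name and the round-to-a-multiple-of-8 behavior are my assumptions for illustration, not the Visualizer's actual code):

```python
import math

SDXL_PIXEL_BUDGET = 1024 * 1024  # SDXL is trained on ~1 megapixel images

def sdxl_resample_size(width, height, multiple=8):
    """Scale (width, height) so the total pixel count lands near the
    SDXL budget, preserving aspect ratio. Dimensions are rounded to a
    multiple of 8, a common constraint for latent-diffusion models."""
    scale = math.sqrt(SDXL_PIXEL_BUDGET / (width * height))
    w = int(round(width * scale / multiple)) * multiple
    h = int(round(height * scale / multiple)) * multiple
    return w, h

# A 1365 x 768 input is already near the budget, so it barely changes;
# a large 4000 x 2250 rendering gets scaled down substantially.
print(sdxl_resample_size(1365, 768))
print(sdxl_resample_size(4000, 2250))
```

This is why a 1365 x 768 image "works well": it already sits at about the same total pixel count as 1024 x 1024, so little detail is lost in the resample.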
  2. @VIRTUALENVIRONS Thank you for the kind words. To answer your question: I have used AI only very minimally for any professional work. With all of the ethical concerns about how the AI models were built, who gave consent for their work to be used, and who is (or isn't) getting compensated for the use of copyrighted images, I can't ethically justify getting compensated myself for AI-generated content. Here is the one exception so far. For an exhibit design, we were proposing a wall of portraits; in the end they would be real photos of people from the local community. For the concept sketch, I generated several portraits of random (not real) people to get the idea across. In one other case, I used AI to generate a landscape scene with specific elements that I could take to a professional mural painter and say, "Here is a reference for what we are looking for. Can you make something like this, but bigger, better, and in your artistic style?" Admittedly, I did both of these with some trepidation. I'm still not 100% confident even these uses are without some unintended consequences for the creative community. Like @Luis M Ruiz, I'm in the exploratory phases. But I have broad concerns about the legality and ethics of AI. I am also convinced that it is here to stay. So, I want to be a knowledgeable advocate for creatives as best as I can.
  3. @VIRTUALENVIRONS It's hard to argue that no one is talking about AI. You are correct that AI can produce stunning images, and I am all for that - for exploring ideas or quickly investigating alternate solutions to a design prompt. It is also correct to say that the link back to a VW workflow is absent. I would say these same two things of Pinterest, or a simple Google Image search. The AI Visualizer, however, has the potential of being more connected to an initial VW concept image. But let me clear this up: AI is not rendering. It does not reference the model geometry. Using the example above, the AI has no idea what this submarine really looks like. It can only approximate it based on other images tagged with "submarine" or other terms used in the prompt. Secondarily, the AI Visualizer uses a secondary image called a ControlNet to help guide the image generation. It basically tells the AI where to put edges. Here is a ControlNet for your middle image: You see, it's not reading the geometry of the file; it is looking for edges in the starting image. (It would be better if you had a shaded view of the model on a white background.) From there, it is comparing the terms in the prompt with millions of other images with similar terms (tokens) and then generating a new image, guided by the ControlNet. The Creativity slider changes how much bias to give to the tokens versus the ControlNet as it generates the images. So, as to where AI might add value: there is (currently) no way to reproduce "photorealistic" renderings of an as-modeled submarine in AI. The AI is always guessing at what a "submarine" is and filling in details based on other images. You can use a variety of controls on top of an image of the sub (IP adapters and other types of ControlNets), but it is not rendering the model. None of these generated images would be useful to show a client as a "rendering of the product".
However, they might be useful in terms of getting the feel of a marketing strategy. So I had a little fun. (Note: these images were not generated with the AI Visualizer.) You can see it is not using the modeled geometry, but rather a snapshot of the model as a reference to influence image generation. (Hmm, I wonder what that signature is all about?) And finally, to your point, the image-to-3D-model versions of AI are really in their infancy. So, there is no way to get the products of the AI Visualizer back into a VW model without just putting in the work. Anyway, I hope that helps. Bart
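To make the "looking for edges" step concrete: ControlNet preprocessors typically run a Canny-style edge detector over the starting image. Here is a deliberately simplified numpy sketch of the thresholding idea (real preprocessors add Gaussian smoothing, non-maximum suppression, and edge linking; the function name and thresholds here are my own, for illustration only):

```python
import numpy as np

def edge_map(gray, low=0.1, high=0.3):
    """Simplified stand-in for a Canny-style ControlNet preprocessor.
    gray: 2-D float array in [0, 1]. Returns (strong, weak) binary edge
    maps from the normalized gradient magnitude. Lowering the thresholds
    keeps fainter detail (like the rear of the sub); raising them keeps
    only the crispest edges."""
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # central difference, x
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # central difference, y
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-8                    # normalize to [0, 1]
    return mag >= high, mag >= low             # strong edges, weak edges
```

This is also why a Hidden Line drawing makes such a good ControlNet source: it is effectively already a clean, high-confidence edge map, with none of the shading gradients that trip up the thresholds.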
  4. There is a lot to talk about here. AI is likely the next smartphone: it is already ubiquitous, and we won't be able to live without it in 5 years. And, as consumers, users, and creators, we have to make some demands. We must have transparency on how the AI models are trained. There will be mechanisms to monetize the use of AI, and we must require that royalties be shared with the sources that helped train the models. Then, in the spirit of @Jeff Prince, make the machines do the mundane. Please don't democratize creativity.
  5. Yeah, I think I am not understanding your objective. A texture with "Color" selected in the Color drop-down is like a 1-pixel-wide image, tiled infinitely. Offset, scale, and rotation would have no effect. For that matter, couldn't Map Type be grayed out too? Can you be more specific about what you are after?
  6. OK, I am getting some better results with some tweaks. So far, there isn't a "Custom Redshift Options . . ." entry in the Current Render Mode drop-down on the Tool Bar, so I had to dig into the Render Styles in the Resource Manager. Switching the Lighting Options off was key. Environment lighting doesn't seem to work. Setting the progressive passes to 10 helps it feel more responsive. I am also noticing that the System Monitor shows my graphics card maxing out on VRAM with each update, so I suspect that my 2-monitor setup, with 1 monitor being 4K, is taxing my RTX 3070 to the max. The pixelation bothers me too, but I can see the potential in being able to update the model without having to wait for a rendering to finish. At the same time, the denoising with low sampling looks mushy, like a pastel crayon - so, tradeoffs. I am happy with the direction this is going; I am just trying to understand what to expect and how to optimize this cool new option. Bart
  7. Thank you @Dave Donley. On a fairly complex file, I'm seeing about 16-20 seconds to resolve the initial rendering. Then every cursor movement requires a re-render (2-3 seconds), and if I move an object, 3-5 seconds. On a much simpler file, I get the bottom of the ranges listed above. I'll try to capture a video (downsized from 4K to 720p for smaller file size). Interactive Test.mp4
  8. Starting a conversation about this new rendering mode in VW 2024, Update 5. I gave it a try and don't think I am getting the expected results; it seems woefully slow. However, here are some things that may be complicating my experience. A) I just upgraded to a 4K HDR monitor. Likely all those extra pixels slow down the render. B) I am using an existing project with a fairly complex scene and 30+ lights. C) I am too impatient to wait for a nice tutorial from VW or @Jonathan Pickup. I did try it on a file with only a simple model of a house, and even then it refreshed in +/- 3 sec, which was fast but not really "interactive" in my estimation. Are my expectations too high? What are your experiences with this new mode? Bart
  9. I got notified that Update 5 is out, but the info page gives a 404 error. I get the same Page Not Found error via the Message Center or the link from the Update dialog: https://blog.vectorworks.net/updates-improvements-in-vectorworks-2024-update-5?medium=vectorworks_software&source=homescreen&content=message_center Bart
  10. We see this a lot as VCS is syncing with the cloud; sometimes it is not even during a "save and commit". My guess is that it happens when someone checks out a resource and you try to check out something at the same time - that is, when 2 people try to modify the Project File while there is a sync happening in the background. We have seen it get stuck on occasion; I'm not 100% sure why. 90% of the time it clears up after a minute or so. The first step is quitting Cloud Services and restarting. Then (after making sure you have backed up any work in your Working File) create a new Working File. Bart
  11. Yup, more reasons why AI is not ready for professional use in tools like VW. When using other AI apps, I've seen the "Getty Images" logo appear instead of a signature. So you know that model was trained on illegally scraped images.
  12. The VW AI Visualizer is in its early stages; I would expect it to be behind more developed user interfaces like Midjourney. I am sure we will see VW open up more control options in future releases. (I'd love to see support for training a LoRA on our company's existing renderings, sketch style, etc.) In the meantime, you may want to look into the more open-source solutions. Automatic1111, ComfyUI, and Invoke all offer standalone packages you can run on your local machine, with a daunting set of control options and incredibly powerful customizations. I like InvokeAI; it is free to use and has a mission to create tools for creatives, not just the general masses. Of course, you have to have a pretty strong graphics card to do all of this on your local machine. Bart
  13. I do find that AI can have a great sense of humor. Here is my favorite image from a prompt about clean water in the bathroom.
  14. Nope, VW lacks this functionality. Many of us have made repeated requests for much-needed improvements in texture mapping controls.
  15. This seems like a great opportunity for a Marionette Object??