Oaktown Posted December 15, 2022

I've been trying to figure out what I'm doing wrong, but no luck. I have a blended screen that is 2514 x 1200 pixels (two WUXGA projectors). I set the Video Source for Vision to Capture Device, then in Vision I pick the NDI stream coming from Resolume Arena on the same computer. The NDI stream is the same size as the screen (2514x1200) and looks fine in the preview monitor, BUT when I apply it to the screen, the stream gets squished down and duplicated. I get the same behavior if I render the video and use a ProRes file instead of the NDI stream. Take a look at the screenshots below and let me know if you have any idea what's going on:
Vectorworks, Inc Employee bbudzon Posted December 15, 2022

The problem here is the texture you have applied to the blended screen in VW. When I test NDI in Vision, I always try to verify first that a still image displays correctly in VW Shaded Mode. I usually use a color bar test image so it is easy to see where the corners lie. So, if I insert a 16:9 television in VW, I apply the 16:9 color bar test texture from the default content, verify the Shaded Mode rendering in VW, and then proceed to Vision. Once you assign an NDI stream that is also 16:9, it will render correctly in Vision.

1920 / 1080 = 1.777
16 / 9 = 1.777
2514 / 1200 = 2.095

I'm not sure what aspect ratio 2.095 is 😂 But 16:9 expressed as a float is 1.777. I would suggest you create a custom image with a resolution of 2514 x 1200 and paint the corners so it acts like a test image. Then, apply this to your blended screen in VW and adjust the texture mapping until you are happy with the result. When you send this to Vision and apply a 2514 x 1200 NDI stream, you should get a proper result that matches what you see in VW.
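The corner-painting suggestion above can be sketched in code. This is a minimal, hypothetical helper (not part of Vision or Vectorworks, and the function name is my own) that writes a plain binary PPM image with a distinctly colored square in each corner, so any texture-mapping flip, stretch, or tiling is easy to spot; the 2514x1200 dimensions and 100 px marker size are just the values discussed in this thread. It also prints the aspect-ratio arithmetic from this post.

```python
def write_corner_test_ppm(path, width=2514, height=1200, marker=100):
    """Write a binary PPM (P6) test image with colored corner markers."""
    corner_colors = {
        "tl": (255, 0, 0),    # top-left: red
        "tr": (0, 255, 0),    # top-right: green
        "bl": (0, 0, 255),    # bottom-left: blue
        "br": (255, 255, 0),  # bottom-right: yellow
    }
    pixels = bytearray()
    for y in range(height):
        for x in range(width):
            if y < marker and x < marker:
                rgb = corner_colors["tl"]
            elif y < marker and x >= width - marker:
                rgb = corner_colors["tr"]
            elif y >= height - marker and x < marker:
                rgb = corner_colors["bl"]
            elif y >= height - marker and x >= width - marker:
                rgb = corner_colors["br"]
            else:
                rgb = (128, 128, 128)  # neutral grey field
            pixels += bytes(rgb)
    with open(path, "wb") as f:
        # PPM P6 header: magic, width/height, max channel value
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        f.write(pixels)

# The aspect-ratio arithmetic from the post above:
print("16:9         = %.3f" % (16 / 9))        # 1.778
print("WUXGA        = %.3f" % (1920 / 1200))   # 1.600
print("blended rig  = %.3f" % (2514 / 1200))   # 2.095
```

PPM was chosen here only because it needs no imaging library; any tool (or Pillow) can convert the result to PNG before applying it as a texture in VW.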
Oaktown Posted December 15, 2022

Thanks @bbudzon. That's usually where I start, to make sure the screen is set up right for the work I do. 2514x1200 is a 44ft x 21ft screen with two WUXGA projectors blended. The NDI stream, as you can see in the screenshot, is also 2514x1200. Any other thoughts? Here is a 2514x1200 PNG file as the screen image.
Vectorworks, Inc Employee bbudzon Posted December 15, 2022

If that is what your VWX looks like in Shaded Mode with a 2514x1200 image applied, then Vision should render the same way when provided a 2514x1200 NDI stream. One debugging step you might try, just to eliminate NDI from the equation, is removing the Video Source on the VW side. If you do this and send the file to Vision, Vision should render the same still image that VW renders in Shaded Mode. If Vision does not match VW when using the same still image texture in both programs, then we need to figure that out first; once we do, the NDI stream should "just work" when we get back to trying it.
Oaktown Posted December 15, 2022

If I load a 2514x1200 image in the screen and set the Video Source for Vision to Capture Device, it works! Here is a screen capture: https://capture.dropbox.com/A7MdXw13PDVCD5uo and a screenshot:
Vectorworks, Inc Employee bbudzon Posted December 16, 2022

Good! I'm glad it's working for you, and I hope the whole procedure makes sense. I recently answered a similar question on the forums, and they suggested we try to document this somehow. I'm going to work with our Technical Publications team to see if there's a way we can incorporate this into our help documentation.
Oaktown Posted December 16, 2022 Author Share Posted December 16, 2022 Thank’s @bbudzon, it does make sense somehow but at the same time it’s not something that one can easily figure out on their own without painful trial & error so I totally agree that this should be documented to make it easier for people to use this on their project that include video assets. Also it would be good to document how to effectively utilise image props with alpha in Vision because that’s another one that not very intuitive. I finally figured out out to use transfer my image props with alpha but that was a bit arduous! Frédéric « Oaktown » 1 Quote Link to comment
Vectorworks, Inc Employee bbudzon Posted December 16, 2022

Agreed! You've likely seen this already, but here is a great article I worked on with our Tech Pubs team regarding texturing in VW for Vision 2021+. We leveraged these techniques in our 2021 demo file, so that VWX and VSN pair is a good example of the article in practice. In reality, these practices are good to follow any time you export a 3DS/glTF/MVR to another program; a lot of the testing I did used Blender as an unbiased third party. It's definitely a good read, but it's a little dense and may take a few passes to fully grasp the concepts. Some things appear subtle or minor, but all of it is incredibly important. Feel free to ask questions in that thread if any come up!