
Spotlight LED Screens and NDI Mapping in Vision




Hi Y'all... Pour yourself a coffee and tuck in; this is a long one!

 

I'm having trouble mapping an NDI stream to various LED tiles placed on stage in set carts.


The set looks like the following screenshot. All of the display surfaces are LED Screen plugin objects with dimensions and pixels set correctly in the OIP as indicated:


[screenshot]

 

The image on the LED screens is set in "Edit Array Image" where a custom texture is selected.


Notice that the Scale is set to 50% to get the image to display correctly (the illusion is a stack of TVs with different images). The image used for the texture is only two TVs wide, so this makes sense as applied to the still image in VWX.

 

[screenshot]

 
Changing this scale, however, has no effect on the eventual video mapping problem that we shall see in Vision. I tried exporting MVRs with all of the Scale adjustments in this menu set to 100% and had the same results that are illustrated below.


I also wonder what "Capture Source Name" and "Capture Source Number" are in the "Select Vision Video Source" dialog. These names do not correspond to the names Vision ends up asking for; you could put Fred or Ethel in the "Capture Source Name" field and never see those names come up again throughout the process:

 

[screenshot]

 

But let's not get distracted...
 
OK, so now we move over to Vision, after exporting the LED tiles in their own MVR. The set carts they travel in get exported as a separate MVR to keep things organized. These two MVRs plus my Amphitheatre MVR merge together into this model:

 

[screenshot]
 
Let's begin by selecting our first LED array (in this case the 4 tile square array top center) and choosing Assign Video Input:

 

[screenshot]
 
We assign the video input a name:

 

[screenshot]


The video input is fed via NDI from a MacBook Pro running Resolume Arena. The NDI monitor on the Vision PC shows a smooth, steady video image in real time:

We find the 1920x1080 NDI input that is being scaled by 1/3 to give us a raster of 640x360 for use in Vision. 

[screenshots]


 
 

So let's crop the NDI stream! I scaled my 1080 template down to 640x360 in Photoshop to determine the exact corners of every crop and wrote them down (analog, in a notebook with a pencil) because the cropping tool in Vision is rudimentary at best. You can't zoom in, and there's no grid or snap guide... it's best to use the tools in Photoshop and write down the pixel values.
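For anyone following along, the crop arithmetic is just a uniform 1/3 scale. Here's a minimal sketch (the rectangle coordinates are hypothetical examples, not my actual crop values): crops are measured on the full 1920x1080 template, then scaled down to match the 640x360 raster that Vision receives.

```python
# Sketch of the crop arithmetic. The coordinates below are hypothetical;
# crops are measured on the 1920x1080 template, then scaled by 1/3 to
# land on the 640x360 raster that Vision receives.

def scale_crop(crop, factor=1/3):
    """Scale an (x, y, width, height) crop rectangle, rounding to whole pixels."""
    x, y, w, h = crop
    return (round(x * factor), round(y * factor),
            round(w * factor), round(h * factor))

# Hypothetical square crop for the 4-tile center-top array,
# defined on the full 1920x1080 template:
ctr_top_1080 = (720, 0, 480, 480)

print(scale_crop(ctr_top_1080))  # (240, 0, 160, 160) on the 640x360 raster
```

Rounding matters here: a 1/3 scale of odd pixel values lands between pixels, which is one more reason to pre-compute the corners rather than eyeball them in the Vision crop tool.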

 

 

[screenshot]

 

So we get our square crop for our square video surface:

 

[screenshot]

 

And we do it for each of the LED screens in the model:

 

[screenshot]

 

During this process, plan to spend a lot of time looking at this screen, waiting for Vision to "see" the NDI input that it "saw" just perfectly moments ago:

 

[screenshot]

 

Eventually we have created 13 separate crops of the NDI raster and mapped them to our LED screens. And here is the confusing result:

 

[screenshot]

 

Note the different scaling of the crops! Nothing seems to work out right, except the SR High and SL High set carts!

 

[screenshot]

 

The 2wide x 4high carts are the only ones that display the crop faithfully. The Ctr Top crop is perfectly square and mapped to a perfectly square screen, yet it ends up stretched vertically. Ditto the Ctr Bottom... that one is stretched vertically and doubled. The math certainly does not work for the downstage 3wide x 1high boxes.
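The symptom above is what an aspect-ratio mismatch between crop and screen produces, so here is a quick sanity check I've been using mentally (the pixel dimensions are hypothetical stand-ins for the real OIP values): if the crop's aspect ratio doesn't equal the screen's pixel aspect ratio, something in the chain is applying a non-uniform stretch.

```python
# Quick sanity check: does a crop's aspect ratio match the screen it maps to?
# The screen pixel dimensions here are hypothetical stand-ins for OIP values.

def aspect(w, h):
    return w / h

def check_mapping(crop_wh, screen_wh, tol=0.01):
    """Return True if crop and screen aspect ratios agree within tol."""
    return abs(aspect(*crop_wh) - aspect(*screen_wh)) < tol

# Square crop onto a square 2x2 tile array (e.g. 200x200 px): should match.
print(check_mapping((160, 160), (200, 200)))   # True

# The same square crop forced onto a 3wide x 1high array (e.g. 300x100 px):
print(check_mapping((160, 160), (300, 100)))   # False
```

By this check the crops and screens *should* agree; the fact that matched pairs still render stretched suggests the distortion is happening somewhere inside the export/mapping chain, not in the crop math.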

 

[screenshot]

 

Playing with the texture scaling and offset in Vision does not help... you can't fix an asymmetrical scale like the one occurring here.

 

If we use Vectorworks to create an LED screen and send it to Vision, and Vision has the ability to crop an incoming NDI stream, why doesn't this just work?

 

I did try sending the crops from Resolume as separate NDI streams but Vision could not see them as separate.

 

Any help would be appreciated. Getting this model to work will greatly increase the chances that it will be useful to me before we get to rehearsals.

 

Thanks

peace

aj

 

 



2 weeks later...
Vectorworks, Inc Employee

I had seen this post and am not 💯 certain what is going on. There is a lot involved in your workflow, but you did a fantastic job detailing all of it, and it appears you ensured that everything was accurate down to the pixel.

 

I did have some comments I was "holding on to" because I didn't want it to appear as though I was simply not addressing the issue. I was hoping to be able to find time to investigate with the entire VW Vision team and come back with something definitive.

 

 

On 6/20/2021 at 1:14 AM, ajpen said:

Notice that the Scale is set to 50% to get the image to display correctly

 

On 6/20/2021 at 1:14 AM, ajpen said:

Changing this scale, however, has no effect on the eventual video mapping problem that we shall see in Vision.

One comment was related to the text above. If you are scaling the image in VW in such a way that Vision does not pick up on the change, then we should try to find a different way to scale the image. One such way is by editing the Renderworks Texture directly. There are probably many other ways. We will need to investigate this more as ideally everything would work regardless of "how" you scale an image.

 

 

On 6/20/2021 at 1:14 AM, ajpen said:

I also wonder what the "Capture Source Name" and "Capture Source Number" are in the "Select Vision Video Source" dialog.

Another comment was that "Capture Source Name" is entirely ignored at export time. This is why you don't notice it in Vision. I believe this was put/kept here mainly to leave yourself notes in the VWX document. However, the "Capture Source Number" will come over to Vision as "1.cap", "2.cap", "13.cap", "52342.cap"; whatever number you put in there will be used as a unique integer identifier in Vision for "Video Inputs" (which can be NDI or UVC).
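To make the naming rule above concrete, here is a tiny sketch of the mapping as described in that reply (the function name is my own, purely illustrative): the Capture Source Number becomes the "&lt;number&gt;.cap" identifier in Vision, and the Capture Source Name never leaves the VWX document.

```python
# Per the reply above: the Capture Source Number becomes Vision's
# "<number>.cap" identifier; the Capture Source Name is ignored at export.
# The function name is illustrative, not a real API.

def vision_capture_id(source_number: int) -> str:
    return f"{source_number}.cap"

for n in (1, 2, 13, 52342):
    print(vision_capture_id(n))  # 1.cap, 2.cap, 13.cap, 52342.cap
```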

 

 

On 6/20/2021 at 1:14 AM, ajpen said:

We find the 1920x1080 NDI input that is being scaled by 1/3 to give us a raster of 640x360 for use in Vision.

One of my last comments was related to 1920x1080 vs 640x360 NDI streams. In our research, most users are not cropping NDI streams, and most NDI streams are not so highly detailed or so large in the scene (as viewed on your desktop/laptop monitor) as to require 1920x1080. NDI supports high- and low-bandwidth modes, and for a realtime visualizer, low bandwidth is preferable for performance reasons. This is why low bandwidth is the default, and likely why your 1920x1080 source stream ends up in Vision as 640x360. If you would prefer the source resolution to appear in Vision, simply select the High Bandwidth radio button underneath the NDI live preview. Because you are cropping and doing other very detailed things, I think this is a perfect example of where High Bandwidth is indeed acceptable. Again, in most cases, you should use Low Bandwidth if you can.

 

 

On 6/20/2021 at 1:14 AM, ajpen said:

I did try sending the crops from Resolume as separate NDI streams but Vision could not see them as separate.

I've not had issues sending multiple NDI streams to Vision. However, it may not be evident that only a Vision Unlimited license allows for more than a single NDI stream.

 

 

All in all, I'm not sure that any of this will resolve your issue. Users may find it informative at the very least. My hope is that this is merely us not having accounted for some workflow and that another workflow/workaround would allow things to proceed. But it is very possible there is just some issue/deficit in the core code when handling these kinds of things. It might take some time, but I am hoping we will be able to figure this out in the long run and ensure that we support this workflow moving forward.
