
Link Renderworks camera to TV monitor



  • Vectorworks, Inc Employee

Yes, 

 

Create a saved view of anything, then right-click on that saved view and choose "Activate in new pane".

 

Then open the Visualization palette and select the camera you want for that pane.

 

Then click back on your main screen to deselect the newly opened pane.

 

Drag that pane across to your external display.

On 4/25/2023 at 7:48 AM, Pat Stanford said:

I read the question differently. I thought Dendecko wanted to have the camera view show up on a "monitor" object in the VW drawing. And I don't know how to do that or that it can be done.

 

 

That is how I read the question as well. I agree it cannot be done, BUT it would be nice. 

  • Vectorworks, Inc Employee

I am just wondering how intensive this would be, as you would be rendering and redrawing/refreshing twice the number of objects in a scene: the objects in the design layer, plus the objects in the camera view redrawing on the screen. I wonder whether having cameras output an NDI stream externally from VW, and then playing those NDI feeds on screen and TV objects in VW, would be more efficient.

This would tie into some other requests about LED screens and monitors playing back content live in VW.

 

As there are rendering implications to this workflow, I am going to tag @bbudzon, as this thread might be of interest.

  • Vectorworks, Inc Employee

This is an interesting idea / request for Vectorworks!

 

One quick thing I want to point out is that because Vectorworks doesn't render in real time, using anything like NDI would not be possible. If, however, NDI were supported in VW, you could "farm out" these renderings to other machines on your local network. So, you could have the same VWX open on 4 machines, 3 of them rendering "iMag cameras" output over NDI, and the main machine could receive those NDI streams for rendering onto a monitor in the primary view pane. This would significantly reduce the performance issues, but I haven't given a lot of thought as to how it might affect or integrate with things like FQRW, Redshift, or Twinmotion.

 

Anyway, let's set NDI aside and assume this is all happening on a single machine.

 

In Shaded mode, it might actually be fairly fast, as Shaded mode can generally render the scene from different points of view quite quickly. Take, for example, the navigation tools. The framerate while panning, zooming, or using the Flyover tool is quite good. So, having to render the scene multiple times from different iMag cameras likely wouldn't have a huge impact. At worst, it would halve or quarter the framerate while navigating. But we could also special-case navigation in the code so that it only updates the iMag screens once navigation has stopped.
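That "only update once navigation has stopped" idea is essentially a debounce. As a rough illustration in plain Python (none of this is Vectorworks API; `render_fn` is a stand-in for whatever would actually re-render the iMag views):

```python
import time


class ImagUpdater:
    """Defer expensive iMag re-renders until navigation has been idle.

    render_fn is a hypothetical hook for re-rendering the camera views;
    idle_secs is how long navigation must be quiet before we re-render.
    """

    def __init__(self, render_fn, idle_secs=0.25):
        self.render_fn = render_fn
        self.idle_secs = idle_secs
        self.last_nav = None  # time of the most recent navigation event

    def on_navigation(self):
        # Called on every pan/zoom/flyover event; just record the time.
        self.last_nav = time.monotonic()

    def tick(self):
        # Called from the app's idle loop: re-render only once
        # navigation has been quiet for idle_secs.
        if self.last_nav is not None and time.monotonic() - self.last_nav >= self.idle_secs:
            self.last_nav = None
            self.render_fn()
```

While the user is actively navigating, `tick()` does nothing; the expensive per-camera render fires once, after the interaction settles.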

 

The bigger issue would be with render modes like FQRW. If you had 3 iMag cameras displaying on 3 monitors/screens (which seems like a fairly reasonable use case), the scene would need to be rendered 4 times (once for each iMag camera and once for the view pane). FQRW is very high quality, but takes some time as it is. Quadrupling the render time here may not be acceptable to some, but I guess they could always just not use this feature or reduce the number of iMag cameras in use. For example, a single iMag camera displayed on 3 monitors is much more performance-friendly than 3 iMag cameras on 3 monitors.

 

The only other complication with this feature is its integration into things like Redshift or Twinmotion. I'm sure it would be possible, but I'm not super familiar with these, as I work primarily on Vision, and am not even sure where to start regarding this portion of the discussion. There may be technical limitations that prevent these integrations from working with iMag entirely.

 

Either way, a pretty neat idea!


Assuming that the iMag camera view on a monitor will be relatively small compared to the overall size of the rendering, it could be rendered at a much lower resolution and therefore be faster.

 

Would it be possible to have the camera produce an image file that could be used to make an image prop texture, then update that image based on user input? It would not have to be "real time" for walkthroughs, but could at least be updated easily.

  • Vectorworks, Inc Employee
56 minutes ago, Pat Stanford said:

Assuming that the iMag camera view on a monitor will be relatively small compared to the overall size of the rendering, it could be rendered at a much lower resolution and therefore be faster.

 

Would it be possible to have the camera produce an image file that could be used to make an image prop texture, then update that image based on user input? It would not have to be "real time" for walkthroughs, but could at least be updated easily.

Both very good points! Many users of Vision struggle to understand why an NDI stream doesn't need to be 4K, and it is for precisely the reason you state. Often, the overall size of the rendering onto the monitor is relatively small. A lower resolution, like 720p or even lower, is often adequate and much better for performance.
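As a back-of-the-envelope check on that (simple arithmetic, not tied to any Vectorworks internals):

```python
def pixel_cost_ratio(full_res, imag_res):
    """How many times more pixels a full-resolution frame has
    than a lower-resolution iMag pass (a rough proxy for
    relative per-frame render cost)."""
    return (full_res[0] * full_res[1]) / (imag_res[0] * imag_res[1])


# A 720p iMag render touches 1/9 the pixels of a 4K UHD frame.
ratio = pixel_cost_ratio((3840, 2160), (1280, 720))  # → 9.0
```

So dropping the iMag pass from 4K to 720p is roughly a 9x reduction in pixels per frame, which is why the low-resolution stream is usually plenty.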

 

Also, I hadn't considered an "update on user input" option. Perhaps one like sheet layer viewports, where you can select an iMag camera and click an "Update" button in the OIP that would trigger the rendering. This would likely integrate much better with Renderworks, Redshift, Twinmotion, etc. And it would also address performance concerns, as you would get to choose when to do the update. I guess some of this might be possible with current versions of Vectorworks, and the enhancement being discussed here would just streamline that workflow.
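A minimal sketch of that "update on demand" pattern, assuming hypothetical `render_camera_to_file` and `apply_texture` hooks (the real Vectorworks calls would differ):

```python
class ImagTexture:
    """Cache a rendered camera view as an image file, and only
    re-render when the user explicitly asks for an update."""

    def __init__(self, render_camera_to_file, apply_texture, cache_path):
        # Both callables are stand-ins for whatever the host app
        # provides; cache_path is where the rendered image lives.
        self.render_camera_to_file = render_camera_to_file
        self.apply_texture = apply_texture
        self.cache_path = cache_path

    def update(self):
        # Triggered by e.g. an "Update" button in the OIP:
        # re-render the camera to the cached file, then remap it
        # onto the monitor object's texture.
        self.render_camera_to_file(self.cache_path)
        self.apply_texture(self.cache_path)
```

Between updates the monitor keeps showing the cached image, so nothing is rendered in real time and the user controls when the cost is paid.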


I appreciate everyone's thoughts. There are definitely some workarounds. Thanks for clarifying that I wasn't missing anything simple. I had thought it was possible, but so it goes. I was hoping I could use it to quickly represent different class representations on the screen without having to create a specific clip for each and apply it. No worries though. You all have been very helpful.

 


I was just looking for this information, but sadly for me the answer is no.

A static rendered image of the iMag is what I was looking for.

I've dropped a camera from the event tools into a drawing; I've got 2 screens next to the stage.

I've zoomed my camera to roughly a waist shot, and I sure would like to put that waist shot onto the screens, and show the client the damn screens are too small for iMag and to cut the camera entirely.

🙂

So the way to do that would be to render the camera as an image file, save that, import that image as a texture, and map it to the screen, which requires centering and scaling.

It sure would be nice just to link the image to the screen.

 

5 minutes ago, Mickey said:

So the way to do that would be to render the camera as an image file, save that, import that image as a texture, and map it to the screen, which requires centering and scaling.

It sure would be nice just to link the image to the screen.

Exactly how I did it, and it's as tedious as it sounds.

