Daniel B. Chapman

Personal Information

  • Occupation
    Scenery, Lighting Design & Production
  1. I'd like to bump this topic: would it be possible to get a response, even if the answer is "we don't have a workflow"? I'm about ready to head into pre-production, and I thought this was "likely fixed" in SP4.
  2. I'm going to reference this long-standing thread as the baseline for this post. I have a simple project requirement: 1) run QLab or some other media server on a computer; 2) pipe that media server into Vision (PC) as a video/camera input, so the design team can see video and lighting integrated in one rendering. Vision theoretically supports video capture cards, but we don't have a list of approved devices. I don't particularly care whether this is Vision 2 32-bit, Vision 2 64-bit, Vision 4, or Vision 2017; I just need this to work for this (simple) project. Has anyone had luck getting a camera to work in the last couple of years? I'm open to any suggestion (including piping it in via openFrameworks or Processing if I have to).
  3. Kevin, I can't stress how silly this is right now. Virtual cameras (not Spout/Syphon), which are cameras to the operating system, should be fairly trivial to get working and would allow me, and I suspect others, to do 90% of what we need for single-source designs. By way of continuing the discussion, here's where we are with basic hardware on each operating system (photos to follow). As you can see, on Windows none of the streams are detected. I made a point of using XSplit Broadcaster rather than Spout to prove a point: Skype is happily running in the corner and detects it as a webcam (I should note that Vision detected Spout in the past using this method). We have a little better luck on OSX, where the FaceTime camera is finally listed, but you can see the quality of that image is useless, and rather interesting. I would think the built-in hardware would qualify as the basic check, and it isn't functional. If we have a piece of hardware that works in a test environment, can we list it? I'd like to get this working ASAP and I'm fine with quirks. Lighting designs often incorporate video at this point in time, and I would like to use Vision to visualize my designs. If that isn't/can't happen, then I need to find a new tool that isn't projecting low-resolution video at a cardboard model in my basement. (For whatever it is worth, I was expecting OSX to fail: my other MacBook Pro (2010) didn't list the camera the last time I tried, nor did the 2012 iMac at the office. This is on a new MacBook Pro (2016/Touch), so maybe there was better luck with the driver.)
  4. I'd love a response on this thread. It has been roughly six months since I re-raised this issue, and I'm curious why there's so little response, especially considering the broken state of video capture in Vision 2017. Here's a link to a simple openFrameworks example (I'm going out on a limb to say you're using Qt, so this isn't really all that foreign; openFrameworks is just a collection of standard OpenGL libraries). The receiver code is less than 500 lines in total. If we can't get reasonable capture card support, I would like the ability to implement it myself via Spout or Syphon. https://github.com/leadedge/Spout2/tree/master/SpoutSDK/Example/ofSpoutExample/src/ofSpoutReceiver
  5. Edward, wouldn't it make more sense to implement shared-texture capture rather than trying to support every card under the sun? The VJ community has had wild success with Syphon on Mac, and a large number of media servers support Spout, Syphon, or both (for the few cross-platform devices). The lack of video capture in Vision has effectively killed my workflow: Vision 2017 does not support a FaceTime camera out of the box (Windows or OSX), or any other "virtual" camera I've tried to creatively pipe data through. I have no idea what you're using on the back end, but it would seem to me that you need to implement a shared texture. Can we please get an update on this? I now have a bug report out that's coming close to a year old on this issue. We're officially in "unacceptable" territory for customer support; it is clear that I'm not the only designer annoyed that Vision is currently useless for visualizing projection designs.
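The shared-texture approach suggested above is essentially what Spout and Syphon do on the GPU: a sender publishes frames into a named shared surface and any receiver attaches to it by name, with no capture card involved. As a rough CPU-side analogy only (this is not the Spout/Syphon API; the block name and frame layout are made up for illustration), here is a minimal sketch using Python's standard-library shared memory:

```python
# CPU-side analogy of GPU texture sharing: one process publishes frames
# into a named shared block, another attaches by name and reads them.
from multiprocessing import shared_memory

WIDTH, HEIGHT, CHANNELS = 640, 480, 4  # example RGBA frame dimensions
FRAME_BYTES = WIDTH * HEIGHT * CHANNELS

def publish_frame(shm_name: str, frame: bytes) -> None:
    """Write one frame into the named shared block (the 'sender' side)."""
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[:FRAME_BYTES] = frame
    shm.close()

def receive_frame(shm_name: str) -> bytes:
    """Read the current frame from the named shared block (the 'receiver' side)."""
    shm = shared_memory.SharedMemory(name=shm_name)
    frame = bytes(shm.buf[:FRAME_BYTES])
    shm.close()
    return frame

if __name__ == "__main__":
    # Create the shared block, publish a solid-red test frame, read it back.
    block = shared_memory.SharedMemory(create=True, size=FRAME_BYTES,
                                       name="vision_demo")
    try:
        red = bytes([255, 0, 0, 255]) * (WIDTH * HEIGHT)
        publish_frame("vision_demo", red)
        assert receive_frame("vision_demo") == red
    finally:
        block.close()
        block.unlink()
```

The real Spout/Syphon versions share an OpenGL texture handle instead of CPU memory, so no pixel copy ever leaves the GPU; the point of the sketch is only the publish-by-name/attach-by-name pattern.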
  6. Can we get a list of the supported video capture cards? I would really like to see Blackmagic and Virtual Webcams added to the list. This has been a very rough year for projector previsualization in Vision and I'd like to see if this is solved/working.
  7. Jim, I'm also tagging this. I've raised a formal support ticket as well. I have a ballet and two operas that I would really like to have Vision available for. I've edited my post above to show a picture of the problem in Windows 10 (using a virtual cam); I have the exact same problem on macOS with the physical FaceTime camera (MacBook Pro mid-2010 with discrete graphics). Can we get an update on this? It is clearly a critical bug, and I'm wondering whether the development team has a "clean box" they are testing on, because based on my last conversation with them they seem unable to reproduce the error. I'm happy to allow a remote debugging session if that's what it takes to solve this problem.
  8. Jim, this was a phone-support request. I don't have a thread on it, but I can probably drum up some e-mails if that helps. Eddie should have some knowledge of it. The attached image sums up the problem: webcams (virtual or otherwise) don't show up in the video capture dialog on either my Windows PC or my Mac. Vision 2.2 has the same problem.
  9. I wanted to check in to see if there has been any progress on the Video Capture bug I raised a month or so ago. I have another pair of projects at a moderate scale and I would prefer to use Vision for my projector layouts. What's the status of the camera inputs?
  10. I think these topics were raised quite a long time ago, but it might be worth revisiting now that Vision instruments are native in Vectorworks 2017. Is there any way to synchronize the following features with the ESC export? These would be a great help to designers working with a large number of conventional fixtures.
1) Focus points (conventional fixtures focused upon export in Vision). This one is self-explanatory; it would save a lot of time coming from a Renderworks model, even with some minor accuracy problems. It sure beats focusing by mouse click.
2) Model views (2D presentation layers represented correctly in Vectorworks 3D), exported directly to Vision from the viewport. A simple example of this is a lighting boom. Right now my 2D drawing comes in as 2D and the 3D drawing isn't really presentable, so I have to keep two files, or one file with multiple entries for each light. Everything in the Vision workflow is pure 3D, and while that's fine for the modeling phase, extrapolating it into something I can present to a working crew is basically a new drawing. In a perfect world, what I have in my 3D model would be accurately represented in Vision, so I could make changes to the set or plot and just update my model without rethinking things or maintaining duplicate layers (one 3D, one 2D) for my drawings. Unfortunately the combined 2D/3D workflow wreaks havoc on the paperwork side of things, so this is a constant headache.
3) Document origin. Right now it looks like Vision exports geometry based on the center of that geometry (which makes sense), but that makes positioning anything inside Vision with accuracy very difficult. It would be nice to have some parity between the scene graph and the Vectorworks origin. Obviously a translation is happening somewhere in the process, but the ESP origin doesn't end up the same. If I'm wrong about this, it would be great to have some documentation on how to achieve it; I've done a fair amount of looking around this week.
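The origin mismatch in point 3 can be stated concretely: if an exporter re-expresses each piece of geometry about its own bounding-box center, the lost information is exactly that center point, and restoring document-origin parity means carrying the offset along. A small illustrative sketch (the point layout and helper names are my own, not the actual ESC format):

```python
# Sketch of the origin problem: geometry recentered about its own
# bounding-box center, plus the offset needed to restore it to the
# document origin.

def bbox_center(points):
    """Center of the axis-aligned bounding box of (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return tuple((min(a) + max(a)) / 2 for a in (xs, ys, zs))

def recenter(points):
    """Re-express points about their bounding-box center, as the exporter
    appears to do, returning (local_points, offset_from_document_origin)."""
    cx, cy, cz = bbox_center(points)
    local = [(x - cx, y - cy, z - cz) for x, y, z in points]
    return local, (cx, cy, cz)

# A boom 2 m off-center: the local geometry comes through, but the
# (2.0, 0.0, 3.0) offset is what would be needed to put it back where
# the document origin says it belongs.
boom = [(1.5, 0.0, 0.0), (2.5, 0.0, 0.0), (1.5, 0.0, 6.0), (2.5, 0.0, 6.0)]
local, offset = recenter(boom)
# offset == (2.0, 0.0, 3.0)
```

If the export carried that offset into the scene graph, the Vectorworks origin and the Vision origin would agree by construction.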
  11. I'd love to see Syphon/Spout integration for ESP Vision. The ability to share an OpenGL context from a Processing sketch or other application into Vision would make the workflow much easier for multi-projector preprogramming. Spout can be found here and has fairly wide support on Windows: https://github.com/leadedge/Spout2 Syphon is native on OSX, where we really just need a hook to pull in the Syphon context. Is this a possibility?
  12. Regarding this: it would be great to see this as a feature. The visualization palette is irritating, and my students across the board (MFA/BAs) have asked if there's a way to do it as suggested here.
  13. Thanks for the quick reply! I completely agree for rendering a complex scene; however, for simple worksheets in a complex 3D model (say, a ceiling in thrust consisting of latticework), OpenGL would be a GREAT tool for figuring out position without resorting to time-consuming rendering modes or complex worksheets. Renderworks is impractical, at best, for worksheets. There's no reason this can't be fixed; the engine is more than capable of rendering decent shadows. I'm not looking to check shots on templates or compose massive scenes, I'm looking to focus a light in a 3D model and get a feel for how it might work in an imperfect shot. If this isn't on the feature list, it should be. EDIT: For whatever it is worth, I just cut a hole (made the light a shell, which is probably a terrible idea on large models) and it works fine for narrow beam angles. A 50-degree unit doesn't work well, but the 19-degree was fine. This can easily be implemented by revolving about a rail. If you're willing to have silly-looking fixtures, I'm sure the wider angles would be fine. (I matched the beam/field angles to sharpen it.) https://www.dropbox.com/s/a394qfw4o69sa1m/Workaround.PNG?dl=0
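For anyone sizing the hole in that shell workaround: the pool a conical beam throws is simple trigonometry, which also suggests why narrow angles behave and wide ones don't. A quick sketch (`beam_diameter` is my own helper, not a Vectorworks function):

```python
import math

def beam_diameter(throw_distance: float, beam_angle_deg: float) -> float:
    """Diameter of the beam pool at a given throw for a conical beam:
    d = 2 * throw * tan(angle / 2)."""
    return 2.0 * throw_distance * math.tan(math.radians(beam_angle_deg) / 2.0)

# At a 6 m throw, a 19-degree unit pools about 2 m wide while a 50-degree
# unit pools about 5.6 m, which is roughly why the workaround only holds
# up at narrow beam angles.
print(round(beam_diameter(6.0, 19.0), 2))  # prints 2.01
print(round(beam_diameter(6.0, 50.0), 2))  # prints 5.6
```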
  14. Jim, I think you're missing the point here. In OpenGL mode with shadows enabled, nothing comes out of the front of the instrument. I can use Fast Renderworks to check shots, but it is incredibly time-consuming. This definitely feels like a bug in the shader: the OpenGL shader doesn't seem to recognize the "cast shadows" option on the textures, and I have a feeling the instrument body is trapping the light source. (I demonstrate this by the fact that the "NoShadow"-textured object is still casting a shadow; the default instrument texture is likely trapping the light inside the unit.)
OpenGL, no shadows enabled (both lights work): https://www.dropbox.com/s/8e8t3cszuaeuus4/NoShadows.PNG?dl=0
OpenGL, shadows enabled (the Spotlight device fails to work): https://www.dropbox.com/s/okrov4se5xqplpn/Shadow%20Example.PNG?dl=0
Renderworks (CORRECT rendering; the texture should not CAST shadows): https://www.dropbox.com/s/aulyvxpejfjepyo/Renderworks.PNG?dl=0
File: https://www.dropbox.com/s/3hk6k06kb14may6/GL-Issues.vwx?dl=0
Obviously the rendering engine in OpenGL mode is less sophisticated, but it is completely capable of casting shadows (as shown using the visualization spotlight), and this does not work while visualizing with a lighting instrument. Is this a bug? (It feels like a bug.)
  15. I'm having some issues with the OpenGL shadows/textures. I've attached a file here replicating the issue. Basically the symbol seems to be obstructing the actual spotlight (OpenGL) inside the light. I've double-checked the texture: it is set to receive but not to cast shadows. With a regular spotlight (visualization) this works fine.
OpenGL: https://www.dropbox.com/s/npfx5a03emqp6ix/ss1.PNG?dl=0
OpenGL, no shadows: https://www.dropbox.com/s/5f584zg7nw79ksm/ss2.PNG?dl=0
Renderworks (what's expected): https://www.dropbox.com/s/qy4469voidnb0tu/ss3.PNG?dl=0
File: https://www.dropbox.com/s/ygd8gcsszu7w38d/ShadowsNotWorking.vwx?dl=0
What's going on here? I'd really like to use shadows to communicate this easily to another designer without setting up a series of renderings, and I was under the impression this was now a feature. Is this a bug in the textures or in Spotlight? Vectorworks 2016 SP2 Build 288897, Windows 10, NVIDIA GTX 960, driver 361.75 (current).