Jesse Cogswell

Member

  • Content Count: 140
  • Joined
  • Last visited

Community Reputation: 66 Excellent

2 Followers

About Jesse Cogswell

  • Rank: Journeyman

Personal Information

  • Location: United States


  1. Aha, I see now. I guess I didn't realize that Vision could now use alpha textures natively as the main texture. Neat. But this still requires additional work from a VW export, as VW does not allow for alpha textures without using the transparency shader, right?
  2. While we're on the subject of Filter improvements, could we please be able to assign more than one set of criteria per filter? I would love a filter that shows all layers with either the "Lighting" or "Architecture" tag, for instance. Right now, I have to assign the "Lighting" tag to all architecture layers as well as lighting layers if I want a working filter, so all architecture layers end up with tags for Lighting, Scenic, Audio, Projection, etc., just so that each discipline can have a filter.
  3. Quite right, that did the trick. It might be a good thing to add to the Help documentation for either the Vision Preferences or Material Properties, or both. And it would probably be a good idea to reverse the Use Alpha Channel property so that True allows the alpha channel to be used rather than False as it does now.
  4. Hmmm, unchecking that box makes no appreciable difference for me. I did try it in Vision 2020 and 2019, and interestingly, alpha textures only seemed to work in 2019. Screen captures from Vision 2021 and Vision 2019 are attached.
  5. Good morning, it's me again. I've just noticed that my alpha textures are no longer working in SP3.1. I don't know exactly when they stopped. I opened up the Vision 2021 Demo file to make sure that something wasn't wrong on my end, and sure enough, the alpha textures didn't work there either. Screenshot below. Is this something wrong with my machine? If this appears to be working on your end, I'll try repairing/reinstalling Vision 2021 and see if that fixes it.
  6. Not a VW employee, but I believe that if you need to use MA-Net2 or MA-Net3, you need a physical Vision dongle; it won't work with a serial number through Service Select. From what I understand, this is an issue with their agreement with MA Lighting. It should still work over sACN or Art-Net, though. If you DO have a dongle, make sure that it is current by running the Help - Update Dongle command. I've had issues in the past when updating to a newer version where the dongle wouldn't be properly recognized until running that command.
  7. @BenJ This was likely caused by the things you were trying to click on being made up of multiple classes, or by the internal geometry of a symbol being in a different class than the symbol itself. In other words, if the symbol is placed in the "Tables and Chairs" class (which is active) but the geometry inside the symbol is in the "None" class, and your class visibility is set to just Show/Snap, a single click looks at the internal geometry's class while a lasso select uses the symbol's class. I understand it's a bit confusing, but it's the reason I almost always draft with my View Classes set to Show/Snap/Modify and my View Layers set to Show/Snap. (There's a small diagnostic script sketch below the post list for checking which classes a symbol's internal geometry actually uses.)
  8. I understand that the human eye does sort of an "auto-balance" when it comes to color temperature, but the human eye is what I most commonly light for. Setting the base color temperature to 3400K would be tricky in that it would cause the moving lights (which have a native CCT of between 6000K and 7000K) to read much more blue than the human eye would see them, which would not look as good on a moving light show. I guess what I'm saying is that I felt the older color temperature model did a good job of balancing the extremes and was more accurate to my eye, which is what matters most to me when I'm previsualizing. That being said, I am also currently working on a filmed opera in a traditional stage house with primarily Source 4s (which is where the above screen shots come in). Because we are primarily using conventional tungsten-halogen sources, the cameras are all going to be set with a base white color temperature of 3800K, so the default Vision white balance of 6000K is unfortunately going to be hard to compensate for in previs without grabbing every single light and raising its color temperature by 2000K (though that's looking like the only solution I have at the moment; a rough batch-edit script sketch of that workaround is below the post list). But I also need the Vision 2021 absolute DMX cameras to simulate accurate camera movement and placement, so I'm kind of in a pickle. If/when a white balance setting is added, would there be a way to add an option for "false" color similar to the old color model, to better reflect what lighting looks like to the human eye? With any luck, this COVID situation will end and I will go back to lighting for a live audience, and I'd love to be able to accurately simulate what a mixed-source show will look like for them.
  9. Assuming that you are running VW2021, they changed the way that multi-circuit fixtures work. I just dropped in an Iris 4 and for some reason, VW isn't properly showing it as a multi-circuit fixture. After doing a bunch of experimentation, I found that editing the 3D component of the "Light Instr Strand Iris 4" symbol and deleting the C-clamp made the fixture suddenly behave as the 4-cell fixture that it is. Once you've done that, a drop-down in the Object Info Palette called "Edit Cell" will be available. From here, you can select each cell individually to assign color, channel, unit number, dimmer, etc, or select <All> to change values for the entire fixture.
  10. While it skews a little blue, I find that Vectorworks is much more accurate with color. I think Vision is skewing far too dramatically to the warm side. I do a lot of museum lighting design and use sources at 3000K, and if they were as orange as Vision is showing, I would absolutely be fired. All I'm saying is to physically get a Source 4 in the office, turn it on, and compare it to Vision's output. At this point, I don't know if I can use Vision 2021 to do previz for shows with conventional fixtures if every light is going to look like it has 1/2 CTO in it. I suppose I can set those fixtures to a color temperature of 5000K or so. Maybe as a feature request, could we get a white balance document setting to choose what color temperature should be white?
  11. Good morning. Did something change with regard to color rendering between Vision 2019 and Vision 2021? A Source 4 with no color is reading super amber and not at all realistic. Screen captures from Vision 2021 and Vision 2019 are attached; though still a little pink, the 2019 capture is much closer to what an N/C Source 4 10-degree looks like.
  12. I meant to warn you about it only working in Top/Plan (the bounding box gets distorted in 3D views). Found the problem: VW2019 lets you extrude along a line, but VW2021 only extrudes along a NURBS curve. So I added a couple of lines to convert the line to a NURBS curve before extruding. This should also work in VW2019.

PROCEDURE Ortho2Dto3D;
{*
Polls for plan, front, and side objects and produces simple 3D object

Developed by: Jesse Cogswell
Date: 4/11/2021
VW Version: 2019

Revisions:
*}

CONST
    BUFF = 6;

VAR
    frontA,frontB,sideA,sideB:POINT;
    origin,trackLoc:POINT3D;
    width,height,depth,centerZ:REAL;
    plan,frontProfile,sideProfile,planObj,frontObj,sideObj:HANDLE;
    result:INTEGER;

FUNCTION CheckObjCallback(h:HANDLE) : BOOLEAN;
{Provides Callback for selecting extrude objects}
VAR
    objType:INTEGER;
BEGIN
    objType:=GetTypeN(h);
    IF((objType=3)|(objType=4)|(objType=5)|(objType=6)|(objType=13)|(objType=21)) THEN CheckObjCallback:=TRUE;
END;

FUNCTION CreatePlanObj(h:HANDLE; rBase,rDepth:REAL) : HANDLE;
{Creates 3D object representing plan geometry}
VAR
    tempHd:HANDLE;
BEGIN
    tempHd:=CreateDuplicateObject(h,NIL);
    CreatePlanObj:=HExtrude(tempHd,rBase,rDepth);
END;

FUNCTION CreateFrontObj(h:HANDLE; pOrigin:POINT3D; pA,pB:POINT; rHeight,rDepth:REAL) : HANDLE;
{Accepts profile information and returns Handle to 3D geometry of front subtract object}
VAR
    pathHd,rectHd,cutHd,solidHd,objHd:HANDLE;
    objCtr:POINT3D;
    subResult:INTEGER;
BEGIN
    {Create Path}
    MoveTo(pOrigin.x,pOrigin.y-(rHeight*0.5)-BUFF);
    LineTo(pOrigin.x,pOrigin.y+(rHeight*0.5)+BUFF);
    pathHd:=ConvertToNURBS(LNewObj,FALSE);
    Move3DObj(pathHd,0,0,(rDepth*0.5));

    {Create Profile}
    Rect(pA.x-BUFF,pA.y+BUFF,pB.x+BUFF,pB.y-BUFF);
    rectHd:=LNewObj;

    {Extrude Cut Profiles}
    solidHd:=ExtrudeAlongPath(pathHd,rectHd);
    cutHd:=ExtrudeAlongPath(pathHd,h);
    DelObject(rectHd);
    DelObject(pathHd);
    subResult:=SubtractSolid(solidHd,cutHd,objHd);

    {Move 3D Geometry to Plan object center}
    Get3DCntr(objHd,objCtr.x,objCtr.y,objCtr.z);
    Move3DObj(objHd,0-objCtr.x+pOrigin.x,0,0-objCtr.z+(rDepth*0.5));

    IF(subResult=0) THEN CreateFrontObj:=objHd;
END;

FUNCTION CreateSideObj(h:HANDLE; pOrigin:POINT3D; pA,pB:POINT; rWidth,rDepth:REAL) : HANDLE;
{Accepts profile information and returns Handle to 3D geometry of side subtract object}
VAR
    pathHd,rectHd,cutHd,solidHd,objHd:HANDLE;
    objCtr:POINT3D;
    subResult:INTEGER;
BEGIN
    {Create Path}
    MoveTo(pOrigin.x+(rWidth*0.5)+BUFF,pOrigin.y);
    LineTo(pOrigin.x-(rWidth*0.5)-BUFF,pOrigin.y);
    pathHd:=ConvertToNURBS(LNewObj,FALSE);
    Move3DObj(pathHd,0,0,(rDepth*0.5));

    {Create Profile}
    Rect(pA.x-BUFF,pA.y+BUFF,pB.x+BUFF,pB.y-BUFF);
    rectHd:=LNewObj;

    {Extrude Cut Profile}
    solidHd:=ExtrudeAlongPath(pathHd,rectHd);
    cutHd:=ExtrudeAlongPath(pathHd,h);
    DelObject(rectHd);
    DelObject(pathHd);
    subResult:=SubtractSolid(solidHd,cutHd,objHd);

    {Move 3D Geometry to Plan object center}
    Get3DCntr(objHd,objCtr.x,objCtr.y,objCtr.z);
    Move3DObj(objHd,0,0-objCtr.y+pOrigin.y,0-objCtr.z+(rDepth*0.5));

    IF(subResult=0) THEN CreateSideObj:=objHd;
END;

BEGIN
    {Poll Plan object}
    SetTempToolHelpStr('Select Plan Object');
    TrackObject(CheckObjCallback,plan,trackLoc.x,trackLoc.y,trackLoc.z);
    HCenter(plan,origin.x,origin.y);
    origin.z:=0;

    {Poll Front object}
    SetTempToolHelpStr('Select Front Object');
    TrackObject(CheckObjCallback,frontProfile,trackLoc.x,trackLoc.y,trackLoc.z);

    {Poll Side object}
    SetTempToolHelpStr('Select Side Object');
    TrackObject(CheckObjCallback,sideProfile,trackLoc.x,trackLoc.y,trackLoc.z);

    {Get bounding boxes for Front and Side}
    GetBBox(frontProfile,frontA.x,frontA.y,frontB.x,frontB.y);
    GetBBox(sideProfile,sideA.x,sideA.y,sideB.x,sideB.y);

    {Determine overall dimensions}
    width:=frontB.x-frontA.x;
    height:=sideB.x-sideA.x;
    depth:=frontA.y-frontB.y;

    {Create 3D Geometry}
    planObj:=CreatePlanObj(plan,origin.z,depth);
    frontObj:=CreateFrontObj(frontProfile,origin,frontA,frontB,height,depth);
    sideObj:=CreateSideObj(sideProfile,origin,sideA,sideB,width,depth);

    {Cut Objects}
    result:=SubtractSolid(planObj,frontObj,planObj);
    result:=SubtractSolid(planObj,sideObj,planObj);
END;
Run(Ortho2Dto3D);
  13. Believe it or not, I think this would be VERY complicated in Marionette. Marionette is not all that flexible and can be a bit of a nightmare to work with, though I suppose it might be possible as long as the nodes exist. But I think I was able to make this happen using Vectorscript. After running the command, it will ask you to select a Plan object, a Front object, and a Side object, in that order. These objects can be anything that can be extruded (circles, rectangles, polygons, polylines, etc.). The script will extrude each profile along a path, along with a solid rectangle a little larger than the bounding box of the profile, then subtract those objects to create a "cutting" object for the front and side. It then subtracts these from an extrude created by the plan object. I haven't tested this with a wide variety of objects, so it might still break with something complicated, but it is worth trying out. Video of me testing it is below, as well as the source code. Feel free to save it in a script palette or make it a full-blown menu command.

Video of me testing the tool. Windows won't capture floating palettes in screen cap video, so imagine me selecting the script from a script palette. Vectorworks Spotlight 2019 - [2D Orthographic to 3D.vwx] 2021-04-11 21-00-00.mp4

Vectorscript code. It's probably a little rough in terms of variable naming and such, but it certainly works. I also only tested it in Freedom Units, so let me know if it breaks when exposed to the logic of the Metric System.

PROCEDURE Ortho2Dto3D;
{*
Polls for plan, front, and side objects and produces simple 3D object

Developed by: Jesse Cogswell
Date: 4/11/2021
VW Version: 2019

Revisions:
*}

CONST
    BUFF = 6;

VAR
    frontA,frontB,sideA,sideB:POINT;
    origin,trackLoc:POINT3D;
    width,height,depth,centerZ:REAL;
    plan,frontProfile,sideProfile,planObj,frontObj,sideObj:HANDLE;
    result:INTEGER;

FUNCTION CheckObjCallback(h:HANDLE) : BOOLEAN;
{Provides Callback for selecting extrude objects}
VAR
    objType:INTEGER;
BEGIN
    objType:=GetTypeN(h);
    IF((objType=3)|(objType=4)|(objType=5)|(objType=6)|(objType=13)|(objType=21)) THEN CheckObjCallback:=TRUE;
END;

FUNCTION CreatePlanObj(h:HANDLE; rBase,rDepth:REAL) : HANDLE;
{Creates 3D object representing plan geometry}
VAR
    tempHd:HANDLE;
BEGIN
    tempHd:=CreateDuplicateObject(h,NIL);
    CreatePlanObj:=HExtrude(tempHd,rBase,rDepth);
END;

FUNCTION CreateFrontObj(h:HANDLE; pOrigin:POINT3D; pA,pB:POINT; rHeight,rDepth:REAL) : HANDLE;
{Accepts profile information and returns Handle to 3D geometry of front subtract object}
VAR
    pathHd,rectHd,cutHd,solidHd,objHd:HANDLE;
    objCtr:POINT3D;
    subResult:INTEGER;
BEGIN
    {Create Path}
    MoveTo(pOrigin.x,pOrigin.y-(rHeight*0.5)-BUFF);
    LineTo(pOrigin.x,pOrigin.y+(rHeight*0.5)+BUFF);
    pathHd:=LNewObj;
    Move3DObj(pathHd,0,0,(rDepth*0.5));

    {Create Profile}
    Rect(pA.x-BUFF,pA.y+BUFF,pB.x+BUFF,pB.y-BUFF);
    rectHd:=LNewObj;

    {Extrude Cut Profiles}
    solidHd:=ExtrudeAlongPath(pathHd,rectHd);
    cutHd:=ExtrudeAlongPath(pathHd,h);
    DelObject(rectHd);
    DelObject(pathHd);
    subResult:=SubtractSolid(solidHd,cutHd,objHd);

    {Move 3D Geometry to Plan object center}
    Get3DCntr(objHd,objCtr.x,objCtr.y,objCtr.z);
    Move3DObj(objHd,0-objCtr.x+pOrigin.x,0,0-objCtr.z+(rDepth*0.5));

    IF(subResult=0) THEN CreateFrontObj:=objHd;
END;

FUNCTION CreateSideObj(h:HANDLE; pOrigin:POINT3D; pA,pB:POINT; rWidth,rDepth:REAL) : HANDLE;
{Accepts profile information and returns Handle to 3D geometry of side subtract object}
VAR
    pathHd,rectHd,cutHd,solidHd,objHd:HANDLE;
    objCtr:POINT3D;
    subResult:INTEGER;
BEGIN
    {Create Path}
    MoveTo(pOrigin.x+(rWidth*0.5)+BUFF,pOrigin.y);
    LineTo(pOrigin.x-(rWidth*0.5)-BUFF,pOrigin.y);
    pathHd:=LNewObj;
    Move3DObj(pathHd,0,0,(rDepth*0.5));

    {Create Profile}
    Rect(pA.x-BUFF,pA.y+BUFF,pB.x+BUFF,pB.y-BUFF);
    rectHd:=LNewObj;

    {Extrude Cut Profile}
    solidHd:=ExtrudeAlongPath(pathHd,rectHd);
    cutHd:=ExtrudeAlongPath(pathHd,h);
    DelObject(rectHd);
    DelObject(pathHd);
    subResult:=SubtractSolid(solidHd,cutHd,objHd);

    {Move 3D Geometry to Plan object center}
    Get3DCntr(objHd,objCtr.x,objCtr.y,objCtr.z);
    Move3DObj(objHd,0,0-objCtr.y+pOrigin.y,0-objCtr.z+(rDepth*0.5));

    IF(subResult=0) THEN CreateSideObj:=objHd;
END;

BEGIN
    {Poll Plan object}
    SetTempToolHelpStr('Select Plan Object');
    TrackObject(CheckObjCallback,plan,trackLoc.x,trackLoc.y,trackLoc.z);
    HCenter(plan,origin.x,origin.y);
    origin.z:=0;

    {Poll Front object}
    SetTempToolHelpStr('Select Front Object');
    TrackObject(CheckObjCallback,frontProfile,trackLoc.x,trackLoc.y,trackLoc.z);

    {Poll Side object}
    SetTempToolHelpStr('Select Side Object');
    TrackObject(CheckObjCallback,sideProfile,trackLoc.x,trackLoc.y,trackLoc.z);

    {Get bounding boxes for Front and Side}
    GetBBox(frontProfile,frontA.x,frontA.y,frontB.x,frontB.y);
    GetBBox(sideProfile,sideA.x,sideA.y,sideB.x,sideB.y);

    {Determine overall dimensions}
    width:=frontB.x-frontA.x;
    height:=sideB.x-sideA.x;
    depth:=frontA.y-frontB.y;

    {Create 3D Geometry}
    planObj:=CreatePlanObj(plan,origin.z,depth);
    frontObj:=CreateFrontObj(frontProfile,origin,frontA,frontB,height,depth);
    sideObj:=CreateSideObj(sideProfile,origin,sideA,sideB,width,depth);

    {Cut Objects}
    result:=SubtractSolid(planObj,frontObj,planObj);
    result:=SubtractSolid(planObj,sideObj,planObj);
END;
Run(Ortho2Dto3D);
  14. I've been playing around with the DMX cameras, particularly the new Absolute Camera, and have run into some issues. The major one, which currently makes the absolute camera unusable for me, is that slow movement appears really "steppy." I'm attaching a video showing a 30-second complex camera movement (slow dolly in, pan down, tilt up), and the camera appears to be "bouncing." I suspect that this is caused by the DMX parameters for the absolute camera being 32-bit parameters, which my console (ETC Eos) does not support. I think the only lighting console at the moment that supports 32-bit parameters is the MA3. Vision Camera Test.mp4

So I'm back to using the relative camera for now, which is a hugely laborious process to program since it involves a ton of trial and error to get the camera to land where you want it to when doing complex maneuvers. I've also noticed some inaccuracies in the Vision help file regarding relative cameras.

Speaking of the Eos, the fixture profile for the Vision relative camera is functionally useless. I don't know if you guys built the profile or if ETC did, but relegating the camera activation to a Control parameter means that I can't activate a camera with a cue without using a macro. The ranges are also pretty far off and don't have an easy way of getting to the "dead" zone at the middle of each parameter (49-51%). The parameters all home to 100%, so you have to go through the entire upper range at high movement to get to the dead zone in between. Not ideal. Also, camera orbit should not be in the Shutter encoder group.

I opted to build my own profile and, in the process, discovered some things. When listing ranges, especially for 16-bit parameters, you should list the actual DMX output rather than the percentage (see the arithmetic sketch below the post list). For example, the documentation for Pan Vert lists the range 1-48 as Pan Down; actual movement occurs at DMX 31680, which is 48.34%. It may seem small, but that means that if you set the parameter to a straight 48% (DMX 31456) as the documentation states, you're already well into a pan move rather than on the cusp of one. The ranges listed for Pan Left and Pan Right are backward: 1-48 pans right, 51-99 pans left. The ranges listed for Orbit Up and Orbit Down are backward (assuming we're talking about orbiting the camera downward; if we're talking about orbiting the view, then Orbit Left and Orbit Right are backward). The ranges listed for Zoom In and Zoom Out are backward: 1-48 zooms out, 52-99 zooms in.
  15. I have a couple of questions regarding the Render Buffer when rendering out stills and videos. All the help file says is, "To help achieve the desired resolution, select a size that is smaller than the Resolution in the Render Movie dialog box." How much smaller? Does it need to keep the same aspect ratio as the output? What exactly does the Render Buffer do?

I have a fairly complex file that I've been working on. When I try to render out the video with the Render Buffer matching the output, the render always fails about halfway through and Vision crashes. However, if I set the Render Buffer to a smaller size than the output, the resulting render has a completely different camera view, and effects such as bloom don't appear at all. Below are three images: the first is a screen capture from the Vision software, and the second and third are the direct output from running the Render Still command immediately after taking the screen shot.

1. Screen capture from Vision. All quality settings at Medium, 1920x1080 resolution. Monitor resolution 1920x1080. Notice the exaggerated bloom effect and the camera framing with the bottom of the platform legs just barely in frame.
2. Rendered still. Output at 1920x1080, render buffer at 1280x720, all settings at High. Notice the complete lack of bloom and the camera FOV being significantly wider than the screen capture.
3. Rendered still. Output at 1920x1080, render buffer at 1920x1080, all settings at High. Bloom is retained, and the camera, while still having a wider FOV, is at least close to the screen capture.

Vision crashed while generating these stills and generally does crash after completing any render, still or movie. I've attached the dmp file from the last render in a zipped folder. Analyzing it shows a NULL_POINTER_READ_c0000005 error. No dmp file is generated when it crashes in the middle of rendering out a video.

Vision version: 26.0.3.586381
OS: Windows 10 Pro 2004 build 19041.867
Processor: i7-8650U @ 1.9 GHz, 2.11 GHz base speed
RAM: 16 GB
Graphics: NVIDIA GeForce GTX 1060, 6 GB dedicated VRAM

VisCrashDump28.zip
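
Regarding post 7 above: a minimal Vectorscript sketch (not from the original posts) for checking which class a symbol instance and its internal geometry actually live in. It assumes a single symbol is selected on the active layer and uses standard traversal calls; treat it as illustrative only.

PROCEDURE ListSymbolClasses;
{Reports the class of the selected symbol instance and of each object inside its definition,
 to help spot internal geometry sitting in a different class than the symbol itself.}
VAR
    symInst,defHd,objHd:HANDLE;
    report:STRING;
BEGIN
    symInst:=FSActLayer;                                {first selected object on the active layer}
    IF(GetTypeN(symInst)=15) THEN BEGIN                 {type 15 = symbol instance}
        report:=Concat('Symbol class: ',GetClass(symInst));
        defHd:=GetObject(GetSymName(symInst));          {handle to the symbol definition}
        objHd:=FInSymDef(defHd);                        {first object inside the definition}
        WHILE(objHd<>NIL) DO BEGIN
            report:=Concat(report,Chr(13),'Internal object class: ',GetClass(objHd));
            objHd:=NextObj(objHd);
        END;
        AlrtDialog(report);
    END
    ELSE AlrtDialog('Please select a symbol instance first.');
END;
Run(ListSymbolClasses);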
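
Regarding post 8 above: a minimal sketch of the "grab every light and raise its color temperature by 2000K" workaround, not from the original post. It assumes the Lighting Device parameter record exposes a color temperature field; the field name 'Color Temperature' used here is an assumption and should be checked against the actual parameter name in your workspace before running anything like this.

PROCEDURE BumpColorTemp;
{Adds a fixed Kelvin offset to the color temperature of every selected Lighting Device.}
CONST
    kRecord = 'Lighting Device';
    kField  = 'Color Temperature';   {assumed field name - verify against your Lighting Device record}
    kBump   = 2000;                  {Kelvin offset to add}

PROCEDURE Bump(h:HANDLE);
VAR
    current:REAL;
BEGIN
    current:=Str2Num(GetRField(h,kRecord,kField));
    SetRField(h,kRecord,kField,Num2Str(0,current+kBump));
    ResetObject(h);                  {force the plug-in object to regenerate}
END;

BEGIN
    {apply to every selected Lighting Device}
    ForEachObject(Bump,((PON='Lighting Device') & (SEL=TRUE)));
END;
Run(BumpColorTemp);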
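
Regarding post 14 above: a quick arithmetic sketch (not from the original post) showing why percentage ranges are ambiguous for 16-bit parameters. The numbers match the Pan Vert example in the post: a console value of 48% lands at DMX 31456, while the documented movement threshold of DMX 31680 corresponds to 48.34%.

PROCEDURE PctVsDMX16;
{Converts a console percentage to a 16-bit DMX value and a DMX value back to a percentage.}
VAR
    pctAsDMX,threshold:LONGINT;
BEGIN
    pctAsDMX:=Trunc((48/100)*65535);     {48% -> DMX 31456}
    threshold:=31680;                    {documented Pan Vert movement threshold}
    Message('48% = DMX ',pctAsDMX,';  DMX ',threshold,' = ',(threshold/65535)*100,'%');
    {prints: 48% = DMX 31456;  DMX 31680 = 48.34...%}
END;
Run(PctVsDMX16);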

 
