Forum Administrator
Everything posted by JimW

  1. Correct: if the document units were set so that a whole number in those units was within the realm of the accuracy needed, then it would most likely not be an issue. If you need to measure down to an accuracy of a fraction of the thickness of a strand of hair (effectively the size of the inaccuracy being discussed here), then inches and 16ths of an inch are too large a unit to do so. We don't force it like other applications do; they just automatically round the value the user is shown to make them feel better (or automatically switch to a smaller unit of precision), but the error still exists underneath. I would say letting the user set an absolute rounding value would also make sense, but it would still be a "dishonest" value underneath. I'll broach the topic with the directors and see what they think; according to a quick internal tracking and forum search, this is not something that has come up for them in a while. EDIT: To clarify, though, wouldn't asking for manual control over the rounding give the user the same control they currently have by setting the fraction precision to 1/64 and not showing decimals for inexact fractions?
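The display-rounding point above can be sketched in a few lines of generic Python (nothing Vectorworks-specific; the value and unit here are made up for illustration): rounding only what the user sees leaves the full error in the stored value underneath.

```python
from fractions import Fraction

# A "true" length that is not an exact multiple of 1/64 inch
stored = 1.2345678  # inches, as held internally by an application

# Round only what the user sees, to the nearest 1/64"
shown_64ths = round(stored * 64) / 64

print(f"stored value : {stored!r}")
print(f"displayed    : {Fraction(round(stored * 64), 64)} in")
print(f"hidden error : {abs(stored - shown_64ths):.7f} in")
```

The displayed fraction looks tidy, but the difference between it and the stored value never goes away; it simply stops being shown.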
  2. If you place the path snapped along the side of the profile objects, does it behave differently?
  3. The two posts above are where this issue ends, unfortunately. It cannot be corrected in any meaningful way in Vectorworks. If engineering were forced to act on it, they would most likely simply remove the dimensional fields from the OIP for objects with questionable results at that incredibly small scale, and ensure that the dimensions always read correctly instead. I don't like it, but it isn't going to change anytime soon; keeping geometry near or appropriate to the full unit size used in dimensioning is the only way to avoid it.
  4. Gradients and transparencies are still somewhat heavy, but since they are now handled by the much faster GPU-dependent VGM, they no longer cause as much pain as they did pre-2015. The heavy/problematic objects I run into most frequently these days: 1) Inefficient mesh objects (mainly OBJ, 3DS and STL imports) that were created in other software without polygon count in mind. 2) Unnecessarily dense 2D geometry, which often comes from DWG import. This takes the form of straight lines or simple curves defined by hundreds or thousands of vertices when only a handful are needed to display the original shape. 3) PDFs. A PDF of somewhat high resolution (2K+) can bring Vectorworks to a crawl where a simple bitmap import of the same image would not. This is mostly due to the high resolution, as well as the embedded vertex/snapping geometry that some PDFs contain; it's why we put effort into the crop, resolution downsampling and snapping geometry controls for PDFs. 4) Having any type of object extremely far from the document origin. Even a few 2D loci can do it. Normally this presents as disappearing objects and failed surface/solids operations, but it can also present as slow screen redraw. For file SIZE, the easiest way for a document to balloon is for multiple high-resolution image imports or textures to be included in the document even though they aren't in use. This is fairly manageable via the Purge function, however. Not using symbols for repetitive geometry can also make a file much larger than it needs to be quite quickly.
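The "unnecessarily dense 2D geometry" problem in point 2 can be illustrated outside Vectorworks with a small Python sketch of Ramer-Douglas-Peucker simplification, the standard algorithm for thinning over-sampled polylines. This is an illustrative stand-in, not Vectorworks' own simplification code:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop vertices that deviate from the
    start-end chord by less than epsilon."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Find the interior point farthest from the chord
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Keep that point and recurse on both halves
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A "straight" line sampled at 1001 vertices, as a bad DWG export might do
dense = [(i / 1000, 0.0) for i in range(1001)]
print(len(rdp(dense, 1e-6)))  # collapses to the 2 endpoints
```

A thousand-vertex "line" collapses to its two endpoints, while any vertex that actually changes the shape by more than the tolerance is kept.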
  5. The problem with verifying what is and isn't multicore in Vectorworks is that during import (and many other types of actions) it bounces back and forth between single-threaded and multithreaded actions rapidly. The functional math being performed at the very bottom is all single core, but there are multicore/threaded processes that switch on and off to perform other operations in the meantime, whenever the single thread importing items has finished a "chunk" that needs work done on it by another process before the import is completed. You'll see this more clearly in the total CPU utilization % than in the individual core utilization percentages. Also, "real" vs "fake" (physical vs logical) cores is another complex discussion, but when implemented properly by a software's developer, it doesn't matter to a process thread whether a core is "real" or not. By the way, if you have hyperthreading enabled, you now have 8 logical cores, rather than 4 physical and 4 logical. Without hyperthreading, you'd just have 4 physical cores and no "fake"/logical cores. For example, here's a fully multithreaded action (a Renderworks rendering in the indirect lighting phase): you can see Cinerender using 700% and change, effectively using 7 cores to their fullest plus a bit of the remaining core, which also has to handle the rest of the processes on my machine, including Vectorworks, which passes information between the various render phases. As you can see, there are 8 cores on my CPU here: 4 physical cores hyperthreaded to 8 logical cores. Regardless of whether a core is "real" or not, it is fully utilized when doing a rendering. Now, for geometry/math operations, this changes a bit. Since it's a single-threaded operation, take a look at this example where I duplicate an array of 20 complex mesh objects. See how it's using 4 cores, but not all the way? And that the Vectorworks process sits at 88%?
(It hovered between 66% and 100% for this test, but it was fluctuating and I missed it in the screenshot.) That means Vectorworks is only pushing a single thread to be processed, but the OS is deciding that single thread can be spread across 4 cores. The OS is in charge of which of those cores the operation is spread across; this can be seen in the color differences in the individual-core CPU monitor. You see a lot of red in the second screenshot but almost all green in the first. Red is CPU usage dictated by the OS; green is CPU usage dictated by the application. So when you see the red/green split like that, you aren't seeing Vectorworks performing a multithreaded process, you're seeing the operating system attempting to split up a single-threaded process. NOTE: I've been told that when the OS does this, it's never allowed to use all the cores available, but the rules can differ between Mac and Windows. I haven't tested that personally, however, and don't know how true those claims are. I am very much oversimplifying what the engineers have explained to me in great detail over the years, but this is the general gist of it. Now, as a more practical answer: I have a number of different types of hardware available here for testing, and if you can Dropbox/Google Drive me one of these large files, I can absolutely test it on a few of them to see if a significant difference in import time appears. No matter how solid the theory involved may be, nothing beats a real-world usage test.
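The single-thread vs. fully-threaded distinction described above can be made concrete with a generic Python sketch (nothing Vectorworks-specific; the workload is an arbitrary stand-in): a process pool lets the OS schedule one worker per core, the way a renderer's buckets do, while a plain loop stays pinned to one thread no matter how many cores exist.

```python
import concurrent.futures as cf
import os

def burn(n):
    """A CPU-bound chunk of work (stand-in for one render bucket)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [200_000] * 8

    # Single-threaded: one core does all the work in sequence,
    # like a geometry/math operation
    serial = [burn(n) for n in chunks]

    # Multiprocess: the OS can run one worker per core, like the
    # indirect-lighting phase of a Renderworks rendering
    with cf.ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        parallel = list(pool.map(burn, chunks))

    # Same answer either way; only the core utilization differs
    print(f"{os.cpu_count()} logical cores; results match: {serial == parallel}")
```

Watching a CPU monitor while each half runs shows the same pattern as the screenshots: the pooled version pushes every logical core, while the loop shows one thread's worth of work smeared across cores by the scheduler.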
  6. This issue was confirmed and filed as a bug: VB-138985 I found no other workaround than restarting Vectorworks entirely, and even that didn't always resolve it permanently.
  7. After switching to Arial, if I create a new viewport, it works fine, but the old converted viewports still do not display the text properly. If recreating the viewports after selecting a supported font works in your document let me know, otherwise make sure to get in touch directly with technical support and have them take a look. If they can confirm this as a conversion issue it can be filed as a bug.
  8. Also, go to Tools > Options > Vectorworks Preferences > Display > Edit Font Mappings on both your 2016 and 2017 licenses and make sure they are mapped the same way, this sometimes doesn't migrate properly between versions with some font types.
  9. You're talking about quite a few things here; I'll attempt to break them down: 1) You will have very little success attempting to speed up the geometry calculations that occur during import with any hardware upgrades. 2) Rendering in Vectorworks is fully multicore; geometry/file operations/math are not. 3) Going from 16GB of RAM to 32GB will likely not show an increase in performance unless the files you are using actively take up all of your system RAM when left open and idling, which is uncommon. 4) The operating system (as long as it's supported) has very little, if any, effect on the speed of any aspect of Vectorworks. 5) Your graphics card will have no effect on file import, only on how long it takes to display the file in OpenGL or Top/Plan, which is generally just a few seconds even on mediocre cards once the geometry calculations are complete. 6) Upgrading from an SSD to a RAM disk might help file import, but only VERY slightly, and more likely than not it wouldn't be noticeable, since the main bottleneck is CPU usage rather than disk read/write speed. You aren't doing anything wrong at all; it's just that the underlying math and geometry calculations in Vectorworks have not yet been updated to take advantage of multicore processors. Doing this requires rebuilding systems from scratch, so it is only being done in one or two areas at a time, as doing it all in one version wouldn't be possible. Hopefully this is the information you were looking for; if not, please let me know and I can clarify further.
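The "CPU-bound, not disk-bound" claim in point 6 can be sanity-checked with a crude generic timing sketch (the file format and sizes here are invented for illustration): reading a file from disk is usually a small fraction of the time spent interpreting its contents, which is why faster storage barely moves the needle on import.

```python
import os
import tempfile
import time

# Write a moderately large text file of coordinate pairs
payload = "0.123456789,9.876543210\n" * 200_000
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write(payload)
    path = f.name

# Time the raw disk read
t0 = time.perf_counter()
with open(path, "r") as f:
    raw = f.read()
t_read = time.perf_counter() - t0

# Time the "import" work: parsing every line into numbers
t0 = time.perf_counter()
coords = [tuple(map(float, line.split(",")))
          for line in raw.splitlines()]
t_parse = time.perf_counter() - t0

os.unlink(path)
print(f"read: {t_read:.3f}s  parse: {t_parse:.3f}s")
```

On most machines the parse step dominates; a real importer additionally builds geometry from the parsed data, tilting the balance even further toward the CPU.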
  10. The main advantage is when roundtripping VW < > C4D. I was unaware of the reflectance limitation originally or I would have made it much clearer in the release marketing materials I worked with, apologies. The reflectance limitations are planned to be removed, but I do not know when.
  11. I recommend resetting preferences as described here:
  12. Agreed! I would absolutely like this as an option in OpenGL settings. Request Submitted: VE-96212
  13. If you're in a viewport, ^ this ^ is the most likely cause. If the above doesn't correct it: If you set all classes to visible, then verify that View > Layer and View > Class Options are both set to Show/Snap/Modify Others, do you still not see the dimensions? If you hover over where the dimensions were placed, do you see a highlight but no geometry? Or just nothing at all?
  14. TECHNICALLY that is correct, but these days the difference is so negligible that you would be hard pressed to notice it. You might see a difference in math/geometry calculation speed between a single 1.8 GHz core and a single 4.5 GHz core, but once the gap gets much smaller than that, it becomes genuinely difficult to measure. For Renderworks renderings it is almost always better to have more cores than a higher clock speed. EDIT: The multi-monitor performance/instability issue is a filed bug; it seems to be more common on Windows, but I have seen slowdowns on Mac as well that are far more severe than would be expected for the increase in GPU load from two monitors. I will add your post to the case.
  15. Navigating around in Top/Plan uses the GPU when in Best Performance mode and is multicore (GPU core) aware, but it does not use more than one CPU core. Math and geometry operations (regardless of the view you're in) are still CPU-only and single core. More and more work will be offloaded onto the GPU in the future, since the performance gains are significant, so the GPU's importance will only increase. As for troubleshooting: disconnect one of the two monitors and reboot the machine. When you launch Vectorworks (leaving Navigation Graphics where you had it before this thread), does it still lag and flash, or does it behave differently?
  16. Contacting the Training department now.
  17. If these are your current specs, hardware is most likely not your problem at all. Post the following from your machine please: and I can take a closer look. Also, tell me what you have Navigation Graphics set to under Tools > Options > Vectorworks Preferences > Display, and whether, when you change that setting and try similar edits to geometry, you get the same flashing or not.
  18. What are the graphics/GPU options on both of those machines, and what kind of work do you normally use Vectorworks for?
  19. @AlexS in Support I think had the cleanest fix: 1) Recreate the Ceiling-main class in the affected document manually. 2) Use Custom Select or Select Similar to select all Door objects. 3) Click Settings in the OIP and under 2D Visualization, set the visibility to be controlled by Class and then next to Wall Lines, select Ceiling-main. This seemed to bring it back to normal in a test document here.
  20. There would need to be consideration given to multiple scales for design layers, if layers were aligned or not, and then if users didn't always print to the same paper size (very common) they'd have to configure it differently for each marquee print. It is unfortunately not as simple as it may look at first.
  21. If it was mapped to None, then I am not sure of the best solution. If it were mapped to any class other than None, you could move all other objects out of that class, recreate Ceiling-main manually, then delete the mapped class and redirect the components to Ceiling-main; but None is not a class you are allowed to delete. I'll check with a few others here, but that isn't one I can remember running into before, apologies.
  22. Normally you would just create a viewport, crop it to whatever shape you'd like, then print from the sheet layer directly so that you can orient the cropped viewport the way you want on the page with the viewport ensuring accurate scaling relative to the page. If we just implemented a marquee area for design layer printing, you'd have no way to ensure scale or page position. Sheet layer viewports would also resolve the black background issue without making you toggle it off.
  23. What version of Vectorworks are you using? If you deleted the class, it would have forced you to reassign objects from it (so, control of the wall lines) to a different class. What class did you map it to?
  24. Quickest method is to go to File > Export > Export Image, and then you have the option of Marquee which will let you define the borders. Then print the exported image after orienting it the way you like via your printer's built in page configuration.