
Hardware Tests and Data


Recommended Posts

So besides the obvious "What hardware can I buy to make it perfect?" type question, I want to be more inclusive of data analytics and actual testing.

 

1) I know that there is the Demo file that Vision comes with, but what are the chances of having a few "user created" scenes that we can send out for testing purposes? I would LOVE to see a data table of the same file used across a bunch of different hardware and settings.

 

2) What types of setups do you guys test on at Vectorworks HQ? Do you try multiple types of setups from minimal to absolutely insane?

 

3) In a lot of graphics-heavy programs or scenarios there are very specific requirements to get the most out of a system, right down to the exact graphics driver you are using for a given card. Has any testing, data, or thought been put into trying to figure that out?

 

4) While yes, there are limits to how far software can scale too, I would think those limiting factors would be money and time? In my opinion I would like to see improvements made in the ability to get large, fixture-heavy shows working as fast as possible, and then go back and add in the fancy features. In most cases I end up running my machine at 1024x768 and turning off every feature the program has just to get the lights to flash all at one time when I hit the button. Thus all the time put into making those features is a complete waste. Knowing that it is possible for me to buy a 64-core processor with dual 2080 Tis and 256GB of RAM and then overclock everything, but not knowing if that actually helps me at all, is painful.

 

Lastly, if there is anything we can do to help test this sign me up. 

  • Love 1
Link to comment
  • Vectorworks, Inc Employee
14 hours ago, jweston said:

1) I know that there is the Demo file that Vision comes with, but what are the chances of having a few "user created" scenes that we can send out for testing purposes? I would LOVE to see a data table of the same file used across a bunch of different hardware and settings.

We do not have any, but this has been discussed in the past. I think one issue we kept running into was users wanting their files kept private. A lot of the files we get from users are on NDA and cannot be shared with other users, but are used for internal testing.

 

14 hours ago, jweston said:

2) What types of setups do you guys test on at Vectorworks HQ? Do you try multiple types of setups from minimal to absolutely insane?

Most of our developers run on fairly minimal machines. Tech Support and Marketing have access to one "beast" machine that we often take to tradeshows. This machine was a beast when we built it, but it is now becoming dated. I'm not sure what CPU/RAM it has, but we had 2x1080Ti's in SLI in it at one point.

 

14 hours ago, jweston said:

3) In a lot of graphics-heavy programs or scenarios there are very specific requirements to get the most out of a system, right down to the exact graphics driver you are using for a given card. Has any testing, data, or thought been put into trying to figure that out?

No, but I like this idea a lot! I'm not sure how we'd go about approaching it, but that's neither here nor there 😛

 

14 hours ago, jweston said:

4) While yes, there are limits to how far software can scale too, I would think those limiting factors would be money and time? In my opinion I would like to see improvements made in the ability to get large, fixture-heavy shows working as fast as possible, and then go back and add in the fancy features. In most cases I end up running my machine at 1024x768 and turning off every feature the program has just to get the lights to flash all at one time when I hit the button. Thus all the time put into making those features is a complete waste. Knowing that it is possible for me to buy a 64-core processor with dual 2080 Tis and 256GB of RAM and then overclock everything, but not knowing if that actually helps me at all, is painful.

Those limiting factors are not time and money. I know this is an extreme example, but consider password cracking: passwords are hashed so well that the fastest computer in the world would take years or lifetimes to crack one. The "issue" here isn't necessarily the hardware; it is the algorithm that is used to crack the password. For example, a dictionary attack may be faster than brute forcing. This is all on the same hardware, but the software imposes an upper limit on how quickly the password can be cracked.
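
To put rough numbers behind that analogy (these figures are invented purely to show the shape of the argument, not real benchmarks):

```python
# Invented rates, purely to illustrate algorithm scaling vs. hardware scaling.
BRUTE_FORCE_SPACE = 95 ** 8   # 8 printable-ASCII characters: ~6.6e15 guesses
DICTIONARY_SPACE = 10 ** 6    # a one-million-word dictionary

RATES = {"old machine": 1e8, "new machine": 1e9}  # guesses per second (assumed)

for attack, space in (("brute force", BRUTE_FORCE_SPACE), ("dictionary", DICTIONARY_SPACE)):
    for machine, rate in RATES.items():
        days = space / rate / 86400
        print(f"{attack:11s} on {machine:11s}: {days:14.6f} days")

# Upgrading the hardware buys you ~10x; switching algorithms buys ~6,600,000,000x.
# The algorithm, not the hardware, sets the practical ceiling.
```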

 

Taking this conversation to Vision, look at Vision 2018 vs Vision 2019. It did not matter how much time/money/hardware you threw at Vision 2018, it just ran like garbage compared to the same machine with Vision 2019. So, while it may seem like the limiting factors of performance are time/money, this is only true to a certain extent. Eventually, you will hit an upper limit of the algorithm itself, and the issue is no longer hardware but the software design.

 

One thing to keep in mind as well is that it was somewhat intended for these features to be shut off for real-time renderings. The main reason we keep these features around is for "High Quality Render Movie/Still".

 

Lastly, I do not disagree that Vision performance for real-time renderings needs to be improved. There are a few areas we can look at, but the one I've got my eye on is the physics engine (which is what handles panning/tilting fixtures, moving meshes related to DMX XForms, etc etc).

 

 

14 hours ago, jweston said:

Lastly, if there is anything we can do to help test this sign me up. 

I'd love to get some files from you guys that you wouldn't mind shipping as a demo file for Vision 2021. We usually record some DMX with it so a user without a light board can easily play back a show file to see what Vision is capable of.

 

The only other thing is finding a way to document hardware performance as well as driver performance. This will need lots of testing.

Perhaps the best thing to do here is to come up with a "workflow" or "benchmark" for testing. For example (there's a rough sketch at the end of this post for collating the results):

  1. Load the sponza demo file
  2. Ensure all app/doc settings are set to default
  3. Playback recorded DMX
  4. Write down FPS at 0:30, 1:00, 1:30, and 2:00 (or something like that)

The most important thing when doing these kinds of tests is ensuring that everyone is testing the same way.

Having other programs like VW running in the background would affect these tests. So, we'll have to take it on good faith that no shenanigans like that are happening.
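
To make everyone's numbers easy to compare, here is a minimal sketch of how readings from that workflow could be collated into the kind of data table jweston asked for. The CSV columns (machine, gpu, driver, fps_0030, ...) are only a suggested convention for testers to fill in by hand; Vision does not export anything like this.

```python
# Minimal sketch: collate hand-recorded FPS readings from the benchmark workflow.
# Assumed CSV columns: machine,gpu,driver,fps_0030,fps_0100,fps_0130,fps_0200
import csv
from statistics import mean

def summarize(path="vision_benchmark_results.csv"):
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples = [float(row[k]) for k in ("fps_0030", "fps_0100", "fps_0130", "fps_0200")]
            rows.append((row["machine"], row["gpu"], row["driver"], mean(samples), min(samples)))
    rows.sort(key=lambda r: r[3], reverse=True)  # fastest setups first
    print(f"{'machine':20s} {'gpu':15s} {'driver':10s} {'avg':>6s} {'min':>6s}")
    for machine, gpu, driver, avg, low in rows:
        print(f"{machine:20s} {gpu:15s} {driver:10s} {avg:6.1f} {low:6.1f}")

if __name__ == "__main__":
    summarize()
```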

Link to comment
4 minutes ago, bbudzon said:

We do not have any, but this has been discussed in the past. I think one issue we kept running into was users wanting their files kept private. A lot of the files we get from users are on NDA and cannot be shared with other users, but are used for internal testing.

I can absolutely come up with some weird designs to send over. In all honesty they are not as "pretty" as the Sponza demo file. But that is purely by design. At least in our use case, we are trying to get the fastest frame rate, with as much realism as possible. Does this mean we get to use the nice new haze? Sadly no. We just need to have the lights dim, wiggle and show gobos with as much accuracy to real life as possible. Our goal is not photo realistic renderings, but something that closely resembles the timing you would expect in real life on the actual rig.

 

I will try and create a small, medium and large show file to send over your way that can be distributed to anyone.

 

10 minutes ago, bbudzon said:

Most of our developers run on fairly minimal machines. Tech Support and Marketing have access to one "beast" machine that we often take to tradeshows. This machine was a beast when we built it, but it is now becoming dated. I'm not sure what CPU/RAM it has, but we had 2x1080Ti's in SLI in it at one point.

Where is the tip jar that I can add money to in order to get you guys good hardware? While I know the current discussion is that the software may not currently scale perfectly with hardware, I would love to push the need to have something that scales better.

 

12 minutes ago, bbudzon said:

No, but I like this idea a lot! I'm not sure how we'd go about approaching it, but that's neither here nor there 😛

Cannot say I know the best way to test this either, but I do know that with other programs this is a much bigger sticking point. Following on from this, it would be good to know whether one CPU architecture is better than another, and how cores vs. clock speed trade off.

 

Vision is arguably less about data and planning, and purely about speed and looks. So knowing how to get the most out of that is really the only thing that matters to us in Vision (maybe others disagree?)

 

17 minutes ago, bbudzon said:

Those limiting factors are not time and money. I know this is an extreme example, but consider password cracking: passwords are hashed so well that the fastest computer in the world would take years or lifetimes to crack one. The "issue" here isn't necessarily the hardware; it is the algorithm that is used to crack the password. For example, a dictionary attack may be faster than brute forcing. This is all on the same hardware, but the software imposes an upper limit on how quickly the password can be cracked.

 

Taking this conversation to Vision, look at Vision 2018 vs Vision 2019. It did not matter how much time/money/hardware you threw at Vision 2018, it just ran like garbage compared to the same machine with Vision 2019. So, while it may seem like the limiting factors of performance are time/money, this is only true to a certain extent. Eventually, you will hit an upper limit of the algorithm itself, and the issue is no longer hardware but the software design.

 

One thing to keep in mind as well is that it was somewhat intended for these features to be shut off for real-time renderings. The main reason we keep these features around is for "High Quality Render Movie/Still".

 

Lastly, I do not disagree that Vision performance for real-time renderings needs to be improved. There are a few areas we can look at, but the one I've got my eye on is the physics engine (which is what handles panning/tilting fixtures, moving meshes related to DMX XForms, etc etc).

Fair enough, but one would expect in that extreme case that the amount of time it takes to crack a password would be significantly lower on a new AMD Threadripper 3990X than on an old Pentium III CPU.

 

Obviously Vision has been getting faster. However, lighting, much like Moore's law, seems to be getting more complex just as fast. With all the current multi-part fixtures, things have gotten big fast.

 

I can get over all of the trials and tribulations (see fixture orientation, DMX transforms and the like...), but allowing the software to scale "better" with hardware should be a must.

 

I hereby place my vote to let you guys focus on making the software "solve passwords better", haha. If you guys improve that portion of the software, then I can throw hardware at it. I think it is perfectly acceptable to accept that if I want it to run faster, I just throw gear/money at it. Currently I don't feel I can do that with much success.

 

Then once all that settles we can all deal with MVR and GDTF. 🤣

 

(Steps off soapbox)

  • Like 1
Link to comment
  • Vectorworks, Inc Employee
15 minutes ago, jweston said:

I can absolutely come up with some weird designs to send over. In all honesty they are not as "pretty" as the Sponza demo file. But that is purely by design. At least in our use case, we are trying to get the fastest frame rate, with as much realism as possible. Does this mean we get to use the nice new haze? Sadly no. We just need to have the lights dim, wiggle and show gobos with as much accuracy to real life as possible. Our goal is not photo realistic renderings, but something that closely resembles the timing you would expect in real life on the actual rig.

I love this idea! We have always generally focused on prettier looking scenes with lower light counts as these same files were used for marketing material. But focusing on a performance friendly scene with many many lights is a fantastic idea. Let's coordinate on this.

 

20 minutes ago, jweston said:

I will try and create a small, medium and large show file to send over your way that can be distributed to anyone.

Just an FYI, I think the priority here would be large show file then medium then small. I like your, "Go big or go home" idea above 😉 If you can provide all three, that would be great. I'll look into seeing which ones we can get approved.

 

21 minutes ago, jweston said:

Where is the tip jar that I can add money to in order to get you guys good hardware?

🤣 I think it's somewhat of a personal preference for me. I prefer the mobility of a laptop over a desktop. And don't get me wrong, my laptop is beefy. But a laptop will almost never outperform a decent desktop. And even though I do my primary development on an MBP, I have a Windows desktop PC and its card is nicer (but could still probably use an upgrade; tbh, I never thought to request one as I use it so infrequently).

 

So, maybe saying minimal hardware wasn't completely accurate. I simply meant that your 2x2080Ti in SLI Thread Ripper is going to demolish my primary machine that I use for development 😂 I think anytime I need more power than my MBP or Windows PC can handle, I just borrow the TechSupport/Marketing machine which is at least 2x1080Ti in SLI. No thread ripper though 😛 

 

28 minutes ago, jweston said:

Fair enough, but one would expect in that extreme case that the amount of time it takes to crack a password would be significantly lower on a new AMD Threadripper 3990X than on an old Pentium III CPU.

 

Obviously Vision has been getting faster. However, lighting, much like Moore's law, seems to be getting more complex just as fast. With all the current multi-part fixtures, things have gotten big fast.

 

I can get over all of the trials and tribulations (see fixture orientation, DMX transforms and the like...), but allowing the software to scale "better" with hardware should be a must.

 

I hereby place my vote to let you guys focus on making the software "solve passwords better", haha. If you guys improve that portion of the software, then I can throw hardware at it. I think it is perfectly acceptable to accept that if I want it to run faster, I just throw gear/money at it. Currently I don't feel I can do that with much success.

 

Then once all that settles we can all deal with MVR and GDTF. 🤣

 

(Steps off soapbox)

No, you are completely right! My example was poorly formed when talking about passwords, as neither of those cases represented how software scales. So, getting right to the point, Vision itself is a good example:

(Note: I haven't tested this, strictly speaking. But this is pulling from my general experiences and memory.)

  • If you ran Vision 2018 on an old machine and ran it on a new machine, you may get a 50% increase in performance. I feel this is being generous, but perhaps I'm wrong.
  • If you ran Vision 2019 on an old machine and ran it on a new machine, you may get up to a 200% increase in performance. I expect given the right circumstances it could be much more (given how we leverage the number of textures the GPU supports).

One last thing to point out is that Vision 2019 gave you the ability to control the way performance scales. So, in Vision 2018 you were stuck with the 50% increase when you upgraded your machine. With Vision 2019+, you are given many options to control quality vs. performance. This can be used to offset the poor performance of an old machine and bring it more in line with a newer machine.

 

So all this being said, have things improved since 2018? Of course 😄 I don't think anyone is denying that.

 

But to your point, that next leap in performance sure would be nice 🙂 What makes this tough is performing analysis on the current code and then properly executing a new design without breaking existing functionality. I still truly believe, from what I've seen, that the renderer is running very well. We never truly optimized the physics engine and I believe it is to blame. It will be a major overhaul to get it to be more performant, but I think as we investigate more we will find this is the right course of action.
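
As a back-of-the-envelope illustration of why the physics engine is the thing to chase (the millisecond figures below are invented, not measurements): if the physics step and the render step run one after the other each frame, a fixed physics cost caps the frame rate no matter how fast the renderer or GPU gets.

```python
# Invented numbers, purely to show how a CPU-side physics step caps frame rate.
def fps(physics_ms, render_ms):
    return 1000.0 / (physics_ms + render_ms)  # stages assumed to run serially

physics_ms = 20.0                         # assumed fixed physics cost per frame
for render_ms in (30.0, 15.0, 5.0, 1.0):  # renderer getting progressively faster
    print(f"render {render_ms:4.1f} ms -> {fps(physics_ms, render_ms):5.1f} fps")

# Even an instant renderer cannot push past 1000/20 = 50 fps here;
# only optimizing the physics step raises that ceiling.
```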

Link to comment
  • 6 months later...

Hey!

 

I'm wondering if there's any insight into what features Vision 2021 will pack? I'm currently running Vision 2020 on the machine specs below and the "real-time" rendering is so heavy on the machine that it makes pre-vizing any show with X4 Bars, JDC1s, or any "real world" quantity of moving lights painful, especially with "real world haze" at 1920 x 1080 (the whole reason I bought Vision in the first place).

 

I'd be happy to share some files with Vectorworks to see if there's something I've been doing wrong to put such strain on the GPU, or do you advise moving away from an RTX-based GPU towards a Quadro RTX card?

 

I'm also keen to know if Vision plans to implement SFX (flames/CO2), lasers, projection/projection mapping, and projection onto gauze-like surfaces in the next release?

 

Lastly, does Vision support GPUs running in SLI?

 

Current system:

CPU - Threadripper 2950X, 16 cores / 32 threads, 3.7GHz
GPU - NVIDIA RTX 2080

SSD - NVMe M.2 drive
RAM - 32GB at 3200MHz

 

Don't get me wrong, I love the software and the ease of working from Vectorworks into Vision; it saves days of work compared to WYSIWYG. It's just so close to being perfect that I'd love some insight!

 

Looking forward to hearing back!

 

Cheers

Aidan

 

Link to comment
  • Vectorworks, Inc Employee

We can't reveal anything about Vision 2021 till it's released...

 

Vision does support SLI. One of our test machines ran two GTX 980 Tis in SLI and we were seeing significant frame rate increases over a single card.

Crossfire should also work but most cards are dropping support for multi-GPU.

 

The biggest gain will be from adjusting the settings in Vision.

If you don't need shadows, turn them off. Shadows are one of the advantages of using Vision over other applications, but they also take a lot of GPU to process.

Even small changes in the settings can make huge performance differences.

If you lower the quality in Vision to match what MA or WYSIWYG offer, you will see frame rates that far exceed those applications.

 

If you post your settings we can give you some hints on what to adjust.

Link to comment
  • 2 weeks later...

I have two GTX 1070s in my machine running in SLI. The SLI bridge is working, but they don't appear to be running in SLI mode for Vision. Do I need to enable anything within Vision to make it utilise SLI?

 

Cheers,

Dan

Edited by DBLD
Reworded question to make it clearer
Link to comment
  • Vectorworks, Inc Employee

We'll have to check internally how this works again as it may have changed since we tried last.

 

When I first ran my SLI test, I started with just one GPU in the machine.

I ran FurMark and wrote down the FPS. I then ran Vision and wrote down the FPS.

I then installed the second GPU and the SLI bridge. IIRC, SLI did have to be enabled on a per application basis inside of the nVidia Control Panel.

I reran FurMark and it clearly was using SLI as the FPS was nearly double. This was also true of Vision.

At the time, I do not remember seeing an SLI rendering mode or a Force alternate frame rendering option.

 

Would you mind posting your nVidia driver version and what version of Windows you are on?

Link to comment
  • Vectorworks, Inc Employee

Vision 2020 does have an FPS counter, but it may be going away soon. If you ensure that the "focus" of the application is in the Scene Graph Dock, you can press the 'o' key to get the FPS to pop up. Note: If you press 'o' twice, you will get very detailed reports. This report is for debugging only and actually hurts performance.

 

FWIW, my recommendation is to never trust any built-in FPS counter. How do you know it is being reported properly? What if there is a small mathematical bug? 😛

Anyway, the reason I bring this up is that I try to ALWAYS use a third-party program to count FPS. Since you are on Windows, I'd recommend FRAPS. There may be other alternatives, but FRAPS has worked well for me on PC in the past 😉
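
If it helps anyone following along, here is a rough sketch of how a third-party FPS log could be summarized once captured. It assumes a plain text/CSV log with one FPS sample per line (FRAPS' benchmark output is roughly that shape, if memory serves; adjust to whatever your tool actually writes, and note the file name below is hypothetical).

```python
# Rough sketch: summarize an FPS log written by a third-party counter.
# Assumes one numeric FPS sample per line; non-numeric lines (headers) are skipped.
from statistics import mean, median

def summarize_fps_log(path):
    samples = []
    with open(path) as f:
        for line in f:
            try:
                samples.append(float(line.strip()))
            except ValueError:
                continue  # skip headers and blank lines
    samples.sort()
    one_percent_low = samples[: max(1, len(samples) // 100)]
    print(f"samples: {len(samples)}")
    print(f"avg: {mean(samples):.1f}  median: {median(samples):.1f}  "
          f"min: {samples[0]:.1f}  1% low avg: {mean(one_percent_low):.1f}")

summarize_fps_log("vision fps.csv")  # hypothetical log file name
```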

Link to comment

I have run some more tests and taken screen shots.

 

Furmark without SLI - 35 FPS

Furmark with SLI - 70 FPS (The green SLI indicator bar can be seen to the left)

 

Vision without SLI (tested with Fraps) - 60 FPS

Vision with SLI (tested with Fraps) - 50 FPS (The SLI indicator bar on the left is showing no SLI utilization. I had to enable force alternate frame rendering on either card 1 or card 2 to get Vision to use both cards, but there was no increase in FPS.)

 

 

without SLI.PNG

with SLI, PhysX auto.PNG

Furmark with SLI.jpg

Furmark without SLI.jpg

  • Love 1
Link to comment
  • Vectorworks, Inc Employee

Thanks for the detailed feedback!

I think we may need to look into some things on our end. But, I did have one thing I wanted you to try / be aware of.

 

I noticed that, based on your framerate, you are likely running with VSync on. VSync gives you the best quality renderings by avoiding things like screen tearing, which is only evident if the GPU updates and monitor updates are not "in sync". Generally, you should ALWAYS have VSync on.

 

However, in this specific case, we are trying to profile and compare performance metrics. In this case, you need to "fully unlock" the Vision renderer by unchecking the "Enable VSync" checkbox in the Application Settings (Note: Vision must be restarted for this setting to take effect). What you should see, after this has been disabled, is framerates that are able to go above (in your case) 60fps.
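
One quick sanity check if you ever want to confirm whether VSync is what is capping you: with VSync on at 60Hz, frame times pile up near multiples of the refresh interval (~16.7ms). A rough sketch, assuming you have per-frame times in milliseconds from an external frame-time log:

```python
# Heuristic: with VSync on at 60 Hz, most frame times land near multiples of ~16.7 ms.
def looks_vsync_capped(frame_times_ms, refresh_hz=60.0, tolerance_ms=1.0):
    interval = 1000.0 / refresh_hz
    near_multiple = 0
    for ft in frame_times_ms:
        remainder = ft % interval
        if min(remainder, interval - remainder) <= tolerance_ms:
            near_multiple += 1
    return near_multiple / len(frame_times_ms) > 0.9  # 90% cutoff is arbitrary

print(looks_vsync_capped([16.6, 16.7, 33.4, 16.8, 16.7]))  # True: looks capped
print(looks_vsync_capped([12.1, 9.8, 14.3, 11.0, 10.2]))   # False: looks unlocked
```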

 

So, I do think SLI may have changed some since we last tested and we need to find a way to run these tests internally while working remotely in these times. And I thank you VERY MUCH for your detailed reporting. Odds are, if you are having this issue, someone else is as well! Hopefully, we can get this figured out 🙂 

 

I'd definitely be interested in seeing your Vision screenshots again with Enable VSync unchecked, but I'm not sure how much it's going to help. FWIW, FurMark has VSync disabled by default as its only purpose is to test performance (quality issues such as screen tearing are not a concern there).

Link to comment

I tested with VSync off and the frame rate is about the same.

 

I was running a resolution of 1920 x 1080 with high texture, shadow and surface quality settings, dynamic shadows only on objects, and VSync off. If I reduce the resolution to 1280 x 960 the frame rate goes up to around 80, but this is a very simple scene - there are only 10 spots. As soon as I introduce 8 x Quantum Washes the frame rate drops to around 14 FPS at 1280 x 960; at 1920 x 1080 with all quality settings on very low, FPS is around 8.

 

I was really hoping that introducing a second graphics card would improve this, as this is still quite a small rig - once I start visualising a festival rig with multi-element heads such as X4 Bars and JDC-1s, things get really bogged down, and programming a music show at this frame rate isn't ideal.

 

with SLI, V-Sync off in Vision.PNG

Link to comment
  • Vectorworks, Inc Employee

It is my understanding that most applications do not need to concern themselves with whether there are one or two graphics cards. This was evidenced by our tests back with Vision 2019, although I understand SLI and the way it works may have changed in the last few years. So, I am hopeful that we can get Vision to leverage both of your graphics cards, as this worked for us (without code changes) in the past. This is similar to how software need not concern itself with whether the hard drive in your system is a spinning disk or a solid state drive; the OS handles that for the application. Monitors are another good example of the OS handling things "for the application".

 

I'm sorry that I don't have more information for you right now. I will work with our internal employees on trying to get a machine set up with 2 GPUs so we can figure out what is going on.

 

Would you mind posting things like:

- The full version of Windows you are running

- The full version of the nVidia driver

- The full version of the nVidia software

- The full version of Vision

 

Hopefully, with this information, we can try to reproduce your exact setup 😉

 

Edit: Sorry, was going back through the post and saw this, "nVidia driver version 452.06 and am on Windows 10". The other information would still be helpful 😄

 

Edit2: So, after some further research, SLI has changed quite a bit over the last few years. Some newer cards do not support SLI and some cards support SLI but only when the application is designed to handle it. From what I can tell, a GTX 1070 should support "auto-sli" without the application needing to handle it. But, we are still investigating and working on shuffling equipment around the company so we can run in-house tests.

Link to comment
  • 3 weeks later...

Sorry for the very slow reply...

 

I am on:

 

Windows 10 Pro, build 19041.508

nVidia driver 456.38

nVidia Control Panel 8.1.940.0

Vision 25.0.5.562108

 

 

I had a major breakthrough in terms of frame rate today. In Vision preferences I changed haze quality from 1 to 0, and although it doesn't look quite as good, the frame rate went from 5 fps to 30 fps on very high texture, shadow and surface light quality settings at 1080p resolution.

 

  • Love 1
Link to comment
  • Vectorworks, Inc Employee

As we've looked into it more and more, it seems SLI is going to be a thing of the past for nVidia. They are dropping support on almost all of their newer cards and the ones that do support it require code to handle it (whereas before the card handled SLI for the code).

 

I have not yet heard back from our technology department on whether or not we have equipment to test. The last test we performed in house was on 2x1080Ti's, but one has died since.

 

So, while I still cannot help much in regards to SLI, based on your post above you may find this helpful.

 

Here is a list (in rough order) of application/document settings that impact performance in a significant way in 2021 (a sketch for organizing a settings sweep follows the list):

  1. Enable Shadows (shut this off for a big leap in performance)
     • If you MUST have shadows on, consider turning them off globally by unchecking Enable Shadows and turning them on at a per mesh/layer level.
  2. Haze Texture Intensity; not to be confused with Haze Intensity (set this to 0% to disable 4D haze and get a big leap in performance)
  3. Haze Style (HQ 4D is usually best, but make sure you're at half resolution and not full)
  4. Resolution Quality (especially when you are fullscreening Vision, setting this to something reasonable like 720p is ideal; the lowest value you can put up with is usually the sweet spot; I run 768x576)
  5. Haze Quality (the lower the value you can stand, the better; I run 0.01 or 1%)
  6. High Precision (I recommend turning this OFF)
  7. Render Fixtures (I recommend setting this to OFF or Black)
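
If anyone wants to benchmark these settings systematically rather than flipping switches ad hoc, here is a small sketch that prints a manual test matrix. The setting names and values simply mirror the list above; nothing here talks to Vision or any scripting API, it only generates a checklist to work through by hand while writing down the FPS for each row.

```python
# Prints a manual test matrix for the settings above. Nothing here talks to Vision.
from itertools import product

settings = {
    "Enable Shadows":         ["off", "on"],
    "Haze Texture Intensity": ["0%", "50%"],
    "Resolution Quality":     ["720p", "1080p"],
    "Haze Quality":           ["1%", "50%"],
}

names = list(settings)
print("  ".join(names) + "  FPS")
for combo in product(*settings.values()):
    row = "  ".join(f"{value:^{len(name)}}" for name, value in zip(names, combo))
    print(row + "  ____")
```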
Link to comment
  • 3 months later...
  • Vectorworks, Inc Employee

Yes. Vision 2021 has moved to OpenGL 4.1 and we weren't able to get the text overlays working properly. We also found some bugs in the reporting, which I believe have been mostly corrected (but since the reporting is no longer public, this information is internal only).

 

We had a very rough POC for putting FPS into the status bar (near the NDI status bar indicator). This is much simpler to achieve than a text overlay in the viewport, but does come with the downside that it is a single FPS counter for the entire program (and not an FPS counter per viewport). There are obvious ways we could remedy this (the most obvious being adding an FPS counter into the statusbar for each visible viewport, with some identifier as to which counter refers to which viewport).
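
For what it's worth, a per-viewport counter does not have to be expensive; a smoothed FPS value keyed by viewport is enough to feed a status bar. This is an illustrative sketch only (Python rather than Vision's actual C++), just to show the idea:

```python
# Illustrative only - not Vision source. A smoothed FPS value per viewport,
# cheap enough to refresh a status bar every frame.
import time

class ViewportFps:
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.last_frame = {}  # viewport id -> timestamp of its previous frame
        self.fps = {}         # viewport id -> exponentially smoothed FPS

    def frame_presented(self, viewport_id):
        now = time.perf_counter()
        prev = self.last_frame.get(viewport_id)
        self.last_frame[viewport_id] = now
        if prev is None:
            return
        instant = 1.0 / (now - prev)
        old = self.fps.get(viewport_id, instant)
        self.fps[viewport_id] = self.smoothing * old + (1 - self.smoothing) * instant

    def status_text(self):
        return "  ".join(f"VP{vp}: {f:.0f} fps" for vp, f in sorted(self.fps.items()))
```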

 

It would definitely be interesting to hear from users here on the forums if they preferred the text overlay on the viewport, or if a status bar counter would be sufficient.

 

Alternatively, and the current suggestion/workaround, use a third party application for detecting FPS of Vision.

Link to comment
On 1/22/2021 at 1:14 PM, bbudzon said:

Yes. Vision 2021 has moved to OpenGL 4.1 and we weren't able to get the text overlays working properly. We also found some bugs in the reporting, which I believe have been mostly corrected (but since the reporting is no longer public, this information is internal only).

 

We had a very rough POC for putting FPS into the status bar (near the NDI status bar indicator). This is much simpler to achieve than a text overlay in the viewport, but does come with the downside that it is a single FPS counter for the entire program (and not an FPS counter per viewport). There are obvious ways we could remedy this (the most obvious being adding an FPS counter into the statusbar for each visible viewport, with some identifier as to which counter refers to which viewport).

 

It would definitely be interesting to hear from users here on the forums if they preferred the text overlay on the viewport, or if a status bar counter would be sufficient.

 

Alternatively, and the current suggestion/workaround, use a third party application for detecting FPS of Vision.

 

Status Bar!

  • Like 1
Link to comment
