PVA - Jim

Apple Special Event


33 minutes ago, Mark Aceto said:

Thanks, Jim. No, not sections; just plan, elevation, iso.

 

I think for us Mac users, we're looking at which one of the "greatest hits" will get us the most bang for our buck (and which features are a waste of money). In other words, how does a 2012 Mac Pro 12-core with a 1080 Ti stack up against a 2017 iMac Pro 10-core with a Vega (exactly the same; I've owned both) vs. a maxed-out iMac vs. a MBP vs. a maxed-out Mini?


Gotcha, thanks for clarifying. This is very hard, as Vectorworks is now changing dramatically in terms of which hardware affects what from version to version. For instance, my answers to the above for 2017 would be completely different for 2019. One machine may stack up more favorably in 2019 but not so obviously in 2017, or vice versa, but this is just because I'm one guy testing things in my little mad-scientist lab of an office. As Vectorworks' guts are modernized and as we get our hands on more hardware, we will have better and clearer answers for you on the critical day-to-day performance of certain hardware configs.

 

  • Like 1

44 minutes ago, Jim Wilson said:

Yes, that's the geometry phase choking and not letting your computer flex its full muscle; that's normally the fault of Vectorworks' older code we haven't yet replaced.

 

That's the part I'm confused about. My computer is flexing its full muscle (one muscle; one core) until it hits a ceiling and freezes the app, so I don't understand how a 2 GHz muscle will have the same performance as a 4 GHz muscle. I want as much headroom as possible before I hit that ceiling.

 


It is indeed confusing. Some of the older portions of Vectorworks are capable of completely utilizing a core... and doing hardly anything with that usage. However, you're right; I'll do a more modern test (I haven't done a practical one since the Intel Core 2 Duo days, now that I think about it) and confirm how much of a factor clock speed really is, both inside and outside of geometry.

  • Like 1

24 minutes ago, Jim Wilson said:


Gotcha, thanks for clarifying. This is very hard, as Vectorworks is now changing dramatically in terms of which hardware affects what from version to version. For instance, my answers to the above for 2017 would be completely different for 2019. One machine may stack up more favorably in 2019 but not so obviously in 2017, or vice versa, but this is just because I'm one guy testing things in my little mad-scientist lab of an office. As Vectorworks' guts are modernized and as we get our hands on more hardware, we will have better and clearer answers for you on the critical day-to-day performance of certain hardware configs.

 

This is part of the struggle I've been having. I need to buy hardware that has a three-plus-year lifespan. My main piece of software is VW. All of my work is in 3D. I do a few "renderings", but the majority of what I do is creating design/construction drawings from the model using sheet layer viewports (plan/section/elevation/perspective/ISO). Almost every viewport is dual rendered (Renderworks or OpenGL background plus Hidden Line foreground) to create mostly greyscale drawings. I have spent a lot of time trying to figure out what to do hardware-wise. Thankfully my old 15" Retina MacBook Pro was/is very capable and reliable, and at this point has paid for itself many times over.

 

The irony, and my main frustration with all of this, is that NV has not made improving VW on existing hardware a priority on the Mac side. It's clear that embracing Metal and removing the single-core barriers would extend the life of every Mac user's hardware. The fact that both of these things are still, after all this time, at least two years away is frustrating.

 

Kevin

 

1 minute ago, Kevin McAllister said:

The irony, and my main frustration with all of this, is that NV has not made improving VW on existing hardware a priority on the Mac side. It's clear that embracing Metal and removing the single-core barriers would extend the life of every Mac user's hardware. The fact that both of these things are still, after all this time, at least two years away is frustrating.

The fact that it is still two years away, and not longer, is precisely because it was given top priority. No matter how high on the list we push it, this was always going to be a multi-year project. There is only one team with the know-how to work on it at any given time, and only a few chunks it can be broken into for individual engineers to work on. We can't go the route a lot of game development companies go and hire a huge warehouse of contract engineers to plug away at it for six months and then be done. I proposed this very thing, and it is simply not possible because of the age, size, complexity, and organization of our code base. It didn't come down to cost of investment, or we would have spent the cash happily.

  • Like 3

1 hour ago, Jim Wilson said:

The fact that it is still two years away, and not longer, is precisely because it was given top priority. No matter how high on the list we push it, this was always going to be a multi-year project. There is only one team with the know-how to work on it at any given time, and only a few chunks it can be broken into for individual engineers to work on. We can't go the route a lot of game development companies go and hire a huge warehouse of contract engineers to plug away at it for six months and then be done. I proposed this very thing, and it is simply not possible because of the age, size, complexity, and organization of our code base. It didn't come down to cost of investment, or we would have spent the cash happily.

 

I agree with @Andy Broomell; your directness and transparency, whenever possible, are always appreciated.

 

Kevin

 

  • Like 4

On 11/13/2018 at 12:08 PM, Mark Aceto said:

That's the part I'm confused about. My computer is flexing its full muscle (one muscle; one core) until it hits a ceiling and freezes the app, so I don't understand how a 2 GHz muscle will have the same performance as a 4 GHz muscle. I want as much headroom as possible before I hit that ceiling.

 

On 11/13/2018 at 12:10 PM, Jim Wilson said:

It is indeed confusing. Some of the older portions of Vectorworks are capable of completely utilizing a core... and doing hardly anything with that usage. However, you're right; I'll do a more modern test (I haven't done a practical one since the Intel Core 2 Duo days, now that I think about it) and confirm how much of a factor clock speed really is, both inside and outside of geometry.

 

Screenshot attached of the single-core CPU constraint that I described. Whether rotating a complex DWG import, importing/exporting a 3DM file, rendering a hidden line SLVP, publishing a drawing set, or countless other operations... this is the scenario that stops me in my tracks.

 

2018 was far worse, so I'm grateful for the improvements in 2019, but this still happens constantly. In fact, there are some single-core operations that I can't complete on my 2014 MacBook Pro that I can complete on my 2017 iMac Pro, so I'd like to understand why a faster single-core clock speed would not help me.

 

Troubleshooting Process of Elimination

  • As you can see in the screenshot, the 16 GB AMD GPU is barely being tapped at all (even with a second 4K display), so that's not a factor
  • 64 GB RAM is overkill, so that's not a factor
  • 10 cores certainly aren't holding me back given VW's 4-core limit, so there's only one factor that freezes the app: single-core clock speed*
    • *Well, that and the fact that Apple throttles the CPU before turning on the fans, so I would love to know if there's an unofficial workaround to hit 4.5 GHz (I've never caught it breaking 4.3 GHz)

Assessment

  • Adding RAM would not help (in fact, I could reduce to 32 GB without lowering any performance)
  • Upgrading the GPU would not help (in fact, I could reduce to 8 GB without lowering any performance)
  • Adding more cores would not help (in fact, reducing the number of cores to increase single-core clock speed would be an upgrade, with the exception of CPU-based rendering, which is fast becoming obsolete)
  • Unless I'm missing something, the only other variable left to prevent VW from freezing is to bump the single-core clock speed

Screen Shot 2018-11-18 at 8.44.05 PM (2).png

8 hours ago, Mark Aceto said:
  • Unless I'm missing something, the only other variable left to prevent VW from freezing is to bump the single-core clock speed

 


The issue is that there is NO variable that can be changed at the present time to prevent Vectorworks from freezing in these scenarios. I will test CPU clock speed to see if that can help mitigate it, but it cannot be avoided entirely until we replace more of its older systems.

2 hours ago, Jim Wilson said:


The issue is that there is NO variable that can be changed at the present time to prevent Vectorworks from freezing in these scenarios. I will test CPU clock speed to see if that can help mitigate it, but it cannot be avoided entirely until we replace more of its older systems.

When I read things like the above, then experience the decidedly un-smooth release of VW 2019 (it's still not reliable enough for me, even after SP1.1), and imagine all the engineers being thrown at those issues instead of fixing the really big issue of legacy code, I get sadface. Someone is pointing the ship at the wrong landmarks.

  • Like 1

6 hours ago, Jim Wilson said:

The issue is that there is NO variable that can be changed at the present time to prevent Vectorworks from freezing in these scenarios. I will test CPU clock speed to see if that can help mitigate it, but it cannot be avoided entirely until we replace more of its older systems.

 

Something is getting lost in translation... While I appreciate your transparency and completely understand what you're saying about the code, I'm asking about hardware. When my system is reporting a single core choking at 4.26 GHz, regardless of what app I'm using, my interpretation of that data is to increase the clock speed. I don't expect a faster chip to solve the problem, but I do hope that it will help with fewer freezes (more headroom).

 

The reason I spec'd this iMac Pro with overkill RAM and VRAM was that I wanted to be certain, without a shadow of doubt or speculation, what hardware is constraining VW. For example, when my 2014 MacBook Pro runs out of memory (16 GB), I'm left to wonder: will 32 GB be enough? But with 64 GB RAM and 16 GB VRAM, the only hardware constraint I've observed is single-core performance.

 

So, the question is: with the current code of VW 2019 as it stands, what is the CPU that will constrain VW the least? Based on what I'm seeing on a regular basis in the screenshot I provided, my guess is the i7 4-core with a 4.2 GHz base clock speed that turbos to a theoretical 5.0 GHz.

 

Edited by Mark Aceto

2 hours ago, mjm said:

When I read things like the above, then experience the decidedly un-smooth release of VW 2019 (it's still not reliable enough for me, even after SP1.1), and imagine all the engineers being thrown at those issues instead of fixing the really big issue of legacy code, I get sadface. Someone is pointing the ship at the wrong landmarks.


I've talked about this a few times, but the core issue (oversimplified) is this:
We cannot throw more than a handful of engineers at the legacy codebase problem at a time, mainly because of how old the code base is. Modern programming languages and software architectures allow for this, but the items we have left over from decades of history do not. So when you see shiny new things pop up, it's because that is what we can assign new engineers to, creating new tools that add functionality.

If it were simply a choice between doing only functionality fixes vs. only new features, we would choose fixes, but unfortunately it is not that simple. We still need MANY fixes, updates, and enhancements, and just as we are far and away beyond 2014 and older versions in terms of possible document complexity, speed of use, and what hardware can be used at all, a few versions from now we will be equally far from where we are today. We still have very large leaps in core functionality actively in the pipe.

  • Like 1

2 hours ago, Mark Aceto said:

So, the question is: with the current code of VW 2019 as it stands, what is the CPU that will constrain VW the least?

As soon as I can get a legitimate test together, I will post the results directly.

  • Like 1

11 minutes ago, Jim Wilson said:


I've talked about this a few times, but the core issue (oversimplified) is this:
We cannot throw more than a handful of engineers at the legacy codebase problem at a time, mainly because of how old the code base is. Modern programming languages and software architectures allow for this, but the items we have left over from decades of history do not. So when you see shiny new things pop up, it's because that is what we can assign new engineers to, creating new tools that add functionality.

If it were simply a choice between doing only functionality fixes vs. only new features, we would choose fixes, but unfortunately it is not that simple. We still need MANY fixes, updates, and enhancements, and just as we are far and away beyond 2014 and older versions in terms of possible document complexity, speed of use, and what hardware can be used at all, a few versions from now we will be equally far from where we are today. We still have very large leaps in core functionality actively in the pipe.

Whether or not I have said this before, let me say it here now: I might very well have been long gone from this CAD platform if Jim Wilson hadn't shown up in the nick of time to be the face of VW I prefer to see. Thanks, man. But if things (tools/documents) continue to change/shift/grow in complexity, does that not suggest that solving the legacy code problem is the highest imperative? And why add complexity when the codebase cannot truly support it?

(Thanks Jim for your patience, a great asset)

 

Edited by mjm
  • Like 4

3 hours ago, Jim Wilson said:


There is nothing I like more than facilitating work and disseminating knowledge. I am very glad to hear I can help.

 

3 hours ago, Jim Wilson said:


This is an exactly correct assumption. I'll give you the super-oversimplified summary of what's going on behind the scenes:

Vectorworks is built in two programming languages: C++ (an older language but still a very common one) and Vectorscript, which is based on Pascal, a language declining rapidly in popularity in older applications and nearly unheard of in newer ones. All NEW tools or commands we add to Vectorworks are written either in C++ or added on the side, like Cloud Services, a service that effectively gives Vectorworks more abilities without being completely PART of the software itself. So when we add newer things or semi-external services, THOSE aspects of the software package CAN be worked on concurrently and have resources poured into them for a proportionate boost in development speed. The older stuff is effectively carved in stone and has to be transcribed manually.

We can't "port" or directly translate things that were made in Vectorscript to C++; an engineer needs to sit down, do a full study on the tool or command being replaced, and completely rebuild it from scratch in the newer language. It isn't so much like dropping a newer engine into an older car as it is showing a modern smartphone manufacturer a picture of a horse and buggy, along with a list of things that horse and buggy can do, and telling them to create a device that covers all those bases. For some items, like "Rectangle", this is simple and straightforward, but for complex things like our Site Model tool, the list of "things it does" is incredibly long, its interoperability with other features is immensely complex and sometimes subjective, and the coding methods needed in C++ to accomplish the same tasks as Vectorscript are often very odd or require a complete re-engineering.

The re-engineering, effectively repeating decades of work, is always, ALWAYS worth it in the end, so we keep it up, but there are many limits on how fast it can be accelerated. None of you in this thread are wrong at all, and many of the assumptions made above are exactly the conclusions I'd have come to as well if I didn't have the good fortune of being able to chat with the engineers who work on it daily. =====snipped=====

 

This is exactly what I was looking for. As always, thank you.




 

7150 Riverwood Drive, Columbia, Maryland 21046, USA   |   Contact Us:   410-290-5114

 

© 2018 Vectorworks, Inc. All Rights Reserved. Vectorworks, Inc. is part of the Nemetschek Group.
