
Vectorworks Performance Benchmarking



  • Vectorworks, Inc Employee

This came up recently in another thread, but I think it merits its own discussion:

20 hours ago, P Retondo said:

Jim, VW needs to deal head on with these speed and efficiency perceptions / reality (?) by instituting performance testing and releasing the data.  When I buy a processor I look at all the available data, and it is both voluminous and convincing.  CAD programs need to do the same thing - if for no other reason than to let their engineers know whether they are doing a good job.  When I make the time-consuming commitment to convert my files and resources to a new version, I want to know if my performance is going to be at least equal to the previous version.  That's just a simple business decision, and I don't base those on sales department press releases.

 

I want to do this. Some of our distributors have directly asked us for something similar. The key difficulty I'm running into is that I have yet to find a way to even come close to defining what a "standard" file is. It can't be defined by number of objects, since objects have a broad range of complexity and different types of objects affect performance in different ways. It can't be defined by file size, since two files containing the same geometry can differ dramatically in size depending on how cleanly the resources in each file were managed.

For instance, a common issue we run into is a report of something like "Sheet Layers Are Slow", which can have a bunch of different causes. The one I see most often is a user working on only a single sheet layer and putting dozens or sometimes even hundreds of instances of a Title Block object on it, which brings things to a crawl. Title Blocks are optimized so that they are only recalculated and loaded when a sheet is viewed, or right before it's printed or exported. If ALL title blocks are on a single sheet layer, this optimization becomes worthless, and you have to wait for all of them to update in order to continue working once you switch to that single sheet layer.
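Roughly speaking, the deferral described above can be pictured like this; it is an illustrative sketch only, with invented class and method names, not actual Vectorworks internals:

```python
# Illustration only (invented names, not Vectorworks code): why deferring
# title-block recalculation to "when the sheet is viewed" stops helping
# once every title block sits on a single sheet layer.
import time

class TitleBlock:
    def recalculate(self):
        time.sleep(0.01)  # stand-in for the expensive recalculation

class SheetLayer:
    def __init__(self, name, title_blocks):
        self.name = name
        self.title_blocks = title_blocks
        self.dirty = True  # recalculation deferred until the sheet is viewed

    def activate(self):
        # Cost is paid only when you switch to (or print/export) this sheet.
        if self.dirty:
            for tb in self.title_blocks:
                tb.recalculate()
            self.dirty = False

# 50 sheets with one title block each: each switch costs roughly 0.01 s.
spread_out = [SheetLayer(f"Sheet {i}", [TitleBlock()]) for i in range(50)]

# One sheet holding all 50 title blocks: the first switch to it pays the
# whole ~0.5 s at once, which is the "crawl" described above.
all_on_one = SheetLayer("Everything", [TitleBlock() for _ in range(50)])
```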

This means that in order to define performance, I would also have to effectively dictate workflow, something we have not done before in most cases. The general rule in Vectorworks has long been "You can do almost everything 5 different ways; the correct one is the one that works best for you," and I really do love that. But making standardized performance indicators for Vectorworks runs exactly counter to this mindset in every way I've been able to come up with.

SOME parts of Vectorworks are easy to benchmark, such as rendering speed, which is why you have all likely seen the rendering benchmarks I have posted. Those are much more cut and dried: I can test the exact same scene across different hardware, and the resulting rendering time relates directly to performance.

Things like duplicating arrays of objects, doing complex geometrical calculations, etc., do not produce times that vary directly with hardware performance, since a lot of the slowness in those operations is currently a Vectorworks software limitation and not the fault of your hardware. Until these processes are moved to multiple threads, I don't think they will be benchmark-able in a meaningful way. (To clarify, I TRIED to benchmark them in a meaningful way, and got more variance in completion time based on what other applications were open than on what hardware I used.)
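For anyone curious what "trying to benchmark it in a meaningful way" might look like, here is a minimal, generic timing harness in plain Python; it is not tied to any Vectorworks API, and the workload function is a made-up stand-in. Reporting the spread alongside the median is what exposes the background-application noise described above:

```python
# Generic timing sketch, not a Vectorworks API; `duplicate_array_of_cubes`
# is a hypothetical placeholder for whatever operation is being measured.
import statistics
import time

def benchmark(operation, trials=10):
    """Run `operation` repeatedly and report median plus spread."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        operation()
        times.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(times),
        "stdev_s": statistics.stdev(times),
        "min_s": min(times),
        "max_s": max(times),
    }

def duplicate_array_of_cubes():
    # Placeholder workload; in a real test this would drive the CAD operation.
    sum(i * i for i in range(500_000))

if __name__ == "__main__":
    print(benchmark(duplicate_array_of_cubes))
    # A wide min/max gap relative to the median suggests the result is being
    # dominated by background load rather than by the hardware under test.
```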


I would very much like to hear any suggestions or feedback if anyone can see an avenue to approach this that I have missed. I miss plenty. I will not be working on this for some time, but I wanted to go ahead and open this discussion and take responses while it was fresh in my mind. I would also like to hear the KINDS of performance indicators you are all interested in, so that I can ponder how best to provide them in a technically simple but accurate manner.
 


Thanks for the good post!

I like the "5 different ways" rule you are describing. Still, it would be good to have some recommendations for what can speed things up and what can slow VW down, like the title block example you mention. We have also learned that symbols are a good thing, both for making the file smoother and because they come in handy when you later need to change or add info to the drawing. Some knowledge base articles discussing workflows and speed issues could be valuable in reducing user errors.

  • Like 2

Jim, thanks for the thoughtful response.  Obviously, the specific dysfunction of certain files and workflows is not going to be a good subject for benchmarking.  But that doesn't mean benchmarking is useless.  On the contrary, having some set of reasonable tests for various aspects of VW (rendering, certain 2d operations, certain sheet layer tasks) would be enormously helpful, so that we can compare speeds - let's say, going back to VW15 - and get an idea of what we are in for if we upgrade.  The other issue you mention is that this would have to be done on different machines - let's say three or four generic representatives of the kinds of machines your users tend to have.

 

It's not an end-all-be-all, but it would be head and shoulders above the information desert we currently face.  I tend to think that almost any simplifying decision about files, testing categories, and machine types made by you and your team would be enthusiastically welcomed.

1 hour ago, Jim Wilson said:

The general rule in Vectorworks has long been "You can do almost everything 5 different ways, the correct one is the one that works best for you." and I really do love that. But making standardized performance indicators for Vectorworks runs exactly in opposition to this mindset in every way I've been able to come up with

 

 

Going off-topic a bit perhaps... but I wonder to what extent this approach is sustainable if we are trying to move towards BIM and generally working much more in 3d.

 

Don't get me wrong, I'm quite keen on the principle of not being constrained to one way of doing things, but compared to doing 2d linework (which is what VW excelled at for some years), constructing a workable 3d model and generating drawings from it is a pretty complicated process. There's not very much guidance from VW on exactly the 'proper' way to set everything up, and while that gives us quite a lot of flexibility on the one hand, on the other hand I wonder if it just creates an excuse for VW's designers not to actually come up with a robust way of building and maintaining a 3d model. As a user it's often unclear quite how the designers imagine us using certain things, and therefore hard to judge how best to use them.

 

It also worries me a bit that my drawing setup 'system', which kind-of works for me, not only might be hard for someone else to get a handle on should I need to package out work to others (I'm currently a sole practitioner), but also that it is diverging from how VW is intended to be used, or how most other people are using it, so I might just end up in some sort of dead end when changes are made in the future that make my methods unworkable.

 

It seems that to some extent VW is already bogged down in layers of things retained to keep 'legacy' workflows working, and that this prevents streamlining and general overhauls.

 

The difficulty in benchmarking performance is perhaps a symptom/indicator of all this. If it's not possible to create a 'typical' file or a series of 'typical' files then how on earth do those writing the software make decisions about what direction to take its development or what to optimise for?

 

I hate software that boxes you in to the extent that, say, you simply can't design a certain thing because there's not a way of drawing it. At the same time I wonder if VW needs a bit more 'direction'.

  • Like 3
  • Vectorworks, Inc Employee
10 minutes ago, line-weight said:

I wonder to what extent this approach is sustainable if we are trying to move towards BIM and generally working much more in 3d. Don't get me wrong, I'm quite keen on the principle of not being constrained to one way of doing things, but compared to doing 2d linework (which is what VW excelled in for some years), constructing a workable 3d model and generating drawings from it is a pretty complicated process.


This very question keeps many of us up at night. It's a big one. We also cater to many different industries that use varying levels of information in their models (for instance, the entertainment folk have effectively been doing their version of BIM for quite a while, and certain aspects of Site Design have been data-driven since I started here), and the right answer for one is not always the right answer for all of them.

I am very glad, for once, that the method by which we handle this is above my pay grade; it's an industry-wide, big-ticket question in a lot of ways.

  • Like 1

For what it is worth, I'm a strong advocate of NOT being constrained by a software engineer's vision of what I should be designing.  Tools that guide the process, or make certain results more efficient to achieve, have the potential to seriously crimp creative freedom.

 

BTW, yes, a serious distraction from the topic at hand, which is benchmarking!

 

On that topic: the tests that occur to me as being key are 1) OpenGL 3d navigation, 2) Final Renderworks, 3) operations in sheet layer viewports (which have been subject to slow performance in the past), 4) site model updates, 5) undoing certain operations (such as converting polygons to lines), 6) applying linetypes (much slower now than it used to be), 7) populating the resource palette, 8) booting.
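Purely as an illustration of how a suite along those lines might be organised (the workloads below are placeholders, not real Vectorworks operations), each named entry would produce one comparable number per release:

```python
# Sketch only: placeholder workloads standing in for the eight test
# categories listed above, each timed to give one number per release.
import time
from typing import Callable, Dict

def _placeholder_workload() -> None:
    time.sleep(0.05)  # stand-in for the real operation

TESTS: Dict[str, Callable[[], None]] = {
    "opengl_3d_navigation": _placeholder_workload,
    "final_renderworks": _placeholder_workload,
    "sheet_layer_viewport_ops": _placeholder_workload,
    "site_model_update": _placeholder_workload,
    "undo_convert_polys_to_lines": _placeholder_workload,
    "apply_linetypes": _placeholder_workload,
    "populate_resource_palette": _placeholder_workload,
    "application_boot": _placeholder_workload,
}

def run_suite(tests: Dict[str, Callable[[], None]]) -> Dict[str, float]:
    results = {}
    for name, workload in tests.items():
        start = time.perf_counter()
        workload()
        results[name] = time.perf_counter() - start
    return results

if __name__ == "__main__":
    for name, seconds in run_suite(TESTS).items():
        print(f"{name:32s} {seconds:.3f} s")
```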

Edited by P Retondo

The single biggest (internal) benchmark is very simple: does this release make my client's workflow easier and quicker, and how? In addition, recognise that your clients have files with lengthy tenures (often 2 yrs plus) and that the introduction of advancements needs to take that into account, because these files have been and will be worked on. As a 3D designer, the biggest impact in the last 3 years IMHO has been the clip cube and its associated ability to see the insides of a building and generate sections and elevations. Brilliant.

So, for example (there are many, but this is a good one): the slab tool now renders the "floor tool" a legacy tool. I can see many benefits in the slab tool and use it when we can, in 3d. But we have been using the floor tool for years all over our files for many things, including floors. TBH the slab tool is nowhere near as easily manipulable as the floor tool in 2/3D (see forum posts), so this makes it very frustrating and a constant concern in our ongoing workflow. Often it's easier just to start again. An example of a much wider issue: we need easily transferable AND better tools. The benchmark should be to make our workflow easier and quicker, not more complicated! HTH

  • Like 2

Jim: I would offer that, at least for architects, two benchmarks might be to measure, say, a typical residential project as well as a multi-story commercial building using NNA templates, complete with NNA standard PIOs etc. - two projects that might be typical (solicit users? have a competition?) that can then be used to measure versions as time goes on.

 

Having used this since MiniCAD 7, and especially of late: new releases suck, then hit their stride somewhere after SP3. To be honest, the quality control has been, shall we say, waaaay less than great.  With every release, with every file, something corrupts or drags the file to a crawl as it's developed. In most cases, by the time I get to the end of CDs on a project, there's serious molasses. I cannot be the only one who sees this (judging by what one sees on the boards).  Over the years (but not always), I've learned how to figure out where the program bogs down or blows up.

 

While we're at it, for architects, I'm puzzled why the resource browser is such a mess. There are European, Asian, and US standards for product and document specifications, and unlike other programs, that organization seems to be missing. There's no logic in the file structure, and a lot of resources come off as half-baked. I'm suspicious that legacy objects (+PIOs) buried in current content are not policed. As I add in, say, sinks, refrigerators, furniture, beams, storefront - you name it - they are not cleaned up. In sum, for my two cents, there is a serious lack of rigor when it comes to resources and file structure. From a user's perspective, yeah, it's tied to speed. I'm really tired of waiting for the software all the time, and I can only imagine there are lots of seasoned users who feel the same.  I'm hoping NNA will get more aggressive about dumping legacy stuff (like 5 yrs old, not 2-3 yrs as David S noted) and clean out the program with the goal of making it run faster and with more stability. I can only imagine the code must get stupidly cumbersome lugging forward deadwood.

  • Like 3

I'm unsure what the jump-cut from the previous thread's question to this thread's answer is intended to convey, Jim?

 

The question asked in the original thread (ever-decreasing performance of Vectorworks) was about "slowness", the progressive deterioration of productivity from version to version and, obviously, the frustrated user experience, which got summed up as, "what's wrong with Vectorworks?"  

 

To "clarify" Jim, is this statement intended to address the "user experience" that was raised in the previous thread?  

 

19 hours ago, Jim Wilson said:

Things like duplicating arrays of objects, doing complex geometrical calculations etc, do not result in times that vary directly based on hardware performance, since a lot of the slowness in those operations is currently a Vectorworks software limitation and not the fault of your hardware. Until these processes are moved to multiple threads I don't think they will be benchmark-able in a meaningful way. (To clarify, I TRIED to benchmark them in a meaningful way, and got more variance in the completion time based on what other applications were open more so than what hardware I used. )

 

If so, I don't see how that's not already meaningful? Also, I have to say, the logic of deliberately avoiding benchmarks that include the apparently problematic operations and calculations etc., until they're no longer problematic, is truly inspired. Is that a standard operating procedure (S.O.P.) at N.A.?

 

Anyway, let's agree, for convenience sake, that as you say, these operations and calculations etc. are not "benchmark-able in a meaningful way, since a lot of the slowness in those operations is currently a Vectorworks software limitation" which is exacerbated by "what other applications [are] open" "and not the fault of your hardware" .  .  .  hmm, I think I've forgotten my point,  let's just agree that you've already summed it up perfectly! 

 

Edited by M5d

Over on the McNeel Discourse forum there is a user (named Holo) who, a few years back, created a small plugin to test Rhino performance and benchmark things. He named the plugin 'Holomark'. It has proved massively popular with end users and the Rhino devs alike. Below are a couple of links and a screenshot of my recent 'Holomark Test', which shows the results broken down with a score too. I have no idea whether something like this is possible or useful for VW  ☺️

https://discourse.mcneel.com/t/holomark-2-released/8040

https://www.food4rhino.com/app/holomark-2

 

 

 

[Screenshot attachment: Holomark V6 Milezee.PNG]
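I don't know Holomark's actual scoring formula, but the general idea of turning a set of sub-test timings into a single comparable score can be sketched like this (the reference times and test names are invented for illustration):

```python
# Sketch of a composite score: normalise each sub-test time against a
# reference machine and average the ratios. Not Holomark's real formula.
REFERENCE_TIMES_S = {"mesh_boolean": 4.0, "viewport_redraw": 2.0, "extrude_array": 3.0}

def composite_score(measured_times_s: dict, scale: float = 1000.0) -> float:
    # Faster than the reference machine => ratio > 1 => higher score.
    ratios = [REFERENCE_TIMES_S[name] / t for name, t in measured_times_s.items()]
    return scale * sum(ratios) / len(ratios)

# Example with made-up measurements:
print(composite_score({"mesh_boolean": 3.2, "viewport_redraw": 2.5, "extrude_array": 2.9}))
```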

  • Vectorworks, Inc Employee
On 1/22/2019 at 2:24 PM, David S said:

The single biggest (internal) benchmark is very simple: does this release make my client's workflow easier and quicker, and how?

Unfortunately, that's the exact opposite of a benchmark. It is the subjective experience one has when using the software, since most of our users' workflows are not standardized and comparable to most others'. It is the MOST important metric in my opinion as well, but it is so personalized that it has to be judged in a much more human way than something like objectively faster rendering speeds and load times.

 

On 1/22/2019 at 9:00 PM, jnr said:

I would offer that at least for architects,  two benchmarks might be to measure say a typical residential project as well as a multi-story commercial building using NNA templates, complete with NNA standard PIO's etc. two projects that might be typical (solicit users? have a competition?) that can then be used to measure versions as time goes on. 

This is the kind of problem I'm getting at: if we start dictating what a typical residential or commercial project is, sure, that's possible. The fact that we DON'T dictate what is typical and what is out-of-spec or beyond design intent is the key issue. I can pop out files that *I* consider standard, sure, but a lot of the time the feedback becomes "Well, obviously you would use Floors and never Slabs" or "No one uses tool X, everyone uses Y," followed shortly by "That's preposterous, Y is used by everyone I know and their dog; X is unheard of and used only in Liechtenstein." This is of course an overdramatic comparison, but I am still finishing the first cup of coffee. The key thing I want to convey is: I WANT to provide metrics and benchmarks and version comparisons; it is merely a matter of deciding the proper way to do so, one that can be conveyed meaningfully.

 

On 1/23/2019 at 7:18 AM, M5d said:

I'm unsure what the jump-cut from the previous thread's question, to this thread's answer is intended to convey Jim? 

Because they are two logically separate topics: one asking for official benchmarks on speed (objective) and one concerned with a perceived decline in speed across versions (subjective). There's no way I could possibly carry on both conversations in a single thread without it getting out of control.
 

On 1/23/2019 at 7:18 AM, M5d said:

Also, I have to say, the logic of deliberately avoiding benchmarks that include the apparent, problematic, operations and calculations etc., until they're no longer problematic, is truly inspired. Is that a standard operating procedure (S.O.P) at N.A.?  

I posted a question asking specifically what kinds of things users wanted to see in benchmarks, and said that I see the value in them. How did you arrive at the conclusion that I/we wanted the opposite of that?

 

Though to be clear: regardless of how frustrated you may be, the passive-aggressive tone that has so permeated our media and politics will not be accepted here:

On 1/23/2019 at 7:18 AM, M5d said:

To "clarify" Jim, is this statement intended to address the "user experience" that was raised in the previous thread?  

 

I need you to dial it back a bit. In any case, our ignoring a problem like that isn't possible; you all could easily just post benchmarks refuting any false claims we made, or lack of claims we made.

I'm not trying to NOT show benchmarks, I'm trying to ONLY show benchmarks that are:

1) Factual (and not just in ideal conditions)
2) Beneficial to a majority of users
3) Not lying by omission (looking at those ever-changing focus metrics Apple uses at their keynotes)
4) Useful in purchasing decisions

It is no secret at ALL that geometry calculation speeds have not changed between versions. This is because of the single-threaded core geometry engine I've discussed here so often. That issue pretty much just translates into a flat-line chart, however, not one that is increasing or decreasing. You're pretty much going to get the same time results across versions for things like duplicating an array of cubes or importing DWGs with a set number of polygons. If those kinds of charts are what people really want, sure, I'll post them, but I don't think they help OR hurt. The problem with geometry calc slowness is known, and talking more about it or filing more requests related to it will not change it any faster. It is already being worked on. If it were possible to make that go faster, I'd be doing whatever made that happen instead of writing this post.
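For what it's worth, the kind of per-version "flat line" table being described could be produced with something as simple as the sketch below; there is no real data here, and the timings would have to come from running the same file on the same hardware in each version:

```python
# Sketch only: tabulate per-version timings once they have been measured.
# Nothing here is actual benchmark data.
import statistics
from typing import Dict, List

def comparison_table(timings_s: Dict[str, List[float]]) -> str:
    # One row per Vectorworks version: median time and spread for the same
    # operation, same file, same hardware.
    lines = [f"{'Version':<12}{'median (s)':>12}{'stdev (s)':>12}"]
    for version, samples in timings_s.items():
        med = statistics.median(samples)
        sd = statistics.stdev(samples) if len(samples) > 1 else 0.0
        lines.append(f"{version:<12}{med:>12.2f}{sd:>12.2f}")
    return "\n".join(lines)

# Usage, once real runs exist:
# print(comparison_table({"2015": timings_2015, "2018": timings_2018, "2019": timings_2019}))
```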

I normally don't share things like this because of their complex nature, but here's an idea of what's going on from an analytical side. Speed is VERY hard to pull anonymous metrics on, but Vectorworks crashes are quite trackable:
[Chart: weekly crash volume by Vectorworks version]

 


The color key for the above chart:
[Image: color key for the chart above]
From left to right, this shows the weekly number of crashes we get from the various versions of Vectorworks. I began this filtered chart at the launch of 2018 SP0. The big dark and light blue bits in the center are Vectorworks 2018 SP2 and Vectorworks 2018 SP3. Versions are stacked from top to bottom on these charts, the top being the oldest version included and the bottom being the latest. By far, the most unstable versions of Vectorworks were those in the middle of 2018's life cycle. The small green and red bits on the bottom right are 2019 SP1 and SP2. Crashing has been reduced SIGNIFICANTLY in that time, yet we still get reports claiming that 2019 is more unstable than past versions.

This is because these simple metrics aren't enough to capture everything there is to the experience of working with a set of tools like ours. It's all too easy to discount claims of slowness or instability with metrics like the one above, but we choose not to hide behind metrics like this, because we know they can't possibly tell the whole story, and because we use the software ourselves and see that what you all are saying FEELS true.

We will be sharing more metrics in the future. We will be making more benchmark-style reports available in the future. If you all can provide me with specific objective comparisons you would like to see, I will provide them. That's the point of this thread.

  • Like 2

I know specific queries about that graph are probably exactly what you didn't want to encourage by posting it - but is it indexed to the number of users? That is, if in a certain week you see a similar number of crashes in 2018 SP5 and 2019 SP2, but at that point there are 10 times as many users sticking with 2018 as have moved to 2019, then 2019 is crashing ten times as much, if you see what I mean.

  • Like 1
2 hours ago, Jim Wilson said:

Unfortunately, that's the exact opposite of a benchmark. [...] That's the point of this thread.

Great Chart. 

Can that chart be drilled down to filter by machine type (MBP 15" late 2015, for ex) or by OS, etc.?

  • Vectorworks, Inc Employee
12 minutes ago, line-weight said:

I know specific queries about that graph are probably exactly not what you wanted to encourage by posting it - but is it indexed to number of users? That is, if in a certain week you see a similar number of crashes in 2018 SP5 and 2019 SP2, but there are at that point 10 times as many users sticking with 2018 as have moved to 2019, then that means 2019 is crashing ten times as much, if you see what I mean.

That chart isn't, no; it's strictly the volume of crashes in total.

When we do the mega charts internally, this is shown followed by comparisons like crashes-per-day-per-user. We also have to account for other things, like time of year (December and January are generally quiet in terms of how many people are using the software) as well as OS compatibility. For instance, there was a huge load of crashes related specifically to Mojave, but we can also separate the Windows and Mac data to see if it really is a spike just because of the OS, or if the crash rate is rising or falling independently of the OS.
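To make that normalisation concrete, here is a sketch with invented field names and placeholder numbers (not the real telemetry): weekly crash counts divided by the number of active users on that version, which is also the indexing line-weight asked about above:

```python
# Illustration only: invented record shape and placeholder numbers.
from typing import Dict, Iterable, Tuple

# Each record: (iso_week, version, crash_count, active_users)
CrashRecord = Tuple[str, str, int, int]

def crashes_per_user(records: Iterable[CrashRecord]) -> Dict[Tuple[str, str], float]:
    rates = {}
    for week, version, crashes, active_users in records:
        if active_users:
            rates[(week, version)] = crashes / active_users
    return rates

sample = [
    ("2019-W03", "2018 SP5", 900, 30000),  # placeholder numbers, not real data
    ("2019-W03", "2019 SP2", 300, 3000),
]
# Same week: 0.03 vs 0.10 crashes per user. The smaller absolute count is
# actually the worse rate, which is why raw volume alone can mislead.
print(crashes_per_user(sample))
```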

I actually love the heck out of our analytics tracking, and engineering has been repeatedly doubling down on it. For instance, I got a real morale boost when we started discussing how "quality is not a switch" - not something that will simply exist or not exist within a version, but something that must be tracked and catered to as much as any other aspect. We always cared before, of course, but now we care AND have tools to back up our decisions with data.

 

1 minute ago, mjm said:

Can that chart be drilled down to show filtered by machine type (MBP 15" late 2015, for ex) or by os, etc?

It doesn't go by machine year that I am aware of, but we can indeed peel off and look at JUST iMac Pro configurations, or MacBook Pro, etc., and then effectively sort out specific ones by CPU or GPU options. It's less easy on the Windows side, since there are so many configurations, but there we can also split it up by GPU or CPU to see if there's a hardware-specific trend. We have a lot of alerts set up now to warn us ASAP if something like "anyone with a GTX 9800 is doomed" happens after an update, for instance.
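And here is a sketch of the kind of hardware drill-down and spike alerting described above; again, the data shape and the threshold are invented for illustration:

```python
# Sketch only: invented data shape, not the real analytics pipeline.
from collections import Counter
from typing import Dict, Iterable, List, Tuple

def crashes_by_gpu(records: Iterable[Tuple[str, int]]) -> Dict[str, int]:
    # Each record: (gpu_model, crash_count) for one week.
    totals: Counter = Counter()
    for gpu, crashes in records:
        totals[gpu] += crashes
    return dict(totals)

def alert_on_spike(this_week: Dict[str, int], last_week: Dict[str, int],
                   factor: float = 3.0) -> List[str]:
    # Flag any GPU whose crash volume jumped more than `factor` week-on-week.
    return [gpu for gpu, count in this_week.items()
            if count > factor * max(last_week.get(gpu, 0), 1)]
```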
 

I can share and discuss these metrics to some extent, but there are things I am not permitted to reveal because of our user privacy rules, which I am more than happy to abide by. Too many companies let that stuff slip these days. I will gladly answer all questions related to it that I can within that ruleset.

1 minute ago, Jim Wilson said:

That chart isn't, no; it's strictly the volume of crashes in total. [...]

Thanks for the answer Jim. 


@Jim Wilson 

 

 

On 1/22/2019 at 6:52 AM, P Retondo said:

I sympathize with your problem, and don't have much to offer since I am sticking with v2017, despite owning three v2018 licenses and two v2019 licenses.

Jim, VW needs to deal head on with these speed and efficiency perceptions / reality (?) by instituting performance testing and releasing the data.  When I buy a processor I look at all the available data, and it is both voluminous and convincing.  CAD programs need to do the same thing - if for no other reason than to let their engineers know whether they are doing a good job.  When I make the time-consuming commitment to convert my files and resources to a new version, I want to know if my performance is going to be at least equal to the previous version.  That's just a simple business decision, and I don't base those on sales department press releases.

 

 

@P Retondo was responding to the OP of the original thread and the productivity issue in focus, an issue we're all well aware of. That discussion addressed the growing "lag", or unresponsiveness, of everyday, essential tools, as is being discussed in a number of other threads too.

 

For what it's worth, Jim, I agree with you: benchmarking the "problem" is pointless; that doesn't make it any less of a problem, though. Nor do I think P Retondo's post was about avoiding the issue raised by the OP of the original thread.

 

So yeah, the benchmarks you've proposed in response to the original thread really don't help or define anything of use; they're just a convenient distraction. And I don't believe there's any great demand or concern out here in user-land for benchmarking the processes that are working well and utilising our multi-flavoured rainbow of new and old hardware to its capacity. The reason for this is quite simple, Jim: those processes automatically improve when we purchase new hardware, and the purchase of new hardware is a business decision under our control. The main hardware-constrained metric of use to us, and likely to inform any purchasing decision, is rendering.

 

The growing state of unrest about the state of Vectorworks, however, exists because our businesses also rely on the decisions made by N.N.A's management. We build our businesses around the "well-tuned" use of software and platforms that, once established, are very difficult to shift from without major disruption. That . . . . "well . . . . dash . . . . tuned" . . . . use . . . . of . . . . the . . . . software, however, is being disrupted by the poor prioritisation of N.N.A's executive in not maintaining the underpinnings of many tools, to the point where they've become unresponsive to the pace of users as their complexity has grown. This is a problem that, as you've pointed out, we cannot fix with a hardware purchase and, as you have also pointed out, is worsened by having other applications open; but these are everyday tasks we're discussing here, everyday tasks that are becoming everyday issues for everyday users, every day.

 

This directly concerns you, Jim - your role and your conveyance of the discourse between the two parties involved here. In the greater portion of your work, you're assisting users "objectively" with genuinely "specific" technical issues, and you're applauded for the work you're paid to do. In the other portion of your work, managing the "user experience" where the issues are universal and not "specific", your responses are a matter of public relations and are consequently made "subjective" by the very same pay cheque. The issues at hand are an ever-growing public relations problem for N.N.A., and until they address the core problems of the software with a definitive response, that PR issue is only going to grow.

 

What's of concern from you, Jim, reviewing your more Orwellian tactics, is that instead of a firm and direct response to those fundamental problems, such as a statement about how and, most importantly, when they'll be resolved, you keep finding ways and means of circumventing them as topical issues. The primary issue that spawned the creation of this thread, the motive for its existence, is the very thing that you, by design, excised from its scope and discussion from the outset. The worrying thing is why? Why all this energy directed away from the problem, and crickets about the fix, or worse, the tin-eared dismissal that was given in the original thread. The OP of the original thread clarified the matter was for "General Discussion" in their second post, but clearly you found it more convenient to ignore those remarks, to deem the issue specific, and to play semantics with the verb that was used to describe what had actually happened to that thread. Determining issues to be "specific", side-stepping into arguments about "subjectivity" between versions, or proposing new distractions doesn't fix the issue or inform us about when it will be fixed.

 

Just to be clear on this new diversion of inter-version "subjectivity": what is tested or not tested is of no real interest. Of course a tool that is the same today as it was five years ago is going to perform approximately the same; it will no doubt perform much better when multi-threaded, I suspect. But many tools are not the same today as they were five versions ago: while their complexity has grown, the hardware aperture through which they operate has been left behind.

 

It seems incomprehensible that N.N.A's executive could not have seen this problem coming from a long, long way off. And yet here we are, with you running around trying to put out fires, implying it's subjective, it's the user, it's specific . . . which is tantamount to saying we're crazy!

 

 

Anyway, so you don't like my passive aggressive tone? What am I to make of this - is it another permutation of the way the original thread was handled? Am I being threatened with disbarment as a result? You've got all the power on that front, I'm afraid, Jim; how you use it is up to you. My response to your "perception" of my tone, however, is this . . .

 

On 1/22/2019 at 7:12 AM, Jim Wilson said:

I did not, what made you think this?

 

So I guess what is passive aggressive is "subjective" too, Jim? And I suppose you had no idea what was actually being referred to in the post you were responding to here? Well, my tone was not passive aggressive - what made you think that? Believe me, all my questions and forgetfulness were "genuine".

 

My tone, if I had one, stemmed from my reading of the original thread, where the behaviour of a company representative towards a user, client and customer was the appalling use of various tactics aimed at suppressing the extent to which some negative feedback might permeate the forum, presumably out of commercial interest. Subjective? Maybe. But you could have just as easily applied your energies "genuinely" to the original thread; instead you created this "controlled" distraction. What does the existence of this thread actually say? Might that be subjective too?

 

P Retondo's post was not about hardware, Jim, or the purchase thereof; if you had read it properly, you would have understood that the hardware analogy was an example of some kind of equivalent way of measuring the processes going on inside our software. The concern was for the impact that moving between different versions of the software is having on our productivity and our businesses.

 

As to the politics of the day, often referred to as post-truth: that's about saying whatever it takes, or doing whatever it takes, in service of the particular entity you derive some material benefit from. It's about immediate and short-term perspectives over the broader collective and our collective human dignity. Post-truth involves the use of distortions, distractions and diversions whenever and wherever its ugly reality becomes an inconvenient truth to those it seeks to manipulate. So the irony, Jim, is that this is exactly my perception of what has been deployed and accepted here. Do we need to go over the rhetoric that was used to sell 2019, which, by my reading, implied issues like this had finally been resolved?

 

It's simple, man: just don't spin us for fools, that's all. Straight-up, honest, plain speaking is all that's required.

 

 

Edited by M5d

@M5d I agree with lots of what you say, but I think you are being a little unfair to Jim W, because there are things that aren't in his power to do. Anger at those higher up who have allowed VW to get to its current state is justified, and I share that with you, but my impression is that Jim does what he is able to, and does give us honest, plain speaking within the boundaries set by his position. In fact he goes much further with that than I think you'd find most 'official' representatives of a company on their support forums.

  • Like 2

@line-weight don't worry, I actually like Jim and his presence on the forum. If my recent posts seem unfair, like I'm gunning for Jim himself, it's not so. What I'm sighted on is the use of rhetoric and other forms of magic where the substance is lacking. Unfortunately, Jim's role makes him a focal point for any conflicted narratives the company might hope to run through these forums. These forums, however, should belong to the users more than the company; they're the only vehicle we have to "dial it up", so to speak, when the company appears to engage in less than honest practices.

 

  • Like 1
  • Vectorworks, Inc Employee

@M5d The benchmarking referred to in this thread is not the benchmarking of hardware, nor of the effect of hardware on the software, but the benchmarking of versions compared directly against each other.

As suggested further above, this would be done with identical files on identical hardware, to give an idea of whether the software itself is slowing down as releases progress. I plan to do this testing and post it. I hypothesize that for things like geometry calculation and duplication, we will see speeds increase up to 2015 and then remain very much the same until today on the same hardware under the same test conditions; we will see if that pans out. This information is as valuable to me in working to improve the software as it is to you all in your decision of whether to purchase in the first place, upgrade regularly, or choose another package entirely. I don't want anyone using Vectorworks who doesn't feel it is worth its cost, and I am not personally willing to deceive people in order to drive sales.

Whether I like a tone or not is immaterial. It is imperative that this forum be kept clean of false accusations and misinformation. If there is information I possess that I do not release, it is because that information is still covered under NDA. The things you are saying are incredibly important and need to be said - thank you for saying them; it is only the tone and the accusation of intentional deception that I have to take issue with. Let's just drop this portion of the debate entirely and keep working to make things better.


Well, Jim, after reviewing what I wrote, I don't think you were accused as such. The discussion, however, was all about perceptions and how they are used, one way or another. Much of my response was simply bouncing off what had already been introduced into the conversation, on this and the previous occasion. And the knots that either you or the company get tangled up in when trying to cast a particular light are definitely not of my making.

 

I rarely comment, Jim; the only thing that generally motivates me to comment is a threshold in those perceptions, and if I'm not talking freely about them, then I don't see the point. A final warning (also the first) or finishing the job is effectively the same thing, based on what you hope it engenders. They're strange things, taunted with expectation! Anyway, it appears I cannot close my own profile, so use your power, Jim.

    


Just to be clear about what I have said: I think this thread and Jim's comments are an extremely constructive response - not at all an attempt at deflection or obfuscation, rhetorical or otherwise.  If you have ever worked on a software project (I have), you will know that testing is probably the most important part of the process.  In that regard, developing some benchmark tools, which run the software through a series of speed tests, would be extremely important for knowing whether the code as written is helping or hindering the specific goal of optimizing the speed of operations.  Making the results of benchmark testing available to users, compared across versions, is what both Jim and I would like to see.  Not as a way of solving every user's particular problems, but as a way of knowing in general whether a new version of the program is improving performance or not.  Just as getting a faster processor may not solve someone's issue, having a version that benchmarks better may not solve every issue, but it will at least tell us whether a performance problem is due to a basic software design issue.

  • Like 1

So, I see my profile is still here. I'll assume (with my previous post) that that carries some sort of mutual understanding. I appreciate your position is between a rock and a hard place at times, @Jim Wilson. I would not want the crucible of the company's fragilities for myself, nor should we (users) feel as though we're being moderated by them either; that's a fine and fuzzy line, I'm sure.

 

@P Retondo it was the criteria that went under the knife above; what was said wasn't meant to detract from the notion of an "overall" measure. The thrust of my comments was driven more by the context than the subject, if that makes any sense. Still, from a user's perspective, I'd much prefer such things were simply academic to us and of no real consequence to everyday use. We purchase products to do a job; piloting what probabilities lie between their promotion and pitfalls each year shouldn't be a job in itself, not from a company of Nemetschek's scale at least.

Edited by M5d
