
Debriefing DLVP vs referenced Layer



Hi everyone,

I would like to hear your opinions and experience with DLVPs, and what conclusions you have eventually drawn about using them.

Here is our experience. Our conclusion: we are giving up on them for most of our workflow...

We started using DLVPs when we upgraded from VW 12.5 to VW 2009. That was 18 months ago.

I think we really tried to do the thing properly: reading the Ellicott Heights project structure and the white papers extensively... I myself was really enthusiastic at first (the DLVP advertising being one of the reasons we decided to upgrade). I also feel that we have tried long enough that the decision to give up is not made lightly.

Quickly, everyone in our small team (except me) started to complain about DLVPs and their complexity. Nevertheless, a few of us kept brainstorming about how to use them (convinced that we had to, since this was the new path Nemetschek had taken...), and probably because I insisted so much, we have been using the system all that time.

However, I must now admit that we never recovered a nice workflow after introducing DLVPs (files are huge, computers slow down, the 3D workflow is full of various bugs, from Z inconsistencies and export problems to elements not displaying..., and the complexity of the concept itself makes it heavy to use...).

So we are now spending some time debriefing the past 18 months, and it seems that one of the main conclusions is that DLVPs are so awkward to use that they have completely messed up our workflow (which used to be pretty good).

The conclusion is that we are definitively abandoning DLVPs for almost everything and going back to the old referenced-layer system.

The only place where we have decided to stick with DLVPs is sections and elevations, since the ability to rotate a DLVP lets us rotate the plan in a much more flexible way than with a layer link... and that is good for drawing the elevations.

I'd be happy to read some other users' opinions...


Anaelle,

I would be interested to see what kinds of projects and DLVP setups you were using. I realize that it is very easy for a DLVP system to get out of hand and unravel. I would like to know more about how the team and the project data were organized to handle the necessary modeling/drawing information.

Answering the following questions (and sending files) would help me provide some answers/help:

How large was the project? (How many stories, how much area?)

How many people were on the team?

Who was responsible for what modeling/drawing?

What was modeled vs. what was drawn (2D only)? What level of detail was modeled vs drawn?

Were Sheet Layers used?

How was printing of drawings handled? From which files were drawings printed?

What were you trying to do with the DLVPs? Reference floor plans from multiple files into a central model or layers from a central model to multiple files?

Were there problems with visibility control of classes or layers?

Were you using absolute or relative settings for the referencing?

Were files being shared by a peer-to-peer network of computers, or was a central file server being used?

Mac or PC, or mixed?

I know this seems like a lot of questions, and there will be many more, but they are all relevant to getting a better understanding of what worked or didn't work, and why. I would love to help you sort this out, as I am working on revising the collaboration documentation for Vectorworks offices/projects over the summer.

You can also contact me offline or by PM. It would help me to examine the files first hand, to see where you found the pain, and to offer suggestions.


Hi Jeffrey,

I'll try to answer your questions, and if you send me your e-mail I can send you a PDF of our file structure (it is actually not up to date, since it still says to use DLVPs, while we've decided to go back to referenced layers).

OK, here we go:

1) How large was the project? (How many stories, how much area?)

We've been working on different projects.

ProjA - block of 6 apartments, 4 storeys, roughly 600 m2

ProjB - block of 11 apartments, 5 storeys, roughly 1,000 m2

ProjC - 12 dwellings, 4 floors, 3,000 m2

ProjD - 65 apartments + a day-care centre, in 5 blocks of on average 3 floors, 7,000 m2

CompA - day-care centre for 66 kids, 3 floors, 1,500 m2

CompB - 25 apartments in 3 blocks, 3,000 m2

CompC - refurbishment of a factory into offices, 10,000 m2

plus various competitions I don't remember...

2) How many people were on the team?

We're a team of 4-5 working across all the projects, but not always simultaneously.

3) Who was responsible for what modeling/drawing?

One person is "in charge" of each project, while also working on other projects where he is not "in charge".

4) How was the modeling/drawing divided up?

The work is separated into "site drawing", "floor plans" (with block or unit subdivision for big projects), "sections and elevations", "3D", and "details" (distinguishing plan details from section details).

5) What was modeled vs. what was drawn (2D only)? What level of detail was modeled vs drawn?

Modeled: 3D site, general volumes (exported to SKP and used as the basis for sections/elevations in some projects).

We gave up going into too much detail, since other 3D modeling tools are quicker... but it is useful in competitions, when the project changes. DLVPs cause bugs (e.g. if the ΔZ is first set to X and then to Y, for some reason it remembers X and never goes back to Y, so there are gaps between floors).

All the "decoration" is 2D (e.g. 3D humans are much too heavy to be usable).

6) Were Sheet Layers used?

yes

7) How was printing of drawings handled? From which files were drawings printed?

A big file references all the other files and uses sheet layers with SVPs => batch printing or batch PDF export.

We intend to change that: use those big files only at the very last stage, and at intermediate stages use local SVPs and saved views...

8) What were you trying to do with the DLVPs?

Reference floor plans from multiple files into a central model: yes.

Or layers from a central model to multiple files? No.

9) Were there problems with visibility control of classes or layers?

Yes, mainly in competitions. Even though we have lots of classes, there is always a special class needed for one particular project, and when it is created it doesn't show up in the already-created DLVPs... which makes a big mess, since people try to "find" the missing class.

Also, because the system is complex, there is always one of us who, for some reason (mainly being fed up with looking for the source), just "forces" a class setting in the DLVP rather than letting the setting be controlled by the attributes in the main navigation...

A big mess too... plus the stress of deadlines makes people set classes randomly in the last-minute panic, and then it is a big mess to find out what has been forced where...

10) Were you using absolute or relative settings for the referencing?

Absolute. We have a big post on the intranet saying that the zero should "never" be reset, because of the big mess we experienced with ProjD. Coming from VW 12.5 the problem was not obvious, but with DLVPs there was lots of trouble, also because our first reaction when starting to use DLVPs was to chain-reference them... a very big mess... We don't do that any more, but the files still get very heavy very quickly, and the absolute rigour required quickly becomes painful for the team... (it's difficult to remember why we shouldn't do something that is possible):

- Why not chain references? Because it causes problems later.

- Why not reset the zero? Because it causes problems later.

- Why not make more than one layer visible per DLVP? Because it causes problems later.

etc...

11) Were files being shared by a peer-to-peer network of computers, or was a central file server being used? A server.

12) Mac or PC? PC.

all the best.

A.


Anaelle,

It sounds like you are having the same issues as our group, as we seem to be involved in projects of similar scale. We still use design layer viewports because of the ability to combine project models.

We do not use the workflow suggested by NNA; we have developed our own methodology, based on the issues you have mentioned, that still allows a team of 4-5 to work together on a project. If you like, I can send you some links offline about our latest thinking on how we organize our work and workflow.

Regards,

PS: by the way, your project work is quite nice, so you must be overcoming some of the shortcomings of the tool!


Hi Stan,

For me it is the "heaviness" of DLVPs that is mainly responsible for the trouble, not the logic of the referencing itself. That's why, having weighed the pros and cons of DLVPs, we felt they were a real plus for us only for the manual drawing of sections and elevations. (For some projects we use section VPs and then convert them to groups, to copy-paste into the section file in order to do additional drawing and annotation on them... Actually, we would have found it much better to have section VPs on design layers rather than sheet layers, so manual copy-pasting is our workaround.)

Hi Jeffrey,

I shall send you the PDF of our file structure. The rectangles represent .vwx files; the rounded rectangles represent layers contained within the files. The solid lines mean "layer belonging to the file"; the dotted lines mean "layer referenced into the file". We intend to replace all the DLVP referencing with old-fashioned referenced layers (which we were already doing for 3D anyway), except for the sections and elevations, as said.

=> Beware: this is only the structure of the "working files".

As said before, for daily use and printing we're going to add local sheet layers. For the final presentation we have a file that gathers all the needed layers from all the other files and then generates the many sheets.


Yes,

But maybe because of the number of referenced elements (?), the update time of the references is too long... so we often cache them in the file anyway, to allow manual control of the updating. Otherwise it can take up to 10 minutes just to open one file... and I've got the worst computer, since it is a laptop...


Anaelle,

I must admit that looking at your PDF made my head spin before I fell off my chair. ;-)

There is a great deal of complexity going on here that, I must also admit, I don't understand. Stan and I have had these discussions before, as I have with other users in the past trying to do the same thing. Most of the time I am overwhelmed by the level of complexity of these referencing schemes used to get a project completed.

Forgive me if I'm going to be a bit too philosophical here, but without troubleshooting actual files, this is where I must speak from.

From my own perspective, learning about and dealing with BIM has had a profound influence on rethinking the HOW of production as much as the WHAT is being produced. Ultimately, a set of drawings is needed, in the same way it has been for a very long time, and it will continue to be needed for some time in the future. With BIM, the model, the repository for all the information displayed, is the new focus/requirement that adds a new dimension.

Most times, users look at the end product, the drawings, and from this point determine the workflows, human effort, and data structure for the project. In a CAD-centric, drawing-centric, 2D paradigm, this makes sense. Information must be sliced and diced in ways that let many people get a lot of work done at the same time. But coordination is a nightmare, because information is not intelligently linked or directly derived from one view (plan) to another (elevation/section).

In a BIM-centric paradigm, a complete, intelligent, 3D model is the focus of all workflows, human interaction, and data structure. At some point, all the data, in its "raw" form, has to come together in a centralized way to create a singular instance of the database as a whole, where the data can be queried, sliced and diced, and annotated for consumption or "rendering" in the views necessary for documentation.

In short, I think stepping back and re-examining a "typical" large-project data structure from another point of view would be useful. Instead of saying "I've got 4 people, each responsible for these types of drawings", what about starting from "what will this virtual construction, this database, look like when complete, and how do I get that info in, and the required views/drawings out, in the most efficient way with the tools I have?" I think this forces everyone to reconsider the modeling/drawing responsibilities in a way which may change how the files, models, and drawings are ultimately structured and produced.

I am sorry if this sounds a bit glib and abstract, but without going beyond diagrams, it's hard to be more concrete.

The current technology of Vectorworks in a collaborative environment is what it is. The challenge for everyone is figuring out the sweet spot of leveraging it, or ignoring it altogether.


10) Were you using absolute or relative settings for the referencing?

Absolute. We have a big post on the intranet saying that the zero should "never" be reset, because of the big mess we experienced with ProjD. Coming from VW 12.5 the problem was not obvious, but with DLVPs there was lots of trouble, also because our first reaction when starting to use DLVPs was to chain-reference them... a very big mess... We don't do that any more, but the files still get very heavy very quickly, and the absolute rigour required quickly becomes painful for the team... (it's difficult to remember why we shouldn't do something that is possible):

- Why not chain references? Because it causes problems later.

- Why not reset the zero? Because it causes problems later.

- Why not make more than one layer visible per DLVP? Because it causes problems later.

etc...

Anaelle,

What I meant was not the 2D plan origin, but the referencing scheme, where the user can choose to reference a file from a relative location or an absolute location in a particular folder, machine, or server.

11) Were files being shared by a peer-to-peer network of computers, or was a central file server being used? A server.

12) Mac or PC? PC.

Regarding the long reference update times of truly referenced DLVPs, I wonder if you could perform a test: put ALL the project files on a single machine and try opening the "slow" files. The point is to test whether network speed, or choke points in the network, are causing the delays in the updates, rather than the amount of data. A lot of optimization went into 2009 and 2010 to handle referencing, but the Vectorworks app has NO control over the quality of the network's ability to transmit the data. All it takes is one bad hub, wire, connection, or workstation setting to bring network speed to its knees and compromise the experience.

Yes,

But maybe because of the number of referenced elements (?), the update time of the references is too long... so we often cache them in the file anyway, to allow manual control of the updating. Otherwise it can take up to 10 minutes just to open one file... and I've got the worst computer, since it is a laptop...

See my response about speed in the post immediately above...

  • 2 weeks later...

Most times, users look at the end product, the drawings, and from this point determine the workflows, human effort, and data structure for the project. In a CAD-centric, drawing-centric, 2D paradigm, this makes sense. Information must be sliced and diced in ways that let many people get a lot of work done at the same time. But coordination is a nightmare, because information is not intelligently linked or directly derived from one view (plan) to another (elevation/section).

In a BIM-centric paradigm, a complete, intelligent, 3D model is the focus of all workflows, human interaction, and data structure. At some point, all the data, in its "raw" form, has to come together in a centralized way to create a singular instance of the database as a whole, where the data can be queried, sliced and diced, and annotated for consumption or "rendering" in the views necessary for documentation.

I quite agree with the BIM approach, yet your approach is indeed very theoretical, and it could use a bit more practical insight to become more useful.

As a matter of fact, BIM is far from perfect yet, so currently one can only use it partially (at least in the VW we have now).

To back up the point that we are practising BIM as much as we feel it can be practised, I'd say that:

- We use VW's BIM for the site terrain and for the general volumes of the project (we complement the rest manually), and it works pretty well. We use it for generating sections and volumes in the early phase of a project, and for that purpose we're happy with it.

- We conceive all our projects in 3D FIRST, even if most of the 3D is not done in VW, so we're quite far from the rigid, old-fashioned set of drawings.

- We have structured our work to use the parametric qualities of VW as much as we can.

And to get to my point in this post, I'd say that we all agree that

a complete, intelligent, 3D model is the focus of all workflows, human interaction and data structure,

but this does not change the fact that many people need to get a lot of work done at the same time.

We can't have one of us enter all the 3D data while the others wait. So we need to work together, at the same time, to get as much of the 3D model done as quickly as possible.

This is what we try to do with the structure of our files, and we need to set up a structure that works for the various kinds of jobs we do: competitions, sketches, calls for tender... large scale or detailing...

So the point is not about how we slice the work, but the fact that the work needs to be sliced and then put back together.

It is true that several elements of VW are not well designed for that purpose.

I'll give a quick example: I like the way VW can draw parametric stairs from one floor to another... however, this only works when the different floors are on different layers in a single file. If you separate the floors into several files (as we need to, since on a big project without repetitive units you cannot have one single person working on all the floors, so you need to split them into different files...), it no longer works... well, it might still work with old-style referenced layers, I should give that a try... but it definitely does not work with DLVPs...

My point is: BIM or not, if you need to build the 3D model of anything as a team, someone needs to do one part while someone else does another... and each part ends up being a file...

I wonder if you could perform a test by putting ALL the project files on a single machine and trying to open the "slow" files. The point is to test if the network speed, or any choke points in the network, are causing the delays in the updates, not the amount of data.

Yes, I've tested that in the past. We have a 1 Gbit hub and all the network cards are 1 Gbit too... it does go quicker on a single machine, but not significantly so... and anyway, we do need the server.

We have other programs running through the server without trouble... so it seems the problem is particular to VW, and probably due to the fact that full referencing is quite heavy...

  • 4 weeks later...

By the way: I've just redone that test, "ALL the project files on a single machine and try opening the 'slow' files".

Here is the result :

1) All files local (for a 17 MB file referencing 5 files of 11 MB, 11 MB, 13 MB and 15 MB; our files are often at least twice this size)

= 80 seconds per update (1 minute 20 seconds)

2) Referenced files on the server = 90 seconds (1 minute 30 seconds)

= +12.5%

3) All files on the server = 95 seconds (1 minute 35 seconds)

= +18.75%

...
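For anyone wanting to check the percentages quoted above, they are just the timing deltas relative to the all-files-local baseline. A minimal sketch of the calculation, assuming the 80/90/95-second per-update timings reported in the test:

```python
# Overhead of each setup relative to the all-files-local baseline,
# using the per-update timings (in seconds) reported in the test above.
timings = {
    "all files local": 80,
    "referenced files on server": 90,
    "all files on server": 95,
}

baseline = timings["all files local"]
for setup, seconds in timings.items():
    overhead = (seconds - baseline) / baseline * 100
    print(f"{setup}: {seconds} s (+{overhead:.2f}%)")
```

This gives +12.50% with the referenced files on the server and +18.75% with everything on the server.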

  • 1 year later...

Hey guys... just wondering whether this thread should get further updates, or should we just let it die? I think it is an important conversation.

Blimey: has your office resolved the speed problems by switching back to design layer referencing?

Jeffrey: I second Blimey's point about "workflow". If you have more than one person working on a project, the task of modelling has to be spread between the different people on the team.

In any case, modelling and drafting are intertwined processes. It is not as if you start drafting only once you finish modelling; the two processes are highly interactive and iterative.

All the other CAD packages on the market now have much better teamwork interfaces (look at ArchiCAD, Revit, etc.); they also have a teamwork structure that keeps the model in one file to maintain its BIM integrity.

Vectorworks' ability to link one file to another is a really antiquated way of working as a team. Is Nemetschek planning a better, different approach to teamwork for Vectorworks?
