Posts posted by herbieherb

  1. More info please 🙂
    Which programs do you use? What kind of plans do you draw? 2D/3D? Do you render? OpenGL or Renderworks or Hiddenline? And if so, how much time do you spend rendering? How much of your working time do you generally spend with Vectorworks? Do you use it for work or study?
    If you are working on a typical plan, what is your current maximum RAM and VRAM usage?

  2. While Vectorworks would certainly perform well on a Mac Pro, you pay a lot for things you don't really need for Vectorworks. You get a server CPU, ECC RAM, enterprise SSDs and a workstation GPU, while Vectorworks asks for a desktop or HEDT CPU, non-ECC RAM, a fast SSD and a fast gaming GPU.

    This is why, for the price of the base Mac Pro, you can get an HEDT machine that performs better in Vectorworks than the fully equipped Mac Pro.
    I would also be rather critical of the upgradeability. As Linus says in the video, it is worth buying a base model and upgrading RAM, CPU or GPU yourself. But I wouldn't count on being able to upgrade to the next Xeon generation in the future. Intel has already announced that a new Xeon generation with PCIe 4.0 can be expected by the end of the year, almost certainly with a new socket. So the Mac Pro 2019 will only ever be upgradeable with the Xeons that are already available today, at least presumably at a lower price by then. Based on this announcement from Intel, I dare say that Apple will release a PCIe 4.0 Mac Pro 2020 update before Christmas. So if you want to buy a Mac Pro and can wait, I would rather wait for that update, because the 2020 PCIe 4.0 motherboard with a new socket might actually survive another Xeon generation and keep its upgrade promise. That 2020 PCIe 4.0 Mac Pro also wouldn't already be outdated before its release.

  3. I would rather say it's like ploughing a field with a Formula 1 car. If you want a really badass Vectorworks tractor, buy a Threadripper 2990WX machine. If you're looking for a decent machine, you'll do fine with one of the current Ryzen 3000 CPUs. If it really has to be Apple, an iMac is still the best choice.

  4. I wouldn't model the curtain, but rather place a texture with displacement mapping on a simple 3D polygon. It looks much better in a rendering, you don't have to model it, and you don't carry a lot of geometry in your project while working.


    Here is a video of how you do it:

    The texture only works if you select a Renderworks render style and turn on Displacement Mapping.


    Here's the texture used:

    Curtain3.jpg

     

    And also some other Textures to experiment with:

    Curtain1.jpg

     

    Curtain2.jpg

     

    And here's the Vectorworks file from the video:

    curtain.vwx

  5. Unfortunately there is no way to export point clouds from Vectorworks. I would use different software for merging point clouds anyway. Get CloudCompare, an open-source application for displaying and editing point clouds. With it you can easily merge your point clouds and export them in any common format.

  6. Don't use meshes if you can avoid them. Vectorworks is not optimized for meshes, but for solids. It just doesn't seem possible to optimize for everything. For example, the developers had to decide whether to make many viewports with little content faster, or few viewports with a lot of content. (They chose the second.) Similar dilemmas have probably led to plans consisting of single lines becoming slower with each version, while tidy plans built from PIOs, symbols and the like are much more performant nowadays.

  7. HBM2 is actually ECC-capable, but the driver has to make use of that capability. The Radeon VII does not enable ECC, although the hardware could theoretically support it.
    So the fewer crashes you are seeing are not due to some ominous semi-ECC.
    Typically, with 16 GB of RAM in 24/7 use, a single bit flips about four times a week. Roughly 2-15% of these flips lead to a calculation problem or a crash. That's at most about one noticeable error every two weeks on a well-working new system without GPU ECC (see the rough estimate below).

    Older memory and operation at higher temperatures generally lead to more memory errors. I think your observation is mainly due to the fact that you are now working on newer, more energy-efficient hardware.
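
    As a rough sanity check of those numbers in Python (the flip rate and the error fraction are just the ballpark figures from above, not measurements):

    ```python
    # Rough estimate of noticeable errors from single-bit flips in 16 GB of RAM.
    # The rates are the ballpark figures quoted above, not measured values.
    flips_per_week = 4        # ~4 single-bit flips per week in 24/7 use
    error_fraction = 0.15     # upper bound: ~2-15% of flips cause a visible problem

    noticeable_per_week = flips_per_week * error_fraction
    print(f"~{noticeable_per_week:.1f} noticeable errors per week")
    print(f"i.e. roughly one every {1 / noticeable_per_week:.1f} weeks")
    ```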

    • Like 1
  8. Quadro graphics cards only give you more performance in software for which special drivers are available. These custom drivers are one factor that makes Quadros so expensive.

    The Quadro cards use the same graphics chips as the GeForce cards; they are simply throttled so that reliability and lifetime are higher. A Quadro RTX 4000, for example, uses the same chip family as the GeForce RTX 2080, but runs at roughly the speed of an RTX 2070 Super. The Quadros also come with more RAM. That is the other reason why they seem so expensive for the performance they provide.

    But these special drivers and specs make them more suitable for CAD applications, which are specialized in calculating a lot of geometry without fancy shaders, and slower for gaming (less geometry, but with fancy shaders). Lumion and Twinmotion run on a game engine, so the GeForce cards are more suitable for them. Vectorworks also runs better with the GeForce gaming drivers, because it is optimized for them.

    For Vectorworks, Lumion and Twinmotion, a GeForce is therefore the better choice.

    For further technical advice, post the specs of your current computer in the hardware section of the forum.

    • Like 2
  9. Photogrammetry needs a lot of CPU power. Some programs can also use CUDA, so they run on the graphics card instead of the CPU.
    For a good point cloud you want to use several hundred photos. The calculation time increases over-proportionally with the number of photos: roughly 15 minutes for 300 photos, one hour for 600, three hours for 1,000 and two days for 4,000 (using VisualSFM with CUDA on an RTX 2070). A rough model of that scaling is sketched below.

     

    VisualSFM is open source if you want to try it out.
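
    For a feel of that growth, the timings above roughly follow a quadratic curve, t ≈ c·n². This is just a curve matched to the quoted numbers, not anything specific to VisualSFM:

    ```python
    # Rough quadratic model matched to the timings quoted above (minutes).
    # Back-of-the-envelope only; not a property of VisualSFM itself.
    c = 15 / 300**2  # calibrated on "300 photos -> ~15 minutes"

    for photos in (300, 600, 1000, 4000):
        hours = c * photos**2 / 60
        print(f"{photos:>5} photos -> ~{hours:.1f} h")
    ```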

    • Like 1
  10. A NAS is basically a small, power-saving computer that is attached to the network and also configured over the network. It has no display, no graphics card, etc., and essentially consists of a small case with a processor, RAM, a network connection and usually several hard drive bays. The main advantage is that it provides a shared drive on the network that all other machines can access. It is the simplest type of server. Modern NAS devices offer additional functions such as cloud services, web hosting, backup features, etc.
    Usually a NAS runs several hard disks in RAID 1 or higher, so if one disk fails it can simply be replaced without any data loss.
    Check out the QNAP and Synology pages. Currently these manufacturers make the best NAS devices.

    We used this 2-bay NAS from Synology for a long time. It is very reliable and we are also satisfied with its configurability.

    • Like 1
  11. Ryzen 3000 X570 boards are already very well equipped with lanes; the more expensive ones have up to three M.2 connectors. Apart from the graphics card, most other PCIe cards use only four lanes, so with an X570 board you are well covered.

    As a backup you could, for example, get a NAS that serves as central storage for your entire home setup. You then either move the data you are working on to the fast SSD in your computer or synchronize the data from your computer to the NAS. From the NAS, you make a daily backup with versioning to a cloud of your choice. From the data in the cloud, you make a backup to an external hard drive, for example once a month. You only plug in this external hard drive when making that backup.

    With this method your data is very safe.

    If a hard drive fails, you replace the one in the NAS and RAID 1 restores the broken drive. If your home catches fire and your NAS and computers are destroyed, you have the cloud. If your NAS is hacked and your data is encrypted (and the encrypted data has also been synchronized to the cloud), you have the external hard drive with the emergency backup.

  12. I wouldn't worry too much about the lanes. If you're running a single-GPU setup, you have more than enough of them anyway.
    If you later decide you want multiple GPUs, you won't see much of a performance penalty from connecting them with x8 instead of x16. You could also connect two PCIe 4.0-capable graphics cards (that's why I explicitly recommended an X570 board; these are currently the only ones that support it). Since PCIe 4.0 is about twice as fast as the currently common PCIe 3.0, the bandwidth is sufficient for several graphics cards even if each only gets 8 instead of 16 lanes. Two graphics cards on 8 PCIe 4.0 lanes each are practically as fast as two graphics cards on 16 PCIe 3.0 lanes each (which is, for example, what the Mac Pro offers while advertising its many PCIe 3.0 lanes). A quick bandwidth comparison is sketched below.
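
    The arithmetic behind that claim, using the approximate usable per-lane bandwidth of each PCIe generation (figures after 128b/130b encoding):

    ```python
    # Approximate usable bandwidth per lane in GB/s (after 128b/130b encoding).
    GB_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

    for gen, lanes in (("PCIe 3.0", 16), ("PCIe 4.0", 8)):
        print(f"{gen} x{lanes}: ~{GB_PER_LANE[gen] * lanes:.1f} GB/s")
    # Both land around 15.8 GB/s, which is why x8 PCIe 4.0 keeps up with x16 PCIe 3.0.
    ```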

     

  13. RAID 1 is not a backup. It only helps you in case of a drive failure and improves read performance. If a virus encrypts your data, you accidentally delete something, or data is written incorrectly, you are out of luck.
    If you want to back up your data, you have to synchronize it regularly, e.g. to a cloud, so that the data is physically in a different location (in case of fire). Then you need an additional backup, done manually but less regularly, on a hard disk without internet access (e.g. once a week or month onto an external HD). This protects you against viruses that encrypt your data (your first backup system would simply synchronize the encrypted data into the cloud automatically).
    Then you could consider versioning (keeping several daily/monthly/yearly backups) so that you can go back to the state of, say, the day before yesterday or a month ago if you need to. A minimal sketch of such versioned snapshots follows below.
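
    As a minimal illustration of what versioning means in practice (the paths are hypothetical, and a real setup would rely on the NAS or cloud provider's own versioning tools rather than a hand-rolled script):

    ```python
    # Minimal illustration of versioned snapshots: copy the project folder into a
    # new timestamped directory on every run, so older states remain restorable.
    # Paths are hypothetical placeholders.
    import shutil
    from datetime import datetime
    from pathlib import Path

    source = Path("/data/projects")           # working data to protect
    backup_root = Path("/backup/snapshots")   # versioned backup target

    snapshot = backup_root / datetime.now().strftime("%Y-%m-%d_%H%M")
    shutil.copytree(source, snapshot)
    print(f"Snapshot written to {snapshot}")
    ```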

  14. 15 hours ago, Art V said:

    They are independent, technically speaking, but... the last paragraph quoted above does mean there is a dependency because the CPU has to wait for memory so the RAM must be capable of "synced" timing cycles with the CPU, or the CPU needs to be overclocked to make the best use of those fast timings. In the past a bad combination could actually make things worse, which is much less of an issue these days but if you want to get the most out of the fastest RAM then it is likely the mobo/CPU software will use some overclocking to get the best speed.

     

    With that said, if the manual states that some overclocking is needed for certain RAM then good cooling is even more important to maintain stability, though there could be stability issues anyway if one manually overclocks the system. If the CPU/mobo software is doing the overclocking it will usually be within safe margins. Which is what I think the MSI Creative Content stuff is doing.

    You're mixing up two different things. Overclocking RAM is not the same as overclocking the CPU, nor does it affect the CPU itself.
    The supported range of RAM clock speeds depends on the processor and is listed in the motherboard specifications. There are clock speeds that are natively supported and those that are reached by overclocking the RAM. While overclocking the processor carries certain risks and can shorten its lifespan, overclocking RAM is unproblematic, because you only set the RAM to the clock speed it was manufactured for. To do this you change an option in the UEFI (BIOS) so that the motherboard reads the supported clock rate from the RAM modules; the BIOS should then set everything automatically. Since the RAM was built for this clock, it does not get too hot and no additional cooling is needed. So overclocking the RAM really has nothing to do with overclocking the processor.
    Faster RAM has the same effect on the processor as a faster graphics card would: the CPU runs exactly as before, but the overall system performs better. Most TRX40 motherboards support RAM up to 3000 MHz natively and are validated with overclocking profiles up to 4666 MHz. Using 3200 MHz RAM is no problem.

     

    10 hours ago, Spikey said:

    How good for all this is the Quadro 4000rtx from what Ive been describing compared to 1 or say a pair or linked Quadro RTX 5000 or 6000? 

    Only use one graphics card. Most programs cannot use a second graphics card at all. The money is better invested in a single, better graphics card.

     

    The way you describe your intentions, you'll need an all-round workstation. I also looked around in the Solidworks forums again, and it seems that besides performance, there are sometimes graphics problems with consumer graphics cards.
    Since the more expensive Threadripper would only make a difference in CPU rendering, I'd do without it and take the Quadro card instead.
    This way you get a computer that is less powerful in one specific application but can handle all kinds of programs.
    Given the very large budget, you still end up with a computer that will surely outshine 95% of all CAD workstations. By comparison, the most powerful computer in our office cost £2,900.

     

    It might just as well be a good idea to set half the money aside and invest in cheaper hardware that you upgrade later, or replace completely if there is a lot of movement in the hardware market.
    Threadripper is very strong at CPU rendering, but I don't think that is your focus. You could save a lot of money by using a Ryzen 3950X instead, because then the motherboards are also much cheaper.
    With a Threadripper 3960X and a Quadro RTX 5000 you're in the £4,400 range. With a Ryzen 3950X and an RTX 4000 you are already at £2,700. So you've saved a lot of money, which you can put into the next graphics card generation in a year. The cheaper system still has more power than probably 90% of all workstations here in the forum.

    • Like 2
  15. 2 hours ago, Spikey said:

    The main reason why I was feeling trapped into the Quadro would be if in Solid works the Geforce card stopped it actually working or being buggy, problematic, visualise not working etc. When I ask on the solid works forum its all get Quadro or you will have problems... Not sure if that's just to push Quadro as I agree in that when I look at results Geforce cards are way faster.... I was even toying with the idea of maybe getting one of each to swap depending on the job and possibly trying to get one second hand but been told too risky and thats a £850 to over £1000 extra.

    I don't use Solidworks myself, so I can't say whether you will really have stability problems with a GeForce. I have also seen other benchmarks that showed bigger performance differences: https://www.engineering.com/DesignSoftware/DesignSoftwareArticles/ArticleID/18630/Whats-the-Difference-Between-GeForce-and-Quadro-Graphics-Cards.aspx It will probably be similar to Vectorworks and integrated graphics: many people have a Mac mini and only draw with the onboard graphics. This works fine as long as they don't work on larger models, but it can also cause graphics problems. That's why nobody in the forum would recommend a computer with an onboard solution, yet if a user decides to work like this anyway, for other reasons, it can still work quite well depending on the workflow.

    If Solidworks were your main CAD, I would clearly go for a Quadro. But in your case it is the only one among many programs that would benefit from a Quadro; with all the others the system would be slower. That's why I'd think carefully about where you want to take the performance penalty. You can also buy the RTX 2080 Ti, and if you notice that Solidworks really doesn't work reliably with your workflow, sell the GeForce and install a Quadro. Changing it later is very easy: take out the old card, put in the new one, install the drivers, and you're done.

     

    2 hours ago, Spikey said:

    Also you mentioned ram and Im a little confused.

    Like the CPU and the GPU, the RAM also has a clock rate at which it works. They are all independent of each other: you can have a CPU at 4 GHz, RAM at 3 GHz and a GPU at 2 GHz. The clock rate measures how many steps the component performs per second. The clock speed of the RAM has no effect on the clock speed of the processor, so the processor will not suddenly be overclocked. AMD processors have simply been found to perform better when combined with fast-clocked RAM.

     

    Then with RAM there are the timings, which are also important. The timings measure how long the RAM needs to respond. High clock speed combined with fast timings gets extremely expensive, though. The sweet spot for DDR4 RAM is about 3200 MHz clock speed with CL14 timings; a quick latency calculation is sketched below.
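
    To see how clock speed and timings play together, the effective first-word latency is roughly CL divided by half the transfer rate (a common rule of thumb, shown here for a few typical DDR4 kits):

    ```python
    # First-word latency in ns ~ 2000 * CL / transfer_rate_in_MTs.
    # Rule-of-thumb comparison of a few typical DDR4 kits.
    kits = [(3200, 14), (3200, 16), (3600, 18), (2666, 19)]

    for rate, cl in kits:
        print(f"DDR4-{rate} CL{cl}: ~{2000 * cl / rate:.2f} ns")
    # DDR4-3200 CL14 comes out around 8.75 ns, which is why it is often called the
    # sweet spot: slower kits have higher latency, faster low-CL kits cost far more.
    ```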


    For the same reason I wouldn't use ECC RAM unless it is absolutely necessary. Not only is it much more expensive, it is also not available at high clock rates.

     

    Registered/buffered RAM is RAM with an extra chip that acts as a buffer between the processor and the memory chips. It provides higher reliability for very large amounts of RAM, but also adds latency, so it is slower than unbuffered/unregistered RAM.

     

    So faster RAM basically means what you stated in this conclusion:

    2 hours ago, Spikey said:

    or can you have faster ram that reduces latency and makes the processor run better but without over clocking it?

     

    The most important question has not yet been asked: what do you want to do with all this software? Are you going to push the limits of every program with the biggest possible projects, or do you want to learn the programs with smaller tutorials and medium-sized sample projects?

     

    This might also answer your graphics card question. If you don't feed Solidworks with gigantic projects, the 2080 Ti will probably still perform very well, because it is an absolute high-end card despite its consumer orientation, and you won't need the extra headroom of a Quadro in Solidworks at all.

     

    I find your list very ambitious, by the way. I've been learning Vectorworks for more than a decade and still haven't finished. 🙂 I'd have a look at all of these programs and then concentrate on just a few that you find most useful.

    • Like 1
  16. Take a look at these benchmark results: http://www.cgchannel.com/2019/10/group-test-nvidia-quadro-titan-and-geforce-rtx-gpus/ There you can compare the graphics cards for much of your software. Purely in terms of performance, you're usually best off with the RTX 2080 cards, especially when you consider the price. The RTX 2080 is 40% or 60% cheaper than the Quadro RTX 4000 or 5000, while even in Solidworks the Quadro RTX 4000 performs worse and the 5000 only marginally better.
    Afterwards, put the saved money into the better processor, because there you get practically 1:1 better Renderworks performance for the extra money, while the single-core performance is only 0.2% worse. (This only applies if you use CPU renderers such as Renderworks at all. If not, you of course don't need the bigger Threadripper; but then I would consider whether the CPU should be a Threadripper at all and not just one of the high-core-count consumer Ryzens.)

    ECC RAM is probably not that important in your application area. Statistically speaking, with 64 GB you get about two bit flips per month. Usually this results in, for example, a single pixel with a different color in a rendering, if anything noticeable happens at all. In the worst case you get a crash, but most of these flips do not lead to one; buggy software is much more of a problem here. The money you save is better spent on faster-clocked RAM with lower timings. The Threadripper loves low timings and high clock rates: in my benchmarks I was able to gain 5-10% OpenGL performance with faster RAM. So you can decide: roughly five fewer crashes per year, or a permanent 5-10% more OpenGL performance.

    • Like 1
  17. Rendering (Renderworks and some parts of Hiddenline): all CPU cores (the best at the moment is the AMD Threadripper 3970X, but it is expensive).

    OpenGL: mainly GPU, but also limited by single-core CPU performance. Best performance comes from gaming hardware; there is no need for an enterprise GPU, the gaming GPUs suit it better. You need enough RAM on the GPU to fit your models; the other specs don't matter much, because they are designed for gaming and therefore mostly overpowered for CAD anyway. A higher screen resolution also needs a better graphics card.

    2D, viewports, some parts of Hiddenline, PIOs: single core (the best price/performance at the moment is something like the Intel i7 8700K, but the Threadripper 3970X performs just as well).

    RAM: non-ECC; you need enough to fit your drawings, more is not better (Vectorworks doesn't benefit from ECC).

    Autosave: get yourself a very fast M.2 SSD (the best at the moment is the Samsung 970 Evo Plus).

     

    As you can see, there is no single ideal system, because Vectorworks is used in many different ways. So first you need to describe your workflow: Which operating system would you like to use? Which screens do you use? What kind of models do you draw, 2D or 3D? Do you render? If so, how often and for how long? What components does your current system have? What is the current RAM usage on the graphics card in OpenGL mode in a larger project? What is the current RAM usage in Renderworks and Hiddenline renderings?

    • Like 1
  18. 3 hours ago, B Cox said:

    It's become the common wisdom NOT to upgrade to VW new versions until at least SP2 every year because they are not compatible with the new apple OS. 

    It's the same with any major software and macOS, while Windows doesn't have this issue. Obviously it's a problem on Apple's side. I really don't understand why you blame Vectorworks for this. ¯\_(ツ)_/¯

    It's the same with life cycles: every bigger app only works on the last two or three releases of macOS. It is common knowledge. It doesn't help to complain in the forums of the software manufacturers; you have to go to Apple.

    • Like 4
  19. Let's put it this way: the freedom that your projects are not limited by what the CAD offers, and that your plans look exactly the way you want them to, this full customization, has a price. The parametric objects do not come close to those of Archicad, but you have the possibility to realize every imaginable object with many different techniques and any kind of representation. Unfortunately, this freedom also leads to bugs, especially in new versions, simply because everyone draws differently in Vectorworks and there isn't just one way of working to test and approve.
    So it all depends on where you put the focus. Do you want the freedom to realize your ideas exactly the way you want them to be, or do you just want a quick way to put your project on paper?

  20. The NVIDIA GeForce 2080 Super (8 GB) might be a bit too expensive for the performance boost it gives in Vectorworks, unless you want to work on multiple 4K monitors. The processor is certainly a good choice if you also want to render with Renderworks on this computer from time to time. I would save some money on the graphics card and "only" install a 2070 Super, and instead invest in as fast an M.2 SSD as possible, e.g. a Samsung 970 Pro. You will notice this every time the file is saved. Normally every SSD is so fast that a normal user doesn't notice any difference, but with Vectorworks, depending on the backup settings, the drive handles a gigabyte of data every few minutes.

  21. Unfortunately, with this method the DTM data becomes up to six times larger than necessary. For very large data sets I would create a Marionette network or script that extracts the vertices from the 3D polygons, deletes the duplicate vertices and creates 3D loci from them, and then generate a new DTM from those 3D loci. A rough sketch of such a script follows below.
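
    A minimal sketch of that idea as a Vectorworks Python script. The vs.* calls and their exact signatures (vs.ForEachObject, vs.GetVertNum, vs.GetPolyPt3D, vs.Locus3D) are quoted from memory and should be checked against the script function reference for your version; the deduplication itself is plain Python:

    ```python
    # Sketch: collect the vertices of the selected 3D polygons, drop duplicates,
    # and place a 3D locus at each unique point, ready for a new DTM.
    # The vs.* names and signatures are assumptions; verify them in the
    # Vectorworks script function reference before use.
    import vs

    unique_points = set()

    def collect(handle):
        for i in range(1, vs.GetVertNum(handle) + 1):
            x, y, z = vs.GetPolyPt3D(handle, i)
            # round to merge near-duplicate vertices from numerical noise
            unique_points.add((round(x, 3), round(y, 3), round(z, 3)))

    vs.ForEachObject(collect, "(SEL & (T=POLY3D))")  # assumed criteria: selected 3D polygons

    for x, y, z in unique_points:
        vs.Locus3D(x, y, z)
    ```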
