
MacBook Pro 16 M1 or M1 Max?



OK, finally some updated MacBooks from Apple.

 

Current and future vwx -

 

Any comments on the extra cores? And 32 GB is standard in the M1 Max, or upgrade to 64 GB?

Unified memory - is everyone happy with it so far? Redraws, renders, etc.? I saw reports here of slower or similar render times for simple scenes, but significant improvement with extra bounces and complex shaders.

 

Even if Vectorworks isn't yet fully ready for the latest OS and the M1 Max, it is likely to catch up quickly. My opinion.

 

Most of my files are not very complex, but that could change. Anything will be an improvement over my current kit.

 

-B


I currently mainly use an M1 Mac mini with 16 GB,
so for my project sizes my only real issue is the RAM limit.
As soon as Activity Monitor's Memory Pressure changes from green to yellow,
working is no fun anymore. So I would primarily go for 64 GB of RAM first.
And since nothing is upgradable later, I would max out the GPU and CPU too ...

 

I just don't really have a need for a laptop.
I would prefer a Mac mini ("Pro", space gray) or a 27"+ iMac.
Or a Mac Pro, which I may never be able to afford again.

 

Since VW 2022 is Universal, i.e. Apple Silicon native, everything got better
on M1, but that did not solve the memory problem in general.
The frustrating thing is that whenever I switch to my PC - 16 cores, 64 GB RAM, RX 6800
with 16 GB VRAM - the file is also slow and lagging ...
So I have great expectations for everything coming, including the M1 Max ...

 

Edited by zoomer

I'm really curious to see the M1 Pro working, but sadly reviewers will try to get the maxed-out unit of each chip and then don't do any CAD software testing. Hopefully some people here will purchase at least the M1 Pro with 16 GB or 32 GB of unified memory and let us know? Pretty please?? 😄

PS: I'm looking into getting one of these MacBook Pros with the M1 Pro and 16 GB or 32 GB (that is now the difficult choice, due to the high price).

4 hours ago, FBernardo said:

I'm really curious to see the M1 Pro working,

 

Here is what to expect ...

Single-core performance is on par with our M1s, so VW should be about as fast
on an M1 Pro or M1 Max. It could only be faster in the rare multithreaded VW tasks
that can use more than 4 (performance) cores, if there are any at all.
The M1 has 4 performance cores; the M1 Pro/Max have 8 performance cores.

 

So, as with Cinebench multicore CPU rendering, RW CPU rendering in VW
should be nearly twice as fast as on the M1.
(BTW, that is still about 1.5x slower than my Ryzen 3950X.)
RW rendering in "Redshift" mode, which runs on the GPU, should be nearly twice as fast
as on the M1 (7 or 8 GPU cores) with the M1 Pro/Max's 16 GPU cores, or about 4x faster
with 32 GPU cores.
If Redshift uses both GPU and CPU, it should gain even more than the additional
GPU cores alone would suggest.
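A back-of-the-envelope sketch of that core-count scaling, assuming render time scales inversely with core count (real renderers rarely scale perfectly, so treat the results as optimistic upper bounds; the 20-minute baseline is just an illustrative number):

// Naive estimate: render time scales inversely with core count.
func estimatedMinutes(baseline: Double, baselineCores: Int, targetCores: Int) -> Double {
    baseline * Double(baselineCores) / Double(targetCores)
}

// Illustrative 20-minute Redshift render on an M1 with 8 GPU cores:
let m1Baseline = 20.0
print(estimatedMinutes(baseline: m1Baseline, baselineCores: 8, targetCores: 16)) // ~10 min, M1 Pro (16 GPU cores)
print(estimatedMinutes(baseline: m1Baseline, baselineCores: 8, targetCores: 32)) // ~5 min, M1 Max (32 GPU cores)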

 

Plus, the M1 Pro/Max come with more RAM, more memory channels, and more throughput.
And the SSD is much faster.

 

As long as you don't hit the RAM limit with VW, you may hardly notice any speed
difference in VW modeling - maybe in view refresh/navigation and when saving files.
But depending on the MBP's configuration, you will have much more headroom for project
and file sizes.


I'm also considering an upgrade, but looking at my current CPU usage makes me hesitant about whether I'll actually see a big jump in performance. Below is a screenshot showing rather typical CPU usage when running VW. I've never seen all the cores peak; it seems like only 2 of the performance cores are normally being utilised, and not to an equal degree, with an additional core getting occasional use. I'm currently on a MacBook Air with 16 GB RAM. I absolutely love the small form factor and performance of the machine, but it seems VW isn't really making use of all the power.

 

[Screenshot: Activity Monitor CPU usage while running VW]

On 10/20/2021 at 11:15 PM, zoomer said:

 

Here is what to expect ...

Single-core performance is on par with our M1s, so VW should be about as fast
on an M1 Pro or M1 Max. It could only be faster in the rare multithreaded VW tasks
that can use more than 4 (performance) cores, if there are any at all.
The M1 has 4 performance cores; the M1 Pro/Max have 8 performance cores.

So, as with Cinebench multicore CPU rendering, RW CPU rendering in VW
should be nearly twice as fast as on the M1.
(BTW, that is still about 1.5x slower than my Ryzen 3950X.)
RW rendering in "Redshift" mode, which runs on the GPU, should be nearly twice as fast
as on the M1 (7 or 8 GPU cores) with the M1 Pro/Max's 16 GPU cores, or about 4x faster
with 32 GPU cores.
If Redshift uses both GPU and CPU, it should gain even more than the additional
GPU cores alone would suggest.

Plus, the M1 Pro/Max come with more RAM, more memory channels, and more throughput.
And the SSD is much faster.

As long as you don't hit the RAM limit with VW, you may hardly notice any speed
difference in VW modeling - maybe in view refresh/navigation and when saving files.
But depending on the MBP's configuration, you will have much more headroom for project
and file sizes.

 

And how does it behave with Twinmotion? Have you tried using Twinmotion? It is a quick RAM killer, which was my initial doubt. I'm looking at the M1 Pro with at least 32 GB, which should be more than enough for VW with some RW, but essentially I want to be using both at the "same time" ... Any tests on this?

 


I test TM from time to time on my M1.
(I am still not sure whether the latest official 2021.1.4 is already fully optimized
for Apple Silicon. At least it no longer crashes!)

 

The biggest problem is the 16 GB limit on shared RAM.
If you exceed it too much and the M1 starts swapping heavily,
everything slows down (not just TM, but also VW, Bricscad, C4D, ...).
So I personally want at least 64 GB in my next M-series Mac.

 

Of course, more than my 8 GPU cores would be nice for frame rates
(I would go with the 32-core GPU too).
I just wish I had the option to ditch the M1 Max's video decoding units and
choose some more CPU cores instead, as I still do a lot of old-school
CPU rendering.

Edited by zoomer
56 minutes ago, zoomer said:

I test TM from time to time on my M1.
(I am still not sure whether the latest official 2021.1.4 is already fully optimized
for Apple Silicon. At least it no longer crashes!)

The biggest problem is the 16 GB limit on shared RAM.
If you exceed it too much and the M1 starts swapping heavily,
everything slows down (not just TM, but also VW, Bricscad, C4D, ...).
So I personally want at least 64 GB in my next M-series Mac.

Of course, more than my 8 GPU cores would be nice for frame rates
(I would go with the 32-core GPU too).
I just wish I had the option to ditch the M1 Max's video decoding units and
choose some more CPU cores instead, as I still do a lot of old-school
CPU rendering.

 

 

I'm really hoping to see the workflow and how it handles heavy scenes. Jonathan Reeves has a video working with the M1 and it was working OK-ish, so with the huge boost this upgrade brings, I'm quite hyped to see it 🙂


CAD and 3D apps are usually complicated, have grown over many years, and are
mostly not what you would call the most optimized apps in the world.

 

But VW has also increased its use of CPU multithreading and its GPU requirements,
e.g. needing more VRAM when using multiple view panes, or now adding
the GPU-based Redshift renderer.
And the fact that Apple now offers up to 32 GPU cores, but a maximum of only
8 performance CPU cores, means something.

And AFAIK it is all more complicated and less transparent today. Apple offers
APIs through which software developers ask for resources, and behind the curtain
it does not matter whether macOS ultimately sends those calls to the SoC's
CPU or GPU cores, or to the security, video encoding, machine learning, ... or
whatever cores do the job best.
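As a minimal illustration of the CPU side of that idea (a generic sketch, not actual Vectorworks code): with Grand Central Dispatch the developer only declares a quality-of-service class for a piece of work, and macOS decides whether it lands on the performance or efficiency cores.

import Dispatch

// Generic sketch: the app only states *what kind* of work this is via a
// QoS class; the scheduler decides where it runs.
let group = DispatchGroup()

let urgent = DispatchQueue(label: "example.viewport", qos: .userInitiated)
let housekeeping = DispatchQueue(label: "example.housekeeping", qos: .background)

urgent.async(group: group) {
    // Latency-sensitive work: the scheduler favours the P-cores.
    print("regenerating viewport ...")
}

housekeeping.async(group: group) {
    // Low-priority work: the scheduler may park this on the E-cores.
    print("writing backup file ...")
}

group.wait() // keep this little example alive until both blocks finish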


Difficult question.

 

If you keep your computers for a very long time (5-7 years), then by the end of that time you will probably be using all 32 cores.

 

If you replace your computer often (every 1-2 years), it is unlikely that the software will have expanded to the point where you NEED the bigger hardware. But it would not hurt to have it.

 

It is the 2-5 year range that is more difficult to call.

 

The list price difference between 16 cores with 16 GB of RAM and 32 cores with 32 GB of RAM is $800, or about $0.40/hour over a normal work year.
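(For reference, that hourly figure assumes a roughly 2,000-hour work year: $800 ÷ 2,000 h ≈ $0.40/h.)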

 

My philosophy for a long time has been: buy the top of the line and upgrade every 3 years, OR buy the mid-range and upgrade every 18 months.

 

Your time is way more valuable than the cost of the hardware. The more you wait for the computer (instead of having the computer wait for you), the more you lose.

 

My 2 cents.

43 minutes ago, zeno said:

Today I will have the opportunity to compare my 13" M1 MBP with a 16" M1 Max with 64 GB unified memory.

I have a big point cloud file (6 x 100-million-point referenced files) and a panorama file with Redshift rendering for testing.

If anyone is interested, you can send me files with some description via PM.

The session will start 7 hours from now.

 

Thanks

 

 

 

Hi @zeno, this will be really interesting - looking forward to hearing how it goes.

9 minutes ago, zeno said:

Good morning.

 

I can confirm that, so far, the data shows roughly 50% faster CPU rendering and 70% faster GPU rendering than on a 13" MBP with M1.
For now the tests only covered viewports at 300 dpi, rendered with Custom Renderworks on one side and Redshift on the other, repeating the same operations on both machines.


There have been some problems with unified memory, though. I wanted to do a serious stress test. I opened a file I worked on two years ago on a 27" quad-core Intel iMac (2017) with 64 GB of 2400 MHz RAM. It is a file with 6 referenced files, and each of those files contains a 100-million-point point cloud, so the final file features a 600-million-point cloud.


Well, while the iMac still had only 40 GB of RAM it ran out of virtual memory during this process and crashed - the entire operating system crashed, not just the software. Once it had 64 GB of RAM I was able to finish the job in time. With difficulty, but I did it.
With the 13" MBP and M1 there was no way to get past 3 point clouds.


I was sure I could handle everything with the test machine, which has the following specs:

- MBP 16" M1 Max
- 64 GB unified memory
- 2 TB SSD

 

But, to my great regret, it barely managed to open 4 of them and then closed. It failed to finish once VW started asking for around 94-95 GB of memory.

 

Something important must therefore change about virtual memory, and this could be a problem in these cases, because with very large files VW consumes a lot of virtual memory and does not free it until the software is quit.

 

Overnight I rendered a panorama with Redshift. As soon as I can, I'll show you some images.

 

Z

 

 

Thanks @zeno for reporting your test, really interesting. I also have a lowbrow question: do you notice any added performance in, say, the more 'normal' files you work with (i.e. no point clouds) with the M1 Max over the M1 while you navigate and draw? Or, since single-core performance is the same, is it fairly neutral?

Tim

8 minutes ago, Tim Norman-Prahm said:

Thanks @zeno for reporting your test, really interesting. I also have a lowbrow question: do you notice any added performance in, say, the more 'normal' files you work with (i.e. no point clouds) with the M1 Max over the M1 while you navigate and draw? Or, since single-core performance is the same, is it fairly neutral?

Tim

 

Actually no, I'm focused on big files. I worked with this machine remotely; it's not mine. Mine is the same configuration and arrives at the end of November. Maybe I will test a big section viewport later. If you have something, send it to me via PM.

3 hours ago, zeno said:

Good morning.

 

I can confirm that, so far, the data shows roughly 50% faster CPU rendering and 70% faster GPU rendering than on a 13" MBP with M1.
For now the tests only covered viewports at 300 dpi, rendered with Custom Renderworks on one side and Redshift on the other, repeating the same operations on both machines.

There have been some problems with unified memory, though. I wanted to do a serious stress test. I opened a file I worked on two years ago on a 27" quad-core Intel iMac (2017) with 64 GB of 2400 MHz RAM. It is a file with 6 referenced files, and each of those files contains a 100-million-point point cloud, so the final file features a 600-million-point cloud.

Well, while the iMac still had only 40 GB of RAM it ran out of virtual memory during this process and crashed - the entire operating system crashed, not just the software. Once it had 64 GB of RAM I was able to finish the job in time. With difficulty, but I did it.
With the 13" MBP and M1 there was no way to get past 3 point clouds.

I was sure I could handle everything with the test machine, which has the following specs:

- MBP 16" M1 Max
- 64 GB unified memory
- 2 TB SSD

But, to my great regret, it barely managed to open 4 of them and then closed. It failed to finish once VW started asking for around 94-95 GB of memory.

Something important must therefore change about virtual memory, and this could be a problem in these cases, because with very large files VW consumes a lot of virtual memory and does not free it until the software is quit.

Overnight I rendered a panorama with Redshift. As soon as I can, I'll show you some images.

 

Z

 

 

 

 

Thank you very much.

This is really interesting and basically exactly the configuration I would choose.

  • Vectorworks, Inc Employee

Point cloud imports in Vectorworks have a hard limit of 100 million points for a reason: higher values can run into usability and slow-performance issues. Stacking multiple ones is indeed a stress test for getting Vectorworks to run out of memory. IMO this test is not using a typical model.

 

19 minutes ago, Dave Donley said:

Point cloud imports in Vectorworks have a hard limit of 100 million points for a reason: higher values can run into usability and slow-performance issues. Stacking multiple ones is indeed a stress test for getting Vectorworks to run out of memory. IMO this test is not using a typical model.

 


I confirm it is not. But it runs on a 2017 iMac and not on an M1 machine.

