
Point Cloud vs 3D Model/Mesh from Photos



Two years ago I uploaded photos of a barn to VW Cloud Services and selected what I believe was "Photos to 3D."  The result was a Point Cloud (.pts) file, and I imported the point cloud into Vectorworks (where, in the OIP, it's indeed identified as a "Point Cloud Object").

 

Yesterday, I repeated the process for a greenhouse, and the only applicable current option seems to be "Photos to 3D model."  I assumed I was going to end up with a point cloud, but instead I ended up with a 3D Mesh (in an .obj file).

 

Following a little research, it seems that point clouds are more accurate than meshes, though the 3D mesh seems far easier for extracting information.  And it seems that, at this point, Nomad and/or LiDAR are necessary for creating a point cloud via VW Cloud Services?  I definitely did not use either of those two years ago when the barn point cloud was generated; just a bunch of photos...

 

So... it seems something has changed over the past couple of years... any clarification would be greatly appreciated!


Hi @inikolova

Thank you for your response & explanation.  Initially I thought maybe I was completely missing something...  I do have a couple of questions:

 

I'd read that a point cloud is more accurate than the mesh.  Aside from needing to be re-scaled, how inaccurate is the mesh?  I'm interested in its usefulness for existing conditions documentation.

 

Also, are there any benefits to USDZ over OBJ?  The .obj file simply imported as a mesh.  The .usd file imported as single-object groups within groups, redundant to about eight levels before finally getting down to what appeared to be exactly the same mesh...

 

Thanks!

  • Vectorworks, Inc Employee
54 minutes ago, willofmaine said:

I'd read that a point cloud is more accurate than the mesh.

I think this is a very general statement. In some cases it may be, depending on the equipment used to capture it. Laser scanners can produce very accurate point clouds. For scanning with LiDAR on iOS, accuracy depends on many things: the size of the area being scanned, sufficient surface detail, lighting conditions, etc. When a point cloud is generated from photos, I would expect it to have similar accuracy to the mesh, since the algorithm uses the same photos to create either one. Whether a point cloud or a mesh is better and produces more desirable results also depends on the follow-up commands/workflow you will use to manipulate the model in Vectorworks. Some workflows prefer point cloud data, while others work better with a mesh.

 

I am sorry I can't give you a definite answer. After all, the accuracy is fully in the hands of the algorithm, which is developed and maintained by Apple. I recommend taking a few critical measurements of every scan to verify the accuracy of the generated model.
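As a quick illustration of that advice, a few lines of Python can tabulate scanned vs. tape-measured dimensions (all numbers below are hypothetical examples, not real scan data):

```python
# Quick accuracy check: compare scanned dimensions against tape
# measurements. All numbers are hypothetical examples, in meters.

def percent_error(scanned, measured):
    """Relative error of a scanned dimension, as a percentage."""
    return abs(scanned - measured) / measured * 100

checks = {
    "door height": (2.07, 2.03),   # (scanned, tape-measured)
    "ridge height": (4.31, 4.27),
}
for name, (scanned, measured) in checks.items():
    print(f"{name}: {percent_error(scanned, measured):.1f}% off")
```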

 

54 minutes ago, willofmaine said:

Aside from needing to be re-scaled, how inaccurate is the mesh?

If your image set doesn't contain depth and gravity data (generated by supported iOS devices), the mesh will not be in the correct orientation and scale. Capturing the photos in Nomad produces better results than taking them with a regular camera, which does not capture the additional metadata the algorithm uses to aid the reconstruction process. The depth and gravity data let the algorithm determine the true size and orientation of the scanned object, so no re-scaling is needed at the end.
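When depth data is missing and the mesh does come in at the wrong size, re-scaling can also be done outside Vectorworks. A minimal sketch, assuming a plain OBJ file and one dimension you measured on site (file names and numbers are hypothetical):

```python
# Sketch: re-scale an OBJ mesh that was reconstructed without depth
# data, using one known reference dimension measured on site.

def rescale_obj(in_path, out_path, measured, actual):
    """Multiply every vertex by actual/measured; leave vt/vn/f lines as-is."""
    factor = actual / measured
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.startswith("v "):            # geometric vertex line
                x, y, z = map(float, line.split()[1:4])
                dst.write(f"v {x * factor} {y * factor} {z * factor}\n")
            else:                                # UVs, normals, faces unchanged
                dst.write(line)
```

For example, if a door measures 0.68 m in the model but 2.03 m on site, `rescale_obj("greenhouse.obj", "greenhouse_scaled.obj", 0.68, 2.03)` scales the whole mesh by about 2.99.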

 

54 minutes ago, willofmaine said:

Also, are there any benefits to USDZ over OBJ?  The .obj file simply imported as a mesh.

USDZ is the native format produced by Apple's framework. The OBJ is created from the USDZ as a post-process conversion, so I would say using the USDZ is better. We produce the OBJ simply because, at the time we released the new photogrammetry algorithm, Vectorworks did not support importing USDZ. We will probably stop producing OBJ files soon, because producing more than one file per scan seems confusing and users are not sure which one to use 😃.

 

Regarding the grouping, this is likely a result of the hierarchy of the USDZ file as it is given to us. I will consult with the engineers to see if there is anything we can do to reduce the levels of nesting.
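As an illustration of the problem (not Vectorworks' actual importer), collapsing redundant single-child wrapper groups is a small tree transformation. A "group" here is just a `(name, children)` tuple; leaves are plain strings:

```python
# Sketch: collapse chains of wrapper groups that contain exactly one
# child, like the ~eight redundant nesting levels seen on USD import.

def flatten(node):
    """Drop single-child wrapper levels, keeping the innermost name."""
    name, children = node
    while len(children) == 1 and isinstance(children[0], tuple):
        name, children = children[0]          # skip one wrapper level
    return (name, [flatten(c) if isinstance(c, tuple) else c
                   for c in children])
```

For example, `flatten(("root", [("wrap", [("wrap2", ["mesh"])])]))` reduces to `("wrap2", ["mesh"])`.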

 

I hope this helps.

 

Best regards,

Iskra Nikolova

5 hours ago, inikolova said:

We would probably stop producing OBJ files soon because producing more than one file per scan seems confusing and users are not sure which one to use 😃 .

 

Please don't stop producing OBJ! Despite creating several files per scan, it's widely used by many 3D apps and it's often very reliable. Many 3D printers also support OBJ, and that could become a real advantage once color 3D printers are more widely available.

8 hours ago, inikolova said:

If your image set doesn't contain depth and gravity data (generated from supported iOS devices), the mesh would not be in the correct orientation and scale.

 

I always thought the orientation about Z might be related to where you start the scan, the compass, or GPS. But from my tests I have to conclude that the Z rotation is completely random.
Or maybe in buildings it orients to the nearest WLAN router or some such thing; I haven't checked that 🙂

But it would already help if Apple would align the first recognized wall to the X or Y axis.
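That alignment can at least be approximated after import: pick two points along a wall and rotate the whole model about Z by the wall's heading. A sketch with hypothetical coordinates:

```python
import math

# Sketch: rotate a scan about Z so a chosen wall edge lies along +X.

def z_rotation_to_x_axis(p1, p2):
    """Angle (radians) that rotates the wall p1 -> p2 onto the +X axis."""
    return -math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def rotate_z(pt, angle):
    """Rotate an (x, y, z) point about the Z axis."""
    x, y, z = pt
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)
```

With a wall running from (0, 0, 0) to (1, 1, 0), the angle comes out to -45°, and applying it to every vertex lays that wall along the X axis.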

 

8 hours ago, inikolova said:

After all, the accuracy is fully in the hands of the algorithm, which is developed and maintained by Apple.

 

I am pleased that the accuracy isn't bad when scanning in a 3-4 m range; it's pretty similar to manual measurements and building tolerances. But it gets bad when you re-scan areas that were already scanned: you may get another wall surface layer that is 2-3 cm off.

 

8 hours ago, inikolova said:

USDZ is the native format produced by the Apple's framework.

 

And I think it is already very capable, and the most future-proof 3D format.

 

8 hours ago, inikolova said:

Regarding the grouping, this is likely a result of the hierarchy of the USDZ file, the way it is given to us. I will consult with the engineers to see if there is anything that we can do to reduce the levels of nesting.

 

That is indeed a big problem, especially in VW.

Either Groups in VW learn to read naming and use the typical hierarchy panel dropdown from edit modes and Symbols to offer some orientation, or, for now, import USD groups/empties/locators/... as Symbols.

That would already make geometry access easier.

And AFAIR, the Nomad USD grouping in VW does not exactly resemble the hierarchy seen in 3D apps' Outliner/item trees.

 

And Apple's/Nomad's floor plan "meshes" are a real mess.

 

 

On the other hand, as opposed to C4D/FBX, exporting USD from VW produces loose polygons only, at least for all Wall and Slab PIOs. Only Column PIOs or Extrudes come in as complete meshes. So for VW to Blender, USD is unfortunately currently a less suitable option than the older proprietary FBX.

 

2 hours ago, zoomer said:

On the other hand, opposed to C4D/FBX, exporting USD from VW offers loose polygons only, for at least all Wall and Slab PIOs. Only Column PIOs or Extrudes come in as complete meshes.

 

Correction,

Loose polygons happened even with a VW FBX export!

(That did not happen with an FBX export from VW 2019, but it did with VW 2025 and VW 2024.)

Cameras are also off scale in VW 2019 FBX, although a general import scaling is not needed.

It looks like VW cameras export at the wrong scale in general (in any C4D/FBX/... export?).

 

VW USD to Blender is even worse.

- My VW file is in metric meters (Blender too), but VW exports in mm

- Some parts (it looks like at least Columns, Windows, or Doors) miss the needed import scaling of 0.001
(and are hard to repair, as they cause even worse display issues than "far from VW origin" geometry does in VW...)

- USD export does not offer any hierarchy/grouping, just a long list of single elements

- Cameras and Lights do not export at all


@inikolova Yes, all very helpful, thank you.  Based on very limited experience so far, I do prefer working with the mesh, as it doesn't "evaporate" when zooming in.  I knew I was breaking all kinds of rules with my greenhouse experiment (particularly lighting rules, with the sun shining into the greenhouse and all), but I actually got what I thought were pretty impressive results.  And it seemed pretty easy to set a working plane, create an extrude, and use that to reorient the mesh, though I'm sure that process could get tedious after a while.  Evidently Nomad won't work on my (old) Android phone, but I'll continue to experiment with my camera for now.  Thanks!

 

(Oh... I have an oldish iPad, which I just installed Nomad on, and I very easily made a (very deformed) mesh of a pan (though, again, I broke lots of rules).  I see that it generated a "gravity.TXT" file for each image and, sure enough, the pan is right-side up... but it's also 3x bigger than it should be, so I'm not sure about that.)  I'll keep experimenting.

  • 2 weeks later...
  • Vectorworks, Inc Employee
On 11/23/2024 at 12:43 AM, willofmaine said:

I see that it generated a "gravity.TXT" file for each image and, sure enough, the pan is right-side up... but, it's also 3x bigger than it should be, so not sure about that...)

The gravity file is only responsible for orientation. The scale is determined by depth data, which is only collected on devices that support this feature. If your device supports it, you will also see a "depth.tif" file for each image.
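The exact format of the gravity file isn't documented here, but assuming it holds one gravity vector per image, the orientation step amounts to building the rotation that maps that vector onto world "down". A sketch using Rodrigues' rotation formula (pure Python, hypothetical input):

```python
import math

# Sketch: rotation matrix that turns a measured gravity vector into
# world "down" (0, 0, -1), i.e. stands the model upright.

def rotation_to_upright(g):
    """3x3 matrix R such that R applied to g points along (0, 0, -1)."""
    gx, gy, gz = g
    n = math.sqrt(gx * gx + gy * gy + gz * gz)
    gx, gy, gz = gx / n, gy / n, gz / n       # normalize gravity
    tx, ty, tz = 0.0, 0.0, -1.0               # target: world "down"
    # Rotation axis = g x target; sin/cos of the angle from cross/dot.
    ax, ay, az = gy * tz - gz * ty, gz * tx - gx * tz, gx * ty - gy * tx
    s = math.sqrt(ax * ax + ay * ay + az * az)
    c = gx * tx + gy * ty + gz * tz
    if s < 1e-12:                             # already aligned, or opposite
        return ([[1, 0, 0], [0, 1, 0], [0, 0, 1]] if c > 0
                else [[1, 0, 0], [0, -1, 0], [0, 0, -1]])
    ax, ay, az = ax / s, ay / s, az / s       # unit rotation axis
    C = 1 - c
    return [
        [c + ax * ax * C,      ax * ay * C - az * s, ax * az * C + ay * s],
        [ay * ax * C + az * s, c + ay * ay * C,      ay * az * C - ax * s],
        [az * ax * C - ay * s, az * ay * C + ax * s, c + az * az * C],
    ]
```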

 

With planned upcoming updates, scanning objects up to 6 ft in size will also utilize the LiDAR scanner, which will help with scaling the model on devices without a depth camera.


On the related topic of using Photos to 3D Model, it works pretty well for me. I can go into Google Earth, take about 20 screenshots of a project site from various angles, add the files to a folder in my Vectorworks Cloud folder, and right-click the folder to generate the model. Import the OBJ, scale it, and it's like having Google Earth right inside VW.

 

Question: Is there a way to edit the mesh object to delete something out of it, without losing the texture mapping? 

 

When I try to edit the mesh object and delete vertices to remove a large tree or other element that conflicts with my new building, the entire mesh turns white. Any ideas?

 

8 hours ago, gmm18 said:

Is there a way to edit the mesh object to delete something out of it, without losing the texture mapping?

Vectorworks can't handle that task, but plenty of other programs can.

For instance, you can trim meshes and preserve the texture mapping in Blender, then export the new OBJ back to Vectorworks.

I've posted some examples of that workflow here on the forum before.  Blender is free, and you can probably learn how to do this in 30 minutes by watching a couple of YouTube videos.
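The white-mesh symptom usually means the UV/texture assignment got lost. As an illustration of why the Blender route works, the same kind of trim can even be sketched directly on the OBJ text: drop only the `f` lines whose vertices fall inside a cut-out region, and leave all `v`/`vt` lines (and their indices) untouched so the surviving faces keep their mapping. A hypothetical sketch, not a substitute for a real mesh editor:

```python
# Sketch: delete faces (e.g. a tree) from a textured OBJ while keeping
# the texture mapping. Only "f" lines are dropped; v/vt/vn/usemtl lines
# stay, so the remaining faces keep their original v/vt indices.

def trim_obj(in_path, out_path, inside_cut):
    """inside_cut(x, y, z) -> True for vertices that should be cut away."""
    with open(in_path) as src:
        lines = src.readlines()
    verts = [tuple(map(float, l.split()[1:4]))
             for l in lines if l.startswith("v ")]
    out = []
    for line in lines:
        if line.startswith("f "):
            idx = [int(tok.split("/")[0]) for tok in line.split()[1:]]
            if any(inside_cut(*verts[i - 1]) for i in idx):
                continue                      # drop this face only
        out.append(line)
    with open(out_path, "w") as dst:
        dst.writelines(out)
```

This sketch doesn't handle negative OBJ indices or multi-object files; for real work, the Blender workflow described above is the more robust option.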

 

