
Photogrammetry - who's doing it?



Hey, all

 

I'm new to the forum, though not new to VW. I'm curious who's processing photogrammetry through VW Cloud Services. I've been running some samples and have mixed results. 

 

It seems apparent that EXIF data is not being used, including any telemetry or GPS, as my DPCs come in at a weird orientation and are placed on the origin (whereas GeoTIFFs import relative to it).

 

So here are my questions:

-Who's using it, and how useful have you found it in comparison to other services or software?

-Is this a beta service? Maybe a proof of concept or showcase piece?

-Is the intention for this to become a core component of Cloud Services? (I would love if this were the case, and continually developed to compete with the likes of Pix4D/Zephyr/etc)

-Is anyone working on a way to georeference or retain/reintroduce geospatial data to the point cloud?

-Any chance of PCS support in the future?
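On the georeference question above: until there's native support, a cloud that lands on the origin can be nudged to its real-world position by hand if you pull the capture centre from the drone's EXIF. A minimal Python sketch of the idea; every coordinate here is hypothetical, and the flat-earth approximation is only reasonable at site scale:

```python
import math

def latlon_to_local_offset(lat, lon, origin_lat, origin_lon):
    """East/north offset in metres from an origin, using an equirectangular
    approximation (fine for site-scale distances, not survey-grade)."""
    m_per_deg_lat = 111_320.0  # approx. metres per degree of latitude
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(origin_lat))
    east = (lon - origin_lon) * m_per_deg_lon
    north = (lat - origin_lat) * m_per_deg_lat
    return east, north

def translate_points(points, dx, dy):
    """Shift every (x, y, z) point of a cloud that imported at the origin."""
    return [(x + dx, y + dy, z) for (x, y, z) in points]

# Hypothetical numbers: the file's georeference origin and the capture
# centre as it would be read from the drone photos' EXIF GPS tags.
origin_lat, origin_lon = 47.6200, -122.3500
capture_lat, capture_lon = 47.6210, -122.3490

dx, dy = latlon_to_local_offset(capture_lat, capture_lon, origin_lat, origin_lon)
cloud = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5)]  # as processed: parked on the origin
placed = translate_points(cloud, dx, dy)
```

This is roughly what a GeoTIFF import does for you automatically; for anything survey-grade you would transform through a proper projected CRS instead of this approximation.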

 

I have a million more questions, but I'll let this begin the conversation to see if anyone is even interested. 

Thanks!


Thanks for sharing! 

 

Sorry if I wasn’t clear. I’ve been flying drones and doing photogrammetry for years, as well as developing terrain and contextual models for even longer, so I’m quite familiar with the process. 

 

However, I hate ReCap, and there are plenty of great photogrammetry packages for developing georeferenced dense point clouds (plus textured surface meshes, visual analyses, etc.). My questions are about Vectorworks. I love the software and would love to be able to process and stitch DPCs in the same software I'm using for all my other technical design work.

 

 

 

TL;DR: I've been doing photogrammetry a long time, though with many other tools. I'm exploring VW's service and seeing who else is as well. I haven't found a good workflow yet and wonder if anyone has, and I'm generally interested in how seriously VW is taking this service for technical professionals like myself.

33 minutes ago, angelojoseph said:


 

Like I said, I spent time talking with VW about their cloud services.  Ultimately, it is not currently a suitable tool for anything significant outdoors IMHO.  It does a great job ingesting data produced on other platforms though.  It's hard to imagine VW ever catching up to the quality and services offered by others in this regard...

 

  • 2 years later...

Rather than start a new topic, I hope to get some pointers and keep this thread alive. I have submitted photos a few times to the VWX Cloud Services 3D-from-photos tool. So far nothing generates: I always receive the "Did not Generate" notice on the portal, plus an email with the boilerplate tips (use many photos, provide overlap, uniform exposure, textured surfaces; that yellow Jeep in the VWX tutorial doesn't seem very textured), but nothing specific to the photos submitted. All of these attempts are merely tests to investigate how it works and what it produces. I don't need a perfect, dense model; anything would do. I will make improvements once I understand what makes it work.

 

The most recent is a little Buddha sculpture (by others). Here is a link to the 14 photos submitted.

https://www.dropbox.com/sh/arsz4xtnu55qq4r/AAC9xFcr8F27YeIymhIn6XSBa?dl=0

 

So, how are these photos not conforming to the tips?

Needs more photos? (2 more? 100 more?)

Not enough overlap? Top views confused? Does the order of photos in the list matter?

Is the background too similar to the object of interest?

Is the exposure not uniform enough? - I adjusted prior to submittal.

Do I need to paint stripes or drape with a net for more texture?

Or???
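On the "needs more photos?" point: generic photogrammetry rules of thumb (not anything documented for the VW service) say consecutive frames should overlap by roughly 70-80%, which turns into a photo count once you fix the camera's field of view and the shooting distance. A quick sketch, with assumed numbers for a phone camera and a small sculpture:

```python
import math

def orbit_photo_count(hfov_deg, subject_distance, subject_radius, overlap=0.7):
    """Rule-of-thumb photo count for one orbit of an object.

    Each frame covers an arc of the orbit; consecutive frames should share
    `overlap` (0.7 = 70%) of that coverage. The numbers are heuristics,
    not anything specific to the Vectorworks service.
    """
    # Width of object coverage per frame at the subject distance
    footprint = 2 * subject_distance * math.tan(math.radians(hfov_deg) / 2)
    # Advance between shots so successive frames overlap as requested
    step = footprint * (1 - overlap)
    orbit_circumference = 2 * math.pi * (subject_distance + subject_radius)
    return math.ceil(orbit_circumference / step)

# e.g. a phone camera (~65 degree horizontal FOV, an assumed value),
# shooting a 0.2 m sculpture from about 0.5 m away
n = orbit_photo_count(hfov_deg=65, subject_distance=0.5, subject_radius=0.2)
```

That lands in the mid-twenties for a single orbit at one height, so a 14-photo set is probably on the sparse side, and a second orbit at a higher angle would add more again.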

 

Or can anyone suggest a better path to making it work?

 

Thanks

-B

 

  • 4 months later...

@Benson Shaw Did you manage to get something out of the cloud eventually? I've tried a handful of photo sets without luck. I noticed this statement in the guidelines: "Currently, the algorithm is not optimized for 360-degree (wrap-around) views of an object." That sounds a little weird to me, but I tried a few sets without a 360 wrap and that didn't get me anywhere either.


@Anders Blomberg No success so far with the VCS photogrammetry.  But I will give it some more tries.

 

Meanwhile, I tried the Polycam app on my old iPhone (no LiDAR). An initial trial of the free version made an OK wrap-around of a coffee table with ~100 photos. The free version processes only to OBJ. I upgraded to the "Pro" version, US$60/year. This opens many other export options for point clouds, which VWX accepts. I tried to scan a bathroom interior and the processing failed. I think it's better to scan one wall at a time, or one wall and then amend with another, then amend, etc.

 

Impressions:

VCS

  • I was trying to find the low threshold - start with minimal photo set to produce a low quality point cloud, then see effects of more photos, better light, etc.
  • I probably needed more images for a successful processing.
  • Images are collected via the phone camera into the Photos app (which uses up phone storage), then uploaded to VCS, then deleted from the phone.
  • I think Vectorworks Nomad may have a more integrated path for this.
  • Need to try some more and explore Nomad, too.

Polycam

  • Good to have success on first try.
  • The OBJ imported rotated (turned on end). The point cloud oriented correctly. I did not georeference to test positioning.
  • A test of approx. 100 photos (the free limit), processed at medium quality, produced a lumpy coffee table, but the magazine covers were readable.
  • Cost of the Pro subscription is a minor problem. But what if Nomad works as well?
  • "Video" mode automatically triggers the shutter when the app detects sufficient overlap. As @jeff prince suggests, I can just "wave my phone around".
  • Photos transfer to Polycam for processing. They never show up in the Photo library, but do store in the app.
  • Deleting the capture set from the app does not immediately update the phone storage. Looking into that. Might need a phone restart.
  • Plan to experiment more.
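About the OBJ coming in "turned on end": that is very often just a Y-up vs Z-up axis convention mismatch between the exporter and the CAD ground plane, and if neither app offers a toggle it can be fixed by remapping coordinates. A generic Python sketch (not specific to Polycam or the Vectorworks importer):

```python
def y_up_to_z_up(points):
    """Map Y-up coordinates (common in OBJ exporters) to Z-up, which is
    what a CAD ground plane expects: (x, y, z) -> (x, -z, y)."""
    return [(x, -z, y) for (x, y, z) in points]

# A vertex from a model that was "standing on end" after import
upright = y_up_to_z_up([(1.0, 2.0, 3.0)])  # -> [(1.0, -3.0, 2.0)]
```

The same remap applied to every vertex of the mesh or cloud stands the model back up; a 90-degree rotate about the X axis after import achieves the same thing.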

-B


Great to hear your results @Benson Shaw! My trials with the VW cloud ranged from ≈20 to ≈50 JPEGs taken with my iPhone's standard camera app and uploaded to the cloud services via the Finder folder. So I didn't really aim for either an upper or lower threshold, just what seemed reasonable. Items ranged from simple, like a couple of coloured pens, to very complex, like a potted English ivy.


Looked into this a little more today and it seems Apple has released a new API for photogrammetry called Object Capture. I tried a super simple piece of software called PhotoCatch with the same set of photos that the VW cloud couldn't make sense of, and it did produce something decent, I must say. I guess the plant, placed on a highly reflective surface, is a really complex object to scan, so I'm rather impressed. I exported from PhotoCatch in OBJ format and imported into VW. It looks as below. Just took a couple of minutes.

 

 

Hedera.vwx

On 3/28/2022 at 12:43 PM, Anders Blomberg said:


 

You prompted me to try something similar.

 

I also used the PhotoCatch app.

 

I took some photos using the not especially good camera on quite an old iPad. First I used the app on the iPad, which lets you see the resulting model but you have to pay to download it. So I used the desktop version of the app on my Mac instead, which lets you export the model without charge.

 

I also exported as OBJ and was slightly surprised when it imported into VW2021 in a recognisable form on the first attempt (just had to adjust its rotation and scale it to a sensible size).

 

Surprising what's possible with just photos (no Lidar or anything).

 

There's still a lot I need to understand - like how the texture is attached to the mesh, how to edit it usefully, and so on. I might see how it works on the interior of a room next.

 

 

 

 

  • 3 weeks later...
  • 4 weeks later...
  • 3 months later...

I need to do a survey of a site quite soon: a building on a quite steeply sloping piece of land. The building needs to be measured accurately and I'll do that the old-fashioned way, but I've been wondering if it's worth trying to do the surrounding land by photogrammetry. It doesn't need to be dead accurate, just enough to set the building in context.

 

I don't have a drone or anything, just an android phone with a pretty decent camera but no lidar. I'd probably be able to get a few high-level viewpoints looking down onto the land in question by standing on building roofs etc.

 

A few posts up ^^ I tried the app PhotoCatch and was quite impressed with the results on a small object. But when I tried it on some building interior spaces it didn't do so well at all. It may have been down to a lack of good lighting, I don't know.

 

Is it worth me having a go at a site survey? If so, any tips for apps or strategies that would most effectively get something useful into Vectorworks?

50 minutes ago, Benson Shaw said:

can you elaborate a bit on the 5 meter limit?

 

From what I know, people have used LiDAR with Apple apps and taken the results into other 3D apps.

The 5 m limit will not allow you to scan a cathedral, of course. But if you stay within the safe range of the LiDAR: one user walked a larger property around a large building from one end to the other (for a DTM) and found that he finally reached the far end at a pretty precise elevation, and that the scan was really useful.

While I think the typical LiDAR scans are great for scanning a small apartment (I don't love the amount of mesh data), I am more interested in coming apps that use the new Apple APIs shown at WWDC 2022, which will allow creating very basic volume simplifications from scans by using AI: detecting walls, openings and such at a much smaller file size, kind of the essence of a scan.


 

I have used my iPhone too and found it quite useful.

 

In this example, I tried to illustrate a potential drainage problem against a house wall.

[Image: terrain model against the house wall]

 

Here is another example where I made sure it was safe to drill a vertical hole in the middle of a lot of piping and equipment, by placing two different scans from two different floors on top of one another. Note how well the walls aligned.

 

[Image: two overlaid floor scans, showing the drill location]

 


Are people using Lidar on iPhone/iPad to scan objects rather than buildings/topography, for the purposes of creating symbols? As an alternative to laboriously measuring things + modelling them from scratch + texturing them. I'm talking about furniture, light fittings, appliances, sanitaryware, MEP plant, etc. I can rarely find what I need in the VW Libraries so next port of call is 3D Warehouse, then if no luck there it's a case of modelling it from scratch. I have no major problem with this - I quite like doing it - but it would be a massive time-saver to be able to scan something with my phone then there it is, a ready-made, accurately-sized, ready-textured model. I suppose the only thing is that if they're meshes they're going to look bad in non-orthogonal Hidden Line views but this is the same for lots of symbols already. Well + file size of course but presumably you can simplify mesh.

 

Is this something people are doing? I don't have a Lidar-enabled device unfortunately. I have seen some really nice looking scanned objects online. Is the accuracy better on smaller objects?

 

Although I love what people are doing here with buildings + landscapes, for me I can't see it precluding the need for a proper laser scan + topo survey. As Jeff says:

9 hours ago, jeff prince said:

It's close enough to get started on a project typically, but not something I would trust to replace a proper survey.

 

21 hours ago, Tom W. said:


 

On the phone level, it cannot be used directly to measure, for example, kitchen cabinets. It's simply not accurate enough for precision measurements. It is, however, useful for quick price estimates and as an excellent piece of documentation.

 

Another major flaw with scanners is that they generate huge files where almost everything is junk info. As an example, imagine a very simple table model, represented by an extruded rectangle for the table top and four small extruded rectangles for the legs. The parts can be defined mathematically by a very limited number of coordinates. Most scanner software, on the other hand, needs to measure a huge number of points just to be able to estimate where the key data points (the edges) are located. The fundamental difference is that the human brain can instantly figure out where the edges and boundaries are, whereas most scanner software can't.
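The flood of near-redundant points can at least be blunted by thinning the cloud before import. A toy voxel-downsampling sketch in Python, just to illustrate the idea (real decimation tools are far smarter about preserving edges):

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.05):
    """Thin a point cloud by averaging all points that fall into the same
    cubic cell (5 cm here). A crude stand-in for the decimation tools in
    dedicated scan software, purely for illustration."""
    cells = defaultdict(list)
    for p in points:
        # Bucket each point by which voxel cell it falls in
        cells[tuple(int(c // voxel) for c in p)].append(p)
    # One averaged point survives per occupied cell
    return [
        tuple(sum(axis) / len(bucket) for axis in zip(*bucket))
        for bucket in cells.values()
    ]

pts = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (1.0, 1.0, 1.0)]
thinned = voxel_downsample(pts)  # the two near-origin points merge into one
```

Halving or quartering the cell size trades file size against detail; dedicated tools do this adaptively, keeping density where the geometry actually changes.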

 

On more organic shapes, things swing a bit towards scanning. Manual modelling can still generate much smaller models, but it requires way more skill and craftsmanship to get there. The example is probably beyond what most CAD designers could achieve, and it would take quite a lot of work. The scan took less than five minutes with an iPhone 13 Pro. The result is surprisingly similar to the original (the picture to the left was placed next to the original within the scanning software). The model was also accepted without any problem by the 3D printer software and came out fairly well on a simple $200 3D printer.

 

 

[Image: scanned model next to the original object]

 


Thanks @Claes Lundstrom this is exactly the kind of thing I'm talking about. Circumstances where manual modelling would be challenging/time-consuming. Like you say, lots of things are fairly easy + quick (+ fun!) to model but others - organic shapes - less so.

 

So, for example, if I had an Eames chair that was important for me to include in my model, I could scan it in a few minutes + create a symbol that would likely be fine for illustrative purposes?

 

[Image: Vitra Eames lounge chair]

 

It just needs to look right + be more or less the right dims. It's just a prop in the scene not something that needs to be millimetre-perfect because I'm going to be manufacturing it. Would a phone scan of something like this chair result in something useable?

 

To me this would be really useful. At the moment I'm restricted to what I am capable + have time to model myself plus whatever I can find online. This would be a useful additional option for building my symbol libraries.

 

