
Photogrammetry - who's doing it?


Recommended Posts

The biggest limitations when scanning objects with an iPhone, such as the Eames chair, come on the skinny parts, for example the feet and armrests. Bigger, more solid, chunky objects typically work fairly OK, I would say. In my example, a 250-year-old chair, the seat works fine, whereas the skinny and more intricate parts fail. The problem is of course the combination of being skinny and having a very intricate, detailed shape.

 

Another disadvantage of scans in general for a symbol is that the model becomes much bigger. A good symbol should always have as few elements as possible, especially when you insert many instances into a bigger model. Keep it as simple as possible while maintaining a recognizable shape.

 

 

 

[Image: scanned chair]

  • Like 2
Link to comment

Another limitation, at least without LiDAR, is scanning surfaces that are transparent, shiny, uniform in color, or similar in color/texture to background objects. I modeled a tub/shower unit via tape measure and lots of fun with Push/Pull, Taper Face and Fillet. It's a strange beast: an 8'-long continuous piece of fiberglass with a 5' tub at one end and a 3' shower at the other. I decided to see how a scan compares, using the Polycam app (I don't think I will renew it) and my iPhone XS (10) - no LiDAR.

 

The scan processing had lots of trouble differentiating the tub surfaces front to back. The glass shower enclosure with chrome frame caused eruptions and spurs of points representing the locations of virtual images of reflected objects. As the camera panned and tilted, the changing locations of the ceiling light and window reflections caused other spurs in the point cloud or simply eliminated segments of the surfaces. Scanning in different light (time of day) causes color changes.

 

I was curious whether I could make it work. I stuck bits of tape all over the tub/shower and scanned in small sections. The sub-area scans processed into reasonable surfaces in some cases, but had to be cropped (e.g. in the app or in Meshlab), and/or rotated and scaled in Vectorworks (the 6" tile and full-screen cursor are very helpful). They were then aligned via the tape bits with Move by Points.
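
(For anyone who'd rather script that alignment step than do it by eye: given three or more matching tape marks picked in two sub-scans, a best-fit rigid move can be computed directly. A minimal sketch, assuming Python with NumPy; the point coordinates are hypothetical.)

import numpy as np

def rigid_align(src, dst):
    # Best-fit rotation R and translation t mapping src points onto dst (Kabsch method)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Hypothetical coordinates of three tape marks seen in two overlapping sub-scans
scan_a = [[0.00, 0.00, 0.00], [1.52, 0.03, 0.00], [1.50, 0.80, 0.42]]
scan_b = [[2.10, 0.95, 0.10], [3.55, 1.20, 0.08], [3.40, 1.95, 0.52]]
R, t = rigid_align(scan_a, scan_b)
# Apply to every vertex of sub-scan A to drop it into B's coordinate system:
# aligned = (R @ vertices.T).T + t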

 

Constrained spaces almost always require moment-to-moment camera tilt/rotation to capture the target area (e.g. in the tub, in the shower, or between the vanity and the tub). This change in camera attitude can produce a bulge in the point cloud, or a cloud that needs 3D rotation to realign to the proper working plane.

 

I never did get some of the sub-areas to process. LiDAR would probably help with this (?), as would a different app with better processing and better camera technique. I tried importing LAZ and PLY formats. I prefer LAZ for only vague reasons.
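
(Side note: a LAZ file can be sanity-checked before import - point count, extents, etc. A rough sketch, assuming Python with the laspy library and a LAZ-capable backend such as lazrs installed; the file name is a placeholder.)

import laspy
import numpy as np

las = laspy.read("tub_shower_section.laz")        # hypothetical file name
pts = np.column_stack([las.x, las.y, las.z])      # scaled XYZ coordinates, one row per point
print(f"{len(pts)} points")
print("min:", pts.min(axis=0), "max:", pts.max(axis=0))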

 

-B

 


  • Like 3
Link to comment
On 9/2/2022 at 7:45 PM, Dave Donley said:

Hello @line-weight: We are currently converting the photogrammetry kernel used for Photos to 3D Model to use macOS' Object Capture.  We have run image datasets through it that fail in the current implementation, and the macOS processing succeeds where the previous one would fail.  This upgrade will go live in a month or two, after the 2023 initial release.  If you collect some images to try we can use them to validate that it does a good job and maybe compare to the existing feature?

 

Another part of this is to use LiDAR from iOS to improve the photogrammetry; the depth image from the LiDAR helps the algorithm.

 

FYI, on iOS you can collect point clouds directly using Nomad if your phone or tablet has LiDAR (the limit is 5 meters though).  https://cloud.vectorworks.net/portal/help/pages/capture-a-point-cloud-with-lidar/?app=IOS_NOMAD.  Maybe you can find someone who has one and install Nomad on it to scan your site.  If you log in with your VW account ID the files will go to your cloud storage.

 

 

 

Thanks for this - do you mean I could send you some images that you would then use to test things and might be able to give me something in 3d in return, if successful? I'd certainly be happy to do that - obviously I'd need to double check with my client on that job. I believe there's also some drone footage of the site (I've not seen it yet) and could also share that if useful.

 

I don't have a LiDAR equipped phone or tablet myself unfortunately. There's a small chance I might be able to borrow one.

Link to comment
On 9/2/2022 at 10:18 PM, Benson Shaw said:

  And whether it is helpful to place objects or points of known size and separation in the scene? Traffic cones, tape “x”s, etc

 

This is one of my questions too... is there software that lets me combine photogrammetry with certain bits of direct survey data - so, perhaps I have a few accurately measured straight-line distances between easily identifiable points, and can tell the software exactly where these are on the photos (or a first iteration of the mesh produced from the photogrammetry) so it can then squash or stretch things a bit and hopefully end up with a generally more accurate model.
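
(Even without that kind of support in the software, one accurately measured distance between two identifiable points would at least let me correct the overall scale of the mesh after import - it wouldn't fix local distortion, just size. A minimal sketch of the arithmetic, assuming Python with NumPy; the coordinates and the measurement are made up.)

import numpy as np

# Two points picked in the raw photogrammetry mesh that correspond to a tape-measured distance
p1 = np.array([0.42, 1.91, 0.30])
p2 = np.array([3.05, 1.88, 0.31])
measured_distance = 4.250    # metres, measured on site between the same two features

scale = measured_distance / np.linalg.norm(p2 - p1)
# Multiply every mesh vertex by 'scale' (about the origin) before or after import:
# scaled_vertices = vertices * scale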

Link to comment
On 9/3/2022 at 1:22 AM, jeff prince said:

I've used the drone a bunch to do photogrammetry in the desert, but with my recent work in the forests of New England... the drone is useless and ground based lidar has been the hero.

 

The example you give in your post is very useful in giving me an idea of limitations/level of likely accuracy - thanks.

 

Presumably the reason the drone is useless in the forests is simply because the trees obscure any useful view of the ground? The site I'm looking at is in woodland, but the area of interest is mainly within a clearing, so I expect some drone footage might be usable in this scenario.

  • Like 1
Link to comment
2 hours ago, line-weight said:

Presumably the reason the drone is useless in the forests is simply because the trees obscure any useful view of the ground?

Correct.  I have to be able to see the ground to model it.  That, and trees move in the wind and have delicate features and textures that can confuse the computer when comparing and aligning images.

  • Like 1
Link to comment
2 hours ago, line-weight said:

 

This is one of my questions too... is there software that lets me combine photogrammetry with certain bits of direct survey data - so, perhaps I have a few accurately measured straight-line distances between easily identifiable points, and can tell the software exactly where these are on the photos (or a first iteration of the mesh produced from the photogrammetry) so it can then squash or stretch things a bit and hopefully end up with a generally more accurate model.


Using targets placed at known points can be very helpful in rectifying an image; it works for drones as well as for ground-based LiDAR and/or photography.

Link to comment

Not aware that this is commonly available for DIY photogrammetry, but:

Aerial LiDAR site survey data can apparently be stripped to “bare earth” via some magical tech filtering and processing. USGS claims 10 cm vertical accuracy. Different layers can isolate and display elevation data for buildings, low vegetation, trees, utility towers, etc.

 

https://www.usgs.gov/faqs/what-lidar-data-and-where-can-i-download-it#faq


I think similar LiDAR data sets are already available for the UK? EU? Scandinavia? Elsewhere?

-B

Link to comment
2 hours ago, Benson Shaw said:

Not aware that this is commonly available for DIY photogrammetry, but:

Aerial LiDAR site survey data can apparently be stripped to “bare earth” via some magical tech filtering and processing. USGS claims 10 cm vertical accuracy. Different layers can isolate and display elevation data for buildings, low vegetation, trees, utility towers, etc.

 

https://www.usgs.gov/faqs/what-lidar-data-and-where-can-i-download-it#faq


I think similar LiDAR data sets are already available for the UK? EU? Scandinavia? Elsewhere?

-B

 

I have some LiDAR data for the area I work in, which I purchased along with some standard mapping. It was limited to land that the UK Environment Agency had surveyed for flood analysis (I think), so for the area I am interested in it actually covered a relatively small patch, but I thought I'd take a look at it anyway. And yes, there was the choice of purchasing DSM data or DTM data, with the former being height values taken from the first surface encountered, so including buildings, vegetation, power lines, etc + the latter being just the terrain: the man-made features + vegetation having been digitally removed.
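
(If you have both as rasters on the same grid, subtracting the DTM from the DSM gives the heights of just the above-ground features - buildings, trees, power lines and so on. A quick sketch, assuming Python with the rasterio library and two matching GeoTIFF tiles; the file names are placeholders.)

import rasterio

# Assumes both tiles cover the same extent at the same resolution
with rasterio.open("tile_dsm.tif") as dsm, rasterio.open("tile_dtm.tif") as dtm:
    surface = dsm.read(1)   # first-surface heights (includes buildings, vegetation, etc.)
    terrain = dtm.read(1)   # bare-earth heights

feature_height = surface - terrain   # per-cell height of whatever sits on the terrain
# (real tiles will also carry nodata values that should be masked out first)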

  • Like 1
Link to comment
8 hours ago, jeff prince said:


Using targets placed at known points can be very helpful in rectifying an image; it works for drones as well as for ground-based LiDAR and/or photography.

How does that rectifying normally work - via software, or more like manually adjusting things?

 

Trying to imagine what I'd actually do once I got the mesh into Vectorworks. I have my ways of distorting/tweaking 2D things (such as scanned plans, aerial photos, etc) to best fit known dimensions but obviously it's another level of complicated once there's a third dimension to worry about.

Link to comment
3 hours ago, Tom W. said:

 

I have some LiDAR data for the area I work in, which I purchased along with some standard mapping. It was limited to land that the UK Environment Agency had surveyed for flood analysis (I think), so for the area I am interested in it actually covered a relatively small patch, but I thought I'd take a look at it anyway. And yes, there was the choice of purchasing DSM data or DTM data, with the former being height values taken from the first surface encountered, so including buildings, vegetation, power lines, etc + the latter being just the terrain: the man-made features + vegetation having been digitally removed.

 

Looks like it's in theory open data available for free ... working out how to get hold of it in a useful form is another thing. I think I've seen it come up as an option from the various online mapping suppliers that you might use to get site location plans etc from.

 

It seems you can preview it here (with quite a few holes in the coverage but fairly fascinating nonetheless)

 

https://houseprices.io/lab/lidar/map

 

and also as a layer on the NLS site (which is my go-to place for looking at historical maps)

 

https://maps.nls.uk/guides/lidar/

  • Like 2
Link to comment
5 minutes ago, line-weight said:

 

Looks like it's in theory open data available for free ... working out how to get hold of it in a useful form is another thing. I think I've seen it come up as an option from the various online mapping suppliers that you might use to get site location plans etc from.

 

It seems you can preview it here (with quite a few holes in the coverage but fairly fascinating nonetheless)

 

https://houseprices.io/lab/lidar/map

 

and also as a layer on the NLS site (which is my go-to place for looking at historical maps)

 

https://maps.nls.uk/guides/lidar/

 

That's very helpful, thank you. Yes, I got it from ProMap as an add-on to a much larger order. It was supplied as 2D contours, 3D contours + TIN.

Link to comment
55 minutes ago, line-weight said:

What you got - did you get a sense of how accurate it was?

 

Not really, because I've not actually done anything with it - either because I already had topo surveys to work with or because I've been working outside the area covered by the LiDAR data. To be honest I kind of forgot I had it. I ought to superimpose the LiDAR site model over some of my topo-survey-generated site models + compare them...!

 

This is an 800 x 800 m portion of it:

 

[Screenshot: an 800 x 800 m portion of the LiDAR-derived site model]

 

  • Like 2
Link to comment
6 hours ago, line-weight said:

How does that rectifying normally work - via software, or more like manually adjusting things?

 

Trying to imagine what I'd actually do once I got the mesh into Vectorworks. I have my ways of distorting/tweaking 2D things (such as scanned plans, aerial photos, etc) to best fit known dimensions but obviously it's another level of complicated once there's a third dimension to worry about.


It’s done in software during image processing.  Basically, you provide location information for the targets you put out on the site and the software uses that additional data during the processing.

 

If you look up “GCP or Ground control points” within the software you are considering, there should be explicit directions on what you should do.
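
(As a concrete example of what that looks like in practice: Agisoft Metashape, which comes up later in the thread, exposes this through markers and scale bars in its Python scripting API. A rough sketch based on Agisoft's published API - treat the exact names and arguments as version-dependent and check the current scripting reference.)

import Metashape

doc = Metashape.Document()
doc.open("site.psx")          # hypothetical existing project with aligned photos
chunk = doc.chunk

# Two markers placed (in the GUI or via the API) on targets a known distance apart
m1 = chunk.addMarker()
m2 = chunk.addMarker()

# Tell Metashape the real-world separation and rescale the chunk accordingly
bar = chunk.addScalebar(m1, m2)
bar.reference.distance = 2.500    # metres, tape-measured between the two targets on site
chunk.updateTransform()

doc.save()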

 

The same principle is handy for Photoshop mosaics too.  If you put a few targets on a large subject, it makes stitching the photos together much more accurate.

  • Like 1
Link to comment
4 minutes ago, jeff prince said:


It’s done in software during image processing.  Basically, you provide location information for the targets you put out on the site and the software uses that additional data during the processing.

 

If you look up “GCP or Ground control points” within the software you are considering, there should be explicit directions on what you should do.

 

The same principle is handy for Photoshop mosaics too.  If you put a few targets on a large subject, it makes stitching the photos together much more accurate.

Ok, thanks, so it would all happen prior to bringing it into Vectorworks.

 

I'm familiar with the general principle, having done that in panorama-stitching software in the past.

  • Like 1
Link to comment
  • 2 weeks later...

A question... for the site I mentioned earlier in this thread, which I'm interested in experimenting with, I now have access to various bits of drone footage.

 

This was not done by me, but by someone else without the specific aim of using it for photogrammetry. I think that it contains info that will be useful though.

 

Should I be thinking in terms of going through the footage and capturing multiple still images from it, to then feed into whatever software I use? Or is there software that can use mp4 footage as-is? One issue could be that the footage contains quite a lot of stuff that's not relevant to the specific area I'm interested in, but I could trim it into a series of clips that focus on that area.

 

Secondary question - any recommendations for software that will run on a mac, to do all this?

Link to comment
1 hour ago, line-weight said:

A question... for the site I mentioned earlier in this thread, which I'm interested in experimenting with, I now have access to various bits of drone footage.

 

This was not done by me, but by someone else without the specific aim of using it for photogrammetry. I think that it contains info that will be useful though.

 

Should I be thinking in terms of going through the footage and capturing multiple still images from it, to then feed into whatever software I use? Or is there software that can use mp4 footage as-is? One issue could be that the footage contains quite a lot of stuff that's not relevant to the specific area I'm interested in, but I could trim it into a series of clips that focus on that area.

 

Secondary question - any recommendations for software that will run on a mac, to do all this?


I recommend chatting with @RussU as he has experience with just this sort of thing.  A while back he made a model of the cliffs in one of my drone videos using Agisoft Metashape Pro.

 

Agisoft and Autodesk ReCap are really good and diverse toolsets, but come at a high price. That price is justified if you do a lot of this kind of thing. A less expensive option would be to use video software to capture frames and then use an online service like MapsMadeEasy to create the model. Alternatively, hiring a drone company to process your data could be reasonable as well.
 

 

  • Like 1
Link to comment

So ... I reminded myself of what I'd already worked out the last time I looked at this.

 

macOS Monterey has a photogrammetry API built into it (the Object Capture framework mentioned above). If you want to use it directly you need coding/command-line skills that I don't have.

 

However, there is a nominally free app called Photocatch:

https://apps.apple.com/jp/app/photocatch/id1576081762?l=en

which kind of works as a user-friendly front end for it.

 

So, I went through the various drone videos I have for my trial site. They were shot by someone else and not really intended for this purpose; nonetheless, they do contain lots of different angles on the bit of land I'm interested in. I went through them manually, capturing still frames that covered as many directions of view as possible, and ended up with about 100 images.
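
(If anyone wants to automate that frame grab rather than scrubbing through manually, something along these lines should do it - a rough sketch assuming Python with OpenCV installed; the clip name and the two-second sampling interval are just placeholders.)

import os
import cv2

cap = cv2.VideoCapture("drone_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = int(fps * 2)                      # grab roughly one frame every two seconds
os.makedirs("frames", exist_ok=True)

index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                            # end of clip
    if index % step == 0:
        cv2.imwrite(f"frames/frame_{saved:04d}.png", frame)
        saved += 1
    index += 1

cap.release()
print(f"Saved {saved} stills")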

 

Then I fed these into Photocatch. The first result was not very impressive. It wasn't complete nonsense - I could discern a distorted version of what I was after.

 

Photocatch doesn't offer you many options or settings. But upping the "model quality" to "full", and "feature sensitivity" to "high" improved the results quite a bit.

 

At this stage I could see that some bits of the mesh were more detailed/correct than others, and that the weaker bits did seem to correspond to areas that weren't very well captured by the drone footage.

 

So I tried adding in some more photos of the bits the drone footage didn't really cover - photos taken by me on a different day with a different camera. I thought it might not like that (would the different lighting, or a few objects in different positions throw it off?) but actually it led to an improvement in the model.

 

I did a second round of adding some more of my own ground-level photos that covered bits of the model that still weren't coming out very well. And this did lead to a further improvement.

 

After importing the mesh into VW this is what it looks like (I've turned off textures partly for client confidentiality reasons and partly so it's clear how lumpy the underlying mesh is).

 

 

 

 

 

It has in fact produced something that might, potentially, be useful. I don't immediately have a way of verifying how accurate it is. The main thing I was interested in capturing was not the shed-like building itself (I've already modelled that in VW using my standard methods) but the shape of the land it sits within. I've yet to work out exactly how to extract the info into a form (a site model?) that's more directly usable in my working drawings. I know there's a bit of discussion earlier in the thread, and in that other one linked above, about how to do that.

 

The Photocatch app frustratingly has very little documentation. So it's kind of trial and error working out what the algorithm is going to like, play nicely with, or do with certain types of input. There's no facility for me to give it any clues (for example, marking points on photos that I know correspond with each other) or to give reference dimensions, both of which could surely improve results.

 

It looks like a popular paid app for Mac is "Metashape", which I assume uses its own methods rather than Apple's. There's a demo version/free trial option and I might have a go at feeding it the same set of images to see if it does any better.

 

https://agisoft.freshdesk.com/support/solutions/articles/31000135259-how-to-try-full-metashape-functionality-before-buying
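
(If the trial allows scripting, the basic photos-to-mesh pipeline in Metashape's Python API looks roughly like this - a sketch based on Agisoft's published scripting reference, so parameter names may differ between versions; the file names are placeholders.)

import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("frames/*.png"))                  # the stills captured from the drone footage

chunk.matchPhotos(downscale=1, generic_preselection=True)   # find and match features
chunk.alignCameras()                                        # solve camera positions (sparse cloud)
chunk.buildDepthMaps(downscale=2)                           # dense reconstruction
chunk.buildModel(source_data=Metashape.DepthMapsData)       # mesh from the depth maps

chunk.exportModel("site_mesh.obj")                          # e.g. OBJ for import into Vectorworks
doc.save("site.psx")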

 

 

 

 

 

  • Like 3
Link to comment

One observation about how that Photocatch app treated the info I gave it:

 

At the bottom of the slope you can see in the model, there's another, larger building. This is visible in many of the photos I gave it - and some of the photos were mainly looking at this other larger building, though in less detail (and there were fewer of them) than those of the smaller building at the top of the slope.

 

This larger building doesn't appear at all in the mesh it has given me (apart from a tiny corner of it connected by a thin strand of ground surface).

 

So it has somehow decided, based on the focus of my photo set, either that there is insufficient information about this other building or that it's not of interest. But how this decision is made is a little opaque.

 

I might try a couple of experiments:

 

a) remove from the photo-set the photos that deal primarily with the smaller building and its immediate surrounds, and see if it then gives me a model attempting more detail on the larger one.

 

b) remove from the photo-set the photos that are really only looking at areas outside of the mesh that it has given me, and re-run the process to see if I get better results having removed that extraneous info.

 

My impression is that the Apple API is mainly aimed at people capturing a single object (like a product) from all sides, without any "ground" or surrounding context. So, for a sprawling site, would it be better to try and feed it info in sort-of self-contained chunks that are later stitched together manually?

  • Like 1
Link to comment
18 hours ago, line-weight said:

I might try a couple of experiments:

 

a) remove from the photo-set the photos that deal primarily with the smaller building and its immediate surrounds, and see if it then gives me a model attempting more detail on the larger one.

 

Not sure if this is of much interest to anyone other than me, but this is what happened - the screen recording shows the first-attempt model, then, superimposed on it, the model that was generated when I removed all the photos that only dealt with the smaller building. It has chosen to model much more of the site. This uses info that was all available in the photo set used for the first attempt, so clearly it has made some kind of decision behind the scenes about what to include.

 

 

 

  • Like 2
Link to comment
6 hours ago, line-weight said:

Not sure if this is of much interest to anyone other than me, but this is what happened

VERY interesting! Thank you for the process commentary and for showing results. An apparent preference in the app for prioritizing or ignoring certain elements of the source data requires a curation step (or two). Or perhaps different apps are “tuned” for different expected uses. Good to see reports here matching various apps with use conditions and results.
 

-B

  • Like 1
Link to comment
