line-weight Posted September 23, 2022

Tried feeding the same set of photos into the demo version of Metashape. This was just following the default workflow and accepting all the default settings (of which there are many more to fiddle with than in Photocatch). The demo version doesn't allow export of models, so I wasn't able to bring it into VW for direct comparison; however, this is what it gave me. Interestingly, like the first attempt with Photocatch, it seems to have decided to focus on only one portion of the site. I did get some kind of error message at some point in the process asking me to do something about unmatched photos, but I dismissed it and carried on. Probably with greater understanding of the software, much better results could be obtained. But at first sight, the "default" output it's given me is not obviously better than what I got out of Photocatch. It seems to have more gaps and contains one very obviously, completely wrong section, where a wall has become entirely detached from the building and is floating in fresh air.
Pat Stanford Posted September 27, 2022

Completely off topic, but not really. They are using Lidar to scan bears for Fat Bear Week! https://www.nps.gov/katm/blogs/fat-bear-week-2020.htm

Last year, Alaska Regional GIS Specialist Joel Cusick devised a method of laser scanning to non-invasively measure the volume of the portly participants of Fat Bear Week. He returned this year, armed with knowledge gained and the patience to wait for a bear waiting for a fish. A tool like terrestrial Lidar scanning is used almost exclusively in civil engineering fields, to scan interiors of buildings for example, so putting it to the test on live animals is all new. Right now, we can use this to determine the volume of a bear as well as length, height and girth. As we learn more, this could open numerous new potential applications, perhaps even to monitor the overall health of these, and other, animals in the wild. Most importantly for all those voting for Bear 747 this year, Joel's results show they did indeed vote for the largest bear of Brooks River. He came in with a volume of 22.6 cubic feet, compared to 19.78 cubic feet for runner-up Bear 32 ("Chunk") and 17.7 cubic feet for Bear 151 ("Walker"), Bear 747's semi-finals competitor. Long live this year's champion and congratulations to Joel for continuing this innovative and interesting field of research!
mjm Posted September 27, 2022

@Pat Stanford Thanks man. Made my day complete. Both hilarious and cool af.
line-weight Posted November 21, 2022

I'm going to have to do a bit more reading/watching to properly understand exactly what this is and how it works... but it seems pretty interesting. Not sure how useful it's likely to be for our kinds of purposes.
Tom W. Posted April 30

On 9/4/2022 at 11:59 AM, Claes Lundstrom said:

The biggest limitations when scanning objects with an iPhone, such as the Eames chair, come on the skinny parts, for example the feet and armrests. Bigger, more solid, chunky objects typically work fairly OK, I would say. In my example, a 250-year-old chair, the seat works fine, whereas the skinny and more intricate parts fail. The problem is of course the combination of being skinny and having a very intricate and detailed shape. Another disadvantage with scans in general for a symbol is that the model becomes much bigger. A good symbol should always have as few elements as possible, especially when you insert many on a bigger model. Keep it as simple as possible while maintaining a recognizable shape.

I finally had a go at this myself this afternoon. I tried the chair below with Scaniverse several times + could tell from the preview it was useless so didn't bother taking it any further. Using the Photos to 3D model feature on Nomad however was much better: I need to try some other things with the iPad/Scaniverse + see if the results are better...
Jeff Prince Posted April 30

1 hour ago, Tom W. said:

Using the Photos to 3D model feature on Nomad however was much better

Did you try the photogrammetry option in Scaniverse instead of lidar? I have found it creates a better result than Nomad.
Tom W. Posted May 1

8 hours ago, jeff prince said:

Did you try the photogrammetry option in Scaniverse instead of lidar? I have found it creates a better result than Nomad.

Thanks for the tip, but I can't see a photogrammetry option within Scaniverse...?
Anders Blomberg Posted September 14

Hi! Thought I might as well jump into one of the old threads on this topic. I recently visited a site and did two captures to evaluate different methods: a Lidar scan with Scaniverse, and a "photos to 3D model" in Nomad, both via an iPhone. Ended up with ≈200 photos for the latter method. The Lidar scan is way better in most ways but seems to drift considerably while scanning, which becomes obvious when walking around an object (a house in this case). I guess this is due to the linear nature of the capture. The Photos to 3D doesn't suffer from this but rather gives a molten-metal kind of style to all surfaces, so they are considerably less accurate than the Lidar scan. Has anyone got any recommendations for how to achieve better results for a project like this without calling the surveyor? A far-fetched idea that came to mind would be to get an Apple Watch Ultra and hope that the higher-accuracy multiband GPS would sync to connected devices and improve upon things, but I guess it's unlikely? I've also got a little drone, a DJI Mini 3 Pro, but haven't really been able to utilize it as I'd hoped in these cases.

Scaniverse Lidar scan:

Nomad "molten metal", Photos to 3D:
Tom W. Posted September 14

9 minutes ago, Anders Blomberg said:

Has anyone got any recommendations for how to achieve better results for a project like this without calling the surveyor?

I think the idea is to conduct a number of separate scans then stitch them together in VW. Did you see this video? I have tried a few point clouds on Scaniverse + Nomad + thought Nomad gave the better results. I was doing architectural interiors + in the end gave up. There was tons of drift if I attempted to scan several rooms at a time. I suppose I should try again doing one room at a time then stitching them all together, but frankly the amount of work involved + the hit + miss nature of the exercise puts me off: what am I gaining over doing it the traditional way with pen, paper + laser measure? It's a lot easier to model from your own drawn dims.

Where I need an accurate laser scan I'd prefer to get the surveyor in. But I'd be interested to hear from others getting better results as I'm willing to persevere. @michaelk you've been doing good stuff right?
Anders Blomberg Posted September 14

Thanks for the input. I watched the video a while back but forgot about it. I might try the stitching next time. It's valuable to get data on the surrounding ground, and I believe the Lidar does a good enough job there for simple jobs, as long as drift doesn't become too much of an issue. Traditional measuring with a yardstick etc. is a pain outdoors as nothing is flat/vertical/parallel and so forth, so some kind of model is a big win there.
Claes Lundstrom Posted September 15

A few suggestions regarding Lidar scanning:

- Stitching definitely helps.
- Practice scanning. It's a craft, and you soon learn that slow, gentle movements help.
- Plan your scans so that you move the camera as little as possible.
- Avoid thin objects such as shrubs and bicycles, which seldom work well.
- Avoid shiny objects such as glass and cars, as reflections never work well (spray them with something dull if you have to).
- Dump scans where you get weird offsets. Better to try again.
- At best, you can expect an accuracy of about 1%, so don't expect miracles.
- You can use it with a DTM with decent results, but I suggest that you clean away obvious errors and excessive measurements. Each point may be less accurate than traditional measuring, but on the other hand you don't get thousands of measurements from traditional measuring.
- I have found a combination of traditional measuring and scans to be quite useful.
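The "clean away obvious errors" step before feeding a scan into a DTM can be sketched in a few lines. This is a minimal statistical outlier filter in Python with NumPy, not any particular app's actual pipeline; the point data and thresholds are invented for illustration:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest
    neighbours is unusually large. Brute-force distances,
    fine for clouds of a few thousand points."""
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)   # (N, N) pairwise distances
    dists.sort(axis=1)
    # Column 0 is each point's distance to itself (0), so skip it.
    mean_knn = dists[:, 1:k + 1].mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

# A tight cluster of 200 points plus one far-away stray.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.05, size=(200, 3))
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])
cleaned = remove_outliers(cloud)
print(cloud.shape[0], "->", cleaned.shape[0])  # the stray should be gone
```

Dedicated point-cloud tools do the same thing with spatial indexing so it scales to millions of points, but the principle is just this.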
Tom W. Posted September 15

1 minute ago, Claes Lundstrom said:

At best, you can expect an accuracy of about 1%, so don't expect miracles.

Good advice, thank you. I think you're right to say that you shouldn't expect to be able to rely on the results 100% like you can with a professionally scanned point cloud. It's beneficial to have some baseline dimensions to match the scans to.
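Matching a scan to a baseline dimension can be as simple as rescaling the cloud so one scanned distance agrees with a hand-measured one. A rough sketch of that idea; all coordinates and the 3.60 m wall length here are invented:

```python
import numpy as np

# Two points in the scan marking the ends of a wall we also
# measured by hand with a laser measure (values illustrative).
scan_a = np.array([0.12, 0.05, 0.00])
scan_b = np.array([3.55, 0.18, 0.02])
measured_length = 3.60  # metres

scale = measured_length / np.linalg.norm(scan_b - scan_a)

# Scale the whole cloud about its centroid so it grows or
# shrinks in place rather than drifting away from the origin.
cloud = np.array([[0.0, 0.0, 0.0],
                  [1.0, 2.0, 0.5],
                  [3.5, 0.1, 0.0]])
centroid = cloud.mean(axis=0)
calibrated = centroid + (cloud - centroid) * scale

print(f"scale factor: {scale:.3f}")
```

This only corrects uniform scale error; it can't undo drift or warping, which is why several baseline dimensions in different directions are a useful sanity check.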
line-weight Posted September 15

1 hour ago, Claes Lundstrom said:

I have found a combination of traditional measuring and scans to be quite useful.

It would be nice if this could be automated to some extent. For example, I imagine having some kind of reference point objects that I can place around whatever I am surveying. I take manual, accurate measurements of their relative positions, tell the photogrammetry software what these are, and it recognises them in the images and uses them to calibrate what it outputs.
Jeff Prince Posted September 15

6 hours ago, line-weight said:

It would be nice if this could be automated to some extent. For example, I imagine having some kind of reference point objects that I can place around whatever I am surveying.

That already exists in most drone photo processing software and pretty much every professional LiDAR processing package. You place targets at known locations and the software adjusts accordingly. The trick is, you need those reference targets placed precisely, otherwise you are back to square one.
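Under the hood, what those packages do with targets is essentially a least-squares rigid fit: given a few targets at surveyed positions and the same targets as they appear in the model, solve for the rotation and translation that best maps one onto the other (the Kabsch algorithm). A sketch with invented coordinates, assuming noise-free matches; real software also weights targets and reports residual errors:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that
    R @ src[i] + t ~= dst[i] (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Surveyed target coordinates (dst), and where the photogrammetry
# model thinks those same targets sit (src): here a copy rotated
# 30 degrees and shifted, so the true pose is known.
dst = np.array([[10.0, 20.0, 1.0],
                [14.0, 20.0, 1.2],
                [12.0, 25.0, 0.8],
                [10.0, 24.0, 2.0]])
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = (dst - [11.0, 22.0, 1.0]) @ R_true.T

R, t = fit_rigid_transform(src, dst)
aligned = src @ R.T + t
print(np.allclose(aligned, dst))  # True: the pose is recovered exactly
```

With noisy targets the fit minimises the residuals instead of zeroing them, and the leftover per-target error is exactly the accuracy report a professional package gives you.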
Matt Overton Posted September 19

Hi all. We have local geolocated 1 m point cloud files we can get from the government lands agency. They're alright for broad context of the sites we work on, but they need embellishment to capture the visible neighbouring context. Wondering if anyone has seen a system that can augment one cloud with photos or scans.
Jeff Prince Posted September 19

10 hours ago, Matt Overton said:

Wondering if anyone has seen a system that can augment one cloud with photos or scans.

You can do that pretty easily with Metashape, or by contracting someone to do the combination for you if you don't use such software frequently. Alternatively, you could generate point clouds from your photos and then manually line them up in Vectorworks if you have some easy-to-identify common points. Either way, it's going to take a bit of effort, since the data you were given may not be the quality needed, and it could be challenging to find common alignment points.