angelojoseph Posted March 5, 2019
Hey, all. I'm new to the forum, though not new to VW. I'm curious who's processing photogrammetry through VW Cloud Services. I've been running some samples and have mixed results. It seems apparent that EXIF data is not being used, including any telemetry or GPS, as my DPCs come in at a weird orientation and placed at the origin (whereas GeoTIFFs import relative to it). So here are my questions:
- Who's using it, and how useful have you found it in comparison to other services or software?
- Is this a beta service? Maybe a proof of concept or showcase piece?
- Is the intention for this to become a core component of Cloud Services? (I would love it if this were the case, and if it were continually developed to compete with the likes of Pix4D/Zephyr/etc.)
- Is anyone working on a way to georeference or retain/reintroduce geospatial data to the point cloud?
- Any chance of PCS support in the future?
I have a million more questions, but I'll let this begin the conversation to see if anyone is even interested. Thanks!
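Since the thread keeps returning to lost GPS/EXIF data, here is a minimal, hypothetical sketch of what re-georeferencing an origin-centred point cloud involves: converting a GPS fix into local east/north offsets from a site reference point and translating the cloud accordingly. The function names and the simple equirectangular approximation are illustrative assumptions, not anything Cloud Services actually does; real work would use a proper projected CRS.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Approximate east/north offsets (metres) of a GPS fix from a
    reference point, via a simple equirectangular projection.
    Fine at site scale; not a substitute for a projected CRS."""
    east = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    return east, north

def georeference(points, fix, ref):
    """Translate a point cloud (list of (x, y, z) in metres, currently
    sitting at the origin) to its true offsets from the site reference."""
    east, north = gps_to_local(fix[0], fix[1], ref[0], ref[1])
    return [(x + east, y + north, z) for x, y, z in points]
```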
Jeff Prince Posted March 6, 2019
I've been using photogrammetry for a few years, most recently with drones. I had a discussion with VW about their cloud services a while back. I ultimately decided to use Autodesk ReCap for our needs. Here's a thread with some discussion and examples:
angelojoseph (Author) Posted March 6, 2019
Thanks for sharing! Sorry if I wasn't clear. I've been flying drones and doing photogrammetry for years, as well as developing terrain and contextual models for even longer, so I'm quite familiar with the process. However, I hate ReCap, and there are plenty of great photogrammetry packages for developing georeferenced dense point clouds (plus textured surface meshes, visual analyses, etc.). My questions are related to Vectorworks. I love the software and would love it if I could process and stitch DPCs in the same software I'm using for all my other technical design work.
TL;DR: I've been doing photogrammetry a long time, though in many other packages. I'm exploring VW's service and seeing who else is as well. I haven't found a good workflow yet and wonder if anyone has, and I'm generally interested in how seriously VW is taking this service for technical professionals like myself.
Jeff Prince Posted March 6, 2019
33 minutes ago, angelojoseph said: (quoted above)
Like I said, I spent time talking with VW about their cloud services. Ultimately, it is not currently a suitable tool for anything significant outdoors IMHO. It does a great job ingesting data produced on other platforms, though. It's hard to imagine VW ever catching up to the quality and services offered by others in this regard...
Benson Shaw Posted November 9, 2021
Rather than start a new topic, I hope to receive some pointers and keep this thread alive. I have submitted photos a few times to the VWX cloud services for 3D from photos. So far nothing has generated. I always receive the "Did not Generate" notice on the portal, and an email with the boilerplate tips (use many photos, provide overlap, uniform exposure, textured surfaces; that yellow jeep in the VWX tutorial doesn't seem very textural), but nothing specific to the photos submitted. All of these attempts are merely my tests to investigate how it works and what it produces. I don't need a perfect, dense model. Anything would do. I will make improvements once I understand what makes it work. Most recent is a little Buddha sculpture (by others). Here is a link to the 14 photos submitted: https://www.dropbox.com/sh/arsz4xtnu55qq4r/AAC9xFcr8F27YeIymhIn6XSBa?dl=0
So, how are these photos not conforming to the tips?
- Needs more photos? (2 more? 100 more?)
- Not enough overlap?
- Top views confused?
- Does the order of photos in the list matter?
- Is the background too similar to the object of interest?
- Is the exposure not uniform enough? (I adjusted prior to submittal.)
- Do I need to paint stripes or drape a net over it for more texture?
Or can anyone suggest a better path to making it work? Thanks -B
Anders Blomberg Posted March 23, 2022
@Benson Shaw Did you manage to get something out of the cloud eventually? I've tried a handful of photo sets without luck. I noticed this statement in the guidelines: "Currently, the algorithm is not optimized for 360-degree (wrap-around) views of an object." Sounds a little weird to me, but I tried a few sets without 360 wrap and that didn't get me anywhere either.
line-weight Posted March 23, 2022
I recently tried to use cloud services to do some animation renders. A high proportion failed. I gave up. It could be that the problem is not in what you are sending them, but in stuff that goes wrong at the other end.
Benson Shaw Posted March 23, 2022
@Anders Blomberg No success so far with the VCS photogrammetry, but I will give it some more tries. Meanwhile, I tried the Polycam app on my old iPhone (no LiDAR). An initial trial of the free version made an OK wrap-around of a coffee table with 100 photos. The free version processes only to OBJ. I upgraded to the "pro" version, US$60/year. This opens many other export options for point clouds, which VWX accepts. I tried to scan a bathroom interior and the processing failed. I think it's better to make one wall at a time, or one wall then amend with another, then amend, etc.
Impressions:
VCS
- I was trying to find the low threshold: start with a minimal photo set to produce a low-quality point cloud, then see the effects of more photos, better light, etc. I probably needed more images for successful processing.
- Images are collected via the phone camera to the Photos app, which uses up the phone storage, then uploaded to VCS, then deleted from the phone. I think Vectorworks Nomad may have a more integrated path for this. Need to try some more and explore Nomad, too.
Polycam
- Good to have success on the first try.
- The OBJ imported rotated (turned on end). The point cloud oriented correctly. I did not georeference to test positioning.
- A test of approx. 100 photos (the free limit) processed on medium quality produced a lumpy coffee table, but the magazine covers were readable.
- The cost of the pro subscription is a minor problem. But what if Nomad works as well?
- "Video" mode automatically triggers the shutter when the app detects sufficient overlap. As @jeff prince suggests, I can just "wave my phone around".
- Photos transfer to Polycam for processing. They never show up in the Photo library, but do store in the app. Deleting the capture set from the app does not immediately update the phone storage. Looking into that. Might need a phone restart.
Plan to experiment more. -B
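The "OBJ imported rotated" problem described above is usually just a Z-up vs. Y-up axis convention mismatch between apps. Since OBJ is plain text, one workaround is to pre-rotate the vertex records before import. This is a hypothetical stdlib sketch, not a feature of Polycam or Vectorworks; it ignores 'vn' normal records, which would need the same rotation for correct shading:

```python
def rotate_obj_z_up_to_y_up(obj_lines):
    """Rewrite the 'v' (vertex) records of a Wavefront OBJ text so a
    Z-up model becomes Y-up: (x, y, z) -> (x, z, -y), i.e. a -90 degree
    rotation about the X axis. All other record types pass through
    untouched. Sketch only: real files may also carry vertex colours
    after the coordinates."""
    out = []
    for line in obj_lines:
        parts = line.split()
        if parts and parts[0] == "v":
            x, y, z = (float(p) for p in parts[1:4])
            out.append(f"v {x} {z} {-y}")
        else:
            out.append(line)
    return out
```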
Anders Blomberg Posted March 24, 2022
Great to hear your results @Benson Shaw! My trials with the VW cloud ranged from ≈20 to ≈50 JPEGs taken with my iPhone's standard camera app and uploaded to the cloud services via the Finder folder. So I didn't really aim for either an upper or lower threshold, just what seemed reasonable. Items ranged from simple, like a couple of coloured pens, to very complex, like a potted English ivy.
Anders Blomberg Posted March 28, 2022
Looked into this a little more today and it seems Apple has released a new API for photogrammetry called Object Capture. I tried a super simple piece of software called PhotoCatch with the same set of photos that the VW cloud couldn't make sense of, and it did produce something decent, I must say. I guess the plant, placed on a highly reflective surface, is a really complex object to scan, so I'm rather impressed. I exported from PhotoCatch in OBJ format and imported into VW. Looks as below. Just took a couple of minutes.
Skärminspelning 2022-03-28 kl. 13.30.14.mov
Hedera.vwx
Edited March 28, 2022 by Anders Blomberg
line-weight Posted April 4, 2022
On 3/28/2022 at 12:43 PM, Anders Blomberg said: (quoted above)
You prompted me to try something similar. I also used the PhotoCatch app. I took some photos using the not-especially-good camera on quite an old iPad. First I used the app on the iPad, which lets you see the resulting model, but you have to pay to download it. So I used the desktop version of the app on my Mac instead, which lets you export the model without charge. I also exported as OBJ and was slightly surprised when it imported into VW2021 in a recognisable form on the first attempt (I just had to adjust its rotation and scale it to a sensible size). Surprising what's possible with just photos (no LiDAR or anything). There's still a lot I need to understand, like how the texture is attached to the mesh, how to edit it usefully, and so on. I might see how it works on the interior of a room next.
Screen Recording 2022-04-04 at 15.59.19.mov
line-weight Posted April 22, 2022
Just came across this, an interesting account of photogrammetry being used by structural engineers: http://www.billharveyassociates.com/photogrammetry
Tom W. Posted April 22, 2022
Can't beat a good Victorian railway arch.
Anders Blomberg Posted May 20, 2022
So we ended up using photogrammetry in a recent project I did, which turned out really cool, I think. I had the surveyor send up a drone, from which he created a DTM via photogrammetry and an orthomosaic GeoTIFF photo. As I'm still trying out new stuff I didn't really know what to ask for, but he ended up exporting a DWG TIN model from which I could create my own site model in VW and drape it with the ortho as a texture. It's all geolocated and everything, so I'm all happy. I guess this might be basic stuff to many of you, but it was exciting to me so I thought I might just share it.
Skärminspelning 2022-05-20 kl. 11.28.39.mov
Edited May 20, 2022 by Anders Blomberg
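For anyone curious what a site model does with a TIN like this under the hood: a query point falls inside one triangular face, and its elevation is a barycentric blend of the three vertex elevations. A rough, hypothetical illustration (not Vectorworks' actual implementation):

```python
def tin_elevation(tri, px, py):
    """Elevation at plan position (px, py) inside one TIN face, by
    barycentric interpolation of the three vertex elevations.
    `tri` is three (x, y, z) vertices; returns None if (px, py)
    falls outside the face. A full site model would first locate
    which face contains the point."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    if min(w1, w2, w3) < -1e-9:  # outside the triangle
        return None
    return w1 * z1 + w2 * z2 + w3 * z3
```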
bcd Posted May 20, 2022
Looks great - thanks for sharing it.
line-weight Posted September 1, 2022
I need to do a survey of a site quite soon: a building on a quite steeply sloping piece of land. The building needs to be measured accurately and I'll do it the old-fashioned way, but I've been wondering if it's worth trying to do the surrounding land by photogrammetry. It doesn't need to be dead accurate, just enough to set the building in context. I don't have a drone or anything, just an Android phone with a pretty decent camera but no LiDAR. I'd probably be able to get a few high-level viewpoints looking down onto the land in question by standing on building roofs etc. A few posts up ^^ I tried the app PhotoCatch and was quite impressed with the results looking at a small object. But when I tried it on some building interior spaces it didn't do so well at all. It may have been to do with a lack of good lighting, I don't know. Is it worth me having a go at a site survey? If so, any tips for apps or strategies that would most effectively get something useful into Vectorworks?
Dave Donley (Vectorworks, Inc Employee) Posted September 2, 2022
Hello @line-weight: We are currently converting the photogrammetry kernel used for Photos to 3D Model to use macOS's Object Capture. We have run image datasets through it that fail in the current implementation, and the macOS processing succeeds where the previous one would fail. This upgrade will go live in a month or two, after the 2023 initial release. If you collect some images to try, we can use them to validate that it does a good job and maybe compare to the existing feature? Another part of this is to use LiDAR from iOS to improve the photogrammetry; the depth image from the LiDAR helps the algorithm. FYI, on iOS you can collect point clouds directly using Nomad if your phone or tablet has LiDAR (the limit is 5 meters though): https://cloud.vectorworks.net/portal/help/pages/capture-a-point-cloud-with-lidar/?app=IOS_NOMAD. Maybe you can find someone who has one and install Nomad on it to scan your site. If you log in with your VW account ID the files will go to your cloud storage.
Benson Shaw Posted September 2, 2022
@Dave Donley Can you elaborate a bit on the 5 meter limit? And whether it is helpful to place objects or points of known size and separation in the scene? Traffic cones, tape "X"s, etc.
zoomer Posted September 2, 2022
50 minutes ago, Benson Shaw said: can you elaborate a bit on the 5 meter limit?
From what I know of people who have used LiDAR with Apple apps and taken the results into other 3D apps: the 5 m limit will not allow you to scan a cathedral, of course. But if you stay within the safe range of the LiDAR, one user walked up a larger property around a larger building from one end to the other (for a DTM) and found that he finally reached the end at a pretty precise elevation, and that the scan was really useful. While I think the typical LiDAR scans are great for scanning a small apartment (I do not love the amount of mesh data), I am more interested in coming apps that use the new Apple APIs shown at WWDC 2022, which will allow very basic volume simplifications to be created from scans by using AI. Like detecting walls, openings, and such at a much smaller file size; kind of the essence of a scan.
Jeff Prince Posted September 3, 2022
1 hour ago, zoomer said: (quoted above)
You can collect a lot of data in such a scenario, but without RTK correction, LiDAR scanners as found in the iPhone/iPad Pro can have quite a bit of significant error. It's close enough to get started on a project typically, but not something I would trust to replace a proper survey. I've been testing the iPhone scanner extensively. It creates some interesting results. Here is one example that tests the extremes. I was scanning both natural and man-made features to study the placement of a future driveway within a forested area. The horizontal distance scanned was about 250' long with 20' of vertical deviation. I had a survey with accurate locations for an existing building, wood shed, and propane tank enclosure, and rough 2' contours based on aerial LiDAR, which I could use to line up my scan. When orientating the scan to the known building location, the other features were off by up to 3.5'. That was acceptable for my purposes on this project because I was most interested in the approximate locations of rock outcroppings and getting more detail on ledge and undercut ledge conditions. However, without a survey, it could be challenging to use such a scan to position horizontal elements when things get tight. I've been making a training class on iPhone scanning based on my real-world findings. Hopefully I'll get it done sometime before the end of the year 🙂 Here is the overall study area layered with the survey. The existing building is in the lower left corner. The brown square with two cylinders in the top middle is the propane enclosure. South is up in this image.
Here we look towards the building and woodshed from the southwest, behind the propane enclosure. The scan was useful for locating trees, the generator, and the utility pole. Here we can see the horizontal deviation between the scan (black triangles) and the surveyed position of the enclosure (red C-shaped line). The distance it is off by is nearly 3' horizontally and 18" vertically. Here is the woodshed; the surveyed position is the red box filled with tan. The horizontal deviation is a little over 3'. But here was the kind of thing I was really interested in capturing: tree and rock outcropping locations, to determine how much stone blasting might be required to stitch in a driveway relative to the undercut ledge. The colored contours from the 2' LiDAR also have some interesting deviation from reality 🙂 Anyhow, this example is significantly different from scanning the interior or exterior of a building. These scanners do pretty well with rectilinear and level features. The natural environment requires RTK correction or lots of manual effort to stitch together smaller study areas. Still, cool and useful technology if you work within or accept its limitations, like most tools we use 🙂
On 9/1/2022 at 3:57 PM, line-weight said: (quoted above)
I think if you measure the building and take many overlapping photos, you might be able to get something useful for your purposes. I believe Pix4D has a free trial that you could use to test things and get an output. If you have access to ReCap, you could get some outputs too. The key to photogrammetry is to capture overlapping photos so the software has something to do its comparative analysis on.
I'm with Dave: if you can borrow an iPhone or iPad in the Pro versions, you could get results much more easily with something like Scaniverse. I've used the drone a bunch to do photogrammetry in the desert, but with my recent work in the forests of New England... the drone is useless and ground-based LiDAR has been the hero.
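The align-to-one-known-feature workflow described above (pin the scan to the surveyed building, then check how far everything else lands) can be sketched as a translation-only fit. This is a hypothetical illustration of the bookkeeping behind those 3'/18" deviation figures, not any particular app's registration algorithm; a proper fit would also solve for rotation and scale:

```python
import math

def align_and_residuals(scan, survey, anchor):
    """Translate a scan so one named feature lands on its surveyed
    position, then report (horizontal, vertical) residuals at every
    feature. `scan` and `survey` map feature name -> (x, y, z) in the
    same units. Translation-only: any rotation or scale error in the
    scan shows up directly in the residuals."""
    dx = survey[anchor][0] - scan[anchor][0]
    dy = survey[anchor][1] - scan[anchor][1]
    dz = survey[anchor][2] - scan[anchor][2]
    residuals = {}
    for name, (x, y, z) in scan.items():
        sx, sy, sz = survey[name]
        horiz = math.hypot(sx - (x + dx), sy - (y + dy))
        vert = abs(sz - (z + dz))
        residuals[name] = (horiz, vert)
    return residuals
```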
Jeff Prince Posted September 3, 2022
Here's a pool we designed, under construction. You can see it does a much nicer job with flat rectangles than irregular forests 🙂 Here's the model. And the scan of reality, prior to the installation of coping and interior finishing. I think an iPhone Pro pays for itself after using it on two or three jobs, with most of the expense related to learning how to use the phone/app/Vectorworks together to make something useful. You get better with each scan as your capture technique improves. And the two overlaid... It's within inches, with some of the deviation attributed to the acceptable tolerances of pool construction.
Edited September 3, 2022 by jeff prince
Claes Lundstrom Posted September 3, 2022
I have used my iPhone too and found it quite useful. In this example, I tried to illustrate a potential drainage problem against a house wall. Here is another example where I made sure it was safe to drill a vertical hole in the middle of a lot of piping and equipment, by placing two different scans from two different floors on top of one another. Note how well the walls aligned.
Tom W. Posted September 3, 2022
Are people using LiDAR on iPhone/iPad to scan objects rather than buildings/topography, for the purposes of creating symbols? As an alternative to laboriously measuring things + modelling them from scratch + texturing them. I'm talking about furniture, light fittings, appliances, sanitaryware, MEP plant, etc. I can rarely find what I need in the VW libraries, so the next port of call is 3D Warehouse, then if no luck there it's a case of modelling it from scratch. I have no major problem with this - I quite like doing it - but it would be a massive time-saver to be able to scan something with my phone and then there it is: a ready-made, accurately-sized, ready-textured model. I suppose the only thing is that if they're meshes they're going to look bad in non-orthogonal Hidden Line views, but this is the same for lots of symbols already. Plus file size of course, but presumably you can simplify the mesh. Is this something people are doing? I don't have a LiDAR-enabled device unfortunately. I have seen some really nice looking scanned objects online. Is the accuracy better on smaller objects? Although I love what people are doing here with buildings + landscapes, for me I can't see it precluding the need for a proper laser scan + topo survey. As Jeff says:
9 hours ago, jeff prince said: It's close enough to get started on a project typically, but not something I would trust to replace a proper survey.
Claes Lundstrom Posted September 4, 2022
21 hours ago, Tom W. said: (quoted above)
At the phone level, it cannot be used directly to measure, for example, kitchen cabinets. It's simply not accurate enough for precision measurements. It is, however, useful for quick price estimates and is an excellent piece of documentation. Another major flaw with scanners is that they generate huge files where almost everything consists of junk info. As an example, imagine a very simple table model, represented by an extruded rectangle for the table top and four small extruded rectangles for the legs. The parts can be defined mathematically by a very limited amount of coordinate data.
Most scanner software, on the other hand, needs to measure a huge number of points just to be able to estimate where the key data points (the edges) are located. The fundamental difference is that the human computer can instantly figure out where the edges and boundaries are, whereas most scanner software can't. On more organic shapes, things swing a bit towards scanning. Manual modeling can still generate much smaller models, but it requires way more skill and craftsmanship to get there. The example is probably beyond what most CAD designers could achieve, and it would take quite a lot of work. The scan took less than five minutes with an iPhone 13 Pro. The result is surprisingly similar to the original (the picture to the left was placed next to the original within the scanning software). The model was also accepted without any problem by the 3D printer software and came out fairly good on a simple $200 3D printer.
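The junk-data point above (thousands of near-duplicate samples where five extrusions would do) is why scan cleanup usually starts with decimation. Here is a deliberately crude, hypothetical sketch of the idea: keep one point per spatial cell and discard the rest. Real tools decimate the mesh itself with error-aware algorithms, but the effect on redundant flat surfaces is similar:

```python
def decimate(points, cell=0.01):
    """Thin a point cloud by keeping the first point seen in each
    cubic cell of size `cell` (same units as the points). A crude
    stand-in for real mesh decimation, but it shows where the junk
    goes: near-duplicate samples on a flat table top collapse to a
    handful of representatives."""
    kept = {}
    for x, y, z in points:
        key = (round(x / cell), round(y / cell), round(z / cell))
        kept.setdefault(key, (x, y, z))  # first point in the cell wins
    return list(kept.values())
```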
Tom W. Posted September 4, 2022
Thanks @Claes Lundstrom, this is exactly the kind of thing I'm talking about: circumstances where manual modelling would be challenging/time-consuming. Like you say, lots of things are fairly easy + quick (+ fun!) to model but others - organic shapes - less so. So for example if I had an Eames chair that was important for me to include in my model, I could scan it in a few minutes + create a symbol that would likely be fine for illustrative purposes? It just needs to look right + be more or less the right dims. It's just a prop in the scene, not something that needs to be millimetre-perfect because I'm going to be manufacturing it. Would a phone scan of something like this chair result in something usable? To me this would be really useful. At the moment I'm restricted to what I am capable of + have time to model myself, plus whatever I can find online. This would be a useful additional option for building my symbol libraries.