
James Russell

Member
  • Posts

    409
  • Joined

  • Last visited

Reputation

85 Excellent

Personal Information

  • Occupation
    Set & Lighting Designer
  • Homepage
    https://www.youtube.com/@vectorworksaustralia
  • Hobbies
    Innovative Technology & Creative Design
  • Location
    Australia

Recent Profile Visitors

5,603 profile views
  1. Hi Dom, It certainly does - I've been chasing various fringe cases for it for about 2 years now. Just be cautious of some of the naming issues (to do with the leading sort numbers of Marionette functions) as mentioned below. It's a great step forward in making Marionette so much more useful and open to the community - particularly when creating iterative custom objects (Marquees, Desks, Booths, Laptops, Roadcases, etc). Cheers! James
  2. Ooooh I'm so debating if I'm stepping on @michaelk toes here... it is his base script. The original thread you're looking for is here: I've just tried both your file and a new file and I think I know where your issue lies. This script uses 'GetTopVisibleWS'. I had a quick hunt, as I'm sure Michael did at the time, and there doesn't seem to be a super elegant way to just replace this with the target name of a worksheet (inbound: a random input from @Pat Stanford). As such you'd need the following order of events;
1. Have your previous file open.
2. Create a new file OR open a target file OR open a template.
3. Using the Resource Manager, import all the items into the new file (if using a template you might already have these).
4. [And this is the important one for the script(s)] Open the relevant worksheet so it's the most recently opened one. For example, if you're about to run 'Layers + Descrp from Worksheet', then in the Resource Manager, right click and Open the 'CREATE DESIGN LAYERS' worksheet. Just have it most recently open... that's all. Then run the script.
What this will do is make the 'CREATE DESIGN LAYERS' worksheet the TopVisible worksheet in Vectorworks' mind - and thus the target for the script. If you were then going to do classes, you would need to open the 'CREATE CLASSES' worksheet and then run the 'Classes + Descrp from Worksheet' script. Unless I'm missing something, that just worked for me - mainly I noticed that if I had the 'CREATE CLASSES' worksheet active and clicked 'Layers + Descrp from Worksheet' it would be populated with the Class details - that's what gave the targeting away. Let me know if that resolves things? Cheers, James
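For anyone who wants to experiment beyond the 'most recently open' trick: GetObject will generally return a handle to a named worksheet resource, which in principle could stand in for GetTopVisibleWS. A minimal hedged sketch - the worksheet name is just the one from this thread, and I haven't tested this against Michael's full script:

PROCEDURE TargetNamedWS;
VAR
    wsHandle : HANDLE;
BEGIN
    { Worksheets are named resources, so GetObject can usually fetch one by name }
    wsHandle := GetObject('CREATE DESIGN LAYERS');
    IF wsHandle = NIL THEN
        AlrtDialog('Worksheet not found - check the resource name.')
    ELSE
        Message('Found worksheet: ', GetName(wsHandle));
END;
RUN(TargetNamedWS);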
  3. I think this is a fascinating discussion and everyone is going to have a different perspective, depending on their use case, geography and industry. My background is in the Live Events Industry. When I started using Vectorworks in 2008 it was the only platform on the market which had toolsets specifically tailored for CAD drafting of theatre / events. AutoCAD and WYSIWYG were available, although comparatively in their infancy - and it wasn't until much later that we saw Rhino, Sketchup, Depence and Revit come in as competitors. I think this question could be similarly phrased regarding the Adobe suite of products. You could ask "why aren't Premiere, After Effects and Audition all in one application so I can do it all" - and in my opinion it comes down to the difficulty of designing a core application foundation to support so many features.
Comparison: CPU vs GPU
If you compare Vectorworks to the Unreal Engine you'll see two very different applications under the hood. Vectorworks is, in my understanding from a rendering perspective, built upon the MAXON CineRender engine, which is CPU-based - and although hardware-accelerated, it doesn't actually render on GPUs (OpenGL / Shaded being the exception to this). The implementation of Redshift in VW2022 does provide a pathway for GPU-based renderings, however in my understanding this is still not the default. Unreal 5, which is the core of not only Unreal Engine but also TwinMotion, is entirely GPU multi-threaded rendering. It utilises Nanite virtualised geometry (new-age micropolygons) and the Chaos physics engine (cloths, fluids, particles, etc).
Comparison: NURBs vs Mesh (Polygon)
Vectorworks - and to my knowledge only Rhino and Solidworks also - is a NURBs-based modelling system. For those of you who've made it this far into the post, that's NURBs aka Non-Uniform Rational B-Splines. AutoCAD, Sketchup, Unreal, TwinMotion and most other applications in this market are mesh-based modelling systems. This system utilises polygons, or in newer forms voxels, in order to plot points in 2D / 3D space. What does that actually mean? Every time you draw a circle in Vectorworks it's defined as MATH, with infinite resolution. Every time you draw a circle in AutoCAD it's defined as a series of points and a game of join-the-dots occurs - at a fixed resolution (there's a quick numbers sketch at the end of this post). **Note: The phrase 'Every time' is used generically and there are exceptions to this analogy... always.
Comparative Thoughts and Conclusions
I personally think that what's under the hood of our current Vectorworks is a finely tuned drafting machine which, although we all occasionally have various gripes about it, does an amazing job at creating beautiful CAD drawings. I believe that there are applications which render far better than Vectorworks can - and likely ever will - due to the way in which Vectorworks is built. From a rendering perspective, CPU-based NURBs rendering is a horrible approach, and those players in the market who are doing voxel-based, GPU multi-threaded rendering are doing it well. The upside we all gain from Vectorworks being a CPU-based NURBs environment is very lightweight files with infinite resolution. I personally don't want Vectorworks to invest in this market any further. I would like it, but if they were to do it I believe they (poor programmers) would have to re-write so much of the core application that we (the user group) would lose out in so many ways - you can't buy a Ferrari and then take it off-roading on weekends.
As @EAlexander quite rightfully said; I use Vectorworks every work day. I render things in Hidden Line, I render things in OpenGL (Shaded). The moment it's time to send something visually to a client I will send that model to TwinMotion. That's been my professional workflow for the last 4 years and I haven't looked back. Previous work examples and comparisons below;
Vectorworks CAD Examples
Palais Theatre (J.Russell 2024) - Modelling in Vectorworks
Hamer Hall (J.Russell 2018) - Modelling in Vectorworks
Vectorworks Rendering Examples
Royal Pines (J.Russell 2019) - Modelled and Rendered in Vectorworks
Sidney Myer Music Bowl (J.Russell 2018) - Modelled and Rendered in Vectorworks [Still from Sun Study]
TwinMotion Rendering Examples
Regent Theatre Melbourne (J.Russell 2024) - Modelled in Vectorworks, Rendered in TwinMotion
Sidney Myer Music Bowl (J.Russell 2024) - Modelled in Vectorworks, Rendered in TwinMotion
I think this is a really valuable discussion - thanks for all contributions so far. Cheers, James
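As promised above, a quick hedged sketch putting a number on the fixed-resolution point - the maximum gap between an n-sided polygon and the true circle it approximates is the sagitta e = r * (1 - cos(pi/n)); the values below are examples only:

PROCEDURE ChordError;
CONST
    kPi = 3.14159265358979;
VAR
    r, n, e : REAL;
BEGIN
    r := 1000;                     { circle radius in mm - example value }
    n := 64;                       { polygon segment count - example value }
    e := r * (1 - Cos(kPi / n));   { max gap between a chord and the true arc }
    Message('Max deviation: ', e, ' mm'); { roughly 1.2mm for these values }
END;
RUN(ChordError);

So a 1m-radius circle meshed at 64 segments is already over a millimetre off the true curve - that's the trade-off the NURBs side avoids entirely.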
  4. Hi Etienne, The context here certainly helps, and it sounds like a wonderful process and product creation overall. The process you've outlined is similar to how I create files ready for laser cutting, where different Pen Colours (which for ease I also bind to Classes) mean different processes or depths (Score Surface, Engrave Surface, Cut Surface, Double / Triple Pass Cut Surface, 10% Cut, 20% Cut, etc). Example of a Laser Cut and Engraved Access Point 'Coathanger' Design - J.Russell 2024. I personally think you're riding the line between a fairly diverse Standard Operating Procedure and an Automated Task (be it Python / Vectorscript / Marionette). The appeal of replacing or supplementing this series of tasks with an Automated Task is understandable, however I think you'll hit several barriers in niche situations when attempting to cover such a broad range of possible designs / arrangements / objects. Certainly never wanting to deter creativity though, let's break your actions into some smaller steps!
Name Recognition
Although I don't fully understand this step, there are so many recognition options available in scripting that I'm sure this would be possible. Individual objects in Vectorworks can be named. If you were working on a Symbol basis they are named and can be recalled very easily. Ungrouping and further manipulation can also occur. Objects could have records attached and then be named and categorised in this way.
Width (and/or Height & Depth)
Again, something that is done in so many different ways in Vectorworks. The most immediate is a concept called a 'Bounding Box', where the overall size of an object is measured and compared. You'll find this term in all the script options available to you. Once you have a Bounding Box for all your items you'd be able to compare this to your fabric rolls (which could be a populated list stored in the script or read from CSV) - there's a small hedged sketch of this below.
Ungrouping and Collision Checks
There are several methods of checking if a 2D object is inside / touching another 2D object; for example, GetPolyPt & PtInPoly are both used in scripting to assess points. Intersecting two objects will tell you if they overlap (and the result of this would be used as an indicator of overlap).
Decomposition
Totally possible via scripts, you'd just need to ensure it's decomposed to exactly the level / detail you wanted.
Classing by Colours and Rules
This is all just Criteria (defining things) and Classification (assigning things). Any of the rules you wish to choose are fine, you'll just have to find the preferential order of things.
Collinear Lines (Touch, Overlap or Superimposed)
We've covered this one - not in Marionette, and I'm still sure it's possible... probably.
All of the above IS possible. I don't think you want a mega-script which just steamrolls from Step 1 to Step 6. I think you'd need to break each of the tasks down and create smaller scripts to optimise your workflow - as human interaction is still going to be both required and visually desired to ensure each step is working correctly.
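As promised above, a minimal hedged sketch of the Bounding Box comparison (the roll widths are placeholder example values; in practice you'd populate them from your CSV):

PROCEDURE FitToRoll;
CONST
    kRolls = 3;
VAR
    h : HANDLE;
    x1, y1, x2, y2, objWidth : REAL;
    rolls : ARRAY[1..kRolls] OF REAL;
    i : INTEGER;
    found : BOOLEAN;
BEGIN
    { Placeholder roll widths in mm - swap for your real stock list }
    rolls[1] := 1370;
    rolls[2] := 1600;
    rolls[3] := 3000;
    h := FSActLayer; { first selected object on the active layer }
    IF h = NIL THEN
        AlrtDialog('Select an object first.')
    ELSE BEGIN
        GetBBox(h, x1, y1, x2, y2); { overall extents of the object }
        objWidth := Abs(x2 - x1);
        found := FALSE;
        FOR i := 1 TO kRolls DO
            IF (NOT found) AND (objWidth <= rolls[i]) THEN BEGIN
                Message('Object width ', objWidth, ' mm fits roll ', rolls[i], ' mm');
                found := TRUE;
            END;
        IF NOT found THEN
            Message('Object width ', objWidth, ' mm is wider than all rolls.');
    END;
END;
RUN(FitToRoll);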
Next Steps: If you're really keen on learning Marionette then I think you should check out the Vectorworks University, in particular the following courses;
Introduction to Marionette https://university.vectorworks.net/mod/page/view.php?id=647
Advanced Marionette https://university.vectorworks.net/mod/overview/view.php?id=2483
If you're keen on learning either Vectorscript and/or Python then the resources directly related to Vectorworks are limited - mainly because these languages, Python in particular, are so broadly applied.
Boost Your Productivity with Scripts ( @michaelk so many script flexes in one video...) https://university.vectorworks.net/course/view.php?id=320
Places like Codecademy or Brilliant come fairly highly recommended for Python beginners - however you'll still at some stage need to get familiar with the Developer Wiki; https://developer.vectorworks.net/index.php?title=Main_Page
I know it's not the immediate solve you might be looking for, however if you persist it can and will lead to the best solution for you and your company long term. Joining the forum and asking questions is the first step though - well done! Cheers, James
  5. Hi Letti, I'm using ChatGPT-4o with a Personalisation profile I have made / refined directly for Vectorscript within Vectorworks.
Initial Script and Steering
Ideally, if you're planning this, you should start a blank series of chats with Memory retention active and then be really specific with your initial instruction sets, with things like;
I'm writing a Vectorscript for use in Vectorworks. Limit coding responses only to code and language found in the Vectorscript Function Libraries. Do not create or presume standard functions from Python.
Then you need to be super specific with your initial brief. It helps that I personally have a fairly heavy math and coding background, so I was able to get a fairly concise brief with both the approach and initial code chunks. I would suggest starting these projects small, and I often ask ChatGPT to re-describe my initial brief back to me before even starting the code base - to ensure that it clearly understands the goals of a project. You will always still encounter what I refer to as 'code ghosts', which are made-up creations that ChatGPT retains and implements based on other coding languages (in particular Python, as it's close enough to Vectorscript). This is where the ability to read the code it supplies is important, to be able to identify the made-up parts.
Revisions and Adjustments
Firstly, ChatGPT (and other natural language models) work really well with code. With Vectorscript you can feed the Script Errors directly back to it and it will, 90% of the time, be able to solve them.
Hot Tip (Script Error Output): Instead of re-typing the Script Errors dialog, or taking a screenshot, you can find the raw text file this generates in your Application Support folder, for example (on Mac); yournamehere/Libraries/Application Support/Vectorworks/20XX/Error Output
This means you're able to copy the raw text, which is much better and easier for ChatGPT to manage. Additionally, when doing revisions, continually go back to previous versions and steer towards a different path. Don't be afraid to say: "I don't like this version, let's go back to [Copy Code Chunk] and then I want to try...". Every time I have a successful code chunk I save that version. You can do this iteratively through your Resource Manager with version numbers. If you're getting nerdy... as I might sometimes be... you'll use Visual Studio Code with a Vectorscript language plugin and keep versions in Git or similar.
Limitations
Don't expect that ChatGPT, or any other natural language model, will be able to just generate a giant script instantly and perfectly without your interaction. The script above took 8 ChatGPT versions, with me correcting 2 of them manually (in this case the "whipped up" timeframe was 36 minutes from conception to final). I use natural language models to write significant portions of code now in all languages, or at least to give me alternative code methods for each problem, and the biggest tip is to only do small portions which are related to each other. This is where a modular, function-based approach works so well: refine one Function to do a specific task, continue with your natural language model until that Function runs exactly how you want, and then start afresh on the next Function - then tie them together at the very end with everything defined. Anyways, that's how I'm using natural language models currently... thanks for coming to my TED Talk? Cheers, James
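To make the modular idea concrete, here's a trivial hedged skeleton - one self-contained FUNCTION you'd refine in isolation with the model, then call from the main body once it behaves (all names here are placeholders):

PROCEDURE ModularExample;

FUNCTION CountSelected : INTEGER;
VAR
    h : HANDLE;
    n : INTEGER;
BEGIN
    { Walk the selection on the active layer and count objects }
    n := 0;
    h := FSActLayer;
    WHILE h <> NIL DO BEGIN
        n := n + 1;
        h := NextSObj(h); { next selected object }
    END;
    CountSelected := n;
END;

BEGIN
    { Tie the refined pieces together at the very end }
    Message('Selected objects: ', CountSelected);
END;
RUN(ModularExample);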
  6. Hi Etienne, I'm so onboard with Marionette. I love it for all the visual things. I'm just not super across it for global activities like the one you're suggesting. I'm definitely not saying don't in this case - and I'm grateful that @Marissa Farrell continually changes my perspective on this - however for a comparative math operation like this I typically do it in Vectorscript or Python. An example is below, which I just whipped up and tested, to take this: Into this [Purple are newly created merged lines]: Presuming I've understood your request correctly, the logic here is pretty straightforward;
1. Find any two lines (H1 and H2 in this case).
2. Find their Start and End Points.
3. Find their Vectors (directions).
4. If [ the Start AND End of H2 are within the Start and End of H1 ] AND [ the vectors of H1 and H2 are the same ] then H2 can be deleted.
5. If [ the Start OR End of H2 is within the Start and End of H1 ] AND [ the vectors of H1 and H2 are the same ] then H2 must be a continuation of H1. Find out if H2 extends the Start or End of H1 and then create a new line to cover the whole length.
6. Rinse and repeat until there are no more lines.
Vectorscript Code Example:

{ James Russell FEB 2025 }
{ Math and logic by me, code layout thanks to ChatGPT... because I'm time poor. }
PROCEDURE ConsolidateLines;
VAR
    h1, h2, hNew : HANDLE;
    x1a, y1a, x2a, y2a : REAL;
    x1b, y1b, x2b, y2b : REAL;
    merged : BOOLEAN;

FUNCTION HasSameVector(x1a, y1a, x2a, y2a, x1b, y1b, x2b, y2b : REAL) : BOOLEAN;
BEGIN
    { Two lines have the same vector if their direction vectors are proportional }
    HasSameVector := ((x2a - x1a) * (y2b - y1b) = (y2a - y1a) * (x2b - x1b));
END;

FUNCTION FullyContains(x1a, y1a, x2a, y2a, x1b, y1b, x2b, y2b : REAL) : BOOLEAN;
BEGIN
    { Check if the second line is fully inside the first line and has the same vector }
    FullyContains := (x1b >= x1a) AND (y1b >= y1a) AND (x2b <= x2a) AND (y2b <= y2a) AND
        HasSameVector(x1a, y1a, x2a, y2a, x1b, y1b, x2b, y2b);
END;

FUNCTION Overlaps(x1a, y1a, x2a, y2a, x1b, y1b, x2b, y2b : REAL) : BOOLEAN;
VAR
    minAx, maxAx, minBx, maxBx, minAy, maxAy, minBy, maxBy : REAL;
BEGIN
    { Get bounding values for both lines }
    minAx := Min(x1a, x2a); maxAx := Max(x1a, x2a);
    minBx := Min(x1b, x2b); maxBx := Max(x1b, x2b);
    minAy := Min(y1a, y2a); maxAy := Max(y1a, y2a);
    minBy := Min(y1b, y2b); maxBy := Max(y1b, y2b);
    { Overlaps if X and Y ranges intersect }
    Overlaps := (maxAx >= minBx) AND (maxBx >= minAx) AND (maxAy >= minBy) AND (maxBy >= minAy);
END;

BEGIN
    h1 := FInGroup(ActLayer); { First line }
    WHILE h1 <> NIL DO BEGIN
        IF GetType(h1) = 2 THEN BEGIN { Ensure it's a line object }
            GetSegPt1(h1, x1a, y1a);
            GetSegPt2(h1, x2a, y2a);
            h2 := FInGroup(ActLayer); { Second line }
            WHILE h2 <> NIL DO BEGIN
                IF (h1 <> h2) AND (GetType(h2) = 2) THEN BEGIN
                    GetSegPt1(h2, x1b, y1b);
                    GetSegPt2(h2, x2b, y2b);
                    merged := FALSE;
                    { If h2 is fully inside h1 and has the same vector, delete h2 }
                    IF FullyContains(x1a, y1a, x2a, y2a, x1b, y1b, x2b, y2b) THEN BEGIN
                        DelObject(h2);
                        merged := TRUE;
                    END;
                    { If h2 overlaps h1 and shares the same vector, merge them }
                    IF Overlaps(x1a, y1a, x2a, y2a, x1b, y1b, x2b, y2b) AND
                       HasSameVector(x1a, y1a, x2a, y2a, x1b, y1b, x2b, y2b) THEN BEGIN
                        MoveTo(Min(x1a, x1b), Min(y1a, y1b));
                        LineTo(Max(x2a, x2b), Max(y2a, y2b));
                        { Correctly get the last created object }
                        hNew := LNewObj;
                        { Ensure hNew is valid before applying colour/thickness }
                        { Remove this section if you don't want your lines coloured - I just did this for the example }
                        IF hNew <> NIL THEN BEGIN
                            SetPenFore(hNew, 50000, 0, 50000); { Purple }
                            SetLW(hNew, 25); { 1pt Line Thickness }
                        END;
                        DelObject(h1);
                        DelObject(h2);
                        merged := TRUE;
                    END;
                    IF merged THEN BEGIN
                        h1 := FInGroup(ActLayer); { Reset loop to check new lines }
                        h2 := NIL; { Exit inner loop }
                    END ELSE BEGIN
                        h2 := NextObj(h2); { Move to next line }
                    END;
                END ELSE BEGIN
                    h2 := NextObj(h2); { Move to next object }
                END;
            END;
        END;
        h1 := NextObj(h1); { Move to next line }
    END;
END;
RUN(ConsolidateLines);

* I'm very sure there are still fringe cases in which this doesn't work. It's an example.
** It doesn't work on any curves. You could change it to operate on a different Layer/Group/Selection (see the sketch below).
*** I'd still very much like to see a Marionette Network that does this... and might build one later...
Anyways, just an option to think about - great question though! Cheers, James
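On the 'different Layer/Group/Selection' note in the footnotes above, one hedged variant is to drive the traversal with criteria rather than walking the active layer - for example, visiting only selected lines. The per-line action below is just a placeholder:

PROCEDURE SelectedLinesOnly;

PROCEDURE VisitLine(h : HANDLE);
BEGIN
    SetLW(h, 25); { placeholder action - the merge logic would go here }
END;

BEGIN
    { (T=LINE) matches line objects; (SEL=TRUE) restricts to the current selection }
    ForEachObject(VisitLine, ((T=LINE) & (SEL=TRUE)));
END;
RUN(SelectedLinesOnly);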
  7. Hi Martin, I was having a look at your profile following your recent LED Curved Wall project - and I actually have the answer to this query. Get ready for some secret sauce! I've been wanting Marionette to Worksheet two-way integration for the longest time. You'll find all my findings so far under VB-204683. Thanks to Vlado and the team we're getting a lot closer to this dream - and in VW 2025 the logic, at least, is there. Here's the current magic;
  • Worksheet Database Row, Criteria: Type is Marionette Object (obviously narrow as needed)
  • Header type > Functions > Specialized for Marionette Object > ObjectData(parameter)
  • Header Row: =OBJECTDATA('PARAMETER', 'Your Field Name')
You MAY have to update the object once and then do a File > Recalculate All Worksheets. You MAY have issues with some numbers of fields from Marionette (for example, if you're using order numbers, 100FieldA and 101FieldB will sometimes have issues). But I've just done this with your LEDCurve V2 file and I'm able to update the NumLedWidth field in the spreadsheet and your Marionette Object also updates in real time. Kinda, super, cool! Have fun, happy Marionetting! Cheers, James
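As a concrete instance using the NumLedWidth field mentioned above, the Header Row formula would read:

=OBJECTDATA('PARAMETER', 'NumLedWidth')

(Hedged: this assumes the field's display name matches exactly - leading sort numbers on Marionette fields can trip this up, per the MAY notes above.)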
  8. Martin, Firstly, well done on an excellent LED tool script. @Scott C. Parker and team, if you haven't seen this: from a concept point of view it's an excellent example of how a modular Simple Curve LED Wall tool could look for future releases of Spotlight LED Packages - albeit probably programmed more leanly in raw Python than in Marionette. I think the way you've done your texture mapping is super interesting and I'll be having a dive into that Marionette logic a bit later - congratulations!
  9. Hi, I've found the Datasmith Direct Live Link is only functional for plans <50MB and useful for talking in near-realtime with clients. For any serious plans (50MB+ and iterative versions), an independent export of a Datasmith file (Hierarchical Export) is the way to go. Material bindings are tricky and in my experience follow these rules;
  • Exported objects from VW contain a unique identifier (UID) and a bound material. As long as the object remains the same (same UID), material changes you make in TwinMotion will be retained when the object is moved. New objects, however, will not have materials updated automatically in TwinMotion. For example: if you export 5 Red Cubes from VW, then in TwinMotion replace that Red Texture with Bricks, the 5 Cubes from VW will retain this information if they're moved around or resized. If you duplicate a Cube in VW to make a 6th Cube and then either update the Datasmith Live Link, or re-export the file, the 6th Cube will not automatically have the Brick Texture applied. You can make use of the Texture Substitution feature of TwinMotion on very large projects to avoid part of this, however that's quite a rabbit hole in itself.
  • Backup files have all-new UIDs. I'll have to check if there's actually a bug report / feature request on this, however if you have to restore from a Backup File, all UIDs of objects will be different, meaning that if you create a new Datasmith Export from VW all material bindings will be gone.
  • Lastly, when exporting to Datasmith (either Live Link or Manual), the 'Quality' setting you choose must remain the same between all exports. Different Quality settings also seem to produce different UIDs for the same object (which makes sense - VW exporting a Cylinder with 64 Faces on Medium is a different object to a Cylinder with 128 Faces on High).
Even though I personally would like to see continued development of the Datasmith Direct Live Link and its capabilities, I think for any Venue / Building / Site renderings that you'd be doing from Vectorworks to TwinMotion you're realistically better off doing iterative structured exports in stages (even for size / management, doing it Hierarchically by Saved View Sets). Below: Examples of a recent finished project (The Regent Theatre - Melbourne, Australia) which contained quite complex asset and texture management. Happy rendering - hope that helps. Cheers, James
  10. In Stable Diffusion, negative prompts are used to guide the model away from generating certain elements in the image. The prompt should describe the unwanted elements directly, without using negations like "no" - you simply state the elements you want to avoid. For your example, if you don't want an image with bright colors, the negative prompt should be "bright colors". Note: It feels horrible for me as an Australian to use Colors over Colours, however slightly more of the SD datasets acknowledge American English... As a more detailed example, say your objective was to make an image of a cozy vintage cafe with wooden furniture, dim lighting, and a warm atmosphere AND to avoid any modern elements, bright colors, and people.
Positive Prompt Input Field: "A cozy vintage cafe with wooden furniture, dim lighting, and a warm, inviting atmosphere. The cafe has antique decorations, old bookshelves, and (a small fireplace)."
*Note: In the examples below the (a small fireplace) was incremented for the second image as another test.
Negative Prompt Input Field: "Bright colors, modern elements, people, electronic devices, neon signs, plastic furniture"
Happy generating! J
  11. @zoomer negative prompts have equal weighting to positive prompts, and as such we need to be super careful with blanket negative prompting. For example, the terms 'gross proportions' and 'ugly' are not terms which should be baked into the negative steering of a Model or Checkpoint. Imagine you're doing a Tim Burton, Guillermo del Toro or Wes Anderson inspired piece: the characters and theme you're aiming for are inherently 'grossly proportioned' and, to some, 'ugly'. I agree with @Luis M Ruiz that suggesting these prompts is, for the most part, helpful for those learning AI visualisation and as a GUI feature overall - however I would not like to see these baked into the Model or Checkpoint in any way, as I think it would immediately set a model bias which would limit creativity.
  12. I think much of this relies heavily on the success of Apple Vision Pro and whether they're using the same Apple RoomPlan technology for elements of its mapping (which I suspect they're not - it's just active point-cloud LiDAR for surface and object detection). Realistically, until Apple RoomPlan updates for uneven surfaces, curved walls, stairs, columns, raked floors and many other things, I don't see why/how Nomad would be able to do anything further with this feature - however I'm happy to be proven wrong! 😛
  13. I personally think that using generic bulk negative prompts is problematic when working with text-to-image machine learning models. I've seen a lot of people these days who just leave their negative prompt as "bad anatomy, deformed hands, deformed face, ugly hair", eventually filling their prompt cap with such phrases. The biggest issue is how the machine learning treats the assessment of the negative phrases. For example, although "deformed hands" will overall reduce finger deformity, it will also generate images with far less emphasis on "hands" altogether, leading to people with their arms behind their backs or out of frame. A better approach is, when refining the image, to use positive phrases like "detailed hands" or "detailed hair", which steer the model towards a positive and detailed emphasis on these elements.
If you've looked at a few of my previous posts on the current implementation of Stable Diffusion in VW2024, the above methodology, and prompts in general, can only go hand-in-hand with better control over the Stable Diffusion generation - primarily the Sampling Steps (crucial for a prompt like "detailed hair", as sampling steps <20 won't even get to hair detailing) and InPainting (the process of masking just an area of an image for generative change - for example, you could mask just the hand of a person and then run 20 batched images over just those hands at a low "Creativity" / CFG and choose the best fit).
Stable Diffusion (Same engine as VW, Automatic1111 Interface)
Below is an example of a generated human on the right (text-to-image creation), with the InPainting region shown in black on the left - noting this is Stable Diffusion XL in the Automatic1111 container. The goal of this example is to show the ease with which InPainting can correct fingers (or other elements).
(Left: InPainting Mask, Right: Chosen Iteration of Human Figure)
The video below shows 50 example hand replacements. Each generation took around 7 seconds on a local machine.
Hands Example.mp4
50 Images created in 1 Batch for hand replacement, 40 Sample Steps with 7 CFG, 0.4 Denoising Strength (Creativity).
Overall, in my experience, prompts (both positive and negative) should be applied on a per-image / per-intention basis, as starting with a bulk prompt eliminates huge portions of the dataset overall. Or as @Luis M Ruiz rightly stated;
  14. @zoomer, sorry, I should have been slightly clearer. The list above covers the capabilities of Apple RoomPlan. I'm unsure if they're all implemented in Nomad - that would probably require someone from the Dev team to chip in. I posted the list more so people could make a clear assessment of, for example, "Will Nomad know what a car is if I scan it?"; as indicated by the list above, Apple RoomPlan doesn't know what a car is, therefore it's safe to say Nomad won't know what a car is. Again, the Dev team might be able to shed more light on this - and more importantly, I'd like to know the future of this app in terms of active development.
  15. Ahoy, Having worked with Apple RoomPlan on other projects for a little while now, I'm fairly confident that the Nomad Scan is a refined/polished version of the full base code and sample available - my sporadic testing over the last few years seems to correlate with this - and @techdef, I echo the sentiments on the lack of Stairs and Multi-Level surfaces. For those of you interested, the Apple RoomPlan capabilities are documented here ( https://developer.apple.com/documentation/roomplan/ ) but as a short list, here are the available room details;
Inspecting room details
  • The story, floor number, or level on which the captured room resides within a larger structure.
  • An array of floors that the framework identifies during a scan.
  • A 2D area in a room that the framework identifies as a surface.
  • An array of doors that the framework identifies during a scan.
  • An array of objects that the framework identifies during a scan.
  • A 3D area in a room that the framework identifies as an object.
  • An array of openings that the framework identifies during a scan.
  • An array of walls that the framework identifies during a scan.
  • An array of windows that the framework identifies during a scan.
  • One or more room types that the framework observes in the room.
And the available object detection categories;
Determining the object category
  • A category for an object that represents a bathtub.
  • A category for an object that represents a bed.
  • A category for an object that represents a chair.
  • A category for an object that represents a dishwasher.
  • A category for an object that represents a fireplace.
  • A category for an object that represents an oven.
  • A category for an object that represents a refrigerator.
  • A category for an object that represents a sink.
  • A category for an object that represents a sofa.
  • A category for an object that represents stairs.
  • A category for an object that represents a storage area.
  • A category for an object that represents a stove.
  • A category for an object that represents a table.
  • A category for an object that represents a television.
  • A category for an object that represents a toilet.
  • A category for an object that represents a clothes washer or dryer.
I think with these two lists in hand you can see exactly where the capabilities are - at least in my current usage and understanding of both Nomad and Apple RoomPlan. Happy scanning! J