Everything posted by M5d

  1. Apple Insider has mentioned, in a couple of threads, that Apple had indicated to them there will be a basic stand (Reply 38), but as yet they haven't been able to re-confirm it. Another interesting article on a new collaboration between Graphisoft and Epic / Twinmotion has popped up at Architosh which, perhaps, provides perspective for Apple's Mac Pro. My take: once you've purchased Apple's "modular" Mac Pro, you've likely purchased a chassis, power supply and cooling system to take you through the next 3 or 4 generations of internals, hence "The Power to Change Everything" and "Through and through, Mac Pro is built to change with your needs."
  2. I've yet to try it, but I went through the Unreal Engine FAQs; there are a few answers / links there that may be helpful: www.unrealengine.com/en-US/faq?active=twinmotion www.youtube.com/playlist
  3. architosh.com/2019/05/epic-acquires-twinmotion Nice promotion.
  4. There's a crashing file in a direct message, which I think I've sent; if it's not there, I'll try again.
  5. Hi Marissa, I'm on a bit of a learning curve here and fell down a few rabbit holes. I tried to create a simplified example of the problem to post, without the maths and in a single configuration, but the behaviour changed. After some trial and error I eventually found the wrapped networks were forming into separate groups due to the use of Valves where I wanted part of their outputs stopped. I also tried Filters combined with Delete nodes, but that caused Vectorworks to crash. Anyway, what I've learned is that you need to pass a like object along in place of any parts that are no longer required; then the outputs of the separately wrapped networks can be formed into a single group. I'm using tiny polygon triangles placed at the apex as a solution for the moment (sketched below). For those of us who lack proper programming knowledge, Marionette is an enticing and more intuitive tool, but getting a handle on the various nuances of when and where its similar-but-different nodes can and should be applied, like the Valve, Filter and If nodes, is difficult from the available information. The other thing I'm wondering, after managing to crash Vectorworks quite a bit through poor usage / understanding: shouldn't the Debug mode also act as a kind of sandbox for finding such issues?
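     A minimal plain-Python sketch of that placeholder idea, in case it helps anyone who lands here with the same problem. It's only the logic, not the Marionette node API, and the names (make_placeholder, pass_along) are mine:

         # Instead of dropping unwanted outputs (which a Valve does, after
         # which the wrapped networks form separate groups), substitute a
         # tiny stand-in so every branch still hands something downstream.
         def make_placeholder(position):
             # Stand-in object; in my networks it's a tiny triangle polygon
             # placed at the apex, here it's just a tagged tuple.
             return ("placeholder", position)

         def pass_along(parts, keep_mask):
             # Keep the wanted parts; replace the rest with placeholders
             # rather than deleting them, so the downstream group sees a
             # consistent stream from every wrapped network.
             return [part if keep else make_placeholder(part[1])
                     for part, keep in zip(parts, keep_mask)]

         parts = [("chord", (0, 0)), ("web", (1, 2)), ("gusset", (2, 4))]
         print(pass_along(parts, [True, False, True]))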
  6. Is there a way to prevent or reverse the default groupings that Marionette creates from wrapped objects and operations? I have a truss object that is the composite of polygons created within separate wrapped networks (and sub-networks) for organisation. At the final stage I would like to include an option to extrude them as a single object, but the Ungroup node doesn't appear to work on wrapper groupings or on the nested groupings that result from internal wrappers.
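     In case a scripted workaround helps anyone: the calls below are standard VectorScript functions exposed in Python (FInGroup, NextObj, GetTypeN, HUngroup), but whether they can reach inside Marionette's wrapper groups is an assumption on my part, a sketch rather than a tested fix:

         import vs  # Vectorworks' built-in scripting module

         GROUP_TYPE = 11  # standard VectorScript object type code for groups

         def flatten_groups(container):
             # Recursively dissolve nested groups so the polygons can be
             # selected and extruded as one set.
             child = vs.FInGroup(container)
             while child is not None:
                 nxt = vs.NextObj(child)
                 if vs.GetTypeN(child) == GROUP_TYPE:
                     flatten_groups(child)  # empty out nested groups first
                     vs.HUngroup(child)
                 child = nxt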
  7. And in earlier versions too, but transferring between files never worked. As you've discovered, the class settings would get messed up. I'm pretty sure Viewport Styles, or something akin to that, has been wish-listed a few times.
  8. Something else to check is that the cells are not formatted as text cells. If they are, or even if they were when the formula was entered, you need to use Clear Contents first before re-entering the formulas, or they won't respond. I think it's a bug; cells that are formatted as text probably shouldn't have the record field drop-down showing, it's confusing!
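     For the scripting-inclined, formulas can be (re)entered by script too; SetWSCellFormula and RecalculateWS are standard calls, though the worksheet and record / field names here are invented for illustration:

         import vs

         # Re-enter a record-field formula into cell B2. If the cell was
         # formatted as text, Clear Contents first or the formula sits inert.
         ws = vs.GetObject('Room Schedule')  # made-up worksheet name
         if ws is not None:
             vs.SetWSCellFormula(ws, 2, 2, 2, 2, "='Room Data'.'Name'")
             vs.RecalculateWS(ws)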
  9. So, I see my profile is still here. I'll assume (with my previous post) that that carries some sort of mutual understanding. I appreciate your position is between a rock and a hard place at times @Jim Wilson. I would not want the crucible of the company's fragilities for myself, nor should we (users) feel as though we're being moderated by them either; that's a fine and fuzzy line, I'm sure. @P Retondo it was the criteria that went under the knife above; what was said wasn't meant to detract from the notion of an "overall" measure. The thrust of my comments was driven more by the context than the subject, if that makes any sense. Still, from a user's perspective, I'd much prefer such things were simply academic to us and of no real consequence to everyday use. We purchase products to do a job; piloting what probabilities lie between their promotion and pitfalls each year shouldn't be a job in itself, not from a company of Nemetschek's scale at least.
  10. Well Jim, after reviewing what I wrote, I don't think you were accused as such. The discussion, however, was all about perceptions and how they are used, one way or another. Much of my response was simply bouncing off what had already been introduced into the conversation, on this and the previous occasion. And the knots that either you or the company get tangled up in when trying to cast a particular light are definitely not of my making. I rarely comment, Jim; the only thing that generally motivates me to comment is a threshold in those perceptions, and if I'm not talking freely about them, then I don't see the point. A final warning (also the first) or finishing the job is effectively the same thing, based on what you hope it engenders. They're strange things taunted with expectation! Anyway, it appears I cannot close my own profile, so use your power, Jim.
  11. @line-weight don't worry, I actually like Jim and his presence in the forum. If my recent posts seem unfair, like I'm gunning for Jim himself, it's not so. What I'm sighted on is the use of rhetoric and other forms of magic where the substance is lacking. Unfortunately, Jim's role makes him a focal point for any conflicted narratives the company might hope to run through these forums. These forums, however, should belong to the users more than the company; they're the only vehicle we have to "dial it up", so to speak, when the company appears to engage in less than honest practices.
  12. @Jim Wilson @P Retondo was responding to the OP of the original thread and the productivity issue in focus, an issue we're all well aware of. That discussion was addressed to the growing "lag" or unresponsiveness of everyday, essential tools, as is being discussed in a number of other threads too. For what it's worth Jim, I agree with you, benchmarking the "problem" is pointless; it doesn't make it any less of a problem though. Nor do I think P Retondo's post was about avoiding the issue raised by the OP of the original thread either. So yeah, the benchmarks you've proposed in response to the original thread really don't help or define anything of use; they're just a convenient distraction. And I don't believe there's any great demand or concern out here, in user-land, for benchmarking the processes that are working well and utilising our multi-flavoured rainbow of new and old hardware to their capacities. The reason for this is quite simple Jim: those processes automatically improve when we purchase new hardware, and the purchase of new hardware is a business decision under our control. The main "hardware"-constrained metric of use to us, and likely to inform any purchasing decision, is rendering. The growing state of unrest about the state of Vectorworks, however, is because our businesses also rely on the decisions made by N.N.A's management. We build our businesses around the "well-tuned" use of software and platforms that, once established, are very difficult to shift from without a major disruption. That . . . . "well . . . . dash . . . . tuned" . . . . use . . . . of . . . . the . . . . software however, is being disrupted by the poor prioritisation of N.N.A's executive in not maintaining the underpinnings of many tools, to the point where they've become unresponsive to the pace of users as their complexity has grown. This is a problem, as you've pointed out, that we cannot fix with a hardware purchase and, as you have also pointed out, is worsened by having other applications open; but these are everyday tasks we're discussing here, everyday tasks that are becoming everyday issues for everyday users, every day. This directly concerns you Jim, your role and your conveyance of the discourse between the two parties involved here. In the greater portion of your work, you're assisting users "objectively" with genuinely "specific" technical issues, and you're applauded for the work you're paid to do. In the other portion of your work, managing the "user experience" where the issues are universal and not "specific", your responses are a matter of public relations and are consequently made "subjective" by the very same pay cheque. The issues at hand are an ever-growing public relations problem for N.N.A. and until they address the core problems of the software with a definitive response, that PR issue is only going to grow. What's of concern from you Jim, reviewing your more Orwellian tactics, is that instead of a firm and direct response to those fundamental problems, such as a statement about how and, most importantly, when they'll be resolved, you keep finding ways and means of circumventing them as topical issues. The worrying thing is why? Why all this energy directed away from the problem and crickets about the fix, or worse, the tin-eared dismissal that was given in the original thread.
The OP of the original thread clarified the matter was for "General Discussion" in their second post, but clearly you found it more convenient to ignore those remarks, to deem the issue as specific and play semantics with the verb that was used to describe what had actually happened to that thread. Determining issues as "specific", side-stepping into arguments about "subjectivity" between versions or proposing new distractions doesn't fix the issue or inform us about when it will be fixed. Just to be clear on this new diversion of inter-version "subjectivity": what is tested or not tested is of no real interest. Of course a tool that is the same today as it was five years ago is going to perform approximately the same (it will no doubt perform much better when multi-threaded, I suspect), but many tools are not the same today as they were five versions ago; while their complexity has grown, the hardware aperture through which they operate has been left behind. It seems incomprehensible that N.N.A's executive could not have seen this problem coming from a long-long-long way off. And yet, here we are with you running around trying to put out fires, implying it's subjective, it's the user, it's specific . . . which is tantamount to saying we're crazy! Anyway, so you don't like my passive-aggressive tone? What am I to make of this? Is it another permutation of the way the original thread was handled; am I being threatened with disbarment as a result? You've got all the power on that front I'm afraid Jim; how you use it is up to you. My response to your "perception" of my tone, however, is this . . . So I guess what is passive-aggressive is "subjective" too, Jim? And I suppose that you had no idea about what was actually being referred to in the post that you were responding to here? Well, my tone was not passive-aggressive, what made you think that? Believe me, all my questions and forgetfulness were "genuine". My tone, if I had one, stemmed from my reading of the original thread, where the behaviour of a company representative towards a user, client and customer was the appalling use of various tactics motivated towards suppressing the extent to which some negative feedback might permeate the forum, presumably out of commercial interest. Subjective? Maybe. But you could have just as easily applied your energies "genuinely" to the original thread; instead you created this "controlled" distraction. What does the existence of this thread actually say? Might that be subjective too? P Retondo's post was not about hardware, Jim, or the purchase thereof; if you had read it properly, you would have understood the use of the hardware analogy was as an example for some kind of equivalent way of measuring the processes going on inside our software. The concern was for the impact moving between different versions of software is having on our productivity and our businesses. As to the politics of the day, often referred to as Post-Truth, that's about saying whatever it takes, or doing whatever it takes, in service of the particular entity you derive some material benefit from. It's about immediate and short-term perspectives over the broader collective and our collective human dignity. Post-Truth involves the use of distortions, distractions and diversions whenever and wherever its ugly reality becomes an inconvenient truth to those it seeks to manipulate. So the irony, Jim, is that this is exactly my perception of what has been deployed and accepted here.
Do we need to go over the rhetoric that was used to sell 2019, which, by my reading, implied issues like this had finally been resolved? It's simple man, just don't spin us for fools, that's all; straight-up, honest, plain speaking is all that's required.
  13. I'm unsure what the jump-cut from the previous thread's question to this thread's answer is intended to convey, Jim? The question asked in the original thread (ever-decreasing performance of Vectorworks) was about "slowness", the progressive deterioration of productivity from version to version and, obviously, the frustrated user experience, which got summed up as, "what's wrong with Vectorworks?" To "clarify", Jim, is this statement intended to address the "user experience" that was raised in the previous thread? If so, I don't see how that's not already meaningful? Also, I have to say, the logic of deliberately avoiding benchmarks that include the apparent, problematic operations and calculations etc., until they're no longer problematic, is truly inspired. Is that a standard operating procedure (S.O.P.) at N.A.? Anyway, let's agree, for convenience's sake, that as you say, these operations and calculations etc. are not "benchmark-able in a meaningful way, since a lot of the slowness in those operations is currently a Vectorworks software limitation", which is exacerbated by "what other applications [are] open" "and not the fault of your hardware" . . . hmm, I think I've forgotten my point; let's just agree that you've already summed it up perfectly!
  14. Does your graph indicate the percentage of users actually using the latest release, or just the download numbers? And isn't it possible, as jnr already pointed out, that your influence had something to do with those download numbers as well? The context for this year's release was the narrative you used to introduce it. I know my own reaction to the introduction –– "Our teams here at Vectorworks, Inc. paid close attention to the wishlist requests and other forum communications this year, and we heard you loud and clear - you want improvements to the quality, speed, and usability of the software . . ." –– was one of great enthusiasm; it "genuinely" had me considering an early jump as well. This statement seemed both definitive and indicative of major changes in Q.A. and approach, "loud and clear" it said! Now the narrative, with hindsight and the release period over, is that you're telling yourselves at N.A., in a post telling us, that "users are less and less inclined to wait" and that this means you are "more obligated than ever to improve the quality of initial releases" with "multiple new efforts underway". What I believe we (users) are actually "inclined" to do is follow our previous experiences until, of course, we're presented with a reason to reconsider, which is most likely what is shown in your numbers. We were given a story of genuine change and, in your voice, it was genuinely embraced as genuine. 😏 But the irony of this rather odd-looking loop of cause, effect and post hoc analysis, Jim, is that it seems to expose that the rhetoric about quality was just that, rhetoric, and now, it seems, a little more rhetoric was necessary to mop up the last lot. I'm curious as to why this may have occurred and wonder if that's not shown on your graph too, because the other thing the data appears to show is a 6 to 7% decline in the number of users who bothered to install the previous version at all. Nevertheless, action = confidence; it would probably help if, instead of nebulous statements like "loud and clear" or "more obligated than ever", we had detail on what the "multiple new efforts" actually are. And it would help if we knew why you now think the Q.A. resources and actions, if any, implied to have taken place for the 2019 release fell short in hindsight.
  15. Yeah Bob, that kind of power looks amazing; GPUs and eGPUs look to be the way we will finally see the rendering bottleneck disappear. I had hoped there was some kind of new OS magic in Rob's results making the eGPU available. I don't see why this couldn't start happening though; Nemetschek recently took its holdings in Maxon to 100%, and Cinema 4D has already started making use of the GPU for rendering. Why not unleash C4D's rendering / visualisation capacity fully "within" Vectorworks, as either an upgrade or as a more thoroughly integrated plugin? This has been a longtime request of users; then what's happening on the GPU front can become part of the Vectorworks workflow in a timely manner. It seems like low-hanging fruit to me, especially when you consider CineRender is the engine underneath Renderworks and that we can already import C4D textures into Vectorworks . . .
  16. Don't know if this will solve the issue, but it's worth checking: under the Vectorworks drop-down menu, choose Preferences and set the 2D conversion resolution to Very High, then retry the export.
  17. This is very interesting news; by those figures, adding an eGPU is like adding another 6 to 8 CPU cores. I wonder if GPU rendering is limited to an eGPU, as a secondary resource? I would love to know more about how this is working from NV, and what's possible.
  18. Just checking, those renderings weren't done with Vectorworks / Renderworks were they? The addition of the eGPU appears to have made a big difference.
  19. Bump. This problem persists; it's still on the agenda for the next 2018 SP, I hope?
  20. Yes, it's highly dysfunctional. I'll add to your list that it can also cause Vectorworks to crash, but there does not seem to be a particular set of circumstances that causes it to do so, just repeated use. After a crash and restart, it will usually work again on the offending object / texture.
  21. Once you perform a Solid Subtraction on a Framing or Structural Member, its data becomes unavailable, even though the member is still intact inside the subtraction. Worksheets should still be able to get at that data, but they can't as far as I can tell. Is there a way?
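     The only workaround I can sketch is walking inside the subtraction by script; FIn3D, NextObj and GetRField are standard calls, but whether a worksheet can be fed from this is exactly the open question:

         import vs

         def member_field(subtraction, record, field):
             # A solid subtraction is a container: FIn3D steps inside it,
             # NextObj walks the contained objects. Return the named record
             # field from the first child that carries it.
             child = vs.FIn3D(subtraction)
             while child is not None:
                 value = vs.GetRField(child, record, field)
                 if value:
                     return value
                 child = vs.NextObj(child)
             return ''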
  22. Thanks Pat, I had it backwards, so to speak, because I've always used a Location field in my records for entering room names into; if this works in the worksheet, then that field will simply become irrelevant.
  23. Hi Pat, yeah, I was dreaming, though not completely. I watched a tutorial some time ago on recording the location of objects within a drawing, which I thought was in the Service Select library, but I cannot find it. There's the location (LOC) criteria, which oddly doesn't show up in Help, and the recently added GetSpaceNameForObj function, which I'll experiment with. But you're right, records are always inert, and scripts, I suspect, would be getting overcomplicated.
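     This is the sort of experiment I mean; SetWSCellFormula and the =DATABASE criteria are standard, but the worksheet / record names are invented and using GETSPACENAMEFOROBJ as a database column formula is precisely what I'll be testing:

         import vs

         ws = vs.GetObject('Symbol Schedule')  # made-up worksheet name
         if ws is not None:
             # Row 2 becomes a database of everything carrying the made-up
             # 'ID Data' record; column B should report the enclosing Space.
             vs.SetWSCellFormula(ws, 2, 0, 2, 0, "=DATABASE((R IN ['ID Data']))")
             vs.SetWSCellFormula(ws, 2, 1, 2, 1, "=GETSPACENAMEFOROBJ")
             vs.RecalculateWS(ws)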
  24. How do you get a record to pick up the location of the symbol it is attached to from a Space object? I'm sure I've seen this, but I cannot find any examples of how it is done; does it require interaction from a worksheet to happen?