
Altivec

Member

Posts posted by Altivec


  1. When you mentioned during the teasers that not many new features would be released and that time was focused on fixing and implementing 2.0 versions of existing features, I had high hopes.   I'm not sure how basic texturing, where free software trounces Vectorworks' 25-year-old-plus implementation, wouldn't fall under "We wanted to focus more on existing systems that needed improvement this year".

     

    Year after year, I just can't understand how this isn't a priority.   If Vectorworks is supposed to produce presentation-worthy drawings, how can texture mapping, which ultimately covers every 3D object, not be deemed a priority?  After waiting decades, I have ultimately given up hope of ever seeing this.

    • Like 2

  2. The biggest thing I want from the Resource Browser is a toggle for "preserve folder hierarchy" that sticks.   It would make keeping symbols organized a billion times easier.   Jim, please make my day and say they fixed this.   Since this option is already built into the import dialog, it should be trivial to make the selection stick.

     

    I am also glad search is being improved.  I used it a couple of times and stopped; the lag was just too much for me to bear, and I have a 12-core Mac Pro with a fast SSD.    If that lag gets cut down big time, this is a feature I would love to use.


  3. 42 minutes ago, Sambo said:

    I upgraded to a 12-Core and hooooooly smokes it's fast (the FirePro D700 helps!). I do photorealistic renders of big shows (like Adobe MAX), and I have yet to spend more than 20 minutes on an intricate FinalRW that includes full event lighting, hundreds of textures, and thousands and thousands of polygons. I would recommend this rig. Cheers.


    I also picked up a 12-core Mac after they reduced the price. I've got to say, I am more impressed with it than I thought I would be.

     

    It's really the combination of the total package. The CPUs are fast at rendering. Graphics are buttery smooth, even with complex OpenGL models. It's dead quiet, if that matters to you. And the SSD in there is killer fast and makes a huge difference.

     

    With the reduced price, it's not a bad value.

    • Like 2

  4. Yeah, an expandable case like the old cheese-grater design is more probable.  I just find the use of the word "modular" a little more mysterious. When I think modules, I think of little boxes that snap together like Lego.  You choose the processor module you want and snap it to separate GPU and storage modules.  If you need PCI expansion, you buy that module and snap it on; if you need more CPUs, you just buy another CPU module and snap that on, etc.   Sounds like fantasy, but that's the first thing I thought of when I heard "modular".


  5. It took them 4 years to figure this out... This exact scenario should have played out 1 to 2 years ago.   They saved me just in time; I was literally days away from switching our entire company to PC.    The big thing in this announcement is the commitment and game plan for us pro users.  I was going to switch because their communication and lack of action made it appear that they were out of the pro market.   Knowing there is a future on the Mac, I think I'll pick up a lower-priced 12-core to get me by.

     

    I'm looking forward to seeing what this "modular" design will be like.  It would be great if you could just plug in more cores if that's what you need (okay, I think I'm dreaming now).


  6. I've seen the message "The resource list is empty" a few times now.   For some reason, my Heliodons don't have a 2D graphic; they just show a locus.  When I go into the settings for the Heliodon, there is a button labeled "2D Graphic" which pops open a mini resource browser with the words "The resource list is empty".

     

    I have checked my application folder, and the library folder does contain a file called "Heliodon" with 2D graphics.  I've tried importing these images into my file, and although they do import, they still don't show up in the Heliodon settings.   My question is: how do I get the Heliodon to see the 2D graphics?


  7. I've also had some things go invisible on me today with SP2, but I haven't upgraded to Sierra yet, so this may be more of an SP2 thing than a Sierra thing.   I too tried changing the compatibility settings under preferences, and that had no effect.

     

    A little different for me, though.  I could see the objects in wireframe but not in OpenGL.  When I used Select All, the objects would highlight in orange, so I knew they were there, and if I did a fast render, everything showed up.  Just invisible in OpenGL.  It only happened once and fixed itself on restart, so I didn't worry about it too much.  From the fix list, it does look like they worked on a few things with the compatibility settings.  I guess they have a little more work to do...


  8. 21 hours ago, zoomer said:

    Oh, 2017 ?

    It was working well in 2016 for me.

    Should I check that ?

     

    I don't seem to be having problems with glows that use just a simple colour (like your example above), because upping the glow percentage above 100% isn't an issue in that case.  My problem is when I'm using an image as the source of the glow and the image itself is an integral part of the scene (like a TV).  If the glow is set too low, you can't see any effect; if the glow is set too high, the image washes out.

     

    I've been trying out various things that Jim suggested, and although I'm getting close, I'm still not 100% happy.  I'll figure it out.   PS: I don't think the Backlit shader recognizes glows as light sources; at least I couldn't make it happen.

     


  9. Thanks Jim

     

    I tried your suggestion by just replacing Glow with Backlit, but it resulted in a very washed-out image with no glow.    I am going to experiment with it a bit more, but I'm a bit confused by your explanation.  When you say it takes its light from a light object, does that mean I'm supposed to have a light source behind the screen as per your video example (i.e. place an area light directly behind the screen)? Or does Backlit work sort of like Glow, where the texture itself emits light?


  10. Ever since 2017, the glows on my TV-screen images don't seem to project as much glow as they used to.  I am talking about the subtle lighting you get on the walls, floor, and ceiling that gives the appearance that the TV is giving off light.  Pre-2017, I would simply import an image into the color shader, set my reflectivity to Glow with the "emit light" checkbox ticked, and it worked nicely.    Now I can barely see the effect.   I've tried increasing the glow, but as soon as you go past 100, the image starts getting washed out.   Has anyone else noticed this?   How are you doing your screen textures?

     


  11. Wow... that's quite the list of fixes.  I guess they really did hire a bunch more people.

     

    Anyway... so far SP2 has been a major improvement for me.   I had major lag when editing textures and before doing any renderings; this happened for me in VW2016 as well.  All the lag is gone; it's almost like I got a brand-new computer.   Very happy so far.


  12. Thanks Matt.  I thought the same thing over a year and a half ago.  Then someone said wait until Oct 2015.  Then I was told to wait until Jan 2016, then March 2016, then for sure at WWDC, then the Oct Mac event, then for sure no later than the end of November.  Now we're talking March 2017.   By then it will be close to 4 years since the current Mac Pro was last updated.  Intel has updated the Xeon E5 twice in that span, maybe even three times by then.  Graphics cards have been updated several times.  There is absolutely no excuse for Apple not to update the Mac Pro other than that they are not interested.

     

    As much as my heart wants to hang on, my brain is telling me "boy, are you dumb".   It's really time for me to cut the cord.

     

    Anyone who is not pushing the limits of Vectorworks will be fine on a Mac, but if you are serious about doing renderings or heavy-duty modeling, the future does not look bright in Mac land.  I'm sure there will be a new Mac Pro out one day, but I think that will be the last one.  I've seen so many pros leave due to Apple's neglect that I can only imagine sales of these new Mac Pros will be minimal at best.   The less they sell, the less chance of another one down the road.   iMacs should be around for a while, so if that is working for you now, there is not too much to worry about.


  13. I just wanted to update what I said in this thread, in case anyone is making purchasing decisions based on it.   The rumour site that mentioned Mac Pro updates no later than the end of November has now recanted.  They've changed it to March, which tells me they have no idea what's going on.

     

    http://www.macworld.co.uk/news/mac/new-mac-pro-release-date-rumours-uk-mac-pro-2016-tech-specs-new-features-march-3536364/

     

    I've been following this perpetually moving upgrade target for over a year and a half now, and I think I'm at my limit with Apple.  Although I prefer their OS by far, I need a new machine, and I can't fathom paying 10K for a 3-year-old 12-core Mac Pro with an outdated GPU when for the same money I can get a 36-core HP or Dell with a GTX 1080 graphics card.   The difference in specs is just too great to ignore.  I guess I'll see if there are any Black Friday deals on workstations.  After 30+ years on the Mac, I may be a new PC user by next week.  We'll see how it goes.


  14. Glad to hear it's not just me.  I bought an Enterprise based on all the rave reviews I've heard about this thing and because of Jim's comment that they are working on building full Enterprise support.  Although it seems to work in the demo and other software that I don't care about, I have NEVER been able to get it to work properly in VW.    I just use it as a paperweight on my desk as I wait for that day.


  15. So from what I am gathering here texture-wise, the only important part of making an image efficient is its actual pixel dimensions, and that's it.   Compressing an image only reduces quality and has no actual benefit.  To give an extreme example, a 10 MB TIFF that is 128 x 128 pixels would be more efficient than a 100 KB JPG that is 512 x 512 pixels.  Man, I've been doing my textures wrong for years.    This is awesome stuff and exactly what I want to learn here.  Thanks to everyone who is participating.  Please keep it coming.
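    To put rough numbers on that extreme example: once a texture is decoded for rendering, its memory footprint depends only on its pixel dimensions, not on its size on disk. A back-of-the-envelope sketch (assuming the renderer decodes to uncompressed RGBA at 4 bytes per pixel, which is typical for renderers generally but not confirmed for Vectorworks):

```python
# Approximate RAM/VRAM cost of a decoded texture. Any image file
# (TIFF, JPG, PNG) decodes to roughly width * height * bytes_per_pixel,
# regardless of how well it was compressed on disk.
def decoded_size_bytes(width, height, bytes_per_pixel=4):
    """Footprint of a decoded RGBA image, in bytes."""
    return width * height * bytes_per_pixel

# The extreme example above: the 10 MB TIFF at 128 x 128 decodes far
# smaller than the 100 KB JPG at 512 x 512.
print(decoded_size_bytes(128, 128))  # 65536 (~64 KB)
print(decoded_size_bytes(512, 512))  # 1048576 (~1 MB), 16x larger
```

    By this measure, halving an image's pixel dimensions saves far more at render time than any amount of file-level compression.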


  16. 2 hours ago, zoomer said:

    VW stores all image textures in the VW file.

    PNG is quite an OK format, as it has lossless compression. Not the most effective, though.

    Your image file size may be larger than your original JPEG. But at least the images won't get worse if someone bombs in large BMP or TIFF files.

    If you get problems with RAM or file size, you can set VW to use JPEG format for your textures.

    Normally, just don't care about VW and image formats.

     

    Sorry Zoomer, I think you are misunderstanding my question.  I am not asking if PNG is a good format, nor am I complaining that VW converts images to PNG.   My questions are solely about whether there is any benefit to pre-processing images before bringing them into VW, and if so, what the ideal approach is.   Should I be compressing my images as JPG?  Should I be optimizing a minimal palette in PNG?  Should I be importing an uncompressed TIFF?   Or will VW treat all formats the same, so I end up with the same efficiency no matter what pre-processing I did?

     

     

    2 hours ago, zoomer said:

     

    Another thing is that the renderer has to decompress all images again, which needs some processing time. So most pro texture libraries use uncompressed formats like TIFF or old TGA.

    I use JPEG for my libraries, though, as texture quality isn't everything for me and I do not use high-res textures anyway.

     

    Now you've lost me.   So you are saying that if I want high-quality textures, I need to import them as TIFF, which VW will then convert to compressed PNG, thus losing the high-quality format.  But then when rendering, VW will convert the PNG back to TIFF without any loss of quality?  I don't believe that's possible.  Once the file is compressed, you can't go back without losing some quality.

     

    I can understand that VW will create a better PNG if I import a TIFF rather than a JPG, and that is where my question comes in.  If there is no improvement in texture efficiency whether I import a compressed JPG or a pristine uncompressed TIFF, then I've been doing it wrong.   Instead of using compressed JPGs, which give me no benefit, I should be using TIFFs and getting better quality with no efficiency penalty.
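    One detail worth noting on the TIFF-to-PNG worry (an aside from me, not from the thread): that conversion step itself should not cost any quality, because PNG's compression is DEFLATE (the zlib algorithm), which is fully reversible; quality is only ever lost earlier, e.g. by JPEG encoding. A quick stdlib sketch of the round trip:

```python
import zlib

# PNG stores pixels with DEFLATE (zlib) compression, which is
# lossless: decompressing returns the exact bytes that were encoded.
pixels = bytes(range(256)) * 64          # stand-in for raw image data
packed = zlib.compress(pixels, level=9)  # "PNG-style" lossless pack
assert zlib.decompress(packed) == pixels # exact round trip, no loss
print(len(pixels), "->", len(packed), "bytes, recovered exactly")
```

    So TIFF pixels converted to PNG and decoded again come back bit-for-bit identical; the open question is only whether VW resamples or re-encodes along the way.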

     

     

    2 hours ago, zoomer said:

     

     

    For the VW file efficiency:

    I am not sure if VW's mesh/poly render behavior is really as bad as Jim said; I did not notice any disadvantages in rendering or OpenGL so far. But handling these for modeling is indeed a bit strange.

    Generic Solids are the best if you can renounce parametric history. But it is quite OK to use all the parametric arch objects, extrudes, solid additions and subtractions, and whatever else VW offers for common content creation. Just avoid many unnecessary nested solid operations that you really don't need.

    Plus, make use of symbols for any object that will appear in several copies. This will of course help file size, as well as load times for OpenGL or RW rendering.

     

    I don't think Jim was implying meshes are devastatingly bad.  If your models are not that complex, you probably won't notice a difference, and in that case you should keep editability rather than convert objects to generic solids.   But this topic is about efficiency.  My models are huge, 2 GB+, and I have thousands and thousands of objects with several hundred textures.   When you add up all the tiny bits of efficiency in each object and each texture, it makes a big difference in the end.


  17. Some excellent information here.  Thanks Jim...  But with answers come more questions, lol.

     

    20 hours ago, JimW said:

    Should we be saving our image maps in PNG before importing them? If we do import our image maps as PNG, do the compression settings we set stay intact, or does Vectorworks change them anyway?

    They are internally converted to PNG anyway, so you do not need to. Resolution should be maintained after conversion within reason; however, if you have multiple-gigabyte, humongous-res images, I think they may be capped, but it's really, really high if I recall correctly.

     

    Yes, I realize it converts our images to PNG, but what I'm after is: what exactly is VW doing to our images?  For example, let's say I have a 1 MB JPG image.  What I usually do is bring it into an image editor and compress it down to, say, a 100 KB JPG (dimensions are the same, just highly compressed).  Since VW is converting it to PNG, is there any benefit to compressing the file down to 100 KB, or will the resulting PNG have the same file size and efficiency whether I imported the original 1 MB file or the 100 KB file?  In other words, did I just waste my time and make my image look worse (artifacts) by compressing it with absolutely no benefit, or did reducing the file size by 10x improve this texture's efficiency?
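    A side observation on that 1 MB vs 100 KB experiment (my own reasoning, not something confirmed for VW's converter): heavy JPEG compression adds blocky artifact noise, and noise is exactly what a lossless compressor like PNG's DEFLATE handles worst, so the pre-crushed image can actually re-encode to a larger PNG than the clean original. The effect is easy to show with zlib on synthetic data:

```python
import random
import zlib

random.seed(0)
SIZE = 65536

# Smooth, predictable data (like a clean gradient) vs noisy data
# (a stand-in for heavy JPEG artifacts). DEFLATE shrinks the smooth
# version dramatically and the noisy version barely at all.
smooth = bytes(i % 256 for i in range(SIZE))
noisy = bytes(random.randrange(256) for _ in range(SIZE))
print(len(zlib.compress(smooth)))  # small: repeats compress well
print(len(zlib.compress(noisy)))   # close to SIZE: noise barely shrinks
```

    If that intuition holds for VW's converter, pre-crushing JPEGs would hurt quality and stored size at once.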

     

    Same goes for PNG.   In an image editor, I have the ability to adjust many settings, including limited custom palettes, 8-bit, 24-bit, and resampling options.  If I spend the time using these settings to get the file size down and import the image as a PNG, does VW keep my carefully crafted PNG settings, or does it also convert my PNG to whatever it likes?

     

    20 hours ago, JimW said:

     

    Are procedural objects more efficient than meshes? 

    Yes. Our handling of 3D polys (which in Vectorworks is how we handle meshes anyway) is extremely inefficient and I normally avoid them personally except for when I need very specific geometry that I can import from 3Dwarehouse or somewhere similar.

     

    I agree completely.  I hate 3D polys and avoid them if at all possible; I was just wondering if I should be.  It's great to hear that they are inefficient, so I can continue avoiding them.

     

    20 hours ago, JimW said:

     

    Does converting these objects to “Generic Solids” make things more efficient?

    Yes, IF the object you converted had complex or many levels of history, or an extremely complex single level of history. For instance, if you added a solid to a solid, added that to another solid, added that to another solid, etc., for a number of levels, that object would be much slower in all geometric calculations and larger in file size because of all the history contained within it. Generic solids conversion removes that history. If the object only had one level of history and it was only two objects interacting in the solid subtraction, for instance, you wouldn't notice much difference. If you only did one solid subtract but you subtracted 500 objects from the main one, you'd see the difference with just one level of history.

     

    That's great to know.   This helps out a lot.  So when making my symbols, I will have to make two of them: one that is the original editable version and one that's a generic solid to use in my models.   This is the exact stuff I want to learn in this thread.
     

    20 hours ago, JimW said:

     

    Are Sub-D objects efficient? 

    They are effectively containers with meshes inside, so not very. I normally reserve using SubDs for organic or complex geometry that can't be made easily with the older solids tools.

     

    Wasn't expecting that answer.   Wow.   So Sub-Ds are actually meshes and not procedural; I was thinking the opposite because they look so smooth when rendered.  Yeah, if these are meshes and they are that fine, I can see them bogging things down.   Their appearance, at least in wireframe mode, changes when converting to a generic solid.  Do you know if this lightens the load a bit, or is it still the same mesh regardless?


  18. I think what would be useful is some kind of test suite.  It would work really well for rendering speeds at least.   I'm not sure how the GPU could be tested unless the VW team is willing to add a method of displaying frame rates.   A test suite would also allow people to see how things work cross-platform, including the bazillion PC configs people get.   I know I would be interested in comparing.
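    The CPU/render side of such a suite could be as simple as a shared benchmark file plus a small timing harness along these lines (a sketch only; the workload below is a placeholder, since I'm not assuming any particular scripting hook into VW):

```python
import statistics
import time

def benchmark(task, repeats=5):
    """Run a callable several times and return the median seconds.
    The median resists one-off hiccups (thermal throttling,
    background apps) better than a single run or the mean."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()                     # in a real suite: trigger a render
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Placeholder workload standing in for "render the benchmark scene".
elapsed = benchmark(lambda: sum(x * x for x in range(200_000)))
print(f"median: {elapsed:.4f}s")
```

    Frame-rate (GPU) numbers would indeed need a hook from the VW team, as noted above.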


  19. Since I see that VW coders and software designers sometimes lurk in here to help answer questions, I wouldn't mind some advanced discussion of how I should be doing things to make my files and work more efficient.  I like to push the limits of both software and hardware to achieve the best possible results, so I spend a lot of time doing things that I think make the software more responsive and renderings quicker.   The thing is, I have no concrete proof that I should be doing these things or that they make any difference at all.

     

    I always thought the efficiency of textures was very important, so any time I use an image map, I go to great lengths to keep it as dimensionally small as possible, and I painstakingly adjust compression levels and select the best image format to make sure the file size is as small as possible.   My rationale is that if you have hundreds of textures going into a render, it would be a lot easier on the software to load and buffer these smaller files.

     

    About a year ago, I extracted an image out of a Vectorworks texture and noticed it was a PNG file, but I knew I had imported it as a JPG.  I brought this up with Jim and eventually learned that Vectorworks converts all of our image maps to PNG regardless of the format we put in. Ever since, I have felt that I've been wasting my time compressing these files because Vectorworks will change them anyway.   These are the types of questions I would like answered on a more factual level.

     

    Here are a bunch of questions along that line:

    • Is it more efficient to use procedural textures or image-map-based textures?
    • Should we be saving our image maps as PNG before importing them?
    • If we do import our image maps as PNG, do the compression settings we set stay intact, or does Vectorworks change them anyway?
    • Does the file size of images have any bearing on speed or performance?

     

    I also have the same types of questions when it comes to modelling.  My belief is that native procedural objects are more efficient than meshes and 3D polygons, but are they really?

    • Are procedural objects more efficient than meshes?  What I mean by procedural is objects generated from VW tools with no conversion (an extruded polygon with filleted edges, or multiple levels of solid additions or subtractions).
    • Does converting these objects to “Generic Solids” make things more efficient?
    • Are Sub-D objects efficient?  The heavy-lined appearance of them gives me the impression that they are not, so again, I convert them to generic solids, which gives the visual appearance of a less intense object.  But maybe I'm wasting my time and removing editability for absolutely no reason at all.

     

    I'm hoping someone who knows the inner workings of Vectorworks could shed some light on some of these things so that we can improve the way we and the software work.

    • Like 1

 


 

© 2018 Vectorworks, Inc. All Rights Reserved. Vectorworks, Inc. is part of the Nemetschek Group.
