More tinkering

Finally tried out the aweSurface shader I bought weeks ago. Told myself I’d stick with something simple first, but for some unfathomable reason I instead tried it on glass and water. Gah! Those surfaces are probably among the hardest to work with too.

I think aweSurface has a lot of potential but I can tell it’s going to take me a while to get the hang of it.

I used DLD Miniature Worlds, as it’s Iray-only. For the Iray and awe renders I used the same HDR environment. For the 3DL Default and UberSurface renders I used an environment sphere for reflections.

iray, 30 minutes

aweSurface, 15 minutes, final raytrace

3DL default, 4 minutes, progressive rendering

Ubersurface, 4 minutes, progressive rendering

I changed the water color in the aweSurface one but it didn’t seem to work. Thinking about it now, that’s probably due to reflection settings. Also forgot to lower the bump settings on the wood in the Ubersurface render.

Interesting bunch of results. The glass edges are sharper in Iray and aweSurface. No refraction in the last two; I can never seem to get a result that’s not overpowering, especially if there’s something in the water like little fishies.

So…back to the beginning I go and this time I’m sticking with statuary until I get a better idea on how this stuff works. 😛

8 thoughts on “More tinkering”

  1. Unlike most DAZ store products, AweSurface comes with two detailed user guides in PDF (installed by default to the “ReadMes” folder of your content library).

    Page 7 onwards of the “aweSurface User guide” gives relevant settings and examples for absorption in refractive media; the channel you need is transmission.

    It’s actually pretty simple to use for “solid” refractive stuff like seawater or glass figurines.

    But liquid-in-glass (the so-called “nested dielectrics”) are a tricky thing to simulate. Here’s a nice article explaining why:

    And those “sharper edges” you see in Iray and aweSurface are actually proper Fresnel reflection attenuation: reflection in dielectrics is 100% at glancing angles and about 4% when viewed head-on (the middle of the sphere). So there is this “bright outline”.
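That falloff is easy to sanity-check with Schlick’s approximation of Fresnel reflectance (a sketch in Python, not aweSurface’s actual code; the 1.5 IoR is a typical value for glass):

```python
# Schlick's approximation of Fresnel reflectance for a dielectric.
def schlick_reflectance(cos_theta, ior=1.5):
    """Fraction of light reflected, given the cosine of the angle
    between the view direction and the surface normal."""
    r0 = ((ior - 1) / (ior + 1)) ** 2   # reflectance at normal incidence
    return r0 + (1 - r0) * (1 - cos_theta) ** 5

print(schlick_reflectance(1.0))  # head-on (middle of the sphere): ~0.04 (4%)
print(schlick_reflectance(0.0))  # glancing angle (the edges): 1.0 (100%)
```

Which is exactly the bright outline: the sphere’s silhouette is seen at grazing angles, so it reflects far more than its centre.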

    • I have both readmes and read through them, right after I bought the product and while I was tinkering around with these props. I even dutifully changed the IOR to 1.33 for my water! 😀

      It’s good info, I just struggle to understand these kinds of details. Some folks “get it” right away, but it’s a lot to take in for me and can be a bit overwhelming too. And if there’s math involved, I’m probably doomed.

      Still, I’ll keep trying!

      I didn’t do anything with Fresnel in the UberSurface attempt. I have the UberSurface readme file to refer to if I want to try to improve that one. The plain 3DL one is as good as it’ll ever get, I suppose.

      I’ll re-read the bit on transmission for aweSurface. I wanted a more turquoise water color — tried base color, tried diffuse color, and nothing changed. It’s frustrating to keep missing basic stuff even when I read the darn readme files. 😦

      Just looked at the article you linked. It broadly makes sense (as much as it ever will to me). I like the idea of scaling the liquid up a bit, even if it’s just to see what it does. I always thought those gaps between the liquid prop and the glass prop looked a little weird.

      • Gaps between glass and liquid are more than just a little odd =D

        It’s not really math here, just physics. Transparent materials have no diffuse colour because whatever colour they appear to have is not due to light scattering off their surface but due to internal absorption of certain wavelengths in the refracted light (what remains unabsorbed is transmission). It all makes sense once you realise that if there were any diffuse scattering off the surface, these materials wouldn’t really be transparent! They’d look like they were coated with something… think a thick layer of dust on glass.

        The free UberSurface can do some reflective glass/water fairly well – though its Fresnel doesn’t take IoR into account, so its settings will need to be eyeballed. But speculars will not be attenuated (only UberSurface2 from the store does it), so with some lighting it won’t look right. What Ubers can’t do is proper absorption (they just multiply a constant colour into refraction, so the saturation of this colour won’t depend on thickness) or “murkiness” (refraction roughness).
        The latter is pretty weird, come to think of it, because Ubers do have reflection roughness, and it’s the same code as refraction – the vectors just point different ways.
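The thickness-dependent saturation the Ubers can’t do is the Beer–Lambert law: transmission falls off exponentially with the distance light travels through the medium. A minimal sketch (the absorption coefficient here is made up for illustration):

```python
import math

# Beer-Lambert absorption: a constant-colour multiply (what Ubers do)
# ignores thickness, whereas real absorption depends on it.
def transmission(absorption_per_cm, thickness_cm):
    """Fraction of light surviving a given path length through a medium."""
    return math.exp(-absorption_per_cm * thickness_cm)

# Thin water stays nearly clear; thick water saturates toward its colour.
print(transmission(0.3, 1))    # 1 cm path:  ~0.74
print(transmission(0.3, 10))   # 10 cm path: ~0.05
```

Doing this per wavelength (or per RGB channel) is what makes deep water look more strongly coloured than shallow water.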

        • LOL! For me, there’s not much practical difference between math and physics. I figure this mathematical abyss in my head is some sort of defect in my brain wiring, but most of the time I don’t have to do anything more complicated than keep track of the money in my bank account or occasionally do a little addition/subtraction to figure out how old some relative is on their birthday.

          The water prop I used from Miniature Worlds looks like it’s colored, though I admit I didn’t look too closely at the Iray settings.

          And I do get the point of what you’re saying, it’s just that I tend to look at render projects/images from more of a — theatrical? — standpoint. Not sure if that’s the right word, but something more artificially created to get across a mood or atmosphere or effect.

          You approach rendering from a more scientific standpoint (which, btw, I find fascinating even if I don’t get the technical stuff) whereas I look at it mostly from a photo manipulation standpoint. Renders are like stock photos, and if I can’t get what I want from a DS render engine, I’ll just figure out how to do it in postwork.

          That said, I like it when I can get things to look as close as possible in the render to begin with, so I know I need to learn more about surfaces beyond shiny/not shiny or whatever.

          I’m going to tinker with the morphing classical bust set by Predatron, which I bought a little while ago, and follow along with the readme PDFs for various settings. For instance, I want to make a statue bust look like wax. That should be fun, and it will be good practice. There are some settings for things like potatoes or milk that might make a good starting point for a waxy look.

  2. You don’t really need “advanced” maths to understand the principles behind classical physics, though. It’s all basically built on conservation laws. Addition/subtraction, as you say. And then for optics you also need to be able to mentally draw straight lines, again, nothing complex about that.

    Speaking of stock photos – or just about any good-quality photos – using them as reference makes setting up materials in 3D much more productive. Even if you do need to spend some time analysing the lighting in the photo (so that you could either match it in your 3D scene or know which effects to disregard), having a legit reference to rely on rather than memory/imagination makes all the difference. I was reading a book for graphic designers not long ago, and it described all those ways that the human vision system (the eye and the brain) can be fooled. In other words, we are not hi-fidelity cameras, and we often think something looks a certain way when in fact it doesn’t!

    • I dunno, this looks suspiciously like ((cue scary music in my head)) MATH:

      For example, a 300 nm thin film ( 250 nm + 50 nm ) would be 33% ( 50 / ( 400 – 250 ) ).
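If I’m reading that right, it’s just linear interpolation – mapping a thickness within a 250–400 nm range onto a percentage. A sketch of my reading (not the readme’s actual formula):

```python
# Map a thin-film thickness to a slider percentage, assuming the
# readme's example maps 250-400 nm onto 0-100%. (My interpretation,
# not the product's actual code.)
def thickness_to_percent(nm, lo=250, hi=400):
    return 100 * (nm - lo) / (hi - lo)

print(thickness_to_percent(300))  # ~33.3
```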

      And there is a lot of stuff like this in the readme PDFs:

      Physically based BRDF (Oren Nayar for diffuse, Cook Torrance, Ashikhmin Shirley and GGX for specular).

      Out of that, I understood the words “diffuse” and “specular,” ha! Seriously, though, I substitute “industry standards” whenever I read such phrases. I figure the references aren’t really for people like me but for those folks who love digging into, and playing with, the technical details behind surface materials.

      I use references a lot, though not only stock photos. I have a folder called “inspirations” where I save everything from striking faces to digital art.

      These inspirations usually require a lot of kit-bashing of various props and clothes. After I select an inspirational image, I drop it in a separate folder, then start going through all the content promo images that I’ve saved to find the best matching resources, which I then copy and move to that new WIP folder. This can take a long time; I have a lot of stuff and invariably I forget where I installed something, even though I’ve made efforts to rigorously organize my content.

      • Oh come on, this is just multiplication and division, it’s not that far from addition and subtraction =D

        If you look closer at the words that start with capital letters, you’ll see they are actually names =) These are the names of the researchers who developed these models (except GGX – the authors decided to call their model that because of the parameters used in their equations, IIRC).

        The models all look a little different, and so they have names – so that, when asked, you could say which model you used to make this or that material look exactly the way it did. Oh, and the generic “diffuse” from old DS shaders also has a researcher-related name: Lambertian.

        You don’t need to know specifically how these models are computed, you can just render out something like that Predatron bust you mentioned, using each of the models in sequence, note the difference in render times (not that big, but just in case) and how they differ visually. Maybe you will see you have a preference for one of them, or maybe not. I know I do =D

        Ah I see, sounds interesting. I’m wary of looking too much at contemporary copyrighted artwork like artistic photos or digital stuff, though – all that “she stole the pose for her sketch from my painting” and similar drama I saw on deviantArt and other places makes me feel uneasy. What if something impresses me too much, but I forget about it and years later make an image that will look similar and will make people think I “stole someone’s idea” or something? Naaah. I’d rather “steal” from someone like da Vinci or Caravaggio.

  3. I did look up a few of those names and encountered much scary looking math stuff. I am much better off sticking to rendering. Eeek.

    I did not know that about “Lambertian,” but it would explain all those times I encountered the word “lambert” in a materials file.

    I admit I blatantly copied a bunch of poses recently from retro science fiction covers, ha!
