Welcome to the Poser - OFFICIAL Forum
Poser - OFFICIAL F.A.Q (Updated: 2019 Jun 20 8:35 am)
Subject: surface imperfection material shader (iikuuk opened this issue on Jun 06, 2005 · 21 posts)
Hi, I was about to make it perfect, but now I see I need some help with a shader node. I was trying to build a surface imperfection node, which somehow shows areas on the object's geometry where it bends (convex) more than 'normal'. The problems I found:
- you cannot use du, dv nodes in P6 (bug report already sent)
- dPdv and dPdu nodes show only the direction in which the geometry tends, but nothing about whether it changes
- dNdu and dNdv nodes contain the information we need, but since these don't change inside one vertex, you get only semi-results, which should be fed into a further shader (i.e. dNdu and dNdv will be the same in one vertex)
I would be interested if anyone can show me a nice way to 'smudge'/blur any result in a P6 shader... (see attached shader node)
From the shader: The way it works is pretty easy: it adds the lengths of the partial differentials of the normal vector, then transfers the values into a 0-1 interval. Last, it checks whether that's over a predefined value. Although I'm a mathematician (kind of), I didn't look too hard for the correct equation, but if you have some advice, I'm definitely curious.
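For anyone who wants to play with the numbers outside Poser, here is a minimal sketch of that math in plain Python. The dNdu/dNdv triples, and the scale and threshold constants, are made-up stand-ins for the actual node inputs (the +50 divisor comes up later in the thread):

```python
import math

def curvature_mask(dNdu, dNdv, scale=50.0, threshold=0.5):
    """Sketch of the node network's math: sum the lengths of the normal's
    partial derivatives, squash into [0, 1), then threshold."""
    mag = lambda v: math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    curvature = mag(dNdu) + mag(dNdv)            # total change of the normal
    squashed = curvature / (curvature + scale)   # always in [0, 1)
    return 1.0 if squashed > threshold else 0.0  # chip the paint where curvature is high

# Flat surface: the normal doesn't change, so no imperfection.
print(curvature_mask((0, 0, 0), (0, 0, 0)))      # 0.0
# Strongly curved spot: long derivative vectors trip the threshold.
print(curvature_mask((40, 30, 0), (0, 50, 60)))  # 1.0
```

This is only the arithmetic; inside Poser the same steps are wired up from Comp, Math and Pow nodes, since shader nodes can't run arbitrary code.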
Whoa! Very nice work!
"Few are agreeable in conversation, because each thinks more of what he intends to say than that of what others are saying, and listens no more when he himself has a chance to speak." - Francois de la Rochefoucauld
Intel Core i7 920, 12GB RAM, 9800GTX 512MB video, 2 x 1TB HDD
Poser 7 SR3:Inches, PoserPro 2010 SR1:Inches, PoserPro 2012: Inches
Does anybody know a good tutorial that explains how to "read" a shader like this? I did the real skin shader tutorial and what is going on makes sense to me. (Okay, not perfect sense, but enough!) But I still am not quite seeing how to convert mathematical ideas into this kind of node language. Or how to go the other way to run into something like this and be able to come up with an idea of what it is doing.
This is exceptional! I wonder if a similar concept could be done for skin (but in reverse, so the more "imperfect", the higher the red content). I need to check out these new P6 nodes you are using. You are right... these things can't be blurred: one pixel in the render can't get material room info about another pixel (not through nodes, anyway). But some ideas... in this case you've got obtuse (is that the right term?) angles whose color you are trying to affect. /If/ they were acute, you could use an AO node to determine whether other polys were in the vicinity (use it in conjunction with the diffuse node). Also, have you tried the Gather node? It has a parameter to gather from a specified angle; if you plug in 360 degrees, it might be a way for you to detect what the nearby polys are doing.
You probably need to know a little about vector calculus for this one, which is something that's rarely learned outside a maths or physics degree.

The first thing is that dNdv and dNdu are 3-dimensional vectors representing the amount the normal vector changes as you travel across the surface of the model in the v and u directions respectively (those are the same u and v you get in a UV map). In order to figure out how strongly curved the surface is at any point, we want to know how long those two vectors are. A 3D vector is usually expressed as a set of 3 values like this: "(x,y,z)" or "(component0, component1, component2)". To find out how long the vector is, we take the square root of the sum of the squares of the three components: Sqrt(comp0^2 + comp1^2 + comp2^2). That's what iikuuk is doing in those two top-right columns of nodes. The component functions extract those three values, the power functions square each of them, then they are added together. The top two nodes over on the left take the square roots (two, since there are two vectors).

At this point we have a measure of the curvature of the surface, but we really want a measure that's always going to be between 0 and 1, rather than one that could conceivably be any positive value, so we have to do a little trick to convert it. What iikuuk does with the next couple of nodes is this: newmeasure = oldmeasure/(oldmeasure+50). This is always smaller than 1 and could be as small as zero, but no smaller. The rest is really just about choosing a suitable cut-off point: how big does your measure need to be before you decide to chip the paint off?

A few general observations:
* This relies on having a good UV mapping. It needs to be a low-distortion mapping to work well.
* This won't work on models where the vertices have been split to create hard edges. You need a model with continuous, unbroken mesh across those sharp edges.
* You can cut down the number of nodes used by adding dNdv and dNdu together (use a color math node rather than a math node) and calculating the magnitude of the result rather than adding the magnitudes of the two separately. It'll give slightly different results but does essentially the same thing and is arguably a more "correct" approach.
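That shortcut is easy to compare numerically. A small Python sketch with invented vectors shows the two measures side by side; by the triangle inequality, the combined measure can never exceed the separate one:

```python
import math

def mag(v):
    """Euclidean length of a 3-vector."""
    return math.sqrt(sum(c * c for c in v))

# Toy stand-ins for the normal's partial derivatives at one shading point.
dNdu, dNdv = (0.3, 0.1, 0.0), (0.1, 0.4, 0.2)

separate = mag(dNdu) + mag(dNdv)                     # original network: two magnitudes, then add
combined = mag([a + b for a, b in zip(dNdu, dNdv)])  # shortcut: add vectors, then one magnitude

print(separate, combined)  # combined is never larger than separate
```

With these particular numbers the two measures are close but not identical; how much they diverge depends on the angle between the two vectors, which is the point iikuuk raises further down the thread.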
View Ajax's Gallery - View Ajax's Freestuff - View Ajax's Store - Send Ajax a message
Ajax, thank you for that. I actually do understand vectors and calculus; I need little refreshers now and then, but in general I get it. So your walkthrough is great. I'm guessing I just need more practice to be able to "read" it the way you did. I haven't ever really figured out UV. When I look at a flattened-out UV map, I'm guessing that one of those is equivalent to X and the other to Y, but since each of them will really be wrapped around in space, "they" gave them different designations to avoid confusion? Again, I really appreciate the guidance. If you know of any further places where I can get into the math a bit, it might really help me understand what is going on.
This is outstanding. Great idea! Now we need someone to write a Python program for converting formulae into shader networks and vice versa. Oh, or maybe a math shader node which accepts a formula. HINT - HINT - SR2 - HINT - HINT :-)
Have you met Antonia?
I did not jump. I made a tiny step and there conclusions were. (Buffy the Vampire Slayer)
Thanks, and welcome :) I hoped someone with a better command of English would make a tutorial from the pure nodes I made ;) And yes, Ajax was right about the restrictions on the geometry, but as always there is no general solution for a shader problem like this; only a geometry built up from fractals would be perfect (which is mostly true for the real world (...)). About that last suggested shortcut, I'm not quite sure the difference will be just a small amount. I have to look at it more, since I hadn't thought about a shortcut like that, but as far as I can tell it won't give a better result, though it will be quicker by far. Since it's a bit more complex than just saying something vague, I'll check and let you know the result. (And yes, that requires more of that nasty vector calculus.)
Wow! Very interesting indeed! Thank you iikuuk for posting this, and thank you Ajax for the illuminating commentary. I re-created this network just to see if I could get my head around what it is actually doing (alas, my head doesn't seem to stretch that far). That was before Ajax posted the explanation which helps a great deal. There's something truly exciting about feeling way out of one's depth but at the same time hungry to learn. I felt that way about 3D modelling, then about mapping and rendering, then figure creation, and now about this stuff. And the learning is made so much easier by people like you who share your mastery so generously here on the internet - "the university of the third age". Thank you Ob
LOL, I don't consider myself a guru :) but thanks. (If that was aimed at Ajax, I agree :) OK, back from the math papers: if you do add dNdv and dNdu AND do the calculation afterward, you will not get the right result. The reason is pretty easy: if you add those vectors, it's as if you translate one to the tip of the other, hence the length will be smaller than (or equal to) the original |dNdu| + |dNdv|. Furthermore, if dNdu and dNdv point in absolutely opposite directions (and that's not uncommon), the proposed addition will show a small value, which will not reflect the change of the surface. Only if you could do a "normals forward" on that node, or take the abs of each of the vector's axes, would you get 'similar' values, but even then there would be a difference (and it would lose its extra speed).
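The cancellation described here is easy to demonstrate in a few lines of Python (vectors invented for illustration): with exactly opposed derivatives, the summed magnitudes still report curvature, while the add-then-measure shortcut reports none at all.

```python
import math

def mag(v):
    """Euclidean length of a 3-vector."""
    return math.sqrt(sum(c * c for c in v))

dNdu = (0.5, 0.0, 0.0)
dNdv = (-0.5, 0.0, 0.0)  # points exactly opposite to dNdu

print(mag(dNdu) + mag(dNdv))                     # 1.0 - the surface is clearly changing
print(mag([a + b for a, b in zip(dNdu, dNdv)]))  # 0.0 - the shortcut sees nothing
```

This is the worst case of the triangle inequality: |a + b| can be as small as | |a| - |b| |, so two strong but opposed changes in the normal cancel out completely when added first.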
"You are right....these things can't be blurred - one pixel on the render can't get material room info on another pixel (not through nodes anyway)." That's a feature (!) of the REYES rendering algorithm (the one that FireFly, PRMan or 3Delight are based on). The benefit is that since no shading point depends on other shading points, they can be calculated in parallel on vector computers or SIMD instruction sets and that the renderer can discard things it already rendered from memory (because it won't need them again).