Welcome to the Poser - OFFICIAL Forum

Forum Moderators:  Digitell, CHMedia    Forum Coordinators:  RedPhantom

Poser - OFFICIAL F.A.Q (Updated: 2020 May 06 10:25 am)


 Subject: Underwater submarine

Helgard opened this issue on Jun 26, 2010 · 183 posts


  bagginsbill    ( ) ( posted at 12:53AM Fri, 09 July 2010 · edited on 12:53AM Fri, 09 July 2010 · @3669855


The DC feature uses two numbers to define the attenuation.

The first is the DepthCue_StartDist(ance). Until the distance is bigger than this value, the attenuation is 1.

The second is the DepthCue_EndDist(ance). This is the distance at which the attenuation is 0.

For any distance between those two, the attenuation is a linear decrease from 1 to 0.

If we denote the start distance with the letter a, and the end distance with the letter b, then the function that is implemented inside the Atmosphere node for DC is shown in the attached graph.

The linear decreasing region is implemented by 1 - (x - a) / (b - a). Of course that is a straight line forever, so the effective value is Clamped. Clamped just means the value is constrained to be at least 0 and no more than 1.

So the full equation is Clamp(1 - (x - a) / (b - a)), but I'm not going to keep writing the Clamp part. Just keep in mind that attenuation is always in the range 0 to 1.
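A sketch of that clamped function (plain Python just for illustration; Poser implements this inside the Atmosphere node, not as a script):

```python
def depth_cue_attenuation(x, a, b):
    """Clamp(1 - (x - a) / (b - a)): 1 until the start distance a,
    linear falloff between a and b, 0 past the end distance b."""
    t = 1 - (x - a) / (b - a)
    return max(0.0, min(1.0, t))   # Clamp to the range [0, 1]

print(depth_cue_attenuation(1, 2, 6))  # 1.0  (before the start distance)
print(depth_cue_attenuation(4, 2, 6))  # 0.5  (halfway between a and b)
print(depth_cue_attenuation(8, 2, 6))  # 0.0  (past the end distance)
```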

 


Renderosity forum reply notifications are wonky. If I read a follow-up in a thread, but I don't myself reply, then notifications no longer happen AT ALL on that thread. So if I seem to be ignoring a question, that's why. (Updated September 23, 2019)

  bagginsbill    ( ) ( posted at 12:58AM Fri, 09 July 2010  · @3669856


Now it's totally silly to have DepthCue_StartDist be anything but 0, at least for underwater attenuation. Unless there is an air bubble around the camera, attenuation begins immediately.

So we're going to set a = 0, and only deal with the end distance, b.

This simplifies our equation a bit. With a = 0, the equation is just:

1 - x/b

In the attached graph, b = 6.

I'd like to introduce another idea here that will be very useful later. This is the notion of "half distance". This is the distance at which the attenuation function is precisely 1/2. The half distance is exactly b/2. In this case, that would be a distance of 3.
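A quick numeric check of the simplified function and its half distance (again illustrative Python, not Poser's internal code):

```python
def linear_attenuation(x, b):
    """Depth cue with start distance a = 0: attenuation is 1 - x/b, clamped to [0, 1]."""
    return max(0.0, min(1.0, 1 - x / b))

b = 6
print(linear_attenuation(0, b))  # 1.0 at the camera
print(linear_attenuation(3, b))  # 0.5 at the half distance b/2 = 3
print(linear_attenuation(6, b))  # 0.0 at the end distance
```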



  bagginsbill    ( ) ( posted at 1:11AM Fri, 09 July 2010 · edited on 1:13AM Fri, 09 July 2010 · @3669858


So is this straight line really how light is attenuated in water? No.

From first principles, I could show that the attenuation function is actually exponential decay, via calculus. But while that's an interesting little diversion, it's kind of OK to just accept it as truth.

Exponential decay can be expressed in many ways. But the way that I find most useful with regard to a comparison to what we've seen already is like this:

.5 ^ (x / h)

Where h is the distance where the attenuation is exactly 1/2, i.e. the half distance I mentioned.

It's easy to verify why that works. Consider the case where x  = h. Then the function is

.5 ^ ( h / h)

which is

.5 ^ 1

which is exactly 1/2.

Now if I want to understand how exponential decay looks compared to the linear falloff that DC implements by default, I would choose to line up the half distance of the exponential decay function with the half distance of the linear DC function.

Recall earlier that the linear DC half distance is b/2. So I want to use b/2 for my h.

.5 ^ (x / (b / 2))

With a little rearranging it should be clear that this is:

.5 ^ (2(x / b))

And so I have changed the graph - the old linear attenuation is now a dotted blue line. The new, correct exponential decay attenuation is the green line.

Some things to note:

They coincide at x = 0 and x = 3, which is the half distance.

In the range 0 <= x <= h, they aren't very different. I like to use the phrase "directionally correct" in a case like this. By that I mean that the linear DC attenuation is pretty close to what it really should be in that range. It starts and ends in the right places, and stays pretty close to the correct value everywhere in between.

After x is greater than h, the linear DC is not directionally correct. It goes to 0, first of all, while the real function never goes to 0. That could be tolerated if it didn't reach zero until the real function was very close to 0. But instead it goes to zero damn fast, while the real function keeps on going with significant non-zero values for quite a while.

This is why the built-in attenuation doesn't look right except for stuff that is close to the camera, closer than the half distance. Those things look pretty much how they should. But everything after the half distance looks totally wrong.
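The "directionally correct before h, totally wrong after h" behavior is easy to check numerically. Here is a small comparison in plain Python (illustrative only), using b = 6 and h = b/2 = 3 as in the graphs:

```python
def linear_attenuation(x, b):
    """The built-in linear depth cue with start distance 0, clamped to [0, 1]."""
    return max(0.0, min(1.0, 1 - x / b))

def exp_attenuation(x, h):
    """Physically based exponential decay with half distance h."""
    return 0.5 ** (x / h)

b, h = 6, 3
for x in (0, 1.5, 3, 4.5, 6, 9, 12):
    # Close agreement up to x = h = 3; past that, linear collapses to 0
    # while the exponential still carries significant values.
    print(f"x={x:4}  linear={linear_attenuation(x, b):.3f}  "
          f"exponential={exp_attenuation(x, h):.3f}")
```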
 



  bagginsbill    ( ) ( posted at 1:26AM Fri, 09 July 2010 · edited on 1:29AM Fri, 09 July 2010 · @3669860


To help visualize how extremely wrong this is, I did another graph. Also, in this graph I've changed the notes to use h instead of b/2.

So again we have the case of h=3. In addition, I've graphed the case where h=1.

In that case, when x = 2, the linear attenuation has already reached 0 - no light is transmitted to the camera.

Whereas, the exponential decay can be seen to get close to 0, but not until the distance is at least 8 times the half distance.

That's a huge difference. With the standard use of linear attenuation, you're completely unable to see anything from x = 2 to x = 8, things you should be seeing in the render.

Note I haven't talked about units here. The units really don't matter, but let's use some units that mean something to us. Suppose h = 100 feet. Then the linear attenuation makes everything invisible after 200 feet. But with the correct attenuation, we should still be able to easily see things out to 600 feet, and if we look hard, we should be able to detect objects out to 800 feet.
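Putting those numbers into the two functions (with the hypothetical h = 100 feet, so b = 200 feet for the linear version):

```python
h = 100.0   # half distance in feet (hypothetical value from the text)
b = 2 * h   # matching linear end distance: 200 feet

for x in (100.0, 200.0, 400.0, 600.0, 800.0):
    linear = max(0.0, 1 - x / b)      # built-in linear attenuation
    correct = 0.5 ** (x / h)          # exponential decay
    print(f"{x:5.0f} ft  linear={linear:.2f}  exponential={correct:.5f}")
```

At 200 feet the linear version is already 0.00, while the exponential is still 0.25; at 600 feet it is about 0.016 and at 800 feet about 0.004 - faint, but detectable.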

If you're used to metric, then imagine 200 meters versus 800 meters.

Whatever units you're used to thinking in, it should be intuitively obvious that this is a serious departure from reality.

So what do we do about it?



  kawecki    ( ) ( posted at 1:38AM Fri, 09 July 2010  · @3669861


Nothing to do with submarines, only an aerial scene with simple planes:

Stupidity also evolves!

  kawecki    ( ) ( posted at 1:39AM Fri, 09 July 2010  · @3669862


The scene setup:


  bagginsbill    ( ) ( posted at 1:59AM Fri, 09 July 2010  · @3669863

In any system that does something wrong, it sometimes seems there is no way to make it right. But quite often, there is. Any time the wrongness is reversible, then there is a way to transform what you have into what you want.

This comes up all the time. For example, monitors display luminance wrong. Dark things appear darker than they should. But if we have an understanding of what it does mathematically, as a function, and that function can be inverted with precision, then we can transform what we pass into that bad function such that it ends up doing nothing wrong. In the case of bad monitor response functions, we know that the displayed luminance is a power function, and if we pre-condition our output with the opposite power function, those things will cancel out and we'll get what we want. That's what gamma correction is.

Feeding a bad function precisely constructed wrong data will cause the badness to go away.

So we know our "bad" function is:

1 - x/b

Well what can we manipulate here? We can't change the number 1 - that's a constant. We can't change x - that is the distance between the camera and the object and it is what it is. We're running out of options. What's left? Can we mess with the end distance, b, in some way as to make the bad function into a good one?

Hmmm.

What I'm saying is we want to arrange things so that 1 - x / b (the bad function) becomes the same as .5^(x/h), the good function. Algebra to the rescue!

We want to force this to be true:

1 - x / b = .5 ^ (x / h)

I'm suggesting that if I can solve for b, I'll end up with a new "wrong" function that will produce exactly what I want.

In order to avoid some typing, let's let G represent the good function I want, so G = .5 ^ (x / h)

Rewriting with G I have:

1 - x / b = G

Subtract G from both sides:

1 - x / b - G = 0

Add x / b to both sides:

1 - G = x / b

Multiply both sides by b:

b ( 1 - G) = x

And divide both sides by 1 - G.

b = x / (1 - G)

We have to pause for a moment. During a proof, you have to make sure that each step is legal. For adding, subtracting, and multiplying, everything is legal. But for dividing, we must promise not to divide by 0, otherwise we can get into trouble, since dividing by 0 is impossible. I just divided by 1 - G. Can that ever be 0?

Indeed it can. Remember that G is the correct and good attenuation function, and that this function does equal 1 at some point. Therefore, 1 - G could be 0. But where is it that G is 1? Only when the distance is 0. Meaning, we're talking about dividing by 0 if and only if we're rendering an object that is actually touching the camera. I happen to know that will never happen - renderers can't render anything that is touching the camera; that would entail a divide by 0 when trying to do the perspective projection. So we can safely ignore the possibility that G is 1, and therefore ignore the possibility of dividing by 0 in this proof.

OK. With that safely out of the way, we've managed to isolate b, the end distance. All that remains is to put back the definition of G:

b = x / (1 - .5 ^ (x / h))

Wow - Yay! This is exactly the compensation function we need to make the Poser Atmosphere attenuation function do what we want. We need to plug a node network into DepthCue_EndDist, also known as "b", that implements the function we just arrived at.
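As a sanity check on the algebra, feeding this b back into the built-in 1 - x/b recovers the exponential exactly (illustrative Python):

```python
def compensated_end_dist(x, h):
    """End distance b(x) = x / (1 - 0.5**(x/h)). Plugging this into the
    built-in linear attenuation 1 - x/b reproduces exponential decay.
    Undefined at x = 0, but a renderer never shades a point at the camera."""
    return x / (1 - 0.5 ** (x / h))

h = 3
for x in (0.5, 3, 10, 25):
    b = compensated_end_dist(x, h)
    built_in = 1 - x / b        # what DepthCue computes with our b
    target = 0.5 ** (x / h)     # the exponential decay we wanted
    print(f"x={x:4}  built-in={built_in:.6f}  target={target:.6f}")
```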

Great. So all we need is the distance from the camera to the point being rendered.

Uh-oh. There is no node that does that. I wish there were. Are we dead?

Nope. If we accept a tiny bit of manual work setting up a shader, we can get this done.

But ... it's really late and I've got to go to bed. So, more tomorrow.



  kawecki    ( ) ( posted at 2:11AM Fri, 09 July 2010  · @3669867

I did a trick with the ground plane. I perspective-corrected the UVs with Poser because the original aerial image didn't look good with planar projection, and I also had to fix the mapping Poser produced in another program, because it wasn't good either.
Of course, this setup only works for the defined camera angle, but it would not be hard to make a morphing ground plane that adjusts its own UV mapping to different camera angles.
I also added a little distance fog with transparent planes. I have not used Poser's depth cue because Poser has no control over the height of the fog.

All this works fine for static scenes, and a scene can easily be assembled by picking the right background and ground images. Instead of a ground plane you can use a terrain, if it is properly mapped.
For animation, things get much more complicated: because of the varying camera angles, the ground and background planes become big, and to achieve good quality the image textures would have to be huge and impossible to render. It is also a waste of resources - you load a huge texture and use only a tiny fragment of it in the render.
Tiling can help a little, because you can use normal-sized textures, but for animation it won't look good, due to the repetitive and boring pattern of the textures.

Vue solves this problem easily. Vue can generate procedural clouds, sky, and terrain mapping, and you also have great control over fog, haze, and atmospheric effects.
Vue only creates what the camera sees in the render; move the camera and it will create whatever else the camera sees, and it is fast.

I have very little experience with Poser's atmosphere; when I tried to do something, the renders were so slow that I quit, and most of the things I wanted to do were impossible.
I want smoke, fire, fog limited to a volume, ghosts, ghostly effects...


  kawecki    ( ) ( posted at 2:58AM Fri, 09 July 2010 · edited on 3:00AM Fri, 09 July 2010 · @3669878

And now the submarine:

The attenuation is some function of the distance x; in general it can be exp(-x) or some polynomial approximation of exp(-x).
Now, what is x? It is supposed to be the distance between the camera and a point on an object.
The problem is: where do we get the value of the distance x?
We can get the distance from the z value in camera space, but where is z = 0, and what is the z-scale factor, which depends on the scene?
The distance would be d = scale * (Zcamera - Zpoint).
But the z value in camera space is perspective corrected: Zcamera = k / (c + Zreal), where k and c are parameters that depend on the camera settings.
We must know all those parameters and scales, then invert the perspective transformation, and only then can we have the real, correct distance to use in the attenuation function.
To complicate things further, the illumination of a point depends on how much the light illuminating it was attenuated while traveling through the water.
And even more: attenuation in water depends on the frequency, or wavelength, of the light. Longer wavelengths attenuate much more than shorter ones, so illumination turns more bluish the farther the light travels. Deep water is not green as it is generally painted - it is blue!


  Helgard    ( ) ( posted at 2:59AM Fri, 09 July 2010  · @3669880

Bagginsbill, amazingly, I understand everything so far. It all makes sense and the logic works for me.

kawecki - that system works, but how long does it take you to set up a new scene from scratch, to find the images, size them, apply them, correct them, etc.


Your specialist military, sci-fi, historical and real world site.

  Helgard    ( ) ( posted at 3:03AM Fri, 09 July 2010  · @3669882

Mmmm, after reading kawecki's last post, I have some questions for both kawecki and Bagginsbill, but I think I will wait until the explanation is over, because maybe they are still to be answered.



  kawecki    ( ) ( posted at 3:26AM Fri, 09 July 2010  · @3669886

Quote - kawecki - that system works, but how long does it take you to set up a new scene from scratch, to find the images, size them, apply them, correct them, etc.

Well, as I almost never do aerial scenes, I already have everything I need: a background plane, a fog plane, a tiled ground plane, and some tiled terrains. The setup is very fast.
Then come the textures, and here is the problem: it can take hours to find one.
I can make a scene, pose the figures, and even make some props in 30 minutes, render in a few minutes, and then spend five hours finding a texture.
It is not so difficult by itself; I have a good preselected set of textures of excellent rendering quality (some textures can look bad and very low quality if you examine them, but render in a marvelous way).
I have 5 GB of these textures on my HD, and the models I have are well mapped.
The problem is finding one that expresses what I want to do. Sometimes the first one I pick is exactly right, so it takes seconds and the final image is done in less than an hour; other times I have to try tens of them, rendering to see how each looks, and sometimes I give up.
As for aerial images, what I discovered with my example is that I need to make a morphing ground plane; once that is done it will be very easy to use for aerial scenes and more.

As for pre-made scenes, I have only made a forest: a ground with a lot of trees, where you can move and rotate the camera within some limits - it even has a road.
To keep the polygon count low, the trees have no tops and are only low-poly 8-sided cylinders. What is the top for, if your camera doesn't see it?


  kawecki    ( ) ( posted at 3:49AM Fri, 09 July 2010  · @3669895


The forest


  kawecki    ( ) ( posted at 3:51AM Fri, 09 July 2010  · @3669896


And what it is.

I didn't like the original textures, so I spent a little time finding others.
I made this prop several years ago and never finished it; one day I'll continue and improve it.


  bagginsbill    ( ) ( posted at 9:58AM Fri, 09 July 2010 · edited on 10:03AM Fri, 09 July 2010 · @3670217


Quote - Now, what is x? It is supposed to be the distance between the camera and a point on an object.
The problem is: where do we get the value of the distance x?
We can get the distance from the z value in camera space, but where is z = 0, and what is the z-scale factor, which depends on the scene?
The distance would be d = scale * (Zcamera - Zpoint).
But the z value in camera space is perspective corrected: Zcamera = k / (c + Zreal), where k and c are parameters that depend on the camera settings.
We must know all those parameters and scales, then invert the perspective transformation, and only then can we have the real, correct distance to use in the attenuation function.

Sorry, but everything you said after the underlined sentence is not true. What you describe is the Z-Depth, but that's not the "x" that DepthCue is using. It is using the exact straight line distance from the camera to the object. Nothing about the camera rotation or focal length or perspective mode matters at all. It is doing exactly what it is supposed to do, as you said in the second sentence above.

This is easily proven by a simple test. Set up a tile pattern on the ground. Point the camera down a bit. Set the Depth_Cue start distance (a) to something within your view. Set the Depth_Cue end distance (b) just slightly past that. I used a = 10 feet, and b = 10.1 feet. Thus, the linear decrease in attenuation will start at 10 feet and end at 10.1 feet, forming a pretty sharp gradient that is only 1 inch thick.

If what you say is true, the gradient will appear to be a straight line. If, on the other hand, it is actual radial distance, not Z-depth, the gradient will appear to be an ellipse. If you point the camera straight down at the ground it will be a circle.

Observe the curve (sometimes I am poetic).

So, having established that all the problems you anticipated do not exist, can you figure out how we can calculate the distance from the camera to the object?
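The circle-vs-line test can be mimicked with plain coordinate math (illustrative Python; the camera position and sample points are made up for the example):

```python
import math

def radial_distance(cam, p):
    """Straight-line distance from camera to point - the 'x' DepthCue uses."""
    return math.sqrt(sum((c - q) ** 2 for c, q in zip(cam, p)))

# Camera at the origin looking straight down; ground plane 10 units below.
cam = (0.0, 0.0, 0.0)
directly_below = (0.0, -10.0, 0.0)
off_to_the_side = (8.0, -10.0, 0.0)

# Both points have the same Z-depth (10 units along the view axis),
# but different radial distances - hence the gradient renders as a circle.
print(radial_distance(cam, directly_below))   # 10.0
print(radial_distance(cam, off_to_the_side))  # about 12.8
```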



  bagginsbill    ( ) ( posted at 10:01AM Fri, 09 July 2010  · @3670221


This is looking straight down at the ground. Clearly a circle, despite the fact that the Z-depth is a constant. It's not Z-depth.



  Helgard    ( ) ( posted at 10:33AM Fri, 09 July 2010  · @3670229

Bagginsbill, you are right, but one thing that kawecki said is also true, and I don't know yet if you have taken this into account, and it may be a bit over the top for the purposes of what we want. To explain:

  1. We are trying to find a "correct" system of depth cue for an underwater scene.

  2. We are going to handle distance as well as the scattering of light by particles.

  3. But as kawecki said, the deeper you go, the more the light should fade, and, as he also said (and this is really finicky), the light should become bluer, until it eventually fades almost totally to black at really deep depths.

While I was looking at all the reference pictures I saw this effect, but wasn't really thinking about it until kawecki mentioned it. I think for what we want to do, and the depth at which we are working (100 feet), this is maybe not really something that needs to be taken into account.

I think in this picture you can see the darkening effect of depth.

http://fc05.deviantart.net/fs13/f/2007/047/8/f/Underwater_Light_and_Bubbles_by_Della_Stock.jpg 

But at the depths we are working at, you don't really see as much of an effect:

http://www.hawaiipictures.com/pictures/gallery/underwater/underwater11600x1200-1.jpg



  bagginsbill    ( ) ( posted at 1:19PM Fri, 09 July 2010  · @3670294

I will get to that. I've already taken care of it. Apparently you guys don't recognize the depth influenced changes in the attenuation and the scattering in my images. Yes we're only dealing with 100 feet, but there is already a big difference, versus the constant amount of scattering that the built-in math does.

My math takes into account not only what depth you're at, but whether you're looking up, across, or down, automatically.

When I do the side-by-side comparisons, you'll be much more aware of the fact that I've already shown what you're talking about. Just not in deep water.



  Helgard    ( ) ( posted at 1:51PM Fri, 09 July 2010  · @3670314

Cool, as I was saying earlier, I thought you would still get to that, lol, so I should have kept my mouth shut, lol.



  Coleman    ( ) ( posted at 2:02PM Fri, 09 July 2010 · edited on 2:03PM Fri, 09 July 2010 · @3670321

I like the turbulence emphasis in Red October.

I think they decided to fake deep-water lighting and use turbulence for dramatic effect. In deep water you probably couldn't see anything anyway.

Could you make some turbulence prop for your animated sub, Helgard? That would be very cool.

http://movieclips.com/watch/the-hunt-for-red-october-1990/escaping-torpedoes/


  Helgard    ( ) ( posted at 2:23PM Fri, 09 July 2010  · @3670328

Coleman,

I have a script by Ockham called Bubbles or something like that, that emits bubbles, which I have used before in underwater animations. I am sure I could get a good effect with that, although in reality there would be no bubbles from a submarine, lol - it would sort of defeat the purpose of being a stealthy ship. :-)

In films and movies they often cheat for effect. In most submarine clips, because there are no external markers, you cannot actually see that something is moving, so adding bubbles gives a visual marker to make the viewer think the object is moving. If they didn't add the bubbles, the submarine would literally look like it was standing still, unless it was moving past something like rocks or the surface or the ocean bed.

It is the same with muzzle flashes in movies. Real guns, in daylight, make very little muzzle flash, and the flash they do make lasts for hundredths of a second, but in movies you always see massive bursts of flame. If they didn't add this, you would actually not realise that the gun had been fired. If you look at news clips, where this effect has not been added in, the guns look a lot different when they fire compared to the guns in movies.



  kawecki    ( ) ( posted at 8:56PM Fri, 09 July 2010 · edited on 8:58PM Fri, 09 July 2010 · @3670439

Quote - While I was looking at all the reference pictures I saw this effect but wasn't really thinking about it until kawecki mentioned it. I think for what we want to do, and the depth at which we are working (100 feet), this is maybe not really a consideration or something that needs to be taken into account.

The effect is still important even at shallow depth. At a depth of 30 m we can ignore the attenuation of the sunlight coming from above, but we cannot ignore the effect of light traveling horizontally through the water.
A red submarine near the camera should look red, while another red submarine a kilometer away must look black; a nearby blue submarine will look blue, and a faraway blue submarine will still look blue.
This problem can be solved if we can assign a different attenuation value to each RGB component, with the red component having the largest attenuation and blue the smallest.
I suppose we can continue to use the same function for each component, only changing the factors for R, G, and B.
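kawecki's suggestion - the same function per channel, with different factors - could be sketched like this (the half distances below are purely illustrative, not measured optical values):

```python
def rgb_attenuation(x, h_rgb):
    """Exponential attenuation per RGB channel; each channel has its own
    half distance. In water, red fades fastest and blue slowest."""
    return tuple(0.5 ** (x / h) for h in h_rgb)

h_rgb = (5.0, 20.0, 60.0)   # hypothetical half distances in meters (R, G, B)
print(rgb_attenuation(1, h_rgb))    # up close, all channels near 1: still looks red
print(rgb_attenuation(100, h_rgb))  # far away: red essentially gone, blue survives
```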


  kawecki    ( ) ( posted at 9:32PM Fri, 09 July 2010 · edited on 9:35PM Fri, 09 July 2010 · @3670452

Quote - Sorry, but everything you said after the underlined sentence is not true. What you describe is the Z-Depth, but that's not the "x" that DepthCue is using. It is using the exact straight line distance from the camera to the object. Nothing about the camera rotation or focal length or perspective mode matters at all. It is doing exactly what it is supposed to do, as you said in the second sentence above.

Poser does all the calculation internally; you can only set the start and end points of the depth cue, and Poser will do all the distance calculations by itself and attenuate the light, I suppose in a linear way.
But if you want the attenuation function to be different from what Poser does internally, you must calculate the distances yourself and apply your own transfer function.
There are only two ways to calculate distances, no matter whether you or Poser does it:
you can calculate distance in World Space or in Camera Space.

In World Space you need the x, y, z values of the camera and the x, y, z values of each point of the objects in the scene.
For the distance calculation you must perform three subtractions, three squarings, and one square root for each vertex of the scene - something very time consuming.

In Camera Space, to calculate distances you only need to know the z value of the camera and the z value of each point, values that you get from the Z buffer.
With a linear Z buffer the distance is nothing more than the difference of the z values.
With a perspective Z buffer you must first perform a division and then take the difference of the z values, which is more time consuming than with a linear Z buffer.
Even so, it is all much faster than in World Space, because you only need to calculate distances for each pixel of the image, not for all vertices of all objects. You calculate only what you see, not what is obscured by other objects.
Of course, Poser must do this internally in Camera Space.

In either case you also need to know the scale factor of the World or Camera Space, and in the case of a perspective Z buffer, also the camera parameters.


  seachnasaigh    ( ) ( posted at 2:22AM Sat, 10 July 2010  · @3670527

(BB)

Quote - So - how about you export the equirectangular image from a Vue-generated world?

That was what prompted me to get Vue - I wanted to generate texturing images for a large Poser environment model I'm working on, one large enough to accommodate my elvish tree & cottage models, yet feasible for populating a forest setting with trees.

Poser 11 Pro 11.1.1.35540, in Poser native units.  

OSes:  Win7Prox64, Win7Ultx64

Silo Pro 2.5.6 64bit, Vue Infinite 2014.7, Genetica 4.0 Studio, UV Mapper Pro, UV Layout Pro, PhotoImpact X3, GIF Animator 5, Reality 4.3.1 & Lux 2.0


  Coleman    ( ) ( posted at 3:02AM Sat, 10 July 2010  · @3670539

Thanks for the explanation.

Quote - Coleman,

I have a script by Ockham called Bubbles or something like that, that emits bubbles, which I have used before in underwater animations. [...]


  Helgard    ( ) ( posted at 6:48AM Mon, 13 September 2010  · @3702420

Bagginsbill: Just bumping this up again. Any idea when, or if, you will ever make the product? If you don't have time I am willing to help out. I sort of need it.



  bagginsbill    ( ) ( posted at 8:26AM Mon, 13 September 2010  · @3702444

I'm overwhelmed with paid work already. Want to make a joint product? I almost think that's the only way I'll ever get anything commercial published.



  Cyberwoman    ( ) ( posted at 8:33AM Mon, 13 September 2010  · @3702447

I would be interested in seeing the work behind it (although I second nruddock's suggestion that a PDF might make it easier to work with the equations). Not sure if it will make any sense, since I think I've forgotten almost everything from my high school calculus class, but I'd like to take a look at it anyway. Maybe it will inspire me to go find my old textbooks and learn some of it again.

~*I've made it my mission to build Cyberworld, one polygon at a time*~

Watch it happen at my technology blog, Building Cyberworld.


  bagginsbill    ( ) ( posted at 12:37PM Wed, 15 September 2010  · @3703423


I am almost ready to post the scene. There are a lot of parameters that have to be adjusted whenever you move the water surface, change the scatter color, or move the camera. I wrote a Python script to fully automate all this. It will synchronize values across:

Atmosphere shader
SunLight shader
IBL shader
Environment Sphere
Water Plane

Pretty cool. Will post soon.

Here is a render with the water surface 100 feet above the ground.



  bagginsbill    ( ) ( posted at 12:38PM Wed, 15 September 2010  · @3703424


Change the surface to 200 feet. The water plane and the environment sphere move, and all the lighting and shading are adjusted.



  bagginsbill    ( ) ( posted at 12:38PM Wed, 15 September 2010  · @3703425


Here it is at 300 feet.



  bagginsbill    ( ) ( posted at 12:40PM Wed, 15 September 2010  · @3703428


Observe how the light and caustics change with depth.



  bagginsbill    ( ) ( posted at 12:43PM Wed, 15 September 2010  · @3703429


Look up and you see the sky through the water, in a physically correct way.



  bagginsbill    ( ) ( posted at 12:50PM Wed, 15 September 2010  · @3703432


Scattering color and intensity are easily adjusted in one place.



  bagginsbill    ( ) ( posted at 12:55PM Wed, 15 September 2010  · @3703433


I'll eventually make other ground covers, but I think this one works pretty well even up close. It's 100% procedural.

The ground prop I made myself. You can easily make others using any terrain generator. I used Sculptris.



  Helgard    ( ) ( posted at 12:57PM Wed, 15 September 2010  · @3703434

bagginsbill, sending you an e-mail now.


Your specialist military, sci-fi, historical and real world site.

  Snarlygribbly    ( ) ( posted at 12:59PM Wed, 15 September 2010 · edited on 1:00PM Wed, 15 September 2010 · @3703435

Wow. You've done some pretty impressive things in the past but this might be the best yet!
Amazing stuff. Sometimes I'm impressed when I see something done in Poser that I wouldn't have been able to work out how to do myself. But this is on an altogether different level, because this is stuff I wouldn't have thought anybody could do in Poser.

Free stuff @ snarlygribbly.org/poser


  Helgard    ( ) ( posted at 8:38AM Thu, 16 September 2010  · @3703758


OK, the project is almost ready. We are adding some props for extra underwater detail: a WWI ship, a WWII submarine, and a modern freighter broken in two halves, all low-polygon and low-detail with rust textures, to add to the scenes. There will also be a few other typical underwater props, such as an anchor, an old 18th-century cannon barrel, and a few other things. The product will be designed for both underwater and surface scenes.

There is also an optional animated caustic map for those who want to animate underwater scenes, as well as a script to create bubbles from any prop in the scene.

So, what else do you want (and will actually use) in a product like this?

Fish, ships, submarines, scuba gear, etc., are things that should be separate products, lol, so don't ask for those.



  flibbits    ( ) ( posted at 10:41PM Fri, 15 July 2011  · @3818511

Was this ever completed?



  bagginsbill    ( ) ( posted at 11:02PM Fri, 15 July 2011  · @3818514

I pretty well finished the technology part, if I recall. I remember working on automating it so that all the shader tweaking keyed to the y-coordinate of the water surface would be handled without user intervention. Prior to that, when you moved the water surface prop up or down, or moved the camera, you had to go into several shaders and update some numbers by hand.

I'm pretty sure that all that remains is to document it and package it up.
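For reference, the linear depth-cue attenuation discussed at the start of the thread, with the start distance set to a = 0, reduces to Clamp(1 - x / b). A one-line Python version (variable names are mine, not the script's):

```python
def attenuation(x, b):
    # Depth-cue attenuation with start distance a = 0:
    # falls linearly from 1 at distance 0 to 0 at the end distance b,
    # clamped to the range [0, 1].
    return max(0.0, min(1.0, 1.0 - x / b))
```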



  Anthony Appleyard    ( ) ( posted at 12:30AM Fri, 05 August 2011 · edited on 12:40AM Fri, 05 August 2011 · @3826668

Please, in summary, how do I get these various caustic, linear-fade and asymptotic-fade effects, and the seen-from-underwater surface light refraction effect? And the waves-on-water seen from below? Which of them need Poser 8 rather than Poser 7? How did you make the seabed seen in some of the scenes? Any chance of a brief tutorial?


  bantha    ( ) ( posted at 1:21AM Tue, 04 October 2011  · @3846617

BB, did you publish the Python script for your underwater shader somewhere?


A ship in port is safe; but that is not what ships are built for.
Sail out to sea and do new things.
-"Amazing Grace" Hopper

Avatar image of me done by Chidori

  bagginsbill    ( ) ( posted at 4:52PM Fri, 25 July 2014  · @4164728

Bumping my thread.

If people want this I'm happy to dig it up and show how it's done.



  seachnasaigh    ( ) ( posted at 5:16PM Fri, 25 July 2014  · @4164732

I certainly want it. I'd like to see the math too, if you have a means of posting scribbles.

Poser 11 Pro 11.1.1.35540, in Poser native units.  

OSes:  Win7Prox64, Win7Ultx64

Silo Pro 2.5.6 64bit, Vue Infinite 2014.7, Genetica 4.0 Studio, UV Mapper Pro, UV Layout Pro, PhotoImpact X3, GIF Animator 5, Reality 4.3.1 & Lux 2.0


  willyb53    ( ) ( posted at 5:36PM Fri, 25 July 2014  · @4164736

I am also very interested :D Bill

People that know everything by definition can not learn anything

  bantha    ( ) ( posted at 1:39AM Sat, 26 July 2014  · @4164781

I'm still interested.



  bagginsbill    ( ) ( posted at 7:17AM Sat, 26 July 2014  · @4164803


OK, I found the underwater runtime. I have to do some work to use it. When I loaded the scene into PP2014, it renamed all my props, appending _1 to each. That broke the script, which expected names like "WaterPlane", not "WaterPlane_1".
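One way to make a script tolerant of that renaming (a sketch of the workaround, not the actual fix) is to normalize actor names before looking them up:

```python
import re

def base_name(name):
    # Poser appended "_1" to prop names on import ("WaterPlane" became
    # "WaterPlane_1"); strip a trailing _<digits> so lookups still work.
    return re.sub(r"_\d+$", "", name)
```

The script can then match props by `base_name(actor.Name())` instead of the exact internal name.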

I'm improving some of the materials. Here's a demo.



  parkdalegardener    ( ) ( posted at 9:00AM Sat, 26 July 2014  · @4164819

Very cool



  Anthony Appleyard    ( ) ( posted at 5:27AM Sun, 27 July 2014 · edited on 5:30AM Sun, 27 July 2014 · @4164953

Attached Link: https://en.wikipedia.org/wiki/Bow_thruster


This image is a Poser render of my surface (and short-dive-submersible) grab-dredger. The sea surface is the ground plane re-colored and 50% transparent (edge transparency = 0%); the deep sea beyond visibility limit is the background with color red=0, green=128, blue=255. How could I make the effect of underwater visibility decreasing with depth, while keeping the air above water level clear?

(The big hole on each side of its bows is the inlet and exit of a bow thruster.)


  FightingWolf    ( ) ( posted at 9:39PM Mon, 28 July 2014  · @4165265

I'm interested.  The enviro-dome is awesome and I can't wait to get a chance to render underwater scenes.


