|
Author Topic: (ROAM) Triangle Priorities: why/how do they work?
Tim
Member
posted January 20, 2000 04:36 AM            
I've been trying to implement ROAM, and I'm stuck when it comes to calculating priorities.
Sure, I have the ROAM PDF file, and I even looked at some other people's source code for their ROAM implementations (from this BB), and nowhere do I find an explanation of *how* it's supposed to function.
I can write code to do it (which doesn't seem to work), but I need to know the principles behind the formula stated in section 6.2.

Namely, they start with the vector (a, b, c), which is the 'camera space' vector of (0, 0, eT), and (p, q, r), the camera space vector of a 'domain point'.

OK, so what does a point at the origin (but displaced by the wedgie thickness) have to do with a point on the triangle (or is it the wedgie)?

The paper is extremely unclear as to what to do here.

I can't visualize how these two vectors are related. (0, 0, eT) will get transformed into camera space, and so will a wedgie/triangle vertex. But if (0, 0, eT) in world space is 2000m behind the viewer, and the triangle in question is 2m away, how is that going to compute a priority?

Right now I'm trying to compute priority from the wedgie thickness and a spline mapped against distance from the viewer, which seems like it would work... (as long as I don't narrow my FOV to an extreme).

If anyone could explain how the ROAM priorities work with a diagram or other long-winded explanation, I'd be most grateful.

Thanks everyone for the help, this board is a great place for information, and TreadMarks is certainly great inspiration.



Chris C
Member
posted January 20, 2000 07:10 AM         
OK, here goes...

I'll quickly summarise what the formulae mean and then explain in a bit more detail: the priority values are approximate absolute max errors in screen space; in other words, the absolute max difference, in pixels, between what a group of (possibly coarsely tessellated) triangles looks like projected onto the screen and what it would look like if drawn at maximum detail.

The vector (a, b, c) is the direction of the height field elevation in camera space, i.e. if you are level and looking straight ahead parallel to the ground, then it'll be (0, eT, 0) if y is up in your co-ordinate system.

eT (if I remember correctly) is the absolute max error between the heights interpolated across the mesh triangle and the vertex heights of the triangles contained within it at the highest level of tessellation. A wedgie is a triangle-capped volume (i.e. a wedge) formed by extruding the mesh triangle up and down by eT at all its vertices. By projecting this wedgie into screen space you can then find which wedgie thickness (at each vertex) projects to the largest distance in screen space, and take this as the mesh tri's priority.

So, how do you do this? The answer lies in the (a, b, c) vector. First, you transform each mesh tri vertex into camera space to get (p, q, r) for that vertex. You can then form (p, q, r) + (a, b, c) to get the top of the wedgie and (p, q, r) - (a, b, c) to get the bottom at each mesh vertex. The paper says that the max projected wedgie thickness will occur at the vertices, so you just do this for each mesh vertex, giving you two points per vertex. Then project these two camera-space points onto the screen.

To find the screen space error simply compute the distance between the wedgie top and bottom screen co-ordinates for each vertex and take the maximum. This will correspond to the max screen space error in pixels... The formula stated in the paper basically puts all these steps into one formula and then simplifies it, making it a little hard to follow, but the end result should be the same.
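
In rough C++ it might look something like this (untested, off the top of my head; Vector3, CameraSpace() and ProjectToScreen() are just placeholders for whatever your own maths code provides):

#include <cmath>

// Per-triangle priority as an approximate max screen-space error, in pixels.
// elevCam is the (a, b, c) vector: the height-field "up" direction in camera
// space, already scaled by the wedgie thickness eT.
float WedgiePriority(const Vector3 worldVerts[3], const Vector3& elevCam)
{
    float maxError = 0.0f;
    for (int i = 0; i < 3; ++i)
    {
        Vector3 pqr = CameraSpace(worldVerts[i]);            // (p, q, r) for this vertex

        Vector3 top    = ProjectToScreen(pqr + elevCam);     // wedgie top, screen coords
        Vector3 bottom = ProjectToScreen(pqr - elevCam);     // wedgie bottom, screen coords

        float dx = top.x - bottom.x;
        float dy = top.y - bottom.y;
        float error = std::sqrt(dx * dx + dy * dy);          // projected thickness at this vertex

        if (error > maxError)
            maxError = error;
    }
    return maxError;    // approximate absolute max screen-space error
}

The formula in the paper folds the camera transform, projection and divide into one expression so it avoids doing all of this per vertex, but the result should be the same.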

This is all off the top of my head, so there may be a mistake or two, but this is pretty much what I understood.

Hope this helps,

Chris




Bryan T
Member
posted January 20, 2000 11:05 AM            
I struggled with the wedgies as well, but decided to go with Seumas's 'variance' method. You can find a discussion of this in other threads on the board (use the search facility for the word 'variance').

I have since chosen to use a different metric though. I was not getting enough detail on some areas of the landscape that seemed important, so I tried calculating the variance using the difference of the min and max heights for triangle nodes.

This seems to work very well and gives detail to the bumpy regions more properly than the other calculation I had. Using the min/max heights of the nodes, you never have to use matrix math, so you avoid all those extra cycles.
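
Roughly what I do (just a sketch, not my actual code; the bintree node layout here is only for illustration):

#include <algorithm>

// Per-node "variance" as (max height - min height), built bottom-up
// over the triangle bintree.
struct TriNode
{
    int v0, v1, v2;             // height-map indices of this node's three vertices
    TriNode* leftChild;         // null for leaf nodes
    TriNode* rightChild;
    unsigned char minH, maxH;
    unsigned char variance;
};

void ComputeMinMaxVariance(TriNode* node, const unsigned char* heightMap)
{
    if (node->leftChild && node->rightChild)
    {
        ComputeMinMaxVariance(node->leftChild, heightMap);
        ComputeMinMaxVariance(node->rightChild, heightMap);
        node->minH = std::min(node->leftChild->minH, node->rightChild->minH);
        node->maxH = std::max(node->leftChild->maxH, node->rightChild->maxH);
    }
    else
    {
        node->minH = std::min(heightMap[node->v0], std::min(heightMap[node->v1], heightMap[node->v2]));
        node->maxH = std::max(heightMap[node->v0], std::max(heightMap[node->v1], heightMap[node->v2]));
    }
    node->variance = node->maxH - node->minH;   // no matrix math involved
}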

Good luck with your implementation!
--Bryan


Tim
Member
posted January 20, 2000 01:56 PM            

Chris Wrote: "So, how do you do this? The answer lies in the (a, b, c) vector. First, you project each mesh tri vertex into camera space to get (p, q, r) for each vertex. You can then project (p, q, r) + (a, b, c) to get the top of the wedgie and (p, q, r) - (a, b, c) to get the bottom at each mesh vertex.

That seems to be the same as using the world space for a triangle, and generating the top and bottom of the wedgie just by adding/subtracting eT for each Z. I have to do that anyway for the view frustum culling.

(I'm stuck with 3DSMAX coordinate system, since that's my modeller, Z is 'altitude')

"Then project these two camera space points onto the screen."

Ah, see, they never talk about projecting into screen space.

Though I'd tried this approach before I knew what section 6.2 was talking about, and detail didn't seem to reduce with distance.

"To find the screen space error simply compute the distance between the wedgie top and bottom screen co-ordinates for each vertex and take the maximum."

I presume it'd be the distance between each pair of 'extruded' points, since if the view banks, the wedgie and triangle turn on their sides.

So if I understand it:

Triangle vertices: Vector3 V1, V2, V3;
Wedgie vertices: Vector3 W1, W2, ... W6;

Wedgie top: W1 = (V1.x, V1.y, V1.z + eT) ...
Wedgie bottom: W4 = (V1.x, V1.y, V1.z - eT) ...

Project this into screen space:
(perhaps slow, but I'm trying to get the idea working first)

masterMatrix = eyeMatrix * perspectiveMatrix;
Vector3 screenW1 = W1 * masterMatrix;
Vector3 screenW2 = W2 * masterMatrix;
...
(I love c++)

Find 3 distances (between the projected top and bottom of the wedgie at each vertex):

Vector3 distVect = screenW4 - screenW1;
float distance1 = distVect.Magnitude();
...

And then use the maximum of these distances as the priority for this wedgie.

As for wedgies vs. variance, I think they're the same thing; at least, they are in my implementation.
I use that wonderful mip-map-like array to store my 'eT' values. Triangles have 'index' and 'level' members; children get level+1, index*2.
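
Something like this (just how I think about it; assuming the other child gets index*2 + 1):

// Implicit binary tree over the bintree triangles: root is level 0, index 0,
// children are (level+1, index*2) and (level+1, index*2 + 1).
// Each (level, index) pair then maps to one slot in a flat "mip-map like" array:
inline int ETSlot(int level, int index)
{
    return (1 << level) + index;    // level 0 -> slot 1, level 1 -> slots 2-3, and so on
}

// So the whole tree down to MAX_LEVEL fits in:
//   float eT[2 << MAX_LEVEL];
// and a triangle's error bound is just eT[ETSlot(tri->level, tri->index)].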

If I can get this working properly, I'll move on to the merge queue. (Split queue works for the most part)


LDA Seumas
unregistered
posted January 20, 2000 02:11 PM           
Bryan,

If the difference between min and max heights inside of a particular triangle means what I think it does, won't you run into the situation where a perfectly smooth slope on a hill registers as highly variant (and thus highly tessellated) when in actual fact it could be perfectly represented at a very coarse level of tessellation? This approach may work decently if your terrain is primarily flat with occasional spurts of roughness or small slopes, but it doesn't seem like a good general solution, unless I'm misunderstanding.
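
A made-up example of what I mean:

// Heights along one triangle edge of a perfectly linear slope (made-up numbers).
float left = 0.0f, mid = 5.0f, right = 10.0f;

float minMaxSpread = 10.0f - 0.0f;                 // 10: flagged as "highly variant"
float actualError  = mid - 0.5f * (left + right);  // 0: a single coarse tri represents it exactly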

------------------
-- Seumas McNally, Lead Programmer, Longbow Digital Arts


Chris C
Member
posted January 20, 2000 04:26 PM         
The only thing I like about the ROAM priority computation is the fact that when you observe the terrain from up above, looking straight down, the mesh tri priorities drop, and the mesh is represented more coarsely with very little perceptible error. Of course this is no use if you're always viewing the mesh from just above ground level, but for impressive zoom effects, it's quite neat. Just too many expensive computations, though. I reckon you could replicate the effect by including the angle between the view-ahead vector and the terrain elevation vector (i.e. the z part of your view vector if you're using Z = up) in your own priority computations for a quick approximation. Just an idea.
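
Something like this, perhaps (totally untested, just the gist; Vector3 is whatever your own vector class is):

#include <cmath>

// Cheap approximation: scale a distance-based priority by how edge-on the
// terrain is to the view. viewAhead is the unit "look" vector, Z = up.
float ApproxPriority(float wedgieThickness, float distToEye, const Vector3& viewAhead)
{
    // 1.0 when looking along the ground, approaching 0.0 when looking straight
    // down, which is when the projected wedgie thickness shrinks.
    float edgeOnFactor = std::sqrt(1.0f - viewAhead.z * viewAhead.z);

    return (wedgieThickness / distToEye) * edgeOnFactor;
}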

Chris


Tim
Member
posted January 20, 2000 08:16 PM            
Hey, I got it working!!

Thanks for your help Chris, and of course thanks to LBDA for introducing me to ROAM.

I had to re-write a bit of my code, but far-away hills now properly start to reduce in detail, and ground up close tessellates nicely.

So far I'm using code like the above: slow, but it works!

On to the merge queue for frame-to-frame coherency, which oughta be easy. (Cross fingers.)

Hopefully the end result will be spiffy. I have grand plans for this little routine.


Bryan T
Member
posted January 24, 2000 11:26 AM            
Seumas,

Perhaps my variance calculations were not correct, but they were not giving me the detail in the distance that I would like. Are you adding up the variances of the children or just taking the greatest overall variance over all children?

The problem with the hill is actually moot. If all you have is one long slope, then all the heights will be equal. If you have some areas that are flat and others that are sloped, then yes the sloped areas will get at least one more tessellation. I have not noticed a drain on the pool for these however.

My datasets are Tread Marks maps currently and most have enough bumpiness that the triangles go where they are needed regardless. Also, when comparing TreadMarks on wireframe with my engine on wireframe (at <5000 tris) I feel my tessellation looks more correct. After about 8000 tris they both look the same.

One oddity of my engine, though, is that I begin tessellation at the upper-left corner of the HeightMap instead of the patch directly under the eye. If the eye is in the lower-right corner, it may run out of triangles before adding the desired level of detail under the eye. Annoying, but it's not a game engine, so it doesn't need that level of polish.

--Bryan
