This topic is 3 pages long:   1  2  3
Author Topic:   Rottger/Quadtree Algorithm
Klaus Hartmann
Member
posted February 08, 2000 11:43 AM            
MarkW,

I'm a bit confused now. I take the average height, and the detail increases as I get closer to the ground. The LOD works just as I would expect.

Did I miss something?

MarkW
Member
posted February 08, 2000 12:06 PM            
Klaus,

It was just a suggestion... It depends on what your game is like. Let's take an example:

Your average terrain height is 100.
The camera moves below 100 towards the ground which is at 0. Assuming only your vertical position is changing, the ground detail (at height 0) will start decreasing because your distance from 100 is increasing. It's not a huge difference and the effect really depends on your scale, terrain type, etc.

Klaus Hartmann
Member
posted February 08, 2000 12:32 PM            
MarkW,

Thanks. I understand now. I sort of like your idea, because it will also be faster than computing the average height for each block.

Anyway, I plan to rewrite my engine tonight so that I can make it publicly available (hopefully with correct geomorphing, and this time it'll use Direct3D). I'll support your idea in this version.

BTW... does anybody know of a fast way to convert all those fans into large strips? I'd really prefer to use vertex buffers with indexed strips.

MarkBatten
Member
posted February 09, 2000 10:14 AM            
Guys -- As Klaus requested, I'll post the portion of my source that walks the fans and renders them after the blend matrix has been built. Compared to what I read of what you've done, mine seems rather simplistic, but the results look good. All I ask in exchange for the post is your comments on how it might be improved. It's on my other computer, but I'll try to send it shortly.

Klaus Hartmann
Member
posted February 09, 2000 10:45 AM            
MarkBatten,

Thank you so much for this. I really appreciate it, and of course I'll provide feedback.

At this moment I'm also trying to get in touch with one of the authors of the paper. Maybe I'll have more luck than MarkW had, because I'm a native German speaker (just as the author is), so it might be easier for him to answer. Well... let's hope they are willing to answer some open questions.

Klaus Hartmann
Member
posted February 09, 2000 05:59 PM            
MarkBatten,

Have a look at your old post (Feb 03, 2000 - 02:47 PM).
If your propagation code really works as you explain in your post, then you ran into a trap (I almost ran into it myself). The problem is that you cannot propagate the errors in a recursive process. The reason is as follows. You need to compute K times the neighbors' d2 values. However, if these neighbors have a different parent from the current block, then chances are the neighbors' children didn't propagate their d2 values to the corners yet (simply because they are in a different branch of the recursion and are therefore visited later). Thus you read wrong d2 values from the edges.

In order to fix this, you first need to process *all* smallest blocks in the matrix. *After* this you walk up to the next level and process *all* blocks on that level. *After* this you walk up again... and so on.
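For what it's worth, here's a minimal sketch of that visiting order, assuming a (2^n+1)x(2^n+1) height field. It only enumerates block centers level by level; the per-block d2 computation and edge propagation are engine-specific and left as comments:

```cpp
#include <utility>
#include <vector>

// Enumerate quadtree block centers finest-level first, so that every
// block on one level is visited before any block on the next coarser
// level. dim is the map size, 2^n + 1. At each center you would compute
// the local d2 and propagate the K-scaled values to the shared edge and
// corner vertices (those calls are omitted -- they depend on your engine).
std::vector<std::pair<int, int>> levelOrderCenters(int dim)
{
    std::vector<std::pair<int, int>> order; // (blockSize, y * dim + x)
    for (int size = 2; size < dim; size *= 2) { // 3x3 blocks first
        int half = size / 2;
        for (int y = half; y < dim; y += size)
            for (int x = half; x < dim; x += size)
                order.push_back({size, y * dim + x});
    }
    return order;
}
```

For a 5x5 map this visits the four 3x3 block centers before the single 5x5 center, which is exactly why the edge values read at the coarser level are already final, unlike in the recursive walk.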

Klaus Hartmann
Member
posted February 09, 2000 06:08 PM            
Correction to my last message.

Of course I didn't mean "K times the neighbors' d2 values". I meant K times the d2 values that are stored at the edges. But... the neighbors' children do contribute to those d2 values, and thus the rest of the post is correct.

MarkW
Member
posted February 09, 2000 07:40 PM            
Klaus... I also get my d2 values the same way. I just have a pair of for loops that walk through the terrain's dimensions, incrementing x/y by the current level's node dimension. I think this is also faster than the recursive version.

Speaking of d2 values... I think both of you only consider the local d2 and the 4 edge values (N,E,S,W) and then store the Max in the node's center. Would it help to also compare to the existing d2 at the node's center before we overwrite it?

Klaus Hartmann
Member
posted February 09, 2000 09:15 PM            
MarkW -- As for your d2-values question: I just don't know.

Guys -- I'd like to discuss this error propagation again. The point is this. If I had really implemented the propagation as it is presented in the paper, then this part of my code would be completely different. I'm now going to copy the important parts of the paper and then comment on them. (Always keep in mind that the authors are mathematicians with Ph.D.s.)

I'm going to do this because I think there's a chance that none of us got the error propagation right. Let's start:

(1) "Thus, if d21/d22 <= K, then we have to modify the d2-values in the following fashion:"

The above contains an "if". Does this perhaps mean "if and only if"?

(2) "Starting with the smallest existing block, we calculate the local d2-values of all blocks and propagate them up the tree."

The important words here are "local" and "all". If I take this sentence as it stands, then I would first compute the *local* d2-values of *all* blocks in the tree, before I do anything else (like multiplying by K and comparing with existing values).

(3) "The d2-value of each block is the maximum of the local value and K times the previously calculated values of adjacent blocks at the next lower level."

Now this one is a beauty. First off, have a look at the end of the second paragraph in the geomorphing section. Here the authors use the terms "lower level" and "higher level". One might think that they accidentally confused "lower" with "higher", but... If you were assigning numbers to each level of the tree, you would choose 0 for the root and 1 for the level below. Thus, the root is at a lower level than the level below. Therefore, the authors didn't confuse "lower" with "higher".

Now back to the propagation. The important part here is "...K times the previously calculated values of adjacent blocks at the next lower level."
If we use the same convention as in the geomorphing section, then the "next lower level" is the level above, and the "previously calculated values" are the values we calculated in (2).

Well, my description above is not really complete, because that would have taken too much time. But please consider this: "Why do we have children if 0.5 <= f < 1?" One possible explanation could be that our error propagation is completely wrong.

Opinions, please?

MarkW
Member
posted February 10, 2000 11:36 AM            
Klaus,

In my opinion, the error propagation is definitely open to interpretation. If you read some of my earlier posts in this forum, I was doing it completely differently at the start. When I read the term "adjacent blocks at a lower level", I thought it meant the 2 child nodes of your neighbor that border the current block. I wasn't doing the propagation to the corners at all. The algorithm still seemed to work without any cracks, etc. Since then I've fixed it to propagate to the corners, but this illustrates that just because it looks like it's working does not mean you're doing it correctly.

As I have stated here before, the geomorphing presented in the paper will only work as intended if you can fix it so "f values less than 1/2 indicate that the node has at least one child". I'm working on trying to fix that part of the paper before I worry about the rest. I agree with you that the problem must be with the d2 values. I'll also try various propagation ideas to try and fix it.

MarkW
Member
posted February 10, 2000 01:57 PM            
Just a quick update. I played around with some of the ideas proposed by Klaus and found there is a way to fix the "Why do we have children, if 0.5 <= f < 1 ?" problem. What I did was to consider only local d2 values during the pre-process propagation step. Then, during the step where you calculate the f and b values, I pick my d2 the way we used to in the pre-process (the max of the local d2 and K times the d2 of the N, E, S, W vertices).
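In case it helps anyone follow along, a sketch of that runtime pick might look like the following; K and the four edge values are stand-ins for your own data:

```cpp
#include <algorithm>

// Hypothetical sketch of the runtime d2 selection described above: the
// pre-process stores only each node's *local* d2, and the K-scaled
// neighbor (N, E, S, W) terms are folded in when f and b are evaluated
// each frame instead of being baked into the stored values.
float runtimeD2(float localD2,
                float north, float east, float south, float west, float K)
{
    float edgeMax = std::max(std::max(north, east), std::max(south, west));
    return std::max(localD2, K * edgeMax);
}
```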

I'll do some more investigation to make sure this was not just random luck. Hope it works out!

Klaus Hartmann
Member
posted February 10, 2000 03:49 PM            
MarkW,

I really hope you have success. The more I look at the paper the more confused I get. I guess I've been staring at it for too long.

Sometimes I wish that SIGGRAPH papers were allowed to be longer than 8 pages. Peter Lindstrom's 10-page paper was a real exception.

Anyway... I tried to contact one of the authors again, and invited him to join the discussion. I also sent him a link to some PDFs that contain the whole discussion (hope you don't mind). Well, let's hope...

Klaus Hartmann
Member
posted February 10, 2000 09:17 PM            
MarkW,

I was thinking again... In one of your mails you wrote:

"I think I understand now why the author of the paper insisted that nodes with 0.5< f < 1 should not have any children. Those f values would correspond to a blend value of 0 < b < 1. The reason this is important is that it gives you a way to test if a node is really subdivided (b==1). Without this condition, you have no way of knowing if nodes with 0 < b < 1 really have children."

This is either not correct or I misunderstood your point. You know that a node is subdivided if b > 0 (f < 1), because b <= 0 corresponds to f >= 1.

But there's still a reason why it is important that nodes with 0.5 <= f < 1 are not further refined. Let's talk about "further refined" first. In the paper it reads: "A close examination reveals that, if f falls into the range [0.5,1), the quadtree is not further refined, and a single fan is generated for the complete node". If I understand this correctly, it means that the node has exactly 4 children, and they are all leaf nodes. Thus you can draw their parent node with a single fan.

Now for the reason why this is important. My geomorphing does exactly what the paper says, but popping still occurs if a new child is born. The reason for the popping is that the parent node of the new child has not yet finished its morphing process, because its blending factor was still in the range (0, 1]. This must never happen if you don't want to morph corner vertices. Have a look at the paper... they don't morph corner vertices. Hence, you need to finish morphing a node before you subdivide it.

But the old question remains... how can we ensure that 0.5 <= f < 1 nodes don't have children? This must have something to do with the constant K and the d2-values, and we also need to ensure that adjacent nodes are not subdivided too early while the current node is still morphing.
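For reference, the f-to-b mapping these ranges imply (as I read the paper, so treat this as my interpretation rather than gospel) is b = 2(1 - f), clamped to [0, 1]:

```cpp
#include <algorithm>

// b = 2(1 - f), clamped to [0, 1]. Then f >= 1 gives b = 0 (no
// subdivision), f = 0.5 gives b = 1 (fully subdivided), and
// 0.5 < f < 1 gives the morph range 0 < b < 1 -- the range in which
// the node must not have children of its own.
float blendFromF(float f)
{
    return std::min(1.0f, std::max(0.0f, 2.0f * (1.0f - f)));
}
```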

MarkBatten
Member
posted February 11, 2000 10:30 AM            
Sorry to be away for a while (Darn job interferes from time to time ). But this is a fascinating discussion. Klaus, I think you're correct that d2's can't be propagated recursively. That's a goof on my part. I guess I have to agree with MarkW that you can get this algorithm wrong and still have it look right, but that's a funky result, isn't it?

MarkW, I don't check the value already stored, just the local d2 and the neighbors. But since the values that could already be stored have already been propagated to every possible neighbor, I don't think you need to or are supposed to.

Klaus, I'll be the first to agree that the paper is not clear, and the authors wouldn't help me much either. But my best guess is that when talking about propagation, they definitely mean K times the values at higher resolution (i.e., toward the leaf nodes). I think that because the d2-propagation definitely moves up toward the root, not down to the leaves, and theoretically it doesn't make sense to multiply the effects of K down toward the leaves. So I think starting at the height-field data and the 3x3 blocks, and working up toward the root, is the only conceivable way to do this. When geomorphing, I think "higher level" means "closer to the leaf nodes."

Re the 0.5<=f<1 thing, I just don't know.

Anyway, I promised to reveal the family jewels, at least as they relate to walking the completed tree and rendering. So here it is. In light of what's gone before, I have less confidence that this is the way it was intended, but it works. RenderBVertex is a call that multiplies by b before issuing a glVertex, and RenderVertex omits the multiply (for corner verts, which aren't morphed). At the moment I can't remember what's up with that last, largely empty switch.

void rLOD::RenderTree(w16 x, w16 y, w16 dim)
{
    int d2 = dim >> 1, d4 = dim >> 2;
    float b;
    int xy = y * map_dim + x, yd4 = d4 * map_dim, ydd = dim * map_dim;
    int firstVert = 0;

    bool clipRight  = (map_dim - x) < dim;
    bool clipLeft   = x < dim;
    bool clipTop    = (map_dim - y) < dim;
    bool clipBottom = y < dim;

    bool didLast; // so we don't repeat verts

    b = blends[xy];
    if (blends[xy + yd4 + d4] != 0) // ne needs subdivision
    {
        if (inFan)
            EndFan();
        RenderTree(x + d4, y + d4, d2);
    }
    else
    {
        StartFan();
        RenderBVertex(x, y, 0, 0, b);
        if (clipRight || blends[xy + dim] != 0)
        {
            RenderBVertex(x + d2, y, 0, d2, min(b, blends[xy + dim]));
            firstVert = 1;
        }
        else
            firstVert = 2;
        RenderVertex(x + d2, y + d2);
        if (clipTop || blends[xy + ydd])
        {
            RenderBVertex(x, y + d2, d2, 0, min(b, blends[xy + ydd]));
            didLast = true;
        }
        else
            didLast = false;
        inFan = true;
    }

    if (blends[xy + yd4 - d4] != 0) // nw
    {
        if (inFan)
            EndFan();
        RenderTree(x - d4, y + d4, d2);
        inFan = false;
    }
    else
    {
        if (!inFan)
        {
            StartFan();
            RenderBVertex(x, y, 0, 0, b);
        }
        if ((clipTop || blends[xy + ydd] != 0) && !didLast)
        {
            RenderBVertex(x, y + d2, d2, 0, min(b, blends[xy + ydd]));
            if (!firstVert) firstVert = 3;
        }
        else if (!firstVert)
            firstVert = 4;
        RenderVertex(x - d2, y + d2);
        if (clipLeft || blends[xy - dim])
        {
            RenderBVertex(x - d2, y, 0, d2, min(b, blends[xy - dim]));
            didLast = true;
        }
        else
            didLast = false;
        inFan = true;
    }

    if (blends[xy - yd4 - d4] != 0) // sw
    {
        if (inFan)
            EndFan();
        RenderTree(x - d4, y - d4, d2);
        inFan = false;
    }
    else
    {
        if (!inFan)
        {
            StartFan();
            RenderBVertex(x, y, 0, 0, b);
        }
        if ((clipLeft || blends[xy - dim] != 0) && !didLast)
        {
            RenderBVertex(x - d2, y, 0, d2, min(b, blends[xy - dim]));
            if (!firstVert) firstVert = 5;
        }
        else if (!firstVert)
            firstVert = 6;
        RenderVertex(x - d2, y - d2);
        if (clipBottom || blends[xy - ydd])
        {
            didLast = true;
            RenderBVertex(x, y - d2, d2, 0, min(b, blends[xy - ydd]));
        }
        else
            didLast = false;
        inFan = true;
    }

    if (blends[xy - yd4 + d4] != 0) // se
    {
        if (inFan)
            EndFan();
        RenderTree(x + d4, y - d4, d2);
        inFan = false;
    }
    else
    {
        if (!inFan)
        {
            StartFan();
            RenderBVertex(x, y, 0, 0, b);
        }
        if ((clipBottom || blends[xy - ydd] != 0) && !didLast)
        {
            RenderBVertex(x, y - d2, d2, 0, min(b, blends[xy - ydd]));
            if (!firstVert) firstVert = 7;
        }
        else if (!firstVert)
            firstVert = 8;
        RenderVertex(x + d2, y - d2);
        if (clipRight || blends[xy + dim])
            RenderBVertex(x + d2, y, 0, d2, min(b, blends[xy + dim]));
        inFan = true;
    }

    // now close up the fan
    if (inFan)
    {
        switch (firstVert)
        {
        case 0:
        case 1:
            RenderBVertex(x + d2, y, 0, d2, min(b, blends[xy + dim]));
            break;
        case 2:
            RenderVertex(x + d2, y + d2);
            break;
        case 3:
        case 4:
        case 5:
        case 6:
        case 7:
        case 8:
            break;
        }
        EndFan();
    }
}

MarkW
Member
posted February 11, 2000 11:25 AM            
Upon further testing, I found that the method I came up with still does not fix the "0.5<=f<1" issue for all cases. So I decided to go over the paper from the start (again!) and try to figure out why the author thinks this should be true. I think I found the reason (and possibly why it will never work for all cases). Here we go:

f = l / (d * C * max(c * d2, 1))

Now let's say f > 0.5 for the parent node.

Since any child node will have edge length d/2, that automatically doubles f (and takes it over 1). The d2 values don't really play a role here, since you know the child must have lower d2 values due to the propagation. So d2 can only serve to make the child's f even larger. The only flaw in all this is that the paper assumes l stays constant. But this is not true in general, since child nodes can be closer to the eye than their parent. So to me, it appears this algorithm will only work over long distances, where the two l values become very close.
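To put numbers on that: with the paper's test, halving d while holding l and the max(...) term fixed exactly doubles f. The constants below are arbitrary sample values, not anything from the paper:

```cpp
#include <algorithm>

// The subdivision criterion from the paper: f = l / (d * C * max(c*d2, 1)).
// With l held constant, a child block (edge length d/2) gets exactly
// twice the parent's f -- which is the assumption that breaks when the
// child is actually closer to the eye than its parent.
float subdivF(float l, float d, float C, float c, float d2)
{
    return l / (d * C * std::max(c * d2, 1.0f));
}
```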

Klaus Hartmann
Member
posted February 11, 2000 11:30 AM            
MarkBatten -- Thanks a lot for the code! I'll take a very close look at it and let you know about possible optimizations as soon as possible. But first let me finish my demo for the public, and then I can tell you more.

Guys -- Do any of you understand the following part of the paper? From the paragraph before Eq. (4) to "... Condition (4) is not automatically fulfilled." I'm talking about the formulas. If I understood those formulas, then I'm sure I would understand the whole propagation.

MarkBatten
Member
posted February 11, 2000 04:54 PM            
Well, I understand it, but it didn't help me with anything practical. For reasons explained earlier in the paper, you have to maintain a maximum edge-length ratio of 2:1 in adjacent blocks. So if you've got an f that indicates subdivision (f2), then neighbor blocks of twice the edge length have to have f's (f1) less than f2, to be sure they'll split too. Eq. 4 substitutes terms from Eq. 3 to express the relationship f1 < f2 in terms of the ratio between edge length and the distance to the eye.

The text after Eq. 4 says that within the white rectangle in Fig. 6, you're guaranteed to get the required subdivision, because f will always be less than 1. But when the eye is more distant, the required relationship will only exist if l1/2*l2 is less than K, and then he derives the value of K. The bottom line is that K gives you the maximum safe ratio between the d2's of the two blocks in question. To be sure that the required relationship gets "baked in" to the precalculations, you multiply by K as you build up, instead of just taking the maximum of the propagated values.

This is why I said before that it didn't make sense to start at the root and work down. (Also because the paper then says, "Starting with the *smallest* existing block, we calculate ... and propagate them *up* the tree.") I think we're all three doing this part properly (I include myself in that group now that Klaus has corrected my recursive d2 calculation). You calculate d2's at the lowest level for the whole tree, propagate them as shown in Figure 7, then move to one coarser level and do it again, etc.
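A sketch of that bake-in step, as I understand it (the names and the edge-value list are stand-ins for your own structures, not the paper's code):

```cpp
#include <algorithm>
#include <vector>

// Pre-process propagation with K baked in: a block's stored d2 is the
// max of its local value and K times the values the finer level already
// propagated to its edge vertices. A sketch of one block's update only;
// it is run level by level, finest blocks first.
float propagatedD2(float localD2, const std::vector<float>& edgeD2, float K)
{
    float m = localD2;
    for (float e : edgeD2)
        m = std::max(m, K * e);
    return m;
}
```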

Klaus Hartmann
Member
posted February 12, 2000 08:13 AM            
Mark Batten,

Thanks for the explanation. It wasn't quite what I was looking for, but it still helped. I was looking more for an explanation of how exactly those formulas are derived, and why they are the way they are... a proof of those formulas. But your explanation still helped a lot. Thanks.

As for "we are all three doing the propagation properly": I think you are right. When I first implemented the propagation I didn't even think about it; I implemented it intuitively. And so did others, like C. J. Cookson, Mike Cocker, Mark Batten, and Mark Wilczynski. Mike Cocker, however, made the same mistake as Mark Batten did (recursion).
Anyway, we all implemented this part of the algorithm independently, and we all chose the same approach. Now there are two options:

(a) we did it correctly

(b) we all implemented Lindstrom's algorithm before and are blinded by Lindstrom's "error propagation" (which was: take the maximum of the children's delta values -- this was possible with recursion, btw).

I myself believe in (a) now.

Some time ago, MarkW asked if it would help to also include K times the local d2 value. The answer is "no". K exists to ensure that neighboring blocks don't differ by more than one level w.r.t. the current block. Children of the current block are not neighbors; they are simply a possible replacement for the current block, and thus the answer is "no". (This is wrong in C. J. Cookson's code. I mention it here in case he's reading along.)

MarkW
Member
posted February 12, 2000 01:45 PM            
Klaus,

Those derivations for K, etc. are not that hard to understand. I could go over the details if you really want, but I don't think it's necessary at this point. I'm sure we're all doing the propagation part correctly now. I had my doubts before because I could never get the "f < 0.5" issue to work. But that problem really has nothing to do with the d2 values or the propagation. It's based on a bad assumption by the author (see my earlier message) and has to be addressed another way. I'm going to try to come up with an efficient run-time method to enforce this condition for any viewpoint location.

MikeC
Member
posted February 12, 2000 06:09 PM            
Hi guys... I have been trying to read along a little. It seems that we are all having issues with the algorithm presented in the paper. I actually never attempted the geomorphing, since I was still having issues with occasional cracking. I read the statement from Klaus about not being able to do the propagation of d2's recursively. This got me looking into that section of my code, and I think I may have found the source of the cracks and fixed it. I will probably look into geomorphing now, since my terrain pops horribly. I was actually considering reworking the entire thing now, though.

I am curious if any of you are planning on attending GDC this year? There is a talk on Friday, 12:00-1:00 pm, on the algorithm in this paper. There are 3 people giving the presentation, so maybe one of them will have some insight. One of the speakers (Louis Castle) is from Westwood; the other two I have no idea about. Well, if you guys go, maybe I'll see you there.

Good luck on your implementations.

MarkW
Member
posted February 12, 2000 10:30 PM            
Mike,

I'm going to GDC and will attend that lecture. Mail me in private at mwilczyn@cyberwar.com and maybe we can meet up somewhere to chat. Louis Castle and Jonathan Lanier both work for Westwood Studios. I'm not sure about the 3rd guy. I'm talking with Louis right now about the lecture so maybe I can find out some answers to our problems before we go.

MarkBatten
Member
posted February 14, 2000 09:03 AM            
I can't get to GDC, so any insights you pick up -- or source code, which they promise to provide -- would be greatly appreciated, either via post here or by email (I'm most reliably at battenmw@bingham.com).

Thanks.

Chris C
Member
posted February 14, 2000 03:46 PM         
I agree with MarkW that the way forward with this algorithm is to implement some kind of runtime method for maintaining correct level-of-detail differences between quadtree nodes. I did come up with one a couple of weeks ago, but sadly only on paper! It offered a solution that could cope with huge terrains without cracks between tiles, as well as runtime terrain deformation, which would make it more suitable for use in games.

I'd love to work more on this, but studying takes up far too much time! Anyhow, this might give you all some food for thought.

Nice to hear about the GDC talk, too - it'll be good to hear how the pros have interpreted/modified this algorithm.

TheGecko
New Member
posted February 20, 2000 09:44 PM            
You know... after reading this entire post... I feel extremely stupid.
I did not have a clue what you people were talking about. You guys sound like pros. This is prolly because I'm a newbie to this whole terrain programming stuff. Oh well, I'll get there some day.

Kaeto
Member
posted March 14, 2000 12:53 PM            
Hehe... I feel the same way, but don't worry. You'll figure it out eventually. It just takes forever, like everything else.
