Author Topic:   GeoMorphing
posted June 14, 2000 07:04 AM            
The past few days I have been thinking about how to implement GeoMorphing. As Bryan notes in his article, the way to do it would be with the following formula:

MorphedZ = (fMorph * actualZ) + ((1-fMorph) * interpolatedZ);

However, since the TriTreeNodes are erased and relinked every frame, it would be impossible to store any variables inside the TriTreeNode structure as Bryan suggests. Should one then change the ROAM implementation to allow storage of static pointers (or vertex data)? For example, have a TOTAL_TRIS constant and a structure for each tri? Or does anyone have an entirely different solution to this problem?
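For reference, the formula above as a standalone function might look like this (a minimal sketch; the function and parameter names are mine, not from Bryan's article):

```cpp
#include <cassert>

// Blend between the interpolated (parent-edge midpoint) height and the
// true height. fMorph runs from 0 (fully interpolated) to 1 (full detail).
float MorphedZ(float fMorph, float actualZ, float interpolatedZ) {
    return (fMorph * actualZ) + ((1.0f - fMorph) * interpolatedZ);
}
```

At fMorph = 0 the vertex sits exactly on the parent edge (no pop when it appears); at fMorph = 1 it has reached its real height.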


Joris "Prophylon" Jansen


Bryan T
posted June 20, 2000 12:07 AM            

I think this was discussed before as well; give it a quick search and you might find something more than the following.

GeoMorphing can be implemented with a single additional field per node. A simple float could be used, or the morph value can be packed into a byte.

The morph value is based on DISTANCE TO EYE, not time. That is, as the eye approaches a vertex, its distance will be slightly closer each frame. Thus a simple equation that maps distance to morph value will suffice.

This is what Seumas did in Tread Marks. It works fine with split-only methods because the equation does not require storage between frames.
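A sketch of such a distance-to-morph mapping might look like the following. The names and the exact ramp (a linear ease-in over a band just inside the split distance) are illustrative assumptions, not taken from Tread Marks:

```cpp
#include <algorithm>
#include <cassert>

// Map eye distance to a morph value in [0,1]. splitDist is the distance at
// which the vertex comes into existence; morphRange is the band over which
// it eases from the interpolated height to its true height.
float MorphFromDistance(float eyeDist, float splitDist, float morphRange) {
    float f = (splitDist - eyeDist) / morphRange;
    return std::clamp(f, 0.0f, 1.0f);
}
```

Because the morph value is a pure function of the current eye distance, nothing needs to survive the per-frame rebuild of the TriTreeNodes — which is exactly why this fits split-only ROAM.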



posted June 20, 2000 12:53 AM            
Sorry for not searching first =/ I will remind myself to search before I ask.

Anyway, thanks for explaining! As you said, I did not understand that it was dependent on distance, wheeheehe. That makes it a lot easier.


New Member
posted July 11, 2000 06:10 AM            
I am just about to implement geomorphing in my own project, and I am debating between blending based on distance or time.

Time is what Hugues Hoppe used in his implementation of progressive LOD for terrain. He only morphs splits, not merges, but it shouldn't be very different.

Then again, Tread Marks uses distance-based blending, and it works fine. I will probably try both, but I am betting that time-based will work best with my system, which is incremental (frame-coherent).

I use quadtrees with regular grids, the idea being that managing and displaying 1,000 nodes with a few hundred triangles per node under hardware acceleration (GTS, etc.) will be just as fast or faster than managing and displaying 3,000 nodes with one triangle per node (bintrees) - and will look much better.

I like the triangle fan quadtree approach because it can pack a few triangles in each quadtree node, but you won't be getting anything near your peak hw performance if you don't use larger batches of triangles.

Geomorphing poses a problem in that it requires modifying the vertex positions every frame, and thus some of the speed advantage of the hardware accelerator will be lost.

With morphing based on distance, on any given frame *all* of your vertices may be morphing, and thus you probably lose most of your hw accel advantage: you still have to walk through so many vertices in memory that you become system-bus limited (as opposed to AGP or GPU limited).

With morphing based on time, you can explicitly limit what % of the total vertices morph each frame. (Hoppe didn't limit it, but reports that about 30% were morphing.)

Another idea is to put a cap on the maximum per frame morph movement to explicitly limit popping at the expense of mesh accuracy. This would require a subtle change to priority equations.
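A time-based morph with that kind of cap could be sketched roughly as below. The struct layout, names, and the capping scheme (shrinking the blend step so the vertex never moves more than a fixed height per frame) are my own illustrative assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Per-vertex state for time-based morphing (would need to persist
// between frames, unlike the distance-based scheme).
struct MorphingVertex {
    float fMorph;         // current blend factor, 0..1
    float actualZ;        // fully detailed height
    float interpolatedZ;  // parent-edge midpoint height
};

// Advance the morph by morphStep per frame, but cap the resulting height
// movement at maxZMove to bound visible popping (at the cost of accuracy).
// Returns this frame's blended height.
float StepMorph(MorphingVertex& v, float morphStep, float maxZMove) {
    float range = std::fabs(v.actualZ - v.interpolatedZ);
    float step = morphStep;
    if (range > 0.0f)
        step = std::min(step, maxZMove / range);  // cap per-frame movement
    v.fMorph = std::min(v.fMorph + step, 1.0f);
    return v.fMorph * v.actualZ + (1.0f - v.fMorph) * v.interpolatedZ;
}
```

Budgeting which vertices get stepped each frame (the % limit mentioned above) would then just be a matter of how many MorphingVertex entries you visit per frame.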

After about a week mucking around with my own home-brew analytical edge-crack eliminator - an algorithm that removes the cracks between meshes of arbitrary LOD - and not getting it through all its strange special cases (dammit! - it's tough when the vertices are all procedurally generated), I decided to just use the simplification that others have used and allow only one level of LOD separation between neighbors. That fixed the cracks . . .

but it ****ed up my incremental LOD. I implemented the restriction by setting the split priority to 0.0 if a node fails the neighbor criterion (all neighbors at the same LOD), and the merge priority to a huge number.

What seems to happen is that it wants to split a certain node very badly but can't, can't split the node needed to split it, can't split the node needed to split that one, and so on until it reaches some node way out there that it doesn't really want to split at all. So it does apparently nothing, or makes very poor split decisions.

I think the solution is to chain the priority computations through neighbor dependencies, so that if node I is dependent on node J, then J's priority is bumped up to I's, and so on.
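That chaining might be sketched like this - a toy version with a single forced-split link per node (a real quadtree node would have several neighbors; the Node layout and names are my assumptions, not from the post):

```cpp
#include <algorithm>
#include <cassert>

// If a node cannot split until a neighbor splits first, that neighbor
// must be at least as urgent as the node depending on it.
struct Node {
    float splitPriority;
    Node* mustSplitFirst;  // neighbor that must split before this node can
};

// Walk the dependency chain, propagating the priority outward.
void ChainPriority(Node* n) {
    while (n && n->mustSplitFirst) {
        Node* dep = n->mustSplitFirst;
        dep->splitPriority = std::max(dep->splitPriority, n->splitPriority);
        n = dep;
    }
}
```

With several dependencies per node this becomes a small graph traversal rather than a linear walk, which is where the extra cost and headache come from.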

This is definitely less efficient and more of a headache in one sense, so I wonder whether fixing my original algorithm wouldn't be more worthwhile.

Anyone else have this problem?