posted July 11, 2000 06:10 AM
I am just about to implement geomorphing in my own project, and I am debating between blending based on distance or time. Time is what Hugues Hoppe used in his implementation of progressive LOD for terrain. He only morphs splits, not merges, but it shouldn't be very different.
Then again, Treadmarks uses distance-based blending, and it works fine. I will probably try both, but I am betting that time-based will work best with my system, which is incremental (frame-coherent).
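For what it's worth, here is a minimal sketch of what I mean by distance-based blending. The morphStart/morphEnd band and the function names are my own invention for illustration, not anything taken from Treadmarks:

```cpp
// Rough sketch of distance-based blending. t runs from 0 (render the
// coarse position) to 1 (render the true fine position) as the camera
// closes in. morphStart/morphEnd are per-level tuning constants I made
// up; a real engine's numbers will differ.
float MorphFactor(float camDist, float morphStart, float morphEnd)
{
    if (camDist >= morphStart) return 0.0f;  // far away: fully coarse
    if (camDist <= morphEnd)   return 1.0f;  // close up: fully fine
    return (morphStart - camDist) / (morphStart - morphEnd);
}

// Each morphing vertex lerps between its coarse and fine heights.
float MorphedHeight(float coarseY, float fineY, float t)
{
    return coarseY + t * (fineY - coarseY);
}
```

The nice property is that t is a pure function of camera distance, so there is no per-vertex state to track; the downside is what I describe below about touching everything every frame.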
I use quadtrees with regular grids, the idea being that managing and displaying 1,000 nodes with a few hundred triangles per node under hardware acceleration (GTS, etc.) will be just as fast as or faster than managing and displaying 3,000 nodes with one triangle per node (bintrees), and will look much better.
I like the triangle-fan quadtree approach because it can pack a few triangles into each quadtree node, but you won't get anywhere near peak hardware performance unless you use larger batches of triangles.
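To be concrete, a node in my scheme looks something like this. The names and the 17x17 patch size are arbitrary choices for illustration:

```cpp
const int PATCH_VERTS = 17;  // 16x16 quads = 512 triangles per node (my pick)

struct QuadNode
{
    float     centerX, centerZ;  // node center in world space
    float     size;              // edge length in world units
    int       lodLevel;          // depth in the quadtree
    float     heights[PATCH_VERTS * PATCH_VERTS]; // regular height grid
    QuadNode* child[4];          // null when the node is a leaf
};
```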
Geomorphing poses a problem in that it requires modifying the vertex positions every frame, and thus some of the speed advantage of the hardware accelerator will be lost.
With morphing based on distance, on any given frame *all* of your vertices are morphing, and thus you probably lose most of your hardware-acceleration advantage: you still have to walk through so many vertices in memory that you end up system-bus limited (as opposed to AGP or GPU limited).
With morphing based on time, you can explicitly limit what percentage of the total vertices are morphing each frame. (Hoppe didn't limit it, but reports that about 30% were morphing.)
Another idea is to put a cap on the maximum per-frame morph movement, explicitly limiting popping at the expense of mesh accuracy. This would require a subtle change to the priority equations.
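Something like the following is what I have in mind for the time-based update, with both knobs from the last two paragraphs: the per-frame vertex budget and the movement cap. The vertex layout and all the parameter names are simplifications I'm inventing here:

```cpp
#include <cmath>

struct MorphVert
{
    float y;         // currently rendered height
    float targetY;   // fine-mesh height it is morphing toward
    bool  morphing;
};

// Advance at most 'budget' morphing vertices per frame (the explicit
// percentage limit), moving each a fixed fraction of its remaining
// delta but never more than maxStep world units (the popping cap).
void UpdateMorphs(MorphVert* verts, int count,
                  int budget, float fraction, float maxStep)
{
    int touched = 0;
    for (int i = 0; i < count && touched < budget; ++i)
    {
        if (!verts[i].morphing)
            continue;
        float move = (verts[i].targetY - verts[i].y) * fraction;
        if (std::fabs(move) > maxStep)
            move = (move > 0.0f) ? maxStep : -maxStep;
        verts[i].y += move;
        // Snap and stop once the vertex is close enough to its target.
        if (std::fabs(verts[i].targetY - verts[i].y) < 0.001f)
        {
            verts[i].y = verts[i].targetY;
            verts[i].morphing = false;
        }
        ++touched;
    }
}
```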
After about a week of mucking around with my own home-brew analytical edge-crack eliminator, an algorithm meant to eliminate the cracks between meshes of arbitrary LOD, and not getting it through all its strange special cases (dammit! it's tough when the vertices are all procedurally generated), I decided to just use the simplification others have used and limit myself to a separation of one LOD level between neighbors. That fixed the cracks...
but it ****ed up my incremental LOD. I implemented the restriction by setting a node's split priority to 0.0 if it fails the neighbor criterion (all neighbors at the same LOD), and its merge priority to a huge number.
What seems to happen is that the system wants to split a certain node very badly but can't, and can't split the node needed to split it, and can't split the node needed to split that one, and so on, until it gets to some node way out there that it doesn't really want to split at all. So it does apparently nothing, or makes very poor split decisions.
I think the solution is to chain the priority computations through the neighbor dependencies, so that if node I is dependent on node J, then J's priority is bumped up to I's, and so on.
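In code, the chaining would look roughly like this (how you actually track "J must split before I can" is up to your data structures; the pointer array here is just for illustration):

```cpp
struct Node
{
    float splitPriority;
    Node* dependsOn[4];  // neighbors that must split before this one can
};

// If node 'n' blocks a split wanted at priority 'pri', make sure 'n'
// (and anything it in turn depends on) is at least that urgent.
void PropagatePriority(Node* n, float pri)
{
    if (n == 0 || n->splitPriority >= pri)
        return;           // already urgent enough; the chain stops here
    n->splitPriority = pri;
    for (int i = 0; i < 4; ++i)
        PropagatePriority(n->dependsOn[i], pri);
}
```

The early-out when a node is already at or above the incoming priority keeps the recursion from walking the whole map every frame.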
This is definitely less efficient and more of a headache in one sense, so I wonder whether fixing my original algorithm wouldn't be more worthwhile.
Anyone else have this problem?