Topic: Supporting Large Topologies
Bryan T
Member
posted April 26, 2000 03:45 PM            
More topics for discussion...

How to support extremely large topologies? Not 1k x 1k, but 1000k x 1000k or larger. Should they be pre-made by artists? Generated by fractals? A combination? Can they be compressed/decompressed on the fly?

How would the different methods work with a Quadtree algorithm? BinTree?

--Bryan


Effin Goose
Member
posted April 27, 2000 01:56 AM            
My thoughts on this are that the terrain would almost have to be fractally generated to some extent, simply because it's so very big (I'm not sure about the time scale needed to design a 1000k x 1000k map, but I'm quite sure it's a large one). However, I also believe that some parts of the map would have to be pre-made by artists, so that you can add in things like canyons or towns or whatever.

My problem with the whole large-scale map is how to render it. My current implementation of the bintritree algorithm uses a pre-computed implicit binary tree to store the variances. With a really large map, a binary tree would take up far too much room, so the only other way would be to compute the variance for a patch of land on the fly, which I imagine wouldn't be the fastest way to do things.

Another problem I foresee (I work as a part-time seer down at the local markets) is how to seamlessly integrate the fractal parts of the map with the artist-designed parts. Anyone have any thoughts on these problems?
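
Going back to the variance-on-the-fly bit, this is roughly the sort of routine I mean (a C++ sketch only; Height(), the map size and the recursion cutoff are made-up stand-ins for however the samples are actually stored):

    #include <algorithm>   // std::max
    #include <cmath>       // std::fabs
    #include <cstdlib>     // std::abs

    // Hypothetical heightmap storage -- a flat array of MAP_SIZE x MAP_SIZE samples.
    const int MAP_SIZE = 1025;
    float gHeightMap[MAP_SIZE * MAP_SIZE];

    inline float Height(int x, int y) { return gHeightMap[y * MAP_SIZE + x]; }

    // Variance of one bintree triangle, computed recursively when needed rather
    // than read from a pre-computed implicit tree.
    float Variance(int leftX,  int leftY,    // left vertex of the hypotenuse
                   int rightX, int rightY,   // right vertex of the hypotenuse
                   int apexX,  int apexY)    // apex vertex
    {
        int centerX = (leftX + rightX) / 2;  // hypotenuse midpoint
        int centerY = (leftY + rightY) / 2;

        // How far the real midpoint height is from the interpolated height.
        float interpolated = 0.5f * (Height(leftX, leftY) + Height(rightX, rightY));
        float variance     = std::fabs(Height(centerX, centerY) - interpolated);

        // Recurse into the two children until the triangle can no longer split.
        if (std::abs(leftX - rightX) >= 2 || std::abs(leftY - rightY) >= 2)
        {
            variance = std::max(variance, Variance(apexX,  apexY,  leftX,  leftY,  centerX, centerY));
            variance = std::max(variance, Variance(rightX, rightY, apexX,  apexY,  centerX, centerY));
        }
        return variance;
    }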

Ryan

------------------
I don't like it, and I'm sorry I ever had anything to do with it
- Schrodinger


Bryan T
Member
posted April 27, 2000 09:26 PM            
How fortuitous(sp?) of you, Effin: http://www.gamasutra.com/features/20000427/martin_01.htm

Sounds like this article is what you were talking about. If anyone can explain how they got pic 3a from any mathematical mashing of pics 1 and 2, please enlighten me. I completely understand the math involved, but find no apparent relationship...

I could also foresee an algorithm that took a bitmap 'mask' and an artist-created heightmap. The mask would tell the algorithm which parts of the heightmap to leave alone (i.e. the artist-generated parts). The algorithm would then fractally generate the unmasked parts and smooth them into the overall scene.
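
Very roughly, the masking pass could look like the snippet below (just a sketch; every name is invented, and FractalHeight() stands in for whatever generator fills the unmasked areas):

    // Assumed to exist somewhere -- midpoint displacement, noise, whatever.
    float FractalHeight(int x, int y);

    // Blend an artist-made heightmap with fractally generated terrain through a
    // greyscale mask: 0 = keep the artist's data untouched, 255 = pure fractal.
    void BlendTerrain(const float *artistMap, const unsigned char *mask,
                      float *outMap, int size)
    {
        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                int   i = y * size + x;
                float t = mask[i] / 255.0f;   // 0..1 blend weight
                outMap[i] = artistMap[i] * (1.0f - t) + FractalHeight(x, y) * t;
            }
        }
    }

If the mask is greyscale with soft edges, the same blend does the smoothing into the overall scene more or less for free.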

I had an idea to use fractal image compression techniques to store terrain data, then decompress it to any level of detail. Unfortunately, current fractal compression techniques work on square patches of the image and leave horrendous, nearly-square cliffs in the output dataset. A lot of smoothing would be needed.

For display of large terrain sets, I would definitely cut the terrain into manageable tiles. This would automatically limit the depth of your trees (i.e. the size of the variance trees). Also, you wouldn't have to keep the whole thing in memory at one time. Now, how to keep the meshes from cracking between tiles, and how to add a new (untessellated) tile next to an already-tessellated one... hmmm.
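
As for the tiles themselves, the per-tile bookkeeping might not need to be much more than this (all names and sizes are invented; it's only meant to show why the tree depth gets bounded and why paging becomes possible):

    const int TILE_SIZE = 257;   // samples along one tile edge (arbitrary)

    // One record per tile of the big map.  Each tile knows its neighbours, so
    // seams can be stitched later, and its height data can be paged in and out
    // of memory independently of the rest of the terrain.
    struct TerrainTile
    {
        int          worldX, worldY;   // tile origin, in world samples
        float       *heights;          // TILE_SIZE * TILE_SIZE samples, or NULL if paged out
        TerrainTile *north, *south, *east, *west;   // NULL at the edge of the map
        int          maxTreeDepth;     // bounded by TILE_SIZE, not by the whole map
    };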

--Bryan


Effin Goose
Member
posted April 28, 2000 12:00 AM            
I read the article just before coming here ;^)

Limiting the depth of the trees is good, but you would still have to either generate the variance trees each time you came across a new tile, or read them in from disk. That would mean storing the variance tree for each patch on disk, which I imagine would increase the size of the terrain dataset by quite a bit. Though it would be interesting to see which is faster, generating or loading.

One easy thing with a frame-incoherent version of the ROAM algorithm is that adding in a new, untessellated tile is easy, as the triangles are re-tessellated every frame. It's simply a matter of editing the left and right neighbor pointers of any neighboring tiles. Any frame-coherent version, however, would have a not-so-good time dealing with it, I imagine :^)
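
Something like the routine below is what I mean by editing the neighbor pointers, assuming the usual layout of two base triangles per square tile (all names are invented, and this isn't lifted from any real implementation):

    // Minimal types for the sketch.
    struct BinTriangle
    {
        BinTriangle *leftChild,    *rightChild;
        BinTriangle *leftNeighbor, *rightNeighbor, *baseNeighbor;
    };

    struct Tile
    {
        BinTriangle baseLeft, baseRight;           // the tile's two root triangles
        Tile       *north, *south, *east, *west;   // neighbouring tiles, NULL if absent
    };

    // Hook a freshly added (untessellated) tile into the mesh.  Inside the tile
    // the two roots share a hypotenuse; across each seam a root's left/right
    // neighbor is the matching root in the adjacent tile, and that tile's
    // pointer is patched back at the same time.
    void LinkTile(Tile *t)
    {
        t->baseLeft.baseNeighbor  = &t->baseRight;
        t->baseRight.baseNeighbor = &t->baseLeft;

        if (t->west)  { t->baseLeft.leftNeighbor        = &t->west->baseRight;
                        t->west->baseRight.leftNeighbor = &t->baseLeft; }
        if (t->north) { t->baseLeft.rightNeighbor        = &t->north->baseRight;
                        t->north->baseRight.rightNeighbor = &t->baseLeft; }
        if (t->east)  { t->baseRight.leftNeighbor       = &t->east->baseLeft;
                        t->east->baseLeft.leftNeighbor  = &t->baseRight; }
        if (t->south) { t->baseRight.rightNeighbor       = &t->south->baseLeft;
                        t->south->baseLeft.rightNeighbor = &t->baseRight; }
    }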

Ryan

------------------
I don't like it, and I'm sorry I ever had anything to do with it
- Schrodinger


Aelith
New Member
posted April 28, 2000 07:14 AM            

To get any truly massive terrain databases, you have to use real-time procedural generation - you simply can't store a 100k x 100k area of terrain with centimeter resolution unless it is really, really compressed. Procedural generation is naturally the best conceivable compression.
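
To make that concrete: the "dataset" is really just a function you evaluate wherever and whenever you need a sample. Even a throwaway value-noise fBm like the one below (constants arbitrary, nothing here is from my actual system) gives you unbounded terrain with zero storage:

    #include <cmath>

    // Tiny hash-based value noise, one lattice cell per integer step.
    static float Hash(int x, int y)
    {
        unsigned int n = (unsigned int)(x + y * 57);
        n = (n << 13) ^ n;
        n = n * (n * n * 15731u + 789221u) + 1376312589u;
        return 1.0f - (float)(n & 0x7fffffffu) / 1073741824.0f;   // roughly -1..1
    }

    static float Noise(float x, float y)
    {
        int   ix = (int)floorf(x),  iy = (int)floorf(y);
        float fx = x - ix,          fy = y - iy;
        float a = Hash(ix, iy),     b = Hash(ix + 1, iy);
        float c = Hash(ix, iy + 1), d = Hash(ix + 1, iy + 1);
        float u = fx * fx * (3.0f - 2.0f * fx);   // smoothstep weights
        float v = fy * fy * (3.0f - 2.0f * fy);
        return (a + (b - a) * u) + ((c - a) + (a - b + d - c) * u) * v;
    }

    // Fractal (fBm) height: a few octaves of noise summed together.  The terrain
    // is never stored anywhere -- you call this for exactly the samples you need,
    // at whatever resolution you need them.
    float ProceduralHeight(float x, float y)
    {
        float height = 0.0f, amplitude = 1.0f, frequency = 1.0f / 256.0f;
        for (int octave = 0; octave < 8; octave++)
        {
            height    += Noise(x * frequency, y * frequency) * amplitude;
            amplitude *= 0.5f;   // gain per octave
            frequency *= 2.0f;   // lacunarity
        }
        return height;
    }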

People these days seem to lean towards bintrees, but I think bintrees should be thrown out the door. The future of polygon rendering rests on dedicated geometry processors that want nice, large vertex buffers of huge triangle strips or fans sitting in local video memory (although next-generation geometry hardware should be able to construct a regular grid on the fly from a height-field texture).

The point is that doing all this complicated LOD with bintrees, to get the theoretical minimum number of vertices that represents a given surface at a certain quality metric, is more or less irrelevant if it limits us to using 10% or less of the vertices the hardware could otherwise push.

So coarse-grained LOD, with a nice, fast regular grid within each quadtree cell, is the way to go. You can get a whole spectrum between fine LOD and coarse LOD simply by altering the size of that regular grid.
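
For example, picking the grid resolution for each quadtree cell can be as dumb as a distance test (the thresholds and resolutions below are made-up numbers, just to show the idea):

    #include <cmath>

    // Pick how many quads per side a leaf cell's regular grid should have, based
    // on its distance from the camera.  Each step halves the vertex density, so
    // the whole fine-to-coarse LOD spectrum is just this one number.
    int GridResolutionForCell(float cellCenterX, float cellCenterZ,
                              float cameraX,     float cameraZ)
    {
        float dx = cellCenterX - cameraX;
        float dz = cellCenterZ - cameraZ;
        float distance = sqrtf(dx * dx + dz * dz);

        if (distance <  256.0f) return 64;   // nearest cells: 64x64 quads
        if (distance < 1024.0f) return 32;
        if (distance < 4096.0f) return 16;
        return 8;                            // far-away cells: 8x8 quads
    }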

You have to use quadtrees because they are the natural subdivision for 2D surfaces (textures). I guess a k-d tree could be used as well, but that seems like an unnecessary complication.

Anyway, I have a frame-coherent, quadtree-based, real-time fractal terrain system prototype, and it's pretty nice: http://www.astralfx.com/users/ccs/screenshots.html

It still needs a lot of work, of course.

-Jake
