BettinaTizzy sent in a link to a video demonstration of a system called ‘Unlimited Detail’, which claims to offer real-time, interactive rendering of point cloud data.  The video walks through the typical problems of the common rendering approaches: polygonal geometry (low tessellation leads to blocky visuals), ray tracing (very, very slow), and voxels (they never really say what’s wrong with voxels, to be honest), and claims they have a new system they equate to a ‘3D search algorithm’.

Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a Word document and you went to the SEARCH tool and typed in a word like MONEY, the search tool quickly finds every place that word appears in the document. Google and Yahoo are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen. It doesn’t touch any unneeded points; all it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, and how big an object should be as it gets further back. But all of this is done by a new sort of method that we call MASS CONNECTED PROCESSING. Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.
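They never explain how that search actually works, but the ‘one point per pixel’ framing maps naturally onto a front-to-back descent of an octree that stops as soon as a node projects to roughly one pixel.  Here’s a minimal C++ sketch of that idea; to be clear, Unlimited Detail has published nothing, so the node layout, the bounding-sphere ray test, and the pixel-angle stopping rule below are all my guesses at the general technique, not their actual algorithm:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <memory>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One octree node: empty octants are null children, and every stored node
// keeps a pre-averaged color of the points beneath it, so a coarse node can
// stand in for all of them. (Speculative layout, not Unlimited Detail's.)
struct Node {
    Vec3 center;
    float halfSize;                              // half the cube's edge length
    uint32_t color;
    std::array<std::unique_ptr<Node>, 8> child;
};

// Very loose ray/cube test (ray vs. the cube's bounding sphere); 'dir' must
// be unit length. Good enough to show the shape of the traversal.
static bool rayHits(const Node& n, Vec3 origin, Vec3 dir) {
    Vec3 oc = sub(n.center, origin);
    float t = std::max(0.0f, dot(oc, dir));      // closest approach along ray
    Vec3 p = {origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
    Vec3 d = sub(n.center, p);
    float r = n.halfSize * 1.7320508f;           // sphere enclosing the cube
    return dot(d, d) <= r * r;
}

// Descend front-to-back and return the first node whose projected size drops
// below one pixel. 'pixelAngle' is roughly vertical FOV / vertical resolution.
static std::optional<uint32_t> search(const Node& n, Vec3 origin, Vec3 dir,
                                      float pixelAngle) {
    if (!rayHits(n, origin, dir)) return std::nullopt;

    Vec3 oc = sub(n.center, origin);
    float dist = std::max(1e-6f, std::sqrt(dot(oc, oc)));
    // The node subtends about 2*halfSize/dist radians; once that is no wider
    // than one pixel, this node *is* the point for this pixel.
    if (2.0f * n.halfSize / dist <= pixelAngle) return n.color;

    // Gather non-empty children and visit them nearest-first, so the first
    // hit is also the closest (front-to-back occlusion for free).
    std::array<const Node*, 8> order{};
    size_t count = 0;
    for (const auto& c : n.child)
        if (c) order[count++] = c.get();
    if (count == 0) return n.color;              // leaf still bigger than a pixel
    std::sort(order.begin(), order.begin() + count,
              [&](const Node* a, const Node* b) {
                  Vec3 da = sub(a->center, origin), db = sub(b->center, origin);
                  return dot(da, da) < dot(db, db);
              });
    for (size_t i = 0; i < count; ++i)
        if (auto hit = search(*order[i], origin, dir, pixelAngle)) return hit;
    return std::nullopt;
}
```

Cast one such search per pixel each frame and the work per frame is bounded by screen resolution, with the scene’s point count only showing up as tree depth, which grows logarithmically.  If this is anywhere near what they are doing, it would at least make ‘unlimited’ sound less crazy, even if the memory bill still comes due.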

Sounds very much like a ray-tracing algorithm to me.  I do take issue with their ‘unlimited detail’ claim, as they talk about visualizing billions of points simultaneously and interactively.  Nothing is unlimited, as eventually you will run out of memory: even at a modest 16 bytes per point (position plus color), a billion points is on the order of 16 GB before any of their compression kicks in.

With all of that said, however, the demo is impressive.  They claim that the cancellation of Larrabee will hurt their release, but that their algorithm is primarily software-based anyway, so it should be fine.  Watch the video below, and post your thoughts in the comments.  Hype, or a vision of the future?

Update: After a discussion with a colleague, I was reminded of a paper presented at SIGGRAPH 2000 on a tool called ‘WarpEngine’, which used dedicated ASICs to combine and warp pre-rendered images into a simulated-3D scene.  The source images could be of any detail level (even photographs), and with enough of them you could compose fully interactive 3D scenes.  This looks eerily familiar, and probably suffers from the same limitations:

  • No motion (everything is baked into the pre-rendered source images)
  • Massive input dataset (you need several images of the objects in the scene, rendered from multiple viewpoints), but a simple octree storage system makes it trivial to navigate

With modern hardware, this seems very possible to do directly on the CPU.  Read the “WarpEngine” paper here.
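For the curious, the heart of any WarpEngine-style renderer is the reprojection step: unproject each pixel of a reference depth image into 3D, then project it into the new camera.  The sketch below is my own pinhole-camera simplification in plain C++ (the actual paper uses McMillan-style warping on custom hardware, with far better surface reconstruction than this single-pixel splat), so treat the names and camera model as assumptions, not the paper’s formulation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// Pinhole camera: world-from-camera pose (row-major 3x3 rotation R plus
// translation t) and intrinsics in pixels. My own convention, not the paper's.
struct Camera {
    float fx, fy, cx, cy;
    float R[9];
    float t[3];
};

struct DepthImage {
    int w, h;
    std::vector<uint32_t> color;   // w*h packed pixels
    std::vector<float> depth;      // w*h depths along the camera z axis
};

// Forward-warp 'src' (seen from srcCam) into 'dst' (seen from dstCam).
// 'dst.color' and 'dst.depth' must already be sized to dst.w * dst.h.
void warp(const DepthImage& src, const Camera& srcCam,
          const Camera& dstCam, DepthImage& dst) {
    std::fill(dst.depth.begin(), dst.depth.end(),
              std::numeric_limits<float>::infinity());
    for (int v = 0; v < src.h; ++v) {
        for (int u = 0; u < src.w; ++u) {
            float z = src.depth[v * src.w + u];
            // Unproject pixel (u, v) to a point in the source camera frame.
            float x = (u - srcCam.cx) / srcCam.fx * z;
            float y = (v - srcCam.cy) / srcCam.fy * z;
            // Source camera frame -> world: p = R*xc + t.
            float wx = srcCam.R[0]*x + srcCam.R[1]*y + srcCam.R[2]*z + srcCam.t[0];
            float wy = srcCam.R[3]*x + srcCam.R[4]*y + srcCam.R[5]*z + srcCam.t[1];
            float wz = srcCam.R[6]*x + srcCam.R[7]*y + srcCam.R[8]*z + srcCam.t[2];
            // World -> destination camera frame (transposing R inverts it).
            float px = wx - dstCam.t[0], py = wy - dstCam.t[1], pz = wz - dstCam.t[2];
            float dx = dstCam.R[0]*px + dstCam.R[3]*py + dstCam.R[6]*pz;
            float dy = dstCam.R[1]*px + dstCam.R[4]*py + dstCam.R[7]*pz;
            float dz = dstCam.R[2]*px + dstCam.R[5]*py + dstCam.R[8]*pz;
            if (dz <= 0.0f) continue;                    // behind the new camera
            int du = (int)std::lround(dstCam.fx * dx / dz + dstCam.cx);
            int dv = (int)std::lround(dstCam.fy * dy / dz + dstCam.cy);
            if (du < 0 || du >= dst.w || dv < 0 || dv >= dst.h) continue;
            // Single-pixel splat with a z-test; the real system reconstructs
            // surfaces far more carefully than this.
            float& zbuf = dst.depth[dv * dst.w + du];
            if (dz < zbuf) {
                zbuf = dz;
                dst.color[dv * dst.w + du] = src.color[v * src.w + u];
            }
        }
    }
}
```

Run one such warp per reference image per frame and composite the results; the holes left where no source pixel lands are exactly the artifact the massive multi-viewpoint input dataset above is there to paper over.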