Adaptive Refinement
The next feature is almost the opposite of Manta. Where Manta is a great solution for running ParaView on massively parallel systems, it doesn’t do much for the researcher trying to load data on their personal workstation. Another new “in-development” option (which you can try yourself if you compile from CVS) is the Adaptive refinement feature. Currently only available in the RAW rectilinear data reader, it consists of both an in-core and an out-of-core tool that decompose a dataset into a multi-resolution representation and then dynamically load the required refinement level at run time. First you preprocess the data with the out-of-core tool; then, at run time, you can see visible “pops” as it transitions between levels and recomputes your filters on the level currently loaded. But the really neat stuff doesn’t end there.
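To make the decomposition idea concrete, here is a minimal sketch of what the out-of-core preprocessing step conceptually does, assuming a raw rectilinear volume of single-precision floats. This is not the actual ParaView tool; the file name, dimensions, and level count are placeholders.

    # Sketch: decompose a raw rectilinear volume into coarser levels by
    # striding, so a viewer can start from the level that fits in memory.
    import numpy as np

    def build_levels(path, dims, dtype=np.float32, num_levels=4):
        """Return a list of progressively coarser copies of the volume."""
        volume = np.fromfile(path, dtype=dtype).reshape(dims)
        levels = [volume]
        for _ in range(1, num_levels):
            # Halve each axis by simple striding; a real tool would
            # filter/average and write each level out of core.
            levels.append(levels[-1][::2, ::2, ::2].copy())
        return levels

    # levels[0] is full resolution; levels[-1] is what gets loaded first.
    # levels = build_levels("data.raw", dims=(42, 2400, 3600))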
They showed a very powerful video of the same dataset being loaded in four different ways. The first was simply loading the entire RAW dataset, which never finished or displayed anything during the entire demonstration. The second was streaming, which gave visual feedback almost immediately, but the blocks came in in random order. The third was streaming with optimization, which prioritized blocks so that the ones in the center of the screen loaded first; visual feedback was immediate, but it still took quite a while to fill the screen. The fourth and final way was the Adaptive Multiresolution with Optimization, which filled the entire screen almost instantly. Of course, it was a lower-resolution version of the mesh, but as he zoomed into the dataset and panned around, higher-resolution blocks were paged in and out of memory.
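The prioritized streaming he showed boils down to a simple ordering rule. A rough sketch of that idea (with hypothetical block and view-center data, not the actual ParaView code) might look like this:

    # Sketch: fetch blocks nearest the center of the view first.
    def prioritized_order(blocks, view_center):
        """Return blocks sorted by distance of their center to the view center."""
        def distance2(block):
            cx, cy = block["center"]          # projected screen-space center
            vx, vy = view_center
            return (cx - vx) ** 2 + (cy - vy) ** 2
        return sorted(blocks, key=distance2)

    # blocks = [{"id": 0, "center": (0.1, 0.9)}, {"id": 1, "center": (0.5, 0.5)}]
    # for block in prioritized_order(blocks, view_center=(0.5, 0.5)):
    #     load(block)   # hypothetical loader; central blocks arrive first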

That’s all fine and good, but I forgot to mention the dataset he was loading. If my eyes are correct, he was loading a 3600x2400x42 floating-point dataset that occupied (according to him) 1.4G on disk, on a MacBook Pro laptop. At the lowest resolution level, it consumed only 5.8M, which is why it loaded so quickly.
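A quick back-of-the-envelope check of those numbers, assuming 4-byte single-precision floats:

    full_bytes = 3600 * 2400 * 42 * 4      # ~1.45e9 bytes, i.e. roughly 1.4G
    coarse_bytes = 5.8e6                   # lowest-resolution level as quoted
    print(full_bytes / 1e9)                # ≈ 1.45
    print(full_bytes / coarse_bytes)       # ≈ 250x smaller at the coarsest level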
Also, it’s fully compatible with most of the ParaView filters. He demonstrated using it with isosurfaces, which would automatically recompute as new blocks were paged in and out. However, there are problems with some filters, most notably (as I asked in the talk) filters like streamlines. Streamlines are not computed on the highest resolution level, but rather on the level of resolution currently loaded. This means that as you move around and the data blocks come and go, the streamline will constantly be recomputed, leading to a lot of jumping. He acknowledged this as an issue, but did state that filters have the ability to adjust the level and areas of refinement, so the streamline filter could be modified to specifically request higher resolution levels in the necessary regions.
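As a purely hypothetical sketch of that suggested fix (none of these names come from the real ParaView API), a streamline filter driving the refinement might look something like this:

    # Sketch: ask the adaptive reader for higher resolution only in boxes
    # around the points a streamline actually passes through.
    def refine_along_streamline(reader, trace_points, level, radius):
        """Request 'level' resolution in regions around each trace point."""
        for x, y, z in trace_points:
            region = (x - radius, x + radius,
                      y - radius, y + radius,
                      z - radius, z + radius)
            reader.request_refinement(region, level)   # hypothetical call

    # Once the requested blocks are paged in, the streamline would be
    # recomputed on the refined data instead of jumping every frame.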
@Jon Woodring Ahh, thanks for the correction. I updated the article.
Thanks for the article on the tutorial!
I just wanted to correct something slightly: the data set was 3600x2400x42 floats, but otherwise the data sizes are correct. Thanks!