High-end rendering is becoming more and more popular among visualization scientists, thanks to its ability to enhance a visualization with depth cues like shadows, smooth shading, and reflection and refraction effects. Where I work we use Autodesk 3dsMax and Mental Ray to render our animations (as evidenced in previous posts), but an article on BlenderNation covers how the folks at Oak Ridge use Blender.

So a very typical render would be to generate 60 seconds of animation at 24 frames per second, for 1440 frames. I'd take 128 nodes of roughly 16 cores each (2048 cores) and get back 3 x 128 frames every hour, so in less than 4 hours I'd have 1 minute of HD animation. So it is possible to generate 60-90 second clips in a night without requiring a lot of resources (compared to what we have, anyway). However, from the number of nodes we have you can see that we could render many minutes of video in less than 30 minutes if we needed to.

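The arithmetic in that passage checks out, and it's easy to play with. Here's the same back-of-the-envelope math as a quick Python sketch; the per-node throughput of 3 frames per hour is lifted straight from the numbers quoted above, and everything else follows from it:

```python
# Back-of-the-envelope render-farm math from the numbers quoted above.
# The per-node throughput is the one stated there; tweak to taste.

fps, seconds = 24, 60
frames = fps * seconds                      # 1440 frames total

nodes = 128
frames_per_node_hour = 3                    # "3 x 128 frames every hour"
throughput = nodes * frames_per_node_hour   # 384 frames per hour

hours = frames / throughput
print(f"{frames} frames / {throughput} frames-per-hour = {hours:.2f} hours")
# -> 1440 frames / 384 frames-per-hour = 3.75 hours, i.e. "less than 4 hours"
```
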
Our process is very similar, although we have a dedicated Windows cluster for this purpose (3dsMax won't run on Linux). While this is all well and good, what I'd really like to see is an MPI-aware version of a tool like this. Mental Ray has a distributed rendering feature that scales to a few nodes, and even POV-Ray has an unofficial MPI build. If only Blender would incorporate a true MPI-aware render system, one that splits a single frame's scene data across ranks rather than just farming out whole frames, it would be a huge boon for some of the larger datasets we find ourselves dealing with.

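To make that distinction concrete, here is a minimal sketch of what plain frame farming over MPI looks like, assuming mpi4py and Blender's command-line renderer; the scene file name and frame range are placeholders, not anything from the article:

```python
# A minimal sketch of MPI frame farming, NOT the true MPI-aware renderer
# I'm wishing for: each rank just renders its own slice of frames by
# shelling out to Blender's command-line interface.
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

FIRST, LAST = 1, 1440          # frame range (hypothetical)
BLEND = "scene.blend"          # scene file (hypothetical)

# Round-robin assignment: rank r renders frames r+1, r+1+size, ...
for frame in range(FIRST + rank, LAST + 1, size):
    subprocess.run(
        ["blender", "-b", BLEND, "-f", str(frame)],  # -b: render headless
        check=True,
    )

comm.Barrier()                 # wait until every rank has finished
if rank == 0:
    print("All frames rendered.")
```

You would launch that with something like `mpiexec -n 128 python render_frames.py`, and it works fine for animations. The catch is that every rank still has to load the entire scene into memory, which is exactly why this approach falls over on really large datasets; a genuinely MPI-aware renderer could distribute the scene itself across nodes.
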
via Oak Ridge National Laboratory: Blender on a Supercomputer! | BlenderNation.