StreamingParaView
Note: Since ParaView 3.10, the streaming and adaptive streaming applications have been rewritten as Plugins and are now distributed within ParaView proper. --DaveDemarle 12:59, 2 February 2011 (EST)
What it does
LANL's streaming ParaView application was developed to visualize very large data sets. The program is a realization of the concepts described in the paper "A modular extensible visualization system architecture for culled prioritized data streaming" (Ahrens et al., Proceedings of SPIE, January 2007).
Briefly, it changes ParaView so that it renders data in a set number of passes. At each pass a different piece of the data is rendered and composited into a final image. Because each pass considers only a fraction of the total data, StreamingParaView allows the visualization of data sets that exceed the memory capacity of the machine on which the visualization is done. This is closely related to what standard ParaView does, where each processor on a distributed-memory machine independently and synchronously processes a small subset of the data. In fact the two approaches are complementary and can be combined simply by running StreamingParaView and connecting it to a parallel server.
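The following minimal sketch in plain VTK Python (not the StreamingParaView plugin itself) illustrates the underlying idea: render a fixed number of passes, one piece per pass, and composite the passes into a single image. The sphere source, the pass count of 16, and the buffer handling are illustrative choices, not the plugin's actual settings.

 import vtk
 
 # A stand-in pipeline; a real use case would read a very large data set.
 source = vtk.vtkSphereSource()
 source.SetThetaResolution(64)
 source.SetPhiResolution(64)
 
 mapper = vtk.vtkPolyDataMapper()
 mapper.SetInputConnection(source.GetOutputPort())
 
 actor = vtk.vtkActor()
 actor.SetMapper(mapper)
 
 renderer = vtk.vtkRenderer()
 renderer.AddActor(actor)
 renderWindow = vtk.vtkRenderWindow()
 renderWindow.AddRenderer(renderer)
 
 # Render in a fixed number of passes; each pass pulls one piece through
 # the pipeline, so only a fraction of the data is resident at any time.
 numberOfPasses = 16
 mapper.SetNumberOfPieces(numberOfPasses)
 
 renderWindow.SwapBuffersOff()   # composite the passes in the back buffer
 mapper.SetPiece(0)
 renderWindow.Render()           # the first pass clears the frame as usual
 renderer.EraseOff()             # later passes draw on top of earlier ones
 for piece in range(1, numberOfPasses):
     mapper.SetPiece(piece)
     renderWindow.Render()
 renderWindow.SwapBuffersOn()
 renderWindow.Frame()            # show the composited image
 renderer.EraseOn()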
Streaming implies overhead, the most important source of which is that at each pass the results cached in the VTK pipeline during the previous pass are invalidated.
Fortunately, because increasing the number of passes creates finer data granularity, it is often possible to identify significant portions of the domain that do not contribute to the final result and can therefore be ignored. To exploit this fact, VTK's priority determination facility is rigorously exercised. That facility adds a new pipeline execution pass, REQUEST_UPDATE_EXTENT_INFORMATION, before the REQUEST_DATA pass. The new pass asks each filter to estimate, based on whatever meta-information is available for a particular piece, whether that piece will contribute to the final result. For example, a piece's geometric bounds can be used to quickly determine that the entire piece will be rejected by a clipping filter. With this information, the pieces can be reordered so that insignificant pieces are skipped and important pieces (for example, those nearest to the camera) are processed before unimportant ones.
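The schematic below (pure Python, not the plugin's actual code) shows the kind of decision this pass enables, assuming per-piece meta-information is available as axis-aligned bounds. The helper names (piece_priority, corners), the clip plane, and the sample bounds are all hypothetical; the priority is zero for pieces wholly rejected by the clip and otherwise grows as the piece gets nearer to the camera.

 # Assign each piece a priority from its bounding box: 0.0 if the whole piece
 # lies behind the clip plane (it can be skipped entirely), otherwise a value
 # that grows as the piece gets closer to the camera.
 
 def corners(bounds):
     """The 8 corner points of (xmin, xmax, ymin, ymax, zmin, zmax)."""
     xmin, xmax, ymin, ymax, zmin, zmax = bounds
     return [(x, y, z) for x in (xmin, xmax) for y in (ymin, ymax) for z in (zmin, zmax)]
 
 def piece_priority(bounds, clip_origin, clip_normal, camera_position):
     # Signed distance of each corner from the clip plane.
     signed = [sum(n * (c - o) for c, o, n in zip(p, clip_origin, clip_normal))
               for p in corners(bounds)]
     if all(d < 0.0 for d in signed):
         return 0.0                       # the entire piece is rejected by the clip
     center = [(bounds[0] + bounds[1]) / 2.0,
               (bounds[2] + bounds[3]) / 2.0,
               (bounds[4] + bounds[5]) / 2.0]
     distance = sum((c - e) ** 2 for c, e in zip(center, camera_position)) ** 0.5
     return 1.0 / (1.0 + distance)        # nearer pieces get higher priority
 
 # Hypothetical meta-information for 4 pieces of a [0,4] x [0,1] x [0,1] domain.
 piece_bounds = [(i, i + 1, 0.0, 1.0, 0.0, 1.0) for i in range(4)]
 clip_origin, clip_normal = (1.5, 0.0, 0.0), (1.0, 0.0, 0.0)   # keep x > 1.5
 camera = (10.0, 0.5, 0.5)
 
 priorities = [piece_priority(b, clip_origin, clip_normal, camera) for b in piece_bounds]
 order = sorted((p for p in range(len(piece_bounds)) if priorities[p] > 0.0),
                key=lambda p: -priorities[p])
 print("processing order:", order)   # the culled piece is absent; nearest pieces come first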
It is also fortunate that visualization is usually a process of data reduction. When the number of important pieces is small enough, pipeline results are automatically cached by StreamingParaView. The cache lives in a dedicated filter placed near the end of the pipeline. This means that, for the most part, the streaming overhead only has to be paid the first time an object is displayed. Subsequent displays (for example, on camera motion) reuse the cached results, so until some filter parameter changes, rendering speed is not appreciably slower than in standard ParaView.
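The minimal sketch below (a hypothetical PieceCache class, not the plugin's actual cache filter) captures the behavior being described: per-piece results are stored near the end of the pipeline and thrown away when anything upstream changes, approximated here by a modification-time value.

 class PieceCache:
     """Cache per-piece results; drop everything when the upstream pipeline
     (or any filter parameter) changes, tracked here via an MTime value."""
 
     def __init__(self, max_pieces):
         self.max_pieces = max_pieces      # analogous to the cache-size preference
         self.upstream_mtime = None
         self.results = {}                 # piece index -> computed result
 
     def get(self, piece, upstream_mtime, compute):
         if upstream_mtime != self.upstream_mtime:
             self.results.clear()          # a parameter changed: the cache is stale
             self.upstream_mtime = upstream_mtime
         if piece not in self.results:
             result = compute(piece)       # pay the streaming cost once
             if len(self.results) < self.max_pieces:
                 self.results[piece] = result
             return result
         return self.results[piece]        # e.g. on camera motion, reuse the result
 
 # Repeated requests for the same piece (say, during camera interaction) hit the
 # cache; changing a filter parameter bumps the MTime and forces re-execution.
 cache = PieceCache(max_pieces=32)
 expensive = lambda piece: piece * piece   # stand-in for a pipeline execution
 first = cache.get(3, upstream_mtime=1, compute=expensive)    # executes
 second = cache.get(3, upstream_mtime=1, compute=expensive)   # served from the cache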
How to use it
Start StreamingParaView and load the StreamingParaView plugin. Choose a number of passes to stream over in the application's preferences dialog. Then use StreamingParaView as you would the standard ParaView application. On the preferences page, you also have control over the size of the cache, and a limit for the number of pieces that will be displayed. Both settings are optional but can be used to improve responsiveness.
Note that StreamingParaView creates all filters with their display turned off by default. Thus, you must use the eye icon in the pipeline browser, or the enable state on the display tab of the object inspector, to see any results. This behavior is intentional: every pipeline starts with the reader, which has no criteria for culling pieces, so displaying it in full at startup would limit interactivity.
To inspect what StreamingParaView is doing, you can turn on a piece bounds display mode on any filter's display tab. This draws a wireframe bounding box around each non-rejected piece as it is drawn, so you can easily see which parts of the data are being skipped. You also have an option on the preferences page to enable console logging messages. This produces detailed progress messages that are useful to developers debugging StreamingParaView.
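The rough VTK Python approximation below (not the plugin's actual display mode) shows how such a view can be produced: update each surviving piece, read its bounds, and add a wireframe outline of those bounds to the renderer. The sphere source, the piece count, and the kept_pieces list are placeholders; in the plugin, rejected pieces would already have been culled from that list.

 import vtk
 
 source = vtk.vtkSphereSource()
 source.SetThetaResolution(64)
 source.SetPhiResolution(64)
 
 renderer = vtk.vtkRenderer()
 
 numberOfPieces = 8
 kept_pieces = range(numberOfPieces)      # placeholder: rejected pieces would be dropped here
 for piece in kept_pieces:
     source.UpdatePiece(piece, numberOfPieces, 0)       # piece, number of pieces, ghost levels
     outline = vtk.vtkOutlineSource()
     outline.SetBounds(source.GetOutput().GetBounds())  # wireframe box around this piece
     mapper = vtk.vtkPolyDataMapper()
     mapper.SetInputConnection(outline.GetOutputPort())
     actor = vtk.vtkActor()
     actor.SetMapper(mapper)
     renderer.AddActor(actor)
 
 renderWindow = vtk.vtkRenderWindow()
 renderWindow.AddRenderer(renderer)
 renderWindow.Render()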
Current limitations
Filters that require parallel communication may deadlock when each node reorders and skips pieces arbitrarily. Streaming is likewise incompatible with certain view types. Because of this, StreamingParaView exposes only a subset of the view types, readers, sources, filters and writers available in normal ParaView.
There is an unresolved problem regarding global information. It is in general impossible to determine global meta-information, such as scalar ranges and geometric bounds, without touching all pieces, and doing so would make the application prohibitively slow. We therefore make the compromise of updating a single piece and using that piece's meta-information as a proxy for the entire dataset's meta-information. Because of this, the information on the Information tab of the Object Inspector is always suspect. For the same reason, initial settings for filters (i.e., default contour values, clip plane placement, camera bounds, and center of rotation) are based on guesses, and these guesses need to be manually corrected more often than in standard ParaView.
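As a concrete illustration of that compromise, the sketch below (plain VTK Python; vtkRTAnalyticSource stands in for a real reader, and the piece count is arbitrary) updates only one of N pieces and reads that piece's scalar range and bounds as a guess at the global values.

 import vtk
 
 source = vtk.vtkRTAnalyticSource()        # stand-in for a piece-aware reader
 
 numberOfPieces = 16
 source.UpdatePiece(0, numberOfPieces, 0)  # update a single piece: piece 0 of 16, no ghosts
 
 sample = source.GetOutput()
 proxy_scalar_range = sample.GetScalarRange()  # guess at the global scalar range
 proxy_bounds = sample.GetBounds()             # guess at the global geometric bounds
 print("proxy scalar range:", proxy_scalar_range)
 print("proxy bounds:", proxy_bounds)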
Finally, because of the way prioritization is implemented, the client does not yet effectively cache the server's visualization pipeline results. As such, every camera manipulation causes message traffic between the client and the server nodes, which slows the application down appreciably in parallel configurations.