StreamingParaView

Latest revision as of 17:39, 26 September 2018

Note: Since ParaView 3.10, the streaming and adaptive streaming applications have been rewritten as Plugins and are now distributed within ParaView proper. --DaveDemarle 12:59, 2 February 2011 (EST)

What it does

LANL's streaming ParaView application was developed for the purpose of visualizing very large data sets. The program is a realization of the concepts described in the paper "A modular extensible visualization system architecture for culled prioritized data streaming," Ahrens et al., Proceedings of SPIE, Jan. 2007.

Briefly, it changes ParaView so that it renders data in a set number of passes. At each pass a different piece of the data is rendered and composited into a final image. Because each pass considers a fraction of the total data, StreamingParaView allows the visualization of data sets that exceed the memory capacity of the machine on which the visualization is done. This is closely related to what standard ParaView does, where each processor on a distributed-memory machine independently processes a small subset of the data synchronously. In fact the two approaches are complementary, and can be combined simply by running StreamingParaView and connecting it to a parallel server.
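The multi-pass loop can be sketched as follows. This is only an illustrative outline, not ParaView code; `process_piece` and `composite` are hypothetical stand-ins for pipeline execution and image compositing.

```python
# Hypothetical sketch of multi-pass streaming: each pass processes one
# piece of the data and composites its partial result into the final image.
# process_piece() and composite() are illustrative names, not ParaView API.

def stream_render(num_passes, process_piece, composite):
    """Render data in num_passes passes, compositing each piece's
    partial result into the accumulated final result."""
    final_image = None
    for piece in range(num_passes):
        # Each pass touches only 1/num_passes of the data, so peak
        # memory use is bounded by the size of a single piece.
        partial = process_piece(piece, num_passes)
        final_image = composite(final_image, partial)
    return final_image

# Toy usage: "pieces" are number ranges and "compositing" is summation.
result = stream_render(
    4,
    process_piece=lambda i, n: sum(range(i * 25, (i + 1) * 25)),
    composite=lambda acc, part: (acc or 0) + part,
)
```

The point of the sketch is that the full data set never needs to be resident at once: only one piece's worth of work is in flight per pass.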

Streaming implies overhead, the most important source of which is that at each pass the results cached in the VTK pipeline during the previous pass are invalidated.

Fortunately, because increasing the number of passes creates finer data granularity, it is often the case that significant portions of the domain can be identified that do not contribute to the final result and can, therefore, be ignored. To exploit this fact, VTK's priority determination facility is rigorously exercised. That facility adds a new pipeline execution pass, REQUEST_UPDATE_EXTENT_INFORMATION, before the REQUEST_DATA pass. The new pass asks each filter to estimate, based upon whatever meta-information is available for a particular piece, whether the piece will contribute to the final result. An example is the geometric bounds of a piece, which can be used to quickly determine that the entire piece will be rejected by a clipping filter. With this information, the pieces can be reordered such that insignificant pieces are skipped and important pieces (for example, those nearest to the camera) are processed before unimportant ones.
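The culling-and-prioritization idea can be illustrated with a small sketch. This is not the actual VTK API; `clip_priority` and `prioritize` are hypothetical names, and a real implementation would ask every filter along the pipeline for a priority estimate rather than hard-coding a single clip plane.

```python
# Illustrative sketch (not the real VTK priority pass): use each piece's
# bounding box to decide whether a clip filter can reject it outright,
# then order the surviving pieces near-to-far from the camera.

def clip_priority(bounds, plane_x=0.0):
    """Return 0 if the piece's bounds lie entirely behind the clip
    plane x >= plane_x (it cannot contribute), else 1 (it may)."""
    xmax = bounds[1]
    return 0 if xmax < plane_x else 1

def prioritize(pieces, camera=(0.0, 0.0, 0.0)):
    """pieces: dict of name -> (xmin, xmax, ymin, ymax, zmin, zmax).
    Drop zero-priority pieces, order the rest by distance to camera."""
    def dist2(b):
        center = ((b[0] + b[1]) / 2, (b[2] + b[3]) / 2, (b[4] + b[5]) / 2)
        return sum((center[i] - camera[i]) ** 2 for i in range(3))
    kept = {name: b for name, b in pieces.items() if clip_priority(b)}
    return sorted(kept, key=lambda name: dist2(kept[name]))

pieces = {
    "near":   ( 1.0,  2.0, 0.0, 1.0, 0.0, 1.0),
    "far":    ( 8.0,  9.0, 0.0, 1.0, 0.0, 1.0),
    "culled": (-5.0, -1.0, 0.0, 1.0, 0.0, 1.0),  # entirely behind the plane
}
order = prioritize(pieces)  # "culled" is skipped; "near" before "far"
```

Note that the decision uses only cheap meta-information (the bounding box); no piece data is read until a piece actually executes.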

It is also fortunate that visualization is usually a process of data reduction. When the number of important pieces is small enough, pipeline results are automatically cached by StreamingParaView. The caching occurs in a dedicated filter which is placed near the end of the pipeline. This means that, for the most part, the streaming overhead only has to be paid the first time an object is displayed. Subsequent displays (for example, on camera motion) reuse cached results, and thus, until some filter parameter changes, rendering speed is not appreciably slower than for standard ParaView.
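A minimal sketch of such an end-of-pipeline piece cache is shown below. The class and method names are illustrative, not StreamingParaView's implementation; the essential idea is that results are keyed on both the piece and the upstream filter parameters, so redisplays hit the cache and any parameter change forces recomputation.

```python
# Sketch of a piece-result cache keyed on (piece id, filter parameters).
# Illustrative only: names here are hypothetical, not StreamingParaView code.

class PieceCache:
    def __init__(self, max_pieces=16):
        self.max_pieces = max_pieces   # cache size limit, as in the prefs page
        self._store = {}

    def get_or_compute(self, piece, params, compute):
        """Return the cached result for (piece, params), computing and
        storing it on a miss. params must be hashable, e.g. a tuple."""
        key = (piece, params)
        if key not in self._store:
            if len(self._store) >= self.max_pieces:
                # Simple eviction of the oldest entry when the cache is full.
                self._store.pop(next(iter(self._store)))
            self._store[key] = compute(piece)
        return self._store[key]

calls = []
def expensive(piece):
    calls.append(piece)     # record how often real work happens
    return piece * piece

cache = PieceCache()
cache.get_or_compute(3, ("contour", 0.5), expensive)  # miss: computes
cache.get_or_compute(3, ("contour", 0.5), expensive)  # hit: no recompute
cache.get_or_compute(3, ("contour", 0.7), expensive)  # parameter changed: recomputes
```

Only the first and third calls do real work, which mirrors the behavior described above: camera motion (same parameters) is cheap, parameter edits are not.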

How to use it

Start StreamingParaView and load the StreamingParaView plugin. Choose the number of passes to stream over in the application's preferences dialog. Then use StreamingParaView as you would the standard ParaView application. On the preferences page, you also have control over the size of the cache and a limit on the number of pieces that will be displayed. Both settings are optional but can be used to improve responsiveness.

Note that StreamingParaView creates all filters with their display turned off by default. Thus, you must use the eye icon in the pipeline browser, or the enable state on the display tab of the object inspector, to see any results. This behavior is intentional. All pipelines start with the reader, which has no criteria for culling pieces, so interactivity is limited when the reader has to be fully displayed initially.

To inspect what StreamingParaView is doing, you can turn on a piece bounds display mode on any filter's display tab. This draws a wireframe bounding box around each non-rejected piece as it is drawn. With this view you can easily see what parts of the data are being skipped. You also have the option to enable console logging messages on the preferences page. This produces detailed progress messages that are useful to developers debugging StreamingParaView.

Current limitations

Filters which require parallel communication may cause a deadlock when each node rearranges and skips pieces arbitrarily. Streaming is likewise incompatible with certain view types. Because of this, StreamingParaView exposes only a subset of the view types, readers, sources, filters, and writers available in normal ParaView.

There is an unresolved problem regarding global information. It is in general impossible to determine global meta-information, such as scalar ranges and geometric bounds, without touching all pieces. Doing so would make the application prohibitively slow. We therefore make the compromise of updating a single piece and using that piece's meta-information as a proxy for the entire dataset's meta-information. Because of this, the information on the Information tab of the Object Inspector is always suspect. For the same reason, initial settings for filters (i.e., default contour values, clip plane placement, camera bounds, and center of rotation) are based on guesses, and these guesses need to be manually corrected more often than for standard ParaView.
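The single-piece compromise can be made concrete with a toy sketch. The function names are hypothetical; the point is only that a range estimated from one piece can differ from the true global range, which is why the Information tab values are suspect.

```python
# Sketch of the single-piece meta-information compromise described above.
# Illustrative names, not the real implementation: only piece 0 is updated,
# and its scalar range stands in for the whole dataset's range.

def estimate_scalar_range(read_piece, num_pieces):
    """Read a single piece and report its scalar range as the
    (possibly wrong) global estimate."""
    values = read_piece(0, num_pieces)   # touch one piece only
    return (min(values), max(values))

# Toy dataset split into 4 pieces. The true global range is (0, 15),
# but the single-piece estimate sees only piece 0's values [0, 1, 2, 3].
data = list(range(16))

def read_piece(i, n):
    size = len(data) // n
    return data[i * size:(i + 1) * size]

estimate = estimate_scalar_range(read_piece, 4)   # (0, 3): an underestimate
```

This is the same trade-off the application makes: one cheap piece update instead of a prohibitively slow pass over all pieces, at the cost of estimates that sometimes need manual correction.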

Finally, because of the way prioritization is implemented, the client does not yet effectively cache the server's visualization pipeline results. As such, every camera manipulation causes message traffic between the client and the server nodes, which slows the application down appreciably in parallel configurations.