VTK/Tutorials/DataArrays — KitwarePublic wiki (last edited by Wschroed, 2022-02-22)<br />
<hr />
<div>== Background ==<br />
<br />
VTK datasets store most of their important information in subclasses of vtkDataArray. Vertex locations (vtkPoints::Data), cell topology (vtkCellArray::Ia), and numeric point, cell, and generic attributes (vtkFieldData::Data) are the dataset features accessed most frequently by VTK algorithms, and these all rely on the vtkDataArray API.<br />
<br />
== Terminology ==<br />
<br />
This page uses the following terms:<br />
<br />
A '''ValueType''' is the element type of an array. For instance, vtkFloatArray has a ValueType of float.<br />
<br />
An '''ArrayType''' is a subclass of vtkDataArray. It specifies not only a ValueType, but an array implementation as well. This becomes important as vtkDataArray subclasses will begin to stray from the typical "array-of-structs" ordering that has been exclusively used in the past.<br />
<br />
A '''dispatch''' is a runtime-resolution of a vtkDataArray’s ArrayType, and is used to call a section of executable code that has been tailored for that ArrayType. Dispatching has compile-time and run-time components. At compile-time, the possible ArrayTypes to be used are determined and a worker code template is generated for each type. At run-time, the type of a specific array is determined and the proper worker instantiation is called.<br />
<br />
'''Template explosion''' refers to a sharp increase in the size of a compiled binary that results from instantiating a template function or class on many different types.<br />
<br />
=== vtkDataArray ===<br />
<br />
The data array type hierarchy in VTK has a unique feature when compared to typical C++ containers: a non-templated base class. All arrays containing numeric data inherit vtkDataArray, a common interface that sports a very useful API. Without knowing the underlying ValueType stored in a data array, an algorithm or user may still work with any vtkDataArray in meaningful ways: the array can be resized, reshaped, read, and rewritten easily using a generic API that substitutes double-precision floating point numbers for the array’s actual ValueType. For instance, we can write a simple function that computes the magnitudes for a set of vectors in one array and stores the results in another using nothing but the typeless vtkDataArray API:<br />
<br />
<source lang="cpp"><br />
// 3 component magnitude calculation using the vtkDataArray API.<br />
// Inefficient, but easy to write:<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
vtkIdType numVectors = vectors->GetNumberOfTuples();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
// What data types are magnitude and vectors using?<br />
// We don’t care! These methods all use double.<br />
magnitude->SetComponent(tupleIdx, 0,<br />
std::sqrt(vectors->GetComponent(tupleIdx, 0) *<br />
vectors->GetComponent(tupleIdx, 0) +<br />
vectors->GetComponent(tupleIdx, 1) *<br />
vectors->GetComponent(tupleIdx, 1) +<br />
vectors->GetComponent(tupleIdx, 2) *<br />
vectors->GetComponent(tupleIdx, 2)); <br />
}<br />
}<br />
</source><br />
<br />
=== The Costs of Flexibility ===<br />
<br />
However, this flexibility comes at a cost. Passing data through a generic API has a number of issues:<br />
<br />
;Accuracy<br />
: Not all ValueTypes are fully expressible as a double. Integers wider than 53 bits (the precision of a double’s significand) are silently truncated, which can be a particularly nasty issue.<br />
;Performance<br />
: Virtual overhead: The only way to implement such a system is to route the vtkDataArray calls through a run-time resolution of ValueTypes. This is implemented through the virtual override mechanism of C++, which adds a small overhead to each API call.<br />
: Missed optimization: The virtual indirection described above also prevents the compiler from being able to make assumptions about the layout of the data in-memory. This information could be used to perform advanced optimizations, such as vectorization.<br />
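The accuracy issue above can be demonstrated with a short self-contained example (plain C++, no VTK required): a 64-bit integer just past 2^53 does not survive a round trip through double.<br />

```cpp
#include <cstdint>

// Returns true if 'value' survives a round trip through double unchanged.
bool survivesDoubleRoundTrip(int64_t value)
{
  double asDouble = static_cast<double>(value);
  return static_cast<int64_t>(asDouble) == value;
}
```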
<br />
So what can one do if they want fast, optimized, type-safe access to the data stored in a vtkDataArray? What options are available?<br />
<br />
=== The Old Solution: vtkTemplateMacro ===<br />
<br />
The vtkTemplateMacro is described in this section. While it is no longer considered a best practice to use this construct in new code, it is still usable and likely to be encountered when reading the VTK source code. Newer code should use the vtkArrayDispatch mechanism, which is detailed later. The discussion of vtkTemplateMacro will help illustrate some of the practical issues with array dispatching.<br />
<br />
With a few minor exceptions that we won’t consider here, prior to VTK 7.1 it was safe to assume that all numeric vtkDataArray objects were also subclasses of vtkDataArrayTemplate. This template class provided the implementation of all documented numeric data arrays such as vtkDoubleArray, vtkIdTypeArray, etc., and stored the tuples in memory as a contiguous array-of-structs (AOS). For example, if we had an array that stored 3-component tuples as floating point numbers, we could define a tuple as:<br />
<br />
<source lang="cpp"><br />
struct Tuple { float x; float y; float z; };<br />
</source><br />
<br />
An array-of-structs, or AOS, memory buffer containing this data could be described as:<br />
<br />
<source lang="cpp"><br />
Tuple ArrayOfStructsBuffer[NumTuples];<br />
</source><br />
<br />
As a result, ArrayOfStructsBuffer will have the following memory layout:<br />
<br />
<source lang="cpp"><br />
{ x1, y1, z1, x2, y2, z2, x3, y3, z3, ...}<br />
</source><br />
<br />
That is, the components of each tuple are stored in adjacent memory locations, one tuple after another. While this is not exactly how vtkDataArrayTemplate implemented its memory buffers, it accurately describes the resulting memory layout.<br />
<br />
vtkDataArray also defines a GetDataType method, which returns an enumerated value describing its type. This value can be used to discover the ValueType stored in the array.<br />
<br />
Combining the AOS memory convention and GetDataType() with a convenient method on the data arrays named “GetVoidPointer()” opened a path to efficient, type-safe access. GetVoidPointer() does what it says on the tin: it returns the memory address of the array data’s base location as a void*. While this breaks encapsulation and sets off warning bells for the more pedantic among us, the following technique was safe and efficient when used correctly (and is consistent with the std::vector<>::data() method, which is provided for many of the same practical and performance-related reasons):<br />
<br />
<source lang="cpp"><br />
// 3-component magnitude calculation using GetVoidPointer.<br />
// Efficient and fast, but assumes AOS memory layout<br />
template <typename ValueType><br />
void calcMagnitudeWorker(ValueType *vectors, ValueType *magnitude,<br />
vtkIdType numVectors)<br />
{<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
// We now have access to the raw memory buffers, and assuming<br />
// AOS memory layout, we know how to access them.<br />
magnitude[tupleIdx] = <br />
std::sqrt(vectors[3 * tupleIdx + 0] *<br />
vectors[3 * tupleIdx + 0] +<br />
vectors[3 * tupleIdx + 1] *<br />
vectors[3 * tupleIdx + 1] +<br />
vectors[3 * tupleIdx + 2] *<br />
vectors[3 * tupleIdx + 2]); <br />
}<br />
}<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
assert("Arrays must have same datatype!" &&<br />
vtkDataTypesCompare(vectors->GetDataType(),<br />
magnitude->GetDataType()));<br />
switch (vectors->GetDataType())<br />
{<br />
vtkTemplateMacro(calcMagnitudeWorker<VTK_TT>(<br />
static_cast<VTK_TT*>(vectors->GetVoidPointer(0)),<br />
static_cast<VTK_TT*>(magnitude->GetVoidPointer(0)),<br />
vectors->GetNumberOfTuples()));<br />
}<br />
}<br />
</source><br />
<br />
The vtkTemplateMacro, as you may have guessed, expands into a series of case statements that determine an array’s ValueType from the ‘int GetDataType()’ return value. The ValueType is then typedef’d to VTK_TT, and the macro’s argument is called for each numeric type returned from GetDataType. In this case, the call to calcMagnitudeWorker is made by the macro, with VTK_TT typedef’d to the array’s ValueType.<br />
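To make this expansion concrete, here is a heavily simplified stand-in (hypothetical MY_* names, only three types; the real vtkTemplateMacro covers all of VTK's numeric types):<br />

```cpp
#include <cstddef>

// Illustrative stand-ins for VTK's runtime type constants (arbitrary values):
#define MY_INT 6
#define MY_FLOAT 10
#define MY_DOUBLE 11

// One case per type: typedef the ValueType to VTK_TT, then run the call.
#define myTemplateMacroCase(typeN, type, call) \
  case typeN: { typedef type VTK_TT; call; } break

#define myTemplateMacro(call)                   \
  myTemplateMacroCase(MY_DOUBLE, double, call); \
  myTemplateMacroCase(MY_FLOAT, float, call);   \
  myTemplateMacroCase(MY_INT, int, call)

// Example use: recover the element size from a runtime type id.
std::size_t sizeOfDataType(int dataType)
{
  switch (dataType)
  {
    myTemplateMacro(return sizeof(VTK_TT));
  }
  return 0; // unknown type
}
```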
<br />
This is the typical usage pattern for vtkTemplateMacro. The calcMagnitude function calls a templated worker implementation that uses efficient, raw memory access to a typesafe memory buffer so that the worker’s code can be as efficient as possible. But this assumes AOS memory ordering, and as we’ll mention, this assumption may no longer be valid as VTK moves further into the field of in-situ analysis.<br />
<br />
But first, you may have noticed that the above example using vtkTemplateMacro has introduced a step backwards in terms of functionality. In the vtkDataArray implementation, we didn’t care if both arrays were the same ValueType, but now we have to ensure this, since we cast both arrays’ void pointers to VTK_TT*. What if vectors is an array of integers, but we want to calculate floating point magnitudes? <br />
<br />
=== vtkTemplateMacro with Multiple Arrays ===<br />
<br />
The best solution prior to VTK 7.1 was to use two worker functions. The first is templated on vector’s ValueType, and the second is templated on both array ValueTypes:<br />
<br />
<source lang="cpp"><br />
// 3-component magnitude calculation using GetVoidPointer and a <br />
// double-dispatch to resolve ValueTypes of both arrays.<br />
// Efficient and fast, but assumes AOS memory layout, lots of boilerplate<br />
// code, and the sensitivity to template explosion issues increases.<br />
template <typename VectorType, typename MagnitudeType><br />
void calcMagnitudeWorker2(VectorType *vectors, MagnitudeType *magnitude,<br />
vtkIdType numVectors)<br />
{<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
// We now have access to the raw memory buffers, and assuming<br />
// AOS memory layout, we know how to access them.<br />
magnitude[tupleIdx] = <br />
std::sqrt(vectors[3 * tupleIdx + 0] *<br />
vectors[3 * tupleIdx + 0] +<br />
vectors[3 * tupleIdx + 1] *<br />
vectors[3 * tupleIdx + 1] +<br />
vectors[3 * tupleIdx + 2] *<br />
vectors[3 * tupleIdx + 2]); <br />
}<br />
}<br />
<br />
// Vector ValueType is known (VectorType), now use vtkTemplateMacro on<br />
// magnitude:<br />
template <typename VectorType><br />
void calcMagnitudeWorker1(VectorType *vectors, vtkDataArray *magnitude,<br />
vtkIdType numVectors)<br />
{<br />
switch (magnitude->GetDataType())<br />
{<br />
vtkTemplateMacro(calcMagnitudeWorker2(vectors,<br />
static_cast<VTK_TT*>(magnitude->GetVoidPointer(0)), numVectors));<br />
}<br />
}<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
// Dispatch vectors first:<br />
switch (vectors->GetDataType())<br />
{<br />
vtkTemplateMacro(calcMagnitudeWorker1<VTK_TT>(<br />
static_cast<VTK_TT*>(vectors->GetVoidPointer(0)),<br />
magnitude, vectors->GetNumberOfTuples()));<br />
}<br />
}<br />
</source><br />
<br />
This works well, but it’s a bit ugly and has the same issue as before regarding memory layout. Double dispatches using this method will also see more problems regarding binary size. The number of template instantiations that the compiler needs to generate is determined by <math>I = T^D</math>, where I is the number of template instantiations, T is the number of types considered, and D is the number of dispatches. As of VTK 7.1, vtkTemplateMacro considers 14 data types, so this double-dispatch will produce 14 instantiations of calcMagnitudeWorker1 and 196 instantiations of calcMagnitudeWorker2. If we tried to resolve 3 vtkDataArrays into raw C arrays, 2744 instantiations of the final worker function would be generated. As more arrays are considered, the need for some form of restricted dispatch becomes very important to keep this template explosion in check.<br />
<br />
== Data Array Changes in VTK 7.1 ==<br />
<br />
Starting with VTK 7.1, the Array-Of-Structs (AOS) memory layout is no longer the only vtkDataArray implementation provided by the library. The Struct-Of-Arrays (SOA) memory layout is now available through the vtkSOADataArrayTemplate class. The SOA layout assumes that the components of an array are stored separately, as in:<br />
<br />
<source lang="cpp"><br />
struct StructOfArraysBuffer <br />
{ <br />
float *x; // Pointer to array containing x components<br />
float *y; // Same for y<br />
float *z; // Same for z<br />
};<br />
</source><br />
<br />
The new SOA arrays were added to improve interoperability between VTK and simulation packages for live visualization of in-situ results. Many simulations use the SOA layout for their data, and natively supporting these arrays in VTK will allow analysis of live data without the need to explicitly copy it into a VTK data structure.<br />
<br />
As a result of this change, a new mechanism is needed to efficiently access array data. vtkTemplateMacro and GetVoidPointer are no longer an acceptable solution -- implementing GetVoidPointer for SOA arrays requires creating a deep copy of the data into a new AOS buffer, a waste of both processor time and memory. <br />
<br />
So we need a replacement for vtkTemplateMacro that can abstract away things like storage details while providing performance that is on-par with raw memory buffer operations. And while we’re at it, let’s look at removing the tedium of multi-array dispatch and reducing the problem of 'template explosion'. The remainder of this page details such a system.<br />
<br />
== Best Practices for vtkDataArray Post-7.1 ==<br />
<br />
We’ll describe a new set of tools that make managing template instantiations for efficient array access both easy and extensible. As an overview, the following new features will be discussed:<br />
<br />
* '''vtkGenericDataArray''' The new templated base interface for all numeric vtkDataArray subclasses.<br />
* '''vtkArrayDispatch''' Collection of code generation tools that allow concise and precise specification of restrictable dispatch for up to 3 arrays simultaneously.<br />
* '''vtkArrayDownCast''' Access to specialized downcast implementations from code templates.<br />
* '''vtkDataArrayAccessor''' Provides Get and Set methods for accessing/modifying array data as efficiently as possible. Allows a single worker implementation to work efficiently with vtkGenericDataArray subclasses, or fallback to use the vtkDataArray API if needed.<br />
* '''VTK_ASSUME''' New abstraction for the compiler <nowiki>__assume</nowiki> directive to provide optimization hints.<br />
<br />
These will be discussed more fully, but as a preview, here’s our familiar calcMagnitude example implemented using these new tools:<br />
<br />
<source lang="cpp"><br />
// Modern implementation of calcMagnitude using new concepts in VTK 7.1:<br />
// A worker functor. The calculation is implemented in the function template<br />
// for operator().<br />
struct CalcMagnitudeWorker<br />
{<br />
// The worker accepts VTK array objects now, not raw memory buffers.<br />
template <typename VectorArray, typename MagnitudeArray><br />
void operator()(VectorArray *vectors, MagnitudeArray *magnitude)<br />
{<br />
// This allows the compiler to optimize for the AOS array stride.<br />
VTK_ASSUME(vectors->GetNumberOfComponents() == 3);<br />
VTK_ASSUME(magnitude->GetNumberOfComponents() == 1);<br />
<br />
// These allow this single worker function to be used with both<br />
// the vtkDataArray 'double' API and the more efficient <br />
// vtkGenericDataArray APIs, depending on the template parameters:<br />
vtkDataArrayAccessor<VectorArray> v(vectors);<br />
vtkDataArrayAccessor<MagnitudeArray> m(magnitude);<br />
<br />
vtkIdType numVectors = vectors->GetNumberOfTuples();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
// Set and Get compile to inlined optimizable raw memory accesses for<br />
// vtkGenericDataArray subclasses.<br />
m.Set(tupleIdx, 0, std::sqrt(v.Get(tupleIdx, 0) * v.Get(tupleIdx, 0) +<br />
v.Get(tupleIdx, 1) * v.Get(tupleIdx, 1) +<br />
v.Get(tupleIdx, 2) * v.Get(tupleIdx, 2)));<br />
}<br />
}<br />
};<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
// Create our worker functor:<br />
CalcMagnitudeWorker worker;<br />
<br />
// Define our dispatcher. We’ll let vectors have any ValueType, but only<br />
// consider float/double arrays for magnitudes. These combinations will<br />
// use a 'fast-path' implementation generated by the dispatcher:<br />
typedef vtkArrayDispatch::Dispatch2ByValueType<br />
<<br />
vtkArrayDispatch::AllTypes, // ValueTypes allowed by first array<br />
vtkArrayDispatch::Reals // ValueTypes allowed by second array<br />
> Dispatcher;<br />
<br />
// Execute the dispatcher:<br />
if (!Dispatcher::Execute(vectors, magnitude, worker))<br />
{<br />
// If Execute() fails, it means the dispatch failed due to an<br />
// unsupported array type. In this case, it’s likely that the magnitude<br />
// array is using an integral type. This is an uncommon case, so we won’t<br />
// generate a fast path for these, but instead call an instantiation of <br />
// CalcMagnitudeWorker::operator()<vtkDataArray, vtkDataArray>.<br />
// Through the use of vtkDataArrayAccessor, this falls back to using the<br />
// vtkDataArray double API:<br />
worker(vectors, magnitude);<br />
}<br />
}<br />
</source><br />
<br />
== vtkGenericDataArray ==<br />
<br />
The vtkGenericDataArray class template drives the new vtkDataArray class hierarchy. The ValueType is introduced here, both as a template parameter and a class-scope typedef. This allows a typed API to be written that doesn’t require conversion to/from a common type (as vtkDataArray does with double). It does not implement any storage details, however. Instead, it uses the CRTP idiom to forward key method calls to a derived class without using a virtual function call. By eliminating this indirection, vtkGenericDataArray defines an interface that can be used to implement highly efficient code, because the compiler is able to see past the method calls and optimize the underlying memory accesses instead.<br />
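The CRTP forwarding can be illustrated with a hypothetical sketch (these class names are invented for illustration; they are not the actual vtkGenericDataArray code):<br />

```cpp
// Hypothetical CRTP sketch: the base forwards to the derived class via a
// static_cast instead of a virtual call, so the compiler can inline it.
template <typename DerivedT, typename ValueT>
class GenericArraySketch
{
public:
  typedef ValueT ValueType;

  // Non-virtual: resolved at compile time via the template parameter.
  ValueT GetValue(int idx) const
  {
    return static_cast<const DerivedT *>(this)->GetValueImpl(idx);
  }
};

// An AOS-style implementation supplying the storage details:
template <typename ValueT>
class AosArraySketch
  : public GenericArraySketch<AosArraySketch<ValueT>, ValueT>
{
public:
  ValueT Data[8];
  ValueT GetValueImpl(int idx) const { return this->Data[idx]; }
};
```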
<br />
There are two main subclasses of vtkGenericDataArray: vtkAOSDataArrayTemplate and vtkSOADataArrayTemplate. These implement array-of-structs and struct-of-arrays storage, respectively.<br />
<br />
== vtkTypeList ==<br />
<br />
Type lists are a metaprogramming construct used to generate a list of C++ types. They are used in VTK to implement restricted array dispatching. As we’ll see, vtkArrayDispatch offers ways to reduce the number of generated template instantiations by enforcing constraints on the arrays used to dispatch. For instance, if one wanted to only generate templated worker implementations for vtkFloatArray and vtkIntArray, a typelist is used to specify this:<br />
<br />
<source lang="cpp"><br />
// Create a typelist of 2 types, vtkFloatArray and vtkIntArray:<br />
typedef vtkTypeList_Create_2(vtkFloatArray, vtkIntArray) MyArrays;<br />
<br />
Worker someWorker = ...;<br />
vtkDataArray *someArray = ...;<br />
<br />
// Use vtkArrayDispatch to generate code paths for these arrays:<br />
vtkArrayDispatch::DispatchByArray<MyArrays>(someArray, someWorker);<br />
</source><br />
<br />
There’s not much to know about type lists as a user, other than how to create them. As seen above, there is a set of macros named vtkTypeList_Create_X, where X is the number of types in the created list and the arguments are the types to place in the list. As in the example above, a new type list is typically bound to a friendlier name with a local typedef.<br />
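Under the hood, a type list is just a compile-time linked list of types. A minimal sketch of the idea (illustrative only; vtkTypeList.h is more featureful) looks like:<br />

```cpp
// Minimal typelist sketch: a head type plus a tail list, terminated by
// NullType. Metafunctions walk the list via template specialization.
struct NullType {};

template <typename HeadT, typename TailT>
struct TypeListSketch
{
  typedef HeadT Head; // first type in the list
  typedef TailT Tail; // remaining list, terminated by NullType
};

// A metafunction computing the list length at compile time:
template <typename ListT> struct LengthOf;
template <> struct LengthOf<NullType> { enum { Value = 0 }; };
template <typename H, typename T>
struct LengthOf<TypeListSketch<H, T> > { enum { Value = 1 + LengthOf<T>::Value }; };

// Rough equivalent of vtkTypeList_Create_2(float, int):
typedef TypeListSketch<float, TypeListSketch<int, NullType> > TwoTypes;
```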
<br />
The vtkTypeList.h header defines some additional type list operations that may be useful, such as deleting and appending types, looking up indices, etc. vtkArrayDispatch::FilterArraysByValueType may come in handy, too. But for working with array dispatches, most users will only need to create new ones, or use one of the following predefined vtkTypeLists:<br />
<br />
* vtkArrayDispatch::Reals -- All floating point ValueTypes.<br />
* vtkArrayDispatch::Integrals -- All integral ValueTypes.<br />
* vtkArrayDispatch::AllTypes -- Union of Reals and Integrals.<br />
* vtkArrayDispatch::Arrays -- Default list of ArrayTypes to use in dispatches.<br />
<br />
The last one is special -- vtkArrayDispatch::Arrays is a type list of ArrayTypes set application-wide when VTK is built. This vtkTypeList of vtkDataArray subclasses is used for unrestricted dispatches, and is the list that gets filtered when restricting a dispatch to specific ValueTypes. <br />
<br />
Refining this list allows the user building VTK to have some control over the dispatch process. If SOA arrays are never going to be used, they can be removed from dispatch calls, reducing compile times and binary size. On the other hand, a user applying in-situ techniques may want them available, because they’ll be used to import views of intermediate results.<br />
<br />
By default, vtkArrayDispatch::Arrays contains all AOS arrays. The CMake option VTK_DISPATCH_SOA_ARRAYS will enable SOA array dispatch as well. More advanced possibilities exist and are described in VTK/CMake/vtkCreateArrayDispatchArrayList.cmake.<br />
<br />
== vtkArrayDownCast ==<br />
<br />
In VTK, all subclasses of vtkObject (including the data arrays) support a downcast method called SafeDownCast. It is used similarly to the C++ dynamic_cast -- given an object, try to cast it to a more derived type or return NULL if the object is not the requested type. Say we have a vtkDataArray and want to test if it is actually a vtkFloatArray. We can do this:<br />
<br />
<source lang="cpp"><br />
void DoSomeAction(vtkDataArray *dataArray)<br />
{<br />
vtkFloatArray *floatArray = vtkFloatArray::SafeDownCast(dataArray);<br />
if (floatArray)<br />
{<br />
// ... (do work with float array)<br />
}<br />
}<br />
</source><br />
<br />
This works, but it can pose a serious problem if DoSomeAction is called repeatedly. SafeDownCast works by performing a series of virtual calls and string comparisons to determine if an object falls into a particular class hierarchy. These string comparisons add up and can actually dominate computational resources if an algorithm implementation calls SafeDownCast in a tight loop.<br />
<br />
In such situations, it’s ideal to restructure the algorithm so that the downcast only happens once and the same result is used repeatedly, but sometimes this is not possible. To lessen the cost of downcasting arrays, a FastDownCast method exists for common subclasses of vtkAbstractArray. This replaces the string comparisons with a single virtual call and a few integer comparisons and is far cheaper than the more general SafeDownCast. However, not all array implementations support the FastDownCast method.<br />
<br />
This creates a headache for templated code. Take the following example:<br />
<br />
<source lang="cpp"><br />
template <typename ArrayType><br />
void DoSomeAction(vtkAbstractArray *array)<br />
{<br />
ArrayType *myArray = ArrayType::SafeDownCast(array);<br />
if (myArray)<br />
{<br />
// ... (do work with myArray)<br />
}<br />
}<br />
</source><br />
<br />
We cannot use FastDownCast here since not all possible ArrayTypes support it. But we really want that performance increase for the ones that do -- SafeDownCasts are really slow! vtkArrayDownCast fixes this issue:<br />
<br />
<source lang="cpp"><br />
template <typename ArrayType><br />
void DoSomeAction(vtkAbstractArray *array)<br />
{<br />
ArrayType *myArray = vtkArrayDownCast<ArrayType>(array);<br />
if (myArray)<br />
{<br />
// ... (do work with myArray)<br />
}<br />
}<br />
</source><br />
<br />
vtkArrayDownCast automatically selects FastDownCast when it is defined for the ArrayType, and otherwise falls back to SafeDownCast. This is the preferred array downcast method for performance, uniformity, and reliability.<br />
<br />
== vtkDataArrayAccessor ==<br />
<br />
Array dispatching relies on having templated worker code carry out some operation. For instance, take this vtkArrayDispatch code that locates the maximum value in an array:<br />
<br />
<source lang="cpp"><br />
// Stores the tuple/component coordinates of the maximum value:<br />
struct FindMax<br />
{<br />
vtkIdType Tuple; // Result<br />
int Component; // Result<br />
<br />
FindMax() : Tuple(-1), Component(-1) {}<br />
<br />
template <typename ArrayT><br />
void operator()(ArrayT *array)<br />
{<br />
// The type to use for temporaries, and a temporary to store<br />
// the current maximum value. Note that numeric_limits::lowest() is<br />
// needed here; ::min() returns the smallest *positive* value for<br />
// floating point types.<br />
typedef typename ArrayT::ValueType ValueType;<br />
ValueType max = std::numeric_limits<ValueType>::lowest();<br />
<br />
// Iterate through all tuples and components, noting the location<br />
// of the largest element found.<br />
vtkIdType numTuples = array->GetNumberOfTuples();<br />
int numComps = array->GetNumberOfComponents();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numTuples; ++tupleIdx)<br />
{<br />
for (int compIdx = 0; compIdx < numComps; ++compIdx)<br />
{<br />
if (max < array->GetTypedComponent(tupleIdx, compIdx))<br />
{<br />
max = array->GetTypedComponent(tupleIdx, compIdx);<br />
this->Tuple = tupleIdx;<br />
this->Component = compIdx;<br />
}<br />
}<br />
}<br />
}<br />
};<br />
<br />
void someFunction(vtkDataArray *array)<br />
{<br />
FindMax maxWorker;<br />
vtkArrayDispatch::Dispatch::Execute(array, maxWorker);<br />
// Do work using maxWorker.Tuple and maxWorker.Component...<br />
}<br />
</source><br />
<br />
There’s a problem, though. Recall that only the arrays in vtkArrayDispatch::Arrays are tested for dispatching. What happens if the array passed into someFunction wasn’t on that list?<br />
<br />
The dispatch will fail, and maxWorker.Tuple and maxWorker.Component will be left to their initial values of -1. That’s no good. What if someFunction is a critical path where we want to use a fast dispatched worker if possible, but still have valid results to use if dispatching fails? Well, we can fall back on the vtkDataArray API and do things the slow way in that case. When a dispatcher is given an unsupported array, it returns false, so let’s just add a backup implementation:<br />
<br />
<source lang="cpp"><br />
// Stores the tuple/component coordinates of the maximum value:<br />
struct FindMax<br />
{ /* As before... */ };<br />
<br />
void someFunction(vtkDataArray *array)<br />
{<br />
FindMax maxWorker;<br />
if (!vtkArrayDispatch::Dispatch::Execute(array, maxWorker))<br />
{<br />
// Reimplement FindMax::operator(), but use the vtkDataArray API's<br />
// "virtual double GetComponent()" instead of the more efficient<br />
// "ValueType GetTypedComponent()" from vtkGenericDataArray.<br />
}<br />
}<br />
</source><br />
<br />
Ok, that works. But ugh...why write the same algorithm twice? That’s extra debugging, extra testing, extra maintenance burden, and just plain not fun. <br />
<br />
Enter vtkDataArrayAccessor. This utility template does a very simple, yet useful, job. It provides component and tuple based Get and Set methods that will call the corresponding method on the array using either the vtkDataArray or vtkGenericDataArray API, depending on the class’s template parameter. It also defines an APIType, which can be used to allocate temporaries, etc. This type is double for vtkDataArrays and vtkGenericDataArray::ValueType for vtkGenericDataArrays.<br />
<br />
Another nice benefit is that vtkDataArrayAccessor has a more compact API. The only defined methods are Get and Set, and they’re overloaded to work on either tuples or components (though component access is encouraged as it is much, much more efficient). Note that all non-element access operations (such as GetNumberOfTuples) should still be called on the array pointer using vtkDataArray API.<br />
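The mechanism can be sketched as a class template with a specialization for the generic base class (hypothetical names below; BaseArray stands in for vtkDataArray and TypedArray for a vtkGenericDataArray subclass — this is not VTK's actual implementation):<br />

```cpp
// Generic base with a virtual double-based API:
struct BaseArray
{
  virtual ~BaseArray() {}
  virtual double GetComponent(int tuple, int comp) const = 0;
};

// Typed subclass with a non-virtual fast path:
template <typename ValueT>
struct TypedArray : public BaseArray
{
  typedef ValueT ValueType;
  ValueT Data[8];
  int NumComps;
  ValueT GetTypedComponent(int tuple, int comp) const
  { return this->Data[tuple * this->NumComps + comp]; }
  double GetComponent(int tuple, int comp) const
  { return static_cast<double>(this->GetTypedComponent(tuple, comp)); }
};

// Primary template: used when the array's concrete type is known.
template <typename ArrayT>
struct AccessorSketch
{
  typedef typename ArrayT::ValueType APIType;
  ArrayT *Array;
  APIType Get(int tuple, int comp) const
  { return this->Array->GetTypedComponent(tuple, comp); }
};

// Specialization: falls back to the generic double-based API.
template <>
struct AccessorSketch<BaseArray>
{
  typedef double APIType;
  BaseArray *Array;
  APIType Get(int tuple, int comp) const
  { return this->Array->GetComponent(tuple, comp); }
};
```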
<br />
Using vtkDataArrayAccessor, we can write a single worker template that works for both vtkDataArray and vtkGenericDataArray, without a loss of performance in the latter case. That worker looks like this:<br />
<br />
<source lang="cpp"><br />
// Better, uses vtkDataArrayAccessor:<br />
struct FindMax<br />
{<br />
vtkIdType Tuple; // Result<br />
int Component; // Result<br />
<br />
FindMax() : Tuple(-1), Component(-1) {}<br />
<br />
template <typename ArrayT><br />
void operator()(ArrayT *array)<br />
{<br />
// Create the accessor:<br />
vtkDataArrayAccessor<ArrayT> access(array);<br />
<br />
// Prepare the temporary. We’ll use the accessor's APIType instead of<br />
// ArrayT::ValueType, since that is appropriate for the vtkDataArray<br />
// fallback:<br />
typedef typename vtkDataArrayAccessor<ArrayT>::APIType ValueType;<br />
ValueType max = std::numeric_limits<ValueType>::lowest();<br />
<br />
// Iterate as before, but use access.Get instead of<br />
// array->GetTypedComponent. GetTypedComponent is still used<br />
// when ArrayT is a vtkGenericDataArray, but <br />
// vtkDataArray::GetComponent is now used as a fallback when ArrayT<br />
// is vtkDataArray.<br />
vtkIdType numTuples = array->GetNumberOfTuples();<br />
int numComps = array->GetNumberOfComponents();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numTuples; ++tupleIdx)<br />
{<br />
for (int compIdx = 0; compIdx < numComps; ++compIdx)<br />
{<br />
if (max < access.Get(tupleIdx, compIdx))<br />
{<br />
max = access.Get(tupleIdx, compIdx);<br />
this->Tuple = tupleIdx;<br />
this->Component = compIdx;<br />
}<br />
}<br />
}<br />
}<br />
};<br />
</source><br />
<br />
Now when we call operator() with say, ArrayT=vtkFloatArray, we’ll get an optimized, efficient code path. But we can also call this same implementation with ArrayT=vtkDataArray and still get a correct result (assuming that the vtkDataArray’s double API represents the data well enough).<br />
<br />
Using the vtkDataArray fallback path is straightforward. At the call site:<br />
<br />
<source lang="cpp"><br />
void someFunction(vtkDataArray *array)<br />
{<br />
FindMax maxWorker;<br />
if (!vtkArrayDispatch::Dispatch::Execute(array, maxWorker))<br />
{<br />
maxWorker(array); // Dispatch failed, call vtkDataArray fallback<br />
}<br />
// Do work using maxWorker.Tuple and maxWorker.Component -- now we know<br />
// for sure that they’re initialized!<br />
}<br />
</source><br />
<br />
Using the above pattern for calling a worker and always going through vtkDataArrayAccessor to Get/Set array elements ensures that any worker implementation can be its own fallback path.<br />
<br />
== VTK_ASSUME ==<br />
<br />
While performance testing the new array classes, we compared the performance of a dispatched worker using the vtkDataArrayAccessor class to the same algorithm using raw memory buffers. We managed to achieve the same performance out of the box for most cases, using both AOS and SOA array implementations. In fact, with --ffast-math optimizations on GCC 4.9, the optimizer is able to remove all function calls and apply SIMD vectorized instructions in the dispatched worker, showing that the new array API is thin enough that the compiler can see the algorithm in terms of memory access.<br />
<br />
But there was one case where performance suffered. If iterating through an AOS data array with a known number of components using GetTypedComponent, the raw pointer implementation initially outperformed the dispatched array. To understand why, note that the AOS implementation of GetTypedComponent is along the lines of:<br />
<br />
<source lang="cpp"><br />
template <typename ValueType><br />
ValueType vtkAOSDataArrayTemplate<ValueType>::GetTypedComponent(<br />
    vtkIdType tuple, int comp) const<br />
{<br />
// AOSData is a ValueType* pointing at the base of the array data.<br />
return this->AOSData[tuple * this->NumberOfComponents + comp];<br />
}<br />
</source><br />
<br />
Because NumberOfComponents is unknown at compile time, the optimizer cannot assume anything about the stride of the components in the array. This leads to missed optimizations for vectorized read/writes and increased complexity in the instructions used to iterate through the data.<br />
<br />
For such cases where the number of components is, in fact, known at compile time (due to a calling function performing some validation, for instance), it is possible to tell the compiler about this fact using VTK_ASSUME.<br />
<br />
VTK_ASSUME wraps a compiler-specific <nowiki>__assume</nowiki> statement, which is used to pass such optimization hints. Its argument is an expression of some condition that is guaranteed to always be true. This allows more aggressive optimizations when used correctly, but be forewarned that if the condition is not met at runtime, the results are unpredictable and likely catastrophic.<br />
<br />
But if we’re writing a filter that only operates on 3D point sets, we know the number of components in the point array will always be 3. In this case we can write:<br />
<br />
<source lang="cpp"><br />
VTK_ASSUME(pointsArray->GetNumberOfComponents() == 3);<br />
</source><br />
<br />
in the worker function and this instructs the compiler that the array’s internal NumberOfComponents variable will always be 3, and thus the stride of the array is known. Of course, the caller of this worker function should ensure that this is a 3-component array and fail gracefully if it is not.<br />
<br />
There are many scenarios where VTK_ASSUME can offer a serious performance boost; the case of a known tuple size is a common one that’s really worth remembering.<br />
<br />
== vtkArrayDispatch == <br />
<br />
The dispatchers implemented in the vtkArrayDispatch namespace provide array dispatching with customizable restrictions on code generation and a simple syntax that hides the messy details of type resolution and multi-array dispatch. There are several "flavors" of dispatch available that operate on up to three arrays simultaneously.<br />
<br />
=== Components Of A Dispatch ===<br />
<br />
Using the vtkArrayDispatch system requires three elements: the array(s), the worker, and the dispatcher.<br />
<br />
==== The Arrays ====<br />
<br />
All dispatched arrays must be subclasses of vtkDataArray. It is important to identify as many restrictions as possible. Must every ArrayType be considered during dispatch, or is the array’s ValueType (or even the ArrayType itself) restricted? If dispatching multiple arrays at once, are they expected to have the same ValueType? These scenarios are common, and these conditions can be used to reduce the number of instantiations of the worker template.<br />
<br />
==== The Worker ====<br />
<br />
The worker is some generic callable. In C++98, a templated functor is a good choice. In C++14, a generic lambda is a usable option as well. For our purposes, we’ll only consider the functor approach, as C++14 is a long way off for core VTK code.<br />
<br />
At a minimum, the worker functor should define operator() to make it callable. This should be a function template with a template parameter for each array it should handle. For a three array dispatch, it should look something like this:<br />
<br />
<source lang="cpp"><br />
struct ThreeArrayWorker<br />
{<br />
template <typename Array1T, typename Array2T, typename Array3T><br />
void operator()(Array1T *array1, Array2T *array2, Array3T *array3)<br />
{<br />
/* Do stuff... */<br />
}<br />
};<br />
</source><br />
<br />
At runtime, the dispatcher will call ThreeArrayWorker::operator() with a set of Array1T, Array2T, and Array3T types that satisfy any dispatch restrictions.<br />
<br />
Workers can be stateful, too, as seen in the earlier FindMax worker, which simply identified the component and tuple id of the largest value in the array and stored them for the caller to use in further analysis:<br />
<br />
<source lang="cpp"><br />
// Example of a stateful dispatch functor:<br />
struct FindMax<br />
{<br />
// Functor state, holds results that are accessible to the caller:<br />
vtkIdType Tuple;<br />
int Component;<br />
<br />
// Set initial values:<br />
FindMax() : Tuple(-1), Component(-1) {}<br />
<br />
// Template method to set Tuple and Component ivars:<br />
template <typename ArrayT><br />
void operator()(ArrayT *array)<br />
{ <br />
/* Do stuff... */<br />
}<br />
};<br />
</source><br />
<br />
==== The Dispatcher ====<br />
<br />
The dispatcher is the workhorse of the system. It is responsible for applying restrictions, resolving array types, and generating the requested template instantiations. It has responsibilities both at run-time and compile-time.<br />
<br />
During compilation, the dispatcher will identify the valid combinations of arrays that can be used according to the restrictions. This is done by starting with a typelist of arrays, either supplied as a template parameter or by defaulting to vtkArrayDispatch::Arrays, and filtering them by ValueType if needed. For multi-array dispatches, additional restrictions may apply, such as forcing the second and third arrays to have the same ValueType as the first. It must then generate the required code for the dispatch -- that is, the templated worker implementation must be instantiated for each valid combination of arrays.<br />
<br />
At runtime, it tests each of the dispatched arrays to see if they match one of the generated code paths. Runtime type resolution is carried out using vtkArrayDownCast to get the best performance available for the arrays of interest. If it finds a match, it calls the worker’s operator() method with the properly typed arrays. If no match is found, it returns false without executing the worker.<br />
<br />
=== Restrictions: Why They Matter ===<br />
<br />
We’ve made several mentions of using restrictions to reduce the number of template instantiations during a dispatch operation. You may be wondering if it really matters so much. Let’s consider some numbers.<br />
<br />
VTK is configured to use 13 ValueTypes for numeric data. These are the standard numeric types float, int, unsigned char, etc. By default, VTK will define vtkArrayDispatch::Arrays to use all 13 types with vtkAOSDataArrayTemplate for the standard set of dispatchable arrays. If enabled during compilation, the SOA data arrays are added to this list for a total of 26 arrays.<br />
<br />
Using these 26 arrays in a single, unrestricted dispatch will result in 26 instantiations of the worker template. A double dispatch will generate 676 workers. A triple dispatch with no restrictions creates a whopping 17,576 functions to handle the possible combinations of arrays. That’s a '''lot''' of instructions to pack into the final binary object.<br />
<br />
Applying some simple restrictions can reduce this immensely. Say we know that the arrays will only contain floats or doubles. This reduces the single dispatch to 4 instantiations, the double dispatch to 16, and the triple to 64 -- a significant reduction in generated code size. We could even apply such a restriction to create fast-paths for floating point types and let the integral types fall back to the vtkDataArray API via vtkDataArrayAccessor. Dispatch restriction is a powerful tool for reducing the compiled size of a binary object.<br />
<br />
Another common restriction is that all arrays in a multi-array dispatch have the same ValueType, even if that ValueType is not known at compile time. By specifying this restriction, a double dispatch on all 26 AOS/SOA arrays will only produce 52 worker instantiations, down from 676. The triple dispatch drops to 104 instantiations from 17,576.<br />
<br />
Always apply restrictions when they are known, especially for multi-array dispatches. The savings are worth it.<br />
<br />
=== Types of Dispatchers ===<br />
<br />
Now that we’ve discussed the components of a dispatch operation, what the dispatchers do, and the importance of restricting dispatches, let’s take a look at the types of dispatchers available.<br />
<br />
----<br />
<br />
==== vtkArrayDispatch::Dispatch ====<br />
<br />
This family of dispatchers takes no parameters and performs an unrestricted dispatch over all arrays in vtkArrayDispatch::Arrays.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::Dispatch -- Single dispatch.<br />
: vtkArrayDispatch::Dispatch2 -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3 -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays in vtkArrayDispatch::Arrays.<br />
<br />
'''Restrictions:''' None.<br />
<br />
'''Usecase:''' Used when no useful information exists that can be used to apply restrictions.<br />
<br />
'''Example Usage:'''<br />
<source lang="cpp"><br />
vtkArrayDispatch::Dispatch::Execute(array, worker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchByArray ====<br />
<br />
This family of dispatchers takes a vtkTypeList of explicit array types to use during dispatching. They should only be used when an array’s exact type is restricted. If dispatching multiple arrays and only one has such type restrictions, use vtkArrayDispatch::Arrays (or a filtered version) for the unrestricted arrays.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::DispatchByArray -- Single dispatch.<br />
: vtkArrayDispatch::Dispatch2ByArray -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3ByArray -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays explicitly listed in the parameter lists.<br />
<br />
'''Restrictions:''' Array must be explicitly listed in the dispatcher’s type.<br />
<br />
'''Usecase:''' Used when one or more arrays have known implementations.<br />
<br />
'''Example Usage:'''<br />
An example here would be a filter that processes an input array of some integral type and produces either a vtkDoubleArray or a vtkFloatArray, depending on some condition. Since the input array’s implementation is unknown (it comes from outside the filter), we’ll rely on a ValueType-filtered version of vtkArrayDispatch::Arrays for its type. However, we know the output array is either vtkDoubleArray or vtkFloatArray, so we’ll want to be sure to apply that restriction:<br />
<br />
<source lang="cpp"><br />
// input has an unknown implementation, but an integral ValueType.<br />
vtkDataArray *input = ...;<br />
<br />
// Output is always either vtkFloatArray or vtkDoubleArray:<br />
vtkDataArray *output = someCondition ? vtkFloatArray::New()<br />
: vtkDoubleArray::New();<br />
<br />
// Define the valid ArrayTypes for input by filtering <br />
// vtkArrayDispatch::Arrays to remove non-integral types:<br />
typedef typename vtkArrayDispatch::FilterArraysByValueType<br />
<<br />
vtkArrayDispatch::Arrays,<br />
vtkArrayDispatch::Integrals<br />
>::Result InputTypes;<br />
<br />
// For output, create a new vtkTypeList with the only two possibilities:<br />
typedef vtkTypeList_Create_2(vtkFloatArray, vtkDoubleArray) OutputTypes;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch2ByArray<br />
<<br />
InputTypes, <br />
OutputTypes<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(input, output, someWorker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchByValueType ====<br />
<br />
This family of dispatchers takes a vtkTypeList of ValueTypes for each array and restricts dispatch to only arrays in vtkArrayDispatch::Arrays that have one of the specified value types.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::DispatchByValueType -- Single dispatch.<br />
: vtkArrayDispatch::Dispatch2ByValueType -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3ByValueType -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays in vtkArrayDispatch::Arrays that meet the ValueType requirements.<br />
<br />
'''Restrictions:''' Arrays that do not satisfy the ValueType requirements are eliminated.<br />
<br />
'''Usecase:''' Used when one or more of the dispatched arrays has an unknown implementation, but a known (or restricted) ValueType.<br />
<br />
'''Example Usage:'''<br />
Here we’ll consider a filter that processes three arrays. The first is a complete unknown. The second is known to hold unsigned char, but we don’t know the implementation. The third holds either doubles or floats, but its implementation is also unknown.<br />
<br />
<source lang="cpp"><br />
// Complete unknown:<br />
vtkDataArray *array1 = ...;<br />
// Some array holding unsigned chars:<br />
vtkDataArray *array2 = ...;<br />
// Some array holding either floats or doubles:<br />
vtkDataArray *array3 = ...;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch3ByValueType<br />
<<br />
vtkArrayDispatch::AllTypes, <br />
vtkTypeList_Create_1(unsigned char),<br />
vtkArrayDispatch::Reals<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(array1, array2, array3, someWorker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchByArrayWithSameValueType ====<br />
<br />
This family of dispatchers takes a vtkTypeList of ArrayTypes for each array and restricts dispatch to only consider arrays from those typelists, with the added requirement that all dispatched arrays share a ValueType.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::Dispatch2ByArrayWithSameValueType -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3ByArrayWithSameValueType -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays in the explicit typelists that meet the ValueType requirements.<br />
<br />
'''Restrictions:''' Combinations of arrays with differing ValueTypes are eliminated.<br />
<br />
'''Usecase:''' When one or more arrays are known to belong to a restricted set of ArrayTypes, and all arrays are known to share the same ValueType, regardless of implementation.<br />
<br />
'''Example Usage:'''<br />
Let’s consider a double array dispatch, with array1 known to be one of four common array types (AOS float, double, int, and vtkIdType arrays) and array2 a complete unknown, although we know that it holds the same ValueType as array1.<br />
<br />
<source lang="cpp"><br />
// AOS float, double, int, or vtkIdType array:<br />
vtkDataArray *array1 = ...;<br />
// Unknown implementation, but the ValueType matches array1:<br />
vtkDataArray *array2 = ...;<br />
<br />
// array1’s possible types:<br />
typedef vtkTypeList_Create_4(vtkFloatArray, vtkDoubleArray,<br />
vtkIntArray, vtkIdTypeArray) Array1Types;<br />
<br />
// array2’s possible types:<br />
typedef typename vtkArrayDispatch::FilterArraysByValueType<br />
<<br />
vtkArrayDispatch::Arrays,<br />
vtkTypeList_Create_4(float, double, int, vtkIdType)<br />
>::Result Array2Types;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch2ByArrayWithSameValueType<br />
<<br />
Array1Types,<br />
Array2Types<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(array1, array2, someWorker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchBySameValueType ====<br />
<br />
This family of dispatchers takes a single vtkTypeList of ValueTypes and restricts dispatch to only consider arrays from vtkArrayDispatch::Arrays with those ValueTypes, with the added requirement that all dispatched arrays share a ValueType.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::Dispatch2BySameValueType -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3BySameValueType -- Triple dispatch.<br />
: vtkArrayDispatch::Dispatch2SameValueType -- Double dispatch using vtkArrayDispatch::AllTypes.<br />
: vtkArrayDispatch::Dispatch3SameValueType -- Triple dispatch using vtkArrayDispatch::AllTypes.<br />
<br />
'''Arrays considered:''' All arrays in vtkArrayDispatch::Arrays that meet the ValueType requirements.<br />
<br />
'''Restrictions:''' Combinations of arrays with differing ValueTypes are eliminated.<br />
<br />
'''Usecase:''' When one or more arrays are known to belong to a restricted set of ValueTypes, and all arrays are known to share the same ValueType, regardless of implementation.<br />
<br />
'''Example Usage:'''<br />
Let’s consider a double array dispatch, with array1 known to hold one of four common ValueTypes (float, double, int, or vtkIdType), and array2 known to have the same ValueType as array1.<br />
<br />
<source lang="cpp"><br />
// Some float, double, int, or vtkIdType array:<br />
vtkDataArray *array1 = ...;<br />
// Unknown, but the ValueType matches array1:<br />
vtkDataArray *array2 = ...;<br />
<br />
// The allowed ValueTypes:<br />
typedef vtkTypeList_Create_4(float, double, int, vtkIdType) ValidValueTypes;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch2BySameValueType<br />
<<br />
ValidValueTypes<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(array1, array2, someWorker);<br />
</source><br />
<br />
== Advanced Usage ==<br />
<br />
=== Accessing Memory Buffers ===<br />
<br />
Although the vtkGenericDataArray API is thin enough that compilers can optimize memory accesses made through it, there are still legitimate reasons to access the underlying memory buffer directly. This can be done safely by providing overloads of your worker’s operator() method. For instance, vtkDataArray::DeepCopy uses a generic implementation when mixed array implementations are used, but has optimized overloads for copying between arrays with the same ValueType and implementation. The worker for this dispatch is shown below as an example:<br />
<br />
<source lang="cpp"><br />
// Copy tuples from src to dest:<br />
struct DeepCopyWorker<br />
{<br />
// AoS --> AoS same-type specialization:<br />
template <typename ValueType><br />
void operator()(vtkAOSDataArrayTemplate<ValueType> *src,<br />
vtkAOSDataArrayTemplate<ValueType> *dst)<br />
{<br />
std::copy(src->Begin(), src->End(), dst->Begin());<br />
}<br />
<br />
// SoA --> SoA same-type specialization:<br />
template <typename ValueType><br />
void operator()(vtkSOADataArrayTemplate<ValueType> *src,<br />
vtkSOADataArrayTemplate<ValueType> *dst)<br />
{<br />
vtkIdType numTuples = src->GetNumberOfTuples();<br />
for (int comp = 0; comp < src->GetNumberOfComponents(); ++comp)<br />
{<br />
ValueType *srcBegin = src->GetComponentArrayPointer(comp);<br />
ValueType *srcEnd = srcBegin + numTuples;<br />
ValueType *dstBegin = dst->GetComponentArrayPointer(comp);<br />
<br />
std::copy(srcBegin, srcEnd, dstBegin);<br />
}<br />
}<br />
<br />
// Generic implementation:<br />
template <typename Array1T, typename Array2T><br />
void operator()(Array1T *src, Array2T *dst)<br />
{<br />
vtkDataArrayAccessor<Array1T> s(src);<br />
vtkDataArrayAccessor<Array2T> d(dst);<br />
<br />
typedef typename vtkDataArrayAccessor<Array2T>::APIType DestType;<br />
<br />
vtkIdType tuples = src->GetNumberOfTuples();<br />
int comps = src->GetNumberOfComponents();<br />
<br />
for (vtkIdType t = 0; t < tuples; ++t)<br />
{<br />
for (int c = 0; c < comps; ++c)<br />
{<br />
d.Set(t, c, static_cast<DestType>(s.Get(t, c)));<br />
}<br />
}<br />
}<br />
};<br />
</source><br />
<br />
== Putting It All Together ==<br />
<br />
Now that we’ve explored the new tools introduced with VTK 7.1 that allow efficient, implementation agnostic array access, let’s take another look at the calcMagnitude example from before and identify the key features of the implementation:<br />
<br />
<source lang="cpp"><br />
// Modern implementation of calcMagnitude using new concepts in VTK 7.1:<br />
struct CalcMagnitudeWorker<br />
{<br />
template <typename VectorArray, typename MagnitudeArray><br />
void operator()(VectorArray *vectors, MagnitudeArray *magnitude)<br />
{<br />
VTK_ASSUME(vectors->GetNumberOfComponents() == 3);<br />
VTK_ASSUME(magnitude->GetNumberOfComponents() == 1);<br />
<br />
vtkDataArrayAccessor<VectorArray> v(vectors);<br />
vtkDataArrayAccessor<MagnitudeArray> m(magnitude);<br />
<br />
vtkIdType numVectors = vectors->GetNumberOfTuples();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
m.Set(tupleIdx, 0, std::sqrt(v.Get(tupleIdx, 0) * v.Get(tupleIdx, 0) +<br />
v.Get(tupleIdx, 1) * v.Get(tupleIdx, 1) +<br />
v.Get(tupleIdx, 2) * v.Get(tupleIdx, 2)));<br />
}<br />
}<br />
};<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
CalcMagnitudeWorker worker;<br />
typedef vtkArrayDispatch::Dispatch2ByValueType<br />
<<br />
vtkArrayDispatch::AllTypes,<br />
vtkArrayDispatch::Reals<br />
> Dispatcher;<br />
<br />
if (!Dispatcher::Execute(vectors, magnitude, worker))<br />
{<br />
worker(vectors, magnitude); // vtkDataArray fallback<br />
}<br />
}<br />
</source><br />
<br />
This implementation:<br />
<br />
; Uses dispatch restrictions to reduce the number of instantiated templated worker functions.<br />
: Assuming 26 types are in vtkArrayDispatch::Arrays (13 AOS + 13 SOA).<br />
: The first array is unrestricted. All 26 array types are considered.<br />
: The second array is restricted to float or double ValueTypes, which translates to 4 array types (AOS and SOA versions of each).<br />
: 26 * 4 = 104 possible combinations exist. We’ve eliminated 26 * 22 = 572 combinations that an unrestricted double-dispatch would have generated (it would create 676 instantiations).<br />
; The calculation is still carried out at double precision when the ValueType restrictions are not met.<br />
: Just because we don’t want those other 572 cases to have special code generated doesn’t necessarily mean that we wouldn't want them to run.<br />
: Thanks to vtkDataArrayAccessor, we have a fallback implementation that reuses our templated worker code.<br />
: In this case, the dispatch is really just a fast-path implementation for floating point output types.<br />
; The performance should be identical to iterating through raw memory buffers.<br />
: The vtkGenericDataArray API is transparent to the compiler. The specialized instantiations of operator() can be heavily optimized since the memory access patterns are known and well-defined.<br />
: Using VTK_ASSUME tells the compiler that the arrays have known strides, allowing further compile-time optimizations.<br />
<br />
Hopefully this has convinced you that vtkArrayDispatch and the related tools are worth using to create flexible, efficient, typesafe implementations for your work with VTK. Please direct any questions you may have on the subject to the VTK mailing lists.</div>
<hr />
<div>== Background ==<br />
<br />
VTK datasets store most of their important information in subclasses of vtkDataArray. Vertex locations (vtkPoints::Data), cell topology (vtkCellArray::Ia), and numeric point, cell, and generic attributes (vtkFieldData::Data) are the dataset features accessed most frequently by VTK algorithms, and these all rely on the vtkDataArray API.<br />
<br />
== Terminology ==<br />
<br />
This page uses the following terms:<br />
<br />
A '''ValueType''' is the element type of an array. For instance, vtkFloatArray has a ValueType of float.<br />
<br />
An '''ArrayType''' is a subclass of vtkDataArray. It specifies not only a ValueType, but an array implementation as well. This becomes important as vtkDataArray subclasses will begin to stray from the typical "array-of-structs" ordering that has been exclusively used in the past.<br />
<br />
A '''dispatch''' is a runtime-resolution of a vtkDataArray’s ArrayType, and is used to call a section of executable code that has been tailored for that ArrayType. Dispatching has compile-time and run-time components. At compile-time, the possible ArrayTypes to be used are determined and a worker code template is generated for each type. At run-time, the type of a specific array is determined and the proper worker instantiation is called.<br />
<br />
'''Template explosion''' refers to a sharp increase in the size of a compiled binary that results from instantiating a template function or class on many different types.<br />
<br />
=== vtkDataArray ===<br />
<br />
The data array type hierarchy in VTK has a unique feature when compared to typical C++ containers: a non-templated base class. All arrays containing numeric data inherit vtkDataArray, a common interface that sports a very useful API. Without knowing the underlying ValueType stored in data array, an algorithm or user may still work with any vtkDataArray in meaningful ways: The array can be resized, reshaped, read, and rewritten easily using a generic API that substitutes double-precision floating point numbers for the array’s actual ValueType. For instance, we can write a simple function that computes the magnitudes for a set of vectors in one array and store the results in another using nothing but the typeless vtkDataArray API:<br />
<br />
<source lang="cpp"><br />
// 3 component magnitude calculation using the vtkDataArray API.<br />
// Inefficient, but easy to write:<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
vtkIdType numVectors = vectors->GetNumberOfTuples();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
// What data types are magnitude and vectors using?<br />
// We don’t care! These methods all use double.<br />
magnitude->SetComponent(tupleIdx, 0,<br />
std::sqrt(vectors->GetComponent(tupleIdx, 0) *<br />
vectors->GetComponent(tupleIdx, 0) +<br />
vectors->GetComponent(tupleIdx, 1) *<br />
vectors->GetComponent(tupleIdx, 1) +<br />
vectors->GetComponent(tupleIdx, 2) *<br />
vectors->GetComponent(tupleIdx, 2)); <br />
}<br />
}<br />
</source><br />
<br />
=== The Costs of Flexibility ===<br />
<br />
However, this flexibility comes at a cost. Passing data through a generic API has a number of issues:<br />
<br />
;Accuracy<br />
: Not all ValueTypes are fully expressible as a double. The truncation of integers with > 52 bits of precision can be a particularly nasty issue.<br />
;Performance<br />
: Virtual overhead: The only way to implement such a system is to route the vtkDataArray calls through a run-time resolution of ValueTypes. This is implemented through the virtual override mechanism of C++, which adds a small overhead to each API call.<br />
: Missed optimization: The virtual indirection described above also prevents the compiler from being able to make assumptions about the layout of the data in-memory. This information could be used to perform advanced optimizations, such as vectorization.<br />
<br />
So what can one do if they want fast, optimized, type-safe access to the data stored in a vtkDataArray? What options are available?<br />
<br />
=== The Old Solution: vtkTemplateMacro ===<br />
<br />
The vtkTemplateMacro is described in this section. While it is no longer considered a best practice to use this construct in new code, it is still usable and likely to be encountered when reading the VTK source code. Newer code should use the vtkArrayDispatch mechanism, which is detailed later. The discussion of vtkTemplateMacro will help illustrate some of the practical issues with array dispatching.<br />
<br />
With a few minor exceptions that we won’t consider here, prior to VTK 7.1 it was safe to assume that all numeric vtkDataArray objects were also subclasses of vtkDataArrayTemplate. This template class provided the implementation of all documented numeric data arrays such as vtkDoubleArray, vtkIdTypeArray, etc, and stores the tuples in memory as a contiguous array-of-structs (AOS). For example, if we had an array that stored 3-component tuples as floating point numbers, we could define a tuple as:<br />
<br />
<source lang="cpp"><br />
struct Tuple { float x; float y; float z; };<br />
</source><br />
<br />
An array-of-structs, or AOS, memory buffer containing this data could be described as:<br />
<br />
<source lang="cpp"><br />
Tuple ArrayOfStructsBuffer[NumTuples];<br />
</source><br />
<br />
As a result, ArrayOfStructsBuffer will have the following memory layout:<br />
<br />
<source lang="cpp"><br />
{ x1, y1, z1, x2, y2, z2, x3, y3, z3, ...}<br />
</source><br />
<br />
That is, the components of each tuple are stored in adjacent memory locations, one tuple after another. While this is not exactly how vtkDataArrayTemplate implemented its memory buffers, it accurately describes the resulting memory layout.<br />
<br />
vtkDataArray also defines a GetDataType method, which returns an enumerated value describing a type. We can used to discover the ValueType stored in the array.<br />
<br />
Combine the AOS memory convention and GetDataType() with a convenient method on the data arrays named “GetVoidPointer()”, and a path to efficient, type-safe access was available. GetVoidPointer() does what it says on the tin: it returns the memory address for the array data’s base location as a void*. While this breaks encapsulation and sets off warning bells for the more pedantic among us, the following technique was safe and efficient when used correctly:<br />
<br />
<source lang="cpp"><br />
// 3-component magnitude calculation using GetVoidPointer.<br />
// Efficient and fast, but assumes AOS memory layout<br />
template <typename ValueType><br />
void calcMagnitudeWorker(ValueType *vectors, ValueType *magnitude,<br />
vtkIdType numVectors)<br />
{<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
// We now have access to the raw memory buffers, and assuming<br />
// AOS memory layout, we know how to access them.<br />
magnitude[tupleIdx] = <br />
std::sqrt(vectors[3 * tupleIdx + 0] *<br />
vectors[3 * tupleIdx + 0] +<br />
vectors[3 * tupleIdx + 1] *<br />
vectors[3 * tupleIdx + 1] +<br />
vectors[3 * tupleIdx + 2] *<br />
vectors[3 * tupleIdx + 2]); <br />
}<br />
}<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
assert(“Arrays must have same datatype!” && <br />
vtkDataTypesCompare(vectors->GetDataType(),<br />
magnitude->GetDataType()));<br />
switch (vectors->GetDataType())<br />
{<br />
vtkTemplateMacro(calcMagnitudeWorker<VTK_TT*>(<br />
static_cast<VTK_TT*>(vectors->GetVoidPointer(0)),<br />
static_cast<VTK_TT*>(magnitude->GetVoidPointer(0)),<br />
vectors->GetNumberOfTuples());<br />
}<br />
}<br />
</source><br />
<br />
The vtkTemplateMacro, as you may have guessed, expands into a series of case statements that determine an array’s ValueType from the ‘int GetDataType()’ return value. The ValueType is then typedef’d to VTK_TT, and the macro’s argument is called for each numeric type returned from GetDataType. In this case, the call to calcMagnitudeWorker is made by the macro, with VTK_TT typedef’d to the array’s ValueType.<br />
<br />
This is the typical usage pattern for vtkTemplateMacro. The calcMagnitude function calls a templated worker implementation that uses efficient, raw memory access to a typesafe memory buffer so that the worker’s code can be as efficient as possible. But this assumes AOS memory ordering, and as we’ll mention, this assumption may no longer be valid as VTK moves further into the field of in-situ analysis.<br />
<br />
But first, you may have noticed that the above example using vtkTemplateMacro has introduced a step backwards in terms of functionality. In the vtkDataArray implementation, we didn’t care if both arrays were the same ValueType, but now we have to ensure this, since we cast both arrays’ void pointers to VTK_TT*. What if vectors is an array of integers, but we want to calculate floating point magnitudes? <br />
<br />
=== vtkTemplateMacro with Multiple Arrays ===<br />
<br />
The best solution prior to VTK 7.1 was to use two worker functions. The first is templated on the ValueType of vectors, and the second is templated on both arrays’ ValueTypes:<br />
<br />
<source lang="cpp"><br />
// 3-component magnitude calculation using GetVoidPointer and a<br />
// double-dispatch to resolve ValueTypes of both arrays.<br />
// Efficient and fast, but assumes AOS memory layout, requires lots of<br />
// boilerplate code, and increases sensitivity to template explosion issues.<br />
template <typename VectorType, typename MagnitudeType><br />
void calcMagnitudeWorker2(VectorType *vectors, MagnitudeType *magnitude,<br />
                          vtkIdType numVectors)<br />
{<br />
  for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
  {<br />
    // We now have access to the raw memory buffers, and assuming<br />
    // AOS memory layout, we know how to access them.<br />
    magnitude[tupleIdx] =<br />
      std::sqrt(vectors[3 * tupleIdx + 0] *<br />
                vectors[3 * tupleIdx + 0] +<br />
                vectors[3 * tupleIdx + 1] *<br />
                vectors[3 * tupleIdx + 1] +<br />
                vectors[3 * tupleIdx + 2] *<br />
                vectors[3 * tupleIdx + 2]);<br />
  }<br />
}<br />
<br />
// The vectors ValueType is known (VectorType); now use vtkTemplateMacro on<br />
// magnitude:<br />
template <typename VectorType><br />
void calcMagnitudeWorker1(VectorType *vectors, vtkDataArray *magnitude,<br />
                          vtkIdType numVectors)<br />
{<br />
  switch (magnitude->GetDataType())<br />
  {<br />
    vtkTemplateMacro(calcMagnitudeWorker2(vectors,<br />
      static_cast<VTK_TT*>(magnitude->GetVoidPointer(0)), numVectors));<br />
  }<br />
}<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
  // Dispatch vectors first:<br />
  switch (vectors->GetDataType())<br />
  {<br />
    vtkTemplateMacro(calcMagnitudeWorker1(<br />
      static_cast<VTK_TT*>(vectors->GetVoidPointer(0)),<br />
      magnitude, vectors->GetNumberOfTuples()));<br />
  }<br />
}<br />
</source><br />
<br />
This works well, but it’s a bit ugly and has the same issue as before regarding memory layout. Double dispatches using this method will also see more problems regarding binary size. The number of template instantiations that the compiler needs to generate is determined by <math>I = T^D</math>, where I is the number of template instantiations, T is the number of types considered, and D is the number of dispatches. As of VTK 7.1, vtkTemplateMacro considers 14 data types, so this double-dispatch will produce 14 instantiations of calcMagnitudeWorker1 and 196 instantiations of calcMagnitudeWorker2. If we tried to resolve 3 vtkDataArrays into raw C arrays, 2744 instantiations of the final worker function would be generated. As more arrays are considered, the need for some form of restricted dispatch becomes very important to keep this template explosion in check.<br />
<br />
== Data Array Changes in VTK 7.1 ==<br />
<br />
Starting with VTK 7.1, the Array-Of-Structs (AOS) memory layout is no longer the only vtkDataArray implementation provided by the library. The Struct-Of-Arrays (SOA) memory layout is now available through the vtkSOADataArrayTemplate class. The SOA layout stores each component of an array in a separate buffer, as in:<br />
<br />
<source lang="cpp"><br />
struct StructOfArraysBuffer<br />
{<br />
  float *x; // Pointer to the array containing x components<br />
  float *y; // Same for y<br />
  float *z; // Same for z<br />
};<br />
</source><br />
<br />
The new SOA arrays were added to improve interoperability between VTK and simulation packages for live visualization of in-situ results. Many simulations use the SOA layout for their data, and natively supporting these arrays in VTK will allow analysis of live data without the need to explicitly copy it into a VTK data structure.<br />
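For example, a simulation’s existing SOA buffers can be wrapped into VTK without copying using vtkSOADataArrayTemplate::SetArray. The following is only a sketch -- the buffer names (simX, simY, simZ) and numTuples are hypothetical, and the optional SetArray arguments should be checked against the VTK version in use:<br />
<br />
<source lang="cpp"><br />
// Zero-copy wrap of three simulation-owned component buffers:<br />
vtkNew<vtkSOADataArrayTemplate<float> > velocity;<br />
velocity->SetNumberOfComponents(3);<br />
<br />
// 'save = true' tells VTK not to free the simulation's memory when the<br />
// array is destroyed:<br />
velocity->SetArray(0, simX, numTuples, /*updateMaxId=*/true, /*save=*/true);<br />
velocity->SetArray(1, simY, numTuples, /*updateMaxId=*/true, /*save=*/true);<br />
velocity->SetArray(2, simZ, numTuples, /*updateMaxId=*/true, /*save=*/true);<br />
</source><br />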
<br />
As a result of this change, a new mechanism is needed to efficiently access array data. vtkTemplateMacro and GetVoidPointer are no longer an acceptable solution -- implementing GetVoidPointer for SOA arrays requires creating a deep copy of the data into a new AOS buffer, a waste of both processor time and memory. <br />
<br />
So we need a replacement for vtkTemplateMacro that can abstract away things like storage details while providing performance that is on-par with raw memory buffer operations. And while we’re at it, let’s look at removing the tedium of multi-array dispatch and reducing the problem of 'template explosion'. The remainder of this page details such a system.<br />
<br />
== Best Practices for vtkDataArray Post-7.1 ==<br />
<br />
We’ll describe a new set of tools that make managing template instantiations for efficient array access both easy and extensible. As an overview, the following new features will be discussed:<br />
<br />
* '''vtkGenericDataArray''' The new templated base interface for all numeric vtkDataArray subclasses.<br />
* '''vtkArrayDispatch''' Collection of code generation tools that allow concise and precise specification of restrictable dispatch for up to 3 arrays simultaneously.<br />
* '''vtkArrayDownCast''' Access to specialized downcast implementations from code templates.<br />
* '''vtkDataArrayAccessor''' Provides Get and Set methods for accessing/modifying array data as efficiently as possible. Allows a single worker implementation to work efficiently with vtkGenericDataArray subclasses, or fallback to use the vtkDataArray API if needed.<br />
* '''VTK_ASSUME''' New abstraction for the compiler <nowiki>__assume</nowiki> directive to provide optimization hints.<br />
<br />
These will be discussed more fully, but as a preview, here’s our familiar calcMagnitude example implemented using these new tools:<br />
<br />
<source lang="cpp"><br />
// Modern implementation of calcMagnitude using new concepts in VTK 7.1:<br />
// A worker functor. The calculation is implemented in the function template<br />
// for operator().<br />
struct CalcMagnitudeWorker<br />
{<br />
// The worker accepts VTK array objects now, not raw memory buffers.<br />
template <typename VectorArray, typename MagnitudeArray><br />
void operator()(VectorArray *vectors, MagnitudeArray *magnitude)<br />
{<br />
// This allows the compiler to optimize for the AOS array stride.<br />
VTK_ASSUME(vectors->GetNumberOfComponents() == 3);<br />
VTK_ASSUME(magnitude->GetNumberOfComponents() == 1);<br />
<br />
// These allow this single worker function to be used with both<br />
// the vtkDataArray 'double' API and the more efficient <br />
// vtkGenericDataArray APIs, depending on the template parameters:<br />
vtkDataArrayAccessor<VectorArray> v(vectors);<br />
vtkDataArrayAccessor<MagnitudeArray> m(magnitude);<br />
<br />
vtkIdType numVectors = vectors->GetNumberOfTuples();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
// Set and Get compile to inlined optimizable raw memory accesses for<br />
// vtkGenericDataArray subclasses.<br />
m.Set(tupleIdx, 0, std::sqrt(v.Get(tupleIdx, 0) * v.Get(tupleIdx, 0) +<br />
v.Get(tupleIdx, 1) * v.Get(tupleIdx, 1) +<br />
v.Get(tupleIdx, 2) * v.Get(tupleIdx, 2)));<br />
}<br />
}<br />
};<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
// Create our worker functor:<br />
CalcMagnitudeWorker worker;<br />
<br />
// Define our dispatcher. We’ll let vectors have any ValueType, but only<br />
// consider float/double arrays for magnitudes. These combinations will<br />
// use a 'fast-path' implementation generated by the dispatcher:<br />
typedef vtkArrayDispatch::Dispatch2ByValueType<br />
<<br />
vtkArrayDispatch::AllTypes, // ValueTypes allowed by first array<br />
vtkArrayDispatch::Reals // ValueTypes allowed by second array<br />
> Dispatcher;<br />
<br />
// Execute the dispatcher:<br />
if (!Dispatcher::Execute(vectors, magnitude, worker))<br />
{<br />
// If Execute() fails, it means the dispatch failed due to an<br />
// unsupported array type. In this case, it’s likely that the magnitude<br />
// array is using an integral type. This is an uncommon case, so we won’t<br />
// generate a fast path for these, but instead call an instantiation of <br />
// CalcMagnitudeWorker::operator()<vtkDataArray, vtkDataArray>.<br />
// Through the use of vtkDataArrayAccessor, this falls back to using the<br />
// vtkDataArray double API:<br />
worker(vectors, magnitude);<br />
}<br />
}<br />
</source><br />
<br />
== vtkGenericDataArray ==<br />
<br />
The vtkGenericDataArray class template drives the new vtkDataArray class hierarchy. The ValueType is introduced here, both as a template parameter and a class-scope typedef. This allows a typed API to be written that doesn’t require conversion to/from a common type (as vtkDataArray does with double). It does not implement any storage details, however. Instead, it uses the CRTP idiom to forward key method calls to a derived class without using a virtual function call. By eliminating this indirection, vtkGenericDataArray defines an interface that can be used to implement highly efficient code, because the compiler is able to see past the method calls and optimize the underlying memory accesses instead.<br />
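The pattern can be illustrated with a small, self-contained sketch (the class and method names here are invented for illustration and are not the real VTK interface):<br />
<br />
<source lang="cpp"><br />
#include <cassert><br />
#include <vector><br />
<br />
// CRTP: the base class forwards GetValue to the derived class through a<br />
// static_cast resolved at compile time -- no virtual call is involved, so<br />
// the compiler can inline the underlying memory access.<br />
template <typename DerivedT, typename ValueTypeT><br />
class GenericArraySketch<br />
{<br />
public:<br />
  typedef ValueTypeT ValueType;<br />
<br />
  ValueType GetValue(int idx) const<br />
  {<br />
    return static_cast<const DerivedT*>(this)->GetValueImpl(idx);<br />
  }<br />
};<br />
<br />
// An AOS-style subclass supplying the actual storage:<br />
class AOSArraySketch : public GenericArraySketch<AOSArraySketch, float><br />
{<br />
public:<br />
  std::vector<float> Data;<br />
  float GetValueImpl(int idx) const { return this->Data[idx]; }<br />
};<br />
<br />
int main()<br />
{<br />
  AOSArraySketch array;<br />
  array.Data.push_back(1.5f);<br />
  array.Data.push_back(2.5f);<br />
  assert(array.GetValue(1) == 2.5f); // Resolved without a virtual call<br />
  return 0;<br />
}<br />
</source><br />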
<br />
There are two main subclasses of vtkGenericDataArray: vtkAOSDataArrayTemplate and vtkSOADataArrayTemplate. These implement array-of-structs and struct-of-arrays storage, respectively.<br />
<br />
== vtkTypeList ==<br />
<br />
Type lists are a metaprogramming construct used to generate a list of C++ types. They are used in VTK to implement restricted array dispatching. As we’ll see, vtkArrayDispatch offers ways to reduce the number of generated template instantiations by enforcing constraints on the arrays used to dispatch. For instance, if one wanted to only generate templated worker implementations for vtkFloatArray and vtkIntArray, a typelist is used to specify this:<br />
<br />
<source lang="cpp"><br />
// Create a typelist of 2 types, vtkFloatArray and vtkIntArray:<br />
typedef vtkTypeList_Create_2(vtkFloatArray, vtkIntArray) MyArrays;<br />
<br />
Worker someWorker = ...;<br />
vtkDataArray *someArray = ...;<br />
<br />
// Use vtkArrayDispatch to generate code paths for these arrays:<br />
vtkArrayDispatch::DispatchByArray<MyArrays>(someArray, someWorker);<br />
</source><br />
<br />
There’s not much to know about type lists as a user, other than how to create them. As seen above, there is a set of macros named vtkTypeList_Create_X, where X is the number of types in the created list and the arguments are the types to place in it. As in the example above, the new type list is usually bound to a friendlier name with a local typedef.<br />
<br />
The vtkTypeList.h header defines some additional type list operations that may be useful, such as deleting and appending types, looking up indices, etc. vtkArrayDispatch::FilterArraysByValueType may come in handy, too. But for working with array dispatches, most users will only need to create new ones, or use one of the following predefined vtkTypeLists:<br />
<br />
* vtkArrayDispatch::Reals -- All floating point ValueTypes.<br />
* vtkArrayDispatch::Integrals -- All integral ValueTypes.<br />
* vtkArrayDispatch::AllTypes -- Union of Reals and Integrals.<br />
* vtkArrayDispatch::Arrays -- Default list of ArrayTypes to use in dispatches.<br />
<br />
The last one is special -- vtkArrayDispatch::Arrays is a type list of ArrayTypes set application-wide when VTK is built. This vtkTypeList of vtkDataArray subclasses is used for unrestricted dispatches, and is the list that gets filtered when restricting a dispatch to specific ValueTypes. <br />
<br />
Refining this list allows the user building VTK to have some control over the dispatch process. If SOA arrays are never going to be used, they can be removed from dispatch calls, reducing compile times and binary size. On the other hand, a user applying in-situ techniques may want them available, because they’ll be used to import views of intermediate results.<br />
<br />
By default, vtkArrayDispatch::Arrays contains all AOS arrays. The CMake option VTK_DISPATCH_SOA_ARRAYS will enable SOA array dispatch as well. More advanced possibilities exist and are described in VTK/CMake/vtkCreateArrayDispatchArrayList.cmake.<br />
<br />
== vtkArrayDownCast ==<br />
<br />
In VTK, all subclasses of vtkObject (including the data arrays) support a downcast method called SafeDownCast. It is used similarly to the C++ dynamic_cast -- given an object, try to cast it to a more derived type or return NULL if the object is not the requested type. Say we have a vtkDataArray and want to test if it is actually a vtkFloatArray. We can do this:<br />
<br />
<source lang="cpp"><br />
void DoSomeAction(vtkDataArray *dataArray)<br />
{<br />
  vtkFloatArray *floatArray = vtkFloatArray::SafeDownCast(dataArray);<br />
  if (floatArray)<br />
  {<br />
    // ... (do work with float array)<br />
  }<br />
}<br />
</source><br />
<br />
This works, but it can pose a serious problem if DoSomeAction is called repeatedly. SafeDownCast works by performing a series of virtual calls and string comparisons to determine if an object falls into a particular class hierarchy. These string comparisons add up and can actually dominate computational resources if an algorithm implementation calls SafeDownCast in a tight loop.<br />
<br />
In such situations, it’s ideal to restructure the algorithm so that the downcast only happens once and the same result is used repeatedly, but sometimes this is not possible. To lessen the cost of downcasting arrays, a FastDownCast method exists for common subclasses of vtkAbstractArray. This replaces the string comparisons with a single virtual call and a few integer comparisons and is far cheaper than the more general SafeDownCast. However, not all array implementations support the FastDownCast method.<br />
<br />
This creates a headache for templated code. Take the following example:<br />
<br />
<source lang="cpp"><br />
template <typename ArrayType><br />
void DoSomeAction(vtkAbstractArray *array)<br />
{<br />
  ArrayType *myArray = ArrayType::SafeDownCast(array);<br />
  if (myArray)<br />
  {<br />
    // ... (do work with myArray)<br />
  }<br />
}<br />
</source><br />
<br />
We cannot use FastDownCast here since not all possible ArrayTypes support it. But we really want that performance increase for the ones that do -- SafeDownCasts are really slow! vtkArrayDownCast fixes this issue:<br />
<br />
<source lang="cpp"><br />
template <typename ArrayType><br />
void DoSomeAction(vtkAbstractArray *array)<br />
{<br />
  ArrayType *myArray = vtkArrayDownCast<ArrayType>(array);<br />
  if (myArray)<br />
  {<br />
    // ... (do work with myArray)<br />
  }<br />
}<br />
</source><br />
<br />
vtkArrayDownCast automatically selects FastDownCast when it is defined for the ArrayType, and otherwise falls back to SafeDownCast. This is the preferred array downcast method for performance, uniformity, and reliability.<br />
<br />
== vtkDataArrayAccessor ==<br />
<br />
Array dispatching relies on having templated worker code carry out some operation. For instance, take this vtkArrayDispatch code that locates the maximum value in an array:<br />
<br />
<source lang="cpp"><br />
// Stores the tuple/component coordinates of the maximum value:<br />
struct FindMax<br />
{<br />
  vtkIdType Tuple; // Result<br />
  int Component; // Result<br />
<br />
  FindMax() : Tuple(-1), Component(-1) {}<br />
<br />
  template <typename ArrayT><br />
  void operator()(ArrayT *array)<br />
  {<br />
    // The type to use for temporaries, and a temporary to store the<br />
    // current maximum value. Note the use of lowest() (C++11) rather than<br />
    // min(), which for floating point types returns the smallest positive<br />
    // value instead of the most negative one:<br />
    typedef typename ArrayT::ValueType ValueType;<br />
    ValueType max = std::numeric_limits<ValueType>::lowest();<br />
<br />
    // Iterate through all tuples and components, noting the location<br />
    // of the largest element found.<br />
    vtkIdType numTuples = array->GetNumberOfTuples();<br />
    int numComps = array->GetNumberOfComponents();<br />
    for (vtkIdType tupleIdx = 0; tupleIdx < numTuples; ++tupleIdx)<br />
    {<br />
      for (int compIdx = 0; compIdx < numComps; ++compIdx)<br />
      {<br />
        if (max < array->GetTypedComponent(tupleIdx, compIdx))<br />
        {<br />
          max = array->GetTypedComponent(tupleIdx, compIdx);<br />
          this->Tuple = tupleIdx;<br />
          this->Component = compIdx;<br />
        }<br />
      }<br />
    }<br />
  }<br />
};<br />
<br />
void someFunction(vtkDataArray *array)<br />
{<br />
  FindMax maxWorker;<br />
  vtkArrayDispatch::Dispatch::Execute(array, maxWorker);<br />
  // Do work using maxWorker.Tuple and maxWorker.Component...<br />
}<br />
</source><br />
<br />
There’s a problem, though. Recall that only the arrays in vtkArrayDispatch::Arrays are tested for dispatching. What happens if the array passed into someFunction wasn’t on that list?<br />
<br />
The dispatch will fail, and maxWorker.Tuple and maxWorker.Component will be left at their initial values of -1. That’s no good. What if someFunction is a critical path where we want to use a fast dispatched worker if possible, but still have valid results to use if dispatching fails? Well, we can fall back on the vtkDataArray API and do things the slow way in that case. When a dispatcher is given an unsupported array, it returns false, so let’s just add a backup implementation:<br />
<br />
<source lang="cpp"><br />
// Stores the tuple/component coordinates of the maximum value:<br />
struct FindMax<br />
{ /* As before... */ };<br />
<br />
void someFunction(vtkDataArray *array)<br />
{<br />
  FindMax maxWorker;<br />
  if (!vtkArrayDispatch::Dispatch::Execute(array, maxWorker))<br />
  {<br />
    // Reimplement FindMax::operator(), but use the vtkDataArray API's<br />
    // "virtual double GetComponent()" instead of the more efficient<br />
    // "ValueType GetTypedComponent()" from vtkGenericDataArray.<br />
  }<br />
}<br />
</source><br />
<br />
Ok, that works. But ugh...why write the same algorithm twice? That’s extra debugging, extra testing, extra maintenance burden, and just plain not fun. <br />
<br />
Enter vtkDataArrayAccessor. This utility template does a very simple, yet useful, job. It provides component and tuple based Get and Set methods that will call the corresponding method on the array using either the vtkDataArray or vtkGenericDataArray API, depending on the class’s template parameter. It also defines an APIType, which can be used to allocate temporaries, etc. This type is double for vtkDataArrays and vtkGenericDataArray::ValueType for vtkGenericDataArrays.<br />
<br />
Another nice benefit is that vtkDataArrayAccessor has a more compact API. The only defined methods are Get and Set, and they’re overloaded to work on either tuples or components (though component access is encouraged, as it is much more efficient). Note that all non-element operations (such as GetNumberOfTuples) should still be called on the array pointer using the vtkDataArray API.<br />
<br />
Using vtkDataArrayAccessor, we can write a single worker template that works for both vtkDataArray and vtkGenericDataArray, without a loss of performance in the latter case. That worker looks like this:<br />
<br />
<source lang="cpp"><br />
// Better, uses vtkDataArrayAccessor:<br />
struct FindMax<br />
{<br />
  vtkIdType Tuple; // Result<br />
  int Component; // Result<br />
<br />
  FindMax() : Tuple(-1), Component(-1) {}<br />
<br />
  template <typename ArrayT><br />
  void operator()(ArrayT *array)<br />
  {<br />
    // Create the accessor:<br />
    vtkDataArrayAccessor<ArrayT> access(array);<br />
<br />
    // Prepare the temporary. We’ll use the accessor's APIType instead of<br />
    // ArrayT::ValueType, since that is appropriate for the vtkDataArray<br />
    // fallback. As before, lowest() handles all-negative floating point<br />
    // data correctly:<br />
    typedef typename vtkDataArrayAccessor<ArrayT>::APIType ValueType;<br />
    ValueType max = std::numeric_limits<ValueType>::lowest();<br />
<br />
    // Iterate as before, but use access.Get instead of<br />
    // array->GetTypedComponent. GetTypedComponent is still used<br />
    // when ArrayT is a vtkGenericDataArray, but<br />
    // vtkDataArray::GetComponent is now used as a fallback when ArrayT<br />
    // is vtkDataArray.<br />
    vtkIdType numTuples = array->GetNumberOfTuples();<br />
    int numComps = array->GetNumberOfComponents();<br />
    for (vtkIdType tupleIdx = 0; tupleIdx < numTuples; ++tupleIdx)<br />
    {<br />
      for (int compIdx = 0; compIdx < numComps; ++compIdx)<br />
      {<br />
        if (max < access.Get(tupleIdx, compIdx))<br />
        {<br />
          max = access.Get(tupleIdx, compIdx);<br />
          this->Tuple = tupleIdx;<br />
          this->Component = compIdx;<br />
        }<br />
      }<br />
    }<br />
  }<br />
};<br />
</source><br />
<br />
Now when we call operator() with say, ArrayT=vtkFloatArray, we’ll get an optimized, efficient code path. But we can also call this same implementation with ArrayT=vtkDataArray and still get a correct result (assuming that the vtkDataArray’s double API represents the data well enough).<br />
<br />
Using the vtkDataArray fallback path is straightforward. At the call site:<br />
<br />
<source lang="cpp"><br />
void someFunction(vtkDataArray *array)<br />
{<br />
  FindMax maxWorker;<br />
  if (!vtkArrayDispatch::Dispatch::Execute(array, maxWorker))<br />
  {<br />
    maxWorker(array); // Dispatch failed, call vtkDataArray fallback<br />
  }<br />
  // Do work using maxWorker.Tuple and maxWorker.Component -- now we know<br />
  // for sure that they’re initialized!<br />
}<br />
</source><br />
<br />
Using the above pattern for calling a worker and always going through vtkDataArrayAccessor to Get/Set array elements ensures that any worker implementation can be its own fallback path.<br />
<br />
== VTK_ASSUME ==<br />
<br />
While performance testing the new array classes, we compared the performance of a dispatched worker using the vtkDataArrayAccessor class to the same algorithm using raw memory buffers. We managed to achieve the same performance out of the box for most cases, using both AOS and SOA array implementations. In fact, with -ffast-math optimizations on GCC 4.9, the optimizer is able to remove all function calls and apply SIMD vectorized instructions in the dispatched worker, showing that the new array API is thin enough that the compiler can see the algorithm in terms of memory access.<br />
<br />
But there was one case where performance suffered. If iterating through an AOS data array with a known number of components using GetTypedComponent, the raw pointer implementation initially outperformed the dispatched array. To understand why, note that the AOS implementation of GetTypedComponent is along the lines of:<br />
<br />
<source lang="cpp"><br />
ValueType vtkAOSDataArrayTemplate::GetTypedComponent(vtkIdType tuple,<br />
                                                     int comp) const<br />
{<br />
  // AOSData is a ValueType* pointing at the base of the array data.<br />
  return this->AOSData[tuple * this->NumberOfComponents + comp];<br />
}<br />
</source><br />
<br />
Because NumberOfComponents is unknown at compile time, the optimizer cannot assume anything about the stride of the components in the array. This leads to missed optimizations for vectorized read/writes and increased complexity in the instructions used to iterate through the data.<br />
<br />
For such cases where the number of components is, in fact, known at compile time (due to a calling function performing some validation, for instance), it is possible to tell the compiler about this fact using VTK_ASSUME.<br />
<br />
VTK_ASSUME wraps a compiler-specific <nowiki>__assume</nowiki> statement, which is used to pass such optimization hints. Its argument is an expression of some condition that is guaranteed to always be true. This allows more aggressive optimizations when used correctly, but be forewarned that if the condition is not met at runtime, the results are unpredictable and likely catastrophic.<br />
<br />
But if we’re writing a filter that only operates on 3D point sets, we know the number of components in the point array will always be 3. In this case we can write:<br />
<br />
<source lang="cpp"><br />
VTK_ASSUME(pointsArray->GetNumberOfComponents() == 3);<br />
</source><br />
<br />
in the worker function and this instructs the compiler that the array’s internal NumberOfComponents variable will always be 3, and thus the stride of the array is known. Of course, the caller of this worker function should ensure that this is a 3-component array and fail gracefully if it is not.<br />
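Putting this together, a call site for such a worker might look like the following sketch (calcMagnitudes and MagnitudeWorker are hypothetical names used for illustration):<br />
<br />
<source lang="cpp"><br />
void calcMagnitudes(vtkDataArray *pointsArray, vtkDataArray *magnitude)<br />
{<br />
  // Validate before dispatching so the VTK_ASSUME inside the worker can<br />
  // never be violated at runtime:<br />
  if (pointsArray->GetNumberOfComponents() != 3)<br />
  {<br />
    vtkGenericWarningMacro("Expected a 3-component point array.");<br />
    return;<br />
  }<br />
<br />
  MagnitudeWorker worker;<br />
  if (!vtkArrayDispatch::Dispatch2::Execute(pointsArray, magnitude, worker))<br />
  {<br />
    worker(pointsArray, magnitude); // vtkDataArray fallback<br />
  }<br />
}<br />
</source><br />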
<br />
There are many scenarios where VTK_ASSUME can offer a serious performance boost; the case of a known tuple size is a common one that’s worth remembering.<br />
<br />
== vtkArrayDispatch == <br />
<br />
The dispatchers implemented in the vtkArrayDispatch namespace provide array dispatching with customizable restrictions on code generation and a simple syntax that hides the messy details of type resolution and multi-array dispatch. There are several "flavors" of dispatch available that operate on up to three arrays simultaneously.<br />
<br />
=== Components Of A Dispatch ===<br />
<br />
Using the vtkArrayDispatch system requires three elements: the array(s), the worker, and the dispatcher.<br />
<br />
==== The Arrays ====<br />
<br />
All dispatched arrays must be subclasses of vtkDataArray. It is important to identify as many restrictions as possible. Must every ArrayType be considered during dispatch, or is the array’s ValueType (or even the ArrayType itself) restricted? If dispatching multiple arrays at once, are they expected to have the same ValueType? These scenarios are common, and these conditions can be used to reduce the number of instantiations of the worker template.<br />
<br />
==== The Worker ====<br />
<br />
The worker is some generic callable. In C++98, a templated functor is a good choice. In C++14, a generic lambda is a usable option as well. For our purposes, we’ll only consider the functor approach, as C++14 is a long way off for core VTK code.<br />
<br />
At a minimum, the worker functor should define operator() to make it callable. This should be a function template with a template parameter for each array it should handle. For a three array dispatch, it should look something like this:<br />
<br />
<source lang="cpp"><br />
struct ThreeArrayWorker<br />
{<br />
  template <typename Array1T, typename Array2T, typename Array3T><br />
  void operator()(Array1T *array1, Array2T *array2, Array3T *array3)<br />
  {<br />
    /* Do stuff... */<br />
  }<br />
};<br />
</source><br />
<br />
At runtime, the dispatcher will call ThreeArrayWorker::operator() with a set of Array1T, Array2T, and Array3T types that satisfy any dispatch restrictions.<br />
<br />
Workers can be stateful, too, as seen in the earlier FindMax worker, which identified the component and tuple id of the largest value in the array and stored them for the caller to use in further analysis:<br />
<br />
<source lang="cpp"><br />
// Example of a stateful dispatch functor:<br />
struct FindMax<br />
{<br />
  // Functor state, holds results that are accessible to the caller:<br />
  vtkIdType Tuple;<br />
  int Component;<br />
<br />
  // Set initial values:<br />
  FindMax() : Tuple(-1), Component(-1) {}<br />
<br />
  // Template method to set Tuple and Component ivars:<br />
  template <typename ArrayT><br />
  void operator()(ArrayT *array)<br />
  {<br />
    /* Do stuff... */<br />
  }<br />
};<br />
</source><br />
<br />
==== The Dispatcher ====<br />
<br />
The dispatcher is the workhorse of the system. It is responsible for applying restrictions, resolving array types, and generating the requested template instantiations. It has responsibilities both at run-time and compile-time.<br />
<br />
During compilation, the dispatcher will identify the valid combinations of arrays that can be used according to the restrictions. This is done by starting with a typelist of arrays, either supplied as a template parameter or by defaulting to vtkArrayDispatch::Arrays, and filtering them by ValueType if needed. For multi-array dispatches, additional restrictions may apply, such as forcing the second and third arrays to have the same ValueType as the first. It must then generate the required code for the dispatch -- that is, the templated worker implementation must be instantiated for each valid combination of arrays.<br />
<br />
At runtime, it tests each of the dispatched arrays to see if they match one of the generated code paths. Runtime type resolution is carried out using vtkArrayDownCast to get the best performance available for the arrays of interest. If it finds a match, it calls the worker’s operator() method with the properly typed arrays. If no match is found, it returns false without executing the worker.<br />
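Conceptually, a single-array dispatch expands to a chain of downcast attempts, one per ArrayType in the typelist. The following sketch is illustrative only, not the actual VTK implementation:<br />
<br />
<source lang="cpp"><br />
template <typename Worker><br />
bool ExecuteSketch(vtkDataArray *array, Worker &worker)<br />
{<br />
  if (vtkFloatArray *fa = vtkArrayDownCast<vtkFloatArray>(array))<br />
  {<br />
    worker(fa); // Instantiates Worker::operator()<vtkFloatArray><br />
    return true;<br />
  }<br />
  if (vtkDoubleArray *da = vtkArrayDownCast<vtkDoubleArray>(array))<br />
  {<br />
    worker(da);<br />
    return true;<br />
  }<br />
  // ...one test per ArrayType in the dispatcher's typelist...<br />
  return false; // No match; the caller may run a vtkDataArray fallback.<br />
}<br />
</source><br />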
<br />
=== Restrictions: Why They Matter ===<br />
<br />
We’ve made several mentions of using restrictions to reduce the number of template instantiations during a dispatch operation. You may be wondering if it really matters so much. Let’s consider some numbers.<br />
<br />
VTK is configured to use 13 ValueTypes for numeric data. These are the standard numeric types float, int, unsigned char, etc. By default, VTK will define vtkArrayDispatch::Arrays to use all 13 types with vtkAOSDataArrayTemplate for the standard set of dispatchable arrays. If enabled during compilation, the SOA data arrays are added to this list for a total of 26 arrays.<br />
<br />
Using these 26 arrays in a single, unrestricted dispatch will result in 26 instantiations of the worker template. A double dispatch will generate 676 workers. A triple dispatch with no restrictions creates a whopping 17,576 functions to handle the possible combinations of arrays. That’s a '''lot''' of instructions to pack into the final binary object.<br />
<br />
Applying some simple restrictions can reduce this immensely. Say we know that the arrays will only contain floats or doubles. This would reduce the single dispatch to 4 instantiations, the double dispatch to 16, and the triple to 64. We’ve just reduced the generated code size significantly. We could even apply such a restriction to create just a few 'fast-paths' and let the integral types fall back to the vtkDataArray API through vtkDataArrayAccessor. Dispatch restriction is a powerful tool for reducing the compiled size of a binary object.<br />
<br />
Another common restriction is that all arrays in a multi-array dispatch have the same ValueType, even if that ValueType is not known at compile time. By specifying this restriction, a double dispatch on all 26 AOS/SOA arrays will only produce 52 worker instantiations, down from 676. The triple dispatch drops to 104 instantiations from 17,576.<br />
<br />
Always apply restrictions when they are known, especially for multi-array dispatches. The savings are worth it.<br />
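As a concrete example, the same-ValueType restriction is expressed with the vtkArrayDispatch::Dispatch2SameValueType dispatcher (array1, array2, and someWorker are placeholders here):<br />
<br />
<source lang="cpp"><br />
// Workers are only instantiated for pairs sharing a ValueType:<br />
typedef vtkArrayDispatch::Dispatch2SameValueType Dispatcher;<br />
if (!Dispatcher::Execute(array1, array2, someWorker))<br />
{<br />
  // Mismatched ValueTypes or unsupported arrays; use the fallback:<br />
  someWorker(array1, array2);<br />
}<br />
</source><br />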
<br />
=== Types of Dispatchers ===<br />
<br />
Now that we’ve discussed the components of a dispatch operation, what the dispatchers do, and the importance of restricting dispatches, let’s take a look at the types of dispatchers available.<br />
<br />
----<br />
<br />
==== vtkArrayDispatch::Dispatch ====<br />
<br />
This family of dispatchers takes no parameters and performs an unrestricted dispatch over all arrays in vtkArrayDispatch::Arrays.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::Dispatch -- Single dispatch.<br />
: vtkArrayDispatch::Dispatch2 -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3 -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays in vtkArrayDispatch::Arrays.<br />
<br />
'''Restrictions:''' None.<br />
<br />
'''Usecase:''' Used when no useful information exists that can be used to apply restrictions.<br />
<br />
'''Example Usage:'''<br />
<source lang="cpp"><br />
vtkArrayDispatch::Dispatch::Execute(array, worker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchByArray ====<br />
<br />
This family of dispatchers takes a vtkTypeList of explicit array types to use during dispatching. They should only be used when an array’s exact type is restricted. If dispatching multiple arrays and only one has such type restrictions, use vtkArrayDispatch::Arrays (or a filtered version) for the unrestricted arrays.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::DispatchByArray -- Single dispatch.<br />
: vtkArrayDispatch::Dispatch2ByArray -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3ByArray -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays explicitly listed in the parameter lists.<br />
<br />
'''Restrictions:''' Array must be explicitly listed in the dispatcher’s type.<br />
<br />
'''Usecase:''' Used when one or more arrays have known implementations.<br />
<br />
'''Example Usage:'''<br />
An example here would be a filter that processes an input array of some integral type and produces either a vtkDoubleArray or a vtkFloatArray, depending on some condition. Since the input array’s implementation is unknown (it comes from outside the filter), we’ll rely on a ValueType-filtered version of vtkArrayDispatch::Arrays for its type. However, we know the output array is either vtkDoubleArray or vtkFloatArray, so we’ll want to be sure to apply that restriction:<br />
<br />
<source lang="cpp"><br />
// input has an unknown implementation, but an integral ValueType.<br />
vtkDataArray *input = ...;<br />
<br />
// Output is always either vtkFloatArray or vtkDoubleArray:<br />
vtkDataArray *output = someCondition ? vtkFloatArray::New()<br />
: vtkDoubleArray::New();<br />
<br />
// Define the valid ArrayTypes for input by filtering <br />
// vtkArrayDispatch::Arrays to remove non-integral types:<br />
typedef typename vtkArrayDispatch::FilterArraysByValueType<br />
<<br />
vtkArrayDispatch::Arrays,<br />
vtkArrayDispatch::Integrals<br />
>::Result InputTypes;<br />
<br />
// For output, create a new vtkTypeList with the only two possibilities:<br />
typedef vtkTypeList_Create_2(vtkFloatArray, vtkDoubleArray) OutputTypes;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch2ByArray<br />
<<br />
InputTypes, <br />
OutputTypes<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(input, output, someWorker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchByValueType ====<br />
<br />
This family of dispatchers takes a vtkTypeList of ValueTypes for each array and restricts dispatch to only arrays in vtkArrayDispatch::Arrays that have one of the specified value types.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::DispatchByValueType -- Single dispatch.<br />
: vtkArrayDispatch::Dispatch2ByValueType -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3ByValueType -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays in vtkArrayDispatch::Arrays that meet the ValueType requirements.<br />
<br />
'''Restrictions:''' Arrays that do not satisfy the ValueType requirements are eliminated.<br />
<br />
'''Usecase:''' Used when one or more of the dispatched arrays has an unknown implementation, but a known (or restricted) ValueType.<br />
<br />
'''Example Usage:'''<br />
Here we’ll consider a filter that processes three arrays. The first is a complete unknown. The second is known to hold unsigned char, but we don’t know the implementation. The third holds either doubles or floats, but its implementation is also unknown.<br />
<br />
<source lang="cpp"><br />
// Complete unknown:<br />
vtkDataArray *array1 = ...;<br />
// Some array holding unsigned chars:<br />
vtkDataArray *array2 = ...;<br />
// Some array holding either floats or doubles:<br />
vtkDataArray *array3 = ...;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch3ByValueType<br />
<<br />
vtkArrayDispatch::AllTypes, <br />
vtkTypeList_Create_1(unsigned char),<br />
vtkArrayDispatch::Reals<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(array1, array2, array3, someWorker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchByArrayWithSameValueType ====<br />
<br />
This family of dispatchers takes a vtkTypeList of ArrayTypes for each array and restricts dispatch to only consider arrays from those typelists, with the added requirement that all dispatched arrays share a ValueType.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::Dispatch2ByArrayWithSameValueType -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3ByArrayWithSameValueType -- Triple dispatch.<br />
<br />
'''Arrays considered:''' All arrays in the explicit typelists that meet the ValueType requirements.<br />
<br />
'''Restrictions:''' Combinations of arrays with differing ValueTypes are eliminated.<br />
<br />
'''Usecase:''' When one or more arrays are known to belong to a restricted set of ArrayTypes, and all arrays are known to share the same ValueType, regardless of implementation.<br />
<br />
'''Example Usage:'''<br />
Let’s consider a double dispatch, with array1 known to be one of four common array types (the AOS float, double, int, and vtkIdType arrays) and array2 a complete unknown, although we know that it holds the same ValueType as array1.<br />
<br />
<source lang="cpp"><br />
// AOS float, double, int, or vtkIdType array:<br />
vtkDataArray *array1 = ...;<br />
// Unknown implementation, but the ValueType matches array1:<br />
vtkDataArray *array2 = ...;<br />
<br />
// array1’s possible types:<br />
typedef vtkTypeList_Create_4(vtkFloatArray, vtkDoubleArray,<br />
vtkIntArray, vtkIdTypeArray) Array1Types;<br />
<br />
// array2’s possible types:<br />
typedef typename vtkArrayDispatch::FilterArraysByValueType<br />
<<br />
vtkArrayDispatch::Arrays,<br />
vtkTypeList_Create_4(float, double, int, vtkIdType)<br />
>::Result Array2Types;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch2ByArrayWithSameValueType<br />
<<br />
Array1Types,<br />
Array2Types<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(array1, array2, someWorker);<br />
</source><br />
<br />
----<br />
<br />
==== vtkArrayDispatch::DispatchBySameValueType ====<br />
<br />
This family of dispatchers takes a single vtkTypeList of ValueTypes and restricts dispatch to only consider arrays from vtkArrayDispatch::Arrays with those ValueTypes, with the added requirement that all dispatched arrays share a ValueType.<br />
<br />
;Variations:<br />
: vtkArrayDispatch::Dispatch2BySameValueType -- Double dispatch.<br />
: vtkArrayDispatch::Dispatch3BySameValueType -- Triple dispatch.<br />
: vtkArrayDispatch::Dispatch2SameValueType -- Double dispatch using vtkArrayDispatch::AllTypes.<br />
: vtkArrayDispatch::Dispatch3SameValueType -- Triple dispatch using vtkArrayDispatch::AllTypes.<br />
<br />
'''Arrays considered:''' All arrays in vtkArrayDispatch::Arrays that meet the ValueType requirements.<br />
<br />
'''Restrictions:''' Combinations of arrays with differing ValueTypes are eliminated.<br />
<br />
'''Usecase:''' When one or more arrays are known to belong to a restricted set of ValueTypes, and all arrays are known to share the same ValueType, regardless of implementation.<br />
<br />
'''Example Usage:'''<br />
Let’s consider a double dispatch, with array1 known to hold one of four common ValueTypes (float, double, int, or vtkIdType) and array2 known to have the same ValueType as array1.<br />
<br />
<source lang="cpp"><br />
// Some float, double, int, or vtkIdType array:<br />
vtkDataArray *array1 = ...;<br />
// Unknown, but the ValueType matches array1:<br />
vtkDataArray *array2 = ...;<br />
<br />
// The allowed ValueTypes:<br />
typedef vtkTypeList_Create_4(float, double, int, vtkIdType) ValidValueTypes;<br />
<br />
// Typedef the dispatch to a more manageable name:<br />
typedef vtkArrayDispatch::Dispatch2BySameValueType<br />
<<br />
ValidValueTypes<br />
> MyDispatch;<br />
<br />
// Execute the dispatch:<br />
MyDispatch::Execute(array1, array2, someWorker);<br />
</source><br />
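If even the list of candidate ValueTypes is unknown, the convenience variants noted above (Dispatch2SameValueType, Dispatch3SameValueType) apply the same-ValueType restriction over vtkArrayDispatch::AllTypes without spelling out a typelist. A sketch using the same two arrays and worker:<br />

```cpp
// Equivalent to Dispatch2BySameValueType<vtkArrayDispatch::AllTypes>:
vtkArrayDispatch::Dispatch2SameValueType::Execute(array1, array2, someWorker);
```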
<br />
== Advanced Usage ==<br />
<br />
=== Accessing Memory Buffers ===<br />
<br />
Although the thin vtkGenericDataArray API allows compilers to optimize memory accesses, there are still legitimate reasons to access the underlying memory buffer directly. This can be done safely by providing overloads of your worker’s operator() method for the array types of interest. For instance, vtkDataArray::DeepCopy uses a generic implementation when mixed array implementations are used, but has optimized overloads for copying between arrays with the same ValueType and implementation. The worker for this dispatch is shown below as an example:<br />
<br />
<source lang="cpp"><br />
// Copy tuples from src to dest:<br />
struct DeepCopyWorker<br />
{<br />
// AoS --> AoS same-type specialization:<br />
template <typename ValueType><br />
void operator()(vtkAOSDataArrayTemplate<ValueType> *src,<br />
vtkAOSDataArrayTemplate<ValueType> *dst)<br />
{<br />
std::copy(src->Begin(), src->End(), dst->Begin());<br />
}<br />
<br />
// SoA --> SoA same-type specialization:<br />
template <typename ValueType><br />
void operator()(vtkSOADataArrayTemplate<ValueType> *src,<br />
vtkSOADataArrayTemplate<ValueType> *dst)<br />
{<br />
vtkIdType numTuples = src->GetNumberOfTuples();<br />
for (int comp = 0; comp < src->GetNumberOfComponents(); ++comp)<br />
{<br />
ValueType *srcBegin = src->GetComponentArrayPointer(comp);<br />
ValueType *srcEnd = srcBegin + numTuples;<br />
ValueType *dstBegin = dst->GetComponentArrayPointer(comp);<br />
<br />
std::copy(srcBegin, srcEnd, dstBegin);<br />
}<br />
}<br />
<br />
// Generic implementation:<br />
template <typename Array1T, typename Array2T><br />
void operator()(Array1T *src, Array2T *dst)<br />
{<br />
vtkDataArrayAccessor<Array1T> s(src);<br />
vtkDataArrayAccessor<Array2T> d(dst);<br />
<br />
typedef typename vtkDataArrayAccessor<Array2T>::APIType DestType;<br />
<br />
vtkIdType tuples = src->GetNumberOfTuples();<br />
int comps = src->GetNumberOfComponents();<br />
<br />
for (vtkIdType t = 0; t < tuples; ++t)<br />
{<br />
for (int c = 0; c < comps; ++c)<br />
{<br />
d.Set(t, c, static_cast<DestType>(s.Get(t, c)));<br />
}<br />
}<br />
}<br />
};<br />
</source><br />
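A worker with buffer-level specializations is dispatched like any other. A sketch of how DeepCopy-style code might invoke the worker above on two vtkDataArray pointers (src and dst are placeholders):<br />

```cpp
DeepCopyWorker worker;
if (!vtkArrayDispatch::Dispatch2::Execute(src, dst, worker))
  {
  // Neither array is in vtkArrayDispatch::Arrays: use the generic overload.
  worker(src, dst);
  }
```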
<br />
== Putting It All Together ==<br />
<br />
Now that we’ve explored the new tools introduced with VTK 7.1 that allow efficient, implementation-agnostic array access, let’s take another look at the calcMagnitude example from before and identify the key features of the implementation:<br />
<br />
<source lang="cpp"><br />
// Modern implementation of calcMagnitude using new concepts in VTK 7.1:<br />
struct CalcMagnitudeWorker<br />
{<br />
template <typename VectorArray, typename MagnitudeArray><br />
void operator()(VectorArray *vectors, MagnitudeArray *magnitude)<br />
{<br />
VTK_ASSUME(vectors->GetNumberOfComponents() == 3);<br />
VTK_ASSUME(magnitude->GetNumberOfComponents() == 1);<br />
<br />
vtkDataArrayAccessor<VectorArray> v(vectors);<br />
vtkDataArrayAccessor<MagnitudeArray> m(magnitude);<br />
<br />
vtkIdType numVectors = vectors->GetNumberOfTuples();<br />
for (vtkIdType tupleIdx = 0; tupleIdx < numVectors; ++tupleIdx)<br />
{<br />
m.Set(tupleIdx, 0, std::sqrt(v.Get(tupleIdx, 0) * v.Get(tupleIdx, 0) +<br />
v.Get(tupleIdx, 1) * v.Get(tupleIdx, 1) +<br />
v.Get(tupleIdx, 2) * v.Get(tupleIdx, 2)));<br />
}<br />
}<br />
};<br />
<br />
void calcMagnitude(vtkDataArray *vectors, vtkDataArray *magnitude)<br />
{<br />
CalcMagnitudeWorker worker;<br />
typedef vtkArrayDispatch::Dispatch2ByValueType<br />
<<br />
vtkArrayDispatch::AllTypes,<br />
vtkArrayDispatch::Reals<br />
> Dispatcher;<br />
<br />
if (!Dispatcher::Execute(vectors, magnitude, worker))<br />
{<br />
worker(vectors, magnitude); // vtkDataArray fallback<br />
}<br />
}<br />
</source><br />
<br />
This implementation:<br />
<br />
; Uses dispatch restrictions to reduce the number of instantiated templated worker functions.<br />
: Assuming 26 types are in vtkArrayDispatch::Arrays (13 AOS + 13 SOA).<br />
: The first array is unrestricted. All 26 array types are considered.<br />
: The second array is restricted to float or double ValueTypes, which translates to 4 array types (float and double, each in AOS and SOA variants).<br />
: 26 * 4 = 104 possible combinations exist. We’ve eliminated 26 * 22 = 572 combinations that an unrestricted double-dispatch would have generated (it would create 676 instantiations).<br />
; The calculation is still carried out at double precision when the ValueType restrictions are not met.<br />
: Just because we don’t want those other 572 cases to have special code generated doesn’t necessarily mean that we wouldn't want them to run.<br />
: Thanks to vtkDataArrayAccessor, we have a fallback implementation that reuses our templated worker code.<br />
: In this case, the dispatch is really just a fast-path implementation for floating point output types.<br />
; The performance should be identical to iterating through raw memory buffers.<br />
: The vtkGenericDataArray API is transparent to the compiler. The specialized instantiations of operator() can be heavily optimized since the memory access patterns are known and well-defined.<br />
: Using VTK_ASSUME tells the compiler that the arrays have known strides, allowing further compile-time optimizations.<br />
<br />
Hopefully this has convinced you that the vtkArrayDispatch and related tools are worth using to create flexible, efficient, typesafe implementations for your work with VTK. Please direct any questions you may have on the subject to the VTK mailing lists.</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK_Datasets&diff=64558VTK Datasets2020-11-11T16:58:25Z<p>Wschroed: </p>
<hr />
<div>Kitware maintains data repositories for testing and developing VTK.<br />
<br />
To learn more, visit the current [https://gitlab.kitware.com/vtk/vtk/-/blob/master/CONTRIBUTING.md#contributing-to-vtk instructions for developing with VTK].</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/GSoC_2016&diff=59143VTK/GSoC 20162016-02-19T16:43:43Z<p>Wschroed: Flying edges updates</p>
<hr />
<div>Project ideas for the Google Summer of Code 2016<br />
<br />
== Guidelines ==<br />
<br />
=== Students ===<br />
<br />
These ideas were contributed by developers and users of [http://www.vtk.org/ VTK] and [http://www.paraview.org/ ParaView]. If you wish to submit a proposal based on these ideas you should contact the community members identified below to find out more about the idea, get to know the community member that will review your proposal, and receive feedback on your ideas.<br />
<br />
The Google Summer of Code program is competitive, and accepted students will usually have thoroughly researched the technologies of their proposed project and been in frequent contact with potential mentors. Ideally students will have submitted a patch or two to their project ([https://gitlab.kitware.com/vtk/vtk/blob/master/Documentation/dev/git/develop.md instructions are here]), as they will have to soon after being accepted, but it is not a requirement for the proposal. VTK makes extensive use of mailing lists, and this would be your best point of initial contact to apply for any of the proposed projects. The mailing lists can be found on the project pages linked in the preceding paragraph. Please see [[GSoC proposal guidelines]] for further guidelines on writing your proposal.<br />
<br />
=== Adding Ideas ===<br />
<br />
When adding a new idea to this page, please try to include the following information:<br />
<br />
* A brief explanation of the idea<br />
* Expected results/feature additions<br />
* Any prerequisites for working on the project<br />
* Links to any further information, discussions, bug reports etc<br />
* Any special mailing lists if not the standard mailing list for VTK<br />
* Your name and email address for contact (if willing to mentor, or nominated mentor)<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/VTK/help/mailing.html mailing lists], [http://open.cdash.org/index.php?project=VTK dashboard].<br />
<br />
=== Templated Input Generator for VTK ===<br />
<br />
'''Brief explanation''':<br />
Build up an infrastructure that makes it straightforward to bring new scientific data formats into VTK. The infrastructure will handle the complexities of temporal support, parallel processing, composite data structures, ghost levels and the like, and provide easy-to-use entry points that bring data from the file or other source and populate VTK arrays.<br />
<br />
'''Expected Results:'''<br />
A set of classes that can take an input specification and produce vtk data objects correctly and relatively efficiently.<br />
The input specification should be sufficiently abstracted from VTK's data types that users who understand the input format well won't have to understand VTK's complexities in order to use it.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python or Lua.<br />
<br />
'''References:'''<br />
http://www.paraview.org/Wiki/Writing_ParaView_Readers<br />
<br />
'''Mentor(s):''' Robert Maynard (robert dot maynard at kitware dot com) and/or David DeMarle (dave dot demarle at kitware dot com)<br />
<br />
=== Supporting Solid Model Geometry in VTK ===<br />
<br />
'''Brief explanation:''' Traditionally VTK has addressed the visualization needs of post-processed simulation information. Typically in these cases a tessellated mesh represents the geometric domain. This project will extend VTK's role in the simulation lifecycle by investigating approaches that will enable VTK to visualize the parametric boundary representation information used in solid modeling kernels such as CGM and OpenCASCADE (http://www.opencascade.org), which is the typical pre-processing description of the geometric domain.<br />
<br />
'''Expected results:''' A VTK module that interfaces with one or more solid modeling kernels.<br />
<br />
'''Prerequisites:''' Experience in C++, and data structures. Some experience in VTK, parametric surfaces and solid modeling kernels ideal but not necessary.<br />
<br />
'''Mentor:''' Bob O'Bara (bob dot obara at kitware dot com).<br />
<br />
=== KiwiViewer on VTK and Google Cardboard support ===<br />
<br />
'''Brief explanation:''' KiwiViewer (http://www.kiwiviewer.org) is a model viewer for VTK datasets that runs on iOS and Android devices. It is built from a cross compiled version of an older release of VTK coupled with VES (http://www.vtk.org/Wiki/VES), a lightweight rendering library that runs on OpenGL ES. The most recent release of VTK supports iOS and Android directly, so bringing KiwiViewer up to date with full featured rendering would open up many visualization capabilities.<br />
<br />
Once VTK has been updated on KiwiWiewer, Google Cardboard support could be added. Google provides a SDK to implement VR features in OpenGL applications for Android and iOS (https://developers.google.com/cardboard/unity/). It will be great to visualize scientific results in a VR environment.<br />
<br />
'''Expected results:''' A new version of KiwiViewer.<br />
<br />
'''Prerequisites:''' Experience developing for mobile platforms and C++.<br />
<br />
'''Mentor:''' Tim Thirion (tim dot thirion at kitware dot com).<br />
<br />
=== OpenFOAM Catalyst adaptor ===<br />
<br />
'''Brief explanation:''' OpenFOAM (http://www.openfoam.org) is a premier open source Computational Fluid Dynamics (CFD) simulation package. ParaView/Catalyst (http://www.paraview.org/Wiki/ParaView/Catalyst/Overview) is a VTK based in-situ visualization framework that tightly couples visualization capabilities to arbitrary simulation code. Updates to the data import path between OpenFOAM and VTK would give extreme scalability to OpenFOAM because data products would never need to be written to disk. It would also facilitate live data and computational steering connections that let the scientist see new results while they are being generated.<br />
<br />
'''Expected results:''' A Catalyst adaptor contributed to either the OpenFOAM or ParaView communities. Two feasible starting points are the existing vtkOpenFOAM readers and the vtkFOAM FOAM-to-VTK exporter.<br />
<br />
'''Prerequisites:''' Experience developing in C++, experience with CFD.<br />
<br />
'''Mentor:''' Andy Bauer (andy dot bauer at kitware dot com) and Takuya Oshima (oshima at eng dot niigata-u dot ac dot jp)<br />
<br />
=== Direct mapped Polyhedral input cells from OpenFOAM ===<br />
<br />
'''Brief explanation:''' OpenFOAM is an Open Source Computational Fluid Dynamics (CFD) package. OpenFOAM runs on unstructured meshes that are composed of polyhedral cells. Polyhedral support is now provided in VTK, although not all filters support it. The default option within the OpenFOAM reader is to decompose polyhedral cells into the other VTK primitive types. The OpenFOAM reader also lacks support for ghost cells when reading in parallel.<br />
<br />
'''Expected results:''' An updated OpenFOAM reader with support for ghost cells when reading in parallel, where the default output is polyhedral cells. Test cases should be created for many of the common filters, and polyhedral-related bugs should be fixed.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' Paul Edwards (paul dot m dot edwards at intel dot com)<br />
<br />
=== Better Package management Support for Java and Python ===<br />
<br />
'''Brief explanation:''' VTK has bindings to languages other than its native C++ but lacks strong integration with package management systems like Maven for Java and PIP for Python. As such every project written in those languages solves the problem of how to provide VTK's runtime libraries in their own way. Updating VTK's build system to be compatible with the standard resources for different languages will make VTK more approachable for large groups of users and simplify life for application developers.<br />
<br />
For Java, the task will be to smooth out the bumps and complexity of running a VTK application inside a Java environment. The first step would be to embed within the VTK Java library a better native library loading mechanism similar to what was done with Jogl. Next we would provide prebuilt versions of VTK for the 3 major platforms and publish them on a public Maven repository. Similarly, for Python we would extend VTK's Superbuild CMake infrastructure for making Python executables so that the libraries are suitable for distribution and installation via PIP.<br />
<br />
'''Expected results:''' Automatic publication of pre-compiled VTK libraries across all platforms (OS X, Windows, Linux) via Maven with an automated native library loading mechanism, and deployment of the VTK library with its native counterpart managed via pip install for usage within the system Python or a Python virtual environment.<br />
<br />
'''Prerequisites:''' Experience with C++, Java and Python.<br />
<br />
'''Mentor:''' Sebastien Jourdain (sebastien dot jourdain at kitware dot com)<br />
<br />
=== VTK/ParaView integration into Jupyter / iPython notebooks ===<br />
<br />
'''Brief explanation:''' VTK and ParaView are native scientific libraries used for data processing and visualization. Being Python-wrapped, VTK/ParaView can be used within any Python environment, such as iPython notebooks. Currently, however, nothing is done to ease interactive 3D visualization within an iPython notebook. Relying on the VTK/ParaViewWeb stack, we want to enable it.<br />
<br />
'''Expected results:''' Provide an integration path into iPython notebooks while enabling a set of helper commands to start/stop/edit interactive visualizations within a notebook, for VTK, ParaView, or both.<br />
<br />
'''Prerequisites:''' Experience with Python, VTK and Web.<br />
<br />
'''Mentor:''' Sebastien Jourdain (sebastien dot jourdain at kitware dot com)<br />
<br />
=== Flying Edges Extensions ===<br />
<br />
'''Brief explanation''':<br />
Flying Edges is a very fast, scalable, threaded isocontouring implementation; as far as we know it is the fastest non-preprocessed algorithm available today (presented at LDAV 2015). We have just scratched the surface of this technology, and there are several directions in which to extend this work. These include building a GPU implementation; extending the capability to other structured data types such as rectilinear grids; and developing new algorithms for clipping structured datasets (e.g., mesh generation).<br />
<br />
'''Expected Results:'''<br />
A set of classes that are integrated into VTK that provide accelerated computational methods for cutting, clipping, and isocontouring.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python.<br />
<br />
'''References:'''<br />
https://www.researchgate.net/publication/282975362_Flying_Edges_A_High-Performance_Scalable_Isocontouring_Algorithm<br />
<br />
'''Mentor(s):''' Will Schroeder (will dot schroeder at kitware dot com) and/or Rob Maynard (robert dot maynard at kitware dot com).<br />
<br />
== Half Baked Ideas ==<br />
<br />
(contact Dave DeMarle if you would like to work on one of these or an idea of your own and I will find you a good mentor to work out a solid GSoC proposal with)<br />
<br />
* make concave polydata "just work" (i.e. render correctly) with minimal impact on common case speed<br />
<br />
* an add on framework to help VTK using applications keep track of units<br />
<br />
* anything from vtk user voice http://vtk.uservoice.com/forums/31508-general, except documentation (unfortunately) since docs effort is explicitly ruled out of GSoC<br />
<br />
* anything from paraview user voice http://paraview.uservoice.com/forums/11350-general<br />
<br />
* lua wrapping, lua programmable filters<br />
<br />
* advanced rendering algorithms with OpenGL2 back end - Ambient occlusion, Reflection, etc etc.<br />
<br />
* interface to modern high quality rendering engines<br />
<br />
* select point cloud workflow such as surface reconstruction, and fundamental algorithms for curvature estimation, etc.</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/GSoC_2016&diff=58984VTK/GSoC 20162016-02-09T16:24:50Z<p>Wschroed: /* Half Baked Ideas */</p>
<hr />
<div>Project ideas for the Google Summer of Code 2016<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/VTK/help/mailing.html mailing lists], [http://open.cdash.org/index.php?project=VTK dashboard].<br />
<br />
=== Computational Biology (Molecular Dynamics) In Situ Visualization ===<br />
<br />
'''Brief explanation:''' Computational Biology involves using computer simulations to study biological problems using molecular dynamics and other techniques. Of particular interest is [http://www.gromacs.org/ GROMACS], a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have many complicated bonded interactions. GROMACS is optimized to run on distributed memory clusters, with recent support for GPU and SSE optimization. These GROMACS supercomputing simulations produce enormous (terabytes) file output to be analyzed in post-processing by tools that read only the trajectory (position, velocity, and forces) or coordinate (molecular structure) information, and simply guess at the topology rather than using the simulation's topology defined in GROMACS.<br />
<br />
This project would provide a baseline implementation of ParaView Catalyst for molecular in situ visualization and data analysis embedded in GROMACS based on GROMACS' computed topology and trajectory information.<br />
<br />
'''Expected results:''' The result would be ParaView Catalyst adaptors, example python scripts, and new advanced visualization techniques for GROMACS in order to enhance the computational biology workflow.<br />
<br />
'''Prerequisites:''' C++ and python experience required, some experience with VTK and ParaView ideally, but not required.<br />
<br />
'''Mentor:''' Marcus D. Hanwell (mhanwell at kitware dot com).<br />
<br />
=== Templated Input Generator for VTK ===<br />
<br />
'''Brief explanation''':<br />
Build up an infrastructure that makes it straighforward to bring new scientific data formats into VTK. The infrastructure will handle the complexities of temporal support, parallel processing, composite data structures, ghost levels and the like, and provide easy to use entry points that bring data from the file or other source and populate VTK arrays.<br />
<br />
'''Expected Results:'''<br />
A set of classes that can take an input specification and produce vtk data objects correctly and relatively efficiently.<br />
The input specification should be sufficiently abstracted from VTKs data types that users who understand the input format well won't have to understand VTK's complexities in order to use it.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python or Lua.<br />
<br />
'''References:'''<br />
http://www.paraview.org/Wiki/Writing_ParaView_Readers<br />
<br />
'''Mentor(s):''' Robert Maynard (robert dot maynard at kitware dot com) and/or David DeMarle (dave dot demarle at kitware dot com)<br />
<br />
=== Supporting Solid Model Geometry in VTK ===<br />
<br />
'''Brief explanation:''' Traditionally VTK has addressed the visualization needs of post-processed simulation information. Typically in these cases a tessellated mesh represents the geometric domain. This project will extend VTK's role in the simulation lifecycle by investigating approaches that will enable VTK to visualize the parametric boundary representation information used in solid modeling kernels such as CGM and OpenCASCADE (http://www.opencascade.org), which is typical pre-processing description of the geometric domain.<br />
<br />
'''Expected results:''' A VTK module that interfaces with one or more solid modeling kernels.<br />
<br />
'''Prerequisites:''' Experience in C++ and data structures. Some experience with VTK, parametric surfaces, and solid modeling kernels is ideal but not necessary.<br />
<br />
'''Mentor:''' Bob O'Bara (bob dot obara at kitware dot com).<br />
<br />
=== KiwiViewer on VTK ===<br />
<br />
'''Brief explanation:''' KiwiViewer (http://www.kiwiviewer.org) is a model viewer for VTK datasets that runs on iOS and Android devices. It is built from a cross compiled version of an older release of VTK coupled with VES (http://www.vtk.org/Wiki/VES), a lightweight rendering library that runs on OpenGL ES. The most recent release of VTK supports iOS and Android directly, so bringing KiwiViewer up to date with full featured rendering would open up many visualization capabilities.<br />
<br />
'''Expected results:''' A new version of KiwiViewer.<br />
<br />
'''Prerequisites:''' Experience developing for mobile platforms and C++.<br />
<br />
'''Mentor:''' Tim Thirion (tim dot thirion at kitware dot com).<br />
<br />
=== OpenFOAM Catalyst adaptor ===<br />
<br />
'''Brief explanation:''' OpenFOAM (http://www.openfoam.org) is a premier open source Computational Fluid Dynamics (CFD) simulation package. ParaView/Catalyst (http://www.paraview.org/Wiki/ParaView/Catalyst/Overview) is a VTK-based in-situ visualization framework that tightly couples visualization capabilities to arbitrary simulation code. Updates to the data import path between OpenFOAM and VTK would give extreme scalability to OpenFOAM because data products would never need to be written to disk. It would also facilitate live data and computational steering connections that let the scientist see new results while they are being generated.<br />
<br />
'''Expected results:''' A Catalyst adaptor contributed to either the OpenFOAM or ParaView communities. Two feasible starting points are the existing vtkOpenFOAM readers and the vtkFOAM FOAM-to-VTK exporter.<br />
<br />
'''Prerequisites:''' Experience developing in C++, experience with CFD.<br />
<br />
'''Mentor:''' Andy Bauer (andy dot bauer at kitware dot com) and Takuya Oshima (oshima at eng dot niigata-u dot ac dot jp)<br />
<br />
=== Direct mapped Polyhedral input cells from OpenFOAM ===<br />
<br />
'''Brief explanation:''' OpenFOAM is an open source Computational Fluid Dynamics (CFD) package. OpenFOAM runs on unstructured meshes that are composed of polyhedral cells. Polyhedral support is now provided in VTK, although not all filters support it. The default option within the OpenFOAM reader is to decompose polyhedral cells into the other VTK primitive types. The OpenFOAM reader also lacks support for ghost cells when reading in parallel.<br />
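For background on what polyhedral support means at the data-structure level: VTK describes a VTK_POLYHEDRON cell with a "face stream", i.e. for each face a point count followed by that face's point ids. The sketch below assembles such a stream for a cube in plain Python; it builds only the id list, creates no actual VTK objects, and the commented InsertNextCell call is approximate.<br />

```python
# Sketch: build the face-stream id list VTK expects for a VTK_POLYHEDRON
# cell (per face: point count, then that face's point ids). Point ids
# 0-7 stand for the corners of a cube; no VTK objects are created here.
cube_faces = [
    [0, 1, 2, 3],  # bottom
    [4, 5, 6, 7],  # top
    [0, 1, 5, 4],  # front
    [1, 2, 6, 5],  # right
    [2, 3, 7, 6],  # back
    [3, 0, 4, 7],  # left
]

def face_stream(faces):
    """Flatten a list of faces into VTK's polyhedron face-stream layout."""
    stream = []
    for face in faces:
        stream.append(len(face))  # point count for this face
        stream.extend(face)       # this face's point ids
    return stream

stream = face_stream(cube_faces)
# With real VTK this would be passed along the lines of:
#   ugrid.InsertNextCell(vtk.VTK_POLYHEDRON, len(cube_faces), stream)
print(len(stream))  # 6 faces * (1 count + 4 ids) = 30
```

A reader that emits native polyhedra hands streams like this to the unstructured grid, whereas the current default decomposes each cell into tetrahedra and other primitives first.<br />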
<br />
'''Expected results:''' An updated OpenFOAM reader that supports ghost cells when reading in parallel and whose default output is polyhedral cells. Test cases should be created for many of the common filters, and polyhedral-related bugs should be fixed.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' Paul Edwards (paul dot m dot edwards at intel dot com)<br />
<br />
<br />
=== Better Package management Support for Java ===<br />
<br />
'''Brief explanation:''' VTK is widely used across many communities (C++, Python, Java), but it lacks integration with each community's package management system. This is true of Java with Maven and of Python with pip.<br />
We will focus on the Java side, as the requirements for using VTK with Java may seem foreign to many Java developers.<br />
Therefore it would be nice to remove that barrier by smoothing out the bumps and complexity of running a VTK application inside a Java environment. The first step would be to embed within the VTK Java library a better native library loading mechanism, similar to what was done with Jogl.<br />
Then provide a set of prebuilt versions of VTK for the three major platforms and publish them on a public Maven repository, allowing any Java developer to simply declare the dependency using Maven without worrying about setting environment variables or building native code.<br />
<br />
'''Expected results:''' Automatic publication of pre-compiled VTK libraries across all platforms (OS X, Windows, Linux) via Maven, with automated native library loading. Those libraries will be built using our CMake SuperBuild infrastructure with our target platform dashboards.<br />
<br />
'''Prerequisites:''' Experience with Java and some knowledge of C++.<br />
<br />
'''Mentor:''' Sebastien Jourdain (sebastien dot jourdain at kitware dot com)<br />
<br />
=== Better Package management Support for Python ===<br />
<br />
'''Brief explanation:''' VTK is widely used across many communities (C++, Python, Java), but it lacks integration with each community's package management system. This is true of Java with Maven and of Python with pip.<br />
Therefore it would be valuable to provide pip support for VTK, which would allow anyone to deploy VTK within their Python environment via a simple command line or a requirements.txt file.<br />
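As an illustration, once VTK wheels were published to PyPI, a project could declare the dependency with a single line in its requirements.txt file (the version constraint below is hypothetical):<br />

```
# requirements.txt -- hypothetical entry once VTK is published to PyPI
vtk>=6.2
```

A plain `pip install vtk` would then fetch both the Python wrapping and its native libraries, with no environment variables or local builds involved.<br />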
<br />
'''Expected results:''' Deployment of the VTK library, with its native counterpart, managed via pip install, for usage within the system Python or a Python virtual environment.<br />
<br />
'''Bonus results:''' The same for the ParaView library, which also provides Python wrapping.<br />
<br />
'''Prerequisites:''' Experience with Python.<br />
<br />
'''Mentor:''' Sebastien Jourdain (sebastien dot jourdain at kitware dot com)<br />
<br />
=== VTK/ParaView integration into Jupyter / iPython notebooks ===<br />
<br />
'''Brief explanation:''' VTK and ParaView are native scientific libraries used for data processing and visualization. Being Python-wrapped, VTK/ParaView can be used within any Python environment, such as IPython notebooks. But currently nothing is done to ease interactive 3D visualization within an IPython notebook. Relying on the VTK/ParaViewWeb stack, we want to enable it.<br />
<br />
'''Expected results:''' Provide an integration path into IPython notebooks while enabling a set of helper commands to start/stop/edit interactive visualizations within a notebook, for VTK, ParaView, or both.<br />
<br />
'''Prerequisites:''' Experience with Python, VTK, and web technologies.<br />
<br />
'''Mentor:''' Sebastien Jourdain (sebastien dot jourdain at kitware dot com)<br />
<br />
<br />
== Half Baked Ideas ==<br />
<br />
(contact Dave DeMarle if you would like to work on one of these or an idea of your own and I will find you a good mentor to work out a solid GSoC proposal with)<br />
<br />
* make concave polydata "just work" (i.e. render correctly) with minimal impact on common case speed<br />
<br />
* an add-on framework to help VTK-using applications keep track of units<br />
<br />
* anything from vtk user voice http://vtk.uservoice.com/forums/31508-general, except documentation (unfortunately) since docs effort is explicitly ruled out of GSoC<br />
<br />
* anything from paraview user voice http://paraview.uservoice.com/forums/11350-general<br />
<br />
* lua wrapping, lua programmable filters<br />
<br />
* advanced rendering algorithms with OpenGL2 back end - Ambient occlusion, Reflection, etc etc.<br />
<br />
* interface to high quality rendering engines<br />
<br />
* select point cloud workflows such as surface reconstruction, and fundamental algorithms for curvature estimation, etc.<br />
<br />
* flying edges extended for other structured data types (structured grids, rectilinear grid) and also to structured clipping</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes/July_2015&diff=58024VTK/ARB Notes/July 20152015-07-09T23:54:00Z<p>Wschroed: </p>
<hr />
<div>'''July 9, 2015'''<br />
<br />
Notes:<br />
# Remote modules (Lorensen)<br />
## generally how to increase community engagement<br />
## description of the remotes mechanism, which is the simple insertion of a file in the Remotes subdirectory that specifies a location and git tag. The build process pulls down the repository and builds it. An adaptation of ITK's approach.<br />
# Works in progress / Cool new features<br />
## VTK-m description by Ken Moreland: efficient support for many-core and multi-core devices and libraries.<br />
## Zero-Copy, data model updates (Geveci) to reduce memory costs and couple with simulation and other external data structures.<br />
## Python 3.0 support (Gobbi, others) which is being worked on now. David will focus on v3.2 to take advantage of new VTK features.<br />
## VTK Maintenance - OpenGL2 and interaction efforts (Geveci,Schroeder). Need to go beyond the basic VTK rendering architecture and extend it towards extensible OpenGL support.<br />
## Parallel efforts (vtkSMPTools, OpenMP, etc) (Schroeder,Geveci)<br />
# Release schedule<br />
## Philosophical arguments: Backward compatibility & aggressive release schedule<br />
## v6.3 - next 1-2 weeks<br />
## v7.0 - about a month after v6.3<br />
## v8.0</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes/July_2015&diff=58023VTK/ARB Notes/July 20152015-07-09T19:17:35Z<p>Wschroed: </p>
<hr />
<div>'''July 9, 2015'''<br />
<br />
Agenda:<br />
# Remote modules (Lorensen)<br />
## including generally how to increase community engagement<br />
# Works in progress / Cool new features<br />
## VTK-m (Ken Moreland)<br />
## Zero-Copy, data model updates (Geveci)<br />
## Python 3.0 support (Gobbi, others)<br />
## VTK Maintenance - OpenGL2 and interaction efforts (Geveci,Schroeder)<br />
## Parallel efforts (vtkSMPTools, OpenMP, etc) (Schroeder,Geveci)<br />
# Release schedule<br />
## Philosophical arguments: Backward compatibility & aggressive release schedule<br />
## v6.3<br />
## v7.0<br />
## v8.0</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes/July_2015&diff=58022VTK/ARB Notes/July 20152015-07-09T18:27:19Z<p>Wschroed: Created page with "'''July 9, 2015''' Agenda: # Remote modules (Lorensen) ## including generally how to increase community engagement # Works in progress / Cool new features ## VTK-m (Ken Mo..."</p>
<hr />
<div>'''July 9, 2015'''<br />
<br />
Agenda:<br />
# Remote modules (Lorensen)<br />
## including generally how to increase community engagement<br />
# Works in progress / Cool new features<br />
## VTK-m (Ken Moreland)<br />
## Zero-Copy, data model updates (Geveci)<br />
## Python 3.0 support (Gobbi, others)<br />
## VTK Maintenance - OpenGL2 and interaction efforts (Geveci,Schroeder)<br />
## Parallel efforts (vtkSMPTools, OpenMP, etc) (Schroeder,Geveci)<br />
# Release schedule<br />
## Philosophical arguments: Backward compatibility & aggressive release schedule<br />
## v6.3<br />
## v7.0<br />
## v8.0</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes&diff=58021VTK/ARB Notes2015-07-09T18:23:23Z<p>Wschroed: </p>
<hr />
<div>Here are running notes from recent ARB meetings (beginning 2013):<br />
* [[VTK/ARB_Notes/March_2014 | March 2014]]<br />
* [[VTK/ARB_Notes/May_2014 | May 2014]]<br />
* [[VTK/ARB_Notes/July_2015 | July 2015]]</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB&diff=57967VTK/ARB2015-07-03T10:06:29Z<p>Wschroed: /* Current Members */</p>
<hr />
<div>__NOTOC__<br />
==Purpose==<br />
<br />
The VTK Architecture Review Board (ARB) is a group of individuals whose goal is to advance the technology in VTK by providing direction and oversight to the development of VTK. While the open-source nature of VTK allows natural progression via its many developers, the ARB seeks to balance the intentions of each small group of developers, ensuring that changes will benefit the community as a whole. The ARB serves the following functions:<br />
* Maintain a roadmap of VTK including long-term plans.<br />
* Make decisions on high-impact code changes to VTK.<br />
<br />
==Scope of ARB Intervention==<br />
<br />
Code changes with a high impact on developers and/or users should be reviewed by the ARB. The following are some guiding principles for deciding whether changes require ARB involvement:<br />
* Will the change significantly affect backwards compatibility?<br />
* Does the change cause a significant shift in the functionality and scope of VTK?<br />
* Are there licensing issues with the code?<br />
Smaller feature additions and bug fixes will not in general require ARB approval, although they should in most cases have an associated development plan (see the [[VTK/Managing the Development Process|Managing the Development Process]] document).<br />
<br />
==Roles==<br />
<br />
The '''President''' organizes the meeting agenda and maintains the roadmap and the list of outstanding proposals requiring ARB intervention. He or she is also responsible for setting up ARB meeting times/places and ensuring that the goals of the meeting are accomplished. The president may invite individuals or groups who have submitted proposals to present their plans at ARB meetings.<br />
<br />
The '''Secretary''' keeps records of each meeting, assists in the setup of the meeting location and technology (e.g. projectors, video conferencing, etc.) required, and facilitates communication of proposals to the ARB, as well as decisions from the ARB back to the community.<br />
<br />
==Meetings==<br />
<br />
The ARB will meet on a schedule of their choosing and convenience, but at least once a quarter. The ARB may meet informally at any time as the need arises to evaluate proposals. ([[VTK/ARB/Meetings|Meeting notes and scheduled meetings are listed here]].)<br />
<br />
==Conflict Resolution==<br />
<br />
Conflicts will be resolved by discussion and consensus where at all possible. When such an agreement is impossible, the members of the ARB will vote on the issue, with the President breaking any tie vote.<br />
<br />
==Membership==<br />
<br />
Membership, while initially determined by Kitware, will develop organically from the ARB itself. ARB members are responsible for nominating new members, who are elected by consensus or majority vote (with the president breaking any tie). Existing members may step down from the ARB at any point. Members who are unable to attend meetings after reasonable effort to contact them, or are found to be exceedingly counterproductive to the purposes of the ARB, may be dropped from the ARB by consensus or vote.<br />
<br />
==Current Members==<br />
<br />
The following are the current members of the ARB. Many of these positions are likely to change over time. The list below summarizes each member's organization and expertise.<br />
<br />
* Jim Ahrens, Los Alamos National Laboratories (Supercomputing: VTK, ParaView)<br />
* Berk Geveci, Kitware Inc. (Supercomputing: VTK, ParaView)<br />
* Bill Lorensen, Master and Commander (Medical Imaging: VTK, Slicer)<br />
* Andrew Maclean, Centre for Autonomous Systems, University of Sydney (Geometry: VTK, Robotics, Software Process)<br />
* Steve Pieper, Isomics (Medical Imaging: VTK, Slicer)<br />
* Paolo Quadrani, CINECA System and Technology Department (Medical Imaging: VTK, MAF)<br />
* Will Schroeder, Kitware Inc. (Geometry, Data Structures, Algorithms: VTK)<br />
* Ken Moreland, Sandia National Laboratories (This is a rotating position with other lead VTK technologists from Sandia) (Informatics: VTK, Titan)<br />
* Alejandro Ribes, Research Scientist at EDF R&D (ParaView, In-situ visualization and analytics, statistical visual analysis) <br />
* David Gobbi, Research Scientist at Calgary Image Processing and Analysis Centre<br />
<br />
==Mailing List==<br />
This members only mailing list can be found at http://www.vtk.org/mailman/listinfo/arb</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB&diff=57961VTK/ARB2015-06-29T20:09:59Z<p>Wschroed: /* Current Members */</p>
<hr />
<div>__NOTOC__<br />
==Purpose==<br />
<br />
The VTK Architecture Review Board (ARB) is a group of individuals whose goal is to advance the technology in VTK by providing direction and oversight to the development of VTK. While the open-source nature of VTK allows natural progression via its many developers, the ARB seeks to balance the intentions of each small group of developers, ensuring that changes will benefit the community as a whole. The ARB serves the following functions:<br />
* Maintain a roadmap of VTK including long-term plans.<br />
* Make decisions on high-impact code changes to VTK.<br />
<br />
==Scope of ARB Intervention==<br />
<br />
Code changes with a high impact on developers and/or users should be reviewed by the ARB. The following are some guiding principles for deciding whether changes require ARB involvement:<br />
* Will the change significantly affect backwards compatibility?<br />
* Does the change cause a significant shift in the functionality and scope of VTK?<br />
* Are there licensing issues with the code?<br />
Smaller feature additions and bug fixes will not in general require ARB approval, although they should in most cases have an associated development plan (see the [[VTK/Managing the Development Process|Managing the Development Process]] document).<br />
<br />
==Roles==<br />
<br />
The '''President''' organizes the meeting agenda and maintains the roadmap and the list of outstanding proposals requiring ARB intervention. He or she is also responsible for setting up ARB meeting times/places and ensuring that the goals of the meeting are accomplished. The president may invite individuals or groups who have submitted proposals to present their plans at ARB meetings.<br />
<br />
The '''Secretary''' keeps records of each meeting, assists in the setup of the meeting location and technology (e.g. projectors, video conferencing, etc.) required, and facilitates communication of proposals to the ARB, as well as decisions from the ARB back to the community.<br />
<br />
==Meetings==<br />
<br />
The ARB will meet on a schedule of their choosing and convenience, but at least once a quarter. The ARB may meet informally at any time as the need arises to evaluate proposals. ([[VTK/ARB/Meetings|Meeting notes and scheduled meetings are listed here]].)<br />
<br />
==Conflict Resolution==<br />
<br />
Conflicts will be resolved by discussion and consensus where at all possible. When such an agreement is impossible, the members of the ARB will vote on the issue, with the President breaking any tie vote.<br />
<br />
==Membership==<br />
<br />
Membership, while initially determined by Kitware, will develop organically from the ARB itself. ARB members are responsible for nominating new members, who are elected by consensus or majority vote (with the president breaking any tie). Existing members may step down from the ARB at any point. Members who are unable to attend meetings after reasonable effort to contact them, or are found to be exceedingly counterproductive to the purposes of the ARB, may be dropped from the ARB by consensus or vote.<br />
<br />
==Current Members==<br />
<br />
The following are the current members of the ARB. Many of these positions are likely to change over time. The list below summarizes each member's organization and expertise.<br />
<br />
* Jim Ahrens, Los Alamos National Laboratories (Supercomputing: VTK, ParaView)<br />
* Berk Geveci, Kitware Inc. (Supercomputing: VTK, ParaView)<br />
* Bill Lorensen, Master and Commander (Medical Imaging: VTK, Slicer)<br />
* Andrew Maclean, Centre for Autonomous Systems, University of Sydney (Geometry: VTK, Robotics, Software Process)<br />
* Steve Pieper, Isomics (Medical Imaging: VTK, Slicer)<br />
* Paolo Quadrani, CINECA System and Technology Department (Medical Imaging: VTK, MAF)<br />
* Will Schroeder, Kitware Inc. (Geometry, Data Structures, Algorithms: VTK)<br />
* Ken Moreland, Sandia National Laboratories (This is a rotating position with other lead VTK technologists from Sandia) (Informatics: VTK, Titan)<br />
* Stephane Ploix, EDF (Supercomputing: VTK, ParaView, shared-memory parallelism)<br />
* David Gobbi, Research Scientist at Calgary Image Processing and Analysis Centre<br />
<br />
==Mailing List==<br />
This members only mailing list can be found at http://www.vtk.org/mailman/listinfo/arb</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB&diff=57760VTK/ARB2015-04-20T15:02:10Z<p>Wschroed: /* Current Members */</p>
<hr />
<div>__NOTOC__<br />
==Purpose==<br />
<br />
The VTK Architecture Review Board (ARB) is a group of individuals whose goal is to advance the technology in VTK by providing direction and oversight to the development of VTK. While the open-source nature of VTK allows natural progression via its many developers, the ARB seeks to balance the intentions of each small group of developers, ensuring that changes will benefit the community as a whole. The ARB serves the following functions:<br />
* Maintain a roadmap of VTK including long-term plans.<br />
* Make decisions on high-impact code changes to VTK.<br />
<br />
==Scope of ARB Intervention==<br />
<br />
Code changes with a high impact on developers and/or users should be reviewed by the ARB. The following are some guiding principles for deciding whether changes require ARB involvement:<br />
* Will the change significantly affect backwards compatibility?<br />
* Does the change cause a significant shift in the functionality and scope of VTK?<br />
* Are there licensing issues with the code?<br />
Smaller feature additions and bug fixes will not in general require ARB approval, although they should in most cases have an associated development plan (see the [[VTK/Managing the Development Process|Managing the Development Process]] document).<br />
<br />
==Roles==<br />
<br />
The '''President''' organizes the meeting agenda and maintains the roadmap and the list of outstanding proposals requiring ARB intervention. He or she is also responsible for setting up ARB meeting times/places and ensuring that the goals of the meeting are accomplished. The president may invite individuals or groups who have submitted proposals to present their plans at ARB meetings.<br />
<br />
The '''Secretary''' keeps records of each meeting, assists in the setup of the meeting location and technology (e.g. projectors, video conferencing, etc.) required, and facilitates communication of proposals to the ARB, as well as decisions from the ARB back to the community.<br />
<br />
==Meetings==<br />
<br />
The ARB will meet on a schedule of their choosing and convenience, but at least once a quarter. The ARB may meet informally at any time as the need arises to evaluate proposals. ([[VTK/ARB/Meetings|Meeting notes and scheduled meetings are listed here]].)<br />
<br />
==Conflict Resolution==<br />
<br />
Conflicts will be resolved by discussion and consensus where at all possible. When such an agreement is impossible, the members of the ARB will vote on the issue, with the President breaking any tie vote.<br />
<br />
==Membership==<br />
<br />
Membership, while initially determined by Kitware, will develop organically from the ARB itself. ARB members are responsible for nominating new members, who are elected by consensus or majority vote (with the president breaking any tie). Existing members may step down from the ARB at any point. Members who are unable to attend meetings after reasonable effort to contact them, or are found to be exceedingly counterproductive to the purposes of the ARB, may be dropped from the ARB by consensus or vote.<br />
<br />
==Current Members==<br />
<br />
The following are the current members of the ARB. Many of these positions are likely to change over time. The list below summarizes each member's organization and expertise.<br />
<br />
* Jim Ahrens, Los Alamos National Laboratories (Supercomputing: VTK, ParaView)<br />
* Berk Geveci, Kitware Inc. (Supercomputing: VTK, ParaView)<br />
* Bill Lorensen, Master and Commander (Medical Imaging: VTK, Slicer)<br />
* Andrew Maclean, Centre for Autonomous Systems, University of Sydney (Geometry: VTK, Robotics, Software Process)<br />
* Steve Pieper, Isomics (Medical Imaging: VTK, Slicer)<br />
* Paolo Quadrani, CINECA System and Technology Department (Medical Imaging: VTK, MAF)<br />
* Will Schroeder, Kitware Inc. (Geometry, Data Structures, Algorithms: VTK)<br />
* Ken Moreland, Sandia National Laboratories (This is a rotating position with other lead VTK technologists from Sandia) (Informatics: VTK, Titan)<br />
* Stephane Ploix, EDF (Supercomputing: VTK, ParaView, shared-memory parallelism)<br />
<br />
==Mailing List==<br />
This members only mailing list can be found at http://www.vtk.org/mailman/listinfo/arb</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes/May_2014&diff=56294VTK/ARB Notes/May 20142014-05-07T20:52:40Z<p>Wschroed: </p>
<hr />
<div>'''May 6, 2014'''<br />
<br />
Discussed the NIH VTK Maintenance Grant. Reviewed in detail Aims 1, 2, and 3, which are the focus of the work.<br />
* Will provided overview.<br />
* Ken provided more details about rendering work (Aim 1). Basically now we are retaining the current architecture and rewriting the innards with modern OpenGL practices (relying on shaders, ditching the fixed pipeline approach). Once this preliminary work is done we will consider whether issues like scene graph, many actors, etc. need to be addressed. In which case we may have to build a new subsystem (TBD).<br />
* Berk discussed the AMR composite dataset for handling hierarchical volume representation, processing, and rendering. (Part of the large data Aim 1 work.)<br />
* Will discussed Aim 2 (community) and requested feedback from the ARB to assist Dave DeMarle and Chris Mullins in their work.<br />
* We briefly touched on Aim 3 and the interface with our five medical application subcontractors. As an FYI, Steve P. and Bill L. were quite pleased with how well VTK6 ported to Slicer; kudos to JC and J2.<br />
<br />
Some suggestions in random order during the discussion:<br />
* Carrying coordinate transform information through the pipeline is important. This is necessary for imaging (Bill, Steve) and for assembly transformations (Stephane). The basic metadata representation is probably easy to do; the concern is data processing and rendering. It may be that a simple approach works well, and relying on ITK for more advanced medical computing may make sense (meaning improving our interfaces between VTK and ITK so data can flow more easily).<br />
* During the rendering rework need to make sure that support for efficient parallel rendering is maintained (Ken M., David R. expressed this concern).<br />
* Volume rendering label maps is an important requirement (Bill, Steve)<br />
* There was concern about proper support for large polydata rendering. Meaning culling mostly, although LOD and other techniques were discussed. (Stephane)<br />
* We are planning on improving VTK's support for higher-quality rendering; e.g., shadows, reflections, etc.<br />
<br />
The plan is to hold the next ARB meeting in about a month (early June). Will will set up a Doodle poll. Also next time we will invite individuals to the hangout separately to avoid permission issues with Google Hangout.<br />
<br />
We had a follow on conversation related to "correctness" of marching cubes. Silva et al. have reported some issues with marching cubes, which are not bugs but really just due to a 30-yr old algorithm (which may not be as advanced as current algorithms). There was also discussion on the way to respond to this information in terms of community outreach, etc.</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes/May_2014&diff=56293VTK/ARB Notes/May 20142014-05-07T15:53:01Z<p>Wschroed: </p>
<hr />
<div>'''May 6, 2014'''<br />
<br />
Discussed the NIH VTK Maintenance Grant. Reviewed in detail Aims 1, 2, and 3, which are the focus of the work.<br />
* Will provided overview.<br />
* Ken provided more details about rendering work (Aim 1). Basically now we are retaining the current architecture and rewriting the innards with modern OpenGL practices (relying on shaders, ditching the fixed pipeline approach). Once this preliminary work is done we will consider whether issues like scene graph, many actors, etc. need to be addressed. In which case we may have to build a new subsystem (TBD).<br />
* Berk discussed the AMR composite dataset for handling hierarchical volume representation, processing, and rendering. (Part of the large data Aim 1 work.)<br />
* Will discussed Aim 2 (community) and requested feedback from the ARB to assist Dave DeMarle and Chris Mullins in their work.<br />
* We briefly touched on Aim 3 and the interface with our five medical application subcontractors. As an FYI, Steve P. and Bill L. were quite pleased with how well VTK6 ported to Slicer; kudos to JC and J2.<br />
<br />
Some suggestions in random order during the discussion:<br />
* Carrying coordinate transform information through the pipeline is important. This is necessary for imaging (Bill, Steve) and for assembly transformations (Stephane). The basic metadata representation is probably easy to do; the concern is data processing and rendering. It may be that a simple approach works well, and relying on ITK for more advanced medical computing may make sense (meaning improving our interfaces between VTK and ITK so data can flow more easily).<br />
* During the rendering rework need to make sure that support for efficient parallel rendering is maintained (Ken M., David R. expressed this concern).<br />
* Volume rendering label maps is an important requirement (Bill, Steve)<br />
* There was concern about proper support for large polydata rendering. Meaning culling mostly, although LOD and other techniques were discussed. (Stephane)<br />
* We are planning on improving VTK's support for higher-quality rendering; e.g., shadows, reflections, etc.<br />
<br />
The plan is to hold the next ARB meeting in about a month (early June). Will will set up a Doodle poll. Also next time we will invite individuals to the hangout separately to avoid permission issues with Google Hangout.<br />
<br />
We had a follow on conversation related to "correctness" of marching cubes. Silva et al. have reported some issues with marching cubes, which are not bugs but really just due to a 30-yr old algorithm (which may not be as advanced as current algorithms). There was also discussion on the way to respond to this information in terms of community outreach, etc.</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes/May_2014&diff=56289VTK/ARB Notes/May 20142014-05-06T20:20:57Z<p>Wschroed: </p>
<hr />
<div>'''May 6, 2014'''<br />
<br />
Discussed the NIH VTK Maintenance Grant. Reviewed in detail Aims 1, 2, and 3, which are the focus of the work.<br />
* Will provided overview.<br />
* Ken provided more details about rendering work (Aim 1). Basically now we are retaining the current architecture and rewriting the innards with modern OpenGL practices (relying on shaders, ditching the fixed pipeline approach). Once this preliminary work is done we will consider whether issues like scene graph, many actors, etc. need to be addressed. In which case we may have to build a new subsystem (TBD).<br />
* Berk discussed the AMR composite dataset for handling hierarchical volume representation, processing, and rendering. (Part of the large data Aim 1 work.)<br />
* Will discussed Aim 2 (community) and requested feedback from the ARB to assist Dave DeMarle and Chris Mullins in their work.<br />
* We briefly touched on Aim 3 and the interface with our five medical application subcontractors. As an FYI, Steve P. and Bill L. were quite pleased with how well VTK6 ported to Slicer; kudos to JC and J2.<br />
<br />
Some suggestions in random order during the discussion:<br />
* Carrying coordinate transform information through the pipeline is important. This is necessary for imaging (Bill, Steve) and for assembly transformations (Stephane). The basic metadata representation is probably easy to do; the concern is data processing and rendering. It may be that a simple approach works well, and relying on ITK for more advanced medical computing may make sense (meaning improving our interfaces between VTK and ITK so data can flow more easily).<br />
* During the rendering rework we need to make sure that support for efficient parallel rendering is maintained (Ken M. and David R. expressed this concern).<br />
* Volume rendering of label maps is an important requirement (Bill, Steve).<br />
* There was concern about proper support for large polydata rendering, meaning mostly culling, although LOD and other techniques were discussed (Stephane).<br />
* We are planning on improving VTK's support for higher-quality rendering; e.g., shadows, reflections, etc.<br />
<br />
The plan is to hold the next ARB meeting in about a month (early June). Will will set up a Doodle poll. Also next time we will invite individuals to the hangout separately to avoid permission issues with Google Hangout.<br />
<br />
We had a follow-on conversation related to the "correctness" of marching cubes. Silva et al. have reported some issues with marching cubes, which are not bugs but really just consequences of a 30-year-old algorithm (which may not be as advanced as current algorithms). There was also discussion on how to respond to this information in terms of community outreach, etc.</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes/March_2014&diff=56270VTK/ARB Notes/March 20142014-05-05T19:23:39Z<p>Wschroed: </p>
<hr />
<div>3/5/2014<br />
<br />
Will started with introductions.<br />
* Membership: Claudio left. Ken Moreland replaced Brian Wylie<br />
* Discussion on the mission of the board: Long term vision. Guidance on big changes.<br />
<br />
Will provided a quick overview of the NIH VTK Maintenance grant. Overhaul of rendering (geometry and volume rendering) and maintenance.<br />
<br />
Discussion on future directions:<br />
* Bill: VTK 20 years old. Applications are moving to lighter weight platforms. Mobile etc.<br />
* Steve: A lot of stuff in VTK that you can’t afford on lightweight platforms. Could use Javascript bindings.<br />
* Paolo: Lots of information from sensors. Big data.<br />
* Andrew: Threading could be a focus. Pull in boost.<br />
* Steve: Aware of image orientation, propagate through pipeline. Natively handled in VTK.<br />
* Bill: Thought NIH grant could handle orientation etc.<br />
* Bill: Maybe we are a niche market. Always fit in big data.<br />
* Bill, Steve: Geovis, social 3D visualization<br />
* Bill: Focused in 3D<br />
<br />
Question: Does a C++ library fit into these new areas (Web)?<br />
* Steve: Doing rendering in OpenCL. He would always choose OpenCL for threaded application development.<br />
* Steve: VTK should take rendering seriously. Good lighting, effects etc.<br />
* Steve, Berk: Discussion on client-server separation. VTK’s role on server side.<br />
* Stephane: Distinguish between data that fits in memory on the client vs. server side. API for small datasets on the client side.<br />
* Bill: Software process big sphere of influence for us. We have good data model and solid code base. Well tested architecture. Apply architecture to new domains.<br />
* Ken: Bloated nature of things in VTK. Huge stack trace when debugging pipeline execution. For HPC and mobile trimmed down rendering. People balking at sheer size of VTK. Smaller libraries to fit into HPC and mobile<br />
* Steve: xdummy. Avoid dependence on X server.<br />
* Modularization at a filter level<br />
<br />
Will: Future discussions on roadmap, VTK rendering. Berk et al. will lead a discussion on scene graphs in VTK.</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB_Notes&diff=56268VTK/ARB Notes2014-05-05T19:20:44Z<p>Wschroed: </p>
<hr />
<div>Here are running notes from recent ARB meetings (beginning 2013):<br />
* [[VTK/ARB_Notes/March_2014 | March 2014]]<br />
* [[VTK/ARB_Notes/May_2014 | May 2014]]</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK&diff=56264VTK2014-05-05T19:17:00Z<p>Wschroed: /* Development Process */</p>
<hr />
<div><center>http://public.kitware.com/images/logos/vtk-logo2.jpg</center><br />
<br /><br />
The Visualization ToolKit (VTK) is an open source, freely available software system for 3D computer graphics, image processing, and visualization used by thousands of researchers and developers around the world. VTK consists of a C++ class library and several interpreted interface layers, including Python, Tcl/Tk, and Java. Professional support and products for VTK are provided by Kitware, Inc. ([http://www.kitware.com www.kitware.com]). VTK supports a wide variety of visualization algorithms including scalar, vector, tensor, texture, and volumetric methods, as well as advanced modeling techniques such as implicit modelling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. In addition, dozens of imaging algorithms have been directly integrated, allowing the user to mix 2D imaging and 3D graphics algorithms and data.<br />
<br />
<br />
== Learning VTK ==<br />
If you want to learn how to use or develop VTK, please see [[VTK/Learning_VTK | Learning VTK]]<br />
<br />
== Building VTK ==<br />
* Where can I [http://vtk.org/get-software.php download VTK]?<br />
<br />
* Where can I download a tarball of the [http://vtk.org/files/nightly/vtkNightlyDocHtml.tar.gz nightly HTML documentation]?<br />
<br />
* How do I build the [[VTK/BuildingDoxygen|Doxygen documentation]]?<br />
<br />
* [[VTK/Git|Using Git for VTK development]]<br />
<br />
* [[VTK/GitMSBuild|Using Git and MSBuild to build VTK]]<br />
<br />
* [[VTK/PythonDevelopment|Setting up a Python Development Environment using Eclipse/Pydev]]<br />
<br />
* [[VTK/Build parameters | Build parameters]]<br />
<br />
* [[Making Development Environment without compiling source distribution]]<br />
<br />
* [[VTK/Building/VisualStudio | Building VTK with Visual Studio]]<br />
<br />
== Extending VTK ==<br />
<br />
* Where can I get [[VTK Datasets]]?<br />
<br />
* [[VTK Classes|User-Contributed Classes]]<br />
<br />
* [[VTK Coding Standards]] <br />
<br />
* [[VTK/Commit_Guidelines|VTK Commit Guidelines]]<br />
<br />
* [[VTK/Git/Develop|Contribute to VTK / Patch Procedure]]<br />
<br />
* [[VTK Scripts|Extending VTK with Scripts]]<br />
<br />
== Projects/ Tools that use VTK == <br />
<br />
* [[VTK Tools|VTK-Based Tools and Applications]]<br />
<br />
* What are some [[VTK Projects|projects using VTK]]?<br />
<br />
== Troubleshooting ==<br />
* [[VTK FAQ|Frequently asked questions (FAQ)]]<br />
<br />
== Miscellaneous ==<br />
* [[VTK Related Job Opportunities|VTK Related Job Opportunities]]<br />
<br />
* [[VTK/Third Party Library Patrol | VTK 3rd Party Library Patrol]]<br />
<br />
* [[VTK/Meeting Minutes | Meeting Minutes]]<br />
<br />
* [[VTK/License | VTK License]]<br />
* [[VTK/ThirdPartyLicenses | VTK Third-Party Licenses]]<br />
<br />
== Summary of Changes ==<br />
<br />
==== VTK 6.2 (git master) ====<br />
<br />
* Under Cocoa, removed "-fobjc-gc" as a default compiler flag. VTK still supports Cocoa garbage collection, but you must specify it yourself now.<br />
* Added a reader/writer for NIFTI image files.<br />
<br />
==== VTK 6.1 ====<br />
<br />
* Move to use CMake's external data support over VTKData<br />
* [[VTK/OpenGL_Errors | OpenGL error detection and reporting macros and error cleanup ]]<br />
* [[VTK/OpenGL_Driver_Information | API for dealing with OpenGL driver bugs ]]<br />
* [[VTK/OSMesa_Support | Enable rendering with OSMesa where possible ]]<br />
* [[ParaView/Line_Integral_Convolution | Surface LIC parallelization and features for interactive tuning ]]<br />
* [[VTK/VTK_SMP | SMP framework introduced to make shared memory parallel development]]<br />
* Fixed compiler/linker errors when building against OS X 10.9 SDK. Fixed other errors building against llvm's [http://libcxx.llvm.org libc++].<br />
* Support for unicode text when a suitable font file is used in vtkTextProperty.<br />
* [[VTK/Wrapping C++11 Code | Wrapper support for header files with C++11 syntax]].<br />
* [[VTK/Better_Java_Support | Better Java support and install rules]]<br />
* Depth peeling support for ATI devices<br />
* Ctests generate a stacktrace on POSIX systems in response to catastrophic failure such as abort or segfault.<br />
* Qt5 support<br />
* [[VTK/API_Changes_6_0_0_to_6_1_0 | API Diff Report]]<br />
<br />
==== VTK 6.0 ====<br />
<br />
* [[VTK/VTK_6_Migration_Guide | VTK 6 API Migration Guide]]<br />
* [[VTK/Build_System_Migration | VTK 6 (build system) Migration Guide]]<br />
* [[VTK/Module_Development | VTK 6 Module Development]]<br />
* [[VTK/Remove_VTK_4_Compatibility | Remove VTK 4 compatibility layer from pipeline]]<br />
* [[VTK/Modularization_Proposal | Modularization]]<br />
* [[VTK/Remove_vtkTemporalDataSet | Temporal support changes]]<br />
* [[VTK/Composite_data_changes | Composite data structure changes ]]<br />
* [[VTK/API_Changes_5_10_1_to_6_0_0 | API Diff Report]]<br />
<br />
==== VTK 5.10 ====<br />
<br />
* [[VTK/improved unicode support | Change unicode readers/writers to register as codecs (finished Oct 29 2010)]]<br />
* [[VTK/Image Rendering Classes | New image rendering classes (start Dec 15 2010, finish Mar 15 2011)]]<br />
* [[VTK/Image Interpolators | Image interpolators (start Jun 20 2011, finish Aug 31 2011)]]<br />
* [[VTK/GSoC | Projects from Google Summer of Code 2011]]<br />
* [[VTK/Release5100 New Classes | List of new classes in 5.10]]<br />
* [[VTK/API_Changes_5_8_0_to_6_1_0 | API Diff Report]]<br />
<br />
==== VTK 5.8 ====<br />
<br />
* [[VTK/Polyhedron_Support | Polyhedron cells and MVC Interpolation]]<br />
* [http://visimp.cs.unc.edu/2010/10/26/reeb-graphs/ Reeb Graphs]<br />
* [[VTK/Closed Surface Clipping | Clipping of closed surfaces (start Mar 26, 2010, finish Apr 22, 2010)]]<br />
* [[VTK/Wrapper Update 2010 | New wrappers (start Apr 28, 2010)]]<br />
* [[VTK/Image Stencil Improvements | Improved image stencil support (start Nov 3, 2010)]]<br />
* [[VTK/MNI File Formats | MNI file formats]]<br />
* [[VTK/Release580 New Classes | List of New Classes]]<br />
<br />
==== VTK 5.6 ====<br />
<br />
* [[VTK/MultiPass_Rendering | VTK Multi-Pass Rendering]]<br />
* [[VTK/Multicore and Streaming | Multicore and Streaming]]<br />
* [[VTK/statistics | Statistics]]<br />
* [[VTK/Array Refactoring | Array Refactoring]]<br />
* [[VTK/3DConnexion Devices Support | 3DConnexion Devices Support]]<br />
* [[VTK/Charts | New Charts API]]<br />
* [[VTK/New CellPicker | New Cell Picker and Volume Picking (start Nov 2010, finish Feb 2010)]]<br />
<br />
==== VTK 5.4 ====<br />
<br />
* [[VTK 5.4 Release Planning]]<br />
* [[VTK/Cray XT3 Compilation| Cray XT3 Compilation]]<br />
* [[VTK/Geovis vision toolkit | Geospatial and vision visualization support ]]<br />
<br />
==== VTK 5.2 ====<br />
<br />
* [[VTK/Java Wrapping | VTK Java Wrapping]]<br />
* [[VTK/Composite Data Redesign | Composite Data Redesign]]<br />
* [[VTK Shaders | VTK Shaders]]<br />
* [[VTKShaders | Shaders in VTK]]<br />
* [[VTK/VTKMatlab | VTK with Matlab]]<br />
* [[VTK/Time_Support | VTK Time support]]<br />
* [[VTK/Graph Layout | VTK Graph Layout]]<br />
* [[VTK/Depth_Peeling | VTK Depth Peeling]]<br />
* [[VTK/Using_JRuby | Using VTK with JRuby]]<br />
* [[VTK/Painters | Painters]]<br />
<br />
==== VTK 5.0 ====<br />
<br />
* [[VTK/Tutorials/New_Pipeline | New Pipeline]]<br />
* [[VTKWidgets | VTK Widget Redesign]]<br />
<br />
== News ==<br />
<br />
=== Development Process ===<br />
The VTK Community is [[VTK/Managing_the_Development_Process | upgrading its development process]]. The current process using Git can be found at the [[VTK/Git|VTK Git page]]. We are doing this in response to the continuing and rapid growth of the toolkit. A VTK Architecture Review Board [[VTK/Architecture_Review_Board |VTK ARB]] is being put in place to provide strategic guidance to the community, and individuals are being identified as leaders in various VTK subsystems.<br />
<br />
Have a question or topic for the ARB to discuss about the future of VTK? First, please bring the topic to the [http://public.kitware.com/mailman/listinfo/vtk-developers VTK developers mailing list]. If the issue is not resolved there or needs further planning or direction, you may [[VTK/ARB/Meetings#Potential Topics|enter a suggested topic for discussion]].<br />
<br />
* [[Proposed Changes to VTK | Proposed Changes to VTK]]<br />
* [[VTK/ARB_Notes | VTK ARB Notes ]]<br />
<br />
===[[VTK/NextGen|VTK NextGen]]=== <br />
We have started collecting works in progress as well as future ideas at [[VTK/NextGen|NextGen]]. Please add anything you are working on, would like to collaborate on, or would like to see in the future of VTK!<br />
<br />
== Wrapping ==<br />
<br />
* [[VTK/Wrappers | Wrapping Tools]]<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
* [[VTK/CSharp/ActiViz.NET|CSharp/ActiViz.NET]]<br />
** [[VTK/Examples/CSharp|CSharp/ActiViz.NET code samples]]<br />
* [[VTK/CSharp/ComingSoon|CSharp/ComingSoon]]<br />
<br />
== Developers Corner ==<br />
[[VTK/Git|Development process with Git]]<br />
<br />
[[VTK/Developers Corner|Developers Corner]]<br />
<br />
{{VTK/Template/Footer}}</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=ParaView/Catalyst/Overview&diff=53704ParaView/Catalyst/Overview2013-07-29T23:52:42Z<p>Wschroed: /* Technical Objectives */</p>
<hr />
<div><center>[[File:CatalystLogo.png|500px]]</center><br />
== Background ==<br />
Several factors are driving the growth of simulations. Computational power of supercomputers and computer clusters is growing, while the price of individual computers is decreasing. Distributed computing techniques allow hundreds, or even thousands, of computer nodes to participate in a single simulation. The benefit of this computational power is that simulations are becoming more accurate and useful for predicting complex phenomena. The downside to this growth is the enormous amounts of data that need to be saved and analyzed to determine the results of the simulation. Unfortunately, the growth of IO capabilities has not kept up with the growth of processing power in these machines. Thus, the ability to generate data has outpaced our ability to save and analyze the data. This bottleneck is throttling our ability to benefit from our improved computing resources. For example, simulations often save their states infrequently to minimize storage requirements. <br />
<br />
Such coarse temporal sampling makes it difficult to notice some complex behavior. To address this issue, ParaView can now be easily used to integrate concurrent analysis and visualization directly into simulation codes. This functionality is often referred to as co-processing, ''in situ'' processing, or co-visualization.<br />
This feature is available through ParaView Catalyst (previously called ParaView Co-Processing). The figures below compare the traditional simulation workflow with the workflow using ParaView Catalyst.<br />
{|<br />
|[[Image:FullWorkFlow.png|thumb|800px|Full Workflow]]<br />
|}<br />
{|<br />
|[[Image:CatalystWorkFlow.png|thumb|800px|Workflow With Co-Processing]]<br />
|}<br />
<br />
== Technical Objectives ==<br />
<br />
The main objective of the co-processing toolset is to integrate easy-to-use, core data processing into the simulation to enable scalable data analysis. The toolset has two main parts:<br />
<br />
* '''An extensible and flexible library''': ParaView Catalyst was designed to be flexible enough to be embedded in various simulation codes with relative ease and minimal footprint. This flexibility is critical, as a library that requires significant effort to embed cannot be successfully deployed in a large number of simulations. The co-processing library is also easily extended so that users can deploy new analysis and visualization techniques to existing co-processing installations. The minimal footprint is achieved by using the Catalyst configuration tools (see directions for [[Generating_Catalyst_Source_Tree|generating source]] and [[Build_Directions| building]]) to reduce the overall number of ParaView and VTK libraries that a simulation code needs to link to.<br />
<br />
* '''Configuration tools for ParaView Catalyst output''': It is important for users to be able to configure the Catalyst output using graphical user interfaces that are part of their daily workflow. <br />
<br />
Note: All of this can be done for large data. The Catalyst library will often be used on a distributed system. For the largest simulations, the visualization of extracts may also require a distributed system (i.e. a visualization cluster).<br />
<br />
== Details ==<br />
<br />
Using ParaView Catalyst is a fundamental change in the way that simulation results are obtained. The entire<br />
goal is to reduce the time to gain insight into the problem being simulated. Figure 1 shows<br />
the computational time to perform a full workflow using Sandia's CTH simulation code for various problem sizes and process counts.<br />
This time includes both simulation and post-processing time. Figure 2 shows the execution time for obtaining the same results with CTH while using Catalyst for ''in situ'' analysis and visualization.<br />
<br />
{|<br />
|[[Image:CTHFullWorkflow.png|thumb|400px|Figure 1: Classical workflow.]] || [[Image:CTHCatalystWorkflow.png|thumb|400px|Figure 2: Catalyst workflow.]]<br />
|}<br />
<br />
Note that as both the problem size and the number of processes increase, the benefits of using Catalyst<br />
become more apparent. This is largely because the computing system's resources are being stretched to<br />
their limit and inefficiencies become more apparent. This is detailed in Sandia's SAND2010-6118 technical report, referenced below. One possible workflow that ParaView's co-processing tools<br />
enable is demonstrated more fully in Figure 3.<br />
<br />
{|<br />
|[[Image:CatalystFullWorkFlow.png|650px|thumb|left|Figure 3: Full workflow.]]<br />
|}<br />
<br />
In this workflow the user creates a Python script using ParaView's plugin for creating Catalyst co-processing scripts. Here the user can choose a variety of outputs: extracted data such as polygonal output with field data, rendered images, plot information and/or statistics. The Python scripts are then used by Catalyst during the simulation run to output the simulation user's desired information. Typically, the extracted data is orders of magnitude smaller than saving out the full data set. This is shown in Figure 4 for a relatively small problem for several VTK filters.<br />
Often the reduced file IO also results in faster simulation runs, since in certain cases it is faster<br />
for Catalyst to compute a desired extract and save it to disk than to save the full raw data.<br />
Figure 5 shows the compute time for certain VTK filters compared to saving the full raw data for a small six-process run.<br />
<br />
{|<br />
|[[Image:CatalystReduceOutputSize.png|450px|thumb|Figure 4: Extract file size compared to full raw data.]] ||<br />
[[Image:CatalystReduceRunTime.png|450px|thumb|Figure 5: Time to compute extracts compared to file IO.]]<br />
|}<br />
<br />
== Important Links ==<br />
<br />
* The [http://catalyst.paraview.org main page] for ParaView Catalyst.<br />
* The most complete information is available in the [[Media:CatalystUsersGuide.pdf|ParaView Catalyst User's Guide]].<br />
* [https://github.com/acbauer/CatalystExampleCode Example code] with samples from Python, C, C++ and Fortran for creating adaptors as well as examples of hard-coded C++ Catalyst pipelines.<br />
* A [[Media:ParaViewCatalystV1Tutorial.pdf|tutorial]] on ParaView Catalyst along with [[Media:ParaViewCatalystV1TutorialFiles.tgz|sample files]].<br />
* Sandia National Laboratories SAND2010-6118 technical report on [http://www.sandia.gov/~kmorel/documents/MilestoneFY10Sandia.pdf Visualization on Supercomputing Platform Level II ASC Milestone].<br />
<br />
Information for ParaView's original co-processing tools is still [[CoProcessing|available]], but applies to versions of ParaView before 4.0.</div>Wschroed
https://public.kitware.com/Wiki/index.php?title=VTK/Git&diff=41758VTK/Git2011-07-20T19:57:25Z<p>Wschroed: /* Gerrit */</p>
<hr />
<div>__TOC__<br />
<br />
VTK version tracking and development is hosted by [http://git-scm.com Git].<br />
<br />
=Official Repository=<br />
<br />
One may browse the repository online using the [http://git.wiki.kernel.org/index.php/Gitweb Gitweb] interface at http://vtk.org/gitweb.<br />
<br />
==Cloning==<br />
<br />
These instructions assume a command prompt is available with <code>git</code> in the path.<br />
See our Git [[Git/Download|download instructions]] for help installing Git.<br />
<br />
One may clone the repository using [http://www.kernel.org/pub/software/scm/git/docs/git-clone.html git clone] through the native <code>git</code> protocol:<br />
<br />
$ git clone git://vtk.org/VTK.git VTK<br />
<br />
or through the (less efficient) <code>http</code> protocol:<br />
<br />
$ git clone http://vtk.org/VTK.git VTK<br />
<br />
All further commands work inside the local copy of the repository created by the clone:<br />
<br />
$ cd VTK<br />
<br />
For VTKData the URLs are<br />
<br />
git://vtk.org/VTKData.git<br />
http://vtk.org/VTKData.git<br />
<br />
For VTKLargeData the URLs are<br />
<br />
git://vtk.org/VTKLargeData.git<br />
http://vtk.org/VTKLargeData.git<br />
<br />
==Branches==<br />
<br />
At the time of this writing the repository has the following branches:<br />
<br />
* '''master''': Development (default)<br />
* '''release''': Release maintenance<br />
* '''nightly-master''': Follows '''master''', updated at 01:00 UTC<br />
* '''hooks''': Local commit hooks ([[Git/Hooks#Local|place]] in .git/hooks)<br />
<br />
Release branches converted from CVS have been artificially merged into master.<br />
Actual releases have tags named by the release version number.<br />
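As a sandboxed illustration of working with these branches (no network access; a throwaway bare repository stands in for vtk.org, and the branch contents are placeholders), a clone can list the remote branches and switch to the release maintenance branch like this:<br />

```shell
# Build a stand-in "origin" carrying master and release branches.
origin=$(mktemp -d)/VTK.git
git init -q --bare "$origin"
git --git-dir="$origin" symbolic-ref HEAD refs/heads/master
seed=$(mktemp -d)
git clone -q "$origin" "$seed" 2>/dev/null
cd "$seed"
git config user.name "Your Name" && git config user.email "you@example.com"
git checkout -q -b master
git commit -q --allow-empty -m "initial"
git branch release
git push -q origin master release
# A fresh clone, as in the Cloning section, then check out 'release':
work=$(mktemp -d)
git clone -q "$origin" "$work/VTK" 2>/dev/null
cd "$work/VTK"
git branch -r                        # lists origin/master, origin/release
git checkout -q -b release origin/release
```

Against the real repository, the last two commands are the ones you would run after cloning from vtk.org.<br />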
<br />
=Development=<br />
<br />
We provide here a brief introduction to '''VTK''' development with Git.<br />
See the [[Git/Resources|Resources]] page for further information such as Git tutorials.<br />
<br />
==Quick Start Guide==<br />
<br />
If you would like to get up and running quickly, we recommend you follow the [[VTK/Git/Simple|simple Git guide]]. It will guide you through setting up your development environment, working on topic branches, and merging your changes back into master.<br />
<br />
==Introduction==<br />
<br />
We require all commits in VTK to record valid author/committer name and email information.<br />
Use [http://www.kernel.org/pub/software/scm/git/docs/git-config.html git config] to introduce yourself to Git:<br />
<br />
$ git config --global user.name "Your Name"<br />
$ git config --global user.email "you@yourdomain.com"<br />
<br />
Note that "Your Name" is your ''real name'' (e.g. "John Doe", not "jdoe").<br />
While you're at it, optionally enable color output from Git commands:<br />
<br />
$ git config --global color.ui auto<br />
<br />
The <code>--global</code> option stores the configuration settings in <code>~/.gitconfig</code> in your home directory so that they apply to all repositories.<br />
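The settings can be checked with <code>git config --get</code>. The sketch below applies the same identity settings locally in a throwaway repository (so your real <code>~/.gitconfig</code> is untouched); the name and email are placeholders, as above:<br />

```shell
# Throwaway repository; local config overrides any --global settings.
repo=$(mktemp -d)
cd "$repo" && git init -q
git config user.name "Your Name"
git config user.email "you@yourdomain.com"
# Verify what git will record for commits made in this repository:
git config --get user.name     # -> Your Name
git config --get user.email    # -> you@yourdomain.com
```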
<br />
==Hooks==<br />
<br />
The '''hooks''' branch provides local commit hooks to be placed in <code>.git/hooks</code>.<br />
It is shared by many <code>public.kitware.com</code> repositories.<br />
<br />
See the general [[Git/Hooks|hooks]] information page to set up your local hooks.<br />
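The Git/Hooks page has the authoritative procedure; as a sandboxed sketch only (the repository, the sample hook, and the <code>git archive</code> step are stand-ins, not the official VTK instructions), copying a hooks branch's files into <code>.git/hooks</code> can look like this:<br />

```shell
# Throwaway repository with a 'hooks' branch carrying a sample hook.
repo=$(mktemp -d)
cd "$repo" && git init -q
git config user.name "Your Name" && git config user.email "you@example.com"
git checkout -q -b master
git commit -q --allow-empty -m "initial"
git checkout -q -b hooks
printf '#!/bin/sh\nexit 0\n' > pre-commit
git add pre-commit
git commit -q -m "add sample pre-commit hook"
git checkout -q master
# Install: extract the hooks branch's files into .git/hooks.
git archive hooks | tar -x -C .git/hooks
chmod +x .git/hooks/pre-commit
ls .git/hooks/pre-commit
```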
<br />
==Workflow==<br />
<br />
We now use a [http://public.kitware.com/Wiki/Git/Workflow/Topic branchy workflow] based on topic branches. We do not have a 'next' integration branch at this point, so ignore any reference to it and merge straight to master. The next sections describe use of Gerrit and the topic stage; the [[VTK/Git/Simple|simplified guide]] can be followed using the supplied scripts and aliases.<br />
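The topic-branch cycle described above can be sketched as follows (a throwaway repository so the sketch is runnable anywhere; the topic name and commit contents are placeholders):<br />

```shell
# Scaffolding: a throwaway repository standing in for a VTK clone.
repo=$(mktemp -d)
cd "$repo" && git init -q
git config user.name "Your Name" && git config user.email "you@example.com"
git checkout -q -b master
git commit -q --allow-empty -m "initial"
# The cycle: branch from an up-to-date master, commit, then merge
# straight back to master (there is no 'next' integration branch).
git checkout -q -b my-topic
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "Describe the change"
git checkout -q master
git merge -q --no-ff my-topic -m "Merge topic 'my-topic'"
git log --oneline --graph            # merge commit now on top of master
```

In a real VTK clone you would run <code>git pull</code> on master before branching, and push or submit the merged result rather than merging locally.<br />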
<br />
==Gerrit==<br />
<br />
If you have a patch that you want to be considered for inclusion in VTK, you can submit it to [http://review.source.kitware.com/ gerrit]. To register on gerrit, use the following steps:<br />
<br />
# Get an [http://openid.net/get-an-openid/ OpenID]<br />
# Register at http://review.source.kitware.com using your OpenID (link in upper right)<br />
# Set all fields in your profile at http://review.source.kitware.com/#settings<br />
# Add your ssh public key at http://review.source.kitware.com/#settings,ssh-keys<br />
<br />
You will then be ready to submit patches. A typical patch for gerrit is a topic branch containing a single commit. If your branch has multiple commits, use "git rebase -i" to squash them into a single commit; if that is not reasonable, consider pushing your branch to github or some other external site for review. Your topic branch should be based on either the release branch or master, depending on where you want it to go. An example workflow is as follows:<br />
<br />
{| border="0"<br />
!colspan=2|Gerrit Usage Summary<br />
|-<br />
|align="center"|<br />
'''Initial Setup:'''<br />
|<br />
$ git remote add gerrit USERNAME@review.source.kitware.com:VTK<br />
|-<br />
|align="center"|<br />
'''Create topic branch:'''<br />
|<br />
$ git checkout master<br />
$ git pull (i.e. get your local repository up-to-date)<br />
$ git checkout -b topic-branch-to-create<br />
|-<br />
|align="center"|<br />
'''Push to Gerrit:'''<br />
|<br />
$ edit files<br />
$ git add<br />
$ git commit<br />
$ git gerrit-push (alias for git push gerrit HEAD:refs/for/master/topic-name)<br />
|-<br />
|align="center"|<br />
'''Revise a Gerrit topic:'''<br />
|<br />
$ edit files and "git add" each edited file<br />
$ git commit --amend<br />
$ verify that the commit log ends with the correct Change-Id<br />
$ git gerrit-push<br />
|-<br />
|align="center"|<br />
'''Squash commits:'''<br />
|<br />
$ git rebase -i HEAD~2 (number depends on number of commits to squash)<br />
$ verify that the commit log ends with the correct Change-Id<br />
$ git gerrit-push<br />
|-<br />
|align="center"|<br />
'''Merge topic into VTK:'''<br />
|<br />
$ git stage-push (alias for git push stage HEAD)<br />
$ git stage-merge (alias for ssh git@vtk.org stage VTK merge topic-name)<br />
|}<br />
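The squash step above can also be performed non-interactively. This sketch, in a scratch repository, uses <code>git reset --soft</code> as an equivalent of squashing the last two commits with <code>git rebase -i HEAD~2</code> (the file and commit names are made up):<br />

```shell
# Build a scratch repository with three commits
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.name "Demo" && git config user.email "demo@example.com"
echo a >  f && git add f && git commit -qm "first"
echo b >> f && git add f && git commit -qm "second"
echo c >> f && git add f && git commit -qm "third"

# Squash the last two commits into one (a non-interactive
# equivalent of "git rebase -i HEAD~2" with the commits squashed)
git reset --soft HEAD~2
git commit -qm "second and third, squashed"

git rev-list --count HEAD   # prints: 2
```

When revising a gerrit topic this way, remember to verify that the squashed commit message still ends with the correct Change-Id, as the table notes.<br />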
<br />
To get your patch reviewed, go to [http://review.source.kitware.com/ http://review.source.kitware.com/] and add reviewers for your patch. Alternatively, you can post an email to the vtk-developers list asking for reviewers. If you do not have commit access for VTK, ask one of the reviewers to merge your topic into VTK.<br />
<br />
==Topic Stage==<br />
<br />
We provide a "[http://vtk.org/stage/VTK.git VTK Topic Stage]" repository to which developers may publish arbitrary topic branches and request automatic merges. To follow this workflow, you should have git version 1.7 or greater.<br />
<br />
The topic stage URLs are<br />
<br />
* <code>git://vtk.org/stage/VTK.git</code> (clone, fetch)<br />
* <code>http://vtk.org/stage/VTK.git</code> (clone, fetch, gitweb)<br />
* <code>git@vtk.org:stage/VTK.git</code> (push)<br />
<br />
See our [http://public.kitware.com/Wiki/Git/Workflow/Stage Topic Stage Workflow] documentation for general instructions.<br />
''(Currently VTK does not have a '''next''' branch. Just skip that part of the instructions and merge directly to master.)''<br />
When accessing the VTK stage, one may optionally substitute<br />
"<code>ssh git@vtk.org stage VTK ...</code>"<br />
for<br />
"<code>ssh git@public.kitware.com stage <repo> ...</code>"<br />
in the ssh command-line interface.<br />
<br />
{| border="0"<br />
!colspan=2|Stage Usage Summary<br />
|-<br />
|align="center"|<br />
'''Initial Setup:'''<br />
|<br />
$ git remote add stage git://vtk.org/stage/VTK.git<br />
$ git config remote.stage.pushurl git@vtk.org:stage/VTK.git<br />
|-<br />
|align="center"|<br />
'''Fetch Staged Topics:'''<br />
|<br />
$ git fetch stage --prune<br />
|-<br />
|align="center"|<br />
'''Create Local Topic:'''<br />
|<br />
$ git checkout -b ''topic-name'' origin/master<br />
$ edit files<br />
$ git commit<br />
|-<br />
|align="center"|<br />
'''Stage Current Topic:'''<br />
|<br />
$ git push stage HEAD<br />
|-<br />
|align="center"|<br />
'''Print Staged Topics:'''<br />
|<br />
$ ssh git@vtk.org stage VTK print<br />
|-<br />
|align="center"|<br />
'''Merge Staged Topic:'''<br />
|<br />
$ ssh git@vtk.org stage VTK merge ''topic-name''<br />
|-<br />
|align="center"|<br />
'''Check out Staged Topic:'''<br />
|<br />
$ git fetch stage<br />
$ git checkout -b ''topic-name'' remotes/stage/''topic-name''<br />
|-<br />
|align="center"|<br />
'''Abandon/Delete Staged Topic:'''<br />
|<br />
$ git push stage :''topic-name''<br />
<br />
|}<br />
<br />
If the merge attempt conflicts, follow the printed instructions.<br />
<br />
==Github==<br />
<br />
The VTK repository is mirrored on github. Experimental branches that are not ready for staging can be published on github for review.<br />
<br />
The first step in creating a github branch is to create an account on github and make a fork of [http://github.com/Kitware/VTK http://github.com/Kitware/VTK]. Since this fork will be a mirror of the VTK master, there is no need to clone it on your local machine. Instead, you will just want to set github as an alternative remote in your existing local copy of the VTK git repository.<br />
<br />
To set github as an alternative remote, use the following commands:<br />
<br />
{| border="0"<br />
!colspan=2|Github Usage Summary<br />
|-<br />
|align="center"|<br />
'''Remote Setup:'''<br />
|<br />
$ git remote add github git@github.com:yourname/VTK.git<br />
$ git config remote.github.pushurl git@github.com:yourname/VTK.git<br />
|-<br />
|align="center"|<br />
'''Update the Remote:'''<br />
|<br />
# update from Kitware's master and push to github<br />
$ git pull<br />
$ git push github HEAD<br />
|-<br />
|align="center"|<br />
'''Push Branch to Github:'''<br />
|<br />
$ git checkout -b some-branch github/master<br />
# edit files and commit changes<br />
$ git push github HEAD<br />
|}<br />
<br />
The "update remote" step above should be done regularly on your master branch to keep your github fork up-to-date with the VTK master. Do not use github's graphical interface for merging commits: it creates new commits by rebasing the commits you select against your VTK fork, and these rebased commits will be very difficult to merge back into the VTK master.<br />
<br />
The checkout command in the third step automatically sets github as the default remote for the new branch, but you must still specify "github HEAD" when you push; otherwise you will push to the github master branch instead of a new github branch. Also, since the new branch is based on your github fork, perform step 2 first to make sure your fork is up-to-date. This is just a suggestion, as it is always possible to rebase or merge at a later time.<br />
<br />
The default remotes for each of your branches are controlled by entries such as this in your .git/config file:<br />
<br />
[branch "my-branch-name"]<br />
remote = github<br />
merge = refs/heads/my-branch-name<br />
<br />
You can edit this file to make github the default remote and to set the remote branch name for your existing branches. Or you can always use "git push github HEAD" to push each branch to github, without changing the defaults.<br />
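The same per-branch defaults can also be set from the command line instead of editing <code>.git/config</code> by hand. A sketch in a scratch repository (the branch name and fork URL are placeholders):<br />

```shell
# Scratch repository with a branch and a "github" remote
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git checkout -q -b my-branch-name
git remote add github git@github.com:yourname/VTK.git   # placeholder fork URL

# Equivalent to editing the [branch "my-branch-name"] section by hand
git config branch.my-branch-name.remote github
git config branch.my-branch-name.merge refs/heads/my-branch-name

git config --get branch.my-branch-name.remote   # prints: github
```

On recent Git versions, <code>git push -u github my-branch-name</code> sets the same defaults at push time.<br />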
<br />
=Publishing=<br />
<br />
==Pushing==<br />
<br />
Authorized developers may publish work directly to <code>vtk.org/VTK.git</code> using Git's SSH protocol.<br />
To request access, fill out the [https://www.kitware.com/Admin/SendPassword.cgi Kitware Password] form.<br />
<br />
See the [[Git/Publish#Push_Access|push instructions]] for details.<br />
<br />
For VTK, configure the push URL:<br />
<br />
git config remote.origin.pushurl git@vtk.org:VTK.git<br />
<br />
For VTKData, configure the push URL:<br />
<br />
git config remote.origin.pushurl git@vtk.org:VTKData.git<br />
<br />
===Update Hook===<br />
<br />
The vtk.org repository has an <code>update</code> hook.<br />
When someone tries to push changes to the repository it checks the commits as documented [[Git/Hooks#update|here]].<br />
<br />
==Patches==<br />
<br />
Contributions of bug fixes and features are commonly produced by the community. Patches are a convenient method for managing such contributions.<br />
<br />
One may send patches after subscribing to our mailing list:<br />
<br />
* [http://www.vtk.org/mailman/listinfo/vtk-developers VTK Developers Mailing List]<br />
<br />
See our [[Git/Publish#Patches|patch instructions]] for details.<br />
<br />
= Troubleshooting =<br />
== fatal: The remote end hung up unexpectedly ==<br />
* If <tt>git push</tt> fails with "fatal: The remote end hung up unexpectedly", you probably forgot to set the push url with "git config"; see [[#Pushing]].<br />
* If <tt>git pull</tt> or <tt>git fetch</tt> fails, you might be behind a firewall: try editing .git/config so that the urls start with http:// instead of git://<br />
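Rather than editing <code>.git/config</code> by hand, the URL can be switched with <code>git config</code>. A sketch using the VTK clone URLs from this page, in a scratch repository:<br />

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git remote add origin git://vtk.org/VTK.git

# Behind a firewall that blocks the git:// protocol, point the
# same remote at the http:// URL instead:
git config remote.origin.url http://vtk.org/VTK.git
git config --get remote.origin.url   # prints: http://vtk.org/VTK.git
```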
<br />
== Restoring files locally ==<br />
Q: "I cloned the VTK repository, then ran "rm -rf Hybrid". How do I get it back?"<br><br />
A: git checkout Hybrid<br><br />
Q: "I modified a file locally. I want to revert it."<br><br />
A: git checkout myfile.cxx<br><br />
Q: "I want to get rid of all local changes in this directory and start clean."<br><br />
A: git checkout .<br />
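The first answer above, as a runnable sketch in a scratch repository (the directory and file names are made up):<br />

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.name "Demo" && git config user.email "demo@example.com"
mkdir Hybrid && echo "code" > Hybrid/a.cxx
git add . && git commit -qm "initial"

rm -rf Hybrid          # the accidental deletion
git checkout Hybrid    # restore the directory from the last commit
cat Hybrid/a.cxx       # prints: code
```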
<br />
=Resources=<br />
<br />
Additional information about Git may be obtained at sites listed [[Git/Resources|here]].</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB/Meetings/July_2011&diff=41604VTK/ARB/Meetings/July 20112011-07-12T14:14:45Z<p>Wschroed: </p>
<hr />
<div>== Attendees ==<br />
* Jeff<br />
* Berk<br />
* Will<br />
* Bill <br />
* Steve<br />
* Andrew <br />
* Stephane<br />
* Claudio<br />
* Paolo<br />
<br />
== Agenda ==<br />
* Review action items from last time<br />
* Modularization status<br />
* Backward compatibility (with regards to pipeline)<br />
* Wrapping<br />
* SetInput()<br />
<br />
== Notes ==<br />
* Berk described the modularization efforts. One major fly in the ointment is testing. One concern is "non-unit" tests, those that have wider dependencies.<br />
* Decoupling data model and execution model. Berk is 95% of the way through.<br />
* Schedule: still chugging away at 5.8 release. 5.10 release will be out in August. Legacy stuff will be in 5.10; it will be possible to remove "Legacy" in 5.10 to see what v6.0 will look like. There will be tools and documentation for moving to v6.0 (i.e., scripts). v6.0 realistically in November time frame.<br />
* Python wrapping issues (New() vs. NewInstance() and reference counts, magic to control reference counts). Defer this to v6.0 (need to talk with David Gobbi, who is spearheading the effort). Another option, which needs investigation, is to use SmartPointers() instead. The relationship to Java (and Jython) also needs to be investigated.<br />
* Berk: Leaving SetInput() alone with a new behavior is a problem. The general agreement is to change the name and make sure there is a clear legacy/upgrade path from 5.8 -> 5.10 -> 6.0. v6.0 will have significant incompatibilities.<br />
<br />
== Next Time ==<br />
* Deprecated class discussions<br />
* List of things to help with transitions (either via email or another ARB meeting).</div>Wschroedhttps://public.kitware.com/Wiki/index.php?title=VTK/ARB/Meetings&diff=41581VTK/ARB/Meetings2011-07-12T13:20:09Z<p>Wschroed: /* Past Meetings */</p>
<hr />
<div>== Scheduled Meetings ==<br />
* TBD<br />
<br />
== Past Meetings ==<br />
* [[VTK/ARB/Meetings/July 2011|July 12, 2011]]<br />
* [[VTK/ARB/Meetings/January 2011|January 18, 2011]]<br />
* [[VTK/ARB/Meetings/October 2010|October 6, 2010]]<br />
* [[VTK/ARB/Meetings/August 2010|August 17, 2010]]<br />
* [[VTK/ARB/Meetings/July 2010|July 13, 2010]]<br />
* [[VTK/ARB/Meetings/April 2010|April 1, 2010]]<br />
* [[VTK/ARB/Meetings/January 2010|January 13, 2010]]<br />
* [[VTK/ARB/Meetings/November 2009|November 3, 2009]]<br />
* [[VTK/ARB/Meetings/October 2009|October 1, 2009]]<br />
<br />
== Potential Topics ==</div>Wschroed