<div>Hi Karl,</div><div><br></div><div>I'm glad to hear that you got it working. I have the cosines going down the columns of the matrix, so that the matrix transforms "raw data" coordinates into patient coordinates. In other words, option (1).</div>
<div><br></div><div>I was trying to remember why I chose that option... and I think this is the reason: the Z direction cosine vector is not always orthogonal to the X and Y cosine vectors. In a tilted-gantry CT scan, the volume is rhombus-shaped, and the Z direction cosine is the direction in which the gantry moves between slices (i.e. not the cross product of the X and Y vectors). In this case, you will get the correct transform if you choose option (1), but will get the wrong transform if you choose option (2).</div>
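<div><br></div><div>As a minimal sketch of option (1), assuming xCos, yCos and zCos hold the row, column and slice direction cosine vectors (with zCos taken from the actual slice-to-slice direction rather than from a cross product), the matrix would be built like this:</div><div><br></div><div>import vtk<br><br>matrix = vtk.vtkMatrix4x4()<br>matrix.Identity()<br>for j in range(3):<br>    matrix.SetElement(j, 0, xCos[j])  # X cosines down column 0<br>    matrix.SetElement(j, 1, yCos[j])  # Y cosines down column 1<br>    matrix.SetElement(j, 2, zCos[j])  # Z cosines down column 2</div><div><br></div><div>Multiplying a point by this matrix takes it from "raw data" coordinates into patient coordinates, and its inverse goes the other way.</div>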
<div><br></div><div>Most DICOM scans provide an orthogonal volume, and for these it makes no difference. It is only with the rare non-orthogonal scan volumes that the difference becomes apparent.</div><div><br></div><div>
David</div><br><div class="gmail_quote">On Mon, Nov 22, 2010 at 12:35 PM, Karl <span dir="ltr"><<a href="mailto:bulkmailaddress@gmail.com" target="_blank">bulkmailaddress@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
David,<br>
<br>
Thanks for the help. I have gotten it to work. It was an issue with a<br>
matrix being transposed when it shouldn't have been.  I have one final question: when<br>
I load the directional cosines from the header, should they go into the rows<br>
or columns of the directional cosine matrix?<br>
<br>
For example, I get the same results if I:<br>
1) load the cosines into columns of matrix, multiply origin by inverse, and<br>
multiply data set by cosine matrix<br>
2) load the cosines into rows of matrix, multiply origin by matrix, multiply<br>
data set by inverse of cosine matrix<br>
<br>
I know these are the same thing mathematically, but which one is correct<br>
based on convention?  I believe the first one is correct.<br>
<br>
Thanks for all the help getting this to work.<br>
<font color="#888888"><br>
Karl<br>
</font><div><br>
-----Original Message-----<br>
From: <a href="mailto:vtkusers-bounces@vtk.org" target="_blank">vtkusers-bounces@vtk.org</a> [mailto:<a href="mailto:vtkusers-bounces@vtk.org" target="_blank">vtkusers-bounces@vtk.org</a>] On Behalf<br>
Of David Gobbi<br>
</div><div><div></div><div>Sent: Friday, November 19, 2010 4:22 PM<br>
To: <a href="mailto:vtkusers@vtk.org" target="_blank">vtkusers@vtk.org</a><br>
Subject: Re: [vtkusers] using itk transform in vtk<br>
<br>
Hi Karl,<br>
<br>
I meant to reply to you this morning but was distracted by other tasks...<br>
<br>
The DICOM code that I have (which is actually written in Python) does<br>
things like this:<br>
<br>
matrix = vtk.vtkMatrix4x4()<br>
for j in range(3):<br>
    matrix.SetElement(j,0,rowOrientation[j])<br>
    matrix.SetElement(j,1,columnOrientation[j])<br>
    matrix.SetElement(j,2,sliceOrientation[j])<br>
<br>
The above code builds the direction cosines from the DICOM<br>
orientation; the slice orientation is the cross product of the other<br>
two orientations. Then, to convert the DICOM position into the<br>
ImageData origin, I use the following. This code is so old it uses<br>
Python Numeric... but I'll give it to you raw, and you can figure out<br>
how to do the same thing with vtkMatrix4x4.<br>
<br>
origin = Numeric.array(position)<br>
dircos = Numeric.transpose(Numeric.array([rowOrientation,<br>
columnOrientation, sliceOrientation]))<br>
origin = list(LinearAlgebra.solve_linear_equations(dircos,origin))<br>
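<br>
The vtkMatrix4x4 version of that last step would be something along these<br>
lines (a sketch, reusing the 4x4 matrix built in the loop above, which<br>
holds the same direction cosines as dircos, and the same position values):<br>
<br>
inverse = vtk.vtkMatrix4x4()<br>
inverse.DeepCopy(matrix)<br>
inverse.Invert()<br>
# multiply ImagePositionPatient by the inverse to get the ImageData origin<br>
origin = inverse.MultiplyPoint((position[0], position[1], position[2], 1.0))[0:3]<br>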
<br>
I apologize for the python code, but as you can see, your own code<br>
more-or-less duplicates what I do. So all that I can suggest is the<br>
following:<br>
<br>
1) Check the value that reader->GetDataOrigin() returns. Make sure<br>
that it really is the same as the ImagePositionPatient value in the<br>
DICOM header.<br>
<br>
2) Check the math, just to make sure that matrices aren't accidentally<br>
transposed and that none of the other mistakes we often make have crept in.<br>
<br>
David<br>
<br>
<br>
On Fri, Nov 19, 2010 at 12:00 PM, Karl <<a href="mailto:bulkmailaddress@gmail.com" target="_blank">bulkmailaddress@gmail.com</a>> wrote:<br>
> David,<br>
> Thanks for walking me through another way. I have attempted to implement<br>
> what you propose, but am not getting something quite right.  Would you mind<br>
> reviewing my procedure below and pointing out where I go wrong?  I suspect<br>
> it is with the origin.  The following code is called twice, once for the<br>
> non-transformed data and once for the transformed data; hopefully they show<br>
> up in the same spot :)<br>
><br>
> // load data<br>
> vtkGDCMImageReader *reader = vtkGDCMImageReader::New();<br>
> reader->SetFileName( filename );<br>
> reader->SetFileLowerLeft(1); // do not recalculate origin or y-flip data<br>
><br>
> // get directional cosines<br>
> double directionalCosines[3][3];<br>
><br>
> image->GetImageOrientationPatient(directionalCosines[0][0], directionalCosines[0][1],<br>
>     directionalCosines[0][2], directionalCosines[1][0], directionalCosines[1][1],<br>
>     directionalCosines[1][2]);<br>
> directionalCosines[2][0] = directionalCosines[0][1]*directionalCosines[1][2]<br>
>     - directionalCosines[0][2]*directionalCosines[1][1];<br>
> directionalCosines[2][1] = directionalCosines[0][2]*directionalCosines[1][0]<br>
>     - directionalCosines[0][0]*directionalCosines[1][2];<br>
> directionalCosines[2][2] = directionalCosines[0][0]*directionalCosines[1][1]<br>
>     - directionalCosines[0][1]*directionalCosines[1][0];<br>
><br>
> // create directional cosine matrix, transforms from raw data space to<br>
> patient space<br>
> vtkMatrix4x4* dc = vtkMatrix4x4::New();<br>
> dc->Identity();<br>
> dc->SetElement(0,0,directionalCosines[0][0]);<br>
> dc->SetElement(0,1,directionalCosines[0][1]);<br>
> dc->SetElement(0,2,directionalCosines[0][2]);<br>
> dc->SetElement(1,0,directionalCosines[1][0]);<br>
> dc->SetElement(1,1,directionalCosines[1][1]);<br>
> dc->SetElement(1,2,directionalCosines[1][2]);<br>
> dc->SetElement(2,0,directionalCosines[2][0]);<br>
> dc->SetElement(2,1,directionalCosines[2][1]);<br>
> dc->SetElement(2,2,directionalCosines[2][2]);<br>
><br>
> // inverse directional cosine matrix, transform from patient space to raw<br>
> data space<br>
> vtkMatrix4x4* dcInverse = vtkMatrix4x4::New();<br>
> dc->Invert(dc,dcInverse);<br>
><br>
> // calculate new origin, transform from patient space to raw data space<br>
> double origin[4];<br>
> origin[3]=1;<br>
> reader->GetDataOrigin(origin);<br>
> dcInverse->MultiplyPoint(origin,origin);<br>
><br>
> // apply new origin<br>
> vtkImageChangeInformation* information = vtkImageChangeInformation::New();<br>
> information->SetOutputOrigin(origin[0], origin[1], origin[2]);<br>
> information->SetInputConnection(reader->GetOutputPort());<br>
><br>
> // create polydata<br>
> vtkMarchingCubes* march = vtkMarchingCubes::New();<br>
> march->SetInputConnection(information->GetOutputPort());<br>
><br>
> // transform polydata from raw data space to patient space<br>
> vtkTransformPolyDataFilter* transformToPatientSpace =<br>
> vtkTransformPolyDataFilter::New();<br>
> vtkTransform* matrixTransform = vtkTransform::New();<br>
> matrixTransform->SetMatrix(dc);<br>
> transformToPatientSpace->SetTransform(matrixTransform);<br>
> transformToPatientSpace->SetInputConnection(march->GetOutputPort());<br>
><br>
> //*************************************<br>
> // this section is only called for the non-transformed data<br>
> // load and apply itk transform<br>
> vtkMatrix4x4* itkMatrix =<br>
> [param0 param1 param2 param9]<br>
> [param3 param4 param5 param10]<br>
> [param6 param7 param8 param11]<br>
> [0 0 0 1] // from itk writing out matrix<br>
> itkMatrix->Invert(itkMatrix,itkMatrix); // itk actually provides inverse<br>
> transforms<br>
><br>
> vtkTransform* matrixTransform = vtkTransform::New();<br>
> matrixTransform->SetMatrix(itkMatrix);<br>
><br>
> vtkTransformPolyDataFilter* itkTransform =<br>
> vtkTransformPolyDataFilter::New();<br>
> itkTransform->SetTransform(matrixTransform);<br>
><br>
> itkTransform->SetInputConnection(transformToPatientSpace->GetOutputPort());<br>
> **************************************//<br>
><br>
> // display polydata<br>
> vtkPolyDataMapper* mapper = vtkPolyDataMapper::New();<br>
> mapper->SetInputConnection(transformToPatientSpace->GetOutputPort());<br>
><br>
> vtkActor* actor = vtkActor::New();<br>
> actor->SetMapper(mapper);<br>
><br>
><br>
> I believe I am accomplishing the following.<br>
> Load the data already transformed by itk (data A). Apply transform to<br>
> change it from raw data space to patient space. Display data. VTK space<br>
> now corresponds to this patient space.<br>
><br>
> Load untransformed data (data B).  Apply transform to change it from its raw<br>
> data space to its patient space. Apply inverse itk transform to change it<br>
> from its patient space to data A's patient space. Display data.<br>
><br>
> Thanks<br>
><br>
><br>
> -----Original Message-----<br>
> From: <a href="mailto:vtkusers-bounces@vtk.org" target="_blank">vtkusers-bounces@vtk.org</a> [mailto:<a href="mailto:vtkusers-bounces@vtk.org" target="_blank">vtkusers-bounces@vtk.org</a>] On Behalf<br>
> Of David Gobbi<br>
> Sent: Thursday, November 18, 2010 4:55 PM<br>
> To: <a href="mailto:vtkusers@vtk.org" target="_blank">vtkusers@vtk.org</a><br>
> Subject: Re: [vtkusers] using itk transform in vtk<br>
><br>
> I haven't looked into gdcmreslice myself, but the procedure that you<br>
> have outlined is very different from what I have used with DICOM<br>
> files. What I do is as follows:<br>
><br>
> 1) Read the image as-is, i.e. do not allow the reader to flip it or do<br>
> any other adjustments.  Computing the vtkImageData Origin is tricky:<br>
> this Origin is in VTK's "data coordinate space" for the vtkImageData,<br>
> while DICOM's ImagePositionPatient is in DICOM patient coordinates.<br>
> So I have to use the direction cosines matrix (see step 3) to convert<br>
> ImagePositionPatient to the vtkImageData Origin that I use.<br>
><br>
> 2) Compute the Z spacing by measuring the distance between adjacent<br>
> DICOM slices, and checking the direction of the cross product of x, y<br>
> direction cosines from ImageOrientationPatient to make sure the slices<br>
> are in the right order<br>
><br>
> 3) Use the direction cosines to create an "image orientation matrix".<br>
> I do not re-order the axes of the image with vtkImageReslice (as far<br>
> as I'm concerned, doing so just complicates the whole process, and<br>
> cannot account for oblique orientations)<br>
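><br>
> For step 2, a rough sketch of that check (numpy here; pos0 and pos1 stand<br>
> for the ImagePositionPatient of two adjacent slices, and xdir, ydir for the<br>
> two ImageOrientationPatient vectors):<br>
><br>
> import numpy as np<br>
><br>
> zdir = np.cross(xdir, ydir)  # slice normal from the in-plane cosines<br>
> zspacing = np.dot(zdir, np.subtract(pos1, pos0))<br>
> # a negative zspacing means the slices are ordered against the normal<br>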
><br>
> So at this point, I have a DICOM image with X as the row direction<br>
> (left-to-right), Y as the column direction (top-to-bottom), and Z as<br>
> the slice direction. I also have a vtkMatrix4x4 that is a<br>
> transformation matrix from this "raw" coordinate system to the DICOM<br>
> patient coordinate system. Then, I identify one of my DICOM data sets<br>
> as my "primary" data set, and identify its DICOM patient coordinate<br>
> system as being identical to my VTK world coordinate system.<br>
><br>
> When doing registrations, I use a vtkTransform that converts from the<br>
> DICOM patient coordinates of one image, to the DICOM patient<br>
> coordinates of another image. Each of these images also has a<br>
> vtkMatrix4x4 that contains its direction cosines, which relate the<br>
> vtkImageData data coordinates to the DICOM patient coordinates of that<br>
> image.<br>
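><br>
> As a sketch of that chain (regBtoA standing for the registration matrix<br>
> that takes image B's patient coordinates into image A's, and dircosB for<br>
> image B's direction cosine matrix):<br>
><br>
> import vtk<br>
><br>
> worldFromB = vtk.vtkTransform()<br>
> worldFromB.PostMultiply()<br>
> worldFromB.Concatenate(dircosB)   # raw B data coords -> B patient coords<br>
> worldFromB.Concatenate(regBtoA)   # B patient coords -> A patient coords (= world)<br>
><br>
> That combined transform can then go straight into a vtkTransformPolyDataFilter.<br>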
><br>
> By doing the above, I keep the various coordinate spaces as<br>
> compartmentalized as possible.  I avoid flipping the image or<br>
> resampling it along a new set of axes because that tends to make<br>
> things more confusing; I prefer to just store a direction cosines<br>
> matrix for each image and leave the data alone.<br>
><br>
> David<br>
><br>
><br>
> On Thu, Nov 18, 2010 at 2:25 PM, Karl <<a href="mailto:bulkmailaddress@gmail.com" target="_blank">bulkmailaddress@gmail.com</a>> wrote:<br>
>> Mathieu + vtk community,<br>
>> Thanks for replying.  I looked into the gdcmreslice.cxx example today.  I was<br>
>> unsuccessful in using that approach to achieve results.  In case it matters<br>
>> I have gdcm 2.0.16 and vtk 5.4.2.<br>
>><br>
>> If I understand correctly the following process is applied:<br>
>> 1) Load image with vtkGDCMImageReader.  Due to differences in origin the<br>
>> reader shifts the origin along the y-axis (based on the directional cosines)<br>
>> and reverses the order of the slices along the y direction.<br>
>><br>
>> 2) This reversal of order along the y-axis also results in the x-axis<br>
>> appearing flipped, so the vtkImageFlip filter is used to undo that effect.<br>
>><br>
>> 3) vtkImageReslice is then used to reorient the data to lie along the<br>
>> actual dicom x,y,z axes instead of the row, column, slice sampling of the<br>
>> data set. This doesn't seem the best approach because it results in<br>
>> obliquely resampling the entire data set.<br>
>> Using this approach does not work correctly.  The data does appear with RAH<br>
>> orientation but does not get placed correctly.<br>
>><br>
>> My code goes something like this:<br>
>> vtkGDCMImageReader *reader = vtkGDCMImageReader::New();<br>
>> reader->SetFileName( filename );<br>
>><br>
>> vtkImageFlip *flip = vtkImageFlip::New();<br>
>> flip->SetInput(reader->GetOutput());<br>
>> flip->SetFilteredAxis(0);<br>
>> flip->Update();<br>
>><br>
>> vtkImageReslice *reslice = vtkImageReslice::New();<br>
>> reslice->SetInput(flip->GetOutput());<br>
>> vtkMatrix4x4 *invert = vtkMatrix4x4::New();<br>
>> invert->DeepCopy(reader->GetDirectionCosines());<br>
>> invert->Invert();<br>
>> reslice->SetResliceAxes(invert);<br>
>> reslice->Update();<br>
>><br>
>> vtkMarchingCubes* march = vtkMarchingCubes::New();<br>
>> march->SetInputConnection(reslice->GetOutputPort());<br>
>><br>
>> // create transform matrix<br>
>> vtkMatrix4x4* matrix = vtkMatrix4x4::New(); ...<br>
>> vtkTransform* matrixTransform = vtkTransform::New();<br>
>> matrixTransform->SetMatrix(matrix);<br>
>><br>
>> vtkTransformPolyDataFilter* transform = vtkTransformPolyDataFilter::New();<br>
>> transform->SetTransform(matrixTransform);<br>
>> transform->SetInputConnection(march->GetOutputPort());<br>
>><br>
>> vtkPolyDataMapper* mapper = vtkPolyDataMapper::New();<br>
>> mapper->SetInputConnection(transform->GetOutputPort());<br>
>><br>
>> vtkActor* actor = vtkActor::New();<br>
>> actor->SetMapper(mapper);<br>
>><br>
>> This is basically done twice.  Once with the original data and the transform<br>
>> from ITK, and once with the data set that has already been transformed by<br>
>> ITK and an identity transform. The two objects do not appear in the same<br>
>> location. The dicom images do start out having different origins and<br>
>> directional cosines. But if both are mapped to physical space correctly<br>
>> they should overlap exactly.<br>
>><br>
>> Thanks<br>
>><br>
>><br>
>> -----Original Message-----<br>
>> From: Mathieu Malaterre [mailto:<a href="mailto:mathieu.malaterre@gmail.com" target="_blank">mathieu.malaterre@gmail.com</a>]<br>
>> Sent: Thursday, November 18, 2010 7:04 AM<br>
>> To: <a href="mailto:bulkmailaddress@gmail.com" target="_blank">bulkmailaddress@gmail.com</a><br>
>> Subject: Re: [vtkusers] using itk transform in vtk<br>
>><br>
>> Have you tried the gdcmreslice.cxx example in GDCM?  Is it working for you?<br>
>><br>
>> Thx<br>
>><br>
>> On Thu, Nov 18, 2010 at 6:20 AM, Karl <<a href="mailto:bulkmailaddress@gmail.com" target="_blank">bulkmailaddress@gmail.com</a>> wrote:<br>
>>> Vtk folks and David,<br>
>>><br>
>>> Thanks for responding. I have yet to have success, but I believe I have<br>
>>> addressed the issues you point out.<br>
>>> I am using vtkGDCMImageReader with the FileLowerLeft flag set to 0.  If I<br>
>>> understand correctly this importer correctly flips the Y and adjusts the<br>
>>> origin.<br>
>>><br>
>>> ITK transform is about the 0,0,0 physical coordinate.  When I load the image<br>
>>> in itk the origin information should be used by this importer to properly<br>
>>> position the image so that the 0,0,0 physical coordinate corresponds to the<br>
>>> 0,0,0 vtk location.  The inverse of the itk transform can then be directly<br>
>>> applied to the imported image.<br>
>>><br>
>>> This still does not work however.  I need to somehow get the vtk coordinate<br>
>>> system to accurately represent the physical coordinate system so that<br>
>>> objects from different dicom images, with different origins and directional<br>
>>> cosines, appear in proper relation to each other.<br>
>>><br>
>>> Any suggestions on how to proceed?<br>
>>><br>
>>> Thanks<br>
>>><br>
>>> -----Original Message-----<br>
>>> From: David Gobbi [mailto:<a href="mailto:david.gobbi@gmail.com" target="_blank">david.gobbi@gmail.com</a>]<br>
>>> Sent: Tuesday, November 16, 2010 4:31 PM<br>
>>> To: <a href="mailto:bulkmailaddress@gmail.com" target="_blank">bulkmailaddress@gmail.com</a><br>
>>> Cc: <a href="mailto:vtkusers@vtk.org" target="_blank">vtkusers@vtk.org</a><br>
>>> Subject: Re: [vtkusers] using itk transform in vtk<br>
>>><br>
>>> Hi Karl,<br>
>>><br>
>>> The VTK transform operates on the physical origin, which can be at any<br>
>>> desired location in the image. The relationship between DICOM<br>
>>> coordinates and VTK coordinates is explained thusly:<br>
>>><br>
>>> DICOM has three fields related to the positions of the image voxels in<br>
>>> physical (or patient) space:<br>
>>> - ImagePositionPatient (position of upper left corner)<br>
>>> - ImageOrientationPatient (row and column directions)<br>
>>> - PixelSpacing (X and Y)<br>
>>><br>
>>> VTK image data has two fields (no orientation):<br>
>>> - Origin (position of lower left corner)<br>
>>> - Spacing (X, Y, and Z)<br>
>>><br>
>>> Note that the image data "Origin" is a misnomer: it is not the<br>
>>> physical origin of the coordinate system. It is just the position of<br>
>>> the voxel in the lower left corner. It is the responsibility of the<br>
>>> VTK image reader that you are using to set the Origin and Spacing of<br>
>>> the data correctly. (And again, I will repeat that what VTK calls the<br>
>>> image "Origin" is not the physical image origin, it is just a<br>
>>> position).<br>
>>><br>
>>> The fundamental difference between these two coordinate systems is not<br>
>>> the location of the origin; it is a vertical flip since DICOM images<br>
>>> are top-to-bottom while VTK images are bottom-to-top. When dealing<br>
>>> with DICOM, I set the VTK image "origin" to the DICOM<br>
>>> ImagePositionPatient, and then apply a view transform that flips the<br>
>>> image by using a rotation of 180 degrees about the X axis.<br>
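>>><br>
>>> In code, that flip is just something like this (a sketch):<br>
>>><br>
>>> import vtk<br>
>>><br>
>>> viewTransform = vtk.vtkTransform()<br>
>>> viewTransform.RotateX(180.0)  # DICOM rows run top-to-bottom, VTK bottom-to-top<br>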
>>><br>
>>> - David<br>
>>><br>
>>> On Tue, Nov 16, 2010 at 2:02 PM, Karl <<a href="mailto:bulkmailaddress@gmail.com" target="_blank">bulkmailaddress@gmail.com</a>> wrote:<br>
>>>> Hi,<br>
>>>> I am trying to use an itk transform in vtk.  Here is a high level overview<br>
>>>> of what I do:<br>
>>>><br>
>>>> 1) In itk use itk::ImageRegistrationMethod with itk::AffineTransform to find<br>
>>>> a transform.<br>
>>>><br>
>>>> 2) I use itk::ResampleImageFilter to create an image transformed using the<br>
>>>> itk::AffineTransform.<br>
>>>><br>
>>>> 3) I use the GetLastTransformParameters() function to get the affine<br>
>>>> transformation matrix to use in vtk as:<br>
>>>> [param0 param1 param2 param9]<br>
>>>> [param3 param4 param5 param10]<br>
>>>> [param6 param7 param8 param11]<br>
>>>> [0 0 0 1]<br>
>>>><br>
>>>> 4) Now in vtk I try to display both the resampled image and the original<br>
>>>> image with the transform applied.  They should show as exactly the same if I<br>
>>>> can apply the transform in vtk the same as it is applied in itk.<br>
>>>><br>
>>>> 5) I load both images in vtk.  The images are binary so I use marching cubes<br>
>>>> to create meshes of them.  I display the transformed image polydata as is.<br>
>>>><br>
>>>> 6) I apply a vtkTransform to the original image polydata. I do this by<br>
>>>> concatenating 3 vtkMatrix4x4.  Two to adjust for different origins in itk and<br>
>>>> vtk and one for the itk transform.<br>
>>>> Translate_Back * itkTransform * Translate_itk_origin_to_vtk_origin<br>
>>>><br>
>>>> Where<br>
>>>> itkTransform is as shown in 3 above.<br>
>>>><br>
>>>> Translate_itk_origin_to_vtk_origin is<br>
>>>> [1 0 0 -originX]<br>
>>>> [0 1 0 -originY]<br>
>>>> [0 0 1 -originZ]<br>
>>>> [0 0 0 1]<br>
>>>> Where the origin is as found in the dicom header for the original image<br>
>>>><br>
>>>> Translate_Back<br>
>>>> [1 0 0 originX]<br>
>>>> [0 1 0 originY]<br>
>>>> [0 0 1 originZ]<br>
>>>> [0 0 0 1]<br>
>>>><br>
>>>> 7) Set vtkTransform inverse flag and transform and display the original<br>
>>>> image.<br>
>>>><br>
>>>> 8) The two objects show up at different places in the scene instead of at<br>
>>>> the same location.<br>
>>>><br>
>>>> If I understand this correctly the itk transform is to be applied about the<br>
>>>> physical origin, not the location of the corner pixel or the center of the<br>
>>>> image.<br>
>>>><br>
>>>> The vtk transform uses an origin at the corner of the image.<br>
>>>><br>
>>>> By translating by the origin defined in the dicom header I move the physical<br>
>>>> origin to the corner pixel of the image.  This corrects for the discrepancy<br>
>>>> between the points about which itk and vtk apply their transforms.<br>
>>>><br>
>>>> Also the itk transform goes from output to input, so the inverse transform<br>
>>>> needs to be applied in vtk.<br>
>>>><br>
>>>> Can anyone see what I am missing?<br>
>>>> Thanks<br></div></div></blockquote></div>