= Slicer Registration Library Case #41: Mouse Brain MRI  =
 
=== Input ===
 
{| style="color:#bbbbbb;" cellpadding="10" cellspacing="0" border="0"
|MRI baseline
|MRI follow-up
|-
|fixed image / target: MRI mouse #1
|moving image: MRI mouse #2
|}
== Modules used ==
*[[Documentation/Nightly/Modules/Transforms| ''Transforms'']]: to fix the image coordinate system / orientation issue
*[[Documentation/Nightly/Modules/N4ITKBiasFieldCorrection| ''N4ITKBiasFieldCorrection'']]: to correct for local intensity inhomogeneity
*[[Documentation/Nightly/Modules/Editor| ''Editor'']]: to build masks for registration
*[[Documentation/Nightly/Modules/BRAINSFit| ''General Registration (BRAINS)'']]: for affine and nonrigid registration
  
 
== Description ==
 
The aim is to register the brains of two control mice to each other. The original images store their header orientation information in a nonstandard form; as a consequence, each image loads with the sagittal and axial views switched. To fix this we create a reorientation transform. In addition, the target image content is of low intensity and small compared to the surrounding tissue, which makes masking essential. We will first correct the orientation issue and apply intensity bias correction, then build two masks for registration, and finally co-register the two images in two steps via affine and nonrigid BSpline transforms.
  
== Download (from NAMIC MIDAS) ==
<small>''Why 2 sets of files? The "input data" mrb contains only the unregistered data, so you can try the method yourself from start to finish. The full dataset also includes intermediate files and results (transforms, resampled images, etc.). If you use the full dataset, we recommend choosing different names for the images/results you create yourself, to distinguish the original data from the data you generate.''</small>
*[http://slicer.kitware.com/midas3/download/?items=xx '''RegLib_C41.mrb''': input data only, use this to run the tutorial from the start <small>(Slicer mrb file, 32 MB)</small>]
*[http://slicer.kitware.com/midas3/download/?items=xx '''RegLib_C41_full.mrb''': includes raw data + all solutions and intermediate files, use to browse/verify <small>(Slicer mrb file, 97 MB)</small>]
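If you prefer to work from the Slicer Python console, the downloaded scene bundle can also be loaded with a short script. This is only a convenience sketch; the file path below is a placeholder for wherever you saved the .mrb.
<pre>
import slicer

# Load the downloaded scene bundle (.mrb); replace the path with your own.
slicer.util.loadScene("/path/to/RegLib_C41.mrb")

# List the volumes that came with the scene to confirm the load worked.
for node in slicer.util.getNodesByClass("vtkMRMLScalarVolumeNode"):
    print(node.GetName())
</pre>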
 
  
 
== Keywords ==
MRI, brain, mouse, masking, non-human, non-rigid

== Input Data ==
*reference/fixed MRI: 0.1 x 0.1 x 0.1 mm, 192 x 256 x 192
*moving MRI: 0.1 x 0.1 x 0.1 mm, 192 x 256 x 192

=== Registration Challenges ===
*The original images are not in standard RAS space; the orientation has the axial and sagittal planes flipped.
*The target image content is of low intensity and small compared to the surrounding tissue, making masking essential.
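To verify the voxel spacing and dimensions listed under Input Data from the Slicer Python console, a minimal check is sketched below. The node names "mouse1" and "mouse2" are the names used in the procedure that follows and are an assumption; adjust them to match your scene.
<pre>
import slicer

# Print voxel spacing (mm) and dimensions for the two input volumes.
# "mouse1"/"mouse2" are assumed node names; adjust to match your scene.
for name in ("mouse1", "mouse2"):
    volume = slicer.util.getNode(name)
    print(name,
          "spacing:", volume.GetSpacing(),
          "dimensions:", volume.GetImageData().GetDimensions())
</pre>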
 
 
  
 
== Procedure ==
[[Image:RegLib_C41_Reorient_transform.png|150px|left|transform to reorient volumes to RAS space]]
#'''Reorient to RAS space''' (a Python sketch of this step follows after this list):
##go to the [[Documentation/Nightly/Modules/Data| ''Data'']] module
###right-click the "Scene" node and select "Insert Transform" from the pulldown menu
###rename the new transform to "Xf0_reorient" or similar
###place/drag both images inside the transform node
##go to the [[Documentation/Nightly/Modules/Transforms| ''Transforms'']] module and select "Xf0_reorient" from the menu
###change the identity matrix as shown in the image here; to modify an entry, double-click the field, edit the value, and hit ENTER to confirm
###switch the "1" in the first and last row to the last and first column, respectively, and change the sign of the "1" in the middle row to "-1"
###you should now see the images in the position shown in the results below, with the eyes looking up in the axial (red) view and left in the sagittal (yellow) view
##save intermediate results
#'''Bias Correction''': correct for local intensity inhomogeneities [[Media: RegLib_C06_2_BiasCorrection.mov|(screencast for this step)]] (see also the sketches after this list)
##open the [[Documentation/Nightly/Modules/Editor| ''Editor'']] module (we first build a rough mask for better filtering results)
###select "mouse1" as input image and accept the default colormap settings
###select the "Threshold" tool [[Image:ThresholdToolIcon.png|30px]]
###drag the left side of the threshold range slider until most of the brain is included, then click "Apply"
###select the "Dilate" tool and click "Apply" 2-3 times until the mask encloses all of the brain and a bit beyond
###repeat for the "mouse2" image
##open the [[Documentation/Nightly/Modules/N4ITKBiasFieldCorrection| ''N4ITK MRI Bias Correction'']] module (under the ''Filters'' menu)
###''Input Image'': "mouse1"
###''Mask Image'': the "mouse1-label" image generated above
###''Output Volume'': create a new volume and rename it to "mouse1_N4"
###''Number of iterations'': reduce to 200,150,100
###leave the remaining parameters at their defaults
###click ''Apply''; this takes 1-2 minutes to process, depending on CPU
##repeat for the "mouse2" image with the same settings
##save intermediate results
#'''Build Registration Masks''' (see the mask-building sketch after this list): we now rebuild the masks for the registration. You could also reuse the masks generated above, but we will generate more accurate ones with relatively little effort. Registration masks need not be overly accurate, but they should include the main boundaries of the structure of interest, in this case the brain; we want a method that generates such a mask quickly and without excessive manual editing.
##open the [[Documentation/Nightly/Modules/Editor| ''Editor'']] module
###select "mouse1" as input image and accept the default colormap settings; create a new label map (do not reuse the one generated above)
###select the "Magic Wand" tool [[Image:MagicWandToolIcon.png|30px]]
###change the default tolerance and size limits to 800 and 8000, respectively
###check the "Fill Volume" checkbox to apply the effect in 3D
###left-click inside the brain in any of the 3 views and wait a few seconds for the effect to run and the display to update; a speckled segmentation overlay is added with each click. Do not worry about the speckled result at this point, we will fix that later. The goal is for the seed points to cover most of the extent of the brain in all dimensions, so place seeds in all 3 views (see the screencast for details). Once most of the brain is covered, proceed with the next step.
###select the "Dilate" tool and click "Apply" 2-3 times until the mask closes the gaps and encloses all of the brain and a bit beyond
###repeat for the "mouse2" image
#'''Affine Registration''': open the [[Documentation/Nightly/Modules/BRAINSFit|''General Registration (BRAINS)'' module]] (a scripted equivalent of steps 4 and 5 is sketched after this list)
##''Input Images: Fixed Image Volume'': mouse1_N4
##''Input Images: Moving Image Volume'': mouse2_N4
##''Output Settings'':
###''Slicer Linear Transform'': create a new transform and rename it to "Xf1_Affine"
###''Output Image Volume'': none (no resampling is required for linear transforms)
##''Registration Phases'': check ''Rigid'', ''Rigid+Scale'', and ''Affine''
##''Main Parameters'':
###increase ''Number Of Samples'' to 200,000
##leave all other settings at their defaults
##click ''Apply''; runtime is < 1 min
#'''BSpline Registration''': open the [[Documentation/Nightly/Modules/BRAINSFit|''General Registration (BRAINS)'' module]]
##''Fixed Image'': "mouse1_N4", ''Moving Image'': "mouse2_N4"
##''Registration Phases'': from ''Initialize with previously generated transform'', select the "Xf1_Affine" node created before
##''Registration Phases'': uncheck the rigid, scale, and affine boxes and check only the ''BSpline'' box
##''Output Settings: Slicer Linear Transform'': select "None"
##''Output Settings: Slicer BSpline Transform'': create new, rename to "Xf2_BSpline"
##''Output Image Volume'': create new, rename to "mouse2_Xf2"; ''Pixel Type'': "short"
##''Registration Parameters'': increase ''Number Of Samples'' to 300,000; ''Number of Grid Subdivisions'': 5,5,5
##''Control Of Mask Processing'' tab: check the ''ROI'' box; for ''Input Fixed Mask'' and ''Input Moving Mask'' select the two masks generated above
##leave all other settings at their defaults
##click ''Apply''

The extent of nonrigid alignment needed depends on the application. For a better match, the grid size can be increased to 7x7x7 or higher, along with the number of sample points. Because larger grid sizes carry the risk of unstable and unfeasible deformations, an iterative approach with increasing degrees of freedom is recommended. Note that if your version of Slicer does not support concatenation of nonrigid transforms, such a stepwise approach implies multiple resamplings of the moving volume and the associated interpolation blurring.
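For reference, the reorientation of step 1 can also be set up from the Slicer Python console. The sketch below is only an approximate equivalent of the GUI steps: it assumes the volumes are named "mouse1" and "mouse2" and uses scripting calls from recent Slicer versions; the matrix is the permutation described in step 1.
<pre>
import slicer
import vtk

# Step 1 equivalent: create the reorientation transform and place both
# volumes under it ("mouse1"/"mouse2" are assumed node names).
reorient = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "Xf0_reorient")

# Matrix described in step 1: move the "1" of the first and last rows to the
# last and first columns, respectively, and negate the middle row.
matrix = vtk.vtkMatrix4x4()
matrix.DeepCopy((0, 0, 1, 0,
                 0, -1, 0, 0,
                 1, 0, 0, 0,
                 0, 0, 0, 1))
reorient.SetMatrixTransformToParent(matrix)

# Equivalent of dragging the images inside the transform in the Data module.
for name in ("mouse1", "mouse2"):
    slicer.util.getNode(name).SetAndObserveTransformNodeID(reorient.GetID())
</pre>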
 
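The rough masks of steps 2 and 3 are built interactively with the Editor; a non-interactive approximation using SimpleITK (threshold followed by dilation) is sketched below. This is not the Editor's implementation, and the file names and threshold value are placeholders you would tune for your data.
<pre>
import SimpleITK as sitk

# Approximate the Editor's Threshold + Dilate mask building.
# File names and the threshold value are placeholders.
image = sitk.ReadImage("mouse1.nrrd")

# Keep voxels above a manually chosen intensity threshold.
mask = sitk.BinaryThreshold(image, lowerThreshold=300, upperThreshold=1e9,
                            insideValue=1, outsideValue=0)

# Dilate a few voxels so the mask extends a bit beyond the brain boundary.
dilate = sitk.BinaryDilateImageFilter()
dilate.SetKernelRadius(2)
dilate.SetForegroundValue(1)
mask = dilate.Execute(mask)

sitk.WriteImage(mask, "mouse1-label.nrrd")
</pre>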
  
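Similarly, the bias-correction step can be reproduced outside the GUI with SimpleITK's N4 filter. This is a sketch rather than the N4ITK module itself: file names are placeholders, the iteration schedule mirrors the procedure above, and running at full resolution may be noticeably slower than the Slicer module.
<pre>
import SimpleITK as sitk

# N4 bias correction, roughly equivalent to the N4ITK module step.
# File names are placeholders; the mask is the rough label map from above.
image = sitk.Cast(sitk.ReadImage("mouse1.nrrd"), sitk.sitkFloat32)
mask = sitk.ReadImage("mouse1-label.nrrd", sitk.sitkUInt8)

n4 = sitk.N4BiasFieldCorrectionImageFilter()
n4.SetMaximumNumberOfIterations([200, 150, 100])  # as in the procedure
corrected = n4.Execute(image, mask)

sitk.WriteImage(corrected, "mouse1_N4.nrrd")
</pre>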
 
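Finally, the affine and BSpline registrations (steps 4 and 5) can be driven from the Python console with slicer.cli.run. The sketch below uses the node names from this tutorial and assumes the registration masks are called "mouse1-label" and "mouse2-label"; the parameter names follow the BRAINSFit command-line interface, but please verify them against the module documentation of your Slicer version.
<pre>
import slicer

fixed = slicer.util.getNode("mouse1_N4")
moving = slicer.util.getNode("mouse2_N4")
fixedMask = slicer.util.getNode("mouse1-label")    # assumed mask node names
movingMask = slicer.util.getNode("mouse2-label")

# Affine stage (step 4): rigid + scale + affine, output is a linear transform only.
affine = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "Xf1_Affine")
affineParams = {
    "fixedVolume": fixed.GetID(),
    "movingVolume": moving.GetID(),
    "linearTransform": affine.GetID(),
    "useRigid": True,
    "useScaleVersor3D": True,
    "useAffine": True,
    "numberOfSamples": 200000,
}
slicer.cli.run(slicer.modules.brainsfit, None, affineParams, wait_for_completion=True)

# BSpline stage (step 5): initialize with the affine result, mask both images,
# and write a resampled output volume.
bspline = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLBSplineTransformNode", "Xf2_BSpline")
outputVolume = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "mouse2_Xf2")
bsplineParams = {
    "fixedVolume": fixed.GetID(),
    "movingVolume": moving.GetID(),
    "initialTransform": affine.GetID(),
    "useBSpline": True,
    "bsplineTransform": bspline.GetID(),
    "outputVolume": outputVolume.GetID(),
    "outputVolumePixelType": "short",
    "numberOfSamples": 300000,
    "splineGridSize": "5,5,5",
    "maskProcessingMode": "ROI",
    "fixedBinaryVolume": fixedMask.GetID(),
    "movingBinaryVolume": movingMask.GetID(),
}
slicer.cli.run(slicer.modules.brainsfit, None, bsplineParams, wait_for_completion=True)
</pre>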
=== Registration Results ===

Shown are, from left to right, panels of the axial, sagittal, and coronal views:
*original brain in non-RAS orientation
*original brain after reorientation into RAS
*original, unregistered brains
*affine-registered brains
*registered brains after 5x5x5 nonrigid BSpline alignment
*deformation visualized by grid overlay

== Acknowledgments ==
Many thanks to Lili X. Cai from the Jasanoff Laboratory at MIT for sharing the data and the registration problem.