Slicer-3-6-FAQ

From Slicer Wiki
Back to Slicer 3.6 Introduction

This page contains frequently asked questions related to Slicer 3.6

User FAQ: Installation & Generic

Is Slicer3 ready for end users yet?

Yes! See the home page.

DLL Problems on Windows

I just downloaded the 3D Slicer binaries for Windows and unpacked them. When I double-clicked the file "slice2-win32-x86.ext", it gave the error message: can't find package vtk while executing "package require vtk" invoked from within "set::SLICER(VTK_VERSION) [package require vtk]" (file "C:/slicer2/Base/tck/Go.tck" line 483).

We've seen this sort of thing happen when you have incompatible DLLs installed. For example, some programs install a vtkCommon.dll into your Windows system folder, and Windows tries to use it instead of the one that comes with Slicer. As a test, try searching for vtk*.dll on your system and remove or rename any that are not part of the Slicer installation.

User FAQ: Data

I have a CT and would like to create an STL file

  • If the CT is in DICOM format, use the File/Add Volume module to load the data into Slicer (see here for more about loading data).
  • Use the Interactive Editor to create a label map which contains the structure you are interested in (look in the tutorial pages for how to use the editor).
  • Use the Model Maker module to create a triangulated surface model.
  • Use File/Save to save the model in STL format.

How do I load my DICOM DTI data into Slicer?

User FAQ: Segmentation

Interactive Editor: I would like to segment more than one structure

Use the Per-Structure Volumes feature to create label volumes for each structure that you would like to segment interactively. After successful segmentation of all the structures, the individual label volumes can be merged.
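The final merge step can be sketched with NumPy. This is a hedged illustration of the idea, not the Editor's actual implementation; the array shapes and label values are made up:

```python
import numpy as np

# Two hypothetical per-structure label volumes of the same shape,
# each containing a single labeled structure (values 1 and 2).
structure_a = np.zeros((4, 4), dtype=np.int16)
structure_a[0:2, 0:2] = 1
structure_b = np.zeros((4, 4), dtype=np.int16)
structure_b[2:4, 2:4] = 2

# Merge: copy each structure's nonzero voxels into one combined label map.
# Where structures overlap, later volumes overwrite earlier ones.
merged = np.zeros_like(structure_a)
for label_volume in (structure_a, structure_b):
    merged = np.where(label_volume > 0, label_volume, merged)
```

Because later volumes win on overlap, the order in which structures are merged matters wherever segmentations touch.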

User FAQ: Diffusion

I have DICOM DWI images. What do I need to do to process them in Slicer?

DWI formats are not properly standardized (as of 2010). In many cases, vendors put important information about gradients into private fields. Our DICOM to NRRD converter has an extensive library of such special cases. Convert your DWI images to an NRRD volume before beginning post-processing.

I want to register my diffusion scan to a structural scan

See DWI registration FAQ here

I have streamlines from DTI and would like to find the subset which goes through a particular region

The ROI selection module in Slicer 3.6 allows you to filter for streamlines that pass (or do not pass) through label-map ROIs.

User FAQ : Registration

How do I fix incorrect axis directions? Can I flip an image (left/right, anterior/posterior etc) ?

Sometimes the header information that describes the orientation and size of the image in physical space is incorrect or missing. Slicer displays images in physical space, in a RAS orientation. If images appear flipped or upside down, the transform that describes how the image grid relates to the physical world is incorrect. In proper RAS orientation, a head should have anterior end at the top in the axial view, look to the left in a sagittal view, and have the superior end at the top in sagittal and coronal views.
Yes, you can flip images and change the axis orientation of images in Slicer. But we urge great caution when doing so, since this can introduce substantial problems if done wrong: worse than no information is wrong information. Below are the steps to flip the LR axis of an image:

  1. Go to the Data module, right click on the node labeled "Scene" and select "Insert Transform" from the pulldown menu
  2. Move/drag your image into/onto the newly created transform
  3. Go to the Transforms module and select the newly created transform from the "Transform Node" menu.
  4. Click in the top left field of the 3x3 matrix, where you see the number 1.0. Hit the RETURN key to activate editing.
  5. Place a minus sign in front of the 1, then hit the RETURN key again. If you have your image in the axial slice view, you should see it flip immediately.
  6. Go back to the Data module, right click on your image and select Harden Transform from the pulldown menu.
  7. The image will move outside the transform. Your change of axis orientation has now been applied.
  8. Save your image under a new name. Do not use a format that cannot store physical orientation info in the header (jpg, gif, etc.); also consider saving the transform as documentation of the change you have applied.
  9. Note that this is saved as part of the image orientation info and not as an actual resampling of the image, i.e. if you save your image and reload it in other software that does not read the image orientation info in the header (or displays in image space only), you will not see the change you just applied.

To flip the other axes do the same as above but edit the diagonal entries in the 2nd and 3rd row, for flipping anterior-posterior and inferior-superior directions, respectively.
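In matrix terms, the steps above amount to placing a -1 on the corresponding diagonal entry of the transform. A small NumPy sketch of that math (illustrative only; Slicer applies the equivalent operation internally):

```python
import numpy as np

# A 4x4 homogeneous transform like the one edited in the Transforms module.
# Negating the first diagonal entry mirrors the left-right (R) axis;
# rows 2 and 3 would likewise flip anterior-posterior and inferior-superior.
flip_lr = np.eye(4)
flip_lr[0, 0] = -1.0

# Applying it to a point in RAS coordinates negates the R component only:
point_ras = np.array([10.0, 20.0, 30.0, 1.0])
flipped = flip_lr @ point_ras  # -> [-10., 20., 30., 1.]
```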

How do I fix a wrong image orientation in the header? / My image appears upside down / facing the wrong way / I have incorrect/missing axis orientation

  • Problem: My image appears upside down / flipped / facing the wrong way / I have incorrect/missing axis orientation
  • Explanation: Slicer presents and interacts with images in physical space. The stored image grid is related to physical space by a separate transform that defines how large the voxels are and how the image is oriented in space, e.g. which side is left or right. This information is stored in the image header, and different image file formats have different ways of storing it. If Slicer supports the image format, it should read the information from the header and display the image correctly. If the image appears upside down or with a distorted aspect ratio, the image header information is either missing or incorrect.
  • Fix: See the FAQ above for a way to flip axes inside Slicer. You can also correct the voxel dimensions and the image origin in the Info tab of the Volumes module, and you can reorient images via the Transforms module. Reorientation, however, will work only if the incorrect orientation involves rotation or translation.
  • To fix an axis orientation directly in the header info of an image file:
1. Load the image into Slicer (Load Volume, Add Data, Load Scene...).
2. Save the image back out in NRRD (.nhdr) format.
3. Open the .nhdr with a text editor of your choice. You should see lines like these:
 space: left-posterior-superior
 sizes: 448 448 128
 space directions: (0.5,0,0) (0,0.5,0) (0,0,0.8)
4. The three brackets ( ) represent the coordinate axes as defined in the space line above, i.e. the first one is left-right, the second anterior-posterior, and the last inferior-superior. To flip an axis, place a minus sign in front of the respective number, which is the voxel dimension. E.g. to flip left-right, change the line to
 space directions: (-0.5,0,0) (0,0.5,0) (0,0,0.8)
5. Alternatively, if the entire orientation is wrong, i.e. coronal slices appear in the axial view etc., it may be easier to just change the space field to the proper orientation. Note that Slicer uses RAS space by default, i.e. first (x) axis = left-right, second (y) axis = posterior-anterior, third (z) axis = inferior-superior.
6. Save & close the edited .nhdr file and reload the image in Slicer to see if the orientation is now correct.
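The header edit in step 4 can also be scripted. The sketch below operates on a header string for illustration (in practice you would read and write the .nhdr file; the values are the example ones above):

```python
import re

header = """space: left-posterior-superior
sizes: 448 448 128
space directions: (0.5,0,0) (0,0.5,0) (0,0,0.8)"""

def flip_axis(header_text, axis):
    """Negate the given space-direction vector (axis 0, 1, or 2), which is
    what placing a minus sign in front of the voxel size does."""
    out = []
    for line in header_text.splitlines():
        if line.startswith("space directions:"):
            vectors = re.findall(r"\(([^)]*)\)", line)
            parts = [[float(v) for v in vec.split(",")] for vec in vectors]
            parts[axis] = [-v + 0.0 for v in parts[axis]]  # +0.0 avoids "-0"
            rebuilt = " ".join(
                "(" + ",".join(f"{v:g}" for v in vec) + ")" for vec in parts
            )
            line = "space directions: " + rebuilt
        out.append(line)
    return "\n".join(out)

flipped_header = flip_axis(header, 0)  # flip the left-right axis
```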

How do I fix incorrect voxel size / aspect ratio of a loaded image volume?

  • Problem: My image appears distorted / stretched / with incorrect aspect ratio
  • Explanation: Slicer presents and interacts with images in physical space. The stored image grid is related to physical space by separate information that represents the physical "voxel size" and the direction/spatial orientation of the axes. If the voxel dimensions are incorrect or missing, the image will be displayed in a distorted fashion. This information is stored in the image header. If the information is missing, a default isotropic voxel size of 1 x 1 x 1 mm is assumed.
  • Fix: You can correct the voxel dimensions and the image origin in the Info tab of the Volumes module. If you know the correct voxel size, enter it in the fields provided and hit RETURN after each entry. You should see the display update immediately. Ideally you should try to maintain the original image header information from the point of acquisition. Sometimes this information is lost in format conversion. Try an alternative converter or image format if you know that the voxel size is correctly stored in the original image. Alternatively you can try to edit the information in the image header, e.g. save the volume in NRRD (.nhdr) format and open the ".nhdr" file with a text editor.

I don't understand your coordinate system. What do the coordinate labels R, A, S (and negative numbers) mean?

  • It's very important to realize that Slicer displays all images in physical space, i.e. in mm. This requires orientation and size information that is stored in the image header. How that header info is set and read from the header will determine how the image appears in Slicer. RAS is the abbreviation for right, anterior, superior; indicating in order the relation of the physical axis directions to how the image data is stored.
  • For a detailed description on coordinate systems see here.

My image is very large, how do I downsample to a smaller size?

Several resampling modules provide this functionality. If you also have a transform you wish to apply to the volume, we recommend the ResampleScalarVectorDWIVolume module; otherwise use the ResampleVolume module. For an overview of resampling tools see here.
Resampling in place:

1. Go to the Volumes module
2. from the Active Volume pulldown menu, select the image you wish to downsample
3. Open the Info tab. Write down the voxel dimensions (Image Spacing) and overall image size (Image Dimensions), e.g. 1.2 x 1.2 x 3 mm voxel size, 512 x 512 x 86. You will need this information to determine the amount of down-/up-sampling you wish to apply
4. Go to the Resample Volume module
5. In the Spacing field, enter the new desired voxel size. This is the original voxel size above multiplied by your downsampling factor. For example, if you wish to reduce the in-plane resolution by half but leave the number of slices unchanged, you would enter a new voxel size of 2.4,2.4,3.
6. For Interpolation, check the box most appropriate for your input data: for labelmaps check Nearest Neighbor; for 3D MRI or other band-limited signals check hamming. For most others leave the linear default. The windowed sinc (hamming, cosine, welch) and bspline (cubic) interpolators tend to produce less blurring than linear, but may cause overshoot near high-contrast edges (e.g. negative intensity values for background pixels).
7. Input Volume: Select the image you wish to resample.
8. Output Volume: Select Create New Volume for the output volume, and rename it to something meaningful, like your input + suffix "_resampled".
9. Click Apply
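The spacing arithmetic from steps 3 and 5 can be sketched as follows (example numbers from above):

```python
# Original voxel size and dimensions from the Volumes module Info tab:
original_spacing = (1.2, 1.2, 3.0)   # mm
original_dims = (512, 512, 86)

# Downsampling factor: halve in-plane resolution, keep the number of slices.
factor = (2.0, 2.0, 1.0)

# New voxel size = original spacing * factor -> enter this in the Spacing field.
new_spacing = tuple(s * f for s, f in zip(original_spacing, factor))
# -> (2.4, 2.4, 3.0)

# The resampled image will then have roughly these dimensions:
new_dims = tuple(round(d / f) for d, f in zip(original_dims, factor))
# -> (256, 256, 86)
```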

Resampling in place to match another image in size:

1. Go to the ResampleScalarVectorDWIVolume module
2. Input Volume: Select the image you wish to resample
3. Reference Volume: Select the reference image whose size/dimensions you want to match.
4. Output Volume: Select Create New Volume for output volume, and rename to something meaningful, like your input + suffix "_resampled"
5. Interpolation Type: check the box most appropriate for your input data: for labelmaps check nn = nearest neighbor; for 3D MRI or other band-limited signals check ws = windowed sinc. For most others leave the linear default. The ws (hamming, cosine, welch) and bspline (cubic) interpolators tend to produce less blurring than linear, but may cause overshoot near high-contrast edges (e.g. negative intensity values for background pixels).
6. Click Apply. Note that if the input and reference volume do not overlap in physical space, i.e. are not roughly co-registered, the resampled result may contain only part of the input image, or none of it. This is because the program resamples in the space defined by the reference image and fills in zeros where there is no input data. If you get an empty or clipped result, that is most likely the cause. In that case, try to re-center the two volumes before resampling.

Resampling in place by specifying new dimensions:

1. Go to the ResampleScalarVectorDWIVolume module
2. Input Volume: Select the image you wish to resample
3. Reference Volume: leave at "none"
4. Output Volume: Select Create New Volume for output volume, and rename to something meaningful, like your input + suffix "_resampled"
5. Interpolation Type: check the box most appropriate for your input data: for labelmaps check nn = nearest neighbor; for 3D MRI or other band-limited signals check ws = windowed sinc. For most others leave the linear default. The ws (hamming, cosine, welch) and bspline (cubic) interpolators tend to produce less blurring than linear, but may cause overshoot near high-contrast edges (e.g. negative intensity values for background pixels).
6. Output Parameters: here you specify the new voxel size / spacing and dimensions. Note that you need to set both. If only the voxel size is specified, the image is resampled but retains its original dimensions (i.e. empty/zero space). If only the dimensions are specified the image will be resampled starting at the origin and cropped but not resized.
  • New voxel size: calculate the new voxel size and enter it in the Spacing field, as described in 'Resampling in place' above (see step 5).
  • New image dimensions: enter the new dimensions under Size. To prevent clipping, the output field of view (FOV = voxel size × image dimensions) should match the input's.
7. Leave the rest at default and click Apply.
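The FOV relation in the note above can be checked numerically: to avoid clipping, choose Size so that spacing × dimensions matches the input FOV (example numbers are illustrative):

```python
# Input geometry (mm spacing, voxel dimensions):
input_spacing = (1.2, 1.2, 3.0)
input_dims = (512, 512, 86)

# Field of view in mm: FOV = voxel size * image dimensions.
fov = tuple(s * d for s, d in zip(input_spacing, input_dims))

# Given a new spacing, the dimensions that preserve the FOV:
new_spacing = (2.4, 2.4, 3.0)
new_dims = tuple(round(f / s) for f, s in zip(fov, new_spacing))
# -> (256, 256, 86): same FOV, half the in-plane resolution.
```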

How do I register images that are very far apart / do not overlap

  • Problem: when you place one image in the background and another in the foreground, the one in the foreground will not be (entirely) visible when switching back & forth
  • Explanation: Slicer chooses the field of view (FOV) for the display based on the image selected for the background. The FOV will therefore be centered around that image's origin. If two images have origins that differ significantly, they cannot be viewed well simultaneously.
  • Fix: recenter one or both images as follows:
1. Go to the Volumes module,
2. Select the image to recenter from the Active Volume menu
3. Select the Info tab.
4. Click the Center Volume button. You will notice how the image origin numbers displayed above the button change. If you have the image selected as foreground or background, you may see it move to a new location.
5. Repeat steps 2-4 for the other image volumes
6. From the slice view menu, select Fit to Window
7. Images should now be roughly in the same space. Note that this re-centering is considered a change to the image volume, and Slicer will mark the image for saving next time you select Save.
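As a rough sketch of what re-centering does, under our simplifying assumption of an axis-aligned volume: the origin is moved so the grid's center lands at (0, 0, 0) in physical space (the example spacing and dimensions are hypothetical):

```python
# Simplified model of the Center Volume button for an axis-aligned volume:
# choose the origin so the middle of the image grid sits at (0, 0, 0) RAS.
spacing = (1.0, 1.0, 1.2)   # mm, hypothetical example values
dims = (256, 256, 130)

origin = tuple(-(d - 1) * s / 2.0 for d, s in zip(dims, spacing))
# The first voxel is placed at `origin`; the center voxel falls near (0, 0, 0).
```

Two volumes centered this way share the same physical neighborhood, which is why they become visible together after step 6.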

How do I register a DWI image dataset to a structural reference scan? (Cookbook)

  • Problem: The DWI/DTI image is not in the same orientation as the reference image that I would like to use to locate particular anatomy; the DWI image is distorted and does not line up well with the structural images
  • Explanation: DWI images are often acquired as EPI sequences that contain significant distortions, particularly in the frontal areas. Also because the image is acquired before or after the structural scans, the subject may have moved in between and the position is no longer the same.
  • Fix: obtain a baseline image from the DWI sequence, register that with the structural image and then apply the obtained transform to the DTI tensor. The two chief issues with this procedure deal with the difference in image contrast between the DWI and the structural scan, and with the common anisotropy of DWI data.
  • The overall strategy and detailed instructions for registration & resampling can be found in our DWI registration cookbook.
  • You can find example cases in the DWI chapter of the Slicer Registration Case Library, which includes example datasets and step-by-step instructions. Find the example closest to your scenario and perform the registration steps recommended there.

How do I initialize/align images with very different orientations and no overlap?

I would like to register two datasets, but the centers of the two images are so different that they don't overlap at all. Is there a way to pre-register them automatically or manually to create an initial starting transformation?

  • Automatic Initialization: Most registration tools have initializers that should take care of the initial alignment in the scenario you described. If your images do not overlap at all but the extents are similar, the centered initializer should work. If the structures shown in the images are similar, the moments initializer might help. Both of these options are available in BRAINSFit (GeometryCenterAlign and MomentsAlign, respectively). You can run BRAINSFit with just the initializer to see what kind of transformation it produces.
  • Manual Initialization: Use the Transforms module to create a manual initialization. Details are in the manual registration tutorial and also in the FAQ below.

Can I manually adjust or correct a registration?

  • Problem: obtained registration is insufficient
  • Explanation: The automated registration algorithms (except for fiducial and surface registration) in Slicer operate on image intensity and try to move images so that similar image content is aligned. This is influenced by many factors such as image contrast, resolution, voxel anisotropy, artifacts such as motion or intensity inhomogeneity, pathology etc, the initial misalignment and the parameters selected for the registration.
  • Fix: you can adjust/correct an obtained registration manually, within limits, as outlined below. Your first try, however, should be to obtain a better automated registration by changing some of the input and/or parameters and re-running. There is also a dedicated Manual Registration Tutorial.
    • Manual Adjustment: If the transform is linear, i.e. a rigid or affine transform, you can access the rigid components (translation and rotation) of that transform via the Transforms module.
        1. In the Data module, drag the image volume inside the registration transform node
        2. Select the views so that the volume is displayed in the slice views
        3. Go to the Transforms module and adjust the translation and rotation sliders to adjust the current position. To get a finer degree of control, enter smaller numbers for the translation limits and enter rotation angles numerically in increments of a few degrees at a time

What's the difference between the various registration methods listed in Slicer?

Most of the registration modules use the same underlying ITK registration algorithms for cost function and optimization, but differ in parameter selection, initialization, and the type of images toward which they have been tailored. To help choose the best one based on the method or available options, an overview of all registration methods, including a comparison matrix, can be found here.
To help choose based on a particular image type and content, you will find many example cases, including step-by-step instructions and discussions of the particular registration challenges, in the Slicer Registration Library. The library is organized in several different ways; e.g. consult this sortable table with all cases and the methods used.
There is also a brief overview within Slicer that helps distinguish: Select the Registration Welcome option at the top of the Modules/Registration menu.

What's the purpose of masking / VOI in registration?

  • Problem: What does the masking option in some of the registration modules accomplish?
  • Explanation: The masking option is a very effective tool to focus the registration onto the image content that is most important. It is often the case that the alignment of the two images is more important in some areas than others. Masking provides the opportunity to specify those regions and make the algorithm ignore the image content outside the mask. This does not mean that the rest is not registered, but rather that it moves along passively, i.e. areas outside the mask do not actively contribute to the cost function that determines the quality of the match. Note the mask defines the areas to include, i.e. to exclude a particular region, build a mask that contains the entire object/image except that region.
  • Note: masking within the registration is different from feeding a masked/stripped image as input, where areas of no interest have been erased. Such masking can still produce valuable results and is a viable option if the module in question does not provide a direct masking option. But direct masking by erasing portions of the image content can produce sharp edges that registration methods can lock onto. If the edge becomes dominant then the resulting registration will be only as good as the accuracy of the masking. That problem does not occur when using masking option within the module.
  • The following modules currently (v.3.6.1) provide masking:
    • BRAINSFit
      • found under: Masking Options tab
      • requires mask for both fixed and moving image
      • has option to automatically generate masks
      • some initialization modes (CenterOfHeadAlign) will not work in conjunction with masks
    • Expert Automated Registration
      • found under: Advanced Registration Parameters tab
      • requires mask for fixed image only
    • Robust Multiresolution Affine
      • found under: Optional tab
      • requires mask for fixed image only
      • provides option to define a mask as a box ROI
    • BRAINSDemonWarp
      • found under: Mask Options tab
      • requires mask for both fixed and moving image
      • has option to automatically generate masks

Registration failed with an error. What should I try next?

  • Problem: automated registration fails, status message says "completed with error" or similar.
  • Explanation: Registration methods are mostly implemented as commandline modules, where the input to the algorithm is provided as temporary files and the algorithm then seeks a solution independently of the Slicer GUI. Several reasons can lead to failure, most commonly wrong or inconsistent input, or lack of convergence if images are too far apart initially.
  • Fix: open the Error Log window (Window menu) and click on the most recent (top) entries related to the registration. Usually you will see a commandline entry that shows which arguments were given to the algorithm, and a standard output entry or similar that lists what the algorithm returned. More detailed error info can be found either in this entry or in the "ERROR: ..." line at the top of the list. Click on the corresponding line and look for an explanation in the provided text. If there was a problem with the input arguments, it would be reported here.
    • for example, running Robust Multiresolution Affine without input will report: "No input data assigned" etc.
  • If the Error Log does not provide useful clues, try varying some of the parameters. Note that if the algorithm aborts/fails right away and returns immediately with an error, most likely some input is wrong, inconsistent, or missing.
  • If variation does not succeed, try an alternative registration module. Some are more tailored toward particular image modalities and DOF than others.
  • Check the initial misalignment: if images are too far apart and there is no overlap, registration may fail. Consider initialization with a prior manual alignment, centering the images, or using one of the initialization methods provided by the modules.
  • Write to the Slicer user group (slicer-users@bwh.harvard.edu) and inform them of the error. We're keen on learning so we can improve the program. It is helpful to copy and paste the error messages found in the Error Log.

Registration result is wrong or worse than before?

  • Problem: automated registration provides an alignment that is insufficient, possibly worse than the initial position
  • Explanation: The automated registration algorithms (except for fiducial and surface registration) in Slicer operate on image intensity and try to move images so that similar image content is aligned. This is influenced by many factors such as image contrast, resolution, voxel anisotropy, artifacts such as motion or intensity inhomogeneity, pathology etc, the initial misalignment and the parameters selected for the registration.
  • Fix: Your first try should be to re-run the automated registration while changing the initial position, the initialization method, the parameters, or the method/module used. Most helpful in determining a good secondary approach is knowing why the first one was likely to fail. Below is a list of possible reasons and remedies:
    • too much initial misalignment: particularly rotation can be difficult for automated registration to capture. If the two images have strong rotational misalignment, consider A) one of the initialization options (e.g. BRAINSfit or Expert Automated), B) a manual initial alignment using the Transforms module and then use this as initialization input
    • insufficient detail: consider increasing the number of sample points used for the registration, depending on time/speed constraints, increase to 5-10% of image size.
    • insufficient contrast: consider adjusting the Histogram Bins (where avail.) to tune the algorithm to weigh small intensity variations more or less heavily
    • strong anisotropy: if one or both of the images have strong voxel anisotropy of ratios 5 or more, rotational alignment may become increasingly difficult for an automated method. Consider increasing the sample points and reducing the Histogram Bins. In extreme cases you may need to switch to a manual or fiducial-based approach
    • distracting image content: pathology, strong edges, clipped FOV with image content at the border of the image can easily dominate the cost function driving the registration algorithm. Masking is a powerful remedy for this problem: create a mask (binary labelmap/segmentation) that excludes the distracting parts and includes only those areas of the image where matching content exists. This requires one of the modules that supports masking input, such as: BRAINSFit, ExpertAutomated, Multi Resolution. Next best thing to use with modules that do not support masking is to mask the image manually and create a temporary masked image where the excluded content is set to 0 intensity; the Mask Volume module performs this task
    • too many/too few DOF: the degrees of freedom (DOF) determine how much motion is allowed for the image to be registered. Too few DOF results in suboptimal alignment; too many DOF can result in overfitting or the algorithm getting stuck in local extrema, or a bad fit with some local detail matched but the rest misaligned. Consider a stepwise approach where the DOF are gradually increased. BRAINSfit and Expert Automated provide such pipelines; or you can nest the transforms manually. A multi-resolution approach can also greatly benefit difficult registration challenges: this scheme runs multiple registrations at increasing amounts of image detail. The Robust Multiresolution module performs this task.
    • inappropriate algorithm: there are many different registration methods available in Slicer. Have a look at the Registration Method Overview and consider one of the alternatives. Also review the sortable table in the Registration Case Library to see which methods were successfully used on cases matching your own.
    • you can adjust/correct an obtained registration manually, within limits, as outlined in this FAQ.

How many sample points should I choose for my registration?

  • Problem: unsure what the Sample Points setting means or how I could use it to improve my registration.
  • Explanation:All registration modules contain a parameter field that controls how much of the image is sampled when performing an automated registration. The unit is often an absolute count, but in some cases also a percentage. Default settings also vary among modules. The number of samples is an important setting that determines both registration speed and quality. If the sample number is too small, registration may fail because it is driven by image content that insufficiently represents the image. If sample number is too large, registration can slow down significantly.
  • Fix: If registration speed is not a major issue, it is better to err on the side of more samples. Most default settings are chosen to yield relatively fast registrations and for most of today's images represent only a small percentage. Below are the defaults for the different registration modules, for version 3.6.1:
 Defaults Used in Slicer Modules v.3.6.1
 Fast Rigid / Linear: 10,000
 Fast Affine: 10,000
 Fast Rigid: 10,000
 Fast Non-Rigid BSpline: 50,000
 Expert Automated (Rigid): 1%
 Expert Automated (Affine): 2%
 Expert Automated (BSpline): 10%
 BRAINSfit: 100,000
 BRAINSdemon Warp: N/A 
 Robust Multiresolution: N/A
 Surface Registration: 200

The table below relates total sample points and percentages to the most common image sizes. Also consider that sample points are chosen randomly, so some points may fall outside the actual object to be registered. That is not a bad thing per se (some background points are important), but not if they are too far from the edges of the object. So consider both the total image size and the percentage of the image field of view that your object of interest occupies. E.g. if your object fills only half the image, double the sample points to get the desired number of points within the object.

Image Size | Total Voxels | 10,000 | 50,000 | 100,000 | 200,000 | 1% | 2% | 5% | 10% | 20%
128 x 128 x 64 | 1,048,576 | 1.0% | 4.8% | 9.5% | 19.1% | ~10,000 | ~20,000 | ~52,500 | ~100,000 | ~200,000
256 x 256 x 30 | 1,966,080 | 0.5% | 2.5% | 5.1% | 10.2% | ~20,000 | ~40,000 | ~97,500 | ~200,000 | ~400,000
256 x 256 x 120 | 7,864,320 | 0.1% | 0.6% | 1.3% | 2.5% | ~77,500 | ~150,000 | ~400,000 | ~775,000 | ~1,500,000
192 x 192 x 192 | 7,077,888 | 0.1% | 0.7% | 1.4% | 2.8% | ~70,000 | ~150,000 | ~350,000 | ~700,000 | ~1,500,000
512 x 512 x 48 | 12,582,912 | 0.1% | 0.4% | 0.8% | 1.6% | ~125,000 | ~250,000 | ~625,000 | ~1,250,000 | ~2,500,000
256 x 256 x 256 | 16,777,216 | 0.1% | 0.3% | 0.6% | 1.2% | ~175,000 | ~325,000 | ~850,000 | ~1,750,000 | ~3,250,000
512 x 512 x 160 | 41,943,040 | <0.1% | 0.1% | 0.2% | 0.5% | ~425,000 | ~850,000 | ~2,000,000 | ~4,250,000 | ~8,500,000
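The table values follow from a simple ratio: percentage = 100 × samples / total voxels. A quick check against the first row:

```python
def sample_percentage(dims, samples):
    """Percentage of the image covered by a given absolute sample count."""
    total_voxels = dims[0] * dims[1] * dims[2]
    return 100.0 * samples / total_voxels

# 10,000 samples in a 128 x 128 x 64 image is about 1% of the voxels:
pct = sample_percentage((128, 128, 64), 10000)   # ~0.95, rounds to 1.0%

# If your object fills only half the FOV, double the samples to get the
# same effective coverage within the object:
samples_needed = 2 * 10000
```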

The image below shows sample point densities on one slice of a brain MRI with 256 x 256 x 130 voxels. For robust registration, we recommend at least 1% coverage for affine, and more (at least 5%) for nonrigid (BSpline) as DOF increase.

I want to register two images with different intensity/contrast.

  • The two most critical image features that determine automated registration accuracy and robustness are image contrast and resolution. Differences in image contrast are best addressed with the appropriate cost function. The cost function that has proven most reliable for registering images with different contrast (e.g. a T1 MRI to a T2, or an MRI to CT or PET) is mutual information. All intensity-based registration modules use mutual information as the default cost function. Only the Expert Automated Registration module lets you choose an alternative cost function.

No extra parameter adjustment is therefore needed to register images of different contrast. Depending on the extent of the differences, you may consider masking to exclude distracting image content, or adjusting the Histogram Bins setting to increase/decrease the level of intensity detail the algorithm is aware of.
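To build intuition for what mutual information measures and why the Histogram Bins setting matters, here is a minimal histogram-based MI estimate. This is an illustrative sketch, not Slicer's or ITK's actual implementation; fewer bins means the metric sees coarser intensity detail:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI between two images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()     # joint probability
    px = pxy.sum(axis=1)          # marginal of image a
    py = pxy.sum(axis=0)          # marginal of image b
    nz = pxy > 0                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
t1 = rng.random((64, 64))
t2 = 1.0 - t1                  # inverted contrast, same "anatomy"
noise = rng.random((64, 64))   # unrelated image

# MI stays high for the contrast-inverted pair but is low for unrelated
# images, which is why it handles multi-modal (e.g. T1 vs T2) registration.
print(mutual_information(t1, t2) > mutual_information(t1, noise))  # True
```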

How important is bias field correction / intensity inhomogeneity correction?

While registration may still succeed with mild cases of inhomogeneity, moderate to severe inhomogeneity can negatively affect automated registration quality. It is recommended to run an automated bias-field correction on both images before registration. See here for documentation on this module. Masking of non-essential peripheral structures can also help to reduce distracting image content. See here for more on masking.

Have the Slicer registration methods been validated?

The Slicer3.6 registration modules share the same underlying ITK registration engines. For validation of those basic algorithms refer to the ITK software guide or the Insight Journal. Slicer registration of images and surfaces has been applied successfully in many cases, some of which are documented here. For ongoing efforts on improving/validating Slicer registration performance please contact the Slicer User Mailing List or the Slicer Developer Mailing List.

I want to register many image pairs of the same type. Can I use Slicer registration in batch mode?
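Since the Slicer registration modules are command-line (CLI) plugins, one approach is to drive them from a shell loop via Slicer's --launch mechanism (the same pattern used for BSplineToDeformationField elsewhere on this page). The sketch below only prints the commands it would run; the installation path and the BRAINSFit flag names are assumptions to verify against --help on your system:

```shell
#!/bin/sh
# Batch-register several moving images to one fixed image.
# Paths and flags are examples -- check with:
#   $SLICER --launch $PLUGINS/BRAINSFit --help
SLICER=/Applications/Slicer3.6.3/Slicer3
PLUGINS=/Applications/Slicer3.6.3/lib/Slicer3/Plugins
FIXED=fixed.nrrd
CMDS=""
for MOVING in case01.nrrd case02.nrrd case03.nrrd; do
    # derive an output transform name per case
    OUT="${MOVING%.nrrd}_reg.tfm"
    CMD="$SLICER --launch $PLUGINS/BRAINSFit \
--fixedVolume $FIXED --movingVolume $MOVING \
--transformType Rigid --outputTransform $OUT"
    echo "$CMD"        # replace with: eval "$CMD"  to actually run
    CMDS="$CMDS $CMD"
done
```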

How can I save the parameter settings I have selected for later use or sharing?

The Parameter Set menu at the top tab of each module serves this purpose of saving and recalling instances of parameter settings. To save the current settings, select Rename from the menu, give it a descriptive name, and then select Create New Commandline Module from the menu. New settings are added to the menu as created. These settings are saved together with the Slicer Scene.mrml file. To save only the presets without any associated data, first save your entire scene, then delete all nodes in the MRML tree except the top Scene node, and then save this under a new name like "Slicer_Presets.mrml". To load the presets, use Import from the File menu, which will add the MRML entries to the existing scene. Do not load presets via "Load Scene", since that will delete all currently loaded data. A detailed user guide on loading presets can be found here.

Registration is too slow. How can I speed up my registration?

The key parameters that influence registration speed are the number of sample points, the degrees of freedom (DOF) of the transform, the type of similarity metric, the initial misalignment, and image contrast/content differences. If registration quality is acceptable, try reducing the sample points first. Guidelines on selecting sample points are given here. The degrees of freedom are usually dictated by the overall task and not subject to variation, but depending on initial misalignment, both speed and robustness can improve with an iterative approach that gradually increases DOF rather than starting with a high-DOF setting. The BRAINSfit module and Expert Automated Registration module both allow prescriptions of iterative DOF.
If using a cost/criterion function other than mutual information, note that the corresponding ITK implementation may not have been parallelized and hence may not be taking advantage of multi-threading/multi-CPU cores on your computer. This can slow down performance significantly.
Also see the Slicer Registration Portal Page for help on selecting registration methods based on criteria of speed, precision etc.

Registration results are inconsistent and don't work on some image pairs. Are there ways to make registration more robust?

The key parameters that influence registration robustness are the number of sample points, the initial degrees of freedom of the transform, the type of similarity metric, the initial misalignment, and image contrast/content differences. Counterintuitively, initialization methods that seek a first alignment before beginning the optimization can sometimes make things worse. If the initial position is already sufficiently close (i.e. more than 70% overlap and less than 20% rotational misalignment), consider turning off initialization if available (e.g. in BRAINSfit or Expert Automated). Try increasing the sample points. Guidelines on selecting sample points are given here. The degrees of freedom are usually dictated by the overall task and not subject to variation, but depending on initial misalignment, robustness can greatly improve with an iterative approach that gradually increases DOF rather than starting with a high-DOF setting. The BRAINSfit module and Expert Automated Registration module both allow prescriptions of iterative DOF. Also the Multiresolution module steps through multiple cycles of different image coarseness, which is aimed primarily at robustness.
If using a cost/criterion function other than mutual information, note that MI tends to be the most forgiving/robust toward differences in image contrast.
Also see the Slicer Registration Portal Page for help on selecting registration methods based on criteria of robustness, speed, precision etc.

One of my images has a clipped field of view. Can I still use automated registration?

Probably yes. The best remedy is to apply a mask that also clips the missing portion in the other image, or better still two masks that include the full region of interest that is to be registered and is present in both images. Initialization is also critical, more so than with full-FOV images. If the images have a large amount of initial misalignment, try centering both first (Volumes/Info/Center Image) or perform a cursory manual alignment and use that as initialization. Examples of registrations with clipped FOV can be found in the Registration Case Library.

I ran a registration but cannot see the result. How do I visualize the result transform?

There are 2 ways to see the result of a registration: 1) by creating a new resampled volume that represents the moving image in the new orientation, or 2) by direct (dynamic) rendering of the original image in a new space when placed inside a transform. The latter is not available for non-rigid transforms, hence a registration that includes nonlinear components (BSpline, Warp) must be visualized by first resampling the entire volume with the new transform. If you ran a registration yet see no effect, the reason could be one of the following:

  • you did not request an output. In the module parameters section, you must specify either an output transform or an output image/volume. Select one or both (for nonrigid registration).
  • you requested a result transform but the moving volume is not placed inside the transform in the MRML tree. Not all modules automatically place the moving image inside the result transform node: the Expert Automated module, for example, does not, so in that case do this manually in the Data module by dragging the moving volume node inside the result transform.
  • registration completed with an error. Check for status messages at the top of the module and look at the Error Log in the window menu to see if there were any errors reported.
  • you performed a non-rigid (BSpline) transform but did not yet request an output volume. To do so after registration is complete, use the Resample DWI/Vector module.

What's the difference between Rigid and Affine registration?

Rigid registration is a transform with 6 degrees of freedom (DOF): 3 translations (one along each axis) and 3 rotations (one around each axis). An affine registration includes 12 DOF, i.e. 6 additional DOF: 3 for scaling (one along each axis) and 3 for shearing. So strictly speaking an affine transform is a non-rigid transform, even though linear, because the volume can distort. However, in practice "non-rigid transform" usually refers to nonlinear transforms with more than 12 DOF, e.g. BSpline or polynomial models.

What's the difference between Affine and BSpline registration?

An affine registration includes 12 DOF: 3 for translations (one along each axis), 3 for rotations (one around each axis), 3 for scaling (along each axis) and 3 for shearing. So strictly speaking an affine transform is a non-rigid transform, even though linear, because the volume can distort. However, in practice "non-rigid transform" usually refers to nonlinear transforms with more than 12 DOF, e.g. BSpline or polynomial models. A BSpline registration employs a nonlinear nonrigid model that allows individual regions of the image to distort independently, while enforcing smooth transitions between them. A BSpline transform is not described by a 4x4 matrix like the affine model, but by a list of displacement vectors for each point along a prescribed grid. E.g. a 3x3x3 BSpline grid has 27 points that can each move independently in 3D, yielding 81 DOF (3 per point); a 5x5x5 grid analogously has 375 DOF, etc.
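Counting DOF the way ITK parameterizes a BSpline transform (one 3-component displacement vector per grid point) can be sketched as:

```python
def bspline_dof(nx, ny, nz):
    """Each grid point carries a 3-component displacement vector."""
    return 3 * nx * ny * nz

print(bspline_dof(3, 3, 3))  # 81  (27 grid points x 3 components)
print(bspline_dof(5, 5, 5))  # 375
```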

The nonrigid (BSpline) registration transform does not seem to be nonrigid or does not show up correctly.

See the FAQ on viewing registration results here. BSpline transforms are not available for immediate rendering by placing volumes or models inside the transforms. Only linear transforms can be viewed that way. A BSpline transform must be visualized by resampling the entire volume with the new transform. If you did not request an output volume when running the registration, you can do so after the fact using the Resample DWI/Vector module.

Can I combine multiple registrations?

Yes, you can nest multiple (affine) registrations inside each other. You can generate combined ones via the right-click context menu in the Data module by selecting Harden Transform. Note that the original transform or volume is replaced when selecting Harden Transform, so it is recommended to rename the node afterwards to document the fact.
Currently (v.3.6.1) BSpline transforms cannot be combined with other transforms, so if you have combinations of Affine and BSpline, or multiple BSpline transforms, you need to resample multiple times to apply them all. We recommend supersampling the volume beforehand to counteract interpolation blurring.

Can I combine image and surface registration?

Not simultaneously. You can run surface and image registrations separately and then combine the transforms, but there is currently no support to combine both surface and intensity data into a single cost function.

What's the difference between BRAINSfit and BRAINSDemonWarp?

BRAINSfit performs affine and BSpline registration that commonly will have less than a few hundred DOF, whereas BRAINSDemonWarp performs an optic-flow high-DOF warping scheme that has many thousands of DOF and is significantly less constrained.

Is the BRAINSfit registration for brain images only?

No, it is applicable to and has been used successfully on non-brain image data. See the Registration Case Library for examples. The BRAINS name is derived from the ICTS program at the University of Iowa, where it was developed; details can be found here.

What's the difference between Fast Rigid and Linear Registration?

None, both run the same underlying algorithm. The Linear Registration module is an older version that was kept for compatibility and reference reasons.

Which registration methods offer non-rigid transforms?

If one counts Affine as non-rigid, then all modules except Fast Rigid support non-rigid transforms. In the more common sense of more than 12 DOF, non-rigid transforms are offered by the Fast BSpline, BRAINSFit, Expert Automated, and BRAINSDemonWarp modules, as well as by extensions such as Plastimatch or HAMMER.
For a detailed review on specific registration features consult the Slicer Registration Portal Page.

Which registration methods offer masking?

Masking is supported directly by the Expert Automated module, BRAINSfit and Robust Multiresolution module. Note that the use of masks differs among these:

  • Expert Automated module: requires binary mask for fixed image only.
  • Robust Multiresolution module: also allows mask in the form of simple ROI block; requires mask for fixed image only.
  • BRAINSfit: requires mask for both the fixed and moving image.

For a detailed review on specific registration features consult the Slicer Registration Portal Page.

Is there a function to convert a box ROI into a volume labelmap?

Yes. Most registration functions that offer masking require a binary labelmap as mask input. This goes for the BRAINSfit and Expert Automated modules. The exception is the Robust Multiresolution module, which also accepts a simple ROI box as mask. The Crop Volume module will generate a labelmap from a defined box. You can create a new ROI box or select an existing one. You must select an image volume to crop for the operation, even if you're only interested in the ROI labelmap. You need not select a dedicated output for the labelmap; it is generated automatically when the cropped volume is produced, and will be called Subvolume_ROI_Label in the MRML tree. After creating the box ROI labelmap, simply delete the cropped volume and other output such as the "...resample-scale-1.0" volume.
Likely you will need the volume to have the same dimensions and pixel spacing as the reference image. The box volume produced above has the correct dimensions, but is only 1 voxel in size. Hence a second step is required: resample the Subvolume_ROI_Label to the same resolution using the Resample ScalarVectorDWI module, selecting the appropriate reference and Nearest Neighbor as the interpolation method. Finally, go to the Volumes module and check the Labelmap box in the Info tab to turn the volume into a labelmap.

BRAINSDemonWarp won't save the result transform

BRAINSDemonWarp is a non-rigid registration method that is memory and computation intensive. Depending on image size & system, you may reach memory limits. Open the Error Log dialog (Window menu) and select the "BRAINSDemon Warp output" line to see the specific error that is being reported there. If you get an error other than the ones discussed here please send it to the Slicer User Mailing List.
A memory error would look something like this:

 vtkCommandLineModuleLogic (0x82a0d40): BRAINSDemonWarp standard error: 
 BRAINSDemonWarp(53365,0xa0230540) malloc: *** mmap(size=503316480) failed (error code=12)
 *** error: can't allocate region

In that case try reducing the number of Pyramid levels. Note that the output transform is a deformation field, which is a large 4-dimensional volume, and support for handling and visualizing deformation fields within Slicer (v.3.6.1) is limited. For example:

 ERROR: In /Users/hayes/Slicer-3-6/Slicer3/Libs/MRML/vtkMRMLTransformStorageNode.cxx, line 594
 vtkMRMLTransformStorageNode (0x3d280b20): Grid transform with a non-identity orientation matrix is not yet implemented

If the error log reports that BRAINSDemonWarp itself completed ok, but sending the transform file back to Slicer failed, that is related to the mentioned limitation in handling deformation fields in the GUI. In that case use the commandline option to save the deformation field to a file for later use.

Physical Space vs. Image Space: how do I align two registered images to the same image grid?

Slicer displays all data in a physical coordinate system. Hence an image can only be displayed correctly if it contains sufficient header information to relate the image voxel grid to physical space. This includes voxel size, axis orientation and scan order. It is therefore possible for two images to be aligned when viewed in Slicer, even though their underlying image grids are oriented very differently. To match the two images in image as well as physical space, the above-mentioned axis directions, voxel size and image grid orientation must match. The procedure will depend on the image data, but the main tools at your disposal are the ResampleScalarVectorDWIVolume and Orient Images modules.

Is there a way to perform an Eddy current correction on DWI in Slicer?

There is: download the GTRACT extension (from the menu View->Extension Manager). Once you have it, you will see a new module category under Diffusion called "GTRACT". Within this category there is a module called "Coregister B-values". This module takes a DWI image and outputs a DWI image in which every DWI is co-registered to one of the B0 images (by default the first one); this can be regarded as motion correction. Within this module there is a checkbox in the "Registration Parameters" section: "Eddy Current Correction". This will tune some of the registration parameters such that some Eddy current artifacts are corrected.

The registration transform file saved by Slicer does not seem to match what is shown

When executing the following procedure:

  1. Create a transform.
  2. Adjust it by adjusting the 6 slider bars in the Transforms module.
  3. Save the transform as a .tfm file.
  4. Inspect the contents of the .tfm file in a text editor, and compare them to what is shown in the 4x4 matrix in the Transforms module.
  5. re-load the .tfm back into slicer and confirm you have the same data as you saved from slicer.

you will notice that, even though the reloaded transform does match, the contents of the .tfm file and what is displayed in the Transforms module do not match. The explanation of this is kind of buried in the details of transforms. The issue relates to the difference between slicer which uses a "computer graphics" view of the world and itk which uses an "image processing" view of the world. By this we mean that in slicer you have a matrix hierarchy and you think in terms of moving an object from one spot to another - so a transform that has a positive "superior" value wrapped around a volume moves the volume up in patient space.
But ITK thinks of transformations in terms of mapping backwards from the display space back to the original image. Imagine if you are stepping sequentially through the output pixels then ITK wants to know the transform that takes you back to the input pixels that it needs to use to calculate the output. This modeling vs. resampling issue is in addition to the LPS/RAS issue, which is the 2nd (invisible) difference between the two transforms.
In summary:

  1. The transform represented in the widget is in RAS.
  2. The transform represented in the tfm file is in LPS.
  3. The transform represented in the file is the inverse of the transform in the widget (plus it has the LPS/RAS conversion applied).
  4. The order of the parameters in the tfm are the elements of the upper 3x3 of the transform displayed in the widget followed by the elements in the last column of the widget.

As an example:

  1. take the transform from the widget: --> c = [0.996918 -0.078459 -0.000000 6.899965; 0.068016 0.864225 -0.498488 -95.999726; 0.039111 0.496951 0.866896 266.299559; 0.0 0.000000 -0.000000 1.000000]
  2. Take the inverse --> inv(c) =
   0.9969    0.0680    0.0391  -10.7644
  -0.0785    0.8642    0.4970  -48.8313
  -0.0000   -0.4985    0.8669 -278.7091
        0         0         0    1.0000 
  3. LPS to RAS conversion takes you all the way to what is in the file, i.e. pre- and post-multiply inv(c) by the respective conversion matrices.
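The three-step recipe above can be checked numerically. A sketch with NumPy (not actual Slicer/ITK code), using F = diag(-1, -1, 1, 1) as the RAS/LPS axis flip:

```python
import numpy as np

# Transform as displayed in the Transforms widget (RAS, "modeling" view).
c = np.array([[0.996918, -0.078459, -0.000000,   6.899965],
              [0.068016,  0.864225, -0.498488, -95.999726],
              [0.039111,  0.496951,  0.866896, 266.299559],
              [0.0,       0.000000, -0.000000,   1.000000]])

F = np.diag([-1.0, -1.0, 1.0, 1.0])   # RAS <-> LPS flip of the x and y axes

inv_c = np.linalg.inv(c)              # ITK's backward ("resampling") view
tfm = F @ inv_c @ F                   # what ends up in the .tfm file (LPS)

# Round-tripping the file contents recovers the widget matrix:
back = np.linalg.inv(F @ tfm @ F)
print(np.allclose(back, c))           # True
```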

How can I see the parameters of the function that describe a BSpline registration/deformation?

To see the parameters of the transform, you have to write it to file and inspect it by other means. The BSpline transform is saved as an ITK .tfm file, which is a text file containing the displacement vectors of each grid point, plus any initial affine transform (if present). One nice way to visualize it is to create a grid image of the same dimensions as your target and then apply the transform to this grid image; you can then see the deformations in the gridlines. Example grid images can be downloaded here. Use the ResampleScalarVectorDWIVolume module for the resampling (https://www.slicer.org/wiki/Modules:ResampleScalarVectorDWIVolume-Documentation-3.6). A quick way directly in Slicer is to place the undeformed and deformed volumes into back- and foreground and fade back and forth with the fading slider. Another alternative is to convert the transform into a 4D deformation field directly and visualize it in Slicer using RGB color. See the FAQ below on how to convert.

How can I convert a BSpline transform into a deformation field?

There is commandline functionality in Slicer to convert a BSpline ITK transform file (.tfm) into a deformation field volume. To execute, type (exchange /Applications/Slicer3.6.3 with the path of your Slicer installation):

 /Applications/Slicer3.6.3/Slicer3 --launch /Applications/Slicer3.6.3/lib/Slicer3/Plugins/BSplineToDeformationField --tfm InputBSpline.tfm 
  --refImage ReferenceImage.nrrd   --defImage Output_DeformationField.nrrd  

for more details try:

 /Applications/Slicer3.6.3/Slicer3 --launch /Applications/Slicer3.6.3/lib/Slicer3/Plugins/BSplineToDeformationField --help

My reoriented image returns to original position when saved; Problem with the Harden Transform function

right click on the image and select "Harden Transform" from the popup menu to reorient an image

You can apply an affine transform to an image by creating a transform, placing the volume inside that transform in the Data module, and then selecting Harden Transform via the context-menu (right click on the image volume). This will move the image back out to the main level and "apply" the transform. It will, however, not resample the image data, but rather place the information about the new orientation into the image header. When the image is saved, this information is saved also as part of the file header, as long as orientation data is supported by the file format. If the saved volume is now loaded by another software that does not consider this header orientation (e.g. ImageJ) or does not visualize the image in physical space, then the image will appear in its old position.
You can avoid this problem by actually resampling the image data. To do this, go to the Filtering/ResampleScalarVectorDWIVolume module, and select your image and transform as input, create a new volume as output and click Apply. This new volume will now be in the new orientation that will be retained if saved and reloaded elsewhere.

What is the Meaning of 'Fixed Parameters' in the transform file (.tfm) of a BSpline registration ?

A typical BSpline transform file will contain 2 transforms: an affine portion (commonly saved as "Transform 1" at the end of the file), and a nonrigid BSpline portion (commonly saved as "Transform 0"). The bulk of the BSpline part are the 3D displacement vectors of the BSpline grid nodes in physical space, stored as three consecutive blocks: dx for all grid nodes, then dy, then dz. After this field is a "Fixed Parameters" section that may look like this:

 FixedParameters: 8 8 8 -54.1406 -54.1406 -35 54.1406 54.1406 35 1 0 0 0 1 0 0 0 1

The first 3 numbers are the actual grid size (number of knots in each dimension), which is always larger than your requested grid because the grid is extended beyond the image margin to prevent clipping. The next 3 numbers are the origin of the grid, the following 3 the grid spacing, and the final 9 the direction cosines of the grid. More details on the format are in the ITK documentation (ITKSoftwareGuide.pdf).
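A quick way to sanity-check such a line is to split it by that layout (3 + 3 + 3 + 9 numbers). The parser below is our own helper for illustration, not part of Slicer or ITK:

```python
def parse_fixed_parameters(line):
    """Split an ITK BSpline FixedParameters line into its four fields."""
    vals = [float(v) for v in line.split(":", 1)[1].split()]
    return {
        "grid_size": [int(v) for v in vals[0:3]],   # knots per dimension
        "origin":    vals[3:6],                     # grid origin (mm)
        "spacing":   vals[6:9],                     # grid spacing (mm)
        "direction": vals[9:18],                    # 3x3 direction cosines
    }

line = ("FixedParameters: 8 8 8 -54.1406 -54.1406 -35 "
        "54.1406 54.1406 35 1 0 0 0 1 0 0 0 1")
p = parse_fixed_parameters(line)
print(p["grid_size"])  # [8, 8, 8]
print(p["spacing"])    # [54.1406, 54.1406, 35.0]
```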

I have some DICOM images that I want to reslice at an arbitrary angle

There are several ways to go about this. If you wish to register your image to another reference/target image, run one of the automated registration methods. If you wish to realign manually, the most efficient way is to use the Transforms module. Once you have the desired orientation, you need to apply the new orientation to the image. You can do this in 2 ways: 1) without or 2) with resampling the image data.

  1. Without resampling: In the Data module, select the image (inside the transforms node) and select "Harden Transforms" from the pulldown menu. This will write the new orientation in physical space into the image header. This will work only if other software you use and the image format you save it as support this form of orientation information in the image header.
  2. With resampling: Go to the Filtering/ResampleScalarVectorDWIVolume module and create a new image by resampling the original with the new transform. This will incur interpolation blurring but is guaranteed to transfer to all image formats and software.

For more details on manual transform, see this FAQ and the Manual RegistrationTutorial here.

Developer FAQ

Where can I find out about writing code for slicer3?

The Developers page has lots of information.