Documentation/4.2/Modules/TrainModel
Introduction and Acknowledgements

Extension: LesionSegmentation
Acknowledgments: This work is part of the National Alliance for Medical Image Computing (NA-MIC), funded by the National Institutes of Health through the NIH Roadmap for Medical Research, Grant U54 EB005149.
Author: Mark Scully
Contact: Mark Scully, mark@biomedicalmining.com


Module Description

This module is used to train new segmentation models for white matter lesion segmentation. To use this tool, your data must include a T1, a T2, a FLAIR, a brain mask, and an expert lesion segmentation for each subject. All data must be preprocessed, including intra-subject co-registration, AC-PC alignment, bias correction, consistent spacing between sequences, and brain mask creation.
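
Before training, it can help to verify that every subject has the complete set of inputs. The following is a minimal sketch of such a check in Python; the directory layout and file names are assumptions for illustration only, not something the module requires.

  # Minimal sketch: check that each subject directory contains the five
  # required inputs. The layout and file names are hypothetical.
  import os

  DATA_ROOT = "/data/lesion_study"          # hypothetical study directory
  REQUIRED = ["t1.nii.gz", "t2.nii.gz", "flair.nii.gz",
              "brain_mask.nii.gz", "lesion_mask.nii.gz"]

  def missing_inputs(subject_dir):
      """Return the required files absent from subject_dir."""
      return [f for f in REQUIRED
              if not os.path.isfile(os.path.join(subject_dir, f))]

  for subject in sorted(os.listdir(DATA_ROOT)):
      subject_dir = os.path.join(DATA_ROOT, subject)
      if os.path.isdir(subject_dir):
          absent = missing_inputs(subject_dir)
          if absent:
              print(subject, "is missing:", ", ".join(absent))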

Use Cases

  • Use Case: Segmenting white matter lesions in a disorder for which there is no current lesion segmentation model

If a lesion segmentation model already exists for the disorder of interest, you most likely do not need to train a new one. However, if no model exists, or the existing model(s) perform poorly on your data, you can use this module to train a new model and then use that model to automatically segment white matter lesions.

To train a new model you must first have preprocessed data for a number of subjects. The required data includes T1, T2, and FLAIR images, brain masks, and lesion masks. All data must be preprocessed, including intra-subject co-registration, AC-PC alignment, bias correction, consistent spacing between sequences (e.g., all 1 mm isotropic), and brain mask creation. Subjects do not need to be registered to each other.

A model can be created from a single subject, but more than six subjects are recommended and between 10 and 15 is best. The more subjects included, the slower both model creation and segmentation with that model will be; however, models built from more subjects are almost always more accurate. When selecting subjects for the model, pick the largest variety you can: if appropriate for your dataset, multiple ages, brain sizes, and lesion loads, a balance between genders, multiple stages of disease progression, and so on.

Then identify the single subject whose scans are the cleanest and have the highest contrast, and use that subject's scans as the first set of inputs. They will serve as the reference for intensity standardization, and you will need to use the same images as the reference scans when predicting as well.
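
As a rough illustration of this ordering, the sketch below builds the comma-separated input lists with the reference subject listed first, so the default Highest Quality Images Index of 1 points at it. All subject names, paths, and the helper function are hypothetical.

  # Minimal sketch: one comma-separated list per modality, with the same
  # subject order in each list and the reference subject first.
  subjects = ["sub03", "sub01", "sub02"]    # sub03: cleanest, highest contrast

  def file_list(modality):
      """Join hypothetical per-subject paths into one comma-separated list."""
      return ",".join("/data/%s/%s.nii.gz" % (s, modality) for s in subjects)

  t1_list = file_list("t1")
  t2_list = file_list("t2")
  flair_list = file_list("flair")
  mask_list = file_list("brain_mask")
  lesion_list = file_list("lesion_mask")

  # With the reference subject first, Highest Quality Images Index can stay
  # at its default of 1 (the parameter is 1-indexed).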

Tutorials

Coming soon!

Panels and their use

A list of the panels in the interface, their features, what they mean, and how to use them. An example invocation using these parameters follows the list.

  • Input Options: Input options for lesion training.
    • Input Lesion Volumes [--inputLesionVolumes]: Required. A comma-separated list of lesion images.
    • Input T1 Volumes [--inputT1Volumes]: Required. A comma-separated list of T1 images.
    • Input T2 Volumes [--inputT2Volumes]: Required. A comma-separated list of T2 images.
    • Input FLAIR Volumes [--inputFLAIRVolumes]: Required. A comma-separated list of FLAIR images.
    • Input Mask Volumes [--inputMaskVolumes]: Required. A comma-separated list of brain mask images.
    • Highest Quality Images Index [--inputIndexOfBestImages]: The index in the lists of the subject with the best T1, T2, and FLAIR images. These images are used as the standard to which the other images are intensity standardized. Defaults to the first subject in the lists. (This number is 1-indexed.) Default value: 1
  • Advanced Options: Advanced input parameters.
    • Percent NonLesion [--inputPercentNonLesion]: The percentage of non-lesion voxels to use for training. Higher numbers result in larger model files and potentially slower runtimes. Default value: 5
  • Output Options: Output options.
    • Output Model Filename [--outputModel]: Required. The filename to save the generated model to.
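
As referenced above, here is a sketch of one way to invoke the module from Slicer's Python console using these parameters. It assumes the module is exposed as slicer.modules.trainmodel and that the list parameters accept comma-separated path strings; verify both against your installation, and treat every path as a placeholder.

  # Minimal sketch: run TrainModel as a CLI module from Slicer's Python
  # console. slicer.cli.run is Slicer's standard CLI entry point; the
  # module handle slicer.modules.trainmodel and all paths are assumptions.
  import slicer

  params = {
      "inputLesionVolumes": "/data/sub03/lesion.nii.gz,/data/sub01/lesion.nii.gz",
      "inputT1Volumes": "/data/sub03/t1.nii.gz,/data/sub01/t1.nii.gz",
      "inputT2Volumes": "/data/sub03/t2.nii.gz,/data/sub01/t2.nii.gz",
      "inputFLAIRVolumes": "/data/sub03/flair.nii.gz,/data/sub01/flair.nii.gz",
      "inputMaskVolumes": "/data/sub03/brain_mask.nii.gz,/data/sub01/brain_mask.nii.gz",
      "inputIndexOfBestImages": 1,   # reference subject listed first (1-indexed)
      "inputPercentNonLesion": 5,    # default non-lesion sampling percentage
      "outputModel": "/data/models/wm_lesion_model.mdl",
  }
  cliNode = slicer.cli.run(slicer.modules.trainmodel, None, params,
                           wait_for_completion=True)
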
User Interface


Similar Modules

References

  • Scully M, Anderson B, Lane T, Gasparovic C, Magnotta V, Sibbitt W, Roldan C, Kikinis R, Bockholt HJ (2010). An automated method for segmenting white matter lesions through multi-level morphometric feature classification with application to lupus. Front. Hum. Neurosci. doi:10.3389/fnhum.2010.00027. Available at: http://frontiersin.org/neuroscience/humanneuroscience/paper/10.3389/fnhum.2010.00027/


Information for Developers