Documentation/4.0/Modules/OpenIGTLinkIF


Revision as of 16:28, 13 January 2012



Introduction and Acknowledgements

This work is supported by NA-MIC, NCIGT, and the Slicer Community. It is partially supported by NIH grants 1R01CA111288-01A1 "Enabling Technologies for MRI-Guided Prostate Interventions" (PI: Clare Tempany) and P01-CA67165 "Image Guided Therapy" (PI: Ferenc Jolesz), and by the AIST Intelligent Surgical Instrument Project (PI: Makoto Hashizume; Site PI: Nobuhiko Hata).
Authors: Junichi Tokuda, Jean-Christophe Fillion-Robin, Nobuhiko Hata
Contact: Junichi Tokuda, tokuda@bwh.harvard.edu



Module Description

The OpenIGTLink Interface (OpenIGTLinkIF) module provides network communication between 3D Slicer and external software or hardware using the OpenIGTLink protocol. The module provides the following features:

  • Data import: The module can import position, linear transform, and image data from OpenIGTLink-compliant software into the MRML scene (see the device-side sketch after the figure caption below).
  • Data export: The module can export linear transform and image data from the MRML scene to external software.
  • Multi-connection: The module can manage multiple OpenIGTLink connections at the same time.
  • Locator visualization: The user can choose one of the linear transforms in the MRML scene to visualize its position and orientation in 3D space.
  • Slice driving: The module can control the volume re-slicing plane based on a linear transform in the MRML scene.
The figure shows an example schematic diagram in which multiple devices communicate with 3D Slicer through the OpenIGTLink Interface. Each connector is assigned to one external device for a TCP/IP connection. The connectors serve as interfaces between the external devices and the MRML scene, converting OpenIGTLink messages into MRML nodes and vice versa.
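
To make the data-import path concrete, the following is a minimal sketch of the device side of a connection: an external program uses the OpenIGTLink C++ library to push a single TRANSFORM message to a connector that has been configured as a server in the OpenIGTLinkIF panel. The host address 127.0.0.1, the port 18944, and the device name "Tracker" are placeholder settings, and the call pattern follows the OpenIGTLink library examples rather than code taken from this module.

  // Minimal OpenIGTLink client: send one TRANSFORM message to a Slicer
  // connector that is listening as a server (host/port are placeholders).
  #include "igtlClientSocket.h"
  #include "igtlMath.h"
  #include "igtlTransformMessage.h"

  int main()
  {
    // Connect to the OpenIGTLinkIF connector running inside 3D Slicer.
    igtl::ClientSocket::Pointer socket = igtl::ClientSocket::New();
    if (socket->ConnectToServer("127.0.0.1", 18944) != 0)
      {
      return 1; // connection failed
      }

    // Build a 4x4 rigid transform and label it with a device name; the
    // connector imports it into the MRML scene as a linear transform node.
    igtl::Matrix4x4 matrix;
    igtl::IdentityMatrix(matrix);
    matrix[0][3] = 10.0f; // example translation along the first axis (mm)

    igtl::TransformMessage::Pointer msg = igtl::TransformMessage::New();
    msg->SetDeviceName("Tracker");
    msg->SetMatrix(matrix);
    msg->Pack(); // serialize header and body

    socket->Send(msg->GetPackPointer(), msg->GetPackSize());
    socket->CloseSocket();
    return 0;
  }

Once the connector receives such a message, the data should appear in the MRML scene (typically as a linear transform node named after the device), where it can be used for locator visualization or slice driving as described above.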


Use Cases

The module is most frequently used in the following scenarios:

  • MRI-compatible Robot System (BRP project among BWH, Johns Hopkins University, and Acoustic MedSystems Inc., "Enabling Technologies for MRI-Guided Prostate Interventions")
    • 3D Slicer was connected to the MRI-compatible robot through OpenIGTLinkIF to send target positions and receive the current robot position. It was also connected to the MRI scanner to control the scan plane for real-time imaging and to receive MR images from the scanner (a device-side communication sketch follows this list).
  • Neurosurgical Robot Project (Nagoya Institute of Technology, Japan)
    • 3D Slicer was connected to an optical tracking system (Optotrak, Northern Digital Inc.) to acquire the current position of the robot's end effector.
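
As a sketch of what this kind of bidirectional exchange can look like on the device side, the loop below uses the OpenIGTLink C++ library to receive a target transform pushed from the MRML scene and to answer with the device's current pose. The host, port, and the device name "CurrentPosition" are placeholders, and the structure is modeled on the OpenIGTLink library's receive examples rather than on code from either project.

  // Device-side sketch: receive a target TRANSFORM pushed by Slicer through
  // OpenIGTLinkIF and reply with the device's current pose.
  #include <cstring>
  #include "igtlClientSocket.h"
  #include "igtlMath.h"
  #include "igtlMessageHeader.h"
  #include "igtlTransformMessage.h"

  int main()
  {
    igtl::ClientSocket::Pointer socket = igtl::ClientSocket::New();
    if (socket->ConnectToServer("127.0.0.1", 18944) != 0)
      {
      return 1; // connection failed
      }

    for (;;)
      {
      // Read the generic OpenIGTLink header to learn the message type.
      igtl::MessageHeader::Pointer header = igtl::MessageHeader::New();
      header->InitPack();
      int r = socket->Receive(header->GetPackPointer(), header->GetPackSize());
      if (r <= 0)
        {
        break; // connection closed
        }
      header->Unpack();

      if (std::strcmp(header->GetDeviceType(), "TRANSFORM") == 0)
        {
        // Read the body: a target transform sent from the MRML scene.
        igtl::TransformMessage::Pointer target = igtl::TransformMessage::New();
        target->SetMessageHeader(header);
        target->AllocatePack();
        socket->Receive(target->GetPackBodyPointer(), target->GetPackBodySize());
        target->Unpack(1); // 1 = verify the CRC

        igtl::Matrix4x4 targetMatrix;
        target->GetMatrix(targetMatrix);
        // ... command the device toward targetMatrix ...

        // Report the current device pose back; OpenIGTLinkIF imports it
        // into the MRML scene as a linear transform node.
        igtl::Matrix4x4 currentPose;
        igtl::IdentityMatrix(currentPose); // placeholder for the measured pose
        igtl::TransformMessage::Pointer reply = igtl::TransformMessage::New();
        reply->SetDeviceName("CurrentPosition");
        reply->SetMatrix(currentPose);
        reply->Pack();
        socket->Send(reply->GetPackPointer(), reply->GetPackSize());
        }
      else
        {
        socket->Skip(header->GetBodySizeToRead(), 0); // ignore other types
        }
      }

    socket->CloseSocket();
    return 0;
  }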

Tutorials

Under development

The test data can be downloaded automatically by pushing the "Load test data" button in the panel of the first step (an internet connection is required).

Panels and their use

Under development

Similar Modules

  • Point to other modules that have similar functionality

References

  1. Tokuda J, Fischer GS, Papademetris X, Yaniv Z, Ibanez L, Cheng P, Liu H, Blevins J, Arata J, Golby A, Kapur T, Pieper S, Burdette EC, Fichtinger G, Tempany CM, Hata N, OpenIGTLink: An Open Network Protocol for Image-Guided Therapy Environment, Int J Med Robot Comput Assist Surg, 2009 (In print)

Information for Developers