[[Slicer3:Developers#Quick_Links_to_Slicer3_Projects|Back to '''Slicer Projects Page''']]
 
= Aim =
Brainlab (http://www.brainlab.com) has recently introduced a customized client/server architecture called '''VectorVision Link (VV Link)''' for communicating with external IGT environments. This software API, open but not free, allows other programs to acquire tracking information and images from the Brainlab system. We propose to create a comprehensive workflow that interfaces 3D Slicer with the Brainlab system and uses it for research. Specifically, we will create simplified steps that let neurosurgeons use Slicer for DTI visualization research in the OR while Brainlab serves as the primary navigation tool. The scenario is as follows:

The Brainlab system will continue to run as usual; we will not install any software or hardware on the Brainlab computer, so its FDA status is not affected. Slicer runs on a separate computer, and the two computers are connected over a network or through a router. During a surgical procedure, Brainlab sends real-time tracking data and/or images to Slicer, and the tracking information may be used to seed dynamic DTI visualization in Slicer.
  
 
= Research Plan =
Our workflow will include the following steps:
#Set up the connection between Brainlab and Slicer
#Load a scene (default or user-specific) into Slicer (a minimal loading sketch follows this list)
#Navigate with DTI visualization in Slicer while the surgeon performs tracking with Brainlab
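For the scene-loading step, a minimal sketch is shown below. It assumes the scene is stored as a saved .mrml file loaded through Slicer's MRML library; the file name is a placeholder, and inside BrainlabModule the application's existing MRML scene would be reused rather than a new one created.

<pre>
// Minimal sketch: load a saved MRML scene (default or user specific).
// The file name is hypothetical; BrainlabModule would operate on the
// application's existing scene rather than creating its own.
#include "vtkMRMLScene.h"

int main()
{
  vtkMRMLScene* scene = vtkMRMLScene::New();
  scene->SetURL("default_dti_scene.mrml");  // placeholder scene file
  scene->Connect();  // clears the scene and loads the nodes from the file
  // ... the DTI volume, fiducial lists, etc. are now available in the scene ...
  scene->Delete();
  return 0;
}
</pre>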
This workflow should make it straightforward for the surgeon to use Slicer alongside Brainlab. We will implement it by developing a Slicer module called '''BrainlabModule'''. This interactive module uses OpenIGTLink (http://www.na-mic.org/Wiki/index.php/OpenIGTLink) for data communication between Brainlab and Slicer, and uses the Fiducial, DTMRI, and FiducialSeeding modules for DTI visualization. It also uses existing Slicer functions to load MRML scenes.
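To connect the tracking stream to DTI visualization, the incoming tool position can be written into a fiducial list that the FiducialSeeding module uses as its seeding source. The fragment below sketches this idea against the Slicer3 MRML API; the class and method names (vtkMRMLFiducialListNode and its fiducial accessors) reflect our understanding of the Fiducial module and should be treated as an assumption rather than the module's final code.

<pre>
// Sketch: keep one fiducial at the latest tracked position so that the
// FiducialSeeding module (configured to seed from this list) re-seeds the
// DTI tractography as the tool moves.  API usage is an assumption based on
// the Slicer3 Fiducial module, not the final BrainlabModule code.
#include "vtkMRMLScene.h"
#include "vtkMRMLFiducialListNode.h"

void UpdateSeedFiducial(vtkMRMLScene* scene, float x, float y, float z)
{
  static vtkMRMLFiducialListNode* seedList = NULL;
  if (seedList == NULL)
  {
    // Create the seeding list once and register it with the scene
    // (the real module would manage this reference properly).
    seedList = vtkMRMLFiducialListNode::New();
    seedList->SetName("BrainlabTrackedSeed");
    scene->AddNode(seedList);
    seedList->AddFiducialWithXYZ(x, y, z, 1);
  }
  else
  {
    // Move the existing seed point to the latest tracked position.
    seedList->SetNthFiducialXYZ(0, x, y, z);
  }
}
</pre>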
  
The development of BrainlabModule will be completed in two phases. In the first phase, a Brainlab simulator will be created; that is, a sequence of synthetic tracking points (not from a real Brainlab system) will be streamed to Slicer, where fiducial-driven DTI visualization will be performed. In the second phase, a real Brainlab system will be needed to test the entire workflow. Upon completion, both modes (Simulator and Brainlab) will co-exist in the module, and the user may choose which mode to execute.
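As a rough illustration of the simulator phase, the sketch below streams a short sequence of synthetic tracking positions to Slicer using the OpenIGTLink C++ library. It assumes Slicer's OpenIGTLink module is listening as a server on localhost, port 18944; the host, port, device name, and trajectory are illustrative choices only.

<pre>
// Brainlab-simulator sketch: send synthetic tracking positions to Slicer
// as OpenIGTLink transform messages.  Host, port, device name and the
// trajectory below are assumptions, not the module's actual settings.
#include "igtlClientSocket.h"
#include "igtlTransformMessage.h"
#include "igtlOSUtil.h"

int main()
{
  igtl::ClientSocket::Pointer socket = igtl::ClientSocket::New();
  if (socket->ConnectToServer("localhost", 18944) != 0)
  {
    return 1;  // Slicer's OpenIGTLink server is not reachable
  }

  igtl::TransformMessage::Pointer msg = igtl::TransformMessage::New();
  msg->SetDeviceName("BrainlabSimulator");

  for (int i = 0; i < 100; ++i)
  {
    igtl::Matrix4x4 matrix;
    igtl::IdentityMatrix(matrix);
    matrix[0][3] = 1.0f * i;  // synthetic x position (mm)
    matrix[1][3] = 0.0f;      // synthetic y position
    matrix[2][3] = 0.0f;      // synthetic z position

    msg->SetMatrix(matrix);
    msg->Pack();
    socket->Send(msg->GetPackPointer(), msg->GetPackSize());

    igtl::Sleep(100);  // ~10 positions per second
  }

  socket->CloseSocket();
  return 0;
}
</pre>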
  
Both modes are accompanied by training materials and documentation to ensure usability.
  
 
 
=Design of Module=
 
*[[Slicer3:BrainLabModule:Workflow|Version 1 of workflow (as of 04/02/2010)]]
*[[Slicer3:BrainLabModule:ConnectStep|A working version of Connect Step (as of 04/09/2010)]]
  
 
=Key Personnel=
 
 
Haiying Liu<br>
Noby Hata<br>
Ron Kikinis
  
 
=Progress=
 
* Week of 04/05/2010
** Work '''DONE''':
*** The user interface (GUI) for the Connect Step, where the network communication between BrainlabModule and a tracking source is set up, has been implemented and refined. We now have a working version of the Connect Step for tracking simulation: after a few configuration steps, simulated tracking data is streamed to the OpenIGTLink module in Slicer once the Connect button is clicked, and the streaming can be stopped by clicking the Close button.
*** Since the connection between BrainlabModule and the tracking source stays open, the network communication is handled in a separate thread, which keeps Slicer responsive to user interaction. The thread is stopped by clicking the Close button in the interface.
*** A stream of simulated tracking data points may either be generated in memory or read from a file. File I/O was chosen because tracking data from real surgical cases is usually saved to file, so the same mechanism can later be used for post-surgical re-examination or demonstration (a minimal sketch of this threaded file-replay loop follows this list).
** To-do list:
*** The OpenIGTLink module is currently configured through its own interface; this configuration needs to be performed programmatically from the C++ code of BrainlabModule.
*** Implementation of the Load Step and the Navigate Step will start next week.
* Week of 03/29/2010
** Ron, Noby and Haiying met at Ron's office to discuss the specs of the module and time frame for implementation.
** Haiying completed the [[Slicer3:BrainLabModule:Workflow|workflow]].
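The sketch below illustrates the Connect/Close behaviour described above under simplifying assumptions: a worker thread replays tracking points from a text file (one "x y z" triple per line) until a stop flag is raised. The file name, update rate, and the use of plain std::thread/std::atomic are illustrative only; the module itself uses Slicer's threading facilities and forwards each point over the OpenIGTLink connection instead of printing it.

<pre>
// Sketch of the Connect/Close simulator loop: a worker thread replays
// tracking points from a file until the stop flag is cleared.
#include <atomic>
#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

struct TrackingPoint { double x, y, z; };

// Read the whole recording up front; real surgical recordings are saved to
// file, so the same code path can replay them after surgery.
std::vector<TrackingPoint> LoadTrackingFile(const std::string& path)
{
  std::vector<TrackingPoint> points;
  std::ifstream in(path.c_str());
  std::string line;
  while (std::getline(in, line))
  {
    std::istringstream ss(line);
    TrackingPoint p;
    if (ss >> p.x >> p.y >> p.z)
    {
      points.push_back(p);
    }
  }
  return points;
}

std::atomic<bool> streaming(false);

// Runs in the worker thread started by Connect; Close clears the flag.
void StreamingLoop(std::vector<TrackingPoint> points)
{
  for (std::size_t i = 0; streaming && i < points.size(); ++i)
  {
    // In the module each point would be packed into an OpenIGTLink message
    // and sent to Slicer; here it is only printed.
    std::cout << points[i].x << " " << points[i].y << " "
              << points[i].z << std::endl;
    std::this_thread::sleep_for(std::chrono::milliseconds(100));  // ~10 Hz
  }
}

int main()
{
  std::vector<TrackingPoint> points = LoadTrackingFile("simulated_track.txt");
  streaming = true;                                      // Connect clicked
  std::thread worker(StreamingLoop, points);
  std::this_thread::sleep_for(std::chrono::seconds(2));
  streaming = false;                                     // Close clicked
  worker.join();
  return 0;
}
</pre>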
  
 
= Dependency =
The following modules are required for the Brainlab module to work properly:<br>
Fiducial<br>
OpenIGTLink<br>
DTMRI<br>
FiducialSeeding<br>
= Tutorials =
  
 
= Feature Request =
 
