== Slicer3 use cases for XCEDE ==
 
=== Slicer use-case #0: Demo we would like to show for BIRN ===

* Begin with complex scene loaded in slicer (freesurfer, query atlas, fMRI overlay, tractography, and multiple scene snapshots).
* Publish scene to XCEDE compliant web service (initially XNAT, plus HID if possible)
** user provides server base URI or uses default
** user provides name/password
** user optionally provides tags
** each dataset in scene is uploaded serially
** get uris for each dataset uploaded
** compose MRML file using uris
** upload MRML file.
* Slicer closes
* Web interface to server is used to search by tags
* Clicking MRML download link from web server starts slicer on local machine
* Full scene is restored
* (modify scene, reupload, repeat..)

''Options/extensions:''

* initial scene could have been loaded through web service query based on subject metadata (returning XCAT that is loaded to slicer)
* IGT use-case where multiple investigators contribute data to scene
* workflow use case where investigator assigns processing tasks to RAs using tags to keep track of processing state
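The publish sequence above (upload each dataset serially, collect the uri the server hands back, rewrite the MRML file to reference the uris) could be sketched roughly as below. The endpoint layout and the XCEDE response element (`<uri>`) are assumptions for illustration, not a confirmed API:

```python
import urllib.request
from xml.etree import ElementTree

# Hypothetical sketch of the publish loop in use-case #0: upload each
# dataset in the scene serially, collect the uri the server reports back,
# then rewrite the MRML file so it points at the remote uris.

def upload_dataset(base_uri, local_path):
    """PUT one local file; return the uri from the (assumed) XCEDE reply."""
    with open(local_path, "rb") as f:
        req = urllib.request.Request(
            base_uri + "/" + local_path.rsplit("/", 1)[-1],
            data=f.read(), method="PUT")
    with urllib.request.urlopen(req) as resp:
        return ElementTree.fromstring(resp.read()).findtext("uri")

def rewrite_mrml(mrml_text, uri_map):
    """Replace each local fileName in the MRML text with its uploaded uri."""
    for local_name, remote_uri in uri_map.items():
        mrml_text = mrml_text.replace(
            'fileName="%s"' % local_name, 'fileName="%s"' % remote_uri)
    return mrml_text
```

The rewritten MRML file would then itself be uploaded as the last step, so that downloading it later restores the full scene.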
  
 
=== Slicer use-case #1: Managing image processing project in a Neuroscience Lab ===
 
====Overview====

*Step0:  Investigator creates a new project on the hosted repository.
*Step1:  Investigator uploads data from a study and associates it with the project.
*Step2:  Later, the investigator queries for all visits uploaded as part of the study.
*Step3:  The investigator then assigns each acquisition to a researcher for a segmentation task.
*Step4:  A researcher queries for all acquisition segmentations assigned to them.
*Step5:  A researcher downloads some subset of those assignments in an XCEDE catalog, opens the catalog with Slicer and creates a label map.
*Step6:  When finished, a researcher uploads completed work or work in progress and tags the work appropriately.

====Notes====

For example, we’ll use the public OASIS brains dataset on XNAT, and let “composite ID” for a visit be OAS1_0006_MR1, or OAS1_0009_MR1, etc.

* Each visit (such as OAS1_0006_MR1) may contain multiple acquisitions, assessments, and derived data.
* Each element's '''ID''' is made unique by the back end, so that each visit, acquisition, subject, study, etc. has a locally unique ID.
* '''CompositeIDs''' are made globally unique by appending the uri of the hosted repository.
* Each element can also have a '''user-assigned label''' with no uniqueness requirement (like scan001), which may be a more comfortable way for the user to refer to it.
  
 
There are different ways in which an ID might be generated for an element.
* user may supply a preferred placeholder ID to which the back end applies a uniqueness check. The requested ID is used if unique.
* no placeholder ID is supplied and the system just generates a unique ID.
* system-generated ID is returned to user, who can accept or reject that ID.

Assuming also that PUT and POST are swapped on the spreadsheet Jeff sent last week. Below, we use the following:
* PUT = CREATE NEW
* POST = UPDATE EXISTING
* confirm which is correct convention!!!
 
====Stepping thru details====
  
  
'''Step0.''' An investigator creates a new project in the hosted repository:

(not sure how to do this on command line -- should discuss this.)

Probably done through web browser interface, but we should allow programmatic control at some point.

--------------------------------------------------------------------

'''Step1.''' An investigator uploads all tagged data from a study (using an upload script, or a webGUI or something outside of Slicer) and associates it with the new project -- for instance:
  
 
  curl -X PUT -T localFile -u user:passwd http://central.xnat.org/CENTRAL_OASIS_CS/OAS1_0009_MR1/uploadFileName

or

  curl -X PUT -T localFile -u user:passwd "http://central.xnat.org?project=CENTRAL_OASIS_CS&visit=OAS1_0009_MR1"

or

upload the file plus an XML file that associates the file, project and visit.
  
'''Response:''' Script looks for return value, parses XCEDE for the ID of each uploaded image, or parses error XML and posts message if upload failed.
Note: data could also come from web browser GUI or through DICOM transfer.
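The response handling above could be sketched as follows; the element shapes (`<entity id="...">` on success, `<error>` on failure) are placeholders until the real XCEDE response schema is pinned down:

```python
from xml.etree import ElementTree

# Sketch of the Step1 response handling: pull the new ID out of the
# returned XML, or surface the error message.  The <entity id="..."> and
# <error> element names are assumptions, not the confirmed XCEDE schema.

def parse_upload_response(xml_text):
    root = ElementTree.fromstring(xml_text)
    if root.tag == "error":
        raise RuntimeError("upload failed: " + (root.text or ""))
    return root.get("id")
```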
  
 
--------------------------------------------------------------------
  
  
'''Step2.''' Investigator queries for all visits uploaded as part of the study, tagged with him as the investigator and with a particular protocol, which have not yet been assigned for segmentation. Maybe this is done via web form, maybe with a script or query tool inside Slicer.

  curl -X GET "http://central.xnat.org/visits?project=CENTRAL_OASIS_CS&investigator=Marek%20Kubicki&protocol=fBIRN&technician=NULL"

'''Response:''' Receives back XCEDE XML describing a list of uris for appropriate visits and a return code; if the return code is good, parse the XML to get the list of uris and display them using some tool or another. Gets a list like:
  
http://central.xnat.org/visit/OAS1_0006_MR1

http://central.xnat.org/visit/OAS1_0009_MR1

http://central.xnat.org/visit/OAS1_0054_MR1

http://central.xnat.org/visit/OAS1_0450_MR1
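The query round trip could be sketched as below: compose the tag-filtered URL (`urlencode` handles the escaping of "Marek Kubicki"), then pull the visit uris out of the returned XML. The `<visit uri="...">` response shape is an assumption, not a confirmed part of the spec:

```python
from urllib.parse import urlencode
from xml.etree import ElementTree

# Sketch of the Step2 query: build the tag-filtered URL and parse the
# visit uris out of an (assumed) XCEDE response document.

def visits_query_url(base, **tags):
    return base + "/visits?" + urlencode(tags)

def parse_visit_uris(xml_text):
    return [v.get("uri")
            for v in ElementTree.fromstring(xml_text).iter("visit")]
```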
  
  
'''Step3.''' Investigator updates the metadata with some segmentation assignments.

  curl -X POST "http://central.xnat.org/visit/OAS1_0006_MR1?technician=Tech1&status=assigned"
  curl -X POST "http://central.xnat.org/visit/OAS1_0009_MR1?technician=Tech2&status=assigned"
  curl -X POST "http://central.xnat.org/visit/OAS1_0054_MR1?technician=Tech2&status=assigned"
  …
  curl -X POST "http://central.xnat.org/visit/OAS1_0450_MR1?technician=Tech2&status=assigned"
  
'''Response:''' Probably only need an integer back that flags success/failure.
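The assignment round could be scripted as a loop that composes one POST per visit; here we only build the URLs, and actually issuing the requests (via curl or urllib) and checking each success flag is left to the surrounding script:

```python
from urllib.parse import urlencode

# Sketch of the Step3 assignment loop: one POST URL per visit.

def assignment_urls(base, assignments):
    """assignments maps visit ID -> technician; returns one URL per POST."""
    return [base + "/visit/" + visit + "?" +
            urlencode({"technician": tech, "status": "assigned"})
            for visit, tech in assignments.items()]
```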
  
  
'''Step4.''' Technician “Tech2” queries for all of their assignments from this particular investigator.

  curl -X GET "http://central.xnat.org/visits?technician=Tech2&investigator=Marek%20Kubicki&status=assigned"

'''Response:''' Receives back XCEDE XML describing a list of uris for visits containing data they need to segment. If the return code is good, parse the XML to get the list of uris and display them using some tool or another.
  
  
'''Step5.''' Technician downloads an XCEDE catalog describing a single visit:

  curl -X GET http://central.xnat.org/visit/OAS1_0009_MR1?view=XCAT

or

  curl -X GET http://central.xnat.org/visit/OAS1_0009_MR1?view=XCEDE

Or, technician downloads all of their assigned visits:

  curl -X GET "http://central.xnat.org/visits?project=CENTRAL_OASIS_CS&technician=Tech2&view=XCAT"
  
QUESTION: how do we form the command line for multiple visits?

QUESTION: are multiple XCEDE catalogs returned, or one composite catalog?
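Fetching one catalog and listing the files it references could be sketched as below. The `<entry uri="...">` element name is an assumption about the catalog format and should be checked against what the service actually returns:

```python
import urllib.request
from xml.etree import ElementTree

# Sketch of Step5: fetch the XCAT for one visit, then list the data files
# it references so they can be downloaded in turn.

def fetch_catalog(visit_uri):
    with urllib.request.urlopen(visit_uri + "?view=XCAT") as resp:
        return resp.read().decode()

def catalog_entry_uris(xcat_text):
    return [e.get("uri")
            for e in ElementTree.fromstring(xcat_text).iter("entry")]
```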
  
Technician opens the catalog in Slicer and works on the segmentation.

'''Response:''' If return code is not good, post error.

'''Step6.''' At the day’s end, he wants to upload his work in progress with updated status.

QUESTION: doing a ‘post’ will presumably overwrite the original data (if it were present in the DB). Back end will not do any automatic versioning: decisions about whether to overwrite or create versions are made project-by-project on the client side.

Post a file that already has a uri on the DB, retrieved from the XCAT:

  curl -X POST "{uri}?status=in%20progress"

Put a newly created file:

  curl -X PUT -T updated-aseg.mgz "http://central.xnat.org/visit/OAS1_0009_MR1/{path}/aseg.mgz?{all tags from the visit?}&status=completed"

QUESTION: Provenance -- how to represent? Could associate provenance information in these ways:

* add XCEDE Provenance XML into the data header and perform a single PUT operation,
* PUT the data file, then POST an update to metadata using the ID returned from the first PUT,
* PUT the data file, and then PUT a separate Provenance XML file that references the ID returned from the first PUT.

'''Response:''' If data was POSTed, we get an integer or ID back that indicates success/failure; if data was PUT, then we get back XCEDE XML with a new ID.
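The third provenance option (a separate Provenance XML file referencing the ID returned by the first PUT) could be sketched as below; the element names are loose placeholders, not the official XCEDE provenance schema:

```python
from xml.etree import ElementTree

# Sketch: after PUTting the data file, build a small standalone Provenance
# XML document that references the ID the server returned.  Element names
# (provenance, processStep, program, programVersion) are illustrative.

def provenance_xml(data_id, program, version):
    prov = ElementTree.Element("provenance", {"dataRef": data_id})
    step = ElementTree.SubElement(prov, "processStep")
    ElementTree.SubElement(step, "program").text = program
    ElementTree.SubElement(step, "programVersion").text = version
    return ElementTree.tostring(prov, encoding="unicode")
```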
  
  
=== Slicer use-case #2: Query Atlas Module ===

====Overview====

*Step0A: Researchers describe FreeSurfer and FIPS datasets for each subject in a study using XCEDE spec.
*Step0B: Researchers create a notebook referencing relevant FIPS and FreeSurfer datasets for each subject in a study.
*Step1:  Researchers upload data and description to hosted repository.
*Step2:  Investigator queries hosted repository for all subjects who have a particular diagnosis (SZ), a FreeSurfer analysis, and a FIPS analysis for a particular protocol (SIRP).
*Step3:  Investigator downloads some subset of the datasets in an XCEDE catalog (or notebook?) and opens it with Slicer.
*Step4:  Combined results are viewed in the QueryAtlas's ontology- and atlas-based context. Investigator chooses search terms and queries web-based resources for related information.
*Step5:  Investigator reviews search results and saves all data of interest in a bookmark file (or in the XCEDE notebook?) for continued study (or recommended reading).
*Step6:  Investigator publishes notebook containing references to datasets and related resources back to hosted repository.
  
====Notes====

This is a record of the way we're currently using the xcede2.0 catalog in order to load in a query atlas scene that contains:

* One FreeSurfer "lh" model and one FreeSurfer "rh" model at most
* the FreeSurfer structural brain.mgz image volume
* the FreeSurfer example functional example_func.nii
* some FIPS-generated Statistical volumes
* the FreeSurfer aparc+aseg.mgz label map volume
* the FreeSurfer lh.aparc.annot annotation file
* the matrix to align the example_func + statistics to the brain.mgz: anat2exf.register.dat

'''Assumptions (which lead to brittle implementation):'''

* we are assuming a single lh or rh model (or one of each) is in the catalog. (Reason: Slicer will load the lh.aparc.annot file as a scalar overlay onto a model; so the overlay needs to be associated with a model automatically during load. Since the catalog attribute list contains no references to other datasets, the only way to associate the lh.aparc.annot to the lh.pial surface is by matching on the "lh." and "rh." in the model and annotation names.)
  
* we are detecting whether a matrix with the name "anat2efx" is in the catalog (we should probably check the uri instead) -- and if so, we are wrapping the list of all FIPS-generated statistical volumes and the example_func (noted above) in a derived transform in the load routine. We recognize files with the string "brain.mgz", "example_func" and "anat2exf" in them, in order to do some automatic calculation of the Slicer transform that does the Slicer ras2ras registration. Then we stuff volumes with "example_func" or "stat" in their name into the derived transform.
** the tests to detect these files are weak -- need a better test: now, a structural volume with the string 'stat' embedded in it will be stuffed into the transform by mistake.
** no registration transform is computed if volumes matching the brain.mgz, example_func and an anat2exf.register.dat file are not detected.

'''New value inventions for attributes:'''

* Freesurfer:matrix-1 (this gives us a clue about file format).
* Freesurfer:overlay-1 (for files that need to be associated with a model in order to load in Slicer)
* Freesurfer:label-1 (for mgz files that are label maps) This gives us a hint about what kind of color map to use to display the image data. The default colormap for volume data is greyscale.
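The name-matching heuristic described in the assumptions above could be sketched as follows; the file-extension tests are illustrative stand-ins for however the loader actually recognizes models and annotations:

```python
# Sketch of the brittle heuristic: associate each .annot overlay with a
# surface model via the shared "lh."/"rh." prefix, since the catalog
# carries no cross-references between datasets.

def match_overlays_to_models(filenames):
    models = [f for f in filenames if f.endswith(".pial")]
    pairs = {}
    for annot in (f for f in filenames if f.endswith(".annot")):
        prefix = annot.split(".", 1)[0]       # "lh" or "rh"
        for model in models:
            if model.startswith(prefix + "."):
                pairs[annot] = model
    return pairs
```

A catalog-level cross-reference between datasets would make this matching unnecessary, which is exactly the brittleness the assumptions call out.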
  
====Stepping thru details====

--------------------------------------------------------------------

'''Step0A.''' Researchers describe FreeSurfer and FIPS datasets for each subject in a study using XCEDE spec.

* Does FIPS data have a complete data description in XCEDE yet?
* Has FIPS analysis data been described and uploaded into the BDR for all subjects (in some sanctioned place)?
* Has FreeSurfer data for all subjects been uploaded to the BDR to some sanctioned location yet (individual files or tarball)?

--------------------------------------------------------------------

'''Step0B.''' Researchers create a notebook describing relevant FIPS and FreeSurfer datasets for each subject in a study.

If no decisions have been made yet as to where/how to store these various analyses, then maybe a "FIPS/FreeSurfer" notebook can be created on the BDR for individual subjects. The notebook would contain references to the required datasets (stored on or outside the BDR).

--------------------------------------------------------------------

'''Step1.''' Researchers upload data and description (or notebook) to hosted repository.

  curl -X PUT -u user:passwd -T MorphologyFunctionNotebook_SIRP.nb "http://www.myHID.org/protocols/visit/{path}/MorphologyFunctionNotebook_SIRP.nb?{all tags from the visit including protocol=SIRP Subject=fbphIIS10056 ...}"

'''Response:''' Get back XCEDE XML with new ID.

--------------------------------------------------------------------

'''Step2.''' Investigator queries hosted repository for all subjects with a particular diagnosis (SZ) who have been tested with a particular protocol (SIRP).

  curl -X GET "http://www.myHID.org/visits?project=FBIRN&diagnosis=SZ&protocol=SIRP"

'''Response:''' Receives back XCEDE XML describing a list of uris for appropriate visits and a return code; if the return code is good, parse the XML to get the list of uris and display them using some tool or another. Gets a list like:

http://www.someHID.org/visit1/fbphIIS10056_SIRP

http://www.someHID.org/visit2/fbphIIS10056_SIRP

...

http://www.otherHID.org/visit1/fbphIIS10048_SIRP

http://www.otherHID.org/visit2/fbphIIS10048_SIRP

Now how will the investigator learn whether both FIPS and FreeSurfer data exist for the first uri?

  curl -X GET http://www.myHID.org/...

'''Response:'''

--------------------------------------------------------------------

'''Step3.''' Investigator downloads some subset of the datasets in an XCEDE catalog (or notebook?) and opens it with Slicer.

  curl -X GET http://www.myHID.org/...

'''Response:''' If return code is not good, post error.

--------------------------------------------------------------------

'''Step4.''' (Inside Slicer) Combined results are viewed in the QueryAtlas's ontology- and atlas-based context. Investigator chooses search terms and queries web-based resources for related information. Perhaps all links to data in the notebook populate the QueryAtlas's saved-links menu.

--------------------------------------------------------------------

'''Step5.''' (Inside Slicer) Investigator reviews new search results and saves all references of interest in a local bookmark file for continued study or recommended reading.

--------------------------------------------------------------------

'''Step6.''' Investigator publishes an updated notebook, containing additional references to datasets and related resources, back to the hosted repository.

  curl -X POST -u user:passwd -T MorphologyFunctionNotebook_SIRP.nb http://www.myHID.org/protocols/visit/{path}/MorphologyFunctionNotebook_SIRP.nb?

'''Response:''' Probably only need an integer back that flags success/failure.
  
 
=== Slicer use-case #3: Qdec Module ===

Description needed...

=== Slicer use-case #4: Notebooks ===

# P.I. creates a new project populated with data matching some criteria
# Graduate student creates a notebook within XCEDE; notebooks contain pages referencing results of processing
# Graduate student writes nifty segmentation algorithm, queries XCEDE for list of datasets
# Algorithm runs on each dataset in turn, adding results to XCEDE and forming a reference in a page of the graduate student's notebook

... time passes ...

# Graduate student changes the algorithm, creates a new page in the notebook and re-runs the algorithm
# P.I. queries XCEDE for the page in the student's notebook, importing algorithm results into R
# P.I. happy with segmentation results, grants degree.

The main idea behind notebooks is the same as lab notebooks: essentially pages and pages of (un)related results. Each page may contain an arbitrary number of results and may be exported into common formats suitable for analysis (CSV, Excel, R, SAS, etc). This allows algorithms to be modified while maintaining history and provenance.
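A toy model of the page-of-results idea might look like the following; the class and field names are invented for illustration, with CSV standing in for the other export formats:

```python
import csv
import io

# Toy model of the notebook idea: a page holds result references, and any
# page can be exported to a common format (CSV here) for analysis in
# R, SAS or Excel.  Field names are invented for illustration.

class NotebookPage:
    def __init__(self, title):
        self.title = title
        self.results = []          # (dataset_uri, measure, value) tuples

    def add_result(self, dataset_uri, measure, value):
        self.results.append((dataset_uri, measure, value))

    def to_csv(self):
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["dataset", "measure", "value"])
        writer.writerows(self.results)
        return buf.getvalue()
```

Re-running a changed algorithm would then mean creating a new page rather than overwriting an old one, which is what preserves the history.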
  
 
=== Prototype XCAT Files ===

''Latest revision as of 19:44, 20 August 2008''


Slicer3 use cases for XCEDE

Slicer use-case #0: Demo we would like to show for BIRN

  • Begin with complex scene loaded in slicer (freesurfer, query atlas, fMRI overlay, tractography, and multiple scene snapshots).
  • Publish scene to XCEDE compliant web service (initially XNAT, plus HID if possible)
    • user provides server base URI or uses default
    • user provides name/password
    • user optionally provides tags
    • each dataset in scene is uploaded serially
    • get uris for each dataset uploaded
    • compose MRML file using uris
    • upload MMRML file.
  • Slicer closes
  • Web interface to server is used to search by tags
  • Clicking MRML download link from web server starts slicer on local machine
  • Full scene is restored
  • (modify scene, reupload, repeat..)

Options/extensions:

  • initial scene could have been loaded through web service query based on subject metadata (returning XCAT that is loaded to slicer)
  • IGT use-case where multiple investigators contribute data to scene
  • workflow use case where investigator assigns processing tasks to RAs using tags to keep track of processing state

Slicer use-case #1: Managing image processing project in a Neuroscience Lab

Overview

  • Step0: Investigator creates a new project on the hosted repository
  • Step1: Investigator uploads data from a study and associates it with the project.
  • Step2: Later, the investigator queries for all “Acquisitions” uploaded as part of the study.
  • Step3: The investigator then assigns each acquisition to a researcher for a segmentation task.
  • Step4: A researcher queries for all acquisition segmentations assigned to them.
  • Step5: A researcher downloads some subset of those assignments in an XCEDE catalog, opens the catalog with Slicer and creates a label map.
  • Step6: When finished, a researcher uploads completed work or work in progress and tags the work appropriately.

Notes

For example, we’ll use the public OASIS brains dataset on XNAT, and let “composite ID” for a visit be OAS1_0006_MR1, or OAS1_0009_MR1, etc.

  • Each visit (such as OAS1_0006_MR1) may contain multiple acquisitions, assessments, and derived data.
  • Each element's ID is made unique by the back end, so that each visit, acquisition, subject, study, etc. has a locally unique ID.
  • CompositeIDs are made globally unique by appending the uri of the hosted repository.
  • Each element can also have a user-assigned label without a uniquenss requirement (like scan001), which may be a more comfortable way for the user to refer to it .

There are different ways in which an ID might be generated for an element.

  • user may supply a preferred placeholder ID to which the back end applies a uniqueness check. The requested ID is used if unique.
  • no placeholder ID is supplied and the system just generates a unique ID.
  • system-generated ID is returned to user, who can accept or reject that ID.

Assuming also that PUT and POST are swapped on the spreadsheet Jeff sent last week. Below, we use the following:

  • PUT = CREATE NEW
  • POST = UPDATE EXISTING
  • confirm which is correct convention!!!

Stepping thru details



Step0. An investigator creates a new project in the hosted repository:

(not sure how to do this on command line -- should discuss this.)

Probably done through web browser interface, but we should allow programmatic control at some point.




Step1. An investigator uploads all tagged data from a study (using an upload script, or a webGUI or something outside of Slicer) and associates it with the new project – for instance:

curl –X PUT -T localFile –u user:passwd http://central.xnat.org/CENTRAL_OASIS_CS/OAS1_0009_MR1/uploadFileName 
or
curl -X PUT -T localFile -u user:passwd http://central.xnat.org?project=CENTRAL_OASIS_CS&visit=OAS1_0009_MRI
or 
upload file plus XML file that associates the file, project and visit.

Response: Script looks for return value, parses XCEDE for the ID of each uploaded image, or parses error XML and posts message if upload failed.

Note: data could also come from web browser GUI or through DICOM transfer.



Step2. Investigator queries for all visits uploaded as part of the study, tagged with him as the investigator and with a particular protocol, which have not yet been assigned for segmentation. Maybe this is done via web form, maybe with a script or query tool inside Slicer.

curl –X GET http://central.xnat.org/visits?project=CENTRAL_OASIS_CS&investigator=Marek%20Kubicki&protocol=fBIRN&technician=NULL


Response: Receives back XCEDE XML describing list of uris for appropriate visits and return code; If return code is good, parse XML to get list of uris and display them using some tool or another. Gets a list like:

http://central.xnat.org/visit/OAS1_0006_MR1

http://central.xnat.org/visit/OAS1_0009_MR1

http://central.xnat.org/visit/OAS1_0054_MR1

http://central.xnat.org/visit/OAS1_0450_MR1




Step3. Investigator updates the metadata with some segmentation assignments.

curl –X POST http://central.xnat.org/visit/OAS1_0006_MR1?technician=Tech1&status=assigned 
curl –X POST http://central.xnat.org/visit/OAS1_0009_MR1?technician=Tech2&status=assigned 
curl –X POST http://central.xnat.org/visit/OAS1_0054_MR1?technician=Tech2&status=assigned
…
curl -X POST http://central.xnat.org/visit/OAS1_0450_MR1?technician=Tech2&status=assigned

Response: Probably only need an integer back that flags success/failure.




Step4. Technician “Tech2” queries for all of their assignments from this particular investigator.

curl –X GET http://central.xnat.org/visits?technician=Tech2&investigator=Marek%20Kubicki&status=assigned 

Response: Receives back XCEDE XML describing list of uris for visits containing data they need to segment. If return code is good, parse XML to get list of uris and display them using some tool or another.




Step5. Technician downloads an XCEDE catalog describing a single visit

curl –X GET http://central.xnat.org/visit/OAS1_0009_MR1?view=XCAT
or
curl -X GET http://central.xnat.org/visit/OAS1_0009_MR1?view=XCEDE

Or, technician downloads all of their assigned visits:

curl –X GET http://central.xnat.org/visits?project=CENTRAL_OASIS_CS&technician=Tech2&view=XCAT

QUESTION: how do we form the commandline for multiple visits? QUESTION: are multiple XCEDE catalogs returned, or one composite catalog?

Technician opens the catalog in Slicer and works on the segmentation.

Response: If return code is not good, post error.




Step6. At the day’s end, he wants to upload his work in progress with updated status

QUESTION: doing a ‘post’ will presumably overwrite the original data (if it were present in the DB). Back end will not do any automatic versioning: decisions about whether to overwrite, or create versions are made project-by-project on the client side.

Post a file that already has a uri on the DB, retrieved from the XCAT:

curl –X POST {uri}?status=in%20progress

Put a newly created file:

curl –X PUT -T updated-aseg.mgz http://central.xnat.org/visit/OAS1_0009_MR1/{path}/aseg.mgz?  {all tags from the visit?}  &status=completed

QUESTION: Provenance-- how to represent. Could associate provenance information in these ways:

  • add XCEDE Provenance XML into the data header and perform a single PUT operation,
  • PUT the data file, then POST an update to metadata using the ID returned from the first PUT.
  • PUT the data file, and then PUT a separate Provenance XML file that references the ID returned from the first PUT.

Response: Get XCEDE back. If data was POSTED, we get an integer or ID back that indicates success/failure; if data was PUT, then we get back XCEDE XML with new ID.



Slicer use-case #2: Query Atlas Module

Overview

  • Step0A: Researchers describe FreeSurfer and FIPS datasets for each subject in a study using XCEDE spec.
  • Step0B: Researchers create a notebook referencing relevant FIPS and FreeSurfer datasets for each subject in a study.
  • Step1: Researchers upload data and description to hosted repository.
  • Step2: Investigator queries hosted repository for all subjects with a particular diagnosis (SZ) and have a FreeSurfer analysis, and a FIPS analysis for a particular protocol (SIRP).
  • Step3: Investigator downloads some subset of the datasets in an XCEDE catalog (or notebook?) and opens with Slicer.
  • Step4: Combined results are viewed in the QueryAtlas's ontology- and atlas-based context. Investigator chooses search terms and queries web-based resources for related information.
  • Step5: Investigator reviews search results and saves all data of interest in a bookmark file (or in the XCEDE notebook?) for continued study (or recommended reading.
  • Step6: Investigator publishes notebook containing references to datasets and related resources back to hosted repository.

Notes

This is a record of the way we're currently using the xcede2.0 catalog in order to load in a query atlas scene that contains:

  • One FreeSurfer "lh" model and one FreeSurfer "rh" model at most
  • the FreeSurfer structural brain.mgz image volume
  • the FreeSurfer example functional example_func.nii
  • some FIPS-generated Statistical volumes
  • the FreeSurfer aparc+aseg.mgz label map volume
  • the FreeSurfer lh.aparc.annot annotation file
  • the matrix to align the example_func + statistics to the brain.mgz: anat2exf.register.dat

Assumptions (which lead to brittle implementation):

  • we are assuming a single lh or rh model (or one of each) is in the catalog. (Reason: Slicer will load the lh.aparc.annot file as a scalar overlay onto a model; so the overlay needs to be associated with a model automatically during load. Since the catalog attribute list contains no references to other datasets, the only way to associate the lh.aparc.annot to the lh.pial surface is by matching on the "lh." and "rh." in the model and annotation names.)
  • we are detecting whether a matrix with the name "anat2efx" is in the catalog (we should probably check the uri instead) -- and if so, we are wrapping the list of all FIPS-generated statistical volumes and the example_func(noted above) in a derived transform in the load routine. We recognize files with the string "brain.mgz" "example_func" and "anat2exf" in them, in order to do some automatic calculation of the Slicer transform that does the Slicer ras2ras registration. Then we stuff volumes with "example_func" or "stat" in their name into the derived transform.
    • the tests to detect these files are weak -- need a better test: now, a structural volume with the string 'stat' embedded in it will be stuffed into the transform by mistake.
    • no registration transform is computed if volumes matching the brain.mgz, example_func and an anat2exf.register.dat file are not detected.

New value inventions for attributes:

  • Freesurfer:matrix-1 (this gives us a clue about file format).
  • Freesurfer:overlay-1 (for files that need to be associated with a model in order to load in Slicer)
  • Freesurfer:label-1 (for mgz files that are label maps) This gives us a hint about what kind of color map to use to display the image data. The default colormap for volume data is greyscale.

Stepping thru details



Step0A. Researchers describe FreeSurfer and FIPS datasets for each subject in a study using XCEDE spec.

  • Does FIPS data have a complete data description in XCEDE yet?
  • Has FIPS analysis data been described and uploaded into the BDR for all subjects (in some sanctioned place)
  • Has FreeSurfer data for all subjects been uploaded to the BDR to some sanctioned location yet (individual files or tarball)?




Step0B. Researchers create a notebook describing relevant FIPS and FreeSurfer datasets for each subject in a study.

If no decisions have been made yet as to where/how to store these various analyses, then maybe a "FIPS/FreeSurfer" notebook can be created on the BDR for individual subjects. The notebook would contain references to the required datasets (stored on or outside the BDR).




Step1. Researchers upload data and description (or notebook) to hosted repository.

curl -X PUT -u user:passwd -T MorphologyFunctionNotebook_SIRP.nb http://www.myHID.org/protocols/visit/{path}/MorphologyFunctionNotebook_SIRP.nb?{all tags from the visit, including protocol=SIRP, Subject=fbphIIS10056, ...}

Response: Get back XCEDE XML with new ID.
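The upload URL above can be assembled from the base URI and the visit tags; this is a hedged sketch using the example values shown (the `{path}` segment is elided here as in the original), and in practice the tag values would need URL-encoding if they contain spaces:

```shell
# Hypothetical assembly of the Step1 upload URL from the example's values.
# BASE and TAGS are taken from the curl example above; tags are joined with '&'.
BASE='http://www.myHID.org/protocols/visit'
TAGS='protocol=SIRP&Subject=fbphIIS10056'
URL="${BASE}/MorphologyFunctionNotebook_SIRP.nb?${TAGS}"
echo "$URL"
# curl -X PUT -u user:passwd -T MorphologyFunctionNotebook_SIRP.nb "$URL"
```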



Step2. Investigator queries hosted repository for all subjects with a particular diagnosis (SZ) who have been tested with a particular protocol (SIRP).

curl -X GET http://www.myHID.org/visits?project=FBIRN&diagnosis=SZ&protocol=SIRP

Response: Receives back XCEDE XML describing a list of uris for the matching visits, plus a return code. If the return code is good, parse the XML to get the list of uris and display them with some tool. Gets a list like:

http://www.someHID.org/visit1/fbphIIS10056_SIRP
http://www.someHID.org/visit2/fbphIIS10056_SIRP
...
http://www.otherHID.org/visit1/fbphIIS10048_SIRP
http://www.otherHID.org/visit2/fbphIIS10048_SIRP
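The uri list above could be pulled out of the XML response with a simple grep/sed pipeline; the `<visit uri="..."/>` element shape is an assumption here, since the response schema is not fixed above:

```shell
# Hypothetical sketch: extract visit uris from an XCEDE XML response.
# The element name and attribute layout are assumptions for illustration.
response='<list>
<visit uri="http://www.someHID.org/visit1/fbphIIS10056_SIRP"/>
<visit uri="http://www.otherHID.org/visit1/fbphIIS10048_SIRP"/>
</list>'

# grep isolates the uri="..." attributes; sed strips the wrapper.
uris=$(echo "$response" | grep -o 'uri="[^"]*"' | sed -e 's/^uri="//' -e 's/"$//')
echo "$uris"
```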

Now how will the investigator learn whether both FIPS and FreeSurfer data exist for the first uri?

curl -X GET http://www.myHID.org/...

Response:



Step3. Investigator downloads some subset of the datasets in an XCEDE catalog (or notebook?) and opens them with Slicer.

curl -X GET http://www.myHID.org/...

Response: If the return code is not good, report an error.



Step4. (Inside Slicer) Combined results are viewed in the QueryAtlas's ontology- and atlas-based context. Investigator chooses search terms and queries web-based resources for related information. Perhaps all links to data in the notebook populate the QueryAtlas's saved-links menu.



Step5. (Inside Slicer) Investigator reviews new search results and saves all references of interest in a local bookmark file for continued study or recommended reading.




Step6. Investigator uploads an updated notebook, containing additional references to datasets and related resources, back to the hosted repository.

curl -X POST -u user:passwd -T MorphologyFunctionNotebook_SIRP.nb http://www.myHID.org/protocols/visit/{path}/MorphologyFunctionNotebook_SIRP.nb?


Response: Probably only need an integer back that flags success/failure.
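The success/failure flag mentioned above could be checked client-side via curl's HTTP status code (curl's real `-w '%{http_code}'` option prints it); the `check_status` helper below is a hypothetical sketch, treating any 2xx code as success:

```shell
# Hypothetical success/failure check on the HTTP status code.
check_status() {
  case "$1" in
    2??) echo success ;;   # any 2xx status counts as success
    *)   echo failure ;;
  esac
}

# In practice the status would come from the POST itself, e.g.:
# status=$(curl -s -o /dev/null -w '%{http_code}' -X POST -u user:passwd ...)
check_status 200   # success
check_status 500   # failure
```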

Slicer use-case #3: Qdec Module

Description needed...


Slicer use-case #4: Notebooks

  1. P.I. creates a new project populated with data matching some criteria
  2. Graduate student creates a notebook within XCEDE; notebooks contain pages referencing the results of processing
  3. Graduate student writes a nifty segmentation algorithm and queries XCEDE for a list of datasets
  4. Algorithm runs on each dataset in turn, adding results to XCEDE and creating a reference in a page of the graduate student's notebook

... time passes ...

  1. Graduate student changes the algorithm, creates a new page in the notebook, and re-runs the algorithm
  2. P.I. queries XCEDE for the page in the student's notebook, importing the algorithm results into R
  3. P.I., happy with the segmentation results, grants the degree.

The main idea behind notebooks is the same as lab notebooks: essentially pages and pages of (un)related results. Each page may contain an arbitrary number of results and may be exported into common formats suitable for analysis (CSV, Excel, R, SAS, etc.). This allows algorithms to be modified while maintaining history and provenance.
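The batch step of the workflow above (run the algorithm on each dataset, record one result per notebook page) can be sketched as a simple loop; the dataset names, the page filename, and the "segmented" result string are all placeholders, not a real XCEDE API:

```shell
# Hedged sketch of the notebook batch loop: one result line per dataset,
# collected into a page that could later be exported as CSV.
datasets='fbphIIS10056 fbphIIS10048'
page=notebook_page_v1.csv

: > "$page"                       # start a fresh page for this algorithm run
for d in $datasets; do
  result="$d,segmented"           # stand-in for the real algorithm's output
  echo "$result" >> "$page"       # a real system would POST this to XCEDE
done
cat "$page"
```

Keeping one page per algorithm version is what preserves the history when the student later changes the algorithm and re-runs it.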

Prototype XCAT Files

<XCEDE>
<catalog ID="ID0">

<catalogList>

<catalog ID="ID1">
<entryList>

<entry ID="ID2" name="anat2exf" description="matrix that registers anatomical information to functional datasets" format="FreeSurfer:matrix-1" content="anat2exf" cachePath="" modelID="ID2" uri="./anat2exf.register.dat"/>

<entry ID="ID3" name="lh.pial" description="pial surface of left hemisphere" format="FreeSurfer:surface-1" content="lh.pial" cachePath="" uri="./lh.pial"/>

<entry ID="ID4" name="rh.pial" description="pial surface of right hemisphere" format="FreeSurfer:surface-1" content="rh.pial" cachePath="" uri="./rh.pial"/>

<entry ID="ID5" name="brain" description="extracted brain MRI" format="FreeSurfer:mgz-1" content="brain" cachePath="" uri="./brain.mgz"/>

<entry ID="ID6" name="exf" description="example functional dataset" format="nifti:nii-1" content="example_func" cachePath="" uri="./example_func.nii"/>

<entry ID="ID7" name="zstat7" description="7th zstatistic contrast E1+E3+E5>Fix" format="nifti:nii-1" content="zstat7" cachePath="" uri="./zstat7.nii"/>

<entry ID="ID8" name="zstat14" description="14th zstatistic contrast P1+P3+P5>Fix" format="nifti:nii-1" content="zstat14" cachePath="" uri="./zstat14.nii"/>

<entry ID="ID9" name="zstat17" description="17th zstatistic contrast Learn>Fix" format="nifti:nii-1" content="zstat17" cachePath="" uri="./zstat17.nii"/>

<entry ID="ID10" name="aparc+aseg" description="parcellation and segmentation label map" labelmap="true" format="FreeSurfer:mgz-1" content="aparc+aseg" cachePath="" uri="./aparc+aseg.mgz"/>

<entry ID="ID11" name="lh.aparc.annot" description="annotations for surface of left hemisphere" format="FreeSurfer:overlay-1" content="lh.aparc.annot" cachePath="" uri="./lh.aparc.annot"/>

<entry ID="ID12" name="rh.aparc.annot" description="annotations for surface of right hemisphere" format="FreeSurfer:overlay-1" content="rh.aparc.annot" cachePath="" uri="./rh.aparc.annot"/>

</entryList>
</catalog>

</catalogList>

</catalog>
</XCEDE>

XCEDE REST Interface

A straw-man REST interface to XCEDE. The implementation uses Grails (Groovy on Rails), a rapid application development environment patterned after Rails (Ruby on Rails). Grails, however, is completely Java-based, utilizing industry-standard components (Spring, Hibernate, Groovy). In addition, Grails supports many of the common AJAX frameworks and can be compiled and deployed as a .war file.

REST Interaction

REST supports four basic operations akin to CRUD: Create, Read, Update, and Delete. These verbs operate on resources (nouns) exposed as URIs, with the CRUD operations mapping onto HTTP methods:

HTTP verb   CRUD operation(s)
POST        Create, Update, Delete
GET         Read
PUT         Create, Update
DELETE      Delete

A server has been set up on slicerl using code in the NAMICSandbox (see next section for details). Two interfaces are available, a human-friendly web site and a machine-friendly REST interface. The command line program curl can interact with the REST interface, and is useful for learning and debugging.

Retrieve a list of available Projects

7: curl -X GET http://slicerl.bwh.harvard.edu:8080/XCEDE/rest/v1/project/
<?xml version="1.0" encoding="UTF-8"?><list>
  <project id="1">
    <createdDateTime>2008-06-18 21:36:50.291</createdDateTime>
    <description>Neurological implications of working at BWH</description>
    <name>BWH-0001</name>
    <subjectGroups>
      <subjectGroup id="1"/>
    </subjectGroups>
  </project>
  <project id="2">
    <createdDateTime>2008-06-18 21:36:50.332</createdDateTime>
    <description>Bar</description>
    <name>Foo</name>
    <subjectGroups>
      <subjectGroup id="2"/>
    </subjectGroups>
  </project>
</list>

Update an existing Project (Note how Project 2 has been changed)

8: curl -X GET http://slicerl.bwh.harvard.edu:8080/XCEDE/rest/v1/project/2 > Project2.xml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   242    0   242    0     0    802      0 --:--:-- --:--:-- --:--:--     0

9: cat Project2.xml 
<?xml version="1.0" encoding="UTF-8"?><project id="2">
  <createdDateTime>2008-06-18 21:36:50.332</createdDateTime>
  <description>Bar</description>
  <name>Foo</name>
  <subjectGroups>
    <subjectGroup id="2"/>
  </subjectGroups>
</project>

11: perl -e 's/Bar/Garf/g' -pi Project2.xml

12: curl -X PUT -H 'Content-Type: application/xml' -d @Project2.xml http://slicerl.bwh.harvard.edu:8080/XCEDE/rest/v1/project/2
<?xml version="1.0" encoding="UTF-8"?><project id="2">
  <createdDateTime>2008-06-18 21:36:50.332</createdDateTime>
  <description>Garf</description>
  <name>Foo</name>
  <subjectGroups>
    <null/>
  </subjectGroups>
</project>

Create a new project (Note: POST creates a new project, while PUT updates an existing one; this command created a project with id=3)

15:  curl -X POST -H 'Content-Type: application/xml' -d @Project2.xml http://slicerl.bwh.harvard.edu:8080/XCEDE/rest/v1/project/
<?xml version="1.0" encoding="UTF-8"?><project id="3">
  <createdDateTime>2008-06-18 21:43:04.790 EDT</createdDateTime>
  <description>Garf</description>
  <name>Foo</name>
  <subjectGroups>
    <null/>
  </subjectGroups>
</project>

Delete a project

17: curl -X DELETE http://slicerl.bwh.harvard.edu:8080/XCEDE/rest/v1/project/3

Grails/XCEDE development

So, you're ready to kick the tires? Great! The first step is to check out the code from Subversion (make sure you do the second step!).

svn co http://www.na-mic.org/svn/NAMICSandBox/trunk/XCEDE

Resources that come with the Grails binary installation are omitted from Subversion and must be generated.

cd XCEDE
grails upgrade

and answer "yes" to both questions.

The main code is in XCEDE/grails-app/controllers/. Grails uses an MVC model and "coding by convention". Thus, if code is put in the correct place, it works without configuration. This is handy if you know what you are doing, frustrating if not. I would suggest going through "Getting Started with Grails" as a first tutorial.