<includeonly>----
Go to <big>[[Main_Page/SlicerCommunity/2023|2023]] :: [[Main_Page/SlicerCommunity/2022|2022]] :: [[Main_Page/SlicerCommunity/2021|2021]] :: [[Main_Page/SlicerCommunity/2020|2020]] :: [[Main_Page/SlicerCommunity/2019|2019]] :: [[Main_Page/SlicerCommunity/2018|2018]] :: [[Main_Page/SlicerCommunity/2017|2017]] :: [[Main_Page/SlicerCommunity/2016|2016]] :: [[Main_Page/SlicerCommunity/2015|2015]] :: [[Main_Page/SlicerCommunity/2011-2014|2011-2014]] :: [[Main_Page/SlicerCommunity/2005-2010|2000-2010]]</big>
----</includeonly>
<noinclude>
 
=3D Slicer Enabled Research=
 
[[Documentation/{{documentation/currentversion}}/Slicer|3D Slicer]] is a free open source software package distributed under a BSD style [[License|license]] for analysis, integration, and visualization of medical images. 3D Slicer allows even those with limited image processing experience to effectively explore and quantify their imaging data for hypothesis-driven research. 
The majority of funding for the development of 3D Slicer comes from a number of grants and contracts from the National Institutes of Health. See [http://www.slicer.org/wiki/Documentation/4.x/Acknowledgments Slicer Acknowledgments] for more information.

</noinclude>

The community that relies on 3D Slicer is large and active (numbers below updated on December 1<sup>st</sup>, 2023):

*[https://download.slicer.org/download-stats/ 1,467,466+ downloads] in the last 11 years (269,677 in 2023, 206,541 in 2022)
*[https://scholar.google.com/scholar?hl=en&as_sdt=1%2C22&as_vis=1&q=%28%223D+Slicer%22+OR+%22slicer+software%22+OR+%22slicer+org%22+OR+Slicer3D%29+-Slic3r+&btnG= 17,900+ literature search results on Google Scholar]
**[https://scholar.google.com/scholar?hl=en&as_sdt=1%2C22&as_vis=1&q=%28cancer+OR+tumor+OR+radiation%29+AND+%28%223D+Slicer%22+OR+%22slicer+org%22+OR+Slicer3D%29+-Slic3r+&btnG= 13,400+ '''cancer''']
**[https://scholar.google.com/scholar?hl=en&as_sdt=1%2C22&as_vis=1&q=%28brain%29+AND+%28cancer+OR+tumor+OR+radiation%29+AND+%28%223D+Slicer%22+OR+%22slicer+org%22+OR+Slicer3D%29+-Slic3r+&btnG= 7,290+ '''brain''']
**[https://scholar.google.com/scholar?hl=en&as_sdt=1%2C22&as_vis=1&q=%28lung%29+AND+%28cancer+OR+tumor+OR+radiation%29+AND+%28%223D+Slicer%22+OR+%22slicer+org%22+OR+Slicer3D%29+-Slic3r+&btnG= 6,380+ '''lung''']
**[https://scholar.google.com/scholar?hl=en&as_sdt=1%2C22&as_vis=1&q=%28breast%29+AND+%28cancer+OR+tumor+OR+radiation%29+AND+%28%223D+Slicer%22+OR+%22slicer+org%22+OR+Slicer3D%29+-Slic3r+&btnG= 3,980+ '''breast''']
**[https://scholar.google.com/scholar?hl=en&as_sdt=1%2C22&as_vis=1&q=%28prostate%29+AND+%28cancer+OR+tumor+OR+radiation%29+AND+%28%223D+Slicer%22+OR+%22slicer+org%22+OR+Slicer3D%29+-Slic3r+&btnG= 3,080+ '''prostate''']
*[https://pubmed.ncbi.nlm.nih.gov/?sort=pubdate&size=200&linkname=pubmed_pubmed_citedin&from_uid=22770690 2,147+ papers on PubMed citing the Slicer platform paper]
**Fedorov A., Beichel R., Kalpathy-Cramer J., Finet J., Fillion-Robin J-C., Pujol S., Bauer C., Jennings D., Fennessy F.M., Sonka M., Buatti J., Aylward S.R., Miller J.V., Pieper S., Kikinis R. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network. Magnetic Resonance Imaging. 2012 Nov;30(9):1323-41. PMID: 22770690. PMCID: PMC3466397.
*[https://na-mic.github.io/ProjectWeek/ 39 events in the open source hackathon series] running continuously since 2005, with 3,260 total participants
*[https://discourse.slicer.org/ Slicer Forum] with 8,138+ subscribers and approximately 275 posts every week

<!--
The research of the Slicer community is represented in the [http://www.slicer.org/publications/pages/display/?collection=11 publication database].
-->

The following is a sample of the research performed using 3D Slicer outside of the group that develops it.<includeonly> in {{#titleparts: {{PAGENAME}} | 2 | 3 }}</includeonly><noinclude>

[[Main_Page/SlicerCommunity/2023|2023]] :: [[Main_Page/SlicerCommunity/2022|2022]] :: [[Main_Page/SlicerCommunity/2021|2021]] :: [[Main_Page/SlicerCommunity/2020|2020]] :: [[Main_Page/SlicerCommunity/2019|2019]] :: [[Main_Page/SlicerCommunity/2018|2018]] ::
[[Main_Page/SlicerCommunity/2017|2017]] :: [[Main_Page/SlicerCommunity/2016|2016]] :: [[Main_Page/SlicerCommunity/2015|2015]] ::
[[Main_Page/SlicerCommunity/2011-2014|2011-2014]] :: [[Main_Page/SlicerCommunity/2005-2010|2000-2010]]

We invite you to provide information using our [https://discourse.slicer.org/ discussion forum] on how you are using 3D Slicer to produce peer-reviewed research. Information about the scientific impact of this tool is helpful in raising funding for its continued support.
</noinclude>

We monitor PubMed and related databases to update these lists, but if you know of other research related to the Slicer community that should be included here, please email: marianna (at) bwh.harvard.edu.

=2017=

==Anser EMT: The First Open-Source Electromagnetic Tracking Platform for Image-Guided Interventions==

{|width="100%"
|
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28357627 Int J Comput Assist Radiol Surg. 2017 Mar 29. PMID: 28357627]

'''Authors:''' Jaeger HA, Franz AM, O'Donoghue K, Seitel A, Trauzettel F, Maier-Hein L, Cantillon-Murphy P.

'''Institution:''' IHU Strasbourg, Strasbourg, France.

'''Background/Purpose:''' PURPOSE:

Electromagnetic tracking is the gold standard for instrument tracking and navigation in the clinical setting without line of sight. Whilst clinical platforms exist for interventional bronchoscopy and neurosurgical navigation, the limited flexibility and high costs of electromagnetic tracking (EMT) systems for research investigations mitigate against a better understanding of the technology's characterisation and limitations. The Anser project provides an open-source implementation for EMT with particular application to image-guided interventions.
 
METHODS:
 
This work provides implementation schematics for our previously reported EMT system which relies on low-cost acquisition and demodulation techniques using both National Instruments and Arduino hardware alongside MATLAB support code. The system performance is objectively compared to other commercial tracking platforms using the Hummel assessment protocol.
 
RESULTS:
 
Positional accuracy of 1.14 mm and angular rotation accuracy of [Formula: see text] are reported. Like other EMT platforms, Anser is susceptible to tracking errors due to eddy current and ferromagnetic distortion. The system is compatible with commercially available EMT sensors as well as the Open Network Interface for image-guided therapy (OpenIGTLink) for easy communication with visualisation and medical imaging toolkits such as MITK and [http://slicer.org '''3D Slicer'''].
 
CONCLUSIONS:
 
By providing an open-source platform for research investigations, we believe that novel and collaborative approaches can overcome the limitations of current EMT technology.
 
|}
 
  
==SLIDE: Automatic Spine Level Identification System using a Deep Convolutional Neural Network==

{|width="100%"
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28361323 Int J Comput Assist Radiol Surg. 2017 Mar 30. PMID: 28361323] 
 
 
 
'''Authors:''' Hetherington J, Lessoway V, Gunka V, Abolmaesumi P, Rohling R.
 
 
 
'''Institution:'''
 
Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, Canada.
 
 
 
'''Background/Purpose:''' PURPOSE:
 
Percutaneous spinal needle insertion procedures often require proper identification of the vertebral level to effectively and safely deliver analgesic agents. The current clinical method involves "blind" identification of the vertebral level through manual palpation of the spine, which has only 30% reported accuracy. Therefore, there is a need for better anatomical identification prior to needle insertion.
 
METHODS:
 
A real-time system was developed to identify the vertebral level from a sequence of ultrasound images, following a clinical imaging protocol. The system uses a deep convolutional neural network (CNN) to classify transverse images of the lower spine. Several existing CNN architectures were implemented, utilizing transfer learning, and compared for adequacy in a real-time system. In the system, the CNN output is processed, using a novel state machine, to automatically identify vertebral levels as the transducer moves up the spine. Additionally, a graphical display was developed and integrated within [http://slicer.org '''3D Slicer''']. Finally, an augmented reality display, projecting the level onto the patient's back, was also designed. A small feasibility study [Formula: see text] evaluated performance.
 
RESULTS:
 
The proposed CNN successfully discriminates ultrasound images of the sacrum, intervertebral gaps, and vertebral bones, achieving 88% 20-fold cross-validation accuracy. Seventeen of 20 test ultrasound scans had successful identification of all vertebral levels, processed at real-time speed (40 frames/s).
 
CONCLUSION:
 
A machine learning system is presented that successfully identifies lumbar vertebral levels. The small study on human subjects demonstrated real-time performance. A projection-based augmented reality display was used to show the vertebral level directly on the subject adjacent to the puncture site.
 
|}
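
A minimal sketch of the kind of level-counting state machine described above (not the authors' implementation): it assumes the CNN emits one of three frame labels (sacrum, intervertebral gap, vertebral bone) during a caudal-to-cranial sweep, and counts a new level at each gap-to-bone transition once the sacrum has been seen.

<syntaxhighlight lang="python">
# Hypothetical re-implementation sketch of a vertebral-level counting state
# machine driven by per-frame CNN labels; not the authors' code.

from typing import Iterable

SACRUM, GAP, BONE = "sacrum", "gap", "bone"

def count_levels(frame_labels: Iterable[str]) -> int:
    """Count lumbar levels from a caudal-to-cranial sweep of frame labels."""
    state = "searching_sacrum"
    levels = 0
    for label in frame_labels:
        if state == "searching_sacrum":
            if label == SACRUM:
                state = "in_gap"          # sacrum found; start counting upward
        elif state == "in_gap":
            if label == BONE:
                levels += 1               # entering a vertebra marks a new level (L5, L4, ...)
                state = "in_bone"
        elif state == "in_bone":
            if label == GAP:
                state = "in_gap"          # passed the vertebra, now in the next gap
    return levels

if __name__ == "__main__":
    sweep = [GAP, SACRUM, SACRUM, GAP, BONE, BONE, GAP, GAP, BONE, GAP, BONE]
    print(count_levels(sweep))  # -> 3 vertebral levels identified
</syntaxhighlight>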
 
 
 
==Revealing Cancer Subtypes with Higher-Order Correlations Applied to Imaging and Omics Data==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28359308 BMC Med Genomics. 2017 Mar 31;10(1):20.  PMID: 28359308] | [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5374737/pdf/12920_2017_Article_256.pdf  PDF]
 
 
 
'''Authors:''' Graim K, Liu TT, Achrol AS, Paull EO, Newton Y, Chang SD, Harsh GR, Cordero SP, Rubin DL, Stuart JM.
 
 
 
'''Institution:''' Biomedical Engineering, University of California, Santa Cruz, CA, USA.
 
 
 
'''Background/Purpose:''' Patient stratification to identify subtypes with different disease manifestations, severity, and expected survival time is a critical task in cancer diagnosis and treatment. While stratification approaches using various biomarkers (including high-throughput gene expression measurements) for patient-to-patient comparisons have been successful in elucidating previously unseen subtypes, there remains an untapped potential of incorporating various genotypic and phenotypic data to discover novel or improved groupings.
 
METHODS:
 
Here, we present HOCUS, a unified analytical framework for patient stratification that uses a community detection technique to extract subtypes out of sparse patient measurements. HOCUS constructs a patient-to-patient network from similarities in the data and iteratively groups and reconstructs the network into higher order clusters. We investigate the merits of using higher-order correlations to cluster samples of cancer patients in terms of their associations with survival outcomes.
 
RESULTS:
 
In an initial test of the method, the approach identifies cancer subtypes in mutation data of glioblastoma, ovarian, breast, prostate, and bladder cancers. In several cases, HOCUS provides an improvement over using the molecular features directly to compare samples. Application of HOCUS to glioblastoma images reveals a size and location classification of tumors that improves over human expert-based stratification.
 
CONCLUSIONS:
 
Subtypes based on higher order features can reveal comparable or distinct groupings. The distinct solutions can provide biologically- and treatment-relevant solutions that are just as significant as solutions based on the original data.
 
 
 
|align="right"|[[image:Graim-BMCMedGenomics2017-fig5.jpg|thumb|300px| HOCUS of GBM MR Images. a. P-values of survival separation (log-rank test) for each of the orders of clustering across a range of k clusters. b. Kaplan-Meier plot of the third-order HOCUS clusters. c. Images of tumors within each cluster projected onto the MNI brain atlas. Showing sagittal, coronal, axial views. Brightness of color indicates the number of patients with tumor at a given location. Generated using [http://slicer.org '''3D Slicer''']. d. Violin plot showing tumor volumes within each third-order cluster. e. Molecular (gene expression based) subtypes within the clusters.]]
 
|}
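
A minimal sketch of the general idea above (not the HOCUS implementation): build a sparse patient-to-patient similarity network from a feature matrix and extract communities as candidate subtypes, here with networkx's greedy modularity algorithm standing in for HOCUS's higher-order clustering.

<syntaxhighlight lang="python">
# Illustrative sketch only: cosine-similarity patient network + community detection.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50))          # 30 patients x 50 (mutation/imaging) features

# Cosine similarity between patients
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = Xn @ Xn.T

# Keep each patient's k strongest links to make the network sparse
k = 5
G = nx.Graph()
G.add_nodes_from(range(len(S)))
for i, row in enumerate(S):
    for j in np.argsort(row)[::-1][1:k + 1]:   # skip self (the strongest match)
        G.add_edge(i, int(j), weight=float(row[j]))

subtypes = community.greedy_modularity_communities(G, weight="weight")
for c, members in enumerate(subtypes):
    print(f"subtype {c}: {sorted(members)}")
</syntaxhighlight>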
 
 
 
==Three-Dimensional Volume Rendering of Pelvic Models and Paraurethral Masses Based on MRI Cross-Sectional Images==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28352953 Int Urogynecol J. 2017 Mar 28. PMID: 28352953] 
 
 
 
'''Authors:''' Doumouchtsis SK, Nazarian DA, Gauthaman N, Durnea CM, Munneke G.
 
 
 
'''Institution:'''
 
Department of Obstetrics & Gynaecology, Epsom and St. Helier University Hospital NHS Trust, Epsom, UK.
 
 
 
'''Background/Purpose:''' AIMS:
 
Our aim was to assess the feasibility of rendering 3D pelvic models using magnetic resonance imaging (MRI) scans of patients with vaginal, urethral and paraurethral lesions and obtain additional information previously unavailable through 2D imaging modalities.
 
METHODS:
 
A purposive sample of five female patients 26-40 years old undergoing investigations for vaginal or paraurethral mass was obtained in a tertiary teaching hospital. 3D volume renderings of the bladder, urethra and paraurethral masses were constructed using [http://slicer.org '''3D Slicer'''] v.3.4.0. Spatial dimensions were determined and compared with findings from clinical, MRI, surgical and histopathological reports. The quality of information regarding size and location of paraurethral masses obtained from 3D models was compared with information from cross-sectional MRI and review of clinical, surgical and histopathological findings.
 
RESULTS:
 
The analysis of rendered 3D models yielded detailed anatomical dimensions and provided information that was in agreement and in higher detail than information based on clinical examination, cross-sectional 2D MRI analysis and histopathological reports. High-quality pelvic 3D models were rendered with the characteristics and resolution to allow identification and detailed viewing of the spatial relationship between anatomical structures.
 
CONCLUSIONS:
 
To our knowledge, this is the first preliminary study to evaluate the role of MRI-based 3D pelvic models for investigating paraurethral masses. This is a feasible technique and may prove a useful addition to conventional 2D MRI. Further prospective studies are required to evaluate this modality for investigating such lesions and planning appropriate management.
 
|}
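
For readers who want to script this kind of 3D visualization, a minimal sketch for a recent 3D Slicer release is shown below (the study itself used Slicer v.3.4.0 and manual segmentation). The snippet assumes it is run in Slicer's Python console, and the file path is a placeholder.

<syntaxhighlight lang="python">
# Minimal sketch, to be run in 3D Slicer's Python console: load an MRI series
# and enable volume rendering with the default preset. The path is a placeholder;
# segmentation of the bladder, urethra and masses would still be done with the
# Segment Editor, as in the study above.

import slicer

volumeNode = slicer.util.loadVolume("/path/to/pelvic_mri.nrrd")  # placeholder path

vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
displayNode.SetVisibility(True)
</syntaxhighlight>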
 
 
 
==A Study of Volumetric Variations of Basal Nuclei in the Normal Human Brain by Magnetic Resonance Imaging==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28078760 Clin Anat. 2017 Mar;30(2):175-82. PMID: 28078760] 
 
 
 
'''Authors:''' Elkattan A, Mahdy A, Eltomey M, Ismail R.
 
 
 
'''Institution:''' Department of Anatomy, Tanta University of Medical Sciences, Tanta, Egypt.
 
 
 
'''Background/Purpose:''' Knowledge of the effects of healthy aging on brain structures is necessary to identify abnormal changes due to diseases. Many studies have demonstrated age-related volume changes in the brain using MRI. Sixty healthy individuals with normal MRI, aged 20 to 80 years, were examined and classified into three groups: Group I: 21 persons (nine males and 12 females) aged 20-39 years. Group II: 22 persons (11 males and 11 females) aged 40-59 years. Group III: 17 persons (eight males and nine females) aged 60-80 years. Volumetric analysis was done to evaluate the effect of age, gender and hemispheric difference in the caudate and putamen with the [http://slicer.org '''3D Slicer'''] 4.3.3.1 software using 3D T1-weighted images. Data were analyzed by Student's unpaired t-test, ANOVA and regression analysis. The volumes of the measured and corrected caudate nuclei and putamen significantly decreased with aging in males. There was a statistically insignificant relation between age and the volume of the measured caudate nuclei and putamen in females, but a statistically significant relation between age and the corrected caudate nuclei and putamen. There was no significant difference in caudate and putamen volumes between males and females. There was no significant difference between the right and left caudate nuclei volumes. There was a leftward asymmetry in the putamen volumes. The results can be considered as a base to track individual changes with time (aging and CNS diseases).
 
|}
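
A minimal sketch (with synthetic numbers, not the study's data) of the statistical comparisons named above, using scipy: an unpaired t-test between sexes, a one-way ANOVA across the three age groups, and a linear regression of volume on age.

<syntaxhighlight lang="python">
# Illustrative sketch only; the volumes below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age = rng.uniform(20, 80, size=60)
volume = 4000 - 15 * age + rng.normal(0, 200, size=60)   # synthetic caudate volumes (mm^3)
is_male = rng.integers(0, 2, size=60).astype(bool)

# Unpaired (Student's) t-test: male vs. female volumes
t, p_t = stats.ttest_ind(volume[is_male], volume[~is_male])

# One-way ANOVA across the three age groups used in the study
g1, g2, g3 = volume[age < 40], volume[(age >= 40) & (age < 60)], volume[age >= 60]
f, p_anova = stats.f_oneway(g1, g2, g3)

# Linear regression of volume on age
res = stats.linregress(age, volume)

print(f"t-test p={p_t:.3f}, ANOVA p={p_anova:.3g}, slope={res.slope:.1f} mm^3/year (p={res.pvalue:.2g})")
</syntaxhighlight>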
 
 
 
==MITK-OpenIGTLink for Combining Open-Source Toolkits in Real-Time Computer-Assisted Interventions==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/27687984 Int J Comput Assist Radiol Surg. 2017 Mar;12(3):351-61. PMID: 27687984] 
 
 
 
'''Authors:''' Klemm M, Kirchner T, Gröhl J, Cheray D, Nolden M, Seitel A, Hoppe H, Maier-Hein L, Franz AM.
 
 
 
'''Institution:''' Laboratory for Computer-Assisted Medicine, Department of Electrical Engineering and Information Technology, Offenburg University, Offenburg, Germany.
 
 
 
'''Background/Purpose:''' PURPOSE:
 
Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and [http://slicer.org '''3D Slicer''']. An independent inter-toolkit communication such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow.
 
<br>METHODS:
 
MITK-OpenIGTLink is presented as a network interface within MITK that allows easy to use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow from data acquisition over processing to visualization.
 
<br>RESULTS:
 
We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of [Formula: see text] and [Formula: see text] is possible at up to 512 and 128 Hz, respectively.
 
<br>CONCLUSION:
 
With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or [http://slicer.org '''3D Slicer'''] and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style license (http://mitk.org).
 
|}
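
OpenIGTLink messaging of the kind benchmarked above can also be exercised from Python, for example with the pyigtl package. The sketch below is only an illustration of the protocol (it is not part of MITK), and the device name and sender are assumptions.

<syntaxhighlight lang="python">
# Minimal sketch of receiving OpenIGTLink tracking data in Python with pyigtl.
# Port 18944 is OpenIGTLink's conventional default, and "StylusToReference" is a
# hypothetical device name that the sender (e.g. PLUS, MITK, or 3D Slicer's
# OpenIGTLinkIF module) would have to provide.

import pyigtl

client = pyigtl.OpenIGTLinkClient(host="127.0.0.1", port=18944)

# Block until one TRANSFORM message arrives (or give up after 5 s)
message = client.wait_for_message("StylusToReference", timeout=5)
if message is None:
    print("no tracking data received")
else:
    print(message.matrix)   # 4x4 homogeneous tracker-to-reference transform
</syntaxhighlight>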
 
 
 
==Increased Cerebellar Gray Matter Volume in Head Chefs==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28182712 PLoS One. 2017 Feb 9;12(2):e0171457.  PMID: 28182712] | [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5300254/pdf/pone.0171457.pdf  PDF]
 
 
 
'''Authors:''' Cerasa A, Sarica A, Martino I, Fabbricatore C, Tomaiuolo F, Rocca F, Caracciolo M, Quattrone A.
 
 
 
'''Institution:''' Istituto di Bioimmagini e Fisiologia Molecolare, Consiglio Nazionale delle Ricerche, Catanzaro, Italy.
 
 
 
'''Background/Purpose:''' OBJECTIVE:

Chefs exert expert motor and cognitive performances on a daily basis. Neuroimaging has clearly shown that long-term skill learning (i.e., in athletes, musicians, chess players or sommeliers) induces plastic changes in the brain, thus enabling tasks to be performed faster and more accurately. How a chef's expertise is embodied in a specific neural network has never been investigated.
 
<br>METHODS:
 
Eleven Italian head chefs with long-term brigade management expertise and 11 demographically-/ psychologically- matched non-experts underwent morphological evaluations.
 
<br>RESULTS:
 
Voxel-based analysis performed with SUIT, as well as automated volumetric measurement assessed with FreeSurfer, revealed increased gray matter volume in the cerebellum in chefs compared to non-experts. The most significant changes were detected in the anterior vermis and the posterior cerebellar lobule. The size of the brigade staff and performance on the Tower of London test correlated with these specific gray matter increases, respectively.
 
<br>CONCLUSIONS:
 
We found that chefs are characterized by an anatomical variability involving the cerebellum. This confirms the role of this region in the development of similar expert brains characterized by learning dexterous skills, such as pianists, rock climbers and basketball players. However, the nature of the cellular events underlying the detected morphological differences remains an open question.
 
 
 
 
 
|align="right"|[[image:Carasa-PlosOne2017-fig2.jpg|thumb|300px| Sample color-coded automated brain segmentation results.
 
A 3D surface image (created with  [http://slicer.org '''3D Slicer''']  v 4.6, www.slicer.org) showing typical automated subcortical segmentation of the cerebellum performed by FreeSurfer (v 5.3). Scatter plot of the mean normalized volumes of the left and right cerebellar cortex for each single subject has been plotted. Advanced neuroimaging analysis reveals bilateral cerebellar volumetric increase in the chef group with respect to non-expert individuals.]]
 
|}
 
 
 
==Three-dimensional Printing of X-ray Computed Tomography Datasets with Multiple Materials using Open-source Data Processing==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28231405 Anat Sci Educ. 2017 Feb 23. PMID: 28231405] 
 
 
 
'''Authors:''' Sander IM, McGoldrick MT, Helms MN, Betts A, van Avermaete A, Owers E, Doney E, Liepert T, Niebur G, Liepert D, Leevy WM.
 
 
 
'''Institution:''' Department of Biological Sciences, College of Science, University of Notre Dame, Notre Dame, Indiana., USA.
 
 
 
'''Background/Purpose:''' Advances in three-dimensional (3D) printing allow for digital files to be turned into a "printed" physical product. For example, complex anatomical models derived from clinical or pre-clinical X-ray computed tomography (CT) data of patients or research specimens can be constructed using various printable materials. Although 3D printing has the potential to advance learning, many academic programs have been slow to adopt its use in the classroom despite increased availability of the equipment and digital databases already established for educational use. Herein, a protocol is reported for the production of an enlarged bone core and an accurate representation of human sinus passages in a 3D printed format using entirely consumer-grade printers and a combination of free-software platforms. The comparative resolutions of three surface rendering programs were also determined using sinus, human body, and human wrist data files to compare the abilities of different software available for surface map generation of biomedical data. Data show that [http://slicer.org '''3D Slicer'''] provided the highest compatibility and surface resolution for anatomical 3D printing. Generated surface maps were then 3D printed via fused deposition modeling (FDM printing). In conclusion, a methodological approach that explains the production of anatomical models using entirely consumer-grade fused deposition modeling machines and a combination of free software platforms is presented in this report. The methods outlined will facilitate the incorporation of 3D printed anatomical models in the classroom.
 
|}
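
A minimal sketch of the surface-map-to-STL step discussed above, under the assumption that a binary segmentation volume is already available (a synthetic sphere stands in for CT data here); scikit-image's marching cubes extracts the surface and trimesh writes the STL for FDM printing.

<syntaxhighlight lang="python">
# Illustrative sketch only, not the paper's exact pipeline.
import numpy as np
from skimage import measure
import trimesh

# Synthetic binary "segmentation": a sphere of radius 20 voxels
z, y, x = np.mgrid[:64, :64, :64]
mask = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2

# Extract the isosurface; spacing would come from the CT voxel size (mm)
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.uint8), level=0.5, spacing=(1.0, 1.0, 1.0))

mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("segmentation_model.stl")
print(f"{len(verts)} vertices, {len(faces)} faces written to STL")
</syntaxhighlight>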
 
 
 
==SEEG Assistant: A 3D Slicer Extension to Support Epilepsy Surgery==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [https://www.ncbi.nlm.nih.gov/pubmed/28231759 BMC Bioinformatics. 2017 Feb 23;18(1):124.  PMID: 28231759] | [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5324222/pdf/12859_2017_Article_1545.pdf  PDF]
 
 
 
'''Authors:''' Narizzano M, Arnulfo G, Ricci S, Toselli B, Tisdall M, Canessa A, Fato MM, Cardinale F.
 
 
 
'''Institution:''' Department of Informatics, Bioengineering Robotics and System engineering (DIBRIS), University of Genoa, Genova, Italy.
 
 
 
'''Background/Purpose:''' In the evaluation of Stereo-Electroencephalography (SEEG) signals, the physicist's workflow involves several operations, including determining the position of individual electrode contacts in terms of both relationship to grey or white matter and location in specific brain regions. These operations are (i) generally carried out manually by experts with limited computer support, (ii) hugely time consuming, and (iii) often inaccurate, incomplete, and prone to errors.
 
<br>RESULTS:
 
In this paper we present SEEG Assistant, a set of tools integrated in a single [http://slicer.org '''3D Slicer''']  extension, which aims to assist neurosurgeons in the analysis of post-implant structural data and hence aid the neurophysiologist in the interpretation of SEEG data. SEEG Assistant consists of (i) a module to localize the electrode contact positions using imaging data from a thresholded post-implant CT, (ii) a module to determine the most probable cerebral location of the recorded activity, and (iii) a module to compute the Grey Matter Proximity Index, i.e. the distance of each contact from the cerebral cortex, in order to discriminate between white and grey matter location of contacts. Finally, exploiting [http://slicer.org '''3D Slicer''']  capabilities, SEEG Assistant offers a Graphical User Interface that simplifies the interaction between the user and the tools. SEEG Assistant has been tested on 40 patients segmenting 555 electrodes, and it has been used to identify the neuroanatomical loci and to compute the distance to the nearest cerebral cortex for 9626 contacts. We also performed manual segmentation and compared the results between the proposed tool and gold-standard clinical practice. As a result, the use of SEEG Assistant decreases the post implant processing time by more than 2 orders of magnitude, improves the quality of results and decreases, if not eliminates, errors in post implant processing.
 
<br>CONCLUSIONS:
 
The SEEG Assistant Framework for the first time supports physicists by providing a set of open-source tools for post-implant processing of SEEG data. Furthermore, SEEG Assistant has been integrated into [http://slicer.org '''3D Slicer'''] , a software platform for the analysis and visualization of medical images, overcoming limitations of command-line tools.
 
 
 
|align="right"|[[image:Narizzano-BMCBioinformatics2017-fig3.jpg|thumb|300px| CPE out performs manual segmentation in complex and critical cases. a As an example of SEEG complexity, we show MRI and thresholded post-implant CT scans for one subject from our cohort. Contacts are shown as groups of white voxels. This case illustrates the complexity of SEEG implants with electrode shafts following non-planar directions (e.g. X), shafts targeting almost the same geometrical point (e.g. R and R’). b CPE segments all contacts (green spheres) belonging to each electrode from post-implant CT scans, represented here as red 3D meshes obtained tessellating the thresholded data to ease visualization. c Show the right pial surface with 3D post-implant thresholded-CT meshes and the cut plane used in panel d where the example of X and X’ electrodes are shown. Those examples represent the case of non-planar insertion trajectories which yielded an artefactually fused electrode. CPE integrating the knowledge of the electrode model can segment the contact positions more accurately than visual inspection.]]
 
|}
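
A minimal sketch (not SEEG Assistant itself) of the contact-to-cortex distance computation underlying a grey-matter proximity measure: query a KD-tree built from cortical surface points with the segmented contact centroids. All coordinates below are synthetic.

<syntaxhighlight lang="python">
# Illustrative sketch only: distance of each contact to the nearest cortical point.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
cortex_points = rng.normal(scale=40.0, size=(5000, 3))   # stand-in for surface vertices (mm, RAS)
contacts = rng.normal(scale=30.0, size=(12, 3))          # stand-in for segmented contact centroids

tree = cKDTree(cortex_points)
distance_mm, nearest_idx = tree.query(contacts)           # nearest cortical vertex per contact

for i, d in enumerate(distance_mm):
    tissue = "grey matter (near cortex)" if d < 2.0 else "white matter / deep"
    print(f"contact {i}: {d:5.2f} mm -> {tissue}")
</syntaxhighlight>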
 
 
 
==Associations of Radiomic Data Extracted from Static and Respiratory-Gated CT Scans with Disease Recurrence in Lung Cancer Patients Treated with SBRT==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28046060 PLoS One. 2017 Jan 3;12(1):e0169172.  PMID: 28046060]| [http://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0169172&type=printable  PDF]
 
 
 
'''Authors:''' Huynh E, Coroller TP, Narayan V, Agrawal V, Romano J, Franco I, Parmar C, Hou Y, Mak RH, Aerts HJ.
 
 
 
'''Institution:''' Department of Radiation Oncology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, USA.
 
 
 
'''Background/Purpose:''' Radiomics aims to quantitatively capture the complex tumor phenotype contained in medical images to associate them with clinical outcomes. This study investigates the impact of different types of computed tomography (CT) images on the prognostic performance of radiomic features for disease recurrence in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiation therapy (SBRT). 112 early stage NSCLC patients treated with SBRT that had static free breathing (FB) and average intensity projection (AIP) images were analyzed. Nineteen radiomic features were selected from each image type (FB or AIP) for analysis based on stability and variance. The selected FB and AIP radiomic feature sets had 6 common radiomic features between both image types and 13 unique features. The prognostic performances of the features for distant metastasis (DM) and locoregional recurrence (LRR) were evaluated using the concordance index (CI) and compared with two conventional features (tumor volume and maximum diameter). P-values were corrected for multiple testing using the false discovery rate procedure. None of the FB radiomic features were associated with DM, however, seven AIP radiomic features, that described tumor shape and heterogeneity, were (CI range: 0.638-0.676). Conventional features from FB images were not associated with DM, however, AIP conventional features were (CI range: 0.643-0.658). Radiomic and conventional multivariate models were compared between FB and AIP images using cross validation. The differences between the models were assessed using a permutation test. AIP radiomic multivariate models (median CI = 0.667) outperformed all other models (median CI range: 0.601-0.630) in predicting DM. None of the imaging features were prognostic of LRR. Therefore, image type impacts the performance of radiomic models in their association with disease recurrence. AIP images contained more information than FB images that were associated with disease recurrence in early stage NSCLC patients treated with SBRT, which suggests that AIP images may potentially be more optimal for the development of an imaging biomarker.
 
 
 
'''Funding:'''
 
*U01 CA190234/CA/NCI NIH HHS/United States
 
*U24 CA194354/CA/NCI NIH HHS/United States
 
 
 
|align="right"|[[image:journal.pone.0169172.g001.PNG|thumb|300px| A) Examples of free breathing (FB) and average intensity projection (AIP) images, demonstrating the observable differences in tumor phenotype between each image type. AIP images were reconstructed from 4D computed tomography (CT) scans. B) Schematic representation of the radiomics workflow for FB and AIP images. I. CT images of the patient are acquired and the tumor is segmented. II. Imaging features (radiomic and conventional features) are extracted from the tumor volume. III. Radiomic features undergo a feature dimension reduction process to generate a low-dimensional feature set based on feature stability and variance. IV. Imaging features are then analyzed with clinical outcomes to evaluate their prognostic power. FB and AIP radiomics features are compared. A set of 644 radiomic features was extracted from tumor volumes isolated from FB or AIP images (Fig 1B) using an in-house Matlab 2013 toolbox and [http://slicer.org '''3D Slicer'''] 4.4.0 software]]
 
|}
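
A minimal sketch (with synthetic data) of the two statistical steps named above: Benjamini-Hochberg false-discovery-rate correction of per-feature p-values with statsmodels, and a concordance index relating one feature to time to distant metastasis with lifelines.

<syntaxhighlight lang="python">
# Illustrative sketch only; lifelines and statsmodels are assumed to be installed.
import numpy as np
from statsmodels.stats.multitest import multipletests
from lifelines.utils import concordance_index

rng = np.random.default_rng(3)

# FDR correction across 19 selected radiomic features
p_values = rng.uniform(0, 0.2, size=19)
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print("features surviving FDR:", int(reject.sum()))

# Concordance index of one feature against (synthetic) time-to-event data
time_to_event = rng.exponential(24, size=112)            # months
event_observed = rng.integers(0, 2, size=112)            # 1 = distant metastasis observed
feature = -0.05 * time_to_event + rng.normal(size=112)   # higher value ~ earlier event
ci = concordance_index(time_to_event, -feature, event_observed)
print(f"concordance index: {ci:.3f}")
</syntaxhighlight>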
 
 
 
==Early Experiences of Planning Stereotactic Radiosurgery using 3D Printed Models of Eyes with Uveal Melanomas==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [https://www.ncbi.nlm.nih.gov/pubmed/28203052 Clin Ophthalmol. 2017 Jan 31;11:267-71.  PMID: 28203052] | [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5298814/pdf/opth-11-267.pdf  PDF]
 
 
 
'''Authors:''' Furdová A, Sramka M, Thurzo A, Furdová A.
 
 
 
'''Institution:''' Department of Ophthalmology, Faculty of Medicine, Comenius University, Bratislava, Slovakia.
 
 
 
'''Background/Purpose:''' OBJECTIVE:
 
The objective of this study was to determine the use of 3D printed model of an eye with intraocular tumor for linear accelerator-based stereotactic radiosurgery.
 
<br>METHODS:
 
The software for segmentation ([http://slicer.org '''3D Slicer''']) created a virtual 3D model of the eye globe with the tumorous mass, based on tissue density from computed tomography and magnetic resonance imaging data. The virtual model was then processed in the slicing software (Simplify3D®) and printed on a 3D printer using fused deposition modeling technology. The material that was used for printing was polylactic acid.

<br>RESULTS:

In 2015, the stereotactic planning scheme was optimized with the help of a 3D printed model of the patient's eye with the intraocular tumor. In the period 2001-2015, a group of 150 patients with uveal melanoma (139 choroidal melanoma and 11 ciliary body melanoma) were treated. The median tumor volume was 0.5 cm<sup>3</sup> (0.2-1.6 cm<sup>3</sup>). The radiation dose was 35.0 Gy by 99% of the dose volume histogram.

<br>CONCLUSION:

The 3D printed model of the eye with the tumor was helpful in planning the process to achieve the optimal scheme for irradiation, which requires high accuracy in defining the targeted tumor mass and critical structures.
 
 
 
|align="right"|[[image:jFurdova-ClinOphtalmol2017-fig3.jpg|thumb|300px| A) Virtual model of the eye, outer view; arrow indicates optic nerve. A virtual 3D model of eye globe with tumor based on tissue density was created from CT and MRI data by using the [http://slicer.org '''3D Slicer'''] software for segmentation.]]
 
|}
 
 
 
==Intra-rater Variability in Low-grade Glioma Segmentation==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/27837437 J Neurooncol. 2017 Jan;131(2):393-402. PMID: 27837437] 
 
 
 
'''Authors:''' Bø HK, Solheim O, Jakola AS, Kvistad KA, Reinertsen I, Berntsen EM.
 
 
 
'''Institution:''' Department of Radiology and Nuclear Medicine, St. Olavs University Hospital, Trondheim, Norway.
 
 
 
'''Background/Purpose:''' Assessment of size and growth is a key radiological factor in low-grade gliomas (LGGs), both for prognostication and treatment evaluation, but the reliability of LGG-segmentation is scarcely studied. With a diffuse and invasive growth pattern, usually without contrast enhancement, these tumors can be difficult to delineate. The aim of this study was to investigate the intra-observer variability in LGG-segmentation for a radiologist without prior segmentation experience. Pre-operative 3D FLAIR images of 23 LGGs were segmented three times in the software [http://slicer.org '''3D Slicer''']. Tumor volumes were calculated, together with the absolute and relative difference between the segmentations. To quantify the intra-rater variability, we used the Jaccard coefficient comparing both two (J2) and three (J3) segmentations as well as the Hausdorff Distance (HD). The variability measured with J2 improved significantly between the last two segmentations compared to the first two, going from 0.87 to 0.90 (p = 0.04). Between the last two segmentations, larger tumors showed a tendency towards smaller relative volume difference (p = 0.07), while tumors with well-defined borders had significantly less variability measured with both J2 (p = 0.04) and HD (p < 0.01). We found no significant relationship between variability and histological sub-types or Apparent Diffusion Coefficients (ADC). We found that the intra-rater variability can be considerable in serial LGG-segmentation, but the variability seems to decrease with experience and a higher grade of border conspicuity. Our findings highlight that some criteria defining tumor borders and progression in 3D volumetric segmentation are needed, if moving from 2D to 3D assessment of size and growth of LGGs.
 
|}
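
A minimal sketch (not the study's code) of the two agreement measures used above, computed between two binary segmentation masks with numpy and scipy.

<syntaxhighlight lang="python">
# Illustrative sketch only: Jaccard coefficient and symmetric Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def hausdorff(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance (mm) between the voxel sets of two masks."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Two toy "segmentations" of the same lesion, shifted by one voxel
seg1 = np.zeros((40, 40, 40), dtype=bool)
seg1[10:20, 10:20, 10:20] = True
seg2 = np.roll(seg1, 1, axis=0)

print(f"Jaccard: {jaccard(seg1, seg2):.3f}, Hausdorff: {hausdorff(seg1, seg2):.1f} mm")
</syntaxhighlight>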
 
 
 
==Hybrid Positron Emission Tomography Segmentation of Heterogeneous Lung Tumors using 3D Slicer: Improved Growcut Algorithm with Threshold Initialization==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [https://www.ncbi.nlm.nih.gov/pubmed/28149920 J. Med. Imag. 2017 Jan-Mar;4(1), 011009.  PMID: 28149920] | [[media:Thomas-JMI2017.pdf | PDF]]
 
 
 
'''Authors:''' Thomas HM, Devakumar D, Sasidharan B, Bowen SR, Heck DK, Jebaseelan J, Samuel E.
 
 
 
'''Institution:''' VIT University, School of Advanced Sciences, Department of Physics, Vellore, Tamil Nadu 632004, India.
 
 
 
'''Background/Purpose:''' This paper presents an improved GrowCut (IGC), a positron emission tomography-based segmentation algorithm, and tests its clinical applicability. Contrary to the traditional method that requires the user to provide the initial seeds, the IGC algorithm starts with a threshold-based estimate of the tumor and a three-dimensional morphologically grown shell around the tumor as the foreground and background seeds, respectively. The repeatability of IGC from the same observer at multiple time points was compared with the traditional GrowCut algorithm. The algorithm was tested in 11 non-small cell lung cancer lesions and validated against the clinician-defined manual contour and compared against the clinically used 25% of the maximum standardized uptake value (SUV<sub>max</sub>), 40% SUV<sub>max</sub>, and adaptive threshold methods. The time to edit the IGC-defined functional volume to arrive at the gross tumor volume (GTV) was compared with that of manual contouring. The repeatability of the IGC algorithm was very high compared with the traditional GrowCut (p = 0.003) and demonstrated higher agreement with the manual contour compared with the threshold-based methods. Compared with manual contouring, editing the IGC achieved the GTV in significantly less time (p = 0.11). The IGC algorithm offers a highly repeatable functional volume and serves as an effective initial guess that can minimize the time spent on labor-intensive manual contouring.
 
 
 
|align="right"|[[image:jThomas-JMI2017-fig3.png|thumb|300px| A) A representative example of the uncertainty volume observed with the [http://slicer.org '''3D Slicer'''] GrowCutmethod. (a) The lesion was delineated in three separate runs. There was variability with each run and the composite error in the variability calculated as the uncertainty volume is highlighted in green in (b).]]
 
|}
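
A minimal sketch of the seed-initialization idea described above (not the published IGC code): a threshold-based tumor estimate becomes the foreground seed and a morphologically grown shell around it becomes the background seed, which would then be handed to a GrowCut-style segmenter such as 3D Slicer's GrowCut effect.

<syntaxhighlight lang="python">
# Illustrative sketch only: threshold-based foreground seed plus dilated background shell.
import numpy as np
from scipy import ndimage

def igc_seeds(pet_suv: np.ndarray, suv_threshold: float, shell_voxels: int = 5):
    """Return (foreground_seed, background_seed) boolean masks."""
    foreground = pet_suv >= suv_threshold                   # threshold-based tumor estimate
    grown = ndimage.binary_dilation(foreground, iterations=shell_voxels)
    background = grown & ~foreground                        # shell surrounding the tumor
    return foreground, background

# Toy PET volume: a hot sphere on a low-uptake background
z, y, x = np.mgrid[:48, :48, :48]
pet = 1.0 + 10.0 * (((x - 24) ** 2 + (y - 24) ** 2 + (z - 24) ** 2) < 8 ** 2)

fg, bg = igc_seeds(pet, suv_threshold=5.0, shell_voxels=4)
print("foreground voxels:", int(fg.sum()), "background shell voxels:", int(bg.sum()))
</syntaxhighlight>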
 
 
 
==Pre-clinical Validation of Virtual Bronchoscopy using 3D Slicer==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/27325238 Int J Comput Assist Radiol Surg. 2017 Jan;12(1):25-38. PMID: 27325238]
 
 
 
'''Authors:''' Nardelli P, Jaeger A, O'Shea C, Khan KA, Kennedy MP, Cantillon-Murphy P.
 
 
 
'''Institution:''' School of Engineering, University College Cork, College Road, Cork, Ireland.
 
 
 
'''Background/Purpose:''' Lung cancer still represents the leading cause of cancer-related death, and the long-term survival rate remains low. Computed tomography (CT) is currently the most common imaging modality for lung diseases recognition. The purpose of this work was to develop a simple and easily accessible virtual bronchoscopy system to be coupled with a customized electromagnetic (EM) tracking system for navigation in the lung and which requires as little user interaction as possible, while maintaining high usability.
 
Methods:
 
The proposed method has been implemented as an extension to the open-source platform, [http://slicer.org '''3D Slicer''']. It creates a virtual reconstruction of the airways starting from CT images for virtual navigation. It provides tools for pre-procedural planning and virtual navigation, and it has been optimized for use in combination with a [Formula: see text] of freedom EM tracking sensor. Performance of the algorithm has been evaluated in ex vivo and in vivo testing.
 
Results:
 
During ex vivo testing, nine volunteer physicians tested the implemented algorithm to navigate three separate targets placed inside a breathing pig lung model. In general, the system proved easy to use and accurate in replicating the clinical setting and seemed to help choose the correct path without any previous experience or image analysis. Two separate animal studies confirmed technical feasibility and usability of the system.
 
Conclusions:
 
This work describes an easily accessible virtual bronchoscopy system for navigation in the lung. The system provides the user with a complete set of tools that facilitate navigation towards user-selected regions of interest. Results from ex vivo and in vivo studies showed that the system opens the way for potential future work with virtual navigation for safe and reliable airway disease diagnosis.
 
|}
 
 
 
==Theoretical Observation on Diagnosis Maneuver for Benign Paroxysmal Positional Vertigo==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28084876 Acta Otolaryngol. 2017 Jan 13:1-8. PMID: 28084876] 
 
 
 
'''Authors:''' Yang XK, Zheng YY, Yang XG.
 
 
 
'''Institution:''' Neurology Department , Wenzhou People's Hospital , Wenzhou , Zhejiang , PR China.
 
 
 
'''Background/Purpose:''' A comprehensive analysis using a variety of diagnostic maneuvers is conducive to the correct diagnosis and classification of BPPV.

OBJECTIVE:

To use a standard spatial coordinate-based semicircular canal model for theoretical observation of diagnostic maneuvers for benign paroxysmal positional vertigo (BPPV), and to analyze the meaning and key points of each step of each maneuver.

MATERIALS AND METHODS:

This study started by building a standard model of the semicircular canals with spatial orientation, by segmentation of the inner ear done with the [http://slicer.org '''3D Slicer'''] software based on MRI scans, and then gave a demonstration and observation of the BPPV diagnostic maneuvers using the model.
 
RESULTS:
 
The supine roll maneuver is mainly for diagnosis of lateral semicircular canal BPPV. The Modified Dix-Hallpike maneuver is more specific for the diagnosis of posterior semicircular canal BPPV. The side-lying bow maneuver designed here is theoretically suitable for diagnosis of anterior semicircular canal BPPV.
 
|}
 
 
 
==Anatomical Study and Locating Nasolacrimal Duct on Computed Topographic Image==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/27977487 J Craniofac Surg. 2017 Jan;28(1):275-9. PMID: 27977487] 
 
 
 
'''Authors:''' Zhang S, Cheng Y, Xie J, Wang Z, Zhang F, Chen L, Feng Y, Wang G.
 
 
 
'''Institution:''' Department of Endocrinology, First Hospital of Jilin University, Changchun, China.
 
 
 
'''Background/Purpose:''' We performed a novel anatomical and radiological investigation to understand the structure of nasolacrimal duct (NLD) and to provide data to help surgeons locate the openings of NLD efficiently based on landmarks.
 
MATERIALS AND METHODS:
 
We examined the NLD region using computed tomography images of 133 individuals and 6 dry skull specimens. Multiplanar reconstruction of the computed tomography images was performed, and the anatomical features of the NLD were studied in the coronal, sagittal, and axial planes. The long and short diameters of NLD were measured along its cross-section. The position of NLD was localized using the nostril, concha nasalis media, and medial orbital corner as landmarks. The free and open source software, 3D Slicer, was used for the segmentation of the NLD and 3D visualization of the superior and inferior openings of the NLD.
 
RESULTS:
 
The length, angle, and diameter of NLD were significantly influenced by the age in females compared to those in males. The inferior opening of the NLD could be located efficiently using the nostril and the midsagittal line while the superior opening of NLD could be located using the medial orbital corner. Third, [http://slicer.org '''3D Slicer'''] enabled us to measure the distance between the skin and the bony structure in the image.
 
CONCLUSION:
 
Our study indicates that the sex and age of the patient should be considered while selecting the optimal NLD stent for a patient, and that the precise location of NLD in reference to landmarks can simplify the surgical difficulties and reduce the risk of injury during the transnasal operation.
 
|}
 
 
 
 
==Open Wedge High Tibial Osteotomy using Three-Dimensional Printed Models: Experimental Analysis using Porcine Bone==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/27876267 Knee. 2017 Jan;24(1):16-22. PMID: 27876267] 
 
 
 
'''Authors:''' Kwun JD, Kim HJ, Park J, Park IH, Kyung HS.
 
 
 
'''Institution:''' Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Daegu, Republic of Korea.
 
 
 
'''Background/Purpose:''' BACKGROUND:
 
The purpose of this study was to evaluate the usefulness of three-dimensional (3D) printed models for open wedge high tibial osteotomy (HTO) in porcine bone.
 
METHODS:
 
Computed tomography (CT) images were obtained from 10 porcine knees and 3D imaging was planned using the [http://slicer.org '''3D Slicer'''] program. The osteotomy line was drawn from three centimeters below the medial tibial plateau to the proximal end of the fibular head. Then the osteotomy gap was opened until the mechanical axis line was 62.5% from the medial border along the width of the tibial plateau, maintaining the posterior tibial slope angle. The wedge-shaped 3D-printed model was designed with the measured angle and osteotomy section and was produced by the 3D printer. The open wedge HTO surgery was reproduced in porcine bone using the 3D-printed model, and the osteotomy site was fixed with a plate. Accuracy of osteotomy and posterior tibial slope was evaluated after the osteotomy.
 
RESULTS:
 
The mean mechanical axis line on the tibial plateau was 61.8±1.5% from the medial tibia. There was no statistically significant difference (P=0.160). The planned and post-osteotomy correction wedge angles were 11.5±3.2° and 11.4±3.3°, and the posterior tibial slope angle was 11.2±2.2° pre-osteotomy and 11.4±2.5° post-osteotomy. There were no significant differences (P=0.854 and P=0.429, respectively).
 
CONCLUSION:
 
This study showed that good results could be obtained in high tibial osteotomy by using 3D printed models of porcine legs.
 
|}
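
A minimal 2D sketch of the correction-angle geometry behind such planning (illustrative only, not the authors' method): the wedge angle is taken at the lateral hinge between the line to the current ankle centre and the line to the target ankle position obtained by extending the desired weight-bearing line. All coordinates below are made up.

<syntaxhighlight lang="python">
# Illustrative sketch only: opening-wedge angle from made-up frontal-plane points (mm).
import numpy as np

def wedge_angle_deg(hinge, current_target, desired_target):
    """Angle (degrees) at the hinge between the current and desired axis directions."""
    u = np.asarray(current_target, float) - np.asarray(hinge, float)
    v = np.asarray(desired_target, float) - np.asarray(hinge, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

hinge = (60.0, 0.0)                 # lateral hinge at the level of the osteotomy
ankle_current = (10.0, -350.0)      # current ankle centre of the varus limb
ankle_target = (75.0, -350.0)       # where the corrected weight-bearing line should cross the ankle level
print(f"planned correction: {wedge_angle_deg(hinge, ankle_current, ankle_target):.1f} degrees")
</syntaxhighlight>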
 
 
 
==MRI Visible Fe<sub>3</sub>O<sub>4</sub> Polypropylene Mesh: 3D Reconstruction of Spatial Relation to Bony Pelvis and Neurovascular Structures==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28124074 Int Urogynecol J. 2017 Jan 25.  PMID: 28124074] 
 
 
 
'''Authors:''' Chen L, Lenz F, Alt CD, Sohn C, De Lancey JO, Brocker KA.
 
 
 
'''Institution:''' Pelvic Floor Research Group, Biomedical Engineering Department, University of Michigan, Ann Arbor, MI, USA.
 
 
 
'''Background/Purpose:'''
 
INTRODUCTION AND HYPOTHESIS:
 
To demonstrate mesh magnetic resonance imaging (MRI) visibility in living women, the feasibility of reconstructing the full mesh course in 3D, and to document its spatial relationship to pelvic anatomical structures.
 
<br>METHODS:
 
This is a proof of concept study of three patients from a prospective multi-center trial evaluating women with anterior vaginal mesh repair using a MRI-visible Fe<sub>3</sub>O<sub>4</sub> polypropylene implant for pelvic floor reconstruction. High-resolution sagittal T2-weighted (T2w) sequences, transverse T1-weighted (T1w) FLASH 2D, and transverse T1w FLASH 3D sequences were performed to evaluate Fe<sub>3</sub>O<sub>4</sub> polypropylene mesh MRI visibility and overall post-surgical pelvic anatomy 3 months after reconstructive surgery. Full mesh course in addition to important pelvic structures were reconstructed using the  [http://slicer.org '''3D Slicer''']® software program based on T1w and T2w MRI.
 
<br>RESULTS:
 
Three women with POP-Q grade III cystoceles were successfully treated with a partially absorbable MRI-visible anterior vaginal mesh with six fixation arms and showed no recurrent cystocele at the 3-month follow-up examination. The course of mesh in the pelvis was visible on MRI in all three women. The mesh body and arms could be reconstructed allowing visualization of the full course of the mesh in relationship to important pelvic structures such as the obturator or pudendal vessel nerve bundles in 3D.
 
<br>CONCLUSIONS:
 
The use of MRI-visible Fe<sub>3</sub>O<sub>4</sub> polypropylene meshes in combination with post-surgical 3D reconstruction of the mesh and adjacent structures is feasible, suggesting that it might be a useful tool for evaluating mesh complications more precisely and a valuable interactive feedback tool for surgeons and mesh design engineers.
 
|}
 
 
 
==Biomaterial Shell Bending with 3D-printed Templates in Vertical and Alveolar Ridge Augmentation: A Technical Note==
 
 
 
{|width="100%"
 
|
 
'''Publication:''' [http://www.ncbi.nlm.nih.gov/pubmed/28215503 Oral Surg Oral Med Oral Pathol Oral Radiol. 2017 Jan 4. PMID: 28215503] 
 
 
 
'''Authors:''' Draenert FG, Gebhart F, Mitov G, Neff A.
 
 
 
'''Institution:''' Oral & Maxillofacial Surgery, University of Marburg, Germany.
 
 
 
'''Background/Purpose:'''
 
Alveolar ridge and vertical augmentations are challenging procedures in dental implantology. Even material blocks with an interconnecting porous system are never completely resorbed. Shell techniques combined with autologous bone chips are therefore the gold standard. Using biopolymers for these techniques is well documented. We applied three-dimensional (3-D) techniques to create an individualized bending model for the adjustment of a plane biopolymer membrane made of polylactide.
 
<BR>STUDY DESIGN:
 
Two cases with a vertical alveolar ridge defect in the maxilla were chosen. The cone beam computed tomography data were processed with [http://slicer.org '''3D Slicer'''] and Autodesk Meshmixer to generate data about the desired augmentation result. STL data were used to print a bending model. A 0.2-mm poly-D,L-lactic acid membrane (KLS Martin, Tuttlingen, Germany) was bent accordingly and placed into the defect via a tunnel approach in both cases. A mesh graft of autologous bone chips and hydroxylapatite material was augmented beneath the shell, which was fixed with osteosynthesis screws.
 
<br>RESULTS:
 
The operative procedure was fast and without peri- or postoperative complications or complaints. The panoramic x-ray showed correct fitting of the material in the location. Bone quality at the time of implant placement was type II, resulting in good primary stability.
 
<br>CONCLUSIONS:
 
A custom-made 3-D model for bending confectioned biomaterial pieces is an appropriate method for individualized adjustment in shell techniques. The advantages over direct printing of the biomaterial shell and products on the market, such as the Xyoss shell (Reoss Inc., Germany), include cost-efficiency and avoidance of regulatory issues.
 
|}
 
