DICOM:Database

Local databases to organize DICOM header information are often used in medical image applications and workstations. This page is used to organize information and examples.

History

Bill started a thread on the slicer-devel mailing list about improving DICOM parsing performance. This message includes a sample SQL database schema as an attachment.

At the CTK meeting, Marco Nolden showed that a similar approach has been followed in the MITK project, using DCMTK to fill an SQLite database.

Example Data

Considerations

  • It would be ideal if the database schema were standardized so that it could be used with any DICOM toolkit (GDCM and/or DCMTK).
  • The MITK schema is nice because it uses the standard DICOM field names for the columns (for example, PatientsUID, ModalitiesInStudy); a minimal schema sketch along these lines follows this list.
  • Eventually we could create an ITK IO Factory plugin reader that can read a volume when given an SQLite filename and a query string that specifies the volume, with something like "/tmp/dicom.db:SeriesUID=1.2.3....". If the SQL database kept the width, height, and pixel data offset, then the files could be read quickly without re-parsing (see the loading sketch after this list).
  • Marco plans to contribute a cleaned up version of the MITK code to the CTK git repository.
  • Jim suggests that the image table hold information about the resolution, pixel size (or field of view), coordinate frame (ImagePositionPatient, ImageOrientationPatient), acquisition time, etc., as well as an offset into the file for the start of the pixel data. The goal should be that once the data has been entered into the database we never have to use a DICOM parser again for that subject, unless we are looking to pull out a very special tag. We should probably put some summary information up at the series level; since that is not always possible, the series table may need a column that indicates whether the whole series is homogeneous.
    • Steve agrees with Jim, but suggests that maybe we have multiple databases: a central database with minimal information to sort out patients/studies/series, and another database file per study that has the detailed information. This per-study database file could be used for fast loading without making the central database file grow too big (see the two-level sketch after this list).
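
As a concrete illustration of the schema ideas above, here is a minimal SQLite sketch with patient/study/series/image tables that use standard DICOM attribute names for the columns and carry the per-image geometry, acquisition time, and pixel data offset Jim describes, plus a homogeneity flag at the series level. The table and column names are illustrative, not the actual MITK or CTK schema.

  import sqlite3

  SCHEMA = """
  CREATE TABLE IF NOT EXISTS Patients (
      PatientsUID        TEXT PRIMARY KEY,
      PatientsName       TEXT
  );
  CREATE TABLE IF NOT EXISTS Studies (
      StudyInstanceUID   TEXT PRIMARY KEY,
      PatientsUID        TEXT REFERENCES Patients(PatientsUID),
      StudyDate          TEXT,
      ModalitiesInStudy  TEXT
  );
  CREATE TABLE IF NOT EXISTS Series (
      SeriesInstanceUID  TEXT PRIMARY KEY,
      StudyInstanceUID   TEXT REFERENCES Studies(StudyInstanceUID),
      Modality           TEXT,
      SeriesDescription  TEXT,
      Homogeneous        INTEGER             -- 1 if all images share geometry
  );
  CREATE TABLE IF NOT EXISTS Images (
      SOPInstanceUID          TEXT PRIMARY KEY,
      SeriesInstanceUID       TEXT REFERENCES Series(SeriesInstanceUID),
      Filename                TEXT,
      Rows                    INTEGER,
      Columns                 INTEGER,
      PixelSpacing            TEXT,
      ImagePositionPatient    TEXT,
      ImageOrientationPatient TEXT,
      AcquisitionTime         TEXT,
      PixelDataOffset         INTEGER         -- byte offset of pixel data in the file
  );
  """

  conn = sqlite3.connect("/tmp/dicom.db")   # path is illustrative
  conn.executescript(SCHEMA)
  conn.commit()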
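
A sketch of the fast-load idea from the reader bullet: given a series UID taken from a query string such as SeriesUID=1.2.3...., the filenames, image dimensions, and pixel data offsets come from the database, and the pixels are read with a plain seek()/read(), never touching a DICOM parser. This assumes the illustrative schema above and uncompressed 16-bit pixel data; slice ordering and value rescaling are left out.

  import sqlite3

  def load_series(db_path, series_uid):
      # Look up per-image geometry and pixel data offsets recorded at import time.
      conn = sqlite3.connect(db_path)
      rows = conn.execute(
          "SELECT Filename, Rows, Columns, PixelDataOffset "
          "FROM Images WHERE SeriesInstanceUID = ?",
          (series_uid,),
      ).fetchall()
      slices = []
      for filename, n_rows, n_cols, offset in rows:
          with open(filename, "rb") as f:
              f.seek(offset)                              # jump straight to the pixel data
              slices.append(f.read(n_rows * n_cols * 2))  # 2 bytes per pixel assumed
      return slices

  # Example: load_series("/tmp/dicom.db", "1.2.3....")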
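
And a sketch of Steve's two-level layout: a small central index that maps studies to a per-study SQLite file, which in turn holds the detailed Images table used above. The file paths and names are again illustrative.

  import sqlite3

  central = sqlite3.connect("/tmp/dicom-central.db")  # central index, kept small
  central.execute("""CREATE TABLE IF NOT EXISTS StudyIndex (
      StudyInstanceUID  TEXT PRIMARY KEY,
      PatientsUID       TEXT,
      StudyDatabaseFile TEXT   -- per-study SQLite file with the detailed Images table
  )""")

  def open_study(study_uid):
      # Find the per-study database file for a study and open it for fast loading.
      row = central.execute(
          "SELECT StudyDatabaseFile FROM StudyIndex WHERE StudyInstanceUID = ?",
          (study_uid,),
      ).fetchone()
      return sqlite3.connect(row[0]) if row else None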