NIMS User Example


This page is a tutorial of how we plan to use the NIMS database. The anticipated user is someone who scans regularly. One purpose of this page is to try to predict potential pitfalls.

If you scan and want to use NIMS, please feel free to edit this and add potential issues or details of how you scan.

Using the database while at the scanner

0. Prior to Scanning

People using the database should:

  1. Create a user account on the NIMS page
  2. Assign their user account to a group (E.g., "Wandell Lab")
  3. Create an entry for their experiment (E.g., "Development of Reading")
  4. Assign their experiment to a relevant grant
    1. They will need an administrator to enter that grant information
    2. If this information is not available yet, the experimenters should still be able to proceed


1. At The Scanner: Enter Exam Information

The experimenters and the subject enter the scanner suite. The subject has filled out all screening forms, IRB forms, etc. While the subject is getting ready to be scanned, the experimenter enters the subject and exam information into the scanner. This information will be saved in the header files of all series for this exam. (A series can be either an anatomical, dMRI, or functional scan).
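Because this information is written into the DICOM headers of every series, it can be double-checked later from the command line. A minimal sketch, assuming the DCMTK dcmdump utility is installed on the transfer computer (the file path is a hypothetical example):

 # Print a few identifying header fields from one DICOM file of the exam;
 # the path 001/I0001.dcm is only an example.
 dcmdump +P PatientName +P PatientID +P StudyDescription 001/I0001.dcm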

Steps involving the database:

  1. Log into the NIMS database under your username
  2. Select the relevant experiment
  3. A list of exams already assigned to this experiment should be shown. If there are no exams yet assigned, there should be a page which says, "No exams are assigned yet." There should be a button marked "Add New Exam".

Things that might go wrong or create complications:


2. Collect Anatomies / Scout Scans

The subject is ready to scan and is positioned in the scanner. The first series is usually a short scout/localizer scan, which verifies that the subject is in the scanner correctly, the coil is working, the subject has a brain, etc. This series consists of a number of DICOM files.

In a functional scanning session, there are then usually a few other series between the localizer and the functionals. For example, the next series may be a second, higher-resolution localizer scan. The purpose of this is to provide a higher-resolution visual aid for prescribing slices in the subsequent functional scans. Then there is usually either a full-brain 3D T1 anatomical series, or a set of T1 anatomies coregistered to the functionals ("Inplanes"), or both. (For a diffusion scan, usually there is also a T1 anatomy at the beginning?) All of these anatomical series consist of a number of DICOM files.

Steps involving the database:

  1. For the first series you want to keep in this session, pull the data onto a transfer computer.
    • (If the new scanner lets you connect to the database directly, you may be able to do this from the scanner computer itself. At the Lucas Center, you need to "push" the data onto a transfer computer.)
  2. Create a compressed archive containing the series files (a sketch for double-checking the archive follows this list).
    • EXAMPLE: tar czvf 3plane.tar.gz 001/
    • The syntax for this example is: "c" creates a new archive from the input folder (001/), "z" compresses the archive with gzip (needed to upload to the database), "v" gives verbose output to let you know what's going on, and "f" means that the next argument (3plane.tar.gz) is the name of the archive file to create.
  3. On the NIMS page for the experiment, select the "Add New Exam" button
  4. There should be a page with a prompt to browse for the .zip file, as well as fields to add comments and tags to this exam. Tags might include things like "Retinotopy", "Word Localizer," "Longitudinal", "Outside Lab Subject", "Lab Member Subject". Comments can be more detailed.
  5. The user uploads the file, and a page is presented with the exam information extracted from the series header, as well as the metadata the user just typed in.
  6. The user should be able to verify this information and correct typos/mistakes.
  7. You then proceed to a page listing all series for this exam. The first series you uploaded should be shown. There should also be a button marked "Add New Series".
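Before uploading, it can be worth listing the archive's contents to confirm that every file from the series made it in. A minimal sketch, using the example archive from step 2:

 # List the contents of the gzipped tar without extracting it,
 # then count the entries to compare against what the scanner produced.
 tar tzvf 3plane.tar.gz
 tar tzf 3plane.tar.gz | wc -l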

Things that might go wrong or create complications:

  • The same exam belonging to multiple experiments -- how to handle this?

3. Collect Functionals

For functional experiments, the main experiment usually consists of several functional series (or "runs" or "scans"). Sometimes these series will all be identical; we obtain many of them to improve signal-to-noise. Sometimes there will be a few types of functional scans. For instance, it is often the case that the first couple of series are "functional localizer" experiments, used to identify regions of interest in the brain, and subsequent series are the main experiment.

Usually, the user will do some pre-processing on all of the functional series at once (for example, motion correction). Later analyses may operate on the different groups separately. These groupings may be important for database queries later on. For example, a researcher might want to identify all subjects who have participated in a 'face localizer' experiment. 'Face localizers' are often a short part of a scan session. So an experiment on memory or emotion or decision making or retinotopy might have a face localizer at the beginning. The experiments will probably not be named 'face localizers', and might not even have 'faces' in the title. So perhaps these groupings can be added to the individual series as tags.


Steps involving the database:

  1. For each new functional run, create a compressed archive containing all the files associated with this run (a sketch follows this list).
    1. For Lucas Center spiral runs, these include, but are not limited to, files in the following formats:
      • P*.7 file: raw k-space data file (contains header information along with the data, but in a hard-to-access format)
      • P*.7.mag file: reconstructed image-space file (contains data only)
      • E*P*.7.mag file: header information only (older format)
      • P*.7.hdr file: header information only (newer format)
      • P*.7.mag.mot file: vector of estimated motion during the run
      • P*.7.mag.zits files: for runs with excessive subject motion, or other sources of outlier signals (scanner spikes?), these files are sometimes created and mark the outlier time points
    2. For EPI scans, the functional data are automatically reconstructed (no k-space data are usually available) and kept in a directory full of DICOM files, one per slice per time point.
  2. In the NIMS page, you should be on the "Exam" page, which lists the series. Press the "Add New Series" button.
  3. You should get a page with a prompt to browse for the functional .zip file, as well as fields for metadata, including comments and tags. One potentially useful dedicated field would be "Subject performance": where appropriate, the mean accuracy on the task (or some other performance metric) could be entered here, so that there is a dedicated record of how well the subject did in the scanner.
  4. You should get a confirmation page to review the file upload and metadata, and correct mistakes where needed.
  5. After uploading and verifying the information, you should be taken back to the exam page, where the new functional run has been added to the list of series for this exam.
  6. Any extra files in the .zip file which aren't recognized -- as with other series uploads -- should be archived and associated with the series, even if their file type is not known.
  7. Repeat as needed.
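For step 1, a minimal shell sketch of bundling one Lucas Center spiral run into a single archive, assuming all of the run's files share one P-file number (P12345.7 is a hypothetical name; the glob also picks up an E*P*.7.mag header file if one exists):

 # Gather every file belonging to one raw P-file into a gzipped tar
 # archive for upload as one series (P12345.7 is hypothetical).
 tar czvf run01.tar.gz *P12345.7*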

Things that might go wrong or create complications:

  1. Some functional sequences don't always use the recon that happens at the scanner. (For instance, some 3D functional pulse sequences need to be reconstructed offline.) In this case, there won't be a P*.7.mag file, only a P*.7 file, and the headers may not be available. We want to be able to extract header information in this case as well, even though that may be tricky.
    • Two options might be: first, assume the user has uploaded an anatomical series, where the header is readable, and preserve that information; second, keep a current copy of Gary Glover's header-extraction code ("writeihdr?"). (See: [1]). That code extracts only the header information from a P-file.
  2. Some functional series are missing individual DICOM files. We should be able to re-upload a version of that series once the missing DICOMs are retrieved.
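For the missing-DICOM case, a quick check is to compare the number of files in the series directory with the number the scan should have produced. A minimal sketch, with a hypothetical directory name and counts:

 # Expected file count = slices x time points (both values are hypothetical).
 expected=$((30 * 120))
 actual=$(ls 005 | wc -l)
 if [ "$actual" -ne "$expected" ]; then
   echo "series 005: expected $expected DICOM files, found $actual"
 fi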


4. Collect Diffusion MRI Scans

(What kind of files do diffusion MRI scans create by default?)

DTI Data:

    Raw DTI Data - DICOMS:
    • Diffusion MRI scans generate DICOM images, usually numbering in the thousands (roughly 2,000-6,000).
    • These images are taken from the scanner and transferred to the Lucas DICOM computer.
    • The data are then placed into a directory, the name of which is determined by the type of diffusion scan that was run. For example, for a scan with a grads-file code of 87 and a b-value of 900, the folder will be named dti_g87_b900.
    • That directory is then zipped (a sketch follows the sample file structure below) and ready for transfer to our data servers.
    Raw Anatomical Data - DICOMS:
    • Typically, when we run a diffusion scan we also run 1-2 T1 anatomical scans for alignment purposes.
    • These data are taken from the scanner and transferred to the Lucas DICOM computer where they are placed into the subject's folder with folder names like spgr1, spgr2 ...
    • These folders are then zipped and ready for transfer to our data server.

Sample file structure resulting from a typical DTI scan

 subCode090819  > raw > dti_g87_b900.zip
                      > dti_g89_b2000.zip
                      > spgr1.zip
                      > spgr2.zip
                      > scout.zip
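A minimal sketch of how the zip files in the structure above might be produced, assuming the unzipped series directories already sit under subCode090819/raw (the directory names follow the example; the series present will vary by session):

 # Zip each raw series directory in place so it is ready for upload.
 cd subCode090819/raw
 for d in dti_g87_b900 dti_g89_b2000 spgr1 spgr2 scout; do
   zip -r "${d}.zip" "$d"
 done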



Steps involving the database:

Things that might go wrong or create complications:

Using the database to upload pre-existing data sets
