BlueMatter


Blue Matter (BM) is software developed by AS for finding a projectome that optimally satisfies a cost function balancing DTI data fit against volume constraints.


Running Blue Matter to normalize fiber density

Besides fancy conTrack applications, BM can be run on a set of fibers produced by streamline tractography (such as STT with mrDiffusion) to find a subset of fibers that (a) best predicts the diffusion data and (b) fits within the space occupied by white matter, modeling the fibers as fascicles of uniform diameter (across fibers and along fiber length).


There are two fairly well-commented scripts (not functions!) with hopefully useful code snippets:

  • A Matlab script that prepares the input files:
 http://white.stanford.edu/trac/vistasoft/browser/trunk/mrDiffusion/analysisScripts/Longitude/dti_Longitude_NormalizeAllFibersDensity2steps.m
  • A bash script that performs the unix-based steps (format conversions + Blue Matter steps, including intelligent use of azure scratch space) is provided below; it is referred to as runSAmpi.sh.

Preparing input files (mostly MATLAB)

A set of fibers to be normalized, in pdb format

These fibers may be produced by the STT algorithm using mrDiffusion. The fibers must terminate in grey matter (at both ends); otherwise BM might not handle them properly. One way of discarding fibers that do not end in grey matter is to intersect your fiber set with a grey matter mask: load your fiber group (fg), create an ROI from your grey matter mask (roi), then use dtiIntersectFibersWithRoi([], {'and', 'both_endpoints'}, minDist, roi, fg) with minDist=1.73 (twice the default). Warning: when using a grey matter mask, corticospinal tract fibers may be thrown out with the bathwater. I wonder whether dtiClipFiberGroupToROIs would have been a better option -- if you tried, share your experience/suggestions here.
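A minimal sketch of this step (assuming the fiber group and ROI are already saved to disk; the file names are made up, and dtiReadFibers/dtiReadRoi are the usual vistasoft loaders):

 % Load the fiber group and the grey-matter ROI (hypothetical file names)
 fg  = dtiReadFibers(fullfile('fibers', 'allFibers.mat'));
 roi = dtiReadRoi(fullfile('ROIs', 'grayMatter.mat'));
 % Keep only fibers with BOTH endpoints within minDist of grey matter;
 % the first output of dtiIntersectFibersWithRoi is the surviving fiber group
 minDist = 1.73;   % twice the default, as suggested above
 fg = dtiIntersectFibersWithRoi([], {'and', 'both_endpoints'}, minDist, roi, fg);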

Consider a situation where you would like to perform the model fit for fibers in the left hemisphere only. You will need to CLIP your fiber set (already intersected with the GM mask) with a sagittal plane, to include only fiber points to the left of that plane (e.g., use fg = dtiClipFiberGroup(fg, rlClip)).
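For example (a sketch only: the clip range below is illustrative, and the exact clipping convention is per dtiClipFiberGroup's help):

 % Remove fiber points falling on the right side of a mid-sagittal plane;
 % [0 80] is an ILLUSTRATIVE right/left range to clip out -- adjust to your data
 rlClip = [0 80];
 fg = dtiClipFiberGroup(fg, rlClip);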

Alternatively, you could create a grey matter mask defining all points where fibers are allowed to terminate, and pass it to Blue Matter with the '-g' argument. In that case, Blue Matter will consider any possible fiber segment (of the original input fiber set) that connects two possible points in the g-mask. This approach is NOT recommended if you are looking to find which particular fibers from your input dataset are "good enough" to be included in the final projectome (that is, testing the "goodness" of fibers), because the solution may contain multiple segments of one original fiber.

I use dtiWriteFibersPdb to export the fibers to pdb. I suppose mtrExportFibers is another option -- if you tried, please share your experience/suggestions here.
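A sketch of the export step, assuming dtiWriteFibersPdb takes the fiber group, an acpc transform, and an output file name (check its help; the transform here is assumed to come from the subject's dt6):

 % Export the cleaned fiber group to pdb (output name is illustrative)
 dt = dtiLoadDt6(dt6File);
 dtiWriteFibersPdb(fg, dt.xformToAcpc, fullfile('fibers', 'allFibers.pdb'));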

Contrack parameters file

You can create one using parameters taken from AS's early example configuration for density normalization:

  % Write a conTrack parameters file for density normalization
  cd(fullfile(project_folder, subjectID{s}));
  spName = fullfile('fibers', 'conTrack', 'ctr_paramsDN.txt');
  p.dt6File = dt6File;                 % path to the subject's dt6.mat
  p.roiMaskFile = 'brainMask.nii.gz';  % mask of voxels fibers may pass through
  p.timeStamp = datestr(now, 30);
  p.roi1File = []; p.roi2File = [];    % no seed/target ROIs needed here
  p.wm = false; p.pddpdf = true;
  p.dSamples = 300; p.maxNodes = 300; p.minNodes = 10; p.stepSize = 1;
  p = ctrInitParamsFile(p, spName);

Questions for AS:

  • How is the ctr_params.txt file generated?
  • Which parameters are used by contrack_score? Which could be replaced by zeros and dummies?
  • Why is it different from met_params.txt?

Average aligned raw dti data, bvals and bvecs

A handy function is available:

         dtiRawAverage(rawNii, rawBvecs, rawBvals);

E.g.,

         dtiRawAverage('dti_g13_b800_aligned.nii.gz', 'dti_g13_b800_aligned.bvecs', 'dti_g13_b800_aligned.bvals');

The results will be saved in the same folder as the rawNii file, as dti_g13_b800_aligned_avg.nii.gz, dti_g13_b800_aligned_avg.bvals and dti_g13_b800_aligned_avg.bvecs.

Brain mask

One of the parameters Blue Matter requires, '-m', specifies all the voxels that a fiber is allowed to pass through. This will usually be the union of the white and gray matter masks (since any "plausible" fiber usually has both ends terminating in gray matter). You can use the brainMask.nii.gz already contained in the dtiRawdir directory, but you want to make sure to exclude voxels that have a raw data value of zero (in any gradient direction). Run the snippet below; the new "safe" brainMask.nii.gz will overwrite the original one in the dtiRawdir directory.

         dtiRawDir = fullfile(project_folder, subjectID{s}, 'raw');
         dtiAlignedAvgFile = fullfile(dtiRawDir, 'dti_g13_b800_aligned_avg.nii.gz'); % averaged data from the previous step
         dtiMakeBrainMaskSafe(fullfile(project_folder, subjectID{s}, 'dti06trilinrt'), dtiAlignedAvgFile);

How does Blue Matter use this ('-m') mask? 1. To throw away "implausible" fibers: if your input dataset includes fibers whose points are OUTSIDE of brainMask, and you are not supplying the '-g' parameter to the Blue Matter scripts, the whole fiber may be dismissed from the solution as implausible (Tony: is this so?). 2. To determine which voxels will contribute to computing the error between the observed and model-predicted diffusion data.


pdb + conTrack params file = SBfloat data

For your convenience, this step is included in runSAmpi.sh (the example shell script mentioned above), preceding the BM executables. It relies on the outputs of the preparation steps above (the pdb fiber set and the conTrack parameters file).

 /home/sherbond/src/dtiTools/contrack_score/contrack_score -i ctr_params.txt -p outputfilename.SBfloat --thresh 1000000 --seq inputfiberset.pdb
  • The threshold (--thresh) must be a number greater than or equal to the number of paths in the input pdb. I used a very large number, 10^6, which is greater than the fiber count of any STT-generated fiber set.
  • The --seq parameter means: process the pathways sequentially and do no scoring, thus keeping all of the pathways and not needing the information in the ctr_params.txt file.

There is a chance you do not have a pdf file (if you did not use bootstrapping in your dti processing). This will pose a problem when running contrack_score (confirmed: you'll have to rerun dtiRawPreprocess and use bootstrapping to generate the pdf file - MP). You can use the pdf file from /dti06/bin: copy it to your "bin" directory. I suppose you could also use ctrInit to produce one -- this requires pddDispersion.nii.gz from dti06 (generated by dtiRawPreprocess) and tensors.nii.gz from dti06rt (generated by dtiRawFitTensor).
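The copy itself is a one-liner; the pdf file name and directory layout below are assumptions based on the usual conTrack layout:

 % Reuse the pdf produced by a bootstrapped run (paths are illustrative)
 copyfile(fullfile('dti06', 'bin', 'pdf.nii.gz'), fullfile('dti06trilinrt', 'bin', 'pdf.nii.gz'));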

Running Blue Matter executables

There are three executables, run in this order: trueSAPrep, trueSA, and trueError. You can check out two kinds of executables from the gpe (Global Projectome Estimation) folder of our vistasoft repository (trunk/mrDiffusion/gpe):

  • MPI-compiled executables (to run on azure or other machines with an MPI ("Message Passing Interface") environment)
     trueSAPrep_mpi
     trueSA_mpi
     trueError_mpi
  • nonMPI compiled versions
     trueSAPrep
     trueSA
     trueError

The three executables, run consecutively, use largely the same parameters; see the examples in runSAmpi.sh below.


Parameters

  • SUB=xmin, xmax, ymin, ymax, zmin, zmax

for the subvolume you want fitted (image-space coordinates: that is, to fit a full volume of 5x5x5 you would specify SUB=0,4,0,4,0,4). This parameter was introduced to deal with memory limitations (you cannot load too much data on a machine with limited memory). If this brick is bigger than your mask ('-m'), I suppose nothing happens. If it is smaller than your mask, I suppose it acts as a more restrictive mask?
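If you just want SUB to cover the whole volume, one way to derive it is from the brain mask dimensions (a sketch, assuming vistasoft's niftiRead and zero-based image-space indices):

 % Build a full-volume SUB string from the brain mask dimensions
 ni  = niftiRead('brainMask.nii.gz');
 SUB = sprintf('%d,%d,%d,%d,%d,%d', 0, ni.dim(1)-1, 0, ni.dim(2)-1, 0, ni.dim(3)-1);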

  • W=.2

This is lambda in the MICCAI paper. AS's current results suggest it is a good default value.

  • ITERS=1,12000,1,20000

AS believes trueSA should converge before the maximum number of iterations is reached. I do not understand exactly what each of these numbers is, but I assume they define the numbers of iterations used if it does not converge. The default convergence behavior of trueSA is set to 50 iterations with a change of .01 in the model score.

  • --diameter 0.2

The diameter of a fascicle, in mm. Recently AS suggested setting this parameter to .2 or larger. We used .2 for the longitudinal white matter analysis.

  • You need to supply "canonical" diffusivity values for:
--dg <float>
Diffusivity of unoriented cellular tissue (we assume it should be the same as that of GM?).
--dr <float>
Diffusivity along the radial axis of a fascicle.
--dl <float>
Diffusivity along the longitudinal axis of a fascicle.
--dc <float>
Diffusivity of CSF.
--std <float>
Standard deviation of the MRI noise.

These values are usually approximately the same across datasets collected with the same protocol; they depend on the b-value, gradient slew rate, etc. of your protocol.

Here is how to obtain them: open dtiFiberUI and load a representative dt6.mat file. Find a voxel within WM (e.g., in the CC), grow a small sphere ROI around it (e.g., 2 mm), and obtain the average properties within that ROI (Analyze->ROIs->ROI stats (verbose)). Mean eigenvalues will be reported: take the first eigenvalue as your 'dl' and the second as your 'dr'. Now switch your background to Mean Diffusivity, make a small ROI within grey matter, and collect ROI stats -- the "mean" value reported is what you want for 'dg'. Make an ROI within a ventricle to obtain 'dc' as the mean "mean diffusivity" value across that ROI. Finally, to get the STD value, switch your background to "b0", make an ROI outside of the HEAD, and extract ROI stats; the 'std' value reported there reflects the standard deviation of the MR noise.
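For the WM eigenvalues there is also a programmatic alternative to the GUI route (a sketch only: it assumes vistasoft's dtiLoadDt6/dtiEig, and the voxel indices are placeholders you must replace with a point inside the CC):

 % Mean eigenvalues at a single WM voxel (replace the indices with your own CC voxel)
 dt = dtiLoadDt6('dt6.mat');
 [eigVec, eigVal] = dtiEig(dt.dt6);      % eigenvalues per voxel: X x Y x Z x 3
 ev = squeeze(eigVal(44, 54, 43, :));    % ILLUSTRATIVE image-space indices
 dl = ev(1);  dr = ev(2);                % candidate --dl and --dr values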

Typical 'dl', 'dr', 'dg', 'dc' and 'std' values for b=800 g=1300 data used for some Wandell Lab DTI projects:

  • dl = 1.83
  • dr = 0.5
  • dg = 0.85
  • dc = 3.3
  • std = 40.7

Meaningless errors

Annoying errors which you can safely ignore when running BM, and which should be removed in the next version of the code.

  • When running trueSAPrep:
     Loading tensors volume ... 
     ** ERROR (nifti_image_read): failed to find header file for 
     Loading GM volume ... 
     ** ERROR (nifti_image_read): failed to find header file for 
  • When running trueSA:
     Loading GM volume ...
     ** ERROR (nifti_image_read): failed to find header file for 

Output

Solution

The solution is in nSTT/pids/model.pid. It needs to be converted back to pdb format. This step is included in runSAmpi.sh.

 pid2pdb   -p <filename> -v <filename> -o <filename> -d <filename> [-i <filename>] [--] [--version] [-h]
  • -p: your pid filename
  • -v: diffusion volume filename, e.g., ${RAWDIR}/dti_g13_b800_aligned_avg.nii.gz
  • -o: output pdb filename (what you want to name your resulting file)
  • -d: PDB root filename: the original pathway database… essentially allfibers_0.SBfloat (the original fibers file from the full-brain tractography).
  • -i: indices filename

If -i is provided, pid2pdb will also write out the indices into the database of the retained pathways. Note: these indices start at ZERO!

 pid2pdb   -p model.pid -v dti_g13_b800_aligned_avg.nii.gz -o <densitynormalizedfibersfile.pdb>   -d  <originalfibersfile.SBfloat> -i <indices.ind>

Sometimes you will need to transform the PDB file back to .mat: I use fg = mtrImportFibers on the PDB file, followed by dtiWriteFiberGroup (see the sketch after the snippet below). The list of retained fibers may be stored together with a reference to the original, "parental" dataset:

 % Read the (zero-based) indices of the retained fibers ...
 fid = fopen(IndicesFileName);
 DN_ind = textscan(fid, '%d'); DN_ind = DN_ind{1}; fclose(fid);
 % ... and store the subset with a reference to the parent fiber group
 % (+1 converts the zero-based indices to Matlab's one-based indexing)
 dtiWriteFibersSubset(ParentFGFileName, DN_ind+1, DensityNormalizedFGFileName, DensityNormalizedFGLabel);
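And a minimal sketch of the plain pdb-to-.mat conversion (file names are illustrative; mtrImportFibers may take an optional xform argument, per its help):

 fg = mtrImportFibers('densityNormalizedFibers.pdb');
 dtiWriteFiberGroup(fg, 'densityNormalizedFibers.mat');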

Logs/Quality metrics

  • runSAmpi.sh produces log files for each of the three "true" steps (the log directory and filenames can be customised -- currently on azure scratch space).
  • trueError produces the following output maps:
  • --error

The error E is a convex combination of E1, which measures the difference between the predicted and observed diffusion-weighted images, and E2, the amount by which the volume is overfilled. The parameter lambda modulates the balance between E1 and E2 and is selected empirically. We seek solutions that predict the data to within the measurement noise; we also seek solutions with no more fascicles than a given voxel volume allows. To compute a pure data/model error, run trueError with w (lambda) = 0 (on the files produced by running trueError with w=.02).
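In symbols, assuming this is the form of the combination (the exact definition is in the MICCAI paper):

 E = (1 - w) * E1 + w * E2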

  • --fraction (volume fraction)

The fraction of each mm^3 voxel consumed by white matter in the solution fascicle model.

  • --rwm (Ratio WM2Non)

"Optimal" ratio of white matter to non-white matter in a voxel. This measure is computed from diffusion data independently from the fascicle model. It is therefore not exactly a quality metric, but may be useful.


Notes on Running trueSA on a full brain: an intelligent initial guess

Density normalization of a full-brain fiber set may be very time consuming. Providing a reasonable initial guess greatly reduces convergence time. A reasonable guess can be obtained by first normalizing a subset of fibers (the first X from the full fiber set) and using the solution from that first run (model.pid) as a starting point for the full-brain estimation. The initial solution is even better if it is computed from fibers known to lie within the major backbone of the white matter skeleton -- e.g., if these fibers have been classified as one of the twenty known white matter atlas groups. This strategy is exemplified in the script dti_NormalizeAllFibersDensity2steps.m, which performs full-brain tractography and finds a subset of Mori atlas fibers. The fibers are then arranged in the full-brain set such that the Mori atlas fibers are followed by the unclassified fibers. Density normalization is performed on the Mori subset, producing a model.pid, which is then passed as an optional parameter to runSAmpi.sh and used as the initial guess for the full-brain density normalization.

Notes on Running trueSA using MPI vs. nonMPI [stub]

The current Blue Matter executables have been compiled for an MPI environment. This means that, theoretically, computations can be done more efficiently by taking advantage of multi-CPU computing stations. It is not clear whether one should use more than one CPU for the simple task of normalizing 50,000 fibers or so, typical for a full-brain, Mori-classified STT fiber set. Multiprocessor mode is reportedly advantageous for conTrack billion-pathway analyses. When applied to a simple STT-produced, Mori-classified whole-brain dataset, we observed no convergence for a couple of days when running Blue Matter in multiprocessor mode (32 CPUs), compared to 7-9 hours to convergence on a single CPU of the same machine.

No systematic investigation of Blue Matter performance in single- and multi-CPU mode has been performed to date. Below are some useful notes.

  • There is a switch for the number of processors in the runSAmpi.sh script, defaulted to 1.
  • runSAmpi.sh calls BM executables compiled for the MPI environment, which might not run on machines that do not have MPI installed.
  • You need to run trueSAPrep and trueSA in MPI mode; no MPI is needed for trueError.
  • You can only run them on a 64-bit machine (trueSAPrep is compiled for 64-bit architecture only).
  • For each processor, you need a file prefix_PID.SBfloat in your data directory. The runSAmpi.sh script does not need to be modified as long as it contains a reference to a single file (e.g., prefix_0.SBfloat). The prefix_*.SBfloat files can be exact copies of each other: runSAmpi.sh prepares the copies and stores them on azure's scratch space, /azure/scr1.

Code example (runSAmpi.sh)

A shell script that batch-processes the steps required to perform a Blue Matter analysis on one dataset. Save it as yourscript.sh and run it with the following parameters.

 sh path-to-yourscript/yourscript.sh  <PATH-TO-SUBJECT-DTI-DIRECTORY> <INPUTDB.pdb> <OUTPUTPDBNAME.pdb>  [g] [b] [initialsolutionModel.pid]

It goes without saying that your script must have appropriate permissions to run. This example will work on azure -- it uses azure's own scratch space, which is not shared with other network computers.

#!/bin/bash
echo "Running your runSAmpi script"
#
# 1. Parameters (Edit before running) -- set the VALUES AS DESCRIBED IN THE "Parameters" SECTION ABOVE
#################################################################################################################
#For a controversy on running trueSA in multiprocessor mode, see "http://white.stanford.edu/newlm/index.php/BlueMatter#Notes_on_Running_trueSA_using_MPI_vs._nonMPI_.5Bstub.5D"
# Setting numberProcessors=1 makes it single processor mode. 
numberProcessors=1 

W=.2
DL=1.83
DR=0.5
DC=3.3
DG=0.85
STD=40.7
ITERS=1,12000,1,20000
fiberDiameter=.1; #In mm (Found .2 to be a little too aggressive)
# Decreasing fiberDiameter makes BlueMatter less aggressive by allowing a higher density of fibers in the fascicles


#These values are xmin xmax ymin ymax zmin zmax for the volume you want fitted
#E.g., for left hemisphere only use SUB=0,40,0,105,0,75
SUB=0,80,0,105,0,75

## 2. Prepare system paths -- change if needed
###############################################################################################################
#Your trueSA executables should be in this gpe directory -- change these to the paths of your own version if you like
export PATH=$PATH:/home/akevan/matlab/svn/vistasoft/trunk/mrDiffusion/gpe
export PATH=$PATH:/home/akevan/matlab/svn/vistasoft/trunk/mrDiffusion/contrack

## 3. EXTRA CHECKS: Stuff below may need to be edited to conform to your directory structure
## 3.A. Input parameters are read into the following variables, and used in (4) below 
###############################################################################################################
#($1-$6 correspond to the parameters following the script name ("sh yourscript.sh param1 param2 param3 param4 param5 param6"), in that order. Params 4 and 5, if not provided, default to g=13 and b=800 as in the longitudinal kids data. Param 6 is optional.)
SDIR=$1
INPUTPDB=$2
OUTPUTPDB=$3
g=${4:-13}
b=${5:-800}
#$6 could be initial solution, no need to default

DIR=`dirname $SDIR`
cd $DIR

## 3.B. Key files to be processed through the Blue Matter pipeline are assumed to conform to the following naming convention:
# (If BM chokes, check that all the files referred to below exist)
#
CONTRAC_PARAMS=${DIR}/fibers/conTrack/ctr_paramsDN.txt
RAWDIR=${DIR}/raw
RAW=${RAWDIR}/dti_g${g}_b${b}_aligned_trilin_avg.nii.gz
BVALS=${RAWDIR}/dti_g${g}_b${b}_aligned_trilin_avg.bvals
BVECS=${RAWDIR}/dti_g${g}_b${b}_aligned_trilin_avg.bvecs
DTIDIR=${SDIR}/bin
B0=${DTIDIR}/b0.nii.gz
TEN=${DTIDIR}/tensors.nii.gz
M=${DTIDIR}/brainMask.nii.gz
FG_NAME=`basename ${OUTPUTPDB}`
LOGPREFIX=log_SA`basename $DIR``basename $SDIR`_${FG_NAME%.pdb}
SCRDIR=/azure/scr1/bluematterscratch/`basename $DIR``basename $SDIR`_${FG_NAME%.pdb}

if [ -f $CONTRAC_PARAMS ]
then
echo found CONTRAC_PARAMS
else echo !!!! CONTRAC_PARAMS $CONTRAC_PARAMS does not exist!!!
fi
if [ -d $RAWDIR ]
then
echo found RAWDIR
else echo !!!!  RAWDIR $RAWDIR does not exist!!!
fi
if [ -f $RAW ]
then
echo found RAW data 
else echo !!!!  RAW data $RAW does not exist!!!
fi
if [ -f $BVALS ]
then
echo found BVALS
else echo !!!!  BVALS $BVALS does not exist!!!
fi
if [ -f $BVECS ]
then
echo found BVECS 
else echo !!!!  BVECS $BVECS does not exist!!!
fi
if [ -d $DTIDIR ]
then
echo found DTIDIR
else echo !!!!  DTIDIR $DTIDIR does not exist!!!
fi
if [ -f $B0 ]
then
echo found B0
else echo B0 $B0 does not exist!!!
fi
if [ -f $TEN ]
then
echo found TEN
else echo !!!! TEN $TEN does not exist!!!
fi
if [ -f $M ]
then
echo found M
else echo !!!!  M $M does not exist!!!
fi

echo "Subject directory is $DIR"
echo "DTI data in the directory $SDIR"
echo "Logs about this Blue Matter run can be found in $SCRDIR"

# END EXTRA CHECKS###
# Stuff below does not need to be edited in most cases 
#
#############################################################################################
# 4. Flag: working on it
# Place a flag into the directory with input pdb to indicate that this dataset is being processed
# (Useful if you are running a loop that sends multiple jobs to Azure for simultaneous processing)
echo `date` >${OUTPUTPDB%.pdb}.TMP

# 5. Prepare the scratch space, which is used because we have to make unnecessary copies of the same dataset
mkdir -p ${SCRDIR}

#*# 6. Transform pdb to SBfloat 
if [ -e $INPUTPDB ]; then echo ; else echo "Input PDB file does not exist"; exit; fi
INPUTSBFLOAT=${SCRDIR}/data_0.SBfloat
echo "contrack_score.glxa64 -i $CONTRAC_PARAMS -p $INPUTSBFLOAT --thresh 1000000 --seq $INPUTPDB"
contrack_score.glxa64 -i $CONTRAC_PARAMS -p $INPUTSBFLOAT --thresh 1000000 --seq $INPUTPDB
#thresh should be set to a number greater than the number of fibers in the input pdb file

#Create a copy of the data for each CPU. In the single-CPU case there should be one file, named data_0.SBfloat
for pid in `seq 1 $(($numberProcessors-1))`
do
cp $INPUTSBFLOAT ${SCRDIR}/data_${pid}.SBfloat
done
PIDS=${SCRDIR}/pids; #Complains "no such file or directory", but it does not matter?
PDB=${SCRDIR}/data_0.SBfloat

mkdir -p ${PIDS};
rm -rf ${PIDS}/voxels;
mkdir -p ${PIDS}/voxels;

# Check if a solution already exists (make a backup copy if so)
# If an initial solution is provided, place it in the directory where BM will look for it, and name accordingly ($PIDS/voxels/model.pid).

if [ -f ${PIDS}/voxels/model.pid ]
then
 echo "model.pid already exists! Copied to model.pid_old`date +%Y%m%d_%H%M_%S`"
 mv  ${PIDS}/voxels/model.pid  ${PIDS}/voxels/model.pid_old`date +%Y%m%d_%H%M_%S`
fi

if [ -z "$6" ]; then
    echo "No initial solution provided"
else
    cp $6 ${PIDS}/voxels/
    echo "Using initial solution supplied in the function call"
fi

##########################################################################################################################################################
##*# 7. Run prepSA
## This is the first of three key BM steps. A special log file will be created to host the output of this step.
## This step will run for only a few minutes and will prepare files required by trueSA (the "main" BM step)
EXE="mpirun -n $numberProcessors trueSAPrep"
ARG="-o $PIDS -r ${RAW} -d ${PDB} --val ${BVALS} --vec ${BVECS} -0 ${B0} -m ${M} -s ${SUB} -v 3 "

CMD="${EXE} ${ARG}"
echo $CMD
$CMD &> ${SCRDIR}/${LOGPREFIX}_prepMPI.txt


##*# 8. Run SA
## This is the second of three key BM steps. A special log file will be created to host the output of this step. 
## This step will run for many hours and will generate your solution. 

EXE="mpirun -n $numberProcessors trueSA"
ARG="-o $PIDS -r ${RAW} -d ${PDB} --val ${BVALS} --vec ${BVECS} -0 ${B0} -m ${M} --ten ${TEN} -s ${SUB} -v 3 -t ${ITERS} -w ${W} --diameter ${fiberDiameter} --std $STD  --minpts 3 --dl $DL --dr $DR --dc $DC --dg $DG " 

CMD="${EXE} ${ARG}"
echo $CMD
$CMD &> ${SCRDIR}/${LOGPREFIX}_trueMPI.txt

##*# 9. Run checks
## This is the third of three key BM steps. A special log file will be created to host the output of this step.
## This run will create various files with quality assurance metrics. 

EXE="trueError"
OUTFILES="-p ${SCRDIR}/pSTT_${FG_NAME%.pdb}.nii.gz  --error ${SCRDIR}/eSTT_${FG_NAME%.pdb}.nii.gz --fraction ${SCRDIR}/fSTT_${FG_NAME%.pdb}.nii.gz --rwm ${SCRDIR}/rSTT_${FG_NAME%.pdb}.nii.gz"
ARG="-o ${PIDS} -r ${RAW} -d ${PDB} --val ${BVALS} --vec ${BVECS} -0 ${B0}  -m ${M} --ten ${TEN} -s ${SUB} -v 3 -w ${W} --diameter ${fiberDiameter}  ${OUTFILES} "

CMD="${EXE} ${ARG}"
echo $CMD
$CMD &> ${SCRDIR}/${LOGPREFIX}_errorMPI.txt
##########################################################################################################################################################

#*# 10. The solution is in pids/model.pid. It needs to be converted back to pdb format:
#pid2pdb   -p <filename> -v <filename> -o <filename>    -d <filename> [--] [--version] [-h]
#-p: your pid filename
#-v: diffusion volume filename, e.g., ${RAWDIR}/dti_g13_b800_aligned_avg.nii.gz
#-o: output pdb filename (what you want to name your resulting file)
#-d: PDB root filename: original pathway database... essentially allfibers_0.SBfloat (original fibers file that had full brain tractography).
#-i: indices filename (optional)
CMD=" /home/shireesh/Tony/fasctrack/pid2pdb  -p ${PIDS}/model.pid -v ${RAWDIR}/dti_g${g}_b${b}_aligned_trilin_avg.nii.gz -o ${OUTPUTPDB}  -d  ${INPUTSBFLOAT} -i ${OUTPUTPDB%.pdb}.ind"
echo $CMD
$CMD

# 11. CLEANUP AFTER MYSELF IN THE SCRATCHSPACE. Stuff to keep: -d -e -f -r from trueError, logs, model.pid 
rm -rf ${PIDS}/voxels
rm -f ${SCRDIR}/data*
ln -s $INPUTPDB ${SCRDIR}/inputPdbFileLink
ln -s $OUTPUTPDB ${SCRDIR}/outputPdbFileLink

# 12. REMOVE THE "WORKING ON IT" flag -- done
rm -f ${OUTPUTPDB%.pdb}.TMP

# 13. Provide your email below to get notified when Blue Matter has FINALLY converged
grep 'converged' ${SCRDIR}/${LOGPREFIX}_trueMPI.txt | sendmail akevan@stanford.edu
