[Insight-users] MultiResMIRegistration Example

Lydia Ng lng@insightful.com
Thu, 11 Apr 2002 11:32:11 -0700


David:

I'm glad we are getting closer :)

Your feedback from yesterday raised several issues.
My goal yesterday was to get you at least on track to getting
reasonable results. So here is my response to some of the
issues - most of this is my opinion and others in the ITK
consortium may have a different view.

[1] Parameter selection
When I first implemented the Viola and Wells method, the results
I got were pretty pathetic as well. It took me several months
to finally figure out how to make it work. The main lessons
I learnt are:

- you need to do the registration in a multiresolution way
- you have to use lots of iterations (order of thousands per level)
- you have to normalize the image in some way, otherwise you will
need to reselect the standard deviation (Parzen window width)
for each and every image you want to register
- you can't set your learning rate too high or you will quickly
walk out of your capture region
- you need to scale between the rotation parameters and translation
parameters

I have worked out for myself a set of heuristics - I know Bill has
his own set.

- start the multiresolution schedule so that the coarsest level is
approx isotropic
- don't downsample too much (say to about 64x64 in-plane)
- normalize the image to mean zero, std of 1
- use a stddev of approx 0.4
- use 50-80 sample points (using more is too costly)
- set the translation scale to approx the size of the image in mm
- set the learning rate conservatively, say 1e-3

I used these "rules" as a start and then refined.
Setting these parameters is an art and will depend on the images
and modality.
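The "normalize to mean zero, std of 1" step can be sketched as
follows. This is a standalone illustration of the arithmetic only; in
ITK itself a filter (itk::NormalizeImageFilter) does this on the
image, and the function name here is just for illustration:

```python
import math

def normalize(pixels):
    # Shift and scale intensities so the result has mean 0 and
    # standard deviation 1. This makes one Parzen-window width
    # (e.g. the 0.4 suggested above) usable across images.
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    return [(p - mean) / std for p in pixels]
```

After this step, intensity differences are expressed in units of the
image's own standard deviation, which is why the same kernel width
works for different inputs.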

My heuristic for the translation scale comes from asking:
if you rotate about the center by x degrees, how does that translate
to motion at the edge of the image?
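As a worked instance of that heuristic (a sketch, using the numbers
from the runs quoted below): a rotation of about one radian around the
center moves a point at the image edge by roughly the image extent, so
scaling the translation parameters by the image size in mm makes a
unit step in rotation comparable to a unit step in translation.

```python
def translation_scale(size_voxels, spacing_mm):
    # Heuristic: translation scale ~ physical extent of the image
    # along one axis, in mm.
    return size_voxels * spacing_mm

# For the 128-voxel-wide images with 2.72 mm spacing in the runs
# below, this gives ~348, matching "Translation scales: 348".
print(translation_scale(128, 2.72))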

I strongly believe you can get better results for the "5 slices
removed" case. Start by upping the number of iterations, and
then play around with the learning rate / number of iterations
tradeoff.
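For intuition on how the learning rate and the parameter scales
interact, here is a sketch of a generic scaled gradient step. This is
one common convention, not necessarily the exact formula in the ITK
optimizer, so treat it as illustration only:

```python
def gradient_step(params, gradient, learning_rate, scales):
    # One scaled gradient step: each gradient component is divided by
    # its parameter's scale before being multiplied by the learning
    # rate. Parameters with a large scale (translations, ~image size
    # in mm) therefore move in proportionally smaller steps than the
    # rotation parameters (scale ~1).
    return [p + learning_rate * g / s
            for p, g, s in zip(params, gradient, scales)]
```

With a learning rate of 1e-3 and a translation scale of 348, a unit
gradient moves a rotation parameter by 1e-3 but a translation
parameter by only about 3e-6 per iteration - hence the need for many
iterations.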

[2] Memory issues
My personal opinion is that memory is cheap and it is not unreasonable
to require a minimum system requirement of 0.5-1 GB of memory.

Following up on what I said about the multiresolution pyramid: there
is a trade-off here between speed and memory. At the moment I went for
speed. Computing all the levels at once allows things to be done
recursively. A possible alternative would be to write levels to disk
when they are not being used.

[3] Speed issues
Your idea for an alternative way of computing MI is interesting.
In implementing the MI metric I tried to be as faithful as possible
to the original Viola and Wells paper. It is good to have a
well-known baseline with published results. Another charter of ITK
is to allow people to compare different methods.
The registration framework is designed to allow people to
plug in different metrics.

There is some room in the current implementation of MI for
speed improvement without it becoming a significantly
different method altogether. In the literature there are several other
ways of computing MI. I believe it is the hope of this project that
the community will start contributing components (like alternative
ways of computing MI) to ITK.
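For readers unfamiliar with the Viola-Wells approach, a minimal
sketch of its central ingredients - a Gaussian Parzen-window density
estimate built from a small random sample, and a stochastic entropy
estimate over a second sample - might look like this (illustrative
only, not the ITK implementation):

```python
import math

def parzen_density(x, samples, sigma):
    # Gaussian Parzen-window estimate of the density at x, built from
    # a small random sample of (normalized) intensities. sigma is the
    # kernel width - the "standard deviation" of ~0.4 suggested above.
    n = len(samples)
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - s) / sigma) ** 2)
               for s in samples) / n

def entropy_estimate(eval_samples, density_samples, sigma):
    # Stochastic estimate of entropy, -E[log p(x)], averaged over a
    # second independent sample set.
    return -sum(math.log(parzen_density(x, density_samples, sigma))
                for x in eval_samples) / len(eval_samples)
```

MI is then estimated as H(target) + H(transformed reference) -
H(joint), with each entropy term computed this way; the small sample
sizes (50-80 points) keep the double loop over samples cheap.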

- Lydia

> -----Original Message-----
> From: Lydia Ng
> Sent: Thursday, April 11, 2002 9:59 AM
> To: insight-users@public.kitware.com
> Subject: FW: FW: [Insight-users] MultiResMIRegistration Example
>
>
> Another post from David.
>
> -----Original Message-----
> From: David Rich [mailto:David.Rich@trw.com]
> Sent: Wednesday, April 10, 2002 11:03 PM
> To: Lydia Ng
> Cc: luis.ibanez@kitware.com
> Subject: RE: FW: [Insight-users] MultiResMIRegistration Example
>
>
> Lydia,
> (Will you post this to the mailing list?  My mailer would not
> recognize the address.  Thanks.)
>
> Okay, you have definitely piqued my interest!  I first ran the program
> with the image registered to an exact copy.  I used 100 sample points,
> as before.  The results were much better this time:
>
> Target filename: regfile2.raw
> Big Endian: 0
> Image Size: [128, 128, 105]  Image Spacing: 2.72 2.72 2.72
>
> Reference filename: CopyOfRegfile2.raw
> Big Endian: 0
> Image Size: [128, 128, 105]  Image Spacing: 2.72 2.72 2.72
>
> Number of levels: 5
> Target shrink factors: 4 4 4
> Reference shrink factors: 4 4 4
> Number of iterations: 250 250 250 250 250
> Learning rates: 0.0001 5e-005 1e-005 5e-006 1e-006
> Translation scales: 348 348 348 348 348
>
> Number of spatial samples:  100
> Standard deviation of Target:  0.4
> Standard deviation of Reference:  0.4
> Registered image filename: regfile3.dat
> Big Endian: 0
>
> Dump PGM files: 1
> PGM directory: pgmdir
>
> Reading in target.
> Reading in reference.
> Normalizing the target.
> Mean: 30.5417 StdDev: 63.514
> Normalizing the reference.
> Mean: 30.5417 StdDev: 63.514
> Setting up the registrator.
> Start the registration.
> Final parameters:
> 0.000285657  0.000150937  8.54086e-005  1  0.0254802  -0.000343909
> -0.00488092
> Transforming the reference.
> Writing registered image to regfile3.dat.
> Writing PGM files of the target.
> Writing PGM files of the reference.
> Writing PGM files of the registered image.
>
> You will notice that I have changed both the input and output
> to include both the number of samples and the standard deviations
> for the target and reference, which indicate the 100 samples and
> default standard deviations.
>
> The results imply almost an exact match.
>
> I then used the reference image with slice 0 missing:
>
> Target filename: regfile2.raw
> Big Endian: 0
> Image Size: [128, 128, 105]  Image Spacing: 2.72 2.72 2.72
>
> Reference filename: Regfile2-slice0.raw
> Big Endian: 0
> Image Size: [128, 128, 104]  Image Spacing: 2.72 2.72 2.72
>
> Number of levels: 5
> Target shrink factors: 4 4 4
> Reference shrink factors: 4 4 4
> Number of iterations: 250 250 250 250 250
> Learning rates: 0.0001 5e-005 1e-005 5e-006 1e-006
> Translation scales: 348 348 348 348 348
>
> Number of spatial samples:  100
> Standard deviation of Target:  0.4
> Standard deviation of Reference:  0.4
> Registered image filename: regfile3.dat
> Big Endian: 0
>
> Dump PGM files: 1
> PGM directory: pgmdir
>
> Reading in target.
> Reading in reference.
> Normalizing the target.
> Mean: 30.5417 StdDev: 63.514
> Normalizing the reference.
> Mean: 30.8354 StdDev: 63.7476
> Setting up the registrator.
> Start the registration.
> Final parameters:
> 0.000299443  0.000164776  8.04235e-005  1  0.0265114  -0.00140282
> -1.36595
> Transforming the reference.
> Writing registered image to regfile3.dat.
> Writing PGM files of the target.
> Writing PGM files of the reference.
> Writing PGM files of the registered image.
>
> Again, the registration is excellent.  The offset in z is half a
> pixel, which I am not sure is quite the right answer, but the
> process did have to stretch the reduced image set by one slice.
>
> I then registered the image with the first 5 slices removed, and again
> the results were excellent:
>
> Target filename: regfile2.raw
> Big Endian: 0
> Image Size: [128, 128, 105]  Image Spacing: 2.72 2.72 2.72
>
> Reference filename: Regfile2-slices04.raw
> Big Endian: 0
> Image Size: [128, 128, 100]  Image Spacing: 2.72 2.72 2.72
>
> Number of levels: 5
> Target shrink factors: 4 4 4
> Reference shrink factors: 4 4 4
> Number of iterations: 250 250 250 250 250
> Learning rates: 0.0001 5e-005 1e-005 5e-006 1e-006
> Translation scales: 348 348 348 348 348
>
> Number of spatial samples:  100
> Standard deviation of Target:  0.4
> Standard deviation of Reference:  0.4
> Registered image filename: regfile3.dat
> Big Endian: 0
>
> Dump PGM files: 1
> PGM directory: pgmdir
>
> Reading in target.
> Reading in reference.
> Normalizing the target.
> Mean: 30.5417 StdDev: 63.514
> Normalizing the reference.
> Mean: 32.0688 StdDev: 64.7051
> Setting up the registrator.
> Start the registration.
> Final parameters:
> 0.000315852  0.000151921  6.09405e-005  1  0.0267852  -0.00191591
> -6.81043
> Transforming the reference.
> Writing registered image to regfile3.dat.
> Writing PGM files of the target.
> Writing PGM files of the reference.
> Writing PGM files of the registered image.
>
> Again, the offset in z is about half the thickness of the slices
> removed.
>
> During the registration, I observed the memory allocation for the
> executable.  It typically jumped up and down, indicating some memory
> cleanup.  However, it still ran between 30MB and 50MB, which raises
> questions as to whether the intent of using heavily templated
> code to improve efficiency is being achieved.  However, in
> the meantime, I can now say that the code does work effectively (at
> least in this case)--even if the target gets sampled repeatedly.
>
> Now, can you help me understand the differences in the operations
> between before and now?  What is the specific purpose of the
> translation scale?  Is it the total amount that the image can be
> moved?  If it is compared with the rotation parameter of 1.0 for
> complete rotation, is it necessary to use the full width of the
> image to get comparable offsets?  And how are the learning rates
> used in conjunction with the translation and rotation parameters?
> Anything else you can tell me to help me understand the proper
> application here would be greatly appreciated.
>
> Dave