[Insight-users] Registration optimizer drawbacks

Bjorn Hanch Sollie bhs at pvv . org
Wed, 14 Aug 2002 13:20:36 +0200 (CEST)


Hi all!

I have been using the GradientDescent and RegularStepGradientDescent
optimizers in registration, and I'm currently trying to document the
positives and the negatives of one versus the other.

The learning rate of the GradientDescent optimizer has the obvious
drawback that the step length is proportional to the derivative of the
metric, and it can be pretty hard to get an idea of how this derivative
will change as the registration progresses.  This means that a learning
rate that works well for the initial iterations of the registration
might become inappropriate later on.
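To make the point concrete, here is a tiny stand-alone C++ sketch (not
ITK code; the quadratic toy metric f(x) = (x - 3)^2 and the learning
rate value are made up purely for illustration) of the plain gradient
descent update, where each step is learningRate times the metric
derivative, so the step length changes with the gradient magnitude as
the optimization proceeds:

  // Plain gradient descent on a toy 1-D "metric" f(x) = (x - 3)^2.
  // The step is learningRate * f'(x), so its length depends on the
  // gradient magnitude, which keeps changing as we converge.
  #include <cmath>
  #include <cstdio>

  int main()
  {
    double x = 0.0;                  // current transform parameter
    const double learningRate = 0.4; // good here, but metric-dependent

    for (int iter = 0; iter < 10; ++iter)
    {
      double gradient = 2.0 * (x - 3.0);     // f'(x)
      double step = -learningRate * gradient;
      x += step;
      std::printf("iter %2d  x = %8.5f  step length = %8.5f\n",
                  iter, x, std::fabs(step));
    }
    return 0;
  }

On this toy metric the rate 0.4 happens to converge, but on a real
image metric the scale of the derivative is unknown up front, so the
same rate may overshoot or crawl.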

This is indeed a serious drawback, but I fail to see why it wouldn't
always be better to use the RegularStepGradientDescent optimizer
instead.  I presume both optimizers are in the toolkit for a good
reason.  So, to put things in perspective, what I'd like to know is
what the drawbacks of the RegularStepGradientDescent optimizer are,
because to me it really looks like it will always be the better choice.
I'd appreciate it if anyone can provide me with any clues.
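For contrast, here is the same toy problem with my understanding of the
regular-step scheme (again just a stand-alone C++ sketch, not the
actual ITK implementation; the halve-the-step-on-direction-change rule
and the parameter values are assumptions on my part): the gradient only
supplies a direction, and the step length is a fixed value that is
reduced whenever the gradient direction reverses:

  // Regular-step descent on the same toy metric f(x) = (x - 3)^2.
  // The gradient is only used for its direction; the step length is a
  // fixed value that is halved whenever the gradient direction
  // reverses, until it falls below a minimum step length.
  #include <cstdio>

  int main()
  {
    double x = 0.0;
    double stepLength = 2.0;               // assumed maximum step length
    const double minimumStepLength = 0.01; // assumed stopping threshold
    double previousDirection = 0.0;

    for (int iter = 0; iter < 20 && stepLength > minimumStepLength; ++iter)
    {
      double gradient = 2.0 * (x - 3.0);
      double direction = (gradient > 0.0) ? -1.0 : 1.0;  // descend
      if (direction * previousDirection < 0.0)
        stepLength *= 0.5;                 // overshot: shrink the step
      x += direction * stepLength;
      previousDirection = direction;
      std::printf("iter %2d  x = %8.5f  step length = %8.5f\n",
                  iter, x, stepLength);
    }
    return 0;
  }

If that picture is right, the step length no longer depends on the
derivative scale, which is exactly why I don't see what the catch is.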

Thanks in advance!

-Bjorn
-- 
The History of the Universe
Chapter 1: Bang!  Chapter 2: Sss...  Chapter 3: Crunch!
The End