On the difference between dynamic and static diagonalizations:
"Of course, just knowing that these work on such a small matrix doesn't mean that we can determine absolute worth. For example, the dynamic shifted inverse power method of doom does significantly more work per iteration than the shifted inverse power method, but tends to converge in far, far fewer iterations. If one were to simply look at total time, it turns out that, in fact, it depends. Of course. The nondynamic method went faster (though possibly just due to machine speed – a larger matrix would be a better test; however, the larger matrix felt it was its patriotic duty to kill the machine. Twice.)"
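(For anyone who'd rather see the contrast than take past-me's word for it, here's a little sketch. This is my reconstruction, not the code from the assignment: I'm assuming "dynamic" means re-centering the shift at the current Rayleigh quotient each pass, while the plain shifted method keeps one fixed shift. Every function name and the 2x2 test matrix below are invented for illustration.)

```python
# Hypothetical sketch: fixed-shift inverse iteration versus a "dynamic"
# shift updated each pass via the Rayleigh quotient (assumed interpretation).

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def rayleigh(A, v):
    Av = matvec(A, v)
    return (Av[0] * v[0] + Av[1] * v[1]) / (v[0] ** 2 + v[1] ** 2)

def solve_shifted(A, s, b):
    # Cramer's rule for the 2x2 system (A - s*I) x = b.
    a, c = A[0][0] - s, A[0][1]
    d, e = A[1][0], A[1][1] - s
    det = a * e - c * d
    if det == 0.0:
        # The shift landed exactly on an eigenvalue; nudge it a hair.
        return solve_shifted(A, s + 1e-12, b)
    return [(e * b[0] - c * b[1]) / det,
            (a * b[1] - d * b[0]) / det]

def inverse_iteration(A, shift, v, dynamic, tol=1e-12, max_steps=200):
    s = shift
    lam = rayleigh(A, v)
    for k in range(max_steps):
        lam = rayleigh(A, v)
        Av = matvec(A, v)
        if max(abs(Av[i] - lam * v[i]) for i in range(2)) < tol:
            return k, lam            # converged: A v is (nearly) lam * v
        if dynamic:
            s = lam                  # the "dynamic" part: re-center the shift
        w = solve_shifted(A, s, v)
        n = max(abs(x) for x in w)   # rescale to avoid overflow
        v = [x / n for x in w]
    return max_steps, lam

A = [[3.0, 1.0],
     [1.0, 3.0]]                     # eigenvalues 4 and 2
k_fixed, lam_fixed = inverse_iteration(A, 3.5, [1.0, 0.3], dynamic=False)
k_dyn, lam_dyn = inverse_iteration(A, 3.5, [1.0, 0.3], dynamic=True)
print(k_fixed, k_dyn)                # the dynamic shift needs far fewer passes
```

Each dynamic pass costs an extra Rayleigh quotient plus the re-factorization implied by a new shift, but the shift homing in on the eigenvalue is what buys the much faster convergence the quote is talking about.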
On the inability of my flimsy algorithm to converge:
"Of course, right off, one has to be suspicious of a matrix which looks so very, very sketchy in terms of order. There does not seem to be any real, rational reason why we would use this matrix to test our code. There is."
Explaining why "(a/b)^n" works...
"Obviously, as you raise such a ratio to successively higher powers, the term belonging to the larger eigenvalue comes to dominate, and the code can return it. Lovely. Except that it requires us to have both: A) an eigenvalue which is strictly larger in magnitude than every other eigenvalue, and B) a clearly dominant eigenvalue – a ratio well away from 1 – or else it could take (to steal from Carl Sagan) billions, and billions, and billions of iterations"
I can't believe I got away with writing the way I did. I guess being able to form sentences is a skill most engineers lack :/
cranked out at 2:59 AM