QT argue that if the refinement algorithm fails, there is good reason to believe that no shadow exists. They offer two arguments. First, from more rigorous studies of simpler systems, glitches are known to exist and are not merely a failure of any particular refinement algorithm. Second, QT's results are consistent with a conjecture by GHYS on the frequency of glitches.
However, during the many shadowing runs done for this thesis, I came across several cases in which a shadow failed to be found for shadow steps $0\ldots S$, but a shadow was found for the superset $0\ldots 2S$. This occurred for various values of $S$ from 1 up to 32, where shadow steps were of size 0.1, and tended to occur more often with larger $M$. Thus, there were cases in which a shadow was not found for a noisy segment 3.2 time units long, but one was found for an extension of the same segment out to 6.4 time units.
Clearly, if a shadow exists for 6.4 time units, then its first 3.2 time units must also be a valid shadow of the first 3.2 time units of the original noisy orbit. This was verified by using the first half of the 6.4-unit shadow as an initial guess for the 3.2-unit shadow; the refinement procedure then quickly succeeded in constructing a shadow with the standard boundary conditions for a 3.2-unit shadow. Thus, the refinement procedure initially failed to find a shadow for the 3.2-unit noisy orbit, even though one exists.
One of the smaller noisy orbits for which this occurred was also re-tested with all the above optimizations turned off, with the same result. Thus this phenomenon does not seem to be related to the optimizations.
It is difficult to say why this happens, but it certainly seems significant. A similar phenomenon occurs with Newton's method: whether it converges often depends critically on the initial guess; one guess may converge to a zero while a nearby guess does not. To lessen the chance of this failure, it is recommended that, if the refinement procedure fails to find a shadow for a length $S$, the noisy orbit's length be increased (e.g., doubled) and shadowing tried again on the longer orbit. This is expensive, but necessary to lessen the chance of failing to find a shadow that exists.
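A minimal sketch of this extend-and-retry strategy is given below. The names refine_shadow(), extend_noisy_orbit(), copy_prefix(), and the Orbit type are hypothetical stand-ins for the refinement procedure and data structures used in this thesis; the sketch is meant only to make the control flow concrete.
\begin{verbatim}
/* Sketch of the extend-and-retry strategy.  refine_shadow(),
 * extend_noisy_orbit(), copy_prefix() and the Orbit type are hypothetical
 * stand-ins for the actual refinement procedure and data structures.
 * `guess' is assumed to enter initialized to the noisy orbit, the
 * standard initial guess. */

typedef struct { double *x; long nsteps; } Orbit;

/* Assumed: refines `guess' toward a shadow of the first S shadow steps of
 * `noisy'; returns 1 on success (result left in `guess'), 0 on failure. */
int  refine_shadow(const Orbit *noisy, long S, Orbit *guess);
/* Assumed: integrates further noisy steps until `noisy' has S steps. */
void extend_noisy_orbit(Orbit *noisy, long S);
/* Assumed: copies the first S steps of `src' into `dst'. */
void copy_prefix(const Orbit *src, long S, Orbit *dst);

int shadow_with_retry(Orbit *noisy, long S, Orbit *guess)
{
    if (refine_shadow(noisy, S, guess))
        return 1;                     /* shadow of steps 0..S found directly */

    /* Failure for 0..S does not prove that no shadow exists: double the
     * length of the noisy segment and try again, since any shadow of
     * 0..2S contains a shadow of 0..S as its first half. */
    extend_noisy_orbit(noisy, 2 * S);
    copy_prefix(noisy, 2 * S, guess); /* restart from the noisy orbit */
    if (!refine_shadow(noisy, 2 * S, guess))
        return 0;                     /* could double again instead of quitting */

    /* Use the first half of the 2S-step shadow as the initial guess for
     * the original 0..S problem; as observed above, refinement then
     * converges quickly with the standard boundary conditions. */
    guess->nsteps = S;
    return refine_shadow(noisy, S, guess);
}
\end{verbatim}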
There are several other interesting issues concerning reliability.
Is it necessary to refine an orbit until the 1-step errors
are close to the machine precision? QT noted that refinement sequences usually either showed geometric convergence within the first few refinements, or else did not converge at all.
Most of the experiments in this thesis showed similar behaviour.
Thus it may seem that if geometric convergence is observed, it is safe to assume that it will continue, and that refinement can be stopped before reaching the machine precision. However, I have also observed several cases in which refinement progressed quickly at first, but then the 1-step errors stopped decreasing before reaching the machine precision. These cases are arguably the most interesting ones: why did refinement work at first but then fail? What characteristics of the orbit make it hard to shadow?
If refinement works at first but fails before reaching the machine precision, does this mean there exist cases in which hypothetical refinement beyond the machine precision would have failed? If so, then refinement to machine precision does not always imply the existence of a true shadow orbit. Sometimes this may simply be a word-length problem: perhaps convergence would have continued if more floating-point digits were available. I do not believe this is always the case, however, because convergence sometimes stopped 6 decimal digits above the machine precision (i.e., at about $10^{-10}$).
One problem with the current standard machine precision of about $10^{-16}$ is that there is not much room between the scale at which geometric convergence starts and the machine precision. Thus it may be difficult to recognize geometric convergence when it occurs.
Possibly, more research into higher-precision refinement should be conducted.
Conversely, I have seen cases in which refinement seemed not to be progressing well for several steps, but eventually refinement to machine precision succeeded. This phenomenon is also sometimes observed when using Newton's method on standard zero-finding problems, so it may or may not be an issue specific to shadowing.
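One way to make these stopping decisions concrete is sketched below: classify each refinement sequence by its history of maximum 1-step errors, stopping with success near the machine precision, flagging a stall when several iterations in a row make essentially no progress while still well above roundoff, and otherwise continuing. The function and its constants are illustrative assumptions, not the criteria used for the experiments in this thesis; and, as just noted, a sequence flagged as stalled may still eventually converge, so such a test is a heuristic at best.
\begin{verbatim}
/* Illustrative stopping test for the refinement loop, based on the history
 * err[0..n-1] of maximum 1-step errors (one entry per refinement
 * iteration).  The constants are assumptions chosen for illustration. */

#define MACH_EPS 1e-16               /* approximate double-precision epsilon */

enum RefineDecision { KEEP_REFINING, SUCCESS, STALLED };

enum RefineDecision refine_decision(const double *err, int n,
                                    double success_tol)
{
    if (err[n-1] <= success_tol)
        return SUCCESS;              /* 1-step errors close enough to roundoff */

    if (n >= 4 &&
        err[n-1] > 0.5 * err[n-4] && /* under 2x improvement in 3 iterations */
        err[n-1] > 100.0 * MACH_EPS) /* ...while still well above roundoff   */
        return STALLED;              /* the troublesome case discussed above */

    return KEEP_REFINING;
}
\end{verbatim}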
Finally, when deciding whether a refinement is successful, how close to machine precision is ``close''? I have found empirically that the minimum achievable maximum 1-step error increases slightly with the number of moving particles $M$ (i.e., with the dimension of the problem). Given a machine precision of about $10^{-16}$, the maximum 1-step errors reach a plateau somewhat above the machine precision, at a level that grows roughly in proportion to $M$ and to the size $h$ of the shadow step. Although I have not tried to precisely quantify this effect, there are reasonable arguments to explain it. First, when large shadow steps are used, it should be no surprise that even the accurate integrator's output will vary under slight perturbations of the input, and that these variations will be larger if the shadow step is longer.
Since the errors may be assumed to be random at each internal step taken by the accurate integrator, we may expect a variation-of-sum effect that grows as the square root of the number of internal steps, i.e., as $\sqrt{h}$ rather than linearly in $h$; this is probably a better estimate than the linear growth assumed above. Furthermore, the minimum achievable maximum 1-step error may be expected to grow as $\sqrt{M}$, rather than $M$, because the Euclidean norm used to measure the 1-step error has $O(M)$ terms inside the radical. Further study of a reasonable limit on the achievable 1-step error may prove fruitful.
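The two scaling arguments can be combined into a rough order-of-magnitude estimate. This is only a sketch of the reasoning above, not a formula fitted to the experiments; the symbols $n$, $h_{\mathrm{int}}$, and $\varepsilon$ are introduced here purely for illustration. If each of the $n \approx h/h_{\mathrm{int}}$ internal steps taken by the accurate integrator over one shadow step contributes an independent error of size roughly the machine precision $\varepsilon \approx 10^{-16}$, and the Euclidean 1-step error norm sums comparable contributions from $O(M)$ particles, then the smallest achievable maximum 1-step error should scale roughly as
\[
  e_{\min} \;\sim\; \varepsilon\,\sqrt{n}\,\sqrt{M}
           \;\approx\; \varepsilon\,\sqrt{h/h_{\mathrm{int}}}\,\sqrt{M},
\]
which grows as $\sqrt{h}$ and $\sqrt{M}$ rather than linearly in either, in keeping with the two arguments above.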
Another problem is bounding the maximum number of refinements allowed before the algorithm admits defeat, even if no other ``fail'' criterion has been met. I have found cases in which refinement had to switch back and forth several times between constant RUS and recomputed RUS, resulting in 40 or more total iterations (over 3/4 of which used constant RUS), eventually finding a successful shadow. It is hard to know how any particular upper limit on the number of refinements would affect reliability. I have arbitrarily chosen 100, although some may consider this high. Usually, however, if the number of iterations is more than about 10, then shadowing the orbit for twice as long will fail. In other words, if the number of refinement iterations is high, then there is a mild trouble spot somewhere in the orbit. This trouble spot will become even more of a problem as we attempt to build a longer shadow, because the volume of phase space around the trouble spot that can contain shadows shrinks as the attempted shadowing distance increases (see the discussion on ``brittleness'' of orbits in [8]). The total number of refinements, when refinement is working well, seems independent of the dimension of the problem, and is typically about 5.
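A minimal sketch of what such an outer loop might look like is given below, with the hard cap of 100 iterations and a simple policy for switching between the cheap constant-RUS update and the more expensive recomputed RUS. The function refine_once() and the switching rule are hypothetical stand-ins for the procedures described in the text, not the code actually used for the experiments.
\begin{verbatim}
/* Sketch of an outer refinement loop with a hard iteration cap.  The
 * function refine_once() and the switching policy are hypothetical
 * stand-ins for the procedures described in the text. */

#define MAX_REFINEMENTS 100             /* arbitrary cap discussed above */

enum RusMode { CONSTANT_RUS, RECOMPUTED_RUS };

/* Assumed: performs one refinement iteration using the given RUS mode and
 * returns the resulting maximum 1-step error. */
double refine_once(enum RusMode mode);

int refine_loop(double success_tol)
{
    enum RusMode mode = RECOMPUTED_RUS; /* start with fully recomputed RUS */
    double err, prev_err = 1e300;

    for (int iter = 0; iter < MAX_REFINEMENTS; iter++) {
        err = refine_once(mode);
        if (err <= success_tol)
            return 1;                   /* shadow found                    */

        /* Use the cheap constant-RUS update while the error keeps
         * dropping; recompute the RUS whenever progress stalls.  In
         * practice the loop may switch back and forth several times. */
        mode = (err < prev_err) ? CONSTANT_RUS : RECOMPUTED_RUS;
        prev_err = err;
    }
    return 0;                           /* admit defeat after the cap      */
}
\end{verbatim}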