I first distinguish between two general types of errors.
Input errors are errors that can be controlled directly while devising and implementing a model of an N-body system. Input errors may also be called approximations, and they divide into modelling approximations, which simplify the system being simulated, and implementation approximations, which are introduced when the model is realized numerically.
Output error measures the difference between a simulation's output and the real system it models, and results from the cumulative effect of all the input errors. A simulation with small output error is said to have high accuracy. Given that N-body systems are chaotic, and that their simulation introduces the above input errors, we must now ask precisely what we mean by the ``accuracy'' of a simulation. Amazing as it may seem, there is currently no clear definition of simulation accuracy [29]. Obviously, attempting to follow the individual paths of all N particles is infeasible; Goodman, Heggie and Hut [10] show that this would require O(N) digits of precision. On the other hand, most astronomical publications quote energy conservation as their only measure of output error, even though there are infinitely many solutions with equal energy but vastly different phase-space trajectories. Some of these simulations even use an integrator, such as leapfrog, that conserves energy almost exactly by construction, in which case quoting energy conservation is of dubious merit, because the integrator keeps the energy error small no matter how large the other errors become!
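To make the last point concrete, the following sketch (plain Python with NumPy; the orbit and timestep values are my own illustrative choices) integrates a mildly eccentric two-body orbit with leapfrog at a coarse timestep and compares it against a much finer-timestep reference run: the relative energy error remains small and bounded, while the positional error grows steadily and ends up orders of magnitude larger.

\begin{verbatim}
import numpy as np

def accel(x):
    # Acceleration about a unit point mass at the origin, G = 1.
    return -x / np.linalg.norm(x) ** 3

def leapfrog(x, v, dt, t_end):
    # Kick-drift-kick leapfrog integration of a test particle.
    for _ in range(int(round(t_end / dt))):
        v = v + 0.5 * dt * accel(x)
        x = x + dt * v
        v = v + 0.5 * dt * accel(x)
    return x, v

def energy(x, v):
    return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(x)

x0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.1])   # mildly eccentric orbit
E0 = energy(x0, v0)

x_c, v_c = leapfrog(x0, v0, dt=0.05,  t_end=100.0)    # coarse run
x_f, v_f = leapfrog(x0, v0, dt=0.002, t_end=100.0)    # fine reference run

print("relative energy error: ", abs(energy(x_c, v_c) - E0) / abs(E0))
print("position error vs ref: ", np.linalg.norm(x_c - x_f))
# The energy error stays small and bounded, yet the trajectory error is
# orders of magnitude larger and keeps growing with integration time.
\end{verbatim}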
In large N-body simulations, one is not usually concerned with the precise evolution of individual particles, but instead with the evolution of the distribution of particles. Most practitioners know that the exponential magnification of errors means they cannot possibly trust the microscopic details, but they believe that the statistical results are independent of the microscopic errors, although little work has been done to test this belief [10].
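The standard numerical experiment behind this statement is easy to sketch. The toy setup below (25 equal softened point masses with G = 1; the parameters are my own choices, not those of [10]) integrates two copies of the same system whose initial conditions differ by one part in 10^12; the logarithm of their phase-space separation grows roughly linearly in time, i.e., the error e-folds on a roughly fixed timescale.

\begin{verbatim}
import numpy as np

def accel(pos, m, eps=0.05):
    # Softened pairwise accelerations for equal masses m, with G = 1.
    d = pos[:, None, :] - pos[None, :, :]
    r2 = (d ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(r2, np.inf)                   # no self-force
    return -m * (d / r2[..., None] ** 1.5).sum(axis=1)

def kdk(pos, vel, m, dt):
    # One kick-drift-kick leapfrog step.
    vel = vel + 0.5 * dt * accel(pos, m)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accel(pos, m)
    return pos, vel

rng = np.random.default_rng(0)
N, dt = 25, 1e-3
m = 1.0 / N
pos1 = rng.normal(size=(N, 3)); vel1 = rng.normal(scale=0.5, size=(N, 3))
pos2 = pos1.copy(); vel2 = vel1.copy()
pos2[0, 0] += 1e-12                                # one coordinate nudged

for k in range(1, 10001):
    pos1, vel1 = kdk(pos1, vel1, m, dt)
    pos2, vel2 = kdk(pos2, vel2, m, dt)
    if k % 1000 == 0:
        sep = np.sqrt(((pos1 - pos2) ** 2).sum() + ((vel1 - vel2) ** 2).sum())
        print(f"t = {k * dt:5.1f}   ln(separation) = {np.log(sep):7.2f}")
# ln(separation) rising roughly linearly with t is the exponential
# magnification of errors; its inverse slope is the e-folding time.
\end{verbatim}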
Barnes and Hut [5] claim that astrophysical N-body simulations require only ``modest'' accuracy levels, but also concede that quoting energy conservation isn't enough, and that more stringent tests are needed.
An example of conservation of macroscopic properties is given by Kandrup and Smith [19]. They show that a histogram of the e-folding times of individual particles stays constant within statistical uncertainties, even though the phase-space distribution of those particles is vastly different for different initial conditions.
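A rough version of this measurement can be sketched as follows (my own toy proxy, not Kandrup and Smith's actual procedure or parameters): for each particle, record the time at which its positional deviation between twin runs has first grown by a factor of e, histogram those times, and repeat with different random seeds; the claim is that the histogram's shape, unlike the individual deviations, is statistically stable.

\begin{verbatim}
import numpy as np

def accel(pos, m, eps=0.05):
    # Softened pairwise accelerations for equal masses m, with G = 1.
    d = pos[:, None, :] - pos[None, :, :]
    r2 = (d ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(r2, np.inf)                   # no self-force
    return -m * (d / r2[..., None] ** 1.5).sum(axis=1)

def kdk(pos, vel, m, dt):
    # One kick-drift-kick leapfrog step.
    vel = vel + 0.5 * dt * accel(pos, m)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accel(pos, m)
    return pos, vel

def efold_time_histogram(seed, N=30, dt=1e-3, nsteps=10000, stride=100):
    rng = np.random.default_rng(seed)
    m = 1.0 / N
    p1 = rng.normal(size=(N, 3)); v1 = rng.normal(scale=0.5, size=(N, 3))
    p2 = p1.copy(); v2 = v1.copy()
    p2[0, 0] += 1e-12                              # twin run, nudged
    times, seps = [], []
    for k in range(1, nsteps + 1):
        p1, v1 = kdk(p1, v1, m, dt)
        p2, v2 = kdk(p2, v2, m, dt)
        if k % stride == 0:
            times.append(k * dt)
            seps.append(np.linalg.norm(p1 - p2, axis=1))
    times, seps = np.array(times), np.array(seps)  # seps: (snapshots, N)
    grown = seps >= np.e * seps[0]                 # e-folded since first snapshot?
    tau = np.where(grown.any(axis=0), times[grown.argmax(axis=0)], np.nan)
    return np.histogram(tau[~np.isnan(tau)], bins=10, range=(0.0, nsteps * dt))[0]

for seed in (1, 2, 3):       # the bin counts should look statistically similar
    print(efold_time_histogram(seed))
\end{verbatim}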
However, until more stringent tests are applied to N-body simulations, we'll never know, for example, if our simulations of spiral galaxies produce spirals for the same reason that real spiral galaxies do.
We must now distinguish between the desired properties of simulations and the deviations that simulations make from those properties, i.e., the output errors they make. Several properties could be demanded of a simulation before we call it valid; I list them in roughly decreasing order of stringency:
(1) The simulation follows the exact solution of the model for the given initial conditions. This would be possible in principle for chaotic maps, but not for ODEs. For maps, an arbitrary-precision arithmetic package could be employed, but this is infeasible in practice because it requires keeping all the digits of every operation, and each multiplication typically doubles the number of digits (a demonstration follows this list).
For problems in which the map is really an ODE integration, like N-body systems, this is not possible even in principle, because no numerical integration routine is known that can give exact solutions to arbitrary nonlinear ODEs.
(2) The simulation closely follows the exact solution of the model for the given initial conditions, for a long time. If this property could be realized, all our troubles would be over, for it is a sufficient condition under any reasonable definition of simulation validity. Unfortunately, ODE integrations have truncation errors, whose magnitude is magnified exponentially on a short time scale. Goodman, Heggie and Hut [10] offer some solace in that the exact evolution could be closely followed for a long time if O(N) digits were kept, but this is currently infeasible. There seems little hope of obtaining valid simulations by this criterion.
(3) The simulation conserves the known global integrals of motion, such as the total energy, the linear and angular momentum, and the motion of the centre of mass. Certainly global conservation of these quantities is necessary for any reasonable definition of simulation validity, but it is unclear what other properties such conservation implies (a sketch of these integrals follows this list).
(4) The simulation closely follows, for a long time, the exact solution of the model for initial conditions slightly different from those used; such an exact solution is called a shadow. Since, with large simulations, we are interested only in the evolution of the distribution of particles, and since the initial conditions are usually generated from some random distribution anyway, this is almost as good as option (1). The study of shadowing relates precisely to this property (a toy illustration follows this list).
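First, the digit-doubling claim in item (1) is easy to demonstrate. The following sketch (Python's standard fractions module; the map and initial condition are my own arbitrary choices) iterates the chaotic logistic map in exact rational arithmetic; the denominator squares at every step, so the storage needed per iterate doubles.

\begin{verbatim}
from fractions import Fraction

x = Fraction(1, 3)                       # exact rational initial condition
for i in range(1, 16):
    x = 4 * x * (1 - x)                  # logistic map, evaluated exactly
    print(f"iterate {i:2d}: {len(str(x.denominator)):6d} digits in denominator")
# The digit count doubles each iteration: 15 iterates already need
# ~16,000 digits, and 60 iterates would need roughly 5e17 digits.
\end{verbatim}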
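Next, the global integrals of item (3) are cheap to monitor. Here is a minimal sketch for a point-mass system with G = 1 (the function layout is mine); note that verifying their constancy says nothing about the phase-space trajectory itself.

\begin{verbatim}
import numpy as np

def global_integrals(m, pos, vel):
    # m: (N,) masses; pos, vel: (N, 3) positions and velocities; G = 1.
    kinetic = 0.5 * (m * (vel ** 2).sum(axis=1)).sum()
    potential = 0.0
    for i in range(len(m) - 1):                 # pairwise sum over j > i
        r = np.linalg.norm(pos[i] - pos[i + 1:], axis=1)
        potential -= (m[i] * m[i + 1:] / r).sum()
    return {
        "energy":           kinetic + potential,
        "linear momentum":  (m[:, None] * vel).sum(axis=0),
        "angular momentum": (m[:, None] * np.cross(pos, vel)).sum(axis=0),
        "centre of mass":   (m[:, None] * pos).sum(axis=0) / m.sum(),
    }

rng = np.random.default_rng(0)
print(global_integrals(rng.random(8),
                       rng.normal(size=(8, 3)), rng.normal(size=(8, 3))))
\end{verbatim}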
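Finally, the shadowing property of item (4) is easiest to visualize outside the N-body setting. The toy sketch below uses the logistic map and the textbook backward-iteration construction for expanding 1D maps (this is not an N-body shadowing algorithm, merely an illustration of what a shadow is): given a deliberately noisy pseudo-orbit, iterating the inverse map backwards and always choosing the preimage branch nearest the noisy point yields a true orbit that stays uniformly close to it.

\begin{verbatim}
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)       # the chaotic logistic map

rng = np.random.default_rng(2)
n, noise = 10_000, 1e-6
p = np.empty(n)
p[0] = 0.3
for i in range(n - 1):
    # Noisy pseudo-orbit: the "simulation", with artificial per-step error.
    p[i + 1] = np.clip(f(p[i]) + noise * rng.normal(), 0.0, 1.0)

s = np.empty(n)
s[-1] = p[-1]
for i in range(n - 2, -1, -1):
    # Backward pass: the inverse map contracts, so errors shrink; pick
    # whichever preimage branch lies closer to the noisy point.
    root = 0.5 * np.sqrt(1.0 - s[i + 1])
    a, b = 0.5 + root, 0.5 - root
    s[i] = a if abs(a - p[i]) < abs(b - p[i]) else b

print("noise level:             ", noise)
print("forward defect of shadow:", np.abs(f(s[:-1]) - s[1:]).max())  # ~roundoff
print("shadowing distance:      ", np.abs(s - p).max())
# The shadowing distance is small, though it can spike where the orbit
# passes near the map's critical point x = 1/2.
\end{verbatim}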
What if shadowing also turns out to be an unattainable goal? We would then need to demand less stringent properties of our simulations, such as:
(5) The distribution of particles in the simulation closely follows the distribution of particles of the exact solution. This would be almost as good as shadowing, at least for collisionless systems, but it is unclear how one would go about proving that a simulation has this property.
(6) The statistical, macroscopic properties of the simulation are consistent with those of some real system similar to the one being modelled. This is the least stringent property I can think of that a large N-body simulation would need in order to be considered valid; i.e., it is the weakest necessary condition I can think of. Note that it is still more stringent than energy conservation. Again, however, it is unclear how one would go about proving that a simulation has this property.
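It is unclear how to prove properties (5) or (6), but one plausible starting point (my suggestion, not a standard test from the literature cited here) is to compare macroscopic summaries of two runs while ignoring particle identity, e.g., a two-sample Kolmogorov-Smirnov statistic on the particles' radii about the centre of mass; SciPy provides the test.

\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def radii(pos):
    # Distance of each particle from the centre of mass (equal masses assumed).
    return np.linalg.norm(pos - pos.mean(axis=0), axis=1)

# pos_a and pos_b stand in for the final positions of two runs that differ
# only at roundoff level; placeholder data here, for illustration only.
rng = np.random.default_rng(3)
pos_a = rng.normal(size=(1000, 3))
pos_b = rng.normal(size=(1000, 3))

stat, pval = ks_2samp(radii(pos_a), radii(pos_b))
print(f"KS statistic = {stat:.3f}, p-value = {pval:.2f}")
# A small statistic (large p-value) means the two radial distributions are
# statistically indistinguishable, even if individual particles differ wildly.
\end{verbatim}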