One of the fundamental limitations of current N-body simulations of stellar systems is that they usually contain several orders of magnitude fewer particles (``stars'') than the real systems they model. Each simulated particle therefore feels a much ``lumpier'', more granular gravitational field than a star in a real system would. For collisional systems this may mean that timescales and statistics are distorted by some approximately computable amount, but for collisionless systems this spurious collisionality is simply a source of error. The effect of close encounters can be moderated via force softening, but the granularity of the gravitational potential remains significant even for particles separated by large distances ([75, section 1.3a]). A common force computation method for collisionless systems is the particle-mesh method (e.g., ), in which softening is implicit and equal to the distance between mesh points. For particle-particle or tree methods of force computation, softening is added explicitly by modifying the gravitational force as
\begin{equation}
\mathbf{F}_{ij} = \frac{G m_i m_j \,(\mathbf{r}_j - \mathbf{r}_i)}{\left( |\mathbf{r}_j - \mathbf{r}_i|^2 + \epsilon^2 \right)^{3/2}},
\end{equation}
where $\epsilon$ is the softening parameter. It is usually chosen to approximate the average inter-particle distance. However, Sellwood  notes that if a disk is being simulated in two dimensions, then the softening parameter can instead be used to approximate a disk of thickness . He also argues that simple smoothing of the gravitational potential is probably an oversimplification, since real spiral galaxies contain giant molecular clouds and star clusters that make the potential significantly less smooth than the theorist's ideal. He includes a diagram clearly showing that spurious spiral structure can develop in a cold disk as a result of discreteness noise: the structure is clearly visible in a disk of 2,000 particles after only one revolution, and is visible even in a disk of 20,000 particles after a few revolutions. Discreteness noise can also introduce spurious heating of a collisionless cold disk, so that artificial disk-cooling mechanisms must be introduced [68, 57].
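The Plummer-softened pairwise force can be sketched as follows; this is a minimal illustration, with the function name, array layout, and default $G = 1$ units chosen here for convenience rather than taken from any particular code:

```python
import numpy as np

def plummer_force(r_i, r_j, m_i, m_j, eps, G=1.0):
    """Plummer-softened gravitational force on particle i due to particle j.

    The Newtonian 1/r^2 kernel is replaced by r/(r^2 + eps^2)^(3/2),
    so the force remains finite as the separation goes to zero.
    """
    dr = r_j - r_i                               # vector from i toward j
    r2 = np.dot(dr, dr)
    return G * m_i * m_j * dr / (r2 + eps**2) ** 1.5
```

At separations much larger than `eps` this reduces to the ordinary Newtonian force, while at zero separation the force vanishes instead of diverging.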
Dyer and Ip  note that equation (2) is physically inconsistent: particles act like Plummer distributions when they contribute to the field, but are points when the field acts upon them, leading to a violation of Newton's Third Law and thus of conservation of momentum. They fix this with a model in which softened particles are treated as uniform-density spheres. Non-overlapping spheres interact as points, while a polynomial is used to compute the force between spheres that overlap. (The spheres are not solid in any physical sense; they can pass through each other like ``ghosts''.)
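The single-sphere case underlying this model can be sketched as below. This is not Dyer and Ip's two-sphere overlap polynomial, only the elementary uniform-sphere force law that motivates it: linear inside the sphere by the shell theorem, Newtonian outside, continuous at the surface.

```python
def uniform_sphere_force(r, M, R, G=1.0):
    """Radial force magnitude on a unit test mass at distance r from the
    centre of a uniform-density sphere of total mass M and radius R.

    Inside the sphere only the enclosed mass attracts (shell theorem),
    so the force grows linearly with r; outside, the sphere acts as a
    point mass, and the two branches join continuously at r = R.
    """
    if r < R:
        return G * M * r / R**3    # enclosed mass M*(r/R)^3, divided by r^2
    return G * M / r**2            # ordinary point-mass force outside
```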
When approximate force computation algorithms are used, they are generally the next most dominant source of error. Barnes and Hut  analyzed their tree code  for errors and found that it works well for collisionless systems, but their results were inconclusive about collisional systems. (Jernigan and Porter  introduce a tree method that is completely different from the Barnes and Hut algorithm, and works very well for collisional systems.) Barnes and Hut  found that errors in global conservation of energy were dominated by force errors, and concluded that a modest error tolerance in force of about one part in per particle was sufficient to conserve global energy to one part in , and that ``further improvements in tree methods for a fixed N is much less desirable than simply increasing N.'' Hernquist, Hut, and Makino  came to a similar conclusion after doing many experiments with various accuracies of force computation and values of N from to . They measured individual particle energies (which are conserved in collisionless systems), and found that, for a given N, the energy deviations were almost identical regardless of the force error, and that energy errors decrease uniformly as N is increased. In other words, the effects of discreteness noise dominate those of force errors, so that simply increasing N is the most desirable improvement that can be made to current N-body simulations. They state that they believe this result is ``independent of N'' (their emphasis). This author believes that they are probably correct for values of N likely to be used in the foreseeable future, but the statement is clearly false in general, because we could, in principle, use the actual N of a real system.
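A global energy diagnostic of the kind used in these conservation experiments can be sketched as follows; this is an illustrative direct-summation version (not a tree code), with names and conventions chosen here rather than taken from the cited papers:

```python
import numpy as np

def total_energy(pos, vel, mass, eps, G=1.0):
    """Total energy of an N-body snapshot with a Plummer-softened potential.

    pos and vel are (N, 3) arrays, mass has length N.  Tracking this
    quantity between snapshots is the usual global conservation
    diagnostic.  Direct O(N^2) pair summation.
    """
    kinetic = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    potential = 0.0
    n = len(mass)
    for i in range(n):
        for j in range(i + 1, n):          # each pair counted once
            r2 = np.sum((pos[i] - pos[j])**2)
            potential -= G * mass[i] * mass[j] / np.sqrt(r2 + eps**2)
    return kinetic + potential
```

Comparing the drift of this quantity across runs with different force tolerances and different N is what separates force errors from discreteness noise in the experiments described above.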
A bizarre alternative is offered by Bryant [11, 12], who introduces a model in which velocities are always bounded and stellar encounters are interpreted as mutual absorption and emission of the simulated bodies.