10 Simulating reality in complex situations
How we make predictions when multiple processes occur simultaneously
This is the eleventh post in this series. You may want to read the Introduction, titled The Myth of Scientific Uncertainty, and posts 1-9 first.
In both my areas of research, electrochemistry and mass spectrometry, multiple forces are the norm. In the first, there is the electrical attraction between the electrode and ions in solution, the complex nature of the electric field around the electrode, and the motion of the solution relative to the electrode. In the second case, electrical and magnetic forces with complex distributions affect the trajectory of ions in the instrument. Individually, the forces have predictable effects, but in combination, a single mathematical solution is often impossible.
When I began my scientific career, the only recourse was painstaking trial and error, adjusting the physical parameters of actual electrochemical cells. The general availability of high-speed computing changed all that through a process called simulation.
Here’s how simulation works. A force (generally a field strength) acting on the object of study is calculated for every point in the operating space. The effect of the force on the object can be calculated for every position it occupies. This is done for each known force acting on the object. Then from the object’s starting position, the computer calculates where the object will be in the next small increment of time as a consequence of all the forces. It then does so for the next time increment and so on. The trail of successive positions is the path of the object through the calculated region. Scientists no longer need to seek a single mathematical formula to resolve such problems.
The increments of time between each calculation need to be small so that there is only a minuscule change in each force over the change in position. The smaller and therefore more numerous the steps, the more accurate the result. The computing power required increases with the size of the space, the number of steps and objects followed, and the number of forces involved.
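The stepping procedure described above can be sketched in a few lines of Python. The force field here is a hypothetical spring-like pull toward the origin, standing in for whatever field map a real simulation would use; the names and numbers are illustrative, not drawn from any particular instrument:

```python
import math

def simulate_path(pos, vel, force_at, mass, dt, steps):
    """Trace an object's path by applying the net force at each position
    over small time increments (semi-implicit Euler stepping)."""
    path = [pos]
    for _ in range(steps):
        fx, fy = force_at(pos)            # net force at the current position
        ax, ay = fx / mass, fy / mass     # acceleration from F = ma
        vel = (vel[0] + ax * dt, vel[1] + ay * dt)   # update velocity first
        pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)  # then position
        path.append(pos)
    return path

# Hypothetical field: a constant pull toward the origin, like a spring.
def toward_origin(p):
    k = 1.0  # spring constant (arbitrary units)
    return (-k * p[0], -k * p[1])

# 10,000 tiny steps of 0.001 time units each trace out the trajectory.
path = simulate_path(pos=(1.0, 0.0), vel=(0.0, 1.0),
                     force_at=toward_origin, mass=1.0, dt=0.001, steps=10000)
```

For this particular force the exact answer happens to be a circle, which gives a convenient check: the traced points should stay close to radius 1. Halving `dt` (and doubling `steps`) would track that circle more closely, at double the computational cost, which is the accuracy-versus-effort trade-off described above.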
I worked out a means of focusing ions in a new type of mass spectrometer sixteen years ago using only the computing power of a laptop[1]. Gary Hieftje's group built an instrument following the prediction of the simulation, and it worked exactly as the simulation predicted[2]. The problem I solved by simulation was modest compared to the many complex systems scientists simulate in virtually every field of investigation.
Simulation has become an essential tool in all areas of science and engineering. Models of everything from water molecules to black holes can be found in scientists’ computers around the world. For example, if we can simulate the way a drug and an enzyme interact, other potential drugs can be ‘tested,’ even hypothetical molecules that have not yet been synthesized. Molecular simulation now complements the more traditional tests for the biological activity of test substances.
The degree to which computer simulations can mimic reality, and what it means if they do, are legitimate topics for discussion. The process being simulated does not actually occur in a series of micro steps, just as straight-line segments never make a true circle, no matter how short the segments. Thus, the results of computer simulations are approximations of reality. Depending on the model and the complexity of the system modeled, they can vary from extremely accurate, to probable, to just one possibility.
Verification of the simulation process comes from repeated correct predictions. Just as with the observational variances discussed in the earlier section, the degree of accuracy of simulations can be determined, and the calculation methods adjusted, to produce results adequate to the task. When all significant factors are incorporated into the simulation, the remaining influences act as uncontrolled variables.
In some simulations, there can be some variation in the forces acting in each step. When you allow for a range of conditions during a simulation and run it multiple times, each outcome is likely to be different. We see this in the prediction of a hurricane's path. Superimposing many repetitions of the simulation produces a range of paths that gives us a general trend, but the exact path becomes increasingly uncertain the farther you get from the starting point. The larger the effect of the variable factors influencing the result, the wider the divergence of solutions will be.
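The widening fan of hurricane tracks can be illustrated with a toy model: each run drifts steadily in one direction but gets a small random push at every step. The drift and noise values here are arbitrary placeholders, not meteorological quantities; the point is only that the spread among repeated runs grows with distance from the start:

```python
import random

def simulate_track(steps, drift=1.0, noise=0.3, seed=None):
    """One simulated path: steady drift plus a small random push each step,
    standing in for the variable forces acting on a storm."""
    rng = random.Random(seed)  # seeded so each run is reproducible
    x = 0.0
    track = [x]
    for _ in range(steps):
        x += drift + rng.gauss(0.0, noise)  # known force + uncontrolled variation
        track.append(x)
    return track

# Superimpose many repetitions of the simulation, each with different
# random pushes, and measure how far apart the paths have spread.
runs = [simulate_track(steps=50, seed=i) for i in range(200)]

def spread(step):
    vals = [run[step] for run in runs]
    return max(vals) - min(vals)

early, late = spread(5), spread(50)
```

Early in the runs the paths are bunched together; by the final step they have diverged much more widely, which is exactly the behavior of the familiar hurricane "cone of uncertainty."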
This is how, by mapping the forces in a space and applying simple force equations to objects over small increments of time, we can predict the outcomes in complex systems. The digital computer has made it a practical and commonly used tool. The degree to which the results correspond to the real world requires empirical confirmation.
In the next post, we will look at the systems scientists have devised to reduce ambiguity in communication. They are essential to progress, but they can hinder radical discovery.
Please spread the word that there are some things we know for sure, and we can logically demonstrate how we know that.
[1] Christie G. Enke and Gareth S. Dobson, Anal. Chem. 79, 8650-8661 (2007)
[2] Alexander W. G. Graham, Steven J. Ray, Christie G. Enke, Charles Barinaga, David W. Koppenaal, and Gary M. Hieftje, J. Am. Soc. Mass Spectrom. 22, 110-117 (2011)