The core idea is deceptively simple: every observable phenomenon in the entire universe can be modeled by a neural network. And that means the universe itself could be a neural network.
Vitaly Vanchurin, Professor of Physics at the University of Minnesota Duluth, published a remarkable paper on the arXiv preprint server last August entitled “The World as a Neural Network”. It slipped under our radar until today, when Victor Tangermann of Futurism published an interview with Vanchurin discussing the paper.
The big idea
According to the paper:
We discuss the possibility that the entire universe is, at its most basic level, a neural network. We identify two different types of dynamic degrees of freedom: “trainable” variables (e.g. bias vector or weight matrix) and “hidden” variables (e.g. state vector of neurons).
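To make that distinction concrete, here is a minimal sketch of the two kinds of variables in an ordinary neural network. This is purely illustrative and not from the paper: the tiny 3-neuron network, the tanh activation, and all dimensions are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Trainable" variables: the weight matrix and bias vector.
# These change slowly, only when the network learns.
W = rng.normal(size=(3, 3))   # weight matrix
b = np.zeros(3)               # bias vector

# "Hidden" variables: the state vector of the neurons.
# These change quickly, on every update step.
x = rng.normal(size=3)        # neuron state vector

# One update of the hidden variables: the neuron states evolve
# under the (momentarily fixed) trainable variables.
x_next = np.tanh(W @ x + b)   # shape stays (3,)
```

In Vanchurin's framing, it is the interplay of these two timescales, fast-moving neuron states and slow-moving weights, that he maps onto quantum and classical dynamics respectively.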
Basically, Vanchurin’s work here tries to explain the gap between quantum and classical physics. We know that quantum physics does a very good job of explaining what is going on in the universe at very small scales. For example, when we are dealing with individual photons, we can engage with quantum mechanics at an observable, repeatable, measurable scale.
But when we zoom out, we have to use classical physics to describe what is happening, because we lose the thread in the transition from observable quantum phenomena to classical observations.
The basic problem with devising a theory of everything – in this case, one that defines the very nature of the universe – is that it usually just swaps one proxy for God with another. Where theorists have postulated everything from a divine creator to the idea that we all live in a computer simulation, the two most enduring explanations for our universe are based on different interpretations of quantum mechanics. These are the “many worlds” and “hidden variables” interpretations, and they are the ones Vanchurin tries to reconcile with his “world as a neural network” theory.
To this end, Vanchurin concludes:
In this paper we have discussed the possibility that the entire universe is, at its most basic level, a neural network. This is a very bold claim. We are not only saying that artificial neural networks can be useful for analyzing physical systems or discovering physical laws, we are saying that this is how the world around us actually works. In this respect it could be considered a proposal for the theory of everything, and as such it should be easy to prove it wrong. All that is needed is to find a physical phenomenon that cannot be described by neural networks. Unfortunately (or fortunately) it is easier said than done.
Quick take: Vanchurin specifically says he is not adding anything to the “many worlds” interpretation, but that is where the most interesting philosophical implications lie (in this author’s humble opinion).
If Vanchurin’s work holds up in peer review, or at least leads to greater scientific interest in the idea of the universe as a fully functioning neural network, then we will have found a thread to pull on toward a successful theory of everything.
If we are all nodes in a neural network, what is the network for? Is the universe one giant closed network, or a single layer in a larger one? Or perhaps we are just one of trillions of universes connected to the same network. When we train our own neural networks, we run thousands or millions of cycles until the AI is properly “trained”. Are we just one of countless training cycles serving some larger purpose of a machine bigger than the universe itself?
You can read the entire paper here on arXiv.
Published on March 2, 2021 – 19:18 UTC