Plato Reformulated

The question of whether information precedes matter, or vice versa, is not a new argument. In some respects, I’m wondering if Plato was right. The concept of an attractor that specifies the conformation of a system correlates pretty well with the Platonic notion of form. The advantage we have over Plato is that we understand these days that reality is relative — that objects only appear mass-like, and space only appears space-like, when we are speaking of reality on the scale of a human observer. Once we start looking at systems that are too large, too small, or too fast, these appearances start to evaporate. To give an example, look at the word Hawking uses to refer to matter-energy in the context of his theory about how stuff could escape from a black hole. Since he is referring to quantum effects, it would not be correct to call this stuff “matter” or “energy” as we experience such things in the macroscopic world; in particular, neither matter nor energy, as classically understood, could escape from a black hole. Whatever “stuff” is at the quantum level could, though. Hawking calls this stuff “information.”

The reason he calls it this is so gobsmackingly obvious, he seldom bothers to explain himself. In the matter of what’s more real — existence or essence — existence can’t win. The only question, then, is whether there is such a thing as essence. The only alternative view these days is nihilism. I’ve argued elsewhere that nihilism seems to be gaining traction in the marketplace of ideas; my question is whether that is the most, or the least, rational way of looking at things.

Saying, or even proving, that information precedes matter does not answer that question. The term “information” is quite generic; in fact, if you use the term the way neuroscientists use it, it’s probably even more generic than you might think.

We look at the brain as a computer.  Computers deal in information.  From the viewpoint of the computer, information can be divided into two spaces: input vs output, where “input” refers to information that went into hardware design and software  development. “Garbage in – garbage out” or “GIGO” is a pithy statement that refers to these two information spaces.  We typically apply this term to complex software models, for example when we refer to climate models.  We don’t frequently apply this term to hardware, because if your computer isn’t wired up correctly, it probably won’t output anything at all; it probably won’t even turn on.  But we do occasionally encounter a design error in the chip that causes the computer to spit out erroneous results.

In the case of neural networks, we likewise distinguish two information spaces: the training set and the output set. GIGO applies to the training; if you want decent output, you must use due care to choose a good training set. This is why teachers get paid the big bucks. In the past, the challenge was to separate the wheat from the chaff: we figured the information space in the training set consisted of meaningful information and noise, and we planned to squelch out the noise as best we could. Since then, we have found that “TMI” is a real thing; too much meaningful information can overwhelm a naive neural network, leading to bad output. So we have backed off on the information dichotomy. Everything we feed the neural network during training is called “noise.” We classify noise as deterministic vs. stochastic. “Deterministic” noise has some clearly defined relationship with the desired output. “Stochastic” noise is more or less random. Now, there’s a reason we use the term “stochastic” rather than “random.” Stochastic noise is not predictable in context. Training a neural network is an exercise in theory-building; the network is developing theories about how one attribute is related, if at all, to another. Stochastic noise doesn’t have a relevant theory in this context; but one man’s noise might be another man’s information, and it is not at all clear that truly random noise actually exists. There’s a world of difference between “random variability” and “unaccounted-for variables”; randomness is an assumption, not a fact.
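To make that distinction concrete, here is a minimal sketch in Python. None of it comes from an actual experiment; the data, the cubic term, and the straight-line “theory” are all made up for illustration. The structured error plays the role of deterministic noise (it has a clear relationship to the target, even though this particular model can’t capture it), and the random term is stochastic noise that no amount of training will ever explain.

    # Illustrative only: a synthetic training set with a learnable signal,
    # a structured ("deterministic") error term, and unpredictable
    # ("stochastic") noise.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=1000)

    signal = 2.0 * x                          # relationship a simple model can learn
    deterministic_noise = 0.5 * x ** 3        # structured, but beyond a straight line's reach
    stochastic_noise = rng.normal(0.0, 0.3, size=1000)  # no theory relates this to x

    y = signal + deterministic_noise + stochastic_noise

    # Fit the simplest possible "theory": a line.  It recovers the signal,
    # absorbs part of the cubic term, and can do nothing with the random part.
    A = np.column_stack([x, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ w

    print("learned slope:", w[0])                # near 2, nudged by the cubic term
    print("residual variance:", residual.var())  # floored by the stochastic noise

Swap the straight line for a bigger network and the deterministic part of the residual shrinks; the stochastic part, by definition, does not.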

A pattern is emerging.  Information can be classified as to whether or not it is meaningful, recognizing that meaningfulness is relative.

I’ve been searching, without success, for an applicable concept in neuroscience, physics, computer science, or evolutionary biology (like that would ever happen). But I think the concept of mathematical Platonism might apply. This refers to the idea that math is the study of fundamentally true things, while allowing that we might not have the language to express this truth perfectly.

Previously, I proposed Dave’s Laws of information dynamics to describe the behavior of information in a system of reality that is higher, or deeper, than the Newtonian reality to which the laws of thermodynamics apply. The first law, which has to do with the conservation of information, may be over-broad. Questions:

  1. Can any information be created?  Maybe. Intelligent systems form theories.  Memory, for example, is a theory about the relationship between one attribute and another.  There is evidence that these theories are rational, so it’s possible that theorizing will be driven in a direction specified by the attractor.  But that doesn’t rule out the possibility of innovation.
  2. Can any event change the game?  I think probably not. I think information has to follow a gradient that is specified by the relationship between initial conditions and the attractor. Complexity could give rise to intelligent systems capable of producing information, but I think complexity can only approach the attractor.  It can’t redefine the attractor, and it should follow that it can’t redefine the initial conditions either.
  3. Can information be lost?  Theoretically, I suppose.  It might be a stronger position to posit that information can be “lost” in the sense that, at any given time, the total amount of information in a universe can and will be degraded into a mixture of deterministic and stochastic noise.  This strikes me as one way of conceptualizing the information gradient (I try to write it down more formally just after this list).  So we would say that the information space of the initial conditions must be greater than that of the attractor, by a wide margin, and that the conversion of information into stochastic noise is what drives the system toward the attractor.
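Here is that gradient in symbols. This is strictly my own ad hoc notation, not anything standard: split the information content of a closed universe at time t into a still-meaningful part and a part already degraded into stochastic noise, and read the gradient as the claim that the degradation only runs one way.

    I(t) \;=\; I_{\text{meaningful}}(t) + I_{\text{stochastic}}(t),
    \qquad \frac{d\,I_{\text{stochastic}}}{dt} \;\ge\; 0

    I_{\text{meaningful}}(0) \;=\; I_{\text{initial conditions}} \;\gg\; I_{\text{attractor}},
    \qquad I_{\text{meaningful}}(t) \;\longrightarrow\; I_{\text{attractor}}

Read this way, the system approaches the attractor precisely because the surplus of meaningful information is being converted into stochastic noise; whether I(t) itself is conserved is exactly the question the first law leaves open.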

I have a bit of a tendency to equate “deterministic noise” with “information.”  I probably shouldn’t.  I don’t think a closed universe can manufacture its own deterministic noise, and I think that needs to be the first law, but I’m having a hard time finding the right way to express that concept.

I think where I want to go with the First Law might have something to do with Gödel’s theorem. Gödel was among the most influential of the mathematical Platonists. Gödel’s theorem is (believe it or not) a mathematical proof of the Cartesian fallacy. It appears he has proven that no logical system can derive its own axiomatic principles.

Gödel is basically saying that everything has a prerequisite. You are going to learn a certain set of information in Calc II, but first you have to take Calc I. If Gödel’s theorem is true, you can’t possibly “figure it out” just by taking Calc II; if you skip Calc I, you will never get it. Just remember: Gödel’s theorem is expressed as a mathematical concept, but it applies to information in general. If you believe there is such a thing as random noise, fine. Gödel has nothing to say about that. Any information that has any meaning in any context is subject to the rule.

If you look at it in terms of set theory, then no set can define a logical system. Any logical system must be part of a superset, which contains its axioms. So the initial conditions have to be a superset of the attractor.
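In symbols, the claim as I’m stating it here (a loose gloss on incompleteness, not the formal theorem) would look something like this, where S is the logical system, Ax(S) its axioms, and S′ the larger system that grounds it:

    S \not\vdash \mathrm{Ax}(S)
    \quad\Longrightarrow\quad
    S \subsetneq S' \ \text{with}\ \mathrm{Ax}(S) \subseteq S'

    \text{and, by analogy,}\qquad
    \mathrm{Attractor} \;\subsetneq\; \mathrm{Initial\ Conditions}

The analogy is doing a lot of work here; the only part I’m leaning on is that the grounding set is strictly bigger than the thing it grounds.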

How does this sound?  For any closed universe:

  1. No system can define its own initial conditions. (Gödel’s Theorem)
  2. Complexity approaches the attractor.
  3. Initial conditions define an attractor that specifies a conformation of time, space, matter, and energy.

 

Meh.

I don’t know how well Dave’s Laws describe ultimate reality, and I don’t  care.  Reality is always a relevant-range question.  From the viewpoint of what it means to be a human, I don’t think the Newtonian/thermodynamic model quite captures the relevant range.  My theory of information dynamics does, at least, capture the two prevailing concepts of what it means to be a human.  Both the monotheistic viewpoint and the mystical viewpoint are accommodated, I think.  It also speaks to free will, and its limits.  Which is also a relevant-range question.  It addresses our search for meaning, and it accommodates meaningful self-awareness.  I’m liking it so far.


 

Ok, ok.  You’re right.  This is an anthropic view of our universe.

In other words, I’m saying that the evolution of human life was an inevitable outcome of our universe. Alternatively, my model of our universe is based on the prevailing theory of how the brain works.  In that respect, it looks like I’m projecting the structure of my own mind onto all of creation.

In my defense, I think there should be something more than a passing resemblance; I mean, I’ve been living here my entire life. I was born here. In fact, this universe gave birth to me. Moreover, the connectionist theory of cognition is the same theory that explains how societies work, and how evolution works, and how ecosystems work, and how stellar nurseries work. It’s the story of complex systems, and you see that story everywhere around here. In general, I think this is just how things roll in our neck of the woods.

Remember, “our neck of the woods” does not equal The Universe, where the capital “U” means “everything.” I’m just talking about our universe, what we call “the visible universe.” Big place, but we know for a fact that’s not all there is. For example, some brane-cosmology models hold that gravity doesn’t come from our universe at all; it leaks in from the outside.

We have no idea what The Universe really looks like. So far, theorists have surmised the existence of a really big place called The Bulk, in which any number of “branes,” or matrices in which a visible universe can form, may be floating. Some think there is at least one other “visible universe” floating around out there, and there might be a bunch of them.

Evidently ours is defined by a set of initial conditions that favors the development of complexity in general, and favors the development of human-style intelligence in particular.  The one next door might only favor the development of cats, and the next one over might only contain enough information to specify salt crystals.  I’m not talking about the theoretical cat-universe.  I’m talking about this one.

It follows, then, that I’m not talking about God. To the extent we see God as the first cause, I don’t know how far down the chain of command we are. I know human beings are dependent upon our own initial conditions, and that those conditions are in turn dependent on superior information. Gödel’s theorem goes to infinity. To the extent that we conceptualize God as the one thing — whether as first cause, in the Freudian sense, or the set of all sets, in the Jungian sense — we aren’t going to figure it out in this time-space continuum. I guess the other thing we could say about our universe is that it is based on duality. I don’t think we can get away from that and still be actual humans. It is our destiny to think, and you can’t think about just one thing. You have to have at least two concepts in mind to bang together. That’s where thinking starts and ends. Throwing up your hands and saying, “it is what it is,” is not thinking, regardless of whether you say those words with despair or ecstasy.

So simmer down.   I’m not trying to get you in touch with the Great Whatever; I’m just trying to get you grounded in reality.  To be perfectly clear about it, “Everything” is not reality. Not from where you’re standing.   But guess what.  “Nothing” is not reality either.  Remember that the next time some smart-aleck post-modernist starts rambling about the illusory nature of truth.  Remind him that it wouldn’t kill him to think.  That’s all I’m saying.

 

 

 
