Who’s In Charge: Us or Our Technology?

So who is in charge?  Who controls the flow of technology?  Is it us?  Or does the technology now control us?

We live in a technology-infused world.  Our current civilization sits on a foundation of machine power accumulated over the past few centuries.  We cannot separate ourselves from this cumulative support system without regressing to a pre-industrial way of life, which is something few of us either want or are equipped to deal with.  We have forgotten how to live that life, and despite the sentimentalism of our artists and poets, it is not returning unless a crash forces it upon us.  We will not return willingly.

The rise of this technologically mediated existence crept up on us.  At first the technology was resisted: it displaced human beings from their workplaces.  It took over the work.  It does the work better, more efficiently, more reliably, and in greater volume.  What’s not to like about that?

The disruption is not to like, but in the fullness of history it reads as an inconvenience rather than a reason for stopping.  People suffer, but society benefits.  Productivity rises.  Wealth increases and we all, eventually, enjoy the fruits of the rise of technology.

Or so the story goes.  And, largely, the story is correct.  History says so.

With the continual accumulation of technology comes a continuing increase in complexity.  Society is constantly sliced and diced into specialties, localities, and the networks between them.  The whole edifice develops along a trajectory that no one controls.  It controls itself.  We get swept up in the continual movement towards greater reliance on technology.  We need the extra productivity; it is, after all, what allows us to live this life.  So we are dependent.  We are subsumed into the flow of innovation.

It is one of the great ironies of recent history that a major achievement of the Enlightenment, the development of the individual as an agent operating within a natural environment, has been challenged by the need for us, as individuals, to surrender our independence to the machinery we depend upon.  Not just challenged, but replaced.  The complexity of modern life, with its interdependencies and constant reliance on the energy and information that power our technology, forces us into a solidarity with one another.  Our theories still reflect the original ideas of individuality; they preserve a picture that may have been truer before the rise of our modern technological landscape, but it is not true within it.

As we have accumulated more useful knowledge we have become more in thrall to it.  This is an age of information.  It is not an age of material things.  That is the threshold technology is moving us, inexorably, over.  And it’s producing some odd things.

A couple of news items sparked my curiosity in recent days.  First was a Richard Waters article in the Financial Times talking about the abilities of artificial intelligence and how AI capabilities are arriving at what he sees as a concerning frontier.  Apparently OpenAI’s system known affectionately as GPT-3 came up with a technique known as “in-context learning” — picking up new tasks from examples given in its prompt — in order to solve problems.  The system’s creators had not given it this capability.  GPT-3 was self-taught.  Is this a problem?  What else does GPT-3 know that its creators are unaware of?  When will it tell them?  Waters sees this emergence as a boundary we ought not cross without serious thought. He says: “There is an obvious downside to machines working things out for themselves”.

The problem with this fear is that most of us have arrived there already.

The slicing and dicing I mentioned above has already moved all of us away from being able to “know” whether the truths being told to us are in fact true.  Think of the frontiers of mathematics.  In some specialties there are very few “experts”.  Very few.  They are the ones who adjudicate the development of new knowledge.  They patrol the frontiers.  They mediate what is true or not.  [Let’s not get into discussions of “truth” here].  The point is that we rely on those in the specialty to produce an honest assessment of themselves.  How would we ever know whether they are telling the truth?  After all, progression through the academic ranks relies on acceptance within those ranks of a person’s “contribution”.  The entire thing could be a self-supporting scam for all we know.

So most of us have no idea about the efficacy of the knowledge out there.  We have no idea where it came from.  We have no idea of its truth.  Some of it affects us.  Most does not.  That it came from Einstein or GPT-3 seems a point of indifference.  Just add it to the pile.

George Dyson disagrees.  He sees great danger in where we are with respect to AI.  Not, ironically, due to its digital capabilities.  He’s rather dismissive of them.  Rather, he sees greater danger in what he calls the analog computing that runs in parallel.  I recommend you read his short essay published in one of John Brockman’s recent collections, this one called “Possible Minds: Twenty-Five Ways of Looking at AI”.

Here’s Dyson:

“There is no precise distinction between analog and digital computing.  In general, digital computing deals with integers, binary sequences, deterministic logic, and time that is idealized into discrete increments, whereas analog computing deals with real numbers, nondeterministic logic, and continuous functions, including time as it exists as a continuum in the real world”.

Later he adds this example:

“Many systems operate across both analog and digital regimes.  A tree integrates a wide range of inputs as a continuous function, but if you cut down that tree, you find that it has been counting years digitally all along.”

The real point emerges, though, from his emphasis that analog computing is where the real complexity arises and where, ultimately, control resides.  Dyson tells us that there are three laws of AI:

  1. Any effective control system must be as complex as the system it controls, which he attributes to Ross Ashby.
  2. That a defining characteristic of a complex system is that it constitutes its own simplest behavioral description; i.e., the simplest model of an organism is the organism itself.  Attempts to reduce the system’s behavior to formal description simply make things more complicated, not less.  This he attributes to von Neumann.
  3. Without attribution he then adds that the third law is that any system simple enough to be understood will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

It is this third law that gives pause, and ought to worry Waters.  It is possible to build a system without understanding it.  This is, according to Dyson, an insoluble loophole that AI developers have to confront.  Our relationship with AI will always be one of faith, not proof.  He concludes:

“We worry too much about machine intelligence and not enough about self-reproduction, communication, and control.  The next revolution in computing will be signaled by the rise of analog systems over which digital programming no longer has control.  Nature’s response to those who believe they can build machines to control everything will be to allow them to build a machine that controls them instead”.

In a sense we have arrived at that point already.  In our headlong rush to solve the great economic problem of the past we have come to rely on machinery and technology.  Nature is unforgiving.  Our escape has simply enmeshed us within a different trap.  Our reliance on technology is now existential.  Technology is necessary.  It is absorbed into our way of life and, perhaps even more so, into our psyches.

We are within the machine.

Earlier I mentioned “a couple of news items”.  One was the Waters article already discussed.  The other was a remarkably small throwaway comment within an Economist article discussing China and, to a lesser degree, the Communist Party’s attitude towards high-technology businesses and their information flows.  The paragraph starts by mentioning laws that some states and Western nations have put in place to protect consumer information.  It then goes on:

“But Chinese regulators are going further.  In a largely ignored, jargon-filled policy paper from the State Council, China’s cabinet, in April last year, data were named as a “factor of production” alongside capital, labour, land, and technology.”

The Information Age has arrived.  At least in China.  I would still prefer to see labor and capital dropped to the side as being dependent upon the mix of energy and information needed to perform work.  But, hey, it’s progress.  And the Economist’s rather snarky reference to “largely ignored, jargon-filled policy paper” could easily apply to 99% of what our academics produce in an average year.

Which reminds me of what Dyson says.  If an economy acts as a large complex system — think Hayek here — then it can only be understood as itself.  Any attempt to reduce it to formal descriptions only adds complications.  This is my problem with Hayek.  His attack on central planning relied on the notion that an economy was too large and diffuse for a central planner to manage it.  It was an inscrutable complex system.  That’s OK so far.  But it also disallows any other notion of planning.  Or at least it disallows any theory of explanation as to how the system works at all.  We simply would never be able to know.  The system is what it is.  It is its own description.  Theory is just an act of faith beyond that point.  It is a matter of opinion.  Economists who posit things like equilibrium are doing so simply to make their preferred explanations and their models work.  They have no “knowledge” of equilibrium.  No evidence.  So they make the jump of faith and shuffle on.  But the models are impressive.

Oh well.