Friday, March 10, 2017

Artificial Intelligence (not Intelligibility)

“People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.” ― Pedro Domingos

In theory, A.I. shouldn't replace humanity. That's not to say it won't be quite disruptive. We're already seeing it. If you've got a job that can be done better and more cheaply by a robotic algorithm, start counting your days.

All those Uber and truck drivers are hanging by a thread. But they've got a few good years left. A.I. is going to bring down a bunch of higher-level workers too. It seems to me stock trading and legal advice can be done better by a machine. Perhaps I'd even prefer the machine to the Wall Street trader or lawyer.

But silicon is not carbon. As Kevin Kelly aptly noted, “in real life silicon suffers from a few major drawbacks. It does not link up into chains with hydrogen, limiting the size of its derivatives. Silicon-silicon bonds are not stable in water. And when silicon is oxidized, its respiratory output is a mineral precipitate, rather than the gaslike carbon dioxide. That makes it hard to dissipate. A silicon creature would exhale gritty grains of sand. Basically, silicon produces dry life.”

And last time I checked, humans are not dry life. What is dry is when we allow ourselves to become a closed system. And this closed posture toward life (or the buffered self, as Charles Taylor puts it) creates a lack of Self-awareness that prevents us from becoming fully human.

This is the gist of intelligibility: it is the very Source that renders intelligence possible (not to mention all the other higher potentials humans are designed for: creativity, love, beauty, goodness, truth). But A.I. just isn't going there, because open verticality is something it just ain't got, in actuality or potentiality, despite all our projections.

I recently read this article by Robert Epstein, who makes a good case that our brain is nothing like a computer: “The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?” (Epstein)

The more we learn about the brain, the less we know. It seems we are nowhere near disentangling the brain's complexity, much less understanding its interactions with consciousness. Perhaps the Buddhists are right, and the substrate for memories lies in storehouse consciousness (or Alaya). In other words, all these thoughts and memories live independently of our physical bodies.

If you see human intelligence (and beingness) as a mechanical process or a set of symbolic manipulations that can be isolated, measured, and optimized, then maybe you should find yourself a good A.I. wife in a few years. You probably don't care that much about the deeper sensibilities, self-reflection, virtues, sentiments, higher faculties, and relational nuances and subtleties that make us sentient beings.

It goes back to another, similar blog post, where I'll quote myself: “Stanley Kubrick was concerned that as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens out. In other words, artificial intelligence won't come to us; we will come to it.” Maybe that's why he made 2001's HAL so creepy.

So if A.I. continues to become more human-like while we become more machine-like, at what point do we meet? I wouldn't call that the Singularity; it's more like the relational loss of intelligibility.

No, in theory, A.I. shouldn't replace us. Unless we allow it.