Wetware and software

May 31, 2017

I got lucky: my interests lie at the intersection of two exciting trends which I never predicted would come together:

  1. a rapid improvement in understanding how our minds work;
  2. software being based on models akin to how our brain works.

Perhaps unsurprisingly, this has led me to an unusual viewpoint: I think software and wetware (our brains and the ‘programs’ that run in them) are kinda similar.

And they’re getting more similar at an increasing rate. As more decisions are driven by neural nets and ‘deep learning’, the difference between ‘us’ and ‘them’ is going to get smaller and smaller. One runs on avocado toast and Red Bull, the other on silicon and electricity.

Of course, people are still wary about how well algorithms can make decisions. And for much of history, they probably haven’t been great.

But since we underestimate compound growth in many areas, I think this is one to look out for. Machines are coming close to having the same raw computing power as a human brain, and it’s just a question of setting them up correctly.
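
Here’s a back-of-the-envelope sketch in Python of why compound growth is so easy to underestimate; the 40% annual improvement rate is a number I made up purely for illustration:

    # Compound growth sneaks up on us: a modest-sounding annual improvement
    # turns into a huge multiple over a decade.
    rate = 0.40    # assumed 40% improvement per year (illustrative only)
    years = 10
    print((1 + rate) ** years)    # ~28.9x better after ten years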

So now it’s a question of how AI/software will link together with our wetware across society. And I think there are some wonderful similarities and differences between how the architecture of the brain evolved and how software neural nets have evolved and will keep evolving.

Towards modular, integrated components

John Allen’s The Lives of the Brain is an excellent journey following the evolution of our brain. Let’s consider how we see:

→ Our eyes have rods and cones, each specialized to trigger based on color, movement and direction.

→ These signals then go to specialized areas that interpret multiple firings as very simple base shapes: vertical or horizontal lines, curves, etc.

→ These signals are aggregated into larger shapes and holistic ‘gestalts’.

→ Next, we’ve got the recognition phase: matching a set of shapes to an idea or concept in our head. This is actually really neat, in that we can see a rabbit in both these pictures.

→ Finally, we have cross-functional associative areas which allow us to relate the idea of rabbit to other ideas, impressions, etc.
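
Purely as an analogy, here’s a toy sketch in Python of that kind of layering. It is not a model of how vision actually works; every function, stage, and threshold below is invented for illustration:

    # Each stage consumes only the previous stage's output; no single
    # function ever "sees" the whole picture. All names are illustrative.

    def detect_light(pixels):
        # stage 1: raw receptors fire on local light levels
        return [p > 0.5 for p in pixels]

    def find_edges(firings):
        # stage 2: very simple features, e.g. where neighboring receptors disagree
        return [a != b for a, b in zip(firings, firings[1:])]

    def group_shapes(edges):
        # stage 3: aggregate simple features into a larger summary
        return sum(edges)

    def recognize(shape_count, known_objects):
        # stage 4: match the aggregate against stored concepts
        return min(known_objects, key=lambda k: abs(known_objects[k] - shape_count))

    def associate(concept, associations):
        # stage 5: relate the recognized concept to other ideas
        return associations.get(concept, [])

    pixels = [0.1, 0.9, 0.8, 0.2, 0.7]
    concept = recognize(group_shapes(find_edges(detect_light(pixels))),
                        {"rabbit": 3, "carrot": 1})
    print(concept, associate(concept, {"rabbit": ["ears", "Easter", "carrots"]}))

The silly logic inside each stage isn’t the point; the point is that each stage only consumes the previous stage’s output, and each can be understood, tested, or broken independently of the others.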

We know about this modular system because when small parts break down, very specific and local problems occur. There are people who will swear they are blind, but will flinch if provoked, because their lower-order visual systems still work. (Sidenote: read Blindsight.)

The key thing here is that there is no single “the algorithm”. It’s a bunch of semi-independent modular components which have layered on and integrated with each other over time (lots and lots of time).

And that’s true of modern software too. There is no one function: there are literally tons of modularly written and tested pieces which interact with each other. But it’s come about much faster, built to handle tasks too boring, mundane, or complex for humans to do reliably.

Consider that it’s best practice to always test an entire ‘entity’ of software (a package, a program, an API) using two different kinds of tests:

  • ‘unit tests’ check that a single function does the correct thing on its own;
  • ‘integration tests’ check that two functions interact correctly, especially if one depends on the other.

While I’m not aware of any, I can imagine creating ‘emergent properties’ tests to look for dramatically unexpected behavior across an entire system. The key is that we build up modular components that dependably do what they specifically do, and interact with each other reliably.
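
Concretely, here’s a minimal sketch of the first two kinds, written as pytest-style test functions; the two functions under test are invented for the example:

    # Two made-up functions: greet() depends on normalize() doing its job.
    def normalize(text):
        """Lower-case and strip whitespace from raw input."""
        return text.strip().lower()

    def greet(name):
        """Build a greeting from an already-normalized name."""
        return f"hello, {name}!"

    # Unit test: checks one function in isolation.
    def test_normalize_unit():
        assert normalize("  Rabbit  ") == "rabbit"

    # Integration test: checks that the two functions work correctly together,
    # since greet() depends on normalize() having done its job.
    def test_greet_integration():
        assert greet(normalize("  Rabbit  ")) == "hello, rabbit!"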

The relative limitations of wetware and software

This post is somewhat inspired by a post Ben Carlson wrote a while ago on “The limitations of algorithms”. While I won’t dispute that software/hardware is limited, I think it’s helpful to note where/when/why it’s more or less limited than wetware. Let’s stay focused on where we should and shouldn’t use software or algorithms, not on whether or not they’re perfect.

Transparency and Auditability

When a human says “I think X”, you often can’t trace back why. Was it the news they listened to? What was the input? What process did they follow? Is there a bug in their wetware? Software is transparent*. If you want to know why an algorithm produced an output, it’s pretty much guaranteed you can feed it an input and watch the algorithm do what it does.

* Neural nets and other non-deterministic learning models are making this less true… by being more like us.
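
As a toy illustration of ‘feed it an input and watch what it does’, here’s a rule-based decision in Python whose every step gets logged; the rules and thresholds are invented for the example:

    def should_rebalance(drift, days_since_last, log):
        # record the exact input, then every rule considered
        log.append(f"input: drift={drift:.2%}, days_since_last={days_since_last}")
        if drift > 0.05:
            log.append("rule 1 fired: drift above 5% -> rebalance")
            return True
        if days_since_last > 365:
            log.append("rule 2 fired: over a year since last rebalance -> rebalance")
            return True
        log.append("no rule fired -> hold")
        return False

    log = []
    decision = should_rebalance(0.03, 400, log)
    print(decision)          # True
    print("\n".join(log))    # the full 'why', step by step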

Faster accumulation of knowledge

Our wetware systems are the best naturally evolved systems out there for learning from others and predecessors, and for improving within a single generation. But our speed pales in comparison to good software. Software systems can accumulate knowledge far faster than we can, in no small part because lessons learned by one instance can be quickly and perfectly incorporated by another. If one instance of an algorithm fails because of an unexpected input, we have it send back an error message. This lets us fix it, and re-deploy the fix to all instances.

Take a moment and imagine if every doctor had the experience and wisdom of all doctors collectively, across history and geography. Imagine how much faster medical knowledge could progress with immediately shared knowledge.

I agree with Ben that algorithms exist in a competitive space, and so they have to evolve. What worked last year won’t work this year. But algorithms learn faster than humans do, and once they’ve been updated to avoid a mistake, they never make the same mistake again. I wish I could say that for myself.
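
Going back to the ‘unexpected input’ example above, here’s a minimal sketch of that report-and-redeploy loop in Python; report_error is a stand-in for whatever shared error tracking a real deployment would actually use:

    def report_error(exc, raw_input):
        # stand-in for a shared error tracker / log service
        print(f"error report: {exc!r} on input {raw_input!r}")

    def parse_age(raw):
        return int(raw)

    def safe_parse_age(raw):
        try:
            return parse_age(raw)
        except ValueError as exc:
            # every deployed copy phones home with the same structured report,
            # so one failure anywhere can teach the whole fleet after the next deploy
            report_error(exc, raw)
            return None

    safe_parse_age("42")       # -> 42
    safe_parse_age("forty")    # -> None, plus an error report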

Garbage In, Garbage Out

Yes, totally. But human wetware is hardly better at preventing garbage inputs. Consider that we proactively select into news sources and social media where the views reinforce our own. Our Google searches are phrased to find answers we already agree with. Historically, we’ve probably been far better able to identify ‘garbage in’ in software than in our own minds. One of my papers is about the ‘bias blind spot’ of individual investors: we don’t account for our own biases when forecasting returns, but we do predict others’ errors. Let’s not let the beam in our own eye blind us.

The confirmation bias in the wetware

I believe we have to be especially wary of ideas or positions that feel good to us. Does it make you feel good to think that wetware is smarter than software? If so, be careful. One recent study shows how we’re biased towards wetware over software:

We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake.

The author guesses that the lack of trust is because we humans believe algorithms can’t learn, but humans can. I wonder whether, as software starts learning faster, younger generations will start believing the opposite: that humans can’t be trusted.

Bigger picture

Now, to be clear, this doesn’t mean machines will ‘be smarter’. In fact, in the near term most of their benefit will come not from being ‘smarter’, but from being a different kind of smart. There is a famous observation in computer science called Moravec’s paradox which I love:

it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

It can initially be hard to swallow that what is the apex of intelligence for most humans (speedy math and logic) is trivial for computers. But you can rest easy: there are still comparative advantages and “gains from trade” between us and machines. We’re still much better than them at many things.

Which brings me to the most perplexing question: can we create an intelligence very different from our own? Isn’t it funny that we’ve started using neural nets similar to our own wetware? Are we just a real-world Geppetto?

Would we recognize intelligence as such if a completely unintuitive architecture or process were performing it? Yes, I am completely aware of the irony of a wetware-based brain writing these thoughts out. I’m sitting down. I’m being humble. But these are exciting times.