Behavioral science reading list

I’m often asked what I think people should read as an introduction to behavioral science.

If you want the shortest, easiest introduction possible, go with The Little Book of Behavioral Investing.

Beyond that, it depends on what you’re looking to learn about.

For making yourself a better investor

Little Book of Behavioral Investing

Finance for Normal People

Personal Benchmark

For working as a financial advisor

What Investors Really Want

Nudge

Behavioral Investment Management (warning: quanty!)

For improving behavior through technology

The Smarter Screen

Designing for Behavior Change

 

Easy general reads

Thinking, Fast and Slow

Undoing Project

Predictably Irrational

Misbehaving

 

Academic overviews

A Course in Behavioral Economics

Thinking and Deciding

Heuristics and Biases

Choices, Values, and Frames

 

Wetware and software

I have to admit, I got lucky in that my interests lie at the intersection of two exciting trends which I never predicted would come together:

  1. a rapid improvement in understanding how our minds work;
  2. software being based on models akin to how our brain works.

Perhaps unsurprisingly, this has led me to an unusual viewpoint: I think software and wetware (our brains and the ‘programs’ that run in them) are more similar than we usually assume. And they’re getting more similar at an increasing rate. As more neural-net and ‘deep learning’ models drive decisions, the difference between ‘us’ and ‘them’ is going to get smaller and smaller. One runs on avocado toast and Red Bull, the other on silicon and electricity.

Of course, people are still wary about how well algorithms can make decisions. And for much of history, they haven’t been great (I’m looking at you, Clippy). But we underestimate compound growth in many areas, and I think this is one to watch. Machines are coming close to the raw computing power of a human brain; the remaining work is setting them up correctly. The real question is how AI/software will link together with our wetware across society. And I think there are some wonderful similarities and differences between how the architecture of the brain evolved and how software neural nets have evolved and will continue to evolve.

Towards modular, integrated components

If you’d like a good review of the evolution of the brain, I’d recommend John Allen’s The Lives of the Brain.

Let’s consider how we see:

→ Our eyes have rods and cones, receptors specialized to fire based on light, color, and movement.

→ These signals then go to specialized areas that interpret multiple firings as very simple base shapes: vertical or horizontal lines, curves, and so on.

→ These signals are aggregated into larger shapes and holistic ‘gestalts’.

→ Next, we’ve got the recognition phase: matching a set of shapes to an idea or concept in our head. This is actually really neat, in that we can recognize a rabbit across wildly different pictures of one.

→ Finally, we have cross-functional associative areas, which allow us to relate the idea of ‘rabbit’ to other ideas, impressions, etc.

We know about this modular system because when small parts break down, very specific and localized problems occur. There are people who will swear they are blind, yet will flinch if provoked, because their lower-order visual systems still work. (Side note: read Blindsight.)

The key thing here is that there is no single “algorithm”. It’s a bunch of semi-independent modular components which have layered on and integrated with each other over time (lots and lots of time).

And that’s true of modern software too. There is no one function: there are literally tons of modularly written and tested components that interact with each other. But it’s come about much faster, built to handle tasks too boring, mundane, or complex for humans to do reliably.
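To make the analogy concrete, here’s a toy sketch in Python of a vision-like pipeline built from small, semi-independent modules. To be clear, the function names and ‘rules’ are entirely made up and this isn’t real computer vision; the point is just that the ‘system’ is nothing more than a composition of parts, each of which can be built, tested, and swapped out on its own.

```python
# A toy, made-up 'vision' pipeline: small, semi-independent modules composed
# together, loosely mirroring the layered stages described above.
# This is NOT real computer vision -- the names and rules are illustrative only.

from typing import List

def detect_primitives(pixels: List[List[int]]) -> List[str]:
    """Receptor signals -> very simple base shapes (here: bright horizontal lines)."""
    primitives = []
    for row in pixels:
        if all(value > 128 for value in row):   # a crude, made-up rule
            primitives.append("horizontal_line")
    return primitives

def group_shapes(primitives: List[str]) -> List[str]:
    """Simple shapes -> larger shapes / 'gestalts'."""
    return ["rectangle"] if primitives.count("horizontal_line") >= 2 else []

def recognize(shapes: List[str]) -> str:
    """Shapes -> a concept we already have in our head."""
    return "picture frame" if "rectangle" in shapes else "unknown"

def associate(concept: str) -> List[str]:
    """Concept -> related ideas and impressions."""
    return {"picture frame": ["wall", "photo", "memory"]}.get(concept, [])

# Each module can be developed, tested, and replaced independently;
# the 'system' is just their composition.
image = [[200, 210, 190], [40, 50, 60], [220, 230, 225]]
print(associate(recognize(group_shapes(detect_primitives(image)))))  # ['wall', 'photo', 'memory']
```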

Consider that it’s best practice to test an entire ‘entity’ of software (a package, a program, an API) using two different kinds of tests:

  • ‘Unit tests’ check that the function does the correct thing internally;
  • ‘Integration tests’ check that two functions correctly interact with each other, especially if one depends on the other.

While I’m not aware of any, I can imagine creating ‘emergent properties’ tests that look for dramatically unexpected behavior across an entire system. The key is that we build up dependable modular components that do their specific jobs well and interact with each other reliably.
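For the curious, here’s roughly what those two kinds of tests look like in practice. This is just a sketch in pytest style, and the functions under test are hypothetical stand-ins:

```python
# A minimal sketch of the two kinds of tests, written in pytest style.
# The functions under test are hypothetical stand-ins.

def parse_amount(text: str) -> float:
    """Turn a user-entered string like '$1,200.50' into a number."""
    return float(text.replace("$", "").replace(",", ""))

def apply_fee(amount: float, fee_rate: float = 0.01) -> float:
    """Deduct a proportional fee and round to the nearest cent."""
    return round(amount * (1 - fee_rate), 2)

# Unit tests: each function does the correct thing internally, in isolation.
def test_parse_amount_strips_symbols():
    assert parse_amount("$1,200.50") == 1200.50

def test_apply_fee_deducts_one_percent():
    assert apply_fee(100.0) == 99.0

# Integration test: the two functions interact correctly when one feeds the other.
def test_parse_then_apply_fee():
    assert apply_fee(parse_amount("$100.00")) == 99.0
```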

 

The relative limitations of wetware and software

This post is somewhat inspired by a post Ben Carlson wrote a while ago on ‘The Limitations of Algorithms’. While I won’t dispute that software/hardware is limited, I think it’s helpful to note where/when/why it’s more or less limited than wetware. Let’s stay focused on where we should and shouldn’t use software or algorithms, not on whether they’re perfect.

Transparency and Auditability

When a human says “I think X”, you often can’t trace back why. Was it the news they listened to? What was the input? What process did they follow? Is there a bug in their wetware?

Software is transparent*. If you want to know why an algorithm produced an output, you can almost always feed it the input and watch it do exactly what it does.

* Neural nets and other non-deterministic learning models are making this less true… by being more like us.
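To illustrate, here’s a toy example of that auditability. The ‘algorithm’, its rules, and its thresholds are completely made up; what matters is that every step of the decision gets recorded and can be replayed for any input:

```python
# A toy, fully made-up 'algorithm' whose reasoning can be replayed for any input.

def recommend_allocation(age: int, risk_score: int) -> dict:
    trace = []                       # audit log: every step gets recorded
    equity = 100 - age               # crude, made-up starting rule of thumb
    trace.append(f"start: 100 - age({age}) = {equity}% equities")

    if risk_score < 3:               # dial down for low risk tolerance
        equity -= 20
        trace.append(f"risk_score {risk_score} < 3: subtract 20 -> {equity}%")

    equity = max(0, min(100, equity))
    trace.append(f"clamp to [0, 100]: {equity}%")
    return {"equity_pct": equity, "trace": trace}

# Feed it an input and watch exactly what it did, and why.
result = recommend_allocation(age=45, risk_score=2)
print(result["equity_pct"])          # 35
for step in result["trace"]:
    print(step)
```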

Faster accumulation of knowledge

Our wetware systems are the best naturally evolved systems out there for learning from others and predecessors, and for adapting within a single generation. But our speed pales in comparison to good software. Software systems can accumulate knowledge far faster than we can, in no small part because lessons learned by one instance can be quickly and perfectly incorporated by every other. If one instance of an algorithm fails because of an unexpected input, we have it send back an error message. This lets us fix it once and re-deploy the fix to all instances.
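Here’s a rough sketch of that loop in Python. The report_error function is a stand-in for whatever telemetry a real deployment would use; the point is that one instance’s surprise becomes everyone’s fix:

```python
import logging

logging.basicConfig(level=logging.INFO)

def report_error(payload: dict) -> None:
    """Stand-in for sending the failure back to a central log or issue tracker."""
    logging.error("unexpected input reported for triage: %s", payload)

def parse_ticker(raw: str) -> str:
    """Normalize a user-supplied ticker symbol (naive, made-up validation)."""
    cleaned = raw.strip().upper()
    if not cleaned.isalpha():
        # One instance hits an input nobody anticipated. Instead of silently
        # misbehaving, it reports the case so the shared code can be fixed once
        # and the fix re-deployed to every instance.
        report_error({"function": "parse_ticker", "input": raw})
        raise ValueError(f"unrecognized ticker: {raw!r}")
    return cleaned

print(parse_ticker("  vti "))        # "VTI"

try:
    parse_ticker("BRK.B")            # the dot trips the naive check -> gets reported
except ValueError:
    pass                             # the next release handles dotted tickers everywhere
```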

Take a moment and imagine if every doctor had the experience and wisdom of all doctors collectively, across history and geography. Imagine how much faster medicine could progress if every lesson were shared immediately.

I agree with Ben that algorithms exist in a competitive space, and so they have to evolve; what worked last year won’t work this year. But algorithms learn faster than humans do, and once they’ve been updated to avoid a mistake, they never make the same mistake again.

I wish I could say that for myself.

Garbage in, garbage out

Yes, totally. But human wetware is hardly better at preventing garbage inputs. Consider that we proactively select into news sources and social media where the views reinforce our own, and that our Google searches are phrased to find answers we already agree with. Historically, we’ve probably been far better able to identify ‘garbage in’ in software than in our own minds. One of my papers is about the ‘bias blind spot’ of individual investors: we don’t consider our own biases when forecasting returns, even though we readily predict others’ errors.

Let’s not let the beam in our own eye blind us.

The confirmation bias in the wetware

I believe we have to be especially wary of ideas or positions that will feel good to us. Does it make you feel good to think that wetware is smarter than software? If so, be careful.

One recent study shows how we’re biased towards wetware over software:

We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake.

The authors suggest that the lack of trust comes from our belief that algorithms can’t learn, but humans can. I wonder if, as software starts learning faster, younger generations will start believing the opposite: that it’s the humans who can’t be trusted.

Bigger picture

Now, to be clear, this doesn’t mean machines will ‘be smarter’. In fact, in the near term most of their benefit will come not from being ‘smarter’, but from being a different kind of smart. There is a famous finding in computer science called Moravec’s paradox, which I love:

it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

It can initially be hard to swallow that the apex of intelligence for most humans (speedy math and logic) is trivial for computers. But you can rest easy knowing there are still comparative advantages and “gains from trade” between us and machines. We’re still much better than them at many things.

Which brings me to the most perplexing question: can we create an intelligence very different from our own? Isn’t it funny that we’ve started using neural nets similar to our own wetware? Are we just a real-world Geppetto? Would we recognize intelligence as such if a completely unintuitive architecture or process were performing it?

Yes, I am completely aware of the irony of a wetware-based brain writing these thoughts out. I’m sitting down. I’m being humble. But these are exciting times.

A little bit of confidence is a dangerous thing

First comes motivation.

When a person becomes motivated to invest (rather than keep money in a savings account), they want to do a good job. But the learning curve can be steep… and thus expensive. The new investor may pay in time, trading commissions, high expense ratios, and of course mental effort. But the price must be paid.

My completely anecdotal, unscientific impression is that a person’s investing expertise follows a path roughly like this:

  • At first they know they don’t know, and so avoid it by investing in cash savings accounts.
  • As they learn from reading or from dipping their toes into trading single-name stocks, they get confident. They might branch out into other macro-speculative asset classes – levered S&P 500 funds, gold, or commodities – things they are familiar with. At this point, they are comfortable with trading, listening to the news, feeling like their hand is on the wheel.
  • Over time, a very, very weak feedback loop kicks in: the single-name stocks rip their face off and they can’t sleep at night; they realize that for all their trading, they aren’t coming out very far ahead; they realize the commissions are killing them, and they decide the pursuit of market alpha isn’t worth it. It generally takes about two years for 80% of day traders to quit trying to beat the market. Some keep losing, but never quit.
  • By the end, they often actually are wiser than they think. More importantly, they know how much they don’t know about the future, and how much they do know about the things they can control.

As you can see, it’s the ‘little bit of knowledge’ zone where the biggest disconnect between perceived and actual expertise occurs: the investor believes they know about stocks and trading, and they have some ‘system’ that gives them the confidence (or the fear of losses) to manage their portfolio more actively. That’s where the biggest potential for harm exists, especially if the market has been benevolent so far.

A small, but very common mistake

DIY investors often try to make good decisions by consulting free, independent information sources. So if they want to look at the performance of a fund, they might go to finance.yahoo.com and see this graph of the five-year performance of the bond fund BND:

[Chart: BND five-year performance on Yahoo Finance, price only]

This bond fund looks horrible. I mean, granted, the losses haven’t been huge, but there has been no growth over five years. Who would ever invest in that?

What a new investor might not realize is that Yahoo Finance graphs display price-only history. For funds that pay out a significant part of their return as coupons or dividends, this systematically understates the return. You’re looking at how much it would cost to buy the fund at any point in time, not at how much wealthier you would or wouldn’t be.

If you include dividends, BND looks pretty good for a bond fund:

[Chart: BND five-year performance with dividends reinvested]

This is a surprisingly common misunderstanding. And it’s just about knowing how to correctly research historical performance, not even about making predictions for the future.
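If you want to see how big the gap can be, the math is simple. Here’s a sketch in Python using a small made-up monthly price and dividend series (not real BND data), comparing the price-only return to the total return with distributions reinvested:

```python
# A small made-up monthly series (NOT real BND data) to show the gap between
# a price-only chart and total return with distributions reinvested.

prices    = [80.0, 80.2, 79.8, 80.1, 80.0, 80.3]   # fund price each month
dividends = [0.00, 0.20, 0.20, 0.20, 0.20, 0.20]   # cash paid per share each month

# Price-only return: what a price chart by itself shows.
price_return = prices[-1] / prices[0] - 1

# Total return: assume each distribution is reinvested at that month's price.
shares = 1.0
for price, dividend in zip(prices, dividends):
    shares += shares * dividend / price            # the payout buys more shares
total_return = shares * prices[-1] / prices[0] - 1

print(f"price-only return: {price_return:.2%}")    # about 0.4%
print(f"total return:      {total_return:.2%}")    # about 1.6%
```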

It’s hard to learn well

Investing in markets is one of the toughest environments to learn in. An investor doesn’t get high-quality feedback they can learn from – sometimes it isn’t clear for years, if not decades, whether a decision was good or bad. It’s hard to assess and compare returns correctly to know whether you’d have been better off somewhere else… and that’s a cakewalk compared to predicting whether you’d be better off investing one way or another.

Easy success and confidence is the enemy

If there is one consistent bias in investing, it is overconfidence. Investors trade too much, try to time the market too much, and hold concentrated portfolios that expose them to significant risk of losing a lot of money. Fortunes are lost much faster than they are made (a 50% loss needs a 100% gain just to get back to even), and while you might get lucky in the short term, you should probably be investing for the right (longish) term.

It’s wise to always believe you know a bit less than you should.

Get the education, but don’t pay full private school tuition

(If you don’t get the reference, watch Good Will Hunting.) To be clear, I’m not saying you shouldn’t experiment, explore, and try to figure things out for yourself. Getting your hands dirty and really seeing how things work over a long period of time is a great way to end up a savvy investor. So feel free to make mistakes – just make sure they are mistakes you can afford. When making concentrated bets, only use money you can afford to lose completely. When reacting to market moves, consider making your changes only half as extreme as you want them to be. When choosing an active manager, treat it as a marriage you’re committing to for at least seven years, if not longer. Active management is likely to underperform at times too, and fair-weather active investors have even higher behavior gaps than passive ones.