The best future is rarely found by making optimal choices

A Human Algorithm: Flynn Coleman

===

“Their most hopeful vision of the future is centred around compassion not convenience, emotional not artificial intelligence. The path towards this vision seems to require little technical innovation; it demands simply that people care about people – an idea so laughably naïve, yet so radically transformative.”

Kai

===

  • Preface: apologies to all my technology friends because I am going to mangle some technology concepts to make some points.

With all the talk about ChatGPT (large language models), which seems to bleed into a larger technology angst discussion, I thought it may be important to remind everyone about, well, humans.

The one thing smart machines and humans have in common is choicemaking. From there the paths diverge – fairly significantly.

A major difference between humans and computers is that at any given moment people are not choosing among all possible steps. When humans think of possibilities, or ‘desired futures’, they are not even close to making the optimal choices. Instead, we typically lean on a deeply nested hierarchy of ‘knowns’ in our minds which we recognize as ‘better’ paths, or stepping stones, toward the future we desire. These nested ‘knowns’ are typically bounded biases (self-sealing beliefs). The self-sealing belief issue gets compounded by the fact that we tend to pursue short-term goals, or more immediate gratification/results, rather than maximizing for the future. We may think we are maximizing, and we may even make valiant attempts to do so, but we are inevitably constrained by those pesky nested beliefs and shorter-term gratification.
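To make that concrete, here is a minimal sketch in plain Python. The payoff numbers, the step names, and the ‘hard path’ bonus are all invented for illustration; the point is just the gap between exhaustively optimizing over every path and greedy, short-term choosing.

```python
from itertools import product

# Invented payoff table: three choice points, two options each.
# 'easy' options pay off right now; 'hard' options pay off late.
STEPS = [
    {"easy": 5, "hard": 1},
    {"easy": 4, "hard": 3},
    {"easy": 2, "hard": 12},
]
BONUS = 10  # invented: extra payoff if every 'hard' option was taken

def machine_choice():
    """Enumerate every possible path and keep the best total."""
    def total(path):
        base = sum(value for _, value in path)
        return base + (BONUS if all(name == "hard" for name, _ in path) else 0)
    best = max(product(*(step.items() for step in STEPS)), key=total)
    return [name for name, _ in best], total(best)

def human_choice():
    """Greedy heuristic: grab whatever pays best right now, step by step."""
    path = [max(step, key=step.get) for step in STEPS]
    return path, sum(STEPS[i][name] for i, name in enumerate(path))

print(machine_choice())  # (['hard', 'hard', 'hard'], 26)
print(human_choice())    # (['easy', 'easy', 'hard'], 21)
```

In this toy, the exhaustive chooser finds the delayed-payoff path; the greedy one grabs the easy wins and leaves value on the table. That’s us.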

Which leads me to smart machines.

Given enough data a smart machine can recognize your self-sealed beliefs (even the ones you may have submerged fairly deeply). From there it can manipulate your nested beliefs, which inevitably changes your view of future possibilities. We all have these nested beliefs, often called heuristic thinking, and they do help us navigate shit. That said. Because they are so embedded, it’s difficult for us to see them, let alone reshape them, but a computer can – sometimes incrementally over time and sometimes in one fell swoop. As you can imagine, this can be very good, and it can be very bad.

And that is where this whole thing gets tricky. While we all have personal preferences, computers simply have data and the ability to see your preferences – but not understand them or the reasoning behind them – and smart machines are built with the purpose of modifying human preferences because, well, smart machines modify human experiences. Preference modification is a slippery slope. It can modify for good and modify for bad and, well, machines are machines, and without some intuition they may mathematically steer a preference away from a more optimal path – specific to you – simply because the machine has assessed it to be so. This gets compounded by, quoting Brian Christian, “when a computer guesses, it is alarmingly confident.”

Look. Machines will most likely make preference modification more statistically optimal (even treating a 51% probability as a binary certainty), but the truth is some of the best preference-driven choices people make are the less obvious ones. So maybe what I am suggesting is that ‘always optimal’ leads to a less-than-optimal destination. Yeah. Imagine: if everyone did what the computer said, the entire world would inch a bit closer to the mean (average) and run the risk of eliminating the soaring heights attained in the improbable. And, if humans are anything, they are improbable machines.
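Here is a minimal sketch of that ‘51% treated as binary’ mechanism. The option names and scores are invented; the argmax collapse is the point.

```python
import random

random.seed(7)

# Invented model scores: the machine rates two options for one person.
# 0.51 vs 0.49 is nearly a coin flip, but argmax reports a single answer
# with total confidence.
SCORES = {"safe_pick": 0.51, "odd_pick": 0.49}

def machine_recommend(scores):
    """Collapse a near-tie into one confident recommendation (argmax)."""
    return max(scores, key=scores.get)

# If everyone follows the recommendation, the 49% option vanishes;
# if people choose in proportion to their actual leanings, it survives.
n = 1000
followed = [machine_recommend(SCORES) for _ in range(n)]
sampled = [random.choices(list(SCORES), weights=list(SCORES.values()))[0]
           for _ in range(n)]

print(followed.count("odd_pick"))  # 0    -- everyone herded to the 51% pick
print(sampled.count("odd_pick"))   # ~490 -- the improbable still happens
```

One argmax, applied to everyone, and half the world’s perfectly reasonable second choices simply stop happening.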

Which leads me to how smart machines are both smart and dumb.

Machines do have an ability to learn, at a variety of levels. At one level is repetitive task automation, where the machine has been taught how to perform a specific task reliably, but its knowledge cannot grow any further based on its experience or in response to changing conditions. This is a closed loop system and typically very affordable. At another level the computer is able to observe the effects of its performance, or the results of its analysis, and make adjustments to what it “knows”, usually by experimenting with other possibilities that might perform better. This sounds good, and it is good; however, it can’t question anything because it doesn’t really have context awareness. It cannot question anything beyond the inputs it has been told to consider in its optimization. This is also a closed loop system, just broader than the first one I outlined – and a little less affordable.
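A sketch of those two levels, with made-up tasks and payoff numbers. The second level ‘learns’ (a tiny epsilon-greedy loop), but notice it can only re-weight the options it was handed.

```python
import random

random.seed(1)

# Level 1: repetitive task automation. Fixed rule, no learning.
def level_one(widget):
    """Does exactly what it was taught; never updates its knowledge."""
    return "accept" if widget >= 0.5 else "reject"

# Level 2: observes results and adjusts what it "knows" by experimenting.
OPTIONS = ["tool_a", "tool_b"]                 # the closed set of inputs
TRUE_PAYOFF = {"tool_a": 0.3, "tool_b": 0.7}   # hidden from the machine

def level_two(rounds=500, epsilon=0.1):
    estimates = {option: 0.0 for option in OPTIONS}
    counts = {option: 0 for option in OPTIONS}
    for _ in range(rounds):
        # Mostly exploit the current best guess, sometimes experiment.
        pick = (random.choice(OPTIONS) if random.random() < epsilon
                else max(estimates, key=estimates.get))
        reward = 1 if random.random() < TRUE_PAYOFF[pick] else 0
        counts[pick] += 1
        estimates[pick] += (reward - estimates[pick]) / counts[pick]
    return estimates

print(level_one(0.7))  # accept
print(level_two())     # converges toward tool_b; never asks "is there a tool_c?"
```

The loop gets better at choosing between tool_a and tool_b. It will never wonder whether tools are the right frame at all.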

This closed loop is important because far too many people talk about AI as if it were some omniscient thing, i.e., an open loop, all-knowledge-encompassing process. It can be, but mostly it isn’t, because of cost. I cannot tell you how many times I have talked with a software developer about an AI/algorithm-based idea and they have responded “that is a great idea, but I don’t know anyone who would have the budget to build it.” What this means is ‘closed loop’ is also ‘budget-manageable.’

Anyway. A smart machine will end up being good at pattern recognition (assuming it has a large enough database) and will be bad at intuitive sense. For example, machines “see” items as a patchwork of lines and dots and ones and zeros from which they identify things that match what they have in their database. But they cannot easily identify the essence of what an item is. And even if a machine is able to successfully match an object to an image in its database, a slight change in perspective can baffle the shit out of a smart machine.
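A toy illustration of that ‘patchwork of ones and zeros’ point, with an invented 3x3 pixel ‘database’: the exact-match lookup fails the moment the very same shape shifts by one pixel.

```python
import numpy as np

# A tiny invented "database": one known item, stored as raw pixels.
DATABASE = {
    "corner": np.array([[1, 1, 0],
                        [1, 0, 0],
                        [0, 0, 0]]),
}

def identify(image):
    """Match raw pixels against the database -- no notion of 'essence'."""
    for name, known in DATABASE.items():
        if np.array_equal(image, known):
            return name
    return "baffled"

seen = DATABASE["corner"]
shifted = np.roll(seen, 1, axis=1)  # same shape, nudged one pixel right

print(identify(seen))     # corner
print(identify(shifted))  # baffled -- a slight change in perspective
```

Real systems are far more robust than exact matching, of course, but the failure mode scales: change the perspective enough and the match breaks, because the machine matched pixels, not essence.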

Our brains, however, are quite capable of taking different perspectives and variations into account, subconsciously performing trillions of calculations as they do. Basically, artificial intelligence is trapped in a closed loop. That loop can be very large, as with large language models, but however vast they appear, alas, they are a closed loop. Brains, on the other hand, are an open system with nested loops inside. And this is maybe where ‘common sense is not so common’ gets explained. Individual brains love residing in a nested loop. That nested loop of thinking is experience-based ‘data’ where the identification of what needs to be solved pops up as common sense. That said. The issue would be the same as with a smart machine, in that this heuristic thinking is a closed loop system which ignores additional information flowing in. That being said, the brain is an open system and is able to feed additional information into any one of the closed nested loops within it – if we elect to permit it in.
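A sketch of that open-system-with-nested-loops idea, with invented data: the cached heuristic answers instantly from old experience, unless the outer loop deliberately lets fresh information in.

```python
# Invented experience 'data': the nested loop learned this long ago.
heuristic = {"commute_route": "highway"}

def nested_loop(question):
    """Closed inner loop: answers instantly from stored experience,
    ignoring whatever new information is flowing in."""
    return heuristic[question]

def open_system(question, fresh_info, permit=True):
    """Outer loop: optionally feeds new information into the inner loop."""
    if permit and question in fresh_info:
        heuristic[question] = fresh_info[question]  # reshape the 'known'
    return nested_loop(question)

today = {"commute_route": "side streets (highway closed)"}

print(nested_loop("commute_route"))                        # highway
print(open_system("commute_route", today, permit=False))   # highway -- new info ignored
print(open_system("commute_route", today))                 # side streets (highway closed)
```

The `permit` flag is the whole argument: the closed inner loop is fast and usually right, and the open outer loop only helps if we actually let it update us.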

“The fact that digital computers are able to outperform humans in performing some mental tasks should come as no surprise – they were designed to do just that.”

University professor Phil Auerswald

Which leads me to the fact that a good decision is not always the optimal decision.

Good decisionmaking has always been in the purview of humans. While we all bitch and moan about the lack of good decisionmaking by certain people, the truth is the majority of us make good decisions most of the time. Or, maybe said better, we make the best decision that we possibly can in the moment with the information that we have at hand. Which leads me to say people have learned to make the really hard decisions by first making a lot of easy ones. People gain skills and confidence from simple-situation decisionmaking, where it can be determined afterwards whether a decision was made well or not.

So. If we were to allow smart machines to take over these ‘starter decisions’ and leave only the highly ambiguous situations for humans, there is a risk that humans would lose the ability to decide effectively. Yeah. They would just suck at decisionmaking. Yeah. If fewer people understand how to diagnose a decision, because they’ve never had to run the gauntlet of all the easy ones, how would we even be able to assess whether a good decision has been made on the hard ones later on?

I sometimes worry that we will find ourselves under a smart machine decisionmaking autocracy by default, in which less educated, or less concerned, decision makers automatically follow the decisions of smart machines. I’m not suggesting this will always be a bad thing. In fact it raises the possibility that a lot of good things happen faster. But it will only be a good thing until it’s not. Sometimes it will be a very, very bad thing. And I worry we humans won’t know which is which. But then again, if we absolve ourselves of responsibility for all the easy decisions, aren’t we by default divesting humans of the hard decisions?

Here is what I know.

A smart machine will always make the optimal choice. The best future is rarely, if ever, found by making optimal choices. In fact, it’s the less-than-optimal choices that tend to foster the spectacularly-more-than-optimal results. And I tend to think humans are the best spectacular-error and spectacular-success machines in the world. Ponder.
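One last sketch of the thesis, on an invented two-peak landscape: the always-optimal (greedy) climber parks on the nearest hill, while a climber willing to start from ‘wrong’ places sometimes finds the far higher peak.

```python
import random

random.seed(3)

# An invented landscape: a modest hill near the obvious starting point,
# and a far higher hill that greedy climbing from 0 will never reach.
def height(x):
    if x < 20:
        return 10 - abs(x - 5)   # small hill: peak 10 at x = 5
    return 50 - abs(x - 40)      # big hill: peak 50 at x = 40

def always_optimal(x=0, steps=200):
    """Greedy: only ever take the step that improves things right now."""
    for _ in range(steps):
        step = max((x - 1, x, x + 1), key=height)
        if step == x:
            break  # no neighbouring step improves -- stuck on a peak
        x = step
    return x, height(x)

def sometimes_suboptimal(tries=10):
    """Start from 'wrong' places on purpose (random restarts) and keep
    the best peak found -- deliberately less-than-optimal choices."""
    return max((always_optimal(x=random.randint(0, 60)) for _ in range(tries)),
               key=lambda result: result[1])

print(always_optimal())        # (5, 10)  -- parked on the nearest peak
print(sometimes_suboptimal())  # almost surely (40, 50)
```

The greedy climber never takes a step that looks worse, so it can never leave its little hill. The improbable one does, and finds the view.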

===

“No, I’m not interested in developing a powerful brain. What I’m after is just a mediocre brain, something like the president of the American Telephone and Telegraph company.”

Alan Turing

Written by Bruce