“The real threat of AI is not killer robots or rogue star destroyers, but rather systems that lack accountability, or consideration of their economic impact.”

Mike Walsh


It seems fair to say that the future of companies’ success will be shaped, in some form or fashion, by algorithms. The key word is “shaped” because ultimately it will be humans who use those shapes, and therein lies the greater responsibility a business has – to value, to its own existence, and to the customer.

Which leads us to algorithm maturity.

Algorithms are a dime a dozen; human algorithms, not so much. A Human Algorithm is one that shapes itself to the contour of each person’s preferences, desires, wants and needs – to their benefit. This may sound impossible because 90% of the environment is unobservable to humans – but not so for algorithms. A mature algorithm views billions of historical data points, each telling a tale of human behavior. That said. While we speak of data, we actually should speak of patterns. If an algorithm shadows a person online, it does so by comparing billions of data points and interactions until it observes a pattern. Now. That ‘pattern’ may not be an entire journey, but rather a part of the journey. And that is where mature behavioral hubs truly matter. They find the lily pads of pattern connection and offer some insightful information to the user. This begets another pattern, from which the algorithm, once again, seeks cues and guesses – but ‘guesses’ by assessing cues against billions of data points.

This offers a bit of robustness to the experience modifications as the algorithm adapts to the randomness of every individual. This is where a robust behavioral data hub really offers value. 100% robustness hinges on insight that cannot exist – choices yet to be made. But with an experienced behavioral knowledge base, just as with people, the algorithm can adapt with each click & choice with a higher probability of tracking the individual shopper’s preferences. To be clear. Algorithms, early in their life, simply have to guess more often than an algorithm leaning on a robust behavioral hub <that’s why we call them Human Algorithms>. That guessing leads to either underfitting or overfitting the data, neither of which leads to an optimal experience. A human algorithm increases the probability of ‘right fitting’ data, through pattern recognition, which leads to ‘right experience.’
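The underfit/overfit/right-fit distinction can be made concrete with a small, purely illustrative sketch (not any production system): a stand-in “shopper signal” is fit with three models, where the too-simple one misses the pattern, the too-flexible one memorizes noise in past clicks, and a middle-complexity fit generalizes best to clicks it hasn’t seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "shopper signal": a smooth preference curve plus noisy clicks.
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)

# Hold out alternating clicks to test how well each fit generalizes.
x_train, y_train = x[::2], y[::2]
x_test, y_test = x[1::2], y[1::2]

def holdout_rmse(degree):
    """Fit a polynomial of the given degree to the training clicks
    and return its error on the held-out clicks."""
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    return float(np.sqrt(np.mean((pred - y_test) ** 2)))

under = holdout_rmse(1)   # too simple: misses the underlying pattern
right = holdout_rmse(5)   # flexible enough to track the signal
over = holdout_rmse(14)   # near-interpolates the noise in training clicks
```

Both `under` and `over` come out worse than `right` on the held-out clicks, which is the sense in which neither a guessing-too-little nor a guessing-too-much algorithm delivers the optimal experience.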

A mature, robust database – one with billions of historical interactions – is important because AI doesn’t adopt human preferences; it simply learns to predict (apply probabilities to) what the preferences of any given individual are. So, a Human Algorithm seeks to help humans achieve their preferences because, in doing so, any ‘sale’ isn’t really a transaction; it’s attained preferences. To be clear. Without (1) a robust database with a long history, (2) a history of behavioral learning, (3) the viability of identifying patterns, and (4) a well-designed algorithm/AI, the likelihood of ‘attaining preferences’ decreases significantly. Why? Once again. A less mature algorithm with a less robust database simply guesses more often with regard to any preference on any given click and, as Brian Christian stated in “The Alignment Problem”, “when a computer guesses, it is alarmingly confident.” Several billion interactions stored to apply against any one human interaction always help an algorithm “see” how to better meet preferences, which offers value to the shopper <link to Value lift piece>, and it guesses better – and rechecks its confidence on the next click.
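The “guess, then recheck confidence on the next click” loop can be sketched as a simple Bayesian update. Everything here is illustrative: the “preference” is a hypothetical yes/no trait, and the prior counts stand in for what a behavioral hub has seen across similar shoppers.

```python
def updated_belief(prior_hits, prior_misses, clicks):
    """Return the running probability that the shopper holds some
    guessed preference, revised after each click.

    prior_hits/prior_misses encode hub history (illustrative stand-in);
    each click is 1 if it confirms the guess, 0 if it contradicts it.
    """
    a, b = prior_hits, prior_misses
    trajectory = []
    for click in clicks:
        a += click          # click matched the guessed preference
        b += 1 - click      # click contradicted it
        trajectory.append(a / (a + b))  # posterior mean of a Beta(a, b)
    return trajectory

# A mature hub starts from a strong, pattern-backed prior;
# a young algorithm starts near a coin flip and swings with every click.
mature = updated_belief(prior_hits=80, prior_misses=20, clicks=[0, 0, 0, 0])
young = updated_belief(prior_hits=1, prior_misses=1, clicks=[0, 0, 0, 0])
```

Four contradicting clicks barely move the mature estimate (roughly 0.80 down to 0.77) while the young one collapses toward 0.17 – a toy version of “a less mature algorithm simply guesses more often.”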

Which leads to the design of a Human Algorithm.

To design AI that reflects an ideal shopping experience, we have to recognize the occasional human imperfection and irrationality that is part of everyone’s story. Let’s call it the irrationality or randomness that can steer us into either a good place or a bad place. And while an algorithm shouldn’t decide what is best for a person, it can, quite clearly, steer someone away from the worst – or nudge us away from our irrational, or worst, instincts and back toward our original intent. Research in retail outlets has shown that removing tempting unhealthy items from checkout areas can reduce the number of sugary foods that are bought. This is also true in online shopping experiences. A human algorithm can actually create a ‘healthier shopping experience’, avoiding guilt which often turns into depression, which actually translates into a ‘higher enduring value’ transaction and experience.

While it would be silly to suggest any business algorithm shouldn’t be about ‘winning’, as in a sale, it isn’t silly to suggest ‘how you win matters.’ It is within that thought that a Human Algorithm offers its most unique benefit. The truth is a well-developed human algorithm can assist, without manipulation, toward positive outcomes without lingering negative consequences which would inevitably erode the end value over time. Behavioral design shouldn’t be manipulative, but rather augment an individual’s thinking, desires and wants. Basically, a human algorithm should get closer to something ‘truthful’ to the individual shopper.

Which leads us to shopping as the friend, not the enemy.

Shopping experiences, between the seller and buyer, shouldn’t be an adversarial relationship. It’s important for businesses to understand that, when customers visit their websites, the expectation factor is also very much at play. If you present visitors with an experience that strays too far from what they anticipate, they will feel a loss of control and will take undesirable actions, like exiting the page entirely, in order to return the feeling of control to their hands. This ‘power of perceived control’ is important not just for customers, but for the ethics of algorithm design. If a person feels pushed, sold or manipulated, they will inevitably push back, creating friction in the interaction <it also triggers biochemical reactions>. The optimal shopping experience is actually one that is coherent, not consistent, i.e., the click-tale never strays too far from the shopper’s preferences.

A human algorithm should seek to be a ‘provably beneficial algorithm’ (Human Compatible, Stuart Russell). The truth is the ‘provable’ part is not exactly attainable yet; however, beneficial intent is. A human algorithm is not about ‘pushing or nudging’, but rather shadowing human wants and desires. Each click tells a tale if you listen closely enough, and that is actually where the amount, and historical depth, of data matters. Shadowing demands a behavioral hub with billions of interaction experiences in order to effectively shadow a shopper.

Which leads us to ‘loose algorithms’ versus bounded algorithms.

Loose algorithms, paradoxically, are more fragile algorithms in that, without enough pattern knowledge, they will more likely follow a linear journey. Basically, any ‘immature’ algorithm is a loose algorithm. Without a robust behavioral hub to lean on, to put some pattern-driven boundaries in place, a shopper can click their way down a ‘this or that’ path leading to either something not available (disappointment) or, worse, something they don’t want in the end (anger at a less-than-optimal shopping experience). Conversely, a robust behavioral hub offers a “bounded algorithm” which accommodates the random, irrational meanderings of a shopper yet, given pattern experience, will bring a shopper back into a shopping zone in which satisfaction is more likely. A traditional algorithm doesn’t really care about what the shopper wants; it only cares about what the shopper clicks on. A Human Algorithm recognizes it is beneficial to seek the optimal balance between wants, desires, and needs.
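One minimal way to picture a bounded algorithm is a relevance score that is allowed to wander with clicks but is always pulled back into a pattern-derived zone. The zone parameters here are hypothetical stand-ins for what a behavioral hub has learned about where satisfying journeys for similar shoppers tend to live; a real system would of course be far richer.

```python
def bounded_score(raw_click_score, zone_center, zone_width):
    """Clamp a raw, click-driven relevance score back into the
    pattern-derived shopping zone [center - width, center + width].

    A 'loose' algorithm would follow raw_click_score wherever the
    clicks lead; the bounded version lets the shopper meander but
    never drift past the zone where satisfaction is more likely.
    """
    lo, hi = zone_center - zone_width, zone_center + zone_width
    return min(max(raw_click_score, lo), hi)

# Clicks wandered far above the zone: pulled back to its upper edge.
pulled_back = bounded_score(0.95, zone_center=0.5, zone_width=0.2)
# Clicks inside the zone: left alone - the shopper finds their own path.
untouched = bounded_score(0.55, zone_center=0.5, zone_width=0.2)
```

The design choice mirrors the text: the boundary doesn’t dictate a path, it only keeps the meandering from ending in something unavailable or unwanted.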

Regardless. The responsibility of a Human Algorithm is not to keep the shopper on the right path, instead, the responsibility is to help them find their own path to purchase.

Which leads us to responsibility to humans.

Business, B2B or consumer, inevitably is about humans, and algorithms are simply methods to augment human preferences and hopes.

Behavioral economics has uncovered a great deal about the human mind and choice-making. When it comes to business responsibility, the most concerning evidence indicates that many of people’s decisions are unintentional, or automatic, in nature. This implies that our thoughts and behavior are not under our own intentional control but, rather, are strongly influenced by environmental factors.

These automatic processes have fundamental implications for how we behave in general, and how we behave online. One of the most researched aspects of this automatic process is the Priming Effect, where exposure to one stimulus influences the way we respond to a second stimulus. Mental structures such as schemas and stereotypes are automatically activated by the mere presence of those structures – even with words. The brain’s activation works more quickly among ideas that are naturally related, and research has shown the Priming Effect influences not only our thoughts but also our behavior. An algorithm, in the wrong hands, can use the Priming Effect to manipulate shoppers into transactions which look good in the short term but only increase anxiety and regret in the long term.

The Human Algorithm optimizes value.

The danger of AI isn’t really whether some criminal using it will be successful, but rather that a business poorly designs its algorithms, is successful, and then loses control of that poorly designed system. A poorly designed system increases the likelihood of a lower-value experience and outcome.

That said.

Designing a Human Algorithm demands a belief that technology augments people and what’s inside a person (preferences, motivation, attitudes, hopes, desires), and that success is not just about efficient shopping journeys, but about effectively tapping into shopping motivations and offering options tied to preferences. This algorithm leans on a robust, mature behavioral database with enough historical patterns/interactions to create the optimal ‘fit’ experience which, correspondingly, creates the highest transactional value.

A Human Algorithm has the capability to offer the highest transactional value through a combination of structural lift from a mature, robust database; well-designed AI; and a coherent experience with an outcome aligned with individual preferences.


“It is optionality that makes things work and grow.”

Nassim Taleb, Antifragile

Thought: Maybe algorithms shouldn’t provide answers, but options. Maybe, more importantly, we become a little less comfortable with the need for construct and more comfortable with using algorithms as a dynamic application of ‘movable construct’ at the right time and place.

Written by Bruce