==
“The real threat of AI is not killer robots or rogue star destroyers, but rather systems that lack accountability, or consideration of their economic impact.”
Mike Walsh
==
It seems fair to say that the future of companies’ success will be shaped, in some form or fashion, by algorithms. The key word is “shaped” because ultimately it will be humans who use the shapes, and therein lies a business’s greater responsibility – to value both its own existence and the customer.
Which leads us to algorithm maturity.
Algorithms are a dime a dozen; human algorithms, not so much. A Human Algorithm is one that shapes itself to the contour of each person’s preferences, desires, wants and needs – to their benefit. This may sound impossible because 90% of the environment is unobservable to humans – but not so for algorithms. A mature algorithm views billions of historical data points, each telling a tale of human behavior. That said. While we speak of data, we actually should speak of patterns. If an algorithm shadows a person online, it does so by comparing billions of data points and interactions until it observes a pattern. Now. That ‘pattern’ may not be an entire journey, but rather a part of the journey. And that is where mature behavioral hubs truly matter. They find the lily pads of pattern connection and offer some insightful information to the user. This begets another pattern from which the algorithm, once again, seeks cues and guesses – but ‘guesses’ by using those cues to assess rather than re-sifting billions of data points.
This offers a bit of robustness to the experience modifications as they adapt to the randomness of every individual. This is where a robust behavioral data hub really offers value. 100% robustness hinges on insight that cannot exist – choices yet to be made. But with an experienced behavioral knowledge base, just like with people, the algorithm can adapt with each click & choice with a higher probability of tracking the individual shopper’s preferences. To be clear. Algorithms early in their life simply have to guess more often than an algorithm leaning on a robust behavioral hub <that’s why we call them Human Algorithms>. That guessing leads to either underfitting or overfitting the data, neither of which leads to an optimal experience. A human algorithm increases the probability of ‘right fitting’ the data, through pattern recognition, which leads to the ‘right experience.’
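To make the underfitting/overfitting point concrete, here is a minimal, hypothetical sketch in Python (the data, the sine ‘pattern’ and the polynomial models are illustrative stand-ins, not anything pulled from an actual behavioral hub): a model that is too simple misses the pattern, a model that is too flexible memorizes the noise of a handful of clicks, and both predict new behavior poorly.

```python
import numpy as np

# Hypothetical illustration only: a "true" behavioral pattern plus individual randomness.
rng = np.random.default_rng(0)
x_seen = np.linspace(0, 1, 20)           # a small set of observed interactions
x_new = np.linspace(0, 1, 200)           # behavior the model has not seen yet

def true_pattern(x):
    return np.sin(2 * np.pi * x)         # the underlying (hidden) preference pattern

y_seen = true_pattern(x_seen) + rng.normal(0, 0.3, x_seen.size)  # observed clicks = pattern + noise

for degree in (1, 4, 12):                # too simple, roughly right, too flexible
    coeffs = np.polyfit(x_seen, y_seen, degree)
    seen_err = np.mean((np.polyval(coeffs, x_seen) - y_seen) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - true_pattern(x_new)) ** 2)
    print(f"degree {degree:2d}: error on seen clicks {seen_err:.3f}, error on new behavior {new_err:.3f}")
```

The middle case is the ‘right fit’: enough structure to capture the pattern, not so much that it chases every random click.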
A mature, robust database – one with billions of historical interactions – is important because AI doesn’t adopt human preferences; it simply learns to predict (apply probabilities to) what the preferences of any given individual are. So, a Human Algorithm seeks to help humans achieve their preferences because, in doing so, any ‘sale’ isn’t really a transaction; it’s attained preferences. To be clear. Without (1) a robust database with a long history, (2) a history of behavioral learning, (3) the ability to identify patterns, and (4) a well-designed algorithm/AI, the likelihood of ‘attaining preferences’ decreases significantly. Why? Once again, a less mature algorithm with a less robust database simply guesses more often with regard to any preference on any given click and, as Brian Christian stated in “The Alignment Problem”, “when a computer guesses, it is alarmingly confident.” Several billion stored interactions to apply against any one human interaction always help an algorithm “see” how to better meet preferences, which offers value to the shopper <link to Value lift piece> – and it guesses better, rechecking its confidence on the next click.
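The ‘guesses better – and rechecks its confidence on the next click’ idea can be sketched as a simple Bayesian update. This is a toy example with made-up numbers, not a description of any production system: the only point is that an algorithm starting from a rich behavioral prior is pulled around far less by any single click than one that starts from scratch.

```python
# Toy sketch of "guess, then recheck confidence on the next click" via a Beta-Bernoulli update.
# All numbers here are hypothetical; a real behavioral hub would start from billions of prior interactions.

def update(alpha, beta, acted_on_guess):
    """One click of evidence: did the shopper act on the guessed preference?"""
    return (alpha + 1, beta) if acted_on_guess else (alpha, beta + 1)

# An immature algorithm starts nearly flat (it has to guess);
# a mature one starts with a prior shaped by historical patterns.
immature = (1, 1)     # almost no prior knowledge
mature = (80, 20)     # prior borrowed from (hypothetical) pattern history

clicks = [True, False, True, True, False]   # one shopper's observed choices
for clicked in clicks:
    immature = update(*immature, clicked)
    mature = update(*mature, clicked)

for name, (a, b) in (("immature", immature), ("mature", mature)):
    estimate = a / (a + b)          # current belief that the guessed preference is right
    spread = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5   # rough measure of remaining uncertainty
    print(f"{name:8s}: preference estimate {estimate:.2f}, uncertainty {spread:.3f}")
```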
Which leads to the design of a Human Algorithm.
To design AI that reflects an ideal shopping experience, we have to recognize the human imperfection, and occasional irrationality, that is part of everyone’s story. Let’s call it the irrationality or randomness that can steer us into either a good place or a bad place. And while an algorithm shouldn’t decide what is best for a person, it can, quite clearly, steer someone away from the worst, or nudge them away from their irrational, or worst, instincts and back toward their original intent. Research in retail outlets has shown that removing tempting, unhealthy items from checkout areas can reduce the number of sugary foods that are bought. This is also true in online shopping experiences. A human algorithm can actually create a ‘healthier shopping experience’, avoiding the guilt which often turns into depression, which actually translates into a ‘higher enduring value’ transaction and experience.
While it would be silly to suggest any business algorithm shouldn’t be about ‘winning’, as in a sale, it isn’t silly to suggest ‘how you win matters.’ It is within that thought that a Human Algorithm offers its most unique benefit. The truth is a well-designed human algorithm can assist, without manipulation, toward positive outcomes free of lingering negative consequences that would inevitably erode the end value over time. Behavioral data shouldn’t be manipulative, but rather should augment an individual’s thinking, desires and wants. Basically, a human algorithm should get closer to something ‘truthful’ to the individual shopper.
Which leads us to the idea that shopping should be the friend, not the enemy.
Shopping experiences, between the seller and the buyer, shouldn’t be an adversarial relationship. It’s important for businesses to understand that, when customers visit their websites, the expectation factor is very much at play. If you present your visitors with an experience that strays too far from what they anticipate, they will end up feeling a loss of control and will take undesirable actions, like exiting the page entirely, in order to return that feeling of control to their own hands. This ‘power of perceived control’ is important not just for customers, but for the ethics of algorithm design. If a person feels pushed, sold or manipulated, they will inevitably push back, creating friction in the interaction <it also triggers biochemical reactions>. The optimal shopping experience is actually one that is coherent, not consistent, i.e., the click-tale never strays too far from the shopper’s preferences.
A human algorithm should seek to be a ‘provably beneficial algorithm’ (Human Compatible, Stuart Russell). The truth is the ‘provably’ part is not exactly attainable yet; however, beneficial intent is. A human algorithm is not about ‘pushing or nudging’, but rather about shadowing human wants and desires. Each click tells a tale if you listen closely enough, and that is actually where the amount, and historical depth, of data matters. Shadowing demands a behavioral hub with billions of interaction experiences in order to shadow a shopper effectively.
Which leads us to ‘loose algorithms’ versus bounded algorithms.
Loose algorithms, paradoxically, are more fragile algorithms in that, without enough pattern knowledge, they will more likely follow a linear journey. Basically, any ‘immature’ algorithm is a loose algorithm. Without a robust behavioral hub to lean on, to put some pattern-driven boundaries in place, a shopper can click their way down a ‘this or that’ path leading to either something not available (disappointment) or, worse, something they don’t want in the end (anger at a less-than-optimal shopping experience). Conversely, a robust behavioral hub offers a “bounded algorithm” which accommodates the random, irrational meanderings of a shopper yet, given pattern experience, will bring the shopper back into a shopping zone in which satisfaction is more likely. A traditional algorithm doesn’t really care about what the shopper wants; it only cares about what the shopper clicks on. A Human Algorithm recognizes it is beneficial to seek the optimal balance between wants, desires, and needs.
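One way to picture ‘loose’ versus ‘bounded’ – again, purely a hypothetical sketch, not a description of a real recommender – is that a loose algorithm lets the latest click become the whole signal, while a bounded one blends each click with the preference pattern it has already learned, so a random detour never drags the whole experience off course.

```python
# Hypothetical sketch: a "loose" recommender follows only the last click,
# while a "bounded" one blends each click with a learned preference pattern.

def loose_next(last_click):
    # Whatever was just clicked becomes the entire signal.
    return dict(last_click)

def bounded_next(last_click, pattern, click_weight=0.3):
    # Each click nudges the estimate, but the learned pattern keeps it inside a plausible zone.
    keys = set(pattern) | set(last_click)
    return {k: (1 - click_weight) * pattern.get(k, 0.0) + click_weight * last_click.get(k, 0.0)
            for k in keys}

pattern = {"running shoes": 0.7, "trail gear": 0.3}   # learned from (hypothetical) history
detour = {"novelty socks": 1.0}                       # a random, irrational click

print("loose  :", loose_next(detour))
print("bounded:", bounded_next(detour, pattern))
```

In the bounded case the detour registers, but most of the weight stays with the learned preferences – which is the ‘shopping zone’ idea in miniature.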
Regardless. The responsibility of a Human Algorithm is not to keep the shopper on the right path; instead, it is to help them find their own path to purchase.
Which leads us to responsibility to humans.
Business, B2B or consumer, is inevitably about humans, and algorithms are simply methods to augment human preferences and hopes.
Behavioral economics has uncovered a lot about the human mind and choice-making. When it comes to business responsibility, the most concerning finding is the evidence indicating that many of people’s decisions are unintentional, or automatic, in nature. This implies that our own thoughts and behavior are not under our own intentional control but, rather, are strongly influenced by environmental factors.
These automatic processes have fundamental implications for how we behave in general, and how we behave online. One of the most researched aspects of this automatic process is called the Priming Effect. This is where exposure to one stimulus influences the way we respond to a second stimulus. Mental structures such as schemas and stereotypes are automatically activated by the mere presence of an associated cue – even a word. Brain activation works more quickly among ideas that are naturally related, and research has shown the Priming Effect not only influences our thoughts, but also influences our behavior. An algorithm, in the wrong hands, can use the Priming Effect to manipulate shoppers into transactions which look good in the short term, but only increase anxiety/regret in the long term.
The Human Algorithm optimizes value.
The danger of AI isn’t really whether some criminal using it will be successful, but rather that a business poorly designs its algorithms, is successful, and then loses control of that poorly designed system. A poorly designed system increases the likelihood of a lower-value experience and outcome.
That said.
Designing a Human Algorithm demands a belief that technology augments people and what’s inside a person (preferences, motivation, attitudes, hopes, desires), and that success is not just about efficient shopping journeys, but about effectively tapping into shopping motivations and offering options tied to those preferences. This algorithm leans on a robust, mature behavioral database with enough historical patterns/interactions to create the optimal ‘fit’ experience which, correspondingly, creates the highest transactional value.
A Human Algorithm has the capability to offer the highest transactional value through a combination of structural lift from a mature, robust database, well-designed AI, and a coherent experience with an outcome aligned with individual preferences.
===
“It is optionality that makes things work and grow.”
Nassim Taleb, Antifragile
thought: Maybe algorithms shouldn’t provide answers, but options. Maybe, more importantly, we become a little less comfortable with the need for construct and more comfortable with using algorithms as a dynamic application of ‘movable construct’ at the right time and place.


==
In fact.
On a daily basis we are faced with questions of “what we will abandon to save our future & our dreams.”
===
Ok. I tend to believe most of what Seth Godin says is fluff. But (to give him credit) he has noted that effective entrepreneurs and leaders often perform the same tasks as the rest of us, except for a critical 5 minutes per day. In those 5 minutes they are somehow able to cut through the thousands of possible choices and select the one choice that creates value. That, my friends, is called “lack of fear of choice.”
Or maybe the fear we won’t do anything at all.
Which leads me to the image to the left.







The ripple effect of not trusting anybody bleeds into every aspect of Life and, in doing so, it bleeds into both action and inaction <or the slowing down of action>.
Well. This is called ‘social trust’ and social trust produces good things.
===

process, the presidency itself, democracy, America’s position in the world, and our constitutional rights & freedoms, I tend to believe one of the most egregious things he did was all of that lying, destroying any semblance of the standard of respectful discourse a civilized society typically has.
have listed above, which we should now put our big boy & girl pants on … and solve.
The strength of a country is defined by how it deals with its worst moments. Trump represents the worst, represented the worst, and in his wake he left us with the worst. I say that because, well, he is coming back. Twitter is a megaphone for all his shit.
——
The secret actually is finding the key that unlocks your own inner strength, or inner character or inner passion or <to keep with the thought> the key that opens the door to your own flowers of unusual beauty. Yup. The secret is finding the flowers of unusual beauty that lie within your own walls and giving them freedom to prosper in the light of day.
But that is just what I think. And please do not tell me a book can give you the secret to Life.
Value creation, in an emergent-focused business, is not causal; it is correlative. In other words, to create value in a conceptual thinking world one has to accept everything has to do with everything else. Business, and value creation, is a succession of events that link with each other (whether we want them to or not). You must see the invisible to control your destiny. And this is where AI helps. It can bring the invisible into the visible world – concepts, doing and progress.
Some people call this ‘finding purpose.’ I do not. Some people call this “self management.” I do not. This is simply enabling a worker to know as much as they can in order to produce the best possible work (and outcomes). In other words, enlighten the individual to maximize their effectiveness.
Maybe surprisingly, I am leaning on Gilbreth’s “scientific management” concept in terms of maximizing individual productivity, but I am turning it 180 degrees and, instead of efficiency, I speak in terms of effectiveness.
………….. finite projects each focused on Velocity spur productive Progress ……..
The enemies of velocity are the ‘to do’ lists that are endless – lots checked off, but never getting shorter – and the people working long hours whose output never seems to create any meaningful progress.
I will conclude on ‘opportunistic’ with the uncomfortable truth that opportunistic is tricky, sometimes not obvious, and often uncovers itself in a one-degree difference, as in “
===
choice. Maybe it’s better said to suggest it is the space between sensemaking and choice making (apologies to Daniel Schmachtenberger because I believe this mangles his much deeper thinking on these topics). It is where what you end up creating is shaped by, well, what one views as right or wrong. The reason I see its importance residing between value creation and progress/velocity is because if an organization crafts a high-value concept absent ethical creation, and it moves on to the next phase and gains its likely velocity, the concept shifts into high gear, only amplifying its lowest aspect – lack of ethical value. In other words, velocity amplifies value whether it is created ethically or not.

Ethics are not absolutes. Your ethics, or values, may not be mine <so maybe we should seek empathy toward others’ feelings and beliefs>. Yet. We must demand our algorithms have some absolutes in order to kill off the bad or evil we know exists. Maybe technology should be seeking what someone called “approximately moral.” This can be done in the actual outputting software (inputs to people), but it is possibly even more important in the ‘constraints’ software – the spies spying on the algorithm spies.


Marshall McLuhan’s words: “we shape our technology and our technology shapes us.”
down to people. People doing the right things. Creating the right things in the right way. It was, once again, Mary Parker Follett who can guide us toward the right future using her own words as she described ‘Right’: