
==============
I used to tell people we use research to INFORM decisions and not MAKE decisions.
Then.
I told people we use numbers to INFORM decisions and not MAKE decisions.
Then.
I told people we use data to INFORM decisions and not MAKE decisions.
Now.
I tell people we use algorithms to INFORM decisions and not MAKE decisions.
Me
===============
“An idea of working based on three pillars: science, insight and faith. Science because I’m a social scientist by training. I believe in data, robustness of information, making sure you’re on the right track. Insight because if you’re not able to draw insights from research, you’re not a strategist, just someone observing the data. And faith because you never know what’s going to work, so you always need a bit of faith to get everyone started.”
Laura Chiavone
================
I believe great companies have one common infrastructure characteristic: culture. Good companies can be grounded in systems, processes, operations, etc.; however, the step up to ‘great’ demands a culture (which is always implemented by people) to elevate those ‘infrastructure’ aspects. To be clear. “Culture” is not some ‘thing’, or values, or some nebulous feeling; it is an emergent consequence of how people interact with each other within a business. It is not what someone does or doesn’t do, it is what happens when people do things with each other. I thought of this because Mike Walsh has a new book, The Algorithmic Leader, which suggests that the most successful companies of the future will support/augment/enhance that culture infrastructure – with algorithms. Now. Before anyone defaults into thinking this translates into an “empty soul, technology order taker” company, or even holacracy (ponder how polar opposites could be relevant to the algorithm topic), let me share some thoughts on how I believe this thinking suggests a structural value-creation lift: for business & humans. To me this will occur through a balance of stability (knowledge infrastructure), uncertainty (quests versus missions) & an understanding of antifragility (selective redundancy maximizing untidy opportunities).
Let me pose some thoughts on the relationship between algorithms and antifragility upfront.
====================
“It is optionality that makes things work & grow.” AntiFragile
Maybe algorithms shouldn’t provide answers, but options. Maybe, more importantly, we become a little less comfortable with the need for construct and more comfortable with using algorithms as dynamic application of ‘movable construct’ at the right time & place.
“The antifragility of some comes necessarily at the fragility of others. In a system the sacrifice of some units – fragile units or people – are often necessary for the well-being of other units or the whole.” AntiFragile
Maybe algorithms should evaluate tradeoffs, fragility versus antifragility & well-being factors (of units & whole).
“The crux of complex systems, those with interacting parts, is that they convey information to these component parts through stressors.” AntiFragile
Maybe algorithms should not highlight solutions, but rather stressors or what Donella Meadows called ‘leverage points.’
“There is a category of things that we can call half invented, and taking the half invented into the invented is often the real breakthrough.” AntiFragile
Maybe algorithms shouldn’t seek new innovations but rather ‘half invented’ ideas ripe for innovation maturity.

“Meaningful progress is non-linear.” How To Lead a Quest
Algorithms should enable an organization to identify progress paths to explore and discover rather than simply serve the presently identified ‘paths’ of progress or solve presently identified issues & vulnerabilities.
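To make the ‘options, not answers’ idea a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical (Option, fragility and surface_options are my own illustrative names, not anything from Walsh or Taleb): the algorithm filters and ranks options, annotates each with its stressors and its downside/upside asymmetry, and leaves the actual decision to a person.

from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    expected_upside: float              # potential gain if it works
    downside: float                     # worst plausible loss
    stressors: list = field(default_factory=list)   # pressure points it surfaces

def fragility(option):
    # Crude asymmetry check: how much more we stand to lose than gain.
    return option.downside - option.expected_upside

def surface_options(options, max_fragility=0.0):
    # Return ranked options, not a single answer: keep every option whose
    # downside/upside asymmetry is acceptable, most antifragile first.
    # The actual choice is left to a human.
    viable = [o for o in options if fragility(o) <= max_fragility]
    return sorted(viable, key=fragility)

# Usage: the algorithm informs; a person decides.
options = [
    Option("enter new market", expected_upside=8, downside=5, stressors=["cash burn"]),
    Option("optimize current ops", expected_upside=2, downside=1, stressors=["talent churn"]),
]
for o in surface_options(options):
    print(o.name, "| stressors:", o.stressors, "| asymmetry:", fragility(o))

The point of the sketch is the shape of the output: a ranked, annotated set of choices rather than a single prescribed move.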
=================
Full disclosure on my business beliefs. Throughout my career I have always felt comfortable making the less certain decisions just certain enough that someone would go “well, it seems riskier, but, if you own it, go do it.” I say all that because I believe all Future of Work discussion should be grounded in the relationship between certainty & uncertainty – for the business, the people within the business organization and people’s minds/attitudes.
Algorithm leadership.
Most people want certainty, therefore they let research make decisions, use numbers to make decisions, show data to make decisions and, increasingly, will suggest algorithms make decisions.
This is just a different type of efficiency couched in efficient operations. It will be called “efficient decision making.” The problem is this efficiency is just an attempt to strip a decision of uncertainty and, well, the best, most effective decisions always carry along some burden of uncertainty.
What made me think about this? I just finished rereading AntiFragile (Nassim Taleb) and reread the first chapter of Mike Walsh’s new book, The Algorithmic Leader.
The former is about figuring out how to maximize from disorder or uncertainty, while the latter is about not becoming too dependent upon seeming ‘certainty.’
—
“The future of companies, regardless of size, will be shaped by algorithms.”
Mike Walsh
—
Ultimately, it will be humans who use the shapes created by algorithms to assess options, evaluate antifragile components and navigate asymmetrical uncertainty.
It is within this dynamic environment that we should note business is inherently fragile. HBR once said “business is a quivering mass of vulnerabilities.” I say that because as a pendulum swings one way it will inevitably want to swing the other way. We inherently feel the fragile pendulum swing and start seeking to build ‘un-natural’ antifragile aspects to create a sense of antifragility. Aspects like systems, process, rules, KPIs, data/dashboards and, yes, algorithms. The more fragile we see, or feel, the business to be, the more likely we are to use those created mechanisms to ‘tell us what to do.’ We must fight against those instincts.
Frankly, this is where generations DO become relevant in discussing business. Older workers, 50somethings, can be an impediment by seeing past experience as ‘certainty.’ On the other hand, some 50somethings can actually be a bridge between some certainty-type learnings and younger people who are more comfortable with disorder (but who don’t necessarily have the expertise to discern the best bridges between certainty & uncertainty).
Here is what I do know. Business people inherently abhor risk, business organizations inherently gravitate toward the ‘safest’ option, and numbers, research, data & algorithms look like life rafts in a risky, safety-seeking business world. That said. I also know progress is rarely found without some risk and is often found on a ‘not-the-safest’ path. Algorithms create a false sense of the ‘right thing to do.’ Any leader who leans on algorithms too much isn’t leading. Period.
Uncertainty leadership.
For this I lean in on How to Lead a Quest by Dr. Jason Fox. In times of uncertainty a business does not need business ‘heroes’ but rather people aligned on a quest and leaders who embrace the uncertainty of a complex interconnected multi-dimensional business world.
—-
“You must learn to be still in the midst of activity and to be vibrantly alive in repose.”
Indira Gandhi
—-
Contrary to popular belief, I would suggest a highly successful algorithmic leader is likely a 50something who has navigated research, then data, then the ‘dictating decisions’ challenge gauntlet, & who was more likely to see how seemingly unrelated, disparate fragments could be coalesced into decisions and futures that the numbers/data didn’t completely support, but also did not completely discount. That was a long-winded way to introduce the idea of “data decipherers.” This type of leadership is invaluable to an organization more & more steeped in numbers, dashboards, data & algorithms.
Jason Fox calls this “shining a light on the path before us.” Leadership will not use algorithms for ‘squinting into the future’ but rather to identify the stepping stones in a sea of uncertainty. They will offer people moments of some certainty without promising a certain future nor even promising steady organizational progress. It will be more about uncovering options and making choices to alleviate stressors, so that some teams can break through while others provide the organization with the redundancies to protect it from uncertainty vulnerabilities.
Here is what I do know. Psychologically, businesses will arc toward a belief that algorithms will provide an increased tidiness and symmetry to business. That is a false sense of tidiness. Business will become increasingly untidy, the paths will become increasingly complex, therefore business will become increasingly uncertain with regard to the best, and proper, steps necessary for progress. Pragmatically, business will need more, and better, leadership comfortable with uncertainty despite more numbers, data and algorithms.
Antifragile leadership.
Crisis and disaster resolution is rarely about the resources available, but rather about people* availability.
-
* ‘people availability’: this is physical energy, skills and mental energy combined.
Here I lean on Taleb’s AntiFragile. We tend to build redundancies incorrectly, don’t account for disorder well & only enhance fragility in our plans. We too often see AntiFragile as a “leadership concept” when in reality it is best absorbed by agile/adaptable teams.
Risk management almost always solely focuses on the resources necessary to sustain, and manage, a foreseen crisis. As Taleb points out, the largest flaw in that is most crises do not look similar to ones in the past (they will have similarities but still be unique). In addition. Most organizations build in redundancy safety nets as, well, a net. Because we dislike fragility so much we start building in antifragility everywhere. We tend to think of leadership as “what if” redundancy design. Antifragile leadership should be more about ‘aligning resources to meet different scenarios.’ Some people would call this ‘agile.’ I would not. I would simply call this pivoting (I am old school) based on the Law of the Situation. Great businesses have always been able to pivot to meet market challenges and opportunities. This is pivoting – no more, no less.
This is where I would view using algorithms a little differently than other people. Algorithms tend to look at opportunities when I believe they could be better used to identify stress points and stressors. Most good leaders are best as problem solvers or, maybe better said, ‘removers of obstacles to opportunistic behavior’ (that doesn’t mean they don’t optimize existing operations/situations, just that where good leaders get paid the big bucks is getting moments/situations unstuck). I would also argue identifying stressors permits smarter experimenting and tinkering.
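To show what ‘identify stressors, not opportunities’ could look like in practice, here is a minimal sketch in Python. The units, capacities and loads are invented for illustration, and stress_test is my own hypothetical name; the idea is simply that the algorithm reports where the system cracks first under random shocks, which is where selective redundancy buys the most, rather than prescribing a decision.

import random

# Units are (name, capacity, typical_load); all numbers are made up.
units = [
    ("customer support", 100, 80),
    ("fulfillment",      100, 60),
    ("engineering",      100, 40),
]

def stress_test(units, shocks=10_000, shock_scale=0.5):
    # Monte Carlo shock test: randomly inflate load and count how often
    # each unit exceeds its capacity. High counts = stressors worth
    # selective redundancy; the decision itself stays with people.
    breaks = {name: 0 for name, _, _ in units}
    for _ in range(shocks):
        surge = 1 + random.random() * shock_scale   # up to +50% load
        for name, capacity, load in units:
            if load * surge > capacity:
                breaks[name] += 1
    return sorted(breaks.items(), key=lambda kv: kv[1], reverse=True)

for name, count in stress_test(units):
    print(f"{name}: broke in {count} of 10,000 simulated shocks")

The output is a ranked list of stress points; what to do about the top one is still a leadership call.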
Here is what I do know. Algorithms, used properly, permit people to stop just optimizing for the present and start attempting to optimize for the future. Yes. It may mean being less efficient in the short term (sometimes), but, done well, it will create a more effective long-term construct.
Conclusion:
I think the smart businesses of the future will be “directed to act” by algorithms, but not managed by them. The latter demands acceptance of algorithms as qualified to make us do something behaviorally; the former demands we accept algorithms as something that ‘informs’ our doing. Somewhere in between is the decision of how much we, people, are accountable for thinking. Algorithms inherently encourage us to believe business is not best when it is random. Yet. The best businesses resist the urge to suppress randomness and permit people to be more accountable for some untidy decisions with some untidy outcomes.
All businesses will exist, in some form or fashion, grounded in algorithms. I am fairly sure that’s a given. The challenge will be to not get consumed by algorithms.
To realize algorithms do not give answers, but outline options.
To realize algorithms don’t define redundancies, but rather where and when to apply redundancy resources (therefore help to define how to create proper redundancies).
To realize algorithms don’t necessarily create innovation opportunities but rather find opportunities to ‘reinvent’ ideas.
And.
To realize algorithms will not replace people, but rather augment minds and skills OF people. Ponder.



This makes the world really, really difficult for most thinkers because, if they are honest, they get a lot wrong initially and most thinking may have seeds of smartness and truth initially, but other than that, well, a lot is garbage. I call it “not quite right” thinking. Taleb called them “half invented ideas.” These are the ideas that didn’t get traction immediately and, well, if we are honest, business life and our own spans of attention tend to discard an idea if it doesn’t show immediate possibilities or success. But if you keep the fragments of good ideas around, and twist them around a bit every once in a while, like a kaleidoscope they can eventually come together in a vivid image. Voila, your garbage has turned into non-garbage. Your half invented becomes a useful invention.
In the end, all garbage has value. It may not all turn into a treasure someday, but it all shapes perspective. And perspective is imperative in the in-between time when outlines of some really important things can be a bit vague. Sometimes the garbage is what gets you out of the in-between. And maybe that is the most important point I have shared today. Ponder.
I almost called this ‘refinding technological optimism.’ Okay. Maybe it is more about anti-technological dystopia. I am not suggesting we be utopian, just that I question why we should ditch optimism. I thought about this in a conversation with Faris Yakob as we pondered our possibly naïve optimism about technology in the early 2000s. Anyway. Two of the books I consistently pluck off my shelf to remind myself that technology dystopia has not always been the norm:
To be optimistic is to believe in human ingenuity with no foreseeable limit. Technology can be crafted as an unending cascade of advancement. What this means is a belief that each advancement can not only eliminate present technological issues, but also stretch the limits of what is currently possible. In its constant stretching both good, and bad, can occur, but the good is constantly erasing the bad. Yeah. It’s an understanding that technology is both empowering/enabling and oppressive/constraining; often simultaneously. But within optimism is a rejection of the conclusion that the world is ugly and the people are bad. Optimism rejects dystopia as well as the status quo. I believe the status quo never invents the future; vision, creativity and innovation craft the future. Yeats is correct,
I believe there are about the same number of neurons in the brain as there are stars in the universe. We have used technology to explore space, to explore the brain, to explore the body, to explore the capacity of humans. Technology is the ship which can carry us to the farthest parts of the universe. My real fear and pessimism reside not with technology, but with ourselves – human beings. We have met the enemy and it is us. The truth is technology simply amplifies all the worst things about human beings. I’m not speaking of evil, although it lurks in the depths of the Internet, I’m speaking more about conformity. The internet defines what things should look like and scores of people line up to conform to that likeness. The same thing occurs with ideas and, well, everything. We are imitation machines. More access to all this information and imagery and words simply encourages us all to become average, i.e., to all become the same. At some point we will all look like each other, speak like each other, and even use all the same words. That is my fear. My optimism resides in the belief that people are not average, they do like to be distinct, and they like to progress to something new and better. Yeah. All progress is grounded in some spectacular risk, some spectacular mistake, or some spectacular idea that encourages everybody to zig while everyone else is zagging. And my optimism also encompasses technology because technology mirrors humanness.

A major difference between humans and computers is that at any given moment people are not choosing among all possible steps. What this means is when humans think of possibilities or ‘desired futures’ they are not even close to making the optimal choices. Instead, we, people, typically lean in on a deeply nested hierarchy of ‘knowns’ in our minds which we recognize as ‘better’ paths, or stepping stones, toward the future we desire. These nested ‘knowns’ are typically bounded biases (self-sealing beliefs).
Which leads me to the fact that good decision-making is not always the optimal decision.
Ethics are our morals in action. Ethical behavior is the system we develop framed within our moral code. Our moral code, or our morals, is a system of beliefs emergent from our values. Values are the foundation of our ‘right/wrong judgement,’ which creates some belief system. This is personal, an individual decision, not universally accepted.





Which leads me to coherence.

In Search of Excellence was the first book I faced in my career that became a ‘formula’ for business people I worked with. Normally sane, thoughtful, independent-thinking business people (mostly men) would pull out the book or point to it on some shelf and say “we need to do this.” Without question it became the first business bible, of many business bibles to come, of what everyone needs to do to be excellent. And while I could debate some aspects of the book itself, suffice it to say it’s good, has some great ideas, but is not a bible.
individual destinies, supporting self-development, objects of true love, and in the end the only instrument able to fulfill the need for immortality of the self.
(successfully addressing a need) matched with customers who want that combo. Branding people were grumpy.

In the wayback machine, we had more control over what we would see. Plus. A smaller community controlled what we saw (in some form or fashion). Simplistically, we had to work harder to see the less-than-normal shit. The problem in today’s world is what we are ‘shown’ doesn’t care about proportionality, so the ‘less-than-normal shit’ takes on an oversized shitstorm feeling of everywhere all the time. All this to say we get caught in the wretched in-between of knowing that access to all this information makes us smarter, can make us safer, and actually can create a more equitable (accountable) society AND knowing that a consistent onslaught, or drip-by-drip, of exposure to crappy behavior, well, suggests we begin imitating some of that behavior in order to be ‘competitive’ in today’s crappy society/world. It’s kind of like the tragedy of the commons, just with behavior. This all breeds a sense of what is called ‘


<assuming you have a hierarchy type organization>. That means your tactical connection engineers need a head engineer <for ALL tactical engineers regardless of department or expertise>.
I do think tactical connection engineers will also need to be better data interpreters <decoders>. Why? Well. Data people are finally realizing that to maximize customer value it isn’t just about customer satisfaction but also an emotional tier in which you appeal to some fundamental motivations and emotional motivators <desire to feel a sense of belonging, to succeed in life, or to feel secure, for details>. HBR has a nice article on this: “
Yeah. I am saying clustering leads to mediocrity. That said, oddly, business tends to like clustering, or, at least, does not discourage clustering. Let me be clear. Given an opportunity to be excellent, a business will always choose the path to mediocrity. Yes. Always. What I mean by that is in every situation – customer service, capital investment, ideation, innovation, creativity, planning, strategy, implementation – given an opportunity to choose a Spinal Tap 11 choice, a business will always choose a 9 or below choice. And from there on out that ‘plus number’ begins diminishing bit by bit. And while I imagine I could point to a variety of reasons, let me focus on clustering as the culprit.
Business loves numbers. Which leads me to remind everyone that one of business’s biggest lies is “the numbers never lie.” Numbers lie all the time. Even beyond how people torture numbers until they say what is wanted, numbers create clusters. Yeah. As soon as you find a number you like it becomes a magnet for other numbers, resources, energy, focus, etc. What this means is business relentlessly clusters resources against a diminishing growth opportunity. Invariably ROI can never really improve, in a meaningful way, but intrepid business people will always find ways to suggest things are good and getting better. Once again, it never hits 11 and is only getting closer and closer to 1. The only way to get off that slippery slope of lessening growth is, well, declustering.
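To put some (entirely invented) arithmetic behind that clustering point, here is a minimal Python sketch. The payoff numbers and decay rates are assumptions for illustration only; the shape is what matters: piling every unit of resource onto the number you already like earns less and less, while declustering the same budget across unproven options preserves more total upside.

def clustered_return(units_invested):
    # Diminishing returns: each extra unit on the same opportunity
    # earns half of what the previous unit earned (10, 5, 2.5, ...).
    return sum(10 * (0.5 ** i) for i in range(units_invested))

def declustered_return(units_invested, options=4):
    # The same budget spread across several smaller opportunities,
    # each with a lower first-unit payoff but a gentler decay.
    per_option = units_invested // options
    return options * sum(6 * (0.8 ** i) for i in range(per_option))

budget = 8
print("clustered:  ", round(clustered_return(budget), 1))    # ~19.9
print("declustered:", round(declustered_return(budget), 1))  # 43.2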
