knowledge distribution business model

===================

“‘Institutional inertia’ needed to be countered. There is no doubt we are on a journey, and that journey needs to arrive at a place of inclusion further on than we are at the moment.

What matters to me in terms of my own responsibility and my own advocacy is that we don’t settle for second best, that we keep trying to move the organisation forward.”

Paul Bayes

======

“The future is not just about inventing disruptive technology; it is about reimagining human experiences.”

Mike Walsh

==================

“But the brain does much more than just recollect. It inter-compares, it synthesizes, it analyzes, it generates abstractions. The simplest thought, like the concept of the number one, has an elaborate logical underpinning.”

Carl Sagan

============

 

This piece began with a tweet from Ray Wang of Constellation Research, “I believe the future of work is graphs,” and this image:

 

Ray, being the open-minded smart person he is, said “tell me what you’re thinking” when I said I’d debate a bit. So here we go.

 

Where I disagree: Having sat in my share of C-level meetings, I find this chart a little deceiving. It will, practically speaking, end up as a hierarchical triangle rather than an equalized quadrant.

 

This may be less a disagreement than an add-on to the original thought, but business will begin with, and over-emphasize, automation in the future. They’ll see it as “structural efficiency”, but couch it in “smarter digital infrastructure” terms implying effectiveness value creation. Degrees of that thinking will be correct, but in totality it will be mostly bullshit. Automation equals replicable activity, which equals less randomness, which equals less expense (overall), and there isn’t a business person sitting in some corner office guided by spreadsheets who isn’t going to salivate over that idea. Regardless. Yes. Different businesses have different needs. An additive manufacturer is different from a hospitality business or an entrepreneur or a packaged goods company. I find it helpful to use Mintzberg’s 5 organizational types:

Mintzberg: The Rise and Fall of Strategic Planning

 

The decision of how to use Graphs and even aspects of automation (what & how much) will be defined by the type of organization in combination with its business ideology (scenario planning, strategic planning, emergent, deliberative).

That said. My belief remains the same. All organizational types will arc their decision making toward automation first and foremost, hence, my version of the chart.

Before I leave this section, I would be remiss if I didn’t point out that efficiency is your friend, until it’s not. Patterns are your friend, until they’re not. Replication has a half-life in an emergent dynamic marketplace context. Ponder.

 

Where we agree: The purpose of a future organization should be guided by effective decision-making. Smarter decision-making. More insightful, knowledgeable decision-making (and I would argue that Purpose, with a capital P, would be achieved as an outcome of this without actually making it an objective). We agree the future of maximizing business potential (note: I do not call this the future of work because some businesses will decline this future and may do well) resides in the interplay between the people in the organization and knowledge management (through some version of graphs or knowledge tools). We agree more effective use of knowledge will make a business more competitive. I believe we both agree the highest order of knowledge management value will be achieved through total analysis of the data fabric (semantic tools, big data, algorithms, etc. – in my words, ‘creating a nano factory organization’).

 

This leads me to my thoughts on graphs, dashboards, illustrative infographic summaries, knowledge graphs and what type of organization they would be most useful to.

** note: I love Ray’s thinking & direction but believe it currently has limited upside because organizations, business models, are not structured to actually maximize the idea and, well, suffice it to say it’s like placing an eagle in a cage.

 

Whatever we design ultimately has an impact back upon us. I say this because this puts pressure on the initial algorithm design so that it not only delivers, but responds. Or. Maybe we take the pressure off the initial design and construct two parallel paths of information flow so that users can gain some clarity of the conflict of knowledge. Conflict? If the intent is to encourage critical thinking, and conceptual thinking, the objective should not simply be to simplify decision-making/choice making but rather make the choices themselves the most effective. Maybe we should call this algorithmic efficacy.

I would suggest this efficacy is important for a variety of reasons:

 

Far too often we say “the world has become more complicated.” No. it has not. It has simply become more complex.

Far too often we say “the world has become faster.” No. it has not. A minute is still a minute, an hour an hour and a day a day.

Far too often we do not talk about overstimulation, how we numb ourselves to certain things (as a personal defense mechanism and not for some nefarious reason) and the fact that the sheer amount of information we are bombarded with forces us to ‘self-ratio’ (source: McLuhan) what we take in and what we assess and how we do both.

 

Far too often we start parsing out segments & aspects of what could be construed as part of the ‘overstimulation totality.’ But. Here is the deal. Overstimulation, while we want to blame technology, is the combination of the medium, words, pictures, movement, symbols, associations, tone of voice, etc. received in totality. Individual elements, while interesting to debate, have no real meaning on their own. It is the totality in which we begin to re-ratio our attention, attitudes, beliefs, behaviors and thoughts. This is important because many worthy ideas have been sacrificed at the altar of overstimulation.

 

The efficacy helps us navigate complexity AND complications. I would note complexity just is, and complications, 99% of the time, are human created. As I say all the time: anything humans make, humans can unmake. That is the great thing about complications; we make them, we can unmake them. Complexity cannot be unmade. It can be navigated, used to your benefit or ignored (which is not to your benefit). The world is made up of our inputs. And if you believe this, then the biggest decision one can make has nothing to do with simplicity, chaos, complexity or complications; it is what kind of business you want to take on a quest in a complex world (vision, mission, objectives).

 

“Most of us tend to believe that planning in advance makes groups more effective and that centralized control is especially important… But studies repeatedly show the importance of…emergent groups.”

 Keith Sawyer, Group Genius

 

Which leads me back to the visual/thought Ray Wang shared.

For the most part business people know there are no certainties in choice making, only probabilities (and conditional probabilities versus contextual probabilities). That doesn’t mean we don’t attempt to reach certainty, nor does it translate into someone at the head of the conference room table swiveling in your direction and saying “okay, if you want to do it, will you stand by the results/outcomes/objectives?” (note: for some reason business people feel assessing your ‘certainty’ is a way of assessing your confidence). Regardless. Reality, in decision-making, is always driven by two things:

How you make sense of what reality is (variables, conditional probabilities & existing resources)

How you define your choices (contextual probabilities, possible resources & consequences)

 

“success, or failure, typically resides somewhere in the wretched hollow in between the pursuit of too much certainty and the acceptance of too much uncertainty.”

Reality is always impacted by resources available, and needed, at the time. Dashboards/graphs tend to ignore resources and simply, or simplistically, offer pathways. I say this because any time we discuss technology augmenting people or, in extremes, replacing people when it comes to decision-making and actions, we need to always keep in mind what data ‘sees’ and what it ‘shows.’ What we see informs the attitudes not just of people, but the algorithms themselves. In addition, knowledge tends to pool rather than trickle throughout, so the digital infrastructure must feed the graphs to ensure the institutional knowledge, data, individual knowledge and resources don’t pool. I would suggest algorithms, especially in a dual-tracked knowledge sharing system, may actually be the order needed to provide the guardrails for an emergent organization.

 

What I mean by that is if we focus on a 10,000-foot view, we actually end up diminishing skills/abilities/instincts from a lateral view, and vice versa. So the question becomes how to simultaneously nudge both skills, yet let those who naturally skew to one or the other optimize their learning and abilities.

 

 

Technology-distributed Knowledge.

The reason the development of effective algorithmic knowledge distribution, or semiotically annotated platforms, is important to an emergent business model is that knowledge distribution cannot simply be standardized and be effective. A non-personalized distribution is a superficial attempt.

The key is effective distributed knowledge. Effective is reflected in relevant knowledge distributed in ways that are relevant to people, ways that encourage them to think and do things and, ultimately, make effective choices.

The knowledge distribution needs to fundamentally, continuously, change the way people THINK about how they produce & do business and encourage, and prompt, them to actually DO the change. Efficiency is easy to change. This model explores HOW to conduct business to demand a change in beliefs, attitudes and mindset. This means the data & knowledge must constantly nudge reality perceptions because most ‘reality’ is driven by the resources needed at the time. Knowledge needs to “anti-hack” natural business heuristics which hide realities of ‘what could be.’ Managing what people see informs attitudes and perceptions of reality. Perceptions change when prompted with new knowledge; then people change.

An added benefit of a malleable organization driven by emergence is that it shapes itself to opportunities at different paces. The reality of any organization is each component in a system, people included, feels comfortable moving at a different pace. So instead of demanding a consistent pace throughout, the organization has different individuals, and teams, working at different paces – coherent in progress, but inconsistent in its pacing. While this may appear to be inefficient, it actually not only increases effectiveness against any specific opportunity, but also makes the organization more resilient (anti-fragile) overall, with an ability to absorb challenges and opportunities within stride by consistently allowing good ideas, thinking, improvements and practice to sift down to improve deeper levels of the organizational systems. When all workers are knowledge workers, it permits ideas/strategies to be cherry picked by the systems to either improve or become part of the institutional memory. This includes what Taleb calls “half-invented ideas.”

This pacing thought I am offering is possibly a technological addendum, or enabling layer, to a Learning Organization concept where algorithms uncover organic formulas of aspects of people that they may not necessarily (a) feel comfortable offering, (b) actually see as relevant to the situation, or (c) coalesce without some prompting. The algorithms are designed with the intent to nudge people & organizations into ‘ecosystems’ that wouldn’t have formed naturally, and through the nudging to encourage something even better to emerge. In a distributed decision-making environment this would represent emergent benefits, not dictated behavior.

 

Some thoughts on an Intelligence software system

At its core this intelligence software is an iterative knowledge distribution system grounded in both predictive (recognized) patterns and emergent (unpredictable) patterns.

 

I envision two streams of intelligence, one predictive/deliberate and one emergent. The concern with having only a predictive stream, one based on boundaries, is that even constantly evolving technology cannot outrun problems or be fluid enough to recognize emergent patterns (even technology doesn’t know what it doesn’t know). So, I envision a software intelligence/information distribution infrastructure which is rigid enough in one stream to consistently distribute useful intelligence, based on recognized symbols/words/topics, to have pattern recognition and, yet, have another stream which is fluid enough to reimagine patterns.
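To make the two-stream idea concrete, here is a minimal sketch in Python. The topic set, stream names and routing rule are my own illustrative assumptions, not a real product design:

```python
# Hypothetical sketch of a two-stream intelligence router.
# The "rigid" stream keys off a defined ontology of recognized topics;
# this topic set is invented for illustration.
KNOWN_TOPICS = {"pricing", "logistics", "churn"}

def route(item_words):
    """Route one piece of intelligence to a stream.

    Predictive stream: the item matches recognized symbols/words/topics,
    so it can feed consistent pattern recognition.
    Emergent stream: nothing recognized; instead of discarding the item,
    surface it so new patterns have a chance to be noticed.
    """
    hits = KNOWN_TOPICS & set(item_words)
    if hits:
        return ("predictive", sorted(hits))
    return ("emergent", sorted(set(item_words)))

print(route(["churn", "renewals"]))     # → ('predictive', ['churn'])
print(route(["metaverse", "tariffs"]))  # → ('emergent', ['metaverse', 'tariffs'])
```

The point of the sketch is only that unrecognized items get their own stream rather than being filtered out, which is where the reimagining of patterns would happen.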

 

The intent is to find the optimal newness in intelligence, i.e., purposefully build in some intelligence dissonance within the information flow, with the objective to both capture defined ontologies and escape from defined ontologies. Bottom line. Context matters. If we change the circumstances, we change the inspiration. The intent is to inspire from the known and the unknown.

This is an ambitious idea. It seeks to corral the ambiguity in the larger system of intelligence, as expressed by humans, and share it with people with enough clarity of the complexity challenge that someone, or someones, becomes curious enough to conceptualize ideas. This idea is an attempt to appeal to the rationality, and irrationality, of thinking minds. Explore the recognizable patterns, the easiest to replicate resources efficiently against, and explore the unrecognizable patterns, which may represent the most effective velocity opportunities.
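A rough sketch of what purposefully building in dissonance could look like: a feed that mostly draws from the defined ontology but, some tunable fraction of the time, deliberately pulls from outside it. The dissonance ratio, item lists and function name are assumptions for illustration only:

```python
import random

def mixed_feed(on_ontology, off_ontology, dissonance=0.2, n=10, seed=42):
    """Blend a knowledge feed: mostly recognized-ontology items, with a
    deliberate fraction of off-ontology items to inspire from the unknown.

    `dissonance` is the assumed probability that any given slot in the
    feed escapes the defined ontology.
    """
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    feed = []
    for _ in range(n):
        pool = off_ontology if rng.random() < dissonance else on_ontology
        feed.append(rng.choice(pool))
    return feed

feed = mixed_feed(["pricing memo", "churn report"], ["poetry", "field notes"])
```

In practice the dissonance knob is the interesting design decision: too low and the feed never escapes its ontology, too high and it is just noise.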

Lastly, and importantly, I recognize Knowledge Graphs, ontologies and semantic tooling are still in their formative stages. That said, I believe it is an idea in search of a business model rather than a business searching for an idea to make the business better. The latter is why I believe semantic-tooling-type thinking is struggling; the former is where I believe the future of a “graph business world” will thrive.

Anyway.

It was Karl Popper who said “knowledge consists in the search for truth” as well as “it is not the search for certainty.”

Real thinking work, reflecting new knowledge and new learning, becomes actual things we do for customers or actions we take in front of customers. Each of these actions is a part of the overall tapestry of the architecture of the company. The distribution of knowledge, done well, encourages the search for truth, not certainties. It is within truth that probabilities increase and the organization, the people within, become more attuned to emergence.

 

“If you facilitate multiple memories of the future, you build up your organisation’s sensitivity to potential ‘weak signals’, the hints on the breeze that it’s time to pivot and pursue a direction you’ve already sort of explored.”

Dr Jason Fox

 

I follow up the search for truth, a human attribute, with a technology truth: “The computer is a moron.” (Peter F. Drucker) Ok, it’s not, but it has its limitations. On this topic I am reminded of intelligence analyst Walter Laqueur saying, in his 1988 book A World of Secrets, “the need for Human Intelligence has not decreased, but it has become fashionable to denigrate the importance of human assets because technical means (technology) are politically and intellectually more comfortable.” This came at the height of the struggle going on in the world of intelligence between those who denied the usefulness of agents and those who defended the worth of Human Intelligence in the face of the technological revolution. At the time the New York Times estimated, from intelligence officials, that 85% of all information gathered came from hardware sources (ELINT/SIGINT/PHOTINT/RADINT) as opposed to information gained by spies (HUMINT).

The ultimate conclusion the intelligence experts reached was “what’s the good of knowing the SIGINT without knowing what the motive is.” In other words, technology shows, humans understand. AI, technology, will never replace humans, simply augment them.

Anyway.

Knowledge management has the potential to expand the development of instinct, imagination and ideation/thinking as well as pragmatic thinking/reasoning throughout an organization.  In addition, it can create, and enhance, linguistic coherence which, ultimately, has an impact on organizational culture. Any information distribution algorithm system must have some aspect of semiotic linguistics (semantic web). I envision this would tap into existing vocabulary as well as establish some common narrative linguistic aspects.
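As a toy illustration of tapping existing vocabulary to build that linguistic coherence, here is a sketch that rewrites the synonyms different teams use into shared canonical concepts. The word list is invented for the example; a real system would derive it from the semantic layer:

```python
# Toy controlled vocabulary: maps team-specific words onto shared
# canonical concepts (the mappings here are invented for illustration).
VOCABULARY = {
    "client": "customer",
    "account": "customer",
    "sku": "product",
    "offering": "product",
}

def normalize(text):
    """Rewrite known synonyms to their canonical concept so distributed
    knowledge reads coherently across the organization."""
    return " ".join(VOCABULARY.get(w, w) for w in text.lower().split())

print(normalize("Client churn by SKU"))  # → customer churn by product
```

This is the crudest possible version of the idea; the common narrative aspects would grow out of which canonical terms the organization actually converges on.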

 

Gary Klein

Sense-making is the ability or attempt to make sense of an ambiguous situation.

More exactly, sensemaking is the process of creating situational awareness and understanding in situations of high complexity or uncertainty in order to make decisions.

 

What is really nuts is how no one seems to talk about how digital infrastructure, knowledge management through graphs, can be used to enable better sensemaking and choice making. Basically, they talk about how people will learn to use the digital infrastructure instead of the infrastructure augmenting people. That seems ass backwards to me.

 

I define sense-making as how do you make sense of the world so that you can act in it. Therefore, you need a business model where technology augmenting people is at the core. You need more than graphs and data; you need ways to liberate the minds as the way to unlock potential. Maximizing business productivity, progress & profit resides in better sensemaking and choice making (effective decisions within good sensemaking).

 

 “Clockwork Condition” (the unfinished sequel to Clockwork Orange)

“man trapped in the world of machines, unable to grow as a human being and become himself.”

 

I mean, how do you use the benefits of a digital infrastructure if you cannot identify the right problem/opportunity (sensemaking)?

Digital infrastructure/graphing is going to do the sensemaking? (insert me laughing so hard my sides hurt).

If one cannot make sense of what is, or is not, happening then choice making is useless.

“Data” provided by the graphs will make the choices obvious? (insert me laughing so hard my sides hurt).

 

Data only informs, it doesn’t decide. The same is so with graphs. Yeah. Some of those digital transformation people may say that, but it is most often said under their breath after showing a bunch of fancy graphs showcasing how data can make the choice paths obvious.

It seems crazy to me to have to say this, but, digital infrastructure informs decision-making/choice making, data informs decision-making/choice making, AI informs decision-making/choice making and all simply inform context needed to make sense of what is happening (sensemaking). None make decisions. All offer possibilities with probabilities associated (assuming someone can assess the possibilities and apply some probabilistic thinking).

Most importantly, all will inform people. People will make sense of what they are informed by and inevitably make the most consequential decisions and choices. It seems like digital transformation people should be thinking about how people think. Maybe help them become digital/data/sensemaking literate (not experts, just literate).

But. That’s me. I think the purpose of business, and what you do to your business, is to benefit people.

In the end.

Graphs aren’t really knowledge management tools; they are narrative creators. Language, narratives (not stories), are important to understanding (stories tend to offer solutions). Therefore, maybe it would help us to think of our own human-ness in algorithmic terms. If we don’t understand, we don’t trust.

In addition, graphs are handy but, in general, they are backwards looking. The danger in this is that we rarely see the consequences of anything we do, including technological implementation/innovation, until the consequences are fully developed and have created irrevocable changes. In other words, it’s hard to undo what has been done.

Norbert Wiener, Some Moral and Technical Consequences of Automation

 

That said. Algorithms (as well as automation), used well or used poorly, will inherently change our responsibilities to, and of, our jobs. They will also provide the possibilities to leverage resources better (strategy is always using resources to achieve desired objectives) with better understanding of our responsibilities to the job.

The danger is we teach business people to rely on the graphs and trust them more than we actually realize we do. It is easy to head down this path because what’s behind the graphs (the black box as it were) looks even more complex and more dynamic than ever before (generously sprinkled with potential problems).

The danger is, while the cloud represents an almost limitless pool of ever-growing knowledge and data, I would be remiss if I didn’t point out that the cloud, in and of itself, can be just as stupid, if not stupider, than any one individual. More knowledge, used poorly, simply makes one stupider rather than smarter. The collective knowledge is only as good as who uses it. As a corollary, individuals, and small groups of people, within an institution (augmented by the cloud collective knowledge) get smarter iteratively (even if they misuse knowledge, because they learn from mistakes). Conceptually they get smarter than the cloud due to understanding of context.

I say all of this because, in a conceptual age in which understanding concepts and implementing well against them is the objective of the institution, these separate ideas should remain separate & distinct from each other. This must remain so because, conversely, the more similar the two concepts get, the more stupid both ideas get (arcing toward mediocrity in terms of response to emergent concepts).

 

The appeal of knowledge graphing is apparent. It can alleviate anxiety by promising certainty in an uncertain dynamic world. It offers efficient decision-making to complex problems while avoiding thorny critical thinking issues (and possibly ethical ones).

I will say that I believe when the knowledge sharing tools are offered as wholesale replacements for critical thinking, or conceptual thinking, efficacy of decision making becomes challenged if not questionable. Conceptual thinking is actually a firewall to ethical fading, technology blindness and even objective blindness.

We should be seeking to balance humans and technology similar to what Ray actually suggests graphically.

While it may seem counterproductive, any graph, or data sharing, should actually seek to entangle people in some way. This is actually an idea called ‘designing for entanglement’ which is outlined in Entangled by archaeologist Ian Hodder. Entangling creates some intrinsic traction which gets embedded in conceptual thinking (and increases ability to cope, create, maintain progress & adapt).

We should be ‘fitting’ technology to these messy, complex humans in order to maximize their thinking. Yes. This may sound slower, but in the end it is timely. In other words, timely is better than faster.

 

Maybe Ray is right in that the future of work resides in graphs, but I would caution everyone that the intent of an effective future should be making people smarter, better critical thinkers and better conceptual thinkers, and graphs, in an efficiency-grounded world, are much more likely to become a race to a dumber, default-driven, less-than-knowledge-worker business world.

—–

“The solution to all problems no matter the scale ultimately requires human creativity.”

Chase Jarvis

 

 

** NOTE: while I didn’t dig deep into the moral consequences of automation, I highly recommend anyone interested read Norbert Wiener’s Some Moral and Technical Consequences of Automation from 1960

 

 

Written by Bruce