
===
“However, even if we accept that citizens are not primarily causally responsible for our poor information environments, it could be argued that they nonetheless have a remedial responsibility to mend them.”
Solmu Anttila
===
This is about technology, maybe partially about generative AI, but more about technology’s impact on culture and how we think. Let me begin with technology’s improbable intelligence. Yeah. I believe we should feel comfortable suggesting AI is intelligent. I say that recognizing that really smart people often get caught up in the wrong discussions. As I’ve noted in the past, the whole argument about whether artificial intelligence is actually intelligent or not is dancing on the head of a pin. To the everyday schmuck like me, or anyone who uses a computer daily for everyday work, the computer is pretty intelligent, artificial intelligence is anything but artificial, and we’re pretty glad it’s a little bit more intelligent than we are because it elevates us. That said. I’ve always liked a thought Norbert Wiener offered us in 1960. He briefly mentioned that we should reject a certain attitude people hold toward technology. What is that attitude? The assumption that machines cannot possess any degree of originality. And, yes, I associate intelligence and originality. Regardless. The assumption rests on the belief that nothing can come out of a machine which has not been put into it. Beyond originality, the output, this also suggests technology must always be open to human intervention WITHIN a process (not just post hoc). But going back to originality, the reason Wiener emphasized originality is that, at the time, many people dismissed the notion of machines, or technology, outrunning human control, therefore dismissing the idea that machines could gain control. Yeah. Improbable intelligence has two interesting aspects: originality and control.
Let me discuss control first. Going back to Wiener, he believed that machines can, and do, transcend some of the limitations of their designers, and that in doing so they may be simultaneously effective and dangerous. He made this statement with a couple of thoughts in mind:
- a concern that technology would, sooner or later, take on behaviors we cannot comprehend.
- a recognition that technology often acts far more rapidly than humans can and is far more precise in performing the details of its operations.
And while this may not be true intelligence, what it does do is transcend humans in the performance of tasks – both doing tasks and thinking tasks. Which makes me circle back to the thought that technology could theoretically take control over mankind. I don’t believe that is meant in a whole-cloth way, but rather that technology moves so fast, so precisely, and in such an overwhelming way in terms of information flow, that people’s attempts to adjust technology as they recognize issues will be irrelevant and ineffective, because technology never stops moving and the old issue has been replaced by a newer, more dangerous one. In other words, technology will always be ahead in this race. I imagine the grander point is that the only way to avoid complete disaster is if our understanding of the technology equals the performance of the technology. I would be remiss if I didn’t point out this is true of intelligence as well: human intelligence must be equal to technology’s intelligence (improbable as that may sound). Yeah. Improbable, no? Human speed makes effective control of technology a pipe dream. By the time we can react to our cognitive conclusions we may have already run off the cliff. Yeah. Given the pace at which technology assimilates input and generates output, it is highly likely that (a) it would be impossible to foresee the undesirable consequences, or (b) we would be incapable of assessing the onslaught of output effectively against any strategic objective, or (c) our ‘control’ decisions would be obsolete by the time they were implemented. This ends my point on control, or, better said, our lack of control of our improbably intelligent technological partners.
Which leads me to originality.
Generally speaking, we mangle the concept of originality and totally misuse it with regard to technology’s output. How do we mangle the concept? Everyone should just assume that nothing is original, that almost everything is a derivative of something that already exists, and that what really matters is what you do with what exists and whether it can be used in an original way. But, with regard to technology, Norbert Wiener points to a misconception that technology’s output will never be original because it depends upon the information that has been input. If we apply the truer definition of originality, the one I just pointed out about using what exists in imaginative ways, technology can certainly meet that definition. In addition, it can reconfigure data at a faster pace than humans can. I state that to suggest technology can offer originality faster. Ponder that. Maybe the time has arrived to acknowledge a bit more liveliness and creativity, if not originality, in our technologies. And maybe if we do so we, people, can become a bit less mechanical (thinking of ourselves in machine terms). I say that because maybe, just maybe, if we concede a bit of originality to technology, and if we stop thinking of ourselves in machine-like terms, we may begin to view people as having the ability to evolve in tandem with technology.
Ponder that because there are absolutely some good consequences as well as some bad consequences.
Which leads me to the improbable.
It seems we have entered an era in which we are significantly more accepting of the improbable. That doesn’t mean that in the past, and I mean centuries in the past, people’s beliefs didn’t dance around the edges of some improbable things. But typically, traditional thinking, let’s call it reason-based thinking, was grounded in some basic societal agreements within which improbable things stood out distinctly enough that the majority of people, well, recognized they were improbable. Technology has changed that. With all the information bludgeoning us, the reality is the improbable has, perception-wise, gained a degree of possible. It’s not that everybody agrees it’s probable; they just don’t completely dismiss it. And technology appears to be the megaphone of the improbable (intelligent and unintelligent). In the cacophony of information available to everyone there will always be a voice shouting that some fact makes all other facts irrelevant and that you should be paying attention to this ‘fact,’ and if its source has some semblance of credibility, it starts edging its way into the possibility zone. I believe it was H. L. Mencken who said there is no idea so stupid that you can’t find a professor who will believe it. That has a grain of truth in today’s world. Seemingly, experts mingle with ordinary people in accepting some fairly improbable things despite having access to more good information than ever before. Technology elevates the improbable voice while making some significant truths and thoughts invisible, and therefore less relevant.
Which leads me to improbable intelligence becoming an authority.
Since the dawn of time people have believed in some authority. The only thing that has changed is what that authority is, and in today’s world it has become a revolving wheel from which we pick and choose the authority we want to believe. And while much of early authority intelligence was of dubious intelligence – royalty and religion – science pricked that improbable intelligence with probable intelligence. And that is where technology has assumed some authority: by dismantling the entire authority system to such an extent that we can’t discern whom to believe, and therefore what to believe. The world we live in is fairly incomprehensible to most of us, and discerning the improbable from the probable, with any certainty, is beyond most cognitive abilities. Technology’s improbable intelligence plays a significant role in that almost no fact, actual or imagined, surprises us for long, because what constitutes an unacceptable contradiction to reality has become blurred. Technology has encouraged us to believe there is no reason not to believe – in anything. Technology’s improbable intelligence has taken on the role of probable intelligence. I cannot remember who shared this metaphor, but let me share it.
Prior to the technological world we live in today, which certainly had technology in the industrial sense, life for most of us was like an unopened deck of cards. You could open the deck and, flipping the cards over one by one, be relatively confident of what the next card was going to be. What today’s technological world has done is shuffle the deck, so while every single one of us is familiar with the deck, and we’re familiar with the cards within it, just because we see a card we can’t be confident of what the next card will be when it’s flipped over.
Anyway.
I imagine my point today is that arguing whether AI/technology is intelligent or not, whether it is original or not, seems like wasted energy. While we argue, it assumes the role of the expert in original thought and reason and, well, intelligent dialogue. It has already outpaced even the best minds in society. So maybe we should accept the fact it is quite probably intelligent and quite probably going to run over many of the societal good things we embrace, and start being a bit intelligent ourselves in solving what is quickly becoming an existential societal issue. Ponder.



Suffice it to say, 24/7 technology has challenged most of what we thought about our self-identity. In the good old days self-identity was a bit easier because we had fairly limited exposure – neighborhood, school, work, community – to images and shared experiences, which shaped what we saw as “self.” In today’s world we face an onslaught of information we are, frankly, incapable of assimilating within our cognitive scope. And while many people discuss this in terms of stress, knowledge, and decision making, today I discuss it in terms of self-identity.

We could use technology to help us understand why things are as they are, as well as envision what we could do with our lives. But here’s the deal. The struggle ultimately resides in ‘self.’ What I mean by that is ‘the core or the center of who and what we are.’ We all strive after something we deem good or better – sort of our personal version of progress. But if we are not careful this becomes good for the self and not the greater good, as in not taking into consideration the larger whole. So, unless we as individuals sort out our center – our urges, impulses, and desires – in a coordinated way, we are doomed to constant confusion, living in a contradictory identity state. That could quite possibly be self-destructive in a technological world which is constantly attacking us with its own coordinated, orderly system of ideas about what it thinks we should be and who we should be. To be clear.
The game could be summed up as “the one with the most perspectives when she dies, wins.” That’s the self-identity game. It used to be a more simplistic “what I believe represents what I am,” but in today’s technology world who I am, if you seek a center that holds within multiple contexts, is an accumulation of perspectives. If the industrial age encouraged a standardization of identity, technology is ripping us apart. Overcome by details and information, we have become almost incapable of conceptualizing anything – including our own identity. Consequently, we have begun crafting the details of who we want to be seen as, to compete in a world in which others’ identities flash before us detail by detail. Detail by detail we push out into the world, and before you know it you are no longer a self – as a solid concept – but rather a bunch of details and pieces you think have some value. And this is where stories come in. Thinking conceptually may be too much of a mind bender, but having a story, or stories, is not as tough. Good stories, and well-maintained identities embracing stories, endure. This is actually part of the Third Wave Toffler mentioned. Second Wave media, within stable distribution structures (major TV networks and major papers/magazines), tightly reinforced shared worldviews and some semblance of common sensemaking within which an identity could comfortably reside (or, conversely, against which a counterculture identity could be created). In today’s environment worlds are created through our digital connection points, perspectives are gained through many interactions, and we need to become more comfortable projecting our identity, all facets of it, through this digital connectivity of almost infinite networks of other humans. The reality is technology is getting better; and worse. It is getting better in that it is easier to craft the identity we would like to project, but worse in that, if you are not careful, algorithms pick at the little gaps, seeking to exploit them with fear, doubt, and victimhood.
Clearly, the lines have been erased between what we would have considered our self-identity and the digital worlds that represent our identity. The technological world has forced us to think of ourselves, in many ways, as content. And in some ways that is good. If our identities are content, and useful content should have some substance, then maybe, just maybe, by treating identity like content we will make sure it is worthy of our self. Ponder.