===

“However, even if we accept that citizens are not primarily causally responsible for our poor information environments, it could be argued that they nonetheless have a remedial responsibility to mend them.”

Solmu Anttila

===

This is about technology, maybe partially about generative AI, but more about technology's impact on culture and how we think. Let me begin with technology's improbable intelligence. Yeah. I believe we should feel comfortable suggesting AI is intelligent. I say that recognizing that even the really smart people get caught up in the wrong discussions. As I've noted in the past, the whole argument about whether artificial intelligence is actually intelligent or not is dancing on the head of a pin. To the everyday schmuck like me, or anyone who uses a computer daily for everyday work, the computer is pretty intelligent, artificial intelligence is anything but artificial, and we're pretty glad it's a little bit more intelligent than we are because it elevates us. That said. I've always liked a thought Norbert Wiener offered us in 1960. He briefly mentioned that we should reject a certain attitude people hold toward technology. What is that attitude? It is the assumption that machines cannot possess any degree of originality. And, yes, I associate intelligence and originality. Regardless. The assumption is shaped around the understanding that nothing can come out of a machine which has not been put into it. Beyond originality, the output, this also suggests technology must always be open to human intervention WITHIN a process (not just post hoc). But going back to originality, the reason Wiener emphasized his thoughts on originality is that, at the time, many people dismissed the notion of machines, or technology, outrunning human control, and therefore dismissed the idea that machines could gain control. Yeah. Improbable intelligence has two interesting aspects: originality and control.

Let me discuss control first. Going back to Wiener: he believed that machines can, and do, transcend some of the limitations of their designers, and that in doing so they may be simultaneously effective and dangerous. He made this statement with a couple of thoughts in mind:

  • a concern that, sooner or later, technology would take on behaviors we cannot comprehend.
  • a recognition that technology often acts far more rapidly than humans can and is far more precise in performing the details of its operations.

And while this may not be true intelligence, what it does do is transcend humans in the performance of tasks, both doing tasks and thinking tasks. Which makes me circle back to the thought that technology could theoretically take control over mankind. I don't believe that is meant in a whole-cloth way, but rather that because technology moves so fast, so precisely, and in such an overwhelming way in terms of information flow, people's attempts to correct it as they recognize issues will be irrelevant and ineffective, because technology never stops moving and the old issue has been replaced by a newer, more dangerous one. In other words, technology will always be ahead in this race. I imagine the grander point to be made is that the only way to avoid complete disaster is if our understanding of the technology equals the performance of the technology. I would be remiss if I didn't point out this is also true of intelligence: human intelligence must be equal to technology's intelligence (improbable as that may sound). Yeah. Improbable, no? Human speed makes effective control of technology a pipe dream. By the time we can react to our cognitive conclusions we may have already run off the cliff. Yeah. Given the fast pace at which technology assimilates information and generates output, it is highly likely that (a) we will be unable to foresee the undesirable consequences, (b) we will be incapable of assessing the onslaught of output effectively against any strategic objective, or (c) our 'control' decisions will be obsolete by the time they are implemented. This ends my point on control, or, better said, on our lack of control of our improbably intelligent technological partners.

Which leads me to originality.

Generally speaking, we mangle the concept of originality and totally misuse it with regard to technology's output. How do we mangle the concept? Everyone should just assume that nothing is original, that almost everything is a derivative of something that already exists, and that what really matters is what you do with what exists and whether it can be used in an original way. But, with regard to technology, Norbert Wiener pointed to the misconception that technology's output will never be original because it depends on the information that has been input. If we apply the truer definition of originality, the one I just pointed out about using what exists in imaginative ways, technology can certainly meet that definition. In addition, it can reconfigure data at a faster pace than humans can. I state that to suggest technology can offer originality faster than we can. Ponder that. Maybe the time has arrived to acknowledge a bit more liveliness and creativity, if not originality, in our technologies. And maybe if we do so, we people can become a bit less mechanical. I say that because maybe, just maybe, if we concede a bit of originality to technology and stop thinking of ourselves in machine-like terms, we may begin to view people as having the ability to evolve in tandem with technology.

Ponder that because there are absolutely some good consequences as well as some bad consequences.

Which leads me to the improbable.

It seems we have entered an era in which we have become significantly more accepting of the improbable. That doesn't mean that in the past, and I mean centuries in the past, people's beliefs didn't dance around the edges of some improbable things. But typically, traditional thinking, let's call it reason-based thinking, was grounded in some basic societal agreements within which the improbable stood out distinctly enough that the majority of people, well, recognized it as improbable. Technology has changed that. With all the information bludgeoning us, the reality is that the improbable has, perception-wise, gained a degree of the possible. It's not that everybody agrees it's probable; they just don't completely dismiss it. And technology appears to be the megaphone of the improbable (intelligent and unintelligent). In the cacophony of information available to everyone there will always be a voice shouting that some fact makes all other facts irrelevant, that you should be paying attention to this 'fact', and if the source has some semblance of credibility, it starts edging its way into the possibility zone. I believe it was H. L. Mencken who said there is no idea so stupid that you can't find a professor who will believe it. That has a grain of truth in today's world. Seemingly, experts mingle with ordinary people in accepting some fairly improbable things despite having access to more good information than ever before. Technology elevates the improbable voice while also making some significant truths and thoughts invisible, and therefore less relevant.

Which leads me to improbable intelligence becoming an authority.

Since the dawn of time people have believed in some authority. The only thing that has changed is what that authority is, and in today's world it has become a revolving wheel from which we pick and choose the authority we want to believe. And while much of early authority rested on dubious intelligence (royalty and religion), science pricked that improbable intelligence with probable intelligence. And that is where technology has assumed some authority: by dismantling the entire authority system to such an extent that we can't discern whom to believe and, therefore, what to believe. The world we live in is fairly incomprehensible to most of us, and discerning the improbable from the probable, with some certainty, is beyond most cognitive abilities. Technology's improbable intelligence plays a significant role in this, in that almost no fact, actual or imagined, surprises us for long, because what counts as an unacceptable contradiction of reality has become blurred. Technology has encouraged us to believe there is no reason not to believe, in anything. Technology's improbable intelligence has taken on the role of probable intelligence. I cannot remember who shared this metaphor, but let me share it.

Prior to the technological world we live in today (which certainly had technology in the industrial sense), life for most of us was like an unopened deck of cards. You could open the deck and, flipping the cards over one by one, be relatively confident of what the next card was going to be. What today's technology world has done is shuffle the deck, so that while every single one of us is familiar with the deck and with the cards within it, seeing a card no longer gives us any confidence about what the next card will be when it's flipped over.

Anyway.

I imagine my point today is to say that arguing whether AI/technology is intelligent or not, whether it is original or not, seems like wasted energy. While we argue, it assumes the role of the expert in original thought and reason and, well, intelligent dialogue. It has already outpaced even the best minds in society. So maybe we should accept the fact that it is quite probably intelligent, and quite probably going to run over many of the societal good things we embrace, and start being a bit intelligent ourselves in solving what is quickly becoming an existential societal issue. Ponder.

Written by Bruce