Posts

Showing posts with the label AI

We are on the cusp of AI developing traits or adapting in the same way living organisms do through evolution.

Moth wing patterns, often including structures resembling “owl eyes,” are a prime example of nature’s adaptation for survival. These eyespots are intricate patterns that evolved over millions of years through natural selection. Initially, moths developed cryptic colouration to blend into their environments and evade predators. Over time, some species developed wing scales with microstructures that reduced light reflection, helping them remain inconspicuous. These structures eventually evolved into complex arrays resembling the texture of eyes to deter predators, a phenomenon called “eyespot mimicry.” This natural error-creation adaptation likely startled or confused predators, offering those moths an advantage: precious moments to escape. The gradual development of these eye-like patterns underscores the intricate interplay between environmental pressures and biological responses, resulting in the remarkable diversity of moth wing patterns seen today…

Why I think that asking if “AI can be ethical” is the wrong question!

Many ask the question “can AI be ethical?”, which then becomes the statement “AI must be ethical!” In reality we do not tend to unpack this because it appears so logical, but maybe it is not as obvious as we would like. In May 2021 I wrote the article “What occurs when physical beings transition to information beings?”, which started to question what happens when an AI does not have the same incentives and biases as humans. It built on the idea that an #AI should not make complex decisions about wicked problems that involve compromise. There is an implicit assumption in the question “Can AI be ethical?”: that AI is either fundamentally not ethical, or is already amoral today, but that #AI must somehow become ethical and have morals (or worse, it must adopt ours). I am not sure AI cares whether it is ethical or not, but that is a different piece of thinking, which I explored in “Can AI be curious?”. We know carbon forms can be curious, but we worry about a silicon form being curious…

Can AI feel curious?

I have been pondering these topics for a while: “Can AI have feelings?” “Should AI have emotion?” What would it mean for AI to be curious? I posted “Can a dog feel disappointment?”, exploring our attachment to the projection of feelings. I have also written an executive brief on how a “board should frame AI” here. The majority of the debates and arguments I read and hear centre either on creating the algorithms for the machine to know what we know, or on getting the data into a form that allows the machine to learn from us. A key point in all the debates is that we (humanity) should control AI and that it should look like us; the framing of a general rule for emotional AI is that it mimics us. However, I want to come at AI feelings from a different perspective, based on my own experience: one where AI creates feelings by its own existence. I am on several neurodiverse scales; this means my mind is wired differently, and I am so pleased it is. My unique wiring gives me the edge in innovation…

Pathways to General AI, the unimagined

The unimagined. Your mind will already have assumed, from its title, that this chapter presents the unimagined outcomes, a counterpoint to overly optimistic viewpoints. It is reasonable that anyone reading a book on Pathways to General AI will have interpreted the word “unimagined” in that way. Before you skip this chapter because you are a believer that AI is the future, or because you don’t need another comfort-blanket chapter speaking to those fearful of AI: this chapter is neither. It is not a wild fantasy of the possible, nor is it a negative chapter on unknown risks and a likely machine-run apocalypse. This chapter presents a matrix that enables holders of either viewpoint to debate and discuss the unimagined together. However, to reach the matrix we first have to create a framework, which gives the reader a clear and coherent communication tool to position imagined AI initiatives. THE SIMPLEST ECONOMIC MODEL. Figure 1 presents a straightforward process. Raw materials…

In the context of AI, can a dog feel disappointment?

This strange question needs to be unpacked, and to confirm: no animal was hurt in the writing! This post is NOT addressing whether animals feel emotions. Anyone who has kept anything from a mouse to an elephant can easily answer that question; animals do present what humans interpret as feelings and emotions. “Can a dog feel disappointment?” is the wrong question! If the question is “can a dog feel doggy-disappointment?”, surely the answer is yes? What is doggy-disappointment? We don’t know, as we are unable to determine the gap between what the dog thought it was getting and what actually happened. Why is this important? What do we really mean when we ask “can a dog feel disappointment?” Is it: can a dog process the same feelings and emotions as a human, as we understand disappointment? We project onto the dog what we think and understand, without knowing what the dog does understand. Given that emotions are chemistry/biology, and our chemistry/biology is very different…

As an executive, investor or board member, how should we interpret, position and understand AI?

Image source: https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd and updates https://futureoflife.org/national-international-ai-strategies/

The purpose of this article is to provide a framework that gives you a clear and coherent communication tool to assess or position any AI initiative. The positioning will sit on a continuum: AI will either save the human race from extinction or be our nemesis, causing our very eradication. The framework is built on an explanation of how value is created, using an economic model that provides for both stability (BAU) and change (disruption). Critically, it explores how the choice is made between the stable and change models. Through exploring stability, change and choice, we unpack how the outcome of an AI project can lead to growth and prosperity or to dystopia and destruction. The simplest economic model: Figure 1 presents a very simple process. Raw materials are an input to a process called gain, which creates…
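The economic model the excerpt begins to describe (raw materials flowing into a process called gain, which creates output of greater value) can be caricatured in a few lines of Python. This is a minimal sketch only; the function name, the multiplier, and the numbers are illustrative assumptions, not the article's own formalism or figures.

```python
# Sketch of the "simplest economic model": raw materials are an input to a
# process called gain, which creates output worth more than the input.
# The 1.5x value-added multiplier is purely illustrative.

def gain(raw_materials: float, value_added: float = 1.5) -> float:
    """Transform raw-material input into higher-value output."""
    return raw_materials * value_added

output = gain(100.0)  # 100 units of raw material in
print(output)         # 150.0 units of value out
```

The point of the model, as the article frames it, is not the arithmetic but that the same input-process-output loop can be tuned either for stability (BAU) or for change (disruption).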