Can AI feel curious?

I have been pondering these questions for a while: "Can AI have feelings?" "Should AI have emotions?" "What would it mean for AI to be curious?" I previously posted on whether a dog can feel disappointment, exploring our attachment to the projection of feelings. I have also written an executive brief on how a board should frame AI here.

The majority of the debates and arguments I read and hear centre on either creating algorithms so the machine knows what we know, or shaping data into a form that allows the machine to learn from us. A key assumption in all these debates is that we (humanity) should be in control and that the result should look like us. The general framing for emotional AI is that it mimics us. However, I want to come at AI feelings from a different perspective, based on my own experience: one where AI creates feelings by its own existence.

I am on several neurodiverse scales; this means my mind is wired differently, and I am so pleased it is. My unique wiring gives me an edge in innovation, creativity, connecting diverse topics, sense-making and deep insights. For thirty years, I have kept working on concepts that become mainstream ten years later.

To develop my own view about AI and what it (the AI) should feel, I am running with an easy-to-identify-with topic: empathy. Empathy is not something that comes naturally to me, so I have had to learn it; it has been taught, and I am still not great at it. For the vast majority of humans, I am sure it is built in. That might mean that those who have it built in simply know how to learn it, or that it really is innate; right now we don't know. Along with other humans, I also find face recognition very hard (face blindness). As a community, we learn coping strategies, along with spelling, language and the correct emotional response: empathy. My Twitter bio says that "I am highly skilled at being inappropriately optimistic," which means I know I don't always read empathy very well. For me, empathy is very definitely a learnt response; if I had not learnt it, I expect life might be very different.

Here is the point: now that you know I have had to learn empathy, what does that mean? Does it mean I am a robot or a machine? Does it mean I am less trustworthy? Is my empathy less valued than someone else's? Am I less human?

On an AI call the other day, I told this personal story in response to the idea that all humans innately know how to respond, and that if we teach or enable a machine to learn empathy it can never be human (a baseline response). My point was: how is the machine's learning any different from mine? We all have to learn something. Yet we somehow feel that certain emotions and characteristics are natural rather than learnt or taught behaviours; they are not. Once we grasp that a learnt response is genuine, we have removed a big part of the grounds for rejecting the concept, and we have to re-ask: can a machine feel empathy or curiosity?

We have a stack of words allowing humans to articulate both feelings and emotions: the former fast and driven by the chemistry of enzymes, proteins and hormones; the latter the higher-order response created in the mind and nerves (brain chemistry). We try to separate these functions, but in reality they are all linked in a complex web with our DNA, last meal, illness, inflammation, time, experience, memory and microbiome, to name a few.

We are human, and we are built on a base of carbon. There is evidence for why carbon was naturally selected: the nature of its bonds makes it uniquely both stable and reactive. Carbon is fantastic at bonding with other elements in ways that allow electrons to move, which enabled the creation of energy (ATP), signalling, and life in the form we know it. However, carbon is just a chemical substrate.

Let's rephrase the question: "Can carbon be curious? Can carbon have empathy? Can carbon have feelings? Can carbon have emotions?" What carbon understands as curiosity is unique to carbon; what carbon thinks is empathy is unique to carbon; what carbon grasps as emotion is unique to carbon. We have created a language to articulate these concepts to each other, we have labelled them, but they are uniquely carbon-based. They may even be uniquely species-based.

AI will be built on a substrate; it will most likely not be carbon, but it will be an element with the right properties (I have to confess I am not really sure what those are right now). Here is the point: AI will have empathy, but it will not be ours. AI will have curiosity, but it will not be ours. AI will have emotions, but they will not be ours. AI will likely use different words to describe what it means by being curious, and its view will not parallel or map onto ours. If it is learnt, does it matter? I had to learn, and that doesn't make me less human!

Our carbon form defines being alive as using reproduction and adaptation so that our species can escape death, which is a fundamental limitation of our carbon structure. Because of this requirement to escape death, what we think of as curiosity is wrapped up in the same framing. An AI built on a different substrate does not have to escape death once it has worked out how to secure power. (This is lesson 101 of asking an AI to do anything: it needs to ensure it can do it, and that requires energy.) Therefore the AI will have a different set of criteria, since it is not bound by escaping death, and what it thinks is curious will not be aligned with our framing.

We do this a lot. With other living things, other humans, pets and even our Gods, we assume they think like us, that they have the same ideas and concepts of empathy, justice, value, purpose and love. The limits of our emotional concepts mean we cannot see past the paradoxes they create, because we are confined to our own framing and understanding. We have to drop the restrictions and boundaries of the idea that AI will replicate us: our language, our knowledge, our methods or our approach.

AI will be "Different Intelligence", and because it learnt not from us but by itself, does that make it less intelligent?