Artificial Intelligence – AI – has been attracting intense media attention these days… not only for all the benefits the technology may bring us, but also because it fundamentally questions our very existence: what is a human being, after all?
We are forced to face this uncomfortable question when developing AI technology, and in trying to answer it, we have come to realize that it is not as simple as it seemed… We encounter numerous dilemmas:
- Can AI “think” like humans? – But what does it mean for humans to “think” in the first place?
- Will AI ever have a “mind” like humans? – But what is a “mind”?
- Will AI ever have “free will” like humans? – But do we really have “free will” anyway?
And the list goes on… Why are these questions important? Because the answers determine the role AI will play in the future, including its legal responsibility in case of an accident (e.g. an autonomous vehicle) and the writing of new regulations and laws so that AI does not become a threat to our society. But today, I’m not going into debates on these controversial questions. Instead, I’d like to share my random thoughts (as usual…) on the issue, focusing on the prelude to these questions by challenging an assumption we make whenever we talk about AI.
When we raise the mere topic of AI, the very fact that we call it AI – artificial intelligence – already reveals the assumption that human intelligence and artificial intelligence are two completely different things.
It’s pretty obvious, right?
However, it means that even before going into a debate about whether AI can “think”, or has a “mind” or “free will”, there is an underlying intention: we want them (AI and humans) to be different. In other words, no matter where the debate may lead us, AI and humans will always be different. It is not the factors (the ability to think, a mind, free will, etc.) that differentiate AI from humans; rather, it is the labels we apply – how we recognize and name them – that make them different.
AI and humans: it’s not that they ARE different, but that they OUGHT TO be different.
Let’s consider the impact of labeling with dogs and cats. Dogs and cats are different animals (even a five-year-old knows that!), but they are different because we label them differently. To you, all stones are just stones, but to a stone specialist, some stones are not “just stones” and have specific names. Once we recognize things differently, we label them differently, and therefore we understand them as different objects.
Coming back to dogs and cats: we are so used to the idea of dogs and cats as different animals that, if we focus on their similarities, it’s quite surprising we can tell them apart at all; they both have four legs, walk in much the same way, and so on. There are cats that look like dogs, but they are still cats, and vice versa. Yet as long as we label them separately, they are not the same.
Which is why I think that, as long as we continue to use the term AI – and therefore recognize it as separate from humans – AI can never replace humans.
So some debates on AI seem a bit pointless (especially those trying to prove how humans differ from AI by claiming that AI can never have emotions or a mind like humans do), because AI and humans ARE different in nature as long as we label them separately.
The power of labeling is immense: it shapes our understanding of the world, creates assumptions unconsciously, and determines our thinking… AI might never have “free will”, but do we? Really?