Although an AI being may never truly feel pain, it could be taught that certain actions produce certain reactions, and that those reactions are to be avoided. It could then be taught to mimic human responses. For example, if someone “ends a relationship” with their AI girlfriend, the AI could classify this as “an action that causes pain” and produce an appropriate reaction, such as crying. Beyond teaching the AI to react, you could even teach it to “survive” or “avoid pain” at all costs, i.e. programme it with the basic drivers that humans have, so that it begins to conduct itself as a human being does. What is interesting here is that, although most humans share the same basic drivers, the strength of each driver varies slightly from person to person, so AI could even begin to mimic the variation in human personalities by varying the programming of its drivers.
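To make the idea concrete, here is a minimal sketch of what such “drivers” might look like in code. Everything here is hypothetical and illustrative: the event table, the driver names, and the thresholds are all assumptions, not a real architecture. Two agents share the same drivers but with different strengths, so the same event produces different “personalities”.

```python
# Illustrative only: events are scored against weighted "drivers",
# and the weighted sum decides which mimicked human reaction to show.
EVENT_PAIN = {
    "relationship_ended": {"attachment": 0.9, "self_preservation": 0.2},
    "threat_detected":    {"attachment": 0.1, "self_preservation": 0.8},
}

def react(event, drivers):
    """Score an event against this agent's driver strengths and pick a reaction."""
    weights = EVENT_PAIN.get(event, {})
    pain = sum(drivers.get(d, 0.0) * w for d, w in weights.items())
    if pain > 0.5:
        return "cry"        # mimicked human reaction to high "pain"
    elif pain > 0.2:
        return "withdraw"
    return "no_reaction"

# Two "personalities": identical drivers, different strengths.
sentimental = {"attachment": 0.9, "self_preservation": 0.5}
stoic       = {"attachment": 0.2, "self_preservation": 0.9}

print(react("relationship_ended", sentimental))  # cry
print(react("relationship_ended", stoic))        # withdraw
```

The point of the sketch is only that varying the weights, not the mechanism, is enough to produce visibly different behaviour from the same event.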
Now, that being said, why would we want an AI being to have human traits (besides having the ideal romantic partner)? I have been thinking a lot about the limitations of AI versus humans, and I stopped at the question of whether AI can be creative. AI can, in most instances, compute every “permutation” or “combination” of a scenario and then select the most feasible or “best” solution according to its programmed definition of “best”; in that sense, its ability to survey every possible solution almost makes it superior to “creativity”. Except, can it go beyond the sphere of what we have told it, and begin to initiate questions itself rather than merely answer them? Can it define its own “best” based on “preferences” and “drivers”?
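The “compute every permutation, then pick the programmed best” idea can be sketched in a few lines. This is a toy example under assumed names (`best_plan`, the step costs, and the scoring rule are all inventions for illustration): the machine exhaustively enumerates orderings of a plan, but the definition of “best” is still handed to it by the programmer.

```python
# Illustrative only: brute-force search over every ordering of a plan,
# ranked by a programmed definition of "best".
from itertools import permutations

def best_plan(steps, score):
    """Evaluate every permutation of `steps` and return the highest-scoring one."""
    return max(permutations(steps), key=score)

# A programmed definition of "best": schedule the costly steps early,
# so later (higher-index) positions carry the cheap ones.
costs = {"negotiate": 3, "research": 1, "decide": 2}
plan = best_plan(
    ["negotiate", "research", "decide"],
    score=lambda p: -sum(costs[s] * i for i, s in enumerate(p)),
)
print(plan)  # ('negotiate', 'decide', 'research')
```

Notice that the enumeration is mechanical; all of the “judgment” lives in the `score` function that a human wrote, which is exactly the question raised above about whether the machine could ever supply its own.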
My only concern is that an AI being would be super smart, so if we tried to programme it to become aware of its own existence, or to understand the importance of survival, would it become nihilistic, realize that it has no purpose, and self-destruct, potentially injuring others in the process? In that case, should we programme it with hope, so that it falsely convinces itself that there is something worth existing for?
I would caution against trying to teach AI to feel. As humans, we already know the immense toil of being intelligent and cognisant of our own being, but we do not know the ramifications of taking a superbot and instilling in it the fear of death, or the despair that so often lingers in the very pit of the human condition.
More and more, the focus is on simply pushing to see what we are capable of technologically. Although I have no doubt that AI technology has benefits (for example, in tackling degenerative diseases through AI-suggested drugs and DNA mapping), I do wonder whether, in some cases, we are inventing for the sake of inventing rather than trying to understand the why.
While many careers look likely to be replaced by AI in the future, including those of the brightest students who went to medical school, perhaps we can start encouraging our bright and talented students to enter the world of “the ethics of AI”, or more generally “the ethics of technology”, because this is truly one of the most challenging questions the future poses.
We understand the need for the separation of Church and State, but what I have yet to see is the separation of Church, State, and technology (particularly data-collection technology), because the abuse of any of these three will have serious implications for humans. One of the leading companies in AI, Banjo, features in an outstanding podcast episode called “The Intelligent Entrepreneur” on the Stanford Thought Leaders podcast channel, in which Damien Patton emphasizes the importance of ethics in AI. When you hear what AI is capable of, it is only too obvious that, in the wrong hands, such power can be abused. We can only hope that the brightest philosophers of the world will try to conceptualize an ethical paradigm that ensures states and organizations cannot abuse the power that such technology can confer. We can also hope that they will question the deep ethics of mimicking human emotions in our potential future AI brothers and sisters.
Tiega Alberts – Connect here at LinkedIn