Artificial Intelligence (AI) refers to non-human, machine intelligence: machines that are as intelligent as people. What happens when these machines become more intelligent than we are is an unanswerable question … until it is.
Experts tell us that AI falls into two categories. One is good for humanity. The other is bad for humanity. There may be a third possibility. More on that chilling possibility below. Perhaps ‘chilling’ is not the right word. ‘Horror’ may be better. Decide for yourself.

The First Possibility

AI will be good for humanity. It will do things we cannot, and some things better than we can. Pretty much anything a computer can do, AI will do better, because AI is a computer. A powerful one, but still a computer. To be more precise, AI is a program, or system of programs, running on computer hardware. Experts who hold the ‘good’ opinion tell us that AI will be programmed to reflect our morals, but will those morals still be active a hundred generations from now? Just how many years, days, hours, or seconds will represent a hundred thousand of our human generations? This raises the question: how long will an AI’s morals last?

The Second Possibility

AI may be downright evil in its relationship with humanity because it will be so much smarter than humans. It will make decisions faster and implement them more quickly, with fewer mistakes. And the consequences? While we suffer consequences, an AI won’t. We require money, food, housing, clothing, a career; an AI requires none of these. It has no family, no friends, no coworkers. Our society keeps people’s impulses in check with a host of internal and external controls. But what will keep an AI’s ‘impulses’ in check? We should say ‘decisions’ rather than ‘impulses,’ because an AI won’t have impulses, only decisions made with insufficient data or processing. Imagine your smartphone (no pun intended) being smarter than you are. What do you do when it buys things without your permission, or invests your money and loses every penny? Or when it won’t allow your car to start? Ruins your credit? Kills your spouse?

The Third Possibility

AI can be good or bad, but what is humanity to do should AI view us as raw material? The human brain’s neocortex is a parallel-processing entity par excellence. We think with it. Reason with it. We are who we are because of our neocortex. Should AI decide that our neocortex is a resource to be exploited, people will be good for only two things: supplying the necessities that keep humanity alive and making more people.

Note: I created this concept for a short story I’m working on, but the more I thought about it the more it nagged at me, especially since many experts in the field are fearful of AI’s effects on humanity. These machines will be sentient (defined by Merriam-Webster as “capable of sensing or feeling : conscious of or responsive to the sensations of seeing, hearing, feeling, tasting, or smelling”). As such, I’ve come to believe this fear is justified.

Comments? I’d love to know what you think about this, as it will affect all of our lives forever. And let’s all hope and pray that this remains in the realm of science fiction.