Beware the 'thought police': The dangers of human and AI integration

Humans could see numerous cognitive and medical enhancements as AI integration evolves, but the tech industry must take a close look at the potential negative consequences.

Humans have always strived to become smarter, faster and more creative through the use of tools. From the basic calculator to the amazing computing devices we use today, our use of technology is advancing at breathtaking speed. We are no longer limited by our geographic and social boundaries, nor are we held back by the limited data in our heads, the local library or the knowledge and intellect of local elders.

Once we are able to read memories and write information to the human brain, the opportunities will be endless. There is a distinct possibility that we may be able to address dementia, mental illness and other conditions through the selective retention and deletion of data and the use of artificial intelligence (AI) computing. Imagine if we could remove the damaging childhood memories that cause mental illness, or turn off delusional thinking and replace it with positive, formulaic thoughts. Imagine if we could delete or replace the images of war that produce post-traumatic stress disorder. What if we could implant the equivalent of a college education in a matter of minutes, and guarantee retention because AI enhances the knowledge by combining the implanted information with experiential learning?

There is little doubt that AI integration with humans through implanted electronics promises to bring enhancements in thought processing, information gathering and creative cognitive abilities. Implants will restore the use of limbs, speech and cognition for those who are injured or were born with a defect.

But these AI technologies have a darker side: If we no longer need to remember facts, calculate, analyze data in our own minds or apply critical thinking to our everyday but still important decisions, we could eventually lose the ability to think critically and solve problems when we really need to.

Artificial intelligence has already shown the ability to calculate and correlate facts in ways that we as humans find difficult, free of intuition, judgment and subjectivity. But at what point does the technology begin to decide what you should think? Is there a line where AI becomes influential and goes beyond just presenting facts? Will self-learning AI learn to modify its own results to sway human decisions? Elon Musk has said that he is very concerned about certain types of AI: "Not all AI futures are benign. Not all," he said, adding, "There are certainly some types of AI that are not good for the future." I could not agree more. In the wrong hands, used for the wrong purposes, AI can certainly bring about serious problems.

We must be cognizant of where human and AI integration takes us. When technology is involved, it can fail and be unpredictable. Let's look at some potential scenarios that could be massively destructive:

1. The ultimate loss of thought privacy. Nothing in our human existence is more vital to independence than the privacy of our thoughts. If the capability to read memories and active thought processes comes to fruition -- a requirement of AI integration -- I am confident there will be ways to intrude upon those thoughts. Imagine for a second the potential damage if others could truly read your mind. To learn what someone has done, what they have seen and what they are planning would become the holy grail of legal discovery. If you want to imagine what it will be like when the privacy layer of our minds is broken, read about how Miss Teen USA Cassidy Wolf had her bedroom webcam hacked for a year. The possibilities are truly horrifying.

2. Unwanted modification of memory. If AI is able to think, which is ultimately what it is designed to do, it could overwrite, modify or delete information it decides is no longer needed. What if the technology could be set to trigger when you have certain thoughts? Would the thought police we speak of become a reality?

3. Hackers in your head. Hackers are a daily problem for our computing systems. Imagine if a hacker could get into your head and inject or control thoughts. You could literally be held hostage by a person or system. What if they could decide what you remember, or what you think you remember? Or cause you to make a decision you have no control over? If hackers can control or even see what you think, it is a game-changer for human existence.

4. Critical controls and deadly results. The Department of Homeland Security announced earlier this year that hackers can control implantable devices. It is already known that there are thousands of vulnerabilities in these devices and that accessing them could cause deaths. We don't need to make predictions about this risk -- it is already a reality.

5. Uniqueness of knowledge and personality. What makes us all different is in large part what we learn and how we think. The things that make some people feel ill or scared make others excited and energized. Our unique experiences and choices make us different individuals. While it might be fun or beneficial on some levels, if we reach a point where memories, experience and education can be shared, or even stolen from one person and given to another, the human experience will change forever.

Much has been written about the good that may come from AI, but outside of science fiction I have not seen much about the potentially nasty results. Keep in mind that many of our technological advances have been very accurately predicted in our science fiction entertainment. From space travel and flying cars to the cell phone and even a watch that you talk to, we've had a glimpse of the future.

Once we cross the chasm to true physical integration of AI and computing technology into the human mind, we cannot fully predict the immense negative potential, but we can try. In fact, we must try, and then act on what we find. We cannot afford to let the law and social constructs ignore these issues or respond too slowly. The potential impact on human life is just too great to wait for reactive social dialogue and legislative action.

Thankfully, we have not passed the point of no return. If we act now, we can slow the negative impacts of AI integration, focusing on what we can foresee and working to limit the damage as we watch what promises to be an amazing next phase of human advancement. Ignorance and denial that the changes are coming are the true danger. Humans will generally defend their existence when they understand a threat, and to understand this one, we need a digitally literate society that does not allow the secrecy of science to hide the truth.

Join the conversation

What are some other potential benefits and ramifications of human thoughts and abilities being enhanced by AI technologies?

Kevin, your article is thoughtfully written and I share many of your concerns. From where I stand, humans are simply not ready for a direct neural connection between man and machine. The same can be said for in vitro modification of humans. While both technologies could be a great boon to humankind, we lack consistent global frameworks, laws and oversight. When it comes to the invasive bio-engineering you are discussing, "just because we can doesn't mean we should."
Kevin --
I agree that using artificial intelligence in humans has the potential for dangerous effects. I have a few thoughts to add to your well-written article.

1) It could also cause the brain to stop functioning, or to destroy or no longer grow (plasticity) brain tissue in areas the brain no longer uses on its own. The drive to improve and grow would no longer be needed. Over time the human brain may deteriorate from lack of full use while the AI becomes stronger and stronger in people's bodies and brains.

2) Artificial intelligence could also become preferred over normal human intelligence, so artificially enhanced brains could be favored by some over unaltered humans -- AI effectively taking over the human race, with unaltered people discriminated against. AI is not actual "human" intelligence, so being "artificial" would be preferred over being a real person.

3) A person's drive and moral, ethical and spiritual framework could also be affected. Our strengths and weaknesses are a vital part of what makes each person unique and life interesting. If everyone had AI, our society could become less compassionate and merciful; survival of the AI-fittest might be the outcome.

I agree that bio-engineering ethics need to be explored, because the consequences could in fact destroy humankind.