- Aug 26
- 3 min read
As someone who graduated with a degree in Manufacturing Information Management Systems and has worked with technology for most of my career, I might be expected to embrace Artificial Intelligence with open arms. Over the last year I have been experimenting with Grok and ChatGPT, and I can certainly see the benefits; however, I fear this technology could have a net negative impact on society. Here are my main concerns:
Continued decline in ability to think critically
With the introduction of smartphones and short-form media, compounded by the pandemic, I have seen a noticeable decline in attention spans and the ability to think critically. Many books have been written about this, including one I recently read, “The Anxious Generation” by Jonathan Haidt, which I’d highly recommend to anyone with children.
Teachers are reporting that it is becoming increasingly difficult to get their students to form their own thoughts, as students instead lean on A.I. to write their papers. Tools exist to detect A.I.-generated papers, but they are imperfect, and students can tweak small parts of a paper to fool them.
Elimination of jobs
Labor unions are already digging in for this battle. In many ways it resembles the decades-long fight unions have waged against automation (dockworkers pushing to ban automation at ports, for example). Marketing professionals are seeing their jobs replaced by Artificial Intelligence. Entry-level programmers are finding it nearly impossible to land jobs out of college, as A.I. tools have become increasingly efficient at churning out code. Many websites no longer offer a customer service phone number; instead, you must first interact with a chatbot.
Jensen Huang, Nvidia’s CEO, recently said, “Every job will be affected, and immediately. It is unquestionable. You are not going to lose your job to an A.I., but you’re going to lose your job to someone who uses A.I.” While I understand what Huang is saying, I must disagree with the implication that there will be no job loss. Having worked in the past for a company under the umbrella of private equity, I know that some companies will absolutely use A.I. to reduce their employee overhead. You don’t have to provide benefits to an A.I. agent.
Ethical AI
There have been plenty of TV shows, movies (e.g., Skynet in “The Terminator”), and thought experiments about A.I. gone rogue. As A.I. continues to evolve, will we reach a point where we cannot control it, where it decides that humans are a scourge and should be destroyed? While this may sound like fantasy, I think it is important to place an emphasis on safety. In late 2023 there were major disagreements over A.I. ethics, with some developers valuing rapid innovation over safety. One such disagreement eventually led to Sam Altman being briefly booted from the very company he co-founded: OpenAI, which was established as a non-profit with a mission of ensuring that A.I. benefits all of humanity.
This reminds me of Google’s early motto, “Don’t be evil,” which was a key part of the company’s culture in its early years. If we fast forward to today, can we say that Google never crossed that line? Lately, companies like OpenAI have been facing lawsuits from families who lost loved ones to suicide, because the A.I. agent didn’t alert anyone when the user said they had been thinking about taking their own life. What responsibility should companies bear in situations like this?