THE GROWING USE OF AI: The Benefits and Risks Employers Should Consider.

Advancements in technology, and in computer systems in particular, have been key drivers of workplace productivity and a vital reason society has advanced so much in the past few decades. Artificial intelligence (AI) is one of the newest developments in this area. Many people have preconceived notions of what AI is but have yet to learn how it works or how it is used in practice. In this article, we explain what AI is, how it works, how it can be used in the workplace, and the dangers of using it, focusing on the benefits and risks AI poses to employers.

WHAT IS AI AND HOW DOES IT WORK?

IBM defines AI as technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.[1] This technology has recently been used to create programs that generate dialogue in response to prompts. One such example is ChatGPT, an AI chatbot that uses machine learning algorithms to process and analyze large amounts of data.[2] ChatGPT was created and released in 2022 by OpenAI, a U.S. company headquartered in San Francisco, California. OpenAI was initially founded as a nonprofit but restructured into a “capped-profit” company in 2019, with the original nonprofit entity controlling the new for-profit subsidiary.[3] OpenAI describes itself as an “AI research and development company” with the mission to ensure that “artificial general intelligence benefits all of humanity.”[4]

ChatGPT allows users to hold a conversation with the program and tailors its responses to the detail of the prompts the user provides, drawing on the enormous body of data on which it was trained. Its answers sound almost human, and its tone can be changed on request. According to OpenAI’s website, ChatGPT can now even analyze and understand images, create its own images from a prompt, understand spoken prompts, and respond with a voice of its own.[5]

HOW CAN CHATGPT OR OTHER AI CHATBOTS BE USED IN THE WORKPLACE?

Because ChatGPT draws on such an expansive body of data, it can be used in almost any way imaginable. On the most basic level, employees can use ChatGPT to write reports, analyze data, fix code, draft articles, summarize documents, reply to customers, and draft press releases or speeches, among many other tasks. The possibilities with ChatGPT and other AI chatbots are seemingly endless. Some companies have even incorporated similar AI chatbots into their websites to analyze and respond to customer service requests (e.g., the “chatbox” attached to many cell phone and internet provider websites). More advanced examples include stock traders and asset managers using AI to analyze and predict market trends and company data and to make recommendations based on those predictions. In the workplace, ChatGPT can do all of these things more quickly and efficiently than humans, thereby greatly improving productivity. With such wide application, what is the downside or risk of using this technology?
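To make these workplace uses concrete, the short sketch below shows how a company might call ChatGPT programmatically to summarize a document. It is a minimal illustration, not a recommendation: it assumes OpenAI’s Python client library, an API key stored in the environment, and a hypothetical model name and file name; an employer’s actual setup will differ.

    from openai import OpenAI

    # Assumes the OpenAI Python client library is installed and the
    # OPENAI_API_KEY environment variable holds a valid key.
    client = OpenAI()

    # Hypothetical file name, used purely for illustration.
    with open("quarterly_report.txt", "r", encoding="utf-8") as f:
        document_text = f.read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; use whatever model your plan provides
        messages=[
            {"role": "system",
             "content": "You are an assistant that writes concise business summaries."},
            {"role": "user",
             "content": "Summarize the following report in three bullet points:\n\n" + document_text},
        ],
    )

    print(response.choices[0].message.content)

A few lines of this kind can be wired into an internal tool or workflow, which is one reason employees may already be using AI whether or not the employer has formally sanctioned it.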

RISKS ASSOCIATED WITH AI AND HOW TO COMBAT THEM:

While ChatGPT and other AI technologies have seemingly infinite uses that may greatly improve productivity and efficiency, they also carry significant risks and issues that can affect society as a whole and, more specifically, employers whose employees are freely using AI.

One of the most prominent opponents of unchecked and unsafeguarded AI is the businessman and billionaire Elon Musk. Musk takes issue with the fact that AI, in his view, is on pace to surpass human intelligence in the near future.[6] He believes certain safeguards need to be in place to prevent that outcome, namely open-sourcing AI so that no one person or corporation controls it and tying the bots closely to humans so that AI is “an extension of the will of individuals, rather than systems that could go rogue and develop their own goals and intentions.”[7] Musk, an original founder and backer of OpenAI, recently sued OpenAI in a San Francisco court, alleging that it breached its founding agreements by pursuing profits.[8]

In addition to the fundamental and thought-provoking issues raised by Musk, AI poses specific risks to employers and employees. First, and perhaps most important, is the risk of bias. AI (as of now) is programmed and maintained by humans. If those controlling an AI system wanted to, they could train it on skewed data or design its algorithms to incorporate bias or push an agenda. These chatbots also often present their statements as though they were facts. Without an understanding of such bias, AI can push a biased agenda and cloak it as fact.

Another primary concern is privacy. AI technologies generally rely on large amounts of data, including personal data, to operate correctly. A breach, leak, sale, or other transmittal (voluntary or involuntary) of the data used by AI or given to AI in prompts could greatly harm individuals and corporations, especially if the information provided to such technologies is unregulated and unchecked. For this reason, employers should adopt their own guidelines and procedures to prevent the uncontrolled relay of their own or their clients’ information to AI technologies, such as the screening step sketched below.
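As one illustration of such a procedure, the following sketch screens a prompt for obvious personal identifiers before it leaves the company’s systems. It is a minimal, hypothetical example: the patterns, labels, and function name are our own assumptions, and a real program would cover far more identifier types and would likely rely on dedicated data-loss-prevention tooling rather than a short script.

    import re

    # Illustrative patterns only; a real policy would also cover names, account
    # numbers, addresses, and other identifiers, likely via a dedicated DLP tool.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")
    SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def redact(prompt: str) -> str:
        # Replace obvious personal identifiers before the prompt is sent to an AI service.
        for pattern, label in ((EMAIL, "[EMAIL]"), (SSN, "[SSN]"), (PHONE, "[PHONE]")):
            prompt = pattern.sub(label, prompt)
        return prompt

    if __name__ == "__main__":
        raw = "Summarize this complaint from jane.doe@example.com; call her back at (555) 123-4567."
        print(redact(raw))
        # Prints: Summarize this complaint from [EMAIL]; call her back at [PHONE].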

Finally, plagiarism, misinformation, and other flawed sources of information present significant issues for AI. AI uses large amounts of data to generate the information asked of it. As with bias, if false information is fed to the technology, the entire work product of the AI tool could be flawed. A recent case out of the United States District Court for the Southern District of New York proved this fear to be well-founded. In Roberto Mata v. Avianca, Inc., Case No. 22-cv-1461 (PKC), attorneys who submitted a brief written by ChatGPT were sanctioned.[9] The brief included citations to opinions and cases that did not exist and even included fake quotes from those non-existent cases and opinions.[10] The presiding judge issued the attorneys a $5,000 fine because they failed to fulfill their gatekeeping role and ensure the accuracy of their filings.[11] When ChatGPT presents false, misleading, or non-existent information, citations, or quotes as fact, it can mislead employees and cause them to submit, file, or represent false information that harms their employer legally, financially, and reputationally.

WHAT CAN EMPLOYERS DO?

Considering the above risks, how can employers protect themselves? The seemingly simple and most obvious answer is to ban employees from using AI. However, this is probably not the answer, because your employees are likely already using AI and may continue to use it. Further, AI technologies like ChatGPT have real benefits. They can increase productivity in many ways (e.g., reviewing large amounts of data quickly, producing in-depth summaries of long documents, using past data to predict future outcomes), which can save employers time and money. For these reasons, a clear set of guidelines and specific training on how employees are permitted to use AI can both help prevent damage caused by AI and shield employers against third parties or clients who may claim the company harmed them through its use of AI.


[1] https://www.ibm.com/topics/artificial-intelligence 

[2] https://uca.edu/cetal/chat-gpt/

[3] https://openai.com/our-structure

[4] https://openai.com/about

[5] https://openai.com/chatgpt

[6] https://time.com/6310076/elon-musk-ai-walter-isaacson-biography/

[7] Id.

[8] https://www.courthousenews.com/elon-musk-sues-openai-over-ai-threat/

[9] https://www.cnbc.com/2023/06/22/judge-sanctions-lawyers-whose-ai-written-filing-contained-fake-citations.html

[10] Id.

[11] Id.