The Ups and Downs of Using AI

Everywhere you turn these days, Artificial Intelligence, or AI as it is commonly known, is in the news. Whether it is an article about how AI will soon take over the world or how it will free many of us from routine tasks we don't want to do, it's hard to escape the discussion about the risks and opportunities of AI.

AI has actually been around for many decades but has only recently taken off with the advent of large language models (e.g., the models behind ChatGPT), enabled by the computing power now available and by access to the massive amounts of data needed to train the AI models.

Many of you may have tried AI through a version of ChatGPT or experimented with other programs such as Google Bard. Even if you think you have not tried AI, you probably use software, such as spell and grammar checkers and Google search, that relies on AI to improve accuracy and results.

While AI is being integrated into a wide variety of software programs, I wanted to touch on some of the ways our clients are using ChatGPT and the risks that come with using AI. Common uses you might try include asking it to draft a pitch to a podcast about an idea you have, a cover letter to accompany samples of your product, or a script to use when talking to employees about the virtues of being on time. Others ask it to draft slides explaining a concept they want to present or share with a potential customer, and some have even used it to write a simple contract with a customer. While the material instantly prepared might not be exactly what you want, it will likely be better than you expected and can be a good starting point that you then refine to better fit your business.

The risk of using AI for such purposes relates to the general tendency to assume that what it creates is accurate. This is because the usual red flags that appear when you read something inaccurate are missing. For example, if you get a phishing email and spot a typo or grammatical error, it raises a red flag, and you'll look more carefully at the message before clicking on anything (hopefully!). Or, if you ask someone at your company to write something up, you'll review it carefully because you won't assume from the start that it is completely accurate.

However, with work product created by AI such as ChatGPT, those red flags are missing, and people tend to assume that computers are accurate. The red flags are absent because the output looks great: error-free, well organized, and polished. ChatGPT is programmed to try to give you what you want, but it is not bound by any ethical or moral guidelines and has no built-in sense of making sure what it provides is accurate or correct. In contrast, someone at your office helping you draft something is, hopefully, guided by ethical and moral principles and is genuinely trying to be accurate.

This is what leads AI to provide wildly incorrect responses, usually referred to as "hallucinations." A famous example arose in a recent lawsuit where an attorney representing one side asked AI to draft a legal brief, and the program literally invented citations to legal cases that did not exist. The citations looked completely accurate and legitimate but were in fact entirely made up. The key takeaway here is to review and check everything created by AI for accuracy before using it.

One alternative is to use a program marketed as "hallucination free." These programs address the problem by restricting the universe of possible responses to a defined set of documents or parameters, or by imposing other programming restrictions that prevent, or at least reduce, the chances of the AI creating answers out of nothing with no factual basis to back them up.

Finally, it is important to note that publicly available AI programs, such as ChatGPT, are not necessarily confidential. Uploading confidential or trade secret information does not mean it will remain confidential, since that information may be used to train the AI model. My recommendation is that you avoid uploading anything you would not want a competitor to see or anything whose disclosure would breach a duty of confidentiality you are under (such as a Nondisclosure Agreement).

We here at Scherer Smith & Kenny LLP remain available to address any questions you may have related to your business.  For additional information, please contact Brandon Smith at bds@sfcounsel.com.

-Written by Brandon Smith