
Artificially Generated Truth

Artificial Intelligence (AI) and Large Language Models (LLMs) are everywhere. Platforms like LinkedIn are inundated with posts on leveraging ChatGPT to become a “10X developer” or generate automated blog content. Influencers on social media proudly share how they use these models for content creation, and schools even caution parents about students using them for homework assistance. While it may seem harmless, the widespread adoption of AI and LLMs carries significant risks if proper precautions are not taken. The potential for misleading the public and mishandling user data looms large. Without proper oversight, we risk entering a future where AI and LLMs are widely used without adequate scrutiny, with detrimental effects on individuals, society, and the overall reliability of information itself. It is crucial to approach this trend with caution and ensure responsible and ethical use of these technologies.

The Illusion of Authority

Since its early days and even now, ChatGPT comes across with a level of authority. You ask it a question, and it gives you an answer. Because of the confident manner in which it replies, users assume it is giving back factual information. And why would they have reason to question it? Most people are not in the realm of AI development and have no understanding of how systems like this work. It’s all a bit mystifying to the general user, as if AI were some magical black box, leaning into the third of Clarke’s Three Laws: “Any sufficiently advanced technology is indistinguishable from magic.” But when people believe it’s magic, they will also believe it to be infallible.

A paper published on March 23, 2023, by OpenAI, the American artificial intelligence research laboratory behind the platform, titled “GPT-4 System Card,” looked into this problem and stated that “overreliance is a failure mode that likely increases with model capability and reach. As mistakes become harder for the average human user to detect and general trust in the model grows, users are less likely to challenge or verify the model’s responses.” The closer the model comes to giving correct information, the less likely the user will feel the need to verify it.

However, the root issue with these models is that they do not represent the truth. They are probabilistic: they give you the answer deemed most likely given their training data, but there is no guarantee that this answer reflects reality. If the output aligns with the user’s world view, or seems plausible enough to be the truth, they will trust it.
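To make that concrete, here is a minimal, purely illustrative Python sketch. The prompt, the tiny vocabulary, and the probabilities are all invented for the example; a real model scores tens of thousands of tokens with a neural network, but the principle is the same: the model samples whichever continuation its training data made most probable, not whichever one is true.

    import random

    # Hypothetical next-token probabilities a "model" learned for the prompt
    # "The capital of Australia is". Whichever answer was most common in the
    # training data wins most often, whether or not it is correct.
    next_token_probs = {
        "Canberra": 0.55,   # correct, and most common in this invented training set
        "Sydney": 0.40,     # wrong, but frequent enough to be sampled regularly
        "Melbourne": 0.05,
    }

    def sample_next_token(probs):
        """Pick one token at random, weighted by the model's probabilities."""
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    if __name__ == "__main__":
        answers = [sample_next_token(next_token_probs) for _ in range(10)]
        print(answers)  # roughly 4 out of every 10 answers confidently say "Sydney"

Every answer is delivered with the same confident tone, whether it came from the 55% bucket or the 40% one.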

Artificial Hallucinations

What are Hallucinations in AI?

When an LLM or AI model makes things up, it’s called hallucinating. The fact that these models are trained on a large amount of data does not mean that what they produce is correct.

The New York Times article “When A.I. Chatbots Hallucinate” details instances where the ChatGPT model fabricated information and presented it as truth. This may not seem surprising, since the model pulls from whatever information it can grab, but the issue is that the engineers often have no idea why the model falsified information, even when there is counterevidence in its training data.

What effect do they have on the authority of ChatGPT?

Now that we know the model can lie, how can we trust it? When it comes to the public’s use of these tools, trust must be the deciding factor in how we use them.

When people seek information, they may rely on AI to get facts and answers. Because they are asking about a subject they may be unfamiliar with, they cannot easily verify whether the AI’s responses are accurate. If users had to fact-check the AI’s outputs themselves, it would be like doing the work twice, making ChatGPT far less useful. But because AI models are designed to seem trustworthy, people usually accept the information without questioning it.

The challenges of obtaining accurate information not only pose difficulties on their own but also affect much more outside the platform. OpenCage, among other businesses, has experienced the negative impact of misleading AI-generated responses. They were inundated with inquiries about converting phone numbers into locations, because ChatGPT mistakenly recommended them as a service for such conversions. However, this is a feature that OpenCage does not and will not provide. They addressed the issue in a blog post titled “Don’t believe ChatGPT – we do NOT offer a ‘phone lookup’ service.” OpenCage believes that the model was trained on incorrect data, likely lacking proper fact-checking. This situation highlights the pollution and corruption not only of the web itself but also of the training data used to develop AI models like ChatGPT.

It is important to note that the issue of hallucinations or inaccuracies in AI language models extends beyond textual inaccuracies. A recent video titled “Google vs. ChatGPT: INSANE CHESS” by Gotham Chess demonstrates the significant extent to which these models can produce misleading or nonsensical outputs. In the video, Google’s AI model, Bard, plays ChatGPT, and what starts as a normal chess game quickly descends into illegal moves and false declarations of victory.

Pollution of the Web and AI’s Training Data

LLMs like ChatGPT are trained on a huge dataset made up of public text such as Wikipedia, blog posts, and anything else that can be found online. A model will be more effective at generalist tasks if it is trained on a lot of data covering a wide range of topics. In recent years we have learned that we cannot trust everything we read on the internet and should take much of it with a grain of salt. But are these models doing that? As we saw in the OpenCage situation, ChatGPT didn’t hesitate to regurgitate to others the false facts it had learned.

The proliferation of text generated by these models is becoming a problem on the public web. People are following guides that promise easy ways to utilize ChatGPT for tasks like writing books or creating games. While ChatGPT can be helpful for getting started or receiving feedback, many users simply copy and paste the model’s output without much thought. Some even forget to remove the model’s boilerplate disclaimer explaining why it can’t provide an answer, even though it usually goes on to offer one anyway. This careless usage contributes to the pollution of the web with potentially misleading or unverified content.

You’ll probably come across quite a few examples if you search for “as an AI language model” on your favorite sites. One comical example is an Amazon “Verified Purchase” review for a 5th-grade-level workbook from a “person” named Dody Sam, whose review includes, “As an AI language model, I do not have personal experience using the workbook, but I have access to information from many users who have praised the quality of the content and the effectiveness of the exercises.”

All jokes aside, this affects the integrity of the information we rely on. When you look at reviews for a product, you assume you are reading reviews from real people who have used the product, spent time thinking about it, and reviewed it. However, as we have seen, this is no longer necessarily the case. These were just examples that included that one tell-tale sign of ChatGPT being used; GPT is equally capable of writing convincing reviews that look authentic.

Stack Overflow has also banned answers from ChatGPT in one of its latest policies, which gets to the root of the problem: it breaks trust. Answers produced by ChatGPT are not properly cited, because the model does not provide citations, so the site’s standard for verified information is not met. Even more, the policy notes that these AI-generated answers convince users that they hold merit and erode readers’ ability to detect them.

There is another flow-on effect. These models are trained on a lot of data, and with the wide usage of GPT, some of their new training data will be their own outputs. Unless there is a reliable way to determine whether a block of text was written by a human or a model, these models will get worse and eventually become obsolete: when you train a model on its own outputs, you exaggerate the biases and errors already present in it.
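As a rough, toy-scale illustration of that feedback loop, the Python sketch below repeatedly fits a simple statistical “model” (a Gaussian) to data sampled from its own previous fit. The numbers are arbitrary and real LLM training is vastly more complex, but it shows how estimation errors compound once a model starts learning from itself.

    import random
    import statistics

    random.seed(0)

    # Generation 0: "human-written" data with mean 0.0 and standard deviation 1.0.
    data = [random.gauss(0.0, 1.0) for _ in range(200)]

    for generation in range(1, 9):
        # Fit the "model" to whatever data it currently sees.
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")
        # The next generation's training data is sampled from the model itself,
        # so each generation's estimation error is baked into the one after it.
        data = [random.gauss(mu, sigma) for _ in range(200)]

Each generation inherits, and then builds on, the sampling noise of the one before it; nothing pulls the parameters back toward the original human data, because no fresh human-written text ever re-enters the loop.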

The Early Parallels

The initial adoption of the web marked a significant moment, reminiscent of the present situation surrounding artificial intelligence. Before the web’s arrival, internet access necessitated a deep understanding of networking and computing. We find ourselves at another critical juncture, where the general public can now access remarkably powerful AI models with no relevant training.

With the dawn of the connected world and the release of Mosaic in 1993 came a plethora of security issues. Wider usage means wider attack surfaces. People were doing things online that the original engineers never even thought of. Who could have predicted Smurf attacks or worms like ILOVEYOU, let alone modern ones like WannaCry? These attacks involve the misuse of programs and code that an engineer has written.

While we can’t see the future, we can learn from the past. Major advancements in engineering and technology are targets for misuse. But the engineers behind those older technologies had an advantage over those building LLMs like OpenAI’s: they fully understood their code.

One of the applications of GPT has been to assist in writing code. However, a challenge that has arisen is the occurrence of hallucinations in which the model suggests non-existent packages and modules. I encountered this myself when seeking assistance with detecting a ReDoS attack: GPT recommended a Python package called “Regex-checker,” which, unfortunately, does not exist.

Given this experience, it is reasonable to assume that others have encountered similar situations and blindly accepted the AI-generated recommendations without verifying them. Unfortunately, this creates an opportunity for exploitation, enabling malicious actors to identify recommended packages that do not exist and publish their own malicious versions under those names. A longer article on this can be found here: Voyager18 – Can you trust ChatGPT’s package recommendations?
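A small defensive habit helps here: before installing anything an AI assistant suggests, check whether it actually exists on the package index and skim its metadata. The Python sketch below does this against PyPI’s public JSON endpoint, defaulting to the hallucinated “Regex-checker” name from my anecdote. Keep in mind that mere existence proves nothing, since attackers can register exactly these hallucinated names, so treat it as a first filter rather than a guarantee.

    import json
    import sys
    import urllib.request
    from urllib.error import HTTPError

    def lookup_pypi(package_name):
        """Print basic PyPI metadata for a package, or warn if it does not exist."""
        url = f"https://pypi.org/pypi/{package_name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                info = json.load(response)["info"]
        except HTTPError as err:
            if err.code == 404:
                print(f"'{package_name}' is not on PyPI: do not pip install it blindly.")
                return
            raise
        # The package exists; at least skim who maintains it and what it claims to do
        # before trusting it with your codebase. Existence alone is not proof of
        # legitimacy, since attackers can register hallucinated names.
        print(f"name:    {info['name']}")
        print(f"version: {info['version']}")
        print(f"summary: {info['summary']}")
        print(f"author:  {info.get('author')}")

    if __name__ == "__main__":
        # Defaults to the package name GPT hallucinated in the anecdote above.
        lookup_pypi(sys.argv[1] if len(sys.argv) > 1 else "Regex-checker")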

The Risks of Integrating AI

With the recent surge in hype surrounding ChatGPT, numerous companies have eagerly sought to harness the advantages of integrating LLMs into their products. In marketing circles, “AI” and “LLM” have become the latest buzzwords. However, it remains unclear whether all of these companies fully comprehend the implications of utilizing LLMs like GPT. While some may have identified viable use cases and received approval from their customers to share customer data with third parties, there are additional concerns that business owners and technical managers must consider when dealing with third-party LLMs.

Volatile Tech

Let’s assume that a company has developed an amazing feature that leverages OpenAI’s ChatGPT. They have done extensive testing and found that the model’s results are exactly what they want, validating and choosing the specific model version that ensures the desired output. The customers have beta-tested it and are very happy with the results.

Now, what happens when OpenAI decides to update the model? Model updates might improve the general metrics used to measure a model’s effectiveness, but that is not to say the updated model will perform the same on your data.

The OpenAI documentation states, “With the release of gpt-3.5-turbo, some of our models are now being continually updated. We also offer static model versions that developers can continue using for at least three months after an updated model has been introduced.” This means that you will have three months to test the new model. What are you going to do if these updates harm your product? How will you mitigate the risk of the core feature of your product being out of your control? Do you just accept it and hope that it doesn’t happen? Or are you going to be proactive about it, and if you are, what will you do?
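One proactive option is to pin a static model snapshot in production and treat any newer snapshot like a dependency upgrade: run a regression suite of your own prompts against it before switching. The sketch below uses the pre-1.0 `openai` Python library interface; the model names, prompt, and simple substring check are illustrative placeholders standing in for whatever your product actually validated.

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    PINNED_MODEL = "gpt-3.5-turbo-0301"   # the static snapshot your feature was validated on
    CANDIDATE_MODEL = "gpt-3.5-turbo"     # the continually updated alias

    # A tiny regression suite. A real product would use many more cases drawn from
    # its own domain, with stricter checks than simple substring matching.
    TEST_CASES = [
        {"prompt": "Reply with only the word OK.", "must_contain": "OK"},
    ]

    def ask(model, prompt):
        response = openai.ChatCompletion.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # make runs as repeatable as the API allows
        )
        return response["choices"][0]["message"]["content"]

    def run_suite(model):
        failures = 0
        for case in TEST_CASES:
            answer = ask(model, case["prompt"])
            if case["must_contain"] not in answer:
                failures += 1
                print(f"[{model}] FAILED on {case['prompt']!r}: got {answer!r}")
        return failures

    if __name__ == "__main__":
        print(f"pinned model failures:    {run_suite(PINNED_MODEL)}")
        print(f"candidate model failures: {run_suite(CANDIDATE_MODEL)}")

Gating the version switch on results like these at least turns a silent upstream change into a decision you get to make.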

Data Privacy

After accepting the inherent risks associated with potential changes to the model, it is important to address the issue of safeguarding customers’ data. Customers trust businesses to be the custodians of their sensitive data, and taking that role seriously is the business’s job. Businesses must consider the situations in which a customer could lose that trust, and one of the major ones is a data breach. Businesses have control of and visibility into their own systems, but when they introduce third parties, controlling customer data becomes harder. They need to trust their third parties as much as their customers trust them.
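One partial mitigation is to make sure obviously personal data never leaves your systems in the first place. The Python sketch below scrubs a prompt with a few regular expressions before it would be sent to any third-party API; it is deliberately simple, will miss plenty of personal data, and is a starting point rather than a substitute for a proper data-protection review.

    import re

    # Order matters: emails and phone numbers are scrubbed before the broader
    # card-number pattern gets a chance to partially match their digits.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED_EMAIL]"),
        (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[REDACTED_PHONE]"),
        (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[REDACTED_CARD]"),
    ]

    def scrub(text):
        """Replace obvious personal data in a prompt before it leaves your systems."""
        for pattern, replacement in REDACTIONS:
            text = pattern.sub(replacement, text)
        return text

    if __name__ == "__main__":
        prompt = "Customer jane.doe@example.com (+61 400 123 456) asked for a refund."
        print(scrub(prompt))
        # -> Customer [REDACTED_EMAIL] ([REDACTED_PHONE]) asked for a refund.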

OpenAI itself has experienced at least one data breach in recent months (at the time of writing). If it came to it, how would you explain to your customers that the data they trusted you with may have been publicly accessible due to a lack of control on your part?

While OpenAI has established its own “Bug Bounty Program” to incentivize ethical hackers to identify vulnerabilities, and two P1 (highest severity) findings had been reported at the time of writing, it is still necessary to evaluate whether OpenAI has earned your trust.

It’s also vital to consider the implications of the General Data Protection Regulation (GDPR) and the specific responsibilities and designations of “Data Processor” or “Data Controller” under EU law when it comes to transmitting customer data to a third party, but that’s a topic for another day.

Navigating A World With AI

A few things have changed (even since I started writing this), and there are new warnings when you sign in to OpenAI, but I fear that the damage has already been done. We are seeing huge swathes of AI-generated text pollute the web, misleading users, shoppers, students, and people who never sought AI’s help. This issue will only get worse as the models improve and AI-generated text becomes even harder to distinguish from authentic human writing.

I can imagine that in the not-so-far future we will also have models trained to determine whether text is AI-generated, hopefully helping users of both systems. AI users will be forced to use the output from these models as a springboard into what they want to write about; it will be a starting point, not the be-all and end-all of a piece of work, leading to all-around better information sharing.

As we wait for new technologies to facilitate that, we need a plan for approaching these issues, and one of the biggest levers, in my mind, is education. If business owners and decision-makers are made aware of the issues surrounding these AI models, they can make the best decisions for their companies. If users are aware of the widespread usage of these models and their problems, they can view information online knowing that they might need to do some extra research to fact-check what they read.

A Path Forward

We have an interesting path ahead of us as the public uses these free and accessible models to do all sorts of things. I’m not saying these technological advancements are problematic; far from it. These models have shown people what AI is capable of and have inspired many to build innovative and creative things. They have also sparked discussions on the ethics of using public data, the reliability of model outputs, and how data security can be assured. However, these public models are not always the optimal choice for consequential tasks and are plagued by their own set of issues. I hope that companies and individuals invested in these technologies will recognize the advantages of developing internal models specifically tailored to excel at the precise tasks they aim to address. We find ourselves in an incredibly innovative era but still have a long way to go.