Trust in AI: The Internet is at a Tipping Point

Business
James Grieco
Nov 2, 2023
5 min read

Artificial intelligence’s rise this year, sparked in part by the amazement many felt after seeing ChatGPT’s capabilities, is only the beginning of a long saga that will dominate the vast majority of technological innovations to hit the market over the next several years. Investment in AI, particularly generative AI, is at an all-time high.

Even with the investment scene buzzing over what products the next wave of AI could produce, the public is less convinced that everything will turn out okay. A July 2023 study by the MITRE Corporation found that just 39% of U.S. adults consider today’s AI technologies “safe and secure,” down a whopping nine percentage points from the same study conducted in November 2022.

We can attribute that drop to a variety of bad PR on the AI front, from lawsuits piling up over how OpenAI trained ChatGPT (spoiler: it likely did so with broad, indiscriminate data scraping) to its makers publicly obfuscating and avoiding comment on how the system works.

Why would they do something like that? It could be because they don’t completely know how the system works. In an interview with Vox this summer, prominent AI researcher Sam Bowman said, “If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second, and we just have no idea what any of it means.”

AI has also taken a starring role in the writers’ and actors’ guild strikes that have shut Hollywood down for most of the year. The studio system, in an attempt to minimize production costs amid falling streaming performance, has publicly stated that an intriguing prospect for the movie industry is reducing writing and acting jobs in favor of AI-written scripts and AI-generated background actors (plus AI de-aging and voicework to replicate movie stars).

Unsurprisingly, 67% of the public support the strikes rather than the studios.

AI is not new to our society, nor is the coming explosion of AI-powered tools a harbinger of dystopian conflict. Many uses of AI and machine learning across tech produce better products; MineOS uses AI in its system to make previously long and difficult processes, like data classification, easy enough for a single person to manage.

However, the most prominent uses of AI that people typically interact with in day-to-day life have not left a resoundingly positive impression on the public. Google ads following you around the internet? Social media controlling what you see on your news feed? The results you get from Google translate? Siri and Alexa? 

All AI-powered. 

And beyond that, what else do most of these tools have in common? Major data protection and privacy concerns.

People may not realize that AI acts as a core component of those things, but they recognize the data privacy issues. Then ChatGPT came along and reinvigorated those debates because of how revolutionary it seemed. 

Many, including cybersecurity researchers like Lukasz Olejnik, believe OpenAI violated the EU’s GDPR in the development of ChatGPT. Olejnik noted in an official complaint, “In my opinion [the violations] may be systemic … I am afraid that the industry of AI/LLM may struggle with proper risk assessments and design.”

The fact that such powerful systems enter development without extensive risk assessment and deliberate product design is frightening. While government regulations and assessment standards were not in place when ChatGPT was being developed, a company failing to do any due diligence and ignoring standard compliance practices like data minimization and privacy by design is damning for any innovator.

Governments are beginning to put guardrails in place, with the White House releasing an executive order on AI this week and the EU nearing agreement on its AI Act, but regulation cannot be the only answer to this issue.

The MITRE survey mentioned above found broad bipartisan support not only for government action on AI safety, but also for more oversight and consideration within the tech industry itself: 81% said industry should invest more in AI assurance, and 85% said they want industry to transparently share its AI assurance practices before bringing AI-equipped products to market.

Part of the reason is the privacy harm AI could cause. The survey also found significant worry among respondents that AI will make cyber attacks and identity theft easier to carry out, both of which are already on the rise.

The potential drawbacks of AI are simply too great to let everyone do as they please, but its potential benefits create a tricky dynamic: how to modulate innovation without stifling it.

That, then, is the role innovators and the tech scene itself will have to play. The novelty of ChatGPT and many AI tools has already faded, and the public has come to expect bigger and better products in just the few months since AI became the next big thing (RIP Web 3.0).

But in that rush to meet demand and keep people wanting more, the tech industry must acknowledge and address that going too far too fast will break things and push the internet further down a path very few asked for: one where monetization always trumps user experience and data privacy is brushed aside as a niche topic.

People do not trust AI because AI, and the people behind it, have not given them a definitive reason to. Data privacy is a cause everyone can get behind, yet it has received little real consideration as a way to balance new AI technologies with a safe, functioning online society.

For everyone’s sake, we need less talk about innovation and more action on responsible product development and data privacy principles.

The internet is at its tipping point. Will AI break it?