Are Smart Cities a Good Idea?

James Grieco
Feb 14, 2024
4 min read
Data privacy is a hot-button topic because it affects everyone who uses the internet, yet despite that ubiquity, few people are perpetually conscious of just how much of their data is out in the world. With the AI boom set to follow the hype of ChatGPT, data privacy problems are only going to get bigger and more worrying for society.

ChatGPT has had its own share of privacy problems, from its complete lack of transparency about how the system works to the fact that OpenAI likely trained it on as much data as it could scrape from the internet, all without receiving consent from anyone. Unsurprisingly, the lawsuits have followed in force.

And intellectual property infringement is just the beginning: after intense scrutiny, the Italian Data Protection Authority banned ChatGPT, alleging that it violates data privacy laws such as the EU’s GDPR. A primary issue Italy has taken with the LLM is that users cannot easily send data subject access requests to OpenAI to have their data deleted from the system.

Giving people rights over their own data is a core tenet of any data privacy law, and we already see some AI-powered systems neglecting those rights in favor of producing stronger and stronger computational outputs. 

And all of that is just happening online. If these risks move into the physical world, the public will have no remedy to stop them.

Enter the concept of “smart cities.” A smart city is one where traditional networks and services are infused with technology, much of it AI-powered in some way or another, to make them more efficient for consumers.

There are ample reasons to support smart cities. Some cities, such as Singapore and Oslo, have already put these ideas into effect on a contained scale, with results showing benefits to the environment, the economy, and residents’ quality of life.

All of that is well and good. However, those results are playing out without the full futuristic implementation many believe we are headed toward to revamp urban life. Civil engineering, backed by an unprecedented wave of AI technologies in the coming decade, could very well take society to a point that once existed only in science fiction.

That idea of a “smart city” is a fundamental risk to people’s privacy.

The central challenge, then, is designing these systems with privacy in mind, embracing data protection principles so they operate with as little risk to individuals as possible.

But when imagining a smart city at scale, it is difficult to see how privacy by design and data minimization, core concepts that the most responsible and forward-thinking companies use today to make their products safer, can work within a system that will, for lack of a better word, monitor hundreds of thousands if not millions of people.
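To make the idea concrete, here is a minimal, hypothetical sketch of what data minimization might look like for a single smart-city sensor: a pedestrian counter that keeps only aggregate counts per time window and purges them after an assumed retention period, rather than storing raw footage or any per-person identifier. The sensor, field names, and 30-day retention policy are illustrative assumptions, not a description of any real deployment.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical illustration of data minimization: the sensor reports only
# an aggregate count per interval; raw frames and identities are never stored.

@dataclass
class AggregateReading:
    sensor_id: str          # which intersection, not who passed through it
    window_start: datetime  # start of the counting interval
    pedestrian_count: int   # the only figure retained

RETENTION = timedelta(days=30)  # assumed policy: purge readings after 30 days

def ingest_window(sensor_id: str, detections: int,
                  window_start: datetime) -> AggregateReading:
    """Turn a window's worth of detections into a single aggregate reading.
    Raw frames are assumed to be discarded on the device before this point."""
    return AggregateReading(sensor_id, window_start, detections)

def purge_expired(readings: list[AggregateReading],
                  now: datetime) -> list[AggregateReading]:
    """Storage limitation: keep only readings inside the retention window."""
    return [r for r in readings if now - r.window_start < RETENTION]
```

Whether a scheme like this can deliver the insight a smart city actually needs, without quietly expanding into richer and more identifying data, is precisely the open question.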

As ChatGPT and its first-wave competitors out of Google and Meta have shown, technological advancement on this scale needs untold amounts of data to work. Some of the more sinister implementations include facial recognition (along with other biometric markers) and social scoring or analysis systems.

The EU’s AI Act, set to become the world’s first comprehensive AI regulation when it passes in the coming months, takes a risk-based approach, requiring stricter controls and oversight depending on how an AI system is deployed. At the top of the risk pyramid sit social scoring and facial recognition technologies.

These are technological overreaches we already identify as problematic, which is why companies like Rite Aid are being banned from using the technology today.

And yet, much of what will conceivably power the smart cities of tomorrow will need some version of this technology to deliver the massive gains society expects, and perhaps even wants.

More than anything, the key to these technologies will be finding the balance between data consumption and implementation. But the history of innovation has routinely steamrolled privacy, never giving it a second thought until public backlash forces fixes for problems and harms that should have been avoided altogether.

We may be several years away from full-blown smart cities, but if we fail to consider and act on these values now, we are setting society and technology up for an unhealthy, exploitative relationship. The potential for mass surveillance, systemic bias, and damaging data breaches outweighs the potential benefits until we know AI and data privacy can coexist without conflict.

People might want AI to revolutionize the world, but what we want is not always what’s best for us. If we blindly rush toward AI innovation, not just online but in the real world, the repercussions will be disastrous, and privacy will forever take a backseat to convenience and security.