Artificial intelligence is transforming our cities, but at what cost? As public spaces become increasingly digitised, we risk losing the human connections that bind us. Elif Davutoğlu explores how AI is reshaping public life — and suggests measures we can take to preserve our shared spaces
AI is changing urban lives in many ways, from managing traffic to powering surveillance cameras and chatbots. But beyond these visible uses, AI is also quietly reshaping something much deeper: the way we interact with one another in public spaces.
Public spaces have always been places for people from different backgrounds to meet, interact, and sometimes build solidarity. Parks, buses, libraries, and plazas are not just functional; they are where we form social trust. If AI changes how we use these spaces, it can also change how we coexist.
AI makes many daily activities more efficient, but also more private. Instead of walking to a local store, we order online. Rather than take the bus, we might use an AI-powered ride-share app. We no longer ask strangers for directions; we follow our phones.
Each of these choices makes life more convenient, but each also reduces the chance of running into other people, especially those who are not like us. This kind of contact, even if brief or uncomfortable, helps build tolerance and empathy. When fewer people share public spaces, our sense of community weakens.
Sociologist Richard Sennett calls this the 'fall of public man': he argues that we are slowly losing the skills and habits needed to live with strangers. AI may be speeding up that process.
Many cities now invest in 'smart' infrastructure such as traffic sensors, facial recognition systems, and predictive policing tools. These technologies promise safety and efficiency, but they also raise concerns about exclusion, surveillance, and fairness.
In Toronto, for example, Alphabet’s Sidewalk Labs planned to build a 'smart' district full of AI systems. Local residents pushed back: they feared that private companies would collect too much data and control public services without proper accountability. The company eventually cancelled the project in 2020.
When public spaces are filled with invisible systems that track our behaviour, people may no longer feel safe or equal. Minority communities in particular are often more heavily policed or surveilled by these systems. This damages trust not only in the technology but also in the fairness of public life.
Ruha Benjamin, a scholar of race and technology, warns that many AI systems simply automate old forms of inequality, a pattern she calls the 'New Jim Code'. If public space becomes more hostile or exclusionary, it’s no longer truly public.
Public spaces are not just physical. We also interact in digital spaces: forums, social media, and news platforms. But AI shapes these, too.
Recommendation algorithms shape what we see, whom we follow, and what news reaches us. This often produces echo chambers in which people are exposed only to views with which they already agree. The kind of open, respectful disagreement that democracy needs becomes harder to find.
The public sphere, a concept developed by philosopher Jürgen Habermas, is where citizens debate and form collective opinions. If AI systems distort this process, democratic conversation suffers. And when we can’t agree on basic facts or speak across differences, solidarity is hard to build.
The Sidewalk Labs case is not just a cautionary tale about data and privacy; it also reveals something deeper about our collective hopes and disappointments in the digital age. Despite mounting evidence of harm or exclusion, we continue to invest in the idea that more technology (and particularly AI) will solve complex social problems and improve public life.
This enduring faith reflects what theorist Lauren Berlant calls 'cruel optimism': an attachment to ideals or systems that, while promising improvement, often inhibit the very flourishing they are meant to support. In the context of AI, this optimism takes the form of promises of safety, convenience, and efficiency that obscure the quieter losses: the erosion of spontaneous social encounters, the normalisation of surveillance, and the retreat of democratic control from public life.
Berlant’s concept invites us to consider not only what AI does, but what it displaces, and how the search for optimised urban life may come at the cost of precisely the values that make city life meaningful: unpredictability, mutual recognition, and shared vulnerability. When we allow algorithmic systems to reshape public space without critical reflection, we risk trading the vitality of public life for the appearance of order.
AI is transforming public spaces in ways that extend far beyond convenience and efficiency. As these technologies continue to evolve, we must ask difficult questions about what kind of public life we want to preserve, and whether our attachments to digital solutions are hindering the democratic ideals we seek to uphold.
Rather than accepting AI’s integration into public space as inevitable or neutral, we need to confront the emotional and political investments we’ve placed in its promise. Only then can we begin to rebuild trust — not just in our technologies, but in each other.