I had a front-row seat to the social media revolution in global affairs roles at Twitter and Meta. The same mistakes are happening in AI

Sean Evins is a partner at Kekst CNC and the former head of public policy at Meta.

Sean Evins. Courtesy of Kekst

I’m not a tech naysayer. Far from it. But we’re doing it again.

A new era of technology is taking off. AI is reshaping economies, industries, and governance. And just like the last time, we’re moving fast, breaking things, and building the plane while flying it (to use some common tech phrases). These mantras have driven innovation, but we’re now living with the unintended consequences.

For over a decade, I worked in the engine room of the social media revolution, starting in the U.S. government and then at Twitter and Meta. I led teams engaging with governments worldwide as they grappled with platforms they didn’t understand. At first, it was intoxicating. Technology moved faster than institutions could keep up. Then came the problems: misinformation, algorithmic bias, polarisation, political manipulation. By the time we tried to regulate it, it was too late. The platforms were too big, too embedded, too essential.

The lesson? If you wait until a technology is ubiquitous to think about safety, governance, and trust, you’ve already lost control. And yet we are on the verge of repeating the same mistakes with AI.

The new infrastructure of intelligence

For years, AI was viewed as a tech issue. Not anymore. It’s becoming the substrate for everything from energy to defence. The underlying models are getting better, deployment costs are dropping, and the stakes are rising.

The same mantras are back: build fast, launch early, scale aggressively, win the race. Only now we’re not disrupting media; we’re reinventing society’s core infrastructure.

AI isn’t just a product. It’s a public utility. It shapes how resources are allocated, how decisions are made, and how institutions function. The consequences of getting it wrong are exponentially greater than with social media.

Some risks look eerily familiar. Models trained on opaque data with no external oversight. Algorithms optimised for performance over safety. Closed systems making decisions we don’t fully understand. A global governance void, whilst capital flows faster than regulation.

And once again, the dominant narrative is: “We’ll figure it out as we go.”

We need a new playbook

The social media era’s playbook of “move fast, ask forgiveness, resist oversight” won’t work for AI. We’ve seen what happens when platforms scale faster than the institutions meant to govern them.

This time, the stakes are higher. AI systems aren’t just mediating communication. They’re starting to shape physical reality, from how energy is distributed to how infrastructure is allocated during crises.

Energy as a case study

Energy is the best example of an industry where infrastructure is destiny. It’s complex, regulated, mission-critical, and global. It’s the sector that will either enable or limit the next phase of AI.

AI racks in data centres consume 10 to 50 times more power than traditional systems. Training a single large model can require as much energy as 120 homes use in a year. And AI workloads are expected to drive a two- to three-fold increase in global data centre electricity demand by 2030.

Already, AI is being embedded in systems that optimise grids, forecast outages, and integrate renewables. But without the right oversight, we could face scenarios where AI systems prioritise industrial customers over residential areas during peak demand. Or crises in which an AI makes thousands of rapid decisions that leave entire regions without power, and no one can explain why or override the system. This is not about choosing sides. It is about designing systems that work together, safely and transparently.

Don’t repeat the past

We’re still early. We still have time to shape the systems that will govern this technology. But that window is closing, so we must act differently.

We must understand that incentive structures shape outcomes in invisible ways. If models prioritise efficiency without safeguards, we risk building systems that reinforce bias or push reliability to the edge until something breaks.

We must govern from the beginning, not the end. Regulation shouldn’t be a retroactive fix but a design principle. 

We must treat infrastructure as infrastructure. Energy, compute, and data centres must be built with long-term governance in mind, not short-term optimisation. 

We cannot rush critical systems without robust testing, red teaming and auditing. Once embedded at scale, it’s nearly impossible to reverse harmful design choices.

We must align public, private, and global actors. That alignment can be built through truly cross-sector forums such as ADIPEC, a global energy platform that brings together governments, energy companies, and technology innovators to debate the future of energy and AI.

No company or country can solve this alone. We need shared standards and interoperable systems that can evolve over time. The social media revolution showed what happens when innovation outpaces institutions. With AI, we get to choose a different path. Yes, we’ll move fast. But let’s not break the systems we depend on. Because this time, we’re not just building networks. We’re building the next foundation of the modern world.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
