How agentic AI could destroy social media: the need for proactive governance
If organisations using agentic and generative AI don’t codify ethics and oversight now, the future may be filled with AI agents using generative AI to communicate with other agents, destroying trust in social media
I’ve seen this before. In 2000, I built a chatbot that could talk to other bots across networks. We called it a ‘botnet’ when numerous bots would carry out coordinated actions across one or more networks. Imagine a message in one chatroom triggering a response in another chatroom or even on a different server, long before Application Programming Interfaces (APIs) or workflow automation tools such as Hootsuite, Zapier, or N8N existed. You set a rule: a command, an event trigger, or an auto-response for a certain type of incoming message, and the system would act.
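In today’s terms, that early bot logic was nothing more than a rule table: match an incoming message, act somewhere else. Here is a minimal sketch of the idea in Python, with made-up channel names and a stand-in send() function rather than any real network library:

# Toy version of the rule-and-trigger logic described above.
# Channel names and send() are illustrative stand-ins, not a real API.
RULES = [
    # (trigger predicate, target channel, response builder)
    (lambda msg: msg.startswith("!status"), "#ops", lambda msg: "All nodes responding."),
    (lambda msg: "release" in msg.lower(), "#announcements", lambda msg: "Relay: " + msg),
]

def send(channel, text):
    # Stand-in for whatever network call the original bots made.
    print("[" + channel + "] " + text)

def handle_message(msg):
    # Check each rule; if its trigger matches, act in the target channel.
    for trigger, channel, respond in RULES:
        if trigger(msg):
            send(channel, respond(msg))

handle_message("!status please")
handle_message("New release is out")

The point of the sketch is how little intelligence was needed for machines to react to machines; agentic AI swaps the hand-written rule table for a model’s own judgment, which is exactly why the governance question matters more now.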
Twenty-five years later, agentic AI is built on that same premise, only at cloud scale. Systems now monitor events, make judgments, and take actions with little human supervision. If we don’t govern that layer properly, we risk turning the public Internet into a closed circuit of machines talking to machines.
From text to action
Where generative AI writes, agentic AI executes. It can draft, post, reply, buy, schedule, and optimise without waiting for approval. McKinsey’s 2025 survey found that half of enterprises are already piloting autonomous workflows. But governance is lagging behind adoption. KPMG’s global study on trust in AI found that nearly three-quarters of people are unsure what content online is genuine. The illusion of engagement is a threat to real communication. When organisations can profit using agents instead of humans, we are heading towards an economy that devalues authenticity and human input.
We are drifting into an absurd situation that very few organisations seem to be thinking about. An AI agent can generate a social media post. Other AI agents can amplify it. More agents can comment on it, react to it, and optimise it. A call to action gets clicked, a meeting is booked, and the person who joins that call is greeted by an AI sales agent. In some cases, the system booking the call may also be an AI.
From the outside, everything looks positive. Engagement is up. Activity is up. Productivity appears higher. It looks like value is being created. But if you slow it down, these systems are mostly just talking to themselves. Machines are generating signals for other machines, and platforms still count it as success.
That is where the problem sits. Social platforms were originally built to connect people, especially when they were far apart, to share parts of their lives and communicate as humans. Over time, they became commercialised. Advertising arrived. Influencers followed. Whatever their flaws, those models still relied on recognisable people and some form of social proof. What we are doing now is different. We are filling the same systems with automation that can simulate interaction at scale without any real human presence. When machines generate the signal, validate the signal, and monetise the signal, the basis for social proof collapses. At that point, this is no longer just about keeping a human in the loop. It is about what communication, markets, and trust turn into when authenticity is optional, and whether organisations deploying these systems have seriously thought through how they will govern them or do business in that environment.
The new trust gap
Social media was already saturated with noise made by people. Agentic AI magnifies that at cloud scale, with unprecedented speed. Brands and influencers are automating reactions and comments to tip the algorithms in their favour. Those gains erode as trust erodes. When users can’t tell whether an account, article, or comment comes from a person, it deepens existing scepticism and seeds new scepticism in people who are usually receptive to communication from others.
Persistent reliability issues in generative AI continue to strain trust. PwC highlights a widening trust gap as organisations adopt AI faster than they can govern it with appropriate transparency, accountability, and control.
They are right to be concerned. When a system takes an action, someone still owns the decision. We can’t blame robots. Without defined responsibility, even well-intentioned automation can create reputational and regulatory exposure. And for those responsible for risk and governance, this debate goes far beyond technical analysis. Compliance is necessary.
Governance as code, not just policy
Leaders cannot fix this problem with big, bold statements, nor can a privacy or information security policy do anything about the dangers of ungoverned AI. Governance must live inside the administrative, operational, and technical architecture of the solutions using AI - the whole of it. As I’ve seen across dozens of transformation projects, the organisations that succeed are the ones that write intent into the system itself. That means auditable controls, versioned prompts, human sign-off before external actions are performed, and activity logs that can stand up in court. When people ask me whether we’ll encounter unhinged, reckless AI, I tell them the same thing: people will have to decide whether the robots are given the ethics modules and strict guidelines never to do the wrong thing.
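As one illustration of what this looks like when it lives in the system rather than in a policy document, here is a minimal sketch of a human sign-off gate with an append-only audit log, assuming a hypothetical agent proposing an external action such as publishing a post. The function, field, and file names are mine for illustration, not any particular framework’s API:

import json
import time
from dataclasses import dataclass

AUDIT_LOG = "agent_audit.jsonl"  # append-only log file; illustrative location

@dataclass
class ProposedAction:
    agent_id: str
    prompt_version: str  # the versioned prompt that produced this draft
    action: str          # e.g. "publish_post"
    payload: dict

def log_event(event):
    # Every decision, allowed or blocked, is timestamped and written out.
    event["ts"] = time.time()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def execute_with_signoff(proposal, approver=None):
    # Human sign-off is the gate: no named approver, no external action.
    record = {
        "agent": proposal.agent_id,
        "action": proposal.action,
        "prompt_version": proposal.prompt_version,
    }
    if approver is None:
        record["outcome"] = "blocked"
        log_event(record)
        return False
    record["approved_by"] = approver
    record["outcome"] = "executed"
    log_event(record)
    # ...the real external call (posting, emailing, purchasing) goes here...
    return True

proposal = ProposedAction("social-agent-01", "v3.2", "publish_post", {"text": "Draft copy"})
execute_with_signoff(proposal)                      # blocked and logged: no human approved it
execute_with_signoff(proposal, approver="j.smith")  # executed and logged with the approver recorded

The detail that matters is that the prompt version and the approver travel together into the log, so a public action can later be traced back to the exact instructions that produced it and the person who signed it off.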
The EU AI Act, whose obligations are being phased in, will explicitly require documentation and traceability for high-risk AI systems. The U.S. government’s 2024 executive order on responsible AI would have done the same for federal agencies, but the new administration rescinded it. The UK’s regulator-led model and the new Data (Use and Access) Act push towards more innovation while still protecting data subjects as emphasised under GDPR. Compliance is the baseline going forward, and it will rein in unmanaged agentic AI, at least among those who intend to stay above board.
As a lawyer, I look at agentic AI behaviour the same way I look at how contractors behave under a service agreement: permitted activities must be clearly identified, provable, and reviewable. If an AI agent manages to act beyond the scope of permission, that’s not a bug. That’s a complete failure of governance. Responsible businesses would treat a contractor acting outside its scope as being in breach of contract, with the relevant governance response or remediation to follow. Moreover, if your business can’t explain how or why an AI system took a public action, you don’t control your reputation anymore.
The board’s responsibility
While many entrepreneurs I meet look at how agentic AI helps with marketing, sales, or IT, it’s very much a board-level issue, like cybersecurity or compliance. Well-governed boards define what’s permissible, set escalation protocols, and require evidence of control. Advisory board professionals with AI expertise can add value by identifying AI-specific risks. As advisors evolve their competencies to include AI, their advice becomes more valuable, because they bring independent judgment and structured review to a rapidly evolving risk surface.
Organisations that follow advisory board best practice in support of their governance boards will instil clarity on what the technology can do, what it must not do, and who answers for it. Without clear governance, any short-term gains from agentic AI come at the cost of trust and reputational control. That is the line boards must now manage.
Security and delivery discipline
Agentic AI introduces every risk executive-level security professionals are trained to manage, but it also demands executive-level operational discipline. Projects need defined ownership, change control, and closure. Every agent should have a build file, a test record, and a rollback plan. Logs should be immutable. Permissions should expire. Proving due diligence when regulators ask how your AI systems behave will be the norm.
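To make “permissions should expire” concrete, one rough sketch, using illustrative agent and scope names, is to treat each agent’s permissions as a dated grant and deny anything once the grant lapses:

from datetime import datetime, timedelta, timezone

# Illustrative permission grant: explicit scopes plus an explicit expiry date.
PERMISSIONS = {
    "social-posting-agent": {
        "scopes": {"draft_post", "schedule_post"},
        "expires": datetime.now(timezone.utc) + timedelta(days=30),
    },
}

def is_permitted(agent_id, scope):
    # Deny by default: unknown agents, out-of-scope actions, and lapsed grants all fail.
    grant = PERMISSIONS.get(agent_id)
    if grant is None or scope not in grant["scopes"]:
        return False
    return datetime.now(timezone.utc) < grant["expires"]

print(is_permitted("social-posting-agent", "publish_post"))  # False: outside the granted scope
print(is_permitted("social-posting-agent", "draft_post"))    # True, but only while the grant is live

A scheduled review that renews or revokes each grant is what keeps that list honest over time.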
The security principle remains the same: the system that can act for you can also cause others to think the system is you. That means social channels, customer communications, and internal data pipelines all require human checkpoints. Agents should assist people, not replace them.
What’s next
Regulation is catching up, albeit unevenly. The EU’s staged enforcement has already begun in 2025, with further enforcement due in 2026. The U.S. Office of Management and Budget now requires federal bodies to report all AI use cases. The UK’s AI Safety Institute is expanding its evaluation work into the private sector. And Colorado’s new AI Act, effective from the start of 2026, sets a precedent for mandatory impact assessments in consumer-facing automation. The overall signal is clear: the days of “deploy now, govern later” are ending.
The companies that adapt early will be the ones that treat agentic AI as an operational discipline. That means applying project management rigour, security architecture, and legal review before rollout. It means documenting every automated action that touches the public. And it means understanding that automation is not a quick fix: it requires care and compliance.
Continuing the culture of extending what’s possible
AI should make people faster, not invisible. The goal isn’t to eliminate human judgment; it’s to amplify it. Agentic systems are inevitable. The question is whether they will serve human intent or drown it out. Governance that lives in code, in contracts, and in culture is what determines whether agentic systems extend human capability or replace it.
