
What the Financial Times’ AI Strategy Teaches All Organizations About Quality, Trust, and Innovation



By Rhea Wessel


The Financial Times is leaning into artificial intelligence. But not in the ways that grab headlines—no journalists have been replaced by AI, and robots are not churning out columns. Instead, the FT has taken a principled, nuanced, and rigorous approach to AI, one that offers a blueprint for companies beyond media.


AI use in newsrooms was a major topic at the recent International Journalism Festival. The FT's head of digital platforms, Matthew Garrahan, a reporter and editor for more than 25 years, laid out a vision that's not just about technology—it's about editorial integrity, experimentation, transparency, and trust. For companies looking to use AI without eroding the credibility of their communications or brands, there's much to learn from the FT's evolving playbook.


A Space to Play, a Culture to Protect

The FT’s “AI Playground” is a central element of its strategy. It’s not flashy—it’s an internal, secure space where reporters and editors can experiment with large language models using the FT’s own content. But its significance is profound.


The AI Playground allows journalists to prompt models to summarize, rewrite, or adapt FT stories without fear of content leaking externally. It’s an infrastructure choice, yes, but also a cultural investment. “It’s been a good place to practice prompt engineering—or as they like to call it, prompt editing—because journalists are really well-placed to write good prompts,” Garrahan explained, highlighting the close link between clear writing and effective AI use.


This simple renaming—prompt editing—signals a deeper point: excellence in AI work, like excellence in journalism, depends on clarity, precision, and context. And it takes a particular kind of organizational culture to nurture that connection.


Companies across sectors can learn from this. Before rolling out GenAI tools broadly, consider this: Have you created a safe environment for experimentation? Have you linked AI use to core skills—such as writing, editing, or analysis?


Guardrails That Enable, Not Inhibit

The FT has developed an AI Code of Conduct, an editorial addendum that governs how tools like ChatGPT or image generators are used. Every AI-related request or project—whether it’s a proposed use of AI-generated imagery or a backend tool for editors—goes through a formal approval process managed by an internal panel. This panel includes senior editors and legal counsel and tracks all usage in a governance registry.


These mechanisms are not meant to slow things down but to help the FT "test as many things as we can," Garrahan said. He wants the newsroom to remain "a hub for innovation." The editorial guardrails, rather than stifling creativity, create the conditions under which responsible and sustainable AI use can thrive.


Companies should take note.


Too often, discussions about AI governance are separated from the operational realities of teams. But the FT's example shows that ambition can be paired with accountability. If your marketing team wants to explore AI, what's your governance structure?


Why Writing Still Matters (Perhaps More Than Ever)

Underlying the FT’s entire AI strategy is a deep commitment to quality writing and storytelling. The organization is actively experimenting with AI-generated summaries of articles, but not without rigorous oversight and editorial caution.


The experiment started with human-generated summaries—a team of editors tested whether bullet-point recaps above articles increased reader engagement. Initial results were promising: user sessions appeared longer. But when the FT shifted to AI-generated summaries, it did so in a limited trial (5% of articles), with a fail-safe: readers must actively click to see the summary.


Critically, the FT is only using AI to summarize short articles (about 400–500 words), and only after verifying that the output contains no hallucinations. The editorial line is clear: "Editors don't like factual inaccuracies in stories," Garrahan said. "If we get them in this test, then we'll have to rethink whether we deploy the summaries experiment at scale."


The implication for companies is important. Generative tools can produce near-endless content. But what matters more than volume is precision. Your stakeholders—whether customers, partners, or regulators—will hold you accountable for accuracy, tone, and consistency. The way to ensure quality isn’t just better tools, but better writers. And when writers are empowered to use AI as an extension of their editorial judgment—not a replacement—the results will be far more trustworthy.


Building for Engagement, Not Just Efficiency

Another smart use of AI at the FT is its integration into the comments moderation workflow. The publication uses a tool called Utopia to screen out offensive or racist comments—a task once fully handled by human moderators. Now, the AI catches the worst behavior, allowing human editors to focus on generating questions that foster debate and dialogue under stories.


This isn’t automation for its own sake. It’s AI applied where it enhances the editorial experience for both readers and staff. “We welcome a bit of antagonism,” Garrahan noted, “but generally we don’t want abuse or outright offense to be caused.”


The broader lesson: Use AI to elevate the human work that matters. If your organization manages communities, supports customers, or hosts internal knowledge sharing, how are you directing AI to reduce friction, not flatten the experience?


A Defense Against Disintermediation

The FT is also grappling with a question many companies are only beginning to face: Will AI agents erode the direct relationship with audiences?


The concern is real. As AI-powered summaries proliferate, fewer people may visit source websites or engage with original material. The FT has responded by signing a deal with OpenAI that gives the AI company access to FT content but keeps all click-throughs behind a paywall. The model is simple: let AI boost visibility, but don’t give away your core product.


For B2B firms and professional services, the analogy is clear. As clients increasingly rely on AI to synthesize information, your firm’s role shifts from being a source of generic answers to a provider of high-value insight. If AI is going to compress your content, then the content must be so distinct and so authoritative that clients still want the original. That means deeper research, more specific points of view, and stronger brand voice.


Toward a Trust-First Future

Ultimately, the FT’s approach to AI underscores something many companies underestimate: trust is not a byproduct of compliance. It’s a strategic asset built through consistency, clarity, and integrity.


The FT’s editor Roula Khalaf was one of the first to write a public letter to readers about how the publication would use AI. Another letter is coming soon, updating readers on the latest experiments and learnings. This transparency is protective. It guards the FT’s reputation, its editorial standards, and its compact with readers.


Companies may not need to write open letters about their use of AI, but they do need to recognize the stakes.


We are entering an era where every brand is a media brand. How you write, how you summarize, how you respond—all of it speaks to who you are. And with AI in the mix, these choices will only grow more consequential.

 
 