At the end of November 2025, the AI Sports Summit, held as part of the Dutch AI Week, raised a question that would have sounded like science fiction not long ago: what happens when sport, in all its chaos and beauty, starts to learn? Not in the metaphorical way we describe a team “learning the game,” but literally, through sensors, algorithms and systems that see patterns humans miss and make decisions at a speed we can’t match.
Belgian trendwatcher Bert van Thilborgh opened the door with a simple but disarming observation: sport has entered an era of measurability that no longer belongs only to Olympians and elite labs. Cheap sensors, wearables and apps have pulled performance data into everyday life, turning a casual runner’s training log into the kind of feedback loop that used to be guarded behind federation doors. His slides framed it as the “democratization of measurability in sports training & performance,” and the subtext was clear: once measurement becomes mainstream, prediction and personalization inevitably follow.
That’s where AI slips into the story. Not as a distant replacement for coaches, but as an always-on assistant. If data is the new oxygen of sport, AI is the lung that can actually use it. From injury prevention to technique analysis, the promise is less about creating superhumans and more about reducing the friction that stops people from playing, progressing and staying healthy. Bert’s examples stretched from training support to virtualization, hinting at a world where workouts are no longer tied to a club timetable or a physical venue, but can be guided at home through immersive environments and adaptive feedback.
Then he pivoted to the place where sport’s romantic relationship with human imperfection gets complicated: officiating. In tennis, automated line-calling systems already embody a cultural shift because when a machine is more accurate, we start renegotiating what we actually value. Is the drama of a wrong call part of the theatre or an injustice we’ve tolerated for too long? Bert referenced high-profile examples such as Wimbledon’s electronic line calling and the broader tension the VAR era has created: fewer mistakes, yes, but also new debates about flow, authority and trust.
And then, like any good futurist, he went one step further. Sport, he suggested, isn’t only being improved by AI, it may also be invented by it. His presentation highlighted Speedgate as “the first sport developed by AI,” a concept derived from patterns across hundreds of existing sports, recombining what people already love into something new. It’s a provocative thought: if human bias shapes which sports get attention, funding and prestige, what happens when the blueprint is generated by an intelligence that starts from data rather than tradition?
Stefano Pintus from the European Commission brought the conversation back down to earth, in the best way. Because revolutions are not only built on possibility; they’re built on frameworks. In his “perspectives on AI in sport,” he placed sport inside a wider European “apply AI” agenda, emphasizing that adoption isn’t just about clever tools, but about the full stack: computing infrastructure, data, skills, workable rules and the ecosystem that supports real-world implementation.
Stefano offered a useful lens for anyone trying to make sense of the hype: most of what we call AI today is narrow AI, powerful but bounded, while more general forms of intelligence remain theoretical or hypothetical. That distinction matters because it keeps sport leaders honest. You can deploy AI now to ease administration, support coaching, improve performance, prevent injuries and engage fans, but only if you have the right data and the people who know how to work with it.
He also framed sport through three practical arenas: grassroots sport, competitive sport and fan engagement. Each has different needs and therefore different AI opportunities, from volunteer and club management to elite strategy and content personalization. But Stefano didn’t pretend the path is frictionless. His presentation underlined the challenges that quietly decide whether pilots become progress: digital literacy gaps, data availability and bias, data protection and ethics. In other words, the biggest risk isn’t that AI will arrive; it’s that it will arrive unevenly, benefiting the best-resourced organizations first and widening a digital divide inside sport itself.
To counter that, he pointed to community as infrastructure. SHARE 2.0, a European community with 500+ members across ministries, administrations, Olympic committees, federations and clubs, is designed to share learning and build capacity because knowledge travels faster when it’s networked.
Morten Pohl from the DOSB made it practical, almost disarmingly so. His talk asked a question every federation, municipality and club is quietly wrestling with: how do you deal with the effects of AI in sport when your organization is already complex before AI even enters the room?
His answer wasn’t a grand tech manifesto. It was a recipe for sanity. Start by promoting AI literacy because “people can’t use what they don’t understand,” and without basic competence, organizations drift into fear, misuse or missed opportunity. In his framing, AI literacy becomes the new baseline for safe adoption, from spotting hallucinations and privacy risks to building the confidence needed to experiment responsibly.
Then, build scalable pilot projects. Not moonshots, but 90-day experiments that solve real pain. In volunteer-driven sport structures, time saved is impact gained. Morten shared how the DOSB approach included guardrails, training for colleagues, clear points of contact and gradual tool approval with data protection involved. It’s about governance that evolves with use, instead of governance that freezes innovation.
Finally, connect. Morten emphasized that no organization can master this alone, which is why coordination and cooperation aren’t “nice to have,” they’re the only way to avoid expensive duplication. His “Sportdeutschland Digital” community is a signal of where Europe is heading: shared learning, shared standards and shared momentum.
Put the three perspectives together and a clearer picture emerges. AI in sport isn’t one story. It’s three stories running in parallel. It’s the performance story, where wearables and algorithms democratize insight. It’s the governance story, where Europe tries to turn innovation into something trustworthy, fair and scalable. And it’s the organizational story, where the winners won’t be the ones who buy the flashiest tools, but the ones who teach their people, pilot with purpose and build networks that make the whole system smarter.
The real provocation of the summit wasn’t “will AI change sport?” That’s already happening. The provocation was deeper: will we shape that change so sport remains human at its core, more accessible, safer, more engaging, or will we let the future arrive by default? In Breda, the message felt urgent and oddly hopeful: the next era of sport won’t be decided by machines. It will be decided by the choices we make about how to use them.