Key Takeaways
- The Times Higher Education AI and Digital Maturity Index (AI DMI) frames university AI readiness across four pillars – Strategy, People, Utilisation and Technology – and finds that technology investment is consistently outpacing actual use, with people and leadership emerging as the weakest links across the sector.
- The same pattern shows up in Silicon Reef’s work with universities: digital fragmentation, low staff confidence and uneven adoption hold AI value back more than the tools themselves do.
- Microsoft 365 already provides the foundation most universities need, but only when shaped intentionally – through a single source of truth, design for everyday work, strong search and structure, AI embedded in familiar workflows, and governance and change leading the rollout rather than following it.
- AI readiness is cumulative, not a one-off launch. The universities making real progress build confidence before capability, focus on staff and student experience first, and treat AI as an enabler rather than a strategy in itself.
AI is no longer a speculative conversation in higher education. It’s already shaping how universities operate, communicate and support their people. Yet despite the pace of innovation, many institutions feel a growing sense of tension. The tools are there, expectations are rising, but meaningful impact often feels harder to achieve than anticipated.
The recent Times Higher Education AI and Digital Maturity Index (AI DMI) brings much-needed clarity to this challenge. By stepping back from individual technologies and looking at institutional readiness as a whole, the research offers a grounded view of where universities really are, and what’s holding them back.
From our perspective at Silicon Reef, the findings align closely with what we see every day in conversations with universities across the UK, Europe and beyond. Universities aren’t short on ambition, but many are still working out how to embed that ambition into the day-to-day reality of staff.
What the Research Tells Us About AI Readiness
One of the most important contributions of the AI DMI is its framing of AI readiness as multidimensional. Rather than focusing solely on infrastructure or tools, the index evaluates institutions across four interconnected pillars: Strategy, People, Utilisation and Technology.
A few themes stand out clearly.
First, technology readiness often outpaces utilisation. Many universities have invested heavily in digital platforms, cloud services and collaboration tools, yet see inconsistent or shallow usage across staff groups. This gap is particularly visible in more mature digital environments, where the assumption is often that availability equals adoption.
Second, people and leadership remain the weakest links. Staff are being asked to work differently without always feeling fully supported or reassured as they do so.
Third, governance and alignment matter as much as access. Institutions with clearer strategies, ethical frameworks and leadership direction show stronger confidence and uptake – even when their technical infrastructure is less advanced. Where the “why” and the “how” are visible, the “what” of AI feels less daunting.
Taken together, the message is clear. AI readiness isn’t something universities can buy. It’s something they must build deliberately, over time, across culture, capability and confidence.
What We’re Seeing on the Ground
These themes aren’t abstract to us. They’re showing up repeatedly in conversations with our clients – University of Leeds, King’s College London, Mohamed Bin Zayed University of Artificial Intelligence, and many others – both in direct relation to AI, and the broader digital workplace.
The institutions making the strongest progress aren’t necessarily the ones moving fastest. They’re the ones being most deliberate. Mark Dorey, Assistant Director of Communications and Engagement (Community) at the University of Leeds, describes what that looks like in practice:
“At Leeds, we’ve taken a responsible approach to AI adoption. We recognised early that this isn’t simply a technology challenge, it’s about helping people feel confident enough to use it well and responsibly.
Rather than rushing to roll out tools for the sake of it, our focus has been on building the right foundations first, creating clear guidance for staff and students, putting robust guardrails around data, transparency and academic integrity and creating spaces where colleagues can learn from one another and experiment responsibly.”
That kind of considered, foundations-first approach is what we consistently see working. But arriving at it usually means confronting the more basic obstacles that still hold universities back day-to-day.
The most common is digital fragmentation. Information scattered across hundreds of sites, inconsistent navigation, unclear ownership and outdated content all undermine trust. When staff don’t trust what they find – or can’t find it quickly – even the most capable platforms go underused, and any AI layer on top has a shaky foundation to work from.
We also see concerns about adoption. Universities are rightly wary of investing in new digital experiences that fail to gain traction. Questions about “tumbleweed after launch”, moderation of internal dialogue and cultural readiness come up far more often than questions about features or licences. Senior leaders want to know not just what’s possible, but what will actually change day-to-day behaviour.
At the same time, there’s growing recognition that the status quo is no longer working. Over-reliance on email, one-way internal communications and static intranets are increasingly seen as barriers to engagement. Leaders want platforms that support conversation, self-service and clarity, without introducing unnecessary risk or noise.
Perhaps most interestingly, we’re seeing a shift in mindset. Universities are becoming more deliberate, more reflective and more evidence-driven. Rather than jumping straight to “AI transformation”, many are asking more fundamental questions:
- How do our people actually work today?
- Where do they lose time?
- What gets in the way of collaboration?
That’s a healthy place to start.
How Microsoft 365 Solves These Challenges
Microsoft 365 already sits at the heart of most universities’ digital estates. The opportunity – and the responsibility – is to use it intentionally, in ways that respond directly to the kinds of gaps highlighted in the AI DMI.
At its best, Microsoft 365 provides a shared foundation for communication, collaboration and knowledge management. SharePoint, Teams, Viva, Copilot and Power Platform can create a coherent digital workplace where information is findable, conversations are visible and work flows more smoothly.
But the research, and our experience, shows that success depends less on what’s switched on, and more on how it’s shaped. A few principles consistently make the difference:
Create a single source of truth
Staff engagement rises sharply when there’s one trusted place to go for information, with clear ownership and governance. Consolidating content into a well-designed, well-managed SharePoint intranet helps tackle fragmentation and builds confidence in the digital workplace.
Design for everyday work, not edge cases
Tools should support the most common staff tasks first: finding policies, collaborating across teams, getting answers quickly and accessing support services. AI adds most value when it takes the friction out of those journeys.
For example, an agent might keep intranet content aligned with source systems, triage and route routine “how do I…” IT queries, or auto-build targeted briefings for different staff groups from existing communications. You don’t need to replace whole processes; you can simply take the heavy lifting out of the repetitive work that eats time and attention.
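As a purely illustrative sketch – the route names and keywords below are invented for this example, not part of Microsoft 365 or any specific product – the “triage and route” idea can begin as a simple rule-based first pass that handles the obviously routine queries and escalates everything else to a person:

```python
# Hypothetical sketch of a first-pass triage step for routine
# "how do I..." IT queries. Keywords and queue names are invented;
# a real deployment would plug into a service desk or agent platform.

ROUTES = {
    "password": "identity-support",
    "vpn": "network-support",
    "teams": "collaboration-support",
    "sharepoint": "intranet-support",
}

def triage(query: str) -> str:
    """Return a destination queue for a query, escalating unmatched ones."""
    q = query.lower()
    for keyword, queue in ROUTES.items():
        if keyword in q:
            return queue
    return "human-review"  # anything the rules can't place goes to a person

print(triage("How do I reset my password?"))   # routes to identity-support
print(triage("My research dataset vanished"))  # escalates to human-review
```

Even a sketch this small illustrates the governance point made above: the agent only automates the queries it can confidently match, and the default path is always a human.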
Invest in search, structure and usability
Intelligent search, clear information architecture and a good mobile experience are often more impactful than new functionality. When content is properly structured and tagged, Copilot and other AI tools can surface more accurate, relevant results. Findability is foundational to AI readiness.
Normalise, don’t sensationalise, AI
Copilot works best when it’s embedded into familiar workflows – drafting, summarising, preparing and analysing – rather than positioned as something separate or special. When staff encounter AI as a useful assistant in tools they already use, rather than a big new destination, anxiety drops and practical experimentation increases.
Put governance and change first, not last
Clear guidelines, moderation models and training give staff confidence to participate. That includes being explicit about where AI can and cannot be used, what data it can access and how outputs should be checked. Without this, adoption tends to stall. People are unsure what’s expected of them, so they use the tools cautiously, if at all.
These principles are easier to describe than to live by. Where we see them genuinely taking hold, it’s because universities are pairing the technology with sustained investment in their people, and tying both to the realities of everyday work. Mark Dorey describes how Leeds is approaching that pairing:
“We’re also investing in capability. Alongside wider staff training, we’ve launched pilot AI and data apprenticeships to help colleagues build deeper expertise and understand how these tools can improve day-to-day work in practical ways. That reflects our belief that long-term success depends as much on skills and confidence as it does on technology itself.
As we refresh our University strategy, we’re thinking carefully about how AI can support a more student-centred, digitally enabled and adaptable institution. That means focusing on practical use cases that remove friction from everyday experiences, whether that’s improving access to information, reducing repetitive administrative tasks or giving colleagues more time to focus on higher-value work.”
Investment in people running alongside investment in platforms; a focus on the friction in everyday experiences rather than the headline use cases. That’s those principles in motion.
Used this way, Microsoft 365 becomes the backbone for better organisational habits – the foundations that make meaningful use of AI possible.
Silicon Reef’s View: AI Readiness is a Journey, not a Leap
The AI DMI describes four stages of maturity: Incidental, Intentional, Integrated and Optimised. Most universities we work with sit somewhere in the middle. They’re not starting from scratch, but they’re not yet operating AI as a systemic capability either.
From our perspective, the most successful institutions share a few common traits. They start internally, focusing on staff experience and operational value before external showcase projects. They build confidence before capability, investing in training, governance and experimentation. And they align technology with purpose, ensuring every digital decision connects back to institutional strategy and culture.
Crucially, they accept that AI readiness is cumulative. It grows out of behaviours and ways of working that build up over time – trust, clear guidance, consistent practice. You don’t become instantly ‘AI-ready’ by launching a new platform or buying a new AI tool. Universities tend to get the most from AI when they move with intent and care, rather than simply trying to move first. They understand that people adopt what helps them, trust what is well governed and engage where they feel heard.
This thinking doesn’t stop with staff. The same deliberate, people-first approach is shaping how universities are preparing the next generation – and how they’re starting to redefine what success with AI actually means. Mark Dorey captures both:
“We’re also very conscious that universities have a responsibility to prepare students for a world where AI will be embedded in almost every profession. That has implications for curriculum design, assessment and employability, and we’re already adapting our approach to help students use these tools critically, ethically and confidently.
Ultimately, we see AI as an enabler, not a strategy in itself. Success won’t be defined by how many tools we deploy, but by whether we use technology in ways that genuinely improve the experience of working and studying at Leeds.”
That reframing – AI as an enabler, success measured by experience – sits very close to our own view. The universities that get this right won’t be the ones with the most deployments. They’ll be the ones whose staff and students notice the difference.
Research vs Reality: Are We Seeing the Same Picture?
So does the AI DMI research match what we’re seeing in reality?
On balance, yes. The report’s picture of strong technology foundations, weaker utilisation and uneven people readiness is very close to what we encounter day to day. Where the research talks about gaps between infrastructure and use, we see it in fragmented intranets, duplicated content and collaboration tools that staff struggle to navigate. Where it highlights low recognition of digital and AI skills, we hear it in questions about confidence, workload and “what this means for my role” far more than in requests for new features.
The difference is one of distance. The AI DMI offers a wide-angle view of systems and sectors; our work brings us into the everyday details of how staff actually work and how students and colleagues experience the digital estate. AI readiness will be won or lost in closing the gap between strategic intent and lived experience. That is the journey we see the sector on today. And it’s the journey we’re committed to supporting – thoughtfully, practically and with people firmly at the centre.
Where is your university on the journey?
AI readiness looks different at every institution, and the right next step is rarely the same twice. If you’d like an informal conversation about how the themes in this piece map to your university – and where the practical opportunities sit – we’re always happy to talk.