A Brilliant Noise Whitepaper

Prepared Minds

How to Start Your Team's AI Revolution

Written by Antony Mayfield. Edited by Stephanie Hubbard.

Contributors: Dr Jason Ryan, Katie St Laurence, Rachel Stubbs, Harriet Malina-Derben


AI isn't just a new technology — it prompts us to think differently. And the biggest barrier to progress right now isn't the tech itself — it's literacy.

Prepared Minds lays out the Brilliant Noise manifesto for AI literacy.

It explores what it takes to build true AI fluency across teams and provides a practical guide for leaders who want to move beyond experimentation and create the conditions for lasting change.

About Brilliant Noise

For 15 years, we've helped some of the world's most ambitious brands navigate the complexities of digital change – building capability, shifting culture and supporting teams through real, sometimes messy, transformation.

That experience shaped us. It taught us how organisations adapt, how people respond to new technology, and what it really takes to do things differently.

When generative AI emerged, we didn't wait and see. We rebuilt the business from the ground up – refocusing, retooling our methods and rethinking our role with the brands we work with.

Because we believe AI isn't just the next wave of technology. It's the next transformation. And we know exactly what it takes to lead one.

AI: What's the catch?

Imagine if you offered your colleagues a simple way to achieve an extra day's work each week without working longer hours. The response would be something along the lines of: "Sure, but what's the catch?"

Generative artificial intelligence (AI) systems like ChatGPT offer those performance boosts to anyone, but the catch is this: using AI is easy to start but deeply challenging to develop.

It's easy because the tools are freely available, the interfaces are familiar and anyone who can ask a question can get started.

It's challenging because using the tools can quickly disappoint, raise difficult questions, or become baffling.

There's so much opportunity. And yet, organisational AI adoption is proving tricky.

The challenge lies in leading teams to develop AI skills that will fully unlock its potential.

Like learning a language

Learning to use AI effectively is hard because it's more like learning a language than learning to use a computer system.

AI doesn't behave like any other computer system we've experienced before. Like language, AI is not consistent. It changes. It behaves much more like a human brain than a computer — constantly learning and adapting to its environment and stimuli.

And learning to use it is a lot like learning a language — we need vocabulary, but we also need constant exposure, so that we learn not only to speak this new language but to THINK in it too.

Just like learning a language, using AI effectively requires practice, immersion, and continuous learning.

It requires us to become AI literate.

What is AI literacy?

A working definition

AI literacy is an evolving set of skills, including critical thinking, knowledge of AI systems' limitations, the ability to assess their outputs, and an understanding of where they can complement or enhance human cognition and expertise in a given field.

It's the ability to understand, evaluate and use artificial intelligence systems and tools in a responsible, ethical and effective way.

AI has moved very quickly from an edge technology into a mainstream tool. Technical expertise isn't required in the same way as it once was in order to use this technology to achieve sophisticated results. However, it's still crucial to grasp the principles behind how AI is built and operates — this knowledge helps leaders make informed decisions about when and how to integrate AI into their processes.

AI literacy involves recognising AI's strengths and limitations, understanding that outputs may be flawed or biased, and knowing where AI can best complement human thinking. Critical thinking is key — not just in spotting errors, but in questioning how AI arrives at conclusions and how our own biases may influence the results. It's only through continued use and experimentation that we gain the wisdom to make these kinds of judgments.

AI is not just a technical tool anymore; it's been irrevocably released into the world and therefore carries organisational and moral implications. Using it effectively means understanding its role as a tool that enhances — rather than replaces — human intelligence, so we must learn how to "dance with the system", as systems thinking pioneer Donella Meadows called it.

A framework for AI literacy

AI literacy develops in stages. People start by experimenting on their own and progress towards designing organisation-wide systems. As they move up the ladder, the gains shift from simple efficiency to strategic advantage.

This framework sets out five non-technical levels of generative AI competency. It reflects what we see across teams and client organisations today, and it's designed to help leaders make clear decisions about where they are and what they need next.

0 Individual

Play & explore

This is where curiosity takes hold. Individuals try out new tools, test prompts and see what's possible. Usage is inconsistent, but confidence grows quickly. People get early wins through exploration, even if they don't yet have a structured approach.

1 Individual

Structured prompting

At this level, individuals learn how to guide AI systems more deliberately. They build repeatable prompts, create simple bots and start to use AI consistently in their day-to-day work. The focus shifts from play to dependable results.
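To make the shift from play to repeatability concrete, here is a minimal sketch of what "structured prompting" can look like in practice. The template wording, field names and the build_prompt helper are our illustrative assumptions, not tools described in this paper — the point is simply that a prompt which works gets captured once and reused, rather than retyped from scratch each time.

```python
# A reusable prompt template: the structure is fixed, only the details vary.
# All names and template text here are illustrative assumptions.
from string import Template

BRIEFING_PROMPT = Template(
    "You are a $role. Summarise the notes below for $audience.\n"
    "Keep it under $word_limit words and end with three action points.\n\n"
    "Notes:\n$notes"
)

def build_prompt(role, audience, word_limit, notes):
    """Fill the template so every use of the prompt has the same structure."""
    return BRIEFING_PROMPT.substitute(
        role=role, audience=audience, word_limit=word_limit, notes=notes
    )

prompt = build_prompt(
    "marketing analyst", "the leadership team", 150, "Q3 campaign results"
)
```

Capturing a working prompt as a template is what turns an individual's lucky result into a dependable, day-to-day habit.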

2 Team

Process integration

Teams begin to weave AI into their workflows. They design AI tools that improve speed, reduce friction and automate parts of a process. The impact is collective rather than individual – teams feel the gains as work becomes smoother and faster.

3 Division

Agent fluency

Divisions go beyond basic tools and start managing AI agents as active collaborators. People understand how agents work, how to coordinate them and how to oversee more complex systems. This is where teams move from automation to orchestration.

4 Organisation

Systems design & leadership

At the highest level, organisations build AI–human systems that support strategy, not just execution. Leaders shape how AI is deployed across the whole organisation and make decisions about capability, governance and long-term direction. AI becomes a strategic asset rather than a set of tools.

The AI Literacy Ladder

The AI literacy ladder shows how individuals and organisations progress from early experimentation to strategic, system-wide deployment. Each step reflects a shift in focus, scale and capability – from simple prompting skills to managing agents and designing AI–human systems. It gives leaders a clear view of where they are today and what they need to unlock next.

  • Level 4 – Organisation: Systems design & leadership
  • Level 3 – Division: Agent fluency
  • Level 2 – Team: Process integration
  • Level 1 – Individual: Structured prompting
  • Level 0 – Individual: Play & explore

Literacy levels are moving

Like everything to do with AI, the levels of literacy are moving and evolving. We think of them like trains moving at different speeds, each 10 times faster than the one below.

Level one is easy to hop on to. Once you're able to get things done faster with good prompts and fluency — nudging and pushing the system to do what you need or do better — then you're able to develop your knowledge of AI faster. Once you transition to level two — developing bots and repeatable processes or even small apps with AI to speed up team work — you start to develop knowledge and skills at a faster rate, an order of magnitude faster.

This could explain another common puzzle for users of AI. Regardless of age or profession, we repeatedly hear variations of the question: "I just don't understand why everyone isn't using this!" We've heard that from bosses talking about their teams, creatives talking about technical colleagues, even undergraduates talking about their fellow students. The reason is that they're usually moving quickly away from their previous point of knowledge, and it's hard to remember what it was like before.

Three phases of AI integration

AI literacy creates value. Knowledge opens doors to more knowledge. As teams learn about AI, they discover new gaps in their understanding, which they then fill. We've found this process leads to better work and business gains.

1

Do what you do now but better

We can speed up everyday tasks like writing emails, taking notes, creating reports or writing newsletters, often with better results.

2

Do what you do in new ways

Work that involves sequences of tasks – processes – shared between more than one person can be organised differently to take advantage of AI. For example, using meeting conversations as raw data to create reports, product descriptions and proposals leads to different, faster, better ways of getting things done.
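As a sketch of what this looks like in practice, the snippet below treats one meeting transcript as the raw material for several downstream documents. The generate() function is a stand-in we've invented for whatever AI model call a team actually uses; here it just labels its output so the example runs without any external service.

```python
# Process integration sketch: one conversation, several outputs.
# generate() is a hypothetical stand-in for a real AI model call.
def generate(instruction, source_text):
    # In practice this would call an LLM; the stand-in keeps the sketch runnable.
    return f"[{instruction}]\n{source_text[:80]}"

def meeting_to_outputs(transcript):
    """Turn one meeting transcript into several downstream documents."""
    return {
        "report": generate("Write a one-page status report", transcript),
        "product_copy": generate("Draft product descriptions", transcript),
        "proposal": generate("Outline a client proposal", transcript),
    }

outputs = meeting_to_outputs("We agreed to launch the autumn range in October.")
```

The design point is the shape of the workflow, not the code: the transcript is captured once, and each document becomes a cheap, repeatable transformation of it.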

3

Find new things

New tech brings new markets and ways of doing things. It's hard to see these at first. For example, when the iPhone came out, no one had thought of Uber, Tinder, Instagram or TikTok yet. We're in the early days of AI.

How to build AI capability in your team

Five practical steps for developing AI capability:

1

Get hands-on ASAP

People learn faster when they get hands-on experience with AI systems.

We try to create a sense of 'AI vertigo' — the sudden expansion of possibilities and shift in problem-solving approaches — early on. Framed in the right way, this can be exhilarating and motivating.

We get people working with AI as soon as possible, getting them to try different models and prompt structures so they can see the difference these things can make to the quality of their results. Once they've done it themselves, we explain why it works or doesn't.

This 'show then tell' approach sparks curiosity and engagement, making subsequent explanations more meaningful and relatable.

A sense of surprise activates our brain's capacity for learning. Find the right demo, and it usually doesn't take long before even the most sceptical of sceptics are surprised and delighted by something AI can do for them.

2

Understand the machine

To fully grasp the implications of AI, we've found it's important to show the timeline of human efforts and events that have created thinking machines and how recent innovations in generative AI came to be.

A brief look at the history of AI helps put today's advancements in context, while recognising the differences between narrow AI, general AI, and other approaches provides a clearer understanding of what the technology can and cannot do.

Understanding the machine usually follows this structure:

  • History of AI: A brief overview of AI's evolution helps put current developments in perspective.
  • Types of AI: The differences between narrow AI, general AI and other approaches, which together give a full picture of the technology's capabilities and limitations.
  • Large Language Models (LLMs): The basics of how LLMs (the systems that power generative AI) work, as they form the backbone of many current AI applications.
  • The AI Revolution: Why generative AI represents a fundamental shift in technology, and its potential impacts across industries.
3

Filter the noise

Deep technical knowledge isn't essential for everyone. But we've found that our clients appreciate having a basic understanding of how AI systems work in terms of:

  • The companies involved and who owns the different AI models
  • The commercial value of things like the chips and data used to fuel AI models
  • The power dynamics and corporate agendas

They want to know this so they can filter the noise and make sound judgments. Understanding the competitive landscape, or 'Tech Race', also helps inform strategic decisions around AI adoption and investment.

Our curriculum usually covers:

  • The AI Value Stack: The layers of technology, processes and companies that contribute to AI's functionality, from data collection to application development.
  • The Tech Race: The competitive landscape in AI development that informs strategic decisions about adoption and investment.
  • The Power of Prompts: Why prompts work in directing AI behaviour and how to craft effective prompts for different purposes.
4

Beware the thinking traps

'Delegation dodging' — the habit of avoiding delegation because "it's faster to do it myself" — often stems from thinking traps like lack of trust or fear of failing at more complex tasks.

A similar dynamic occurs with AI adoption, where people hesitate to integrate AI into their workflow after initial training because they perceive that learning a new workflow will set them back.

The goal is to create a system of co-intelligence between you and AI, learning to collaborate rather than focusing on 'correct' use.

There's no perfect way to use LLMs yet, but experimenting with smaller, low-stakes tasks — like drafting emails or reports — can help build AI literacy before applying it to high-stakes projects, strategic decisions or products.

Think of it as finding the right fit between your working style and the technology, understanding the trade-offs between adapting your methods or the tools themselves.

5

Address security and ethics

The era of blanket AI bans is over.

AI literacy must go beyond technical skills to cover ethics, including fairness, transparency, and accountability. Users need to understand the impact of AI decisions and actively mitigate risks. Addressing biases is key to ensuring fair and inclusive outcomes, making equity a central part of AI education.

Sustainability is also a growing concern, especially with the energy demands of large AI models. Raising awareness of AI's environmental impact can help organisations adopt more sustainable practices.

A strong AI policy should include safe practice guidelines, clear rules on data usage, and myth-busting around what AI does and doesn't do with your data.

We've found teams want the following:

  • What an AI policy should include
  • Safe practice guidelines
  • What data can be used where and when
  • Myth-busting (what AIs do and don't do with your data)

Conclusion: AI literacy is an essential skill

AI has moved very quickly from an edge technology into a mainstream tool. And now it's irretrievably a part of our lives and our work.

It will continue to enhance and accelerate human thought and innovation in unprecedented ways. What the Industrial Revolution did for physical strength and effort, generative AI will do for thinking. It will cut the time simple repetitive tasks take, while boosting productivity and creativity.

For this reason, literacy in these technologies is no longer optional — it's essential. What is needed right now is for organisations' employees — and especially their leaders — to develop an ease not only in making decisions about AI but in using it themselves.

AI presents the biggest opportunity in the last 20 years. By investing in AI literacy now, organisations can unlock unprecedented productivity, innovation, and long-term success.

Start your team's AI revolution

Ready to dive deeper? Let's discuss how you can start applying these insights today.

Appendix: Sources & further reading

Recommended reads

These sources are highly influential in general and have been specifically useful to the writing of this paper.

  • The Jagged Frontier paper
    Dell'Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F. and Lakhani, K. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality.
  • Co-Intelligence, by Ethan Mollick
    Mollick, E. (2024) Co-Intelligence: Living and Working with AI. New York: WH Allen.
  • The Future is Digital, by George Rzevski
    Rzevski, G. (2023) The Future is Digital: How Complexity and Artificial Intelligence will Shape Our Lives and Work. 1st ed. Southampton: WIT Press.
  • How To Have a Good Day, by Caroline Webb
    Webb, C. (2016) How to Have a Good Day: Harness the Power of Behavioral Science to Transform Your Working Life. London: Bantam Press.
  • Right Kind of Wrong, by Amy Edmondson
    Edmondson, A.C. (2023) Right Kind of Wrong: The Science of Failing Well. New York: Atria Books.


Published by Brilliant Noise

© Brilliant Noise 2024. All rights reserved.