A Very Short Briefing on AI

By Brilliant Noise, October 2023. 12 min read.

Brilliant Noise CEO Antony Mayfield was interviewed on The Major Difference podcast last week about generative AI and its impact on business and society.

If you can’t spare an hour right now, here’s the VSB (very short briefing) in:

  • 12 seconds: Two sentences.
  • 50 seconds: Eight bullet points.
  • 8 minutes: A summary article going into more depth on the main points.

What follows covers all three: the two-sentence take, the main points, and a longer summary article.

TWO SENTENCES

We must experiment openly yet responsibly with Generative AI today to uncover constructive applications, while proactively shaping its development through education and ethical guardrails rather than reacting with fear. Maximising creativity from diverse minds will help realise the vast potential of AI to improve human life if guided by shared values.

MAIN POINTS

  • There is a lot of hype and fear-mongering around AI, but we should avoid getting paralysed by fear and instead start experimenting with the technology.
  • Generative AI like GPT-3 is allowing billions of people to access powerful language models for free, unleashing huge potential for creativity and innovation. We’re still early in figuring out how best to apply it.
  • AI will go through three phases – making things better, enabling new workflows, and redesigning organisations around it. Businesses need to start getting hands-on with it now.
  • Prompt engineering is becoming critical – combining domain expertise with skill at structuring effective prompts will produce the best results from AI.
  • Education across society is urgently needed so we can make informed decisions. Policymakers especially need to understand the technology better.
  • Regulation is important not as an answer but to provide guidance for responsible development. However, containment will be very difficult.
  • There are opportunities to use AI to enhance human life and solve problems, but also risks of misuse. We need to move past fear and harness it positively.
  • The best thing individuals and organisations can do is start experimenting, while establishing ethical guardrails. More ideas from more people will produce better results.

SUMMARY

There is a lot of hype and fear-mongering around AI, but we should avoid getting paralysed by fear and instead start experimenting with the technology.

The rapid advancements in artificial intelligence have sparked both incredible hype and doomsday fears about its implications. However, as digital strategist Antony Mayfield argues, getting paralysed by scaremongering is counterproductive. Instead, Mayfield advocates that individuals and businesses should take an open yet ethical approach – start experimenting with AI tools, establish reasonable guardrails, and see where they can positively apply the technology. While risks like job losses or deepfakes are real, the best way forward is not to shun AI but to proactively find ways to harness its upsides while mitigating the downsides. As Mayfield says, “Fear is not a useful reaction. This is the time for imagination, not retreat.” With some prudent experimentation and an ethics-first mindset, businesses can discover how to use AI as a creative enabler that makes work better rather than something to dread.

Generative AI like GPT-3 is allowing billions of people to access powerful language models for free, unleashing huge potential for creativity and innovation.

The release of large language models like GPT-3 marks a pivotal moment where AI is being democratised, according to Mayfield. For the first time, billions of people can access incredibly powerful natural language processing systems for free online. This is unleashing a wave of creativity as people from all backgrounds experiment with generating text, images, code, and more using just their imaginations and literacy skills. We are still in the very early days of understanding the full potential of these generative models. However, their availability to the masses represents a huge opportunity to find new applications that were unthinkable before. As Mayfield says, “Where machine learning needs big datasets, [generative AI] lets anyone with a creative spark achieve incredible things.” While it may take years to discover the very best uses, this democratisation of AI is opening the doors to a new era of innovation.

AI will go through three phases – making things better, enabling new workflows, and redesigning organisations around it. Businesses need to start getting hands-on with it now.

According to Mayfield, businesses should expect AI to progress through three key transformation phases. First, it will make existing tasks and processes better – faster, higher quality, more efficient. Next, as skills develop, it will enable entirely new workflows and ways of structuring work. And finally, the most advanced organisations will undergo complete redesign around AI capabilities. Mayfield stresses that because of the rapid pace of change, companies cannot view this as a far-off development. They must start experimenting with the technology now. As Mayfield says, “The smartest thing any organisation can do is get AI into the hands of as many employees as possible. Give them guidance on using it responsibly, but let their creativity run wild. You’ll get the best results from maximising ideas.” Companies that proactively reshape themselves around AI will gain a competitive advantage. Those that delay risk being left behind.

Prompt engineering is becoming critical – combining domain expertise with skill at structuring effective prompts will produce the best results from AI. 

One of the key skills emerging from the rise of generative AI is prompt engineering – the ability to write effective prompts that produce useful results. As Mayfield explains, having deep expertise in a domain along with skill at prompt writing is a powerful combination. Knowing how to clearly structure and frame a request to an AI system makes the difference between gibberish and genius. Mayfield gave the example of writing marketing copy by prompting the AI with different agency styles to generate many variations. The human then edits and refines the outputs. According to Mayfield, “Understanding why prompts work well and curating the best ones will be critical – like having a book of spells. This creative skill will separate the best prompt engineers.” As businesses adopt AI, they will need talent who can be great prompt engineers in their field, guiding the technology with carefully engineered prompts. It will become a key human capability alongside AI tools.
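As an illustration of that variation-and-edit workflow, here is a minimal sketch in Python. It assumes the OpenAI Python client (openai 1.x) with an API key set in the environment; the brief, the “agency styles” and the model name are hypothetical placeholders rather than anything specified in the interview.

# Minimal sketch of the "many variations from one brief" workflow described above.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY in the environment.
# The brief, styles and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRIEF = "Write a headline and a 20-word strapline for a reusable coffee cup brand."

# Hypothetical "agency styles" used to steer the tone of each variation.
STYLES = [
    "a playful, pun-loving creative agency",
    "a minimalist Scandinavian design studio",
    "a data-driven performance marketing team",
]

def generate_variations(brief: str, styles: list[str]) -> list[str]:
    """Request one draft per style; a human then edits and refines the outputs."""
    drafts = []
    for style in styles:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[
                {"role": "system", "content": f"You write marketing copy in the voice of {style}."},
                {"role": "user", "content": brief},
            ],
        )
        drafts.append(response.choices[0].message.content)
    return drafts

if __name__ == "__main__":
    for style, draft in zip(STYLES, generate_variations(BRIEF, STYLES)):
        print(f"--- {style} ---\n{draft}\n")

Each run yields several starting points in different voices; as Mayfield describes it, the value comes from the human curation and editing that follows.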

Education across society is urgently needed so we can make informed decisions. Policymakers especially need to understand the technology better.

A persistent theme from Mayfield is the urgent need for education on AI across all levels of society. This is crucial so that individuals, business leaders, and policymakers can make informed decisions about how to steer the technology. Right now, most lack a real understanding of what AI is, what it can and can’t do, and its potential benefits and risks. This knowledge gap is hindering progress. As Mayfield states, “Policymakers especially need a deep education on AI’s realities so they can develop smart regulations.” For example, many lawmakers do not grasp the nuances between narrow AI, general AI, and the latest generative models. Mayfield advocates that rather than resist AI, the best way forward is to get hands-on experience with the technology accompanied by ethical training. Companies should do the same and make sure staff have a solid grounding. Broader education will enable society to reap the upside of AI while better anticipating and managing any downsides that arise.

Regulation is important not as an answer but to provide guidance for responsible development. However, containment will be very difficult.

On the complex issue of regulating AI, Mayfield believes some government oversight is necessary but unlikely to be a complete solution. He says regulations are important not as the answer itself but to provide guidance that enables responsible development of AI. However, Mayfield is sceptical that full containment or restriction of generative models is feasible given their nature. Unlike previous technologies concentrated in certain organisations or locations, powerful AI systems are already spreading globally. Mayfield points out that even tech giants like Google now have no special advantage in AI development. He cautions against overconfidence that regulations can tightly control this borderless technology. While thoughtful policy and guardrails have value, we cannot rely on containment strategies alone. As Mayfield summarised, “There are bad consequences as well as good. But AI is here and we have to learn to harness it positively.” Balanced government guidance combined with ethical use by organisations and individuals may be the most pragmatic way forward.

There are opportunities to use AI to enhance human life and solve problems, but also risks of misuse. We need to move past fear and harness it positively. 

Mayfield emphasises that along with legitimate risks, artificial intelligence presents huge opportunities to improve human life and address global problems. Applications in areas like medical diagnosis, scientific research, and combating climate change could provide tremendous benefits. However, Mayfield cautions there are also dangers like job losses, algorithmic bias, and even existential threats if AI becomes truly uncontrollable. With this high-stakes balance, Mayfield argues succumbing to fear is counterproductive. While risks must be taken seriously, the optimal path is prudent experimentation with ethics front of mind. If individuals and organisations avoid reflexive rejection of AI and instead actively shape its development for good, we can maximise its potential while minimising harm. As Mayfield says, “By moving past fear and focusing imagination on the positive, we can harness AI as a creative engine that enhances our best human attributes.” With care and conscience, we may craft an AI-enabled future that solves more problems than it creates.

The best thing individuals and organisations can do is start experimenting, while establishing ethical guardrails. More ideas from more people will produce the best results. 

Given the powerful capabilities of AI, what is the responsible response? According to Mayfield, the answer is active experimentation. He advocates that individuals and organisations should start exploring applications of AI today. This hands-on approach is the best way to learn about its potential and pitfalls. However, Mayfield stresses that experimentation must be guided by ethical guardrails. Organisations should establish guidelines and principles to avoid harmful uses. If society engages with AI openly yet prudently, we will discover the most constructive applications. Mayfield believes welcoming ideas from diverse sources is crucial, saying “More imaginations and perspectives will produce the best results.” Overall, Mayfield sees pragmatic but ethical experimentation as the ideal path. Proactive innovation guided by shared values may allow humanity to maximise the benefits of AI while keeping risks contained.

Stay in the loop

Sign up to BN Edition, Brilliant Noise’s newsletter about digital transformation and AI, or contact the team direct at hello@brilliantnoise.com.