
The rise of AI is undeniable; according to the US International Trade Administration, the UK AI market is worth more than £16.8 billion and is expected to grow to £801.6 billion by 2035. The number of UK AI companies has increased by over 600% over the last ten years.

Currently, most organisations are in the phase of experimenting with AI, often under a top-down directive, to explore the potential benefits AI can bring to the business. This experimentation is primarily driven by technology teams assessing technological capabilities. 

However, there is a clear challenge in transitioning these programmes from experimentation to production: a recent statistic from Databricks suggests that 85% of firms don’t feel ready to move AI into production.

What often seems to be lacking, though, is a sense of whether or not the organisation is ready to embrace AI: aside from the technical capabilities, whether the requisite controls from GRC and privacy perspectives have been considered, whether the business processes are in place to support it, and whether you’ve defined what success looks like.

Given the current state of AI experimentation, the question that arises is – are you truly ready for AI? Have you considered the concept of AI Readiness? To be among the 15% of companies that can effectively implement their AI initiatives, it’s crucial to assess your AI Readiness. 

So, what exactly do I mean by being ready for AI, or AI readiness?  

First, there’s a distinction between AI Governance and AI Readiness. AI Governance concentrates on the lifecycle of AI systems from design to operation. It involves the creation of policies and guidelines to manage AI risks and maximise benefits, focusing on transparency, accountability, fairness, and overall social impact. 

AI Readiness, by contrast, focuses on the organisation’s level of preparedness to adopt and integrate AI effectively. It is an umbrella concept encompassing many factors, such as strategic alignment, organisational capabilities, technology, and data.

For this discussion, I’m going to focus on AI Readiness, which, in my opinion, has four core aspects; I’ll dive into each of these: 

  • Strategy – do you have one? What are you trying to achieve? What does success look like, and how do you measure it? 
  • People & process – you may run a great experiment, but is the business ready to consume it, embrace it and have the right processes to do so? 
  • Governance – how do you control the use to maximise value and reduce risk? How do you make sure it’s used for good? 
  • AI-ready data – this is heavily linked to the governance angle; it’s about whether the data can and should be used, and whether it’s available, understood and trusted, free of bias. 

Start with Strategy

The first and foremost question is – do you have a clear AI strategy? It may sound obvious, but many organisations are driven by technology in isolation, leading to aimless experimentation. A well-defined AI strategy is the compass that guides your AI initiatives towards the desired business outcomes. 

If you don’t, this has to be the starting point. Experimentation alone, whilst it may expose you to the technical capabilities you have at your disposal, won’t ultimately serve any purpose other than to give you some ideas for the future. 

Like any other technology or innovation programme, your AI strategy must align with your business strategy. If the AI programmes don’t, they risk becoming another pet project. Aligning your AI strategy with your business strategy ensures that your AI initiatives are relevant and contribute to your business goals. 

The last key factor is value. As part of the strategy, you need to define what value means to the business and how that feeds into the business strategy; it needs to be quantified and agreed on, otherwise determining a programme’s success, or potential success, becomes entirely subjective. 

Finally, before initiation, you must understand whether moving to production is even possible; otherwise, the exercise is pointless. If the experiment is successful, how do you move from experiment to production? Does the cost of change outweigh the value delivered? 

For some, this step may come later – you may want to experiment, prove the theory and then evaluate how to move to production – but if you don’t want to waste time and resources, my recommendation is to at least form a high-level view of whether it’s feasible. 

People & Process 

The use of AI in day-to-day operations can have fundamental impacts on working practices and the skill sets required to operate in an AI-supported business, so when it comes to moving AI into production, the people and process element can’t be overlooked.  

People need to be trained and upskilled to work with AI, to understand it and its possible impact. From an AI governance perspective, they also need to understand the potential effects of misusing AI platforms – in this sense, AI is no different to any other technology platform. You need an appreciation of, and a plan for, how you’re going to integrate AI into people’s working lives. 

Similarly, from a process perspective, we need to consider the impact and possible changes to day-to-day business processes through the insertion of AI. You can’t maintain the same processes and expect them to work – at the very least they need to be evaluated. If AI can’t fit the process or you can’t alter the process, it begs the question of whether AI is suitable for moving to production. 

One last point: GRC processes need to be considered even during experimentation. Whilst the big players like Microsoft, Google and AWS are firmly in this space, there are countless niche providers right now. Even if you’re only experimenting, these platforms still need to be risk assessed, and those experimenting need guardrails in place so that the right data is used, and used safely. 

Governance & AI Ready Data  

Some people may separate these two, partly because of the traditional perception that data governance is a blocker rather than a means to unlock data value; in reality, it can be quite the opposite. 

Like anything, the data you use needs to be trusted to get the most from AI. More often than not, poor data will lead to poor outcomes, whether the tool you’re using to derive value is AI, analytics/BI or anything else. 

Much like part of the overall strategy, you need to consider what good looks like from a data perspective. 

Data governance is how you drive that trust and make the data available – it’s how you develop AI-ready data. That comes through data governance components such as a data catalogue, business glossary, data lineage and data quality, all of which are key to driving the trust and quality that let AI platforms take advantage of the data. The impact of not getting this right has already been demonstrated by cases like Air Canada’s. 
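To make the data-quality component concrete, here’s a minimal sketch of the kind of automated checks a data governance function might gate an AI pipeline on. The field names and thresholds are hypothetical examples, not a prescribed standard:

```python
# Illustrative data-quality checks of the kind a data governance
# programme might run before declaring a dataset "AI-ready".

def completeness(records, field):
    """Fraction of records where `field` is present and non-empty."""
    if not records:
        return 0.0
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def duplicate_rate(records, key):
    """Fraction of records whose `key` value repeats an earlier record."""
    if not records:
        return 0.0
    seen, dupes = set(), 0
    for r in records:
        k = r.get(key)
        if k in seen:
            dupes += 1
        seen.add(k)
    return dupes / len(records)

# Hypothetical customer extract with known quality issues.
customers = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},               # missing email
    {"id": 1, "email": "a@example.com"},  # duplicate id
]

# Gate: only hand the data to the AI pipeline if thresholds are met.
ai_ready = (completeness(customers, "email") >= 0.9
            and duplicate_rate(customers, "id") <= 0.01)
print(ai_ready)  # False: one empty email and one duplicate id
```

In practice these thresholds would be agreed with the business as part of the governance framework, and the checks would run against the catalogue and lineage tooling rather than in-memory lists – but the principle of an explicit, measurable quality gate is the same.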

The last aspect to consider here comes more from a security and privacy angle. Whether you’re experimenting or in production, the data you use has to be secured and treated with the same consideration as your production platforms. The same rigour needs to be in place, and from a risk-mitigation perspective there is value in completing vendor risk assessments for those AI platforms up front: if you can’t use them in production, experimenting with them is a pointless endeavour. 

Ultimately, AI can drive many benefits if adopted correctly, but launching into it without the right level of consideration is risky and can cost time, energy and money. Before venturing in, running through an AI Readiness assessment – whether internally or with an organisation like mine – is a must. 

 

At Nephos, we combine technical expertise with the strategic business value of traditional professional service providers to deliver innovative data solutions. Our solutions ensure your data is fully prepared for AI, enhancing data quality and compliance, so you can achieve more accurate, actionable insights and unlock the full potential of AI-driven outcomes. Click here to find out how.

Lee Biggenden

Lee, the Co-founder and Managing Director of Nephos, brings a wealth of experience and a pioneering spirit to the forefront of data system integration. Lee's thought leadership content offers invaluable insights into transforming data storage, processing, governance, and protection. Through his writings, Lee shares the latest trends, challenges and advancements in the data technology landscape - helping organisations to not only adapt but thrive in the digital era.
