The Strategic Problems Everyone is Trying to Solve with AI

Over a period of eight months, I ran three larger projects on how to capture value with AI. They gave me a decent understanding of the common themes that top executives are thinking about when it comes to AI. I will now share some common elements of what all of these projects tried to nail. For confidentiality reasons, I will not go into too much detail, but some of the things I can share are very specific, while others are more generic.

Tinker, tailor, soldier, spy,

Scientist, inventor, reaching high,

Teacher, pilot, soaring by,

AI is teamwork; that's no lie.

In big organisations, the first order of business is governance

Size comes with inertia. It provides stability but makes things slow. How long does it take to get even a simple approval? Weeks is the closest unit here. How many AI-enabled SaaS tools or major versions with significant new capabilities will be introduced within a month? Craploads.

We were asked an innocuous question about which vector database we would recommend. Looking into it revealed that there are literally hundreds of options. None of them have been around long enough to offer proven availability of expertise and real miles behind them in high-load production use. The established front-runners are expensive and come with trade-offs; cool new projects are more niche than anything you have ever seen; and everything is wrapped in hype that makes it hard to distinguish between propaganda and well-meaning misinformation.

Well, how about we pick one we know works and just stick to it? It might not have the most advanced features and its performance might leave something to be desired, but it should enjoy better support. It gets the job done and is reasonably priced. Just roll with it, ensure it’s secure enough, and don’t bother assessing every option that pops up down the road.

People doing development work on themes around AI actually prefer a given working stack that is good enough: they get support for it and can focus on doing their thing. Freedom of choice was seen as a small price to pay when the alternative was constant evaluation and testing of brand-new tech. Too much of a good thing can be overwhelming. Just make sure you build the stack in such a way that the components can be changed later on.
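
To make that last point concrete, here is a minimal sketch of what such a replaceable seam could look like in Python. The VectorStore protocol, the InMemoryStore backend, and build_index are hypothetical illustrations, not recommendations; the point is that application code depends only on the interface, so the backing component can be swapped later without a rewrite.

```python
# A minimal sketch of a swappable vector-store seam. Both the protocol
# and the naive backend are hypothetical; the point is the interface.
from typing import Protocol, Sequence


class VectorStore(Protocol):
    def upsert(self, ids: Sequence[str], vectors: Sequence[Sequence[float]]) -> None: ...
    def query(self, vector: Sequence[float], top_k: int = 5) -> list[str]: ...


class InMemoryStore:
    """Naive reference backend: dot-product ranking over a dict."""

    def __init__(self) -> None:
        self._data: dict[str, list[float]] = {}

    def upsert(self, ids: Sequence[str], vectors: Sequence[Sequence[float]]) -> None:
        self._data.update(zip(ids, (list(v) for v in vectors)))

    def query(self, vector: Sequence[float], top_k: int = 5) -> list[str]:
        return sorted(
            self._data,
            key=lambda i: sum(x * y for x, y in zip(self._data[i], vector)),
            reverse=True,
        )[:top_k]


def build_index(store: VectorStore, docs: dict[str, list[float]]) -> None:
    # Application code only sees the protocol, never a concrete store,
    # so the backend can be replaced without touching this function.
    store.upsert(list(docs), list(docs.values()))
```

Swapping the backend then means writing one new class that satisfies the protocol, while everything built on top of it stays untouched.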

Modularity is actually a funky thing in this context. With AI, and especially LLMs, you want to encourage the availability of several overlapping services. Since there is no one-size-fits-all, even relatively simple tasks benefit from a “second opinion” on the same task. How do you spot hallucinations? In addition to asking the same model the same thing multiple times and comparing outputs, you can ask multiple different models the same thing and compare their answers. If the answers vary, it’s likely that the generative model has demonstrated some machine creativity.
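
As a sketch of that second-opinion idea, the snippet below asks several models the same question through caller-supplied functions and flags the result for human review when the answers diverge. The agreement measure is crude string similarity, purely for illustration; in practice you might compare embeddings or use a separate judge model, and the 0.7 threshold is an arbitrary assumption.

```python
# A sketch of cross-model hallucination spotting: low agreement between
# independently queried models flags the answer for human review.
from difflib import SequenceMatcher
from typing import Callable


def agreement(answers: list[str]) -> float:
    """Average pairwise string similarity of the answers, in 0..1."""
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)


def second_opinion(question: str,
                   models: list[Callable[[str], str]],
                   threshold: float = 0.7) -> tuple[list[str], bool]:
    # Each entry in `models` wraps one model's API behind a plain callable.
    answers = [ask(question) for ask in models]
    needs_review = agreement(answers) < threshold
    return answers, needs_review
```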

Companies typically have a lot of experience in dealing with multiple overlapping technologies. Their intent is usually to consolidate them (with varying success). Balancing between not chasing every shiny new thing and still providing enough coverage and variety is a challenge for today’s decision-making. It is going to take some time for key individuals to have their a-ha moment on AI and for that understanding to gather critical mass. Meanwhile, who can – and then dares to – make decisions on it?

AI will enhance everything – including our limitations

My way of working has changed, and I would claim my personal productivity has changed with it. No, this text is not generated with AI (excluding the caption rhyme and some fact-checking), and I rarely generate storylines for my presentations. I do work on different ways of saying things and use GenAI to bounce ideas around. I am still a bit amazed at how well it handles problems like “How would this approach handle cases in certain contexts?” – and I love how I can dig deeper and iterate with it. I can combine frameworks, play around with contexts, and operate with complex concepts. On top of that, I can generate memorable visualisations, specifications, and code with it.

My problem with my personal productivity is that the tools I use are extensions of my mind. The sheer volume and polish of the output makes the generated content hard to question, and verifying it properly is very laborious. Do I become even more blind to my own biases and the shortcomings in my thinking? And we’re still talking about individuals and personal productivity.

Process efficiency is another thing. Sure, we can accelerate relatively stable processes with well-defined tasks that can even be chunked into small pieces to become more manageable. A well-defined context helps a lot here. If we hope that an AI-enabled tool can make a choice out of limited options, it works well. If we hope it creates elaborate plans with open-ended outputs, there’s a risk to be considered. “Trash in, catastrophe out!” is the new adage of AI automation.
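
To illustrate the limited-options point, the sketch below forces the model toward an enumerated set and validates the answer before acting on it, failing safe to a human when anything open-ended slips through. The choice set, the prompt wording, and the call_model callable are all hypothetical.

```python
# A sketch of constraining an AI step to a closed set of choices.
# call_model is a hypothetical stand-in for any LLM client call.
ALLOWED = {"approve", "reject", "escalate"}


def classify(ticket_text: str, call_model) -> str:
    prompt = (
        "Classify the following support ticket. Answer with exactly one "
        f"word out of: {', '.join(sorted(ALLOWED))}.\n\n{ticket_text}"
    )
    answer = call_model(prompt).strip().lower()
    # Validate before acting: open-ended output routes to a human instead.
    return answer if answer in ALLOWED else "escalate"
```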

In addition to streamlining and automation, processes will surely enjoy AI-driven development. For one thing, as process data is made accessible to AI, a lot of things will become visible. The process data will no longer be encapsulated within the confines of process boundaries. Combined with the pattern-recognition capabilities of ML and AI, a single process automation can be extended to orchestrate several processes. That means processes executing dynamically depending on the circumstances. In practice, this would lead to a process network configuration that can handle much more complexity. This kind of ability comes with a price, but so does all organisational maturity. Do it right, and it will simplify your life. Do it wrong, and your operations will take overcooked spaghetti to the next level.

We didn't even get to new business models. Maybe that’s a topic that deserves its own article.

Risky Business

When it comes to strategy, there’s always the risk consideration. AI is no longer the great unknown that can be ignored with a wave of a hand. It comes with concrete risks – no longer silly ideas of machines taking over the world and enslaving us all. But what are the interesting risks, then? 

First, there’s overreliance. It’s a great idea that human beings stay in control. Machine models will provide alternative opinions based on the data available to them. People can then draw inspiration from these viewpoints. Another approach is that the machines create suggestions, and actual factual human beings can then review and modify them before they are put into use. Well, guess what a species that has evolved over billions of years to conserve as much energy as possible will do? It tries to be as lazy as humanly possible (pun intended). Yeah, overreliance and bypassing controls are real risks.

This risk is closely related to the dopamine economy – business models built on the addictive nature of our neurotransmitter responses. We get instant gratification out of a cleverly engineered and administered experience on the screen that we keep chasing – getting exposed to commercially or ideologically motivated messages while at it. Attention has become the gold of the information age. AI can make this even better targeted, more personalised, and more influential. Just take a look around while sitting on a train or bus. Suddenly, the idea of machines enslaving us all does not feel so distant. It just did not take the form of killer robots from the Terminator franchise. It’s something more subtle but still blatant. Some companies think about how to utilise this, some try to prevent their competitors from doing it, and some accept it as a fact of life. No one has easy answers on how this will influence us in the long run.

Then, there is transparency and explainability. A good example is data privacy legislation in Europe. While GDPR does not explicitly use the term “right to explanation”, the combined requirements effectively ensure that individuals can receive explanations about automated decisions affecting them. If AI is used as part of a process that makes decisions about me, I can demand an explanation of how it reached its conclusion. “We asked AI, and it determined it based on the data we have on you” sounds a bit too much like magic. Neural networks and deep learning are not easily explainable. For most of us, they act like black boxes. Introduce some non-determinism by leaving the model temperature at its default of greater than zero, and you have a mess on your hands.
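
For illustration, here is what the temperature point looks like with the OpenAI Python client (the model name is only an example). Temperature 0 makes decoding close to greedy, which helps reproducibility for audit trails, although providers do not guarantee bit-identical outputs even then, so logging prompts and responses remains essential for explainability.

```python
# A sketch of pinning down non-determinism: temperature=0 instead of the
# default. Assumes OPENAI_API_KEY is set; the model name is an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain this credit decision."}],
    temperature=0,  # near-greedy decoding; defaults are typically > 0
)
print(response.choices[0].message.content)
```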

And then there’s privacy. AI systems require and benefit from large data sets. That leads to vast amounts of information being collected – part of it personal, and some of it potentially missing explicit consent. AI-powered tools like facial recognition can monitor public and even private spaces, infringing on what we consider private. The possibility of profiling and targeted manipulation through more sophisticated social engineering can make security work a nightmare. Re-identification of anonymised data is another concern: identifying multiple data sources, finding correlations, and combining them can reveal sensitive information that was intended to remain private. Throw in biased training data, inadequate consent mechanisms, deep fakes, and outdated regulations. Do you get the idea? Thought so.

Should data quality be mentioned as a risk? For many companies, problematic data quality is not something that MIGHT happen but a fact of life. Most companies never quite get their act together. With the advent of AI, the importance of data has been realised in a new way. Better data quality means better results. Unmanaged data leads to outputs and analyses that are far from ideal and costly to compute. AI can be used to improve data quality, but unfortunately, it is not magic. The worse your data is, the less transparency you have. It sounds unbelievable, but companies are – once again – considering data quality improvement initiatives.

I must also mention that the problem with synthetic data is just that. It’s synthetic. It’s a data set with the same statistical distribution as the training data. It might look fine at first glance, but dig deeper, and your taste buds will be saturated with something that feels like plastic instead of tasty tofu horse marinated in chili oil. It does not have – for lack of a better word – soul. 
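
A small numerical sketch of the caveat: naively generating synthetic data by shuffling each column independently preserves the marginal statistics exactly, while the joint structure – here, a strong correlation between two features – evaporates. The data and numbers are made up for illustration.

```python
# A sketch of why matching marginal distributions is not enough.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: two strongly correlated features.
real = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.9], [0.9, 1.0]], size=10_000)

# Naive synthetic data: shuffle each column independently, which keeps
# every per-column statistic but destroys the relationship between them.
synthetic = np.column_stack([rng.permutation(real[:, 0]),
                             rng.permutation(real[:, 1])])

print(real.std(axis=0), synthetic.std(axis=0))                    # identical
print(np.corrcoef(real.T)[0, 1], np.corrcoef(synthetic.T)[0, 1])  # ~0.9 vs ~0.0
```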

The grand promise of AI is to manage complexity 

Will AI transform work? You bet. Will it eliminate the old jobs? At least for the tech workforce, not in the foreseeable future. AI applications will require loads of infrastructure, data sources, data transmission mechanisms, business logic, and consuming applications – plus plenty of old-fashioned troubleshooting, which has always been in demand.

Still, we get services that feel like magic. Well, maybe a bit of repetitive magic after a while, but magic nevertheless. The point is that it’s all built on top of the previous stacks. Sure, things can be outsourced and bought as services, but that does not remove the need to perform the jobs related to them; it’s just someone else providing the building blocks. During my 20 years in tech, the increase in productivity has been insane. Instead of armies of coders, a small team can get a lot done. Perhaps we should just see this as the next step in productivity.

Measuring AI cannot be done in complete isolation. We’ll need the generic ROI, lead time, and revitalisation metrics at the top. Then, we’ll need the bottom-up metrics on availability and user satisfaction. In between, we can run a set of AI-related metrics: precision, recall, and level of hallucinations, to name the most important ones. These metrics are mainly process-related: they represent AI’s ability to respond to queries and orchestrate process execution. There are some established metrics, but no definitive set of them; a lot of them still come from the ML discipline. You’ll need a set of layered metrics. The AI KPIs alone make very little sense.
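
As a sketch of the AI layer in such a metrics stack, the snippet below computes precision, recall, and a hallucination rate from a hand-labelled evaluation set. The record structure is a hypothetical simplification – one record per judged answer – and a real evaluation harness would be considerably more granular.

```python
# A sketch of AI-layer metrics over a hand-labelled evaluation set.
from dataclasses import dataclass


@dataclass
class EvalRecord:
    returned: bool      # did the system produce/return this answer?
    relevant: bool      # ground truth: should it have?
    hallucinated: bool  # did the answer invent unsupported facts?


def metrics(records: list[EvalRecord]) -> dict[str, float]:
    tp = sum(r.returned and r.relevant for r in records)
    fp = sum(r.returned and not r.relevant for r in records)
    fn = sum(not r.returned and r.relevant for r in records)
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "hallucination_rate": sum(r.hallucinated for r in records) / len(records),
    }
```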

The corporate functions seem to play a role here. AI is a cross-functional effort by nature. While taking a deep dive into an individual function might lead to measurable improvements, we’re still dealing with silos and ignoring the infamous total value. Don’t get me wrong: the functions should keep doing exactly what they are there for. Finance must drive the funding and manage overspending on tokens, Legal needs to manage the risk, and HR has to conduct training programs. But how do you make them all play together? A lot depends on the circumstances, but at least the mandate to change things must come from the top. Otherwise, it’s going to be a small proof of concept that never becomes the transformative force that defines the future.

Juho Jutila

Juho is a Business Architect with over 20 years of experience in building competitive strategies and leveraging both emerging and proven technologies to help global organisations succeed. Before Vuono Group, he worked as a consultant at Accenture, Columbia Road and Futurice, to name a few.
