Enterprise AI Strategy
It’ll probably be your fault, too.

Now is the time to act to keep pace with the market. However, most large organizations today are woefully unprepared to harness AI—specifically large language models (LLMs)—correctly. The reasons are many, but here we share the most common failure patterns we’ve seen so far. More importantly, we tell you how to avoid them.

What do we know? We’ve spent years deploying AI for the Intelligence Community, the Department of Defense, governments, and the world’s largest enterprises, before it was cool. And we’ve been building with Generative AI since the beginning.
Thank you for your interest, and enjoy the whitepaper!
Feel free to reach out to info@clarative.ai with any questions.
Misunderstanding the Opportunity
Let’s face it: most organizations were blindsided in late November of 2022. The AI arms race began, and C-suites across the globe needed to seize the moment lest they appear slow or Luddite. Conversely, many innovative leaders saw this dazzling new piece of technology and began to think of applications. “Wouldn’t it be great if I could just talk to my company’s documents?” “What if I could ask questions of my enterprise’s data and have reports and charts generated for me?” “I’d love to have all of my meetings transcribed and summarized for me.” As we explain here, most of these are going to fail.

Instead, they should be thinking about their company’s problems. Who has trouble discovering company resources? Where are analytics a bottleneck to faster insight and action? Where is meeting context lost today?

Executives looked at LLMs like ChatGPT and thought, “I need to get one of those inside my organization.” While LLMs are powerful, they are in reality much too blunt an instrument to effect meaningful change in context-rich and siloed environments like the modern enterprise—they are not artificial general intelligence (AGI) or even narrow AGI, though they do a decent job masquerading as such in a chat context.

This sort of top-down, big thinking is natural, as these language models are “large” after all. The issue is that this thinking ignores the realities of the way your business functions, the problems it actually has, and, most importantly, the people experiencing those problems. LLMs are not the one-size-fits-all solution leadership envisions them to be. With large language models, executives first need to think small. As with all previous technology, a solution broad enough for all is useful to none. That’s why 85% of AI projects fail.
We’ve witnessed numerous failure cases where major and even innovative companies relegated development of LLM capability to their internal data science, AI, or IT functions. They dedicated resources to tinker and build solutions in a vacuum, typically “fine-tuning” open-source or proprietary models on internal data. In some cases we’ve even seen companies train an entire LLM. However, from healthcare to retail to finance, we haven’t seen a single one of these succeed in moving to production use for meaningful workflows. They discovered the same thing every 19-year-old CS student on X (formerly Twitter) discovered months ago: building production-ready AI applications is a lot harder than they’d imagined. These sorts of science fair projects never make it out of the garage, are full of critical vulnerabilities, and are incredibly costly to the organization.

On the other hand, the success cases we’ve seen are ones where cross-departmental task forces have come together to identify particular pain points, discuss intended user groups, and evaluate possible AI-assisted solutions (both in-house and external). This brings us to the next challenge.

Incomplete Education; Incorrect Expectations
From executives to stakeholders to technologists, we’ve witnessed wide variation in AI literacy. To make matters worse, this technology ecosystem is evolving faster than any other in the history of humanity, which makes keeping up with it more than a full-time job. This is why we built an AI system just to help us keep up with AI systems research, which we’re making available upon request.

What this means practically is that limited understanding of an LLM’s capabilities and pitfalls has led to poor investment decisions and unrealistic expectations. Conversely, it has led leaders to overlook rich applications because of incomplete knowledge.

It is incumbent upon leaders intending to use AI to educate themselves on its value and limitations. You cannot rely solely on the market to educate you. There is a lot of misinformation floating around.

Even those organizations that have successfully managed their knowledge gaps have fallen victim to yet another trap. The WGLL trap: they didn’t define What Good Looks Like. Without a clear evaluation framework, how can you understand the relative lift you can expect to achieve? Even technically proficient organizations that have cleared the hurdles of aligning on “the what” and understanding “the how” have fallen prey to not defining success and WGLL. Most folks are still hoping they can sprinkle some AI on their problems and they’ll magically go away.

One of the very first things we do with our customers is define success.
The Technology Will Fail You
“Damned if I do. Damned if I don’t.”
Companies today are in a tough position. They absolutely have to deliver a fruitful AI strategy or face losing to the competition. The same thing happened in the data revolution, and those that rose to the occasion emerged 23x better than their competition. Unfortunately, even if enterprises manage to avoid the educational and process-oriented pitfalls listed above, the technology and security risks are almost sure to ensnare them.

Let’s assume the applications have been identified, the problems understood, the user bases selected, and the success criteria defined. From here, organizations have a few options to obtain a solution and all of them introduce technical risk.
Buy It
Nearly every technology provider has added the letters “AI” to the first three words that appear on their homepage. Very few have done anything novel, or much of anything at all, with said AI.

This includes some of the innovative companies that come to mind when you think of Big Tech. The truth is, they were caught just as off guard as you were.

They certainly haven’t been asleep at the wheel since, however. New AI assistants, copilots, enablers, tools, and gadgets galore have made their way into every aspect of their offerings or will be in short order. What do they all have in common?

They’re going to lock you into their models, lock you into their applications, and lock you into their platforms. Their innovation in AI only serves to drive usage, consumption, reliance, and lock-in to their existing business models.

But what if in a year there’s a new latest-greatest model you want to switch to (like there currently is every week)? What if you want to ensure cross-compatibility with another model or platform? What if you want the technology to work with the internal model that you finally got off the ground? How do you remain model agnostic in such a rapidly changing world? Organizations are trading quick solutions for a new form of lock-in: model lock-in.
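One common way to stay model agnostic is to put a thin internal interface between your applications and any vendor SDK, so switching providers means swapping one adapter rather than rewriting call sites. A minimal sketch, with purely illustrative names (`LLMClient`, `EchoClient`, `summarize` are our inventions, not any vendor’s API):

```python
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Vendor-neutral interface; application code depends only on this."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        ...


class EchoClient(LLMClient):
    """Stand-in provider used for local testing; a real deployment would
    add one adapter subclass per vendor or per self-hosted model."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        # Trivially echoes the prompt; exists only to exercise the interface.
        return prompt[:max_tokens]


def summarize(client: LLMClient, document: str) -> str:
    # The call site is identical no matter which provider backs `client`,
    # which is the whole point: no vendor import leaks into app code.
    return client.complete(f"Summarize: {document}")
```

The discipline, not the code, is what matters: if a vendor type ever appears outside the adapter layer, you are accruing lock-in again.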

Most are totally unaware of this fact.
What’s more, Big Tech has no incentive to revolutionize work if the revolution comes at the expense of its saturation in your org. But that’s where the true alpha lies—in reimagining business processes and freeing us from these technological encumbrances. This is the classic Innovator’s Dilemma.

While incremental improvements and AI copilots will proliferate, largely led by Big Tech, real gains will not be served by your existing tools. Prudent decision makers will partner with revolutionary solution providers or will have to build their solutions themselves.

On the other hand, the revolutionary solutions we are seeing spin up from newer upstarts make for fantastic demos but are frankly often ill-suited for modern enterprise use. The reason is that these teams are inexperienced in building scalable and secure solutions for today’s enterprise requirements and guidelines. We outline many of the security issues baked into such products here. Reach out if you would like our overview of security best practices when building with LLMs.
Build It
A viable option is to build the solution internally, but it can be fraught with peril without the correct expertise.

Which LLM do you use? Are you orchestrating multiple models? With what framework? Should you host models yourself? Are you building to avoid model and platform lock-in, or do you believe OpenAI Enterprise will really always be the best option going forward? Are you prepared to face AI regulation on top of data governance? Can you safely have the model make internal API calls in an auditable way? How will you think about security? How will you keep costs down?
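The auditability question in particular has a well-understood shape: never let the model call internal APIs directly; route every model-requested call through an allow-list plus an audit record. A sketch under assumed names (`REGISTRY`, `call_tool`, and the `get_headcount` tool are hypothetical, not part of any real system):

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm.audit")

# Allow-list of internal functions the model may invoke (hypothetical example).
REGISTRY = {
    "get_headcount": lambda department: {"department": department, "headcount": 42},
}


def call_tool(name: str, arguments: dict, requested_by: str) -> dict:
    """Execute a model-requested internal call only if allow-listed,
    and write a structured audit record before doing so."""
    if name not in REGISTRY:
        # Deny by default: unknown tools are never executed.
        raise PermissionError(f"Tool {name!r} is not allow-listed")
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": name,
        "args": arguments,
        "requested_by": requested_by,
    }
    audit_log.info(json.dumps(record))  # in production, ship this to your SIEM
    return REGISTRY[name](**arguments)
```

The design choice worth noting is deny-by-default: the model can only reach functions you explicitly registered, and every invocation leaves a reviewable trail.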

We work with enterprise partners to address these and the host of other landmines that riddle this landscape.

Most are just beginning to tackle these questions, but they are all of major consequence. Take, for example, security. All of the frameworks your team is likely evaluating contain critical vulnerabilities, as we discuss here. It’s not only the model that can jeopardize your security and compliance.
With great power comes great responsibility
LLMs are not a great savior. They’re a tool and a building block. From expectations to education to technical risk, working with them poses a number of challenges and security risks. If you are bringing them into your company, talk to us first.

Our team consists of data experts from Palantir, Google, and Benchling with years of experience building data and AI systems for the Intelligence Community, Department of Defense, healthcare, leading financial institutions, and the world’s largest enterprises. If you’re considering going down this path, we provide free consultations with no commitment. Schedule time to talk with us here or drop us a line below.