Generative AI Use Cases: Where to Get Started
Reflections from Oracle’s Data & AI Forum

By Charlie Darney, Director of Engineering
March 22nd, 2023


After attending Oracle’s Data & AI Forum in Philadelphia on March 14th, we are feeling upbeat and excited about the ways Archetype can leverage generative AI to help enterprises increase efficiency and deliver value for their customers. Artificial intelligence has been around for decades, with some of its earliest work, including pioneering chess-playing programs, done here in Pennsylvania at Carnegie Mellon University. The latest form of AI to enter the business lexicon is generative AI – with the likes of ChatGPT and Gemini. These systems are poised to drive an incredible amount of business value, but they will require clean data and clearly defined use cases to function successfully.

This recent wave of interest in Gen AI is not a fad. Gen AI is ready for use by anyone for almost anything that requires text generation. The name Gen AI derives from the technology’s ability to generate content, a task historically left to humans. Breakthroughs by leaders like OpenAI and Google have given us the ability to write anything from poetry to source code using natural language commands. One application of LLMs and Gen AI that is gaining traction in the market is retrieval-augmented generation (RAG). RAG connects LLMs to your enterprise’s private data through APIs so that you can access things like your company’s sales materials, policies and procedures, or client information through simple questions or requests. Imagine having a personal assistant that can answer questions about your specific organization’s policies, sales collateral, or even plan your next meeting. RAG harnesses the power of LLMs to make you and your employees more efficient, which is why this application of Gen AI is a top priority for enterprises.
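The RAG pattern described above can be sketched in a few lines. This is a minimal, illustrative example only: it uses an in-memory document list and naive keyword-overlap retrieval, whereas a real deployment would use a vector database for retrieval and send the assembled prompt to an LLM API. All names here (`retrieve`, `build_prompt`, the sample policies) are our own invented stand-ins.

```python
import re

def _tokens(text):
    """Lowercase word set for naive overlap scoring."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, top_k=1):
    """Return the documents that share the most words with the query."""
    q = _tokens(query)
    ranked = sorted(documents, key=lambda d: len(q & _tokens(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query, documents):
    """Splice the retrieved private context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

policies = [
    "Travel policy: flights over 6 hours may be booked in premium economy.",
    "Expense policy: meals are reimbursed up to $75 per day.",
]
prompt = build_prompt("What is the meal reimbursement limit per day?", policies)
# prompt now carries the relevant expense policy as grounding context
```

The key idea is that the model answers from *your* documents rather than from its training data, which is what lets a general-purpose LLM field questions about your specific organization.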

Common Roadblocks for Implementing Gen AI

There are still some roadblocks to making Gen AI widely available and used. Common roadblocks include –

  • Trust & governance – LLMs are inherently probabilistic and sometimes produce responses that are not reliable enough to act on without review.
  • Security & privacy concerns – Public cloud services like ChatGPT are risky because you don’t really know where your data is going.
  • Regulatory compliance – Industries like healthcare are governed by regulatory statutes that block many viable use cases of generative AI.
  • Use cases – The sheer number of potential use cases makes it difficult to prioritize which to address first.
  • Access to quality talent – Building artificial intelligence is hard, and finding qualified talent is even harder. Skills like data engineering remain scarce at many organizations, and that talent is often a prerequisite for moving forward.
  • Access to infrastructure to run AI – Building your own AI requires an incredible amount of processing power and memory.
  • Cost – Considering all the points above, generative AI does not come cheap.

With all that said, most businesses don’t need to build their own LLMs. The open-source communities for generative AI are well-positioned to deliver incredible value for many enterprises. Oracle, for example, is using open models like Aya (Cohere) and LLaMA (Meta) to accelerate its AI offerings and avoid the delays and costs associated with custom-built LLMs and generative AI.

How to Get Started?

As mentioned above, the sheer number of potential areas of focus makes use-case selection itself a limiting factor. To start, organize your use cases into patterns to identify commonalities where you can deliver value, and prioritize the ones that impact your business’s bottom line. An example framework that you can use is:

  • Measure the ability to demonstrate value
  • Assess the level of risk
  • Determine the feasibility based on data you have
  • Determine the feasibility based on the human capital required to execute

Also, make sure you are setting firm boundaries to maintain security and governance – not so strict that compliance becomes burdensome, but enough to provide guardrails for your implementation. A good starting point is identifying use cases that can run on public data, and always keep a human in the loop to perform a quality check at the end.

Here are some use cases for generative AI that we are particularly excited about –

  • Employee productivity – Knowledge sharing and knowledge management so that your employees spend less time searching for and summarizing information.
  • Code generation – LLMs have the potential to make a huge impact on software development and product engineering through AI assistants that can help troubleshoot and generate code, streamline testing effort, and even deliver end-to-end solutions based on a few sentences.
  • Content generation – Marketers report up to a 70% reduction in time to a first draft of marketing materials when using LLMs. These models can be tuned to express specific emotions and brand standards, and can even generate accompanying media like images, sound, or video.

The most exciting part about AI is that this is only the beginning. The recent wave of interest is opening businesses’ budgets for deep exploration and research into AI applications. We’re confident that in the not-so-distant future we’ll be talking about how we can’t imagine a world without AI.