What You'll Really Have to Know About Generative AI

(Photo: A ChatGPT test conversation on a mobile phone)

What You Need to Know

The new AI systems transform prompts into answers.
One cause of bad AI answers is missing or unclear data.
Another is an overly broad or confusing prompt.

The rate of innovation brought on by artificial intelligence in the last 12 months is enough to make your head spin.

ChatGPT has passed professional and academic exams, including bar exams, medical licensing exams and college admissions tests, among many others.

Now more than ever, news organizations are reporting that AI is automating routine tasks and delivering significant efficiencies.

As a financial professional, you may wonder where AI leaves you and your career.

During my nearly 23-year career on the technology side of life insurance, I have seen many technological trends come and go. Each brought apprehension about how the new technology would disrupt the way we do business.

However, looking back on these years, I have not yet observed a technological trend that replaced a significant number of jobs, at least not industrywide, and not for the long term.

Generally, these trends tend to change job roles rather than replace them.

AI Vocabulary

To adapt to AI, you’ll need to understand AI vocabulary, whether you apply the technology yourself or manage AI practitioners directly.

AI: Technology that gives computers the ability to learn to perform human-like processes without being directly programmed for these tasks.
Machine learning (ML): A subset of AI that involves a machine using data to learn new tasks.
Generative AI: Machine learning technology that gives computers the ability to learn how to generate new data, such as images, videos, audio files or text compositions.
Large language model (LLM): A generative AI system that has learned how to create text compositions by studying large sources of human language, such as Wikipedia.
Pre-training: Having an AI learn from a large, general language source before exposing it to specialized data related to specific tasks.


Famous AIs

ChatGPT is a well-known generative AI system that you can “chat” with.

The last three letters in its name are important.

The G stands for “generative,” and the P stands for “pre-trained.”

The T stands for “transformer” — a neural network design that transforms one type of unstructured data into another.

Transformer technology is the advance now driving the generative AI revolution.

ChatGPT is an LLM that can transform your prompt — text that you enter — into another batch of text: a response.

Other generative AI systems may work with different inputs and outputs. Stable Diffusion, for example, is a popular image-generation model that produces images in response to textual prompts.

Describe an idea in words, and Stable Diffusion will make a picture based on those words.

Other transformers work in reverse, transforming an image into a textual caption that describes that image.
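
To make the idea concrete, here is a minimal Python sketch of text-to-image generation using the open-source diffusers library, which provides a pretrained Stable Diffusion pipeline. The model name, prompt and settings are illustrative assumptions, and the code assumes a machine with a suitable GPU.

import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion pipeline (the model ID is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# Describe an idea in words; the model returns a picture based on those words.
prompt = "a friendly insurance agent explaining a policy, watercolor style"
image = pipe(prompt).images[0]
image.save("agent.png")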

AI Literacy

With those basics out of the way, here are three concrete skills that insurance professionals like you need to succeed in this new world of generative AI.

1. Prompt Engineering

I’ve used the term “prompt” a few times to describe the text you give the generative AI algorithm.

Creating these prompts is called prompt engineering, and it is rapidly becoming a sought-after AI skill.

As an insurance professional, you may see electronic health records, or EHRs, from many sources and vendors.

Your task is to extract and standardize certain vitals from this data.

To do this, you might construct a prompt as follows:

Your objective is to extract the most recent (by date) body temperature, pulse rate, respiration rate and blood pressure from the health record described between the brackets. Convert all values to metric. If you cannot find a value, return null for that value. [health record data]


The response should be a list of the most recent values for these vital signs in metric units.

This prompt could be further refined; you could specify exactly how the individual values are delimited and identified.

Additionally, you could specify the exact unit for each.
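
For example, one possible refinement (the output format and units here are just an illustration) might read:

Your objective is to extract the most recent (by date) body temperature, pulse rate, respiration rate and blood pressure from the health record described between the brackets. Return the values on a single line in the order temperature|pulse|respiration|systolic/diastolic, with temperature in degrees Celsius, pulse and respiration in beats and breaths per minute, and blood pressure in millimeters of mercury. If you cannot find a value, return null for that value. [health record data]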

As you get better at prompt engineering, you can reduce the number of errors made by ChatGPT or other LLMs.

Using automation, you could now run this prompt over a large number of EHRs and output the results to a database.
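
As a rough sketch of what that automation could look like, the Python example below uses the openai client library to run the original prompt over a folder of EHR text files and store each response in a SQLite database. The folder path, model name and table layout are assumptions made for illustration.

import pathlib
import sqlite3
from openai import OpenAI

PROMPT = (
    "Your objective is to extract the most recent (by date) body temperature, "
    "pulse rate, respiration rate and blood pressure from the health record "
    "described between the brackets. Convert all values to metric. If you "
    "cannot find a value, return null for that value. [{record}]"
)

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

db = sqlite3.connect("vitals.db")
db.execute("CREATE TABLE IF NOT EXISTS vitals (file TEXT, response TEXT)")

for path in pathlib.Path("ehr_records").glob("*.txt"):  # assumed folder of EHR text files
    record = path.read_text()
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": PROMPT.format(record=record)}],
    )
    db.execute(
        "INSERT INTO vitals VALUES (?, ?)",
        (path.name, completion.choices[0].message.content),
    )

db.commit()
db.close()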

2. Validating Results and Flagging Hallucinations

Ideally, the EHR prompt that we just developed will always get the appropriate data and return it to you. However, results from LLMs are not always reliable.

LLMs can sometimes return incorrect results or fabricate a result.

When an LLM makes up a result, the LLM is said to be “hallucinating” — another important generative AI term.

Hallucination can be particularly common when data is either obscure or missing.

Consider what happens if the data our prompt is meant to extract is missing from the EHR.

Similarly, the record may not be clear enough for the LLM to find all of the values you seek.

In either case, unpredictable results or hallucinations can easily occur.

That is why it is important to specify how to handle missing data in your prompt; in the example above, I asked the model to return null for any missing value.
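
Beyond prompt design, you can add a simple validation layer that flags suspicious responses for human review. The Python sketch below assumes the delimited format requested by the refined prompt shown earlier, and the plausibility ranges are rough assumptions for illustration, not clinical guidance.

# Flag missing or implausible values in a response of the form
# temperature|pulse|respiration|systolic/diastolic (see the refined prompt above).
PLAUSIBLE_RANGES = {
    "temperature": (30.0, 45.0),   # degrees Celsius
    "pulse": (20.0, 250.0),        # beats per minute
    "respiration": (4.0, 60.0),    # breaths per minute
}

def flag_response(response: str) -> list[str]:
    """Return a list of problems; an empty list means the response looks plausible."""
    parts = [p.strip() for p in response.split("|")]
    if len(parts) != 4:
        return ["response is not in the expected four-part format"]

    problems = []
    for name, raw in zip(PLAUSIBLE_RANGES, parts[:3]):
        if raw.lower() == "null":
            problems.append(f"{name} is missing; route the record to a human reviewer")
            continue
        try:
            value = float(raw)
        except ValueError:
            problems.append(f"{name} is not a number: {raw!r}")
            continue
        low, high = PLAUSIBLE_RANGES[name]
        if not low <= value <= high:
            problems.append(f"{name} value of {value} is implausible; possible hallucination")

    if parts[3].lower() == "null":
        problems.append("blood pressure is missing; route the record to a human reviewer")
    return problems

# Example: a temperature reported in Fahrenheit instead of Celsius gets flagged.
print(flag_response("98.6|72|16|120/80"))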