May 8, 2025
(Podcast) AI in the Insurance and Restoration Industry: My Guide for Claims Professionals and Attorneys
DISCLAIMER: The information provided in this article is for educational purposes only and does not constitute legal, claims, insurance, or cybersecurity advice. This content should not be considered professional consulting services. Always consult with qualified professionals regarding specific situations related to your business.
I had the great pleasure of being a guest on the Level Up Claims Podcast with attorney Galen Hair! We had a fascinating conversation about AI in the insurance and claims industry. I've created this abbreviated guide to getting started with AI as a claims, legal, or restoration industry professional, based on our chat.
Watch the podcast on YouTube, below:
Save the episode on your favorite platform, and don't forget to subscribe to Level Up Claims Podcast:
Apple Podcasts: Listen Now
Spotify: Listen Now
(Video) YouTube: Watch Now
Episode Chapters - Level Up Claims Podcast #125 AI and Digital Literacy ft Sarah Mary Parker
00:00 - Introduction to Level Up Claims Podcast
01:01 - Introducing Sarah Parker, AI Expert in the Claims Industry
01:58 - The Dangers of Anthropomorphizing AI Technology
03:05 - Sarah's Journey into Public Adjusting After a Friend's House Fire
05:32 - The Importance of Empathy and Service in Her Career Choice
07:20 - The "Non Nobis Solum" Philosophy - Not for Ourselves Alone
08:36 - Entrepreneurship: Identifying Problems and Creating Solutions
10:02 - How Kuva Media Started: Solving Her Own Marketing Problems
12:15 - The Merging of Public Adjusting, Media, and Technology
13:35 - Demystifying AI: What It Actually Is vs. Common Misconceptions
16:17 - Digital Literacy in the Insurance and Restoration Industries
18:28 - How Insurance Companies Are Using AI at Breakneck Speed
19:42 - Accessible AI Tools for Public Adjusters and Small Agencies
22:40 - Privacy Concerns: Data Protection When Using AI
25:13 - Ethics and Regulations for AI Implementation in Claims
27:53 - The Importance of Reading Privacy Policies
29:46 - Dark Side of Data Sharing: The National Suicide Hotline Example
32:37 - Closing Thoughts: Digital Literacy and Ethical AI Usage
33:30 - What "Leveling Up" Means to Sarah Parker
35:43 - Conclusion and Final Thanks
My Thoughts on Demystifying AI for Insurance Professionals
On this episode of Level Up Claims, I had a great conversation with host attorney Galen Hair of law firm InsuranceHQ about what AI really is, how it can be used ethically in claims processing, and the important considerations for insurance professionals looking to implement this technology.
We barely scratched the surface of AI use in business in this episode, but it will give you a practical starting point for exploring AI and getting educated.
This article expands on concepts and topics covered in the podcast episode, so you'll have more context if you check out the episode first.
A Bit About My Background
For those who don't know me, I entered the public adjusting field after witnessing a close friend's difficult experience with a home fire insurance claim. I was drawn to public adjusting as a profession that combines legal knowledge, insurance expertise, and construction understanding while allowing me to help others.
I founded my public adjusting firm Parker Public Adjusting in 2015, guided by the principle of "non nobis solum" ("not for ourselves alone", an excerpt from a work by Cicero), the belief that service to others is a basic civic and moral duty, especially within business.
Later, I founded Kuva Media to address marketing challenges, after struggling to find marketing companies to work with for my own public adjuster firm. None of them could properly communicate what public adjusters actually do, without resorting to antagonistic positioning against insurance companies, which is a weak and ineffective way to educate and sell what public adjusters provide.
I tested everything on my own company first, and when other people started asking, "Who did that for you?", Kuva Media was born!
Demystifying AI
In my conversation with Galen, I emphasized the importance of demystifying AI to increase digital literacy in our industry. AI is often misunderstood in two extreme ways:
The "Terminator/Skynet" view
AI as a dangerous, sentient entity that threatens humanity
The "best friend" view
AI as a magical, all-knowing helper that perfectly understands users
Both perspectives are erroneous and potentially dangerous because they anthropomorphize technology, which erodes something called our digital literacy: our understanding of how to properly use technology, software, and hardware.
As I mentioned in the podcast, "When you anthropomorphize technology, then you lose something called digital literacy […]"
In simple terms, I define AI as "essentially an algorithm gone wild"; not a sentient being, not truly generative (it can only rearrange and utilize data it was given in its training sets), and not something that can be controlled like a traditional database.
Unlike a database where you can see and manage entries, you cannot see what AI "knows," which presents unique challenges for data management, privacy, and cybersecurity.
AI Accessibility and Affordability
I have good news for you: AI tools are currently very accessible and affordable. You don't need to be a computer programmer or invest hundreds of thousands of dollars to train your own AI model.
For beginners, I recommend starting with either:
Claude (by Anthropic)
ChatGPT (by OpenAI) - (Beware: hallucination rates as high as 33% or even 79% have been reported, depending on the model. I'll cover hallucination rates later in this article)
However, I must caution you about what information you input into these systems. Since AI is an algorithm and not a database, you cannot delete data once it's been entered. This is particularly important for client data, which requires additional and complex safeguards to use compliantly.
If you're a beginner to AI, assume that it's not safe to use private, client, or sensitive data within most AI models, including ChatGPT and Claude, until you increase your digital literacy.
If you have questions about how to use AI safely with data, you can reach out to me.
Digital Literacy in the Insurance Industry
A definition I love for digital literacy is this one, from the American Library Association (ALA):
Like information literacy, digital literacy requires skills in locating and using information and in critical thinking. Beyond that, however, digital literacy involves knowing digital tools and using them in communicative, collaborative ways through social engagement. ALA’s Digital Literacy Task Force defines digital literacy as “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.”
A Practical Example of Digital Literacy
A simple and practical example of digital literacy at work is understanding and using different settings within software and websites, including knowing that most have privacy settings.
For example, most users of LinkedIn don't know that LinkedIn is using their data for training AI models, and that they can turn it off:

Why is the Insurance Restoration Industry Behind?
During the podcast, Galen and I discussed how the restoration and insurance industries (particularly on the policyholder side) have relatively low digital literacy rates. I suggested two main reasons for this:
Technology lag in cognitive work
Public adjusting and legal work require significant intellectual service that technology is only recently catching up to assist with (through sentiment analysis, opinion making, data analysis, etc.).
Tech industry focus
Software companies target larger industries first, leaving niche fields like public adjusting with less exposure to technological education through sales outreach.
Meanwhile, insurance carriers are rapidly implementing AI and digital products, sometimes using them to automatically generate denial letters in minutes without proper checks for accuracy.
Relatedly, Allstate has reported that they've been using AI for "all claims correspondence" (with their PR department later asking the publication to remove the article, for some reason).
Ethical Considerations and Risk Management for Public Adjusters, Attorneys, and other Professionals
I stressed in my conversation with Galen the importance of treating AI adoption as an ethics, risk management, and regulatory standards issue, particularly around data privacy:
Understand regulations
Start with the legal "floor" (this comes from the phrase, "laws are the floor of ethics, not the ceiling"): know your state's data breach reporting laws, privacy laws, and minimum statutory cybersecurity requirements.
Read privacy policies of software and websites
Understand how individual software companies and websites use your data. Every website or app you use should have a page, usually linked at the bottom, that addresses its Privacy Policy. While you're at it, check out the Terms of Use or Terms of Service as well!
Under the growing body of new and existing state privacy laws, companies must be more specific about what data they collect, how they use it, who they share it with, and how long they retain it. Here's an example from a privacy-focused software that my agency uses:
With many software companies also in the business of selling user data, a good phrase to remember is, "If it's free, you are the product." In other words, if you're offered free software, your data may be the monetized product. That's not always the case, but it's a good frame of mind to start from until you understand the Privacy Policy of each individual app better. And it's not limited to free products and services, either.
Reading and learning to understand Privacy Policies is an excellent first step in digital literacy and data management.
Trust, but verify
Unfortunately, privacy policies can be vague or worded unethically, with companies or organizations not following their own privacy policies at all.
Here is an example from an AI notetaking app, which stated in its privacy policy:
"We use AI models to process your notes, but we do not send any of your information for training purposes, and your notes remain absolutely private within your account.
Private Entries: Your notes, recording, and their transcriptions are for your eyes and ears only."
However, "We use AI models to process your notes" and "for training purposes" raised a red flag for me, so I reached out to the company for clarification:
It turns out that if I'd used this app, my data would, in fact, not be "private," since they use an external AI model and company to process all voicenotes, specifically the Whisper model by OpenAI (the maker of ChatGPT):
So, their privacy policy wasn't transparent enough for my preferences, as they did not disclose that they process all voicenotes for transcription through an underlying service.
For companies that don't follow their own privacy policies at all, during the podcast I shared the example of the 988 National Suicide Hotline, where federally-funded private administrators were found to sell and share user data with Facebook, law enforcement, and other parties (including a program where parties could apply to secretly listen in on calls) without proper consent.
Because the people administering the National Suicide Hotline were not licensed healthcare workers, HIPAA laws for privacy did not apply.
Even the nonprofit Crisis Text Line shared user data that was then used to create a for-profit AI model, likely made easier by the fact that nonprofits are often exempt from state privacy laws.
Be healthily skeptical of AI salespeople
They often don't fully understand the products they're selling, nor do many understand (or are even aware of) data and privacy management considerations within AI or software.
It's your responsibility to be informed about what you're purchasing, what your risk thresholds are, and what your state's and/or profession's data, cybersecurity, and privacy regulations are.
Understand that all AI models have hallucinations
All AI models make things up out of thin air once in a while.
It's a severe and pervasive issue that is largely kept from the public by lack of reporting, and by giving it a polite, public relations-friendly label. ChatGPT, Perplexity, and many other AI tools and products have been found to have unacceptable levels of hallucinations and inaccuracy.
My (Abbreviated) AI Implementation Guide for Claims Professionals
Based on my conversation with Galen, I've put together this brief implementation guide for my fellow claims professionals who are interested in exploring AI:
Step 1: Understand What AI Is (and Isn't)
AI is an algorithm that processes and rearranges existing data
It is not sentient, truly generative, or controllable like a database
Avoid anthropomorphizing AI technology
Step 2: Assess Your Digital Literacy
Understand basic technology concepts and terminology
Learn about the basic, different types of AI:
Large Language Models (LLMs) like ChatGPT and Claude
Image analysis and generation tools
Audio transcription and processing tools
Step 3: Evaluate Potential Use Cases
Claims documentation analysis
Policy interpretation assistance
Communication template drafting
Data analysis and pattern recognition
Pay extra attention to data processing and privacy here: not all AI is appropriate for processing client data
Step 4: Address Ethical and Privacy Concerns
Review your state's (or states') regulations around data privacy
Carefully read and understand software privacy policies
Implement safeguards for client data
Consider what information should not be input into AI systems
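To make the "safeguards for client data" point concrete, here is a minimal sketch of pre-screening text for obvious identifiers before it ever reaches an AI tool. The patterns below (email, US-style phone and SSN, and an assumed two-letters-plus-digits policy number format) are illustrative only; a real workflow would need a vetted PII-detection tool, handling for names and addresses, and human review. This is a starting-point sketch, not a compliance solution.

```python
import re

# Illustrative patterns only. The POLICY_NO format is an assumption for this
# example; names and addresses are NOT caught by simple regexes like these.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "POLICY_NO": re.compile(r"\b[A-Z]{2}\d{6,10}\b"),  # assumed format
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = ("Insured reachable at jdoe@example.com or 555-123-4567, "
        "SSN 123-45-6789, policy HO12345678, reports water damage.")
print(redact(note))
```

The point of the exercise is less the code than the habit: decide what must never leave your systems, and screen for it before any AI tool sees the text.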
Step 5: Start Small with Accessible Tools
Begin with user-friendly platforms like ChatGPT or Claude
Experiment with small, non-sensitive data tasks
Build protocols for ethical AI usage in your company
Train staff on proper AI implementation
Step 6: Continuously Reassess and Adapt
Monitor for "hallucinations" (AI generating false information)
Stay updated on regulatory changes
Evaluate the effectiveness of your AI implementation
Adjust your approach based on outcomes
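One way to make the "monitor for hallucinations" step concrete is a simple grounding check: when an AI tool quotes a policy or document, verify that the quoted text actually appears in the source. The sketch below is a minimal illustration (the policy text and quotes are hypothetical); it catches only verbatim fabrications, so human review is still essential.

```python
def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so minor formatting
    differences don't cause false mismatches."""
    return " ".join(text.lower().split())

def ungrounded_quotes(quotes: list[str], source_text: str) -> list[str]:
    """Return the quotes that do NOT appear verbatim in the source.
    Anything returned here needs human verification before use."""
    normalized_source = normalize(source_text)
    return [q for q in quotes if normalize(q) not in normalized_source]

# Hypothetical example: one real quote, one AI-invented quote.
policy_text = ("Coverage A applies to the dwelling. Water damage from "
               "sudden and accidental discharge is covered.")
ai_quotes = [
    "water damage from sudden and accidental discharge is covered",
    "mold remediation is covered up to $10,000",  # not in the source
]
print(ungrounded_quotes(ai_quotes, policy_text))
```

Checks like this won't catch paraphrased inaccuracies, but they're a cheap first filter before a human verifies the substance.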
Answering Galen's Signature Question: "What Does Leveling Up Mean to You?"
At the end of every episode, Galen asks his guests the same signature question: "What does leveling up mean to you?" When he posed this question to me, I shared that:
Internal growth
True leveling up comes from increased awareness and learning, not external validation. Sometimes you have little level-ups and big jumps, but gaining any awareness about a situation or yourself is leveling up.
Personal agency
The biggest level-up that someone can do for themselves is recognizing their own agency. As I told Galen, "Always be thinking about agency. If you're not making a decision about something, someone else is making that decision for you."
Bringing others along
I noted that Galen himself exemplifies this by how he shares knowledge through his podcast. True advancement isn't just about personal growth but helping others grow alongside you.
My Final Thoughts
AI is becoming an inevitable part of our industry, with carriers already implementing it at "breakneck speed." For public adjusters, attorneys, and contractors working with policyholders, understanding and ethically implementing AI tools is becoming increasingly necessary to remain competitive.
My key takeaway that I want to share with you is that AI is both accessible and affordable, but implementation must begin with digital literacy and a commitment to protecting customer data. By approaching AI as a risk management issue and focusing on ethical implementation, we can harness its benefits while avoiding potential pitfalls.
If you want to hear more about my thoughts on AI in the insurance industry, please watch the full episode of Level Up Claims and subscribe to Galen's fantastic podcast!
Sarah Mary Parker is the founder of Parker Public Adjusting (celebrating its 10th anniversary in 2025) and Kuva Media. This guide was created based on her appearance on the Level Up Claims podcast with host Galen Hair.