Graham Company
AI is Everywhere: Understand the Risks

October 14, 2025


On the surface, what could be more exciting than Artificial Intelligence (AI)? AI is being widely adopted, both in the workplace and in people’s personal lives. A recent study found that nearly 80% of Generation Z have used an AI tool, with 59% saying they use AI on a monthly basis. (The AI Gap: Gen Z Balances Curiosity and Care in a Changing Tech Landscape, 2025) However, as with any new, rapidly expanding technology, there are associated risks that may cause you to think twice about both the information you share and the information you get back.

We’re here to help you understand those risks and stay safe when using AI tools.

Human Bias in AI

How do these AI models learn? The simple answer is that they’re trained using machine learning algorithms, which means they analyze large amounts of data to recognize patterns and make predictions. This process typically involves a system called a “neural network,” which is designed to mimic how the human brain works by connecting many simple units together to process information and learn from examples. The problem? Humans select the data used to train AI models, so human biases can be baked into that data and carried through to the AI’s outputs. The result can be consequences such as discrimination during hiring processes and the spread of misinformation.
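For readers who want a concrete picture, here is a toy sketch of how bias baked into training data reappears in a model’s predictions. Everything below (the dataset, the groups, the hiring rates) is invented for illustration; no real model works this simply.

```python
# Toy illustration (not a real hiring model): a "model" trained on
# historically biased records reproduces that bias in its predictions.
from collections import defaultdict

# Invented historical records: (group, qualified, hired).
# Hiring in this fictional history was skewed against group "B".
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Training": learn the historical hire rate for qualified candidates per group.
rates = defaultdict(lambda: [0, 0])  # group -> [hired count, total count]
for group, qualified, hired in history:
    if qualified:
        rates[group][0] += int(hired)
        rates[group][1] += 1

def predict_hire(group):
    """Predict 'hire' when the learned rate for the group exceeds 50%."""
    hired, total = rates[group]
    return hired / total > 0.5

# Two equally qualified candidates get different predictions:
# the model has absorbed the bias present in its training data.
print(predict_hire("A"))  # True
print(predict_hire("B"))  # False
```

The point of the sketch is that nothing in the code is "prejudiced"; the skew comes entirely from the data the humans chose to train it on.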

AI Hallucinations and Misinformation

Speaking of misinformation, it’s important to be vigilant when using an AI tool. With people increasingly using chatbots and large language models (LLMs) as search engines, fact-checking is essential. LLMs have a tendency to “hallucinate,” not unlike a colleague misremembering a piece of information or getting something wrong. Just as you’d be concerned about a colleague providing inaccurate information, you should be skeptical when an AI tool displays something that appears false. So why does this happen? When an AI model is asked a question outside the scope of the data it was trained on, it answers based on probability rather than factual knowledge.
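A minimal sketch, using an invented toy corpus, of what “answering by probability” means in practice. Real LLMs are vastly more sophisticated, but the underlying point holds: the model emits the most likely continuation, whether or not it knows the answer.

```python
# Toy bigram "language model": it always emits the most probable next
# word from its training data, even for prompts it has never seen.
from collections import defaultdict, Counter

# Invented training corpus.
corpus = "the capital of france is paris . the capital of spain is madrid .".split()

# "Train": count which word follows each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    counts = following.get(word)
    if not counts:
        # Never seen this word: fall back to the globally most common word.
        # An answer chosen by probability, not by knowledge.
        return Counter(corpus).most_common(1)[0][0]
    return counts.most_common(1)[0][0]

print(most_probable_next("is"))    # -> "paris" (seen in training)
print(most_probable_next("peru"))  # -> "the" (just the commonest word)
```

Asked about a country that never appears in its data, the toy model still answers confidently; that confident-but-ungrounded answer is, in miniature, a hallucination.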

Aside from these hallucinations, bad actors and cyber criminals can “lead” an AI into providing misinformation, often via “jailbreaking”: a user attempts to remove the tool’s guardrails by framing a hypothetical scenario or roleplay. Responses that would ordinarily be deemed unethical become part of the role the tool is playing and are no longer restricted. A recent study found that these jailbreaking attempts carry a 20% success rate. (Tom Krantz, n.d.)

Security Risks

Cyber criminals are becoming more adept at using AI tools for nefarious purposes. They can clone voices in real time during calls, craft phishing emails designed to socially engineer people into opening links that infect systems with malware, and create deepfakes. “Deepfake” covers a range of digital techniques, but the term is most commonly used to describe technology that swaps faces in videos or alters audio recordings. The result? It can make it look and sound like someone said or did something they never actually did. These tactics are highly effective at sneaking into an organization’s network. Once cyber criminals carve out a backdoor, they can wreak havoc: disrupting critical systems, draining financial resources, and causing serious damage to a company’s reputation.

Aside from bad actors, there is also the potential for unintended harm by those using an AI tool for work. While larger companies may have protections in place to wall off non-proprietary AI tools, smaller companies are often unable to take these kinds of security measures due to budget constraints or lack of bandwidth. Information an employee enters into one of these tools, potentially containing Personally Identifiable Information, personal health information, or confidential corporate information, can become part of the data ecosystem teaching the AI model.
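One basic precaution is to scrub obvious identifiers from text before pasting it into an external AI tool. The sketch below is illustrative only: the patterns are simplistic, invented for this example, and will not catch every form of sensitive information. It is not a substitute for a real data-loss-prevention policy.

```python
# Illustrative sketch: replace a few easily recognized identifier
# formats with placeholders before text leaves the organization.
# These regexes are simplistic and intentionally incomplete.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text):
    """Replace each match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Contact Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(scrub(note))
# -> "Contact Jane at [EMAIL] or [PHONE]; SSN [SSN]."
```

Even a crude filter like this makes the habit concrete: review what is in the text, strip what identifies a person or the company, and only then share it.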

Stay Safe

With AI seemingly everywhere, from strange videos on social media to search engine algorithms, you have likely interacted with an AI tool without even realizing it. Given that ubiquity, there are several ways you can protect yourself and your organization. Carefully consider the information you share with these models, and avoid anything that might be considered sensitive. Be aware of potential hallucinations, and check multiple sources to corroborate information. AI is likely here to stay, rapidly expanding beyond the professional arena and into our day-to-day lives. It is incumbent on us to leverage its strengths while being aware of, and safeguarding against, the potential risks.

Evan Lobaugh,

Cyber Strategic Marketing Specialist

[email protected]

Tags: Artificial Intelligence, Cyber, Cyber Risk