On the surface, what could be more exciting than Artificial Intelligence (AI)? AI is being widely adopted, both in the workplace and in people’s personal lives. A recent study found that nearly 80% of Generation Z have used an AI tool, with 59% saying they use AI on a monthly basis. (The AI Gap: Gen Z Balances Curiosity and Care in a Changing Tech Landscape, 2025) However, as with any new, rapidly expanding technology, there are associated risks that may cause you to think twice about both the information you share and the information you get back.
We're here to help you understand those risks and stay safe when using AI tools.
Human Bias in AI
How do these AI models learn? The simple answer is that they’re trained using machine learning algorithms, which means they analyze large amounts of data to recognize patterns and make predictions. This process involves a system called a "neural network," which is designed to mimic how the human brain works by connecting many simple units together to process information and learn from examples. The problem? Humans select the data used to train AI models, so human biases can be baked into that data and, in turn, into the AI’s outputs. The consequences range from discrimination during hiring processes to the spread of misinformation.
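To make the idea concrete, here is a deliberately simplified sketch (with entirely hypothetical data, not any real system) of how a "model" trained by counting outcomes in skewed historical records ends up reproducing the skew in its predictions:

```python
# Toy illustration with hypothetical, deliberately biased data:
# past hiring decisions where applicants from one school were
# almost always hired and applicants from another mostly rejected.
from collections import Counter

training_data = [
    ("school_A", "hired"), ("school_A", "hired"), ("school_A", "hired"),
    ("school_B", "rejected"), ("school_B", "rejected"), ("school_B", "hired"),
]

def train(records):
    """'Learn' the probability of being hired for each feature by
    simple frequency counting over the training records."""
    counts = {}
    for feature, outcome in records:
        counts.setdefault(feature, Counter())[outcome] += 1
    return {f: c["hired"] / sum(c.values()) for f, c in counts.items()}

model = train(training_data)
# The model simply reproduces the skew in its training data:
# school_A applicants score 1.0, school_B applicants score about 0.33.
```

Real models are vastly more complex, but the underlying dynamic is the same: a model can only reflect the data it was given, biases included.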
AI Hallucinations and Misinformation
Speaking of misinformation, it’s important to be vigilant when using an AI tool. With people increasingly using chatbots and large language models (LLMs) as search engines, fact checking is essential. LLMs have a tendency to “hallucinate,” not unlike a colleague misremembering a piece of information or getting something wrong. Just as you’d be concerned about a colleague providing inaccurate information, you should be skeptical when an AI tool displays something that appears false. So why does this happen? When an AI model is asked a question outside the scope of the data it was trained on, it answers based on probability rather than factual knowledge.
Aside from these hallucinations, bad actors and cyber criminals can “lead” an AI into providing misinformation, often via “jailbreaking.” Attackers attempt to bypass a model’s guardrails by framing requests as hypothetical scenarios or roleplay. Responses that would ordinarily be refused as unethical become part of the role the tool is performing and are no longer restricted. A recent study found that these jailbreaking attempts carry a 20% success rate. (Tom Krantz, n.d.)
Security Risks
Cyber criminals are becoming more adept at using AI tools for nefarious purposes. They can clone voices in real time during calls, craft phishing emails designed to socially engineer people into opening links that infect systems with malware, and create deepfakes. Deepfake is a term that covers a range of digital techniques, but it's most commonly used to describe technology that can swap faces in videos or alter audio recordings. The result? It can make it look and sound like someone is saying or doing something they never actually did. These tactics are highly effective ways to gain a foothold in an organization’s network. Once cyber criminals carve out a backdoor, they can wreak havoc: disrupting critical systems, draining financial resources, and causing serious damage to a company’s reputation.
Aside from bad actors, there is also the potential for unintended harm by those attempting to use an AI tool for work. While larger companies may have protections in place to wall off non-proprietary AI tools, smaller companies are often unable to take these kinds of security measures due to budget constraints or lack of bandwidth. As a result, employees may share work data with public AI tools. That information, potentially containing Personally Identifiable Information, Personally Identifiable Health Information, or Confidential Corporate Information, is now part of the data ecosystem teaching the AI model.
Stay Safe
With AI seemingly everywhere, from strange videos on social media to search engine algorithms, you have likely interacted with an AI tool without even realizing it. With such a ubiquitous presence, there are several ways you can protect yourself and your organization. Carefully consider the information you share with these models, avoiding anything that may be considered sensitive. Be aware of potential hallucinations and remember to check multiple sources to corroborate information. AI is here to stay, rapidly expanding beyond the professional arena and into our day-to-day lives. It is incumbent on us to leverage its strengths while being aware of, and safeguarding against, potential risks.