
Your Employees are Already Using GenAI. How Will You Communicate the Security Risks?

By Eynan Lichterman | August 15, 2024 | 4 MIN READ

Did you know that 75% of people are already using Generative AI (GenAI) at work? GenAI tools are defined as any artificial intelligence that can generate content such as text, images, videos, code, and other data using generative models, often in response to prompts. Examples include OpenAI's ChatGPT, GitHub Copilot, Claude, DALL-E, Gemini, and Google Workspace's new functionality that connects Gemini to Google apps, to name just a few.

Like any new technology, GenAI comes with a side of risk, and recent data from Cisco uncovered that 27% of businesses have banned the use of GenAI entirely for security reasons. However, with such widespread adoption and such groundbreaking potential, closing the door to GenAI is likely to be a mistake. Instead, PwC offers this advice: "Demonstrating that you’re balancing the risks with the rewards of innovation will go a long way toward gaining trust in your company — and in getting a leg up on the competition."

To make responsible use of GenAI, and to support employees in freely using these tools to boost their productivity, you need to start by understanding the risks the industry is dealing with.


Understanding the Potential Risk of GenAI Tools

As excited as your employees are about the productivity benefits of using GenAI tools, you can bet the attackers are feeling the same way. As teams get to grips with how AI can free up hours in the day on tasks like content creation, code writing, and design, hackers are finding innovative ways to use GenAI as a new attack surface to steal sensitive information and disrupt business operations. 

To stay one step ahead, organizational policies and employee education should evolve to take these new threats into consideration. As a starting point, security teams should speak to employees about the following:

The Impact of GenAI on Phishing

Even without your employees independently using GenAI tools in the workplace, generative AI can still be used to target your organization. One example is the huge impact of GenAI on the efficacy of phishing scams.

Try this thought experiment: if you asked your employees to point out the tell-tale signs of a phishing email, what do you think they would describe? Not too long ago, markers of your average phishing scam were poor spelling and grammar, broken language, and unprofessional designs — making it easy for staff to spot a garden-variety phishing attack when it arrived in their inbox. 

With the advent of GenAI tools, hackers now have access to free online tools that allow them to spin up highly professional-looking content faster than ever before. Even videos and images of known associates can be faked using GenAI, which means employees need to be more on guard than ever. According to research published in the Harvard Business Review, "Artificial intelligence changes this playing field by drastically reducing the cost of spear phishing attacks while maintaining or even increasing their success rate." Organizations should expect "a vast increase in credible and hyper-personalized spear-phishing emails that are cheap for attackers to scale up en masse."

The warning from HBR is clear — “We are not yet well-equipped to handle this problem. Phishing is already costly, and it’s about to get much worse.”

This means that even if you’re one of the 27% of organizations that have banned the use of GenAI, the chances of a successful data breach or cyberattack against your organization have still increased. 

Changing Your Training Approach in the Era of GenAI

The threat of GenAI comes from both directions — from unaware employees using new technology without realizing its potential threats, and from hackers leveraging these tools intentionally to launch ever more sophisticated and believable attacks of their own. 

However, the methodology behind security awareness training, using phishing simulations to reduce risk, remains the same in principle. Organizations simply need to increase the frequency of their training, as well as the variety of the simulations they use, to meet the growing threat. At CybeReady, we recognize that employees don't always feel accountable for security within an organization, and that CISOs have too much to handle to be continually proactive. That's where we come in.

Our comprehensive SaaS awareness program continually trains 100% of your employees, with realistic simulations that reduce risk, engage users, and promote a positive culture of security awareness organization-wide. 

We also provide training materials that can be distributed to your employees, empowering them to use AI for innovation and productivity without adding risk.

Download your free Cybersecurity Awareness AI Learning Kit here.
