Security awareness

AI best practices: How to securely use tools like ChatGPT

May 10, 2023 by Keatron Evans

Artificial intelligence (AI) tools, like ChatGPT, can be used to create anything from song lyrics to new recipes — and now, people are using this technology to help with their jobs.

AI is useful but can also be a huge cybersecurity and privacy risk. So it’s important to understand what artificial intelligence is and the best practices you should employ to use it safely:


What is AI and why is it a useful tool?

AI is a type of computer technology that makes machines “smart.” It helps them perform tasks with human-like intelligence.

Take ChatGPT, for example. Just type in what you want it to write — a song about your dog, for instance — and it generates one in seconds.

AI is a valuable business tool. It can analyze data, generate reports and even write code. But before you use any app, you need to understand the inherent security risks associated with AI.

See Infosec IQ in action

From gamified security awareness to award-winning training, phishing simulations, culture assessments and more, we want to show you what makes Infosec IQ an industry leader.

Be selective in choosing your AI chatbot

Be careful which AI application you use. AI is hugely popular right now, and hackers have taken notice. Threat actors have even created fake AI apps designed to trick you into downloading them. If you do, they will seize the opportunity to install malware built to steal your data.

→ Only use AI tools that are approved and verified by your company.

Never input personally identifiable information (PII) into AI chatbots

Don’t use any protected data or personal information when using or experimenting with AI. For instance, say you have a confidential sales report and want to generate a summary using AI. Once you upload the report, the data you entered is stored on ChatGPT’s servers, and it may be used to answer queries from other people, possibly exposing your company’s confidential information.


Additionally, hackers may find a way to steal that data directly from the app itself. In fact, OpenAI recently confirmed that ChatGPT suffered a data breach: a vulnerability in ChatGPT’s code exposed the chat histories of other active users.

Regardless, uploading sensitive data or information could violate privacy laws, exposing your company to large fines and potentially costing you your job.

→ Therefore, you should never upload protected data or PII when using an AI app.

Be cautious when using AI photo editors

You upload a photo of yourself so you can turn it into a cartoon. Fun, right? But when you upload that photo, you may be inadvertently handing over valuable information about yourself to the AI software. Your image, location data and other metadata could be at risk. Within the cybersecurity community, there are even concerns that companies are using photos uploaded to AI photo editors to train facial recognition software.

→ Always be cautious about what you upload into an AI app, even when it’s just an image.
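If you do share photos with an AI tool, it helps to know what they carry. Below is a minimal sketch, assuming Python with the Pillow imaging library installed and using hypothetical filenames, that prints a photo’s EXIF metadata (the GPSInfo tag, for example, indicates embedded location data) and then saves a copy with that metadata stripped out.

    # Minimal sketch: requires the Pillow library (pip install Pillow).
    # Filenames below are examples only.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def strip_exif(src_path: str, dst_path: str) -> None:
        with Image.open(src_path) as img:
            # Show the top-level EXIF tags the file is carrying
            # (camera model, timestamps, a GPSInfo entry, etc.).
            for tag_id, value in img.getexif().items():
                print(f"{TAGS.get(tag_id, tag_id)}: {value}")

            # Re-save only the pixel data so the EXIF block is not copied over.
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst_path)

    if __name__ == "__main__":
        strip_exif("vacation_photo.jpg", "vacation_photo_clean.jpg")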

Do not rely on AI chatbots to always provide factually accurate information

An AI tool is only as good as the data it uses. If its data is old or incomplete, the content it generates could be biased, inaccurate or just plain wrong.

The same goes for computer code. Programmers have started to use AI tools to write code for them. It does save time, but the generated code may well contain errors that make it unstable or insecure.

→ Whatever data you get from an AI tool, always verify it carefully before using it.
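As a hypothetical illustration of the kind of flaw an AI assistant can introduce, the sketch below (Python, using the built-in sqlite3 module; the table and column names are made up) shows a query built with string formatting, a classic SQL injection risk, alongside the parameterized version that a careful review would insist on.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Risky pattern: the username is pasted straight into the SQL string,
        # so input like "x' OR '1'='1" changes the meaning of the query.
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Safer pattern: a parameterized query keeps data separate from the SQL.
        return conn.execute(
            "SELECT id, email FROM users WHERE username = ?", (username,)
        ).fetchall()

The point is not the specific bug: whether AI-written code handles input safely is exactly the kind of detail a human reviewer still has to check.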

AI is helping hackers generate a better mousetrap

AI can make your job and your coworkers’ jobs easier, but keep in mind that it is making hackers’ jobs easier, too.

In the past, hackers would send the same poorly worded phishing email to thousands of people. Trained spam filters could then identify and catch those emails before they put you or your organization at risk.

But now, hackers are using AI to generate millions of customized, well-written emails that are much harder for both spam filters and humans to detect. That means more sophisticated phishing emails will be arriving in your and your coworkers’ inboxes, putting you and your organization at increased risk.

How to stay cyber secure against advanced AI phishing attacks

The best thing to do to protect yourself, your coworkers and your organization is to stay alert. Don’t click any links or download any attachments from email addresses you don’t recognize. If you aren’t sure, verify the request with the sender at a confirmed email address or phone number before doing anything.

Get six free posters

Reinforce cybersecurity best practices with six eye-catching posters found in our free poster kit from our award-winning series, Work Bytes.

Knowing how best to use AI will keep you and your organization cyber-safe

AI has incredible potential, and its popularity will continue to grow.

But just because it’s popular doesn’t mean it’s always safe. Use this quick checklist whenever doubts arise:

  • Always check with your organization before downloading any app or using any AI tool.
  • Ask your manager about any AI security policies your organization may have in place.
  • Always consider the potential risk before entering any information or data into an AI application.

As AI evolves, how we use it will change. Stay up to date with your organization’s latest guidelines to ensure that you’re staying safe with this technology. Keep yourself and your organization cyber secure when using ChatGPT with these free training resources from Infosec.

Demo Infosec IQ Now

Keatron Evans

Keatron Evans is at the forefront of AI-driven cybersecurity innovation. As VP of Portfolio Product and AI Strategy at Infosec, he leads the development of cutting-edge solutions that are redefining industry standards. With over 20 years of experience, Keatron brings a unique blend of expertise:
  • AI pioneer: AWS-certified Generative AI Subject Matter Expert
  • Product visionary: Drives Infosec's AI-integrated cybersecurity product strategy
  • Cybersecurity expert: Author of "Chained Exploits: Advanced Hacking Attacks from Start to Finish"
  • Intelligence sector innovator: Founding member of an AI company that developed offensive cybersecurity tools for U.S. intelligence organizations
Keatron is a sought-after speaker at major industry events like the RSA Conference and a trusted expert for media outlets including CNN and Fox News. His forward-thinking approach focuses on harnessing AI to create adaptive cybersecurity solutions, positioning him as a key influencer in the private and public sectors. Beyond his professional pursuits, Keatron is an avid martial artist and musician, bringing a multifaceted perspective to his innovative work in technology and leadership.