The key risks
1) Sharing sensitive information
Anything you type into an AI tool may be stored, or used to improve the service, depending on the provider and your settings. Avoid entering information that could harm you if it leaked: passwords, bank details, copies of ID, full addresses, private medical or legal information, or anything you wouldn’t want shared.
2) Being persuaded to take unsafe actions
LLMs can sound confident even when they are wrong, and they sometimes generate unsafe instructions (e.g. recommending risky downloads or steps that weaken your security). Treat AI responses like advice from the internet: useful as a starting point, but something you should check before acting on.
3) Prompt injection
Some prompts are designed to override safety rules (“ignore previous instructions…”) or to hide malicious instructions in content the AI reads. This matters most when AI is connected to files or tools, but injected instructions can also trick people into sharing more information than they should.
4) AI scams and impersonation
AI can make scam messages more convincing (better spelling, more realistic tone, personalised details). If something feels urgent or pressuring, slow down and verify through a trusted channel (e.g. phone the organisation using a known number).
5) AI features in devices (including toys)
AI-enabled devices (smart assistants and some toys) can carry familiar smart device risks such as weak/default passwords, exposed admin pages, insecure apps/portals, and poor update mechanisms. These weaknesses can lead to data exposure or unauthorised access.
Practical steps: safer ways to use AI
1. Keep sensitive information out of prompts
Use this rule of thumb: If you wouldn’t post it publicly, don’t paste it into an AI tool.
Avoid:
- Passwords, one-time codes, recovery questions/answers
- Bank details, card details, account numbers
- Copies/photos of passports, driving licences, IDs
- Full names + date of birth + address combinations (high identity-theft risk)
Safer alternative: Replace personal details with placeholders (e.g. “NAME”, “ACCOUNT NUMBER”) if you just need help drafting a message.
2. Don’t blindly follow AI instructions
Before you click, download, pay, or run commands:
- Verify key facts with trusted sources (official websites, your bank, or known contacts).
- Be cautious with suggested links, software downloads, browser extensions, or “quick fixes”.
- If AI generates code or system commands, treat them as untrusted until a knowledgeable person has reviewed them.
3. Use AI safely for writing and productivity
AI is usually safer when you use it for:
- Drafting general text (emails, CVs, reports) without personal data
- Summarising public information (articles you can share openly)
- Brainstorming ideas and checklists (then you refine them)
It is riskier when you use it for:
- Personal legal/medical decisions without professional advice
- Anything involving credentials, finances, or identity documents
4. Secure your accounts and devices (because AI won’t protect you)
Good cyber hygiene still matters:
- Use strong passwords and enable MFA where possible
- Keep your device and apps up to date (updates fix known security issues)
- Avoid unknown app stores or suspicious downloads
If you want a simple baseline, our 5 Steps to Cyber Security and 10 Online Safety Tips are good starting points.
5. AI devices and toys: quick safety checks
If you buy or use AI-enabled devices (smart assistants, connected toys):
- Change default passwords and turn off features you don’t need
- Keep firmware/software updated
- Don’t expose admin pages to the internet (avoid “remote access” unless you understand the risks)
- Consider whether the device collects voice recordings/transcripts and where they are stored
6. If something feels wrong
If you think you shared something sensitive with an AI tool or you’ve been targeted by an AI-powered scam:
- Stop and don’t send more information
- Change passwords and ensure MFA is enabled on important accounts
- Contact your bank if money/financial details were involved
- Report suspicious emails and cyber concerns to the Cyber Security Centre