
Don’t Give Sensitive Info to AI


Artificial intelligence tools are incredibly common, very helpful—and potentially risky.

Many people are turning to chatbots powered by large language models (LLMs) for everything from writing help to legal advice. In a new study from Anagram, almost half of employees admit they’re using banned AI tools at work.

But beware: what you type in could be used to “train” these models and may not stay private.

The Private Becomes Public

Chatbots like ChatGPT, Claude, and Gemini are powered by LLMs trained on massive amounts of data, including text from books, websites, and user inputs. While some companies claim not to use personal conversations to improve their models, others retain and analyze them unless you opt out or use a privacy-safe version.

Even if companies anonymize your data, your input might still include sensitive information that, when combined with other data, can identify you. This is especially true if you share full names, addresses, medical issues, financial information, or proprietary work data.

LLMs don’t forget the way people do. Your input could live on in server logs, future model updates, or in outputs shown to other users. Experts advise that you treat AI chats like a public forum—because they might be one, now or in the future.

Protect Yourself

Here are tips for avoiding LLM privacy problems:

  • Never input Social Security numbers, banking info, or passwords.
  • Avoid sharing full names, addresses, or anything that identifies you or others.
  • Don’t disclose private health or legal information.
  • Be cautious when using AI at work—don’t paste in confidential business data (a simple redaction sketch follows this list).
  • Check the platform’s privacy policy and opt out of data sharing or training if possible.
  • Use “incognito” or business versions of AI tools when available.
  • Treat AI as a helpful assistant, not a vault.
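If you sometimes send text to an AI tool from a script rather than a chat window, a simple pre-send scrub for obvious identifiers is one way to put the tips above into practice. The sketch below is a minimal, illustrative example built on Python's standard re module; the redact() helper and the patterns it covers (Social Security numbers, email addresses, phone numbers) are assumptions for the sake of the example, not a complete PII filter.

import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text goes anywhere."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or 555-867-5309. SSN: 123-45-6789."
    print(redact(sample))
    # Reach me at [EMAIL REDACTED] or [PHONE REDACTED]. SSN: [SSN REDACTED].

Even with a scrub like this, treat anything you send as potentially retained; redaction reduces risk, it doesn't remove it.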
