Irresponsible Sharing of Sensitive Data with AI
By Richard Golian, 1 February 2025
In recent years, artificial intelligence (AI) has become increasingly integrated into our daily lives, and its influence continues to grow. Chatbots and AI assistants help us complete tasks, automate processes, and improve efficiency. Yet it is astonishing how thoughtlessly corporate and personal data are shared with AI tools, with little thought given to the risks. Many people fail to realize the extent of the exposure and its potential consequences.
AI and Irresponsible Leaks of Corporate Data
I understand the temptation: upload a spreadsheet into ChatGPT or Gemini and let the AI assist with the analysis. The latest trend is the Chinese chatbot DeepSeek, which is rapidly gaining popularity. However, many employees do not stop to consider that they are copying entire customer databases, internal reports, business strategies, product sales data, and other sensitive marketing information into these tools, with no idea where this data ultimately ends up!
Why Is This a Huge Problem?
- Loss of Control Over Data – Do we really think AI simply forgets what we provide? Depending on the provider's terms, prompts can be retained, reviewed by humans, or used to train future models, and even though developers claim they do not store our data, the truth is often more complicated.
- Geopolitical Risks – DeepSeek is a Chinese AI, and Chinese companies operate under laws that oblige them to cooperate with state authorities. If you think your data cannot end up in the wrong hands, it's time to wake up.
- Violation of GDPR and Other Regulations – Many do not realize that uploading personal data to an external AI tool without a legal basis can itself violate GDPR, exposing their company to fines of up to 4% of global annual turnover. A single thoughtless action can create serious problems.
- Competitive Threat – If we believe our competitors are not seeking ways to access valuable data, we are mistaken.
Employees must recognize that every interaction with AI can have consequences. It is crucial that they:
- Think Before Sharing Data – Before uploading any data into an AI tool, they should evaluate whether it truly needs to be processed this way, and strip or pseudonymize anything identifying first (see the first sketch after this list).
- Consult Company Policies – Companies should have clearly defined rules on what data employees can share with AI. If such policies do not exist, it is in employees' own interest to push for their creation. This can prevent situations where they inadvertently create a problem that jeopardizes not only the company but also their own job security.
- Use Internal AI Solutions – Whenever possible, they should prioritize AI models managed and controlled by the company instead of public chatbots (a minimal example follows this list).
- Improve Their Digital Literacy – The more employees understand how AI works, the better they can protect sensitive data.
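To make the first point concrete, here is a minimal sketch of what "thinking before sharing" can look like in practice: pseudonymizing obvious identifiers in a CSV export before it goes anywhere near an external tool. The file names and regex patterns are illustrative assumptions, not a complete anonymization solution; a real project should rely on a vetted data loss prevention or anonymization tool.

```python
import csv
import hashlib
import re

# Hypothetical file names, for illustration only.
SOURCE = "customers.csv"
REDACTED = "customers_redacted.csv"

# Naive patterns for two common identifiers; real data has many more.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def pseudonymize(cell: str) -> str:
    """Replace emails and phone numbers with stable, non-reversible tokens."""
    def token(match: re.Match) -> str:
        digest = hashlib.sha256(match.group().encode()).hexdigest()[:8]
        return f"<REDACTED:{digest}>"
    return PHONE_RE.sub(token, EMAIL_RE.sub(token, cell))

with open(SOURCE, newline="", encoding="utf-8") as src, \
     open(REDACTED, "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        writer.writerow([pseudonymize(cell) for cell in row])
```

Because the tokens are derived from a hash, the same customer remains recognizable across rows without revealing who that customer actually is.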
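And for the third point, a sketch of routing a prompt to a self-hosted model rather than a public chatbot. It assumes an Ollama server running on the local machine; the endpoint, port, and model name are assumptions that depend entirely on your own setup.

```python
import json
import urllib.request

# Assumption: an Ollama server is running locally; the endpoint, port,
# and model name below depend entirely on your own setup.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a self-hosted model; nothing leaves the machine."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the attached Q3 sales figures."))
```

The specific tool is not the point: what matters is that the prompt, and anything pasted into it, stays on infrastructure the company controls.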
AI and Irresponsible Sharing of Personal Data
Sharing data with AI is not just about databases and business strategies. Every question we ask a chatbot gives it details about our thinking, interests, and values. In the future, these insights could be sold to companies for even more aggressive ad targeting, or to political parties seeking to manipulate public opinion.
Our interactions with chatbots also reveal our knowledge, problem-solving abilities, and thinking patterns. Taken together, this amounts to a database of the intelligence of a large part of the human population. I cannot fully imagine how this might be exploited in the future, but the probability that someone will use this information against certain groups of people is very high.
This is a serious issue, and it is high time we start acting responsibly. Let’s not be lulled by convenience and assume that this does not concern us. If we do not wake up now, it may soon be too late.