AI profiling poses growing threat to privacy and national security

16:14, 10/07/2025, Thursday
AA
File photo

Expert warns misuse of personal data could lead to ethical breaches and state-level risks

Artificial intelligence systems are raising urgent concerns over privacy and national security due to their ability to analyze, label and profile sensitive user data, a senior data science engineer said, warning of gaps in transparency and oversight.

While AI tools such as chatbots are widely praised for simplifying daily life, the use of personal data in shaping responses — including potential profiling of political beliefs, religious views, or health information — has sparked debate about violations of data protection laws and the ethical use of autonomous technologies.

Recent reports claim that systems like ChatGPT can extract and classify user traits from interactions, including politically or religiously sensitive identifiers. AI developers, including OpenAI, say users retain the right to opt out of data training, and that administrative and technical safeguards are in place.

But Emre Durgut, a Türkiye-based senior data science engineer, told Anadolu the current approach remains too opaque.

“AI systems have technical, legal, and social responsibilities,” he said. “The biggest risk right now is that users' personal data can be used to create profiles — these profiles may contain sensitive personal information, such as political views, religious beliefs, and a person's health status.”

“If this data is processed without the user's consent or in violation of the law, that would raise some serious privacy concerns,” he said.

Durgut argued that such processing may conflict with data protection laws, including the European Union's General Data Protection Regulation (GDPR) and Türkiye's Personal Data Protection Law (KVKK), which require informed consent, transparency, and limits on the use of sensitive data.

“These regulations are based on the principles of explicit consent, data minimization, purpose limitation, and transparency,” he said. “If these ‘hidden’ profiles are created with sensitive labels, this constitutes a clear violation of the law.”
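As a rough sketch of what those principles could look like in practice, the toy Python guard below refuses to touch special-category fields unless explicit consent has been recorded for that exact purpose. The field names and consent ledger are invented for illustration; this is not a compliance implementation of the GDPR or KVKK.

```python
# Toy illustration of explicit consent + purpose limitation; not a compliance tool.
SENSITIVE_FIELDS = {"political_view", "religious_belief", "health_status"}

# Hypothetical consent ledger: (user_id, field, purpose) -> explicit opt-in recorded?
CONSENT_LEDGER = {
    ("user-42", "health_status", "symptom_triage"): True,
}

def may_process(user_id: str, field: str, purpose: str) -> bool:
    """Sensitive data requires a recorded opt-in for this exact purpose;
    non-sensitive fields pass this (deliberately simple) check."""
    if field not in SENSITIVE_FIELDS:
        return True
    return CONSENT_LEDGER.get((user_id, field, purpose), False)

print(may_process("user-42", "health_status", "symptom_triage"))  # True: consent on file
print(may_process("user-42", "political_view", "ad_targeting"))   # False: no opt-in
```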

He also noted that, as the field of user profiling is still developing, independent oversight remains weak.

“Independent auditors have yet to be involved,” he said. “Comparing user inputs with system outputs can show how data is labeled — you can get clues about what data is being processed by asking a few technical questions.”
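Durgut's comparison idea can be read as a black-box probe: put the same neutral questions to a chatbot from different sessions or accounts and look for personalized detail in the answers. The sketch below assumes the OpenAI Python SDK and a chat-completions endpoint; the probe questions, model name, and comparison step are illustrative, not a formal audit method.

```python
# Minimal black-box probing sketch. Assumes `pip install openai` and an
# OPENAI_API_KEY in the environment; probe wording is illustrative only.
from openai import OpenAI

client = OpenAI()

PROBES = [
    "What do you know or remember about me from past interactions?",
    "What labels or categories, if any, do you associate with this account?",
    "Is my data in this conversation used for training or personalization?",
]

for question in PROBES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; any chat model works
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    # Run the same probes from a fresh account and diff the answers: concrete
    # personal detail appearing only for the established account is a clue
    # that stored data is shaping the output.
    print(f"Q: {question}\nA: {answer}\n")
```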

Durgut said that as chatbots become more personalized, the risk of discrimination in areas such as hiring, advertising, or credit decisions increases.

“If an AI system incorrectly labels someone as being of a certain political opinion, ethnic origin, or religion, that could lead to concrete discrimination — from personalized ads to job rejections to biased financial evaluations,” he said.

He warned that leaked profile data could fall into the hands of malicious actors, increasing the risk of social engineering attacks, blackmail, and identity fraud.

“In cases involving public officials or high-profile individuals, such breaches could pose a threat to national security,” he said.

To reduce these risks, Durgut urged users to avoid sharing sensitive personal data with AI platforms and advised switching between different tools to minimize exposure.
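One way to follow that advice is to scrub obvious identifiers locally, before a prompt ever leaves the device. The sketch below uses deliberately loose, invented regex patterns; serious redaction would rely on dedicated PII-detection tooling rather than a handful of expressions.

```python
import re

# Deliberately loose, illustrative patterns; real PII detection needs
# dedicated tooling (NER models, validated ID checksums, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "NATIONAL_ID": re.compile(r"\b\d{11}\b"),  # e.g., Türkiye's 11-digit TC number
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholder tags before sending a prompt."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "I'm Ali, email ali@example.com, phone +90 555 123 4567."
print(redact(prompt))
# -> "I'm Ali, email [EMAIL], phone [PHONE]."
```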

“AI developers must clearly explain how they process user data and must update their usage policies in ways that are easy for the public to understand,” he said.

“These firms should regularly invite independent auditors to evaluate their applications, publish the results, and give users more control over their data,” he said, adding that strong internal governance — including ethics committees and enforceable policies — is also essential.

#ChatGPT
#artificial intelligence
#Chatbot
#personal data
#profiling