LinkedIn controversy: its AI was found to be trained on private user data


Although the firm mentions this use in its privacy policy, users were not directly informed or given the option to opt out of the process.

After the controversy in July of this year over the personal data Meta used to develop its artificial intelligence, Meta AI, which is available on WhatsApp, criticism now falls on another heavyweight of social networks.

The target is LinkedIn, the flagship jobs platform, which began training AI models on its users’ data without their explicit consent.

As Facebook, also part of Mark Zuckerberg’s Meta, has done, the job portal uses the technology to suggest posts that align with each user’s interests.

The Microsoft-owned platform implemented the measure without directly alerting its users, drawing criticism over the lack of transparency and the limited control users have over their personal information and privacy.

The new policy, which went into effect on September 18, 2024, allows LinkedIn to use the data provided by its users to improve and develop its AI systems.

“When news like this breaks, one of the main ethical concerns is the feeling that users lose control over their own information. The question also arises as to whether companies are using this information for their own projects without the user being aware of it,” Jorge Lukowski, Global Director of Marketing and Communication at NEORIS, tells iProUP.

It was revealed that LinkedIn collected personal information without users’ consent to train artificial intelligence models

Julián Colombo, CEO of N5, stresses that transparency is “crucial”: the lack of clarity on how data is collected and used breeds mistrust, so he recommends that platforms be open about their data collection practices.

Another important point he mentions is bias and discrimination: “It is necessary to ensure that the data represents a diversity of perspectives. In addition, data security cannot be neglected; the use of large volumes of information increases the risk of leaks and abuse, which requires robust security measures to protect users. It is crucial that platforms take responsibility for the use of data and thus promote a culture of consent, transparency and accountability.”

Amid the backlash, the non-profit Open Rights Group (ORG) has asked the Information Commissioner’s Office (ICO), the UK’s independent data protection regulator, to open an investigation into LinkedIn and other networks that use their users’ data for training by default.

LinkedIn details in its Q&A section that “as with most features, when you interact with our platform, we collect and use (or process) data about your use of the platform, including personal data.”

This would include the “use of generative AI (AI models used to create content) or other AI features, your posts and articles, how often you use LinkedIn, your language preference, and any feedback you’ve provided to our teams. We use this data, in accordance with our privacy policy, to improve or develop LinkedIn’s services,” the portal specifies.


Various international organizations have already proposed rules to govern the use of AI by companies that handle data

The company’s plans will officially take effect as of November of this year and will affect users in the United States, Canada and any country that has not legislated on the use of this information. For countries within the European Union (EU), the situation will be different.

LinkedIn Messes with User Privacy: The Ethical Implications of Using AI

This situation raised concerns among privacy experts, who highlight the risk that sensitive data could be used inappropriately to train generative models without proper security measures or consent.

“This type of situation leads us to reconsider how we manage the privacy of user data. Although AI-based solutions require large volumes of information to be effective, technologies such as GPT allow us to create ‘fictitious’ profiles that can simulate real patterns without resorting to private data,” Lukowski reflects.

“As users, we must be aware of the information we share, since even data that does not seem sensitive can be used to define us. It’s not about shutting down our social networks, but about being more thoughtful about what we post and how it could be used,” he adds.
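As a rough illustration of the “fictitious profiles” approach Lukowski describes, the sketch below generates synthetic profiles with the open-source Faker library. The field names and interest list are assumptions made for this example, not LinkedIn’s actual schema or NEORIS’s method; the point is only that a model can be prototyped against data that mimics the shape of real profiles without containing any.

```python
# Minimal sketch: synthetic user profiles for prototyping an AI feature
# without touching real member data. All fields are illustrative.
import random
from faker import Faker  # pip install Faker

fake = Faker()
Faker.seed(42)   # make the synthetic data reproducible
random.seed(42)

# Hypothetical interest categories for the example
INTERESTS = ["AI", "fintech", "recruiting", "cloud", "data privacy"]

def fictitious_profile() -> dict:
    """Build one synthetic profile that mimics the shape of real data."""
    return {
        "name": fake.name(),
        "headline": fake.job(),
        "company": fake.company(),
        "interests": random.sample(INTERESTS, k=2),
        "post_sample": fake.sentence(nb_words=12),
    }

# A small synthetic corpus a team could test or train against
corpus = [fictitious_profile() for _ in range(1000)]
print(corpus[0])
```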

In this regard, a report presented by the UN stressed the “urgent” need to regulate AI, raising concerns about its impact on society and emphasizing the importance of legislative frameworks that ensure user privacy. “Legislation plays a crucial role in regulating platforms,” says Colombo.

LinkedIn also clarified that in some regions, such as the European Union (EU), the European Economic Area (EEA) and Switzerland, no resident data was used to train its AI models, owing to the General Data Protection Regulation (GDPR) and other privacy rules that govern those jurisdictions.

So far, the jobs platform has limited itself to publishing its policies, but many users are demanding greater clarity on how their data will be handled in the future.

“Legislation plays a crucial role in all of this: it provides a regulatory framework so that companies know how to act appropriately in a relatively new field, and it lets them work with governments to create laws that benefit both parties. An example could be allowing users to voluntarily provide their data when it is relevant to an AI project, or giving them the right to know what it will be used for,” says the NEORIS executive.
