Privacy
When using generative AI technology, you often expose your data without realizing it. Many people are quick to agree to the terms and conditions without checking how their data will be processed. The companies behind these tools can collect your input, including personal information such as phone numbers, email addresses, and names. It is therefore important to be aware of the privacy risks involved.
Until recently, there were no specific regulations governing the use of AI. That changed when the EU AI Act entered into force on 1 August 2024. Many AI companies still need to adjust their practices to comply with this new European legislation.
Sharing information in generative AI tools, such as prompts or documents, carries certain risks. It is often unclear what these tools do with the data you input. Therefore, it is essential to be mindful of the information you enter into any given tool.
Avoid entering confidential, personal, or sensitive information into a generative AI tool. This includes things like names, addresses, research data, research results, biometric data, medical information, or personal identification details.
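One practical safeguard is to screen prompts for obvious personal identifiers before sending them to a tool. The sketch below is illustrative only: the `redact` function and its regex patterns are simple examples that catch common email and phone-number formats, not a complete or reliable anonymization solution.

```python
import re

# Illustrative patterns for common personal identifiers.
# Real-world formats vary widely; these will not catch everything.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s\-()]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace emails and phone numbers with placeholder tags."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +31 6 1234 5678."
print(redact(prompt))
# → Contact Jane at [EMAIL] or [PHONE].
```

A check like this reduces accidental leakage, but it is no substitute for the basic rule above: if information is confidential or sensitive, keep it out of the prompt entirely.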
Where possible, opt out of data collection. In ChatGPT, for example, you can disable chat history, and some educational institutions offer Copilot in a secure environment where your data is not used to train language models.
Mistral vs. DeepSeek
Mistral is a European company with a strong emphasis on privacy and GDPR compliance. Users retain rights to both their input and output, and Mistral is transparent about how data is processed. DeepSeek, a Chinese AI provider, collects extensive user data, including chat history, device information, and even keystroke patterns. This data is stored on servers in China, raising concerns about potential government access and oversight.
Think carefully about which AI tool you choose to work with. Do you value privacy? If so, take the time now to adjust the relevant settings in the AI tool of your choice.