{"id":13867,"date":"2024-05-13T02:59:33","date_gmt":"2024-05-13T06:59:33","guid":{"rendered":"https:\/\/sikaoer.com\/how-to-ai-fine-tune-your-chatbot-privacy-settings\/"},"modified":"2024-05-13T02:59:33","modified_gmt":"2024-05-13T06:59:33","slug":"how-to-ai-fine-tune-your-chatbot-privacy-settings","status":"publish","type":"post","link":"https:\/\/sikaoer.com\/how-to-ai-fine-tune-your-chatbot-privacy-settings\/","title":{"rendered":"How To AI: Fine Tune Your Chatbot Privacy Settings"},"content":{"rendered":"
It may be easy, even comforting, to imagine that using AI tools involves interacting with a purely objective, stoic, independent machine that knows nothing about you. But between cookies, device identifiers, login and account requirements, and even the occasional human reviewer, the voracious appetite online services have for your data seems insatiable.<\/p>\n
Privacy is a major concern that both consumers and governments have about the pervasive spread of AI. Across the board, platforms highlight their privacy features\u2014even if they're hard to find. (Paid and business plans often exclude training on submitted data entirely.) But any time a chatbot \u201cremembers\u201d something, it can still feel intrusive.<\/p>\n
In this article, we will explain how to tighten your AI privacy settings by deleting your previous conversations and by turning off the settings in ChatGPT, Gemini (formerly Bard), Claude, Copilot, and Meta AI that allow developers to train their systems on your data. These instructions cover the desktop, browser-based interface for each.<\/p>\n
ChatGPT<\/strong><\/h2>\n
Still the flagship of the generative AI movement, OpenAI\u2019s ChatGPT has several features to improve privacy and alleviate concerns about user prompts being used to train the chatbot.<\/p>\n
In April, OpenAI announced that ChatGPT could be used without an account. By default, prompts shared through the free, account-less version are not saved. But if users do not want their chats used to train ChatGPT, they still need to toggle the \u201cTemporary chat\u201d setting in the ChatGPT dropdown menu at the top left of the screen.<\/p>\n
If you have an account and a ChatGPT Plus subscription, however, how do you keep your prompts from being used? ChatGPT gives users the ability to delete all chats under its general settings. To also make sure chats are not used to train the AI model, look under \u201cData controls\u201d and click the arrow to the right of \u201cImprove the model for everyone.\u201d<\/p>\n
A separate \u201cModel improvement\u201d window will appear, allowing you to toggle the setting off and select \u201cDone.\u201d This prevents OpenAI from using your chats to train ChatGPT.<\/p>\n
There are still caveats, however.<\/p>\n
“While history is disabled, new conversations won\u2019t be used to train and improve our models, and won\u2019t appear in the history sidebar,” an OpenAI spokesperson told Decrypt. “To monitor for abuse\u2014and reviewed only when we need to\u2014we will retain all conversations for 30 days before permanently deleting.”<\/p>\n
Claude<\/strong><\/h2>\n
\u201cWe do not train our models on user-submitted data by default,\u201d an Anthropic spokesperson told Decrypt. \u201cThus far, we have not used any customer or user-submitted data to train our generative models, and we\u2019ve expressly stated so in the model card for our Claude 3 model family.\u201d<\/p>\n
\u201cWe may use user prompts and outputs to train Claude where the user gives us express permission to do so, such as clicking a thumbs up or down signal on a specific Claude output to provide us feedback,\u201d the company added, noting that it helps the AI model \u201clearn the patterns and connections between words.\u201d<\/p>\n
Deleting archived chats in Claude will also keep them out of reach. \u201cI do not retain or have access to any previously deleted chats or conversations,\u201d the Claude AI agent helpfully answers in the first person. \u201cMy responses are generated based on the current conversation only.\u201d<\/p>\n
Like ChatGPT, Claude does hold on to some information as required by law.<\/p>\n
\u201cWe also retain data in our backend systems for the amount of time specified in our Privacy Policy unless required to enforce our Acceptable Use Policy, address Terms of Service or policy violations, or as required by law,\u201d Anthropic explains.<\/p>\n
As for Claude's collection of information across the web, an Anthropic spokesperson told Decrypt that the AI developer\u2019s web crawler respects industry-standard technical signals like robots.txt that site owners can use to opt out of data collection, and that Anthropic does not access password-protected pages or bypass CAPTCHA controls.<\/p>\n
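For site owners rather than chat users, that opt-out happens in the robots.txt file at the root of a website. As an illustrative sketch, rules like the following would ask Anthropic's crawler (published under the ClaudeBot user-agent token; confirm the current token in Anthropic's documentation before relying on it) to skip the entire site:<\/p>\n

```
# Ask Anthropic's web crawler not to collect any pages from this site.
# Place this in robots.txt at the site root, e.g. https://example.com/robots.txt
User-agent: ClaudeBot
Disallow: /
```

As the article notes, robots.txt is an industry-standard signal that well-behaved crawlers honor voluntarily; it is a request, not an enforcement mechanism.<\/p>\n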
Gemini<\/strong><\/h2>\n
By default, Google tells Gemini users that \u201cyour conversations are processed by human reviewers to improve the technologies powering Gemini Apps. Don't enter anything that you wouldn't want to be reviewed or used.”<\/p>\n
But Gemini AI users can delete their chatbot history and opt out of having their data used to train its model going forward.<\/p>\n
To accomplish both, navigate to the bottom left of the Gemini homepage and locate \u201cActivity.\u201d<\/p>\n
Once on the activity screen, users can then turn off \u201cGemini Apps Activity.\u201d<\/p>\n
A Google representative explained to\u00a0Decrypt\u00a0what the \u201cGemini Apps Activity\u201d setting does.<\/p>\n
\u201cIf you turn it off, your future conversations won\u2019t be used to improve our generative machine-learning models by default,\u201d the company representative said. \u201cIn this instance, your conversations will be saved for up to 72 hours to allow us to provide the service and process any feedback you may choose to provide. In those 72 hours, unless a user chooses to give feedback in Gemini Apps, it won\u2019t be used to improve Google\u2019s products, including our machine learning technology.\u201d<\/p>\n
There is also a separate setting to clear out your Google-connected YouTube history.<\/p>\n
Copilot<\/strong><\/h2>\n
In September, Microsoft added its Copilot generative AI model to its Microsoft 365 suite of business tools, its Microsoft Edge browser, and its Bing search engine. Microsoft also included a preview version of the chatbot in Windows 11. In December, Copilot was added to the Android and Apple app stores.<\/p>\n
Microsoft does not provide the option to opt out of having user data used to train its AI models, but like Google Gemini, Copilot users can delete their history. The process is not as intuitive on Copilot, however, as previous chats still show on the desktop version\u2019s home screen even after being deleted.<\/p>\n
To find the option to delete Copilot history, open your user profile at the top right of your screen (you must be signed in) and select \u201cMy Microsoft Account.\u201d On the left, select \u201cPrivacy,\u201d and scroll down to the bottom of the screen to find the Copilot section.<\/p>\n
Because Copilot is integrated into Bing\u2019s search engine, clearing activity will also clear search history, Microsoft said.<\/p>\n
A Microsoft spokesperson told\u00a0Decrypt that the tech giant protects consumers\u2019 data through various techniques, including encryption, deidentification, and only storing and retaining information associated with the user for as long as is necessary.<\/p>\n
\u201cA portion of the total number of user prompts in Copilot and Copilot Pro responses are used to fine-tune the experience,\u201d the spokesperson added. \u201cMicrosoft takes steps to de-identify data before it is used, helping to protect consumer identity.\u201d The spokesperson also said that Microsoft does not use any content created in Microsoft 365 apps (Word, Excel, PowerPoint, Outlook, and Teams) to train underlying \u201cfoundational models.\u201d<\/p>\n
Meta AI<\/strong><\/h2>\n
In April, Meta\u2014the parent company of Facebook, Instagram, and WhatsApp\u2014rolled out Meta AI to users.<\/p>\n
\u201cWe are releasing the new version of Meta AI, our assistant, that you can ask any question across our apps and glasses,\u201d Zuckerberg said in an Instagram video. \u201cOur goal is to build the world's leading AI and make it available to everyone.\u201d<\/p>\n
Meta AI does not provide users the option to opt out of having their inputs used to train the AI model. Meta does, however, give users the option to delete past chats with its AI agent.<\/p>\n
To do so from a desktop computer, click the Facebook settings tab at the bottom left of your screen, located above your Facebook profile image. Once in settings, users have the option to delete conversations with Meta AI.<\/p>\n
Meta does explain that deleting conversations here will not delete chats with other people in Messenger, Instagram, or WhatsApp.<\/p>\n
A Meta spokesperson declined to comment on whether or how users could exclude their information from being used in Meta AI model training, instead pointing Decrypt to a September statement by the company about its privacy safeguards and the Meta settings page on deleting history.<\/p>\n
\u201cPublicly shared posts from Instagram and Facebook\u2014including photos and text\u2014were part of the data used to train the generative AI models,\u201d the company explains. \u201cWe didn\u2019t train these models using people\u2019s private posts. We also do not use the content of your private messages with friends and family to train our AIs.\u201d<\/p>\n
But anything you send to Meta AI will be used for model training\u2014and beyond.<\/p>\n
\u201cWe use the information people share when interacting with our generative AI features, such as Meta AI or businesses who use generative AI, to improve our products and for other purposes,\u201d Meta adds.<\/p>\n
Conclusion<\/h2>\n
Of the major AI models we included above, OpenAI\u2019s ChatGPT provided the easiest way to delete history and opt out of having chatbot prompts used to train its AI model. Meta's privacy practices appear to be the most opaque.<\/p>\n
Many of these companies also provide mobile versions of their apps with similar controls. The individual steps may differ, and privacy and history settings may function differently across platforms.<\/p>\n
Unfortunately, even cranking all privacy settings to their tightest levels may not be enough to safeguard your information, according to Venice AI founder and CEO Erik Voorhees, who told Decrypt that it would be naive to assume your data has been erased.<\/p>\n
\u201cOnce a company has your information, you can never trust it\u2019s gone, ever,\u201d he said. \u201cPeople should assume that everything they write to OpenAI is going to them and that they have it forever.\u201d<\/p>\n
\u201cThe only way to resolve that is by using a service where the information does not go to a central repository at all in the first place,\u201d Voorhees added\u2014a service like his own.<\/p>\n
Edited by Ryan Ozawa.<\/p>\n