Private ChatGPT

Microsoft plans to offer a “private ChatGPT” version to businesses

Microsoft is testing a “private ChatGPT” alternative designed to alleviate concerns about data privacy. The private version, called “Microsoft Private AI,” would be limited to a specific organization, and its data would not leave the organization’s network. The move comes as companies grow increasingly concerned about the security and privacy of their data, particularly in light of recent high-profile breaches. ChatGPT is a powerful language model that can generate human-like responses to text prompts, but it requires large amounts of data to train. Microsoft has previously faced criticism for its data privacy practices, including allegations of collecting user data without consent. The private model is being tested with select customers, and Microsoft plans to offer it as a service in the future.
The model uses federated learning, which allows multiple organizations to collaborate on training without sharing their data. Microsoft is not the only company exploring private versions of language models; Google and OpenAI have announced similar initiatives. Some experts believe that private language models could become a new industry standard, particularly as more companies seek to protect their data and intellectual property. This approach would keep sensitive data from being used to train ChatGPT’s language model and could also prevent inadvertent data leaks: imagine a chatbot revealing one company’s product road map to another company simply because both used ChatGPT, as reportedly happened with Samsung.
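To make the federated-learning idea concrete, here is a minimal sketch of federated averaging (FedAvg), the core technique the paragraph above describes: each organization trains a model locally on its own private data, and only the resulting model weights, never the raw data, are shared for aggregation. All names and the toy linear model are illustrative assumptions, not Microsoft’s actual implementation.

```python
def local_update(w, data, lr=0.01):
    """One gradient-descent step on a toy linear model y = w*x,
    computed using only this organization's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(local_weights, sizes):
    """Aggregate local models, weighted by each org's dataset size.
    Only weights cross organizational boundaries, not data points."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(local_weights, sizes)) / total

# Two organizations with private datasets (both roughly fit y = 2x).
org_a = [(1.0, 2.1), (2.0, 3.9)]
org_b = [(3.0, 6.2), (4.0, 7.8), (5.0, 10.1)]

global_w = 0.0
for _ in range(50):  # communication rounds
    w_a = local_update(global_w, org_a)  # trained inside org A's network
    w_b = local_update(global_w, org_b)  # trained inside org B's network
    global_w = federated_average([w_a, w_b], [len(org_a), len(org_b)])

# global_w converges toward the shared slope (about 2.0) even though
# neither organization ever saw the other's data.
```

In a production setting each `local_update` would run many training steps on a full model inside the organization’s own infrastructure, but the privacy property is the same: the coordinator sees only aggregated parameters.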

The catch is that these isolated versions of ChatGPT could cost a lot more to run and use. The report says that the private instances “could cost as much as 10 times what customers currently pay to use the regular version of ChatGPT.”