More than 4% of employees have put sensitive corporate data into ChatGPT
In a recent report, Cyberhaven said it detected and blocked attempts to input data into ChatGPT by 4.2% of the 1.6 million workers at its client companies, because of the risk of leaking confidential information, client data, source code, or regulated information to ChatGPT.
In one case, an executive cut and pasted the firm’s 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another, a doctor entered a patient’s name and medical condition and asked ChatGPT to draft a letter to the patient’s insurance company.

Beyond raising employee awareness, confidentiality agreements and policies should prohibit employees from referring to or entering confidential, proprietary, private, or trade secret information into AI chatbots or large language models such as ChatGPT. And because ChatGPT was trained largely on publicly available online information, employees might receive and use output from the tool that is trademarked, copyrighted, or the intellectual property of another person or entity, creating legal risk for employers.
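Blocking of the kind Cyberhaven describes is typically done with a data-loss-prevention (DLP) filter that screens a prompt before it leaves the corporate network. The following is a minimal, hypothetical sketch of that idea; the pattern list and function names are illustrative assumptions, not Cyberhaven’s actual method, and real DLP products use far more sophisticated classifiers:

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not a real
# DLP rule set. Real tools combine classifiers, fingerprinting, and context.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # SSN-shaped number
    re.compile(r"\bpatient\b", re.IGNORECASE),            # possible PHI keyword
    re.compile(r"\btrade secret\b", re.IGNORECASE),
]

def should_block(prompt: str) -> bool:
    """Return True if the prompt matches any sensitive-data pattern."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    print(should_block("Summarize our confidential 2023 strategy document"))  # True
    print(should_block("Write a haiku about spring"))                         # False
```

A filter like this would sit in a browser extension or network proxy and refuse to forward flagged prompts to the chatbot, logging the attempt for the security team instead.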