The anvil is dropping on ChatGPT’s presence in the workplace—for now.
Major companies, including Amazon, Goldman Sachs, and Verizon, have banned or restricted the OpenAI chatbot, which exploded in popularity late last year. Many cite privacy risks as the reason for the clampdown, since ChatGPT uses data from conversations to improve its accuracy, although users can opt out via ChatGPT’s settings or a Google form.
Of course, several companies use ChatGPT and other generative A.I. tools like Google Bard to improve productivity and free up time spent on menial tasks like drafting emails or reviewing code. Companies like Coca-Cola and Bain & Company have even inked partnerships with OpenAI.
But for many employers, the security risks far outweigh the productivity benefits, as evidenced by Samsung. In early April, the company reported that employees accidentally leaked confidential internal source code and meeting recordings while using the chatbot (Samsung has since banned employees from using ChatGPT). At the time of publication, Apple was the latest company to ban ChatGPT, citing similar concerns over data leaks.
These companies are not alone in their apprehension. Italy temporarily banned the service, stating that OpenAI isn’t compliant with Europe’s GDPR privacy law, and the E.U. is close to passing a groundbreaking law governing A.I.’s use in the bloc.
While the chatbot is currently restricted in certain offices, that might not always be the case. Some companies, like JPMorgan Chase and Deutsche Bank, have indicated they’ll allow ChatGPT if it’s fully vetted for safe use. Others, like Goldman Sachs and Samsung, are developing their own A.I. tools for employee usage.
Below are companies that have banned or restricted employees from using ChatGPT at work.
Apple

Apple is restricting workers from using ChatGPT and other third-party A.I. tools, citing concerns about confidential data leaks. The tech giant also told employees not to use the automated software code-writing program Copilot, developed by GitHub and OpenAI (both owned or backed by Microsoft). Apple, which tapped Google veteran John Giannandrea to lead its A.I. efforts in 2018, is reportedly developing its own A.I. tools.
Bank of America
Bank of America added ChatGPT to its list of unauthorized apps that employees are prohibited from using for business, insiders told Bloomberg in February. The bank is one of many to implement stricter compliance measures around internal communications after U.S. regulators levied over $2 billion in fines against Wall Street firms for failing to monitor employee use of unauthorized messaging apps like WhatsApp.
Calix

CEO Michael Weening announced on LinkedIn that the Calif.-based telecommunications company banned ChatGPT across all business functions and devices in April. He cited Samsung’s recent leak as the driving reason for the ban, writing that it is “disconcerting” that ChatGPT could expose sensitive information, like confidential internal memos or customer contracts under NDA, to outsiders. “The world is changing. Big time. People need to read the fine print (or lack of it) and educate teams.”
Citigroup

Citigroup added ChatGPT to its standard firm-wide controls for third-party software, under which certain categories of websites are automatically restricted. “We are actively exploring a number of avenues that would give us a better understanding of the benefits and potential associated risks of using this technology,” a spokesperson tells Fortune.
Deutsche Bank

Access to ChatGPT has been disabled for Deutsche Bank staff since at least February. The restriction is standard practice for third-party websites, a spokesperson tells Fortune, noting its aim is to protect the bank from data leakage rather than an indication of the tool’s usefulness. In the meantime, the bank says it will evaluate how to best use the platform while protecting its own and clients’ data. The bank is developing A.I. chatbots and was an early business partner of Google Cloud’s generative A.I. offerings.
Goldman Sachs

Like Citigroup, Goldman Sachs blocked access to ChatGPT for employees through an automatic restriction on third-party software, as Bloomberg first reported in February.
But Goldman is in the process of developing its own generative A.I. tools. Goldman chief information officer Marco Argenti said the bank has several “proof of concepts” to streamline tasks like document classification and categorization, summarizing earnings calls, or research for a daily digest.
JPMorgan Chase

America’s largest bank restricted the use of ChatGPT for employees in late February, a person familiar with the decision told Bloomberg. The decision is part of standard controls for third-party software, the person said. But the bank may be open to using the tool in the future, noting in its 2022 annual report that it’s “imagining new ways to augment and empower employees with A.I. through human-centered collaborative tools and workflow, leveraging tools like large language models, including ChatGPT.” The bank did not respond to Fortune’s request for clarification on how the tool is restricted.
Northrop Grumman

The aerospace and defense technology company blocked the tool earlier this year, saying it doesn’t share company or customer information with outside tools until they are vetted. One employee told the Wall Street Journal he had used the chatbot for months prior to the ban, asking it conceptual questions about code before turning the plain-language responses into code himself. He compared the bot to a “really cool, patient mentor who’s not going to be annoyed by you asking a lot of questions.” Northrop Grumman did not respond to Fortune’s request for comment.
Verizon

The telecommunications giant is also concerned about privacy and security with ChatGPT. Verizon informed employees in mid-February that the program is not accessible via corporate systems due to the risk of losing control of sensitive information like customer data and source code. “Our top priority is our four stakeholders: communities, customers, investors/shareholders, and society, and we have to be thoughtful when introducing a new and emerging technology such as a ChatGPT,” a communications manager at Verizon wrote to employees.
Samsung

South Korea-based Samsung banned employee use of ChatGPT and other generative A.I. tools in early May. The restriction came after engineers accidentally leaked confidential information, including internal code and meeting recordings, by uploading it to the chatbot in April. Employees are banned from accessing the tools on company-owned devices and its internal networks. However, Samsung is creating its own A.I. tools that employees can use for software development, translation, and summarizing documents.
Below are companies that have not outright banned ChatGPT but requested employees not to share confidential information on the platform:
Accenture

Accenture bars its 700,000 employees from using ChatGPT and other generative A.I. tools while coding and requires them to obtain permission before sharing company or client data. Accenture did not respond to Fortune’s request for clarification on the restrictions.
Amazon

Amazon warned employees against sharing confidential company information with ChatGPT, citing instances in which the chatbot’s responses looked similar to internal data, according to messages viewed by Insider.
Amazon employees have reported receiving a popup warning that they are entering a third-party site that “may not be approved for use by Amazon Security” when visiting ChatGPT’s website. Engineers looking for coding support are encouraged to use an internal A.I. tool called CodeWhisperer. But employees are not outright restricted from using ChatGPT or similar tools. “We have safeguards in place for employee use of these technologies, including guidance on accessing third-party generative A.I. services and protecting confidential information,” a spokesperson tells Fortune.
PwC

The Australian division of the professional services firm warned employees against using any material or information generated by ChatGPT and prohibited sharing firm or client data with the tool, according to an internal newsletter. However, it still permits access to ChatGPT and encourages employees to experiment personally. Jacqui Visch, PwC Australia’s chief digital and information officer, told the Australian Financial Review the company is exploring cybersecurity and legal considerations before permitting the chatbot for any business use.