Microsoft, an investor in OpenAI, recently encountered a temporary block on its employees’ access to ChatGPT, a renowned AI tool developed by OpenAI.
The incident, revealed in internal announcements, stemmed from security system tests for large language models (LLMs) that were inadvertently applied too broadly.
Microsoft’s spokesperson clarified:
“We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees.”
The access restriction extended beyond ChatGPT to other external AI services, but Microsoft promptly lifted it after identifying the error.
Microsoft’s Relationship with OpenAI
This incident highlights Microsoft’s reliance on OpenAI’s services, which run on Microsoft’s Azure cloud infrastructure.
Microsoft leverages OpenAI’s services to enhance its Windows operating system and Office applications.
Microsoft’s announcement cited security and data concerns, leading to the temporary unavailability of AI tools for employees.
“Due to security and data concerns a number of AI tools are no longer available for employees to use. While it is true that Microsoft has invested in OpenAI, and that ChatGPT has built-in safeguards to prevent improper use, the website is nevertheless a third-party external service. That means you must exercise caution using it due to risks of privacy and security. This goes for any other external AI services, such as Midjourney or Replika, as well.”
However, access to ChatGPT was swiftly reinstated once the error was identified, with a Microsoft spokesperson clarifying that the security system tests were inadvertently applied to all employees.
Microsoft encouraged its employees and customers to utilise services such as Bing Chat Enterprise and ChatGPT Enterprise.
The Microsoft spokesperson added:
“We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”
ChatGPT Usage Restrictions
ChatGPT’s broad training on internet data has led some companies to restrict its use to prevent employees from sharing confidential information.
In response to the evolving landscape of AI risks, OpenAI has launched a Preparedness team led by Aleksander Madry, who directs the Massachusetts Institute of Technology’s Center for Deployable Machine Learning. The team focuses on assessing and managing risks associated with artificial intelligence models.
Their scope includes areas such as individualised persuasion, cybersecurity, and misinformation threats.
Microsoft and OpenAI’s actions reflect the global efforts of companies in the AI domain to ensure safe and responsible use of AI technologies.
As the world grapples with the potential risks of frontier AI, companies are actively working to push the boundaries of AI capabilities while mitigating associated risks.