Patricia Alfheim
June 23, 2025

OpenAI connectors are live – and your data might be too

OpenAI recently announced connectors that allow ChatGPT to access and act on enterprise data held in third-party applications.

These new capabilities let generative AI tap directly into core business systems – shared drives, CRMs, calendars, email, messaging platforms, and more.

This effectively turns ChatGPT into a system-level agent – capable not only of retrieving information, but also of acting on it. Whether connecting to Microsoft’s cloud platform, internal file shares, or productivity tools, the AI can interact across systems much like a human user would. That’s a powerful step forward, but it also amplifies the need for safeguards. Without usage restrictions and clear boundaries, these agents can operate with far more access than anyone realizes.

It’s a major step in usability, but it also introduces serious risk for enterprises:

What happens when AI begins interacting with internal systems without proper oversight, restrictions, or governance?

The risk isn’t the AI – it’s uncontrolled access

These new integrations make it easy for anyone to connect AI to business-critical systems (and you can bet many already have). However, in doing so, they bypass many of the safeguards enterprises rely on to protect sensitive information.

Without clear controls, ChatGPT can start pulling from (and acting on) everything from customer records and confidential documents to private conversations and strategic plans. Because individual users can set up these connections themselves, this can happen without IT security teams or data owners ever knowing.

The problem is not just that these connections can be made without oversight. Once they are in place, there is limited control over what the AI can access within those systems. If a connection is established to a file share, for example, the AI may be able to see far more than intended, because most underlying systems were not designed to enforce fine-grained usage limits or to monitor how data is queried and interpreted.

Three critical governance gaps

Without the right safeguards in place, connecting AI tools like this creates a direct path between sensitive enterprise systems and external exposure. Even when access appears to be technically controlled, that alone doesn’t guarantee the connection is secure, compliant, or appropriate for the type of data being exposed.

This highlights three critical governance gaps:

  • No usage restrictions
    Most data inside enterprise systems isn’t labeled or tagged with purpose, consent, or usage conditions. That means AI can access and use data without knowing what it’s for or whether it’s appropriate to act on. Sensitive HR files, draft financials, or personal notes could easily be swept up without intention or approval.

  • No data provenance
    When AI pulls information from multiple sources, it becomes almost impossible to trace its origin. Who created it? What system did it come from? Has it been validated? Without clear lineage, organizations lose the ability to audit and defend how data is being used (especially in regulated environments).

  • No enforcement layer
    Access controls alone aren’t enough. Organizations need fine-grained governance that can enforce how data is used, not just who can access it. That applies equally to people and to AI agents – both need clear boundaries on what data they can interact with, and under what conditions. (A brief sketch of what such a check could look like follows this list.)
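
To make these gaps concrete, here is a minimal, hypothetical sketch in Python of what data-level controls could look like: each record carries its own purpose labels, consent basis, and provenance, and a single check enforces usage rather than access. The field names and structure are illustrative assumptions, not a description of any particular product or standard.

```python
# Hypothetical sketch only: field names and structure are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Provenance:
    source_system: str   # e.g. "fileshare", "crm"
    created_by: str      # original author or owning team
    validated: bool      # has the record passed a review step?


@dataclass(frozen=True)
class DataRecord:
    content: str
    purposes: frozenset = field(default_factory=frozenset)  # approved uses, e.g. {"hr-review"}
    consent_basis: str = "unspecified"                       # e.g. "contract", "consent"
    provenance: Provenance | None = None


def can_agent_use(record: DataRecord, requested_purpose: str) -> bool:
    """The enforcement layer (gap 3): the agent may already be able to reach the
    record; this check asks whether it should use it for this specific purpose."""
    if record.provenance is None or not record.provenance.validated:
        return False  # no traceable, validated lineage (gap 2)
    if record.consent_basis == "unspecified":
        return False  # unknown usage and consent conditions (gap 1)
    return requested_purpose in record.purposes  # purpose-bound use (gap 1)


# Example: an HR note swept up by a file-share connector is withheld from chat answers.
hr_note = DataRecord(
    content="Draft performance review for ...",
    purposes=frozenset({"hr-review"}),
    consent_basis="contract",
    provenance=Provenance("fileshare", "hr-team", validated=True),
)
print(can_agent_use(hr_note, "chat-assistant-answer"))  # False
print(can_agent_use(hr_note, "hr-review"))              # True
```

In practice this logic would sit in a governance layer between the connector and the data source, but the principle is the same: the decision is made per record and per purpose, not per system.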

Get control before a breach

As the use of AI connectors expands, security teams must treat them as a new class of integration – one that demands closer scrutiny and stronger controls than most current systems provide.

Taking action now means more than adjusting permissions or issuing internal guidance. It requires building a coordinated approach across security, data, and AI teams to define clear policies for how AI tools interact with enterprise systems. That includes setting thresholds for what types of data can be exposed, introducing monitoring to track usage patterns, and ensuring there is accountability for how data is accessed and used.

Organizations should also revisit their data architecture. It’s no longer enough to control who can access a system – there must be a way to define how each piece of data can be used, under what conditions, and by which systems or agents. This requires metadata, context, and usage policies that are attached to the data itself and can be interpreted automatically.
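
As one illustration of that idea, the sketch below – again hypothetical, with assumed names such as UsagePolicy and ConnectorGateway – shows a policy carried with each asset and interpreted automatically at the boundary between a connector and an AI agent: the same query releases some records and withholds others, and every decision is logged for audit.

```python
# Hypothetical sketch of a policy-aware gateway between an AI connector and a data source.
from dataclasses import dataclass


@dataclass(frozen=True)
class UsagePolicy:
    """Machine-readable conditions that travel with a piece of data."""
    allowed_purposes: tuple[str, ...]
    allowed_destinations: tuple[str, ...]  # e.g. ("internal",) vs ("internal", "external-llm")


@dataclass(frozen=True)
class Asset:
    uri: str
    body: str
    policy: UsagePolicy


class ConnectorGateway:
    """Filters what an AI agent receives, based on each asset's own policy."""

    def __init__(self, purpose: str, destination: str) -> None:
        self.purpose = purpose
        self.destination = destination

    def fetch(self, assets: list[Asset]) -> list[Asset]:
        released = []
        for asset in assets:
            ok = (self.purpose in asset.policy.allowed_purposes
                  and self.destination in asset.policy.allowed_destinations)
            # Every decision is logged so usage can be audited later.
            print(f"{asset.uri}: {'release' if ok else 'withhold'} "
                  f"(purpose={self.purpose}, destination={self.destination})")
            if ok:
                released.append(asset)
        return released


# Example: the same query releases a product FAQ but withholds board minutes.
gateway = ConnectorGateway(purpose="customer-support", destination="external-llm")
gateway.fetch([
    Asset("share://faq.md", "...", UsagePolicy(("customer-support",), ("internal", "external-llm"))),
    Asset("share://board-minutes.docx", "...", UsagePolicy(("board-reporting",), ("internal",))),
])
```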

Platforms like IndyKite are designed to support this model, enabling fine-grained, machine-readable governance that travels with the data. Without this level of precision, AI will continue to operate in gray areas that traditional controls can’t effectively manage.

This isn’t just about managing one integration point. It’s about building a foundation for AI to interact safely across systems. As more teams embed AI into operational workflows, the ability to enforce data usage policies consistently – across tools, departments, and external partners – will be essential. 

Organizations that invest now in data-level governance and contextual controls will be better positioned to scale AI securely and responsibly. 

Learn more about securing your AI by downloading the AI Security playbook.
