A newly patched ChatGPT vulnerability has revealed how even a small oversight can expose deeper layers of cloud infrastructure. The issue is now fixed, but the details show how close an attacker could have come to reaching the internal Azure systems that power the AI platform.
The discovery came from Jacob Krut, a bug bounty hunter and security engineer at Open Security, who found the weakness while building a custom GPT. He was testing how custom versions of ChatGPT connect to outside services, a feature that depends on user-supplied URLs to tell the AI which external APIs to call.
The flaw appeared because ChatGPT was not validating these URLs tightly enough. That simple gap opened the door to server-side request forgery (SSRF). An SSRF attack tricks a system into making requests on the attacker's behalf, and those requests can reach internal endpoints the attacker should never see.
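The pattern behind this class of bug is easy to sketch. The snippet below is a minimal, hypothetical illustration, not OpenAI's actual code: a validator that resolves a user-supplied URL's hostname and refuses private, loopback, and link-local destinations, the category that includes the Azure metadata address.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_action_url(url: str) -> bool:
    """Resolve the host and reject internal or link-local destinations."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        # Resolve every address the name maps to; an attacker can point a
        # public DNS name at an internal IP.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False  # covers 169.254.169.254, the Azure IMDS address
    return True

print(is_safe_action_url("http://169.254.169.254/metadata"))  # False (not HTTPS)
print(is_safe_action_url("https://169.254.169.254/"))         # False (link-local)
print(is_safe_action_url("https://example.com/"))             # True, if DNS resolves publicly
```

Even this check is only part of a real defense: because the name is resolved once for the check and again for the fetch, a hardened implementation also pins the validated IP at connection time to block DNS rebinding.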
Krut used this window to reach a local Azure endpoint linked to the Instance Metadata Service (IMDS). IMDS is a sensitive part of Microsoft's cloud: it hands out identity tokens and configuration data, and it connects cloud applications to other internal resources.
By hitting IMDS, Krut could access the Azure identity token used by ChatGPT. That token is powerful because it authenticates the system to other cloud services. An attacker who obtained it could potentially reach internal Azure resources inside OpenAI's environment.
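To see why that endpoint matters, it helps to look at what a token request to IMDS looks like. The sketch below uses Microsoft's documented managed-identity token endpoint; it only returns anything when run from inside an Azure VM or container with a managed identity, and the public report does not disclose which resource or identity Krut's forged request actually reached.

```python
import json
import urllib.request

# Azure IMDS listens on a fixed link-local address and, per Microsoft's
# documentation, issues OAuth tokens for the machine's managed identity.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01"
    "&resource=https%3A%2F%2Fmanagement.azure.com%2F"
)

# IMDS requires the "Metadata: true" header, a deliberate hurdle for naive
# SSRF: exploitation needs enough control over the forged request to
# include or forward this header.
req = urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})

with urllib.request.urlopen(req, timeout=2) as resp:  # succeeds only inside Azure
    token = json.load(resp)["access_token"]

# Anyone holding this bearer token can call Azure APIs as that identity.
print(token[:16] + "...")
```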
This link from a simple URL validation mistake to a cloud identity token shows why SSRF is so dangerous. A small gap can turn into a massive potential breach.
Krut reported his findings through OpenAI's bug bounty program on Bugcrowd. OpenAI rated the flaw as high severity and patched it quickly. Whether he received a reward remains unknown. Earlier this year, OpenAI announced rewards of up to $100,000 for critical bugs, yet recent payouts have averaged under $800, and the highest public reward since May has been $5,000.
Security experts say this case highlights how fast SSRF can escalate. Christopher Jess from Black Duck said this is a textbook example of how a tiny validation miss can turn into a cloud-level issue. He explained that SSRF remains in the OWASP Top 10 because a single crafted request can hit metadata endpoints, privileged identities, and internal services.
This incident shows that even the world's most advanced AI tools depend on layers of cloud components. When one of those layers is exposed, the impact can spread deeper than expected. It also underscores why cloud security must evolve as fast as AI itself: a minor bug in a framework can open a path to cloud resources that power millions of AI interactions.
The patched flaw is now a reminder that every feature must be tested with the same level of scrutiny as the model itself. ChatGPT continues to evolve, and so does the importance of securing the systems behind it. When a ChatGPT vulnerability appears, the stakes are high, because the platform sits at the intersection of AI, cloud identity, and global access.