xAI Security Breach Exposes Private API Key to SpaceX and Tesla Language Models

· 1 min read

article picture

A security breach at Elon Musk's artificial intelligence company xAI has exposed sensitive API credentials that could have allowed unauthorized access to private large language models (LLMs) containing internal data from SpaceX, Tesla, and Twitter/X.

The leak occurred when an xAI technical staff member accidentally published an API key on GitHub, where it remained exposed for approximately two months. Security researchers discovered the key provided access to over 60 private and fine-tuned LLM models, including unreleased versions of the Grok chatbot and specialized models like "tweet-rejector" and "grok-spacex-2024-11-04."

Philippe Caturegli, chief hacking officer at Seralys, first identified the leak, which was subsequently analyzed by security firm GitGuardian. The exposed credentials remained active even after GitGuardian alerted the xAI employee on March 2, continuing to grant access until April 30 when xAI's security team was directly notified.

"The exposed models appear to be fine-tuned using proprietary SpaceX and Tesla data," said Eric Fourrier from GitGuardian, emphasizing these were not intended for public access.

Security experts warn that unauthorized access to private LLMs could enable malicious activities including prompt injection attacks and supply chain compromises. The incident raises concerns about xAI's security practices and credential management protocols.

The leak comes amid increased scrutiny of AI deployment in government sectors, particularly as Musk's Department of Government Efficiency (DOGE) integrates AI tools across federal agencies. While no evidence suggests government data was compromised through this specific leak, the incident highlights potential risks in handling sensitive information through AI systems.

xAI has not responded to requests for comment about the security breach. The GitHub repository containing the exposed API key has since been removed.