Two critical vulnerabilities published on December 23, 2025 affect LangChain Python (CVE-2025-68664, CVSS 9.3) and JavaScript (CVE-2025-68665, CVSS 8.6). Both are serialization injection flaws in the dumps(), dumpd(), and toJSON() methods.

The Vulnerability

User-controlled data containing ‘lc’ keys is treated during deserialization as legitimate LangChain objects rather than as plain user data, which can lead to remote code execution.

What makes this dangerous:
Prompt injection can make the LLM itself craft the exploit payload.
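
To make that concrete, here is the shape of LangChain's serialization envelope and why a dict that merely mimics it is dangerous. This is an illustrative sketch, not a working exploit; the class path in the attacker-shaped dict is a placeholder, not a real gadget chain.

```python
from langchain_core.load import dumps
from langchain_core.messages import HumanMessage

# What langchain-core itself emits: a JSON envelope keyed on "lc".
print(dumps(HumanMessage(content="hi"), pretty=True))
# Roughly:
# {
#   "lc": 1,
#   "type": "constructor",
#   "id": ["langchain", "schema", "messages", "HumanMessage"],
#   "kwargs": {"content": "hi", ...}
# }

# The problem: a plain dict that *looks* like that envelope. On vulnerable
# versions, if attacker-shaped data like this ends up inside serialized
# output, deserialization will try to revive it as a constructor call
# instead of keeping it as inert user data.
attacker_shaped = {
    "lc": 1,
    "type": "constructor",
    "id": ["some", "importable", "module", "SomeClass"],  # placeholder path
    "kwargs": {"anything": "the attacker wants"},
}
```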

The Attack Chain

Normally, defense in depth includes input validation. But in LLM applications, no traditional input validation prevents this attack chain (a sketch of the vulnerable serialization pattern follows the list):

  1. Attacker sends prompt injection to your chatbot
  2. LLM generates a response containing the malicious ‘lc’ key structure
  3. Your application serializes that response using LangChain methods
  4. Deserialization treats the LLM output as a LangChain object → code execution
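
A minimal sketch of the application pattern that makes steps 3 and 4 dangerous, assuming an app that persists model output with langchain-core's own dumps()/loads() helpers. The function names and the caching scenario are illustrative assumptions, not part of LangChain's API.

```python
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

def cache_response(msg: AIMessage) -> str:
    # Step 3: the LLM's response is serialized as-is. If prompt injection
    # convinced the model to emit an "lc"-keyed structure (for example
    # inside tool-call arguments or structured output), that structure is
    # now embedded in this JSON string.
    return dumps(msg)

def read_cached_response(blob: str) -> AIMessage:
    # Step 4: on vulnerable versions, loads() revives anything shaped like
    # a LangChain constructor envelope, treating LLM-generated data as a
    # trusted object graph.
    return loads(blob)
```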

Traditional security tools miss this completely. WAFs see valid HTTPS requests. Input filters see normal prompts. The weaponized payload exists only after the LLM has generated its response.

The Blind Spot

The malicious code isn’t in the user input. The LLM generates it.

This is where traditional AppSec approaches fail. You can validate user inputs all day long, but if the LLM itself generates malicious output, your input validation never sees it.

Patching and Protection

Patched versions: langchain-core 0.3.81 and 1.2.5 (Python), @langchain/core 0.3.80 and 1.1.8 (JavaScript).
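
A quick way to check whether a Python environment is already on a patched release. This is a sketch using importlib.metadata and the packaging library; the version floors simply mirror the patched releases listed above.

```python
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("langchain-core"))

# Patched floors per the advisory: 0.3.81 on the 0.3.x line, 1.2.5 on 1.x.
if installed.major >= 1:
    patched = installed >= Version("1.2.5")
else:
    patched = installed >= Version("0.3.81")

print(f"langchain-core {installed}: {'patched' if patched else 'UPGRADE NEEDED'}")
```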

If you’re running LangChain in production, you need:

  • Immediate patching to fixed versions
  • Output validation, not just input validation (see the sketch after this list)
  • Runtime monitoring of LLM behavior
  • Audit trails of serialization operations
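
For the output-validation bullet, one pragmatic interim control is to refuse to serialize LLM-derived structures that carry the reserved ‘lc’ key. A minimal sketch; the helper name and the rejection policy are assumptions, not a LangChain API.

```python
from typing import Any

def contains_lc_envelope(value: Any) -> bool:
    """Recursively check LLM-derived data for dicts carrying an 'lc' key."""
    if isinstance(value, dict):
        if "lc" in value:
            return True
        return any(contains_lc_envelope(v) for v in value.values())
    if isinstance(value, (list, tuple)):
        return any(contains_lc_envelope(v) for v in value)
    return False

# Example: screen structured output or tool arguments before they are
# serialized or persisted anywhere that will later be deserialized.
llm_output = {"summary": "...", "metadata": {"lc": 1, "type": "constructor"}}
if contains_lc_envelope(llm_output):
    raise ValueError("Refusing to serialize LLM output containing an 'lc' envelope")
```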

A New Category of Risk

This is CWE-502 (Deserialization of Untrusted Data) meeting OWASP LLM01 (Prompt Injection). Traditional AppSec approaches aren’t enough for LLM applications.

When your security model assumes user input is the threat, but the real threat is what your AI generates, you have a fundamental gap.

This vulnerability isn’t an edge case. It’s a preview of what happens when LLM outputs become attack vectors that traditional security tools can’t see.