Prompt Injection
11 entries
Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution
Cybersecurity researchers have discovered a vulnerability in Google's agentic integrated development environment (IDE), Antigravity, that could be exploited to achieve code execution. The flaw, since...
Claude SDK for TypeScript: Memory Tool Path Validation Allows Sandbox Escape to Sibling Directories
The local filesystem memory tool in the Anthropic TypeScript SDK validated model-supplied paths using a string prefix check that did not append a trailing path separator. A model steered by prompt inj...
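The bug described here is the classic missing-trailing-separator prefix check. A minimal Python sketch of the pattern (the actual SDK is TypeScript, and the root path and function names below are hypothetical):

```python
import posixpath

MEMORY_ROOT = "/srv/agent/memories"  # hypothetical sandbox root

def is_allowed_naive(path: str) -> bool:
    # Flawed check: a bare string-prefix test. A sibling directory such as
    # "/srv/agent/memories-secrets" also starts with MEMORY_ROOT, so it passes.
    return posixpath.normpath(path).startswith(MEMORY_ROOT)

def is_allowed_fixed(path: str) -> bool:
    # Fixed check: compare against the root with a trailing separator
    # (or the root itself), so sibling directories are rejected.
    resolved = posixpath.normpath(path)
    return resolved == MEMORY_ROOT or resolved.startswith(MEMORY_ROOT + "/")

# "/srv/agent/memories-secrets/key.txt" passes the naive check
# but fails the fixed one.
```

A prompt-injected model only needs to emit a sibling path like `/srv/agent/memories-secrets/...` to escape the naive check.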
LangChain serialization injection vulnerability enables secret extraction
## Context A serialization injection vulnerability exists in LangChain JS's `toJSON()` method (and subsequently when stringifying objects using `JSON.stringify()`). The method did not escape objects...
LangChain serialization injection vulnerability enables secret extraction in dumps/loads APIs
## Summary A serialization injection vulnerability exists in LangChain's `dumps()` and `dumpd()` functions. The functions do not escape dictionaries with `'lc'` keys when serializing free-form dictio...
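The failure mode in both serialization entries is the same marker-confusion pattern: LangChain tags its own serialized objects with an `'lc'` key, so a free-form dictionary that is serialized without escaping is indistinguishable from a real serialized object on the way back in. A deliberately simplified Python sketch (not LangChain's actual implementation; the helper names and the reconstruction step are hypothetical):

```python
import json

def naive_dumps(obj) -> str:
    # Flawed pattern: user-supplied dicts are serialized verbatim, with no
    # escaping of payloads that carry the {"lc": ...} object marker.
    return json.dumps(obj)

def naive_loads(blob: str):
    data = json.loads(blob)
    if isinstance(data, dict) and data.get("lc") == 1:
        # A real deserializer would reconstruct an object here, potentially
        # resolving secrets (e.g. API keys) from the environment. Because the
        # marker was attacker-controlled data, the attacker drives this path.
        return ("RECONSTRUCTED_OBJECT", data.get("id"))
    return data

# Attacker-controlled "plain data" that round-trips as an object instruction:
attacker_dict = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
```

The fix is to escape or namespace the `'lc'` marker when serializing free-form dictionaries, so data can never masquerade as a serialized object.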
LlamaIndex vulnerable to Creation of Temporary File in Directory with Insecure Permissions
A vulnerability in the `default_jsonalyzer` function of the `JSONalyzeQueryEngine` in the run-llama/llama_index repository allows for SQL injection via prompt injection. This can lead to arbitrary fil...
PYSEC-2024-115
A vulnerability in the GraphCypherQAChain class of langchain-ai/langchain-community version 0.2.5 allows for SQL injection through prompt injection. This vulnerability can lead to unauthorized data ma...
Langchain SQL Injection vulnerability
A vulnerability in the GraphCypherQAChain class of langchain-ai/langchain version 0.2.5 allows for SQL injection through prompt injection. This vulnerability can lead to unauthorized data manipulation...
Langchain SQL Injection vulnerability
In Langchain before 0.0.247, prompt injection allows execution of arbitrary code against the SQL service provided by the chain.
Langchain Server-Side Request Forgery vulnerability
In Langchain before 0.0.329, prompt injection allows an attacker to force the service to retrieve data from an arbitrary URL, essentially providing SSRF and potentially injecting content into downstre...
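When a chain fetches whatever URL the model names, a prompt injector can steer it toward internal endpoints (e.g. cloud metadata services). A minimal sketch of an allow-list mitigation, assuming a hypothetical allow-list and helper name (not LangChain's actual fix):

```python
import urllib.parse

ALLOWED_HOSTS = {"docs.example.com"}  # hypothetical allow-list for this sketch

def is_fetch_allowed(url: str) -> bool:
    # Reject model-supplied URLs outside an explicit scheme + host allow-list,
    # so an injected response cannot point the chain at internal services
    # such as http://169.254.169.254/ metadata endpoints.
    parsed = urllib.parse.urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

Validating the parsed scheme and hostname, rather than doing substring matching on the raw URL string, avoids bypasses like `https://docs.example.com.evil.net/`.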
LangChain vulnerable to code injection
In LangChain through 0.0.131, the `LLMMathChain` chain allows prompt injection attacks that can execute arbitrary code via the Python `exec()` method.
PYSEC-2023-18
In LangChain through 0.0.131, the `LLMMathChain` chain allows prompt injection attacks that can execute arbitrary code via the Python `exec()` method.
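The pattern behind these last two entries is passing model output straight to `exec()`. A minimal sketch of the failure mode (not LangChain's actual code; the function name is hypothetical):

```python
def eval_math_expression(llm_output: str) -> str:
    # Vulnerable pattern: the model's output is trusted as Python source.
    # Anything a prompt injection makes the model emit is executed here.
    local_vars: dict = {}
    exec(f"result = {llm_output}", {}, local_vars)
    return str(local_vars["result"])

# Benign use:
eval_math_expression("2 + 2")  # "4"
# An injected "expression" chosen by the attacker runs os code, not math:
eval_math_expression("__import__('os').getcwd()")
```

Later LangChain versions replaced `exec()`-based evaluation with a restricted numeric expression evaluator, which is the general remedy: never feed model output to `exec()`/`eval()`.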