He Junjie
**Description:** Upon reviewing the code related to file writing in `superagi/tools/code/write_code.py` and `superagi/resource_manager/file_manager.py`, a potential security vulnerability has been identified. The system allows Large Language Models (LLMs) to generate both...
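For illustration only, a minimal sketch of the kind of check the report is asking for. The `safe_write` helper, the `ALLOWED_EXTENSIONS` set, and the directory layout are assumptions, not SuperAGI's actual `FileManager` API; the point is that LLM-supplied file names are rejected if they contain path components or unexpected extensions before anything is written to disk.

```python
import os

# Hypothetical hardening sketch, not the project's real write path.
ALLOWED_EXTENSIONS = {".py", ".txt", ".md"}  # placeholder allowlist

def safe_write(base_dir: str, file_name: str, content: str) -> str:
    # Reject separators and parent-directory components so the LLM cannot
    # steer the write outside the intended resource directory.
    if os.path.basename(file_name) != file_name or ".." in file_name:
        raise ValueError(f"unsafe file name: {file_name!r}")
    if os.path.splitext(file_name)[1] not in ALLOWED_EXTENSIONS:
        raise ValueError(f"disallowed extension: {file_name!r}")

    path = os.path.join(base_dir, file_name)
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(content)
    return path
```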
file: `engineer/engineer.py`

## Description

In the current implementation of `_edit_repo_file()`, no security checks are performed on:

1. The file path being modified
2. The content being written to the file

This...
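A minimal containment sketch for the first point, assuming a hypothetical `resolve_inside_repo` helper rather than the actual `_edit_repo_file()` body: the target path is resolved and then required to stay under the repository root.

```python
from pathlib import Path

def resolve_inside_repo(repo_root: str, relative_path: str) -> Path:
    root = Path(repo_root).resolve()
    target = (root / relative_path).resolve()
    try:
        # relative_to() raises ValueError when target escapes the repo root,
        # e.g. via "../" components or an absolute path from the model.
        target.relative_to(root)
    except ValueError:
        raise PermissionError(f"path escapes repository: {relative_path!r}") from None
    return target
```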
**File Path**: `LLM-VM/src/llm_vm/agents/REBEL/utils.py`

**Relevant Code Line**:

```python
resp = (requests.get if tool["method"] == "GET" else requests.post)(**tool_args)
```

## Vulnerability Description

In the `tool_api_call` function located in `llm_vm/agents/REBEL/utils.py`, the API request...
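Since `**tool_args` is built from tool configuration and model output, the request target itself is attacker-influenced. A hedged sketch of one mitigation, with a placeholder `ALLOWED_HOSTS` allowlist and a hypothetical `guarded_tool_call` wrapper (not the REBEL code):

```python
from urllib.parse import urlparse

import requests

ALLOWED_HOSTS = {"api.example.com"}  # placeholder allowlist

def guarded_tool_call(tool: dict, tool_args: dict):
    # Validate scheme and host before dispatching a request built from
    # tool-supplied keyword arguments, to reduce SSRF-style abuse.
    parsed = urlparse(tool_args.get("url", ""))
    if parsed.scheme not in ("http", "https") or parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"request to {tool_args.get('url')!r} is not allowed")
    method = requests.get if tool["method"] == "GET" else requests.post
    return method(**tool_args)
```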
## Description

file: `Reptyl/reptyl/reptyl.py`

In the `reptyl.py` file, the code directly executes commands or scripts generated by a large language model (LLM), stored in the `reply` variable, without performing any safety...
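To make the concern concrete, here is an illustrative sketch (not Reptyl's actual implementation): `reply` is an LLM-produced shell command, and executing it through a shell gives the model the same privileges as the user running the tool. One minimal mitigation is an explicit confirmation step before execution.

```python
import subprocess

def run_reply(reply: str) -> None:
    # `reply` is untrusted model output.
    print(f"Model proposes: {reply}")
    # Require an explicit opt-in before the command is actually run.
    if input("Run this command? [y/N] ").strip().lower() != "y":
        return
    subprocess.run(reply, shell=True, check=False)
```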
## Problem Description

In the file `GeniA/genia/llm_function/python_function.py`, the `evaluate` method directly executes user-configured Python classes and methods via reflection, without any filtering or security checks.

## Risk Analysis

1. Arbitrary Code Execution:...
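As an illustration of the kind of filtering the report has in mind (the `safe_evaluate` helper and `ALLOWED_TARGETS` entries are assumptions, not GeniA's code), dynamic dispatch can be restricted to an explicit allowlist instead of importing and calling whatever module/attribute name appears in the configuration:

```python
import importlib

ALLOWED_TARGETS = {
    ("math", "sqrt"),  # placeholder entries
}

def safe_evaluate(module_name: str, attr_name: str, *args, **kwargs):
    # Only targets that were reviewed and allowlisted can be resolved.
    if (module_name, attr_name) not in ALLOWED_TARGETS:
        raise PermissionError(f"{module_name}.{attr_name} is not an allowed target")
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)(*args, **kwargs)
```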
Description:

**Affected File:** `/home/hejunjie/llm_web_serve/servers1/speechless/speechless/infer/ollama/osh.py`

**Vulnerability:** The `osh.py` script, designed to translate natural language prompts into shell commands using Ollama, includes a `-y` (or `--yes`) command-line argument. When this flag is...
## Description

There is a potential **command injection** vulnerability in the application where text generated by the Large Language Model (LLM) is directly used in a `subprocess.call` command without proper...
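A hedged sketch of the safer pattern (the program name and the `run_fixed_tool` wrapper are placeholders, not the application's actual call site): keep the executable fixed and pass model-derived text as an argument list, never as a shell-interpreted string.

```python
import subprocess

def run_fixed_tool(llm_text: str) -> subprocess.CompletedProcess:
    # Unsafe pattern the report describes (do not do this):
    #   subprocess.call(f"some_tool {llm_text}", shell=True)
    argv = ["echo", llm_text]  # placeholder program; arguments are not shell-parsed
    return subprocess.run(argv, shell=False, capture_output=True, text=True)
```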
## Description

In the `llm_agents` project, a potential security vulnerability has been identified where content generated by a Large Language Model (LLM) is directly executed without sufficient filtering or sandboxing...
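For context on the sandboxing point, a minimal isolation sketch under the assumption that the agent wants to run LLM-generated Python (the `run_generated_code` helper is hypothetical): running the code in a separate interpreter with a timeout limits, but does not eliminate, the blast radius compared to `exec()` in the agent process.

```python
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: float = 5.0) -> subprocess.CompletedProcess:
    # Write the untrusted code to a temporary file and run it out of process.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
        fh.write(code)
        path = fh.name
    try:
        return subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env/site
            capture_output=True, text=True, timeout=timeout_s,
        )
    finally:
        os.unlink(path)
```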
**Description:** In the file, within the `Runner` class, specifically in the `run_code` method (approximately lines 119–182), we identified a potential vulnerability where commands generated by a Large Language Model (LLM) are...
The `initiate_chat()` method returns a string generated by the LLM. If the LLM is maliciously manipulated or inadvertently generates a string containing executable Python code (e.g., `__import__('os').system('rm -rf /')`), using...
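A short sketch of the defensive alternative, under the assumption that the caller wants structured data back from the chat result (the `parse_chat_result` helper is hypothetical): `ast.literal_eval` accepts only Python literals, so a reply such as `__import__('os').system('rm -rf /')` raises an error instead of executing.

```python
import ast

def parse_chat_result(reply: str):
    try:
        # Only literals (strings, numbers, tuples, lists, dicts, ...) parse.
        return ast.literal_eval(reply)
    except (ValueError, SyntaxError):
        # Anything that is not a plain literal is treated as untrusted text.
        return reply
```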