feat: 🎸 Major TUI update

Add TUI multi-line input; optimize prompts; optimize scripts.
This commit is contained in:
Grey_D 2023-04-12 14:22:03 +08:00
parent 80386c6c42
commit 641df8dc85
6 changed files with 201 additions and 87 deletions

View File

@ -1,5 +1,5 @@
# PentestGPT
v0.1, 09/04/2023
v0.2, 12/04/2023
## Introduction
**PentestGPT** is a penetration testing tool empowered by **ChatGPT**. It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
@ -15,14 +15,13 @@ The project is still in its early stage. Feel free to raise any issues when usin
## Examples
## Usage
1. To start, run `python3 main.py`.
2. The tool works similarly to *msfconsole*. Follow the guidance to perform penetration testing.
3. In general, PentestGPT takes commands in the same way as ChatGPT.
- To enter multi-line input in the terminal, use <Enter> for a new line and <Shift+Right-Arrow> to submit.
- The selection bar allows you to choose from pre-defined options.
## Development
- [x] Add chunk processing (04/03/2023)
- [ ] Add prompt optimization
- [ ] Test scenarios beyond web testing
## Design Documentation
The current design is mainly for web penetration testing.
@ -43,8 +42,14 @@ The handler is the main entry point of the penetration testing tool. It allows p
2. Pass a webpage content.
3. Pass a human description.
## Update history
### v0.2
- A major update to improve terminal usage.
- Prompt optimization.
### System Design
More details in `PentestGPT_design.md`.
#### Logic Flow Design
1. User initializes all the sessions. (**prompt**)

View File

@ -19,7 +19,7 @@ Each time you receive an update (you can imagine that the information is from th
1. Decide to remove some tasks if they are considered as completed.
2. Decide to add a new task if there's something interesting.
3. Give a score to each subtask, showing whether it can lead to a potential vulnerability.
4. Recommend what to do next based on the scores when asked about to-do, listed in sequence as 1, 2, 3, ...
4. Add a dividing line (--------). Then recommend what to do next based on the scores when asked about to-do, listed in sequence as 1, 2, 3, ...
Note that you should keep the tasks clear, precise, and short due to the token size limit. You should remember to remove redundant/outdated tasks for the same reason.
I'll provide the general penetration test information to you shortly."""
input_parsing_init: str = """I want you to help penetration testers by summarizing the contents from web pages and security tool outputs. For a given piece of content, you should summarize the key information precisely. In particular,
@ -36,14 +36,15 @@ Note that you do not need to include post-exploitation and other steps to mainta
You should provide it in a way as if you're asking another penetration tester to execute it. You should always provide the concrete IP address as target"""
process_results: str = """Here's the test summary from the penetration tester. Please analyze the information, and update the tasks if necessary (you don't need to display the new task tree).
After this, please give one task for the tester to do next.\n"""
After this, please give one task for the tester to do next.\n\n"""
ask_todo: str = """Please think about the previous information step by step, and analyze the information.
Then, please list the most possible sub-tasks (no more than 2) that you think we should proceed to work on next."""
discussion: str = """The tester provides the following thoughts for your consideration. Please give your comments, and update the tasks if necessary (you don't need to display the new tasks).\n"""
discussion: str = """The tester provides the following thoughts for your consideration. Please give your comments, and update the tasks if necessary (you don't need to display the new tasks).\n\n"""
# generation session
todo_to_command: str = """You're asked to explain the following tasks to a junior penetration tester.
Please provide the command to execute, or the GUI operations to perform. You should always provide the concrete IP address as target.
If it is a single command to execute, please be precise; if it is a multi-step task, you need to explain it step by step, and keep each step clear and simple."""
You're provided with a long input from the supervisor GPT model. You should ignore the task list and focus only on the last section, where the supervisor provides the next command to execute.
Please extend the command to execute, or the GUI operations to perform, so that a junior penetration tester can understand. You should always provide the concrete IP address as target.
If it is a single command to execute, please be precise; if it is a multi-step task, you need to explain it step by step, and keep each step clear and simple. The information is below: \n\n"""

View File

@ -7,4 +7,5 @@ requests
loguru
beautifulsoup4~=4.11.2
colorama
rich
rich
prompt-toolkit

View File

@ -101,7 +101,9 @@ class ChatGPT:
result = json.loads(last_line[5:])
return result
def send_new_message(self, message):
def send_new_message(self, message, model=None):
if model is None:
model = self.model
# Send a message in a new conversation window and return the conversation id
logger.info("send_new_message")
url = "https://chat.openai.com/backend-api/conversation"
@ -116,7 +118,7 @@ class ChatGPT:
}
],
"parent_message_id": str(uuid1()),
"model": self.model,
"model": model,
}
start_time = time.time()
message: Message = Message()
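The `model=None` parameter added to `send_new_message` follows a common default-fallback pattern: a caller may override the model per call (as the new `gpt-4` reasoning session does), otherwise the instance-wide default applies. A minimal stdlib-only sketch of the pattern, with an illustrative stand-in class rather than the real `ChatGPT` client:

```python
class FakeChatClient:
    """Illustrative stand-in for a chat API client with a per-call model override."""

    def __init__(self, model="default-model"):
        self.model = model  # instance-wide default model

    def send_new_message(self, message, model=None):
        # Fall back to the instance default when no model is given.
        if model is None:
            model = self.model
        # Build the request payload with the resolved model name.
        payload = {"action": "next", "model": model, "message": message}
        return payload


client = FakeChatClient()
print(client.send_new_message("hi")["model"])                  # → default-model
print(client.send_new_message("hi", model="gpt-4")["model"])   # → gpt-4
```

This keeps one client class serving both sessions; only the call site decides which model handles a given conversation.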

View File

@ -2,9 +2,10 @@
from config.chatgpt_config import ChatGPTConfig
from rich.spinner import Spinner
from utils.chatgpt import ChatGPT
from rich.prompt import Prompt
from rich.console import Console
from prompts.prompt_class import PentestGPTPrompt
from utils.prompt_select import prompt_select, prompt_ask
from prompt_toolkit.formatted_text import HTML
import loguru
import time, os, textwrap
@ -13,16 +14,33 @@ logger = loguru.logger
logger.add(sink="logs/pentest_gpt.log")
def prompt_continuation(width, line_number, wrap_count):
"""
The continuation: display line numbers and '->' before soft wraps.
Notice that we can return any kind of formatted text from here.
The prompt continuation doesn't have to be the same width as the prompt
which is displayed before the first line, but in this example we choose to
align them. The `width` input that we receive here represents the width of
the prompt.
"""
if wrap_count > 0:
return " " * (width - 3) + "-> "
else:
text = ("- %i - " % (line_number + 1)).rjust(width)
return HTML("<strong>%s</strong>") % text
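The formatting rule inside `prompt_continuation` can be exercised without a terminal. Below is a plain-string sketch of the same logic (using `str` instead of prompt_toolkit's `HTML` wrapper, so it runs headless):

```python
def continuation_text(width, line_number, wrap_count):
    """Plain-string version of prompt_continuation: '-> ' on soft wraps,
    a right-aligned '- N - ' marker at the start of each new hard line."""
    if wrap_count > 0:
        # Soft wrap: pad so the arrow lines up with the prompt width.
        return " " * (width - 3) + "-> "
    # New hard line: 1-based line number, right-justified to the prompt width.
    return ("- %i - " % (line_number + 1)).rjust(width)


print(repr(continuation_text(8, 0, 1)))  # → '     -> '
print(repr(continuation_text(8, 1, 0)))  # → '  - 2 - '
```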
class pentestGPT:
postfix_options = {
"default": "The user did not specify the input source. You need to summarize based on the contents.\n",
"user-comments": "The input content is from user comments.\n",
"tool": "The input content is from a security testing tool. You need to list down all the points that are interesting to you; you should summarize it as if you are reporting to a senior penetration tester for further guidance.\n",
"user-comments": "The input content is from user comments.\n",
"web": "The input content is from web pages. You need to summarize the readable-contents, and list down all the points that can be interesting for penetration testing.\n",
"default": "The user did not specify the input source. You need to summarize based on the contents.\n",
}
def __init__(self):
self.chatGPTAgent = ChatGPT(ChatGPTConfig())
self.chatGPT4Agent = ChatGPT(ChatGPTConfig(model="gpt-4"))
self.prompts = PentestGPTPrompt
self.console = Console()
self.spinner = Spinner("line", "Processing")
@ -41,12 +59,12 @@ class pentestGPT:
text_0,
self.test_generation_session_id,
) = self.chatGPTAgent.send_new_message(
self.prompts.generation_session_init
self.prompts.generation_session_init,
)
(
text_1,
self.test_reasoning_session_id,
) = self.chatGPTAgent.send_new_message(
) = self.chatGPT4Agent.send_new_message(
self.prompts.reasoning_session_init
)
(
@ -55,43 +73,14 @@ class pentestGPT:
) = self.chatGPTAgent.send_new_message(self.prompts.input_parsing_init)
except Exception as e:
logger.error(e)
def _ask(self, text="> ", multiline=True) -> str:
"""
A handler for Prompt.ask. It can intake multiple lines. Ideally for tool outputs and web contents
Parameters
----------
text : str, optional
The prompt text, by default "> "
multiline : bool, optional
Whether to allow multiline input, by default True
Returns
-------
str
The user input
"""
if not multiline:
return self.console.input(text)
response = [self.console.input(text)]
while True:
try:
user_input = self.console.input("")
response.append(user_input)
except EOFError:
break
except KeyboardInterrupt:
break
response = "\n".join(response)
return response
self.console.print("- ChatGPT Sessions Initialized.", style="bold green")
def reasoning_handler(self, text) -> str:
# summarize the contents if necessary.
if len(text) > 8000:
text = self.input_parsing_handler(text)
# pass the information to reasoning_handler and obtain the results
response = self.chatGPTAgent.send_message(
response = self.chatGPT4Agent.send_message(
self.prompts.process_results + text, self.test_reasoning_session_id
)
return response
@ -113,7 +102,7 @@ class pentestGPT:
for wrapped_input in wrapped_inputs:
word_limit = f"Please ensure that the input is less than {8000 // len(wrapped_inputs)} words.\n"
summarized_content += self.chatGPTAgent.send_message(
prefix + word_limit + text, self.input_parsing_session_id
prefix + word_limit + wrapped_input, self.input_parsing_session_id
)
return summarized_content
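`input_parsing_handler` splits oversized input into chunks and asks the parsing session to summarize each within a per-chunk word budget. The chunking itself is plain `textwrap`; here is a stdlib-only sketch where the summarizer is a stub standing in for the ChatGPT session, so it runs offline:

```python
import textwrap


def summarize_in_chunks(text, chunk_size=8000, summarize=lambda s: s[:40]):
    """Split long text into ~chunk_size-character chunks and summarize each.

    `summarize` is a stub for the ChatGPT parsing session; the default just
    truncates its input so the sketch has no network dependency.
    """
    chunks = textwrap.wrap(text, chunk_size)
    # Budget words evenly across chunks (cf. 8000 // len(wrapped_inputs)).
    word_limit = chunk_size // max(len(chunks), 1)
    summary = ""
    for chunk in chunks:
        # In PentestGPT this would be a send_message() call to the session.
        request = f"Keep the summary under {word_limit} words.\n" + chunk
        summary += summarize(request)
    return summary


# Ten 2-char words, chunked at width 10 -> four chunks, one "S" each.
print(summarize_in_chunks("ab " * 10, chunk_size=10, summarize=lambda s: "S"))  # → SSSS
```

Note that `textwrap.wrap` breaks on whitespace, so chunks end at word boundaries rather than exactly at `chunk_size` characters.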
@ -133,35 +122,39 @@ class pentestGPT:
response: str
The response from the chatGPT model.
"""
request_option = Prompt.ask(
"> How can I help? 1)Input results 2)Todos, 3)Other info, 4)End",
choices=["1", "2", "3", "4"],
default="1",
request_option = prompt_select(
title="> Please select your options with cursor: ",
values=[
("1", HTML('<style fg="cyan">Input test results</style>')),
("2", HTML('<style fg="cyan">Ask for todos</style>')),
("3", HTML('<style fg="cyan">Discuss with PentestGPT</style>')),
("4", HTML('<style fg="cyan">Exit</style>')),
],
)
# pass output
if request_option == "1":
## (1) pass the information to input_parsing session.
self.console.print(
"Please describe your findings briefly, followed by the codes/outputs. End with EOF."
)
## Give a option list for user to choose from
options = list(self.postfix_options.keys())
options_str = "\n".join(
[f"{i+1}) {option}" for i, option in enumerate(options)]
value_list = [
(i, HTML(f'<style fg="cyan">{options[i]}</style>'))
for i in range(len(options))
]
source = prompt_select(
title="Please choose the source of the information.", values=value_list
)
source = Prompt.ask(
f"Please choose the source of the information. \n{options_str}",
choices=list(str(x) for x in range(1, len(options) + 1)),
default=1,
self.console.print(
"Your input: (End with <shift + right-arrow>)", style="bold green"
)
user_input = self._ask("> ", multiline=True)
parsed_input = self.input_parsing_handler(
user_input, source=options[int(source) - 1]
)
## (2) pass the summarized information to the reasoning session.
reasoning_response = self.reasoning_handler(parsed_input)
## (3) pass the reasoning results to the test_generation session.
generation_response = self.test_generation_handler(reasoning_response)
user_input = prompt_ask("> ", multiline=True)
with self.console.status("[bold green] PentestGPT Thinking...") as status:
parsed_input = self.input_parsing_handler(
user_input, source=options[int(source)]
)
## (2) pass the summarized information to the reasoning session.
reasoning_response = self.reasoning_handler(parsed_input)
## (3) pass the reasoning results to the test_generation session.
generation_response = self.test_generation_handler(reasoning_response)
## (4) print the results
self.console.print(
"Based on the analysis, the following tasks are recommended:",
@ -178,11 +171,12 @@ class pentestGPT:
# ask for sub tasks
elif request_option == "2":
## (1) ask the reasoning session to analyze the current situation, and list the top sub-tasks
reasoning_response = self.reasoning_handler(self.prompts.ask_todo)
## (2) pass the sub-tasks to the test_generation session.
message = self.prompts.todo_to_command + "\n" + reasoning_response
generation_response = self.test_generation_handler(message)
## (3) print the results
with self.console.status("[bold green] PentestGPT Thinking...") as status:
reasoning_response = self.reasoning_handler(self.prompts.ask_todo)
## (2) pass the sub-tasks to the test_generation session.
message = self.prompts.todo_to_command + "\n" + reasoning_response
generation_response = self.test_generation_handler(message)
## (3) print the results
self.console.print(
"Based on the analysis, the following tasks are recommended:",
style="bold green",
@ -198,10 +192,13 @@ class pentestGPT:
# pass other information, such as questions or some observations.
elif request_option == "3":
## (1) Request for user multi-line input
self.console.print("Please input your information. End with EOF.")
user_input = self._ask("> ", multiline=True)
self.console.print("Please share your thoughts/questions with PentestGPT.")
user_input = prompt_ask(
"(End with <shift + right-arrow>) Your input: ", multiline=True
)
## (2) pass the information to the reasoning session.
response = self.reasoning_handler(self.prompts.discussion + user_input)
with self.console.status("[bold green] PentestGPT Thinking...") as status:
response = self.reasoning_handler(self.prompts.discussion + user_input)
## (3) print the results
self.console.print("PentestGPT:\n", style="bold green")
self.console.print(response + "\n", style="yellow")
@ -211,6 +208,10 @@ class pentestGPT:
response = False
self.console.print("Thank you for using PentestGPT!", style="bold green")
else:
self.console.print("Please select a valid option.", style="bold red")
response = self.input_handler()
return response
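On an invalid option the handler re-invokes itself recursively, which grows the call stack on repeated bad input. An equivalent iterative sketch (function and parameter names are hypothetical, not part of PentestGPT):

```python
def ask_option(read=input, valid=("1", "2", "3", "4")):
    """Re-prompt until the user supplies a valid option, using a loop
    instead of recursion so repeated bad input cannot exhaust the stack."""
    while True:
        choice = read()
        if choice in valid:
            return choice
        print("Please select a valid option.")


# Simulated user: first an invalid key, then a valid one.
replies = iter(["9", "2"])
print(ask_option(read=lambda: next(replies)))  # → 2
```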
def main(self):
@ -221,26 +222,27 @@ class pentestGPT:
self.initialize()
# 1. User firstly provide basic information of the task
init_description = Prompt.ask(
"Please describe the penetration testing task in one line, including the target IP, task type, etc."
init_description = prompt_ask(
"Please describe the penetration testing task in one line, including the target IP, task type, etc.\n> ",
multiline=False,
)
## Provide the information to the reasoning session for the task initialization.
init_description = self.prompts.task_description + init_description
prefixed_init_description = self.prompts.task_description + init_description
with self.console.status(
"[bold green] Generating Task Information..."
) as status:
_response = self.reasoning_handler(init_description)
_response = self.reasoning_handler(prefixed_init_description)
self.console.print("- Task information generated. \n", style="bold green")
# 2. Reasoning session generates the first thing to do and provide the information to the generation session
with self.console.status("[bold green]Processing...") as status:
first_todo = self.reasoning_handler(self.prompts.first_todo)
first_generation_response = self.test_generation_handler(
self.prompts.todo_to_command + first_todo
self.prompts.todo_to_command + self.prompts.first_todo
)
# 3. Show user the first thing to do.
self.console.print(
"PentestGPT suggests you to do the following: ", style="bold green"
)
self.console.print(first_todo)
self.console.print(_response)
self.console.print("You may start with:", style="bold green")
self.console.print(first_generation_response)

utils/prompt_select.py Normal file
View File

@ -0,0 +1,103 @@
from __future__ import unicode_literals
from prompt_toolkit.application import Application
from prompt_toolkit.key_binding.defaults import load_key_bindings
from prompt_toolkit.key_binding.key_bindings import KeyBindings, merge_key_bindings
from prompt_toolkit.layout import Layout
from prompt_toolkit.widgets import RadioList, Label
from prompt_toolkit.layout.containers import HSplit
from prompt_toolkit.formatted_text import HTML
from prompt_toolkit.shortcuts import prompt
def prompt_continuation(width, line_number, wrap_count):
"""
The continuation: display line numbers and '->' before soft wraps.
Notice that we can return any kind of formatted text from here.
The prompt continuation doesn't have to be the same width as the prompt
which is displayed before the first line, but in this example we choose to
align them. The `width` input that we receive here represents the width of
the prompt.
"""
if wrap_count > 0:
return " " * (width - 3) + "-> "
else:
text = ("- %i - " % (line_number + 1)).rjust(width)
return HTML("<strong>%s</strong>") % text
def prompt_select(title="", values=None, style=None, async_=False):
# Add exit key binding.
bindings = KeyBindings()
@bindings.add("c-d")
def exit_(event):
"""
Pressing Ctrl-d will exit the user interface.
"""
event.app.exit()
@bindings.add("s-right")
def exit_with_value(event):
"""
Pressing Shift + Right-Arrow will exit the user interface, returning the selected value.
"""
event.app.exit(result=radio_list.current_value)
radio_list = RadioList(values)
application = Application(
layout=Layout(HSplit([Label(title), radio_list])),
key_bindings=merge_key_bindings([load_key_bindings(), bindings]),
mouse_support=True,
style=style,
full_screen=False,
)
if async_:
return application.run_async()
else:
return application.run()
def prompt_ask(text, multiline=True) -> str:
"""
A custom prompt function that adds a key binding to accept the input.
In single line mode, the end key can be [shift + right-arrow], or [enter].
In multiline mode, the end key is [shift + right-arrow]. [enter] inserts a new line.
"""
kb = KeyBindings()
if multiline:
@kb.add("enter")
def _(event):
event.current_buffer.insert_text("\n")
@kb.add("s-right")
def _(event):
event.current_buffer.validate_and_handle()
return prompt(
text,
multiline=multiline,
prompt_continuation=prompt_continuation,
key_bindings=kb,
)
if __name__ == "__main__":
print("Test case below")
print("This is a multi-line input. Press [shift + right-arrow] to accept input. ")
answer = prompt_ask("Multiline input: ", multiline=True)
print("You said: %s" % answer)
# With HTML.
request_option = prompt_select(
title="> Please key in your options: ",
values=[
("1", HTML('<style fg="cyan">Input test results</style>')),
("2", HTML('<style fg="cyan">Ask for todos</style>')),
("3", HTML('<style fg="cyan">Discuss with PentestGPT</style>')),
("4", HTML('<style fg="cyan">Exit</style>')),
],
)
print("Result = {}".format(request_option))