AI Interaction Hacks: Tips and Tricks for Crafting Effective Prompts
AI, especially Large Language Models (LLMs), has seen rapid and significant growth, and everyone is excitedly hopping on the hype train. People have run plenty of interesting and innovative experiments to unlock the potential of what AI can do, and one common goal is boosting productivity. At HackerOne, we are also encouraged to experiment with AI, and when it comes to LLMs specifically, there is a point where things become more art than science: prompting.
What Is a Prompt?
A prompt is an instruction that you give to an LLM to retrieve the information you need or to have the LLM perform the task you’d like it to do. There is a lot we can do with LLMs, and a lot of information we can get simply by asking a question. We do have to keep in mind that an LLM is not a silver bullet for everything (it is bad at math, for instance). So how can we actually make sure we get the answer we expect? That is the challenge!
How Can We Write Effective Prompts?
1. Be clear and specific
The more specific and clear you are in your prompt, the better the model will understand what to do. Don’t be vague. Be direct and concise. For example, a good prompt would be “Summarize the key findings of the following article in 150 words or less,” while a less effective one might be, “This is a very long article and I want to know only the important things. Can you point them out but make sure to not make it too long?”
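To show what this looks like in practice, here is a minimal sketch that sends the clear prompt to a model. It assumes the OpenAI Python SDK and the gpt-4o model name purely for illustration; any chat-style client and model can be substituted.

# Minimal sketch: send a clear, specific prompt to an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

article = "..."  # the article you want summarized goes here

clear_prompt = (
    "Summarize the key findings of the following article in 150 words or less:\n\n"
    + article
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": clear_prompt}],
)
print(response.choices[0].message.content)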
2. Provide context
LLMs like (chat)GPT, Claude, Titan, among others, are trained on very large datasets that consist mostly of public information. This means they lack specific knowledge or context about private or internal domains; for example, only inside HackerOne does “HackerOne Assessments” mean Pentest-As-A-Service. There are a few ways to write a prompt, with or without context:
- Zero-Shot Prompt: tends to be direct and doesn’t provide any context. E.g.: “Generate an appropriate title that describes the following security vulnerability.”
- One-Shot Prompt: includes some context along with the instruction. In the example below, we ask the AI for a remediation suggestion and provide context about what the report describes: “The report below describes a security vulnerability where an XSS was found on the asset xyz.com. Please provide the remediation guidance for this report.”
- Few-Shot Prompt: Similar to One-Shot Prompting, but we give the AI a couple more examples (a code sketch follows this list):
"The report below describes a security vulnerability found by a hacker. Extract the following details from the report:
- CWE id of the security vulnerability (example: CWE-79)
- CVE id of the security vulnerability (example: CVE-2021-44228)
- Vulnerable host (example: xyz.com)
- Vulnerable endpoint (example: /endpoint)
- The technologies used by the affected software"
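To make this concrete, here is a minimal sketch that assembles the extraction prompt above and sends it to a model. As before, the OpenAI Python SDK, the gpt-4o model name, and the build_extraction_prompt helper are assumptions for illustration only.

# Sketch: build the few-shot extraction prompt above and send it to an LLM.
# The OpenAI SDK and model name are assumptions; swap in whichever client you use.
from openai import OpenAI

client = OpenAI()

FIELDS = [
    "CWE id of the security vulnerability (example: CWE-79)",
    "CVE id of the security vulnerability (example: CVE-2021-44228)",
    "Vulnerable host (example: xyz.com)",
    "Vulnerable endpoint (example: /endpoint)",
    "The technologies used by the affected software",
]

def build_extraction_prompt(report_text: str) -> str:
    """Combine the instruction, the example-annotated field list, and the report."""
    field_lines = "\n".join(f"- {field}" for field in FIELDS)
    return (
        "The report below describes a security vulnerability found by a hacker. "
        "Extract the following details from the report:\n"
        f"{field_lines}\n\nReport:\n{report_text}"
    )

report = "..."  # paste the vulnerability report here
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": build_extraction_prompt(report)}],
)
print(response.choices[0].message.content)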
Generally, the more examples you give, the better the results will be. The model gains more context about your domain and can therefore understand your intention better. More examples also reduce ambiguity and steer the model toward more accurate and relevant responses. In other words, it is like adjusting the settings on a camera to capture the perfect shot: you are making sure the AI focuses on your specific needs.
Crafting effective prompts requires testing and is typically an iterative process. My suggestion would be to start by experimenting with a variety of prompts to gauge the AI’s responses. A good prompt tends to yield accurate, relevant, and coherent responses that stay on the topic you care about. If the response you get is off-topic or inaccurate, that is a pretty good indicator that you need to adjust your prompt. Rephrase it, make it more specific, be clearer, or provide additional context until you achieve the desired results. Keep refining your prompts until they meet your expectations!
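One lightweight way to run this kind of iteration (a sketch, not a prescribed workflow) is to keep your prompt variants in a list, run them against the same input, and compare the responses side by side. The client and model name are the same assumptions as in the earlier sketches.

# Sketch: compare several prompt variants against the same input.
# The OpenAI client and model name are assumptions.
from openai import OpenAI

client = OpenAI()

article = "..."  # the same input text for every variant

prompt_variants = [
    "Tell me the important things in this article:\n\n" + article,
    "Summarize the key findings of the following article in 150 words or less:\n\n" + article,
    "List the 3 most important findings of the following article as bullet points:\n\n" + article,
]

for i, prompt in enumerate(prompt_variants, start=1):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Variant {i} ---")
    print(response.choices[0].message.content)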