Note: Howdy, Sarah here! This page summarizes current practices and prompting tricks to improve LLM output.

I originally compiled it based on the prompt engineering portion of the 2.5 hour “Application Development Using Large Language Models” tutorial given by Andrew Ng and OpenAI’s Isa Fulford at NeurIPS (Dec 11, 2023), and I’ve added several enrichments and examples since. While the talk is not available online, these notes hopefully provide a good, quick overview :)

The page has since grown with contributions from Swyx, Jerry Liu, and Brian Huang.

*Note: these are also good tips for regular human communication 😉*

Tip #1: Write clear and specific instructions

Give detailed context for the problem. Reducing ambiguity reduces the likelihood of irrelevant or incorrect outputs.
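As an illustration of the contrast, here is a minimal sketch (plain prompt strings; the model call is omitted, and the review text is made up for the example):

```python
# A vague instruction leaves the model to guess the audience, focus, and format.
vague_prompt = "Summarize this review."

# A specific instruction supplies context, scope, and output constraints.
specific_prompt = (
    "Summarize the customer review below for a product team.\n"
    "Focus only on complaints about shipping; ignore praise.\n"
    "Output: 3 bullet points, each under 15 words.\n\n"
    "Review: The blender works great, but it arrived two weeks late "
    "and the box was crushed."
)

print(specific_prompt)
```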

You can also use delimiters to clearly mark distinct parts of the input. For example: section titles, triple quotes, triple backticks, triple dashes, angle brackets, or “####”.

Specify the desired format or length of the output. You can also ask the model to adopt a persona. For example:

“Pretend you’re a creative writer”

“Respond in roughly two sentences.”
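Putting the two examples above together, here is a sketch using the OpenAI-style chat message format, with the persona and length constraint placed in the system message (the actual API call is omitted):

```python
# Persona and output-length constraint go in the system message;
# the task itself goes in the user message.
messages = [
    {
        "role": "system",
        "content": "Pretend you're a creative writer. "
                   "Respond in roughly two sentences.",
    },
    {
        "role": "user",
        "content": "Describe a rainy city street.",
    },
]

# These messages would then be passed to a chat completion endpoint.
print(messages[0]["content"])
```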