Prompt engineering

Last updated: 2023-06-18

This page collects tips for prompt engineering: ways to improve how you work with LLMs.

Using LLMs well is often about reducing available information, not increasing it

GPT-4 has been trained on most(?) of the internet. This means it has access to huge amounts of information, most likely including the information you are looking for. Consequently, the useful context you provide often serves not to add signal, but to filter out noise.

Some ways you might achieve this:

  • Ask for responses in a particular voice, e.g. "How would [expert X] solve this?"
  • Provide context about yourself and the domain in which your problem exists.
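The two techniques above can be sketched as a small prompt builder. This is a minimal illustration, not any particular library's API; the function name and field layout are assumptions made for the example:

```python
def build_prompt(question: str, persona: str = "", context: str = "") -> str:
    """Assemble a prompt that narrows the model's focus.

    Rather than adding new facts, the persona and context act as
    filters: they point the model at the relevant slice of what it
    already knows.
    """
    parts = []
    if persona:
        # Voice/persona framing, e.g. "How would [expert X] solve this?"
        parts.append(f"Answer as {persona} would.")
    if context:
        # Background about you and your problem's domain.
        parts.append(f"Context: {context}")
    parts.append(question)
    return "\n\n".join(parts)


# Hypothetical usage: the persona and context strings are invented examples.
prompt = build_prompt(
    question="Why is my query slow?",
    persona="an experienced PostgreSQL DBA",
    context="A web app on Postgres 15; the table has 50M rows.",
)
print(prompt)
```

Note that the question alone would also be a valid prompt; the extra fields just narrow the search space.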

Thread on Assorted Ways I Use LLMs for Programming

See also