AI SUMMARY: Prompting is hard, AIs can help

Eliciting Human Preferences with Language Models. Li et al. 2023

Prompting is hard. An AI can help you prompt it better.

This paper creates a method (GATE) where the AI and user have a bit of back-and-forth to co-create the prompt.
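That back-and-forth can be pictured as a simple elicitation loop: the model asks questions, the user answers, and the accumulated transcript becomes the task specification. This is a minimal sketch of the idea, not the paper's implementation; `query_model` and `answer_fn` are hypothetical stand-ins for an LLM call and the human user.

```python
def elicit_preferences(query_model, answer_fn, max_turns=3):
    """Toy GATE-style loop: the model asks questions, the user answers,
    and the transcript of Q&A pairs becomes the task specification."""
    transcript = []
    for _ in range(max_turns):
        question = query_model(transcript)   # model generates the next question
        if question is None:                 # model decides it has enough info
            break
        answer = answer_fn(question)         # user (here, a stub) answers
        transcript.append((question, answer))
    # Downstream, the transcript is prepended to task inputs as context.
    return transcript

# Stubbed example: a fixed question list in place of a real LLM.
questions = iter(["Do you like sci-fi?", "Long or short articles?"])
model = lambda transcript: next(questions, None)
user = lambda q: "yes" if "sci-fi" in q else "short"
spec = elicit_preferences(model, user)
# spec == [("Do you like sci-fi?", "yes"), ("Long or short articles?", "short")]
```

The point of the loop is that the user only ever reacts to concrete questions, never has to author a specification from scratch.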

Accuracy. GATE matches or outperforms both example-giving and user-written prompts as input to GPT-4 on two tasks: content recommendation and email validation. For moral reasoning, an AI-led yes/no back-and-forth outperformed the open-ended one.

Usability. Using GATE is no more mentally demanding than writing prompts by hand, and sometimes less.

— — —

CONTEXT

AIs do stuff.

We need to tell them to do stuff.

What they do, and how well they do it, depends on how we ask.

Typically we ask in two ways: give it a bunch of examples and ask it to extrapolate, or give it an open-ended instruction (a prompt).

Example-giving is tough where there are many edge cases.

Prompts demand that a user know what to ask up front and know what the AI expects.

Human recognition memory is roughly 30% better than recall memory. It makes sense to let the AI ask us questions, so we can lean on recognition rather than recall.

— — —

Simon's speculative YES And:

  • Lots of opportunity to refine for specific domains: not only maximising performance but also minimising the time it takes to elicit the prompt.

  • Open question as to whether this leads to user preference shift over time. We can expect prompting to drift users towards phrasing that is more machine-predictable, as has happened in most other domains. Will the drift differ between GATE and user-written prompting?

— — —

Source:

— — —

This is a human summary from a single pass of the paper. By a very fallible human.

The intent here is to give a principal component of the core concept presented.

I emphasise general usefulness, not technical novelty, or even technical usefulness.

Use at your own risk.
