Sander Schulhoff

AI Security Researcher and Leading AI Red Teaming Expert

Sander Schulhoff is a leading researcher in adversarial robustness who runs the biggest AI red teaming competition, works with frontier AI labs on model defenses, and teaches the leading course on AI security.

Dimension Profile

Strategic Vision 66%
Execution & Craft 59%
Data & Experimentation 82%
Growth & Distribution 29%
Team & Leadership 30%
User Empathy & Research 42%

Key Themes

  • Prompt engineering is real and matters
  • Self-criticism techniques
  • 200 prompting techniques
  • Prompt injection and red teaming
  • Bad prompts vs. good prompts impact
  • HackAPrompt competition

Episode Summary

Sander Schulhoff, the OG prompt engineer, shares findings from The Prompt Report, the most comprehensive study of prompt engineering to date, which analyzed 1,500 papers and identified 200 techniques. He demonstrates why prompt engineering matters enormously (a bad prompt can score 0% on a task where a good prompt scores 90%), shares the most effective techniques such as self-criticism, and explains prompt injection and red teaming.

Leadership Principles

  • Bad prompts can score 0% on a problem while good prompts can reach 90%, so prompt engineering matters
  • Self-criticism techniques, in which the LLM is asked to check and improve its own response, are highly effective
  • Prompt engineering isn't dead and won't die with the next model; it keeps being important

Notable Quotes

"Studies have shown that using bad prompts can get you down to 0% on a problem, and good prompts can boost you up to 90%. People will always be saying 'it's dead' but then it comes out and it's not."

— On why prompt engineering continues to matter despite claims of its death

"A set of techniques we call self-criticism: you ask the LLM to check its response, criticize itself, and then improve itself."

— On one of the most effective prompting techniques from analyzing 1,500 papers
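To make the technique concrete, here is a minimal sketch of a self-criticism loop in Python. The `llm` callable is a hypothetical placeholder for whatever chat-completion API you use (prompt in, text out), and the prompts are illustrative; the generate/critique/revise structure follows the description in the quote above, not any specific implementation from The Prompt Report.

```python
from typing import Callable

def self_criticize(llm: Callable[[str], str], task: str, rounds: int = 2) -> str:
    """Generate, critique, and revise an answer using the same model.

    `llm` is a hypothetical stand-in for any chat-completion call;
    wire it to your provider of choice.
    """
    # Step 1: produce an initial draft answer.
    answer = llm(f"Answer the following task:\n{task}")
    for _ in range(rounds):
        # Step 2: ask the model to criticize its own draft.
        critique = llm(
            f"Task:\n{task}\n\nDraft answer:\n{answer}\n\n"
            "Check this answer for mistakes, omissions, and unclear "
            "reasoning. List concrete problems."
        )
        # Step 3: ask the model to revise the draft using that critique.
        answer = llm(
            f"Task:\n{task}\n\nDraft answer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\n"
            "Rewrite the answer, fixing every problem in the critique."
        )
    return answer
```

In practice, one or two critique rounds typically capture most of the improvement; each additional round costs another pair of model calls.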
