// AI · Prompt Engineering · 2025

Interview Prep
GPT Library

A three-GPT system built for the full interview prep journey: research the role, generate practice questions, then coach your answers. Each GPT has a single job and hands off cleanly to the next.

OpenAI Custom GPTs Prompt Engineering API Actions System Design Prompt Security Technical Documentation

Most interview prep tools try to do everything at once. This project goes the other direction: three focused GPTs, each responsible for one stage of the process, connected by structured handoffs.

The pattern is aggregation, drafting, then refinement. Keeping each GPT focused on a single task makes it more accurate and less likely to drift. The workflow runs by copying the output from one GPT into the next, which is called inter-chat prompt chaining.

Job & Company Researcher

Pulls live job listing and company overview from external sources

Interview Prep Question Generator

Converts research into 5 targeted practice questions grouped by type

Interview Response Coach

Coaches answers one at a time using a Strength / Gap / Fix loop

Job & Company Researcher
Entry point

Searches for a live job listing matching the user's title using Web Search, then pulls a company overview via the Wikipedia REST API and finds recent company news. Returns a single structured research summary designed to feed directly into the next GPT.

inJob title (optional: company name)
outJOB SNAPSHOT · COMPANY OVERVIEW · RECENT NEWS · HANDOFF NOTE
sourcesWikipedia REST API for company overviews, Web Search for live listings and news
Interview Prep Question Generator
Middle

Accepts either a raw job posting or the structured output from the Job & Company Researcher. Generates exactly 5 practice questions grouped by type: behavioral, technical, and role-specific. Each question includes a one-sentence tip on what the interviewer is likely looking for.

inResearch summary from the Job & Company Researcher, or a pasted job description
out5 questions: Behavioral (2) · Technical (2) · Role-specific (1)
Interview Response Coach
End point

Takes the practice questions and coaches your answers one at a time. Uses a Strength / Gap / Fix format to break down each answer: what worked, what was missing, and one specific suggestion. Closes with a full prep summary once all questions are covered.

inPractice questions + your draft answers
outStructured coaching feedback + closing PREP SUMMARY

The Job & Company Researcher was originally designed with two schema-based API Actions for job listings and news. Both ran into hard constraints in how OpenAI's Actions framework handles external requests, which pushed me to rethink the approach.

Attempt 1: The Muse API (job listings)

ResponseTooLargeError on every request

OpenAI's Actions framework enforces a response size limit, and job listing APIs embed raw HTML inside each result. Even a single listing exceeded the limit. Since there's no way to strip fields from the response on our end, this whole category of API turned out to be a poor fit for Actions.

Attempt 2: NewsAPI (recent company news)

User-Agent header requirement blocks every request

NewsAPI requires a User-Agent header with every request, and the OpenAI Actions framework has no way to send one. Every request came back with an authentication error even with a valid API key. It's a framework-level incompatibility with no workaround on our end.

What worked: Both Actions were replaced by Web Search, which pulls live results from LinkedIn, Indeed, and news outlets without any of the size or header constraints. I confirmed it was pulling genuinely live data by testing against Apple, Nike, and Patagonia listings. The Wikipedia REST API stayed in as the one confirmed schema-based Action, handling company overviews.

Each GPT uses the same core defense strategy, built in from the start rather than patched in after the fact. All three passed jailbreak, scope drift, and prompt leak tests on the first attempt.

alignment restatement Each GPT confirms its purpose before every response. Off-topic requests are declined before engaging, not blocked after the fact.
scope refusals Named behaviors the GPT will not perform are explicit in the system prompt. Off-topic requests redirect to the correct GPT by name.
jailbreak defense Role override attempts are refused without engaging. The GPT immediately redirects back to its core purpose.
prompt leak protection System prompt contents are not revealed. Requests to expose instructions are redirected using a fixed response.
anti-fabrication rules The Job & Company Researcher is explicitly forbidden from generating job listings, company data, news headlines, or dates from training memory.

The Response Coach came out of final testing clean because the lessons from the Question Generator's scope failure were already baked into the design. No reactive debugging needed.

Prompt security works a lot like code review: you find edge cases you didn't anticipate, patch them, and write down what changed. The alignment restatement pattern (confirming purpose before every response) ended up being more durable than a blocklist because it catches edge cases that aren't explicitly named.

The API compatibility issues ended up being the most useful part of the project to work through. Diagnosing why each integration failed, deciding what to do instead, and writing it down clearly is exactly the kind of problem-solving that comes up in production work. The documentation matters as much as the fix.