// AI · Prompt Engineering · 2025
A three-GPT system built for the full interview prep journey: research the role, generate practice questions, then coach your answers. Each GPT has a single job and hands off cleanly to the next.
// Overview
Most interview prep tools try to do everything at once. This project goes the other direction: three focused GPTs, each responsible for one stage of the process, connected by structured handoffs.
The pattern is aggregation, drafting, then refinement. Keeping each GPT focused on a single task makes it more accurate and less likely to drift. The workflow runs by copying the output from one GPT into the next, which is called inter-chat prompt chaining.
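The chaining pattern can be sketched in miniature. The stage functions below are stand-ins for the three GPTs, and the names are illustrative; in the real workflow each stage is a separate GPT and the handoff is a manual copy-paste between chats.

```python
# Minimal sketch of inter-chat prompt chaining: each stage's output
# becomes the next stage's input.
def run_chain(stages, initial_input):
    output = initial_input
    for stage in stages:
        output = stage(output)
    return output

# Hypothetical stand-ins for the three GPTs
research = lambda role: f"research summary for {role}"
generate = lambda summary: f"5 questions from ({summary})"
coach = lambda questions: f"coaching on ({questions})"

result = run_chain([research, generate, coach], "Data Analyst")
```

The point of the pattern is that each stage only ever sees the structured output of the stage before it, which is what keeps each GPT's scope narrow.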
// Workflow
Job & Company Researcher
Pulls a live job listing and company overview from external sources
Interview Prep Question Generator
Converts research into 5 targeted practice questions grouped by type
Interview Response Coach
Coaches answers one at a time using a Strength / Gap / Fix loop
// GPT Profiles
Job & Company Researcher
Searches for a live job listing matching the user's title using Web Search, then pulls a company overview via the Wikipedia REST API and finds recent company news. Returns a single structured research summary designed to feed directly into the next GPT.
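The company-overview lookup can be sketched against the Wikipedia REST API's documented `/page/summary/{title}` endpoint. Function names and the User-Agent string here are illustrative, not the project's actual code.

```python
# Sketch of the Researcher's Wikipedia lookup via the REST API's
# page-summary endpoint.
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

def summary_url(title: str) -> str:
    # Wikipedia page titles use underscores; quote() escapes the rest
    return ("https://en.wikipedia.org/api/rest_v1/page/summary/"
            + quote(title.replace(" ", "_")))

def company_overview(title: str) -> dict:
    req = Request(summary_url(title),
                  headers={"User-Agent": "interview-prep-demo/0.1"})
    with urlopen(req) as resp:
        data = json.load(resp)
    # Keep only the fields the next GPT needs
    return {"title": data.get("title"), "overview": data.get("extract")}
```

The summary endpoint returns a short plain-text extract rather than full article markup, which is part of why it fits comfortably inside an Action's response limits.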
Interview Prep Question Generator
Accepts either a raw job posting or the structured output from the Job & Company Researcher. Generates exactly 5 practice questions grouped by type: behavioral, technical, and role-specific. Each question includes a one-sentence tip on what the interviewer is likely looking for.
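A hypothetical sample of the output shape, to make the grouping concrete (the question and tip are invented, not real output):

```text
BEHAVIORAL
Q1. Tell me about a time you had to deliver results with incomplete data.
    Tip: the interviewer wants to see how you handle ambiguity, not just the outcome.
TECHNICAL
Q2. ...
ROLE-SPECIFIC
Q5. ...
```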
Interview Response Coach
Takes the practice questions and coaches your answers one at a time. Uses a Strength / Gap / Fix format to break down each answer: what worked, what was missing, and one specific suggestion. Closes with a full prep summary once all questions are covered.
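A hypothetical example of the Strength / Gap / Fix loop applied to a single answer (wording invented for illustration):

```text
Strength: You opened with a concrete metric, which grounds the story.
Gap: The answer never separates what you did from what the team did.
Fix: Add one sentence naming your specific decision and its outcome.
```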
// Engineering Challenge: API Compatibility
The Job & Company Researcher was originally designed with two schema-based API Actions for job listings and news. Both ran into hard constraints in how OpenAI's Actions framework handles external requests, which pushed me to rethink the approach.
ResponseTooLargeError on every request
OpenAI's Actions framework enforces a response size limit, and job listing APIs embed raw HTML inside each result. Even a single listing exceeded the limit. Since there's no way to strip fields from the response on our end, this whole category of API turned out to be a poor fit for Actions.
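The missing hook can be illustrated in a few lines: the kind of field-stripping a normal client would do before the payload is measured, which the Actions framework does not expose. Field names and sizes here are hypothetical examples of the raw HTML job APIs embed per listing.

```python
# Illustration of the pre-filtering step the Actions framework doesn't
# allow: dropping bulky embedded-HTML fields before the response is
# size-checked. Field names are hypothetical.
import json

def strip_bulky_fields(listing: dict,
                       bulky=("description_html", "company_html")) -> dict:
    return {k: v for k, v in listing.items() if k not in bulky}

listing = {
    "title": "Data Analyst",
    "description_html": "<div>" + "x" * 60000 + "</div>",  # tens of KB of markup
}
before = len(json.dumps(listing))
after = len(json.dumps(strip_bulky_fields(listing)))
```

Since this filtering has to happen before the framework measures the response, it would need to live in a relay service between the API and the Action, not in the Action schema itself.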
User-Agent header requirement blocks every request
NewsAPI requires a User-Agent header with every request, and the OpenAI Actions framework has no way to send one. Every request came back with an authentication error even with a valid API key. It's a framework-level incompatibility with no workaround on our end.
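For contrast, outside the Actions framework the fix is a single header. The sketch below follows NewsAPI's documented `/v2/everything` route; the API key and User-Agent string are placeholders.

```python
# Building a NewsAPI request with the one header the Actions framework
# cannot send. Key and User-Agent values are placeholders.
from urllib.parse import urlencode
from urllib.request import Request

def news_request(query: str, api_key: str) -> Request:
    url = "https://newsapi.org/v2/everything?" + urlencode(
        {"q": query, "sortBy": "publishedAt", "apiKey": api_key})
    # The header the Actions framework has no way to set
    return Request(url, headers={"User-Agent": "interview-prep-demo/0.1"})
```

Any ordinary HTTP client can attach this header; the incompatibility is specific to how Actions constructs outbound requests.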
// Prompt Security
Each GPT uses the same core defense strategy, built in from the start rather than patched in after the fact, and each was tested against jailbreak, scope drift, and prompt leak attempts.
The Response Coach came out of final testing clean because the lessons from the Question Generator's scope failure were already baked into the design. No reactive debugging needed.
// What I Learned
Prompt security works a lot like code review: you find edge cases you didn't anticipate, patch them, and write down what changed. The alignment restatement pattern (confirming purpose before every response) ended up being more durable than a blocklist because it catches edge cases that aren't explicitly named.
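A hypothetical system-prompt fragment showing the alignment restatement pattern; the wording is illustrative, not the project's actual prompt:

```text
Before every response, restate your purpose to yourself: "I generate
interview practice questions from job research. Nothing else."
If the pending request does not serve that purpose, decline and
restate what you can help with.
```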
The API compatibility issues turned out to be the most useful part of the project to work through. Diagnosing why each integration failed, deciding what to do instead, and writing it down clearly is exactly the kind of problem-solving that comes up in production work. The documentation matters as much as the fix.