How we tailor résumés without inventing anything

If you ask a large language model to "rewrite this résumé for this job," the first draft you get back will be a good résumé. It will also, in ~30% of cases, contain experience you never had.

In my first tests I watched a model give me four years at a company I'd worked at for two, promote me to "Staff" because the posting asked for a Staff engineer, and — my personal favourite — claim I'd "led a team of 7" when the only team I'd ever led was me and a flaky Raspberry Pi.

Three constraints, in order

To make honest tailoring possible, Forte's résumé builder runs under three constraints, in descending severity:

  1. Facts come from the profile, not the posting. The model is given your structured profile (companies, titles, dates, bullet summaries, skills) as the only source of facts. The posting is used for language and emphasis, not content.
  2. A vision-review pass catches what prompting misses. After pdflatex compiles the tailored résumé, a second GPT-4o call looks at the PDF as an image alongside your source profile and flags any claim that doesn't trace back to the source. If it finds one, we loop.
  3. The user sees the diff, every time. An "AI-generated — review before using" banner sits above every output. You can regenerate, edit, or discard.
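Constraint 1 comes down to how the prompt is assembled: facts and style arrive in clearly separated channels. Here is a minimal sketch of that separation. The function name, field names, and prompt wording are illustrative assumptions, not Forte's actual code.

```python
import json

# Illustrative system prompt enforcing the fact/style split (assumed wording).
SYSTEM_PROMPT = (
    "Rewrite the résumé bullets below. The PROFILE is the only source of "
    "facts: companies, titles, dates, and accomplishments must appear "
    "verbatim or as faithful paraphrases. Use the POSTING only to choose "
    "vocabulary and emphasis. Omission is fine; invention is not."
)

def build_tailoring_prompt(profile: dict, posting: str) -> list[dict]:
    """Assemble chat messages that separate facts (profile) from style (posting)."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": (
                "PROFILE (source of truth):\n"
                + json.dumps(profile, indent=2)
                + "\n\nPOSTING (language and emphasis only):\n"
                + posting
            ),
        },
    ]
```

The point of the structure is that the posting text never sits in a position where the model could mistake it for biography; it is explicitly labeled as style input.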

Why vision review beats text review

Early versions used a second text-only pass. It missed things the rendered PDF made obvious, like a badly wrapped bullet that, on the page, appeared to attribute an achievement to the wrong company. Once you render to PDF, reviewing the pixels is strictly better than reviewing the source: the pixels are what a recruiter sees.
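The review loop itself is simple control flow: generate, render, review, and fold any flagged claims back into the next generation. In the real system the reviewer is a GPT-4o call that sees the PDF as an image alongside the source profile; in this sketch the generator and reviewer are pluggable callables so the loop can be shown (and tested) without an API key. All names here are assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Review:
    ok: bool
    flagged: list[str]  # claims that don't trace back to the source profile

def tailor_with_review(
    generate: Callable[[list[str]], bytes],  # feedback from last review -> compiled PDF bytes
    review: Callable[[bytes], Review],       # rendered PDF -> vision review result
    max_rounds: int = 3,
) -> bytes:
    """Regenerate until the reviewer finds no claim it can't trace to the profile."""
    feedback: list[str] = []
    for _ in range(max_rounds):
        pdf = generate(feedback)
        result = review(pdf)
        if result.ok:
            return pdf
        feedback = result.flagged  # fold the flagged claims into the next prompt
    # After max_rounds, stop looping and hand the problem to the human.
    raise RuntimeError("review still failing after max_rounds; surface to user")
```

Capping the rounds matters: a reviewer that keeps flagging is a signal to show the user the flags, not to keep burning tokens.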

The restraint part

The hardest prompt engineering I've done on Forte isn't making the model write better bullets. It's making the model write fewer. If your résumé has a three-line bullet about Selenium and the posting is for a Playwright role, the right move is usually to shorten the Selenium line and add a short line citing the Playwright side project you actually did, not to invent Playwright experience. The model needs to be told, repeatedly: omission is fine; invention is not.

Keep reading

Why we score jobs instead of filtering them

Every job board promises a better filter. We think the filter is the wrong abstraction — and here's what we built instead.

A QA engineer's honest take on AI job tools

I've been a test automation engineer for six years. Here's what I think actually works — and what's hype — in the 2026 crop of AI job tools.