Foggy Find: Turning KQED News Into a Daily Puzzle (With a Little Help From Claude AI)

Background

KQED is Northern California’s leading public media organization: NPR and PBS member, daily journalism, podcasts, and a growing digital audience. Since 2021, Uptech Studio has been KQED’s mobile development partner. We rebuilt their iOS and Android apps in Flutter, added CarPlay and Android Auto, and keep shipping features that make the app a destination instead of just another stream.

One of those destinations is KQED Games. On the web that includes crosswords and a radio news quiz, but until recently the mobile app didn’t have any game content. Enter Foggy Find. The goal for games isn’t to be a sidecar. It’s to give people another way to spend time with KQED, to surface stories they might not have clicked on in the feed, and to make the news feel more approachable and repeatable. Foggy Find fits that by turning the day’s news into a word search. You’re not just solving a generic grid; you’re finding words that came from real KQED stories, and you can jump from the puzzle straight to the article. So the game drives both engagement and discovery. For KQED, it’s also an educational play: the content we surface has to be age-appropriate and maintain an uplifting tone, which meant building curation into the pipeline from the start.

The question we had to solve: how do you generate a new, high-quality puzzle every day that’s actually grounded in that day’s coverage?

The Challenge

A daily word-search game sounds simple until you think about the pipeline. Someone has to pick the stories, pick the words, build the grid, and ship it. Doing that by hand every day doesn’t scale. Doing it with a fully static or templated system would give you puzzles that feel generic and disconnected from the news. We needed something in between: automated enough to run every day without a human in the loop, but smart enough to reflect real editorial choices and real vocabulary from KQED’s stories.

We also had to fit into KQED’s existing stack and workflows. The puzzle had to feel native inside the Flutter app, work offline once the day’s puzzle was available, and play well with their CMS and content APIs. And because this is public media with a nonprofit budget, we had to be thoughtful about AI: transparent where it matters, reliable enough to run daily without manual intervention, cost-efficient, and aligned with KQED’s editorial and educational standards.

Our Approach

We designed Foggy Find so that the *content* of the puzzle is driven by KQED’s own journalism. Each day we pull in article content from the day’s coverage. From that set, we need to select which material is suitable for a broad, often younger audience, then turn it into a word list and grid that’s solvable and fun. That’s where Claude comes in, via a two-stage pipeline, not a single “generate a puzzle” call.

Stage 1: Content curation via sentiment analysis. Before we generate any words, we process article excerpts (single sentences) through Claude to perform sentiment analysis. The model evaluates each excerpt and assigns a quantified sentiment score. We use those scores, plus configurable thresholds, to decide which content makes it into the pool for that day’s puzzle. The goal is to keep the game uplifting and age-appropriate. No one wants heavy or distressing headlines in a casual word search. This stage also lets us track how Claude assesses content over time, so we can tune the thresholds without manually reviewing every sentence. For a nonprofit with limited editorial bandwidth, that automation is meaningful: we maintain quality standards while reducing manual review.
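To make the curation step concrete, here is a minimal sketch of threshold-based filtering. Everything here is illustrative: the field names, the score range, and the threshold value are assumptions, not KQED's actual configuration, and the sentiment scores are assumed to have already come back from Claude.

```python
# Hypothetical sketch of the curation stage. Score range and threshold
# are assumptions, not the production configuration.
from dataclasses import dataclass

@dataclass
class ScoredExcerpt:
    article_id: str
    text: str
    sentiment: float  # assume Claude returns a score in [-1.0, 1.0]

def curate(excerpts, min_sentiment=0.2):
    """Keep only excerpts whose sentiment clears the configurable threshold."""
    return [e for e in excerpts if e.sentiment >= min_sentiment]

excerpts = [
    ScoredExcerpt("a1", "Local bakery wins national award.", 0.8),
    ScoredExcerpt("a2", "Storm damage closes coastal highway.", -0.6),
    ScoredExcerpt("a3", "New mural brightens neighborhood park.", 0.5),
]
pool = curate(excerpts, min_sentiment=0.2)
# Only the uplifting excerpts ("a1", "a3") remain in the day's pool.
```

Because the threshold is just a parameter, tuning the tone of the game over time is a configuration change, not a code change.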

Stage 2: Dynamic word generation. Once we have a curated set of articles, we send that content to Claude again with a different job: generate vocabulary words optimized for the specific board configuration we're building. Foggy Find supports multiple grid sizes (8×8, 10×11, 17×14, and others), and the right word count and length distribution depend on the board. So we don't ask for "some words"; we ask for words that fit the dimensions, with length and count tuned so the puzzle is neither trivial nor impossible. That flexibility also lets KQED scale the same pipeline across different contexts and age groups later if they want to vary difficulty or format.
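A rough sketch of how board dimensions can drive word-list constraints. The preset names, density heuristic, and minimum length here are invented for illustration; the real tuning numbers are different.

```python
# Illustrative only: presets and tuning numbers are assumptions,
# not the production configuration.
BOARD_PRESETS = {
    "small": (8, 8),
    "medium": (10, 11),
    "large": (17, 14),
}

def word_constraints(rows, cols):
    """Derive word-list constraints from grid size so puzzles stay solvable."""
    max_len = max(rows, cols)          # a word must fit along a row, column, or diagonal
    target_count = (rows * cols) // 8  # rough density heuristic
    return {"max_len": max_len, "min_len": 3, "count": target_count}

def fits_board(words, rows, cols):
    """Validate a generated word list against the target board before building the grid."""
    c = word_constraints(rows, cols)
    return (len(words) <= c["count"]
            and all(c["min_len"] <= len(w) <= c["max_len"] for w in words))
```

Constraints like these can be folded into the prompt (so Claude generates words of the right shape) and also checked after the fact, so a malformed response never reaches the grid builder.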

Why Claude 4.5. We evaluated models with an eye on cost, prompt engineering effort, and performance. KQED operates on a nonprofit budget, so we needed something that could run daily at scale without breaking the bank. We also wanted a model that would behave predictably enough that we could rely on it in an automated pipeline—fewer surprise outputs, less ongoing prompt fine-tuning. Claude 4.5 fit: it delivered the consistency and quality we needed for sentiment scoring and word generation, with minimal prompt engineering to get to production, and at a cost that aligned with KQED’s constraints. We didn’t need the absolute highest capability; we needed the right capability for this use case.

Prompt engineering and reliability. We did extensive prompt development using Claude’s desktop application before wiring things into the backend. Iterating there let us nail down response format, guardrails (word length, no offensive terms, thematic consistency), and output structure so that when we moved to the API, we had predictable, parseable responses. That predictability is what makes the pipeline safe to automate: we know what we’re going to get, and we validate it before storing or serving. We also built in retries and fallbacks—if a run fails or produces something odd, we can retry or fall back to a prior day’s puzzle rather than show a broken experience. Treating the model as one step in a production system, not the whole system, was as important as choosing the model.
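The retry-and-fallback pattern described above can be sketched in a few lines. The function names (`generate_puzzle`, `validate`, `load_previous_puzzle`) are hypothetical stand-ins for the real pipeline steps, passed in as callables here to keep the sketch self-contained.

```python
# Sketch of the daily job's reliability wrapper. The three callables are
# hypothetical stand-ins for the real generation, validation, and storage steps.
def run_daily_job(generate_puzzle, validate, load_previous_puzzle, max_attempts=3):
    for _ in range(max_attempts):
        puzzle = generate_puzzle()
        if validate(puzzle):   # parse + guardrail checks on the model output
            return puzzle
    # Never ship a broken grid: fall back to the last known-good puzzle.
    return load_previous_puzzle()
```

The key design choice is that the model call sits behind validation: a bad response costs a retry, and three bad responses cost nothing worse than yesterday's puzzle.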

The rest of the feature is classic product and engineering: a Flutter UI that renders the grid, tracks progress, and links solved words to the corresponding article; a backend job that runs on a schedule, runs both pipeline stages, validates the response, and stores the puzzle; and integration with KQED’s content APIs so the puzzle and the linked stories stay in sync.

Results

KQED mobile app users get a new puzzle every day, tied to that day’s coverage and curated for tone and appropriateness. They can jump from the puzzle to the story each word came from. The two-stage pipeline runs without manual intervention: sentiment filters the pool, then word generation produces the grid for the chosen board size. Claude 4.5 handles both stages at a cost that fits a nonprofit budget.

For KQED, Foggy Find is another way to keep people inside the app and to surface journalism through a different, educational lens. The modular design also gives them a foundation to expand: the same pipeline could support additional board sizes, difficulty levels, or even other game formats later. For us, it was a concrete example of using a large language model in production for bounded, well-defined tasks—curation and generation—rather than one vague “make a puzzle” step. That distinction, plus the discipline of clear inputs, outputs, and validation at each stage, is one we’d apply again.

Takeaways

  • Split the job into stages. We didn’t ask Claude to do one big “make a puzzle” step. We split it into curation (sentiment on excerpts, configurable thresholds) and generation (words for a specific board size). Bounded tasks gave us predictable outputs and made the system easier to tune and explain.
  • Editorial and educational guardrails belong in the pipeline. Public media cares what goes in front of the audience. Sentiment analysis lets us keep the tone uplifting and age-appropriate without manually reviewing every sentence. That’s especially valuable for a nonprofit with limited editorial bandwidth.
  • Match the model to the problem and the budget. We chose Claude 4.5 for adequate performance, minimal prompt engineering, and cost that fits a nonprofit. You don’t always need the most capable model; you need the right one for the use case and the constraints.
  • Invest in prompt engineering and reliability. We iterated on prompts in Claude’s desktop app until we had consistent, parseable responses. Then we added validation, retries, and fallbacks so the pipeline could run daily without a human in the loop. The model is one step in a production system, not the whole system.
  • Games can drive discovery. Foggy Find isn’t just engagement for its own sake. Linking words to articles turns the puzzle into a path back into the journalism—more time in the app, more exposure to stories.

Foggy Find is one of several ways we've helped KQED differentiate their app and deepen engagement. It's also a strong example of AI for educational content creation: a pipeline that balances capability with cost and keeps editorial standards in the loop. If you're thinking about automated content pipelines, sentiment-based curation, or AI-assisted games in a resource-constrained environment, we'd be happy to talk about what we learned.

We make great products
Looking for a partner to help you create a successful business and amazing software products? Get in touch with Uptech Studio today.
Get Started