I invite you to upgrade to a paid subscription. Paid subscribers have told me they appreciate the programming projects and would like to see more of them in the future.
Hi, this is John with this week’s Coding Challenge. 🙏 Thank you for being a subscriber, I’m honoured to have you as a reader. 🎉 If there is a Coding Challenge you’d like to see, please let me know by replying to this email 📧

Coding Challenge #117 - AI Powered Support Bot

This challenge is to build your own AI-powered customer support bot - and then discover, the hard way, why production AI applications need more than just an API key and a system prompt.

This challenge was created in collaboration with Orq.ai, whose Router provides a single API across 400+ models from 20+ providers - with built-in fallbacks, cost routing, and observability. Free to start, with no markup on token costs.

Every developer has used a support bot. Most have opinions about them. In this challenge you’ll build one for a fictional version of Coding Challenges, giving it context about the available projects - Build Your Own Redis, Docker, Git and the rest - so it can answer questions about which challenge to tackle, what skills you’ll learn, how to get started, and general troubleshooting.

It starts simple. But step by step you’ll layer on the production concerns that real AI applications face: resilience when a provider goes down, observability so you know what’s happening, and cost routing so you’re not burning money on simple questions. By the time you’ve built all of that yourself, you’ll have a deep appreciation for what an AI gateway does - and you’ll see just how much code disappears when you use one.
The Challenge - Building Your Own AI Powered Support Bot

You’re going to build an AI customer support bot that answers questions about Coding Challenges projects. Along the way you’ll experience the real production pain points of working with LLMs - provider lock-in, reliability, observability, and cost - and then see what happens when you replace your hand-rolled infrastructure with a single gateway endpoint.

Step Zero

In this introductory step you’re going to set your environment up ready to begin developing and testing your solution. You’ll need to make a few decisions:
Testing: Make a simple API call to your chosen provider with a basic prompt like “Hello, who are you?” and verify you get a coherent response back. If that works, you’re ready to move on.

Step 1

In this step your goal is to build a working support bot using a single LLM provider. Build an interactive command-line application that takes user questions and responds using your chosen LLM. The bot should have a system prompt that includes the Coding Challenges context you downloaded in Step Zero, instructing it to act as a helpful customer support agent that answers questions based on that context.

Your bot should maintain a conversation history so follow-up questions work naturally. If a user asks “Which challenge should I start with?” and then follows up with “What will I learn from that one?”, the bot should understand what “that one” refers to.

Keep it simple. One provider, one model, one API key hardcoded (or read from an environment variable). No fallbacks, no logging, no clever routing. Just a bot that works.

Testing:
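As a sketch of what Step 1 can look like, here is a minimal loop using the OpenAI Python SDK. The model name, environment variable, and context filename are illustrative placeholders - substitute whichever provider and files you chose in Step Zero.

```python
# Minimal Step 1 sketch, assuming the OpenAI Python SDK. The model name,
# env var, and context file are placeholders from this sketch, not requirements.
import os

SYSTEM_PROMPT = ("You are a helpful customer support agent for Coding "
                 "Challenges. Answer questions using this context:\n\n{context}")

def new_conversation(context: str) -> list:
    """Start the history with the system prompt and context baked in."""
    return [{"role": "system", "content": SYSTEM_PROMPT.format(context=context)}]

def ask(client, history: list, question: str, model: str = "gpt-4o-mini") -> str:
    """Append the question, call the model, record and return the answer."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model=model, messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keeps follow-ups working
    return answer

if __name__ == "__main__":
    from openai import OpenAI  # pip install openai
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    with open("context.txt") as f:  # the context you prepared in Step Zero
        history = new_conversation(f.read())
    while True:
        question = input("> ")
        if question.strip().lower() in {"quit", "exit"}:
            break
        print(ask(client, history, question))
```

Keeping the history list as the single source of truth is what makes “What will I learn from that one?” resolve correctly - every call sees the full conversation so far.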
Step 2

In this step your goal is to add resilience by introducing a fallback to a second LLM provider. Imagine your primary provider goes down. Right now your bot is completely broken. To simulate this, temporarily use an invalid API key for your primary provider so every request fails.

Now fix it. Sign up for a second LLM provider and integrate their SDK alongside the first. When a request to the primary provider fails, your bot should automatically retry with the fallback provider. The user should get an answer either way.

This sounds straightforward, but pay attention to the friction. You now have two SDKs with different interfaces, two API keys to manage, two different authentication mechanisms, and subtly different request and response shapes. Your code needs to handle the differences, normalise the responses, and manage the error handling for both.

Once your fallback is working, remove the invalid API key and restore normal operation. Keep the fallback logic in place - you’ll want it for reliability.

Testing:
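One way to structure the fallback - not the only way - is to hide each SDK behind a wrapper function with the same signature (message list in, answer text out), then try the wrappers in order. The provider names here are hypothetical stand-ins for whichever pair you sign up with:

```python
# Sketch of provider fallback: each provider is assumed to be wrapped in a
# function that takes a message list and returns the answer text.
class AllProvidersFailed(Exception):
    """Raised when every provider in the chain has failed."""

def ask_with_fallback(providers, messages):
    """Try each (name, call) pair in order; return (name, answer) from the
    first provider that succeeds."""
    errors = []
    for name, call in providers:
        try:
            return name, call(messages)
        except Exception as exc:  # auth failures, timeouts, rate limits...
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))
```

The wrappers are where the normalisation friction lives: each one translates your message list into that SDK’s request shape and pulls the answer text out of its response, so the rest of the bot never sees the differences.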
Step 3

In this step your goal is to add observability by tracking token usage, latency, and cost for every request. In production, you need to know what’s happening. How many tokens are you using? How much is each request costing? How long are responses taking? Without this information you’re flying blind.

Build a logging layer that captures the following for every LLM request:
Store these logs however you like - a local file, an in-memory list, a database. Add a way to view a summary: total requests, total tokens, total cost, average latency, and the breakdown between primary and fallback usage.

You might also consider capturing the request and response payloads (watch out for PII), a conversation or session ID so you can trace a whole support session, retry counts, and time-to-first-token once you’re streaming. These aren’t required, but they’re the kinds of things a production observability stack typically includes.

This is the kind of infrastructure that every production AI application needs, and building it yourself gives you an appreciation for how much work it is to get right. You need to handle it consistently across both providers despite their different response formats.

Testing:
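As one possible shape for the logging layer, the sketch below records per-request token counts, latency, and an estimated cost, and can summarise them. The price table is illustrative - take real per-token prices from your providers’ pricing pages.

```python
# Sketch of a metrics layer. PRICES is illustrative: dollars per million
# tokens as (input, output) - use your providers' real pricing.
from dataclasses import dataclass

PRICES = {"gpt-4o-mini": (0.15, 0.60)}  # placeholder numbers

@dataclass
class RequestLog:
    provider: str
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_s: float
    cost_usd: float

class Metrics:
    def __init__(self):
        self.logs = []

    def record(self, provider, model, prompt_tokens, completion_tokens, latency_s):
        """Log one request, estimating cost from the price table."""
        in_price, out_price = PRICES.get(model, (0.0, 0.0))
        cost = (prompt_tokens * in_price + completion_tokens * out_price) / 1_000_000
        self.logs.append(RequestLog(provider, model, prompt_tokens,
                                    completion_tokens, latency_s, cost))

    def summary(self):
        """Totals, average latency, and the primary/fallback breakdown."""
        n = len(self.logs)
        return {
            "requests": n,
            "total_tokens": sum(l.prompt_tokens + l.completion_tokens for l in self.logs),
            "total_cost_usd": sum(l.cost_usd for l in self.logs),
            "avg_latency_s": sum(l.latency_s for l in self.logs) / n if n else 0.0,
            "by_provider": {p: sum(1 for l in self.logs if l.provider == p)
                            for p in {l.provider for l in self.logs}},
        }
```

Whichever provider answered, you normalise its usage numbers into one `record()` call - that single choke point is what keeps the summary consistent across both SDKs.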
Step 4

In this step your goal is to add cost-aware routing so that simple questions go to cheaper models and complex questions go to more capable (and more expensive) ones.

Not all questions are equal. “What’s the pricing?” is a simple lookup that any small model can handle. “I’m a backend developer who knows Python but wants to learn systems programming - which challenges should I do and in what order?” needs genuine reasoning ability.

Build a routing layer that classifies incoming questions and directs them to the appropriate model. You’ll need at least two tiers:
How you classify queries is up to you. You could use keyword matching, a separate lightweight LLM call to classify the question, message length heuristics, or some combination. The point is to reduce cost without noticeably reducing quality.

A note on what’s actually changing: the system prompt and context data stay the same for every request - the cheap model still needs to see the Coding Challenges context to answer “What’s the pricing?” The saving comes from the cheaper model’s lower price per token, not from sending less context. Don’t be tempted to trim the context for simple queries; that quickly leads to wrong answers.

This is where your codebase starts to feel the weight. You now have multiple providers, fallback logic, per-request logging across all of them, and routing logic that needs to work with all your models. Take a moment to look at your code. Count the lines dedicated to infrastructure versus the lines dedicated to the actual support bot logic.

Testing:
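A simple keyword-plus-length heuristic is enough to get started. The marker phrases, word-count threshold, and model names below are illustrative guesses, not a prescribed list - tune them against your own test questions:

```python
# One possible Step 4 classifier: keyword + length heuristics. The markers,
# threshold, and model names are illustrative - tune them for your bot.
COMPLEX_MARKERS = ("which challenges", "in what order", "recommend",
                   "compare", "explain why", "learning path")

def pick_tier(question: str) -> str:
    """Route long or reasoning-flavoured questions to the capable tier."""
    q = question.lower()
    if len(q.split()) > 25 or any(m in q for m in COMPLEX_MARKERS):
        return "capable"   # e.g. a frontier model
    return "cheap"         # e.g. a small, fast model

MODELS = {"cheap": "gpt-4o-mini", "capable": "gpt-4o"}  # placeholder names
```

A keyword heuristic is free and instant; the alternative - a lightweight LLM call that returns a tier label - is more accurate but adds a little latency and cost to every request.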
Step 5

In this step your goal is to replace all of the infrastructure you built in Steps 2 through 4 with the Orq.ai Router.

Sign up for a free Orq.ai account and get your API key. The Router provides an OpenAI-compatible endpoint, which means you can point any OpenAI SDK at it by changing the base URL and API key. That’s it.

Replace your multi-provider setup, your fallback logic, your logging infrastructure, and your routing layer with a single API call to the Router endpoint. The Router handles:
Now look at your code. The fallback handling from Step 2, the logging layer from Step 3, and the routing logic from Step 4 can all be removed. Your bot should be back to something close to the simplicity of Step 1, but with all the production capabilities you spent three steps building by hand.

Testing:
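The whole of Step 5 can reduce to something like the sketch below: the stock OpenAI SDK with a different base URL and key. The environment variable names and model identifier are placeholders - take the real endpoint URL and model naming from the Orq.ai documentation.

```python
# Step 5 sketch: the same bot, pointed at an OpenAI-compatible gateway.
# ORQ_API_KEY / ORQ_BASE_URL / model id are placeholders - see Orq.ai's docs.
import os

def ask(client, history: list, question: str, model: str) -> str:
    """Identical to the Step 1 helper - only the client construction changes."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model=model, messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    from openai import OpenAI  # pip install openai
    client = OpenAI(
        api_key=os.environ["ORQ_API_KEY"],
        base_url=os.environ["ORQ_BASE_URL"],  # the Router endpoint
    )
    history = [{"role": "system", "content": "You are a helpful support agent."}]
    print(ask(client, history, "Which challenge should I start with?",
              model=os.environ.get("ORQ_MODEL", "openai/gpt-4o-mini")))
```

Fallbacks, cost routing, and per-request logging now live in the gateway’s configuration rather than in your application code - which is exactly why so much of Steps 2 to 4 can be deleted.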
Going Further

You’ve built a support bot and experienced the full arc from simple prototype to production-ready AI application. Here are some ways to push further:
P.S. If You Enjoy Coding Challenges Here Are Four Ways You Can Help Support It
Share Your Solutions!

If you think your solution is an example other developers can learn from, please share it: put it on GitHub, GitLab or elsewhere. Then let me know via Bluesky or LinkedIn, or just post about it there and tag me. Alternatively, please add a link to it in the Coding Challenges Shared Solutions GitHub repo.

Request for Feedback

I’m writing these challenges to help you develop your skills as a software engineer, based on how I’ve approached my own personal learning and development. What works for me might not be the best way for you - so if you have suggestions for how I can make these challenges more useful to you and others, please get in touch and let me know. All feedback is greatly appreciated. You can reach me on Bluesky, LinkedIn or through Substack.

Thanks and happy coding!

John

Invite your friends and earn rewards
If you enjoy Coding Challenges, share it with your friends and earn rewards when they subscribe.
