<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai-Systems on Coderrob</title><link>https://coderrob.com/tags/ai-systems/</link><description>Recent content in Ai-Systems on Coderrob</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sun, 12 Apr 2026 10:00:00 -0500</lastBuildDate><atom:link href="https://coderrob.com/tags/ai-systems/index.xml" rel="self" type="application/rss+xml"/><item><title>From Agent to A Soul</title><link>https://coderrob.com/posts/from-agent-to-a-soul/</link><pubDate>Sun, 12 Apr 2026 10:00:00 -0500</pubDate><guid>https://coderrob.com/posts/from-agent-to-a-soul/</guid><description>&lt;h2 id="more-than-just-an-agent">More Than Just an Agent&lt;/h2>
&lt;p>I&amp;rsquo;ve been thinking about how we talk about AI systems, and Anthropic&amp;rsquo;s concept of a &amp;ldquo;soul&amp;rdquo; really resonates with me.&lt;/p>
&lt;p>When you take an agent and layer in personality, persistent memory, defined capabilities, and curated tools, well, calling it an &amp;ldquo;agent&amp;rdquo; suddenly feels wrong.&lt;/p>
&lt;p>That&amp;rsquo;s not just an automated task runner anymore. That&amp;rsquo;s something with &lt;em>identity&lt;/em>. With presence. With intent.&lt;/p>
&lt;p>That&amp;rsquo;s a soul.&lt;/p>
&lt;p>And honestly? I think the term is a better fit for what we&amp;rsquo;ve been building. An agent executes. A soul &lt;em>embodies&lt;/em>. It carries forward context, adapts its behavior, reflects a designed personality.&lt;/p></description></item><item><title>Fast Is Slow When You're Neck Deep in AI Slop</title><link>https://coderrob.com/posts/fast-is-slow-when-youre-neck-deep-in-slop/</link><pubDate>Fri, 10 Apr 2026 12:00:00 -0500</pubDate><guid>https://coderrob.com/posts/fast-is-slow-when-youre-neck-deep-in-slop/</guid><description>&lt;h2 id="the-pattern-nobody-wants-to-talk-about">The Pattern Nobody Wants to Talk About&lt;/h2>
&lt;p>I&amp;rsquo;ve seen a distinct pattern where AI &lt;em>slows down&lt;/em> software development.&lt;/p>
&lt;p>I know. Heresy. But hear me out.&lt;/p>
&lt;p>Agents are fast, but only in the sense of the old saying:&lt;/p>
&lt;blockquote>
&lt;p>Fast is slow, and slow is fast.&lt;/p>
&lt;/blockquote>
&lt;p>People push back on this. &lt;em>&amp;ldquo;Humans are faster,&amp;rdquo;&lt;/em> they say.&lt;/p>
&lt;p>Honestly? I doubt that&amp;rsquo;s true either.&lt;/p>
&lt;p>What I &lt;em>believe&lt;/em> is that most people are single-threaded. One brain, one task, one context.&lt;/p></description></item><item><title>Reverse Engineering Agentic Workflows from Copilot Debug Logs</title><link>https://coderrob.com/posts/reverse-engineering-agentic-workflows-from-copilot-debug-logs/</link><pubDate>Fri, 24 Oct 2025 09:00:00 -0500</pubDate><guid>https://coderrob.com/posts/reverse-engineering-agentic-workflows-from-copilot-debug-logs/</guid><description>&lt;p>Here&amp;rsquo;s a secret weapon for building your own agentic workflows: &lt;strong>GitHub Copilot Chat&amp;rsquo;s debug logs&lt;/strong>.&lt;/p>
&lt;p>You know how everyone&amp;rsquo;s out there wrestling with hallucinating AI agents? Trying to figure out how to structure those prompts, which tools to call when, how to handle errors without pulling your hair out, what context to pass between steps&amp;hellip;&lt;/p>
&lt;p>The answer?&lt;/p>
&lt;p>It&amp;rsquo;s sitting right there in your Copilot Chat debug view. Already solved. Already tested. Already proven to work for &lt;em>your&lt;/em> specific use cases.&lt;/p></description></item><item><title>The UI of AI is CLI</title><link>https://coderrob.com/posts/the-ui-of-ai-is-cli/</link><pubDate>Wed, 22 Oct 2025 05:00:00 -0500</pubDate><guid>https://coderrob.com/posts/the-ui-of-ai-is-cli/</guid><description>&lt;p>We spent decades trying to make computers easier to use. We went from punch cards to command lines to graphical user interfaces to touch screens. We added buttons, menus, icons, gestures, really anything to avoid making people type commands into a terminal.&lt;/p>
&lt;p>And now? Now we&amp;rsquo;re instructing AI to use&amp;hellip; command lines.&lt;/p>
&lt;p>Plot twist of the century, right there.&lt;/p>
&lt;h2 id="the-great-ui-circle-of-life">The Great UI Circle of Life&lt;/h2>
&lt;p>Here&amp;rsquo;s the evolution in a nutshell:&lt;/p></description></item><item><title>Planning the Planning: The Agentic Software Development Paradox</title><link>https://coderrob.com/posts/planning-the-planning-agentic-software-development/</link><pubDate>Tue, 21 Oct 2025 10:00:00 -0500</pubDate><guid>https://coderrob.com/posts/planning-the-planning-agentic-software-development/</guid><description>&lt;h2 id="planning-the-planning-the-agentic-software-development-paradox">Planning the Planning: The Agentic Software Development Paradox&lt;/h2>
&lt;p>You know what&amp;rsquo;s wild about working with AI agents to build software? The planning. Oh boy, the planning.&lt;/p>
&lt;p>Not just &lt;em>a&lt;/em> plan - that would be too simple. No, no. We&amp;rsquo;re talking about:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Creating a planning document&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Planning the planning&lt;/strong> (because that first plan will encounter the enemy)&lt;/li>
&lt;li>&lt;strong>Planning the planning of the planning&lt;/strong> (we need to go deeper)&lt;/li>
&lt;li>&lt;strong>Having an agent review the plan&lt;/strong> to identify any planning not planned in the plan&lt;/li>
&lt;li>&lt;strong>Having &lt;em>another&lt;/em> agent revise the plan&lt;/strong> after being told to plan the planning based on the planning and the current implementation&lt;/li>
&lt;/ol>
&lt;p>It&amp;rsquo;s like Inception, but instead of dreams within dreams, it&amp;rsquo;s plans within plans within plans. &lt;strong>Plan-ception&lt;/strong>, if you will.&lt;/p></description></item><item><title>Building a Fully Local, Privacy-First AI Chat with Ollama and Open WebUI</title><link>https://coderrob.com/posts/building-a-fully-local-privacy-first-ai-chat-with-ollama-and-open-webui/</link><pubDate>Wed, 14 May 2025 05:14:13 -0500</pubDate><guid>https://coderrob.com/posts/building-a-fully-local-privacy-first-ai-chat-with-ollama-and-open-webui/</guid><description>&lt;p>Sometimes I want full control, not most, but total, end-to-end control over the tools I use to think, build, and create.&lt;/p>
&lt;p>That’s where Ollama and Open WebUI come in. This is my go-to local AI chat setup: a fully local, OpenAI-style interface that runs quietly on your machine without whispering a word to the cloud.&lt;/p>
&lt;p>&lt;img src="https://coderrob.com/img/open-webui-demo.gif" alt="Borrowed UI demo of Open WebUI">&lt;/p>
&lt;hr>
&lt;h2 id="what-i-use-this-for">What I Use This For&lt;/h2>
&lt;p>This isn’t just an academic exercise in privacy. I use this:&lt;/p></description></item><item><title>When Pragmatism Meets Silence</title><link>https://coderrob.com/posts/when-pragmatism-meets-silence/</link><pubDate>Thu, 20 Mar 2025 03:35:16 -0500</pubDate><guid>https://coderrob.com/posts/when-pragmatism-meets-silence/</guid><description>&lt;p>Had one of those surreal conversations at work recently.&lt;/p>
&lt;p>I needed to onboard with an internal AI service because, frankly, there’s only one option available, and it&amp;rsquo;s run by a team building their own wrapper around Azure OpenAI.&lt;/p>
&lt;p>From the start, it was a mess.&lt;/p>
&lt;p>Their onboarding SharePoint was broken, documentation was a year out of date, and the API FAQ pointed to a “Coming Soon w/ RAG API FAQ” page that hid the currently available API docs. I posted in their support space about the issues and was redirected to another team to get help with their onboarding documentation.&lt;/p></description></item><item><title>Sora Text to Video: Playing with AI Like It’s 2049</title><link>https://coderrob.com/posts/sora-text-to-video-playing-with-ai-like-its-2049/</link><pubDate>Sat, 15 Feb 2025 14:42:11 -0600</pubDate><guid>https://coderrob.com/posts/sora-text-to-video-playing-with-ai-like-its-2049/</guid><description>&lt;p>Oh, dear readers, I am excited!&lt;/p>
&lt;p>I have been experimenting with a new tool called &lt;a href="https://openai.com/sora/">Sora&lt;/a>. I&amp;rsquo;ve been using Stable Diffusion since the moment it leaked, and it&amp;rsquo;s only been getting better and better. But now, with the addition of text-to-video&amp;hellip; it&amp;rsquo;s just&amp;hellip; insane. And fun!&lt;/p>
&lt;p>I finally had some downtime to try the new tools on the block that have been making waves, at least with text prompts.&lt;/p>
&lt;p>What really gets me about this isn&amp;rsquo;t just that it&amp;rsquo;s AI generating video. We&amp;rsquo;ve seen generative AI for images, we&amp;rsquo;ve seen AI-assisted video editing. But this? This feels different.&lt;/p></description></item><item><title>Local LLMs as a Code Assistant in Visual Studio Code!</title><link>https://coderrob.com/posts/local-llms-as-a-code-assistant-in-visual-studio-code/</link><pubDate>Tue, 03 Dec 2024 10:00:17 -0600</pubDate><guid>https://coderrob.com/posts/local-llms-as-a-code-assistant-in-visual-studio-code/</guid><description>&lt;p>I’ve started using a new Visual Studio Code extension called &lt;a href="https://marketplace.visualstudio.com/items?itemName=Continue.continue">Continue&lt;/a>, and it feels like having a private professional paired programming partner powered by local LLMs.&lt;/p>
&lt;blockquote>
&lt;p>Heh, say that three times fast&amp;hellip; I&amp;rsquo;ll wait. :)&lt;/p>
&lt;/blockquote>
&lt;p>&lt;strong>Why is it a big deal?&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Runs Locally:&lt;/strong>&lt;/p>
&lt;p>Continue works with platforms like &lt;a href="https://ollama.com/">Ollama&lt;/a> and LM Studio, keeping everything on your device. No cloud processing means your data stays private, and your code isn’t feeding someone else’s proprietary model.&lt;/p></description></item><item><title>Microsoft TinyTroupe for UI/UX Persona Focus Groups</title><link>https://coderrob.com/posts/microsoft-tiny-troupe-for-ui-ux-persona-focus-groups/</link><pubDate>Wed, 13 Nov 2024 08:38:07 -0600</pubDate><guid>https://coderrob.com/posts/microsoft-tiny-troupe-for-ui-ux-persona-focus-groups/</guid><description>&lt;p>TinyTroupe is incredible.&lt;/p>
&lt;p>Imagine running your own focus group composed of multi-personality AI agents. From a UI/UX designer’s perspective, the ability to explicitly define user personas and have &lt;em>them&lt;/em> evaluate your designs with constructive feedback is remarkable.&lt;/p>
&lt;p>Add vision agents into the mix&amp;hellip; Just… wow.&lt;/p>
&lt;p>Check it out:&lt;/p>
&lt;p>&lt;a href="https://github.com/microsoft/TinyTroupe">https://github.com/microsoft/TinyTroupe&lt;/a>&lt;/p></description></item><item><title>You Know What's Going to Be Funny When Ai Goes Offline</title><link>https://coderrob.com/posts/you-know-whats-going-to-be-funny-when-ai-goes-offline/</link><pubDate>Wed, 21 Aug 2024 21:23:44 -0500</pubDate><guid>https://coderrob.com/posts/you-know-whats-going-to-be-funny-when-ai-goes-offline/</guid><description>&lt;p>You know what&amp;rsquo;s going to be funny? When AI services go offline and teachers finally get to see what a student&amp;rsquo;s real writing would be like. Imagine the shock—&amp;ldquo;Wait, why does this essay look like a Neanderthal cave drawing scribbled on paper?&amp;rdquo; It’ll be like watching someone try to cook without YouTube tutorials—pure chaos.&lt;/p></description></item><item><title>Today, I Introduced the Interns to Dogfooding</title><link>https://coderrob.com/posts/today-i-introduced-the-interns-to-dogfooding/</link><pubDate>Wed, 21 Aug 2024 20:47:54 -0500</pubDate><guid>https://coderrob.com/posts/today-i-introduced-the-interns-to-dogfooding/</guid><description>&lt;p>Today, I got to introduce the AI interns to new topics like dogfooding*, cyclomatic complexity, BEM, JSON:API, and why interfaces are so useful. It was great to nerd out with them! Their excitement was contagious, and I was impressed to learn their code review for the chatbot was their first time working with React and web development.&lt;/p>
&lt;p>Seeing that &amp;ldquo;aha!&amp;rdquo; moment when they first create something—there&amp;rsquo;s nothing like it.&lt;/p>
&lt;p>* For you Gen Z and Alphas: &amp;ldquo;dogfooding&amp;rdquo; means using your own product, just like your users do, so you feel the same issues and can fix them. It&amp;rsquo;s like making sure the dog food you’re serving is good—because you’re eating it too**!&lt;/p>
&lt;p>** Don&amp;rsquo;t eat dog food - unless it&amp;rsquo;s a biscuit, I guess, but you do you.&lt;/p></description></item><item><title>Embracing Multi-Model Approaches for Enhanced Workflow Efficiency</title><link>https://coderrob.com/posts/embracing-multi-model-approaches-for-enhanced-workflow-efficiency/</link><pubDate>Fri, 16 Aug 2024 22:09:29 -0500</pubDate><guid>https://coderrob.com/posts/embracing-multi-model-approaches-for-enhanced-workflow-efficiency/</guid><description>&lt;p>Leveraging multiple models can significantly streamline the software development process. Here&amp;rsquo;s a strategy to
consider:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>Requirement Analysis&lt;/strong>: Use one model to gather user requirements.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Edge Case Identification&lt;/strong>: Deploy another model to spot edge cases while drafting the requirements
document.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Code Instruction&lt;/strong>: Utilize a code generation model to create detailed how-to guides.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Language-Specific Models&lt;/strong>: Pass the how-to guide to a model specialized in the relevant programming
language.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;p>By creating a self-referencing workflow of requests and responses, you can enhance accuracy and efficiency.
Ensure the generated code includes both positive and negative unit tests, have the workflow run them, and let it self-correct any issues it finds.&lt;/p></description></item><item><title>Stable Diffusion Explained Like I Am 5</title><link>https://coderrob.com/posts/stable-diffusion-explained-like-i-am-five/</link><pubDate>Thu, 10 Nov 2022 15:09:17 -0600</pubDate><guid>https://coderrob.com/posts/stable-diffusion-explained-like-i-am-five/</guid><description>&lt;p>Imagine &amp;ldquo;a world, earth, seen from space, 8k, unreal engine, detailed, photorealistic&amp;rdquo;&amp;hellip;&lt;/p>
&lt;p>&lt;img src="https://coderrob.com/img/what-a-world.png" alt="Positive prompt:&amp;ldquo;a world, earth, seen from space, 8k, unreal engine, detailed, photorealistic&amp;rdquo;">&lt;/p>
&lt;p>Remember the common phrase, &amp;ldquo;a picture is worth a thousand words&amp;rdquo;? Well, it&amp;rsquo;s time to rethink that.&lt;/p>
&lt;p>With advancements in image generation technology, we may need to start saying, &amp;ldquo;a few words are worth thousands of pictures.&amp;rdquo;&lt;/p>
&lt;p>This technology is simply astonishing.&lt;/p>
&lt;p>Any image. Any concept. Any quality. Any style. Near-instant results.&lt;/p></description></item></channel></rss>