<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Shawn’s Substack]]></title><description><![CDATA[My personal Substack]]></description><link>https://www.shawngarner.dev</link><image><url>https://substackcdn.com/image/fetch/$s_!UC3T!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb809d7dc-605d-4cf3-9bf0-3839dd1a6a80_144x144.png</url><title>Shawn’s Substack</title><link>https://www.shawngarner.dev</link></image><generator>Substack</generator><lastBuildDate>Sat, 16 May 2026 11:35:08 GMT</lastBuildDate><atom:link href="https://www.shawngarner.dev/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Shawn Garner]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[shawndgarner@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[shawndgarner@substack.com]]></itunes:email><itunes:name><![CDATA[Shawn Garner]]></itunes:name></itunes:owner><itunes:author><![CDATA[Shawn Garner]]></itunes:author><googleplay:owner><![CDATA[shawndgarner@substack.com]]></googleplay:owner><googleplay:email><![CDATA[shawndgarner@substack.com]]></googleplay:email><googleplay:author><![CDATA[Shawn Garner]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Vision - v1]]></title><description><![CDATA[The First Two Weeks]]></description><link>https://www.shawngarner.dev/p/ai-vision-v1</link><guid isPermaLink="false">https://www.shawngarner.dev/p/ai-vision-v1</guid><dc:creator><![CDATA[Shawn Garner]]></dc:creator><pubDate>Wed, 24 Dec 2025 15:43:42 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!UC3T!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb809d7dc-605d-4cf3-9bf0-3839dd1a6a80_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, I have been working with Dustin on some prototyping and creating a proof of concept (POC) for an AI Vision product.  It has been a whirlwind two weeks, and I wanted to share a bit of the process, how it went, and some lessons learned.</p><h3><strong>A Little on Autonomy</strong></h3><p>We were given great autonomy as to how we worked.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.shawngarner.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Shawn&#8217;s Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>We were able to pick the following:</p><ul><li><p>languages</p></li><li><p>editors</p></li><li><p>AI coding assistants</p></li><li><p>cloud platform</p></li><li><p>the idea for the product</p></li></ul><p>Our process was very fluid: we worked both separately and pair-programmed.  Initially, each of us conducted our own separate investigation and experimentation.</p><p>We were not alone, though.</p><p>We had another teammate, experienced with AI, who acted as our product owner.  
They helped us run stand-ups, managed items on the GitHub project, and obtained clarifications on our questions, giving us direction so we solved the right problems.  They also helped us make connections with other subject matter experts.</p><p>We were expected to hold a daily 30-minute stand-up, a larger weekly group stand-up, and a weekly demo.</p><p>When we needed a subject matter expert to guide us, we consulted other colleagues within our company, holding huddles with at least seven different people at various times.</p><h3><strong>Our First Demo</strong></h3><p>The first week&#8217;s demo came quickly, after only two days, but we managed to get something to show.</p><p>Our first demo was of a container running a multimodal model.  We provided a pre-defined prompt and image, and the model output a textual analysis of the image.</p><h3><strong>Our Tech Stack</strong></h3><p>Early on, I decided to use a tech stack that was unfamiliar to me.  This effort was a prime opportunity to try out and learn new things.  That decision made me uncomfortable at times, especially since I was still setting up my nearly fresh MacBook for development.  However, if you are never uncomfortable, you may not be pushing yourself hard enough to learn new things.  Eventually, as things settled in and I got set up, my comfort level reached a good balance of intrigue and learning versus being overwhelmed and uncomfortable.</p><p>Our tech stack:</p><ul><li><p>VS Code</p></li><li><p>Azure</p></li><li><p>Ollama</p></li><li><p>Node</p></li></ul><p>I had spent the last decade developing with Scala and targeting the AWS cloud, so this was a completely new experience for me.</p><p>I had also wanted to learn Claude Code and AgentOS.  I had seen a demo of spec-driven development with multiple sub-agents before this effort. 
Using this AI assistant and framework appeared to be the best way to produce good results quickly.</p><h3><strong>My Experiments with Edge Computing</strong></h3><p>I began investigating edge-based model inference by training an Ultralytics YOLO model on my local laptop using Python and open-source datasets found on Kaggle.</p><h4><strong>Challenges with Training Locally</strong></h4><h5><strong>Lack of GPU Support/Unsupported CPUs</strong></h5><p>I discovered my laptop was not up to the task. I was able to train a model on approximately 1.4k images, which took over 20 hours, yet I was only getting 30-40% accuracy, which is not a stellar result.  Next, I started training a model on approximately 4.6k images, assuming more images in the dataset would produce better outcomes. That was the breaking point for my computer: training was estimated to take over 20 days. I doubt my hardware was being targeted correctly. I have an older 2019 Intel-based MacBook without an NVIDIA graphics card, so using CUDA to make efficient use of my GPU was not feasible. My graphics card also had only 4GB of memory, which is pretty small by today&#8217;s standards. I found out later that there was a newer release with better support for my hardware.  However, by then, I was done trying to get things to work locally and had moved on.</p><h5><strong>Out of Memory (OOM) Errors</strong></h5><p>Training would produce OOMs by overwhelming my graphics card&#8217;s memory.  I&#8217;d have to tune things and restart from scratch.</p><h5><strong>Fine-Grained Tuning</strong></h5><p>Later, I learned you can cut off training early if your model is no longer improving.  
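</p><p>The early-stopping idea is simple enough to sketch in plain Python. This standalone sketch mimics the behavior the Ultralytics trainer exposes through its patience argument; the scores and threshold below are purely illustrative:</p>

```python
def stop_epoch(epoch_scores, patience):
    """Return the epoch at which training would stop.

    Mimics patience-based early stopping: halt once the validation
    score has not improved for `patience` consecutive epochs.
    """
    best_score = float("-inf")
    best_epoch = 0
    for epoch, score in enumerate(epoch_scores):
        if score > best_score:
            best_score, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # no improvement for `patience` epochs: stop
    return len(epoch_scores) - 1  # training ran to completion

# Validation mAP plateaus after epoch 2, so patience=3 stops at epoch 5
# instead of spending hours on the remaining epochs.
print(stop_epoch([0.30, 0.38, 0.41, 0.40, 0.39, 0.41, 0.40], patience=3))
```

<p>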
With some extra coding and configuration, you can attempt to auto-resume from the last.pt or best.pt checkpoint files.</p><h4><strong>Pretrained Models</strong></h4><p>I ended up using pre-trained Ultralytics YOLOv8 and YOLOv10 models (nano and small variants) provided in the open-source dataset&#8217;s releases. The YOLOv8 models had higher confidence (around 80-90%), but the YOLOv10 models (around 70-80% confidence) handled edge cases much better. Claude made all of this easy to set up. I ended up abandoning the train-on-my-own-laptop route, as my teammate on the project was getting better results from a cloud multimodal LLM (Qwen3-VL with 235 billion parameters). The Qwen model&#8217;s performance makes sense: generally, the larger the model, the better the results.  The YOLO nano/small models would run well on edge devices such as phones, tablets, or Raspberry Pis.  It takes more work to get models to run inference on edge devices because of their GPU/CPU/memory constraints.  My edge models also returned bounding boxes and confidences, which is object detection rather than the image understanding the cloud model provides.  In the cloud, you can scale to use whatever GPU/CPU you want, though scaling usually incurs additional costs.</p><h5><strong>Image Understanding</strong></h5><p>Prompt: How many people are present, and are they all wearing stocking caps?</p><p>Input: Normalized base64-encoded image</p><p>Output: There are two people present. A man and a woman. 
Both of them are wearing stocking caps.</p><h5><strong>Object Detection</strong></h5><p>Prompt: N/A</p><p>Input: Normalized base64-encoded image</p><p>Output:</p><p>Person: Bounding Box (5,10,50,100), Confidence 90%</p><p>Person: Bounding Box (55,10,50,100), Confidence 93%</p><p>Stocking Cap: Bounding Box (5,10,40,40), Confidence 89%</p><p>Stocking Cap: Bounding Box (55,10,35,35), Confidence 91%</p><h3><strong>Our Second Demo</strong></h3><p>For our second demo, we had a fully functional application deployed on Azure.  We had implemented simple evaluations to measure and graph the confusion matrix (true/false positives and negatives).  We also calculated accuracy, precision, recall, and F1 score.  These metrics tell you how well each version of your system (model, prompt, application logic) is performing over time.  We also had a misbehavior gallery where you could review false positives/negatives with a subject matter expert to understand your system&#8217;s limitations and discuss potential improvements.</p><h3><strong>Additional Work</strong></h3><p>We created a GitHub Actions CI/CD pipeline to deploy our app, using Bicep for IaC, and to add auth around our app/APIs.</p><p>We then added validation against our golden dataset after every CI/CD deployment to prevent regressions.</p><h3><strong>Lessons Learned</strong></h3><h4><strong>Training</strong></h4><p>There were some gaps in these datasets, where the image being evaluated (at inference) was overly complicated.  
The model wasn&#8217;t trained on these complex cases and would produce false positives/negatives.</p><p>In the first phase of standing up a new AI vision project, it&#8217;s advisable to:</p><ol><li><p>Use a controlled environment when creating a dataset.</p></li><li><p>Cover all the edge cases.</p></li><li><p>Create a golden dataset of primary cases, and verify that the model doesn&#8217;t regress over time.</p></li></ol><p>Training your own models can be complex in terms of hardware, configuration, and auto-resume.  It&#8217;s ill-advised to train models on your laptop unless you have specialized heavy-duty hardware.</p><h4><strong>Evaluating/Inference</strong></h4><p>For better results, control the environment for evaluation/inference on images:</p><ul><li><p>Limit the number of people visible.</p></li><li><p>Don&#8217;t have overlapping people.</p></li><li><p>Don&#8217;t have people hidden or occluded by other things.</p></li><li><p>Don&#8217;t have people very distant or cut off in the image.</p></li><li><p>Have your training set cover different orientations; alternatively, have the people being evaluated appear in the same standard orientation as the training set.</p></li><li><p>Avoid overly complicated or noisy background scenery, which confuses the models.</p></li><li><p>Standardize your input images to a consistent resolution/compression to get consistent results.</p></li></ul><h4><strong>Model Size</strong></h4><p>Edge models are smaller and more appropriate for devices like phones, tablets, or Raspberry Pis to perform inference locally; however, it&#8217;s harder to get good results.</p><p>Larger cloud models produce better results; however, they require making an API call.</p><h3><strong>Summary</strong></h3><p>The last two weeks have been super fun, and I&#8217;ve learned so many new things.  Lean Techniques deserves special thanks for this unique opportunity to learn and grow; kudos and thank you.  
I&#8217;m excited and looking forward to the next demo and the next opportunity to learn and create awesome products.</p>]]></content:encoded></item><item><title><![CDATA[The Laundry List Principle: Cleaning Up Code Without Losing Sight of the Big Picture]]></title><description><![CDATA[How to manage cleanup tasks in software development without turning your PR into a disaster.]]></description><link>https://www.shawngarner.dev/p/the-laundry-list-principle-cleaning</link><guid isPermaLink="false">https://www.shawngarner.dev/p/the-laundry-list-principle-cleaning</guid><dc:creator><![CDATA[Shawn Garner]]></dc:creator><pubDate>Tue, 23 Sep 2025 05:17:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UC3T!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb809d7dc-605d-4cf3-9bf0-3839dd1a6a80_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1><strong>Problem Statement</strong></h1><p>In software development, implementing a feature often feels like solving a puzzle or assembling blocks, modules, and components together in a meaningful way. 
But more often than not, it is like working with a big ball of yarn or a big ball of mud without much cohesion, especially when numerous developers have touched the codebase over many years or the codebase is simply out of date with current practices.  It can be challenging to focus on the immediate task while setting aside smaller, less urgent "cleanup" work. But what happens when that cleanup becomes a rabbit hole? Developers might get sidetracked by minor optimizations or refactoring tasks, diluting focus on the core story. Worse, bloating pull requests (PRs) with unrelated changes makes reviews harder and increases the risk of introducing bugs.</p><p>How do you balance the need for clean code with the pressure to deliver features efficiently? This is where the laundry list approach comes in&#8212;a structured way to manage cleanup tasks without sacrificing clarity or productivity.</p><h1><strong>The Principle: "Keep It Cleaner Than You Left It"</strong></h1><p>Inspired by the Scouts&#8217; ethos of leaving a campsite better than you found it, this principle advocates for maintaining (or improving) the codebase with every change.</p><p>It&#8217;s not about perfection&#8212;it&#8217;s about responsibility: you don&#8217;t leave messes for others to clean up.</p><p>The idea is simple: when implementing a feature, document all cleanup tasks in a list. But here&#8217;s the twist&#8212;don&#8217;t address everything immediately. Instead, prioritize what truly matters. The goal isn&#8217;t to fix all issues but to ensure the codebase stays healthy and manageable over time.</p><h1><strong>Practice: How to Use Your Laundry List</strong></h1><h2>Create Your List Early</h2><p>While working on a feature, jot down any cleanup tasks (e.g., removing unused variables, improving test coverage, or simplifying logic). This list acts as your roadmap for post-implementation work.</p><h2>Prioritize with Purpose</h2><p>After merging the main PR, review your list and rank items by urgency and impact. Focus on high-value fixes first&#8212;those that reduce technical debt, improve readability, or eliminate edge cases.</p><h2>Split Into Smaller PRs</h2><p>Break down complex cleanup tasks into smaller, focused pull requests. This ensures reviewers can easily assess each change and reduces the risk of merge conflicts.</p><h2>Archive Once Done</h2><p>After addressing your top priorities, discard the list. 
If a task is critical enough to revisit, it&#8217;ll surface again in future work. The goal is not to &#8220;finish&#8221; everything but to make progress without overcommitting.</p><h2>Iterate and Improve</h2><p>Think of your laundry list as a dynamic tool. As you gain experience, refine the criteria for what deserves attention&#8212;whether it&#8217;s performance optimizations, code readability, or long-term maintainability.</p><h1>Why It Works</h1><p>This approach avoids the trap of &#8220;fixing everything at once,&#8221; which can lead to burnout or incomplete work. By treating cleanup as a continuous process rather than a one-time task, you ensure your codebase evolves healthily over time.</p><p>It&#8217;s about balancing immediate needs with long-term responsibility&#8212;keeping your team focused on what truly matters.</p><h1>Final Thought</h1><p>The laundry list isn&#8217;t just a checklist&#8212;it&#8217;s a mindset. It turns the chaos of cleanup into a structured, intentional practice that keeps your codebase clean, your team productive, and your features delivering value.</p>]]></content:encoded></item><item><title><![CDATA[Welcome]]></title><description><![CDATA[Intentions:]]></description><link>https://www.shawngarner.dev/p/welcome</link><guid isPermaLink="false">https://www.shawngarner.dev/p/welcome</guid><dc:creator><![CDATA[Shawn Garner]]></dc:creator><pubDate>Sun, 08 Jun 2025 01:53:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UC3T!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb809d7dc-605d-4cf3-9bf0-3839dd1a6a80_144x144.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Intentions:</h2><p>This blog is intended to provide helpful and informative write-ups on software development topics I encounter in my day job as well as in my side projects.</p><h2>Topics:</h2><h3>Primary Topic:</h3><p>Functional Programming in Scala</p><h3>Other Topics:</h3><ul><li><p>testing</p></li><li><p>DevOps</p></li><li><p>Infrastructure as Code</p></li><li><p>agile/pair programming</p></li><li><p>code quality</p></li><li><p>tech debt</p></li><li><p>observability</p></li><li><p>legacy applications</p></li><li><p>services</p></li><li><p>message processing</p></li><li><p>APIs</p></li><li><p>code generation</p></li><li><p>communication</p></li><li><p>event sourcing / CQRS</p></li><li><p>domain driven design / event storming</p></li><li><p>game development</p></li><li><p>algorithms</p></li><li><p>programming languages</p></li></ul><p>Let me know if you want me to do a write-up on a particular 
topic.</p><h2>Cadence:</h2><p>My goal is at least two posts per month.</p><h2>Professional Background:</h2><ul><li><p>Developing software professionally for almost 25 years.</p></li><li><p>Have used many different languages, platforms, and tools.</p></li><li><p>Have spoken at <a href="https://www.meetup.com/central-iowa-java-users-group">CIJUG</a> and <a href="https://www.iowacodecamp.com/">Iowa Code Camp</a>.</p></li></ul><h2>Disclaimers:</h2><p>The contents of this blog are mine alone and do not reflect those of my employer or affiliated entities. There are no paid sponsorships that influence any of the write-ups.</p>]]></content:encoded></item></channel></rss>