Ubuntu Dev Environment Setup on Windows (WSL)

It’s been hard for me to sit back and watch everyone dig into Claude Code without me. This weekend I took a little time to get it installed, and it was a journey. Claude Code doesn’t run natively on Windows, so I had to do some WSL magic. I want to write about Claude Code, but not right now. Let’s just say I’m not going back to Cursor, VS Code, Windsurf, or Aider… I’m team Claude Code, until the next shiny thing distracts my attention.

Anyway, Ubuntu isn’t my thing, so this was harder than it should have been. Good thing we have these super-intelligent machines at our disposal. I asked my AI assistant George to write an install guide. Below is what I got. It wasn’t as straightforward as the guide lays it out, but it wasn’t too hard to go back and forth with George to get everything up and running.

I am basically running Claude Code in Cursor running in Ubuntu. Fun times.


This guide walks you through installing and configuring a full-featured development environment using WSL, Ubuntu, and essential development tools. You’ll be ready to develop locally with Docker, Claude Code, Azure, GitHub, and more.

1. Prerequisites

  • Windows 10/11 with WSL enabled
  • WSL2 installed (wsl --install in PowerShell)
  • A Linux distribution (we used Ubuntu 24.04)
  • Terminal: Windows Terminal

2. Setup Folder Structure

mkdir -p ~/projects
cd ~/projects

Keep all your development projects in this folder. Easy to back up, mount, and manage.

3. Git + GitHub CLI

sudo apt update
sudo apt install git gh -y
gh auth login

Use gh to clone, create, and manage GitHub repos easily.
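
For example, a few of the everyday gh commands (the repo names here are made-up placeholders, not from this guide):

```shell
gh repo clone my-org/my-repo           # clone an existing repo
gh repo create my-new-repo --private   # create a new repo on GitHub
gh pr create --fill                    # open a PR from the current branch
gh pr status                           # see the PRs relevant to you
```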

4. Node.js (via nvm)

# Install NVM (Node Version Manager)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# Load nvm into current shell session
export NVM_DIR="$HOME/.nvm"
source "$NVM_DIR/nvm.sh"
# Verify nvm is available
nvm --version
nvm install --lts
nvm use --lts
nvm alias default 'lts/*'
node -v
npm -v

5. Claude Code + Cursor Setup

Install Cursor (AI coding editor)

In WSL Ubuntu:

cd ~/projects
mkdir my-project && cd my-project
cursor .

Inside Cursor terminal, run:

npm install -g @anthropic-ai/claude-code
claude doctor

Inside Claude run:

/terminal-setup
/init

If Claude is stuck “Synthesizing” or “Offline”:

  • Make sure Claude is signed in, your internet connection is stable, and you are working in a folder on the Ubuntu filesystem, not Windows
  • Press Esc to cancel
  • Restart Cursor and check Claude Code panel

6. .NET SDK

wget https://packages.microsoft.com/config/ubuntu/24.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt update
sudo apt install dotnet-sdk-8.0 -y
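
To sanity-check the install, something like this should work (the project folder name is just an example):

```shell
dotnet --list-sdks                          # confirm SDK 8 shows up
dotnet new console -o ~/projects/hello-dotnet
cd ~/projects/hello-dotnet
dotnet run                                  # the default template prints "Hello, World!"
```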

7. Azure CLI

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az login --use-device-code

If login doesn’t open a browser, copy the code and paste it at: https://microsoft.com/devicelogin

8. Docker CLI

Install Docker Desktop for Windows with WSL integration enabled.

In Ubuntu:

docker version
docker context ls

Ensure your context is set to desktop-linux if needed:

docker context use desktop-linux

If permission denied on /var/run/docker.sock:

sudo usermod -aG docker $USER
newgrp docker
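
After re-logging in (or newgrp), a quick way to verify the daemon is reachable without sudo:

```shell
docker run --rm hello-world              # pulls and runs Docker's test image
groups | grep -q docker && echo "in docker group"
```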

9. Optional Dev Tools (Highly Recommended)

  • make: build automation (sudo apt install make)
  • jq: JSON manipulation (sudo apt install jq)
  • htop: process monitor (sudo apt install htop)
  • fzf: fuzzy file finder (sudo apt install fzf)
  • ripgrep: fast file search, used by Claude (sudo apt install ripgrep)
  • tree: directory visualizer (sudo apt install tree)
  • redis-cli: Redis command line (sudo apt install redis-tools)
  • psql: PostgreSQL command line (sudo apt install postgresql-client)
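
If you want them all in one shot, a single command covers everything above:

```shell
sudo apt update
sudo apt install -y make jq htop fzf ripgrep tree redis-tools postgresql-client
```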

10. Azure Functions Core Tools

npm install -g azure-functions-core-tools@4 --unsafe-perm true
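
To confirm Core Tools installed, and to scaffold a quick local test function (the app and function names are examples):

```shell
func --version                             # should print a 4.x version
func init MyFunctionApp --worker-runtime dotnet
cd MyFunctionApp
func new --name HttpHello --template "HTTP trigger"
func start                                 # run the function host locally
```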

11. Terminal Enhancements

Oh My Zsh (already installed if you ran /terminal-setup)

To fix re-opening setup prompt:

rm ~/.zshrc.pre-oh-my-zsh

Add completions:

gh completion -s zsh >> ~/.zshrc
az completion >> ~/.zshrc
source ~/.zshrc

12. Sync Windows Projects to Ubuntu (Optional)

rsync -avh /mnt/c/Users/YourName/path/to/project/ ~/projects/

Final Checklist

  • AI Pairing: Claude Code + Cursor
  • Git Workflows: Git + GitHub CLI (gh)
  • Web Dev: Node.js, npm, nvm
  • C#/.NET: .NET SDK 8
  • Cloud: Azure CLI + Functions Core Tools
  • Containers: Docker + Desktop integration
  • Data Access: PostgreSQL + Redis CLI
  • Dev Tooling: jq, htop, tree, ripgrep, fzf
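
A quick way to verify the whole checklist from one loop (the command list mirrors the tools above; adjust to taste):

```shell
# Report which checklist tools are on PATH; misses don't stop the loop
for cmd in git gh node npm dotnet az docker func jq htop tree rg fzf psql redis-cli; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok: $cmd"
  else
    echo "missing: $cmd"
  fi
done
```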

You’re done. You’ve got a complete AI-first, cloud-ready, full-stack dev environment right in your Ubuntu WSL.

Background Agents in Cursor: Cloud-Powered Coding at Scale

Build Faster with Cloud-First Automation

Imagine coding without ever leaving your IDE, delegating repetitive tasks to AI agents running silently in the background. That’s the vision behind Cursor’s new Background Agents, a new feature that brings scalable, cloud-native AI automation directly into your development workflow.

From Local Prompts to Parallel Cloud Execution

In traditional AI pair-programming tools, you’re limited to one interaction at a time. Cursor’s Background Agents break this mold by enabling multiple agents to run concurrently in the cloud, each working on isolated tasks while you stay focused on your core logic.

Whether it’s UI bug fixes, content updates, or inserting reusable components, you can queue tasks, track status, and review results, all from inside Cursor.

Why This Matters

Problem: Manual Context Switching Slows Us Down

Every time we need to fix layout issues, update ads, or create pull requests, we context-switch between the browser, editor, GitHub, and back.

Solution: One-Click Cloud Agents

With Background Agents, we:

  • Offload UI tweaks or content changes in seconds
  • Automatically create and switch to feature branches
  • Review and merge pull requests without leaving the IDE

It’s GitHub Copilot meets DevOps, fully integrated.

How It Works

  1. Enable Background Agents under Settings → Beta in Cursor.
  2. Authenticate GitHub for seamless PR handling.
  3. Snapshot your environment, so the agent can mirror it in the cloud.
  4. Assign tasks visually using screenshots and plain language prompts.
  5. Review results in the control panel with direct PR links.

Each agent operates independently, meaning you can:

  • Fix mobile UI bugs in parallel with adding a new ad card.
  • Update dummy content while another agent links it to a live repo.

Keep tabs on multiple tasks without blocking your main flow.

Note: This is expensive at the moment because it uses Max Mode.

The Impact: Focus Where It Matters

  • 🚀 Speed: Complete multi-step changes in minutes.
  • 🧠 Context: Stay immersed in Cursor with no GitHub tab juggling.
  • 🤝 Collaboration: Review, update, and deploy changes faster as a team.

What’s Next?

The Cursor team is working on:

  • Auto-merging from Cursor (no GitHub hop)
  • Smarter task context awareness
  • Conflict resolution across overlapping branches

Is this the future of development workflows: agent-powered, cloud-native, and editor-first?

Try It Out

Enable Background Agents in Cursor and assign your first task. Start with a UI fix or content block update and see how you like it. Just remember that this service uses Max Mode and is expensive so be careful.

If you are looking to improve your development workflow with AI, let’s talk about it.


Enterprise SaaS is Broken. AI Agents Can Fix It.

Let’s talk about enterprise software.

Everyone knows the dirty secret: it’s complex, bloated, slow to change, and ridiculously expensive to customize. It’s a million dollar commitment for a five-year implementation plan that still leaves users with clunky UIs, missing features, and endless integration headaches.

And yet, companies line up for enterprise software as a service (SaaS) products. Why? Because the alternative, building custom systems from scratch, can be even worse.

But what if there was a third way?

I believe there is. And I believe AgenticOps and AI agents are the key to unlocking it.

The Current Limitation: AI Agents Can’t Build Enterprise Systems (Yet)

There’s a widely held belief that AI agents aren’t capable of building and maintaining enterprise software. And let’s be clear: today, that’s mostly true.

Enterprise software isn’t just code. It’s architecture, security, compliance, SLAs, user permissions, complex business rules, and messy integrations. It’s decades of decisions and interdependencies. It requires long-range memory, system-wide awareness, judgment, and stakeholder alignment.

AI agents today can generate CRUD services and unit tests. They can refactor a function or scaffold an API. But they can’t steward a system over time, not without help.

The Disruptive Model: Enterprise System with a Core + Customizable Modules

If I were to build a new enterprise system today, I wouldn’t build a monolith or sell one-off custom builds.

I’d build a base platform, a composable, API-driven foundation of core services like auth, eventing, rules, workflows, and domain modules (like claims, rating engines, billing, etc. for insurance).

Then, I’d enable intelligent customization through AI agents.

For example, a customer could start with a standard rating engine, then they could ask the system for customizations:

> “Can you add a modifier based on the customer’s loyalty history?”

An agent would take the customization request:

  • Fork the base module.
  • Inject the logic.
  • Update validation rules and documentation.
  • Write test coverage.
  • Submit a merge request into a sandbox or preview environment.
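
Under the hood, the agent’s side of this could look like ordinary branch-and-PR mechanics. A rough sketch (the branch name, test command, and base branch are all invented for illustration):

```shell
git checkout -b customize/loyalty-modifier    # fork the base module
# ...agent edits the rating engine logic, validation rules, and docs here...
git add -A
git commit -m "Add loyalty-history modifier to rating engine"
dotnet test                                   # run the generated test coverage
git push -u origin customize/loyalty-modifier
gh pr create --fill --base sandbox            # merge request into a preview environment
```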

This isn’t theoretical. This is doable today with the right architecture, agent orchestration, and human-in-the-loop oversight.

The Role of AI Agents in This Model

AI agents aren’t building without engineers. They’re replacing repetition. They’re doing the boilerplate, the templating, the tedious tasks that slow innovation to a crawl.

In this AgenticOps model, AI agents act as:

  • Spec interpreters (reading a change request and converting it into code)
  • Module customizers (modifying logic inside a safe boundary)
  • Test authors and validators
  • Deployment orchestrators

Meanwhile, human developers become:

  • Architects of the core platform
  • Stewards of system integrity
  • Reviewers and domain modelers
  • Trainers of the agent workforce

The AI agent doesn’t own the system. But it extends it rapidly, safely, and repeatedly.

This Isn’t Just Faster. It’s a Better Business Model.

What we’re describing is enterprise SaaS as a living organism, not a static product. It adapts, evolves, and molds to each client’s needs without breaking the core.

It means:

  • Shorter sales cycles (“Here’s the base. Let’s customize.”)
  • Lower delivery cost (AI handles the repetitive implementation work)
  • Faster time to value (custom features in days, not quarters)
  • Higher satisfaction (because the system actually does what clients need)
  • Recurring revenue from modules and updates

What It Takes to Pull This Off

To make this AgenticOps model work, we need:

  • A composable platform architecture with contracts at every boundary (OpenAPI, MCP, etc.)
  • Agents trained on domain-specific architecture patterns and rules
  • A human-in-the-loop review system with automated guardrails
  • A way to deploy, test, and validate changes per client
  • Observability, governance, and audit logs for every action an agent takes

A core build with self-serve client customizations.

AI Agents Won’t Build Enterprise Software Alone. But They’ll Change the Game for Those Who Do.

In this vision, AI Agents aren’t here to replace engineers. In reality, they may very well replace some engineers, but they could also increase the need for more engineers to manage this agent workforce. Today, AI Agents can equip engineers and make them faster, freer, and more focused on the work that actually moves the needle.

This is the future: enterprise SaaS that starts composable, stays governable, and evolves continuously to meet client needs with AI-augmented teams.

If you’re building this kind of Agentic system, or want to, let’s talk about it.

Execution is Everything: Building an AgenticOps Playbook That Works

Ideas are easy; execution is the hard part.

We’ve all seen great strategies gather dust simply because the path from planning to action wasn’t clear. The problem isn’t always the ideas or the people; often, it’s the absence of a structured playbook for execution.

When execution falters, it’s usually due to unclear roles, inconsistent processes, or poor communication. Over the years, I’ve seen firsthand how these issues erode momentum and hinder even the most talented teams.

A practical playbook addresses these pitfalls directly. It documents not just what needs to be done, but also how to do it consistently, who is responsible at each step, and why it matters. Clear processes remove guesswork, improve collaboration, and make execution repeatable and scalable.

But a good playbook isn’t rigid. It’s a living document, evolving as teams learn and conditions change. Regularly scheduled feedback loops ensure continuous improvement, allowing the team to adapt swiftly and effectively.

Recently, I’ve been exploring the idea of “Playbooks as Code,” inspired by the concept of infrastructure as code. Infrastructure as code allows teams to provision and manage cloud resources through scripts, ensuring consistency, measurability, and testability. Similarly, implementing playbooks as automated workflows, using tools like Microsoft Power Automate or Zapier, lets us codify execution steps. This approach transforms a documented playbook into a deployable, executable workflow, initiated at the push of a button. It ensures consistent, measurable, and testable workflows, significantly enhancing reliability and efficiency.
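
As a toy illustration of the idea (the step names and owners are invented; a real playbook would call actual tools or automated flows):

```shell
#!/usr/bin/env bash
# Minimal "playbook as code": ordered steps, each with a named owner,
# failing fast if any step breaks.
set -eu

step() {  # usage: step <label> <owner> <command...>
  local label="$1" owner="$2"; shift 2
  echo "[$label] owner=$owner"
  "$@"
}

draft_announcement() { echo "announcement drafted"; }
update_docs()        { echo "docs updated"; }
notify_team()        { echo "team notified"; }

step "1. Draft announcement" marketing   draft_announcement
step "2. Update docs"        engineering update_docs
step "3. Notify team"        ops         notify_team
```

Because the playbook is a script, it is consistent, measurable, and testable: run it, diff the output, and version it alongside the rest of your code.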

If you’re finding your team struggles to turn strategic intent into results, consider whether your execution clarity matches your strategic clarity. Building a detailed, flexible execution playbook, and perhaps exploring playbooks as code, might just be the most impactful thing you do this year.

What’s been your experience with execution playbooks or automated workflows? I’d love to learn from your insights. If you want to build one with me, let’s talk about it.

Aligning Client Goals with User Needs: It’s Not Either-Or

The balancing act between what clients (product owners) want and what users need isn’t easy, but it doesn’t have to be a trade-off. Often, teams feel torn prioritizing client objectives for quick wins or leaning heavily into user needs for long-term satisfaction. But true strategic clarity comes from aligning these perspectives, not choosing between them.

Think about it, clients seek measurable outcomes, whether it’s revenue, market share, or operational efficiency. Users, meanwhile, value intuitive experiences that genuinely solve their problems. Misalignment can lead to products that look good on paper but fail in practice.

In retrospect, I’ve learned through experience that the secret lies in embedding user-centric design into strategic planning from day one. When users’ needs directly inform business objectives, something powerful happens, products resonate deeply, adoption grows, and client goals naturally follow.

This isn’t theoretical, it’s practical wisdom. By clearly documenting how each feature, action, or decision maps to both client objectives and user needs, ambiguity fades. Teams make better decisions faster because they have a north star guiding every step.

Ultimately, strategic clarity isn’t about compromising or pleasing everyone superficially. It’s about achieving alignment that creates genuine, sustainable value for all stakeholders involved.

Applying this concept to AgenticOps is critical, especially given widespread uncertainties around the value, safety, and trustworthiness of AI among clients and users. Establishing clear, transparent strategies early in the process can significantly influence the success or failure of an AgenticOps implementation. 

What’s your approach to balancing these needs? I’d love to hear your thoughts, let’s talk about it.

AgenticOps: From Strategy to Continuous Improvement

Have you ever had a great idea fall flat during execution? Or found your team stuck between prioritizing client demands and user needs? Perhaps you’ve struggled with chaos in data management or wondered how to effectively measure and improve performance. These challenges aren’t unique; I’ve encountered and wrestled with them too.

That’s why I’m writing a series on AgenticOps as a collection of insights and experiences aimed at navigating the complex world of product strategy, execution, technical workflow planning, disruptive marketing, and continuous improvement.

Throughout this series, we’ll explore AgenticOps and:

  • Strategic clarity and how aligning client goals and user needs can drive powerful outcomes.
  • Using execution playbooks to turn great strategies into actionable and consistent results through clear roles, processes, and “playbooks as code.”
  • Intelligent workflow architecture and managing data complexity with adaptive, AI-driven workflows.
  • Disruptive go-to-market strategies, because AI is going to disrupt more than markets. I think bold disruption is essential for impactful market entries by AI-first companies and for re-entry by incumbents retooling with AI.
  • Continuous improvement systems as robust measurement systems that drive ongoing growth and improvement.

My goal with this series is not only to share what I’ve learned but also to start meaningful conversations. As I wrap my head around how to apply AI to business problems for clients in my day job, this is how I record my thoughts. If you are thinking about similar topics, I invite you to read, reflect, and share your experiences and insights along the way.

Stay tuned for the upcoming posts, and feel free to jump into the discussion at any time! I talk to AI too much, so I could use some human interaction.

How Do You Steer Towards Success? The North Star Metric

I have been thinking a lot about aligning product teams with product strategy. Here’s a post about one tool in the alignment arsenal. The North Star.

What’s a North Star Metric Anyway?

A North Star Metric (NSM) is the one metric that matters most. It represents the core value your product delivers to users and serves as your guiding light for long-term success. It keeps your team aligned, focused, and moving in the right direction. Simple as that.


Why Bother With a North Star Metric?

Having a North Star Metric means you’re not chasing random numbers that look good but don’t really matter. Here’s why it’s a game-changer:

1. Keep Your Eye on the Prize

Forget vanity metrics. Your NSM ensures you’re tracking what really moves the needle for users and the business.

2. Get Everyone on the Same Page

From product to marketing to ops, everyone should be rowing in the same direction. A solid NSM helps teams sync up.

3. Build for the Long Haul

Short-term wins are great, but sustainable growth is the goal. A good NSM makes sure you’re scaling in the right way.


How to Pick a North Star Metric That Works

A great NSM should be simple, actionable, and tied to real business outcomes. Here’s how you figure it out:

1. Understand Your Core Value

Ask yourself: What’s the main reason people use our product? Your NSM should reflect the value users get from it.

2. Connect It to Growth

If your NSM improves but your business isn’t growing, you’ve got the wrong metric. Pick something that’s directly tied to success.

3. Make It Measurable

If you can’t track it, you can’t improve it. Your NSM should be easy to monitor and analyze.

4. Don’t Ignore Other Metrics

A North Star Metric is important, but it’s not the only thing you should track. Pair it with other KPIs for a complete picture.


Real-World North Star Metrics

Some of the biggest companies out there rely on their North Star Metrics to guide growth:

  • Airbnb: Nights booked (measures marketplace health and user value)
  • Spotify: Minutes streamed (tracks engagement and content value)
  • Slack: Messages sent per user (measures engagement and product dependency)

Each of these metrics is directly linked to user experience and business success. They’re not just numbers—they tell the story of product value.


Picking the Wrong North Star Metric? Here’s What Happens

Messing up your NSM can lead to some bad decisions. Avoid these common pitfalls:

1. Chasing Vanity Metrics

Page views, downloads, or social media followers might look great, but they don’t necessarily mean you’re delivering value.

2. Making It Too Complex

If it takes a whole team just to calculate your NSM, it’s too complicated. Keep it simple.

3. Ignoring User Experience

A metric focused purely on revenue might drive bad decisions—like aggressive upselling—that hurt user trust.

4. Choosing a Short-Term Fix

A good NSM isn’t about short-term wins. It should reflect the bigger picture and long-term success.


How to Use a North Star Metric to Actually Get Results

Having an NSM is one thing. Making it work for you is another. Here’s how to put it to good use:

1. Let It Guide Your Decisions

Use your NSM to prioritize product updates, marketing campaigns, and operational strategies.

2. Track It Like a Hawk

Measure your NSM over time to understand trends and make data-driven decisions.

3. Keep Everyone in the Loop

Make sure the whole company knows what the NSM is and why it matters.

4. Be Willing to Adapt

If your NSM isn’t driving the right behaviors, change it. Business evolves, and so should your metric.


Wrapping Up

A well-chosen North Star Metric keeps teams focused, drives meaningful growth, and ensures your product delivers real value. But remember, it’s not set in stone.

Bottom Line

As a team, stay focused, stay aligned, and keep moving toward success. Your North Star Metric is your roadmap; make sure it’s taking you where you actually want to go.

Don’t have an NSM? Let’s talk about setting one.

Editing a Podcast Using AI Without Losing My Mind

Editing podcasts used to be a nightmare. Manually cutting out filler words, scrubbing through audio waveforms, and trying to stitch everything together without making it sound like a Frankenstein monster. It was a process I dreaded, and I was a professional audio engineer. Then I tried Descript, and suddenly, editing a podcast didn’t feel like an uphill battle anymore.

Why Descript?

Because I don’t have time to waste. Descript treats audio and video like a text document, meaning I can edit my podcast like I’m editing a Word doc. No more staring at confusing waveforms or trying to make precise audio cuts, I just edit words in the transcript and Descript does the rest.

The Editing Process

Here’s how I turn raw audio into a polished episode with way less stress than usual.

1. Automatic Transcription

I upload the podcast audio and Descript spits out a transcript in minutes. It may not be perfect, but it’s close enough that I only have to tweak a few words here and there. This alone saves me a ton of time.

2. Cutting the Fluff

Descript has this amazing feature that finds and removes filler words like “um” and “uh” with a single click. Instead of painstakingly hunting them down, I just let Descript do its thing and I review the results. Easy.

3. Trimming the Fat

Reading through the transcript makes it so much easier to spot parts that need to go. Instead of fumbling with audio waveforms, I just delete unnecessary sections of text, and Descript handles the audio edits for me. My personal audio engineer.

4. Fixing Mistakes Without Re-recording

Made a mistake? No problem. Descript’s Overdub feature lets me fix small errors by typing in the correct words, and the AI matches voices perfectly. No awkward re-recording sessions, just a quick text fix.

5. Adding the Finishing Touches

I can drag in intro music, add transitions, and adjust volume levels, right inside Descript, my new production studio. No need for another program, just a simple drag-and-drop workflow.

6. Exporting and Done

Once I’m happy with the edit, I can export the final file and upload it to the hosting platforms. Better yet, I can ask an AI agent to publish it everywhere it needs to be distributed. See what I did there? Done and done.

The Takeaway: Faster, Smarter Editing

Descript can cut my editing time in half and make the whole process feel way less painful. Editing text instead of waveforms just makes sense, and tools like filler word removal and Overdub save me from re-recording sections.

If you’re still editing podcasts the hard way, stop. Try Descript. Your future self will thank you.

I know this feels like a Descript commercial, but I am not affiliated with nor paid by Descript. I pay full price for my Descript subscription. I’m just a long-time raving fan that wants to share the love.

AI Agents Can Write Code, Here’s How We Win as Developers

This is a thought exercise and a biased prediction. I have no real facts except what I see happening in the news and observed through my experiences. I don’t have any proof to back up some of my predictions about the future. So, feel free to disagree. Challenge my position, especially when I try to blow up the rockets in the end.

The Game has Changed

We don’t need to write C#, Python, or Java to build software anymore. Just like we no longer need to code in assembly or binary, today’s high-level languages are now being pushed down a level. We can code by talking to an AI agent in plain English. This isn’t science fiction. AI agents are here, and they’re disrupting traditional software development. The value isn’t in writing code, it’s in delivering value and desired business outcomes.

Soon, basically any app will be copyable by an agent. Features don’t matter; value does. This means the future isn’t about who can write the best code or build the best feature set. Any product developer with an agent worth its silicon will be able to write an app. For product developers, it will soon be about who can use AI agents in a way that actually delivers business value. Developers with agentic ops, design, development, infrastructure, and marketing will beat those without. Those with agents and experienced agent operators that deliver value rapidly will beat the developers that still take 3 months to deliver an MVP.

AI is No Longer Just a Tool, It’s the New Coder

AI assistants won’t just be assisting developers, as I once thought, they will become the developers, the designers, the marketers, the project managers. The shift isn’t about writing code faster. It’s about not writing code at all and letting AI generate, deploy, and optimize entire systems. How do we manage AI agent employees? An AI HR agent? The implications are far wider than just the replacement of humans in developer roles. Markets are going to shift, industries will be disrupted regularly, the world is going to enter a new age faster than any other shift in civilization that we’ve had in the past. I may be wrong, but it looks clear to me.

What does that mean for us?

  • The focus moves from software development to AI agent development and integration.
  • Companies that figure out how to deliver value with agents effectively will dominate product development.
  • The winners will have an early advantage building a proven system, with tested agents, and experienced agent operators that customers will trust to continuously deliver desired value.

Product Features are Dead, Value Delivery is Everything

If an AI agent can copy any feature, what really matters in product development? Value delivery, that’s what. Value has always been king and queen in product development. I believe it’s more important now than ever. AI-native product developers will outperform traditional ones not only because they don’t waste time or money manually coding features, but because they focus on outcomes and on delivering the value that produces those outcomes.

Hell, I’m seeing people who can’t code build apps that used to take weeks to build. They can build an app in 30 minutes, and we are still on v1 baby agents. What happens when the agents grow up in a couple of years? In the future, time won’t matter because we can deliver apps and features in days. Costs become less of a concern because agents cost less than hiring new employees. Understanding and delivering value will be the great divider between product development teams. Those who can wield agents to understand and deliver value will do better in the market.

China and the team that built DeepSeek proved that they can beat the likes of multi-billion-dollar US-aligned companies with less than $10 million to train a frontier model. What will someone with a team of agents delivering value in days do against an old-school team of human developers delivering the same value in months?

Think about it 🤔

Businesses don’t care if the back end is in Python or Rust. They care if revenue goes up and costs go down.

Customers don’t care if their data is in PostgreSQL or SQL Server. They care if their system is performant and costs are feasible.

Users don’t care if the UI is React or Blazor. They care if the experience is seamless and solves their problems.

No one asks whether an AI agent or human wrote the code, they just want a solution that fills their needs.

A product development team’s value is not in their technology choices but in the value they can deliver and maintain.

The AI-Native Product Development Playbook

If AI replaces traditional software product development, how do we compete? We learn not to focus on coding features; we build AI-driven systems that can deliver value.

Here’s A Playbook 🚀

1. Find the pain points where AI delivers real value. Optimize workflows, automate decisions, eliminate inefficiencies, increase customer attraction, acquisition, engagement, retention, and satisfaction.
2. Use rapid prototyping to test and iterate at breakneck speed. Don’t waste weeks and months building, when we can ship, test, and refine in days.
3. Orchestrate AI agents. Until AI surpasses AGI (artificial general intelligence) and reaches superintelligence, initial success won’t come from using a single agent. It will come from coordinating multiple agents to work together efficiently.
4. Measure and optimize continuously. The job isn’t done when a system is deployed. AI needs constant tuning, monitoring, and retraining.

People Still Want Human Connection

There’s one thing AI agents can’t replace, human relationships. People will always crave trust, emotional intelligence, and real connection with other humans. Businesses that blend AI automation with authentic human experiences will win.

The Future of Software Product Development is AI-First, Human-Led

This isn’t about whether AI will replace traditional software product development or developers. That ship is sailing as we speak; it is underway. The real question is: who will successfully integrate and optimize AI in businesses? Who can help build AI-native businesses that outcompete their competitors? I hope the answer is you. The future is AI-first. Those who embrace it will lead. Those who resist will be left behind because we are the Borg; resistance is futile.

Now, my last question is, are you ready? Do you know how to transform now? Evolution is too slow. You must blow up some rockets to rapidly figure out what works and doesn’t work. But doing so is easier said than done when jobs and investments are on the line. For now, we may be OK staying stuck in our ways and relying on old thought processes. I’d say we have 5-10 years (into my retirement years) to enjoy the status quo. However, that time horizon seems to shrink every month and every day not focused on transformation is a day lost to the competition.

Need help in your transformation? Let’s talk about the rockets you want to blow up.

Estimates are Bullshit

We had an issue in a new environment we were building out. For some reason, branded images were not being found on one of the websites. At one point there were 6 developers focusing on this one problem for about two hours with no results (that’s 12 hours of effort). How would we estimate this beforehand? How would we account for those lost 12 hours, since they are not in the estimate? Those 12 hours have to come from somewhere.

I have been involved in hundreds of planning and estimation sessions. Some resulted in dead on estimates, but most were over or under. Projects ended up with nothing to do near the end or skimping on quality and increasing technical debt to meet the estimated deadline. Estimates are contextual. They can change every moment we understand something new about context. What we know regardless of context is that we want to deliver value to our customers. We want to deliver and maintain a quality product. We want to deliver often.

Business management wants to make sure the cost to deliver does not exceed the return on the value we deliver. Business wants to deliver to generate revenue and manage costs to increase profits. I am not a business major, so I may have a naïve, simplistic view, but this is what I have experienced. Business wants to control costs, so they ask for estimates. When a project is delivered over the estimate, people get upset. So how do we provide the business with what they need without relying on our inability as humans to predict the future?

The lean or agile practitioners give some clues to what is a viable solution.

  • Break down deliverables into bite-sized pieces. Bite-sized meaning whatever makes sense for your team.
  • Provide an estimate for each piece based on the current understanding of the context. Take no more than 10-15 minutes estimating each piece. You can use hours, days, team weeks, story points… it doesn’t matter; no matter how hard you try, you can’t accurately predict the future 100% of the time.
  • Deliver in small iterations. You can commit to delivering a set number of pieces per iteration with a floating release date, or commit to a release date and deliver the pieces you have ready on that date.
  • At the end of each iteration, re-estimate the pieces in the backlog and break down new deliverables to replace the ones that have been promoted to an iteration.

What does this mean for the business? They still get their estimates from the mystic developers and their sprint tarot card readings, but the business has to understand that adjustments will be made iteratively to those estimates to match the reality we live in. The business has to be willing to lose its investment in a first iteration. If developers promise a working product at the end of the iteration, the product should be worth the investment. If developers don’t deliver, the business can opt not to continue based on the re-estimate, or be willing to lose money until they get something shippable. In the first iteration, deliver a working prototype, demo it to the business, get their feedback, adjust scope, and re-estimate what it will take to deliver the next iterations based on the current understanding of the context.

If you believe that developers can give perfect estimates and deadlines, I have a bridge you can buy for a steal.

If the business needs to forecast the future, deliver in small, fast, continuous increments. This builds predictability into the system with an increasing level of probability, until the system changes and the cycle starts again.

In the end, estimates are bullshit! 

What do you think?