Editing a Podcast Using AI Without Losing My Mind

Editing podcasts used to be a nightmare: manually cutting out filler words, scrubbing through audio waveforms, and trying to stitch everything together without making it sound like a Frankenstein monster. It was a process I dreaded, and I was a professional audio engineer. Then I tried Descript, and suddenly editing a podcast didn’t feel like an uphill battle anymore.

Why Descript?

Because I don’t have time to waste. Descript treats audio and video like a text document, meaning I can edit my podcast like I’m editing a Word doc. No more staring at confusing waveforms or trying to make precise audio cuts: I just edit words in the transcript, and Descript does the rest.

The Editing Process

Here’s how I turn raw audio into a polished episode with way less stress than usual.

1. Automatic Transcription

I upload the podcast audio and Descript spits out a transcript in minutes. It may not be perfect, but it’s close enough that I only have to tweak a few words here and there. This alone saves me a ton of time.

2. Cutting the Fluff

Descript has this amazing feature that finds and removes filler words like “um” and “uh” with a single click. Instead of painstakingly hunting them down, I just let Descript do its thing and I review the results. Easy.

3. Trimming the Fat

Reading through the transcript makes it so much easier to spot parts that need to go. Instead of fumbling with audio waveforms, I just delete unnecessary sections of text, and Descript handles the audio edits for me. My personal audio engineer.

4. Fixing Mistakes Without Re-recording

Made a mistake? No problem. Descript’s Overdub feature lets me fix small errors by typing in the correct words, and the AI matches my voice. No awkward re-recording sessions, just a quick text fix.

5. Adding the Finishing Touches

I can drag in intro music, add transitions, and adjust volume levels right inside Descript, my new production studio. No need for another program, just a simple drag-and-drop workflow.

6. Exporting and Done

Once I’m happy with the edit, I export the final file and upload it to the hosting platforms. Better yet, I can ask an AI agent to publish it everywhere it needs to be distributed. See what I did there? Done and done.

The Takeaway: Faster, Smarter Editing

Descript cuts my editing time in half and makes the whole process feel way less painful. Editing text instead of waveforms just makes sense, and tools like filler-word removal and Overdub save me from re-recording sections.

If you’re still editing podcasts the hard way, stop. Try Descript. Your future self will thank you.

I know this feels like a Descript commercial, but I am not affiliated with nor paid by Descript. I pay full price for my Descript subscription. I’m just a long-time raving fan who wants to share the love.

AI Agents Can Write Code, Here’s How We Win as Developers

This is a thought exercise and a biased prediction. I have no real facts beyond what I see happening in the news and what I’ve observed through my own experiences. I don’t have any proof to back up some of my predictions about the future. So, feel free to disagree. Challenge my position, especially when I try to blow up the rockets at the end.

The Game has Changed

We don’t need to write C#, Python, or Java to build software anymore. Just like we no longer need to code in assembly or binary, today’s high-level languages are now being pushed down a level. We can code by talking to an AI agent in plain English. This isn’t science fiction. AI agents are here, and they’re disrupting traditional software development. The value isn’t in writing code, it’s in delivering value and desired business outcomes.

Soon, almost any app will be copyable by an agent. Features don’t matter; value does. This means the future isn’t about who can write the best code or build the best feature set. Any product developer with an agent worth its silicon will be able to write an app. For product developers, it will soon be about who can use AI agents in a way that actually delivers business value. Developers with agentic ops, design, development, infrastructure, and marketing will beat those without. Those with agents and experienced agent operators who deliver value rapidly will beat the developers who still take three months to deliver an MVP.

AI is No Longer Just a Tool, It’s the New Coder

AI assistants won’t just be assisting developers, as I once thought; they will become the developers, the designers, the marketers, the project managers. The shift isn’t about writing code faster. It’s about not writing code at all and letting AI generate, deploy, and optimize entire systems. How do we manage AI agent employees? An AI HR agent? The implications are far wider than just the replacement of humans in developer roles. Markets are going to shift, industries will be disrupted regularly, and the world is going to enter a new age faster than any other shift civilization has had in the past. I may be wrong, but it looks clear to me.

What does that mean for us?

  • The focus moves from software development to AI agent development and integration.
  • Companies that figure out how to deliver value with agents effectively will dominate product development.
  • The winners will have an early advantage building a proven system, with tested agents, and experienced agent operators that customers will trust to continuously deliver desired value.

Product Features are Dead, Value Delivery is Everything

If an AI agent can copy any feature, what really matters in product development? Value delivery, that’s what. Value has always been king and queen in product development. I believe it’s more important now than ever. AI-native product developers will outperform traditional ones not only because they don’t waste time or money manually coding features, but because they focus on outcomes and on delivering the value that produces those outcomes.

Hell, I’m seeing people who can’t code build apps that used to take weeks to build. They can build an app in 30 minutes, and we are still on v1 baby agents. What happens when the agents grow up in a couple of years? In the future, time won’t matter because we can deliver apps and features in days. Costs become less of a concern because agents cost less than hiring new employees. Understanding and delivering value will be the great divider between product development teams. Those who can wield agents to understand and deliver value will do better in the market.

China and the team that built DeepSeek proved that they can beat the likes of multi-billion-dollar US-aligned companies with less than $10 million to train a frontier model. What will someone with a team of agents delivering value in days do against an old-school team of human developers delivering the same value in months?

Think about it 🤔

Businesses don’t care if the back end is in Python or Rust. They care if revenue goes up and costs go down.

Customers don’t care if their data is in PostgreSQL or SQL Server. They care if their system is performant and costs are feasible.

Users don’t care if the UI is React or Blazor. They care if the experience is seamless and solves their problems.

No one asks whether an AI agent or human wrote the code, they just want a solution that fills their needs.

A product development team’s value is not in their technology choices but in the value they can deliver and maintain.

The AI-Native Product Development Playbook

If AI replaces traditional software product development, how do we compete? We stop focusing on coding features and build AI-driven systems that can deliver value.

Here’s A Playbook 🚀

1. Find the pain points where AI delivers real value. Optimize workflows, automate decisions, eliminate inefficiencies, increase customer attraction, acquisition, engagement, retention, and satisfaction.
2. Use rapid prototyping to test and iterate at breakneck speed. Don’t waste weeks and months building, when we can ship, test, and refine in days.
3. Orchestrate AI agents. Until AI surpasses AGI (artificial general intelligence) and reaches superintelligence, initial success won’t come from using a single agent. It will come from coordinating multiple agents to work together efficiently.
4. Measure and optimize continuously. The job isn’t done when a system is deployed. AI needs constant tuning, monitoring, and retraining.

People Still Want Human Connection

There’s one thing AI agents can’t replace: human relationships. People will always crave trust, emotional intelligence, and real connection with other humans. Businesses that blend AI automation with authentic human experiences will win.

The Future of Software Product Development is AI-First, Human-Led

This isn’t about whether AI will replace traditional software product development or developers. That ship is sailing as we speak; the transition is underway. The real question is: who will successfully integrate and optimize AI in businesses? Who can help build AI-native businesses that outcompete their competitors? I hope the answer is you. The future is AI-first. Those who embrace it will lead. Those who resist will be left behind, because we are the Borg, and resistance is futile.

Now, my last question is: are you ready? Do you know how to transform now? Evolution is too slow. You must blow up some rockets to rapidly figure out what works and what doesn’t. But doing so is easier said than done when jobs and investments are on the line. For now, we may be OK staying stuck in our ways and relying on old thought processes. I’d say we have 5-10 years (into my retirement years) to enjoy the status quo. However, that time horizon seems to shrink every month, and every day not focused on transformation is a day lost to the competition.

Need help in your transformation? Let’s talk about the rockets you want to blow up.

Estimates are Bullshit

We had an issue in a new environment we were building out. For some reason, branded images were not being found on one of the websites. At one point, six developers were focused on this one problem for about two hours with no results (that’s 12 hours of effort). How would we estimate this beforehand? How would we account for those lost 12 hours, since they are not in the estimate? Those 12 hours have to come from somewhere.

I have been involved in hundreds of planning and estimation sessions. Some resulted in dead-on estimates, but most were over or under. Projects ended up with nothing to do near the end, or skimping on quality and increasing technical debt to meet the estimated deadline. Estimates are contextual. They can change the moment we understand something new about the context. What we know, regardless of context, is that we want to deliver value to our customers. We want to deliver and maintain a quality product. We want to deliver often.

Business management wants to make sure the cost to deliver does not exceed the return on the value we deliver. Business wants to deliver to generate revenue and manage costs to increase profits. I am not a business major, so I may have a naïve, simplistic view, but this is what I have experienced. Business wants to control costs, so they ask for estimates. When a project is delivered over the estimate, people get upset. So how do we provide the business with what they need without relying on our inability as humans to predict the future?

Lean and agile practitioners give some clues to a viable solution.

  • Break down deliverables into bite-sized pieces. Bite-sized meaning whatever makes sense for your team.
  • Provide an estimate on each piece based on the current understanding of the context. Take no more than 10-15 minutes estimating each piece. You can use hours, days, team weeks, story points…no matter how hard you try, you can’t accurately predict the future 100% of the time.
  • Deliver in small iterations. You can commit to delivering a set number of pieces per iteration with a floating release date, or you can commit to a release date and deliver the pieces you have ready on that date.
  • At the end of each iteration, re-estimate the pieces in the backlog and break down new deliverables to replace the ones that have been promoted to an iteration.

What does this mean for the business? They still get their estimates from the mystic developers and their sprint tarot card readings, but the business has to understand that those estimates will be adjusted iteratively to match the reality we live in. The business has to be willing to lose its investment in a first iteration. If developers promise a working product at the end of the iteration, the product should be worth the investment. If developers don’t deliver, the business can opt not to continue based on the re-estimate, or be willing to lose until they get something shippable. In the first iteration, deliver a working prototype, demo it to the business, get their feedback, adjust the scope, and re-estimate what it will take to deliver the next iterations based on the current understanding of the context.

If you believe that developers can give perfect estimates and deadlines, I have a bridge you can buy for a steal.

If the business needs to forecast the future, deliver in small, fast, continuous increments. This builds predictability into the system with an increasing level of probability, until the system changes and the cycle starts again.

In the end, estimates are bullshit! 

What do you think?

The AI Compass: Aligning AI with Your Desired Outcome

There’s a common misconception that AI and LLM-driven agents can just “figure out” any task. That, given enough data and compute power, they’ll magically generate perfect results without much human intervention.

That’s not how it works, in my observation.

Agents handle some tasks fine without human intervention, but that is not true for all tasks. AI is not an oracle, yet. Today, and for some time to come, it won’t be a fully autonomous problem-solver for every problem. There is no doubt that it’s a force multiplier, one that’s only as good as the person driving it toward a desired outcome. The better you define the outcome and align the agent to it, the better the result will be. If you fail to steer it properly, or worse, don’t even recognize when it’s veering off course, the outcome will be equally off target.

The Illusion of AI Autonomy

Let’s break this down.

You’ve got an LLM-powered assistant writing requirements for a new feature. It spits out something that looks good at first glance. It’s well structured, grammatically correct, and even uses the right jargon. But when you dig deeper, you realize it misunderstood core business constraints, overcomplicated a simple feature, or left out a critical user need.

Whose fault is that?

The AI? No.

The person prompting it? Partially.

The real issue? A lack of precise alignment between intent and output.

AI doesn’t have an innate sense of correctness. It only mirrors patterns from the data it was trained on, shaped by the inputs and feedback it receives. If the feedback loop is weak or the desired outcome isn’t well defined, the model will confidently produce incorrect or misaligned results.

The Human Role in AI Success

This means that the quality of AI-driven work is only as strong as the human guiding it.

  • If you define the wrong outcome, the AI will chase the wrong goal.
  • If you fail to recognize when the AI is drifting, it will continue on a bad trajectory.
  • If you provide poor feedback, it will reinforce bad patterns and biases.

This is why skilled AI agent operators will outperform those who blindly rely on automation. Knowing what good looks like and being able to course-correct when things go wrong are the real differentiators between success and failure in today’s AI-driven workflows.

What I Observed

I’ve spent a lot of time exploring coding agents and their ability to autonomously build everything from simple scripts to complex applications. And while they can be impressive, there’s a recurring pattern: they eventually hit a wall—a bug or logical flaw they just can’t overcome.

What’s fascinating is how they handle it. Often, they charge ahead with brute force, guessing at fixes and cycling through the same incorrect ones, completely unaware that they’re stuck. They exude confidence in fixes that ultimately fail. If I weren’t experienced enough to spot these errors, I’d have no way of guiding them out of the mess they create.

But you know what? Even with these bugs from time to time, it’s still a much better coder than me in many ways. It’s faster, more precise, and often more elegant in its solutions. The frustration only kicks in when it veers off course, and that’s where my human intervention becomes critical. Reviewing, testing, and course-correcting the agent’s output is the key to making it truly useful.

Today’s AI agents don’t need to be perfect. They just need the right human in the loop, with knowledge of their task, to keep them on track.

The AI Compass Principle

To consistently achieve high-quality AI-assisted work, apply The AI Compass Principle:

👉 “The effectiveness of an AI agent is directly proportional to the clarity of the goal, the precision of its alignment, and the human’s ability to detect and correct deviations.”

Most LLM agents today are not autonomous experts; they are extensions of your thinking. If you want them to deliver better results, sharpen your ability to define, align, evaluate, and correct. The best AI outputs don’t come from the best models; they come from the best operators.

“Shit in equals shit out.”

If you want to explore this with me, let’s connect.

Building Resilient .NET Applications using Polly

In distributed systems, outages and transient errors are inevitable. Ensuring that your application stays responsive when a dependent service goes down is critical. This article explores service resilience using Polly, a .NET library that helps handle faults gracefully. It covers basic resilience strategies and explains how to keep your service running when a dependency is unavailable.

What Is Service Resilience?

Service resilience is the ability of an application to continue operating despite failures such as network issues, temporary service unavailability, or unexpected exceptions. A resilient service degrades gracefully rather than crashing outright, ensuring users receive the best possible experience even during failures.

Key aspects of resilience include:

  • Retrying Failed Operations automatically attempts an operation again when a transient error occurs.
  • Breaking the Circuit prevents a system from continuously attempting operations that are likely to fail.
  • Falling Back provides an alternative response or functionality when a dependent service is unavailable.

Introducing Polly: The .NET Resilience Library

Polly is an open-source library for .NET that simplifies resilience strategies. Polly allows defining policies to handle transient faults, combining strategies into policy wraps, and integrating them into applications via dependency injection.
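As a sketch of the dependency-injection route, here is what attaching a retry policy to a named HttpClient can look like. This assumes the Microsoft.Extensions.Http.Polly package; the client name "payments" and the base address are illustrative, not from this article:

```csharp
// Sketch: attaching a Polly retry policy to a named HttpClient via DI.
// Assumes the Microsoft.Extensions.Http.Polly NuGet package; the client
// name "payments" and the base address are illustrative.
using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

var services = new ServiceCollection();

services.AddHttpClient("payments", c =>
        c.BaseAddress = new Uri("https://example.com/"))
    // HandleTransientHttpError covers HttpRequestException, 5xx, and 408.
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));

var factory = services.BuildServiceProvider().GetRequiredService<IHttpClientFactory>();
var client = factory.CreateClient("payments"); // every request now retries transparently
```

Registering the policy once at the composition root keeps resilience concerns out of the calling code entirely.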

Polly provides several resilience strategies:

  • Retry automatically reattempts operations when failures occur.
  • Circuit Breaker stops attempts temporarily if failures exceed a threshold.
  • Fallback provides a default value or action when all retries fail.
  • Timeout cancels operations that take too long.

These strategies can be combined to build a robust resilience pipeline.

Key Polly Strategies for Service Resilience

Retry Policy

The retry policy is useful when failures are transient. Polly can automatically re-execute failed operations after a configurable delay. Example:

// Shared usings for this and the following examples:
// using System; using System.Net; using System.Net.Http; using Polly;

var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .WaitAndRetryAsync(3, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
        onRetry: (outcome, timespan, retryCount, context) =>
        {
            Console.WriteLine($"Retry {retryCount}: waiting {timespan} before next attempt.");
        });

Circuit Breaker

A circuit breaker prevents an application from continuously retrying an operation that is likely to fail, protecting it from cascading failures. Example:

var circuitBreakerPolicy = Policy
    .Handle<HttpRequestException>()
    .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 3,
        durationOfBreak: TimeSpan.FromSeconds(30),
        onBreak: (outcome, breakDelay) =>
        {
            Console.WriteLine("Circuit breaker opened.");
        },
        onReset: () =>
        {
            Console.WriteLine("Circuit breaker reset.");
        });

Fallback Strategy: Keeping Your Service Running

When a dependent service is down, a fallback policy provides a default or cached response instead of propagating an error. Example:

var fallbackPolicy = Policy<HttpResponseMessage>
    .Handle<HttpRequestException>()
    .OrResult(r => !r.IsSuccessStatusCode)
    .FallbackAsync(
         fallbackAction: cancellationToken => Task.FromResult(
             new HttpResponseMessage(HttpStatusCode.OK)
             {
                 Content = new StringContent("Service temporarily unavailable. Please try again later.")
             }),
         onFallbackAsync: (outcome, context) =>
         {
             Console.WriteLine("Fallback executed: dependent service is down.");
             return Task.CompletedTask;
         });

Timeout Policy

A timeout policy ensures that long-running requests do not block system resources indefinitely. Example:

var timeoutPolicy = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10));
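With all four strategies defined, Polly’s PolicyWrap combines them into a single pipeline. Here is a minimal sketch, assuming Polly v7 (the Policy-based API used in the examples above); the endpoint URL is illustrative:

```csharp
// Combining the four strategies into one pipeline (Polly v7 sketch).
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Polly;
using Polly.Wrap;

var http = new HttpClient();

var timeout = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10));

var retry = Policy
    .Handle<HttpRequestException>()
    .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

var breaker = Policy
    .Handle<HttpRequestException>()
    .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));

// Handle<Exception> lets the fallback also absorb BrokenCircuitException
// and TimeoutRejectedException thrown by the inner policies.
var fallback = Policy<HttpResponseMessage>
    .Handle<Exception>()
    .OrResult(r => !r.IsSuccessStatusCode)
    .FallbackAsync(ct => Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent("Service temporarily unavailable. Please try again later.")
    }));

// Outermost first: the fallback wraps the retry, the retry wraps the
// breaker, and the timeout sits closest to the actual HTTP call.
AsyncPolicyWrap<HttpResponseMessage> pipeline =
    Policy.WrapAsync(fallback, retry, breaker, timeout);

var response = await pipeline.ExecuteAsync(
    ct => http.GetAsync("https://example.com/api/orders", ct),
    CancellationToken.None);
```

Order matters: policies listed first are outermost, so the fallback is the last line of defense when retries are exhausted or the circuit is open.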

Implementing Basic Service Resilience with Polly

Example Use Case: Online Payment Processing System

Imagine an e-commerce platform, ShopEase, which processes customer payments through an external payment gateway. To ensure a seamless shopping experience, ShopEase implements the following resilience strategies:

  • Retry Policy: If the payment gateway experiences transient network issues, ShopEase retries the request automatically before failing.
  • Circuit Breaker: If the payment gateway goes down for an extended period, the circuit breaker prevents continuous failed attempts.
  • Fallback Policy: If the gateway is unavailable, ShopEase allows customers to save their cart and receive a notification when payment is available.
  • Timeout Policy: If the payment gateway takes too long to respond, ShopEase cancels the request and notifies the customer.

By integrating these resilience patterns, ShopEase ensures a robust payment processing system that enhances customer trust and maintains operational efficiency, even when external services face issues.

Conclusion

Building resilient services means designing systems that remain robust under pressure. Polly enables implementing retries, circuit breakers, timeouts, and fallback strategies to keep services running even when dependencies fail. This improves the user experience and enhances overall application reliability.

I advocate for 12-Factor Apps (https://12factor.net/), and while resilience is not directly part of the 12-Factor methodology, many of its principles support resilience indirectly. For truly resilient applications, combine strategies like Polly for .NET, Kubernetes auto-recovery, and chaos engineering. Following 12-Factor principles, auto-recovery, auto-scaling, and other methods helps ensure services remain resilient and performant.

By applying these techniques, developers can create resilient architectures that gracefully handle failure scenarios while maintaining consistent functionality for users. Implement Polly and supporting resilience strategies to ensure applications stay operational despite unexpected failures.

The Copilots Are Coming

This is an unpublished throwback from 2023. Obviously, the Copilots are here, and it’s much scarier than I thought.

In “The age of copilots,” Satya Nadella, the CEO of Microsoft, outlines the company’s vision for Microsoft Copilot, positioning it as an integral tool across all user interfaces.

Microsoft Copilot
Meet your everyday AI companion for work and life.

https://www.microsoft.com/en-us/copilot

Copilot incorporates search functionality, harnessing the context of the web. This was a genius pivot of Bing Chat into a multi-platform service. They even have an enterprise version with added data protection (they are listening to the streets). And they are giving power to the people: Microsoft 365 now features Copilot, which operates across various applications. As a developer, my Semantic Kernel plugins can be easily integrated, and my OpenAI GPTs and Assistants can be integrated too. I can build some things, my team can build more things, and considering the world currently needs so many Copilot things, I’m so excited. So many tasks to optimize, so many roles to bring efficiency to, so many jobs-to-be-done to be supported by automation and AI.

We believe in a future where there will be a copilot for everyone and everything you do.

Satya Nadella, CEO of Microsoft

Nadella emphasizes the customizability of Copilot for individual business needs, highlighting its application in different roles. GitHub Copilot aids developers in coding more efficiently, while SecOps teams leverage it for rapid threat response. For sales and customer service, Copilot integrates with CRM systems and agent desktops to enhance performance.

Furthermore, Nadella speaks about extending Copilot through Copilot Studio, which allows for further role-specific adaptations. He notes the emerging ecosystem around Copilot, with various independent software vendors and customers developing plugins to foster productivity and insights. I hope this means a Copilot Store is coming, with some revenue share for independent software vendors like me and the company I work for.

You will, of course, need to tailor your Copilot for your very specific needs, your data, your workflows, as well as your security requirements. No two business processes, no two companies are going to be the same. 

Satya Nadella, CEO of Microsoft

Lastly, Nadella touches on future innovations in AI with mixed reality, where user interactions extend beyond language to gestures and gazes, and in AI with quantum computing, where natural phenomena can be simulated and quantum advancements can accelerate these processes. He envisions a future where such technology empowers every individual globally (actually, Nadella expressed more of Microsoft’s vision of caring for the world, and I appreciated it), offering personalized assistance in various aspects of life.

Nadella did a good job of expressing Microsoft’s vision for caring for our world. Microsoft will be “generating 100 percent of the energy they use in their datacenters, from zero-carbon sources by 2025.” He said that, and next year is 2024. I hope they stay on track toward this goal.

Charles L. Bryant, Citizen of the World

The message concludes with a reference to a video featuring a Ukrainian developer’s experience with Copilot. This is also a lesson in the power of expressing the value of a product with story and emotion. Storyboard Copilot is coming too.

AgenticOps Optimization with Graded Feedback Loops

To optimize our AgenticOps workflow, we need a structured grading system that evaluates each agent’s output. These scores will drive continuous improvement, refining both workflow logic and AI models.

1️⃣ First Principles: Why Grade Agent Outputs?

  1. Measure Effectiveness – Quantify the performance of automated actions.
  2. Improve Decision-Making – Identify patterns in approved vs. rejected outputs.
  3. Fine-Tune AI Agents – Adjust response generation models based on feedback.
  4. Reduce Human Intervention – Increase automation where confidence is high.

2️⃣ Agent Performance Grading System

Each agent’s output can be graded based on predefined evaluation criteria.

2.1 Defining the Grading Criteria

For each AgenticOps step, we define a scoring model (0-100) based on key metrics.

Example:

  • If an AI-generated reply is rejected, log why (e.g., “Too formal,” “Missing details”).
  • If a categorization error occurs, adjust classification model weights.
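The scoring model above can be sketched as a simple record plus a threshold rule. This is my own illustration: the field names and the 70-point retraining threshold are assumptions to be tuned per workflow, not a fixed part of AgenticOps.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical grade record: one row per human-reviewed agent output.
public record AgentGrade(
    string AgentName,
    string OutputId,
    int Score,                // 0-100, assigned at review time
    string? RejectionReason); // e.g. "Too formal", "Missing details"

public static class GradeRules
{
    // Assumption: a rolling average below this flags the agent for retraining.
    public const double RetrainThreshold = 70;

    public static bool FlagForRetraining(IReadOnlyList<AgentGrade> recent) =>
        recent.Count > 0 && recent.Average(g => g.Score) < RetrainThreshold;
}
```

A Power Automate flow could append one such record per review and evaluate `FlagForRetraining` over the last N grades before raising automation confidence or triggering retraining.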

3️⃣ Implementation in Power Automate

Step 1: Store Grading Data

  • Each agent’s output is scored after human review.
  • Store feedback in Azure Blob Storage, Dataverse, SharePoint, or a SQL database.

Step 2: Automate Feedback Processing

  • If an agent scores below a threshold, flag for model retraining.
  • If an agent performs well consistently, increase automation confidence.

Step 3: Adjust AI Models Dynamically

  • Use Azure OpenAI fine-tuning for response agents.
  • Use reinforcement learning for decision-making agents.
  • Optimize categorization AI models with feedback.

Step 4: Power BI Dashboard for Analytics

  • Track agent performance over time.
  • Identify patterns in rejections and bottlenecks.
  • Provide insights for workflow tuning.

4️⃣ Adaptive Learning & Continuous Improvement

How The System Evolves

  1. Each agent’s performance is logged.
  2. Feedback is analyzed in real-time.
  3. Underperforming models are flagged for updates.
  4. Over time, AI agents improve their accuracy.
  5. Manual review workload decreases as automation confidence grows.

Scaling This System

  • Introduce self-adjusting automation thresholds based on past performance.
  • Train AI to predict when human review is necessary.
  • Implement continuous learning pipelines for AI model updates.

5️⃣ What’s Next?

  • Where should we log agent grades? (database, Azure Blob, Dataverse, or SharePoint?)
  • How frequently should we retrain AI models? (Weekly, Monthly?)
  • Do you want Power BI dashboards to track agent performance trends?

This graded feedback system will ensure that AgenticOps evolves into a highly optimized, self-improving workflow. I’ll grade your agents if you grade mine! 🚀

Enhancing AgenticOps with Observability

To ensure an AgenticOps system remains efficient, explainable, and continuously improving, we need Agent Observability as a core feature. This enables monitoring, debugging, and optimizing agent workflows just as we would in a human-managed system.

1️⃣ First Principles of Agent Observability

Agent observability allows us to:

  1. Track Agent Behavior – Log all actions and decisions for auditing.
  2. Measure Agent Performance – Grade outputs, detect failures, and identify optimization areas.
  3. Explain Agent Decisions – Ensure transparency in AI-generated actions.
  4. Detect and Resolve Bottlenecks – Identify slowdowns and inefficiencies in workflows.
  5. Enable Continuous Learning – Use real-world feedback to refine models.

2️⃣ Key Observability Components

To implement observability, we need four core layers:

2.1 Logging & Traceability

  • What: Log all agent actions, inputs, outputs, and decision paths.
  • How: Store structured logs in a database, Azure Blob Storage, Dataverse, or SharePoint.
  • Why: Enables debugging and root cause analysis.

Example:

  • An agent categorizes an email incorrectly → Logs capture model confidence score, decision rationale, and correction applied.

2.2 Monitoring & Alerts

  • What: Real-time monitoring of agent activity, errors, and response times.
  • How: Use Power Automate monitoring, Application Insights (Azure), or Power BI dashboards.
  • Why: Detect failures or anomalies in agent workflows.

Example:

  • If an agent’s response generation time exceeds a threshold, trigger an alert for investigation.

2.3 Performance Metrics & Scoring

  • What: Evaluate agent effectiveness using quantitative metrics.
  • How: Assign performance scores (accuracy, speed, confidence) and track trends.
  • Why: Identify underperforming agents and adjust automation levels accordingly.

2.4 Root Cause Analysis & Self-Healing

  • What: Identify why failures happen and trigger automated corrections.
  • How: Use error logging, anomaly detection, and adaptive learning.
  • Why: Minimize human intervention and improve self-recovery.

Example:

  • If an agent’s classification accuracy drops below 80%, automatically retrain the model on the latest feedback.
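The self-healing rule above can be sketched in a few lines. The rolling-window check is a simplification I'm assuming for illustration; the actual retraining step would be a separate pipeline triggered by this signal.

```python
# Illustrative self-healing trigger: flag the agent for retraining when
# rolling classification accuracy drops below the 80% floor from the text.
ACCURACY_FLOOR = 0.80

def should_retrain(recent_outcomes: list) -> bool:
    """recent_outcomes: True for each correct classification, False otherwise."""
    if not recent_outcomes:
        return False
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return accuracy < ACCURACY_FLOOR

# 6 correct out of 10 recent classifications -> 60% accuracy -> retrain.
outcomes = [True, True, False, True, False, False, True, True, False, True]
if should_retrain(outcomes):
    print("Accuracy below 80% — queueing retraining on latest feedback")
```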

3️⃣ Implementation Plan in Power Automate

Step 1: Enable Structured Logging

  • Capture agent actions in a database, Azure Blob Storage, Dataverse, or SharePoint.
  • Store:
    • Agent name, action, input, output, timestamps.
    • AI confidence scores, human corrections, workflow status.

Step 2: Real-Time Monitoring & Alerts

  • Use Power Automate’s monitoring tools or Azure Application Insights.
  • Set up alerts for:
    • High error rates.
    • Slow response times.
    • Frequent human overrides of agent outputs.

Step 3: Create Agent Performance Dashboards

  • Power BI integration to visualize:
    • Agent accuracy trends.
    • Workflow bottlenecks.
    • Automation confidence levels.

Step 4: Implement Self-Healing Mechanisms

  • Trigger auto-retraining when performance drops.
  • Adjust automation levels dynamically based on agent reliability.

4️⃣ Long-Term Optimization

1. Continuous Improvement Loop

  1. Log agent behavior and collect feedback.
  2. Analyze data trends for optimization.
  3. Retrain AI models based on agent scoring.
  4. Adjust automation thresholds dynamically.

2. Scaling Observability

  • Extend to multi-agent systems (e.g., coordinating across multiple workflows).
  • Introduce AI-driven workflow tuning (e.g., intelligent decision-routing based on agent performance).

5️⃣ Next Steps

  • Where should we store agent logs? (A database, Azure Blob Storage, Dataverse, or SharePoint?)
  • What thresholds should trigger alerts? (High error rates, long processing times?)
  • Do you want automated model retraining or manual review checkpoints?

With agent observability at the core, AgenticOps becomes a self-optimizing, transparent, and explainable automation system! How’s your agent observability? If you want to discuss mine in more detail, give me a poke. 🚀

Creating an AgenticOps Powered Email Workflow

Workflow Overview

This workflow is simple enough to wrap our heads around, yet complex enough to give you a feel for how to build an AgenticOps workflow. You do not need an overly complicated platform. I’ll admit I’m very technical and analytical in my old age, so this is easy for me; it may be harder if you don’t build with technology daily. Still, anyone with a little patience and problem-solving ability can handle it.

Here’s the workflow:

Trigger: An email is received (via Outlook connector).

Agent 1: Summarization Agent

  • Extracts key information from the email (e.g., sender intent, action items, important context).
  • Uses Azure OpenAI (GPT/Copilot) or AI Builder for summarization.

Agent 2: Sentiment Analysis Agent

  • Analyzes sentiment (e.g., Positive, Neutral, Negative, Urgent) using:
    • Power Automate AI Builder
    • Azure Cognitive Services Text Analytics
    • GPT-based prompt for sentiment classification
  • Adds a Sentiment Label to guide prioritization.

Agent 3: Categorization Agent

  • Classifies emails into categories such as:
    • Support
    • Sales
    • Urgent
    • Inquiry
    • Spam
  • Uses AI-based classification.

Agent 4: Priority Routing Agent

  • Uses Sentiment + Category to assign a priority level:
    • High Priority (Urgent & Negative Sentiment) → Immediate Action
    • Medium Priority (Neutral Sentiment) → Regular Workflow
    • Low Priority (Positive Sentiment) → Can be delayed
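Agent 4's rules can be sketched as a plain function. The sentiment labels mirror the bullets above; treating an "Urgent" category the same as urgent sentiment is my assumption, since the text says the agent uses both signals.

```python
def route_priority(sentiment: str, category: str) -> str:
    """Map sentiment + category to a priority level, per the rules above."""
    if sentiment in ("Urgent", "Negative") or category == "Urgent":
        return "High"    # immediate action
    if sentiment == "Positive":
        return "Low"     # can be delayed
    return "Medium"      # regular workflow

print(route_priority("Negative", "Support"))
print(route_priority("Neutral", "Inquiry"))
print(route_priority("Positive", "Sales"))
```

In Power Automate this would be a Switch or nested Condition action, but seeing the logic in one place makes the routing table easy to review.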

Agent 5: Reply Generation Agent

  • Generates an AI-powered response:
    • Uses Azure OpenAI GPT/Copilot
    • Includes pre-defined templates
    • Formats placeholders (e.g., Client Name, Ticket ID)

Agent 6: Review & Edit Agent

  • Reviews AI-generated response (human or AI).
  • Provides edit suggestions and tracks changes.

Agent 7: Approval Agent

  • Final approval for sending response.
  • Decision options: Approve, Edit, Reject.

Decision Point: Manager (AI or Human)

  • If approved → Send Email
  • If edited → Return for Review
  • If rejected → Escalate for Manual Handling

Action: Send, Revise, or Flag for Manual Review
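The decision point above is essentially a small dispatch table. The action names here are placeholders standing in for the real flow steps, not an actual Power Automate API:

```python
def dispatch(decision: str) -> str:
    """Map the manager's decision to the next workflow action."""
    actions = {
        "Approve": "Send Email",
        "Edit": "Return for Review",
        "Reject": "Escalate for Manual Handling",
    }
    # Anything unrecognized falls back to a safe manual-review path.
    return actions.get(decision, "Flag for Manual Review")

print(dispatch("Approve"))
```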


Implementation in Power Automate

Step 1: Create Power Automate Flow

  • Trigger: New email arrives in Outlook.
  • Filter: Exclude spam using AI-based rules.
  • Extract: Email Body, Sender, Subject for processing.

Step 2: Summarization Agent

  • Use Azure OpenAI GPT, Copilot, or AI Builder for summarization.
  • Return key points from email.

Step 3: Sentiment Analysis Agent

  • Call Azure Cognitive Services – Text Analytics API
  • Classify sentiment: Positive, Neutral, Negative, Urgent
  • Store Sentiment Score & Label

Step 4: Categorization Agent

  • AI-based classification into Support, Sales, Urgent, Inquiry, Spam

Step 5: Priority Routing Agent

  • If Urgent & Negative Sentiment → Escalate Immediately
  • If Positive Sentiment → Queue for Later
  • If Neutral Sentiment → Proceed Normally

Step 6: Reply Generation Agent

  • Generate reply with GPT, Copilot, or AI templates
  • Auto-insert placeholders like [Client Name], [Ticket ID]
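The placeholder step can be sketched with Python's standard `string.Template`. The template text and field names are illustrative assumptions; a real flow would pull the values from earlier steps.

```python
from string import Template

# Illustrative reply template with placeholders for client name and ticket ID.
template = Template(
    "Hello $client_name,\n\n"
    "Thanks for reaching out. Your request has been logged as ticket "
    "$ticket_id and our team will follow up shortly.\n"
)

reply = template.substitute(client_name="Avery", ticket_id="TK-1042")
print(reply)
```

`substitute` raises a `KeyError` if a placeholder is left unfilled, which is a useful safety net before an email goes out.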

Step 7: Review & Edit Agent

  • AI or human suggests modifications to response.
  • Changes are stored in Dataverse or SharePoint.

Step 8: Approval Agent

  • Approve, Edit, or Reject email response.

Step 9: Decision Point (AI Manager or Human)

  • If Approved → Send Email Automatically.
  • If Rejected → Manual Review or Escalation.

Enhancements & Extensions

Logging & Monitoring

  • Track workflow execution, decisions, and feedback.
  • Store logs in Dataverse, SharePoint, or SQL.

Adaptive Workflow

  • Urgent Emails: Send Teams Notification for immediate action.
  • Low-Priority Emails: Add to review queue for later processing.

Integration with Teams

  • Notify Teams channel if approval is required.
  • Allow human managers to approve via Teams.

🚀 Final Questions Before Implementation

  1. Deployment Choice
    • Power Automate Cloud (Fully automated & integrated with Outlook)?
    • Power Automate Desktop (For more local processing)?
  2. Review Process
    • Do you want a human-in-the-loop for reviewing AI responses?
    • Or should this be fully autonomous?
  3. AI Model Preference
    • Azure OpenAI GPT-4/Copilot for Summarization, Categorization & Reply?
    • Azure Cognitive Services for Sentiment Analysis?

Should I write out the detailed steps? If you need help building this workflow, or something like it, let me know and we can talk it out! 🚀

Why We Need to Bet on Agents Now

Let’s cut through the noise. Agents, these AI-driven digital workers, aren’t some sci-fi fantasy. They’re here, and they’re about to fundamentally change how you go about your day and how your business operates. Whether you’re building products, running marketing campaigns, or supporting operations or clients, understanding agents is no longer optional. It’s the key to getting and staying ahead.

Agents Are No Longer Theoretical

My prediction is that in the near future, agents will be indispensable. People won’t monitor their email. They won’t browse social media or use apps and websites as they do today. Their agents will do these tasks for them. These AI-driven workers will curate and deliver exactly what users need, without requiring them to use third-party user interfaces. We won’t have to log into Instagram or email. Our agent can stream email and content from other services through a single interface.

This will change marketing because marketers will have to learn how to attract agents to reach their human operators. Online stores will have to learn how to sell to agents. Agents make purchases on behalf of their human operators. Websites and apps won’t target humans but agents. If it can be done on a computer, agents will be able to do it. This includes phones. We need to rethink target users across our products. Our world will go through an epic paradigm shift.

Agents are still an emerging concept, and nothing is real or set in stone yet. However, early movers are already deploying agents. They use them to automate tasks, generate content, write code, and optimize decision-making. But here’s the kicker, most businesses don’t yet have agents tailored to their unique needs. This presents a massive opportunity. The potential applications are vast, and the market is wide open. If you get started today, we’re not just building agents; we’re writing the best practices for this transformation. By focusing on how to attract and build agents now, we’re positioning ourselves to thrive as the agent ecosystem grows.

This is our chance to step up as experts. Yes, we’re in uncharted territory, but that’s a good thing. I have made predictions here. However, no one really knows what’s coming. No one knows what to do to apply agents in industries. We’re not just building agents; we’re shaping the best practices that will define agents in our respective industries.

Why Early Adoption Matters

Being early comes with risks, but the opportunities and rewards far outweigh them. By diving in now, we can shape the future of how agents are built, delivered, and operated. Early adoption means gaining:

  • Experience: Each agent we develop is a chance to learn from both success and failure. What works, what doesn’t, and how to pivot.
  • Credibility: As agents become mainstream, businesses will seek pioneers, those who’ve already proven their expertise and early results.
  • Market Advantage: Agents are self-improving. If we start soon, we will develop smarter and more capable agents sooner, and our systems will outperform those of late adopters. Compounded learning will separate leaders from laggards. We also gain a head start in acquiring the precious data we need to feed our agents and improve their performance.

The Work Ahead

We must learn to build agents. We must also understand how to deliver and operate them as the best solution for specific use cases.

Delivering Agents

  • Planning: Understand the jobs to be done. Identify use cases, workflows, and challenges where agents can provide meaningful value.
  • Designing: Define clear objectives, user interactions, and system integration and interfaces for the agent.
  • Building: Train agents on the right data, using AI frameworks that allow flexibility and growth.
  • Testing and Iterating: Rigorously evaluate agent performance and refine based on real-world feedback.
  • Deploying: Introduce agents thoughtfully, ensuring seamless onboarding and integration with existing tools and workflows.
  • Releasing: Equip users with proper training and documentation to ensure successful adoption.

Operating Agents

  • Managing: Overseeing the agent’s functionality, ensuring it runs as expected, and addressing any operational issues.
  • Monitoring: Tracking real-time performance metrics, such as speed, accuracy, and user feedback, to ensure consistent quality.
  • Evaluating: Regularly reviewing the agent’s outcomes against its goals, identifying areas for improvement or additional training.
  • Improving: Iterating on the agent. This involves refining its prompts, templates, tools, and algorithms. We can update its RAG with new data. We can fine-tune it or retrain it with new data. We can also enhance its features to adapt to evolving needs.

Roadmap

Our roadmap to be successful with agents as a product focuses on both strategic insights and actionable steps.

  1. Understand the Jobs to Be Done: Not every task needs an agent, and replacing traditional digital solutions (e.g., websites or apps) requires clear benefits.
  2. Iterate Relentlessly: The first version of any agent won’t be perfect. It may often hallucinate and get things wrong. That’s fine. What matters is how quickly we learn and adapt.
  3. Collaborate Across Teams: Product, marketing, and support teams must all contribute. Everyone’s input is critical. The more perspectives we have, the better equipped we are to design and refine agents that excel.
  4. Measure and Optimize: Agents need monitoring and fine-tuning. Metrics like accuracy, speed, and user satisfaction will guide us.

Agents Improve Over Time

Let’s tackle a key truth: the first iteration of any agent will rarely deliver perfect results. Early versions might be clunky, prone to hallucinations and errors, or lacking the nuanced judgment needed for complex tasks. But that’s not a failure. It marks the beginning of an iterative process that allows agents to learn, adapt, and improve through data and feedback.

Unlike traditional solutions, which typically rely on fixed algorithms and human-driven updates, agents can operate dynamically. They evolve in real-time as they encounter new data and scenarios. This ability to self-optimize positions agents as uniquely suited for complex and evolving challenges where traditional solutions fall short.

  • Initial Challenges: In their infancy, agents might struggle with insufficient data, unclear objectives, or unexpected scenarios. These early hiccups can result in inconsistent performance or even outright errors.
  • Continuous Learning: With every iteration, agents refine their capabilities. New data helps them understand patterns better, adapt to edge cases, and make more accurate decisions. The more they’re used, the smarter they get.
  • Operator Involvement: Effective improvement requires skilled operators. We monitor agent performance. We analyze results and provide feedback and data. In doing so, we ensure agents evolve in ways that align with business goals.
  • Replacing Traditional Solutions: Over time, agents become faster. They become more accurate and better tuned to tasks. Eventually, they will outperform traditional solutions and humans. This transformation won’t happen overnight, but the incremental improvements lead to exponential results. Starting early helps us get through this journey faster than late adopters.

The goal isn’t perfection from day one. It’s about building a foundation that grows stronger and more capable with time.

A Vision for What’s Next

Agents will handle the tedious, time-consuming stuff, freeing us to focus on strategy, creativity, and big-picture thinking. Our clients see the results. Our stakeholders see the value. We get to lead the charge in one of the most exciting shifts in tech.

But this won’t happen by accident. It’s going to take the courage to move now with bold ideas and hard work. It’s going to take a willingness to fail fast and learn faster. Let’s embrace the challenge and make it happen.

Let’s get to work! If you want to talk about how to start or improve your AgenticOps journey, I’m here.