Building AI-Driven Product Teams in AgenticOps

In an AI-Driven Product environment, success is rooted in continuous improvement and guided by five core principles:

  • Clarity in communication ensures agents and operators understand what to deliver and why.
  • Strategic and tactical alignment in task execution connects high-level goals with day-to-day work.
  • Observability in performance enables continuous measurement, learning, and improvement.
  • Explainability ensures we can interpret and trust deliverables.
  • Consistency in deliverables builds client trust and enhances their value.

The journey for AI-Driven Product Teams progresses through three layers of maturity towards AgenticOps: AI-Assisted Development, Agent Development, and Agentic Delivery.

By Product Team, I mean a team that delivers a digital, data, AI, or IoT product, one that requires both design and code.


1. AI-Assisted Development

This foundational stage focuses on training both agents and operators. The operator collaborates with their agent assistants by crafting precise prompts to direct workflows, break work items into actionable steps, and improve deliverables.

Agent Role

  • Act as specialized junior team members (e.g., marketer, developer, QA analyst, DevOps engineer, data scientist).
  • Execute prompts and produce deliverables for operator review.

Operator Role

  • Maintain control over workflows and agent task assignment, ensuring clarity in prompts and alignment with strategic and tactical goals.
  • Measure performance based on value-added time and deliverable ratings, reviews, and scores (e.g., stars, thumbs up/down, percentages).
  • Collaborate with the team to refine and solidify agent prompts, data, fine-tuning, training, workflows, policies, and templates.
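The ratings mentioned above come in mixed formats (stars, thumbs, percentages). As an illustrative sketch, not anything from the article, here is one way an operator could normalize them onto a single scale for comparison:

```python
# Hypothetical sketch: normalizing mixed deliverable ratings (stars, thumbs,
# percentages) onto one 0-1 scale so agent performance can be compared.

def normalize_rating(kind: str, value) -> float:
    """Map a raw rating to a 0-1 score. Rating kinds are illustrative."""
    if kind == "stars":        # 1-5 stars
        return (value - 1) / 4
    if kind == "thumbs":       # True = thumbs up
        return 1.0 if value else 0.0
    if kind == "percent":      # 0-100
        return value / 100
    raise ValueError(f"unknown rating kind: {kind}")

def average_score(ratings) -> float:
    """Average the normalized scores for one agent's deliverables."""
    scores = [normalize_rating(kind, value) for kind, value in ratings]
    return sum(scores) / len(scores)

ratings = [("stars", 5), ("thumbs", True), ("percent", 80)]
print(round(average_score(ratings), 2))  # 0.93
```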

Goals

  • Train operators and agents to deliver high-value deliverables with consistency and precision.
  • Build confidence in agent outputs by ensuring explainability of results and observability in performance.
  • Lay the foundation for continuous improvement through feedback and measurable progress.

Outcome
AI-Assisted Development serves as the training ground, where agents learn and improve while operators refine their ability to prompt, evaluate, and lead agents.


2. Agent Development

At this stage, agents gain more autonomy, handling complete work items while maintaining alignment with operator-defined criteria. They focus on delivering high-value deliverables efficiently and improving their ratings, reviews, and scores.

Agent Role

  • Execute work items independently, adhering to prompts and defined workflows.
  • Strive for explainability in deliverables to build operator trust.
  • Actively improve through operator feedback, targeting higher ratings, reviews, and scores.

Operator Role

  • Shift from managing tasks to guiding agents and evaluating outcomes.
  • Monitor and analyze performance metrics (e.g., flow time, throughput, and value-added time).
  • Collaborate with the team to optimize workflows, policies, and templates.
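The flow metrics named above can be computed from simple work-item timestamps; a hedged Python sketch with illustrative data:

```python
# Illustrative sketch: computing flow time and throughput from work-item
# start/finish timestamps. Field names and dates are examples, not a real tool.
from datetime import datetime, timedelta

items = [
    {"started": datetime(2025, 1, 6, 9),  "finished": datetime(2025, 1, 6, 17)},
    {"started": datetime(2025, 1, 6, 10), "finished": datetime(2025, 1, 7, 12)},
    {"started": datetime(2025, 1, 7, 9),  "finished": datetime(2025, 1, 7, 15)},
]

# Flow time: elapsed time from start to finish, per item.
flow_times = [i["finished"] - i["started"] for i in items]
avg_flow = sum(flow_times, timedelta()) / len(flow_times)

# Throughput: items finished per day over the observed window.
window_days = (max(i["finished"] for i in items)
               - min(i["started"] for i in items)).days or 1
throughput = len(items) / window_days

print(avg_flow, throughput)
```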

Goals

  • Deliver predictable, high-value outputs while minimizing operator intervention.
  • Link speed to cost and value, optimizing workflows for value, efficiency, and profitability.
  • Foster a system of continuous improvement based on measurable feedback.

Outcome
Agent Development prepares agents for full autonomy by ensuring they consistently meet or exceed expectations in value, quality, and speed.


3. Agentic Delivery

In this stage, agents achieve the agentic state with full autonomy. They independently manage work items from a queue, delivering high-value deliverables aligned with strategic goals, with minimal operator oversight.
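As a conceptual sketch (all names are illustrative, not a real agent framework), the queue-driven loop described here might look like:

```python
# Conceptual sketch of the agentic loop: an autonomous agent pulls work items
# from a queue, executes them, and records explainable results for review.
from queue import Queue

def execute(item: str) -> dict:
    """Stand-in for the agent's planning/execution step."""
    return {"item": item, "status": "done", "explanation": f"Completed {item}"}

def run_agent(work_queue: Queue, results: list) -> None:
    """Drain the queue with minimal operator oversight."""
    while not work_queue.empty():
        item = work_queue.get()
        results.append(execute(item))
        work_queue.task_done()

q = Queue()
for item in ["draft report", "review PR", "update docs"]:
    q.put(item)

results = []
run_agent(q, results)
print([r["item"] for r in results])  # ['draft report', 'review PR', 'update docs']
```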

Agent Role

  • Own the entire lifecycle of a work item, from planning to execution and delivery.
  • Ensure deliverables are explainable and align with strategic and tactical objectives.
  • Continuously improve performance by adapting to feedback and refining workflows.

Operator Role

  • Define high-level goals, vision, and success criteria.
  • Monitor performance metrics and provide directional guidance only when necessary.
  • Focus on innovation and strategy while refining policies and templates to scale operations.

Goals

  • Achieve consistent, predictable, and explainable high-value deliverables.
  • Scale operations efficiently, reducing reliance on human intervention.
  • Build a self-sustaining system of AgenticOps that continuously improves.

Outcome
Agentic Delivery transforms agents into trusted, autonomous team members capable of delivering measurable value at scale. Operators focus on strategic priorities while agents handle execution.


Continuous Improvement and Explainability

The path from AI-Assisted Development to Agentic Delivery is defined by continuous improvement and explainability. Agents are motivated to enhance their deliverables by earning higher ratings, reviews, and scores, while operators ensure clarity and alignment through refined workflows and templates.

By observing and explaining performance, operators and teams build trust in agent outputs. This system fosters a reliable, scalable process where agents evolve into autonomous contributors, consistently delivering high-value deliverables with measurable impact.

This is a lot easier said than done, and there are many devils in the details, but it provides a framework for achieving AgenticOps.

Where are you in your journey with AI Agents? I’m here if you want to talk more about taking your first step or stepping into the agentic state.

Streamlining Dependency Management: Lessons from 2015 to Today

In this Throwback Tuesday post, we revamp a dusty draft from 2015.

In 2015, I faced a challenging problem: managing dependencies across a suite of interconnected applications while ensuring efficient, safe builds and deployments. Our system included 8 web applications, 24 web services, and 8 Windows services, for a total of 40 pipelines for building, deploying, and testing. The automation itself felt manageable, but shared dependencies introduced complexity. It was critical that all applications used the same versions of internal dependencies, especially because they interacted with a shared database, and a dependency change could alter that interaction.

Back then, we used zip files as our package format and were migrating to NuGet to streamline dependency management. NuGet was built for exactly this kind of challenge, but we still needed a system that built shared dependencies once, ensured version consistency across all applications, and handled local and server builds seamlessly.

Here’s how I approached the problem in 2015 and how I’d tackle it today, leveraging more modern tools and practices.


The 2015 Solution: NuGet as a Dependency Manager

Problem Statement

We had to ensure:

  1. Shared dependencies were built once and consistently used by all applications.
  2. Dependency versions were automatically synchronized across all projects (both local and server builds).
  3. External dependencies were handled individually per application.

The core challenge was enforcing consistent dependency versions across 40 applications without excessive manual updates or creating a maintenance nightmare.

2015 Approach

  1. Migrating to NuGet for Internal Packages
    We began by treating internal dependencies as NuGet packages. Each shared dependency (e.g., ProjB, ProjC, ProjD) was packaged with a version number and stored in a NuGet repository. When a dependency changed, we built it and updated the corresponding NuGet package version.
  2. Version Synchronization
    To ensure that dependent applications used the same versions of internal packages:
    • We used nuspec files to define package dependencies.
    • NuGet commands like nuget update were incorporated into our build process. For example, if ProjD was updated, nuget update ProjD was run in projects that depended on it.
  3. Automating Local and Server Builds
    We integrated NuGet restore functionality into both local and server builds. On the server, we used Cruise Control as our CI server. We added a build target that handled dependency restoration before the build process began. Locally, Visual Studio handled this process, ensuring consistency across environments.
  4. Challenges Encountered
    • Updating dependencies manually with nuget update was error-prone and repetitive, especially for 40 applications.
    • Adding new dependencies required careful tracking to ensure all projects referenced the latest versions.
    • Changes to internal dependencies triggered cascading updates across multiple pipelines, which increased build times.
    • We won’t talk about circular dependencies.

Despite these challenges, the system worked, providing a reliable way to manage dependency versions across applications.
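For reference, here is a hypothetical nuspec fragment of the kind used in step 2, pinning ProjB's dependency on ProjD (ids and versions are illustrative, not the original files):

```xml
<!-- Hypothetical nuspec for ProjB; the bracketed range pins ProjD
     to an exact version so every consumer resolves the same build. -->
<package>
  <metadata>
    <id>ProjB</id>
    <version>1.4.0</version>
    <dependencies>
      <dependency id="ProjD" version="[2.1.0]" />
    </dependencies>
  </metadata>
</package>
```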


The Modern Solution: Solving This in 2025

Fast forward to today, and the landscape of dependency management has evolved. Tools like NuGet remain invaluable, but modern CI/CD pipelines, advanced dependency management techniques, and containerization have transformed how we approach these challenges.

1. Use Modern CI/CD Tools for Dependency Management

  • Pipeline Orchestration: Platforms like GitHub Actions, Azure DevOps, or GitLab CI/CD let us build dependencies once. We can reuse artifacts across multiple pipelines. Shared dependencies can be stored in artifact repositories (e.g., Azure Artifacts, GitHub Packages) and injected dynamically into downstream pipelines.
  • Dependency Locking: Tools like NuGet’s lock file (packages.lock.json) ensure version consistency by locking dependencies to specific versions.
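Opting into lock files is a one-line project setting; a sketch (the property is standard NuGet/MSBuild configuration, the project it sits in is illustrative):

```xml
<!-- Add to a .csproj (or Directory.Build.props) so restores generate and
     honor packages.lock.json, pinning the full dependency graph. -->
<PropertyGroup>
  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
</PropertyGroup>
```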

2. Automate Version Synchronization

  • Semantic Versioning: Internal dependencies should follow semantic versioning (e.g., 1.2.3) to track compatibility.
  • Automatic Dependency Updates: Use tools like Dependabot or Renovate to update internal dependencies across all projects. These tools can automate pull requests whenever a new version of an internal package is published.

3. Embrace Containerization

  • By containerizing applications and services, shared dependencies can be bundled into base container images. These images act as a consistent environment for all applications, reducing the need to manage dependency versions separately.

4. Leverage Centralized Package Management

  • Modern package managers like NuGet now include improved version constraints and dependency management. For example:
    • Use a shared Directory.Packages.props file to define and enforce consistent dependency versions across all projects in a repository.
    • Define private NuGet feeds for internal dependencies and configure all applications to pull from the same feed.
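A minimal illustrative Directory.Packages.props (package ids and versions are examples, not from the original system):

```xml
<!-- Central package management: every project in the repo inherits these
     versions, so no csproj can drift to a different ProjB or ProjC. -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="ProjB" Version="1.4.0" />
    <PackageVersion Include="ProjC" Version="2.0.1" />
  </ItemGroup>
</Project>
```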

5. Monitor and Enforce Consistency

  • Dependency Auditing: Tools like WhiteSource or SonarQube can analyze dependency usage to ensure all projects adhere to the same versions.
  • Build Once, Deploy Everywhere: By decoupling build and deployment, you can reuse prebuilt NuGet packages in local and server builds. This ensures consistency without rebuilding dependencies unnecessarily.

Case Study: Revisiting ProjA, ProjB, ProjC, and ProjD

Let’s revisit the original example that helped me figure this out in 2015, this time using today’s tools.

  1. When ProjD changes:
    • A CI/CD pipeline builds the new version of ProjD and publishes it as a NuGet package to the internal feed.
    • Dependency lock files in ProjB and ProjC ensure they use the updated version.
  2. Applications automatically update:
    • Dependabot identifies the new version of ProjD and creates pull requests to update ProjB and ProjC.
    • After merging, ProjA inherits the changes through ProjB.
  3. Consistency is enforced:
    • Centralized package configuration (Directory.Packages.props) ensures that local and server builds use the same dependency versions.
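One way the step-1 publish pipeline could be sketched in GitHub Actions (the feed URL, paths, and secret name are placeholders, not a real configuration):

```yaml
# Hypothetical workflow: rebuild and publish ProjD whenever its source changes.
name: publish-projd
on:
  push:
    branches: [main]
    paths: ['src/ProjD/**']
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
      - run: dotnet pack src/ProjD -c Release -o out
      - run: dotnet nuget push out/*.nupkg --source "$FEED_URL" --api-key "${{ secrets.FEED_KEY }}"
        env:
          FEED_URL: https://nuget.example.com/v3/index.json
```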

The Results

By modernizing our approach:

  • Efficiency: Dependencies are built once and reused, reducing redundant builds.
  • Consistency: Dependency versions are enforced across all projects, minimizing integration issues.
  • Scalability: The system can scale to hundreds of applications without introducing maintenance overhead.

Conclusion

In 2015, we solved the problem using NuGet and MSBuild magic to enforce dependency consistency. Today, with modern tools and practices, the process is faster, more reliable, and scalable. Dependency management is no longer a bottleneck; it’s an enabler of agility and operational excellence.

Are you ready to future-proof your dependency management? Let’s talk about optimizing your build and deployment pipelines today.

AgenticOps: Transform Your Business with Agent Enhanced Teams

Welcome to 2025 and the year of AgenticOps.

Can your business operating systems think for themselves? Can they adapt in real time? Do they make decisions that perfectly align with your strategic goals? Imagine a system that is seamlessly integrated with your workflows. It not only reduces manual overhead but actively amplifies your team’s effectiveness. This vision is no longer aspirational—it’s here. Welcome to AgenticOps, the new frontier in how organizations operate, collaborate, and deliver value.

What is AgenticOps?

AgenticOps (Agentic Operations) isn’t your typical automation or AI solution. It’s a comprehensive, structured framework where autonomous agents collaborate with human operators to achieve shared strategic and tactical goals. Picture a network of hyper-specialized agents. These could include planner agents, research agents, developer agents, and more. They work tirelessly behind the scenes to streamline processes. They optimize results and keep priorities aligned.

These agents are not mere tools. They are intelligent systems designed to learn, adapt, and execute tasks as an extension of your team. These systems enable better alignment and faster decisions. They also promote proactive problem-solving and seamless orchestration of complex workflows.


The Core Principles of AgenticOps

AgenticOps is built on three foundational pillars:

1. Autonomy

Agents operate independently, performing tasks without constant human oversight. But, they rely on human feedback and approval for critical decision points. These agents autonomously monitor, analyze, and act on data, ensuring their actions align with your organization’s strategic objectives.

2. Collaboration

Agents are interconnected, working as a cohesive system. They share insights, offer feedback, and coordinate activities across workflows. This eliminates silos, streamlines communication, and ensures seamless collaboration between agents and human operators.

3. Accountability

Every agent’s action is transparent, traceable, and purpose-driven, aligning with predefined objectives. Accountability fosters trust and ensures operational integrity, empowering teams to rely on agents while maintaining control.


The Value of AgenticOps

The transformative potential of AgenticOps lies in the value it delivers to organizations:

1. Strategic Alignment

By embedding strategic goals into every operation, AgenticOps ensures resources are directed toward the most impactful tasks. This integration drives measurable business outcomes.

2. Efficiency at Scale

Agents handle repetitive, high-volume tasks with precision, freeing human operators to focus on creative, innovative, and high-priority initiatives.

3. Real-Time Adaptability

In dynamic environments, agents adapt instantly—reallocating resources, recalibrating priorities, and maintaining operational continuity in response to market demands.

4. Enhanced Visibility

Agents continuously monitor and report on operational performance, providing unparalleled insights into bottlenecks, inefficiencies, and opportunities for improvement.


The Specialized Roles of Agents in AgenticOps

AgenticOps systems are powered by specialized agents designed to handle distinct responsibilities:

  • Strategy Agents: Define and maintain the strategic objectives that guide all operations.
  • Plan Agents: Develop and update plans, ensuring timelines and milestones align with goals.
  • Research Agents: Conduct user research, market analysis, and feasibility studies to provide actionable insights.
  • Tech Agents: Manage the heavy lifting in development, design, QA, and release engineering.
  • Manager Agents: Oversee workflows, align tasks with strategic goals, and maintain accountability.

Together, these agents form a synergistic system that empowers businesses to operate with unprecedented precision and agility.


A Real-World Example of AgenticOps in Action

Consider a product team managing a portfolio of client projects. In traditional setups, tracking progress, aligning with strategic goals, and adjusting priorities require multiple tools, meetings, and manual interventions.

With AgenticOps, Manager Agents dynamically analyze work in progress, monitor KPIs, and provide actionable insights. If a bottleneck arises, agents flag the issue, recommend solutions, and execute corrective actions autonomously. This proactive approach keeps revenue targets on track, eliminates delays, and ensures client satisfaction.


How to Adopt AgenticOps

Transitioning to AgenticOps requires a strategic, phased approach:

  1. Start Small: Identify high-impact areas where agents can deliver immediate value, such as planning or research.
  2. Integrate Incrementally: Introduce agents gradually, ensuring they complement existing workflows.
  3. Empower Teams: Provide teams with the training and tools needed to collaborate effectively with agents.
  4. Measure Success: Use metrics to track the impact of agents, iterating to refine their contributions.

The Future of Work is AgenticOps

AgenticOps is more than a technological advancement. It’s a thought process and paradigm shift that will force businesses to evolve or lose to those that make the shift. It’s not a new idea; it’s our framework for taking advantage of the rapid advancements in AI. By embedding intelligence into operations, businesses become adaptive, resilient, and capable of thriving in fast-paced environments.

This isn’t about replacing humans. It’s about empowering them—reducing cognitive load and enabling them to focus on what truly matters. The question is no longer, “What can your team do for your business?” It’s now, “What can your agent-augmented team achieve for your business?”

Have you begun the shift to AI-driven operations? How are you succeeding with something like AgenticOps? Let us know your thoughts and join the conversation.

2025: The Year AI Transforms Work and Industry at Scale

Happy New Year! Here are my bets for 2025.

The rapid evolution of artificial intelligence is reshaping industries and redefining how we work. In 2025, I predict that several transformative trends will reshape the landscape of software development, industry specialization, and workforce dynamics. Below, we explore these predictions, provide additional insights, and examine the opportunities and risks that come with these changes.


1. Accelerating Improvements in LLMs

Large Language Models (LLMs) will continue to push the boundaries of what AI can do. With advancements in transformer, SSM, and other architectures, fine-tuning, and multimodal learning, LLMs will deliver faster, more accurate, and contextually richer results. We may even see a new architecture that pushes LLMs ahead even faster.

Opportunities

  • Enhanced productivity and creativity in tasks like content creation, research, and customer interaction.
  • Multimodal capabilities enabling seamless integration of text, images, and even video into workflows.

Challenges

  • Ethical concerns around misuse, bias, and misinformation.
  • Increased compute requirements may exacerbate energy consumption concerns.

2. Smaller Models, Big Potential

Compact LLMs leveraging distillation, pruning, and quantization will achieve near-parity with today’s largest models. This shift will democratize AI, making it accessible for edge devices, IoT applications, and industries with limited compute resources.

Opportunities

  • Cost-effective AI solutions for SMBs.
  • Expanding AI’s footprint into rural and underserved areas via low-power devices.

Challenges

  • Balancing efficiency with accuracy in critical applications like healthcare diagnostics or autonomous vehicles.

3. Industry-Specific AI Takes Center Stage

The rise of verticalized AI solutions tailored to specific industries, such as marketing, healthcare, finance, and legal, will dominate the market. These solutions will provide unparalleled domain expertise, driving faster ROI for businesses.

Opportunities

  • Precision and relevance in solving domain-specific challenges.
  • Increased trust in AI adoption as models demonstrate real-world impact.

Challenges

  • Heavy reliance on proprietary data could lead to monopolistic behavior or widen the gap between industry leaders and smaller players.

4. AI-Driven Development as the Norm

AI coding assistants like GitHub Copilot, Cursor, and Aider are already changing the way developers work. In 2025, AI will be fully integrated into development ecosystems, assisting with ideation, debugging, and even deployment.

Opportunities

  • Streamlined development cycles, reducing time to market.
  • Better accessibility for non-traditional developers, diversifying the talent pool.

Challenges

  • Over-reliance on AI could erode foundational coding skills.
  • Potential loss of creativity in problem-solving as AI-driven patterns become standardized.

5. Coding Becomes a Commodity

As AI handles the technical details of coding, the value of knowing how to code will diminish. Instead, the ability to guide AI assistants, define clear objectives, and solve high-level problems with code will become paramount. The experience to know when code is good or bad becomes more important than just the ability to code.

Opportunities

  • Broader inclusion of non-technical professionals in tech projects.
  • Emergence of new roles focused on prompt engineering, strategy, and oversight.

Challenges

  • A potential skills gap for current developers who fail to adapt.
  • Risk of job displacement without sufficient upskilling initiatives.

6. Shippable AI-Driven Products with Minimal Oversight

In 2025, performant coding agents capable of turning product requirements into deployable solutions will emerge. These agents will handle well-defined scopes but may still require human oversight for complex or high-stakes applications.

Opportunities

  • Faster MVP development for startups.
  • Automated maintenance of legacy systems, freeing up human resources for innovation.

Challenges

  • Ensuring quality control and accountability in AI-generated products.
  • Difficulty in generalizing complex, nuanced requirements.

7. OpenAI Dominates but Faces Competition

OpenAI will maintain a stronghold on the market, but competition from Google, Meta, Anthropic, Cohere, and open-source ecosystems will heat up.

Opportunities

  • Diverse options for businesses to choose from, fostering innovation and reducing costs.
  • Strengthening open-source movements that promote transparency and collaboration.

Challenges

  • Risk of market fragmentation, making it harder for businesses to standardize solutions.
  • Proprietary dominance could limit interoperability.

8. AI Agencies Replace Traditional Agencies

Small AI agencies will rise to prominence, offering specialized services in automation, data modeling, and AI-driven marketing and development. These agencies will cater to SMBs, replacing traditional creative and technical firms.

Opportunities

  • Affordable, tailored AI solutions for local markets.
  • Innovation in personalized customer experiences and hyper-local strategies.

Challenges

  • Ethical dilemmas in hyper-targeted advertising.
  • Limited oversight in emerging markets where regulation lags behind.

9. Data Becomes the New Gold for Real This Time

Businesses will finally realize the full value of their proprietary data, using it to train domain-specific models. Data-rich companies will dominate their industries by leveraging AI in unique and powerful ways.

Opportunities

  • Competitive differentiation through unique datasets.
  • Greater investment in data quality, security, and governance.

Challenges

  • Risk of data monopolies exacerbating inequality.
  • Increased cybersecurity threats targeting proprietary datasets.

Navigating Risks and Building for the Future

While the predictions for 2025 are exciting, they also come with challenges that require proactive measures:

  • Upskilling the Workforce: Governments, businesses, and educational institutions must collaborate to prepare the workforce for AI-driven roles.
  • Regulating Ethically: Establishing global standards for AI use will be crucial to avoid misuse and ensure equitable benefits.
  • Driving Sustainability: Advancements in AI must prioritize energy efficiency and sustainable practices.

As we move into 2025, businesses that embrace these changes while navigating risks will unlock unprecedented opportunities. The future of work and industry is brighter, more efficient, and deeply collaborative—driven by AI.

Are you ready to harness the power of AI for your business? Let’s talk about it and get your business ready for our AI future.

Writing Automated Integration Tests by the Numbers

This Throwback Tuesday post is a revamped draft from January 2014, where I wrote about writing SpecFlow tests. I am generalizing the process here because I no longer use SpecFlow.

One thing I learned in the Marine Corps was to do things by the numbers. It was a natural fit for my analytical mind. Plus, let’s face it, we were told we were useless maggots as dumb as a rock, and this training method was apparently the easiest way to teach a bunch of recruits. Naturally, it worked great for a dumb rock like me, OORAH!

Because of this lesson, I’ve always tried to distill common processes into neat little numbered lists. They’re easy to refer to, teach from, and optimize. When I find a pattern that works across a wide range of scenarios, I know I’ve hit on something useful. So, with that in mind, here’s how I approach writing automated integration tests by the numbers.


1. Understand the Test Data Needs

The first step in any integration test is figuring out the test data you need. This means asking questions like, “What inputs are required? What outputs am I validating?” You can’t test a system without meaningful data, so this step is non-negotiable.

2. Prepare the Test Data

Once you know what you need, it’s time to create or acquire that data. Maybe you generate it on the fly using a tool like Faker. Maybe you’ve got pre-existing seed scripts to load it. Whatever the method, getting the right data in place is critical to setting the stage for your tests.
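Faker is one option; here is a minimal stdlib sketch of the same idea, generating seeded, repeatable test data (the names and fields are illustrative):

```python
# Illustrative sketch: seeded test-data generation. A fixed seed makes the
# "random" data identical across runs, which keeps failures reproducible.
import random

def make_customer(rng: random.Random) -> dict:
    first = rng.choice(["Ada", "Grace", "Alan", "Edsger"])
    last = rng.choice(["Lovelace", "Hopper", "Turing", "Dijkstra"])
    return {"name": f"{first} {last}",
            "email": f"{first.lower()}.{last.lower()}@example.com"}

rng = random.Random(42)  # fixed seed -> reproducible test data
customer = make_customer(rng)
print(customer["email"])
```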

3. Set Up the Environment

Integration tests usually need a controlled environment. This might involve spinning up Docker containers, running a seed script, or setting up mock services. Automating this step wherever possible is the key to saving time and avoiding headaches.

4. Run a Manual Sanity Check

Before diving into automation, I like to run the test manually. This gives me a feel for what the system is doing and helps catch any obvious issues before I start coding. If something’s off, it’s better to catch it here than waste time troubleshooting broken automation.

5. Create Reusable Test Components

If the test interacts with a UI, this is where I’d create or update page objects. For APIs or other layers, I’d build out reusable components to handle the interactions. Modular components make tests easier to write, maintain, and debug.

6. Write and Organize the Tests

This is the core of the process: writing the test steps and organizing them logically. Whether you’re using SpecFlow, pytest, or any other framework, the principle is the same: break your tests into clear, reusable steps.
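A minimal sketch of steps 5 and 6 together: a reusable component plus a test broken into clear steps. The API under test is hypothetical, and the same shape drops into pytest or any other runner:

```python
# Illustrative sketch: a reusable interaction component (step 5) and a test
# organized into clear, labeled steps (step 6). The API is a stand-in.

class OrdersApi:
    """Reusable component wrapping interactions with the system under test."""
    def __init__(self):
        self._orders = {}

    def create(self, order_id: str, item: str) -> None:
        self._orders[order_id] = {"item": item, "status": "new"}

    def get(self, order_id: str) -> dict:
        return self._orders[order_id]

def test_create_order():
    api = OrdersApi()                # step 1: arrange (reusable component)
    api.create("o-1", "widget")      # step 2: act
    order = api.get("o-1")           # step 3: fetch the result
    assert order["status"] == "new"  # step 4: assert

test_create_order()
```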

7. Tag and Manage Tests

In SpecFlow, I used to tag scenarios with @Incomplete while they were still under development. Modern frameworks let you tag or group tests to control when and how they run. This is handy for managing incomplete tests or running only high-priority ones in CI/CD pipelines.
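The tagging idea can be sketched framework-free with a tiny decorator and filter, mirroring pytest's markers or SpecFlow's @Incomplete (all names here are illustrative):

```python
# Illustrative sketch of test tagging: label tests, then select by tag,
# similar in spirit to `pytest -m smoke` or `pytest -m "not incomplete"`.
def tag(*labels):
    def wrap(fn):
        fn.tags = set(labels)
        return fn
    return wrap

@tag("incomplete")
def test_new_feature(): ...

@tag("smoke")
def test_login(): ...

def select(tests, include=None, exclude=()):
    """Pick tests whose tags match the include/exclude filters."""
    return [t for t in tests
            if (include is None or include in t.tags)
            and not t.tags & set(exclude)]

suite = [test_new_feature, test_login]
print([t.__name__ for t in select(suite, exclude=("incomplete",))])  # ['test_login']
```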

8. Debug and Refine

Once the test is written, run it and fix any issues. Debugging is a given, but this is also a chance to refine your steps or improve your reusable components. The goal is to make each test rock-solid and maintainable.


Lessons Learned

Breaking things down by the numbers isn’t just about being organized—it’s about being aware of where the bottlenecks are. For me, steps 1 and 2 (understanding and preparing test data) are often the slowest. Knowing that helps me focus on building tools and processes to speed up those steps.

This approach also makes training others easier. If I need to onboard someone to integration testing:

  1. Pair with them on a computer.
  2. Pull out the “Integration Tests by the Numbers” list.
  3. Call them a worthless maggot as dumb as a rock (just kidding… mostly).
  4. Walk through the process step by step.

Relevance Today

Even though I don’t use SpecFlow anymore, this process still applies. Integration testing frameworks and tools have evolved, but the principles are timeless. Whether you’re using Playwright, Cypress, or RestAssured, these steps form the foundation of effective testing.

What’s different now is the tooling. Tools like Docker, Terraform, and CI/CD pipelines have made environment setup easier. Test data can be generated on the fly with libraries like Faker or FactoryBot. Tests can be grouped and executed conditionally with advanced tagging systems.

The key takeaway? Processes evolve, but the mindset of breaking things down by the numbers is as valuable as ever. It’s how I keep my integration tests efficient, maintainable, and scalable.

Maximize Business Efficiency with SAS Agent

I am spending a lot of time focusing on how to bring AI to bear on business challenges. One area that intrigued me lately is strategic alignment. Getting everyone on the same page is challenging. I put some thought into how an AI agent can help.

Things are moving so fast, and business priorities are constantly shifting, so staying strategically aligned is much harder today than it was in 1989 when I started my first business. To move at this accelerated pace and stay aligned, we developed an AI agent named the Strategic Alignment Scoring Agent. Guided by the Strategic Alignment Score (SAS), it formalizes strategic prioritization into a simple scoring algorithm. This AI-driven tool ensures that every decision counts, aligning work items with our strategy and with our clients’ strategies. Here’s how it transforms task management and decision-making for our agency.

The Challenge: Managing Priorities in a Fast-Paced World

Every business leader knows the struggle: a growing list of tasks and limited resources. Determining what deserves attention can feel like navigating a maze. Without clear prioritization, teams risk wasting time on low-impact tasks while critical initiatives fall behind.

The Vision: Data-Driven Strategic Alignment

Imagine a world where every task supports your company’s strategic goals. With a SAS Agent, that vision becomes reality. This agent examines work items using SAS’s robust analytics platform. It employs a scoring algorithm to assign a strategic alignment score to each work item. The result? A clear, prioritized task list that maximizes value delivered and impact.

How the SAS Agent Works

  1. Input: Upload your list of work items in common formats like Excel or CSV, and provide your SAS factors and weights. This can also be automated through an API.
  2. Analysis: The SAS Agent uses proprietary SAS analytics and scoring algorithms. The agent evaluates tasks based on their alignment with strategy prioritizes them to best achieve desired outcomes.
  3. Prioritization: Tasks are scored and ranked, giving you an actionable, ordered list.
  4. Integration: Ability to seamlessly integrate with popular project management platforms ensures smooth workflow integration.

Why It Matters: Delivering Real Business Value

  • Increased Efficiency: Spend less time debating priorities and more time delivering results.
  • Enhanced Transparency: Clear, data-backed rankings reduce ambiguity and foster accountability.
  • Better Decision-Making: Leadership teams can allocate resources with confidence, knowing each task supports the broader corporate mission.

How SAS Is Calculated

We calculate the SAS by evaluating key business factors like strategic alignment, value delivered, feasibility, and urgency. Each factor is assigned a weight reflecting its strategic importance. Work items are scored based on these components, ensuring top-priority tasks are clearly identified and ranked for action.

The SAS Agent adjusts weights at the strategic level, dynamically adapting to changing business needs so we can focus on what matters most. The AI-powered agent continuously evaluates work items and can suggest weight adjustments, ensuring that work delivery stays aligned with evolving strategic goals.
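The weighted-scoring idea behind SAS can be sketched in a few lines of JavaScript. The factor names, weights, and example scores below are illustrative assumptions, not the agency’s actual model:

```javascript
// Hypothetical SAS sketch: factors, weights, and scores are made up for
// illustration. Weights reflect each factor's strategic importance.
const weights = {
    strategicAlignment: 0.4,
    valueDelivered: 0.3,
    feasibility: 0.2,
    urgency: 0.1,
};

// Each work item is scored 1-10 per factor (example data).
const workItems = [
    { name: 'Client portal redesign', scores: { strategicAlignment: 9, valueDelivered: 8, feasibility: 6, urgency: 5 } },
    { name: 'Internal wiki cleanup',  scores: { strategicAlignment: 3, valueDelivered: 4, feasibility: 9, urgency: 2 } },
];

// SAS = sum of (factor score x factor weight) for a work item.
function strategicAlignmentScore(item) {
    return Object.entries(weights)
        .reduce((total, [factor, weight]) => total + item.scores[factor] * weight, 0);
}

// Rank work items from highest to lowest SAS.
const ranked = workItems
    .map((item) => ({ ...item, sas: strategicAlignmentScore(item) }))
    .sort((a, b) => b.sas - a.sas);

ranked.forEach((item) => console.log(`${item.sas.toFixed(1)}  ${item.name}`));
```

Because the weights live in one place, a retrospective that reprioritizes, say, urgency over feasibility only has to touch the weights object, not the scoring logic.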

Similar Frameworks and How SAS Compares

The Strategic Alignment Score (SAS) is more than a prioritization framework. It’s a customized decision-making system built to optimize strategic impact. It isn’t new or groundbreaking: it borrows and adapts best practices from various prioritization and decision-making models and packages them to fit our business context. Here’s how SAS compares to well-known frameworks:

  1. Weighted Scoring Model
    • Similarity: SAS shares the concept of assigning weighted scores across multiple criteria.
    • Difference: SAS explicitly incorporates business strategy dimensions. These include strategic alignment, feasibility, and viability. These dimensions are broader than typical cost-benefit or ROI-focused models.
  2. MoSCoW Prioritization (Must have, Should have, Could have, Won’t have)
    • Similarity: Both focus on prioritizing tasks based on impact and necessity. This was a major feature of our earlier prioritization system.
    • Difference: MoSCoW is more categorical, while SAS provides a granular, data-driven scoring mechanism.
  3. Eisenhower Matrix (Urgent vs. Important)
    • Similarity: SAS considers urgency and impact (value delivered), aligning with the matrix’s core dimensions.
    • Difference: SAS expands with extra metrics like resource availability, complexity, and feasibility, making it more comprehensive.
  4. RICE Scoring (Reach, Impact, Confidence, Effort)
    • Similarity: Both frameworks quantify impact and feasibility while considering constraints.
    • Difference: SAS covers strategic and operational dimensions, while RICE is more product-focused.
  5. OKR Framework (Objectives and Key Results)
    • Similarity: Both are designed to align tasks with strategic goals.
    • Difference: OKRs define goals and track results at a high level. SAS scores and prioritizes individual work items based on detailed, weighted criteria.
  6. SAFe Weighted Shortest Job First (WSJF)
    • Similarity: WSJF uses a similar scoring approach, prioritizing tasks based on economic impact and urgency.
    • Difference: SAS has a broader application that extends beyond overly complex Agile SAFe environments. It emphasizes alignment with both agency and client objectives.

Real-World Use Cases

  • Project Managers: Prioritize tasks within complex projects to ensure critical goals are met first.
  • Analysts: Evaluate quarterly tasks against strategic targets for data-driven reporting.
  • Executives: Guide top-level planning and budget decisions using clear task prioritization.

Potential Challenges and Mitigations

While the SAS Agent offers clear benefits, it’s important to recognize potential challenges that arise when implementing such a system:

  • Subjectivity in Weight Assignment: Assigning weights to different business factors can introduce subjectivity and bias.
    • Mitigation: Establish clear scoring guidelines and involve cross-functional teams in setting weights to ensure balanced and consistent evaluations.
  • Data Accuracy and Completeness: Incomplete or inaccurate data entry can compromise the reliability of scores.
    • Mitigation: Implement data validation protocols and require standardized data formats for input submissions.
  • Over-Reliance on Automation: Teams may become overly dependent on automated scoring, overlooking qualitative business insights.
    • Mitigation: Use SAS as a decision-support tool; it should support decision-making, not replace it. Incorporate expert reviews for critical tasks and overrule the agent when necessary.
  • Complexity in Setup and Maintenance: Initial setup and ongoing maintenance of a SAS Agent require significant effort.
    • Mitigation: Start with a simplified version of a SAS Agent and expand over time, ensuring proper training and documentation.

Organizations that want to leverage something like a SAS Agent must tackle these potential challenges proactively. Doing so enhances strategic alignment and optimizes task prioritization.

        Looking Ahead

The SAS Agent isn’t just a tool; it helps us build a mindset of continuous improvement around strategic alignment. We constantly review our performance and adjust our scoring weights during retrospectives. This approach ensures that our work remains in sync with evolving client and agency needs. As we grow, SAS will remain a cornerstone of our operational strategy, guiding us toward sustained success.

        Ready to elevate your strategy alignment game? Implementing a tool like a SAS Agent can transform how you prioritize, execute, and succeed. What could your team achieve with smarter priorities?

        Unlock JavaScript Key Code Values for Developers

        I have a mountain of unpublished blog posts. I can now use AI to make more sense of all of this content. So, I decided to start cleaning up the posts and releasing them in a series I’m calling “Throwback Tuesdays”. I aim to make them relevant for today as best I can. However, I will publish even if it’s a dead concept.

First up: JavaScript ASCII Key Code Values, from January 2015. I was struggling to find key codes and got a list of values from Rachit Patel (see the end of this post for the list). This was just a reference post so I didn’t have to dig through Google search results to find a key code. It’s somewhat useless today with all the ready-made code and AI that can crank out solutions based on key codes.

        Unlocking the Power of Keyboard Shortcuts in Web Applications

        Why did I need a list of key codes? I needed to make keyboard shortcuts for various use cases. Let’s explore the possibilities.

Keyboard shortcuts are an essential tool for enhancing user experience and interaction in web applications. By responding to key presses, developers can create intuitive and powerful functionality, ranging from custom navigation to accessibility features. Below, we explore examples of why and how these shortcuts can be used, complete with code and explanations to inspire your next project. One note before diving in: true to the 2015 original, these examples use the legacy event.keyCode property; modern code should prefer event.key or event.code.


        Form Navigation

        Use Case: Improve user experience by enabling seamless navigation between input fields.

        Code:

        document.addEventListener('keydown', (event) => {
            if (event.keyCode === 9) { // Tab key
                event.preventDefault();
                const inputs = Array.from(document.querySelectorAll('input, textarea'));
                const current = inputs.indexOf(document.activeElement);
                const next = (current + 1) % inputs.length;
                inputs[next].focus();
            }
        });
        
        

        Explanation:

        • Listens for the Tab key press (keyCode 9).
        • Prevents the default behavior and cycles focus through input and textarea fields in a custom order.

        Custom Keyboard Shortcuts

        Use Case: Provide power users with quick access to application features.

        Code:

        document.addEventListener('keydown', (event) => {
            if (event.ctrlKey && event.keyCode === 83) { // Ctrl+S
                event.preventDefault();
                console.log('Save shortcut triggered');
            }
        });
        
        

        Explanation:

        • Detects when the Ctrl key is pressed along with S (keyCode 83).
        • Prevents the browser’s default save dialog and triggers custom functionality, such as saving data.

        Game Controls

        Use Case: Enable interactive movement in games or apps.

        Code:

        document.addEventListener('keydown', (event) => {
            switch (event.keyCode) {
                case 37: // Left arrow
                    console.log('Move left');
                    break;
                case 38: // Up arrow
                    console.log('Move up');
                    break;
                case 39: // Right arrow
                    console.log('Move right');
                    break;
                case 40: // Down arrow
                    console.log('Move down');
                    break;
            }
        });
        
        

        Explanation:

        • Maps arrow keys to movement directions (left, up, right, down).
        • Switch statements check the keyCode and trigger corresponding actions.

        Text Editor Commands

        Use Case: Allow users to insert a tab character in text areas.

        Code:

const editor = document.getElementById('editor');
// Listen on the editor itself, not the whole document, so Tab still
// works normally elsewhere on the page.
editor.addEventListener('keydown', (event) => {
    if (event.keyCode === 9) { // Tab key
        event.preventDefault();
        const start = editor.selectionStart;
        editor.value = editor.value.slice(0, start) + '\t' + editor.value.slice(start);
        editor.selectionStart = editor.selectionEnd = start + 1;
    }
});
        
        

        Explanation:

        • Overrides the default Tab key behavior to insert a tab character (\t) at the cursor position in a text editor.

        Secret Feature Activation

        Use Case: Trigger hidden features using specific key sequences.

        Code:

const secretSequence = [38, 38, 40, 40, 37, 39, 37, 39, 66, 65]; // Konami Code
let inputSequence = [];

document.addEventListener('keydown', (event) => {
    inputSequence.push(event.keyCode);
    inputSequence = inputSequence.slice(-secretSequence.length); // keep it bounded
    // Join with a separator so e.g. [8, 38] can't collide with [83, 8].
    if (inputSequence.join(',') === secretSequence.join(',')) {
        console.log('Secret mode activated!');
    }
});
        
        

        Explanation:

        • Tracks user key presses and compares them to a predefined sequence (e.g., the Konami Code).
        • Executes an action when the sequence is completed.

        Virtual Keyboard Input

        Use Case: Mimic physical keyboard input for touchscreen devices.

        Code:

const virtualKeys = document.querySelectorAll('.virtual-key');
virtualKeys.forEach((key) => {
    key.addEventListener('click', () => {
        const keyCode = parseInt(key.dataset.keyCode, 10);
        // Note: keyCode is deprecated and not settable via the constructor
        // in every browser; modern code should dispatch events keyed on
        // the `key` property instead.
        const event = new KeyboardEvent('keydown', { keyCode });
        document.dispatchEvent(event);
    });
});
        
        

        Explanation:

        • Creates virtual keys that simulate real key presses by dispatching synthetic keydown events.
        • Useful for applications that run on touchscreen devices.

        Accessibility Features

        Use Case: Provide shortcuts to assist users with disabilities.

        Code:

        document.addEventListener('keydown', (event) => {
            if (event.keyCode === 16) { // Shift key
                console.log('Accessibility shortcut triggered');
            }
        });
        
        

        Explanation:

        • Detects the Shift key press (keyCode 16) and performs an action, such as enabling high-contrast mode.

        Media Controls

        Use Case: Control video playback using the keyboard.

        Code:

document.addEventListener('keydown', (event) => {
    const video = document.getElementById('videoPlayer');
    if (event.keyCode === 32) { // Spacebar
        event.preventDefault(); // stop the page from scrolling
        if (video.paused) {
            video.play();
        } else {
            video.pause();
        }
    } else if (event.keyCode === 37) { // Left arrow
        video.currentTime -= 5;
    } else if (event.keyCode === 39) { // Right arrow
        video.currentTime += 5;
    }
});
        
        

        Explanation:

        • Spacebar toggles play/pause, while the left and right arrow keys adjust the playback position.

        Form Validation

        Use Case: Restrict input to numeric values only.

        Code:

document.getElementById('numberInput').addEventListener('keydown', (event) => {
    // Allow editing and navigation keys (backspace, tab, arrows, delete).
    if ([8, 9, 37, 39, 46].includes(event.keyCode)) {
        return;
    }
    if ((event.keyCode < 48 || event.keyCode > 57) && // Numbers 0-9
        (event.keyCode < 96 || event.keyCode > 105)) { // Numpad 0-9
        event.preventDefault();
    }
});
        
        

        Explanation:

        • Prevents non-numeric keys from being entered, ensuring valid input.

        Fullscreen or Escape

        Use Case: Toggle fullscreen mode or close a modal.

        Code:

        document.addEventListener('keydown', (event) => {
            if (event.keyCode === 27) { // Escape
                console.log('Modal closed');
            } else if (event.keyCode === 122) { // F11
                event.preventDefault();
                document.documentElement.requestFullscreen();
            }
        });
        
        

        Explanation:

        • Escape key closes modals or cancels actions.
        • F11 toggles fullscreen mode, overriding default behavior.

        Conclusion

        By leveraging keyboard shortcuts, developers can create applications that are not only more user-friendly but also highly functional and accessible. These examples range from form navigation to hidden features. They demonstrate how key presses can enhance interactivity and usability in your web applications. Explore these ideas in your own projects to deliver delightful and intuitive user experiences.

        JavaScript ASCII Key Code Values

Key                 KeyCode
backspace           8
tab                 9
enter               13
shift               16
ctrl                17
alt                 18
pause/break         19
caps lock           20
escape              27
page up             33
page down           34
end                 35
home                36
left arrow          37
up arrow            38
right arrow         39
down arrow          40
insert              45
delete              46
0                   48
1                   49
2                   50
3                   51
4                   52
5                   53
6                   54
7                   55
8                   56
9                   57
a                   65
b                   66
c                   67
d                   68
e                   69
f                   70
g                   71
h                   72
i                   73
j                   74
k                   75
l                   76
m                   77
n                   78
o                   79
p                   80
q                   81
r                   82
s                   83
t                   84
u                   85
v                   86
w                   87
x                   88
y                   89
z                   90
left window key     91
right window key    92
select key          93
numpad 0            96
numpad 1            97
numpad 2            98
numpad 3            99
numpad 4            100
numpad 5            101
numpad 6            102
numpad 7            103
numpad 8            104
numpad 9            105
multiply            106
add                 107
subtract            109
decimal point       110
divide              111
f1                  112
f2                  113
f3                  114
f4                  115
f5                  116
f6                  117
f7                  118
f8                  119
f9                  120
f10                 121
f11                 122
f12                 123
num lock            144
scroll lock         145
semi-colon          186
equal sign          187
comma               188
dash                189
period              190
forward slash       191
grave accent        192
open bracket        219
back slash          220
close bracket       221
single quote        222

        A Future Vision of Software Development

        From Coders to System Operators

As artificial intelligence (AI) continues reshaping industries, the role of software development is undergoing a profound transformation. Development is becoming less about crafting individual lines of code and more about designing systems of services that deliver business value. The work is shifting from writing code to creative problem-solving and the systematic orchestration of interconnected services.

        The End of Coding as We Know It

        Code generation has become increasingly automated. Modern AI tools can write boilerplate code, generate tests, and even create entire applications from high-level specifications. As this trend accelerates, human developers will move beyond writing routine code to defining the architecture and interactions of complex systems and services.

        Rather than focusing on syntax or implementation details, the next generation of developers will manage systems holistically, designing services, orchestrating workflows, and ensuring that all components deliver measurable and scalable user, client, and business value.

        The Rise of the System Operator

        In this emerging paradigm, the role of the System Operator comes into focus. A System Operator oversees a network of AI-driven assistants and specialized agents, ensuring the system delivers maximum value through continuous refinement and coordination.

        Key Responsibilities of the System Operator:

        1. Define Value Streams: Identify business goals, define value metrics, and ensure the system workflow aligns with strategic objectives.
        2. Design System Architectures: Structure interconnected services that collaborate to provide seamless functionality.
        3. Manage AI Agents: Lead AI-powered assistants specializing in tasks like strategy, planning, research, design, development, marketing, hosting, and client support.
        4. Optimize System Operations: Continuously monitor and adjust services for efficiency, reliability, and scalability.
        5. Deliver Business Outcomes: Ensure that every aspect of the system contributes directly to business success.

        AI-Augmented Teams: A New Kind of Collaboration

        Traditional product development teams will evolve into AI-Augmented Teams, where every team member works alongside AI-driven agents. These agents will handle specialized tasks such as market analysis, system design, and performance optimization. The System Operator will orchestrate the work of these agents to create a seamless, value-driven product development process.

        Core Roles in an AI-Augmented Team:

        • Strategist: Guides the product’s vision and sets business goals.
        • Planner: Manages delivery timelines, budgets, and project milestones.
        • Researcher & Analyst: Conducts in-depth user, customer, market, technical, and competitive analyses.
        • Architect & Designer: Defines system architecture and creates intuitive user interfaces.
        • Developer & DevOps Tech: Implements features and ensures smooth deployment pipelines.
        • Marketer & Client Success Tech: Drives user adoption, engagement, and retention.
        • Billing & Hosting Tech: Manages infrastructure, costs, and financial tracking.

        System Operator: A New Job Description

        A System Operator is like an Uber driver for business systems. Product development becomes a part of the gig economy.

        Operators need expertise in one or more of the system roles with agents augmenting their experience gaps in other roles. System Operators can be independent contractors or salaried employees.

        Title: System Operator – AI-Augmented Development Team

        Objective: To manage and orchestrate AI-powered agents, ensuring the seamless delivery of software systems and services that maximize business value.

        Responsibilities:

        • Collaborate with other system operators and AI-driven assistants to systematically deliver and maintain system services.
        • Define work item scope, schedule, budget, and value-driven metrics.
        • Oversee service performance, ensuring adaptability, scalability, and reliability.
        • Lead AI assistants in tasks such as data analysis, technical research, and design creation.
        • Ensure alignment with client and agency objectives through continuous feedback and system improvements.

        Skills and Qualifications:

        • Expertise in system architecture and service-oriented strategy, planning, and design.
        • Strong understanding of AI tools, agents, and automation frameworks.
        • Ability to manage cross-functional teams, both human and AI-powered.
        • Analytical mindset with a focus on continuous system optimization.

        Conclusion: Embracing the Future of Development

        The role of developers is rapidly evolving into something much broader, more strategic, and less focused on boilerplate coding. System Operators will lead the charge, leveraging AI-powered agents to transform ideas into scalable, value-driven solutions. As we move toward this new reality, development teams must embrace the change, shifting from code writers to orchestrators of complex service ecosystems that redefine what it means to build software in the AI era.

Revolutionizing Business Operations for Digital Products with a Value Delivery System (VDS)

        In the world of digital products and business operations, we often talk about delivering value to customers. But what does this really mean in practice? How can we ensure that our processes are optimized for maximum efficiency and effectiveness? Enter the Value Delivery System (VDS) – a framework that’s revolutionizing how businesses approach value creation and delivery.

        Deconstructing the Value Delivery System

        At its core, a Value Delivery System is an engineered approach to streamlining the process of value creation and delivery. It’s not just about moving products or services from point A to point B; it’s about optimizing every step of the journey from customer request to customer satisfaction.

        Let’s break down the key components.

        Value Streams

Think of value streams as the pipelines through which value flows. Each stream represents a series of steps that transform raw inputs into finished products or services. In a digital product development business, this might look like: customer request → design → development → testing → deployment → customer feedback.

        Understanding and optimizing these streams is crucial for identifying bottlenecks and improving overall system efficiency.

        Work Items

        Work items are the atomic units of value in your system. In an Agile context, these could be user stories, tasks, or features. Each work item travels through the value stream, accumulating value at each stage.

        Flow Metrics

You can only manage what you measure, so we need robust flow metrics. Key metrics include:

        • Flow Time: The total time it takes for a work item to move through the entire value stream.
        • Throughput: The number of work items completed per unit of time.
        • Work in Progress (WIP): The number of items currently being worked on.

        These metrics provide vital insights into the health and efficiency of your Value Delivery System.
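As a rough illustration, all three metrics can be computed from little more than start and finish timestamps. The work-item shape below (startedAt/finishedAt fields) is an assumption made for the sketch:

```javascript
// Hypothetical flow-metric sketch; the work-item shape is assumed.
const DAY = 24 * 60 * 60 * 1000;

const workItems = [
    { id: 1, startedAt: new Date('2024-01-01'), finishedAt: new Date('2024-01-05') },
    { id: 2, startedAt: new Date('2024-01-02'), finishedAt: new Date('2024-01-04') },
    { id: 3, startedAt: new Date('2024-01-03'), finishedAt: null }, // still in progress
];

// Flow Time: average days from start to finish for completed items.
const done = workItems.filter((w) => w.finishedAt !== null);
const avgFlowTimeDays =
    done.reduce((sum, w) => sum + (w.finishedAt - w.startedAt) / DAY, 0) / done.length;

// Throughput: completed items per unit of time (here, a one-week window).
const periodDays = 7;
const throughputPerWeek = done.length / (periodDays / 7);

// WIP: items started but not yet finished.
const wip = workItems.filter((w) => w.finishedAt === null).length;

console.log({ avgFlowTimeDays, throughputPerWeek, wip });
```

The point isn’t the arithmetic; it’s that once start and finish events are captured consistently, all three metrics fall out of the same data.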

        Feedback Loops

        Continuous improvement is at the heart of any effective VDS. Implementing feedback loops at various stages allows for:

        • Rapid iteration based on user feedback
        • Early detection and correction of issues
        • Continuous refinement of processes

        Implementing a Value Delivery System

Implementing a VDS requires a shift in thinking and operations. Here are the steps for a quick start.

        1. Map Your Current Value Streams: Start by visualizing your existing processes.
        2. Identify Bottlenecks and Waste: Use flow metrics to pinpoint areas of inefficiency.
        3. Implement Pull Systems: Adopt Kanban or similar methodologies to manage WIP and improve flow.
        4. Automate Where Possible: Use CI/CD pipelines to reduce manual interventions and speed up delivery.
        5. Monitor and Iterate: Continuously track your flow metrics and make data-driven improvements.

        The Technical Side of VDS

        From a technical perspective, implementing a VDS for digital products often involves:

        • Version Control Systems: Git for tracking changes and managing code bases.
        • CI/CD Tools: Jenkins, GitLab CI, or GitHub Actions for automating build, test, and deployment processes.
        • Monitoring Tools: Prometheus, Grafana for tracking system health and performance.
        • Workflow Management: JIRA, Trello, or Azure DevOps for managing work items and visualizing flow.

        Here’s a simplified example of how these tools might integrate in a VDS:
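One hypothetical sketch of such an integration: a small event log that a workflow tool or CI/CD pipeline could feed via webhooks, from which flow metrics are later derived. The stage names and event shape are assumptions for illustration, not any real tool’s API:

```javascript
// Hypothetical VDS integration sketch: stages and event shape are assumed.
const stages = ['requested', 'in-development', 'in-review', 'deployed'];
const transitions = []; // append-only log of { itemId, stage, at }

// A webhook handler from JIRA/Azure DevOps/CI would call this on each event.
function recordTransition(itemId, stage, at = new Date()) {
    if (!stages.includes(stage)) {
        throw new Error(`Unknown stage: ${stage}`);
    }
    transitions.push({ itemId, stage, at });
}

// Simulate events arriving from the workflow tool and deployment pipeline.
recordTransition('STORY-42', 'requested',      new Date('2024-01-01'));
recordTransition('STORY-42', 'in-development', new Date('2024-01-02'));
recordTransition('STORY-42', 'in-review',      new Date('2024-01-04'));
recordTransition('STORY-42', 'deployed',       new Date('2024-01-05'));

// Flow time for one item: first "requested" to first "deployed", in days.
function flowTimeDays(itemId) {
    const events = transitions.filter((t) => t.itemId === itemId);
    const start = events.find((t) => t.stage === 'requested');
    const end = events.find((t) => t.stage === 'deployed');
    return (end.at - start.at) / (24 * 60 * 60 * 1000);
}

console.log(flowTimeDays('STORY-42'));
```

The design choice worth noting is the append-only log: because transitions are never mutated, any metric (flow time, throughput, WIP at a point in time) can be recomputed retroactively as your definitions evolve.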

        Engineering for Value

        Implementing a Value Delivery System is not just about adopting new tools or processes. It’s about engineering your entire business operation to optimize for value delivery. By focusing on flow, measuring the right metrics, and continuously improving based on feedback, you can create a system that not only meets but exceeds customer expectations.

        As software engineers and business leaders, our goal should be to create systems that are as efficient and effective as the code we write. A well-implemented VDS is the key to achieving this, enabling businesses to respond quickly to change, deliver value consistently, and stay ahead in an increasingly competitive landscape.

Remember, the journey to optimizing your Value Delivery System is ongoing. Each iteration brings new insights and opportunities for improvement. Embrace this continuous evolution, and you’ll be well-positioned to deliver exceptional value in an ever-changing business environment.

        Software Delivery Metrics

        This is a post from 2014 stuck in my drafts. Be free little post… be free.

        We have been pondering metrics for software delivery at work. Let me tell you, trying to hammer down a core set of global metrics for an organization with thousands of developers is not an easy task. Fortunately, in my personal projects I am only concerned with:

        • How many defects are reported in production.
        • How fast are we fixing production defects.
        • How many production defects are recurring or repeat offenders.

Can there be more metrics? Absolutely, but until I have a good handle on these I don’t want to complicate things by tracking anything that doesn’t have a direct effect on my customers. Having 5, 10, 20, or more metrics that I actively track would make me over-analyze and spread my focus too wide. Keeping it simple and focused on the metrics that bring the most insight into keeping my customers happy with my product is what matters most.

Would this limited set of metrics work for every project, every company? No. My metrics are optimized to the goals of my small product and company. You have to find the thing that is most important to your company. This is where it gets difficult. There are so many opinions about what makes a good metric, and people want to advocate the metrics that have worked for them. The answer for large-scale metrics projects may be to focus on achieving a core set of goals and to keep only metrics that correlate directly with a goal while having relevance in every part of the company. Easier said than done, but I believe this would force the scope of the metrics program downward. Fewer metrics is a good, good thing.

In fact, I believe that a burgeoning metrics program should focus on one thing at a time as you ramp up. Choose one problem to fix in your software delivery and find a metric that can shed light on a possible way to fix it. If you have a problem with delivery time, some type of process-flow metric may benefit you if you follow Kanban. What you want to do is optimize your metrics for your particular problem space, and there isn’t a secret formula or magic bullet that someone can write in a blog to get you there. You have to try something. Pick a relevant metric and throw it at the wall; if it sticks, run with it and find another one.

Once you have a metric, establish your benchmark by querying your current data to see where you stand. The benchmark is your measuring stick, the point from which you measure your good and bad trends. Then develop a tracking system: how to collect, store, and report on the metric. Begin tracking it and implementing programs to improve it, and follow the trend to see how your changes affect the metric. Once you have a handle on how the metric works for you, you will have a framework for developing additional metrics. Call it the Minimum Viable Metric, if you will.

The point is, if you spin your wheels analyzing which metrics to use, months will roll by and you will be no better off, while precious data passes you right by. Start today and you may find yourself with a wealth of actionable data at your disposal and the means to roll out more metrics.