#ServiceDelivery

TheLambdaDev @thelambdadev
2025-12-24

With this approach, even sub-goals such as profitability or team productivity align with the larger goal of exceptional customer service.

Read more 👉 lttr.ai/AmbOb

Michael Adeyeye Oshin @maoshin@ngportal.com
2025-12-12
Perfectly baked and enshittified for you (::).

The process, which entails delayed_Shipping and invisible_Postman, is called Enshittification. All explained in the book.

Could it have happened somewhere in Australia? It is common to see a postman act like that.

One did not knock but rather left me a card to pick up my parcel at the post office.

Here is an excerpt of a toot at https://ngportal.com/maoshin/p/1751361429.952022 :

... and StarTrack would not ring our bell or knock but rather drop a "Sorry we miss you" notice, as shown here: https://ibb.co/bgXKpJsx ...

This problem with Auspost was on the news earlier today. I was not too surprised to hear it on the radio.

Your situation goes deeper than the postman who was joyously hopping about without knocking, much like mine with Scroptec.

CC: @pluralistic@mamot.fr

#servicedelivery
TheLambdaDev @thelambdadev
2025-12-10

By adopting the Service Orientation Agenda, Tech Managers create a mindset focusing an organization on long-term customer relationships.

Read more 👉 lttr.ai/Al00y

TheLambdaDev @thelambdadev
2025-11-25

A vital part is ensuring that each team member understands how their work impacts the customer, resulting in better engagement and accountability.

Read more 👉 lttr.ai/AlZF6

TheLambdaDev @thelambdadev
2025-11-12

With this approach, even sub-goals such as profitability or team productivity align with the larger goal of exceptional customer service.

Read more 👉 lttr.ai/AkzDA

TheLambdaDev @thelambdadev
2025-11-06

This agenda emphasizes that all organizational areas should prioritize customer needs and balance them accordingly, fostering a culture where every decision is aligned.

Read more 👉 lttr.ai/AkqCP

TheLambdaDev @thelambdadev
2025-11-06

The Kanban Service Orientation Agenda is a powerful tool that provides a straightforward way to align teams with customer-centric goals.

Read more 👉 lttr.ai/Akp0i

TheLambdaDev @thelambdadev
2025-11-05

With this approach, even sub-goals such as profitability or team productivity align with the larger goal of exceptional customer service.

Read more 👉 lttr.ai/AknTl

Civic Innovations @civic.io@civic.io
2025-11-04

Infrastructure as Code for AI: The Rapid Evolution of Agent Instructions

Over the past year, we've seen the world of AI coding agents evolve at a pace that seems crazy fast. A process that started as simple markdown files to guide AI assistants has rapidly matured into sophisticated orchestration frameworks. It's a transformation that took infrastructure automation roughly a decade – from shell scripts to Terraform – but for AI agent instructions, it's happened in months.

From notes to infrastructure

When AI coding agents first emerged, developers quickly realized they needed a way to communicate project-specific standards and conventions. The solution was elegantly simple: markdown files in your repository that any AI tool could read. An AGENTS.md file became the standard way to explain your coding conventions, project structure, and workflow preferences to an AI coding agent – essentially documentation that both humans and machines could understand.
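As a concrete illustration (the project details below are entirely invented), a minimal AGENTS.md along these lines might contain:

```markdown
# AGENTS.md

## Project overview
Monorepo for the permits API; each service lives under `services/`.

## Conventions
- Use the repository's shared lint config; do not disable rules inline.
- Every new endpoint needs an integration test under `tests/integration/`.

## Workflow
- Run the full test suite before proposing changes.
- Never commit secrets or `.env` files.
```

Nothing here is machine-specific: a human onboarding to the project could read the same file.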

But as with most things in technology, simplicity gave way to more complexity 😅. We soon saw tool-specific variants of this approach emerge – CLAUDE.md, GEMINI.md, .github/copilot-instructions.md – each serving the same basic purpose but tailored to specific platforms. The principle remained consistent: give AI coding agents written instructions about how your project works.

A two-tiered strategy

What's particularly interesting is how a pattern emerged for managing these instructions at different levels of specificity. We can think of it as two distinct layers:

High-level project instructions provide the constitution of your codebase – broad guidance that's always relevant. The overall architecture, general coding standards and conventions, and common workflows live here.

Specialized, task-specific instructions offer detailed, narrowly-focused guidance pulled into context only when needed. These might be domain-specific instruction files for generating compliance documentation, or granular files for formatting particular types of components.

The key insight about this approach is that it's about managing the finite context windows of AI coding agents. Developers keep essential, high-level guidance always available while bringing in detailed instructions just-in-time, only when the agent needs them for a specific task.
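The two-tier pattern can be sketched in a few lines. Everything here is invented for illustration – the file names, the trigger keywords, and the loader itself are not part of any particular tool:

```javascript
// Tier 1: high-level project instructions, always in context.
// Tier 2: specialized files, pulled in just-in-time when the task matches.
const SPECIALIZED = [
  { trigger: /compliance|nist|ato/i, file: "instructions/compliance.md" },
  { trigger: /component|storybook/i, file: "instructions/components.md" },
];

function selectInstructionFiles(taskDescription) {
  const files = ["AGENTS.md"]; // the "constitution" of the codebase
  for (const rule of SPECIALIZED) {
    if (rule.trigger.test(taskDescription)) {
      files.push(rule.file); // detailed guidance, only when needed
    }
  }
  return files;
}
```

A task like "draft the NIST compliance documentation" would pull in the compliance file alongside AGENTS.md, while an unrelated task leaves the context window free for the base instructions alone.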

Claude Skills and custom Copilot Spaces are examples of this narrowly-focused approach, though they differ in implementation. Skills are platform-integrated and can activate automatically when relevant. Custom Copilot instructions live in your repository, version-controlled alongside your code. But they follow the same pattern: specialized knowledge, activated when needed.

Rise of the meta-frameworks

Here's where things get really interesting. New projects like Superpowers and Amplifier represent a new category – meta-frameworks for instruction management. In simplest terms, they are orchestration platforms for AI coding instructions.

These tools solve second-order problems. The first-order problem was: "How do I give AI the right instructions?" The answer: AGENTS.md, Skills, custom instructions. The second-order problem is: "Now I have many instruction sources – how do I manage, combine, and optimize them?" The answer: Superpowers, Amplifier, and probably a bunch of other projects under development right now.

These frameworks aggregate multiple instruction sources, prioritize what matters for current context, compose instructions intelligently to stay within context limits, and provide version control and management for instruction sets across teams and across organizations.

They are, in essence, infrastructure as code for AI agent instructions.
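In the simplest terms, the composition step such frameworks perform might look like the following sketch. The names, priorities, and token counts are invented, and real frameworks are far more sophisticated than this greedy loop:

```javascript
// Greedy sketch: take instruction sources in priority order, skipping any
// source that would overflow the agent's context budget.
function composeInstructions(sources, budgetTokens) {
  const byPriority = [...sources].sort((a, b) => b.priority - a.priority);
  const chosen = [];
  let used = 0;
  for (const src of byPriority) {
    if (used + src.tokens <= budgetTokens) {
      chosen.push(src.name);
      used += src.tokens;
    }
  }
  return { chosen, used };
}
```

Even this toy version shows the trade-off being managed: a large instruction file can be squeezed out entirely when the context budget is tight, so prioritization matters.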

The speed of change

What stands out most to me about this evolution is how fast it's happening. The transition from writing shell scripts to using Terraform to manage cloud infrastructure took years – arguably a decade or more before infrastructure as code became standard practice in many organizations. But for AI agent instructions, we've gone from basic markdown files to sophisticated orchestration frameworks in a matter of just months.

We're watching an entire infrastructure layer emerge in real-time. The move from manual instruction crafting to instruction management systems. From static files to dynamic, context-aware composition. From individual projects to organization-wide instruction governance.

Implications for government software development

For those of us who work on government technology projects, this rapid evolution presents both opportunity and challenge. Government agencies are just beginning to experiment with AI coding tools, and many are still getting their arms around how to safely and effectively incorporate them into their development practices.

The good news is that the industry appears to be converging on patterns that could work well within government constraints. Version-controlled instruction files fit naturally with existing government practices around documentation and change management. The ability to create organization-wide instruction standards aligns well with the need for standards across agencies and programs.

But the speed of change is the challenge. Government technology adoption typically moves deliberately, with good reason – there are legitimate concerns about security, privacy, and the need for deeper scrutiny of new tools. The challenge is figuring out how to move quickly enough to benefit from these new capabilities while maintaining the appropriate level of oversight and management.

A new vector for government collaboration

For many years, advocates have pushed for government agencies to share open source software code – and many agencies and states do publish their code under open source licenses. It makes sense on paper – different agencies and states have common needs and functions, so sharing code should be cheaper and easier than each jurisdiction buying or building custom software independently.

Despite some notable successes, widespread code sharing between governments has proven much harder than many people – including myself – initially thought. The challenges are real: different technology stacks, varying regulatory requirements, distinct operational contexts, and the simple reality that code built for one jurisdiction's specific needs rarely transfers easily or cleanly to another's environment.

But AI agent instructions could provide a different path to the same destination.

I'm thinking about the Atlas ATO Accelerator project I started recently – a specialized instruction set designed to help AI agents generate infrastructure-as-code documents that meet NIST compliance requirements. This isn't meant to be general-purpose guidance. It's narrowly focused on a problem that's nearly universal across government: producing compliant technical documentation for security authorization processes.

Here's what makes this interesting: an instruction set is fundamentally more portable than code. A well-crafted AGENTS.md file that helps an AI generate compliant security documentation can work just as effectively on different technology stacks, different cloud providers, and different jurisdictional requirements. The agent instructions encode knowledge and patterns, not specific implementation details.

This opens up new possibilities for government collaboration. States could develop and share instruction sets for common functions – eligibility determination logic for public benefits, tax calculation patterns, procurement document generation. Federal agencies could create specialized instructions for working with government-specific frameworks and compliance requirements. Cities could share instruction sets for common civic functions like permit processing or 311 service requests.

The barriers to sharing are lower. A different government agency or office doesn't need to adopt an entire codebase, with all its dependencies and technical debt. This approach pivots from sharing code to sharing knowledge about how to approach a problem, encoded in a way that helps AI agents generate appropriate solutions for different contexts.

It's a fundamentally different model. Instead of "here's working code you can deploy," it's "here's a way to think about this problem, packaged in a way that helps AI generate code that works for your environment."

What governments can learn from this evolution

There are a few lessons I think government technology leaders should draw from watching this space evolve so rapidly:

Start experimenting now. The patterns are still emerging, but waiting for them to fully mature means falling further behind. Set up safe environments where teams can experiment with AI coding agents and different instruction approaches.

Invest in government-specific instruction sets. Unlike open source code, which has proven difficult to share across jurisdictions, instruction sets for AI agents may be inherently more portable. Government agencies should start to think about developing specialized instructions for common government functions – from compliance documentation to eligibility determination – that can be shared and adapted across agencies and jurisdictions.

Think about oversight early. The rapid emergence of meta-frameworks like Amplifier suggests that instruction governance will become critical as adoption scales. Government agencies should begin thinking now about how instruction sets will be managed, reviewed, and controlled across programs and teams.

Recognize the infrastructure layer. AI agent instructions are more than just helpful documentation – they're becoming a critical piece of development infrastructure. They deserve the same attention and investment as other infrastructure components like CI/CD pipelines or deployment platforms.

Consider the context window as a constraint. Just as government developers learned to work within memory, processing, and bandwidth constraints, they'll need to learn to work within AI context window constraints. The two-tier instruction strategy offers a proven pattern for doing this effectively.

Prepare for the next evolution. If the past year has taught us anything, it's that this space will continue to change very quickly. Whatever instruction management approaches agencies adopt should be flexible enough to evolve as the ecosystem continues to mature.

Looking ahead

Standing back from the details, what we're really watching is the emergence of a new layer in the software development stack. Just as we moved from manually configuring servers to infrastructure as code, we're now moving from manually instructing AI to structured, governed, version-controlled instruction management systems.

As AI becomes more integral to how we build software, the instructions we give these tools become a form of infrastructure themselves – something that needs to be managed, versioned, and reviewed with the same scrutiny we apply to other foundational infrastructure components.

For government agencies, this evolution carries particular significance. Traditional open source code sharing has often fallen short of expectations due to the friction of different technical contexts and requirements. But instruction sets for AI agents may finally provide the more portable, adaptable approach to sharing knowledge and best practices across jurisdictions that advocates have long sought.

A new critical infrastructure layer is forming. Government needs to be part of shaping it.

#AI #artificialIntelligence #government #serviceDelivery

TheLambdaDev @thelambdadev
2025-11-01

With this approach, even sub-goals such as profitability or team productivity align with the larger goal of exceptional customer service.

Read more 👉 lttr.ai/AkeDj

TheLambdaDev @thelambdadev
2025-10-31

By adopting the Service Orientation Agenda, Tech Managers create a mindset focusing an organization on long-term customer relationships.

Read more 👉 lttr.ai/AkbXP

TheLambdaDev @thelambdadev
2025-10-30

This agenda emphasizes that all organizational areas should prioritize customer needs and balance them accordingly, fostering a culture where every decision is aligned.

Read more 👉 lttr.ai/AkY5D

TheLambdaDev @thelambdadev
2025-10-30

In practice, Tech Managers can implement Service Orientation by continuously refining processes based on customer feedback, using Kanban boards to visualize the flow.

Read more 👉 lttr.ai/AkYw7

TheLambdaDev @thelambdadev
2025-10-29

In their crucial role, Tech Managers can use this agenda to guide their teams in delivering services that meet and exceed customer expectations.

Read more 👉 lttr.ai/AkWDW

TheLambdaDev @thelambdadev
2025-10-28

The Kanban Service Orientation Agenda is a powerful tool that provides a straightforward way to align teams with customer-centric goals.

Read more 👉 lttr.ai/AkTpP

TheLambdaDev @thelambdadev
2025-10-28

A vital part is ensuring that each team member understands how their work impacts the customer, resulting in better engagement and accountability.

Read more 👉 lttr.ai/AkTfX

TheLambdaDev @thelambdadev
2025-10-25

The Kanban Service Orientation Agenda for Tech Managers.
▸ lttr.ai/AkNaf

Civic Innovations @civic.io@civic.io
2025-10-23

Maybe We Shouldn't Call Them AI "Agents"

Beware of pretty faces that you find. A pretty face can hide an evil mind.
– Johnny Rivers, Secret Agent Man

As artificial intelligence capabilities expand into government service delivery, it's worth pausing to think carefully about the language we're using. The terms "agentic services" and "agentic AI" have gained significant traction in the tech industry, and for good reason — they capture something important about AI systems that can act autonomously. I myself am as guilty as anyone of using these terms frequently. But for those of us working in government contexts, there are some considerations worth keeping in mind.

The "Agent" Problem in Government

In government, the word "agent" carries particular connotations. FBI agents. Border patrol agents. IRS agents. These are enforcement and investigative roles. When citizens hear "government agent," they often think of authority, compliance, and oversight — not helpful service delivery.

This isn't an insurmountable problem, but it's worth being aware of. The language we choose shapes how citizens perceive and respond to new service models. If we're trying to build trust in AI-enabled services, starting with terminology that might trigger concerns about surveillance or enforcement may not be ideal.

(And yes, for a certain generation, The Matrix movies didn't exactly help the cultural perception of "agents" either. 😅)

What the term "agents" might obscure

There's a deeper consideration beyond just the word "agent" itself. Calling these services "agentic" can make them sound radically new — a complete departure enabled by cutting-edge AI. But that framing might obscure an important reality.

Delegation-based government services aren't new. They've existed for decades, and are extremely common today.

Tax preparers handle filing returns on behalf of clients. Immigration attorneys navigate visa applications. Customs brokers manage import/export documentation for businesses. Permit expediters guide building approval processes. Benefits navigators help people apply for disability or veterans services.

These are all delegation relationships. Citizens hand over complex, high-stakes government interactions to trusted specialists who handle the administrative burden on their behalf. AI didn't create this service delivery paradigm, but it does potentially make it more scalable and affordable.

Why Words Matter

Thinking about these services as "delegation-based" rather than simply "agentic" opens up useful design questions.

When you frame it as delegation, you can look to existing delegation relationships for guidance. What makes someone comfortable delegating their tax filing to a CPA? What trust factors matter when hiring an immigration attorney? These aren't abstract questions — there are decades of real-world answers.

The language of delegation also centers the citizen experience more clearly. It's not about what the AI can do autonomously; it's about what citizens are willing to hand over and under what conditions. That subtle shift in framing can lead to different design choices around transparency, control and oversight.

Moving Forward

This isn't a call to abandon the term "agentic services" entirely. It's widely used in industry, and there's value in using common language when talking with technology partners and vendors.

But for internal discussions, policy development, and especially citizen-facing communications, it might be worth experimenting with terms like "delegation-based services" or similar language. It acknowledges continuity with existing practices, avoids potentially problematic associations with "government agents," and keeps the focus on what citizens are actually doing: choosing to delegate burdensome tasks while maintaining appropriate oversight and accountability.

The technology may be new, but the underlying service delivery paradigm isn't. Our language should reflect that.

Note – this post originally appeared on GovLoop.

#agent #AI #artificialIntelligence #ChatGPT #serviceDelivery

Civic Innovations @civic.io@civic.io
2025-09-10

Revisiting an Old Idea: Building a Rules Engine with CouchDB

A few years back, while working at 18F, I created a prototype that explored something a bit unconventional: using CouchDB's document validation functions as the foundation for a rules engine. The idea was to leverage CouchDB's built-in validation capabilities to create business rules that could be applied to documents as they're inserted or updated.

I've always been somewhat obsessed with CouchDB—there's something elegant about its document-oriented approach and the way it handles replication, versioning, and distributed architectures. (Here's a video I made over 10 years ago showing how to load polling location data into a CouchDB instance.) So even though my prototype remained just that, the concept has continued to bubble in the back of my brain.

Recently, I decided to dust off this old project and give it the attention it deserves. I've worked to develop a comprehensive roadmap to transform the basic prototype into a more functional and usable product that truly leverages CouchDB's unique strengths.

What Makes This Interesting

Instead of building yet another traditional rules engine, this approach uses CouchDB's native validation functions as the rule execution environment, meaning the rules are enforced by the database itself whenever a document is inserted or updated.

The roadmap I've created takes my earlier work from a proof-of-concept to a (hopefully) production-ready system with a web-based rule management interface, comprehensive testing infrastructure, and advanced rule capabilities—all while maintaining the elegance of the core CouchDB foundation.
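To make the core idea concrete, here is a sketch of a business rule written as a CouchDB `validate_doc_update` function. The function signature and the `forbidden` error convention are CouchDB's; the invoice rule itself is an invented example, not part of the prototype:

```javascript
// Stored as the `validate_doc_update` field of a design document.
// CouchDB runs this on every insert or update; throwing an object with a
// `forbidden` key rejects the write.
function validate_doc_update(newDoc, oldDoc, userCtx, secObj) {
  if (newDoc._deleted) return; // let deletions through

  if (newDoc.type === "invoice") {
    // Rule 1: invoices must carry a positive numeric amount.
    if (typeof newDoc.amount !== "number" || newDoc.amount <= 0) {
      throw { forbidden: "invoice.amount must be a positive number" };
    }
    // Rule 2: once an invoice is paid, it may not be modified.
    if (oldDoc && oldDoc.status === "paid") {
      throw { forbidden: "paid invoices are immutable" };
    }
  }
}
```

Because the database itself runs the rule, every write path – the HTTP API, replication, bulk loads – is subject to it, which is exactly what makes validation functions an appealing rule execution environment.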

Looking Ahead

Over the next few weeks (again, hopefully), I'll be working through the development phases, starting with a modern testing framework and a clean web interface for rule management. The goal is to create something that demonstrates how CouchDB's unique features can be leveraged in ways that traditional databases simply can't match.

If you're interested in following along or have thoughts about creative uses for CouchDB, I'd love to hear from you. Sometimes the most interesting solutions come from pushing familiar tools in unexpected directions.

#CouchDB #government #Javascript #OpenSource #rules #serviceDelivery

Client Info

Server: https://mastodon.social
Version: 2025.07
Repository: https://github.com/cyevgeniy/lmst