

2026-04-15 9 min read
Building at Scale with Amazon Kiro - Shared Standards Across 20+ Microservices

AI coding assistants are remarkably good at generating code for a single project. Give one a clear prompt, some context about your stack, and it will produce something reasonable.

The problem starts when you have twenty projects that all need to look the same.

At Oproto (the SaaS platform I'm building), the backend runs on AWS across more than twenty microservice repositories. Each one follows the same architecture:

  • .NET Lambda APIs with DynamoDB
  • a shared authorization model
  • domain events on EventBridge
  • CDK infrastructure

Every service uses the same patterns for request validation, error handling, service layer design, and deployment.

Getting an AI assistant to follow those patterns once is easy. Getting it to follow them consistently across every repository, every feature, every time requires a different approach.

This post covers how we use Amazon Kiro's steering files, code review checklists, and agent hooks to maintain architectural consistency at scale, and the patterns I've developed after months of iteration.

The Problem with Context

Kiro, like any AI assistant, works within a context window. The more context it has, the better its output. But context is finite, and in a platform with dozens of standards documents, you can't load everything into every conversation.

Our standards cover:

  • Lambda project configuration
  • authorization patterns across three API surfaces
  • DynamoDB entity conventions
  • validation rules
  • CDK infrastructure patterns
  • domain event publishing
  • cross-service communication
  • and much more

Loading all of that into context for every task would be wasteful, and in practice, most of it would be irrelevant to whatever you're working on at the moment.

The naive approach is to dump everything into always-loaded steering files and hope the model sorts it out. We tried that. It doesn't scale. Context gets diluted, the model starts ignoring instructions buried deep in the prompt, and you burn through your context budget before you've done any real work.

The Index Pattern

The solution we landed on is a single always-loaded index file that describes what's available, paired with dozens of manual-inclusion steering files that only load when relevant.

The index file is the only steering document marked with inclusion: always. Every other file uses inclusion: manual, which means Kiro won't load it unless it decides the content is relevant, or you explicitly reference it with # in chat.
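Concretely, the inclusion mode lives in a short frontmatter block at the top of each steering file. This is the shape as I understand Kiro's steering syntax; the heading and description are illustrative:

```yaml
---
inclusion: manual
---

# Repository Pattern

FluentDynamoDB entities, source-generated Table classes,
repository interface pattern.
```

Only index.md carries `inclusion: always`; everything else stays out of context until it's needed.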

The index serves two purposes. The first is a categorized documentation index, structured like a table of contents. Each entry links to a steering file and includes a short description of what it covers:

### Building Lambda APIs
- **[Lambda Annotations](lambda-annotations.md)** - REST API patterns, 
  Console App configuration, unique function naming, LambdaRequestBuilder
- **[Dependency Injection](dependency-injection.md)** - Service registration, 
  Startup class, FluentDynamoDB Table registration
- **[Validation](validation.md)** - FluentValidation patterns for input 
  validation at API boundaries

### Authorization & Security
- **[Authorization Standards](authorization-standards.md)** - VP action naming, 
  resource types, capabilities registry
- **[Service Layer Architecture](service-layer-architecture.md)** - RequestRunner 
  pipeline with authorization patterns

### Data Access & Business Logic
- **[Repository Pattern](repository-pattern.md)** - FluentDynamoDB entities, 
  source-generated Table classes, repository interface pattern
- **[Entity Conventions](entity-conventions.md)** - Standard audit fields, 
  ActorRef format, DynamoDB key patterns

This gives Kiro enough information to decide which files are relevant without actually loading them. When it sees a task involving DynamoDB entities, it knows to pull in repository-pattern.md and entity-conventions.md based on the descriptions alone.

The second purpose is a quick-reference lookup that maps common work patterns directly to the relevant documents:

When working on:
- New Lambda functions → Lambda Annotations, OpenAPI Documentation, 
  Dependency Injection, Validation
- Authorization setup → Authorization Standards, Service Layer Architecture
- DynamoDB entities → Repository Pattern, Entity Conventions
- CDK stacks → CDK Infrastructure, CDK Documentation
- Completing a spec → Development Workflow (cleanup and changelog)
- New microservice → Project Structure, Project Versioning, Authorization 
  Standards, Service Layer Architecture, Domain Events

The categorized index is the foundation. It's what makes the whole system work. The quick-reference section is a convenience layer on top that helps Kiro make faster decisions about what to load when the task is straightforward.

Together, they give you the depth of thirty-plus standards documents with the context footprint of one.

What Goes in Steering Files

The steering files aren't vague guidelines. They're specific, opinionated, and full of code examples.

The level of detail matters. A steering file that says "use consistent naming conventions" won't change behavior. One that says "Lambda function method names must be globally unique within the assembly, must include the entity name, and must not use generic names like Create or List" will. The model needs concrete rules it can follow mechanically, not principles it has to interpret.

Each file covers a single concern. One for Lambda project configuration. One for the service layer request pipeline. One for DynamoDB entity conventions. One for CDK infrastructure patterns. They're written as reference material, not tutorials. Dense, specific, and designed to be consumed by an AI that needs to produce correct code on the first pass.

The specificity also makes them testable. If a rule can't be verified with a grep command or a checklist item, it's probably too abstract to be useful.

The Behavioral Rules Problem

Even with good steering, we found that Kiro would sometimes take shortcuts, especially during longer implementation sessions where it was iterating on build errors or test failures. The model would start strong, following every convention, but as it worked through compilation issues it would gradually drift.

Three patterns kept recurring:

First, when fixing code that wasn't compiling, Kiro would mark the existing implementation as [Obsolete] and create a new version alongside it. This is reasonable behavior in a production codebase, but during iterative development it just creates clutter. The code should be fixed in place.

Second, after fixing a bug or making a change, Kiro would helpfully bump the package version in the .csproj file. We control versioning using tags and GitHub Actions, and this created unnecessary noise in pull requests.

Third, when encountering issues with custom libraries, Kiro would sometimes try to work around the problem by replacing library usage with raw AWS SDK calls. The actual fix was almost always a missing using statement or an incorrect method signature, but the model would go down a rabbit hole of refactoring instead of stopping to ask.

I addressed these with a set of non-negotiable behavioral rules at the top of the index file. They're blunt and direct:

DO NOT mark existing code as [Obsolete] and create new versions. Fix the code in place.

DO NOT increment package versions. You do not control versioning.

When encountering issues with custom libraries: STOP. Describe the error. Wait for feedback.

These rules load into every conversation because they're in the always-included index. They've eliminated entire categories of wasted time.

Checklists as a Safety Net

Steering tells Kiro how to write code. Checklists verify that it actually did.

I maintain separate checklists for Lambda project configuration, service layer patterns, CDK infrastructure, domain events, startup registration, and testing. Each checklist is structured as a set of DO and DO NOT rules, followed by concrete verification steps, often grep commands that can be run against the codebase to detect violations.

For example, the Lambda Project Checklist includes:

DO NOT use generic method names like Create, List, Get. Use CreateRole, ListRoles, GetRole.

DO set AnnotationsHandler in CDK to match the exact C# method name.

Each rule exists because we found the mistake in real code. The checklists are living documents. When we discover a new pattern of drift, we add a rule.

The verification steps are designed to be runnable:

# Function name uniqueness check - should have no duplicates
# (-h suppresses filename prefixes on matched and context lines,
#  so the awk field positions stay stable)
grep -rh "\[LambdaFunction\]" --include="*.cs" -A 5 \
  | grep "public async Task" \
  | awk '{print $4}' | cut -d'(' -f1 \
  | sort | uniq -d

Empty output means no duplicates. A hit means something needs fixing.
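The generic-method-name rule can be checked the same way. Here's a self-contained sketch against scratch files; the file contents and the `/tmp/checklist-demo` path are invented for the demo, not taken from our codebase:

```shell
# Set up a scratch project with one compliant and one non-compliant handler
mkdir -p /tmp/checklist-demo && cd /tmp/checklist-demo
cat > Roles.cs <<'EOF'
[LambdaFunction]
public async Task<IHttpResult> CreateRole(CreateRoleRequest request) { }
EOF
cat > Users.cs <<'EOF'
[LambdaFunction]
public async Task<IHttpResult> Create(CreateUserRequest request) { }
EOF

# Flag handlers whose method name is a bare generic verb
grep -rhE "public async Task[^ ]* (Create|List|Get|Update|Delete)\(" \
  --include="*.cs" .
```

The grep prints the offending line from Users.cs and stays silent for CreateRole, following the same empty-output-means-clean convention.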

Automating Checklists with Hooks

Running checklists manually after every task is tedious. Kiro's agent hooks let you automate this.

I use a postTaskExecution hook that triggers after each spec task is marked complete. The hook sends a prompt back to Kiro asking it to run the relevant checklists against the code it just wrote:

{
  "name": "Post-Task Checklist Review",
  "version": "1.0.0",
  "when": {
    "type": "postTaskExecution"
  },
  "then": {
    "type": "askAgent",
    "prompt": "Run the relevant code review checklists against the changes you just made. Check for violations and fix any issues before proceeding."
  }
}

This creates a feedback loop: Kiro implements a task, then immediately reviews its own work against the standards. It catches the drift that happens during implementation, the shortcuts taken while fixing build errors, the conventions forgotten during a long chain of changes.

It's not perfect. The model sometimes marks its own work as passing when it shouldn't. But it catches enough issues to be worth the overhead, and the violations it misses tend to be the subtle ones that would require human review anyway.

Sharing Standards Across Repositories

The standards repository is a standalone Git repo that gets added as a secondary workspace folder in Kiro alongside whatever microservice you're working on. This means the same steering files, checklists, and behavioral rules are available in every repository without duplication.

The structure looks like this:

standards-microservices/
├── .kiro/
│   └── steering/
│       ├── index.md                    # Always loaded - the map
│       ├── lambda-annotations.md       # Manual - loaded when needed
│       ├── service-layer-architecture.md
│       ├── entity-conventions.md
│       ├── authorization-standards.md
│       └── ... (30+ more)
├── codereview/
│   ├── lambda-project-checklist.md
│   ├── service-layer-checklist.md
│   ├── cdk-checklist.md
│   ├── events-checklist.md
│   └── ...
├── planning/
│   └── ... (architecture decisions, design docs)
├── services/
│   └── ... (service catalog - what each service does)
└── infrastructure/
    └── ... (CDK stack documentation, SSM parameters)

When you open a microservice repo with this standards repo as a second workspace folder, Kiro automatically picks up the steering files. The index loads, the behavioral rules apply, and all the manual-inclusion standards are available on demand.
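Since Kiro follows the VS Code workspace model, one way to wire this up is a multi-root workspace file checked into each microservice repo. The relative path here is illustrative and assumes the standards repo is cloned as a sibling directory:

```json
{
  "folders": [
    { "path": "." },
    { "path": "../standards-microservices" }
  ]
}
```

You can also just add the folder to the workspace manually; the effect is the same.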

Updating a standard in one place updates it everywhere. No copy-paste across twenty repos. No drift between what one service thinks the pattern is and what another service does.

Beyond Steering: Planning and Service Documentation

The standards repo also holds planning documents and a service catalog. Planning documents capture architectural decisions: how the cellular architecture works, how the authorization cache is designed, how cross-service messaging is structured. These aren't steering files (they don't tell Kiro how to write code), but they provide context that helps the model make better decisions when implementing features that span services.

The service catalog is a distilled description of each microservice. What it does, what APIs it exposes, what events it publishes, what tables it owns. When Kiro is working on a service that needs to call another service, it can reference the catalog to understand the contract without reading the other service's entire codebase.
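A catalog entry might look like this. The service, routes, and event names are hypothetical, condensed for illustration:

```markdown
## roles-service

**Purpose:** Role and permission management for tenant accounts.

**APIs:** POST /roles, GET /roles, GET /roles/{id}
**Events published:** RoleCreated, RoleUpdated (on EventBridge)
**Tables owned:** RolesTable
```

A paragraph per service is enough for a consumer to write a correct cross-service call without loading the producer's codebase.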

What I've Learned

A few observations after months of building this way:

Steering files need to be specific, not aspirational. Vague guidelines like "follow clean code principles" don't change behavior. Specific rules like "function method names must include the entity name" do. Every steering rule should be testable. If you can't write a grep command or a checklist item to verify it, it's probably too abstract.

The index pattern is essential at scale. Without it, you're either loading too much context (diluting quality) or too little (missing standards). The index gives the model a map of what exists and lets it pull in what it needs.

Behavioral rules prevent the most expensive mistakes. The three rules at the top of the index (no obsolete marking, no version bumping, stop on library issues) have saved more time than any individual standard. They address the model's instincts rather than its knowledge.

Checklists catch drift that steering doesn't prevent. Steering tells the model what to do. Checklists verify it did it. The gap between those two things is where most bugs live.

Automation closes the loop. Post-task hooks that run checklists automatically turn a manual review process into a continuous one. The model reviews its own work before you have to.

None of this replaces human review. But it raises the floor. The conversations we have with Kiro now are about architecture decisions and tradeoffs, not about whether a file was configured correctly. That's a meaningful shift in how the time gets spent.

Dan Guisinger


AWS cloud architect and consultant specializing in system and security architecture. 20 years building enterprise applications in healthcare and finance.
