Ron and Ella Wiki Page

The Age of Slop Code – And How Senior Engineers Keep Systems Sane

Slop code is becoming a defining challenge of modern software engineering: code that looks clean, runs, and even passes tests, yet is shallow, fragile, and corrosive to long‑term quality.

From “AI Slop” to Slop Code

The term “AI slop” emerged to describe low‑quality AI‑generated content that appears competent but is actually superficial, cheap to produce, and easy to flood the world with. Researchers characterize this slop by three prototypical properties: superficial competence, asymmetric effort, and mass producibility. When this pattern moved into software, engineers started talking about “AI slop code” or simply “slop code” for similar low‑quality output in codebases.

At the same time, “vibe coding” entered the lexicon: relying on LLMs to generate entire chunks of functionality from natural‑language prompts, reviewing results only lightly and steering with follow‑up prompts rather than deep understanding. When this practice spills over into rushed shipping, missing refactors, and weak testing, you get “vibe slopping”: chaotic, unrefactored, AI‑heavy changes that harden into technical debt.

What Slop Code Looks Like in Practice

Slop code is not obviously broken. That is precisely why it is dangerous. It often has these traits:

  • Superficially correct behavior: it compiles, runs, and passes basic or happy‑path tests.
  • Overly complex implementations: verbose solutions, unnecessary abstractions, and duplicated logic rather than refactoring.
  • Architectural blindness: code that “solves” the prompt but ignores existing patterns, invariants, or system boundaries.
  • Weak error handling and edge‑case coverage: success paths are implemented, but failure modes are hand‑waved or inconsistent.
  • Inconsistent conventions: style, naming, and dependency usage drift across files or services.
  • Low comprehension: the submitting developer struggles to explain trade‑offs, invariants, or why this approach fits the system.

Reports from teams using AI‑assisted development describe AI slop as code that “looks decent at first glance” but hides overcomplication, neglected edge cases, and performance or integration issues that only surface later. Senior engineers increasingly describe their role as auditing AI‑generated code and guarding architecture and security rather than writing most of the initial implementation themselves.

A Simple Example Pattern

Consider an AI‑generated “quick” integration:

  • It introduces a new HTTP client wrapper instead of reusing the existing one.
  • It hard‑codes timeouts and retry logic instead of using shared configuration.
  • It parses responses with ad‑hoc JSON access rather than central DTOs and validation.

Everything appears to work in a demo and passes a couple of unit tests, but it quietly duplicates concerns, violates resilience patterns, and becomes a fragile outlier under load — classic slop behavior.

Why Slop Code Is Systemically Dangerous

The slop layer is insidious because it is made of code that “works” and “looks fine.” It doesn’t crash obviously; instead, it undermines systems over time.

Key risks include:

  • Accelerated technical debt: AI tools optimize for local code generation, not global architecture, so they create bloat, duplication, and shallow abstractions at scale.
  • False sense of velocity: teams see rapid feature delivery and green test suites while hidden complexity and fragility quietly accumulate.
  • Integration fragility: code that works in isolation clashes with production data shapes, error behaviors, and cross‑service contracts.
  • Erosion of engineering skill: juniors rely on AI for non‑trivial tasks, skipping the deep debugging and maintenance work that forms real expertise.

Some industry analyses describe this as an “AI slop layer”: code that compiles, passes tests, and looks clean, yet is “system‑blind” and architecturally shallow. The result is a sugar‑rush phase of AI‑driven development now, followed by a slowdown later as teams pay down accumulated slop.

How Slop Relates to Vibe Coding and Vibe Slopping

The modern ecosystem has started to differentiate related behaviors:

  • AI slop: low‑quality AI content that seems competent but is shallow. Typical failure mode: volume over rigor and hard‑to‑spot defects.
  • Vibe coding: using LLMs as the primary way to generate code from English. Typical failure mode: accepting working code without fully understanding it.
  • Vibe slopping: the chaotic aftermath of vibe coding under delivery pressure. Typical failure mode: bloated, duct‑taped, unrefactored code and technical debt.
  • Slop code: the resulting messy or shallow code in the repo. Typical failure mode: long‑term maintainability and reliability problems.

Crucially, using AI does not automatically produce slop. If an engineer reviews, tests, and truly understands AI‑written code, that is closer to using an LLM as a typing assistant than to vibe coding. Slop arises when teams accept AI output at face value, optimize for throughput, and skip the engineering disciplines that make software robust.

Guardrails: How Technical Leads Can Contain Slop

For someone in a technical‑lead role, the real question is: how do we get the productivity benefits of AI without drowning in slop?

Industry guidance and experience from teams operating heavily with AI suggest a few practical guardrails.

  • Raise the bar for acceptance, not generation
    Treat AI code as if it were written by a very fast junior: useful, but never trusted without review. Require that the author can explain key invariants, trade‑offs, and failure modes in their own words.
  • Design and architecture first
    Make system boundaries, contracts, and invariants explicit before generating code. The more precise the specification and context, the less room there is for the model to generate clever but misaligned solutions.
  • Enforce consistency with existing patterns
    Review code for alignment with established architecture, libraries, and conventions, not just for local correctness. Build simple checklists: shared clients, shared error envelopes, shared DTOs, and standard logging and metrics patterns.
  • Strengthen tests around behavior, not implementation
    Focus tests on business rules, edge cases, and contracts between modules and services. This constrains slop by making shallow or misaligned behavior visible quickly.
  • Be deliberate with AI usage
    Use AI where it shines: boilerplate, glue code, and refactors, rather than core domain logic or delicate concurrency and performance‑critical code. When applying AI to critical paths, budget time for deep human review and stress testing.
  • Train for slop recognition
    Teach your team to spot red flags: over‑verbose code, unnecessary abstractions, unexplained dependencies, and “magic” logic. Encourage code reviews that ask, “How does this fit the system?” as much as “Does this pass tests?”

A recurring theme in expert commentary is that future high‑value skills include auditing AI‑generated code, debugging AI‑assisted systems, and securing and scaling AI‑written software. In that world, leads act less as primary implementers and more as stewards of architecture, quality, and learning.

A Simple Example: Turning Slop into Solid Code (Conceptual)

To keep this language‑agnostic, imagine a service that needs to fetch user preferences from another microservice and fall back gracefully on failure.

A slop‑code version often looks like this conceptually:

  • Creates a new HTTP client with hard‑coded URL and timeouts.
  • Calls the remote service directly in multiple places.
  • Swallows or logs errors without clear fallback behavior.
  • Has only a basic success‑path test, no network‑failure tests.

A cleaned‑up version, written with architectural intent, would instead:

  • Reuse the shared HTTP client and central configuration for timeouts and retries.
  • Encapsulate the call behind a single interface, e.g., UserPreferencesProvider.
  • Define explicit behavior on failure (default preferences, cached values, or clear error propagation).
  • Add tests for timeouts, 4xx/5xx responses, and deserialization failures, plus contract tests for the external API.
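To make the contrast concrete, here is a minimal Java sketch of the cleaned‑up shape. The names (UserPreferencesProvider, PreferencesClient, Preferences) are illustrative, not from any particular codebase:

```java
import java.util.Optional;

// Illustrative value type for user preferences, with an explicit default.
record Preferences(String theme, boolean emailOptIn) {
    static Preferences defaults() {
        return new Preferences("light", false);
    }
}

// The shared, centrally configured client; timeouts and retries live behind it.
interface PreferencesClient {
    Optional<Preferences> fetch(String userId);
}

// Single seam for the remote call: callers never touch HTTP details.
interface UserPreferencesProvider {
    Preferences preferencesFor(String userId);
}

class RemoteUserPreferencesProvider implements UserPreferencesProvider {
    private final PreferencesClient client;

    RemoteUserPreferencesProvider(PreferencesClient client) {
        this.client = client;
    }

    @Override
    public Preferences preferencesFor(String userId) {
        try {
            // Explicit fallback policy: missing data degrades to defaults.
            return client.fetch(userId).orElse(Preferences.defaults());
        } catch (RuntimeException e) {
            // Failures degrade gracefully instead of leaking to callers.
            return Preferences.defaults();
        }
    }
}
```

Because the remote call sits behind one interface, tests can substitute PreferencesClient with a stub and exercise timeouts and failure paths directly.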

Slop is not about who typed the code; it is about whether the team did the engineering work around it.

WireMock Java Stubbing: From Configuration to StubMapping

In this article we will walk through the main Java concepts behind WireMock: how you configure the server, choose a port, describe requests and responses, and how everything ends up as a StubMapping. The goal is that you not only know how to use the API but also understand why it is structured this way, reasoning about test doubles the way an experienced engineer would.


Configuring WireMockServer with WireMockConfiguration

WireMockConfiguration is the object that describes how your WireMock HTTP server should run. You rarely construct it directly; instead you use a static factory called options(), which returns a configuration builder.

At a high level:

  • WireMockConfiguration controls ports, HTTPS, file locations, extensions, and more.
  • The fluent style (via options()) encourages explicit, readable configuration instead of magic defaults scattered through the codebase.
  • Because it is a separate object from WireMockServer, you can reuse or tweak configuration for different test scenarios.

Example shape (without imports for now):

WireMockServer wireMockServer = new WireMockServer(
    options()
        .dynamicPort()
        .dynamicHttpsPort()
);

You pass the built configuration into the WireMockServer constructor, which then uses it to bind sockets, set up handlers, and so on. Conceptually, think of WireMockConfiguration as the blueprint; WireMockServer is the running building.


Dynamic Port: Why and How

In test environments, hard‑coding ports (e.g. 8080, 9090) is a common source of flakiness. If two tests (or two services) try to use the same port, one will fail with “address already in use.”

WireMock addresses this with dynamicPort():

  • dynamicPort() tells WireMock to pick any free TCP port available on the machine.
  • After the server starts, you ask the server which port it actually bound to, via wireMockServer.port().
  • This pattern is ideal for parallel test runs and CI environments, where port availability is unpredictable.

Example pattern:

WireMockServer wireMockServer = new WireMockServer(
    options().dynamicPort()
);

wireMockServer.start();

int port = wireMockServer.port(); // the chosen port at runtime
String baseUrl = "http://localhost:" + port;

You then configure your HTTP client (or the service under test) to call baseUrl, not a hard‑coded port. The rationale is to shift from “global fixed port” to “locally discovered port,” which removes an entire class of brittle test failures.


Creating a Stub: The Big Picture

When we say “create a stub” in WireMock, we mean:

Define a mapping from a request description to a response description, and register it with the server so that runtime HTTP calls are intercepted according to that mapping.

This mapping is built in three conceptual layers:

  • A request pattern (what should be matched).
  • A response definition (what should be returned).
  • A stub mapping that joins these two together and gives it identity and lifecycle inside WireMock.

In Java, the fluent DSL exposes this as:

wireMockServer.stubFor(
    get(urlEqualTo("/api/message"))
        .willReturn(
            aResponse()
                .withStatus(200)
                .withHeader("Content-Type", "text/plain")
                .withBody("hello-wiremock")
        )
);

This single call hides several objects: a MappingBuilder, a RequestPattern, a ResponseDefinition, and eventually a StubMapping. The design encourages a declarative style: you describe what should happen, not how to dispatch it.


MappingBuilder: Fluent Construction of a Stub

MappingBuilder is the central builder used by the Java DSL. Calls like get(urlEqualTo("/foo")) or post(urlPathMatching("/orders/.*")) return a MappingBuilder instance.

It is responsible for:

  • Capturing the HTTP method (GET, POST, etc.).
  • Associating a URL matcher (exact equality, regex, path, etc.).
  • Enriching with conditions on headers, query parameters, cookies, and body content.
  • Attaching a response definition via willReturn.

You rarely instantiate MappingBuilder yourself. Instead you use static helpers from the DSL:

get(urlEqualTo("/api/message"))
post(urlPathEqualTo("/orders"))
put(urlMatching("/v1/users/[0-9]+"))

Each of these returns a MappingBuilder, and you chain further methods to refine the match. The rationale is to keep your test code highly readable while still configuring quite a lot of matching logic.


RequestPattern: Describing the Request Shape

Under the hood, MappingBuilder gradually accumulates a RequestPattern (or more precisely, builds a RequestPatternBuilder). A RequestPattern is an object representation of “what an incoming HTTP request must look like for this stub to apply.”

A RequestPattern may include:

  • HTTP method (e.g. GET).
  • URL matcher: urlEqualTo, urlPathEqualTo, regex matchers, etc.
  • Optional header conditions: withHeader("X-Env", equalTo("test")).
  • Optional query param or cookie matchers.
  • Optional body matchers: raw equality, regex, JSONPath, XPath, and so on.

Example via DSL:

post(urlPathEqualTo("/orders"))
    .withHeader("X-Tenant", equalTo("test"))
    .withQueryParam("source", equalTo("mobile"))
    .withRequestBody(matchingJsonPath("$.items[0].id"));

Each of these DSL calls contributes to the underlying RequestPattern. The motivation for this design is to let you express complex request matching without writing imperative “if header equals X and URL contains Y” code; WireMock handles that logic internally.


ResponseDefinition and aResponse: Describing the Response

If RequestPattern says “what we expect to receive,” then ResponseDefinition says “what we will send back.” It captures all aspects of the stubbed response:

  • Status code and optional status message.
  • Headers (e.g., content type, custom headers).
  • Body content (string, JSON, binary, templated content).
  • Optional behaviour like artificial delays or faults.

The idiomatic way to construct a ResponseDefinition in Java is via the aResponse() factory, which returns a ResponseDefinitionBuilder:

aResponse()
    .withStatus(201)
    .withHeader("Content-Type", "application/json")
    .withBody("{\"id\":123}");

Using a builder for responses has several benefits:

  • It separates pure data (status, headers, body) from the network I/O, so you can reason about responses as values.
  • It encourages small, focused stubs rather than ad‑hoc code that manipulates sockets or streams.
  • It allows extensions and transformers to hook into a well‑defined structure.

Once built, this response definition is attached to a mapping via willReturn.


willReturn: Connecting Request and Response

The willReturn method lives on MappingBuilder and takes a ResponseDefinitionBuilder (typically produced by aResponse()).

Conceptually:

  • Before willReturn, you are only describing the request side.
  • After willReturn, you have a complete “if request matches X, then respond with Y” mapping.
  • The resulting MappingBuilder can be passed to stubFor, which finally registers it with the server.

Example:

get(urlEqualTo("/api/message"))
    .willReturn(
        aResponse()
            .withStatus(200)
            .withBody("hello-wiremock")
    );

The wording is deliberate. The DSL reads like: “GET /api/message willReturn this response.” This is a very intentional choice to make tests self‑documenting and easy to skim.


StubMapping: The Persisted Stub Definition

Once you call stubFor(mappingBuilder), WireMock converts the builder into a concrete StubMapping instance. This is the in‑memory (and optionally JSON‑on‑disk) representation of your stub.

A StubMapping includes:

  • The RequestPattern (what to match).
  • The ResponseDefinition (what to send).
  • Metadata: UUID, name, priority, scenario state, and other advanced properties.

StubMapping is what WireMock uses at runtime to:

  • Evaluate incoming requests against all known stubs.
  • Decide which stub wins (based on priority rules).
  • Produce the actual HTTP response that the client receives.

From an architectural perspective, StubMapping lets WireMock treat stubs as data. That is why you can:

  • Export stubs as JSON.
  • Import them via admin endpoints.
  • Manipulate them dynamically without recompiling or restarting your tests.
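As an illustration of stubs as data, the stub from earlier sections would serialize to JSON roughly like this (the id value is invented here, and the exact field set varies slightly across WireMock versions):

```json
{
  "id": "3f1a7b2c-0000-4000-8000-000000000001",
  "request": {
    "method": "GET",
    "url": "/api/message"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "text/plain"
    },
    "body": "hello-wiremock"
  }
}
```

This is the same document shape you see when listing mappings via the admin API or when WireMock loads stubs from JSON files on disk.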

WireMock Class: The Fluent DSL Entry Point

The WireMock class is the static gateway to the Java DSL. It provides methods used throughout examples:

  • Request builders: get(), post(), put(), delete(), any().
  • URL matchers: urlEqualTo(), urlPathEqualTo(), regex variants.
  • Response builders: aResponse(), plus convenience methods like ok(), badRequest(), etc.
  • Utility methods to bind the static DSL to a specific server (configureFor(host, port)).

In tests you typically import its static methods:

import static com.github.tomakehurst.wiremock.client.WireMock.*;

This is what enables code such as:

get(urlEqualTo("/api/message"))
    .willReturn(aResponse().withStatus(200));

instead of more verbose, object‑oriented calls. The goal is to minimize ceremony and make test intent immediately obvious.


A Simple Example

Let’s now put all these pieces together in a small JUnit 5 test using:

  • Java 11+ HttpClient.
  • WireMockServer with dynamicPort().
  • A single stub built with the core DSL concepts we have discussed.

This example intentionally avoids any build or dependency configuration, focusing only on the Java code.

import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;
import static org.junit.jupiter.api.Assertions.assertEquals;

class WireMockExampleTest {

    private WireMockServer wireMockServer;

    @BeforeEach
    void startServer() {
        // Configure WireMock with a dynamic port to avoid clashes.
        wireMockServer = new WireMockServer(
                options().dynamicPort()
        );
        wireMockServer.start();

        // Bind the static DSL to this server instance.
        configureFor("localhost", wireMockServer.port());
    }

    @AfterEach
    void stopServer() {
        wireMockServer.stop();
    }

    @Test
    void shouldReturnStubbedMessage() throws Exception {
        // Create a stub (MappingBuilder -> RequestPattern + ResponseDefinition)
        wireMockServer.stubFor(
                get(urlEqualTo("/api/message"))
                        .willReturn(
                                aResponse()
                                        .withStatus(200)
                                        .withHeader("Content-Type", "text/plain")
                                        .withBody("hello-wiremock")
                        )
        );

        // Build an HTTP client and request using the dynamic port.
        // HttpClient is only AutoCloseable from Java 21 onward, so create it
        // directly to stay compatible with Java 11+.
        HttpClient client = HttpClient.newHttpClient();
        String baseUrl = "http://localhost:" + wireMockServer.port();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/message"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Validate that the stub mapping was applied correctly.
        assertEquals(200, response.statusCode());
        assertEquals("hello-wiremock", response.body());
    }
}

How to validate this example

To validate the example:

  • Ensure you have WireMock and JUnit 5 in your project dependencies (via Maven, Gradle, or your build tool of choice).
  • Run the test class.
  • The test passes if:
    • The WireMockServer starts on a dynamic port without conflicts.
    • The request to /api/message is matched by the RequestPattern defined in the MappingBuilder.
    • The ResponseDefinition created with aResponse() and attached via willReturn produces the expected status and body.

Why Do We Need Modern Software and Tools?

Modern software and tools are no longer “nice to have”; they are the infrastructure that lets individuals and organizations work faster, more accurately, and more securely in a digital economy.

The role of modern tools in today’s world

We now build, run, and maintain most services through software, from banking and healthcare to logistics and entertainment. Modern tools encapsulate current best practices, regulations, and technologies, allowing us to keep up with rapidly changing requirements and expectations.

Efficiency and productivity at scale

Modern tools automate repetitive work such as deployments, testing, reporting, and coordination, which dramatically reduces manual effort and context switching. This automation scales: one team can now manage systems that would previously have required many more people, simply because the tools handle orchestration and routine checks.

Accuracy, reliability, and reduced risk

Contemporary platforms embed validation, type checking, automated tests, and monitoring capabilities that reduce the likelihood of human error. As a result, systems become more reliable, analytics more trustworthy, and business decisions less exposed to mistakes arising from inconsistent or incorrect data.

Collaboration in a distributed world

Work has become inherently distributed across locations and time zones, and modern software is designed to support this reality. Shared repositories, real‑time document and code collaboration, integrated chat, and task tracking make it feasible for cross‑functional teams to coordinate effectively without being physically co‑located.

Security, compliance, and maintainability

Security threats evolve constantly, and older tools tend not to receive timely patches or support for new standards. Modern platforms incorporate stronger authentication, encryption, audit trails, and compliance features, helping organizations protect data and meet regulatory obligations while keeping maintenance overhead manageable.

Innovation and competitive advantage

New capabilities—AI-assisted development, advanced analytics, low‑code platforms, cloud‑native services—are exposed primarily through modern tools and ecosystems. Organizations that adopt them can experiment faster, ship features more quickly, and create better user experiences, while those tied to outdated tooling tend to move slowly and lose competitive ground.

In short, we use modern software and tools because they are the practical way to achieve speed, quality, security, and innovation in a world where all of these are moving targets.

Cloud Native Applications and the Twelve‑Factor Methodology

Cloud native and the twelve‑factor methodology describe two tightly related but distinct layers of modern software: cloud native is primarily about the environment and platform you deploy to, while twelve‑factor is about how you design and implement the application so it thrives in that environment.

What “cloud native” actually means

Cloud‑native applications are designed to run on dynamic, elastic infrastructure such as public clouds, private clouds, or hybrid environments. They assume that:

  • Infrastructure is ephemeral: instances can disappear and be recreated at any time.
  • Scale is horizontal: you handle more load by adding instances, not vertically scaling a single machine.
  • Configuration, networking, and persistence are provided by the platform and external services, not by local machine setup.

Typically, cloud‑native systems use:

  • Containers (OCI images) as the primary packaging and deployment unit.
  • Orchestration (e.g., Kubernetes) to schedule, scale, heal, and roll out workloads.
  • Declarative configuration and infrastructure‑as‑code to describe desired state.
  • Observability (logs, metrics, traces) and automation (CI/CD, auto‑scaling, auto‑healing) as first‑class concerns.
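To make "declarative configuration" concrete, here is a hedged sketch of a Kubernetes Deployment; the service name, image, and registry are illustrative. You declare the desired state, and the orchestrator continuously converges toward it:

```yaml
# Illustrative Deployment: desired state, not imperative steps.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-preferences
spec:
  replicas: 3                      # horizontal scale, declared
  selector:
    matchLabels:
      app: user-preferences
  template:
    metadata:
      labels:
        app: user-preferences
    spec:
      containers:
        - name: app
          image: registry.example.com/user-preferences:1.4.2
          ports:
            - containerPort: 8080
```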

From an architect’s perspective, “cloud native” is the combination of these platform capabilities with an application design that can exploit them. Twelve‑factor is one of the earliest and still influential descriptions of that design.

The twelve‑factor app in a nutshell

The twelve‑factor methodology was introduced to codify best practices for building Software‑as‑a‑Service applications that are:

  • Portable across environments.
  • Easy to scale horizontally.
  • Amenable to continuous deployment.
  • Robust under frequent change.

The original factors (Codebase, Dependencies, Config, Backing services, Build/Release/Run, Processes, Port binding, Concurrency, Disposability, Dev/prod parity, Logs, Admin processes) constrain how you structure and operate the app. The key idea is that by following these constraints, you produce an application that is:

  • Stateless in its compute tier.
  • Strict about configuration boundaries.
  • Explicit about dependencies.
  • Friendly to automation and orchestration.

Notice how those properties line up almost one‑for‑one with cloud‑native expectations.

How twelve‑factor underpins cloud‑native properties

Let’s connect specific twelve‑factor principles to core cloud‑native characteristics.

Portability and containerization

Several factors directly support packaging and running your app in containers:

  • Dependencies: All dependencies are declared explicitly and isolated from the base system. This maps naturally to container images, where your application and its runtime are packaged together.
  • Config: Configuration is stored in the environment, not baked into the image. That means the same image can be promoted across environments (dev → test → prod) simply by changing environment variables, ConfigMaps, or Secrets.
  • Backing services: Backing services (databases, queues, caches, etc.) are treated as attached resources, accessed via configuration. This decouples code from specific infrastructure instances, making it easy to bind to managed cloud services.

Result: your artifact (image) becomes environment‑agnostic, which is a prerequisite for true cloud‑native deployments across multiple clusters, regions, or even cloud providers.
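As a small illustration of the Config factor, here is a hedged Java sketch; the variable names DATABASE_URL and REQUEST_TIMEOUT_MS are examples, not a standard:

```java
import java.time.Duration;
import java.util.Map;

// Twelve-factor "Config": the same build reads its settings from the
// environment, so promoting it across environments changes only
// variables, never code or the packaged artifact.
class AppConfig {
    final String databaseUrl;
    final Duration requestTimeout;

    AppConfig(String databaseUrl, Duration requestTimeout) {
        this.databaseUrl = databaseUrl;
        this.requestTimeout = requestTimeout;
    }

    static AppConfig fromEnvironment() {
        return from(System.getenv());
    }

    // Separated from System.getenv() so the parsing logic is trivially testable.
    static AppConfig from(Map<String, String> env) {
        String dbUrl = env.get("DATABASE_URL");
        if (dbUrl == null || dbUrl.isBlank()) {
            // Fail fast: missing mandatory configuration should stop startup.
            throw new IllegalStateException("Missing required env var: DATABASE_URL");
        }
        long timeoutMs = Long.parseLong(env.getOrDefault("REQUEST_TIMEOUT_MS", "2000"));
        return new AppConfig(dbUrl, Duration.ofMillis(timeoutMs));
    }
}
```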

Statelessness and horizontal scalability

Cloud‑native platforms shine when workloads are stateless and scale horizontally. Several factors enforce that:

  • Processes: The app executes as one or more stateless processes; any persistent state is stored in external services.
  • Concurrency: Scaling is achieved by running multiple instances of the process rather than threading tricks inside a single instance.
  • Disposability: Processes are fast to start and stop, enabling rapid scaling, rolling updates, and failure recovery.

On an orchestrator like Kubernetes, these characteristics translate directly into:

  • Replica counts controlling concurrency.
  • Pod restarts and rescheduling being safe and routine.
  • Auto‑scaling policies that can add or remove instances in response to load.

If your app violates these factors (e.g., uses local disk for state, maintains sticky in‑memory sessions, or takes minutes to start), it fights the cloud‑native platform rather than benefiting from it.
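The Disposability factor, in particular, can be sketched in a few lines of Java: react to the platform's stop signal (SIGTERM) so instances can be killed and rescheduled safely. The drain logic here is a placeholder for real cleanup such as closing connections or deregistering from service discovery:

```java
// Minimal sketch of graceful shutdown for disposability.
class GracefulShutdown {
    // Registers a JVM shutdown hook that runs the drain logic on SIGTERM.
    static Thread installShutdownHook(Runnable drain) {
        Thread hook = new Thread(drain, "graceful-shutdown");
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```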

Reliability, operability, and automation

Cloud‑native systems rely heavily on automation and observability. Twelve‑factor anticipates this:

  • Dev/prod parity: Minimizing the gap between development, staging, and production environments reduces surprises and supports continuous delivery.
  • Logs: Treating logs as an event stream, written to stdout/stderr, fits perfectly with container logging and centralized log aggregation. The platform can capture, ship, and index logs without the application managing log files.
  • Admin processes: One‑off tasks (migrations, batch jobs) run as separate processes (or jobs), using the same codebase and configuration as long‑running services. This aligns with Kubernetes Jobs/CronJobs or serverless functions.

Together, these make it far easier to build reliable CI/CD pipelines, perform safe rollouts/rollbacks, and operate the system with minimal manual intervention—hallmarks of cloud‑native operations.

How to use twelve‑factor as a cloud‑native checklist

You can treat the twelve‑factor methodology as a practical framework for assessing the cloud‑readiness of an application, regardless of language or stack.

For each factor, ask: “If I deployed this on a modern orchestrator, would this factor hold, or would it cause friction?” For example:

  • Config: Can I deploy the same container image to dev, QA, and prod, changing only environment settings? If not, there is a cloud‑native anti‑pattern.
  • Processes & Disposability: Can I safely kill any instance at any time without data loss and with quick recovery? If not, the app is not truly cloud‑native‑friendly.
  • Logs: If I run multiple instances, can I still understand system behavior from aggregated logs, or is there stateful, instance‑local logging?

You will usually discover that bringing a legacy application “into Kubernetes” without addressing these factors leads to brittle deployments: liveness probes fail under load, rollouts are risky, and scaling is unpredictable.

Conversely, if an app cleanly passes a twelve‑factor review, it tends to behave very well in a cloud‑native environment with minimal additional work.

How to position twelve‑factor today

Twelve‑factor is not the whole story in 2026, but it remains an excellent baseline:

  • It does not cover all modern concerns (e.g., multi‑tenant isolation, advanced security, service mesh, zero‑trust networking, event‑driven patterns).
  • It is, however, an excellent “minimum bar” for application behavior in a cloud‑native context.

I recommend treating it as:

  • A design standard for service teams: code reviews and design docs should reference the factors explicitly where relevant.
  • A readiness checklist before migrating a service to a Kubernetes cluster or similar platform.
  • A teaching tool for new engineers to understand why “just dockerizing the app” is not enough.

Scaffolding a Modern VS Code Extension with Yeoman

In this article we focus purely on scaffolding: generating the initial VS Code extension project using the Yeoman generator, with TypeScript and esbuild, ready for you to start coding.


Prerequisites

Before you scaffold the project, ensure you have:

  • Node.js 18+ installed (check with node -v).
  • Git installed (check with git --version).

These are required because the generator uses Node, and the template can optionally initialise a Git repository for you.


Generating the extension with Yeoman

VS Code’s official generator is distributed as a Yeoman generator. You don’t need to install anything globally; you can invoke it directly via npx:

# One-time scaffold (no global install needed)
npx --package yo --package generator-code -- yo code

This command:

  • Downloads yo (Yeoman) and generator-code on demand.
  • Runs the VS Code extension generator.
  • Prompts you with a series of questions about the extension you want to create.

Recommended answers to the generator prompts

When the interactive prompts appear, choose:

? What type of extension do you want to create? → New Extension (TypeScript)
? What's the name of your extension?            → my-ai-extension
? What's the identifier?                        → my-ai-extension
? Initialize a git repository?                  → Yes
? Which bundler to use?                         → esbuild
? Which package manager?                        → npm

Why these choices matter:

  • New Extension (TypeScript) – gives you a typed development experience and a standard project layout.
  • Name / Identifier – the identifier becomes the technical ID used in the marketplace and in settings; pick something stable and lowercase.
  • Initialize a git repository – sets up Git so you can immediately start version-controlling your work.
  • esbuild – a modern, fast bundler that creates a single bundled extension.js for VS Code.
  • npm – a widely used default package manager; you can adapt to pnpm/yarn later if needed.

After you answer the prompts, Yeoman will generate the project in a new folder named after your extension (e.g. my-ai-extension).


Understanding the generated structure

Open the new folder in VS Code. The generator gives you a standard layout, including:

  • src/extension.ts
    This is the entry point of your extension. It exports activate and (optionally) deactivate. All your activation logic, command registration, and other behaviour start here.
  • package.json
    This acts as the extension manifest. It contains:

    • Metadata (name, version, publisher).
    • "main" field pointing to the compiled bundle (e.g. ./dist/extension.js).
    • "activationEvents" describing when your extension loads.
    • "contributes" describing commands, configuration, views, etc., that your extension adds to VS Code.

From an architectural perspective, package.json is the single most important file: it tells VS Code what your extension is and how and when it integrates into the editor.

You’ll also see other generated files such as:

  • tsconfig.json – TypeScript compiler configuration.
  • Build scripts in package.json – used to compile and bundle the extension with esbuild.
  • .vscode/launch.json – debug configuration for running the extension in a development host.

At this stage, you don’t need to modify any of these to get a working scaffold.
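
For orientation, here is a trimmed view of the kind of layout you can expect (exact file names, such as the esbuild script, may vary by generator version):

```
my-ai-extension/
├── .vscode/
│   └── launch.json      # debug configuration for the Extension Development Host
├── src/
│   └── extension.ts     # entry point: exports activate (and optionally deactivate)
├── package.json         # the extension manifest
├── tsconfig.json        # TypeScript compiler configuration
└── esbuild.js           # build/bundle script
```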


Running the scaffolded extension

Once the generator finishes:

  1. Install dependencies:

    cd my-ai-extension
    npm install
  2. Open the folder in VS Code (if you aren’t already).

  3. Press F5.

    VS Code will:

    • Run the build task defined by the generator.
    • Launch a new Extension Development Host window.
    • Load your extension into that window.

In the Extension Development Host:

  • Open the Command Palette.
  • Run the sample command that the generator added (typically named something like “Hello World”).

If the command runs and shows the sample notification, you have a fully working scaffolded extension. From here, you can start replacing the generated sample logic in src/extension.ts and adjusting package.json to declare your own contributions.

Building With Terraform Modules

Terraform modules are how you turn raw Terraform into a reusable, versioned “library” of infrastructure components. In this article we’ll go through what modules are, the types you’ll see in practice, how to create them, when to factor code into a module, how to update them safely, how to publish them, and finally how to consume them from your stacks.


What is a Terraform module?

At its core, a module is just a directory containing Terraform configuration that can be called from other Terraform code.

  • Any directory with .tf files is a module.
  • The directory where you run terraform init/plan/apply is your root module.
  • A root module can call child modules via module blocks, which is how you achieve reuse and composition.

Conceptually, a module is like a function in code:

  • Inputs → variables
  • Logic → resources, locals, data sources
  • Outputs → values other code can depend on

Good modules hide internal complexity behind a clear, minimal interface, exactly as you’d expect from a well‑designed API.
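
The function analogy maps directly onto HCL. A minimal, hypothetical sketch:

```hcl
# Inputs → variables
variable "name" {
  type = string
}

# Logic → locals, resources, data sources
locals {
  full_name = "app-${var.name}"
}

# Outputs → values other code can depend on
output "full_name" {
  value = local.full_name
}
```

Callers see only the variable and the output; the locals (and any resources) stay an internal implementation detail.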


Types of modules you’ll deal with

In practice you’ll encounter several “types” or roles of modules:

  1. Root module
    • The entrypoint of a stack (e.g. envs/prod), where you configure providers, backends, and call other modules.
    • Represents one deployable unit: a whole environment, a service, or a single app stack.
  2. Child / reusable modules
    • Reusable building blocks: VPCs, EKS clusters, RDS databases, S3 buckets, etc.
    • Usually live under modules/ in a repo, or in a separate repo entirely.
    • Called from root or other modules with module "name" { ... }.
  3. Public registry modules
    • Published to the public Terraform Registry, versioned and documented.
    • Example: terraform-aws-modules/vpc/aws
    • Great for standard primitives (VPCs, security groups, S3, etc.), less so for business‑specific patterns.
  4. Private/organizational modules
    • Hosted in private registries or Git repos.
    • Usually represent your organization’s conventions and guardrails (“a compliant VPC”, “a hardened EKS cluster”).

Architecturally, many teams settle on layers:

  • Layer 0: cloud and providers (root module).
  • Layer 1: platform modules (VPC, KMS, logging, IAM baselines).
  • Layer 2: product/service modules (service X, API Y) that compose platform modules.

Creating a Terraform module

Standard structure

A well‑structured module typically has:

  • main.tf – core resources and module logic
  • variables.tf – input interface
  • outputs.tf – exported values
  • versions.tf (optional but recommended) – provider and Terraform version constraints
  • README.md – usage, inputs, outputs, examples

This structure is not required by Terraform but is widely used because it keeps interfaces clear and tooling friendly.

Simple working example

Let’s build a small AWS S3 bucket module and then consume it from a root module.

Module: modules/aws_s3_bucket

modules/aws_s3_bucket/versions.tf:

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

modules/aws_s3_bucket/variables.tf:

variable "bucket_name" {
  type        = string
  description = "Name of the S3 bucket."
}

variable "environment" {
  type        = string
  description = "Environment name (e.g., dev, prod)."
  default     = "dev"
}

variable "extra_tags" {
  type        = map(string)
  description = "Additional tags to apply to the bucket."
  default     = {}
}

modules/aws_s3_bucket/main.tf:

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name

  tags = merge(
    {
      Name        = var.bucket_name
      Environment = var.environment
    },
    var.extra_tags
  )
}

modules/aws_s3_bucket/outputs.tf:

output "bucket_id" {
  description = "The ID (name) of the bucket."
  value       = aws_s3_bucket.this.id
}

output "bucket_arn" {
  description = "The ARN of the bucket."
  value       = aws_s3_bucket.this.arn
}

Rationale:

  • variables.tf defines the module’s public input contract.
  • outputs.tf defines the public output contract.
  • versions.tf protects you from incompatible provider/Terraform versions.
  • main.tf stays focused on resources and any derived locals.

Root module consuming it

In your root directory (e.g. project root):

versions.tf:

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

providers.tf:

provider "aws" {
  region                      = var.aws_region
  region                      = var.aws_region

  # Fake credentials for LocalStack
  access_key                  = "test"
  secret_key                  = "test"

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_use_path_style           = true

  # Point AWS services at LocalStack
  endpoints {
    s3 = "http://localhost:4566"
    # add more if needed, e.g. dynamodb = "http://localhost:4566"
  }
}

variables.tf:

variable "aws_region" {
  type        = string
  description = "AWS region to deploy into."
  default     = "us-east-1"
}

variable "environment" {
  type        = string
  description = "Environment name."
  default     = "dev"
}

main.tf:

module "logs_bucket" {
  source      = "./modules/aws_s3_bucket"
  bucket_name = "my-org-logs-${var.environment}"
  environment = var.environment
  extra_tags = {
    owner = "platform-team"
  }
}

output "logs_bucket_arn" {
  value       = module.logs_bucket.bucket_arn
  description = "Logs bucket ARN."
}

How to validate this example

From the root directory:

  1. Start LocalStack (for example, via Docker):

    docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack

    This exposes the LocalStack APIs on http://localhost:4566 as expected by the provider config.

  2. terraform init

  • Ensures Terraform and the AWS provider are set up; discovers the local module.

  3. terraform validate

  • Confirms syntax, types, and that required variables are satisfied.

  4. terraform plan

  • You should see one S3 bucket to be created, named my-org-logs-dev by default.
  • Confirm that the tags include Environment = dev and owner = platform-team.

  5. terraform apply

  • After apply, run terraform output logs_bucket_arn and check that:
    • The ARN looks like a valid S3 ARN for your bucket name.
    • The bucket exists in LocalStack with the expected tags.

If these checks pass, your module and consumption pattern are wired correctly.


When to create a module

You should not modularise everything; the trick is to modularise at the right abstraction boundaries.

Good reasons to create a module

  • You’re copy‑pasting the same pattern across stacks or repos
    • Example: the same cluster pattern for dev, stage, prod.
    • A module eliminates duplication and concentrates fixes in one place.
  • You have a logical component with a clear responsibility
    • Examples: “networking”, “observability stack”, “generic service with ALB + ECS + RDS”.
    • Each becomes a module with focused inputs and outputs.
  • You want to hide complexity and provide sane defaults
    • Consumers shouldn’t need to know every IAM policy detail.
    • Provide a small set of inputs; encode your standards inside the module.
  • You want a contract between teams
    • Platform team maintains modules; product teams just configure inputs.
    • This aligns nicely with how you manage APIs or libraries internally.

When not to create a module (yet)

  • One‑off experiments or throwaway code.
  • A single, simple resource that is unlikely to be reused.
  • When you don’t yet understand the pattern — premature modularisation leads to awkward, unstable interfaces.

A good heuristic: if you’d be comfortable writing a README with “what this does, inputs, outputs” and you expect re‑use, it’s a good module candidate.


Updating a module safely

Updating modules has two dimensions: changing the module itself, and rolling out the updated version to consumers.

Evolving the module interface

Prefer backwards‑compatible changes when possible:

  • Add new variables with sensible defaults instead of changing existing ones.
  • Add new outputs without altering the meaning of existing outputs.
  • If you must break behaviour, bump a major version and document the migration path.

Internally you might refactor resources, adopt new provider versions, or change naming conventions, but keep the external contract as stable as you can.
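
As a concrete illustration, a backwards-compatible change to the S3 bucket module above could add an optional feature behind a defaulted variable (a hypothetical force_destroy flag):

```hcl
# New optional input: existing callers are unaffected
# because the default preserves the previous behaviour.
variable "force_destroy" {
  type        = bool
  description = "Allow the bucket to be destroyed even if it still contains objects."
  default     = false
}
```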

Versioning strategy

For modules in a separate repo or registry:

  • Use semantic versioning: MAJOR.MINOR.PATCH.
    • PATCH: bugfixes, no breaking changes.
    • MINOR: new optional features, backwards compatible.
    • MAJOR: breaking changes.

Tag releases (v1.2.3) and use those tags in consumers (Git or registry).

Rolling out updates to consumers

For a Git‑sourced module:

module "logs_bucket" {
  source  = "git::https://github.com/my-org/terraform-aws-s3-bucket.git?ref=v1.3.0"
  # ...
}

To upgrade:

  1. Change ref from v1.2.0 to v1.3.0.
  2. Run terraform init -upgrade.
  3. Run terraform plan and review changes carefully.
  4. Apply in lower environments first, then promote the same version to higher environments (via branch promotion, pipelines, or workspace variables).

For a registry module, the pattern is the same but with a version argument:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.3.0"
}

Pinning versions gives you reproducibility and avoids surprise changes across environments.


Publishing a module

Publishing is about making your module discoverable and consumable by others, with strong versioning and documentation.

Public registry (high‑level)

To publish a module publicly (e.g. to the Terraform Registry):

  • Place the module in a public VCS repo (commonly GitHub).
  • Name the repo using the convention: terraform-<PROVIDER>-<NAME>
    • Example: terraform-aws-s3-bucket.
  • Ensure the repo root contains your module (main.tf, variables.tf, outputs.tf, etc.).
  • Tag a version (e.g. v1.0.0).
  • Register the module on the registry UI (linking your VCS account).

Once indexed, users can consume it as:

module "logs_bucket" {
  source  = "my-org/s3-bucket/aws"
  version = "1.0.0"

  bucket_name = "my-org-logs-prod"
  environment = "prod"
}

Private registries and Git

For internal usage, many organizations prefer:

  • Private registry (Terraform Cloud/Enterprise, vendor platform, or self‑hosted).
    • Similar flow to the public registry, but scoped to your org.
  • Direct Git usage
    • Modules are consumed from Git with ?ref= pointing to tags or commits.
    • Simpler setup, but you lose some of the browsing and discoverability that registries provide.

The key idea is the same: modules are versioned artefacts, and consumers should pin versions and upgrade intentionally.


Consuming modules (putting it all together)

To consume any module, you:

  1. Add a module block.
  2. Set source to a local path, Git URL, or registry identifier.
  3. Pass the required inputs as arguments.
  4. Use the module’s outputs via module.<name>.<output_name>.

Example: consuming a local network module and a registry VPC module side by side.

# Local module (your own)
module "network" {
  source = "./modules/network"

  vpc_cidr        = "10.0.0.0/16"
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.11.0/24", "10.0.12.0/24"]
}

# Registry module (third-party)
module "logs_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 4.0"

  bucket = "my-org-logs-prod"

  tags = {
    Environment = "prod"
  }
}

output "network_vpc_id" {
  value = module.network.vpc_id
}

output "logs_bucket_arn" {
  value = module.logs_bucket.s3_bucket_arn
}

The root module becomes a composition layer, wiring together multiple modules rather than directly declaring many low‑level resources.


Summary of key practices

  • Treat modules as APIs: clear inputs, clear outputs, stable contracts.
  • Use a predictable structure: main.tf, variables.tf, outputs.tf, versions.tf, README.md.
  • Only create modules where there is clear reuse or a meaningful abstraction.
  • Version modules and pin those versions when consuming them.
  • Use lower environments and terraform plan to validate updates before promoting.

The Core GitHub Copilot AI Primitives in VS Code

The primitives in this ecosystem are the building blocks you compose to turn a generic model into a team‑specific coding assistant: instruction files, skills, prompts, custom agents (and sub‑agents), and hooks. Think of them as layers: always‑on rules at the bottom, on‑demand capabilities on top, and automation wrapped around the lifecycle.


1. Instruction files: Persistent rules and context

Instruction files are Markdown configurations that Copilot always includes in the context when working in your repo or specific files.

  • They live alongside your code (for example .instructions.md or repo‑level instruction files) and often use glob patterns to target languages or folders.
  • You capture architecture decisions, coding standards, naming conventions, security constraints, and “how this codebase works” so the agent doesn’t guess.
  • File‑ or pattern‑scoped instructions let you tune behavior per domain (e.g., frontend vs. backend vs. infra scripts).

Rationale: This is your “always‑on brain” for the codebase; you remove prompt repetition and make the agent opinionated in the same way your senior engineers are.


2. Skills: On‑demand specialized capabilities

Skills are folders (with SKILL.md) that define how to perform a specialized task, plus any helper scripts or examples.

  • SKILL.md contains YAML frontmatter (metadata) and instructions describing when and how to use the skill.
  • Copilot decides when to inject a skill into context based on the user’s request and the skill description—for example “debug input handling for this game” or “migrate legacy API calls.”
  • Skills are ideal for repeatable domain tasks: debugging patterns, migration playbooks, data‑access rules, or company‑specific frameworks.

Rationale: Instructions describe global rules, while skills encode detailed procedures that are only loaded when relevant, keeping the context window efficient.


3. Prompts: Reusable slash‑command workflows

Prompt files define named prompts that appear as slash commands (e.g., /test, /document, /refactor) inside Copilot chat.

  • They bundle a task pattern, guidance, and sometimes specific tools into a reusable command your team can trigger instantly.
  • Typical uses: generate tests for the current file, summarize a diff, propose a refactor plan, or scaffold a feature implementation outline.
  • Prompts can be tailored per repo so their behavior reflects local conventions and dependencies.

Rationale: Prompts are UX primitives for humans: they standardize how people ask for common operations, reducing prompt variability and making outcomes more predictable.


4. Custom agents and sub‑agents: Role‑based specialization

Custom agents are defined via agent config files (for example .agent.md under .github/agents) that describe a persona, its tools, and its behavior.

  • The frontmatter configures name, description, tools (built‑in tools and MCP servers), model, and where the agent is available.
  • The Markdown body defines its role, expertise, boundaries, and how it should respond—for example “Solution Architect,” “Security Reviewer,” or “Test‑first Implementer.”
  • These agents appear in the chat agent dropdown and can be invoked directly for tasks that match their specialization.

Sub‑agents are agents that run under an orchestrator agent to handle subtasks in parallel.

  • The orchestrator can delegate subtasks like planning, implementation, accessibility review, and cleanup to different agents, each working in its own context.
  • Only distilled results return to the orchestrator, preventing its context from being flooded with every intermediate step.

Rationale: This mirrors a real engineering team: you encode roles and responsibilities into agents, then let them collaborate while preserving clear separation of concerns and cleaner context windows.


5. Hooks: Lifecycle automation and policy enforcement

Hooks are shell commands that run at key lifecycle points of an agent session, configured via hook files described in the docs.

  • They can trigger on events like session start/stop, agent or sub‑agent start/stop, before or after a tool call, or before/after edits are applied.
  • Hooks receive JSON input describing what the agent is doing, and can decide to log, transform, veto, or augment actions (for example enforce formatting, run linters, or perform security checks before committing changes).
  • Output from hooks can influence whether the agent continues, rolls back, or adjusts its plan.

Rationale: Hooks move important practices (lint, tests, security, approvals) from “please remember” into enforced automation, embedding your governance into the agent runtime itself.


6. How the primitives fit together

Taken together, these primitives give you a layered design:

  • Instruction files: stable background knowledge and guardrails.
  • Skills: contextual, task‑specific playbooks the agent loads when needed.
  • Prompts: ergonomic entry points for common user workflows.
  • Custom agents and sub‑agents: specialized roles and multi‑agent orchestration.
  • Hooks: lifecycle glue for automation, quality, and compliance.

Understanding State with show, state, and output

Terraform’s state is how it “remembers” what exists in your infrastructure so it can plan precise, minimal changes instead of blindly recreating resources. In this article, we’ll treat Terraform as a black box and learn how to inspect its memory using three key CLI tools: terraform show, the terraform state subcommands, and terraform output.


1. What Terraform State Actually Is

Terraform keeps a mapping between:

  • Your configuration (.tf files)
  • The real resources in your cloud provider (IDs, IPs, ARNs, etc.)

This mapping lives in a state file, usually terraform.tfstate, and often in a remote backend such as S3, Azure Blob Storage, or GCS for team use. The state includes every attribute of every managed resource, plus metadata used for things like dependency ordering and change detection.

Why you care:

  • Debugging: Is Terraform seeing the same thing you see in the console?
  • Refactoring: How do you rename resources without destroying them?
  • Automation: How do you feed outputs into CI/CD or other tools?

You should never hand-edit the state file; instead you use the CLI commands discussed below to read or safely modify it.


2. terraform show — Inspecting the Whole State or a Plan

Think of terraform show as “dump what Terraform currently knows” — it turns a state file or a saved plan into a human-readable or JSON view.

Core usage

# Show the current state snapshot (from the active backend)
terraform show

# Show a specific state file
terraform show path/to/terraform.tfstate

# Show a saved plan file
terraform show tfplan

# Machine-readable JSON for tooling
terraform show -json > plan.json

  • Without a file argument, terraform show prints the latest state snapshot from the active backend.
  • With a plan file, it describes the proposed actions and resulting state.
  • With -json, you get a structured document that external tools (e.g. CI, tests) can parse and validate.

Important: When using -json, sensitive values are printed in plain text; handle this carefully in pipelines and logs.

When to use terraform show

Use it when:

  • You want a global view: “What exactly is Terraform tracking right now?”
  • You want to inspect a plan artifact (plan -out tfplan) before approving it in CI.
  • You want to feed state or plan data into a tool (via -json) for policy checks, drift checks, or custom validation.

Conceptually, terraform show is read-only and holistic: it treats the state (or plan) as a whole, rather than individual resources.


3. terraform state — Fine-Grained State Inspection and Surgery

The terraform state command is a group of subcommands designed specifically to inspect and modify state without touching real infrastructure. This is the surgical toolkit you reach for when refactoring or repairing.

Key subcommands

  • terraform state list – lists all resource addresses in state (“What is Terraform tracking?”).
  • terraform state show ADDRESS – shows the attributes of one resource (debugging IDs, IPs, tags, etc.).
  • terraform state mv SRC DEST – moves/renames a resource in state (refactor config without destroy/recreate).
  • terraform state rm ADDRESS – removes a resource from state (stop managing a resource without deleting it).
  • terraform state pull – prints the raw state to stdout (backup, inspection, or external processing).
  • terraform state push – uploads a local state file (restore/correct broken remote state; used rarely and carefully).

3.1 terraform state list

terraform state list
# e.g.
# aws_instance.web[0]
# aws_instance.web[1]
# aws_security_group.allow_ssh

This gives you the resource addresses Terraform knows about, optionally filtered by a prefix. It’s extremely useful when working with modules or count/for_each, because you can see the exact address Terraform expects.

3.2 terraform state show

terraform state show aws_instance.web[0]

This prints every attribute of that specific resource as seen in state — IDs, IPs, tags, relationships, and computed attributes. Semantically, it answers: “What does Terraform think this one resource looks like?”.

Use it when:

  • Debugging drift: console vs state mismatch.
  • Understanding complex resources: which subnet, which IAM role?
  • Checking data sources that were resolved at apply time.

Note the difference:

  • terraform show → everything (or full plan).
  • terraform state show ADDRESS → one resource only.

3.3 terraform state mv — Refactor Without Downtime

terraform state mv aws_instance.web aws_instance.app

If you simply rename the block in your .tf code, Terraform will plan to destroy the old resource and create a new one because it assumes they’re unrelated. state mv tells Terraform that the underlying resource is the same, you’re just changing the mapping.

This is critical for:

  • Renaming resources.
  • Moving resources into/out of modules.
  • Splitting a monolith configuration into multiple modules/workspaces.
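
Terraform 1.1 and later also let you record such renames declaratively with a moved block, so every consumer of the configuration gets the same mapping on their next plan (a sketch using the names above):

```hcl
moved {
  from = aws_instance.web
  to   = aws_instance.app
}
```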

3.4 terraform state rm — Stop Managing Without Deleting

terraform state rm aws_instance.legacy

This removes the resource from Terraform’s management while leaving it alive in your provider. Use this when decommissioning Terraform from part of your estate or when you temporarily need Terraform to “forget” something (e.g. migration to a different tool).
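
Terraform 1.7 and later offer a declarative alternative: a removed block with destroy = false tells Terraform to forget the resource without destroying it (a sketch using the address above):

```hcl
removed {
  from = aws_instance.legacy

  lifecycle {
    destroy = false
  }
}
```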

3.5 terraform state pull / push

These expose and manipulate the raw state blob:

terraform state pull > backup.tfstate
terraform state push backup.tfstate

They’re useful for backups or extremely rare recovery scenarios, but they’re dangerous if misused, so in practice you rely much more on list, show, mv, and rm.


4. terraform output — Consuming State Safely

terraform output reads output values defined in the root module and prints their values from the state file. It is the “official interface” for other systems (and humans) to consume selected bits of state without parsing the state file directly.

4.1 Defining outputs in configuration

In your root module:

output "instance_ips" {
  value = aws_instance.web[*].public_ip
}

output "lb_address" {
  value = aws_lb.web.dns_name
}

output "db_connection_string" {
  value     = module.database.connection_string
  sensitive = true
}

  • Outputs are calculated after terraform apply and stored in state.
  • Only root module outputs are visible to terraform output; child module outputs must be re-exposed.

4.2 Using terraform output interactively

# Show all outputs for the root module
terraform output

# Show one specific output
terraform output lb_address

# Machine-readable JSON
terraform output -json

# Raw string (no quotes/newlines), perfect for scripts
terraform output -raw lb_address

  • With no arguments, it prints all root outputs.
  • With a NAME, it prints just that value.
  • -json gives a JSON object keyed by output name; can be piped into jq or similar tools.
  • -raw prints a bare string/number/boolean; ideal when exporting in shell scripts without extra quoting.

This is the idiomatic way to feed state into:

  • CI/CD pipelines (e.g. get ALB DNS for integration tests).
  • Other scripts (e.g. configure DNS records).
  • Other tools (e.g. Ansible inventory).

5. Putting It Together: A Simple Example

Below is a minimal, self-contained configuration you can run locally.

5.1. Prerequisites

  1. LocalStack running (Docker is typical):

    docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack

    LocalStack’s edge endpoint is exposed on http://localhost:4566 by default.

  2. Terraform installed (1.x).

5.2. Terraform Configuration Using LocalStack

Create a directory (for example tf-localstack-ec2) and within it create two files: versions.tf and main.tf.

versions.tf

Pin the AWS provider to a major version that is known to work well with LocalStack (5.x here):

terraform {
  required_version = ">= 1.6.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

main.tf

provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    ec2 = "http://localhost:4566"
  }
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"

  tags = {
    Name = "tf-demo-web-localstack"
  }
}

output "web_public_ip" {
  value = aws_instance.web.public_ip
}

Notes:

  • The endpoints.ec2 block points the EC2 API at LocalStack’s edge endpoint.
  • Credentials are dummy; LocalStack doesn’t actually validate them.
  • The AMI ID is a placeholder; LocalStack typically does not require the AMI to exist, but EC2 support is limited and can hang for some combinations. For state/command learning it’s usually enough that Terraform “thinks” it created something.

5.3. How to Apply and Validate

From the directory containing these files:

  1. Initialize and apply

    terraform init
    terraform apply -auto-approve

    Terraform will talk to LocalStack instead of AWS because of the custom endpoint.

  2. Validate with show

    terraform show

    Confirm there is an aws_instance.web block with attributes populated from LocalStack’s response.

  3. Validate with state

    terraform state list
    # should include:
    # aws_instance.web
    
    terraform state show aws_instance.web

    This tells you what Terraform’s state holds for this specific resource address.

  4. Validate with output

    terraform output web_public_ip
    terraform output -raw web_public_ip

    For LocalStack, the public IP may be empty or synthetic depending on EC2 emulation level, but the command proves the wiring from resource → state → output.

5.4. Rationale for These Choices

  • We override only the EC2 endpoint to keep the example close to “real” AWS code while still talking to LocalStack.
  • We relax provider validations (skip_* flags) because LocalStack does not implement all AWS account/metadata APIs.

With this setup, you can safely experiment with terraform show, terraform state *, and terraform output on your laptop, without touching real AWS accounts or incurring cost.


6. Conceptual Summary: Which Command When?

  • See everything Terraform knows → terraform show (whole-state, read-only view of state or a plan).
  • Inspect one resource deeply → terraform state show ADDRESS (focused, per-resource state inspection).
  • List all tracked resources → terraform state list (discover the resource addresses in state).
  • Rename/move resources or modules → terraform state mv (refactor mappings without downtime).
  • Forget a resource but keep it alive → terraform state rm (stop managing without deleting).
  • Give other tools a clean interface → terraform output / -json / -raw (the official way to expose selected state data).

The underlying rationale is separation of concerns:

  • terraform show → observability of plans and state.
  • terraform state → precise manipulation and inspection of state.
  • terraform output → a controlled, stable API to state for humans and downstream systems.

Terraform Console in Practice: Your Interactive HCL Lab

Terraform console is an interactive interpreter where you can evaluate Terraform expressions, inspect state, and prototype logic before committing anything to code or infrastructure.


1. What terraform console actually is

At its core, terraform console is a REPL for Terraform’s expression language (HCL2).

  • It reads your configuration and current state from the configured backend, so you can query real values: var.*, local.*, resource.*, data.*.
  • It is read‑only with respect to infrastructure: it does not change resources or configuration; it only evaluates expressions against configuration/state or, if you have no state yet, against pure expressions and built‑ins.
  • It holds a lock on state while open, so other commands that need the state (plan/apply) will wait or fail until you exit.

Pedagogically, think of it as Terraform’s “maths lab”: you experiment with expressions and data structures in isolation before wiring them into modules.


2. Why you should care as a practitioner

You will use terraform console for three broad reasons:

  • Rapid feedback on expressions
    • Try out for expressions, conditionals, complex locals, and functions like cidr*, jsonencode, jsondecode, file, etc., without running full plans.
  • Insight into “what Terraform thinks”
    • Inspect live values for resources, data sources, variables, and outputs as Terraform sees them in state, which is often where misunderstandings hide.
  • Debugging complex data structures
    • When for_each over nested maps/lists behaves oddly, you can print and transform the structures interactively to understand shape and keys before editing code.

This shortens the debug loop significantly on large stacks and reduces the risk of generating enormous, accidental plans.
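As a sketch of that debugging workflow, suppose a module receives a nested map variable (var.subnets here is hypothetical); a session might look like:

```
> var.subnets
{
  "a" = {
    "cidr" = "10.0.1.0/24"
  }
  "b" = {
    "cidr" = "10.0.2.0/24"
  }
}

> { for k, v in var.subnets : k => v.cidr }
{
  "a" = "10.0.1.0/24"
  "b" = "10.0.2.0/24"
}
```

Once the shape and keys are confirmed interactively, the for expression can be used in a for_each with confidence.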


3. Running the console and basic usage

In any initialized working directory:

terraform init    # if not already done
terraform console

You then get a prompt like:

> 1 + 2
3

> upper("auckland")
"AUCKLAND"

You can reference configuration components directly:

> var.cidr
"10.0.0.0/24"

> cidrnetmask(var.cidr)
"255.255.255.0"

> cidrhost(var.cidr, 10)
"10.0.0.10"

Inspecting resources and data sources:

> aws_s3_bucket.data
# prints the entire state object of that bucket (attributes, tags, region, etc.)

Terraform’s own tutorial demonstrates this pattern with an S3 bucket, using terraform console to print attributes like bucket, arn, region, ACLs and so on from state.

To exit:

> exit

Or press Ctrl+D / Ctrl+C.


4. Evaluating expressions: from simple to advanced

The console supports essentially any expression you can write in HCL: literals, operators, functions, for expressions, conditionals, etc.

Examples:

  • Lists and maps:

    > [for env in ["dev", "test", "prod"] : "env-${env}"]
    [
      "env-dev",
      "env-test",
      "env-prod",
    ]
    
    > { for k, v in { a = 1, b = 2, c = 3 } : k => v if v % 2 == 1 }
    {
      "a" = 1
      "c" = 3
    }
  • Filtering complex maps (example adapted from the docs):

    variable "apps" {
    type = map(any)
    default = {
      foo = { region = "us-east-1" }
      bar = { region = "eu-west-1" }
      baz = { region = "ap-south-1" }
    }
    }

    In the console:

    > var.apps.foo
    {
      "region" = "us-east-1"
    }
    
    > { for key, value in var.apps : key => value if value.region == "us-east-1" }
    {
      "foo" = {
        "region" = "us-east-1"
      }
    }
  • Testing network helpers:

    > cidrnetmask("172.16.0.0/12")
    "255.240.0.0"

This is exactly how you should design locals and for_each expressions: prototype an expression in console, inspect the result, then paste into your module.
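For instance, once the filter over var.apps above evaluates as expected in the console, the expression can be promoted verbatim into a named local:

```hcl
locals {
  # Same expression as tested in the console, now part of the module.
  us_east_apps = { for key, value in var.apps : key => value if value.region == "us-east-1" }
}
```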


5. Inspecting state and outputs

Console is wired to your current backend and workspace.

  • Inspect an entire resource instance:

    > aws_s3_bucket.data
    # large object showing bucket name, ARN, tags, region, ACL, encryption, etc.

    The S3 tutorial shows this in detail, where the console prints attributes like bucket, bucket_domain_name, force_destroy, encryption configuration, tags, and more.

  • Build structured objects and validate them:

    > jsonencode({
      arn    = aws_s3_bucket.data.arn
      id     = aws_s3_bucket.data.id
      region = aws_s3_bucket.data.region
    })
    "{\"arn\":\"arn:aws:s3:::...\",\"id\":\"...\",\"region\":\"us-west-2\"}"

The tutorial uses this pattern to design an output bucket_details, then later validates that terraform output -json bucket_details produces the exact desired JSON structure.

This is a powerful workflow: design your JSON structures interactively in console, then turn them into outputs or policy documents.
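A sketch of that promotion, assuming the aws_s3_bucket.data resource from the tutorial:

```hcl
output "bucket_details" {
  # Mirrors the object prototyped with jsonencode in the console;
  # terraform output -json bucket_details then emits the same structure.
  value = {
    arn    = aws_s3_bucket.data.arn
    id     = aws_s3_bucket.data.id
    region = aws_s3_bucket.data.region
  }
}
```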


6. Using the console with plans (-plan)

By default, console evaluates expressions against the current state, which means values “known after apply” are not concrete yet.

You can ask console to evaluate against a fresh plan:

terraform console -plan

Now you can inspect “planned” values that do not exist in state yet, e.g. resources that are about to be created.

Rationale: this helps reason about the result of for_each, count, and complex expressions before touching real infrastructure. The docs do note that configurations which perform side effects during planning (for example via external data sources) will also do so in console -plan, so such patterns are discouraged.


7. Non‑interactive/scripting usage

You can pipe expressions into console from a script; only the last expression’s result is printed unless an error occurs.

Example from the reference:

echo 'split(",", "foo,bar,baz")' | terraform console

Output:

tolist([
  "foo",
  "bar",
  "baz",
])

This is extremely handy for:

  • CI checks that assert a particular expression evaluates to an expected structure.
  • One‑off debugging scripts that compute derived values from state (e.g. join tags, summarise regions) without adding permanent outputs.
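A minimal CI guard along these lines (a sketch; the expression and expected value are only illustrative):

```shell
#!/bin/sh
# Fail the pipeline if the expression no longer evaluates to the expected value.
expected='"255.255.255.0"'
actual=$(echo 'cidrnetmask("10.0.0.0/24")' | terraform console)
if [ "$actual" != "$expected" ]; then
  echo "unexpected result: $actual" >&2
  exit 1
fi
```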

8. A simple example

Let’s assemble a minimal, end‑to‑end example that you can run locally.

8.1. Configuration

Files:

variables.tf:

variable "cidr" {
  type    = string
  default = "10.0.0.0/24"
}

main.tf:

terraform {
  required_version = ">= 1.1.0"

  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

provider "random" {}

resource "random_password" "db" {
  length  = 16
  special = true
}

locals {
  subnet_ips = [
    for host in range(1, 5) :
    cidrhost(var.cidr, host)
  ]
}

output "db_password" {
  value     = random_password.db.result
  sensitive = true
}

output "subnet_ips" {
  value = local.subnet_ips
}

This uses standard providers and functions: random_password, cidrhost, range, and a for expression, all supported in Terraform 1.1+.

8.2. Apply once

terraform init
terraform apply -auto-approve

You now have state with random_password.db and all locals resolved.

8.3. Explore and validate with terraform console

Run:

terraform console

Try these expressions:

> var.cidr
"10.0.0.0/24"

> local.subnet_ips
[
  "10.0.0.1",
  "10.0.0.2",
  "10.0.0.3",
  "10.0.0.4",
]

> random_password.db.result
"R@nd0mP@ss..." # your value will be different

Validation steps:

  1. Confirm outputs match console:

    terraform output subnet_ips

    You should see the same list printed that you saw for local.subnet_ips in console. Both are derived from the same expression and state.

  2. Confirm password consistency:

    • terraform state show random_password.db will show the result field.
    • Compare that value with random_password.db.result printed in console; they must be identical for the same state snapshot.

If both checks pass, you have empirically validated that:

  • The console is looking at the same state as terraform state and terraform output.
  • Your locals and for expressions behave exactly as expected before you embed similar patterns into more complex modules.
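The same cross-check can be scripted in one pass (a sketch; compare the two printed lists by eye or with a diff tool):

```shell
# Both commands read the same state, so the four IPs should match.
echo 'local.subnet_ips' | terraform console
terraform output -json subnet_ips
```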

9. Rationale and best‑practice use

From an engineering‑practice perspective, use terraform console as a standard part of your workflow:

  • Before adding non‑trivial expressions
    • Prototype them in console with realistic variable values; only once you're happy, paste them into locals or resource arguments.
  • When debugging bugs in production stacks
    • Inspect what Terraform actually has in state for a resource or data source, rather than inferring from code.

Used this way, console is not a “nice extra” but a core tool: it turns Terraform’s somewhat opaque expression runtime into something you can interrogate directly and safely.

Terraform Expressions and Functions

Terraform’s expression and function system is the core “thinking engine” behind your configurations: expressions describe what value an argument should have, and functions are reusable tools you invoke inside those expressions to compute values dynamically.


1. What Is an Expression in Terraform?

An expression is any piece of HCL that Terraform can evaluate to a concrete value: a string, number, bool, list, map, or object. You use expressions almost everywhere: on the right-hand side of arguments, in locals, count, for_each, dynamic blocks, and more.

Common expression forms:

  • Literals: "hello", 5, true, null, ["a", "b"], { env = "dev" }.
  • References: var.region, local.tags, aws_instance.app.id, module.vpc.vpc_id.
  • Operators: arithmetic (+ - * / %), comparisons (< > <= >=), equality (== !=), logical (&& || !).
  • Conditionals: condition ? value_if_true : value_if_false.
  • for expressions: [for s in var.subnets : upper(s)] to transform collections.
  • Splat expressions: aws_instance.app[*].id to project attributes out of collections.

Rationale: Terraform must stay declarative (you describe the desired state), but real infrastructure is not static; expressions give you a minimal “language” to derive values from other values without dropping into a general-purpose programming language.


2. What Is a Function?

A function is a built‑in helper you call inside expressions to transform or combine values. The syntax is a function name followed by comma‑separated arguments in parentheses, for example max(5, 12, 9). Functions always return a value, so they can appear anywhere a normal expression is allowed.

Key properties:

  • Terraform ships many built‑in functions (string, numeric, collection, IP/network, crypto, time, type conversion, etc.).
  • You cannot define your own functions in HCL; you only use built‑ins, plus any provider-defined functions a provider may export.
  • Provider-defined functions are namespaced like provider::<local-name>::function_name(...) when used.

Examples of useful built‑in functions:

  • String: upper("dev"), lower(), format(), join("-", ["app", "dev"]).
  • Numeric: max(5, 12, 9), min(), ceil(), floor().
  • Collection: length(var.subnets), merge(local.tags, local.extra_tags), flatten().

Rationale: Functions cover the common transformation needs (naming, list/map manipulation, math) so that your Terraform remains expressive but compact, and you avoid copy‑pasting “string‑mangling” logic everywhere.
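A quick console session exercising a few of the functions listed above:

```
> upper("dev")
"DEV"

> max(5, 12, 9)
12

> join("-", ["app", "dev"])
"app-dev"

> merge({ a = 1 }, { b = 2 })
{
  "a" = 1
  "b" = 2
}
```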


3. How Expressions and Functions Work Together

Terraform’s model is expression‑centric: on the right‑hand side of almost every argument, you write an expression, and function calls are just one kind of expression. You freely compose references, operators, conditionals, for expressions, and functions, as long as the input and output types match.

Typical composition patterns:

  • Use references (var.*, local.*, resource attributes) as the base inputs.
  • Apply operators and conditional expressions to make decisions (var.env == "prod" ? 3 : 1).
  • Use for expressions and collection functions to reshape data ([for s in var.subnets : upper(s)]).
  • Use string functions to build consistent resource names (format("%s-%s", var.app, var.env)).

From a mental-model perspective, a good way to think about this is: “Everything dynamic lives in expressions; functions are building blocks inside those expressions.”


4. A Small Example

Below is a minimal Terraform configuration that showcases expressions and functions together, and that you can actually run to observe evaluation results.

Example configuration (main.tf)

terraform {
  required_version = ">= 1.6"
}

variable "environment" {
  type    = string
  default = "dev"
}

variable "app_servers" {
  type    = list(string)
  default = ["app-1", "app-2", "app-3"]
}

locals {
  # Expression: equality operator -> bool
  is_prod = var.environment == "prod"

  # Literal map and reference
  base_tags = {
    app         = "payments"
    environment = var.environment
  }

  # For expression + string function
  uppercased_servers = [for s in var.app_servers : upper(s)]

  # Merge and format functions to compute a name once
  common_tags = merge(
    local.base_tags,
    {
      name = format(
        "%s-%s-%02d",
        local.base_tags.app,
        local.base_tags.environment,
        length(var.app_servers)
      )
    }
  )
}

output "summary" {
  value = {
    is_prod            = local.is_prod
    uppercased_servers = local.uppercased_servers
    common_tags        = local.common_tags
  }
}

What this demonstrates conceptually:

  • Expressions:
    • var.environment == "prod" produces a bool for local.is_prod.
    • The map in local.base_tags uses both literals and references.
    • The locals block itself is a way to give names to intermediate expressions.
  • Functions:
    • upper(s) transforms each server name to uppercase inside a for expression.
    • length(var.app_servers) computes the number of servers.
    • format("%s-%s-%02d", ...) builds a stable name string.
    • merge(...) combines two maps into a single tag map.

Rationale: This pattern—variables + locals + expressions + functions—is exactly how you avoid repetition and keep a production Terraform codebase readable as it grows.


5. How to Validate This Example

Terraform provides an expression console and standard workflow commands to validate that your expressions and functions behave as expected before they affect real infrastructure.

Option A: Run the configuration

  1. Save the example as main.tf in an empty directory.
  2. Run:
    • terraform init to set up the working directory.
    • terraform apply -auto-approve to evaluate and show outputs.
  3. Observe the summary output:
    • is_prod should be false (with the default environment dev).
    • uppercased_servers should be ["APP-1", "APP-2", "APP-3"].
    • common_tags.name should be payments-dev-03.

To see how expressions react to different inputs, run again with a different environment:

terraform apply -auto-approve -var 'environment=prod'

Now is_prod will be true, and the computed name will switch to payments-prod-03, even though you haven’t changed any resource definitions.

Option B: Experiment interactively with terraform console

Terraform’s console lets you test expressions and functions on the fly.

From the same directory:

terraform console

Then try:

> 1 + 2 * 3
> var.environment == "prod"
> [for s in var.app_servers : upper(s)]
> merge({a = 1}, {b = 2})
> format("%s-%s", "app", var.environment)

You will see the evaluated results immediately, which is ideal for teaching yourself how a particular expression or function behaves before embedding it into a real module.


6. Summary Table: Expressions vs Functions

Aspect | Expressions | Functions
Purpose | Describe how to compute a value. | Provide reusable operations used inside expressions.
Examples | var.env == "prod", [for s in xs : s.id] | length(var.subnets), join("-", var.subnets)
Defined by | Terraform language syntax. | Terraform's built-in and provider-defined function library.
Customization | Composed via locals, variables, and blocks. | No user-defined functions in HCL; only built-ins/providers.
Typical usage domain | Conditionals, loops, references, constructing structures. | String formatting, math, collection manipulation, conversion.