Kickstarting AI-Enhanced Automation Infrastructure: A Practical Guide Using Claude Code

A practical method for designing extensible test automation infrastructure using a JSON schema as the single source of truth and Claude Code as a development partner.

Imagine you’re a software developer - or better yet, an automation infrastructure engineer - tasked with building a test framework from scratch. Sounds exciting, right? Well… that depends on how well you execute it.

There are countless considerations when designing a resilient automation infrastructure. But the real secret? Making it extensible with minimal effort.

In this short guide, I’ll walk you through a method I’ve designed to streamline the design and implementation process using AI as a development partner.

The Key Ingredient: A Single Source of Truth

Let’s cut to the chase. We need a single, authoritative data source that describes our system’s API. This lets us standardize test creation and simplify maintenance.

The simplest solution? A JSON Schema (or YAML definition) that outlines the exposed API. This, by the way, can often be pulled straight from the product itself (wink-wink-Swagger-wink-wink).

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "UserAPI",
  "type": "object",
  "properties": {
    "request": { "$ref": "#/definitions/CreateUserRequest" },
    "response": { "$ref": "#/definitions/CreateUserResponse" }
  },
  "required": ["request", "response"],
  "definitions": {
    "CreateUserRequest": {
      "type": "object",
      "required": ["username", "email", "password"],
      "properties": {
        "username": { "type": "string", "minLength": 3, "maxLength": 30 },
        "email": { "type": "string", "format": "email" },
        "password": { "type": "string", "minLength": 8 },
        "age": { "type": "integer", "minimum": 0 },
        "isAdmin": { "type": "boolean", "default": false }
      },
      "additionalProperties": false
    },
    "CreateUserResponse": {
      "type": "object",
      "required": ["id", "username", "email", "createdAt"],
      "properties": {
        "id": { "type": "string", "format": "uuid" },
        "username": { "type": "string" },
        "email": { "type": "string", "format": "email" },
        "createdAt": { "type": "string", "format": "date-time" },
        "isAdmin": { "type": "boolean" }
      },
      "additionalProperties": false
    }
  }
}

This schema defines a basic CreateUser API with both request and response objects. The real magic comes from letting Claude generate our client library based on this schema.
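To see how the schema can act as a single source of truth for tooling, here is a minimal sketch using only the standard library. It inlines the schema for self-containment (in a real project you would read it from a file such as schema.json, which is an assumption here) and extracts the request's required fields - exactly the kind of metadata a client or test generator would consume:

```python
import json

# The schema from above, inlined so the example is self-contained;
# in a real project you would load it from schema.json instead.
SCHEMA = json.loads("""
{
  "title": "UserAPI",
  "definitions": {
    "CreateUserRequest": {
      "type": "object",
      "required": ["username", "email", "password"],
      "properties": {
        "username": {"type": "string", "minLength": 3, "maxLength": 30},
        "email": {"type": "string", "format": "email"},
        "password": {"type": "string", "minLength": 8},
        "age": {"type": "integer", "minimum": 0},
        "isAdmin": {"type": "boolean", "default": false}
      }
    }
  }
}
""")


def required_fields(schema: dict, definition: str) -> list[str]:
    """Return the required property names for a named schema definition."""
    return schema["definitions"][definition].get("required", [])


print(required_fields(SCHEMA, "CreateUserRequest"))
# ['username', 'email', 'password']
```

Everything downstream - the client, the tests, the negative cases - can be derived from lookups like this one, which is what makes the schema-first approach extensible.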

Step 1: Introduce Claude to Your Project

At the root of your testing project, create a file named CLAUDE.md. This file introduces Claude to your project and explains how the client should be generated.

Additionally, place another CLAUDE.md inside the client directory. Claude supports multiple config files based on folder structure, which gives you fine-grained control over how different parts of the project are handled.
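The exact contents will vary per project, but a root CLAUDE.md might look something like the following sketch. The paths, module layout, and rules here are illustrative assumptions, not a fixed format Claude requires:

```markdown
# Project Overview
This is a test automation project. The API surface is described in
`schema.json`, which is the single source of truth.

## Client Generation Rules
- Generate one Python module per API under `client/`.
- Each request/response definition becomes a dataclass mirroring the schema.
- Never hardcode URLs; read them from the SUT configuration at runtime.
- Prefer the standard library; avoid third-party dependencies in generated code.
```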

This configuration tells Claude how I want the project to be built - simple, verbal, and very easy to understand. But without a concrete example it's worth little; Claude won't reliably follow the pattern. So in the same file, I include one.

Step 2: Generate the Client

Now you’re ready to ask Claude:

“Generate me a client library in the project.”

In seconds, Claude proposes a fully working client structure. You get clear, Pythonic code - based on your schema - ready to use and maintain.

Step 3: Harden the Setup

One of the things I keep finding when iterating with LLMs is that fine-tuning the responses takes work: you ask the same question repeatedly while refining the structure you want.

In the previous step I described how to generate a client library - but what if the results aren't what we wanted? Then we need to explain the goal more precisely, so the model builds things the way we'd like.

I added more information. In the example, I created a new sut.py file in the root of the project - a singleton that holds the URL configuration. Why? Because it’s a testing system, and we should inject the URL at runtime to test against different environments.
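As a rough sketch of what such a sut.py could contain (the environment-variable name and default URL are my own assumptions), a singleton holding the runtime URL configuration might look like:

```python
import os


class SUT:
    """Singleton holding the system-under-test configuration."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # Injected at runtime so the same test suite can target
            # dev, staging, or CI environments without code changes.
            cls._instance.base_url = os.environ.get(
                "SUT_BASE_URL", "http://localhost:8000"
            )
        return cls._instance
```

Every generated client then reads its base URL from `SUT().base_url` instead of hardcoding it.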

I gave instructions on what should and shouldn’t be done and modified the configuration slightly.

The response: exactly what I’d want in my project. Clear, simple, expandable when schema.json changes.

What’s Next?

This guide is just the beginning. Here are a few ideas to level it up:

  • Auto-generate the client on each commit by pulling the latest schema.json from your Git repo.
  • Generate pytest test cases from the schema using Claude.
  • Evolve the client into an MCP agent for advanced test automation or agent-to-agent simulations.
  • Celebrate your hard work with a well-earned beer. Even if it took no more than 2 hours.
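On the second idea above: the same required-fields metadata that drives client generation can drive negative tests. Here's a hedged sketch (payload values are made up) of the schema-driven matrix - one case per required field, with that field removed - that a generated pytest suite would feed into `@pytest.mark.parametrize`:

```python
# Required fields taken from the CreateUserRequest definition in the schema.
REQUIRED = ["username", "email", "password"]

VALID_PAYLOAD = {
    "username": "alice",
    "email": "alice@example.com",
    "password": "s3cretpwd",
}


def negative_payloads(valid: dict, required: list[str]) -> list[dict]:
    """One payload per required field, with that field removed --
    the classic schema-driven negative-test matrix."""
    return [
        {k: v for k, v in valid.items() if k != missing}
        for missing in required
    ]


for payload in negative_payloads(VALID_PAYLOAD, REQUIRED):
    print(sorted(payload))
```

Each of these payloads should be rejected by the API (for example with a 400 response) - and because the matrix is derived from the schema, it grows automatically when new required fields are added.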

This workflow isn’t just about saving time - it’s about building smarter, more scalable test infrastructure with the help of AI.

Start simple. Iterate fast. And let your infrastructure grow with your product.