
Soft Asserting With pytest Made Fun

Stop the fix-run-fail-repeat cycle. pytest-check lets you collect multiple assertion failures in a single test run.

As Python developers and automation engineers, we’ve all been there. You’re running a comprehensive test suite (unit, component, or end-to-end), and one assertion fails early in a test function. The test stops executing, and you’re left wondering whether other failures are lurking further down in the same test. You fix that one issue, re-run the test, and discover the next failure. Fix, run, fail, repeat.

Meet pytest-check

Unlike traditional pytest assertions, which halt execution at the first failure, pytest-check lets a test keep running after a check fails. You collect multiple failures in a single test run, which dramatically improves your debugging efficiency and lets you investigate several failure areas at once.
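
Here’s a minimal sketch of the difference. With a plain assert, the second comparison never runs; with check, both failures show up in a single report:

from pytest_check import check

def test_with_plain_assert():
    assert 1 == 2        # test stops here; the next line never runs
    assert "a" == "b"

def test_with_pytest_check():
    check.equal(1, 2)        # recorded as a failure, execution continues
    check.equal("a", "b")    # also recorded; both appear in the report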

Real-world examples

In the following example, we validate several fields of a registration form at once using soft asserts from the check module.

from pytest_check import check

def test_user_registration_complete():
    user_data = {
        "username": "johndoe123",
        "email": "john@example.com",
        "password": "weak123",
        "age": 17
    }
    result = register_user(user_data)

    # Capture all validation failures at once
    check.is_true(len(result.username) >= 6)
    check.is_true("@" in result.email and "." in result.email)
    check.is_true(len(result.password) >= 8)
    check.is_true(result.age >= 18)
    check.is_false(result.has_errors)
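
A tip on readability: every check function accepts an optional msg argument, and it’s worth using for boolean checks like the ones above, where the report otherwise only tells you that something falsy failed. A labeled version of two of those checks might look like this:

check.is_true(len(result.username) >= 6, msg="username too short")
check.is_true(result.age >= 18, msg="user must be 18 or older")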

In this example, we check that data was migrated correctly after a system or database upgrade:

from pytest_check import check

def test_customer_data_migration():
    migrated_customers = run_migration_script()

    # Verify migration completeness across all records
    check.equal(len(migrated_customers), 1500)  # Expected count

    for customer in migrated_customers[:10]:  # Sample check
        check.is_not_none(customer.id)
        check.is_not_none(customer.created_date)
        check.is_true(customer.email_verified)
        # between_equal asserts 0 <= loyalty_points <= 10000
        check.between_equal(customer.loyalty_points, 0, 10000)

API responses with many fields are another natural fit for pytest-check:

from datetime import datetime

from pytest_check import check

def test_payment_gateway_integration():
    payment_request = create_test_payment(amount=99.99)
    response = process_payment(payment_request)

    # Validate entire payment flow in one execution
    check.equal(response.status, "success")
    check.equal(response.amount_charged, 99.99)
    check.is_not_none(response.transaction_id)
    check.is_true(response.timestamp <= datetime.now())
    check.is_in("https://", response.receipt_url)  # is_in(a, b) asserts a in b
    check.equal(response.currency, "USD")
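
One caveat in this sketch: amount_charged is a float, so exact equality can be brittle if the gateway rounds or converts along the way. pytest-check provides check.almost_equal, which compares via pytest.approx and takes rel/abs tolerances:

# Tolerant alternative to check.equal for float amounts
check.almost_equal(response.amount_charged, 99.99, abs=0.01)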

And here we verify that a system composed of multiple services and components is actually healthy, labeling each check with a msg so the failure report points straight at the failing subsystem:

from pytest_check import check

def test_system_health_comprehensive():
    system_status = get_system_health()

    # Database connectivity checks
    check.is_true(system_status.db_connected, msg="database not connected")
    check.less(system_status.db_response_time, 100, msg="db response time (ms) too high")
    check.greater(system_status.db_connections, 5, msg="db connection pool too small")

    # Cache system checks
    check.is_true(system_status.cache_available, msg="cache unavailable")
    check.less(system_status.cache_memory_usage, 80, msg="cache memory usage above 80%")
    check.greater(system_status.cache_hit_rate, 0.85, msg="cache hit rate below 85%")

    # External service checks
    check.is_true(system_status.payment_service_up, msg="payment service down")
    check.is_true(system_status.email_service_up, msg="email service down")
    check.less(system_status.avg_api_latency, 200, msg="avg API latency above 200 ms")
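
If you prefer grouping related asserts into blocks, pytest-check also offers a context manager: a failing assert inside a with check: block is recorded as a soft failure, and the test continues with the next block. Keep in mind that execution inside a single block still stops at its first failing assert, so each block reports at most one failure:

from pytest_check import check

def test_system_health_blocks():
    system_status = get_system_health()

    with check:
        assert system_status.db_connected, "database not connected"

    with check:
        assert system_status.cache_available, "cache unavailable"

    with check:
        assert system_status.payment_service_up, "payment service down"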

Why this matters for your development workflow

Time savings. Instead of the fix-one-failure-at-a-time cycle, you see all the issues upfront. This is especially valuable in integration tests or when validating complex data structures.

Better test coverage. You can write more comprehensive tests without worrying about early exits masking other potential issues.

Improved debugging. Getting a complete picture of what’s working and what isn’t helps you identify patterns and root causes faster.

Real-world impact

I recently used pytest-check on a project where we were validating API responses with multiple fields. It worked nicely, which led me to integrate it across multiple test suites.

Getting started

Installation is straightforward:

pip install pytest-check

The plugin integrates seamlessly with your existing pytest setup. You can even mix traditional assertions with pytest-check assertions in the same test file.
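
For example, you can guard a precondition with a hard assert, so the test bails out early when the rest of it would be meaningless, and use soft checks for everything after it (a small sketch; fetch_user is a hypothetical helper):

from pytest_check import check

def test_mixed_assertions():
    user = fetch_user("johndoe123")  # hypothetical helper

    # Hard assert: without a user, the checks below are meaningless
    assert user is not None

    # Soft checks: collect every remaining failure in one run
    check.equal(user.username, "johndoe123")
    check.is_true(user.is_active)
    check.is_not_none(user.created_date)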

The bottom line

pytest-check represents a shift in how we think about test assertions. Soft assertions that continue execution even after failures are becoming increasingly valuable as our applications grow more complex and our testing needs evolve.

Have you encountered scenarios where pytest-check could have saved you significant debugging time?