```markdown
#!/usr/bin/env -S ai --sonnet --skip

Run the test suite for this project. Report which tests passed and which
failed. If any tests fail, explain the root cause.
```
From examples/run-tests.md. The AI detects your test framework automatically.
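The detection logic itself is internal to the runner and not documented here, but a minimal sketch of what it might look like, checking for common project files, could be (the `detect_framework` function and the file-to-command mapping are illustrative assumptions, not the tool's actual implementation):

```shell
#!/bin/bash
# Hypothetical sketch: infer the test command from files in the project root.
# The real runner's detection is internal; this only illustrates the idea.
detect_framework() {
  local dir="${1:-.}"
  if   [ -f "$dir/package.json" ]; then echo "npm test"
  elif [ -f "$dir/pytest.ini" ] || [ -f "$dir/setup.py" ]; then echo "pytest"
  elif [ -f "$dir/Cargo.toml" ];   then echo "cargo test"
  elif [ -f "$dir/go.mod" ];       then echo "go test ./..."
  else echo "unknown"
  fi
}

detect_framework "$@"
```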
```bash
# Make it executable
chmod +x run-tests.md

# Run in any project directory
cd ~/projects/my-app
./run-tests.md
```
Output:
```
[AI Runner] Using: Claude Code + Claude Pro
[AI Runner] Model: Sonnet 4.6

I'll run the test suite using npm test.

✓ 45 tests passed
✗ 3 tests failed:

1. UserService.createUser - validation error
   File: src/services/user.service.test.ts:42
   Root cause: Missing email validation in createUser function
   The test expects validation but the implementation skips it

2. API /auth/login - 500 error
   File: src/api/auth.test.ts:78
   Root cause: Database connection not mocked in test setup
   The test tries to connect to a real database

3. Utils.parseDate - timezone handling
   File: src/utils/date.test.ts:91
   Root cause: Date parsing assumes UTC but test runs in local timezone
```
```bash
# Use Haiku for faster/cheaper analysis
ai --haiku run-tests.md

# Use AWS Bedrock
ai --aws --sonnet run-tests.md

# Use local Ollama (free!)
ai --ollama run-tests.md
```
```bash
#!/bin/bash
# .git/hooks/pre-commit
echo "Running test suite..."
if ! ai --haiku --skip << 'EOF' > /tmp/test-results.txt; then
Run the test suite. Only output PASS or FAIL and a count.
EOF
  cat /tmp/test-results.txt
  echo "Tests failed. Commit aborted."
  exit 1
fi
echo "All tests passed."
```
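Installing the hook is plain git, nothing runner-specific: the script must live at `.git/hooks/pre-commit` and be executable, since git silently skips non-executable hooks. A self-contained demonstration (using a throwaway repo and a stub hook body):

```shell
#!/bin/bash
# Set up a throwaway repo to demonstrate hook installation.
repo=$(mktemp -d)
git init -q "$repo"

# Write the hook (a stub here; in practice use the full pre-commit script above).
cat > "$repo/.git/hooks/pre-commit" << 'HOOK'
#!/bin/bash
echo "Running test suite..."
HOOK

# git only runs hooks that are executable.
chmod +x "$repo/.git/hooks/pre-commit"
test -x "$repo/.git/hooks/pre-commit" && echo "hook installed"
```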
```bash
#!/bin/bash
# nightly-test-analysis.sh
cd ~/projects/my-app

ai --apikey --opus --skip << 'EOF' > "test-report-$(date +%Y-%m-%d).md"
Run the full test suite including integration tests.

For each failure:
1. Identify the root cause
2. Check git history for recent changes to related files
3. Suggest specific fixes with code snippets
4. Rate the severity (critical/high/medium/low)

Summarize test health trends if you can access previous reports.
EOF

# Email the report
mail -s "Nightly Test Report" dev-team@company.com < "test-report-$(date +%Y-%m-%d).md"
```
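To actually run this every night, a standard crontab entry is enough (the 02:00 schedule and the script path below are placeholders; adjust both for your machine):

```
# crontab -e: run the analysis every night at 02:00
0 2 * * * /home/you/bin/nightly-test-analysis.sh
```

Note that cron runs with a minimal environment, so the script may need to set `PATH` (or use absolute paths) for `ai` and `mail` to be found.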
This modifies code. Use it only in development, never in CI without review.
```markdown
#!/usr/bin/env -S ai --sonnet --skip

Run the test suite. If any tests fail:
1. Analyze the root cause
2. Fix the issue in the source code
3. Run tests again to verify the fix
4. Report what you changed

Do NOT commit changes. Just fix and verify.
```
```markdown
#!/usr/bin/env -S ai --sonnet --skip

Run only unit tests (npm run test:unit).
Report pass/fail counts and analyze failures.
```
Integration tests with setup:
```markdown
#!/usr/bin/env -S ai --sonnet --skip

Run integration tests:
1. Start docker-compose services
2. Wait for services to be healthy
3. Run npm run test:integration
4. Stop services when done

Analyze any failures.
```
```markdown
#!/usr/bin/env -S ai --sonnet --skip

Run performance tests:
1. Execute npm run test:perf
2. Compare results to previous baseline in perf-baseline.json
3. Flag any regressions >10%
4. Identify the slowest tests
```
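The prompt assumes a `perf-baseline.json` checked into the project; since the AI reads it as plain context, its format is up to you. One plausible shape (entirely illustrative; the keys and metrics here are not a required schema):

```json
{
  "recorded": "2025-01-15",
  "benchmarks": {
    "api/search": { "p95_ms": 120 },
    "db/bulk-insert": { "p95_ms": 850 }
  }
}
```

Whatever format you pick, keep it stable between runs so "regressions >10%" has a consistent reference point.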
```markdown
#!/usr/bin/env -S ai --sonnet --skip

Run tests with coverage:
1. Execute npm run test:coverage
2. Report overall coverage percentage
3. List files with <80% coverage
4. Suggest priority areas for new tests
```
```markdown
#!/usr/bin/env -S ai --sonnet --skip --live

Run the test suite. Print a status update after each test file completes.
Finally, report pass/fail counts and analyze failures.
```
Now you’ll see updates as tests run instead of waiting for everything to complete.
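Live output also composes with ordinary shell tools. For example, `tee` keeps a log file while still streaming to the terminal (the `echo` below stands in for `./run-tests.md`, and the log filename is arbitrary):

```shell
#!/bin/bash
# Stream live status to the terminal while keeping a timestamped log.
# In a real project, replace the echo with: ./run-tests.md
log="test-run-$(date +%Y%m%d-%H%M%S).log"
echo "✓ 45 tests passed" | tee "$log"
```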