# Build a swarm test harness for parallel signup flows
Recipe for provisioning N identities in parallel and running N concurrent frontend audits against an auth flow.
- how-to
- aidenid
- swarm
- parallel-testing
- e2e
Parallel test an auth flow with N independent identities to check for rate limiting, race conditions, and tenant isolation.
## Why
A single-user test proves the happy path works. An N-user concurrent test proves the system scales, isolates tenants, and enforces rate limits correctly. It also exposes race conditions that only surface under concurrent signups.
## Step 1 — Choose N
Start at N=10 to verify correctness, then scale up. For a production-like stress test, use N=50 to N=100.
## Step 2 — Bulk-provision identities

```bash
N=10
mkdir -p .sentinelayer/swarm-identities
for i in $(seq 1 $N); do
  sl ai provision-email \
    --tags "swarm,batch-$(date +%Y%m%d),user-$i" \
    --ttl 1800 \
    --execute \
    --json \
    > ".sentinelayer/swarm-identities/user-$i.json"
done
```
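Before spending audit time on the batch, it is worth checking that every identity file actually parses and contains an email. A minimal sketch, using sample files to stand in for real provision output (the `.identity.email` shape is the same one Step 3 reads back; the `@agent.example` addresses are illustrative):

```shell
mkdir -p .sentinelayer/swarm-identities
# Sample files stand in for real `sl ai provision-email --json` output.
echo '{"identity":{"email":"user-1@agent.example"}}' > .sentinelayer/swarm-identities/user-1.json
echo '{"identity":{"email":"user-2@agent.example"}}' > .sentinelayer/swarm-identities/user-2.json

# Fail fast if any file lacks an email (`jq -e` exits nonzero on null/false).
for f in .sentinelayer/swarm-identities/user-*.json; do
  jq -e '.identity.email' "$f" > /dev/null || { echo "bad identity file: $f"; exit 1; }
done
echo "all identity files OK"
```

Catching a malformed file here is much cheaper than discovering it mid-swarm as a confusing audit failure.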
Or in a single Senti session call:

```bash
sl session provision-emails sess_abc123 \
  --count $N \
  --tags "swarm,signup-race" \
  --ttl-hours 1 \
  --concurrency 10 \
  --json
```
(Using `sl session provision-emails` inside a Senti session is better because provision events appear on the shared stream, visible to all participating agents.)
## Step 3 — Run N concurrent audits
Using xargs for shell-level parallelism:
```bash
mkdir -p .sentinelayer/swarm-audits
ls .sentinelayer/swarm-identities/user-*.json \
  | xargs -n 1 -P 10 -I {} \
    bash -c 'email=$(jq -r ".identity.email" {}); \
      sl audit frontend --url https://app.example.com/signup --email "$email" --json \
        > ".sentinelayer/swarm-audits/$(basename {} .json).audit.json"'
```
`-P 10` runs 10 audits concurrently. Adjust based on your rate limits.
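One caveat with `xargs -P`: its overall exit status only tells you that *something* failed, not which job. A minimal post-run check, verifying that each identity produced a valid audit file (`demo-identities/` and `demo-audits/` stand in for the `.sentinelayer/` directories here, with one audit deliberately absent):

```shell
mkdir -p demo-identities demo-audits
echo '{}' > demo-identities/user-1.json
echo '{}' > demo-identities/user-2.json
echo '{"findings":[]}' > demo-audits/user-1.audit.json   # user-2 audit missing

missing=0
for f in demo-identities/user-*.json; do
  out="demo-audits/$(basename "$f" .json).audit.json"
  jq -e . "$out" > /dev/null 2>&1 || { echo "missing or invalid: $out"; missing=$((missing + 1)); }
done
echo "failed audits: $missing"
# → missing or invalid: demo-audits/user-2.audit.json
#   failed audits: 1
```

Re-run only the identities whose audit files are missing or invalid rather than restarting the whole swarm.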
## Step 4 — Reduce the results

```bash
# Count findings across all swarm audits
jq -s '[.[] | .findings[]] | group_by(.severity) | map({severity: .[0].severity, count: length})' \
  .sentinelayer/swarm-audits/*.audit.json

# Find any audit that caught a race-condition finding
jq -s '[.[] | select(.findings[]?.tags[]? == "race-condition")]' \
  .sentinelayer/swarm-audits/*.audit.json
```
## Step 5 — Cleanup

```bash
# Revoke all swarm identities at once
sl ai identity list --tag swarm --json \
  | jq -r '.identities[].id' \
  | xargs -n 1 sl ai identity revoke
```
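Because the revoke pipe is destructive, it can help to dry-run the `jq` stage first and eyeball which IDs the tag filter matches. A sketch with a mocked listing standing in for `sl ai identity list --tag swarm --json` (the `id_swarm_*` values are illustrative; the `.identities[].id` shape is the one the cleanup pipe assumes):

```shell
listing='{"identities":[{"id":"id_swarm_01"},{"id":"id_swarm_02"}]}'
echo "$listing" | jq -r '.identities[].id'
# → id_swarm_01
#   id_swarm_02
```

If an ID you still need shows up here, tighten the tag filter before piping into `revoke`.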
## Common findings this uncovers
- **Missing idempotency key.** 10 concurrent signups with the same email produce 10 user rows instead of 1.
- **Rate-limit bypass.** Per-user limits apply but global limit is missing.
- **Tenant leakage.** Two concurrent signups see each other's data in the welcome email.
- **OTP race.** Old OTP remains valid during the brief window before the new one arrives.
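The first finding, the missing idempotency key, can be simulated locally with plain shell and no service at all: ten concurrent "signups" each run a non-atomic check-then-insert against a flat file standing in for a users table (the `sleep` just widens the race window so the demo is reliable):

```shell
db=./signup_rows.txt
: > "$db"                     # empty "users table"
for i in $(seq 1 10); do
  (
    # Non-atomic check-then-insert, as in a naive signup handler:
    # every writer passes the existence check before any writer inserts.
    grep -q 'dup@example.test' "$db" || {
      sleep 0.2               # widen the check-then-insert window
      echo 'dup@example.test' >> "$db"
    }
  ) &
done
wait
echo "rows for one email: $(grep -c 'dup@example.test' "$db")"
```

On virtually any scheduler this prints a count well above 1, which is exactly the duplicate-row symptom the swarm audit surfaces; the fix in a real system is a unique constraint or idempotency key, not a faster check.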
## Best practices
- Use **distinct tags per swarm run** so cleanup is safe even if multiple runs happen in a day.
- **Keep TTLs short.** 1800s (30 minutes) is plenty for most signup-flow tests.
- **Respect your own rate limits.** Do not swarm-test your production service without a staging environment.
## Related
- [AIdenID agent workflows](/docs/aidenid/agent-workflows)
- [Senti overview](/docs/senti/overview)
## Structured Answers
### What is a realistic N for swarm testing?
Start at 10 for correctness. Scale to 50-100 for stress tests. Above 100, use a dedicated load-test harness like k6; AIdenID is for identity correctness, not throughput testing.
### Can I run swarm tests against production?
Only with explicit authorization. AIdenID creates real accounts in your system. Use a staging environment by default; treat production swarm tests as a last resort with cleanup guaranteed.