When we set out to build Mailchk, we had a non-negotiable requirement: email validation had to be fast enough that users wouldn't notice it happening. If your signup form freezes for 500 milliseconds while waiting for a validation response, that's a noticeable delay that hurts conversion rates. We targeted sub-50ms response times — fast enough to feel instant — and we hit that target. Here's how.
## Why Speed Matters for Email Validation
Email validation typically sits in the critical path of a user action. Someone fills out a signup form, hits submit, and your backend validates their email before creating the account. Every millisecond of validation latency adds directly to the user's perceived wait time.
Research consistently shows that each additional 100ms of latency reduces conversion rates by 1-2%. For a signup form processing 10,000 submissions per month, shaving 200ms off validation time works out to a 2-4% improvement — roughly 200-400 additional conversions monthly.
Beyond UX, speed matters for bulk operations. If you're validating a list of 100,000 emails sequentially and each check takes 500ms, you're looking at nearly 14 hours of processing. At 50ms per check, the same list finishes in under 90 minutes.
## The Architecture
### Edge Computing with Cloudflare Workers
The single biggest factor in our response times is running validation logic at the edge. Mailchk's API runs on Cloudflare Workers, which means our code executes in over 300 data centers worldwide. When a request arrives from Tokyo, it's processed in Tokyo — not routed to a server in Virginia.
Traditional API architectures route all requests to a centralized server or cloud region. A user in Sydney making an API call to a US-East server adds 150-200ms of network latency before any processing even begins. Edge computing eliminates this entirely.
Our Worker handles the full validation pipeline: syntax parsing, DNS lookups, disposable domain checks, risk scoring, and response formatting. No request ever leaves the edge data center unless it needs to query an external DNS server.
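In outline, a Worker-style handler for such a pipeline might look like the sketch below. The stage functions, result shape, and scoring weights are all illustrative stand-ins, not Mailchk's actual API:

```typescript
type ValidationResult = {
  email: string;
  syntaxValid: boolean;
  disposable: boolean;
  mxStatus: "valid" | "invalid" | "skipped";
  riskScore: number; // 0 (safe) to 100 (risky) — illustrative scale
};

// Toy stand-ins for the real pipeline stages.
const DISPOSABLE_DOMAINS = new Set(["tempmail.dev", "mailinator.com"]);

function checkSyntax(email: string): boolean {
  const at = email.indexOf("@");
  return at > 0 && at < email.length - 1 && !email.includes(" ");
}

async function validate(email: string): Promise<ValidationResult> {
  const syntaxValid = checkSyntax(email);
  const domain = syntaxValid ? email.slice(email.indexOf("@") + 1).toLowerCase() : "";
  const disposable = syntaxValid && DISPOSABLE_DOMAINS.has(domain);
  // A real pipeline would resolve MX records here; skipped in this sketch.
  const mxStatus: ValidationResult["mxStatus"] = "skipped";
  // Combine the signals we did compute into a single score.
  const riskScore = Math.min((syntaxValid ? 0 : 80) + (disposable ? 60 : 0), 100);
  return { email, syntaxValid, disposable, mxStatus, riskScore };
}
```

The point of the structure is that every stage is local computation; only the MX step (omitted here) can touch the network.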
### Pre-Computed Disposable Domain Database
The disposable domain lookup is the most latency-sensitive part of our pipeline. We maintain a database of over 75,000 known disposable domains, and every validation request checks against it. The naive approach — querying a centralized database — would add 10-50ms of latency depending on the user's location.
Instead, we distribute the entire disposable domain database to every edge location using Cloudflare's KV storage with aggressive caching. The lookup is essentially a local hash table check, completing in under 1ms. When our AI crawler discovers a new disposable domain, it propagates to all edge locations within 60 seconds.
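The hot-path pattern is simple: membership in a locally held set. As a rough sketch — a plain `Map` stands in for the Cloudflare KV namespace binding, and the data here is invented:

```typescript
// Stand-in for a KV namespace binding; in a real Worker this would be a
// binding like env.DISPOSABLE_DOMAINS, replicated to every edge location.
const kvStub = new Map<string, string>([
  ["tempmail.dev", "1"],
  ["mailinator.com", "1"],
]);

// Hot-path lookup: a local hash check, no network round trip.
function isDisposable(domain: string): boolean {
  return kvStub.has(domain.toLowerCase());
}
```

Because the check is an O(1) hash lookup against data already sitting in the edge location, it stays under a millisecond regardless of how large the domain list grows.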
### Optimized DNS Resolution
MX record lookups are inherently network-dependent — you're querying DNS servers that may be anywhere in the world. This is the one part of our pipeline we can't make purely local. But we've optimized it significantly:
- DNS result caching: MX records for popular domains (gmail.com, outlook.com, yahoo.com, etc.) are cached at the edge. Since these records change rarely, we cache them aggressively with a TTL-aware refresh policy. This eliminates DNS lookups for 70-80% of all requests.
- Parallel DNS queries: When we do need to query DNS, we send requests to multiple resolvers simultaneously and use the first response. This eliminates slow-resolver tail latency.
- Pre-warmed caches: We proactively resolve and cache MX records for the top 10,000 email domains globally, ensuring that the vast majority of lookups never hit the network.
### Zero-Allocation String Parsing
Email syntax validation involves string parsing, and in a high-throughput system, memory allocations during parsing can add up. Our syntax validator is written to minimize allocations — we use index-based parsing rather than splitting strings into substrings, and we validate character-by-character with a single pass through the input.
This might sound like premature optimization, but at 10,000+ requests per second per edge location, the difference between allocating 5 objects per request and 0 is measurable. Our syntax validation completes in under 0.1ms consistently.
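A minimal sketch of the index-based, single-pass style — deliberately simplified, and nowhere near RFC-complete:

```typescript
// Single pass over the input using character codes; no substrings or
// intermediate arrays are allocated. Rules here are simplified, not the
// full RFC 5321 grammar.
function isSyntaxValid(email: string): boolean {
  const len = email.length;
  if (len < 3 || len > 254) return false;
  let atIndex = -1;
  let lastDot = -1;
  for (let i = 0; i < len; i++) {
    const c = email.charCodeAt(i);
    if (c === 0x40 /* @ */) {
      if (atIndex !== -1) return false; // only one @ allowed
      atIndex = i;
    } else if (c === 0x2e /* . */) {
      lastDot = i;
    } else if (c <= 0x20) {
      return false; // reject control characters and spaces
    }
  }
  return (
    atIndex > 0 &&           // non-empty local part
    atIndex < len - 1 &&     // non-empty domain
    lastDot > atIndex + 1 && // domain contains a dot past its first char
    lastDot < len - 1        // dot is not the final character
  );
}
```

Compare this with the allocating alternative — `email.split("@")` plus a regex per label — which creates several short-lived objects per call for the garbage collector to chase.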
## The Validation Pipeline in Practice
Here's the actual timing breakdown for a typical validation request:
| Step | Time | Notes |
|---|---|---|
| TLS termination | 0ms | Already at the edge |
| Request parsing | <1ms | JSON parse + auth check |
| Syntax validation | <1ms | Single-pass parser |
| Disposable domain check | <1ms | Local KV lookup |
| DNS/MX verification | 5-30ms | Cached for popular domains |
| Risk scoring | <1ms | Computed from signals |
| Response formatting | <1ms | JSON serialization |
For domains with cached MX records (which is 70-80% of requests), total response time is under 10ms. For uncached domains, DNS resolution adds 5-30ms depending on the authoritative nameserver's location, bringing the total to 15-40ms. Our P95 response time sits comfortably under 50ms.
## Scaling Challenges We Solved
### Cache Invalidation
The classic hard problem in computer science. Our disposable domain database changes constantly — our AI crawler adds new domains throughout the day. We use a tiered caching strategy:
- L1 (in-worker memory): Refreshed every 5 minutes. Fastest access but potentially slightly stale.
- L2 (Cloudflare KV): Updated within 60 seconds of a change. Consistent across all edge locations.
- L3 (origin database): Source of truth. Only queried when L2 misses.
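The tier fall-through might look like the following sketch, with a `Map` standing in for KV and a stub function standing in for the origin database (tier names, key scheme, and data are invented):

```typescript
type Tier = "l1" | "l2" | "l3";

const L1_TTL_MS = 5 * 60 * 1000; // in-worker memory, refreshed every 5 minutes
let l1 = { data: new Set<string>(), loadedAt: 0 };

// Stand-ins for Cloudflare KV and the origin database.
const l2Kv = new Map<string, string>([["disposable-set", "tempmail.dev,mailinator.com"]]);
async function queryOrigin(): Promise<string[]> {
  return ["tempmail.dev", "mailinator.com"]; // source of truth
}

async function isDisposableTiered(
  domain: string,
  now = Date.now()
): Promise<{ hit: boolean; tier: Tier }> {
  // L1: local memory — fastest, but may be up to 5 minutes stale.
  if (now - l1.loadedAt < L1_TTL_MS) {
    return { hit: l1.data.has(domain), tier: "l1" };
  }
  // L2: KV — consistent across edge locations within ~60s of a change.
  const raw = l2Kv.get("disposable-set");
  if (raw !== undefined) {
    l1 = { data: new Set(raw.split(",")), loadedAt: now };
    return { hit: l1.data.has(domain), tier: "l2" };
  }
  // L3: origin database — only queried on an L2 miss.
  l1 = { data: new Set(await queryOrigin()), loadedAt: now };
  return { hit: l1.data.has(domain), tier: "l3" };
}
```

Each tier refill repopulates the tier above it, so steady-state traffic almost never leaves L1.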
For DNS caches, we respect the original record's TTL but set a maximum of 1 hour. This balances freshness with performance — if a domain migrates email providers, we'll pick up the change within an hour at most.
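The TTL-clamping rule is a one-liner wrapped in a cache; here is one way it might look (the cache shape is illustrative):

```typescript
const MAX_TTL_SECONDS = 3600; // never cache an MX record longer than an hour

type CachedMx = { hosts: string[]; expiresAt: number };
const mxCache = new Map<string, CachedMx>();

// Respect the record's own TTL, but cap it, so a domain that migrates
// email providers is picked up within the hour at most.
function cacheMx(domain: string, hosts: string[], recordTtlSeconds: number, now = Date.now()): void {
  const ttl = Math.min(recordTtlSeconds, MAX_TTL_SECONDS);
  mxCache.set(domain, { hosts, expiresAt: now + ttl * 1000 });
}

function getCachedMx(domain: string, now = Date.now()): string[] | undefined {
  const entry = mxCache.get(domain);
  if (!entry || entry.expiresAt <= now) return undefined;
  return entry.hosts;
}
```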
### Cold Start Performance
Cloudflare Workers don't have traditional cold starts like AWS Lambda, but there is a one-time initialization cost when a Worker instance spins up at an edge location. We minimize this by keeping our Worker's initialization logic lean — no large dependency trees, no complex setup sequences. The Worker is ready to serve requests within 2-3ms of initialization.
### Handling DNS Timeouts Gracefully
DNS resolution is the one external dependency in our pipeline, and DNS servers sometimes respond slowly or not at all. We implement aggressive timeouts: if no DNS response arrives within 3 seconds, we return a result with the DNS check marked as inconclusive rather than timing out the entire request. The response still includes syntax validation, disposable detection, and partial risk scoring.
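Degrading to an inconclusive result instead of failing the request can be expressed as a race between the lookup and a timer. This sketch uses a parameterized timeout (the post describes a 3-second budget; the value is shortened in the usage below purely for illustration):

```typescript
type DnsOutcome =
  | { status: "resolved"; mxHosts: string[] }
  | { status: "inconclusive" };

// Race the lookup against a timeout; on timeout, return an inconclusive
// DNS result instead of failing the whole validation request.
function withDnsTimeout(lookup: Promise<string[]>, timeoutMs: number): Promise<DnsOutcome> {
  const timeout = new Promise<DnsOutcome>((resolve) =>
    setTimeout(() => resolve({ status: "inconclusive" }), timeoutMs)
  );
  const resolved = lookup.then(
    (mxHosts): DnsOutcome => ({ status: "resolved", mxHosts })
  );
  return Promise.race([resolved, timeout]);
}
```

The caller then merges whichever outcome arrives into the response, alongside the syntax, disposable, and risk signals that were computed locally.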
## Monitoring and Observability
You can't optimize what you can't measure. We track P50, P95, and P99 response times broken down by:
- Edge location (to catch regional performance degradation)
- Pipeline step (to identify which layer is slow)
- Domain type (to monitor cache hit rates)
- Request type (single validation vs. bulk)
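For reference, a nearest-rank percentile over a window of latency samples is all that's needed to produce these numbers (how Mailchk aggregates them internally isn't described, so this is just the standard computation):

```typescript
// Nearest-rank percentile over recorded latencies (ms).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```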
When P95 at any edge location exceeds 50ms, we get alerted. This has caught issues like DNS cache expiration storms (when many popular domain caches expire simultaneously) and KV propagation delays.
## What We'd Do Differently
If we were starting from scratch, we'd invest in DNS pre-warming earlier. Our initial approach was purely reactive caching — cache DNS results as they're requested. We later added proactive resolution for popular domains, which dramatically improved cold-cache performance. Building this from day one would have saved us time debugging latency spikes from cache misses.
We'd also standardize on a single serialization format earlier. We initially supported both JSON and form-encoded responses, which added complexity to the response formatting step. JSON-only would have been simpler and just as effective.
## The Result
Today, Mailchk processes email validations with a median response time of 12ms and a P95 of 38ms. For cached domains — which represent the vast majority of real-world traffic — responses consistently arrive in under 15ms. Our users get instant validation without their end users noticing any delay.
Speed isn't a feature we bolt on — it's a consequence of architectural decisions made from day one. Edge computing, pre-distributed databases, aggressive caching, and lean code combine to deliver validation responses faster than most APIs can complete a TLS handshake.