
Revolutionizing WAF: How Always-On Detections End the Log vs Block Dilemma

March 5, 2026 / OpDeck Team
WAF, Security, Web Applications, Threat Detection, Cybersecurity

The WAF Dilemma That's Been Haunting Security Teams for Decades

Every security engineer who has ever managed a Web Application Firewall knows the feeling: you're staring at a dashboard full of alerts, trying to decide whether to flip a rule from "log" to "block," and you're paralyzed. Block too aggressively, and you break legitimate user flows. Log everything without blocking, and you're essentially keeping a meticulous diary of your own exploitation.

This is the "log versus block" trade-off, and it has defined — and limited — WAF strategy for the better part of two decades. Cloudflare's recent announcement of Attack Signature Detection and Full-Transaction Detection represents a meaningful shift in how this problem can be approached. But the deeper story isn't just about what Cloudflare is doing — it's about why this trade-off exists, what it costs organizations that get it wrong, and how you can rethink your entire detection posture regardless of which WAF vendor you're using.


Why Traditional WAFs Force You to Choose

To understand why the log/block decision is so painful, you need to understand how traditional WAFs make their decisions. Most WAF rules operate on a single dimension: the request. A request comes in, the WAF inspects its headers, URI, body, and parameters, and if it matches a known-bad pattern, it either logs or blocks.

The problem is that this approach is fundamentally incomplete. A request that looks malicious might be completely harmless — maybe your penetration tester is running scans, maybe a legitimate user has a name that triggers an XSS rule, or maybe a search query happens to contain SQL-like syntax. Conversely, a request that looks perfectly normal might be the first step in a multi-stage attack that only becomes apparent after several exchanges.

This is why WAF tuning is such a time-consuming, never-ending job. Security teams spend enormous energy writing exceptions, adjusting sensitivity thresholds, and managing false positive rates. The moment you tighten rules to catch more attacks, you start breaking things. The moment you loosen them to stop breaking things, you let more attacks through.

The Hidden Cost of "Log Only" Mode

Many organizations default to running large portions of their WAF ruleset in log-only mode because they can't afford the operational disruption of blocking legitimate traffic. This feels like a reasonable compromise, but it creates a dangerous illusion of security.

Log-only mode means you have visibility without enforcement. You can see that someone is probing your application with SQL injection payloads, but you're not stopping them. Worse, the volume of log data generated by a WAF in log-only mode is often so overwhelming that security teams can't meaningfully analyze it in real time. The logs become a forensic resource — useful after a breach, but not before.

The math here is brutal: if an attacker needs to send 500 requests to find a working SQL injection vector, and you're logging all 500 but not blocking any of them, you've given them a free pass to succeed. You'll have excellent records of exactly how they compromised you, but that's cold comfort.


What Full-Transaction Detection Actually Changes

Cloudflare's new approach — correlating request payloads with server responses — addresses a fundamental gap in traditional WAF architecture. Instead of only looking at what comes in, you're also looking at what goes out.

This matters enormously because server responses are the ground truth of whether an attack succeeded. A SQL injection probe might look scary in the request, but if the server returns a 403 or a generic error page, nothing bad happened. If the server returns a 200 with a database dump in the body, something very bad happened. The response tells you what the request couldn't.

Correlating Requests and Responses: A New Detection Primitive

Think about what this enables. Instead of asking "does this request look like an attack?", you can ask "did this request cause an attack to succeed?" This is a categorically different question, and it produces categorically different results.

Consider a classic scenario: an attacker is fuzzing an API endpoint with variations of a path traversal payload. Traditional WAFs might catch some of these, miss others, and generate dozens of false positives along the way. With full-transaction detection, you can observe that one specific payload variant caused the server to return file contents it shouldn't have — and that observation is essentially noise-free. There's no ambiguity about whether something bad happened.

This is why Cloudflare describes these as "high-fidelity" detections. The fidelity comes from the response correlation. You're not inferring that an attack might have succeeded based on pattern matching; you're observing that it did succeed based on what the server returned.
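As a concrete sketch, here is what grading a single transaction by request pattern plus response might look like. The patterns, status codes, and size threshold below are illustrative assumptions for demonstration, not any vendor's actual ruleset:

```python
# Sketch of response-correlated detection: grade a transaction by combining
# a request-side pattern match with what the server actually returned.
import re

# Illustrative request-side patterns (assumed, not a production ruleset).
INJECTION_PATTERNS = [
    re.compile(r"union\s+select", re.IGNORECASE),
    re.compile(r"\.\./\.\./"),               # path traversal
    re.compile(r"<script\b", re.IGNORECASE),
]

def classify_transaction(request_line: str, status: int, body_bytes: int) -> str:
    """Return 'confirmed', 'attempted', or 'benign' for one request/response pair."""
    looks_malicious = any(p.search(request_line) for p in INJECTION_PATTERNS)
    if not looks_malicious:
        return "benign"
    # The server rejected it: an attempt, but the control worked.
    if status in (400, 403, 404):
        return "attempted"
    # A 2xx with an unusually large body after a malicious-looking request is
    # the high-fidelity signal: the attack likely did something.
    if 200 <= status < 300 and body_bytes > 50_000:
        return "confirmed"
    return "attempted"
```

Note how the same request payload yields three different outcomes depending on the response, which is exactly the ambiguity that request-only inspection cannot resolve.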


Rethinking Your Detection Posture: A Practical Framework

Whether you're using Cloudflare, AWS WAF, ModSecurity, or any other WAF solution, the principles behind always-on detection should inform how you architect your security monitoring. Here's a practical framework for applying these ideas.

Layer 1: Request-Level Detection (Your Existing WAF)

Your existing WAF rules still matter. They provide the first line of defense and can block a large percentage of automated, low-sophistication attacks before they ever reach your application servers. The key is to be realistic about what this layer can and cannot do.

Request-level rules are good at:

  • Blocking known-bad signatures (CVE-specific payloads, known malware C2 patterns)
  • Rate limiting and bot mitigation
  • Enforcing input validation at the perimeter
  • Catching unsophisticated, automated attacks

Request-level rules are poor at:

  • Detecting zero-day exploits with novel payloads
  • Identifying multi-stage attacks that span multiple requests
  • Distinguishing between malicious intent and legitimate but unusual input
  • Confirming whether an attack actually succeeded
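The limits above fall straight out of how a request-level rule works. A minimal sketch, with illustrative (assumed) signatures, makes both the strength and the failure modes visible:

```python
# A minimal request-level rule of the kind Layer 1 handles well: a literal
# signature match against the raw request, with no knowledge of the outcome.
import re

# Illustrative known-bad signatures (assumed for the example).
KNOWN_BAD = [
    re.compile(r"/etc/passwd"),
    re.compile(r"\bselect\b.+\bfrom\b", re.IGNORECASE),
]

def request_verdict(raw_request: str) -> str:
    """'block' on a signature hit, else 'allow' -- with no idea whether an
    allowed request later succeeds as an attack."""
    if any(p.search(raw_request) for p in KNOWN_BAD):
        return "block"
    return "allow"
```

The same matcher blocks `GET /download?file=../../etc/passwd` (good), blocks a harmless search for "please select an option from the menu" (false positive), and allows a URL-encoded variant like `%75nion%20%73elect` (evasion). All three outcomes come from the same root cause: the rule sees only the request.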

Layer 2: Response-Level Monitoring

This is the layer that most organizations are missing. Instrumenting your infrastructure to monitor response characteristics — status codes, response sizes, content types, response times — gives you a detection surface that's much harder for attackers to evade.

Here's a simple example of what response-level monitoring might look like in practice. If you're running Nginx, you can configure extended logging to capture response bodies or at minimum response metadata:

log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

access_log /var/log/nginx/access.log detailed;

Then you can build detection rules around anomalies: a request that matches a known injection pattern and returns an unusually large response body is a much stronger signal than the request alone.
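A small parser over the `detailed` log format above is enough to prototype this. The injection patterns and the 100 KB size threshold are assumptions you would tune to your own traffic:

```python
# Parse nginx's 'detailed' access log format and flag lines where an
# injection-looking request coincided with an unusually large response.
import re

# Field order follows the log_format directive shown above.
LINE_RE = re.compile(
    r'(?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+) '
    r'"[^"]*" "[^"]*" (?P<req_time>[\d.]+) (?P<upstream_time>[\d.\-]+)'
)

# Illustrative request-side patterns (assumed, not a production ruleset).
INJECTION = re.compile(r"(union\s+select|\.\./|<script)", re.IGNORECASE)

def flag_line(line: str, size_threshold: int = 100_000):
    """Return ('strong', addr), ('weak', addr), or None for one log line."""
    m = LINE_RE.match(line)
    if not m:
        return None
    suspicious = bool(INJECTION.search(m["request"]))
    large = int(m["bytes"]) > size_threshold
    if suspicious and large:
        return ("strong", m["addr"])   # injection pattern AND big response
    if suspicious:
        return ("weak", m["addr"])     # pattern alone: much weaker signal
    return None
```

The "strong" verdict is the request-plus-response correlation in miniature: either signal alone is noisy, but together they are hard to explain benignly.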

Layer 3: Behavioral Correlation

The most sophisticated detection layer correlates behavior across multiple requests and over time. This is where you start catching things like credential stuffing (many failed logins from distributed IPs), account takeover sequences (login success followed by immediate password change and data export), and slow-burn reconnaissance (low-and-slow scanning that stays under per-request rate limits).

Building this layer requires aggregating data from your WAF, your application logs, your authentication system, and ideally your CDN. The data model looks something like this:

# Pseudocode for behavioral correlation. The get_/count_/calculate_ helpers
# are integration points against your own log store, and the thresholds are
# starting points to tune against your application's baseline.
def evaluate_session(session_id, time_window):
    events = get_events_for_session(session_id, time_window)

    signals = {
        'injection_probes': count_injection_patterns(events.requests),
        'error_rate': calculate_error_rate(events.responses),
        'data_volume': sum_response_bytes(events.responses),
        'endpoint_diversity': count_unique_endpoints(events.requests),
        'time_anomaly': check_access_time_anomaly(events.timestamps),
    }

    # High injection probes + high data volume = likely successful exfiltration
    if signals['injection_probes'] > 10 and signals['data_volume'] > 1_000_000:
        trigger_alert('potential_data_exfiltration', session_id, signals)

    # High endpoint diversity + high error rate = likely reconnaissance
    if signals['endpoint_diversity'] > 50 and signals['error_rate'] > 0.7:
        trigger_alert('reconnaissance_pattern', session_id, signals)
The Operational Reality: Making "Always-On" Actually Work

The phrase "always-on detection" sounds appealing, but it raises legitimate operational concerns. If you're generating high-fidelity detections continuously, someone has to act on them. Here's how to make this operationally sustainable.

Prioritization by Confidence, Not Just Severity

Traditional security alerting prioritizes by severity (critical, high, medium, low). Always-on detection enables a better model: prioritize by the combination of severity and confidence. A high-confidence detection of a medium-severity issue should often be prioritized over a low-confidence detection of a critical issue, because the high-confidence alert is more likely to require immediate action.

Response-correlated detections naturally have higher confidence, which means your alert queue becomes more actionable. Instead of triaging 500 WAF alerts to find the 3 that matter, you might triage 15 correlated alerts and find that 10 of them require immediate response.
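One way to operationalize this is a simple severity-times-confidence score. The weights and confidence figures below are illustrative assumptions; the point is the ordering they produce:

```python
# Sketch of confidence-weighted prioritization: score = severity weight x
# detection confidence, so a confident medium-severity alert can outrank a
# speculative critical one. Weights are illustrative starting points.
SEVERITY_WEIGHT = {"critical": 10, "high": 7, "medium": 4, "low": 1}

def priority(alert: dict) -> float:
    """alert['confidence'] is 0.0-1.0: high (e.g. ~0.9) for response-correlated
    detections, low (e.g. ~0.15) for bare request-signature matches."""
    return SEVERITY_WEIGHT[alert["severity"]] * alert["confidence"]

queue = [
    {"id": "sig-match", "severity": "critical", "confidence": 0.15},      # score 1.5
    {"id": "confirmed-sqli", "severity": "medium", "confidence": 0.95},   # score 3.8
]
queue.sort(key=priority, reverse=True)
```

With these numbers, the confirmed medium-severity SQL injection (3.8) is triaged before the unconfirmed critical signature match (1.5), which matches how most incident responders actually want to spend their first minutes.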

Automated Response Playbooks

Always-on detection only eliminates the log/block trade-off if you have automated response capabilities that can act on detections faster than a human analyst can. This means building playbooks for common scenarios:

  • Successful injection detected: Automatically quarantine the session, block the source IP, and trigger a data access audit
  • Credential stuffing in progress: Automatically require step-up authentication for affected accounts
  • API abuse pattern detected: Automatically rate-limit the offending API key and notify the key owner

The goal isn't to replace human judgment — it's to ensure that the first response happens in seconds, not minutes, while human analysts review and refine.
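Structurally, a playbook layer can be as simple as a dispatch table mapping detection types to first-response actions. The handlers below are stubs standing in for your SOAR or API integrations, and the scenario names are assumed for the example:

```python
# Minimal playbook dispatcher for the three scenarios above. Each handler
# returns the list of actions it would take; in production these would call
# out to your WAF, identity provider, and API gateway.
def quarantine_and_audit(ctx):
    return ["quarantine_session", "block_source_ip", "audit_data_access"]

def step_up_auth(ctx):
    return ["require_mfa_for_affected_accounts"]

def throttle_api_key(ctx):
    return ["rate_limit_api_key", "notify_key_owner"]

PLAYBOOKS = {
    "successful_injection": quarantine_and_audit,
    "credential_stuffing": step_up_auth,
    "api_abuse": throttle_api_key,
}

def respond(detection_type, context):
    """First automated response happens in seconds; anything unrecognized
    falls through to a human analyst."""
    handler = PLAYBOOKS.get(detection_type)
    return handler(context) if handler else ["escalate_to_analyst"]
```

The fall-through default is the important design choice: automation handles the known scenarios fast, and everything else lands in front of a person rather than being silently dropped.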

Keeping Your Infrastructure Visible

One thing that often gets overlooked in WAF discussions is the importance of having clear visibility into your underlying infrastructure before you start tuning detection rules. If you don't know exactly what's running on your servers, what SSL/TLS configurations you have in place, or what security headers your application is returning, you're tuning blind.

Before implementing any of the detection layers described above, it's worth doing a thorough audit of your current security posture. The SSL Certificate Checker can quickly surface certificate issues that might be masking security problems — an expired or misconfigured certificate is sometimes a sign that a server has been forgotten and isn't receiving security updates. Similarly, the Vulnerability Scanner can identify missing security headers, XSS vulnerabilities, and other weaknesses that your WAF might be compensating for rather than actually fixing.


The False Positive Problem Isn't Going Away — But It's Changing

One of the promises of response-correlated detection is that it dramatically reduces false positives. This is largely true, but it's worth being precise about what changes and what doesn't.

False positives that go away: Alerts triggered by malicious-looking requests that the server correctly rejected. If someone sends an XSS payload and your server returns a 400 error, that's not an incident — it's your application working correctly. Response correlation means you don't alert on this.

False positives that remain: Alerts triggered by legitimate behavior that looks like an attack in aggregate. An internal security scanner might trigger behavioral correlation rules. A legitimate data export by an authorized user might look like exfiltration. These require careful tuning of your behavioral rules, not your request-level rules.

New false positives that emerge: Response-level monitoring can generate its own false positives. A large response body might indicate data exfiltration, or it might indicate a legitimate bulk export. Accurate classification requires understanding your application's normal behavior, which takes time to establish.

The net result is that always-on detection doesn't eliminate the need for tuning — it changes where you spend your tuning effort. You move from endlessly tweaking request-level rules to building accurate models of normal application behavior.
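"Accurate models of normal behavior" can start very simply: judge each response against that endpoint's own history rather than a global threshold. A minimal sketch using only the standard library, with an assumed z-score cutoff and warm-up period:

```python
# Per-endpoint response-size baseline: a response is only "large" relative to
# what that endpoint normally returns. Thresholds here are illustrative.
import statistics
from collections import defaultdict, deque

class ResponseBaseline:
    def __init__(self, window=500, z_threshold=4.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.z_threshold = z_threshold

    def observe(self, endpoint: str, body_bytes: int) -> bool:
        """Record the observation; return True if it was anomalous."""
        hist = self.history[endpoint]
        anomalous = False
        if len(hist) >= 30:  # require some history before judging anything
            mean = statistics.fmean(hist)
            stdev = statistics.pstdev(hist) or 1.0
            anomalous = (body_bytes - mean) / stdev > self.z_threshold
        hist.append(body_bytes)
        return anomalous
```

A bulk-export endpoint that routinely returns megabytes never fires, while the same response size from a login endpoint does, which is precisely the false-positive class this section describes.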


Applying This to Your SEO and Performance Stack Too

It's worth noting that the principle of always-on, response-correlated monitoring applies beyond pure security use cases. The same architecture that lets you detect successful SQL injection by observing response anomalies can also help you detect performance degradations, SEO issues, and API reliability problems.

For instance, if you're monitoring your application's response characteristics continuously, you'll catch things like: pages that suddenly start returning 500 errors (bad deployment), pages whose response size drops dramatically (content being stripped), or API endpoints whose response times spike (database query regression). These aren't security incidents, but they're the kind of issues that hurt your users and your business.
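The same transaction stream can drive these operational checks directly. A sketch of a deploy-regression monitor comparing a recent window against a baseline window; the 5% error delta and 2x latency ratio are assumed thresholds:

```python
# Compare a recent window of (status, response_time_seconds) pairs against a
# baseline window to catch bad deployments and latency regressions.
import statistics

def window_health(transactions):
    """transactions: list of (status_code, response_time_seconds)."""
    errors = sum(1 for status, _ in transactions if status >= 500)
    return {
        "error_rate": errors / len(transactions),
        "median_latency": statistics.median(t for _, t in transactions),
    }

def regressed(baseline, recent, err_delta=0.05, latency_ratio=2.0):
    """True if the recent window's 5xx rate or median latency has degraded
    beyond the configured thresholds."""
    b, r = window_health(baseline), window_health(recent)
    return (r["error_rate"] - b["error_rate"] > err_delta
            or r["median_latency"] > latency_ratio * b["median_latency"])
```

This is the non-security payoff of response-level instrumentation: the pipeline you built to confirm attacks also notices when a deployment quietly breaks an endpoint.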

Tools like the Website Performance Analyzer and the SEO Audit can give you a baseline snapshot of how your application is performing and what it's returning — useful both for establishing normal behavior baselines and for investigating anomalies when your monitoring systems fire. If you're seeing unexpected response size changes, running a fresh API Response Time Tester check can help you quickly isolate whether the issue is at the network layer or the application layer.


The Bigger Picture: Toward Continuous Security Validation

What Cloudflare is building with Attack Signature Detection and Full-Transaction Detection is, at its core, a form of continuous security validation. Instead of periodically running penetration tests or compliance scans, you're continuously validating that your security controls are working as intended by observing real-world attack outcomes.

This is a meaningful shift in security philosophy. Traditional security assumes that if you have the right controls in place, you're secure. Continuous validation assumes that controls drift, attackers adapt, and the only way to know you're actually secure is to continuously verify it against real traffic.

The log/block trade-off was always a symptom of a deeper problem: WAFs were making binary decisions based on incomplete information. By expanding the information set — incorporating response data, behavioral patterns, and temporal correlations — you can make decisions with much higher confidence, and you can make them continuously rather than reactively.


Conclusion

The WAF log/block trade-off has cost organizations countless hours of security engineering time and left many applications partially exposed while teams tried to tune their way to safety. The shift toward always-on, response-correlated detection represents a genuine architectural improvement — not just an incremental feature update.

Whether you adopt Cloudflare's specific implementation or build your own multi-layer detection architecture, the core insight is the same: request-only inspection is fundamentally insufficient, and the response is where ground truth lives.

If you're ready to audit your current security posture and identify gaps before attackers do, OpDeck offers a suite of tools to help you get started — from the Vulnerability Scanner that checks your security headers and common attack surfaces, to the SSL Certificate Checker and Website Performance Analyzer that give you a complete picture of what your application is actually serving to the world. Start with visibility, then build toward always-on detection.