
Why are delivery error rates different between API and UI?

Michael Ko
Co-founder & CEO, Suped
Published 26 May 2025
Updated 17 Aug 2025
7 min read
As email senders, we rely heavily on data to understand our performance and identify issues. One frustrating scenario is encountering conflicting delivery error rates between different reporting interfaces, particularly between an API and a User Interface (UI). This discrepancy can make it challenging to pinpoint the true state of your email deliverability and take corrective action.
You might pull data directly from an API, only to see a vastly different picture when you log into a dashboard such as Google Postmaster Tools (GPT). This isn't just a minor annoyance; it can lead to misinformed decisions about your email strategy and delay you in addressing critical deliverability problems.
Understanding why these discrepancies occur is crucial for effective email management. It often comes down to how data is collected, processed, and presented in different environments. Let's explore the common factors that contribute to these differing delivery error rates.

Understanding data sources

The fundamental difference often lies in the nature of the data itself. API endpoints typically provide raw, granular data directly from the source. This means you're getting the most unfiltered view of events as they happen, or soon after.
In contrast, a UI dashboard, while user-friendly, presents data that has often undergone various levels of processing, aggregation, and filtering. This can include data being sampled or summarized to improve performance and readability for the end-user. For instance, Postmaster Tools sometimes redacts data or aggregates it, especially if your sending volume falls below certain thresholds. This means the numbers you see in the UI might be a simplified representation rather than the complete picture available via the API.
It's also common for different reporting platforms, such as an ESP's dashboard versus Microsoft SNDS or Salesforce Marketing Cloud (SFMC), to show discrepancies. This is often due to their differing methods of data collection and processing. While an API offers the raw data, the UI attempts to provide a more digestible view, which, by its nature, involves some level of interpretation or summarization.
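For instance, if you suspect a Postmaster Tools dashboard is summarizing or redacting data, you can pull the underlying numbers yourself. The snippet below is a minimal sketch using the google-api-python-client library; the domain, date, and token file are placeholders, and the resource and field names (trafficStats, deliveryErrors) follow the published Postmaster Tools API, so check them against the current documentation before relying on them.
Example: pulling raw trafficStats from the Postmaster Tools API (Python)
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Sketch: pull one day of raw traffic stats from the Gmail Postmaster Tools API.
# Assumes OAuth credentials with the postmaster.readonly scope are stored in token.json.
creds = Credentials.from_authorized_user_file("token.json")
service = build("gmailpostmastertools", "v1", credentials=creds)

# Traffic stats are addressed by domain and date (YYYYMMDD).
stats = service.domains().trafficStats().get(
    name="domains/example.com/trafficStats/20240529"
).execute()

# deliveryErrors holds the granular breakdown (error class, type and ratio)
# that a dashboard may aggregate, sample or redact for low-volume days.
for error in stats.get("deliveryErrors", []):
    print(error.get("errorClass"), error.get("errorType"), error.get("errorRatio"))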

Reasons for discrepancies

Several factors can directly cause these observed differences. One significant factor is data latency and processing delays. API data might be available in near real-time, while UI dashboards often have scheduled updates or longer processing times to compile and display the information, creating a time lag. This means that at any given moment, the API might reflect more current data than the UI.
Another common reason is differing methodologies for data aggregation and sampling. A UI might use sampling techniques or aggregate data over broader timeframes to keep the display responsive, which can smooth out peaks and valleys present in the raw API data. This also applies to how different systems might filter or redact data, especially for lower volume senders.
Finally, the scope of data can vary. An API might provide access to specific subsets of data (e.g., specific IP addresses or subdomains) that aren't easily isolated or even visible in the UI's default view. Additionally, inconsistencies in time zones or reporting periods between your internal systems pulling API data and the UI's default settings can lead to mismatched figures. This is why comparing apples to apples requires careful attention to all these details.
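To make the time zone point concrete, here is a small, hypothetical sketch that re-buckets raw API events into the day boundaries a dashboard uses before counting errors. The event structure and the Pacific-time dashboard are assumptions for illustration only.
Example: aligning API timestamps with a dashboard's reporting day (Python)
from collections import Counter
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical raw events pulled from an API, timestamped in UTC.
events = [
    {"timestamp": "2024-05-30T02:00:00+00:00", "status": "delivery_error"},
    {"timestamp": "2024-05-30T15:00:00+00:00", "status": "delivered"},
]

# Assume the dashboard buckets its days in US Pacific time.
DASHBOARD_TZ = ZoneInfo("America/Los_Angeles")

errors_per_day = Counter()
for event in events:
    local_ts = datetime.fromisoformat(event["timestamp"]).astimezone(DASHBOARD_TZ)
    if event["status"] == "delivery_error":
        errors_per_day[local_ts.date().isoformat()] += 1

# The first event is 30 May in UTC but 29 May in Pacific time, so a naive
# UTC comparison would attribute it to the wrong reporting day.
print(errors_per_day)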

API data

  1. Granular data: Provides raw, unfiltered data often updated more frequently.
  2. Technical focus: Requires technical knowledge to query and interpret.
  3. Comprehensive errors: Often includes detailed error codes and categories.

UI data

  1. Aggregated data: Summarized and simplified for easy consumption.
  2. User-friendly: Designed for quick visual understanding and less technical users.
  3. Simplified errors: May group error types or omit specific details.

Technical factors and interpretations

Sometimes the issue isn't with the raw data itself, but how it's processed or interpreted on either end. A common scenario is when the overall error rate shown in a UI is high, but the detailed breakdown for specific error types all show 0%. This suggests a potential calculation or display error within the UI or the underlying data aggregation, as noted in some discussions about Google Postmaster Tools specifically.
Another technical factor is how data is scaled for graphical representation. If you're building your own reporting dashboard from an API, a scaling discrepancy could lead to visual differences, even if the underlying numbers are conceptually similar. For example, slight variations in how percentage points are rounded or how small volumes of errors are represented can create a perceived difference.
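To illustrate how rounding alone can produce this effect, consider the toy numbers below: a handful of errors out of a large send rounds down to 0.0% per category, while the overall figure, rounded differently, remains visibly non-zero. The counts are made up for illustration.
Example: per-category rounding hiding small error volumes (Python)
# Illustrative numbers only: 52 total errors out of 100,000 sends.
total_emails = 100_000
errors_by_type = {"spam_complaint": 12, "domain_rejection": 40}

overall_rate = sum(errors_by_type.values()) / total_emails * 100
print(f"overall: {overall_rate:.2f}%")  # prints 0.05% at two decimal places

# A view that rounds each category to one decimal place shows 0.0% everywhere,
# even though the categories sum to a non-zero overall rate.
for error_type, count in errors_by_type.items():
    print(f"{error_type}: {count / total_emails * 100:.1f}%")  # both print 0.0%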
It's also worth noting that different systems might categorize delivery errors differently. What one system reports as a temporary error (and therefore doesn't count as a hard failure in its UI) might be included in a broader delivery error bucket accessible via the API. Understanding API error codes and their definitions across platforms is key.

Checking raw API responses

When you encounter discrepancies, always cross-reference the UI data with the raw API response. Ensure that your API calls are correctly configured and that you are indeed pulling the specific data points that the UI purports to represent. This helps rule out issues with your own data ingestion or processing. Remember that delivery errors can occur for various reasons, not all of which are immediately apparent.
Example API response showing 0% detailed errors (JSON)
{ "date": "2024-05-29", "total_emails": 100000, "delivery_errors_percentage": 0.0, "error_details": [ { "error_type": "spam_complaint", "percentage": 0.0 }, { "error_type": "domain_rejection", "percentage": 0.0 } ] }

Impact on deliverability and troubleshooting

Discrepancies in error rates, whether between API and UI or different reporting tools, can significantly impact your understanding of email deliverability. If you see a high overall error rate in a UI but no specific details, it can obscure the real problems, making it difficult to address root causes like content issues, poor list hygiene, or IP reputation problems. This is particularly problematic for emails going to spam.
To mitigate this, it's essential to not solely rely on a single data source. Cross-referencing data from multiple points, including your own sending logs, ESP reports, and Postmaster Tools, can help paint a more accurate picture. This multi-faceted approach allows you to identify if the discrepancy is a reporting anomaly or an actual deliverability issue requiring attention. Effective DMARC monitoring is one of the best ways to get an unbiased view of your email performance.
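As a simple illustration of that cross-referencing, the sketch below compares the same day's error rate from several sources and flags any source that diverges sharply from the lowest reading. The source names and numbers are made up; the point is the comparison, not the values.
Example: comparing the same day's error rate across sources (Python)
# Made-up daily delivery error rates (%) for the same day, from three sources.
sources = {
    "internal_logs": 0.4,
    "esp_dashboard": 0.5,
    "postmaster_tools": 2.4,
}

baseline = min(sources.values())
for name, rate in sources.items():
    # A source reporting more than double the lowest reading deserves a closer
    # look: it may be latency, aggregation, or a genuine deliverability problem.
    if baseline > 0 and rate / baseline > 2:
        print(f"{name}: {rate}% vs baseline {baseline}% - investigate")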
When troubleshooting, focus on the details the API provides. If the UI shows a high error rate but gives no specifics, investigate your API data for specific error types or categories that might be aggregated away in the UI. Understanding the nuances of email deliverability rates means delving into the raw data and not just what's presented on a dashboard.
Encountering different delivery error rates between API and UI is a common challenge in email deliverability. These discrepancies stem from differences in data granularity, processing, aggregation, and the specific metrics reported by each interface. By understanding these underlying reasons, we can more effectively interpret the data, troubleshoot issues, and ultimately improve our email deliverability.

Views from the trenches

Best practices
Always cross-reference API data with UI data to get a comprehensive view of your email performance.
Prioritize analyzing raw API responses for the most detailed and unfiltered insights into delivery errors.
Ensure that time zones and reporting periods are aligned across all your data sources to avoid misinterpretations.
Use DMARC reports as a neutral, third-party source to validate authentication and delivery outcomes.
Understand the specific definitions and categories of delivery errors used by each reporting system.
Common pitfalls
Relying solely on UI dashboard data without verifying it against the more granular API information.
Ignoring data redaction or aggregation by providers, which can mask underlying deliverability issues.
Misinterpreting a 0% detailed error breakdown when the overall error rate is shown as high.
Failing to account for latency and processing delays between API updates and UI refreshes.
Not considering how different systems classify temporary versus permanent delivery failures.
Expert tips
Implement automated processes to pull and analyze API data regularly for real-time insights.
Develop custom dashboards that combine data from various sources for a unified view of deliverability.
Pay close attention to trends over time rather than just daily snapshots, as this can reveal deeper issues.
If building your own API integration, thoroughly test for scaling and interpretation discrepancies.
Consult community forums or expert groups when facing persistent and unexplained data inconsistencies.
Expert view
Expert from Email Geeks says checking the API's raw response is crucial to rule out dashboard calculation errors, as discrepancies have been observed before.
2024-05-29 - Email Geeks
Expert view
Expert from Email Geeks says Google Postmaster Tools (GPT) error breakdowns are often inaccurate, and senders should rule out scaling discrepancies in their own API code interpretation before blaming GPT, noting that GPT graphs can be approximated but not perfectly replicated.
2024-05-30 - Email Geeks
