---
title: "Twilio Incident Update: SMS Delivery Receipt Delays from Long Codes to Small US Networks"
description: "Breaking down the Twilio SMS delivery receipt delays affecting long codes and small US carriers, what it means for developers, and how to build resilience."
date: "2026-02-24"
author: "ScribePilot Team"
category: "general"
keywords: ["Twilio SMS incident", "SMS delivery receipt delays", "Twilio long code DLR", "US carrier SMS issues", "CPaaS reliability"]
coverImage: ""
coverImageCredit: ""
---
# Twilio SMS Delivery Receipt Delays: What Happened, What It Means, and What to Watch
If you're running SMS workflows through Twilio and noticed your delivery receipts going quiet over the past few days, you're not imagining things. Twilio has been tracking an incident involving delayed SMS delivery receipts (DLRs) from a subset of long codes to multiple small carrier networks in the United States. As of this writing, the incident is in a monitoring phase, meaning Twilio believes the core issue is addressed but is watching for recurrence.
Here's what we know, what it means for your applications, and what you should do about it.
## What's Actually Happening
Let's be precise: this incident involves delivery receipt delays, not necessarily message delivery failures. Those are two very different things.
When you send an SMS through Twilio, the message travels through intermediaries before reaching the recipient's carrier. The carrier then sends back a delivery receipt confirming the message landed (or didn't). In this incident, those receipts from certain small US carriers are arriving late, or in some cases, not arriving within expected timeframes.
The scope matters here. This affects a subset of Twilio long codes, not all of them. And it's hitting multiple small carrier networks specifically, not the major nationwide carriers. Your high-volume short code campaigns or toll-free messaging may be entirely unaffected.
## Why Small Carriers Get Hit Harder
The US SMS ecosystem is more fragmented than most developers realize. Beyond the handful of carriers everyone knows, there are hundreds of smaller regional and rural networks. These carriers often rely on different aggregator relationships and interconnect agreements to handle message routing.
Long codes (standard 10-digit phone numbers) route through these networks differently than short codes or toll-free numbers, which typically have more direct carrier integrations. When something goes wrong in the aggregator chain between a CPaaS provider like Twilio and a small carrier, DLR reporting is often the first thing to break. The messages themselves may still deliver fine. The confirmation just gets stuck.
## How Twilio Has Handled Communication
Twilio has posted updates through their status page, which is consistent with their typical incident response process. For a monitoring-phase incident like this, the level of transparency is reasonable by CPaaS industry standards. Many providers wouldn't surface a DLR-specific issue at all, particularly one scoped to smaller networks.
That said, if you're relying on Twilio's status page as your only monitoring source, you're leaving yourself exposed. More on that below.
## The Real Business Impact
Delayed DLRs don't just mean a missing webhook callback. They can cascade:
- Verification flows that wait for delivery confirmation before timing out may trigger unnecessary retries, burning through your messaging budget
- Transactional alerts where your system marks messages as "pending" indefinitely, confusing internal dashboards and support teams
- Marketing campaigns where delivery analytics look artificially worse than actual performance, skewing your reporting
## What You Should Do Right Now
First, don't assume a missing delivery receipt means a failed message. Adjust your timeout logic if you haven't already:
```python
# How long to wait for a DLR before treating it as late, not failed
DLR_TIMEOUT_SECONDS = 300  # 5 minutes (adjust based on your use case)
```
Consider implementing fallback status checks using Twilio's Message Resource API to poll for delivery status when callbacks are late. And if you're running critical verification flows, having a secondary messaging provider or channel (email, voice OTP) as a backup isn't paranoia. It's engineering.
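A status poll can be as simple as a GET against Twilio's Message resource, which returns the message's current `status` field. The endpoint path below follows Twilio's documented REST API; the function names and error handling are a minimal sketch, assuming you'd normally reach for the official `twilio` helper library instead of raw `urllib`:

```python
import base64
import json
import urllib.request

TWILIO_API_BASE = "https://api.twilio.com/2010-04-01"

def message_status_url(account_sid: str, message_sid: str) -> str:
    """Build the Message resource URL used to poll a message's status."""
    return f"{TWILIO_API_BASE}/Accounts/{account_sid}/Messages/{message_sid}.json"

def poll_message_status(account_sid: str, auth_token: str, message_sid: str) -> str:
    """Fetch a message's current status directly from the REST API.

    Use this as a fallback when the status callback hasn't arrived
    within your DLR timeout window. Returns a status string such as
    'sent', 'delivered', or 'undelivered'.
    """
    req = urllib.request.Request(message_status_url(account_sid, message_sid))
    creds = base64.b64encode(f"{account_sid}:{auth_token}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["status"]
```

Rate-limit these polls and reserve them for messages your timeout logic has flagged as overdue; polling every message defeats the purpose of callbacks.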
## The Bigger Picture
This incident is a small symptom of a much larger reality: the US SMS delivery landscape keeps getting more complex. Between 10DLC registration requirements, evolving carrier filtering rules, and ongoing carrier fragmentation, delivery variability is something every team building on SMS needs to plan for. Treating any single provider's delivery path as perfectly reliable is a recipe for a bad week.
## What to Monitor Next
Keep an eye on Twilio's status page for resolution confirmation. Review your DLR callback logs from the past several days to identify any messages stuck in limbo. And honestly, if this incident exposed blind spots in your monitoring, that's the real takeaway. Build alerting around DLR delivery rates so you catch the next one before your customers do.
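Alerting on DLR rates doesn't need to be elaborate. A sketch of the core check, with assumed function names and a tolerance you'd tune per route or carrier bucket:

```python
def dlr_receipt_rate(sent: int, receipts: int) -> float:
    """Fraction of messages in a time window that have a delivery receipt."""
    return receipts / sent if sent else 1.0

def should_alert(current_rate: float, baseline_rate: float,
                 tolerance: float = 0.10) -> bool:
    """Fire when the receipt rate drops more than `tolerance` below
    the historical baseline for this route or carrier bucket.

    Bucketing by destination carrier is what would have surfaced this
    incident early: a small-carrier bucket dips while the aggregate
    rate barely moves.
    """
    return current_rate < baseline_rate - tolerance
```

Run the check on a sliding window (say, the last 15 minutes, lagged by your DLR timeout) so in-flight receipts don't count as missing.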
The incident will resolve. The architectural lessons shouldn't expire with it.