Hi, these are known edge cases of Defender for Storage's malware scanning. The common failure causes are:

- **"Scan failed – internal service error"**: a transient backend issue; retries usually succeed.
- **"Scan exceeded time limitation"**: large or complex blobs (e.g., big or deeply nested archives) hit the per-blob time cap (roughly 30 minutes to 3 hours depending on blob size).
- **Throttling**: scanning is capped at roughly 2 GB/min and 2,000 files/min per storage account; uploads above those rates get throttled or skipped without being scanned.
- **Unscannable blobs**: archive-tier blobs, client-side-encrypted blobs, password-protected archives, excessively nested archives, or oversize blobs (current docs note a 50 GB ceiling).

To mitigate: smooth uploads below the limits (batch/queue with backoff), split or avoid heavily nested or password-protected archives, make sure the scanner's permissions weren't removed, and monitor scan results via Log Analytics.

There is no built-in auto-retry for failed scans, so you need to implement it yourself: subscribe to the scan-result events (or logs), and on error states trigger an on-demand rescan or re-ingest via a DMZ pattern (Event Grid plus a Function that moves only "No threats found" blobs onward).

Key refs: scan results/limits, timeouts, throttling/size/unsupported causes, and response patterns.
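As a minimal sketch of the event-handling side of that DMZ pattern: the function below maps a scan-result event payload to an action. The `scanResultType` field and the "No threats found" / "Malicious" values follow the documented malware-scanning event schema, but verify them against your actual events; the retry cap and the "promote"/"quarantine"/"rescan" action names are illustrative choices, not part of the product.

```python
# Sketch: classify Defender for Storage scan-result events and decide a DMZ action.
# Assumptions: event_data carries a "scanResultType" field as in the documented
# Event Grid schema; the retry policy and action names are illustrative only.
from dataclasses import dataclass

CLEAN = "No threats found"
MALICIOUS = "Malicious"

@dataclass
class Decision:
    action: str   # "promote", "quarantine", or "rescan"
    reason: str

def decide(event_data: dict, attempts: int = 0, max_retries: int = 3) -> Decision:
    """Map one scan-result event to a DMZ action:
    clean blobs are promoted out of the quarantine container, malicious blobs
    are kept quarantined, and error states are rescanned up to a retry cap."""
    result = event_data.get("scanResultType", "")
    if result == CLEAN:
        return Decision("promote", "scan passed")
    if result == MALICIOUS:
        return Decision("quarantine", "threat detected")
    # Anything else (e.g. "Scan failed - internal service error" or
    # "Scan exceeded time limitation") is treated as retriable.
    if attempts < max_retries:
        return Decision("rescan", f"retriable result: {result!r}")
    return Decision("quarantine", "retry budget exhausted")
```

You would wire this into an Event Grid-triggered Azure Function; a "rescan" decision would then kick off an on-demand scan or re-upload the blob, and "promote" would copy it out of the DMZ container.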
What causes the malware scan of a blob in a storage account to fail or timeout?
Brian Hall - AgelessRx · 40 Reputation points
We have Microsoft Defender for Cloud enabled and we are using the malware scanning feature for blobs in our storage account when they are uploaded. We have been seeing a small percentage of these scans fail outright due to an "internal service error" or a timeout error. We are trying to understand why these happen and if there's anything we can do to mitigate this or if there's a setting in Azure that forces blobs like this to retry automatically.
Azure Storage