Monitoring
We are now monitoring queue length and Capture processing time.
Posted Dec 12, 2025 - 11:59 UTC
Identified
After restarting some infrastructure components, throughput on the affected queue was restored, and Capture processing times should now be back to normal. The backlog of stuck invoices has also been processed.
Posted Dec 12, 2025 - 11:58 UTC
Update
We are still working to identify the root cause of the problem. So far, we’ve found that the performance of one of our workflow queues has degraded significantly. Scaling out the underlying infrastructure has improved system throughput, so we have decided to overprovision it to mitigate the impact on Capture processing times.
Posted Dec 12, 2025 - 10:34 UTC
Investigating
We are experiencing serious degradation of the Capture service in the US region due to an outage of one of the Azure infrastructure components. We are actively investigating together with a cloud support engineer.