Resolved -
Capture processing times have remained stable, and this incident is now resolved.
Dec 12, 14:55 UTC
Monitoring -
We are now monitoring queue length and Capture processing time.
Dec 12, 11:59 UTC
Identified -
After we restarted some infrastructure components, the throughput of the affected queue was restored, and Capture processing should now be back to its average level. The backlog of stuck invoices has also been processed.
Dec 12, 11:58 UTC
Update -
We are still working to identify the cause of the problem. So far, we have found that the performance of one of our workflow queues has degraded significantly. Scaling out the underlying infrastructure has improved system throughput, so we have decided to overprovision it to mitigate the impact on Capture processing times.
Dec 12, 10:34 UTC
Investigating -
We are experiencing a serious degradation of the Capture service in the US region due to an outage of one of the underlying Azure infrastructure components. We are investigating it right now together with a cloud support engineer.
Dec 12, 08:57 UTC