Resolved -
Dear All,
We have received confirmation from our infrastructure partner that the defective hardware has now been successfully replaced. As communicated earlier, the service had already been restored following our prior mitigation actions, and we have been closely monitoring system performance to ensure stability.
All infrastructure components are now operating normally, and performance metrics confirm that the platform is fully back to its expected state.
As the situation has been fully resolved and the infrastructure has returned to normal operation, we are now closing this incident.
Thank you for your patience and understanding throughout this process.
Regards,
Scaleflex Incident Management team
Mar 8, 21:36 CET
Monitoring -
Dear All,
We are pleased to inform you that the workaround has been successfully implemented and traffic delivery is now returning to normal.
The affected disk on our cache server has not yet been replaced; however, we are monitoring the situation closely to ensure continued stability. We will provide updates promptly if there are any changes or further developments.
Thank you for your patience and understanding.
Regards,
Scaleflex Incident Management team
Mar 6, 19:30 CET
Identified -
Dear All,
We would like to provide an update regarding the ongoing incident. The root cause has been identified as an intermittent hardware issue on our cache server, which made diagnosing the issue more challenging and contributed to the extended investigation time.
Our team is currently implementing a temporary mitigation to reduce the impact while awaiting the replacement of the affected disk. We do not yet have a confirmed replacement timeline, but we are in active communication with our hosting partner, OVH, to resolve this as quickly as possible.
We will continue to provide updates as more information becomes available, and we appreciate your patience and understanding.
Regards,
Scaleflex Incident Management team
Mar 6, 18:05 CET
Investigating -
Dear All,
Please accept our apologies, as our notification process via the status page did not operate as expected.
At approximately 10:20 UTC on 06 March 2026, we began observing degraded traffic delivery performance, reflected in an elevated number of HTTP 429 and 439 responses.
Our engineering team is currently investigating the situation and working to identify the root cause. We will continue to monitor the platform closely and are taking the necessary actions to restore normal delivery levels as quickly as possible.
We will provide further updates as soon as additional information becomes available.
Thank you for your patience and understanding.
Regards,
Scaleflex Incident Management team
Mar 6, 16:33 CET