Professional Versatility: Bridging Accenture Expertise with AWS Innovation
The morning of 21st October 2025 began with an atmospheric stillness over Madrid. As I walked towards the iconic Four Towers, a thick mist enveloped the heights of the skyscrapers, creating a sense of quiet intensity. From the 11th floor of the AWS offices, the world below seemed to vanish into the fog, providing the perfect, focused backdrop for a day dedicated to one of the most critical aspects of modern technology: Resilience.

Whilst my core expertise has historically flourished within the Google Cloud Platform (GCP) ecosystem, I have always maintained that a true professional must transcend the boundaries of a single provider. In my current role at Accenture, specifically within Data Engineering, Management & Governance, I see first-hand how a multi-cloud strategy provides unparalleled value to clients. Consequently, attending the AWS Partner Tech Day was not merely an educational exercise; it was a strategic move to ensure I can deliver robust, high-availability solutions regardless of the underlying infrastructure.
The Multi-Cloud Imperative
In the current digital landscape, relying on a single vendor can create bottlenecks for innovation and concentrate operational risk. Furthermore, clients increasingly demand architectures that are not only scalable but also geographically redundant across different providers. By mastering AWS resilience alongside my existing GCP knowledge, I am better positioned to architect solutions that meet stringent regulatory requirements and complex business needs.
The workshop focused intensely on how to achieve true resilience for critical applications. We explored the shared responsibility model, a fundamental concept that defines the security and operational obligations of both the provider and the customer. Understanding this balance is vital for any Data Engineer who aims to protect the integrity and availability of high-value data pipelines.

Embracing the Culture of Chaos
One of the most compelling segments of the day was the deep dive into Chaos Engineering. The philosophy is simple yet profound: the only way to be certain a system is resilient is to proactively try to break it. By simulating real-world failures, such as server crashes or network latency, we can observe how our systems react and improve them before a genuine disaster occurs.
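To make the idea tangible, here is a minimal Python sketch (all names are hypothetical, not taken from the workshop) that wraps a pipeline step in a fault injector adding random latency and failures, so the surrounding retry logic can be tested rather than merely trusted:

```python
import random
import time


class FaultInjector:
    """Randomly injects latency or failures into a callable (illustrative only)."""

    def __init__(self, failure_rate=0.2, max_latency_s=2.0, seed=None):
        self.failure_rate = failure_rate
        self.max_latency_s = max_latency_s
        self.rng = random.Random(seed)

    def call(self, fn, *args, **kwargs):
        # Simulate network latency before the real work happens.
        time.sleep(self.rng.uniform(0, self.max_latency_s))
        # Simulate a crashed instance or a dropped connection.
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected failure")
        return fn(*args, **kwargs)


def run_with_retries(fn, injector, attempts=3):
    """A naive retry wrapper; a real pipeline would add backoff and alerting."""
    for attempt in range(1, attempts + 1):
        try:
            return injector.call(fn)
        except ConnectionError as exc:
            print(f"attempt {attempt} failed: {exc}")
    raise RuntimeError("step did not survive the injected faults")


if __name__ == "__main__":
    load_step = lambda: "rows loaded"  # stand-in for a real pipeline step
    injector = FaultInjector(failure_rate=0.5, seed=42)
    print(run_with_retries(load_step, injector))
```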

We discussed legendary tools such as Chaos Monkey, pioneered by Netflix, which randomly terminates instances in production to ensure that the overall service remains unaffected. In addition, we explored how AWS tools enable us to implement these experiments safely. For a professional in Data Management, this is transformative. We must constantly test our own solutions; it is not enough to build a pipeline and hope it stays upright. We must prove its stability through rigorous, intentional disruption.
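One such managed option is AWS Fault Injection Service (FIS). As a hedged sketch, assuming an experiment template has already been defined (the template ID below is a placeholder) and credentials are configured, launching and monitoring an experiment with boto3 might look roughly like this:

```python
import time
import uuid

import boto3

# Assumes AWS credentials are configured and an FIS experiment template
# (for example, one that stops a subset of EC2 instances) already exists.
fis = boto3.client("fis", region_name="eu-west-1")

TEMPLATE_ID = "EXTxxxxxxxxxxxxxxxx"  # placeholder: replace with your template ID

# Start the experiment defined by the template.
response = fis.start_experiment(
    clientToken=str(uuid.uuid4()),
    experimentTemplateId=TEMPLATE_ID,
)
experiment_id = response["experiment"]["id"]
print(f"started experiment {experiment_id}")

# Poll until the experiment finishes; stop conditions defined on the template
# (for example, CloudWatch alarms) can halt it if the blast radius grows.
while True:
    state = fis.get_experiment(id=experiment_id)["experiment"]["state"]
    print("status:", state["status"])
    if state["status"] in ("completed", "stopped", "failed"):
        break
    time.sleep(15)
```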
Architecting for the Unexpected: RTO and RPO
A significant portion of our hands-on labs involved verifying that critical applications could meet their Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
- RTO refers to the maximum tolerable length of time that a computer, system, network, or application can be down after a failure.
- RPO refers to the maximum age of the data that must be recovered from backup storage for normal operations to resume; in practice, it defines how much data loss, measured in time, is acceptable.
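To make the two objectives concrete, here is a small illustrative check (thresholds and timestamps are invented for the example) of the kind of verification the labs were driving at: given when the failure started, when service was restored, and when the most recent recovery point was taken, confirm both targets were met.

```python
from datetime import datetime, timedelta

# Illustrative targets; real values come from the business continuity plan.
RTO = timedelta(minutes=30)   # maximum tolerable downtime
RPO = timedelta(minutes=5)    # maximum tolerable data loss, measured in time


def meets_objectives(failure_at, restored_at, last_recovery_point):
    """Return (rto_ok, rpo_ok) for a single disaster-recovery drill."""
    downtime = restored_at - failure_at                    # how long the service was down
    data_loss_window = failure_at - last_recovery_point    # age of the newest recoverable data
    return downtime <= RTO, data_loss_window <= RPO


# Example drill: failure at 10:00, service restored at 10:22, last snapshot at 09:58.
failure_at = datetime(2025, 10, 21, 10, 0)
restored_at = datetime(2025, 10, 21, 10, 22)
last_recovery_point = datetime(2025, 10, 21, 9, 58)

rto_ok, rpo_ok = meets_objectives(failure_at, restored_at, last_recovery_point)
print(f"RTO met: {rto_ok} (downtime 22 min vs 30 min target)")
print(f"RPO met: {rpo_ok} (data loss window 2 min vs 5 min target)")
```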
During the intensive technical challenges, I successfully implemented Multi-AZ (Availability Zone) patterns and cell-based architectures. These strategies ensure that if one data centre experiences an outage, the workload seamlessly shifts to another, maintaining business continuity. I am pleased to share that I completed all the assigned technical challenges within the allotted timeframe, reinforcing my ability to adapt quickly to the AWS environment and deliver results under pressure.
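As a sketch of the cell-based idea (cell names and the health-check input are hypothetical), the routing below pins each tenant deterministically to a home cell in one Availability Zone and fails over to the next healthy cell only when that zone is reported unhealthy:

```python
import hashlib

# Each cell is an independent copy of the stack, deployed in its own AZ.
CELLS = ["cell-az-a", "cell-az-b", "cell-az-c"]


def home_cell(tenant_id: str) -> str:
    """Deterministically pin a tenant to a cell so failures stay contained."""
    digest = hashlib.sha256(tenant_id.encode()).hexdigest()
    return CELLS[int(digest, 16) % len(CELLS)]


def route(tenant_id: str, healthy_cells: set) -> str:
    """Route to the tenant's home cell, failing over only when it is unhealthy."""
    cell = home_cell(tenant_id)
    if cell in healthy_cells:
        return cell
    # Fallback: walk the ring to the next healthy cell (deliberately simplistic).
    start = CELLS.index(cell)
    for offset in range(1, len(CELLS)):
        candidate = CELLS[(start + offset) % len(CELLS)]
        if candidate in healthy_cells:
            return candidate
    raise RuntimeError("no healthy cells available")


# Normal operation: every tenant lands on its home cell.
print(route("tenant-42", healthy_cells={"cell-az-a", "cell-az-b", "cell-az-c"}))
# AZ outage: tenants homed in the failed cell shift to the next healthy one.
print(route("tenant-42", healthy_cells={"cell-az-b", "cell-az-c"}))
```

The deterministic pinning is the point of the pattern: an AZ outage or a bad deployment affects only the tenants homed in that cell, which keeps the blast radius small and the failover path predictable.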
Bridging Data Governance and Cloud Resilience
At Accenture, my focus on Data Engineering and Governance means that I do not just look at whether a system is “up” or “down.” I look at the quality and reliability of the data flowing through it. Yet even the most sophisticated data governance framework is useless if the infrastructure housing it is fragile.
By integrating fault isolation techniques and static stability into our data architectures, we ensure that metadata remains accessible and that governance policies are enforced even during partial system failures. This holistic approach—combining resilience with data integrity—is what allows me to provide high-value outcomes for my clients.
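A minimal sketch of that static-stability principle, with a hypothetical policy cache (the file name and policy shape are invented for illustration): enforcement always reads a last known-good local copy, so an outage of the central catalogue degrades freshness rather than availability.

```python
import json
import time
from pathlib import Path

CACHE_FILE = Path("policies_cache.json")  # last known-good copy kept locally


def fetch_policies_from_catalogue() -> dict:
    """Placeholder for a call to the central metadata catalogue."""
    # Simulate a partial outage of the catalogue service.
    raise ConnectionError("catalogue unreachable")


def refresh_cache() -> None:
    """Best-effort refresh: a catalogue failure never blocks enforcement."""
    try:
        policies = fetch_policies_from_catalogue()
        CACHE_FILE.write_text(json.dumps({"fetched_at": time.time(), "policies": policies}))
    except ConnectionError:
        pass  # keep the last known-good copy; raise an alert out of band


def current_policies() -> dict:
    """Enforcement path: reads only the local copy, never the network."""
    if not CACHE_FILE.exists():
        return {}  # whether to fail open or closed is itself a governance decision
    return json.loads(CACHE_FILE.read_text())["policies"]


# Seed the cache once (e.g. at deployment time), then survive an outage.
CACHE_FILE.write_text(json.dumps({"fetched_at": time.time(),
                                  "policies": {"pii_columns": ["email", "phone"]}}))
refresh_cache()            # fails silently: the catalogue is "down"
print(current_policies())  # policies remain enforceable from the local copy
```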
Final Reflections on a Productive Day
As the mist finally began to clear from the Madrid skyline towards the end of the afternoon, the insights from the day felt equally clear. The AWS Partner Tech Day was an exceptional opportunity to connect with other industry experts and refine my technical toolkit. The networking was as valuable as the labs themselves; sharing experiences with fellow SREs and Cloud Architects reminded me that resilience is as much a cultural mindset as it is a technical configuration.
Ultimately, my commitment to continuous learning in a multi-cloud world is driven by a desire to be a versatile thought partner for my clients. Whether we are building on AWS, GCP, or a hybrid of both, the goal remains the same: creating systems that are “anti-fragile”—systems that do not just survive stress, but actually improve because of it.
Let’s Build Resilient Futures Together
If you are looking to fortify your cloud infrastructure or seeking a partner who understands the intricate intersection of Data Engineering and Cloud Resilience, I would welcome the opportunity to connect. I understand the pressure of managing critical workloads and the high stakes involved in digital transformation.
Would you like to discuss how we can optimise your current architecture for maximum uptime and data integrity? Please feel free to reach out to me directly on LinkedIn to start a conversation about your next high-impact project.