Post-Cyber Attack Recovery
Migrating 200+ Customers from Legacy Infrastructure to Azure


Industry
Construction Tech (B2B SaaS)
Company Size
200+ customers, 10-year-old product
Engagement
Fractional CTO + DevOps as a Service
Duration
12-18 months
Platform
Legacy .NET Application on IaaS
Services
Infrastructure Assessment & Documentation
Cloud Migration (OVH → Azure)
Fractional CTO
Disaster Recovery Planning
Tech Stack
Azure Virtual Machines (IaaS)
SQL Server on Azure VMs
Azure Managed Disks
Azure Backup Services
The Challenge
A construction technology company called us in after suffering a cyber attack. They had been operating for 10 years without any C-level technology leadership, and the attack exposed what years of technical neglect had created.
The infrastructure was on OVH, a European cloud provider, with no standardisation whatsoever. VMs were built manually through a dashboard. Networks, disks, and configurations were all ad hoc. There was no Infrastructure as Code, no automation, no proper backup regime, and critically, no documentation of how anything actually worked.
The product itself was a 10-year-old .NET application: stateful, not cloud-native, and only able to run on Infrastructure as a Service (IaaS). Platform as a Service options were not viable given the application architecture. Over 200 customers depended on this system daily.
The mandate was clear: recover from the attack, modernise the infrastructure, and build resilience so this could never happen again.



Scalable Design Foundation
Phase 1
Assessment and Standardisation
Before touching anything, we mapped the entire estate. We documented every service, understood how the application worked, and identified all the weak points in both the product and infrastructure. WireApps brought in a DevOps consultant alongside the Fractional CTO engagement.
The first priority was standardisation through Infrastructure as Code. We needed to stop the bleeding of manual, undocumented changes before we could think about migration.
We created multiple playbooks:
- Migration procedures from OVH to Azure
- Backup regime documentation (disk backups, database backups, retention policies)
- Disaster recovery procedures
- Incident response protocols
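The case study does not name the specific IaC tooling used. As a rough illustration of the "stop the bleeding" step, a script along these lines could inventory every VM in the subscription and flag anything that drifts from an agreed standard; the allowed sizes, regions, and required tags below are hypothetical placeholders, not the client's actual values.

```python
# Illustrative sketch only: inventories Azure VMs and flags drift from an
# agreed standard. Assumes the azure-identity and azure-mgmt-compute packages;
# the standard sizes, locations, and required tags are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"                          # placeholder
ALLOWED_SIZES = {"Standard_D4s_v5", "Standard_E8s_v5"}         # hypothetical standard
ALLOWED_LOCATIONS = {"uksouth"}                                # hypothetical standard
REQUIRED_TAGS = {"customer", "environment", "backup-policy"}   # hypothetical standard

def audit_vms() -> list[str]:
    """Return human-readable drift findings across the whole estate."""
    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    findings = []
    for vm in client.virtual_machines.list_all():
        tags = vm.tags or {}
        if vm.hardware_profile.vm_size not in ALLOWED_SIZES:
            findings.append(f"{vm.name}: non-standard size {vm.hardware_profile.vm_size}")
        if vm.location not in ALLOWED_LOCATIONS:
            findings.append(f"{vm.name}: unexpected region {vm.location}")
        missing = REQUIRED_TAGS - tags.keys()
        if missing:
            findings.append(f"{vm.name}: missing tags {sorted(missing)}")
    return findings

if __name__ == "__main__":
    for finding in audit_vms():
        print(finding)
```

Running a check like this on a schedule is one way to keep manual dashboard changes from silently reintroducing drift while the migration is being planned.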



Phase 2
Migration Strategy and Execution
We selected Azure as the target platform. The decision was pragmatic: the application required IaaS, Azure had strong .NET support, and the security and compliance posture met their needs post-attack.
Our approach was methodical:
Trial and error on non-production: We ran multiple test migrations on QA and dev servers to understand the full process
Manual to playbook: We converted successful manual migrations into documented playbooks
Playbook to automation: We then converted playbooks into automated tooling
This progression - manual, then documented, then automated - is critical for legacy migrations. You cannot automate what you do not understand.
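As an illustration of the playbook-to-automation step (the case study does not describe the actual tooling), each documented playbook step can become a small, ordered function with fail-fast behaviour, so a migration stops early instead of half-completing. The step names below are hypothetical stand-ins for the real procedure.

```python
# Minimal sketch of turning a written migration playbook into ordered,
# fail-fast automation. The steps are placeholders standing in for the
# documented procedure (snapshot, copy, restore, verify, cut over).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[str], None]   # takes a customer identifier

def snapshot_source_disks(customer: str) -> None: ...
def copy_data_to_azure(customer: str) -> None: ...
def restore_database(customer: str) -> None: ...
def run_smoke_tests(customer: str) -> None: ...
def switch_dns(customer: str) -> None: ...

PLAYBOOK = [
    Step("snapshot source disks", snapshot_source_disks),
    Step("copy data to Azure", copy_data_to_azure),
    Step("restore database", restore_database),
    Step("run smoke tests", run_smoke_tests),
    Step("switch DNS / cut over", switch_dns),
]

def migrate(customer: str) -> None:
    """Run every playbook step in order; stop at the first failure."""
    for step in PLAYBOOK:
        print(f"[{customer}] {step.name} ...")
        try:
            step.run(customer)
        except Exception as exc:
            # Leave the customer on the source platform and escalate.
            raise RuntimeError(f"{step.name} failed for {customer}") from exc
    print(f"[{customer}] migration complete")
```

The value is less in the code than in the ordering: a script like this can only exist once the manual process has been documented step by step.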



Where We Got It Wrong (And What We Learned)
Mistake 1
Premature cloud-native architecture
We initially tried to follow cloud best practices by separating the database (Azure SQL managed service) from the application server. The first five customer migrations were disasters. The application was not cloud-optimised, and when the database was separated from the application, performance collapsed. To compensate, we had to increase IOPS on both the VM and managed SQL, which became prohibitively expensive.
The pivot: We went back to the drawing board and replicated the original architecture - SQL Server and application on the same VM. Yes, this creates a single point of failure. But we mitigated it properly:
- Separated data disks from application disks from OS disks
- Implemented managed disks with proper backup
- Set up correct maintenance windows and automation
Sometimes the pragmatic answer is not the textbook answer. A stable, well-managed "imperfect" architecture beats an unstable "best practice" one.
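To make the disk split concrete, the mitigation roughly corresponds to a VM storage profile like the one below. The names, sizes, SKUs, and caching settings are illustrative, not the client's actual values; the dict follows the shape the azure-mgmt-compute SDK accepts for a VM's storage_profile.

```python
# Illustrative storage layout: OS, application, and SQL Server data each get
# their own managed disk, so one disk's backup, performance tier, or
# maintenance window does not drag the others with it. All values are
# placeholders.
STORAGE_PROFILE = {
    "os_disk": {
        "name": "vm-app-01-os",
        "create_option": "FromImage",
        "caching": "ReadWrite",
        "managed_disk": {"storage_account_type": "Premium_LRS"},
    },
    "data_disks": [
        {   # application binaries and working files
            "lun": 0,
            "name": "vm-app-01-app",
            "disk_size_gb": 256,
            "create_option": "Empty",
            "caching": "ReadOnly",
            "managed_disk": {"storage_account_type": "Premium_LRS"},
        },
        {   # SQL Server data files; host caching disabled for the database disk
            "lun": 1,
            "name": "vm-app-01-sqldata",
            "disk_size_gb": 1024,
            "create_option": "Empty",
            "caching": "None",
            "managed_disk": {"storage_account_type": "Premium_LRS"},
        },
    ],
}
```

With the disks separated, each one can carry its own backup policy and be resized or retiered without touching the others, which is what makes the single-VM architecture manageable.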
Mistake 2
Unbalanced VM distribution
With 200+ customers to migrate, we had three target VMs. In our rush to complete the migration (we were paying for two cloud providers simultaneously), we migrated over 100 customers to the first VM before properly distributing across all three.
Result: one oversaturated VM, two underutilised ones. We had to rebalance post-migration, which added complexity and risk.
The lesson: Migration speed matters, but capacity planning matters more. We should have enforced distribution rules from day one.
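The distribution rule we should have enforced can be as simple as "always place the next customer on the least-loaded VM, and refuse placement once a VM hits its planned share". A minimal sketch, with a made-up per-VM cap:

```python
# Minimal placement rule: assign each migrating customer to the target VM
# with the fewest customers, and refuse to exceed a planned per-VM cap.
# The cap of 70 customers per VM is a hypothetical planning figure.
PER_VM_CAP = 70

def place_customer(customer: str, vm_assignments: dict[str, list[str]]) -> str:
    """Assign `customer` to the least-loaded VM; raise if every VM is full."""
    vm_name = min(vm_assignments, key=lambda vm: len(vm_assignments[vm]))
    if len(vm_assignments[vm_name]) >= PER_VM_CAP:
        raise RuntimeError("All target VMs are at planned capacity; add a VM first.")
    vm_assignments[vm_name].append(customer)
    return vm_name

# Example: three target VMs, customers placed in migration order.
assignments = {"vm-app-01": [], "vm-app-02": [], "vm-app-03": []}
for customer in (f"customer-{i:03d}" for i in range(1, 201)):
    place_customer(customer, assignments)
print({vm: len(custs) for vm, custs in assignments.items()})
# -> roughly 67/67/66 instead of 100+ customers piling up on the first VM
```

In practice the rule would weight customers by database size or usage rather than a simple head count, but even the naive version prevents the pile-up that forced us to rebalance after the fact.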



Phase 3
Operationalisation
With migrations complete, we focused on long-term stability:
- Automated backup verification
- Maintenance programme automation
- Monitoring and alerting
- Updated runbooks for the operations team
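Automated backup verification can be as simple as an alert whenever a protected item's newest successful recovery point is older than the agreed window. The sketch below assumes backup status has already been pulled out of Azure Backup (for example via its reporting or SDK) into a plain mapping; the 26-hour threshold is an illustrative choice for a daily backup schedule.

```python
# Illustrative backup-verification check: given the last successful recovery
# point per protected item (however that data is collected), flag anything
# older than the expected window. The 26-hour threshold assumes daily backups
# plus a small grace period.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=26)

def stale_backups(last_recovery_points: dict[str, datetime]) -> list[str]:
    """Return the names of items whose newest recovery point is too old."""
    now = datetime.now(timezone.utc)
    return [
        name
        for name, recovered_at in last_recovery_points.items()
        if now - recovered_at > MAX_AGE
    ]

# Example with made-up data: vm-app-02's disk backup has not succeeded today.
status = {
    "vm-app-01/os-disk": datetime.now(timezone.utc) - timedelta(hours=5),
    "vm-app-02/os-disk": datetime.now(timezone.utc) - timedelta(hours=40),
    "vm-app-01/sql-db": datetime.now(timezone.utc) - timedelta(hours=3),
}
for item in stale_backups(status):
    print(f"ALERT: no recent recovery point for {item}")
```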



The Results
200+ customers migrated from OVH to Azure
Zero customer data loss during migration
Infrastructure as Code implemented across the entire estate
Documented playbooks for all operational procedures
Automated backup regime with verified recovery procedures
Reduced operational overhead through automation



Key Lessons
Document before you automate.
The playbook phase is not optional. You cannot automate a process you do not fully understand, and you do not fully understand it until you have documented it step by step.
Legacy applications break cloud best practices.
A 10-year-old .NET application will not behave like a cloud-native microservice. Design your infrastructure around what the application actually needs, not what the architecture diagrams say it should need.
Honest post-mortems build trust.
We made mistakes during this engagement. We documented them, shared them with the client, and adjusted. This transparency built more trust than pretending everything went perfectly.
Speed vs stability is a real trade-off.
Paying for two cloud providers created pressure to migrate fast. That pressure led to the VM distribution mistake. In hindsight, a more measured pace would have avoided the rebalancing work.
Post-incident is the right time for transformation.
The cyber attack created executive attention and budget for infrastructure investment. We used that window to implement changes that would have been deprioritised during normal operations.



Client Context
A construction technology company serving 200+ B2B customers with a legacy application hosted on OVH. The engagement began in crisis mode following a cyber attack, then evolved into a comprehensive infrastructure modernisation programme. The DevOps as a Service model provided the specialised expertise they lacked in-house without requiring permanent hires in a domain that was not their core competency.
Suitable for: CTOs inheriting legacy infrastructure, companies recovering from security incidents, leaders evaluating managed DevOps services for infrastructure modernisation.




