In the assessment phase, AWS MAP helped L3Harris and Cloud303 evaluate the readiness of the existing infrastructure for the migration, outlining the potential costs, benefits, and risks. This process provided a clear roadmap, enabling the teams to define a comprehensive migration strategy that aligned with L3Harris' business objectives.
During the mobilization phase, AWS MAP provided the necessary tools and training to prepare the L3Harris team for the migration. This ensured a smooth transition, reducing the chances of disruptions during the actual migration process.
Finally, in the migration and modernization phase, AWS MAP's resources and best practices played a crucial role in the seamless migration of the Stern application to Amazon EKS. AWS MAP's tooling aided the re-platforming, re-hosting, and re-architecting of the application, and AWS's scalable services, combined with AWS MAP's guidance, enabled the implementation of auto-scaling at both the pod and cluster levels.
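Pod-level auto-scaling on EKS is commonly implemented with a Kubernetes HorizontalPodAutoscaler, while cluster-level scaling adjusts the node group size. A minimal sketch of the pod-level piece, assuming a hypothetical `stern-worker` deployment scaled on CPU utilization (the deployment name, replica bounds, and metric are illustrative, not taken from the source):

```yaml
# Hypothetical HPA for a Stern worker deployment; names and
# thresholds are illustrative, not taken from the source.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: stern-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stern-worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The cluster-level counterpart would typically be the Kubernetes Cluster Autoscaler or Karpenter adding or removing EKS worker nodes as pod demand changes.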
Coming from a different cloud provider, L3Harris wanted its infrastructure deployed using cloud-agnostic Infrastructure as Code (IaC). Cloud303 leveraged Terraform, deploying the infrastructure on AWS through centralized CI/CD pipelines (with development, staging, and production environments) that include robust manual approval stages to ease the management overhead of the application's development. Every modification to the infrastructure must pass through a CI/CD pipeline that applies quality, security, and policy checks.
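The source does not name the CI/CD system or the specific checks, so as one possible sketch, a GitLab CI-style pipeline with quality, security, and policy stages and a manual approval gate before apply might look like the following (all stage names and scanning tools here are assumptions):

```yaml
# Hypothetical pipeline definition; the actual CI system, stages,
# and scanners used by Cloud303 are not specified in the source.
stages:
  - validate
  - plan
  - apply

validate:
  stage: validate
  script:
    - terraform fmt -check      # quality: formatting
    - terraform validate        # quality: syntax and consistency
    - tflint                    # policy/lint checks (assumed tool)
    - checkov -d .              # security scanning (assumed tool)

plan:
  stage: plan
  script:
    - terraform plan -out=plan.tfplan
  artifacts:
    paths: [plan.tfplan]

apply:
  stage: apply
  when: manual                  # manual approval gate before apply
  script:
    - terraform apply plan.tfplan
```

Gating `terraform apply` on a saved plan plus a manual approval means reviewers see exactly the changes that will be made before any environment is touched.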
When geospatial data is uploaded to the Stern application, the request hits the load balancer, which proxies it over the relevant ports to the EKS cluster residing in subnets spanning multiple Availability Zones. Stern APIs orchestrate the application's functionality. Microservices are split between master and worker pods: master pods listen for incoming requests, while worker pods process jobs based on messages in RabbitMQ (recently migrated to Amazon MQ). These APIs handle many operations, from creating new accounts to running processes that require GPU support. Traffic to all microservices/pods is routed and orchestrated by Kong in conjunction with Network Load Balancers (NLBs).
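On EKS, fronting Kong with an NLB is commonly done by annotating Kong's Kubernetes Service so the AWS Load Balancer Controller provisions a Network Load Balancer. A minimal sketch, assuming a hypothetical `kong-proxy` deployment (the selector, ports, and target type are illustrative, not taken from the source):

```yaml
# Hypothetical Service exposing the Kong proxy via an AWS NLB;
# selector, ports, and annotation values are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: kong-proxy
  ports:
    - name: proxy
      port: 80
      targetPort: 8000
    - name: proxy-tls
      port: 443
      targetPort: 8443
```

With this pattern, the NLB handles Layer 4 load balancing across Availability Zones while Kong performs Layer 7 routing to the individual microservice pods.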