AWS Load balancing and auto scaling Interview Questions & Answers
Q: What is Load balancing?

Answer: Load balancing is the process of distributing incoming network traffic or computational workload across multiple resources, such as servers, to optimize resource utilization, improve performance, and ensure high availability of services.

Load balancers typically monitor the health and capacity of individual resources and direct traffic in a way that minimizes response time and maximizes throughput.

Q: What is Auto Scaling?

Answer: AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. With auto scaling you can scale resources (servers, databases, storage, etc.) out or in automatically based on pre-defined thresholds for metrics such as CPU utilization, memory usage, and network usage.
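As a sketch of the threshold-based scaling described above, the AWS CLI can attach a target-tracking policy to an existing Auto Scaling group. The group name and target value here are hypothetical placeholders, not values from this article:

```shell
# Sketch: keep average CPU utilization near 50% for the
# (hypothetical) Auto Scaling group "my-asg". The service adds or
# removes instances automatically as the metric drifts from target.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu50-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 50.0
  }'
```

Target tracking is usually preferred over simple step scaling because the service manages both the scale-out and scale-in alarms for you.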

Q: What are the different types of load balancers in AWS?

Answer: There are four types of load balancers in Elastic Load Balancing –

  • Application Load Balancer – Application Load Balancer operates at layer 7 of the OSI (Open Systems Interconnection) model. ALB can distribute incoming HTTP and HTTPS traffic to multiple targets based on application-level details such as the request path, host header, and message content.
  • Network Load Balancer – Network Load Balancer operates at layer 4 of the OSI (Open Systems Interconnection) model. It is useful for load balancing based on TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). NLB is capable of handling millions of requests per second while maintaining high throughput and ultra-low latencies, and is well optimized for sudden and volatile traffic patterns.
  • Gateway Load Balancer – Gateway Load Balancer operates at layer 3 of the OSI (Open Systems Interconnection) model. It allows you to deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. It combines a transparent network gateway (i.e. a single entry and exit point for all traffic) with load balancing, distributing connections to virtual appliances and scaling them up or down with demand.
  • Classic Load Balancer – AWS is retiring the Classic Load Balancer, so it is better to migrate to one of the other available load balancer types.
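A minimal sketch of provisioning one of these types with the AWS CLI; the subnet and security group IDs are hypothetical placeholders:

```shell
# Sketch: create an internet-facing Application Load Balancer across
# two subnets. Swap --type to "network" or "gateway" for NLB/GWLB.
aws elbv2 create-load-balancer \
  --name my-alb \
  --type application \
  --scheme internet-facing \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0
```

The `elbv2` namespace covers ALB, NLB, and GWLB; the retired Classic Load Balancer uses the older `elb` namespace.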

Q: How can you improve security when using load balancers?

Answer: Elastic Load Balancing integrates with Amazon Virtual Private Cloud (VPC) to provide strong network security. Depending on your requirements, you can create internet-facing (load balancer with a public IP) or internal (load balancer with a private IP) load balancers. You can add security groups, authentication, and SSL/TLS termination to an ELB for additional network security.
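For example, SSL/TLS termination is configured by attaching an HTTPS listener with an ACM certificate to the load balancer. The ARNs below are placeholders for your own resources:

```shell
# Sketch: terminate TLS at the load balancer with an ACM certificate
# and a modern TLS security policy, then forward to a target group.
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```

Pairing this with a security group that only allows port 443 from the internet, while targets accept traffic only from the load balancer's security group, keeps backends off the public network.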

Q: Can you load balance WebSockets?

Answer: Yes, it is possible to load balance WebSocket connections. Load balancing WebSocket traffic is a common practice to distribute the incoming connections across multiple servers or instances to handle high volumes of real-time, bidirectional communication.

To load balance WebSockets, you can utilize a load balancer that supports WebSocket protocols and can maintain the persistence of WebSocket connections. The load balancer acts as a reverse proxy, receiving WebSocket requests from clients and distributing them across multiple backend servers.
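On AWS, the Application Load Balancer supports the WebSocket protocol natively on HTTP/HTTPS listeners. For connection persistence, one common approach is enabling load-balancer-generated cookie stickiness on the target group; the ARN and duration below are illustrative placeholders:

```shell
# Sketch: pin a client's connections to the same target for an hour
# using load-balancer-managed cookie stickiness.
aws elbv2 modify-target-group-attributes \
  --target-group-arn <target-group-arn> \
  --attributes \
    Key=stickiness.enabled,Value=true \
    Key=stickiness.type,Value=lb_cookie \
    Key=stickiness.lb_cookie.duration_seconds,Value=3600
```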

Q: Difference between Ingress and Load Balancer?

Answer: Ingress is used to map incoming traffic from the internet to the services running in a cluster. It is an API object in Kubernetes that provides routing rules and configuration for managing external access to services. A Load Balancer, on the other hand, is a networking service or component responsible for distributing incoming traffic across multiple backend instances.

Ingress operates at the application layer and requires an Ingress controller, while Load Balancer operates at the network layer and is provided by the underlying infrastructure.
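To make the distinction concrete, here is a minimal Ingress applied via kubectl; the host, resource names, and backend service are hypothetical. An Ingress controller (and, on AWS, typically an ALB or NLB provisioned by it) must exist in the cluster for these rules to take effect:

```shell
# Sketch: route HTTP traffic for app.example.com (placeholder host)
# to a backend Service named web-svc on port 80.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
EOF
```

Note the layering: the Ingress object only declares application-layer routing rules; the actual traffic distribution is performed by whatever load balancer the Ingress controller manages.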

Q: What’s the difference between Active and Passive Health Checks?

Answer: Active health checks are initiated by the load balancer itself, which periodically sends probe requests to test the health of each target. Passive health checks, by contrast, infer target health by observing live client traffic, for example connection failures or error responses, rather than sending dedicated probes.

Active health checks provide proactive monitoring with quicker detection of failures, while passive health checks offer a less intrusive approach that doesn’t add additional load but may have slower response times.

The choice between active and passive health checks depends on the specific requirements and characteristics of the system being monitored.
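In AWS, active health checks are configured on the target group. The sketch below tunes the probe path, interval, and thresholds; the ARN and `/healthz` path are placeholders for your own setup:

```shell
# Sketch: probe each target at /healthz every 15 seconds; mark it
# healthy after 3 consecutive passes and unhealthy after 2 failures.
aws elbv2 modify-target-group \
  --target-group-arn <target-group-arn> \
  --health-check-protocol HTTP \
  --health-check-path /healthz \
  --health-check-interval-seconds 15 \
  --healthy-threshold-count 3 \
  --unhealthy-threshold-count 2
```

Shorter intervals and lower unhealthy thresholds detect failures faster, at the cost of more probe traffic and a higher chance of flapping on transient errors.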

Q: What happens if your load balancer fails? How to Manage this condition?

Answer: There are several strategies for managing the condition when a load balancer fails –

  • Monitoring: Implement robust monitoring systems to detect load balancer failures promptly. Monitor key metrics such as response time, error rates, and connectivity to identify any anomalies. This helps in proactive detection and allows for timely action.
  • Redundancy: Implement a redundant load balancer setup. This involves deploying multiple load balancers in parallel, either in an active-passive or active-active configuration. Redundancy ensures that if one load balancer fails, the traffic can be automatically redirected to the healthy load balancer(s) to maintain service availability.
  • Load Balancer Health Checks: Configure health checks for the load balancer itself. Most load balancers offer health check mechanisms to verify the health and availability of their own instances. These health checks can automatically remove a failed load balancer from the rotation and direct traffic to the functioning ones.
  • Load Balancer Failover: Implement failover mechanisms to automatically switch to a backup load balancer in case of a failure. This typically involves using DNS failover or IP failover techniques to redirect traffic to a standby load balancer with minimal downtime.
  • Scalability: Ensure that the load balancer setup can handle the expected load and scale as necessary. By provisioning sufficient resources and optimizing load balancer configurations, you can prevent overloading and potential failures.
  • Regular Maintenance and Updates: Perform regular maintenance and updates for the load balancer software and underlying infrastructure. This includes applying patches, security updates, and firmware upgrades. Proper maintenance helps minimize the risk of load balancer failures caused by outdated software or vulnerabilities.
  • Disaster Recovery Planning: Have a comprehensive disaster recovery plan in place. This should include procedures for handling load balancer failures, such as activating standby load balancers, rerouting traffic, and communicating with relevant stakeholders.
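The DNS failover strategy above can be sketched with Route 53 failover records. Everything in angle brackets is a placeholder; a matching SECONDARY record pointing at the standby load balancer would be created the same way:

```shell
# Sketch: a PRIMARY failover alias record for app.example.com
# (placeholder domain) pointing at the primary load balancer. Route 53
# shifts traffic to the SECONDARY record if the health check fails.
aws route53 change-resource-record-sets \
  --hosted-zone-id <hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Failover": "PRIMARY",
        "HealthCheckId": "<health-check-id>",
        "AliasTarget": {
          "HostedZoneId": "<elb-hosted-zone-id>",
          "DNSName": "<primary-lb-dns-name>",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }'
```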

Q: What is the difference between geo-load balancing and global server load balancing?

Answer: Geo-load balancing focuses on load balancing within a specific geographic region, optimizing performance and minimizing latency for users within that region. On the other hand, global server load balancing (GSLB) operates on a larger scale, distributing traffic across multiple regions or worldwide, and considers factors beyond geographic proximity to ensure global performance optimization and high availability.