Kubernetes is the de facto standard for deploying, scaling, and managing containerized applications. For Kubernetes administrators, a deep understanding of the platform's underlying functionality is crucial to ensuring application health, scalability, and fault tolerance. Even though the declarative nature of Kubernetes simplifies deployments and configuration management, troubleshooting issues and optimizing performance often requires diving deeper into the core concepts.
One such concept that’s central to Service discovery and communication within a Kubernetes cluster is Endpoints. Endpoints provide a way for Pods to locate backend services. This article explores the world of Kubernetes Endpoints, covering their inner workings, consumption methods, and troubleshooting techniques. By the end, expect to have picked up valuable insights to effectively leverage Endpoints in managing your Kubernetes deployments.
Endpoints in Kubernetes are responsible for facilitating communication between cluster components. At their core, Endpoints provide a way for Pods to discover and connect to backend services. This is achieved by dynamically updating a list of network addresses associated with a particular service.
When a service is created, Kubernetes automatically initializes corresponding Endpoints and populates them with the IP addresses of the Pods backing that service. This enables Pods to discover and communicate with the backend services they depend on, without needing to hardcode IP addresses or manage service discovery manually.
Consider a web application that has multiple microservices, such as a frontend service, a user authentication service, and a database service. The frontend service, running in one Pod, needs to communicate with the authentication service running in another Pod to verify user credentials. Endpoints make this possible by tracking the current Pod IPs behind the authentication Service, so the frontend and other relevant services can reach it through a stable Service address.
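To make this concrete, here is a minimal sketch of what the authentication Service might look like. The names, labels, and ports are illustrative, not taken from a real deployment:

```yaml
# Hypothetical authentication Service; names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  selector:
    app: auth          # matches the labels on the authentication Pods
  ports:
    - name: http
      port: 80         # stable port clients connect to
      targetPort: 8080 # port the auth containers actually listen on
```

Because this Service has a selector, Kubernetes watches for Pods labeled app: auth and maintains the Endpoints for auth-service automatically; the frontend simply connects to auth-service on port 80.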
Here are some additional reasons why Endpoints are a key concept in Kubernetes:
- Dynamic service discovery: Endpoints are updated automatically as Pods are created, rescheduled, or terminated, so clients never need hardcoded Pod IPs.
- Load balancing: kube-proxy uses Endpoint data to distribute Service traffic across healthy backend Pods.
- Health awareness: Only Pods that pass their readiness probes are included, so traffic is routed away from unhealthy backends.
- External integration: Selector-less Services with manually managed Endpoints let you expose resources outside the cluster through the same Service abstraction.
EndpointSlices, introduced in Kubernetes v1.16 and stable since v1.21, address the scalability and performance challenges associated with managing a large number of Endpoints. Unlike traditional Endpoints, which store all Endpoint information for a Service in a single resource, EndpointSlices break that data into smaller, more manageable slices (by default, up to 100 endpoints per slice). This improves scalability and reduces the overhead of propagating Endpoint changes, particularly in clusters with a high volume of Services and Pods.
For example, suppose you have a microservice that exposes HTTP and gRPC on ports 80 and 50051, respectively, across hundreds of Pods. With traditional Endpoints, every Pod churn event forces an update to one large resource that must be sent to every node. With EndpointSlices, the control plane spreads the endpoint data across multiple slice objects, each carrying the port list (e.g., 80 for HTTP, 50051 for gRPC) that applies to its endpoints, so an update only touches the affected slice.
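The shape of an EndpointSlice, roughly as the control plane would generate it, looks like the following sketch. The object name, Service name, and addresses are illustrative:

```yaml
# Sketch of a control-plane-managed EndpointSlice; names and IPs are illustrative.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-abc12
  labels:
    kubernetes.io/service-name: my-service # ties the slice to its Service
addressType: IPv4
ports:
  - name: http
    port: 80
    protocol: TCP
endpoints:
  - addresses:
      - 10.0.0.1
    conditions:
      ready: true # only ready endpoints receive traffic
```

Note the kubernetes.io/service-name label: that is how consumers such as kube-proxy find all slices belonging to one Service.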
In the following sections, let’s explore how to manage Endpoints in Kubernetes.
In most scenarios, Kubernetes automatically creates Endpoints when a Service is defined with a Pod selector. Manual creation of Endpoints is generally not recommended, as it bypasses the automatic discovery and management mechanisms.
However, for specific scenarios where it can't be avoided (e.g., exposing external resources), you can define a YAML manifest for the Endpoints object and then create it using this command:
kubectl apply -f <manifest-file.yaml>
Here’s a sample YAML file:
apiVersion: v1
kind: Endpoints
metadata:
  name: test-service # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.0.0.1
      - ip: 10.0.0.2
    ports:
      - name: http
        port: 80
        protocol: TCP
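For manually managed Endpoints like this to be consumed, the matching Service must be defined without a Pod selector (otherwise Kubernetes would overwrite the Endpoints automatically). A minimal sketch of such a Service, matching the sample above:

```yaml
# Selector-less Service paired with manually managed Endpoints.
# The name must match the Endpoints object (test-service here).
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
```

With this pair in place, traffic sent to test-service:80 is forwarded to 10.0.0.1 and 10.0.0.2, even though those addresses are not Pods in the cluster.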
To fetch Endpoints in a Kubernetes cluster, use this command:
kubectl get endpoints
It will fetch a list of all Endpoints defined in the current namespace (add -A to cover all namespaces). Output should look like this:
NAME         ENDPOINTS       AGE
my-service   10.0.0.1:8080   1d
Alternatively, to access the Endpoints of a Service, you can use this command:
kubectl get endpoints my-service
The output will show a list of Endpoints associated with the Service.
Similar to creation, manual updates to Endpoints are discouraged. Kubernetes automatically updates Endpoints whenever Pods are created, deleted, or their IP addresses change. If you need to modify the communication target for a Service, the recommended approach is to update the Service definition (e.g., modify Pod selectors or ports) to trigger automatic Endpoint updates.
With that said, if a manual update can’t be avoided, you can simply update the Endpoint YAML file, and then apply it again using the same kubectl command as above:
kubectl apply -f my-service-endpoints-updated.yaml
Endpoint deletion can disrupt Service functionality. However, there are scenarios where it may be necessary — for example, when cleaning up after a Service deletion. If you need to delete an Endpoint, run this command:
kubectl delete endpoints <endpoint_name>
Use the above command with caution, as it removes the specified Endpoint object.
Note: For most use cases, rely on Kubernetes' automatic Endpoint management for optimal Service discovery and resilience. Manual intervention should be reserved for troubleshooting or specific configurations.
Kubernetes has been designed for fault-tolerance from the ground up. However, issues and bottlenecks are not uncommon. In the following sections, we will explore several issues that users report while working with Endpoints, along with troubleshooting advice for each.
Description: When you check a Service's Endpoints using kubectl get endpoints <service_name>, you see an empty "Endpoints" column.
Troubleshooting steps:
- Verify the Service's selector matches the Pod labels: compare the selector in kubectl describe service <service_name> against kubectl get pods --show-labels.
- Confirm the backing Pods are Running and Ready (kubectl get pods); Pods failing readiness probes are excluded from Endpoints.
- If the Service intentionally has no selector, confirm that a manually created Endpoints object with the same name exists in the same namespace.
Description: Even though Endpoints appear healthy, your application experiences issues receiving traffic from the Service.
Troubleshooting steps:
- Check that the Service's port and targetPort match the port the container actually listens on.
- Test connectivity directly to a Pod IP and port from another Pod to rule out application-level issues.
- Inspect kube-proxy logs on the affected nodes and check for NetworkPolicies that might be dropping traffic.
Description: You observe unusually frequent updates to the Endpoints list, potentially impacting Service stability.
Troubleshooting steps:
- Look for Pods flapping between Ready and NotReady (kubectl get events); tune readiness probe thresholds and timeouts if probes are too aggressive.
- Check for crash-looping containers and resource pressure causing frequent restarts (kubectl get pods -w).
- Review Horizontal Pod Autoscaler behavior for rapid scale-up/scale-down cycles and adjust its stabilization settings if needed.
Description: Despite adding or removing Pods, the Service's Endpoints list doesn't update.
Troubleshooting steps:
- Verify that newly added Pods carry labels matching the Service's selector.
- Check cluster events for errors (kubectl get events) and confirm kube-controller-manager, which runs the endpoints controller, is healthy.
- Confirm the Endpoints object is not being managed manually, which would prevent automatic reconciliation.
Description: Even though Endpoints are updated correctly, Pods continue to use outdated Service Endpoints due to cached DNS entries.
Troubleshooting steps:
- Check the CoreDNS (or kube-dns) configuration and cache TTL settings.
- Restart or roll the affected Pods so their resolvers pick up fresh records.
- For client-side caches (e.g., JVM DNS caching), lower the application's DNS TTL settings.
Description: Network policies may be inadvertently blocking communication between Pods and the Service. This can lead to Service discovery failures.
Troubleshooting steps:
- List the policies in the namespace with kubectl get networkpolicy and review their ingress and egress rules.
- Confirm that the Pod-to-Service traffic you expect is explicitly allowed; remember that once any policy selects a Pod, all other traffic to it is denied.
- In a non-production environment, test with a temporarily permissive policy to isolate the cause.
Description: Endpoint misconfigurations can occur due to incorrect network configurations or erroneous Service definitions. These misconfigurations can lead to Service discovery failures and communication issues between Pods.
Troubleshooting steps:
- Validate the Service definition (selectors, ports, protocol) against the intended Pods.
- For manually managed Endpoints, confirm the listed IPs and ports are correct and reachable.
- Compare kubectl describe service and kubectl describe endpoints output to spot mismatches.
Description: Scaling operations, both automated and manual, might be impacting Endpoint availability and load balancing.
Troubleshooting steps:
- Correlate Endpoint churn with autoscaling or manual scaling events (kubectl get events).
- Ensure readiness probes gate traffic so newly created Pods only receive requests once they are ready.
- Configure PodDisruptionBudgets so scale-down and maintenance operations preserve a minimum number of available backends.
Description: Endpoints may experience performance degradation due to network congestion, resource contention, or inefficient load balancing algorithms. This can result in increased latency, dropped connections, or timeouts for client requests.
Troubleshooting steps:
- Monitor latency and error rates per backend Pod to locate hotspots.
- Check for node-level network saturation and CPU throttling caused by tight resource limits.
- Where appropriate, consider topology-aware routing or adjusting the Service's traffic policies to keep traffic local and reduce cross-node hops.
Description: Endpoints that transmit sensitive data without encryption are vulnerable to interception and data breaches.
Troubleshooting steps:
- Enable TLS in the application itself, or deploy a service mesh (e.g., Istio or Linkerd) to enforce mutual TLS between Pods.
- Store certificates and keys in Kubernetes Secrets, restrict access to them with RBAC, and rotate them regularly.
Endpoints can potentially introduce security vulnerabilities if not managed carefully. Here’s a breakdown of key security considerations and best practices to secure Endpoints and reduce your cluster’s attack surface.
Use TLS encryption to secure communication between Pods and Services. TLS encrypts data in transit, which prevents eavesdropping and tampering to ensure confidentiality and integrity. Fortunately, Kubernetes provides built-in support for managing TLS certificates, via the certificates.k8s.io API.
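As one minimal building block, a TLS key pair can be stored in a kubernetes.io/tls Secret that workloads then mount or reference. The Secret name is hypothetical, and the data values are placeholders for your actual base64-encoded material:

```yaml
# Hypothetical TLS Secret; replace the placeholders with real
# base64-encoded PEM data (e.g., from kubectl create secret tls).
apiVersion: v1
kind: Secret
metadata:
  name: auth-service-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```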
Use Kubernetes RBAC to restrict access to Services and Endpoints. Define Roles and ClusterRoles that grant specific permissions for interacting with Services. Bind these Roles to Service accounts used by Pods or applications. Moreover, regularly review and audit RBAC policies to ensure compliance with security requirements and minimize the risk of privilege escalation.
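A sketch of such a policy, granting one ServiceAccount read-only access to Services and Endpoints in a single namespace, might look like this. The role, binding, namespace, and ServiceAccount names are illustrative:

```yaml
# Read-only access to Services and Endpoints in one namespace;
# names below are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpoint-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: endpoint-reader-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: frontend-sa
    namespace: production
roleRef:
  kind: Role
  name: endpoint-reader
  apiGroup: rbac.authorization.k8s.io
```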
Define network policies to enforce segmentation and restrict traffic flow between Pods and Services within the Kubernetes cluster. Network policies allow you to specify ingress and egress rules based on IP addresses, ports, and protocols, effectively isolating workloads and preventing unauthorized communication.
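For instance, the following sketch allows only Pods labeled app: frontend to reach the authentication Pods on their container port; all other ingress to those Pods is denied. Labels and ports are illustrative:

```yaml
# Allow only frontend Pods to reach the auth Pods on port 8080;
# labels and ports are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-auth
spec:
  podSelector:
    matchLabels:
      app: auth
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```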
Moreover, comply with least privilege principles to limit access to only necessary Endpoints and Services. This reduces the attack surface and mitigates the risk of lateral movement by potential attackers.
Use Endpoint hardening techniques to improve the security posture of Kubernetes Endpoints. For example, you can:
- Expose only the ports and protocols a Service actually needs, and remove unused ones.
- Run containers as non-root users with read-only root filesystems where possible.
- Avoid hostNetwork and hostPort unless strictly required, since they bypass cluster network controls.
- Keep cluster components and node operating systems patched against known vulnerabilities.
Implement robust monitoring and logging mechanisms to detect and respond to security incidents and anomalies. You can use tools like Kubernetes Monitoring by Site24x7 to collect and analyze metrics related to Endpoint activity, network traffic, and resource utilization. Additionally, consider configuring centralized logging solutions to aggregate logs from Kubernetes clusters and correlate events for threat detection and forensic analysis.
Kubernetes Endpoints are an important concept that every cluster administrator should understand and master. In this post, we covered everything you need to know about Endpoints: what they are, how they work, how to manage them, how to troubleshoot common issues, and how to make them more secure. We hope you found it useful!
To always stay on top of your cluster’s health and performance, don’t forget to try out the comprehensive Kubernetes monitoring solution by Site24x7.