Deploying .NET applications on Linux containers within Azure Kubernetes Service

Azure Kubernetes Service (AKS) is a cloud container orchestration service that takes the pain out of deploying and scaling Kubernetes environments. AKS integrates seamlessly with Linux containers, which carry less overhead than virtual machines (VMs) because they share the host kernel. Also, because Linux containers include all the dependencies required to run an application, they make it easy to deploy .NET applications consistently across multiple hosts.

This hands-on tutorial shows you how to deploy .NET applications within Linux containers on AKS. First, you’ll prepare a .NET application for deployment and set up AKS and the Linux container. Then, you’ll deploy, manage, and troubleshoot the application on AKS.

Deploying .NET apps in Linux containers on Azure Kubernetes Service

Prerequisites

To follow along with this tutorial, you need:

  • An Azure account with an active subscription
  • A .NET application targeting the .NET 6.0 framework
  • Visual Studio Code
  • Docker Desktop
  • The .NET command-line interface (CLI)
  • The .NET 6.0 runtime

The sample project in this tutorial was created on a Windows PC, using PowerShell as the default command-line shell and Visual Studio Code as the preferred code editor.

Start by cloning the Movflix application from GitHub using this command:

git clone https://github.com/OkothPius/Movflix.git

Note: If Docker Desktop is currently set to use Windows containers, remember to switch it to Linux containers.

Fig. 1: Switching to Linux containers

Preparing your .NET application for deployment

To prepare your .NET application for deployment to AKS, you add it to a Docker container. First, publish your .NET application. Run the following command at the application root to compile your code into the Release output directory of your project:

dotnet publish -c Release

Creating the Dockerfile and container

Next, you need to create a Dockerfile in the same directory as your project’s .csproj file. A Dockerfile is a text file that outlines how a Docker image is created when the build command executes.

Create the Dockerfile, open it in Visual Studio Code, and add the following code:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS base 
WORKDIR /Movflix

# Copy everything
COPY . ./
# Restore as distinct layers
RUN dotnet restore
# Build and publish a release
RUN dotnet publish -c Release -o out

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /Movflix
COPY --from=base /Movflix/out .
ENTRYPOINT ["dotnet", "Movflix.dll"]

The above Dockerfile uses the .NET SDK image from the Microsoft Container Registry (mcr.microsoft.com/dotnet/sdk), tagged with the .NET version of your project (6.0 in this case), to restore and publish the application. It then copies the published output into the smaller ASP.NET Core runtime image (mcr.microsoft.com/dotnet/aspnet:6.0). The compiled project dynamic-link library (DLL) is the executable entry point of your application, which is why you pass it to the ENTRYPOINT command.

Open PowerShell or your terminal. In the root directory containing the Dockerfile, run:

docker build -t movflix-image -f Dockerfile .

This command executes each instruction in the Dockerfile and builds a Docker image tagged movflix-image in your local image repository. The . in the docker build command tells Docker to use the current working directory as the build context, and the -f switch specifies the Dockerfile path.

Here’s the image build:

Fig. 2: Docker image build

Now, use the docker images command to check if the created image is available.

To run the newly created container, use:

docker run -d -p 5000:80 --name movflix-container movflix-image

In this command, the -d flag runs the container in the background (detached mode), -p 5000:80 maps port 5000 on your machine to port 80 in the container, and --name gives the container the name movflix-container, based on the movflix-image image you built earlier.

In your browser, navigate to http://localhost:5000 to test the application. It should look like this:

Fig. 3: Testing the application locally using the Docker image

You should now be able to see the container running in Docker Desktop.
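You can also confirm this from the command line; docker ps lists the running containers, and movflix-container should appear in the output:

docker ps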

Setting up Azure Kubernetes Service

To set up AKS, you need to create a Kubernetes cluster, which is responsible for running your containerized application within AKS. In the Azure portal, type “Azure Kubernetes Service” in the search box. When it appears, click Create to create a Kubernetes cluster.

You will land on a page like this:

Fig. 4: Resource group and cluster name setup

Now, choose a resource group and enter a Kubernetes cluster name for your AKS cluster. Leave the other options at their defaults.

Click Next: Node pools.

Fig. 5: Node pools setup

In the screen above, the OS type of the node pool is Linux. To scale your applications, you can add extra node pools to handle additional workloads. However, this tutorial uses a single node pool.
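If you later need an extra node pool, you can also add one from the Azure CLI. This is a minimal sketch, assuming the resource group and cluster names used later in this tutorial (moflixResourceGroup and moflixCluster); the node pool name userpool is just an example:

az aks nodepool add --resource-group moflixResourceGroup --cluster-name moflixCluster --name userpool --node-count 2 --os-type Linux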

Leave the default options for the Access and Networking steps.

For Integrations, create an Azure Container Registry (ACR) to push your Docker image:

Fig. 6: Cluster integrations setup

Leave the options in the Advanced and Tags steps at their defaults.

After validation passes on the Review + create tab, click Create to build the Kubernetes cluster:

Fig. 7: Creating the Kubernetes cluster

Setting up Linux containers on AKS

To configure the Linux container for AKS, you first have to push its image to the Azure Container Registry (ACR). You’ve already created the ACR, so you only need to log in to the registry using the Azure CLI before you can push your Linux container image.

Pushing Linux container to ACR

In PowerShell or another terminal with the Azure CLI installed, use this command to log in to your registry (run az login first if you aren’t already signed in to Azure):

az acr login --name moflixRegistry

The command should return a login succeeded message:

Fig. 8: Log in to Azure Container Registry

Before you push the image, you need to tag it with the name of your registry. To do so, use:

docker tag movflix-image moflixregistry.azurecr.io/movflix-image:v1

Finally, use the docker push command to push the container to ACR:

docker push moflixregistry.azurecr.io/movflix-image:v1

Fig. 9: Pushing Linux container to ACR

Note that the registry name must be all lowercase when you use it as the login server URL (moflixregistry.azurecr.io), as in the commands above.

If you check your ACR in Azure under Services > Repositories, you should be able to see the container image.
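You can also list the repositories in your registry from the command line; this is a sketch assuming the registry name used above:

az acr repository list --name moflixRegistry --output table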

Creating Kubernetes deployments and services

To run your application and view it on the internet, you need to create Deployment and Service resources. A Deployment is a workload resource that specifies how to create and update the Pods running your containerized application.

A Deployment also makes it possible to spin up more replicas, depending on usage. A Service, on the other hand, allows network access to the Pods through a single, stable IP address.

First, create a file called movflix.yaml in the root of your project with the following command (in PowerShell, you can use New-Item movflix.yaml instead):

touch movflix.yaml

Open the file in Visual Studio Code and add this code to it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movflixv1
  labels:
    app: movflixv1
spec:
  replicas: 1
  template:
    metadata:
      name: movflixv1
      labels:
        app: movflixv1
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: movflixv1
        image: moflixregistry.azurecr.io/movflix-image:v1
        resources:
          limits:
            cpu: 1
            memory: 800M
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: movflixv1
---
apiVersion: v1
kind: Service
metadata:
  name: movflixv1
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: movflixv1

The Deployment resource creates one Pod, schedules it on a Linux node via the nodeSelector, and runs the image you pushed previously. It also sets the CPU and memory limits the application can use. The Service resource exposes port 80 of the Pods over TCP through an Azure load balancer, making the application reachable from outside the cluster.

Deploying your .NET application on AKS

With everything set up, deploying the containerized application on AKS is straightforward.

First, you need kubectl, the Kubernetes command-line tool, to connect to the AKS cluster. Install kubectl with this command:

az aks install-cli

Next, configure your credentials:

az aks get-credentials --resource-group moflixResourceGroup --name moflixCluster

Finally, run the movflix.yaml file you created using this command:

kubectl apply -f movflix.yaml

Be sure to execute the command in the same directory that houses your YAML file. The output of the command should indicate that the resources have been created.
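To confirm that the resources were created, you can list them with kubectl; the resource name movflixv1 matches the movflix.yaml manifest:

kubectl get deployment movflixv1
kubectl get pods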

Testing the application

To verify that the application is working, retrieve the public IP address of its service using the following command:

kubectl get service movflixv1 --watch

Fig. 10: Executing the service

When the EXTERNAL-IP value changes from pending to a public IP address, navigate to that address in your browser to view your .NET application, as shown:

Fig. 11: Deployed application on AKS

Managing and scaling your .NET application on AKS

As you scale your .NET application on AKS, you need to manage it effectively to maintain optimal performance. Below are some strategies for doing so.

Monitoring tools

Azure has various built-in tools you can use to monitor your containerized workload in AKS:

  • Azure Monitor Metrics — This tool collects and aggregates multidimensional metrics and telemetry data from your applications, operating systems, and Azure resources.
  • Azure Monitor Workspace — This tool integrates Prometheus and Grafana, open-source tools that provide greater visibility into resources within the Kubernetes cluster.
  • Log Analytics Workspace — This tool contains all the resource logs for your AKS resources; you can enable this integration with the command shown after this list.
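For example, Container insights, which sends AKS logs and performance data to a Log Analytics workspace, can be enabled on an existing cluster with the Azure CLI. This is a minimal sketch, assuming the resource group and cluster names used in this tutorial:

az aks enable-addons --addons monitoring --resource-group moflixResourceGroup --name moflixCluster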

There are also third-party solutions available. For example, ManageEngine Site24x7 is an effective and popular alternative to Azure’s built-in solutions. It’s a cloud-based monitoring platform that offers unified monitoring across servers, public and private clouds, and networks. Site24x7 provides an all-in-one dashboard that delivers complete insights in one place, negating the need for multiple monitoring tools.

Updating an application in AKS

To update an already-deployed application in AKS, you’ll create the updated container image, push the container to the Azure Container Registry, and then deploy it to AKS. If you have updated your .NET application, you first need to publish it by compiling the latest changes.

Rebuild the image with the docker build command so that it contains the newly compiled DLL of your project, and then use docker run to test whether the change is reflected locally.

Push the updated container image to ACR and follow the previous process described to deploy to AKS.

Whenever you change your .NET application, you must update your deployment by following the steps above.
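As a quick reference, the update loop looks roughly like this. The v2 tag is only an example; update the image tag in movflix.yaml to match, or point the running Deployment at the new image with kubectl set image, as shown:

docker build -t moflixregistry.azurecr.io/movflix-image:v2 -f Dockerfile .
docker push moflixregistry.azurecr.io/movflix-image:v2
kubectl set image deployment/movflixv1 movflixv1=moflixregistry.azurecr.io/movflix-image:v2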

Scaling an application on AKS

Kubernetes is designed with scalability at its core, allowing your applications to adapt dynamically to varying workloads.

One of its primary autoscaling features, the Horizontal Pod Autoscaler (HPA), continuously observes metrics like CPU and memory usage. When these metrics exceed or fall below predefined thresholds, the HPA adjusts the number of Pod replicas to match the current demand.
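As an illustration, you could create an HPA for this tutorial’s Deployment with kubectl. The threshold and replica bounds below are arbitrary examples; CPU-based scaling relies on the Pod’s CPU resources, and since movflix.yaml only sets limits, Kubernetes uses them as the default requests:

kubectl autoscale deployment movflixv1 --cpu-percent=70 --min=1 --max=5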

For node-level scaling, AKS incorporates the cluster autoscaler, which ensures that the cluster always has the right number of nodes. The autoscaler adds nodes when there’s resource contention and removes them when they’re underutilized, optimizing cost and performance. Another feature is the Vertical Pod Autoscaler (VPA), which automatically adjusts Pod resource requests and limits based on usage history, ensuring optimal resource allocation.
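The cluster autoscaler can be enabled on an existing cluster with the Azure CLI. This is a sketch assuming the names used earlier in this tutorial; the node-count bounds are examples:

az aks update --resource-group moflixResourceGroup --name moflixCluster --enable-cluster-autoscaler --min-count 1 --max-count 3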

Though Kubernetes offers these powerful automated scaling options, it doesn’t restrict manual interventions. Users can directly set Pod replicas or specify node counts, allowing for precise control in scenarios where manual scaling is preferred.
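For manual scaling, the equivalent commands look like this, again using the deployment, resource group, and cluster names from this tutorial, with example replica and node counts:

kubectl scale deployment movflixv1 --replicas=3
az aks scale --resource-group moflixResourceGroup --name moflixCluster --node-count 2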

Troubleshooting common issues

Here are some common issues when deploying .NET applications to AKS and some tips to resolve them:

  • Quota exceeded — This is a common issue when you try to create an AKS cluster. To resolve it, you need to request a quota increase for your subscription.
  • Performance issues — These occur when service creation takes too long or fails due to a time-out error. To solve this issue, you need to scale down the number of your nodes and add a load balancer to your service resource.
  • Increased Azure charges — To reduce the amount you’re billed due to your running workloads, remember to delete unused Azure resources.

To help diagnose your AKS issues, use Kubernetes events and logs. A Log Analytics workspace, for instance, stores resource logs for faster troubleshooting.
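You can also inspect events and logs directly with kubectl. The pod name below is a placeholder; substitute a name reported by kubectl get pods:

kubectl get events
kubectl describe pod <pod-name>
kubectl logs <pod-name>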

Conclusion

In this tutorial, you deployed a .NET application in a Linux container to Azure Kubernetes Service (AKS). You published the .NET application and then added a Dockerfile, which created a Docker image when built. You then pushed the Docker image to Azure Container Registry, making it available to the Azure resources and services you used to deploy it to AKS.

AKS makes it straightforward to deploy and manage .NET applications. Take this tutorial as a starting point to deploy .NET applications to AKS, and fast-track your development, monitoring, and workload scaling in Azure.
