The tremendous operational features and functionality of the public cloud do not stop with high-availability mechanisms. One of the most powerful capabilities of cloud infrastructure is its seamless scalability. Since cloud infrastructure does not involve “racking and stacking” servers and is instead provisioned as “infrastructure as code”, scaling applications and infrastructure in the public cloud is much more easily achieved.

With Microsoft’s Azure cloud infrastructure, there are many ways that businesses can easily scale their applications and infrastructure using built-in features. In this post, we will look at Microsoft Azure cloud scalability features and functionality and explore some of the scaling options provided in the Azure public cloud.

Understanding Scalability and Why It’s Important

Architecting applications in Microsoft Azure requires architects to think about many aspects of applications residing there. One of the first objectives for DevOps personnel is to make sure the infrastructure and applications are highly available. This also comes with performance expectations of the application or particular infrastructure.

Many businesses experience a period of time where application performance is acceptable for a certain number of users. However, especially for web applications, what happens when a large influx of users degrades performance considerably, or to the point where the application is unusable? The scalability of the application quickly becomes a top priority.

Scalability describes the ability of a system to handle ever-increasing demand, expected or unexpected, by adding resources, either manually or in an automated fashion, to handle the increased workload. The expected level of performance weighs into the overall picture of scalability for a cloud-hosted application. Customers or other key business stakeholders may be responsible for setting the performance expectations for the system.


In the traditional on-premises world of infrastructure, if performance or scalability was lacking to handle an increased user base, new hardware would be provisioned to add the compute, memory, network, storage, or other resources needed to handle the increased workload. The problem with this approach is that it is not very efficient. If there are periodic spikes in traffic that necessitate the extra hardware but those spikes are short-lived, the extra hardware sits unused for the remainder of the time. This is not very cost-effective.

What is Scalability in Azure?

With the Microsoft Azure public cloud and others, scalability is essentially baked into the environment by way of programmatic controls that allow capacity to be easily extended or shrunk.

Let’s take a look at some of the specific examples of Microsoft Azure cloud scalability features and functionality to see how scalability is easily accomplished in the Azure cloud.

Microsoft Azure Cloud Scalability Features and Functionality

As opposed to the dilemma of on-premises scaling of resources requiring provisioning more hardware, with the Azure cloud environment, resources can easily be scaled both up and down depending on the needs of the customer. Azure features many great capabilities related to scaling, including:

  • Scaling up and down
  • Scaling in and out
  • Autoscaling

Scaling up

Scaling up is typically performed on-premises when more capacity and performance are needed for an increased number of users hitting a particular workload.

Scaling up refers to adding additional resources to an existing physical or virtual server. This includes adding CPU, memory, or storage resources.

Scaling down

Scaling down is generally not something you see performed on-premises, as it involves removing resources from an existing physical or virtual server. This decreases the available capacity of that server.

Scaling out

Scaling out refers to the process of provisioning additional servers and capacity. This involves adding new servers to run business-critical applications that share the load of existing servers serving out the application. Using load balancers and other technologies, the new servers are added to the pool of resources and then traffic is routed to them accordingly.
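
As a rough illustration of the idea (not Azure-specific code), the following Python sketch models a toy round-robin pool: a newly provisioned server joins the pool and immediately starts receiving its share of the traffic. All of the class, method, and server names here are hypothetical, invented for the example.

```python
from itertools import cycle

class ServerPool:
    """Toy round-robin load balancer: new servers join the pool and
    start receiving a share of the routed traffic."""

    def __init__(self, servers):
        self.servers = list(servers)

    def scale_out(self, server):
        # Provision an additional instance into the pool.
        self.servers.append(server)

    def route(self, n_requests):
        # Distribute requests across the pool in round-robin order.
        rr = cycle(self.servers)
        return [next(rr) for _ in range(n_requests)]

pool = ServerPool(["web-1", "web-2"])
pool.scale_out("web-3")   # scale out: add a third instance
print(pool.route(6))      # traffic is now spread across all three servers
```

A real load balancer also health-checks its backends before routing to them; this sketch only shows the distribution aspect.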

Scaling In

Scaling in is the opposite of scaling out. This involves de-provisioning existing servers and capacity by removing a subset of server resources from the pool of servers carrying existing workloads.

Azure Auto Scaling

Autoscaling is a way to automatically perform scalability actions such as scaling up/down or in/out in an automated fashion based on workload demand. Autoscaling ensures the right amount of resources are available for the demands of the workload and ensures efficiency in operations. This helps to eliminate wasted resources or unneeded resources.
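
The decision logic behind threshold-based autoscaling can be sketched in a few lines of Python. This is an illustrative model only, not the actual Azure autoscale engine; the function name, thresholds, and limits are assumptions chosen for the example.

```python
def autoscale_decision(cpu_percent, instances, scale_out_at=75, scale_in_at=25,
                       min_instances=1, max_instances=10):
    """Return the new instance count for a simple threshold rule:
    scale out when average CPU is high, scale in when it is low."""
    if cpu_percent > scale_out_at and instances < max_instances:
        return instances + 1   # scale out
    if cpu_percent < scale_in_at and instances > min_instances:
        return instances - 1   # scale in, eliminating unneeded resources
    return instances           # demand is within the acceptable band

print(autoscale_decision(82, 3))  # heavy load: scale out to 4
print(autoscale_decision(10, 3))  # light load: scale in to 2
print(autoscale_decision(50, 3))  # normal load: stay at 3
```

The `min_instances`/`max_instances` bounds mirror the instance limits a real autoscale rule lets you set, so the system never scales to zero capacity or runs away on cost.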

Azure PaaS Scalability Features

Azure’s Platform-as-a-Service offering provides managed services for applications. This managed infrastructure service provided by Azure allows operations teams and developers to deploy applications on top of the offering without needing to worry about the underlying infrastructure. There are several different tiers of PaaS offerings, listed below. Apart from the Free and Shared tiers, each has scalability features to benefit from.

  • Free – No scalability features included
  • Shared – No scalability features included
  • Basic – Manual scaling capabilities
  • Standard – Automatic scaling capabilities
  • Premium – Automatic scaling capabilities
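
As a quick recap of the tier list above, the mapping from tier to scaling capability can be expressed as a small lookup. The dictionary and helper function below are purely illustrative names invented for this post, not part of any Azure SDK:

```python
# Scaling support per App Service tier, as listed above.
APP_SERVICE_SCALING = {
    "Free": None,          # no scalability features
    "Shared": None,        # no scalability features
    "Basic": "manual",     # manual scaling only
    "Standard": "automatic",
    "Premium": "automatic",
}

def supports_autoscale(tier):
    """True only for tiers with automatic scaling capabilities."""
    return APP_SERVICE_SCALING.get(tier) == "automatic"

print(supports_autoscale("Basic"))     # Basic scales manually only
print(supports_autoscale("Standard"))  # Standard can autoscale
```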

Azure IaaS Scalability Features

The Azure Infrastructure-as-a-Service, or IaaS, offering provides an environment for customers to run their own infrastructure platform in the Azure cloud. The difference with IaaS is that customers are responsible for managing the infrastructure, such as the virtual machines themselves.

With the Azure IaaS solution, customers can take advantage of VM scale sets to solve day-to-day operational challenges involving patching and other maintenance operations.

VM Scale Sets

VM scale sets let you create and manage a group of identical, load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. This helps provide high availability for applications and allows central administration of configuration, updates, and other tasks.

Using VM scale sets helps to improve the performance, redundancy, and distribution of workloads across multiple instances. When maintenance, updates, reboots, and other operations need to be performed, traffic is simply routed to another available application instance. To scale the workloads, simply provision more application instances in the VM scale set.

VM Scale Set Auto Scaling

With the auto scaling mechanism built into a VM scale set, you can automatically increase or decrease the number of VM instances that run your application. This provides automated, elastic behavior that helps reduce the management overhead of monitoring and optimizing the performance of your application. Rules can be created that define the acceptable performance of the VM scale set (VMSS).

Once the VM instances are created, the scale set starts to distribute traffic through a load balancer. Metrics are monitored as defined, such as CPU, memory, or how long the load must stay at a certain level. Auto scale rules can either increase or decrease resources depending on the demands of the environment.
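
The “how long the load must stay at a certain level” aspect can be modeled with a sliding window over recent metric samples. The Python sketch below is illustrative only (the class and parameter names are invented, not an Azure API): it fires a scale-out signal only when CPU stays above the threshold for a number of consecutive samples, so a brief spike does not trigger scaling.

```python
from collections import deque

class SustainedCpuRule:
    """Signal scale-out only when CPU stays above `threshold`
    for `duration` consecutive samples."""

    def __init__(self, threshold=75, duration=3):
        self.threshold = threshold
        # Keep only the most recent `duration` samples.
        self.window = deque(maxlen=duration)

    def observe(self, cpu_percent):
        self.window.append(cpu_percent)
        # Fire only once the window is full and every sample is high.
        return (len(self.window) == self.window.maxlen
                and all(s > self.threshold for s in self.window))

rule = SustainedCpuRule(threshold=75, duration=3)
# A brief spike (80, 90) is interrupted by a dip (60), so only the
# sustained run of high samples at the end triggers the rule.
print([rule.observe(c) for c in [80, 90, 60, 85, 88, 92]])
```

Real autoscale rules typically pair this with a cooldown period after each action, so the system does not oscillate between scaling out and scaling in.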

Concluding Thoughts

Businesses looking at placing business-critical resources in the public cloud must think not only about the availability of their applications but also about their scalability. Scalability concerns are extremely important as workload demands change over time.

Compared to on-premises environments, public cloud environments like Azure have the capability to automatically scale workloads based on defined metrics. This covers many aspects of scaling: scaling up, down, out, in, and auto-scaling. By successfully leveraging Azure’s built-in scalability capabilities, businesses can greatly decrease the management overhead involved with manually monitoring and scaling resources.
