Get Started Guide For Azure Developers
Overview
Developer guide
SDKs and tools
Quickstart
Web Apps
Virtual machines
Linux
Windows
Serverless
Microservices
Service Fabric
Container Service
Azure Spring Cloud
Tutorials
Create and deploy a web app
.NET with SQL DB
Node.js with Mongo DB
PHP with MySQL
Java with MySQL
Deploy complex VM templates
Linux
Windows
Create an Azure connected function
Docker deploy web app on Linux
Samples
Azure CLI
Web Apps
Linux VM
Windows VM
Azure PowerShell
Web Apps
Linux VM
Windows VM
Concepts
Billing and subscriptions
Hosting comparisons
What is App Service?
Virtual machines
Linux VMs
Windows VMs
Service Fabric overview
How to guides
Plan
Web application architectures
VM architectures
Connect to on-premises networks
Microservices patterns/scenarios
Develop
Linux VM
Windows VM
Serverless apps
Microservices cluster
Deploy
Web and mobile apps from source control
Microservices locally
Linux VM
Windows VM
Store data
Blobs
File shares
Key-value pairs
JSON documents
Relational tables
Message queues
Scale
Web and mobile apps
Virtual machines
Microservice apps
Secure
Web and mobile apps
Backup
Web and mobile apps
Virtual machines
Monitor
Web and mobile apps
Windows VM
Microservices
Billing alerts
Automate
Scale Linux VM
Scale Windows VM
Reference
REST
SDKs
.NET
Java
Node.js
PHP
Python
Ruby
Command line interfaces
Azure CLI
Azure PowerShell
Billing
Resources
Azure limits and quotas
Azure regions
Azure Roadmap
Pricing calculator
Samples
Videos
Get started guide for Azure developers
2/14/2021 • 21 minutes to read
What is Azure?
Azure is a complete cloud platform that can host your existing applications and streamline new application
development. Azure can even enhance on-premises applications. Azure integrates the cloud services that you
need to develop, test, deploy, and manage your applications, all while taking advantage of the efficiencies of
cloud computing.
By hosting your applications in Azure, you can start small and easily scale your application as your customer
demand grows. Azure also offers the reliability that's needed for high-availability applications, even including
failover between different regions. The Azure portal lets you easily manage all your Azure services. You can also
manage your services programmatically by using service-specific APIs and templates.
This guide is an introduction to the Azure platform for application developers. It provides guidance and direction
that you need to start building new applications in Azure or migrating existing applications to Azure.
Where do I start?
With all the services that Azure offers, it can be an intimidating task to figure out which services you need to
support your solution architecture. This section highlights the Azure services that developers commonly use. For
a list of all Azure services, see the Azure documentation.
First, you must decide on how to host your application in Azure. Do you need to manage your entire
infrastructure as a virtual machine (VM)? Can you use the platform management facilities that Azure provides?
Maybe you need a serverless framework to host code execution only?
If your application needs cloud storage, Azure provides several options. You can also take advantage of
Azure's enterprise authentication. There are also tools for cloud-based development and monitoring, and most
hosting services offer DevOps integration.
Now, let's look at some of the specific services that we recommend investigating for your applications.
Application hosting
Azure provides several cloud-based compute offerings to run your application so that you don't have to worry
about the infrastructure details. You can easily scale up or scale out your resources as your application usage
grows.
Azure offers services that support your application development and hosting needs. Azure provides
Infrastructure as a Service (IaaS) to give you full control over your application hosting. Azure's Platform as a
Service (PaaS) offerings provide the fully managed services needed to power your apps. There's even true
serverless hosting in Azure where all you need to do is write your code.
Azure App Service
When you want the quickest path to publish your web-based projects, consider Azure App Service. App Service
makes it easy to extend your web apps to support your mobile clients and publish easily consumed REST APIs.
This platform provides authentication by using social providers, traffic-based autoscaling, testing in production,
and continuous and container-based deployments.
You can create web apps, mobile app back ends, and API apps.
Because all three app types share the App Service runtime, you can host a website, support mobile clients, and
expose your APIs in Azure, all from the same project or solution. To learn more about App Service, see What is
Azure Web Apps.
App Service has been designed with DevOps in mind. It supports various tools for publishing and continuous
integration deployments. These tools include GitHub webhooks, Jenkins, Azure DevOps, TeamCity, and others.
You can migrate your existing applications to App Service by using the online migration tool.
When to use: Use App Service when you're migrating existing web applications to Azure, and when you
need a fully managed hosting platform for your web apps. You can also use App Service when you need to
support mobile clients or expose REST APIs with your app.
Get started: App Service makes it easy to create and deploy your first web app, mobile app, or API app.
Try it now: App Service lets you provision a short-lived app to try the platform without having to sign up
for an Azure account. Try the platform and create your Azure App Service app.
Azure Virtual Machines
When to use: Use Virtual Machines when you want full control over your application infrastructure or to
migrate on-premises application workloads to Azure without having to make changes.
Get started: Create a Linux VM or Windows VM from the Azure portal.
Azure Functions (serverless)
When to use: Use Azure Functions when you have code that is triggered by other Azure services, by web-
based events, or on a schedule. You can also use Functions when you don't need the overhead of a complete
hosted project or when you only want to pay for the time that your code runs. To learn more, see Azure
Functions Overview.
Get started: Follow the Functions quickstart tutorial to create your first function from the portal.
Try it now: Azure Functions lets you run your code without having to sign up for an Azure account. Try it
now and create your first Azure Function.
Azure Service Fabric
When to use: Service Fabric is a good choice when you're creating an application or rewriting an existing
application to use a microservice architecture. Use Service Fabric when you need more control over, or direct
access to, the underlying infrastructure.
Get started: Create your first Azure Service Fabric application.
Azure Spring Cloud
When to use: As a fully managed service, Azure Spring Cloud is a good choice when you want to minimize
the operational cost of running Spring Boot/Spring Cloud-based microservices on Azure.
Get started: Deploy your first Azure Spring Cloud application.
Data storage
Azure Cosmos DB: A globally distributed, multi-model database service.
When to use: When your application needs document, table, or graph databases, including
MongoDB databases, with multiple well-defined consistency models.
Get started: Build an Azure Cosmos DB web app. If you're a MongoDB developer, see Build a
MongoDB web app with Azure Cosmos DB.
Azure Storage: Offers durable, highly available storage for blobs, queues, files, and other kinds of
nonrelational data. Storage provides the storage foundation for VMs.
When to use: When your app stores nonrelational data, such as key-value pairs (tables), blobs, file
shares, or messages (queues).
Get started: Choose from one of these types of storage: blobs, tables, queues, or files.
Azure SQL Database: An Azure-based version of the Microsoft SQL Server engine for storing relational
tabular data in the cloud. SQL Database provides predictable performance, scalability with no downtime,
business continuity, and data protection.
When to use: When your application requires data storage with referential integrity, transactional
support, and support for T-SQL queries.
Get started: Create a database in Azure SQL Database in minutes by using the Azure portal.
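As a minimal sketch of how these services are provisioned, both can be created from the Azure CLI; every name, location, and tier below is a placeholder assumption, not a value from this guide:
# Create a general-purpose storage account (the name must be globally unique)
az storage account create --name <storage-account-name> --resource-group myResourceGroup --location eastus --sku Standard_LRS
# Create a logical SQL server, then a database on it
az sql server create --name <server-name> --resource-group myResourceGroup --location eastus --admin-user azureuser --admin-password <password>
az sql db create --resource-group myResourceGroup --server <server-name> --name mySampleDatabase --service-objective S0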
You can use Azure Data Factory to move existing on-premises data to Azure. If you aren't ready to move data to
the cloud, Hybrid Connections in Azure App Service lets you connect your App Service hosted app to on-
premises resources. You can also connect to Azure data and storage services from your on-premises
applications.
Docker support
Docker containers, a form of OS virtualization, let you deploy applications in a more efficient and predictable
way. A containerized application works in production the same way as on your development and test systems.
You can manage containers by using standard Docker tools. You can use your existing skills and popular open-
source tools to deploy and manage container-based applications on Azure.
Azure provides several ways to use containers in your applications.
Azure Kubernetes Service: Lets you create, configure, and manage a cluster of virtual machines that
are preconfigured to run containerized applications. To learn more about Azure Kubernetes Service, see
Azure Kubernetes Service introduction.
When to use: When you need to build production-ready, scalable environments that provide
additional scheduling and management tools, or when you're deploying a Docker Swarm cluster.
Get started: Deploy a Kubernetes Service cluster.
Docker Machine : Lets you install and manage a Docker Engine on virtual hosts by using docker-
machine commands.
When to use: When you need to quickly prototype an app by creating a single Docker host.
Custom Docker image for App Service: Lets you use Docker containers from a container registry or
a custom container when you deploy a web app on Linux.
Authentication
It's crucial to not only know who is using your applications, but also to prevent unauthorized access to your
resources. Azure provides several ways to authenticate your app clients.
Azure Active Directory (Azure AD): The Microsoft multitenant, cloud-based identity and access
management service. You can add single sign-on (SSO) to your applications by integrating with Azure AD.
You can access directory properties by using the Azure AD Graph API directly or the Microsoft Graph API.
You can integrate with Azure AD support for the OAuth2.0 authorization framework and OpenID
Connect by using native HTTP/REST endpoints and the multiplatform Azure AD authentication libraries.
When to use: When you want to provide an SSO experience, work with Graph-based data, or
authenticate domain-based users.
Get started: To learn more, see the Azure Active Directory developer's guide.
App Service Authentication: When you choose App Service to host your app, you also get built-in
authentication support for Azure AD, along with social identity providers, including Facebook, Google,
Microsoft, and Twitter.
When to use: When you want to enable authentication in an App Service app by using Azure AD,
social identity providers, or both.
Get started: To learn more about authentication in App Service, see Authentication and
authorization in Azure App Service.
To learn more about security best practices in Azure, see Azure security best practices and patterns.
Monitoring
With your application up and running in Azure, you need to monitor performance, watch for issues, and see how
customers are using your app. Azure provides several monitoring options.
Application Insights : An Azure-hosted extensible analytics service that integrates with Visual Studio to
monitor your live web applications. It gives you the data that you need to improve the performance and
usability of your apps continuously. This improvement occurs whether you host your applications on
Azure or not.
Azure Monitor : A service that helps you to visualize, query, route, archive, and act on the metrics and
logs that you generate with your Azure infrastructure and resources. Monitor is a single source for
monitoring Azure resources and provides the data views that you see in the Azure portal.
DevOps integration
Whether it's provisioning VMs or publishing your web apps with continuous integration, Azure integrates with
most of the popular DevOps tools. You can work with the tools that you already have and maximize your
existing experience with support for tools like:
Jenkins
GitHub
Puppet
Chef
TeamCity
Ansible
Azure DevOps
Get started: To see DevOps options for an App Service app, see Continuous Deployment to Azure App
Service.
Try it now: Try out several of the DevOps integrations.
Azure regions
Azure is a global cloud platform that is generally available in many regions around the world. When you
provision a service, application, or VM in Azure, you're asked to select a region. This region represents a specific
datacenter where your application runs or where your data is stored. These regions correspond to specific
locations, which are published on the Azure regions page.
Choose the best region for your application and data
One of the benefits of using Azure is that you can deploy your applications to various datacenters around the
globe. The region that you choose can affect the performance of your application. For example, it's better to
choose a region that's closer to most of your customers to reduce latency in network requests. You might also
want to select your region to meet the legal requirements for distributing your app in certain countries/regions.
It's always a best practice to store application data in the same datacenter or in a datacenter as near as possible
to the datacenter that is hosting your application.
Multi-region apps
Although unlikely, it's not impossible for an entire datacenter to go offline because of an event such as a natural
disaster or Internet failure. It's a best practice to host vital business applications in more than one datacenter to
provide maximum availability. Using multiple regions can also reduce latency for global users and provide
additional opportunities for flexibility when updating applications.
Some services, such as Virtual Machines and App Service, use Azure Traffic Manager to enable multi-region
support with failover between regions to support high-availability enterprise applications. For an example, see
Azure reference architecture: Run a web application in multiple regions.
When to use: When you have enterprise and high-availability applications that benefit from failover and
replication.
Azure Resource Manager templates
When to use: Use Resource Manager templates when you want a template-based deployment for your app
that you can manage programmatically by using REST APIs, the Azure CLI, and Azure PowerShell.
Get started: To get started using templates, see Authoring Azure Resource Manager templates.
Azure role-based access control (Azure RBAC)
When to use: When you need fine-grained access management for users and groups or when you
need to make a user an owner of a subscription.
Get started: To learn more, see Add or remove Azure role assignments using the Azure portal.
Service principal objects: Along with providing access to user principals and groups, you can grant the
same access to a service principal.
When to use: When you're programmatically managing Azure resources or granting access for
applications. For more information, see Create Active Directory application and service principal.
Tags
Azure Resource Manager lets you assign custom tags to individual resources. Tags, which are key-value pairs,
can be helpful when you need to organize resources for billing or monitoring. Tags provide a way to track
resources across multiple resource groups. You can assign tags in the following ways:
In the portal
In the Azure Resource Manager template
Using the REST API
Using the Azure CLI
Using PowerShell
You can assign multiple tags to each resource. To learn more, see Using tags to organize your Azure resources.
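As a sketch, tagging an existing VM from the Azure CLI looks like the following; the resource names are assumptions, and note that az resource tag replaces any tags already on the resource:
# Apply two tags to a virtual machine (overwrites existing tags)
az resource tag --tags Environment=Staging CostCenter=Marketing --resource-group myResourceGroup --name myVM --resource-type "Microsoft.Compute/virtualMachines"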
Billing
In the move from on-premises computing to cloud-hosted services, tracking and estimating service usage and
related costs are significant concerns. It's important to estimate what new resources cost to run on a monthly
basis. You can also project how the billing looks for a given month based on the current spending.
Get resource usage data
Azure provides a set of Billing REST APIs that give access to resource consumption and metadata information
for Azure subscriptions. These Billing APIs give you the ability to better predict and manage Azure costs. You can
track and analyze spending in hourly increments and create spending alerts. You can also predict future billing
based on current usage trends.
Get started: To learn more about using the Billing APIs, see Azure consumption API overview.
Quickstart: Create a static HTML web app
Azure App Service provides a highly scalable, self-patching web hosting service. This quickstart shows how to
deploy a basic HTML+CSS site to Azure App Service. You'll complete this quickstart in Cloud Shell, but you can
also run these commands locally with Azure CLI.
If you don't have an Azure subscription, create a free account before you begin.
In the Cloud Shell, create a quickstart directory and change to it:
mkdir quickstart
cd $HOME/quickstart
Next, run the following command to clone the sample app repository to your quickstart directory.
git clone https://github.com/Azure-Samples/html-docs-hello-world.git
cd html-docs-hello-world
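The JSON output that follows comes from deploying with az webapp up, which is omitted in this excerpt; a typical invocation for this static HTML site (the app name and location are placeholders) is:
az webapp up --location westeurope --name <app_name> --html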
{
"app_url": "https://<app_name>.azurewebsites.net",
"location": "westeurope",
"name": "<app_name>",
"os": "Windows",
"resourcegroup": "appsvc_rg_Windows_westeurope",
"serverfarm": "appsvc_asp_Windows_westeurope",
"sku": "FREE",
"src_path": "/home/<username>/quickstart/html-docs-hello-world ",
< JSON data removed for brevity. >
}
Make a note of the resourceGroup value. You need it for the clean up resources section.
To update the app, open the sample index.html in an editor such as nano and make a change to the page heading.
Save your changes and exit nano. Use the command ^O to save and ^X to exit.
You'll now redeploy the app with the same az webapp up command.
Once deployment has completed, switch back to the browser window that opened in the Browse to the app
step, and refresh the page.
Manage your new Azure app
To manage the web app you created, in the Azure portal, search for and select App Services.
On the App Services page, select the name of your Azure app.
You see your web app's Overview page. Here, you can perform basic management tasks like browse, stop, start,
restart, and delete.
The left menu provides different pages for configuring your app.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, delete the resource group by running the following command in the Cloud Shell.
Remember that the resource group name was automatically generated for you in the create a web app step.
az group delete --name appsvc_rg_Windows_westeurope
Next steps
Map custom domain
Quickstart: Create a Linux virtual machine in the
Azure portal
11/2/2020 • 3 minutes to read
Azure virtual machines (VMs) can be created through the Azure portal. The Azure portal is a browser-based user
interface to create Azure resources. This quickstart shows you how to use the Azure portal to deploy a Linux
virtual machine (VM) running Ubuntu 18.04 LTS. To see your VM in action, you also SSH to the VM and install
the NGINX web server.
If you don't have an Azure subscription, create a free account before you begin.
Sign in to Azure
Sign in to the Azure portal if you haven't already.
5. Under Instance details, type myVM for the Virtual machine name, choose East US for your Region,
and choose Ubuntu 18.04 LTS for your Image. Leave the other defaults.
9. Under Inbound port rules > Public inbound ports, choose Allow selected ports and then select
SSH (22) and HTTP (80) from the drop-down.
10. Leave the remaining defaults and then select the Review + create button at the bottom of the page.
11. On the Create a virtual machine page, you can see the details about the VM you are about to create.
When you are ready, select Create.
12. When the Generate new key pair window opens, select Download private key and create
resource. Your key file will be downloaded as myKey.pem. Make sure you know where the .pem file was
downloaded; you will need the path to it in the next step.
13. When the deployment is finished, select Go to resource.
14. On the page for your new VM, select the public IP address and copy it to your clipboard.
TIP
The SSH key you created can be used the next time you create a VM in Azure. Just select the Use a key stored in
Azure for SSH public key source the next time you create a VM. You already have the private key on your computer,
so you won't need to download anything.
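The connect-and-install steps are omitted in this excerpt; a typical sequence, assuming the default azureuser account, the downloaded myKey.pem file, and the public IP address you copied, is:
# Restrict permissions on the private key, then connect to the VM
chmod 400 ~/Downloads/myKey.pem
ssh -i ~/Downloads/myKey.pem azureuser@<public-ip-address>
# On the VM, install the NGINX web server
sudo apt-get update && sudo apt-get install -y nginx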
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so,
select the resource group for the virtual machine, select Delete , then confirm the name of the resource group to
delete.
Next steps
In this quickstart, you deployed a simple virtual machine, created a Network Security Group and rule, and
installed a basic web server. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
Azure Linux virtual machine tutorials
Quickstart: Create a Windows virtual machine in the
Azure portal
11/2/2020 • 2 minutes to read
Azure virtual machines (VMs) can be created through the Azure portal. This method provides a browser-based
user interface to create VMs and their associated resources. This quickstart shows you how to use the Azure
portal to deploy a virtual machine (VM) in Azure that runs Windows Server 2019. To see your VM in action, you
then RDP to the VM and install the IIS web server.
If you don't have an Azure subscription, create a free account before you begin.
Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.
5. Under Instance details, type myVM for the Virtual machine name and choose East US for your
Region, and then choose Windows Server 2019 Datacenter for the Image. Leave the other defaults.
6. Under Administrator account, provide a username, such as azureuser, and a password. The password
must be at least 12 characters long and meet the defined complexity requirements.
7. Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP (80)
from the drop-down.
8. Leave the remaining defaults and then select the Review + create button at the bottom of the page.
2. In the Connect to virtual machine page, keep the default options to connect by IP address, over port
3389, and click Download RDP file.
3. Open the downloaded RDP file and click Connect when prompted.
4. In the Windows Security window, select More choices and then Use a different account. Type the
username as localhost\username, enter the password you created for the virtual machine, and then click
OK.
5. You may receive a certificate warning during the sign-in process. Click Yes or Continue to create the
connection.
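The IIS installation step is omitted in this excerpt. One way to run it without leaving your local shell is az vm run-command, assuming the myResourceGroup and myVM names used in this quickstart:
# Run the IIS install inside the VM via the run-command extension
az vm run-command invoke --resource-group myResourceGroup --name myVM --command-id RunPowerShellScript --scripts "Install-WindowsFeature -Name Web-Server -IncludeManagementTools"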
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources.
Select the resource group for the virtual machine, then select Delete . Confirm the name of the resource group
to finish deleting the resources.
Next steps
In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a
basic web server. To learn more about Azure virtual machines, continue to the tutorial for Windows VMs.
Azure Windows virtual machine tutorials
Getting started with Azure Functions
2/14/2021 • 3 minutes to read
Introduction
Azure Functions allows you to implement your system's logic as readily available blocks of code. These code
blocks are called "functions".
Use the following resources to get started.
Create your first function using Visual Studio, Visual Studio Code, or the command line.
Explore an interactive tutorial, such as Choose the best Azure serverless technology for your business
scenario or Well-Architected Framework - Performance efficiency.
Follow a language-specific tutorial:
Execute an Azure Function with triggers
Develop an app using the Maven Plugin for Azure Functions
Build Serverless APIs with Azure Functions
Create serverless logic with Azure Functions
Refactor Node.js and Express APIs to Serverless APIs with Azure Functions
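As a minimal sketch of the command-line option above, the Azure Functions Core Tools can scaffold and run a function locally; the project name, runtime, and function name are placeholder assumptions:
# Scaffold a new Functions project and an HTTP-triggered function
func init MyFunctionProject --worker-runtime dotnet
cd MyFunctionProject
func new --name HttpExample --template "HTTP trigger"
# Run the function host locally
func start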
Next steps
Learn about the anatomy of an Azure Functions application
Tutorial: Create and deploy an application with an
ASP.NET Core Web API front-end service and a
stateful back-end service
2/14/2021 • 14 minutes to read
This tutorial is part one of a series. You will learn how to create an Azure Service Fabric application with an
ASP.NET Core Web API front end and a stateful back-end service to store your data. When you're finished, you'll
have a voting application with an ASP.NET Core web front-end that saves voting results in a stateful back-end
service in the cluster. This tutorial series requires a Windows developer machine. If you don't want to manually
create the voting application, you can download the source code for the completed application and skip ahead
to Walk through the voting sample application. If you prefer, you can also watch a video walk-through of this
tutorial.
Prerequisites
Before you begin this tutorial:
If you don't have an Azure subscription, create a free account
Install Visual Studio 2019 version 15.5 or later with the Azure development and ASP.NET and web
development workloads.
Install the Service Fabric SDK
5. On the New Service Fabric Service page, choose Stateless ASP.NET Core, name your service
VotingWeb, then click OK.
6. The next page provides a set of ASP.NET Core project templates. For this tutorial, choose Web
Application (Model-View-Controller) , then click OK .
Visual Studio creates an application and a service project and displays them in Solution Explorer.
Update the site.js file
Open wwwroot/js/site.js . Replace its contents with the following JavaScript used by the Home views, then
save your changes.
$scope.refresh = function () {
$http.get('api/Votes?c=' + new Date().getTime())
.then(function (data, status) {
$scope.votes = data;
}, function (data, status) {
$scope.votes = undefined;
});
};
@{
ViewData["Title"] = "Service Fabric Voting Sample";
}
<div class="row">
<div class="col-xs-8 col-xs-offset-2">
<form class="col-xs-12 center-block">
<div class="col-xs-6 form-group">
<input id="txtAdd" type="text" class="form-control" placeholder="Add voting option"
ng-model="item"/>
</div>
<button id="btnAdd" class="btn btn-default" ng-click="add(item)">
<span class="glyphicon glyphicon-plus" aria-hidden="true"></span>
Add
</button>
</form>
</div>
</div>
<hr/>
<div class="row">
<div class="col-xs-8 col-xs-offset-2">
<div class="row">
<div class="col-xs-4">
Click to vote
</div>
</div>
<div class="row top-buffer" ng-repeat="vote in votes.data">
<div class="col-xs-8">
<button class="btn btn-success text-left btn-block" ng-click="add(vote.key)">
<span class="pull-left">
{{vote.key}}
</span>
<span class="badge pull-right">
{{vote.value}} Votes
</span>
</button>
</div>
<div class="col-xs-4">
<button class="btn btn-danger pull-right btn-block" ng-click="remove(vote.key)">
<span class="glyphicon glyphicon-remove" aria-hidden="true"></span>
Remove
</button>
</div>
</div>
</div>
</div>
</div>
</div>
</head>
<body>
<div class="container body-content">
@RenderBody()
</div>
<script src="~/lib/jquery/dist/jquery.js"></script>
<script src="~/lib/bootstrap/dist/js/bootstrap.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.7.2/angular.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/2.5.0/ui-bootstrap-tpls.js">
</script>
<script src="~/js/site.js"></script>
Also add the following GetVotingDataServiceName method below CreateServiceInstanceListeners() , then save
your changes. GetVotingDataServiceName returns the service name when polled.
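The method body itself is not included in this excerpt; a sketch consistent with the voting sample (the VotingData service name is assumed from the proxy URLs used later) is:
internal static Uri GetVotingDataServiceName(ServiceContext context)
{
    // Builds the fully qualified name of the back-end service,
    // for example fabric:/Voting/VotingData.
    return new Uri($"{context.CodePackageActivationContext.ApplicationName}/VotingData");
}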
[Produces("application/json")]
[Route("api/Votes")]
public class VotesController : Controller
{
private readonly HttpClient httpClient;
// GET: api/Votes
[HttpGet]
public async Task<IActionResult> Get()
{
List<KeyValuePair<string, int>> votes= new List<KeyValuePair<string, int>>();
votes.Add(new KeyValuePair<string, int>("Pizza", 3));
votes.Add(new KeyValuePair<string, int>("Ice cream", 4));
return Json(votes);
}
}
}
<Resources>
<Endpoints>
<!-- This endpoint is used by the communication listener to obtain the port on which to
listen. Please note that if your service is partitioned, this port is shared with
replicas of different partitions that are placed in your code. -->
<Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8080" />
</Endpoints>
</Resources>
Also update the Application URL property value in the Voting project so a web browser opens to the correct port
when you debug your application. In Solution Explorer, select the Voting project and update the Application
URL property to 8080 .
Deploy and run the Voting application locally
You can now go ahead and run the Voting application for debugging. In Visual Studio, press F5 to deploy the
application to your local Service Fabric cluster in debug mode. The application will fail if you didn't previously
open Visual Studio as administrator .
NOTE
The first time you run and deploy the application locally, Visual Studio creates a local Service Fabric cluster for debugging.
Cluster creation may take some time. The cluster creation status is displayed in the Visual Studio output window.
After the Voting application has been deployed to your local Service Fabric cluster, your web app will open in a
browser tab automatically and should look like this:
To stop debugging the application, go back to Visual Studio and press Shift+F5 .
namespace VotingData.Controllers
{
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;
[Route("api/[controller]")]
public class VoteDataController : Controller
{
private readonly IReliableStateManager stateManager;
return this.Json(result);
}
}
// PUT api/VoteData/name
[HttpPut("{name}")]
public async Task<IActionResult> Put(string name)
{
IReliableDictionary<string, int> votesDictionary = await
this.stateManager.GetOrAddAsync<IReliableDictionary<string, int>>("counts");
// DELETE api/VoteData/name
[HttpDelete("{name}")]
public async Task<IActionResult> Delete(string name)
{
IReliableDictionary<string, int> votesDictionary = await
this.stateManager.GetOrAddAsync<IReliableDictionary<string, int>>("counts");
"nodeTypes": [
{
...
"httpGatewayEndpointPort": "[variables('nt0fabricHttpGatewayPort')]",
"isPrimary": true,
"vmInstanceCount": "[parameters('nt0InstanceCount')]",
"reverseProxyEndpointPort": "[parameters('SFReverseProxyPort')]"
}
],
To find the reverse proxy port used in your local development cluster, view the
HttpApplicationGatewayEndpoint element in the local Service Fabric cluster manifest:
1. Open a browser window and navigate to http://localhost:19080 to open the Service Fabric Explorer tool.
2. Select Cluster -> Manifest .
3. Make a note of the HttpApplicationGatewayEndpoint element port. By default this should be 19081. If it is
not 19081, you will need to change the port in the GetProxyAddress method of the following
VotesController.cs code.
Update the VotesController.cs file
In the VotingWeb project, open the Controllers/VotesController.cs file. Replace the VotesController class
definition contents with the following, then save your changes. If the reverse proxy port you discovered in the
previous step is not 19081, change the port used in the GetProxyAddress method from 19081 to the port that
you discovered.
// GET: api/Votes
[HttpGet("")]
public async Task<IActionResult> Get()
public async Task<IActionResult> Get()
{
Uri serviceName = VotingWeb.GetVotingDataServiceName(this.serviceContext);
Uri proxyAddress = this.GetProxyAddress(serviceName);
result.AddRange(JsonConvert.DeserializeObject<List<KeyValuePair<string, int>>>(await
response.Content.ReadAsStringAsync()));
}
}
return this.Json(result);
}
// PUT: api/Votes/name
[HttpPut("{name}")]
public async Task<IActionResult> Put(string name)
{
Uri serviceName = VotingWeb.GetVotingDataServiceName(this.serviceContext);
Uri proxyAddress = this.GetProxyAddress(serviceName);
long partitionKey = this.GetPartitionKey(name);
string proxyUrl = $"{proxyAddress}/api/VoteData/{name}?PartitionKey={partitionKey}&PartitionKind=Int64Range";
// DELETE: api/Votes/name
[HttpDelete("{name}")]
public async Task<IActionResult> Delete(string name)
{
Uri serviceName = VotingWeb.GetVotingDataServiceName(this.serviceContext);
Uri proxyAddress = this.GetProxyAddress(serviceName);
long partitionKey = this.GetPartitionKey(name);
string proxyUrl = $"{proxyAddress}/api/VoteData/{name}?PartitionKey={partitionKey}&PartitionKind=Int64Range";
/// <summary>
/// Constructs a reverse proxy URL for a given service.
/// Example: http://localhost:19081/VotingApplication/VotingData/
/// </summary>
/// <param name="serviceName"></param>
/// <returns></returns>
private Uri GetProxyAddress(Uri serviceName)
{
return new Uri($"http://localhost:19081{serviceName.AbsolutePath}");
}
/// <summary>
/// Creates a partition key from the given name.
/// Uses the zero-based numeric position in the alphabet of the first letter of the name (0-25).
/// </summary>
/// <param name="name"></param>
/// <returns></returns>
private long GetPartitionKey(string name)
{
return Char.ToUpper(name.First()) - 'A';
}
}
b. First construct the URL to the ReverseProxy for the back-end service (1) .
c. Then send the HTTP PUT Request to the ReverseProxy (2) .
d. Finally, return the response from the back-end service to the client (3).
5. Press F5 to continue.
a. You are now at the break point in the back-end service.
b. In the first line in the method (1), use the StateManager to get or add a reliable dictionary called
counts.
c. All interactions with values in a reliable dictionary require a transaction; this using statement (2)
creates that transaction.
d. In the transaction, the method updates the value of the relevant key for the voting option and commits the
operation (3). Once the commit method returns, the data is updated in the dictionary and
replicated to other nodes in the cluster. The data is now safely stored in the cluster, and the back-
end service can fail over to other nodes, still having the data available.
6. Press F5 to continue.
To stop the debugging session, press Shift+F5 .
Next steps
In this part of the tutorial, you learned how to:
Create an ASP.NET Core Web API service as a stateful reliable service
Create an ASP.NET Core Web Application service as a stateless web service
Use the reverse proxy to communicate with the stateful service
Advance to the next tutorial:
Deploy the application to Azure
Quickstart: Deploy your first Azure Spring Cloud
application
2/14/2021 • 11 minutes to read
This quickstart explains how to deploy a simple Azure Spring Cloud microservice application to run on Azure.
NOTE
Steeltoe support for Azure Spring Cloud is currently offered as a public preview. Public preview offerings allow customers
to experiment with new features prior to their official release. Public preview features and services are not meant for
production use. For more information about support during previews, see the FAQ or file a Support request.
Prerequisites
An Azure account with an active subscription. Create an account for free.
.NET Core 3.1 SDK. The Azure Spring Cloud service supports .NET Core 3.1 and later versions.
Azure CLI version 2.0.67 or later.
Git.
az --version
Install the Azure Spring Cloud extension for the Azure CLI using the following command:
az extension add --name spring-cloud
Log in to Azure
1. Log in to the Azure CLI
az login
2. If you have more than one subscription, choose the one you want to use for this quickstart.
az account list -o table
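To make the chosen subscription the default for the remaining commands, set it with az account set (this step is implied but not shown above):
az account set --subscription <Name or ID of subscription>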
mkdir source-code
cd source-code
cd hello-world
"spring": {
"application": {
"name": "hello-world"
}
},
"eureka": {
"client": {
"shouldFetchRegistry": true,
"shouldRegisterWithEureka": true
}
}
4. Also in appsettings.json, change the log level for the Microsoft category from Warning to Information .
This change ensures that logs will be produced when you view streaming logs in a later step.
The appsettings.json file now looks similar to the following example:
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Information",
"Microsoft.Hosting.Lifetime": "Information"
}
},
"AllowedHosts": "*",
"spring": {
"application": {
"name": "hello-world"
}
},
"eureka": {
"client": {
"shouldFetchRegistry": true,
"shouldRegisterWithEureka": true
}
}
}
<ItemGroup>
<PackageReference Include="Steeltoe.Discovery.ClientCore" Version="3.0.0" />
<PackageReference Include="Microsoft.Azure.SpringCloud.Client" Version="2.0.0-preview.1" />
</ItemGroup>
<Target Name="Publish-Zip" AfterTargets="Publish">
<ZipDirectory SourceDirectory="$(PublishDir)"
DestinationFile="$(MSBuildProjectDirectory)/deploy.zip" Overwrite="true" />
</Target>
The packages are for Steeltoe Service Discovery and the Azure Spring Cloud client library. The Zip task
is for deployment to Azure. When you run the dotnet publish command, it generates the binaries in the
publish folder, and this task zips the publish folder into a .zip file that you upload to Azure.
6. In the Program.cs file, add a using directive and code that uses the Azure Spring Cloud client library:
using Microsoft.Azure.SpringCloud.Client;
7. In the Startup.cs file, add a using directive and code that uses the Steeltoe Service Discovery at the end
of the ConfigureServices and Configure methods:
using Steeltoe.Discovery.Client;
public void ConfigureServices(IServiceCollection services)
{
// Template code not shown.
services.AddDiscoveryClient(Configuration);
}
app.UseDiscoveryClient();
}
dotnet build
3. Create an app in your Azure Spring Cloud instance with a public endpoint assigned. Use the same
application name "hello-world" that you specified in appsettings.json.
az spring-cloud app create -n hello-world -s <service instance name> -g <resource group name> --is-public --runtime-version NetCore_31
4. Run dotnet publish to generate the deploy.zip file (as described earlier), then deploy the app to Azure:
az spring-cloud app deploy -n hello-world -s <service instance name> -g <resource group name> --runtime-version NetCore_31 --main-entry hello-world.dll --artifact-path ./deploy.zip
The --main-entry option identifies the .dll file that contains the application's entry point. After the service
uploads the .zip file, it extracts all the files and folders and tries to execute the entry point in the .dll file
specified by --main-entry .
It takes a few minutes to finish deploying the application. To confirm that it has deployed, go to the Apps
blade in the Azure portal.
[{"date":"2020-09-08T21:01:50.0198835+00:00","temperatureC":14,"temperatureF":57,"summary":"Bracing"},
{"date":"2020-09-09T21:01:50.0200697+00:00","temperatureC":-14,"temperatureF":7,"summary":"Bracing"},
{"date":"2020-09-10T21:01:50.0200715+00:00","temperatureC":27,"temperatureF":80,"summary":"Freezing"},
{"date":"2020-09-11T21:01:50.0200717+00:00","temperatureC":18,"temperatureF":64,"summary":"Chilly"},
{"date":"2020-09-12T21:01:50.0200719+00:00","temperatureC":16,"temperatureF":60,"summary":"Chilly"}]
az spring-cloud app logs -n hello-world -s <service instance name> -g <resource group name> --lines 100 -f
TIP
Use az spring-cloud app logs -h to explore more parameters and log stream functionalities.
For advanced log analytics features, visit the Logs tab in the Azure portal menu. Logs there have a latency of a
few minutes.
This quickstart explains how to deploy a simple Azure Spring Cloud microservice application to run on Azure.
The application code used in this tutorial is a simple app built with Spring Initializr. When you've completed this
example, the application will be accessible online and can be managed via the Azure portal.
This quickstart explains how to:
Generate a basic Spring Cloud project
Provision a service instance
Build and deploy the app with a public endpoint
Stream logs in real time
Prerequisites
To complete this quickstart:
Install JDK 8
Sign up for an Azure subscription
(Optional) Install the Azure CLI version 2.0.67 or higher and the Azure Spring Cloud extension with
command: az extension add --name spring-cloud
(Optional) Install the Azure Toolkit for IntelliJ and sign-in
https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.3.4.RELEASE&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-starter-sleuth,cloud-starter-zipkin,cloud-config-client
1. Click Generate when all the dependencies are set. Download and unpack the package, then create a web
controller for a simple web application by adding
src/main/java/com/example/hellospring/HelloController.java as follows:
package com.example.hellospring;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;
@RestController
public class HelloController {
@RequestMapping("/")
public String index() {
return "Greetings from Azure Spring Cloud!";
    }
}
5. Fill out the form on the Azure Spring Cloud Create page. Consider the following guidelines:
Subscription : Select the subscription you want to be billed for this resource.
Resource group : Creating new resource groups for new resources is a best practice. This will be used
in later steps as <resource group name> .
Service Details/Name: Specify the <service instance name>. The name must be between 4 and
32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character
of the service name must be a letter and the last character must be either a letter or a number.
Location : Select the region for your service instance.
6. Click Review and create .
2. (If you haven't already installed it) Install the Azure Spring Cloud extension for the Azure CLI:
az extension add --name spring-cloud
3. Create the app with a public endpoint assigned:
az spring-cloud app create -n hellospring -s <service instance name> -g <resource group name> --is-public true
4. Deploy the JAR file built from the project:
az spring-cloud app deploy -n hellospring -s <service instance name> -g <resource group name> --jar-path <jar file path>
5. It takes a few minutes to finish deploying the application. To confirm that it has deployed, go to the Apps
blade in the Azure portal. You should see the status of the application.
Once deployment has completed, you can access the app at
https://<service instance name>-hellospring.azuremicroservices.io/ .
az spring-cloud app logs -n hellospring -s <service instance name> -g <resource group name> --lines 100 -f
TIP
Use az spring-cloud app logs -h to explore more parameters and log stream functionalities.
For advanced log analytics features, visit the Logs tab in the Azure portal menu. Logs there have a latency of a
few minutes.
Clean up resources
In the preceding steps, you created Azure resources that will continue to accrue charges while they remain in
your subscription. If you don't expect to need these resources in the future, delete the resource group from the
portal or by running the following command in the Azure CLI:
az group delete --name <your resource group name; for example: hellospring-1558400876966-rg> --yes
Next steps
In this quickstart, you learned how to:
Generate a basic Azure Spring Cloud project
Provision a service instance
Build and deploy the app with a public endpoint
Stream logs in real time
To learn how to use more Azure Spring Cloud capabilities, advance to the quickstart series that deploys a sample
application to Azure Spring Cloud:
Build and Run Microservices
More samples are available on GitHub: Azure Spring Cloud Samples.
Tutorial: Deploy an ASP.NET app to Azure with
Azure SQL Database
11/2/2020 • 11 minutes to read
Azure App Service provides a highly scalable, self-patching web hosting service. This tutorial shows you how to
deploy a data-driven ASP.NET app in App Service and connect it to Azure SQL Database. When you're finished,
you have an ASP.NET app running in Azure and connected to SQL Database.
Prerequisites
To complete this tutorial:
Install Visual Studio 2019 with the ASP.NET and web development workload.
If you've installed Visual Studio already, add the workloads in Visual Studio by clicking Tools > Get Tools and
Features .
Select Azure as your target, click Next, and make sure that Azure App Service (Windows) is selected, and then
click Next again.
Sign in to Azure
In the Publish dialog, click Add an account from the account manager drop down, and then sign in to your
Azure subscription. If you're already signed into a Microsoft account, make sure that account holds your Azure
subscription. If the signed-in Microsoft account doesn't have your Azure subscription, click it to add the correct
account.
NOTE
If you're already signed in, don't select Create yet.
3. The Publish dialog shows the resources you've configured. Click Finish .
Create a server
Before creating a database, you need a logical SQL server. A logical SQL server is a logical construct that
contains a group of databases managed as a group.
1. Click Configure next to SQL Server Database under Connected Services.
2. In the Azure SQL Database dialog, click New next to Database Server.
A unique server name is generated. This name is used as part of the default URL for your server,
<server_name>.database.windows.net . It must be unique across all servers in Azure SQL. You can change
the server name, but for this tutorial, keep the generated value.
3. Add an administrator username and password. For password complexity requirements, see Password
Policy.
Remember this username and password. You need them to manage the server later.
IMPORTANT
Even though your password in the connection strings is masked (in Visual Studio and also in App Service), the fact
that it's maintained somewhere adds to the attack surface of your app. App Service can use managed service
identities to eliminate this risk by removing the need to maintain secrets in your code or app configuration at all.
For more information, see Next steps.
4. Select Finish .
Once the wizard finishes creating the Azure resources, click Publish to deploy your ASP.NET app to Azure. Your
default browser is launched with the URL to the deployed app.
Add a few to-do items.
Congratulations! Your data-driven ASP.NET application is running live in Azure App Service.
Once Visual Studio finishes creating the firewall setting for your SQL Database instance, your connection shows
up in SQL Server Object Explorer.
Here, you can perform the most common database operations, such as run queries, create views and stored
procedures, and more.
Expand your connection > Databases > <your database> > Tables . Right-click on the Todoes table and
select View Data .
From the Package Manager Console, enable Code First Migrations:
Enable-Migrations
Add a migration:
Add-Migration AddProperty
Update the local database:
Update-Database
Type Ctrl+F5 to run the app. Test the edit, details, and create links.
If the application loads without errors, then Code First Migrations has succeeded. However, your page still looks
the same because your application logic is not using this new property yet.
Use the new property
Make some changes in your code to use the Done property. For simplicity in this tutorial, you're only going to
change the Index and Create views to see the property in action.
Open Controllers\TodosController.cs.
Find the Create() method on line 52 and add Done to the list of properties in the Bind attribute. When you're
done, your Create() method signature looks like the following code:
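The signature itself is missing from this excerpt; based on the properties this tutorial names, it should look like the following (the Todo model type and parameter name are assumed):
// Done added to the Bind include list (parameter name assumed)
public ActionResult Create([Bind(Include = "Description,CreatedDate,Done")] Todo todo)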
Open Views\Todos\Create.cshtml.
In the Razor code, you should see a <div class="form-group"> element that uses model.Description , and then
another <div class="form-group"> element that uses model.CreatedDate . Immediately following these two
elements, add another <div class="form-group"> element that uses model.Done :
<div class="form-group">
@Html.LabelFor(model => model.Done, htmlAttributes: new { @class = "control-label col-md-2" })
<div class="col-md-10">
<div class="checkbox">
@Html.EditorFor(model => model.Done)
@Html.ValidationMessageFor(model => model.Done, "", new { @class = "text-danger" })
</div>
</div>
</div>
Open Views\Todos\Index.cshtml.
Search for the empty <th></th> element. Just above this element, add the following Razor code:
<th>
@Html.DisplayNameFor(model => model.Done)
</th>
Find the <td> element that contains the Html.ActionLink() helper methods. Above this <td> , add another
<td> element with the following Razor code:
<td>
@Html.DisplayFor(modelItem => item.Done)
</td>
That's all you need to see the changes in the Index and Create views.
Type Ctrl+F5 to run the app.
You can now add a to-do item and check Done. Then it should show up on your homepage as a completed item.
Remember that the Edit view doesn't show the Done field, because you didn't change the Edit view.
Enable Code First Migrations in Azure
Now that your code change works, including database migration, you publish it to your Azure app and update
your SQL Database with Code First Migrations too.
Just like before, right-click your project and select Publish .
Click Configure to open the publish settings.
All your existing to-do items are still displayed. When you republish your ASP.NET application, existing data in
your SQL Database is not lost. Also, Code First Migrations only changes the data schema and leaves your
existing data intact.
However, you don't see any of the trace messages yet. That's because when you first select View Streaming
Logs , your Azure app sets the trace level to Error , which only logs error events (with the Trace.TraceError()
method).
Change trace levels
To change the trace levels to output other trace messages, go back to Server Explorer.
Right-click your Azure app again and select View Settings .
In the Application Logging (File System) dropdown, select Verbose . Click Save .
TIP
You can experiment with different trace levels to see what types of messages are displayed for each level. For example, the
Information level includes all logs created by Trace.TraceInformation() , Trace.TraceWarning() , and
Trace.TraceError() , but not logs created by Trace.WriteLine() .
In your browser navigate to your app again at http://<your app name>.azurewebsites.net, then try clicking
around the to-do list application in Azure. The trace messages are now streamed to the Output window in
Visual Studio.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, you can delete them by deleting the resource group.
1. From your web app's Overview page in the Azure portal, select the myResourceGroup link under
Resource group .
2. On the resource group page, make sure that the listed resources are the ones you want to delete.
3. Select Delete , type myResourceGroup in the text box, and then select Delete .
Next steps
In this tutorial, you learned how to:
Create a database in Azure SQL Database
Connect an ASP.NET app to SQL Database
Deploy the app to Azure
Update the data model and redeploy the app
Stream logs from Azure to your terminal
Manage the app in the Azure portal
Advance to the next tutorial to learn how to easily improve the security of your connection to Azure SQL Database.
Access SQL Database securely using managed identities for Azure resources
More resources:
Configure ASP.NET app
Want to optimize and save on your cloud spending?
Start analyzing costs with Cost Management
Tutorial: Build a Node.js and MongoDB app in
Azure
2/14/2021 • 17 minutes to read
Azure App Service provides a highly scalable, self-patching web hosting service. This tutorial shows how to
create a Node.js app in App Service (on Windows or Linux), connect it locally to a MongoDB database, then
deploy it to a database in Azure Cosmos DB's API for MongoDB. When you're done, you'll have a MEAN
application (MongoDB, Express, AngularJS, and Node.js) running in Azure App Service. For simplicity, the
sample application uses the MEAN.js web framework.
Prerequisites
To complete this tutorial:
Install Git
Install Node.js and NPM
Install Bower (required by MEAN.js)
Install Gulp.js (required by MEAN.js)
Install and run MongoDB Community Edition
Use the Bash environment in Azure Cloud Shell.
If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
In a local terminal window, connect to your local MongoDB server:
mongo
If your connection is successful, then your MongoDB database is already running. If not, make sure that your
local MongoDB database is started by following the steps at Install MongoDB Community Edition. Often,
MongoDB is installed, but you still need to start it by running mongod .
When you're done testing your MongoDB database, type Ctrl+C in the terminal.
This sample repository contains a copy of the MEAN.js repository. It is modified to run on App Service (for more
information, see the MEAN.js repository README file).
Run the application
Run the following commands to install the required packages and start the application.
cd meanjs
npm install
npm start
Ignore the config.domain warning. When the app is fully loaded, you see something similar to the following
message:
--
MEAN.JS - Development Environment
Environment: development
Server: http://0.0.0.0:3000
Database: mongodb://localhost/mean-dev
App version: 0.5.0
MEAN.JS version: 0.5.0
--
Navigate to http://localhost:3000 in a browser. Click Sign Up in the top menu and create a test user.
The MEAN.js sample application stores user data in the database. If you are successful at creating a user and
signing in, then your app is writing data to the local MongoDB database.
You generally create your resource group and the resources in a region near you.
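The command itself is omitted in this excerpt; a typical invocation, using the resource group name and region that appear later in this tutorial, is:
az group create --name myResourceGroup --location "West Europe"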
When the command finishes, a JSON output shows you the resource group properties.
Create a Cosmos DB account
NOTE
There is a cost to creating the Azure Cosmos DB databases in this tutorial in your own Azure subscription. To use a free
Azure Cosmos DB account for seven days, you can use the Try Azure Cosmos DB for free experience. Just click the Create
button in the MongoDB tile to create a free MongoDB database on Azure. Once the database is created, navigate to
Connection String in the portal and retrieve your Azure Cosmos DB connection string for use later in the tutorial.
In the Cloud Shell, create a Cosmos DB account with the az cosmosdb create command.
In the following command, substitute a unique Cosmos DB name for the <cosmosdb-name> placeholder. This
name is used as part of the Cosmos DB endpoint, https://<cosmosdb-name>.documents.azure.com/ , so the
name needs to be unique across all Cosmos DB accounts in Azure. The name must contain only lowercase
letters, numbers, and the hyphen (-) character, and must be between 3 and 50 characters long.
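A typical invocation follows; the --kind MongoDB flag creates an account that's compatible with the MongoDB client used by MEAN.js:
az cosmosdb create --name <cosmosdb-name> --resource-group myResourceGroup --kind MongoDB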
{
"consistencyPolicy":
{
"defaultConsistencyLevel": "Session",
"maxIntervalInSeconds": 5,
"maxStalenessPrefix": 100
},
"databaseAccountOfferType": "Standard",
"documentEndpoint": "https://<cosmosdb-name>.documents.azure.com:443/",
"failoverPolicies":
...
< Output truncated for readability >
}
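The account keys are not part of the create output; they can be retrieved with the az cosmosdb keys list command:
az cosmosdb keys list --name <cosmosdb-name> --resource-group myResourceGroup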
Copy the value of primaryMasterKey . You need this information in the next step.
Configure the connection string in your Node.js application
In your local MEAN.js repository, in the config/env/ folder, create a file named local-production.js. .gitignore is
already configured to keep this file out of the repository.
Copy the following code into it. Be sure to replace the two <cosmosdb-name> placeholders with your Cosmos
DB database name, and replace the <primary-master-key> placeholder with the key you copied in the previous
step.
module.exports = {
    db: {
        uri: 'mongodb://<cosmosdb-name>:<primary-master-key>@<cosmosdb-name>.documents.azure.com:10250/mean?ssl=true&sslverifycertificate=false'
    }
};
First, build the production assets by running gulp prod in the repository root:
gulp prod
In a local terminal window, run the following command to use the connection string you configured in
config/env/local-production.js. Ignore the certificate error and the config.domain warning.
# Bash
NODE_ENV=production node server.js
# Windows PowerShell
$env:NODE_ENV = "production"
node server.js
NODE_ENV=production sets the environment variable that tells Node.js to run in the production environment.
node server.js starts the Node.js server with server.js in your repository root. This is how your Node.js
application is loaded in Azure.
When the app is loaded, check to make sure that it's running in the production environment:
--
MEAN.JS
Environment: production
Server: http://0.0.0.0:8443
Database: mongodb://<cosmosdb-name>:<primary-master-key>@<cosmosdb-
name>.documents.azure.com:10250/mean?ssl=true&sslverifycertificate=false
App version: 0.5.0
MEAN.JS version: 0.5.0
Navigate to http://localhost:8443 in a browser. Click Sign Up in the top menu and create a test user. If you are
successful creating a user and signing in, then your app is writing data to the Cosmos DB database in Azure.
In the terminal, stop Node.js by typing Ctrl+C .
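The deployment steps that follow assume you have configured a deployment user for App Service. If you haven't, you can set one with the az webapp deployment user set command (username and password are placeholders):
az webapp deployment user set --user-name <username> --password <password>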
The JSON output shows the password as null . If you get a 'Conflict'. Details: 409 error, change the
username. If you get a 'Bad Request'. Details: 400 error, use a stronger password.
Record your username and password to use to deploy your web apps.
Create an App Service plan
In the Cloud Shell, create an App Service plan with the az appservice plan create command.
The following example creates an App Service plan named myAppServicePlan in the Free pricing tier:
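A matching command sketch (the SKU follows the description above):
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku FREE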
When the App Service plan has been created, the Azure CLI shows information similar to the following example:
{
"adminSiteName": null,
"appServicePlanName": "myAppServicePlan",
"geoRegion": "West Europe",
"hostingEnvironmentProfile": null,
"id": "/subscriptions/0000-
0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
"kind": "app",
"location": "West Europe",
"maximumNumberOfWorkers": 1,
"name": "myAppServicePlan",
< JSON data removed for brevity. >
"targetWorkerSizeId": 0,
"type": "Microsoft.Web/serverfarms",
"workerTierName": null
}
In the Cloud Shell, create an App Service plan in the resource group with the az appservice plan create
command.
The following example creates an App Service plan named myAppServicePlan in the Free pricing tier ( --sku F1 )
and in a Linux container ( --is-linux ).
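A matching command sketch, using the flags named above:
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux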
When the App Service plan has been created, the Azure CLI shows information similar to the following example:
{
"adminSiteName": null,
"appServicePlanName": "myAppServicePlan",
"geoRegion": "West Europe",
"hostingEnvironmentProfile": null,
"id": "/subscriptions/0000-
0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
"kind": "linux",
"location": "West Europe",
"maximumNumberOfWorkers": 1,
"name": "myAppServicePlan",
<JSON data removed for brevity.>
"targetWorkerSizeId": 0,
"type": "Microsoft.Web/serverfarms",
"workerTierName": null
}
# Bash
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "NODE|6.9" --deployment-local-git
# PowerShell
az --% webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "NODE|6.9" --deployment-local-git
When the web app has been created, the Azure CLI shows output similar to the following example:
NOTE
The URL of the Git remote is shown in the deploymentLocalGitUrl property, with the format
https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git . Save this URL as you need it later.
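Next, set the connection string as an app setting in App Service. A minimal sketch using az webapp config appsettings set, reusing the URI from local-production.js (values are placeholders):
az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings MONGODB_URI="mongodb://<cosmosdb-name>:<primary-master-key>@<cosmosdb-name>.documents.azure.com:10250/mean?ssl=true"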
In Node.js code, you access this app setting with process.env.MONGODB_URI , just like you would access any
environment variable.
In your local MEAN.js repository, open config/env/production.js (not config/env/local-production.js), which has
production-environment specific configuration. The default MEAN.js app is already configured to use the
MONGODB_URI environment variable that you created.
db: {
uri: ... || process.env.MONGODB_URI || ...,
...
},
Push to the Azure remote to deploy your app with the following command. When Git Credential Manager
prompts you for credentials, make sure you enter the credentials you created in Configure a deployment
user , not the credentials you use to sign in to the Azure portal.
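Assuming the remote is added as azure using the URL saved earlier, the commands look like this:
git remote add azure <deploymentLocalGitUrl-from-create-step>
git push azure main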
This command may take a few minutes to run. While running, it displays information similar to the following
example:
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 489 bytes | 0 bytes/s, done.
Total 5 (delta 3), reused 0 (delta 0)
remote: Updating branch 'main'.
remote: Updating submodules.
remote: Preparing deployment for commit id '6c7c716eee'.
remote: Running custom deployment command...
remote: Running deployment command...
remote: Handling node.js deployment.
.
.
.
remote: Deployment successful.
To https://<app-name>.scm.azurewebsites.net/<app-name>.git
* [new branch] main -> main
You may notice that the deployment process runs Gulp after npm install . App Service does not run Gulp or
Grunt tasks during deployment, so this sample repository has two additional files in its root directory to enable
it:
.deployment - This file tells App Service to run bash deploy.sh as the custom deployment script.
deploy.sh - The custom deployment script. If you review the file, you will see that it runs gulp prod after
npm install and bower install .
You can use this approach to add any step to your Git-based deployment. If you restart your Azure app at any
point, App Service doesn't rerun these automation tasks. For more information, see Run Grunt/Bower/Gulp.
Browse to the Azure app
Browse to the deployed app using your web browser.
http://<app-name>.azurewebsites.net
The tutorial next adds a comment field to the articles. In the article controller's update function (modules/articles/server/controllers/articles.server.controller.js in the MEAN.js sample), the new field is saved along with the existing ones:
article.title = req.body.title;
article.content = req.body.content;
article.comment = req.body.comment;
...
};
Open modules/articles/client/views/view-article.client.view.html.
Just above the closing </section> tag, add the following line to display comment along with the rest of the
article data:
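The exact markup was omitted in this copy; a line analogous to the list view shown below works, with vm.article as the view's model:
<p class="lead" ng-bind="vm.article.comment"></p>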
Open modules/articles/client/views/list-articles.client.view.html.
Just above the closing </a> tag, add the following line to display comment along with the rest of the article
data:
<p class="list-group-item-text" ng-bind="article.comment"></p>
Open modules/articles/client/views/admin/list-articles.client.view.html.
Inside the <div class="list-group"> element and just above the closing </a> tag, add the following line to
display comment along with the rest of the article data:
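A line matching the one used in the non-admin list view above:
<p class="list-group-item-text" ng-bind="article.comment"></p>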
Open modules/articles/client/views/admin/form-article.client.view.html.
Find the <div class="form-group"> element that contains the submit button, which looks like this:
<div class="form-group">
<button type="submit" class="btn btn-default">{{vm.article._id ? 'Update' : 'Create'}}</button>
</div>
Just above this tag, add another <div class="form-group"> element that lets people edit the comment field. Your
new element should look like this:
<div class="form-group">
<label class="control-label" for="comment">Comment</label>
<textarea name="comment" data-ng-model="vm.article.comment" id="comment" class="form-control" cols="30"
rows="10" placeholder="Comment"></textarea>
</div>
# Bash
gulp prod
NODE_ENV=production node server.js
# Windows PowerShell
gulp prod
$env:NODE_ENV = "production"
node server.js
Navigate to http://localhost:8443 in a browser and make sure that you're signed in.
Select Admin > Manage Articles , then add an article by selecting the + button.
You see the new Comment textbox now.
In the terminal, stop Node.js by typing Ctrl+C .
Publish changes to Azure
In the local terminal window, commit your changes in Git, then push the code changes to Azure.
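For example (the commit message is illustrative):
git add .
git commit -m "added article comment"
git push azure main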
Once the git push is complete, navigate to your Azure app and try out the new functionality.
If you added any articles earlier, you can still see them. Existing data in your Cosmos DB is not lost. Also, your
updates to the data schema leave your existing data intact.
Stream diagnostic logs
While your Node.js application runs in Azure App Service, you can get the console logs piped to your terminal.
That way, you can get the same diagnostic messages to help you debug application errors.
To start log streaming, use the az webapp log tail command in the Cloud Shell.
Once log streaming has started, refresh your Azure app in the browser to get some web traffic. You now see
console logs piped to your terminal.
Stop log streaming at any time by typing Ctrl+C .
To access the console logs generated from inside your application code in App Service, turn on diagnostics
logging by running the following command in the Cloud Shell:
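A form of the command consistent with the --level values described next (the filesystem target is an assumption; names match this tutorial):
az webapp log config --resource-group myResourceGroup --name <app-name> --application-logging filesystem --level Verbose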
Possible values for --level are: Error , Warning , Info , and Verbose . Each subsequent level includes the
previous level. For example: Error includes only error messages, and Verbose includes all messages.
Once diagnostic logging is turned on, run the following command to see the log stream:
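For example:
az webapp log tail --resource-group myResourceGroup --name <app-name>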
NOTE
You can also inspect the log files from the browser at https://<app-name>.scm.azurewebsites.net/api/logs/docker .
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, delete the resource group by running the following command in the Cloud Shell:
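For example:
az group delete --name myResourceGroup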
Next steps
What you learned:
Create a MongoDB database in Azure
Connect a Node.js app to MongoDB
Deploy the app to Azure
Update the data model and redeploy the app
Stream logs from Azure to your terminal
Manage the app in the Azure portal
Advance to the next tutorial to learn how to map a custom DNS name to the app.
Map an existing custom DNS name to Azure App Service
Or, check out other resources:
Configure Node.js app
Tutorial: Build a PHP and MySQL app in Azure App
Service
2/14/2021 • 20 minutes to read • Edit Online
Azure App Service provides a highly scalable, self-patching web hosting service using the Windows operating
system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're
finished, you'll have a Laravel app running on Azure App Service on Windows.
Azure App Service provides a highly scalable, self-patching web hosting service using the Linux operating
system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're
finished, you'll have a Laravel app running on Azure App Service on Linux.
If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
mysql -u root -p
If you're prompted for a password, enter the password for the root account. If you don't remember your root
account password, see MySQL: How to Reset the Root Password.
If your command runs successfully, then your MySQL server is running. If not, make sure that your local MySQL
server is started by following the MySQL post-installation steps.
Create a database locally
At the mysql prompt, create a database.
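The database name sampledb matches the .env files used later in this tutorial:
CREATE DATABASE sampledb;
Exit your server connection when you're done: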
quit
cd laravel-tasks
composer install
In the repository root, create a file named .env and copy the following variables into it. Replace the <root_password> placeholder with the MySQL root user's password.
APP_ENV=local
APP_DEBUG=true
APP_KEY=
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_DATABASE=sampledb
DB_USERNAME=root
DB_PASSWORD=<root_password>
For information on how Laravel uses the .env file, see Laravel Environment Configuration.
Run the sample locally
Run Laravel database migrations to create the tables the application needs. To see which tables are created in
the migrations, look in the database/migrations directory in the Git repository.
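The standard artisan command runs them:
php artisan migrate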
You generally create your resource group and the resources in a region near you.
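For example, using the names and region from the rest of this tutorial:
az group create --name myResourceGroup --location "West Europe"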
When the command finishes, a JSON output shows you the resource group properties.
Create a MySQL server
In the Cloud Shell, create a server in Azure Database for MySQL with the az mysql server create command.
In the following command, substitute a unique server name for the <mysql-server-name> placeholder, a user
name for the <admin-user>, and a password for the <admin-password> placeholder. The server name is used
as part of your MySQL endpoint ( https://<mysql-server-name>.mysql.database.azure.com ), so the name needs to
be unique across all servers in Azure. For details on selecting MySQL DB SKU, see Create an Azure Database for
MySQL server.
az mysql server create --resource-group myResourceGroup --name <mysql-server-name> --location "West Europe" --admin-user <admin-user> --admin-password <admin-password> --sku-name B_Gen5_1
When the MySQL server is created, the Azure CLI shows information similar to the following example:
{
"administratorLogin": "<admin-user>",
"administratorLoginPassword": null,
"fullyQualifiedDomainName": "<mysql-server-name>.mysql.database.azure.com",
"id": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/servers/<mysql-
server-name>",
"location": "westeurope",
"name": "<mysql-server-name>",
"resourceGroup": "myResourceGroup",
...
< Output has been truncated for readability >
}
TIP
You can be even more restrictive in your firewall rule by using only the outbound IP addresses your app uses.
In the Cloud Shell, run the command again to allow access from your local computer by replacing <your-ip-
address> with your local IPv4 IP address.
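A sketch of the firewall rule, and of creating the production database and user that the .env.production file below expects (the rule name is illustrative; the user and password match the values used later in this tutorial):
az mysql server firewall-rule create --name AllowLocalClient --server-name <mysql-server-name> --resource-group myResourceGroup --start-ip-address <your-ip-address> --end-ip-address <your-ip-address>
Connect to the server with the mysql client, then create the database and user:
mysql -u <admin-user>@<mysql-server-name> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
CREATE DATABASE sampledb;
CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2017';
GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
When you're done, exit the connection: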
quit
In the repository root, create a .env.production file and copy the following variables into it. Replace the <mysql-server-name> placeholders.
APP_ENV=production
APP_DEBUG=true
APP_KEY=
DB_CONNECTION=mysql
DB_HOST=<mysql-server-name>.mysql.database.azure.com
DB_DATABASE=sampledb
DB_USERNAME=phpappuser@<mysql-server-name>
DB_PASSWORD=MySQLAzure2017
MYSQL_SSL=true
TIP
To secure your MySQL connection information, this file is already excluded from the Git repository (See .gitignore in the
repository root). Later, you learn how to configure environment variables in App Service to connect to your database in
Azure Database for MySQL. With environment variables, you don't need the .env file in App Service.
In config/database.php, add the sslmode and options parameters to the mysql connection so that it uses TLS:
'mysql' => [
...
'sslmode' => env('DB_SSLMODE', 'prefer'),
'options' => (env('MYSQL_SSL') && extension_loaded('pdo_mysql')) ? [
PDO::MYSQL_ATTR_SSL_KEY => '/ssl/BaltimoreCyberTrustRoot.crt.pem',
] : []
],
The certificate BaltimoreCyberTrustRoot.crt.pem is provided in the repository for convenience in this tutorial.
Test the application locally
Run Laravel database migrations with .env.production as the environment file to create the tables in your
MySQL database in Azure Database for MySQL. Remember that .env.production has the connection information
to your MySQL database in Azure.
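Using the production environment file created above, the migrations can be run with:
php artisan migrate --env=production --force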
.env.production doesn't have a valid application key yet. Generate a new one for it in the terminal.
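One way to do this, assuming the key should go into .env.production:
php artisan key:generate --env=production --force
Then run the development server against the production settings:
php artisan serve --env=production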
Navigate to http://localhost:8000 . If the page loads without errors, the PHP application is connecting to the
MySQL database in Azure.
Add a few tasks in the page.
To stop PHP, type Ctrl + C in the terminal.
Commit your changes
Run the following Git commands to commit your changes:
git add .
git commit -m "database.php updates"
Deploy to Azure
In this step, you deploy the MySQL-connected PHP application to Azure App Service.
Configure a deployment user
FTP and local Git can deploy to an Azure web app by using a deployment user. Once you configure your
deployment user, you can use it for all your Azure deployments. Your account-level deployment username and
password are different from your Azure subscription credentials.
To configure the deployment user, run the az webapp deployment user set command in Azure Cloud Shell.
Replace <username> and <password> with a deployment user username and password.
The username must be unique within Azure, and for local Git pushes, must not contain the ‘@’ symbol.
The password must be at least eight characters long, with two of the following three elements: letters,
numbers, and symbols.
az webapp deployment user set --user-name <username> --password <password>
The JSON output shows the password as null . If you get a 'Conflict'. Details: 409 error, change the
username. If you get a 'Bad Request'. Details: 400 error, use a stronger password.
Record your username and password to use to deploy your web apps.
Create an App Service plan
In the Cloud Shell, create an App Service plan with the az appservice plan create command.
The following example creates an App Service plan named myAppServicePlan in the Free pricing tier:
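A matching command sketch (the SKU follows the description above):
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku FREE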
When the App Service plan has been created, the Azure CLI shows information similar to the following example:
{
"adminSiteName": null,
"appServicePlanName": "myAppServicePlan",
"geoRegion": "West Europe",
"hostingEnvironmentProfile": null,
"id": "/subscriptions/0000-
0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
"kind": "app",
"location": "West Europe",
"maximumNumberOfWorkers": 1,
"name": "myAppServicePlan",
< JSON data removed for brevity. >
"targetWorkerSizeId": 0,
"type": "Microsoft.Web/serverfarms",
"workerTierName": null
}
In the Cloud Shell, create an App Service plan in the resource group with the az appservice plan create
command.
The following example creates an App Service plan named myAppServicePlan in the Free pricing tier ( --sku F1 )
and in a Linux container ( --is-linux ).
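A matching command sketch, using the flags named above:
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux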
When the App Service plan has been created, the Azure CLI shows information similar to the following example:
{
"adminSiteName": null,
"appServicePlanName": "myAppServicePlan",
"geoRegion": "West Europe",
"hostingEnvironmentProfile": null,
"id": "/subscriptions/0000-
0000/resourceGroups/myResourceGroup/providers/Microsoft.Web/serverfarms/myAppServicePlan",
"kind": "linux",
"location": "West Europe",
"maximumNumberOfWorkers": 1,
"name": "myAppServicePlan",
<JSON data removed for brevity.>
"targetWorkerSizeId": 0,
"type": "Microsoft.Web/serverfarms",
"workerTierName": null
}
# Bash
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.2" --deployment-local-git
# PowerShell
az --% webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.2" --deployment-local-git
When the web app has been created, the Azure CLI shows output similar to the following example:
You've created a new, empty web app with Git deployment enabled.
NOTE
The URL of the Git remote is shown in the deploymentLocalGitUrl property, with the format
https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git . Save this URL as you need it later.
In the Cloud Shell, set the database connection variables as app settings with the az webapp config appsettings set command:
az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb" DB_USERNAME="phpappuser@<mysql-server-name>" DB_PASSWORD="MySQLAzure2017" MYSQL_SSL="true"
You can use the PHP getenv method to access the settings. The Laravel code uses an env wrapper over the PHP
getenv . For example, the MySQL configuration in config/database.php looks like the following code:
'mysql' => [
'driver' => 'mysql',
'host' => env('DB_HOST', 'localhost'),
'database' => env('DB_DATABASE', 'forge'),
'username' => env('DB_USERNAME', 'forge'),
'password' => env('DB_PASSWORD', ''),
...
],
Configure Laravel environment variables
Laravel needs an application key in App Service. You can configure it with app settings.
In the local terminal window, use php artisan to generate a new application key without saving it to .env.
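The --show flag prints the key instead of writing it to .env:
php artisan key:generate --show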
In the Cloud Shell, set the application key in the App Service app by using the az webapp config appsettings set
command. Replace the placeholders <app-name> and <outputofphpartisankey:generate>.
az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="
<output_of_php_artisan_key:generate>" APP_DEBUG="true"
APP_DEBUG="true" tells Laravel to return debugging information when the deployed app encounters errors.
When running a production application, set it to false , which is more secure.
Set the virtual application path
Set the virtual application path for the app. This step is required because the Laravel application lifecycle begins
in the public directory instead of the application's root directory. Other PHP frameworks whose lifecycle start in
the root directory can work without manual configuration of the virtual application path.
In the Cloud Shell, set the virtual application path by using the az resource update command. Replace the
<app-name> placeholder.
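A sketch of the command as used in the original version of this tutorial (treat the property path and API version as assumptions):
az resource update --name web --resource-group myResourceGroup --namespace Microsoft.Web --resource-type config --parent sites/<app-name> --set properties.virtualApplications[0].physicalPath="site\wwwroot\public" --api-version 2015-06-01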
By default, Azure App Service points the root virtual application path (/) to the root directory of the deployed
application files (sites\wwwroot).
The Laravel application lifecycle begins in the public directory instead of the application's root directory. The default
PHP Docker image for App Service uses Apache, and it doesn't let you customize the DocumentRoot for Laravel.
However, you can use .htaccess to rewrite all requests to point to /public instead of the root directory. In the
repository root, an .htaccess is added already for this purpose. With it, your Laravel application is ready to be
deployed.
For more information, see Change site root.
Push to Azure from Git
Back in the local terminal window, add an Azure remote to your local Git repository. Replace
<deploymentLocalGitUrl-from-create-step> with the URL of the Git remote that you saved from Create a web
app.
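For example:
git remote add azure <deploymentLocalGitUrl-from-create-step>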
Push to the Azure remote to deploy your app with the following command. When Git Credential Manager
prompts you for credentials, make sure you enter the credentials you created in Configure a deployment
user , not the credentials you use to sign in to the Azure portal.
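For example (the main branch matches the deployment output shown below):
git push azure main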
NOTE
You may notice that the deployment process installs Composer packages at the end. App Service does not run these
automations during default deployment, so this sample repository has three additional files in its root directory to enable
it:
.deployment - This file tells App Service to run bash deploy.sh as the custom deployment script.
deploy.sh - The custom deployment script. If you review the file, you will see that it runs
php composer.phar install after npm install .
composer.phar - The Composer package manager.
You can use this approach to add any step to your Git-based deployment to App Service. For more information, see
Custom Deployment Script.
This command may take a few minutes to run. While running, it displays information similar to the following
example:
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Updating branch 'main'.
remote: Updating submodules.
remote: Preparing deployment for commit id 'a5e076db9c'.
remote: Running custom deployment command...
remote: Running deployment command...
...
< Output has been truncated for readability >
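The tutorial then adds a complete flag to the tasks. In the local environment, a migration for the new column can be generated with artisan (the migration name here is illustrative):
php artisan make:migration add_complete_column --table=tasks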
This command shows you the name of the migration file that's generated. Find this file in database/migrations
and open it.
Replace the up method with the following code:
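A sketch consistent with the description that follows (Laravel schema builder; the default value is an assumption):
public function up()
{
    Schema::table('tasks', function (Blueprint $table) {
        // Add a boolean 'complete' flag to the existing tasks table
        $table->boolean('complete')->default(false);
    });
}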
The preceding code adds a boolean column in the tasks table called complete .
Replace the down method with the following code for the rollback action:
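A matching rollback sketch:
public function down()
{
    Schema::table('tasks', function (Blueprint $table) {
        // Remove the 'complete' column on rollback
        $table->dropColumn('complete');
    });
}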
In the local terminal window, run Laravel database migrations to make the change in the local database.
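The standard migration command suffices here:
php artisan migrate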
Based on the Laravel naming convention, the model Task (see app/Task.php) maps to the tasks table by
default.
Update application logic
Open the routes/web.php file. The application defines its routes and business logic here.
At the end of the file, add a route with the following code:
/**
* Toggle Task completeness
*/
Route::post('/task/{id}', function ($id) {
error_log('INFO: post /task/'.$id);
$task = Task::findOrFail($id);
$task->complete = !$task->complete;
$task->save();
return redirect('/');
});
The preceding code makes a simple update to the data model by toggling the value of complete .
Update the view
Open the resources/views/tasks.blade.php file. Find the <tr> opening tag and replace it with:
<tr class="{{ $task->complete ? 'success' : 'active' }}" >
The preceding code changes the row color depending on whether the task is complete.
In the next line, you have the following code:
<td>
<form action="{{ url('task/'.$task->id) }}" method="POST">
{{ csrf_field() }}
The preceding code adds the submit button that references the route that you defined earlier.
Test the changes locally
In the local terminal window, run the development server from the root directory of the Git repository.
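The Laravel development server works for this:
php artisan serve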
To see the task status change, navigate to http://localhost:8000 and select the checkbox.
To stop PHP, type Ctrl + C in the terminal.
Publish changes to Azure
In the local terminal window, run Laravel database migrations with the production connection string to make the
change in the Azure database.
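Using the production environment file, as earlier:
php artisan migrate --env=production --force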
Commit all the changes in Git, and then push the code changes to Azure.
git add .
git commit -m "added complete checkbox"
git push azure main
Once the git push is complete, navigate to the Azure app and test the new functionality.
If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
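To start streaming diagnostic logs, use the az webapp log tail command in the Cloud Shell (the same form appears later in this article):
az webapp log tail --resource-group <resource-group-name> --name <app-name>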
Once log streaming has started, refresh the Azure app in the browser to get some web traffic. You can now see
console logs piped to the terminal. If you don't see console logs immediately, check again in 30 seconds.
To stop log streaming at any time, type Ctrl+C.
To access the console logs generated from inside your application code in App Service, turn on diagnostics
logging by running the following command in the Cloud Shell:
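A form of the command consistent with the --level values described next (the filesystem target is an assumption):
az webapp log config --resource-group <resource-group-name> --name <app-name> --application-logging filesystem --level Verbose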
Possible values for --level are: Error , Warning , Info , and Verbose . Each subsequent level includes the
previous level. For example: Error includes only error messages, and Verbose includes all messages.
Once diagnostic logging is turned on, run the following command to see the log stream:
az webapp log tail --resource-group <resource-group-name> --name <app-name>
NOTE
You can also inspect the log files from the browser at https://<app-name>.scm.azurewebsites.net/api/logs/docker .
TIP
A PHP application can use the standard error_log() to output to the console. The sample application uses this approach in
app/Http/routes.php.
As a web framework, Laravel uses Monolog as the logging provider. To see how to get Monolog to output messages to
the console, see PHP: How to use monolog to log to console (php://out).
You see your app's Overview page. Here, you can perform basic management tasks like stop, start, restart,
browse, and delete.
The left menu provides pages for configuring your app.
Clean up resources
In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these
resources in the future, delete the resource group by running the following command in the Cloud Shell:
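For example:
az group delete --name myResourceGroup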
Next steps
In this tutorial, you learned how to:
Create a MySQL database in Azure
Connect a PHP app to MySQL
Deploy the app to Azure
Update the data model and redeploy the app
Stream diagnostic logs from Azure
Manage the app in the Azure portal
Advance to the next tutorial to learn how to map a custom DNS name to the app.
Tutorial: Map custom DNS name to your app
Or, check out other resources:
Configure PHP app
Tutorial: Build a Java Spring Boot web app with
Azure App Service on Linux and Azure Cosmos DB
2/14/2021 • 6 minutes to read • Edit Online
This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on
Azure. When you are finished, you will have a Spring Boot application storing data in Azure Cosmos DB running
on Azure App Service on Linux.
Prerequisites
Azure CLI, installed on your own computer.
Git
Java JDK
Maven
az login
az account set -s <your-subscription-id>
Create an Azure Cosmos DB account with the GlobalDocumentDB kind. The name of the account must use only lowercase
letters. Note down the documentEndpoint field in the response from the command.
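A sketch of the command, using the resource-group and account-name placeholders from the script below:
az cosmosdb create --resource-group <your-azure-group-name> --name <your-cosmos-db-name> --kind GlobalDocumentDB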
cd initial/spring-todo-app
cp set-env-variables-template.sh .scripts/set-env-variables.sh
Edit .scripts/set-env-variables.sh in your favorite editor and supply Azure Cosmos DB connection info. For the
App Service Linux configuration, use the same region as before ( your-resource-group-region ) and resource
group ( your-azure-group-name ) used when creating the Cosmos DB database. Choose a WEBAPP_NAME that is
unique since it cannot duplicate any web app name in any Azure deployment.
export COSMOSDB_URI=<put-your-COSMOS-DB-documentEndpoint-URI-here>
export COSMOSDB_KEY=<put-your-COSMOS-DB-primaryMasterKey-here>
export COSMOSDB_DBNAME=<put-your-COSMOS-DB-name-here>
source .scripts/set-env-variables.sh
These environment variables are used in application.properties in the TODO list app. The fields in the
properties file set up a default repository configuration for Spring Data:
azure.cosmosdb.uri=${COSMOSDB_URI}
azure.cosmosdb.key=${COSMOSDB_KEY}
azure.cosmosdb.database=${COSMOSDB_DBNAME}
The repository interface and the TodoItem entity used by the sample look like the following:
@Repository
public interface TodoItemRepository extends DocumentDbRepository<TodoItem, String> {
}
@Document
public class TodoItem {
private String id;
private String description;
private String owner;
private boolean finished;
Run the sample app
Use Maven to run the sample.
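Assuming the environment variables above are set, the app can be started from initial/spring-todo-app with:
mvn package spring-boot:run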
[INFO] SimpleUrlHandlerMapping - Mapped URL path [/webjars/**] onto handler of type [class
org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
[INFO] SimpleUrlHandlerMapping - Mapped URL path [/**] onto handler of type [class
org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
[INFO] WelcomePageHandlerMapping - Adding welcome page: class path resource [static/index.html]
2018-10-28 15:04:32.101 INFO 7673 --- [ main] c.m.azure.documentdb.DocumentClient :
Initializing DocumentClient with serviceEndpoint [https://sample-cosmos-db-westus.documents.azure.com:443/],
ConnectionPolicy [ConnectionPolicy [requestTimeout=60, mediaRequestTimeout=300, connectionMode=Gateway,
mediaReadMode=Buffered, maxPoolSize=800, idleConnectionTimeout=60, userAgentSuffix=;spring-
data/2.0.6;098063be661ab767976bd5a2ec350e978faba99348207e8627375e8033277cb2,
retryOptions=com.microsoft.azure.documentdb.RetryOptions@6b9fb84d, enableEndpointDiscovery=true,
preferredLocations=null]], ConsistencyLevel [null]
[INFO] AnnotationMBeanExporter - Registering beans for JMX exposure on startup
[INFO] TomcatWebServer - Tomcat started on port(s): 8080 (http) with context path ''
[INFO] TodoApplication - Started TodoApplication in 45.573 seconds (JVM running for 76.534)
You can access Spring TODO App locally using this link once the app is started: http://localhost:8080/ .
If you see exceptions instead of the "Started TodoApplication" message, check that the bash script in the
previous step exported the environment variables properly and that the values are correct for the Azure Cosmos
DB database you created.
<!--*************************************************-->
<!-- Deploy to Java SE in App Service Linux -->
<!--*************************************************-->
<plugin>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-webapp-maven-plugin</artifactId>
<version>1.11.0</version>
<configuration>
<schemaVersion>v2</schemaVersion>
<appSettings>
<property>
<name>COSMOSDB_URI</name>
<value>${COSMOSDB_URI}</value>
</property>
<property>
<name>COSMOSDB_KEY</name>
<value>${COSMOSDB_KEY}</value>
</property>
<property>
<name>COSMOSDB_DBNAME</name>
<value>${COSMOSDB_DBNAME}</value>
</property>
<property>
<name>JAVA_OPTS</name>
<value>-Dserver.port=80</value>
</property>
</appSettings>
</configuration>
</plugin>
...
</plugins>
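With the plugin configured, deployment is a single Maven goal, run from the app directory:
mvn package azure-webapp:deploy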
The output contains the URL to your deployed application (in this example,
https://spring-todo-app.azurewebsites.net ). You can copy this URL into your web browser or run the following
command in your Terminal window to load your app.
curl https://spring-todo-app.azurewebsites.net
You should see the app running with the remote URL in the address bar:
Stream diagnostic logs
To access the console logs generated from inside your application code in App Service, turn on diagnostics
logging by running the following command in the Cloud Shell:
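A form of the command consistent with the --level values described next (placeholders match the setup script earlier in this tutorial):
az webapp log config --resource-group <your-azure-group-name> --name <your-webapp-name> --application-logging filesystem --level Verbose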
Possible values for --level are: Error , Warning , Info , and Verbose . Each subsequent level includes the
previous level. For example: Error includes only error messages, and Verbose includes all messages.
Once diagnostic logging is turned on, run the following command to see the log stream:
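For example:
az webapp log tail --resource-group <your-azure-group-name> --name <your-webapp-name>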
NOTE
You can also inspect the log files from the browser at https://<app-name>.scm.azurewebsites.net/api/logs/docker .
Clean up resources
If you don't need these resources for another tutorial (see Next steps), you can delete them by running the
following command in the Cloud Shell:
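For example, using the resource-group placeholder from earlier:
az group delete --name <your-azure-group-name>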
Next steps
See also: Azure for Java Developers, Spring Boot, Spring Data for Cosmos DB, Azure Cosmos DB, and App Service Linux.
Learn more about running Java apps on App Service on Linux in the developer guide.
Java in App Service Linux dev guide
Tutorial: Create and Manage Linux VMs with the
Azure CLI
11/2/2020 • 8 minutes to read • Edit Online
Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers
basic Azure virtual machine deployment items such as selecting a VM size, selecting a VM image, and deploying
a VM. You learn how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. To open
the Cloud Shell, select Try it from the top of any code block.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
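The examples below assume a resource group like the following (the name matches the commands in this tutorial; the region is illustrative):
az group create --name myResourceGroupVM --location eastus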
The resource group is specified when creating or modifying a VM, which can be seen throughout this tutorial.
az vm create \
--resource-group myResourceGroupVM \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information
about the VM. Take note of the publicIpAddress; this address can be used to access the virtual machine.
{
"fqdns": "",
"id": "/subscriptions/d5b9d4b7-6fc1-0000-0000-
000000000000/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "52.174.34.95",
"resourceGroup": "myResourceGroupVM"
}
Connect to VM
You can now connect to the VM with SSH in the Azure Cloud Shell or from your local computer. Replace the
example IP address with the publicIpAddress noted in the previous step.
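For example, using the admin username from the create step and the example IP address from the output above:
ssh azureuser@52.174.34.95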
Once logged in to the VM, you can install and configure applications. When you are finished, you close the SSH
session as normal:
exit
Understand VM images
The Azure marketplace includes many images that can be used to create VMs. In the previous steps, a virtual
machine was created using an Ubuntu image. In this step, the Azure CLI is used to search the marketplace for a
CentOS image, which is then used to deploy a second virtual machine.
To see a list of the most commonly used images, use the az vm image list command.
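For example (table output keeps the list readable):
az vm image list --output table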
A full list can be seen by adding the --all argument. The image list can also be filtered by --publisher or
--offer . In this example, the list is filtered for all images with an offer that matches CentOS .
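A matching command sketch:
az vm image list --offer CentOS --all --output table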
To deploy a VM using a specific image, take note of the value in the Urn column, which consists of the publisher,
offer, SKU, and optionally a version number to identify the image. When specifying the image, the image version
number can be replaced with “latest”, which selects the latest version of the distribution. In this example, the
--image argument is used to specify the latest version of a CentOS 6.5 image.
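A sketch of such a command (the URN shown is the OpenLogic CentOS 6.5 image; treat the exact URN as an assumption):
az vm create --resource-group myResourceGroupVM --name myVM2 --image OpenLogic:CentOS:6.5:latest --generate-ssh-keys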
Understand VM sizes
A virtual machine size determines the amount of compute resources such as CPU, GPU, and memory that are
made available to the virtual machine. Virtual machines need to be sized appropriately for the expected work
load. If workload increases, an existing virtual machine can be resized.
VM Sizes
The following table categorizes sizes into use cases.
General purpose: B, Dsv3, Dv3, DSv2, Dv2, Av2, DC. Balanced CPU-to-memory. Ideal for dev/test and small to medium applications and data solutions.
Memory optimized: Esv3, Ev3, M, DSv2, Dv2. High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics.
Storage optimized: Lsv2, Ls. High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
GPU: NV, NVv2, NC, NCv2, NCv3, ND. Specialized VMs targeted for heavy graphic rendering and video editing.
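The sizes available in a region can be listed with az vm list-sizes; the partial output below comes from such a listing (the region is illustrative):
az vm list-sizes --location eastus --output table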
Partial output:
MaxDataDiskCount    MemoryInMb    Name             NumberOfCores    OsDiskSizeInMb    ResourceDiskSizeInMb
------------------  ------------  ---------------  ---------------  ----------------  --------------------
2                   3584          Standard_DS1     1                1047552           7168
4                   7168          Standard_DS2     2                1047552           14336
8                   14336         Standard_DS3     4                1047552           28672
16                  28672         Standard_DS4     8                1047552           57344
4                   14336         Standard_DS11    2                1047552           28672
8                   28672         Standard_DS12    4                1047552           57344
16                  57344         Standard_DS13    8                1047552           114688
32                  114688        Standard_DS14    16               1047552           229376
1                   768           Standard_A0      1                1047552           20480
2                   1792          Standard_A1      1                1047552           71680
4                   3584          Standard_A2      2                1047552           138240
8                   7168          Standard_A3      4                1047552           291840
4                   14336         Standard_A5      2                1047552           138240
16                  14336         Standard_A4      8                1047552           619520
8                   28672         Standard_A6      4                1047552           291840
16                  57344         Standard_A7      8                1047552           619520
To create a VM with a specific size, use the --size argument with az vm create:
az vm create \
--resource-group myResourceGroupVM \
--name myVM3 \
--image UbuntuLTS \
--size Standard_F4s \
--generate-ssh-keys
Resize a VM
After a VM has been deployed, it can be resized to increase or decrease resource allocation. You can view the
current size of a VM with az vm show:
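For example, using a JMESPath query to return just the size:
az vm show --resource-group myResourceGroupVM --name myVM --query hardwareProfile.vmSize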
Before resizing a VM, check if the desired size is available on the current Azure cluster. The az vm list-vm-resize-options command returns the list of sizes.
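For example:
az vm list-vm-resize-options --resource-group myResourceGroupVM --name myVM --output table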
If the desired size is not on the current cluster, the VM needs to be deallocated before the resize operation can
occur. Use the az vm deallocate command to stop and deallocate the VM. Note that when the VM is powered back
on, any data on the temp disk may be removed. The public IP address also changes unless a static IP address is
being used.
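A sketch of the deallocate-resize-start sequence (the target size is illustrative):
az vm deallocate --resource-group myResourceGroupVM --name myVM
az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_GS1
az vm start --resource-group myResourceGroupVM --name myVM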
VM power states
An Azure VM can have one of many power states. This state represents the current state of the VM from the
standpoint of the hypervisor.
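To retrieve the state of a specific VM, a command like az vm get-instance-view can be used (the query index follows the pattern used elsewhere in this guide):
az vm get-instance-view --resource-group myResourceGroupVM --name myVM --query instanceView.statuses[1] --output table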
To retrieve the power state of all the VMs in your subscription, use the Virtual Machines - List All API with
parameter statusOnly set to true.
Management tasks
During the life-cycle of a virtual machine, you may want to run management tasks such as starting, stopping, or
deleting a virtual machine. Additionally, you may want to create scripts to automate repetitive or complex tasks.
Using the Azure CLI, many common management tasks can be run from the command line or in scripts.
Get IP address
This command returns the private and public IP addresses of a virtual machine.
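For example:
az vm list-ip-addresses --resource-group myResourceGroupVM --name myVM --output table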
Next steps
In this tutorial, you learned about basic VM creation and management such as how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
Advance to the next tutorial to learn about VM disks.
Create and Manage VM disks
Tutorial: Create and Manage Windows VMs with
Azure PowerShell
11/2/2020 • 7 minutes to read • Edit Online
Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers
basic Azure virtual machine (VM) deployment tasks like selecting a VM size, selecting a VM image, and
deploying a VM. You learn how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
New-AzResourceGroup `
-ResourceGroupName "myResourceGroupVM" `
-Location "EastUS"
The resource group is specified when creating or modifying a VM, which can be seen throughout this tutorial.
Create a VM
When creating a VM, several options are available like operating system image, network configuration, and
administrative credentials. This example creates a VM named myVM, running the default version of Windows
Server 2016 Datacenter.
Set the username and password needed for the administrator account on the VM with Get-Credential:
$cred = Get-Credential
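The VM itself can then be created with New-AzVm; this sketch mirrors the myVM2 command later in this tutorial (the network resource names are assumptions):
New-AzVm `
    -ResourceGroupName "myResourceGroupVM" `
    -Name "myVM" `
    -Location "EastUS" `
    -VirtualNetworkName "myVnet" `
    -SubnetName "mySubnet" `
    -SecurityGroupName "myNetworkSecurityGroup" `
    -PublicIpAddressName "myPublicIpAddress" `
    -Credential $cred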
Connect to VM
After the deployment has completed, create a remote desktop connection with the VM.
Run the following commands to return the public IP address of the VM. Take note of this IP Address so you can
connect to it with your browser to test web connectivity in a future step.
Get-AzPublicIpAddress `
-ResourceGroupName "myResourceGroupVM" | Select IpAddress
Use the following command, on your local machine, to create a remote desktop session with the VM. Replace
the IP address with the publicIPAddress of your VM. When prompted, enter the credentials used when creating
the VM.
mstsc /v:<publicIpAddress>
In the Windows Security window, select More choices and then Use a different account . Type the
username and password you created for the VM and then click OK .
Use the Get-AzVMImageOffer cmdlet to return a list of image offers. With this command, the returned list is filtered on
the specified publisher named MicrosoftWindowsServer :
Get-AzVMImageOffer `
-Location "EastUS" `
-PublisherName "MicrosoftWindowsServer"
The Get-AzVMImageSku command will then filter on the publisher and offer name to return a list of image
names.
Get-AzVMImageSku `
-Location "EastUS" `
-PublisherName "MicrosoftWindowsServer" `
-Offer "WindowsServer"
This information can be used to deploy a VM with a specific image. This example deploys a VM using the latest
version of a Windows Server 2016 with Containers image.
New-AzVm `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM2" `
-Location "EastUS" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress2" `
-ImageName "MicrosoftWindowsServer:WindowsServer:2016-Datacenter-with-Containers:latest" `
-Credential $cred `
-AsJob
The -AsJob parameter creates the VM as a background task, so the PowerShell prompt returns to you. You can
view details of background jobs with the Get-Job cmdlet.
Understand VM sizes
The VM size determines the amount of compute resources like CPU, GPU, and memory that are made available
to the VM. Virtual machines should be created using a VM size appropriate for the workload. If a workload
increases, an existing virtual machine can also be resized.
VM Sizes
The following table categorizes sizes into use cases.
General purpose: B, Dsv3, Dv3, DSv2, Dv2, Av2, DC. Balanced CPU-to-memory. Ideal for dev/test and small to medium applications and data solutions.
Memory optimized: Esv3, Ev3, M, DSv2, Dv2. High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics.
Storage optimized: Lsv2, Ls. High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
GPU: NV, NVv2, NC, NCv2, NCv3, ND. Specialized VMs targeted for heavy graphic rendering and video editing.
Resize a VM
After a VM has been deployed, it can be resized to increase or decrease resource allocation.
Before resizing a VM, check if the size you want is available on the current VM cluster. The Get-AzVMSize
command returns a list of sizes.
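For example:
Get-AzVMSize -ResourceGroupName "myResourceGroupVM" -VMName "myVM"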
If the size is available, the VM can be resized from a powered-on state; however, it is rebooted during the operation.
$vm = Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-VMName "myVM"
$vm.HardwareProfile.VmSize = "Standard_DS3_v2"
Update-AzVM `
-VM $vm `
-ResourceGroupName "myResourceGroupVM"
If the size you want isn't available on the current cluster, the VM needs to be deallocated before the resize
operation can occur. Deallocating a VM will remove any data on the temp disk, and the public IP address will
change unless a static IP address is being used.
Stop-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" -Force
$vm = Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-VMName "myVM"
$vm.HardwareProfile.VmSize = "Standard_E2s_v3"
Update-AzVM -VM $vm `
-ResourceGroupName "myResourceGroupVM"
Start-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name $vm.name
VM power states
An Azure VM can have one of many power states.
To get the state of a particular VM, use the Get-AzVM command. Be sure to specify a valid name for a VM and
resource group.
Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" `
-Status | Select @{n="Status"; e={$_.Statuses[1].Code}}
Status
------
PowerState/running
To retrieve the power state of all the VMs in your subscription, use the Virtual Machines - List All API with
parameter statusOnly set to true.
Management tasks
During the lifecycle of a VM, you may want to run management tasks like starting, stopping, or deleting a VM.
Additionally, you may want to create scripts to automate repetitive or complex tasks. Using Azure PowerShell,
many common management tasks can be run from the command line or in scripts.
Stop a VM
Stop and deallocate a VM with Stop-AzVM:
Stop-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" -Force
If you want to keep the VM in a provisioned state, use the -StayProvisioned parameter.
Start a VM
Start-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM"
Remove-AzResourceGroup `
-Name "myResourceGroupVM" `
-Force
Next steps
In this tutorial, you learned about basic VM creation and management such as how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
Advance to the next tutorial to learn about VM disks.
Create and Manage VM disks
Create a function triggered by Azure Queue
storage
11/2/2020 • 5 minutes to read • Edit Online
Learn how to create a function that is triggered when messages are submitted to an Azure Storage queue.
Prerequisites
An Azure subscription. If you don't have one, create a free account before you begin.
Function App name: a globally unique name that identifies your new function app. Valid characters are a-z (case insensitive), 0-9 , and - .
4. Select Next: Hosting. On the Hosting page, enter the following settings.
5. Select Next: Monitoring. On the Monitoring page, enter the following settings.
Storage account connection: AzureWebJobsStorage. You can use the storage account connection already being used by your function app, or create a new one.
Next, you connect to your Azure storage account and create the myqueue-items storage queue.
Create the queue
1. In your function, on the Overview page, select your resource group.
2. In a separate browser window, go to your resource group in the Azure portal, and select the storage
account.
3. Select Queues , and then select the myqueue-items container.
4. Select Add message , and type "Hello World!" in Message text . Select OK .
5. Wait for a few seconds, then go back to your function logs and verify that the new message has been
read from the queue.
6. Back in your storage queue, select Refresh and verify that the message has been processed and is no
longer in the queue.
Clean up resources
Other quickstarts in this collection build upon this quickstart. If you plan to work with subsequent quickstarts,
tutorials, or with any of the services you have created in this quickstart, do not clean up the resources.
Resources in Azure refer to function apps, functions, storage accounts, and so forth. They're grouped into
resource groups, and you can delete everything in a group by deleting the group.
You created resources to complete these quickstarts. You may be billed for these resources, depending on your
account status and service pricing. If you don't need the resources anymore, here's how to delete them:
1. In the Azure portal, go to the Resource group page.
To get to that page from the function app page, select the Overview tab and then select the link under
Resource group .
To get to that page from the dashboard, select Resource groups , and then select the resource group
that you used for this quickstart.
2. In the Resource group page, review the list of included resources, and verify that they're the ones you
want to delete.
3. Select Delete resource group , and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can
also select the bell icon at the top of the page to view the notification.
Next steps
You have created a function that runs when a message is added to a storage queue. For more information about
Queue storage triggers, see Azure Functions Storage queue bindings.
Now that you have created your first function, let's add an output binding to the function that writes a
message back to another queue.
Add messages to an Azure Storage queue using Functions
Run a custom container in Azure
2/14/2021 • 7 minutes to read • Edit Online
Azure App Service provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS.
The preconfigured Windows container environment locks down the operating system from administrative
access, software installations, changes to the global assembly cache, and so on. For more information, see
Operating system functionality on Azure App Service. If your application requires more access than the
preconfigured environment allows, you can deploy a custom Windows container instead.
This quickstart shows how to deploy an ASP.NET app, in a Windows image, to Docker Hub from Visual Studio.
You run the app in a custom container in Azure App Service.
NOTE
Windows Containers is limited to Azure Files and does not currently support Azure Blob.
Prerequisites
To complete this tutorial:
Sign up for a Docker Hub account
Install Docker for Windows.
Switch Docker to run Windows containers.
Install Visual Studio 2019 with the ASP.NET and web development and Azure development
workloads. If you've installed Visual Studio 2019 already:
Install the latest updates in Visual Studio by selecting Help > Check for Updates .
Add the workloads in Visual Studio by selecting Tools > Get Tools and Features .
6. If the Dockerfile file isn't opened automatically, open it from the Solution Explorer .
7. You need a supported parent image. Change the parent image by replacing the FROM line with the
following code and save the file:
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019
8. From the Visual Studio menu, select Debug > Start Without Debugging to run the web app locally.
If you have a custom image elsewhere for your web application, such as in Azure Container Registry or in
any other private repository, you can configure it here.
7. Select Review and Create and then Create and wait for Azure to create the required resources.
1. Click Go to resource .
2. In the overview of this resource, follow the link next to URL .
A new browser page opens to the following page:
Wait a few minutes and try again, until you get the default ASP.NET home page:
Congratulations! You're running your first custom Windows container in Azure App Service.
You can monitor the container startup in real time from the log stream endpoint:
https://<app_name>.scm.azurewebsites.net/api/logstream
<div class="jumbotron">
<h1>ASP.NET in Azure!</h1>
<p class="lead">This is a simple app that we've built that demonstrates how to deploy a .NET app
to Azure App Service.</p>
</div>
3. To redeploy to Azure, right-click the myfirstazurewebapp project in Solution Explorer and choose
Publish .
4. On the publish page, select Publish and wait for publishing to complete.
5. To tell App Service to pull in the new image from Docker Hub, restart the app. Back in the app page in the
portal, click Restart > Yes .
Browse to the container app again. As you refresh the webpage, the app should revert to the "Starting up" page
at first, then display the updated webpage again after a few minutes.
Next steps
Migrate to Windows container in Azure
Or, check out other resources:
Configure custom container
App Service on Linux provides pre-defined application stacks on Linux with support for languages such as .NET,
PHP, Node.js and others. You can also use a custom Docker image to run your web app on an application stack
that is not already defined in Azure. This quickstart shows you how to deploy an image from an Azure Container
Registry (ACR) to App Service.
Prerequisites
An Azure account
Docker
Visual Studio Code
The Azure App Service extension for VS Code. You can use this extension to create, manage, and deploy Linux
Web Apps on the Azure Platform as a Service (PaaS).
The Docker extension for VS Code. You can use this extension to simplify the management of local Docker
images and commands and to deploy built app images to Azure.
Create an image
To complete this quickstart, you will need a suitable web app image stored in an Azure Container Registry.
Follow the instructions in Quickstart: Create a private container registry using the Azure portal, but use the
mcr.microsoft.com/azuredocs/go image instead of the hello-world image. For reference, the sample Dockerfile
is found in Azure Samples repo.
IMPORTANT
Be sure to set the Admin User option to Enable when you create the container registry. You can also set it from the
Access keys section of your registry page in the Azure portal. This setting is required for App Service access.
Sign in
Next, launch VS Code and log into your Azure account using the App Service extension. To do this, select the
Azure logo in the Activity Bar, navigate to the APP SERVICE explorer, then select Sign in to Azure and follow
the instructions.
Check prerequisites
Now you can check whether you have all the prerequisites installed and configured properly.
In VS Code, you should see your Azure email address in the Status Bar and your subscription in the APP
SERVICE explorer.
Next, verify that you have Docker installed and running. The following command will display the Docker version
if it is running.
docker --version
Finally, ensure that your Azure Container Registry is connected. To do this, select the Docker logo in the Activity
Bar, then navigate to REGISTRIES .
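If you prefer the command line, a quick cross-check is to sign in to the registry and pull the image with the Azure CLI and Docker. This is a minimal sketch: <registry_name> is a placeholder, and the image path assumes you tagged and pushed the sample image to your registry as described in the container registry quickstart.
az acr login --name <registry_name>
docker pull <registry_name>.azurecr.io/azuredocs/go:latest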
CLI samples for Azure App Service
The following table includes links to bash scripts built using the Azure CLI.
Create app
Create an app and deploy files with FTP | Creates an App Service app and deploys a file to it using FTP.
Create an app and deploy code from GitHub | Creates an App Service app and deploys code from a public GitHub repository.
Create an app with continuous deployment from GitHub | Creates an App Service app with continuous publishing from a GitHub repository you own.
Create an app and deploy code from a local Git repository | Creates an App Service app and configures code push from a local Git repository.
Create an app and deploy code to a staging environment | Creates an App Service app with a deployment slot for staging code changes.
Create an ASP.NET Core app in a Docker container | Creates an App Service app on Linux and loads a Docker image from Docker Hub.
Create an app and expose it with a Private Endpoint | Creates an App Service app and a Private Endpoint.
Configure app
Map a custom domain to an app | Creates an App Service app and maps a custom domain name to it.
Bind a custom TLS/SSL certificate to an app | Creates an App Service app and binds the TLS/SSL certificate of a custom domain name to it.
Scale app
Scale an app manually | Creates an App Service app and scales it across two instances.
Scale an app worldwide with a high-availability architecture | Creates two App Service apps in two different geographical regions and makes them available through a single endpoint using Azure Traffic Manager.
Protect app
Integrate with Azure Application Gateway | Creates an App Service app and integrates it with Application Gateway using service endpoint and access restrictions.
Connect app to resources
Connect an app to a SQL Database | Creates an App Service app and a database in Azure SQL Database, then adds the database connection string to the app settings.
Connect an app to a storage account | Creates an App Service app and a storage account, then adds the storage connection string to the app settings.
Connect an app to an Azure Cache for Redis | Creates an App Service app and an Azure Cache for Redis, then adds the Redis connection details to the app settings.
Connect an app to Cosmos DB | Creates an App Service app and a Cosmos DB, then adds the Cosmos DB connection details to the app settings.
Back up and restore app
Back up an app | Creates an App Service app and creates a one-time backup for it.
Create a scheduled backup for an app | Creates an App Service app and creates a scheduled backup for it.
Restore an app from a backup | Restores an App Service app from a backup.
Monitor app
Monitor an app with web server logs | Creates an App Service app, enables logging for it, and downloads the logs to your local machine.
PowerShell samples for Azure App Service
11/2/2020 • 2 minutes to read • Edit Online
The following table includes links to PowerShell scripts built using Azure PowerShell.
Create app
Create an app with deployment from GitHub | Creates an App Service app that pulls code from GitHub.
Create an app with continuous deployment from GitHub | Creates an App Service app that continuously deploys code from GitHub.
Create an app and deploy code with FTP | Creates an App Service app and uploads files from a local directory using FTP.
Create an app and deploy code from a local Git repository | Creates an App Service app and configures code push from a local Git repository.
Create an app and deploy code to a staging environment | Creates an App Service app with a deployment slot for staging code changes.
Create an app and expose your app with a Private Endpoint | Creates an App Service app with a Private Endpoint.
Configure app
Map a custom domain to an app | Creates an App Service app and maps a custom domain name to it.
Bind a custom TLS/SSL certificate to an app | Creates an App Service app and binds the TLS/SSL certificate of a custom domain name to it.
Scale app
Scale an app manually | Creates an App Service app and scales it across two instances.
Scale an app worldwide with a high-availability architecture | Creates two App Service apps in two different geographical regions and makes them available through a single endpoint using Azure Traffic Manager.
Connect app to resources
Connect an app to a SQL Database | Creates an App Service app and a database in Azure SQL Database, then adds the database connection string to the app settings.
Connect an app to a storage account | Creates an App Service app and a storage account, then adds the storage connection string to the app settings.
Back up and restore app
Back up an app | Creates an App Service app and creates a one-time backup for it.
Create a scheduled backup for an app | Creates an App Service app and creates a scheduled backup for it.
Restore an app from backup | Restores an app from a previously completed backup.
Restore a backup across subscriptions | Restores a web app from a backup in another subscription.
Monitor app
Monitor an app with web server logs | Creates an App Service app, enables logging for it, and downloads the logs to your local machine.
What is Azure Cost Management + Billing?
2/14/2021 • 7 minutes to read • Edit Online
By using the Microsoft cloud, you can significantly improve the technical performance of your business
workloads. It can also reduce your costs and the overhead required to manage organizational assets. However,
the business opportunity creates a risk because of the potential for waste and inefficiencies that are introduced into your cloud deployments. Azure Cost Management + Billing is a suite of tools provided by Microsoft that helps you analyze, manage, and optimize the costs of your workloads. Using the suite helps ensure that your organization is taking advantage of the benefits provided by the cloud.
You can think of your Azure workloads like the lights in your home. When you leave to go out for the day, are
you leaving the lights on? Could you use different bulbs that are more efficient to help reduce your monthly
energy bill? Do you have more lights in one room than are needed? You can use Azure Cost Management +
Billing to apply a similar thought process to the workloads used by your organization.
With Azure products and services, you only pay for what you use. As you create and use Azure resources, you’re
charged for the resources. Because of the deployment ease for new resources, the costs of your workloads can
jump significantly without proper analysis and monitoring. You use Azure Cost Management + Billing features
to:
Conduct billing administrative tasks such as paying your bill
Manage billing access to costs
Download cost and usage data that was used to generate your monthly invoice
Proactively apply data analysis to your costs
Set spending thresholds
Identify opportunities for workload changes that can optimize your spending
To learn more about how to approach cost management as an organization, take a look at the Azure Cost
Management best practices article.
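As one concrete example of setting a spending threshold, you can create a budget. The following is a hedged Azure CLI sketch using the az consumption budget command group; the budget name, amount, and dates are placeholders, and exact flags can vary by CLI version:
az consumption budget create \
  --budget-name monthly-cap \
  --amount 1000 \
  --category cost \
  --time-grain monthly \
  --start-date 2021-03-01 \
  --end-date 2022-02-28
A budget by itself doesn't stop spending; it gives you a threshold against which alerts and actions can be configured.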
App Service overview
Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You
can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run
and scale with ease on both Windows and Linux-based environments.
App Service not only adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management, but also lets you take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domains, and TLS/SSL certificates.
With App Service, you pay for the Azure compute resources you use. The compute resources you use are
determined by the App Service plan that you run your apps on. For more information, see Azure App Service
plans overview.
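As a minimal sketch of this model, the Azure CLI commands below create a plan and a web app on it. The resource names, region, SKU, and runtime string are illustrative assumptions, not fixed values:
# Create a resource group to hold the app resources
az group create --name my-rg --location eastus
# Create a Linux App Service plan (B1 is one example paid tier)
az appservice plan create --name my-plan --resource-group my-rg --sku B1 --is-linux
# Create the web app on that plan with an example Node.js runtime
az webapp create --name my-unique-app --plan my-plan --resource-group my-rg --runtime "NODE|14-lts"
The plan's SKU (B1 here) is what determines the compute you're billed for; apps placed on the same plan share those resources.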
Next steps
Create your first web app.
ASP.NET Core (on Windows or Linux)
ASP.NET (on Windows)
PHP (on Windows or Linux)
Ruby (on Linux)
Node.js (on Windows or Linux)
Java (on Windows or Linux)
Python (on Linux)
HTML (on Windows or Linux)
Custom container (Windows or Linux)
Linux virtual machines in Azure
2/14/2021 • 6 minutes to read • Edit Online
Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure
offers. Typically, you choose a VM when you need more control over the computing environment than the other
choices offer. This article gives you information about what you should consider before you create a VM, how
you create it, and how you manage it.
An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware
that runs it. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and
installing the software that runs on it.
Azure virtual machines can be used in various ways. Some examples are:
Development and test – Azure VMs offer a quick and easy way to create a computer with specific
configurations required to code and test an application.
Applications in the cloud – Because demand for your application can fluctuate, it might make economic
sense to run it on a VM in Azure. You pay for extra VMs when you need them and shut them down when you
don’t.
Extended datacenter – Virtual machines in an Azure virtual network can easily be connected to your
organization’s network.
The number of VMs that your application uses can scale up and out to whatever is required to meet your needs.
METHOD | DESCRIPTION
Azure portal | Select a location from the list when you create a VM.
Availability
Azure announced an industry-leading single-instance virtual machine Service Level Agreement of 99.9%, provided you deploy the VM with premium storage for all disks. In order for your deployment to qualify for the standard 99.95% VM Service Level Agreement, you still need to deploy two or more VMs running your workload inside an availability set. An availability set ensures that your VMs are distributed across multiple fault domains in the Azure datacenters and are deployed onto hosts with different maintenance windows. The full Azure SLA explains the guaranteed availability of Azure as a whole.
VM Size
The size of the VM that you use is determined by the workload that you want to run. The size that you choose
then determines factors such as processing power, memory, and storage capacity. Azure offers a wide variety of
sizes to support many types of uses.
Azure charges an hourly price based on the VM’s size and operating system. For partial hours, Azure charges
only for the minutes used. Storage is priced and charged separately.
VM Limits
Your subscription has default quota limits in place that could impact the deployment of many VMs for your project. The current limit on a per-subscription basis is 20 VMs per region. Limits can be raised by filing a support ticket requesting an increase.
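Both the available sizes and your remaining quota can be checked from the Azure CLI before you deploy; eastus is a placeholder region:
# List VM sizes available in a region
az vm list-sizes --location eastus --output table
# Show current compute quota usage for the region
az vm list-usage --location eastus --output table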
Managed Disks
Managed Disks handles Azure Storage account creation and management in the background for you, and
ensures that you do not have to worry about the scalability limits of the storage account. You specify the disk
size and the performance tier (Standard or Premium), and Azure creates and manages the disk. As you add disks
or scale the VM up and down, you don't have to worry about the storage being used. If you're creating new VMs,
use the Azure CLI or the Azure portal to create VMs with Managed OS and data disks. If you have VMs with
unmanaged disks, you can convert your VMs to be backed with Managed Disks.
You can also manage your custom images in one storage account per Azure region, and use them to create
hundreds of VMs in the same subscription. For more information about Managed Disks, see the Managed Disks
Overview.
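As a sketch, the following Azure CLI commands create a VM whose OS disk is a Premium managed disk and then attach a new managed data disk. All names are placeholder assumptions:
# Create a VM whose disks default to Premium managed storage
az vm create \
  --resource-group my-rg \
  --name my-vm \
  --image UbuntuLTS \
  --storage-sku Premium_LRS \
  --generate-ssh-keys
# Attach a new 128 GB Premium managed data disk to the VM
az vm disk attach \
  --resource-group my-rg \
  --vm-name my-vm \
  --name my-data-disk \
  --new \
  --size-gb 128 \
  --sku Premium_LRS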
Distributions
Microsoft Azure supports running a number of popular Linux distributions provided and maintained by several partners. You can find available distributions in the Azure Marketplace. Microsoft actively works with various Linux communities to add even more flavors to the Azure endorsed Linux distros list.
If your preferred Linux distro is not currently present in the gallery, you can "Bring your own Linux" VM by creating and uploading a Linux VHD in Azure.
Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure
runtime. For more information on Azure partner offers, see the following links:
Linux on Azure - Endorsed Distributions
SUSE - Azure Marketplace - SUSE Linux Enterprise Server
Red Hat - Azure Marketplace - Red Hat Enterprise Linux
Canonical - Azure Marketplace - Ubuntu Server
Debian - Azure Marketplace - Debian
FreeBSD - Azure Marketplace - FreeBSD
Flatcar - Azure Marketplace - Flatcar Container Linux
RancherOS - Azure Marketplace - RancherOS
Bitnami - Bitnami Library for Azure
Mesosphere - Azure Marketplace - Mesosphere DC/OS on Azure
Docker - Azure Marketplace - Docker images
Jenkins - Azure Marketplace - CloudBees Jenkins Platform
Cloud-init
To achieve a proper DevOps culture, all infrastructure must be code. When all the infrastructure lives in code it
can easily be recreated. Azure works with all the major automation tooling like Ansible, Chef, SaltStack, and
Puppet. Azure also has its own tooling for automation:
Azure Templates
Azure VMaccess
Azure supports cloud-init across most Linux distros that support it. We are actively working with our endorsed Linux distro partners to make cloud-init enabled images available in the Azure Marketplace. These images make your cloud-init deployments and configurations work seamlessly with VMs and virtual machine scale sets.
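For illustration, here is a minimal cloud-init sketch plus the CLI call that applies it at first boot. The file name, package choice, and resource names are assumptions.
Contents of cloud-init.txt:
#cloud-config
package_upgrade: true
packages:
  - nginx
Then create the VM, passing the file as custom data:
az vm create \
  --resource-group my-rg \
  --name my-vm \
  --image UbuntuLTS \
  --custom-data cloud-init.txt \
  --generate-ssh-keys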
Using cloud-init on Azure Linux VMs
Storage
Introduction to Microsoft Azure Storage
Add a disk to a Linux VM using the azure-cli
How to attach a data disk to a Linux VM in the Azure portal
Networking
Virtual Network Overview
IP addresses in Azure
Opening ports to a Linux VM in Azure
Create a Fully Qualified Domain Name in the Azure portal
Data residency
In Azure, the feature to enable storing customer data in a single region is currently only available in the Southeast Asia region (Singapore) of the Asia Pacific geo and the Brazil South (São Paulo State) region of the Brazil geo. For all other regions, customer data is stored within the geo. For more information, see the Trust Center.
Next steps
Create your first VM!
Portal
Azure CLI
PowerShell
Windows virtual machines in Azure
2/14/2021 • 5 minutes to read • Edit Online
Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure
offers. Typically, you choose a VM when you need more control over the computing environment than the other
choices offer. This article gives you information about what you should consider before you create a VM, how
you create it, and how you manage it.
An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware
that runs it. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and
installing the software that runs on it.
Azure virtual machines can be used in various ways. Some examples are:
Development and test – Azure VMs offer a quick and easy way to create a computer with specific
configurations required to code and test an application.
Applications in the cloud – Because demand for your application can fluctuate, it might make economic
sense to run it on a VM in Azure. You pay for extra VMs when you need them and shut them down when you
don’t.
Extended datacenter – Virtual machines in an Azure virtual network can easily be connected to your
organization’s network.
The number of VMs that your application uses can scale up and out to whatever is required to meet your needs.
METHOD | DESCRIPTION
Azure portal | Select a location from the list when you create a VM.
Availability
Azure announced an industry-leading single-instance virtual machine Service Level Agreement of 99.9%, provided you deploy the VM with premium storage for all disks. In order for your deployment to qualify for the standard 99.95% VM Service Level Agreement, you still need to deploy two or more VMs running your workload inside an availability set. An availability set ensures that your VMs are distributed across multiple fault domains in the Azure datacenters and are deployed onto hosts with different maintenance windows. The full Azure SLA explains the guaranteed availability of Azure as a whole.
VM size
The size of the VM that you use is determined by the workload that you want to run. The size that you choose
then determines factors such as processing power, memory, and storage capacity. Azure offers a wide variety of
sizes to support many types of uses.
Azure charges an hourly price based on the VM’s size and operating system. For partial hours, Azure charges
only for the minutes used. Storage is priced and charged separately.
VM Limits
Your subscription has default quota limits in place that could impact the deployment of many VMs for your project. The current limit on a per-subscription basis is 20 VMs per region. Limits can be raised by filing a support ticket requesting an increase.
Operating system disks and images
Virtual machines use virtual hard disks (VHDs) to store their operating system (OS) and data. VHDs are also
used for the images you can choose from to install an OS.
Azure provides many marketplace images to use with various versions and types of Windows Server operating systems. Marketplace images are identified by image publisher, offer, SKU, and version (typically, version is specified as latest). Only 64-bit operating systems are supported. For more information on the supported guest operating systems, roles, and features, see Microsoft server software support for Microsoft Azure virtual machines.
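To see these identifiers in practice, you can list marketplace images from the Azure CLI; the publisher and offer filters below are one example:
az vm image list \
  --publisher MicrosoftWindowsServer \
  --offer WindowsServer \
  --all \
  --output table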
This table shows some ways that you can find the information for an image.
METHOD | DESCRIPTION
Azure portal | The values are automatically specified for you when you select an image to use.
You can choose to upload and use your own image; when you do, the publisher name, offer, and SKU aren't used.
Extensions
VM extensions give your VM additional capabilities through post deployment configuration and automated
tasks.
These common tasks can be accomplished using extensions (a CLI sketch follows this list):
Run custom scripts – The Custom Script Extension helps you configure workloads on the VM by running
your script when the VM is provisioned.
Deploy and manage configurations – The PowerShell Desired State Configuration (DSC) Extension helps
you set up DSC on a VM to manage configurations and environments.
Collect diagnostics data – The Azure Diagnostics Extension helps you configure the VM to collect
diagnostics data that can be used to monitor the health of your application.
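For instance, here is a hedged sketch of running a command on a Windows VM with the Custom Script Extension via the Azure CLI; the resource names and the command string are placeholders:
az vm extension set \
  --resource-group my-rg \
  --vm-name my-vm \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --settings '{"commandToExecute": "powershell.exe New-Item -Path C:\\test -ItemType Directory"}'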
Related resources
A VM relies on related resources, such as a resource group, virtual network, network interface, and disks. These resources need to exist or be created when the VM is created.
Next steps
Create your first VM!
Portal
PowerShell
Azure CLI
Overview of Azure Service Fabric
2/14/2021 • 2 minutes to read • Edit Online
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage
scalable and reliable microservices and containers. Service Fabric also addresses the significant challenges in
developing and managing cloud native applications.
A key differentiator of Service Fabric is its strong focus on building stateful services. You can use the Service
Fabric programming model or run containerized stateful services written in any language or code. You can
create Service Fabric clusters anywhere, including Windows Server and Linux on premises and other public
clouds, in addition to Azure.
Service Fabric powers many Microsoft services today, including Azure SQL Database, Azure Cosmos DB,
Cortana, Microsoft Power BI, Microsoft Intune, Azure Event Hubs, Azure IoT Hub, Dynamics 365, Skype for
Business, and many core Azure services.
Container orchestration
Service Fabric is Microsoft's container orchestrator for deploying and managing microservices across a cluster
of machines, benefiting from the lessons learned running Microsoft services at massive scale. Service Fabric can
deploy applications in seconds, at high density with hundreds or thousands of applications or containers per
machine. With Service Fabric, you can mix both services in processes and services in containers in the same
application.
Learn more about Service Fabric core concepts, programming models, application lifecycle, testing, clusters, and
health monitoring.
Compliance
Azure Service Fabric Resource Provider is available in all Azure regions and is compliant with all Azure
compliance certifications, including: SOC, ISO, PCI DSS, HIPAA, and GDPR. For a complete list, see Microsoft
Compliance Offerings.
Next steps
Create and deploy your first application on Azure Service Fabric:
Service Fabric quickstart
Service Fabric application scenarios
2/14/2021 • 4 minutes to read • Edit Online
Azure Service Fabric offers a reliable and flexible platform where you can write and run many types of business
applications and services. These applications and microservices can be stateless or stateful, and they're
resource-balanced across virtual machines to maximize efficiency.
The unique architecture of Service Fabric enables you to perform near real-time data analysis, in-memory
computation, parallel transactions, and event processing in your applications. You can easily scale your
applications in or out depending on your changing resource requirements.
For design guidance on building applications, read Microservices architecture on Azure Service Fabric and Best
practices for application design using Service Fabric.
Consider using the Service Fabric platform for the following types of applications:
Data gathering, processing, and IoT: Service Fabric handles large scale and has low latency through
its stateful services. It can help process data on millions of devices where the data for the device and the
computation are colocated.
Customers who have built IoT services by using Service Fabric include Honeywell, PCL Construction,
Crestron, BMW, Schneider Electric, and Mesh Systems.
Gaming and session-based interactive applications: Service Fabric is useful if your application
requires low-latency reads and writes, such as in online gaming or instant messaging. Service Fabric
enables you to build these interactive, stateful applications without having to create a separate store or
cache. Visit Azure gaming solutions for design guidance on using Service Fabric in gaming services.
Customers who have built gaming services include Next Games and Digamore. Customers who have
built interactive sessions include Honeywell with Hololens.
Data analytics and workflow processing: Applications that must reliably process events or streams
of data benefit from the optimized reads and writes in Service Fabric. Service Fabric also supports
application processing pipelines, where results must be reliable and passed on to the next processing
stage without any loss. These pipelines include transactional and financial systems, where data
consistency and computation guarantees are essential.
Customers who have built business workflow services include Zeiss Group, Quorum Business Solutions, and Société Générale.
Computation on data: Service Fabric enables you to build stateful applications that do intensive data
computation. Service Fabric allows the colocation of processing (computation) and data in applications.
Normally, when your application requires access to data, network latency associated with an external data
cache or storage tier limits the computation time. Stateful Service Fabric services eliminate that latency,
enabling more optimized reads and writes.
For example, consider an application that performs near real-time recommendation selections for
customers, with a round-trip time requirement of less than 100 milliseconds. The latency and
performance characteristics of Service Fabric services provide a responsive experience to the user,
compared with the standard implementation model of having to fetch the necessary data from remote
storage. The system is more responsive because the computation of recommendation selection is
colocated with the data and rules.
Customers who have built computation services include Solidsoft Reply and Infosupport.
Highly available services: Service Fabric provides fast failover by creating multiple secondary service
replicas. If a node, process, or individual service goes down due to hardware or other failure, one of the
secondary replicas is promoted to a primary replica with minimal loss of service.
Scalable services: Individual services can be partitioned, allowing for state to be scaled out across the
cluster. Individual services can also be created and removed on the fly. You can scale out services from a
few instances on a few nodes to thousands of instances on many nodes, and then scale them in again as
needed. You can use Service Fabric to build these services and manage their complete life cycles.
Next steps
Get started building stateless and stateful services with the Service Fabric Reliable Services and Reliable
Actors programming models.
Visit the Azure Architecture Center for guidance on building microservices on Azure.
Go to Azure Service Fabric application and cluster best practices for application design guidance.
See also:
Understanding microservices
Define and manage service state
Availability of services
Scale services
Partition services
How to create a Linux virtual machine with Azure
Resource Manager templates
11/2/2020 • 4 minutes to read • Edit Online
Learn how to create a Linux virtual machine (VM) by using an Azure Resource Manager template and the Azure CLI from Azure Cloud Shell. To create a Windows virtual machine, see Create a Windows virtual machine from a Resource Manager template.
An alternative is to deploy the template from the Azure portal. To open the template in the portal, select the
Deploy to Azure button.
Templates overview
Azure Resource Manager templates are JSON files that define the infrastructure and configuration of your Azure
solution. By using a template, you can repeatedly deploy your solution throughout its lifecycle and have
confidence your resources are deployed in a consistent state. To learn more about the format of the template
and how you construct it, see Quickstart: Create and deploy Azure Resource Manager templates by using the
Azure portal. To view the JSON syntax for resources types, see Define resources in Azure Resource Manager
templates.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"projectName": {
"type": "string",
"metadata": {
"description": "Specifies a name for generating resource names."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Specifies the location for all resources."
}
},
"adminUsername": {
"type": "string",
"type": "string",
"metadata": {
"description": "Specifies a username for the Virtual Machine."
}
},
"adminPublicKey": {
"type": "string",
"metadata": {
"description": "Specifies the SSH rsa public key file as a string. Use \"ssh-keygen -t rsa -b 2048\"
to generate your SSH key pairs."
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D2s_v3",
"metadata": {
"description": "description"
}
}
},
"variables": {
"vNetName": "[concat(parameters('projectName'), '-vnet')]",
"vNetAddressPrefixes": "10.0.0.0/16",
"vNetSubnetName": "default",
"vNetSubnetAddressPrefix": "10.0.0.0/24",
"vmName": "[concat(parameters('projectName'), '-vm')]",
"publicIPAddressName": "[concat(parameters('projectName'), '-ip')]",
"networkInterfaceName": "[concat(parameters('projectName'), '-nic')]",
"networkSecurityGroupName": "[concat(parameters('projectName'), '-nsg')]",
"networkSecurityGroupName2": "[concat(variables('vNetSubnetName'), '-nsg')]"
},
"resources": [
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2020-05-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": [
{
"name": "ssh_rule",
"properties": {
"description": "Locks inbound down to ssh default port 22.",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "22",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 123,
"direction": "Inbound"
}
}
]
}
},
{
"type": "Microsoft.Network/publicIPAddresses",
"apiVersion": "2020-05-01",
"name": "[variables('publicIPAddressName')]",
"location": "[parameters('location')]",
"properties": {
"publicIPAllocationMethod": "Dynamic"
},
"sku": {
"name": "Basic"
}
},
{
"comments": "Simple Network Security Group for subnet [variables('vNetSubnetName')]",
"comments": "Simple Network Security Group for subnet [variables('vNetSubnetName')]",
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2020-05-01",
"name": "[variables('networkSecurityGroupName2')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": [
{
"name": "default-allow-22",
"properties": {
"priority": 1000,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "22",
"protocol": "Tcp",
"sourceAddressPrefix": "*",
"sourcePortRange": "*",
"destinationAddressPrefix": "*"
}
}
]
}
},
{
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2020-05-01",
"name": "[variables('vNetName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName2'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('vNetAddressPrefixes')]"
]
},
"subnets": [
{
"name": "[variables('vNetSubnetName')]",
"properties": {
"addressPrefix": "[variables('vNetSubnetAddressPrefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName2'))]"
}
}
}
]
}
},
{
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2020-05-01",
"name": "[variables('networkInterfaceName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]",
"[resourceId('Microsoft.Network/virtualNetworks', variables('vNetName'))]",
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',
variables('publicIPAddressName'))]"
},
"subnet": {
"id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('vNetName'),
variables('vNetSubnetName'))]"
}
}
}
]
}
},
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2019-12-01",
"name": "[variables('vmName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkInterfaces', variables('networkInterfaceName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[variables('vmName')]",
"adminUsername": "[parameters('adminUsername')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[concat('/home/', parameters('adminUsername'), '/.ssh/authorized_keys')]",
"keyData": "[parameters('adminPublicKey')]"
}
]
}
}
},
"storageProfile": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "18.04-LTS",
"version": "latest"
},
"osDisk": {
"createOption": "fromImage"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('networkInterfaceName'))]"
}
]
}
}
}
]
}
To run the CLI script, select Try it to open Azure Cloud Shell. To paste the script, right-click the shell, and then select Paste:
echo "Enter the Resource Group name:" &&
read resourceGroupName &&
echo "Enter the location (i.e. centralus):" &&
read location &&
echo "Enter the project name (used for generating resource names):" &&
read projectName &&
echo "Enter the administrator username:" &&
read username &&
echo "Enter the SSH public key:" &&
read key &&
az group create --name $resourceGroupName --location "$location" &&
az deployment group create --resource-group $resourceGroupName \
  --template-uri https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/101-vm-sshkey/azuredeploy.json \
  --parameters projectName=$projectName adminUsername=$username adminPublicKey="$key" &&
az vm show --resource-group $resourceGroupName --name "$projectName-vm" --show-details --query publicIps --output tsv
The last Azure CLI command shows the public IP address of the newly created VM. You need the public IP
address to connect to the virtual machine. See the next section of this article.
In the previous example, you specified a template stored in GitHub. You can also download or create a template
and specify the local path with the --template-file parameter.
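For instance, a local deployment might look like the following sketch, assuming you saved the template as azuredeploy.json in the current directory and the parameter values shown are placeholders:
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters projectName=myProject adminUsername=azureuser adminPublicKey="$(cat ~/.ssh/id_rsa.pub)"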
Here are some additional resources:
To learn how to develop Resource Manager templates, see Azure Resource Manager documentation.
To see the Azure virtual machine schemas, see Azure template reference.
To see more virtual machine template samples, see Azure Quickstart templates.
When the VM is deployed, connect to it with SSH, using the public IP address from the previous command:
ssh <adminUsername>@<ipAddress>
Next steps
In this example, you created a basic Linux VM. For more Resource Manager templates that include application
frameworks or create more complex environments, browse the Azure Quickstart templates.
To learn more about creating templates, view the JSON syntax and properties for the resources types you
deployed:
Microsoft.Network/networkSecurityGroups
Microsoft.Network/publicIPAddresses
Microsoft.Network/virtualNetworks
Microsoft.Network/networkInterfaces
Microsoft.Compute/virtualMachines
Create a Windows virtual machine from a Resource
Manager template
2/14/2021 • 4 minutes to read • Edit Online
Learn how to create a Windows virtual machine by using an Azure Resource Manager template and Azure
PowerShell from Azure Cloud Shell. The template used in this article deploys a single virtual machine
running Windows Server in a new virtual network with a single subnet. For creating a Linux virtual machine, see
How to create a Linux virtual machine with Azure Resource Manager templates.
An alternative is to deploy the template from the Azure portal. To open the template in the portal, select the
Deploy to Azure button.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
"description": "Username for the Virtual Machine."
}
},
"adminPassword": {
"type": "securestring",
"minLength": 12,
"metadata": {
"description": "Password for the Virtual Machine."
}
},
"dnsLabelPrefix": {
"type": "string",
"defaultValue": "[toLower(concat(parameters('vmName'),'-', uniqueString(resourceGroup().id,
parameters('vmName'))))]",
"metadata": {
"description": "Unique DNS Name for the Public IP used to access the Virtual Machine."
}
},
"publicIpName": {
"type": "string",
"defaultValue": "myPublicIP",
"metadata": {
"description": "Name for the Public IP used to access the Virtual Machine."
}
},
"publicIPAllocationMethod": {
"publicIPAllocationMethod": {
"type": "string",
"defaultValue": "Dynamic",
"allowedValues": [
"Dynamic",
"Static"
],
"metadata": {
"description": "Allocation method for the Public IP used to access the Virtual Machine."
}
},
"publicIpSku": {
"type": "string",
"defaultValue": "Basic",
"allowedValues": [
"Basic",
"Standard"
],
"metadata": {
"description": "SKU for the Public IP used to access the Virtual Machine."
}
},
"OSVersion": {
"type": "string",
"defaultValue": "2019-Datacenter",
"allowedValues": [
"2008-R2-SP1",
"2012-Datacenter",
"2012-R2-Datacenter",
"2016-Nano-Server",
"2016-Datacenter-with-Containers",
"2016-Datacenter",
"2019-Datacenter",
"2019-Datacenter-Core",
"2019-Datacenter-Core-smalldisk",
"2019-Datacenter-Core-with-Containers",
"2019-Datacenter-Core-with-Containers-smalldisk",
"2019-Datacenter-smalldisk",
"2019-Datacenter-with-Containers",
"2019-Datacenter-with-Containers-smalldisk"
],
"metadata": {
"description": "The Windows version for the VM. This will pick a fully patched image of this given
Windows version."
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D2_v3",
"metadata": {
"description": "Size of the virtual machine."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"vmName": {
"type": "string",
"defaultValue": "simple-vm",
"metadata": {
"description": "Name of the virtual machine."
}
}
},
"variables": {
"variables": {
"storageAccountName": "[concat('bootdiags', uniquestring(resourceGroup().id))]",
"nicName": "myVMNic",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"virtualNetworkName": "MyVNET",
"subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'),
variables('subnetName'))]",
"networkSecurityGroupName": "default-NSG"
},
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2019-06-01",
"name": "[variables('storageAccountName')]",
"location": "[parameters('location')]",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"properties": {}
},
{
"type": "Microsoft.Network/publicIPAddresses",
"apiVersion": "2020-06-01",
"name": "[parameters('publicIPName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('publicIpSku')]"
},
"properties": {
"publicIPAllocationMethod": "[parameters('publicIPAllocationMethod')]",
"dnsSettings": {
"domainNameLabel": "[parameters('dnsLabelPrefix')]"
}
}
},
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2020-06-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": [
{
"name": "default-allow-3389",
"properties": {
"priority": 1000,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "3389",
"protocol": "Tcp",
"sourcePortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*"
}
}
]
}
},
{
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2020-06-01",
"name": "[variables('virtualNetworkName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
],
"properties": {
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnetName')]",
"properties": {
"addressPrefix": "[variables('subnetPrefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]"
}
}
}
]
}
},
{
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2020-06-01",
"name": "[variables('nicName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]",
"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]"
},
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
}
},
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2020-06-01",
"name": "[parameters('vmName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[parameters('vmName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('OSVersion')]",
"version": "latest"
"version": "latest"
},
"osDisk": {
"createOption": "FromImage",
"managedDisk": {
"storageAccountType": "StandardSSD_LRS"
}
},
"dataDisks": [
{
"diskSizeGB": 1023,
"lun": 0,
"createOption": "Empty"
}
]
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts',
variables('storageAccountName'))).primaryEndpoints.blob]"
}
}
}
}
],
"outputs": {
"hostname": {
"type": "string",
"value": "[reference(parameters('publicIPName')).dnsSettings.fqdn]"
}
}
}
To run the PowerShell script, select Try it to open Azure Cloud Shell. To paste the script, right-click the shell, and then select Paste:
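The script itself is not shown here; the following is a reconstruction sketch under the assumption that it mirrors the CLI version in the previous article, prompting for values and deploying a quickstart template. The template URI is an assumption based on the quickstart-templates naming used at the time:
$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
$adminUsername = Read-Host -Prompt "Enter the administrator username"
$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
$dnsLabelPrefix = Read-Host -Prompt "Enter a unique DNS label prefix"

# Create the resource group, then deploy the quickstart template into it
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment `
    -ResourceGroupName $resourceGroupName `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json" `
    -adminUsername $adminUsername `
    -adminPassword $adminPassword `
    -dnsLabelPrefix $dnsLabelPrefix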
If you choose to install and use PowerShell locally instead of Azure Cloud Shell, this tutorial requires the Azure PowerShell module. Run Get-Module -ListAvailable Az to find the version. If you need to upgrade, see Install Azure PowerShell module. If you're running PowerShell locally, you also need to run Connect-AzAccount to create a connection with Azure.
In the previous example, you specified a template stored in GitHub. You can also download or create a template and specify the local path with the -TemplateFile parameter.
Here are some additional resources:
To learn how to develop Resource Manager templates, see Azure Resource Manager documentation.
To see the Azure virtual machine schemas, see Azure template reference.
To see more virtual machine template samples, see Azure Quickstart templates.
Next steps
If there were issues with the deployment, you might take a look at Troubleshoot common Azure deployment
errors with Azure Resource Manager.
Learn how to create and manage a virtual machine in Create and manage Windows VMs with the Azure
PowerShell module.
To learn more about creating templates, view the JSON syntax and properties for the resources types you
deployed:
Microsoft.Network/publicIPAddresses
Microsoft.Network/virtualNetworks
Microsoft.Network/networkInterfaces
Microsoft.Compute/virtualMachines
Azure Functions developer guide
2/14/2021 • 10 minutes to read • Edit Online
In Azure Functions, specific functions share a few core technical concepts and components, regardless of the
language or binding you use. Before you jump into learning details specific to a given language or binding, be
sure to read through this overview that applies to all of them.
This article assumes that you've already read the Azure Functions overview.
Function code
A function is the primary concept in Azure Functions. A function contains two important pieces - your code,
which can be written in a variety of languages, and some config, the function.json file. For compiled languages,
this config file is generated automatically from annotations in your code. For scripting languages, you must
provide the config file yourself.
The function.json file defines the function's trigger, bindings, and other configuration settings. Every function has
one and only one trigger. The runtime uses this config file to determine the events to monitor and how to pass
data into and return data from a function execution. The following is an example function.json file.
{
"disabled":false,
"bindings":[
// ... bindings here
{
"type": "bindingType",
"direction": "in",
"name": "myParamName",
// ... more depending on binding
}
]
}
For more information, see Azure Functions triggers and bindings concepts.
The bindings property is where you configure both triggers and bindings. Each binding shares a few common settings and some settings that are specific to a particular type of binding. Every binding requires the following settings:
type | The name of the binding type. For example, queueTrigger.
direction | Indicates whether the binding is for receiving data into the function (in) or sending data from the function (out).
name | The name that is used for the bound data in the function.
Function app
A function app provides an execution context in Azure in which your functions run. As such, it is the unit of deployment and management for your functions. A function app is composed of one or more individual functions that are managed, deployed, and scaled together. All of the functions in a function app share the same pricing plan, deployment method, and runtime version. Think of a function app as a way to organize and collectively manage your functions. To learn more, see How to manage a function app.
NOTE
All functions in a function app must be authored in the same language. In previous versions of the Azure Functions
runtime, this wasn't required.
Folder structure
The code for all the functions in a specific function app is located in a root project folder that contains a host
configuration file and one or more subfolders. Each subfolder contains the code for a separate function. The
folder structure is shown in the following representation:
FunctionApp
| - host.json
| - MyFirstFunction
| | - function.json
| | - ...
| - MySecondFunction
| | - function.json
| | - ...
| - SharedCode
| - bin
In version 2.x and higher of the Functions runtime, all functions in the function app must share the same
language stack.
The host.json file contains runtime-specific configurations and is in the root folder of the function app. A bin
folder contains packages and other library files that the function app requires. See the language-specific
requirements for a function app project:
C# class library (.csproj)
C# script (.csx)
F# script
Java
JavaScript
Python
The above is the default (and recommended) folder structure for a function app. If you wish to change the file location of a function's code, modify the scriptFile section of the function.json file. We also recommend using package deployment to deploy your project to your function app in Azure. You can also use existing tools like continuous integration and deployment and Azure DevOps.
NOTE
If deploying a package manually, make sure to deploy your host.json file and function folders directly to the wwwroot
folder. Do not include the wwwroot folder in your deployments. Otherwise, you end up with wwwroot\wwwroot folders.
Parallel execution
When multiple triggering events occur faster than a single-threaded function runtime can process them, the
runtime may invoke the function multiple times in parallel. If a function app is using the Consumption hosting
plan, the function app could scale out automatically. Each instance of the function app, whether the app runs on
the Consumption hosting plan or a regular App Service hosting plan, might process concurrent function
invocations in parallel using multiple threads. The maximum number of concurrent function invocations in each
function app instance varies based on the type of trigger being used as well as the resources used by other
functions within the function app.
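One place this concurrency is tuned is host.json. The following is a sketch for the queue trigger, assuming the storage queues extension is in use; the values are illustrative, not recommendations:
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 16,
      "newBatchThreshold": 8
    }
  }
}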
Repositories
The code for Azure Functions is open source and stored in GitHub repositories:
Azure Functions
Azure Functions host
Azure Functions portal
Azure Functions templates
Azure WebJobs SDK
Azure WebJobs SDK Extensions
Bindings
Here is a table of all supported bindings.
This table shows the bindings that are supported in the major versions of the Azure Functions runtime:
TYPE | 1.X | 2.X AND HIGHER¹ | TRIGGER | INPUT | OUTPUT
Blob storage | ✔ | ✔ | ✔ | ✔ | ✔
Azure Cosmos DB | ✔ | ✔ | ✔ | ✔ | ✔
Dapr³ | | ✔ | ✔ | ✔ | ✔
Event Grid | ✔ | ✔ | ✔ | | ✔
Event Hubs | ✔ | ✔ | ✔ | | ✔
HTTP & webhooks | ✔ | ✔ | ✔ | | ✔
IoT Hub | ✔ | ✔ | ✔ | | ✔
Kafka² | | ✔ | ✔ | | ✔
Mobile Apps | ✔ | | | ✔ | ✔
Notification Hubs | ✔ | | | | ✔
Queue storage | ✔ | ✔ | ✔ | | ✔
RabbitMQ² | | ✔ | ✔ | | ✔
SendGrid | ✔ | ✔ | | | ✔
Service Bus | ✔ | ✔ | ✔ | | ✔
SignalR | | ✔ | | ✔ | ✔
Table storage | ✔ | ✔ | | ✔ | ✔
Timer | ✔ | ✔ | ✔ | |
Twilio | ✔ | ✔ | | | ✔
¹ Starting with the version 2.x runtime, all bindings except HTTP and Timer must be registered. See Register binding extensions.
² Triggers aren't supported in the Consumption plan. Requires runtime-driven triggers.
³ Supported only in Kubernetes, IoT Edge, and other self-hosted modes.
Having issues with errors coming from the bindings? Review the Azure Functions Binding Error Codes
documentation.
Connections
Your function project references connection information by name from its configuration provider. It does not
directly accept the connection details, allowing them to be changed across environments. For example, a trigger
definition might include a connection property. This might refer to a connection string, but you cannot set the
connection string directly in a function.json . Instead, you would set connection to the name of an
environment variable that contains the connection string.
The default configuration provider uses environment variables. These might be set by Application Settings when
running in the Azure Functions service, or from the local settings file when developing locally.
Connection values
When the connection name resolves to a single exact value, the runtime identifies the value as a connection
string, which typically includes a secret. The details of a connection string are defined by the service to which
you wish to connect.
However, a connection name can also refer to a collection of multiple configuration items. Environment variables
can be treated as a collection by using a shared prefix that ends in double underscores __ . The group can then
be referenced by setting the connection name to this prefix.
For example, the connection property for an Azure Blob trigger definition might be Storage1. As long as there is no single string value configured with Storage1 as its name, Storage1__serviceUri would be used for the serviceUri property of the connection. The connection properties are different for each service. Refer to the documentation for the extension that uses the connection.
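A minimal local.settings.json sketch showing both styles, assuming a binding whose connection is named Storage1; the account URL is a placeholder:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "Storage1__serviceUri": "https://myaccount.blob.core.windows.net"
  }
}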
Configure an identity-based connection
Some connections in Azure Functions are configured to use an identity instead of a secret. Support depends on
the extension using the connection. In some cases, a connection string may still be required in Functions even
though the service to which you are connecting supports identity-based connections.
IMPORTANT
Even if a binding extension supports identity-based connections, that configuration may not be supported yet in the
Consumption plan. See the support table below.
Identity-based connections are supported by the following trigger and binding extensions:
NOTE
Support for identity-based connections is not yet available for storage connections used by the Functions runtime for
core behaviors. This means that the AzureWebJobsStorage setting must be a connection string.
Connection properties
An identity-based connection for an Azure service accepts the following properties:
Additional options may be supported for a given connection type. Please refer to the documentation for the
component making the connection.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-
assigned identity is used by default. When run in other contexts, such as local development, your developer
identity is used instead, although this can be customized using alternative connection parameters.
Local development
When running locally, the above configuration tells the runtime to use your local developer identity. The
connection will attempt to get a token from the following locations, in order:
A local cache shared between Microsoft applications
The current user context in Visual Studio
The current user context in Visual Studio Code
The current user context in the Azure CLI
If none of these options are successful, an error will occur.
In some cases, you may wish to specify use of a different identity. You can add configuration properties for the
connection that point to the alternate identity.
NOTE
The following configuration options are not supported when hosted in the Azure Functions service.
To connect using an Azure Active Directory service principal with a client ID and secret, define the connection
with the following properties:
Tenant ID (required) | <CONNECTION_NAME_PREFIX>__tenantId | The Azure Active Directory tenant (directory) ID.
Client ID (required) | <CONNECTION_NAME_PREFIX>__clientId | The client (application) ID of an app registration in the tenant.
Client secret (required) | <CONNECTION_NAME_PREFIX>__clientSecret | A client secret that was generated for the app registration.
Next steps
For more information, see the following resources:
Azure Functions triggers and bindings
Code and test Azure Functions locally
Best Practices for Azure Functions
Azure Functions C# developer reference
Azure Functions Node.js developer reference
Create a Service Fabric cluster in Azure using the
Azure portal
11/2/2020 • 11 minutes to read • Edit Online
This step-by-step guide shows you how to set up a Service Fabric cluster (Linux or Windows) in Azure by using the Azure portal. The guide walks you through the following steps:
Create a cluster in Azure through the Azure portal.
Authenticate administrators using certificates.
NOTE
For more advanced security options, such as user authentication with Azure Active Directory and setting up certificates
for application security, create your cluster using Azure Resource Manager.
Cluster security
Certificates are used in Service Fabric to provide authentication and encryption to secure various aspects of a
cluster and its applications. For more information on how certificates are used in Service Fabric, see Service
Fabric cluster security scenarios.
If this is the first time you are creating a Service Fabric cluster, or you are deploying a cluster for test workloads, you can skip to the next section (Create cluster in the Azure portal) and have the system generate the certificates needed for clusters that run test workloads. If you are setting up a cluster for production workloads, then continue reading.
Cluster and server certificate (required)
This certificate is required to secure a cluster and prevent unauthorized access to it. It provides cluster security in
a couple ways:
Cluster authentication: Authenticates node-to-node communication for cluster federation. Only nodes
that can prove their identity with this certificate can join the cluster.
Server authentication: Authenticates the cluster management endpoints to a management client, so that the management client knows it is talking to the real cluster. This certificate also provides TLS for the HTTPS management API and for Service Fabric Explorer over HTTPS.
To serve these purposes, the certificate must meet the following requirements:
The certificate must contain a private key.
The certificate must be created for key exchange, exportable to a Personal Information Exchange (.pfx) file.
The certificate's subject name must match the domain used to access the Service Fabric cluster. This is
required to provide TLS for the cluster's HTTPS management endpoints and Service Fabric Explorer. You
cannot obtain a TLS/SSL certificate from a certificate authority (CA) for the .cloudapp.azure.com domain.
Acquire a custom domain name for your cluster. When you request a certificate from a CA the certificate's
subject name must match the custom domain name used for your cluster.
Client authentication certificates
Additional client certificates authenticate administrators for cluster management tasks. Service Fabric has two
access levels: admin and read-only user. At minimum, a single certificate for administrative access should be
used. For additional user-level access, a separate certificate must be provided. For more information on access
roles, see role-based access control for Service Fabric clients.
You do not need to upload Client authentication certificates to Key Vault to work with Service Fabric. These
certificates only need to be provided to users who are authorized for cluster management.
NOTE
Azure Active Directory is the recommended way to authenticate clients for cluster management operations. To use Azure
Active Directory, you must create a cluster using Azure Resource Manager.
1. Basics
In the Basics blade, you need to provide the basic details for your cluster.
1. Enter the name of your cluster.
2. Enter a User name and Password for Remote Desktop for the VMs.
3. Make sure to select the Subscription that you want your cluster to be deployed to, especially if you have
multiple subscriptions.
4. Create a new Resource group. It is best to give it the same name as the cluster, since it helps in finding
them later, especially when you are trying to make changes to your deployment or delete your cluster.
NOTE
Although you can decide to use an existing resource group, it is a good practice to create a new resource group. This makes it easy to delete clusters and all the resources they use.
5. Select the Location in which you want to create the cluster. If you are planning to use an existing certificate that you have already uploaded to a key vault, you must use the same region that your key vault is in.
2. Cluster configuration
Configure your cluster nodes. Node types define the VM sizes, the number of VMs, and their properties. Your
cluster can have more than one node type, but the primary node type (the first one that you define on the
portal) must have at least five VMs, as this is the node type where Service Fabric system services are placed. Do
not configure Placement Properties because a default placement property of "NodeTypeName" is added
automatically.
NOTE
A common scenario for multiple node types is an application that contains a front-end service and a back-end service. You
want to put the front-end service on smaller VMs (VM sizes like D2_V2) with ports open to the Internet, and put the
back-end service on larger VMs (with VM sizes like D3_V2, D6_V2, D15_V2, and so on) with no Internet-facing ports
open.
1. Choose a name for your node type (1 to 12 characters containing only letters and numbers).
2. The minimum size of VMs for the primary node type is driven by the Durability tier you choose for the
cluster. The default durability tier is Bronze. For more information on durability, see how to choose the
Service Fabric cluster durability.
3. Select the Virtual machine size. D-series VMs have SSD drives and are highly recommended for stateful
applications. Do not use any VM SKU that has partial cores or less than 10 GB of available disk capacity.
Refer to the Service Fabric cluster planning considerations document for help in selecting the VM size.
4. Single-node and three-node clusters are meant for test use only. They are not supported for any
production workloads.
5. Choose the Initial virtual machine scale set capacity for the node type. You can scale in or out the
number of VMs in a node type later on, but on the primary node type, the minimum is five for production
workloads. Other node types can have a minimum of one VM. The minimum number of VMs for the
primary node type drives the reliability of your cluster.
6. Configure Custom endpoints. This field allows you to enter a comma-separated list of ports that you want
to expose through the Azure Load Balancer to the public Internet for your applications. For example, if you
plan to deploy a web application to your cluster, enter "80" here to allow traffic on port 80 into your cluster.
For more information on endpoints, see communicating with applications.
7. Enable reverse proxy. The Service Fabric reverse proxy helps microservices running in a Service Fabric
cluster discover and communicate with other services that have HTTP endpoints.
8. Back in the Cluster configuration blade, under +Show optional settings, configure cluster diagnostics.
By default, diagnostics are enabled on your cluster to assist with troubleshooting issues. If you want to
disable diagnostics, change the Status toggle to Off. Turning off diagnostics is not recommended. If you
already have an Application Insights project created, provide its key so that the application traces are
routed to it.
9. Include DNS service. The DNS service is an optional service that enables you to find other services using
the DNS protocol.
10. Select the Fabric upgrade mode you want to set your cluster to. Select Automatic if you want the system to
automatically pick up the latest available version and try to upgrade your cluster to it. Set the mode to
Manual if you want to choose a supported version. For more details on the Fabric upgrade mode, see the
Service Fabric Cluster Upgrade document.
NOTE
We support only clusters that are running supported versions of Service Fabric. By selecting Manual mode, you
take on the responsibility of upgrading your cluster to a supported version.
3. Security
To make setting up a secure test cluster easy for you, we have provided the Basic option. If you already have a
certificate and have uploaded it to your key vault (and enabled the key vault for deployment), use the
Custom option.
Basic Option
Follow the screens to add or reuse an existing key vault and add a certificate. The addition of the certificate is a
synchronous process, so you will have to wait for the certificate to be created. Do not navigate away from the
screen until the process completes.
Now that the key vault is created, edit the access policies for your key vault.
Click Edit access policies, then Show advanced access policies, and enable access to Azure Virtual
Machines for deployment. It is recommended that you enable template deployment as well. Once you have
made your selections, click the Save button and close the Access policies pane.
Enter the name of the certificate and click OK.
Custom Option
Skip this section if you have already performed the steps in the Basic Option.
You need the Source key vault, Certificate URL, and Certificate thumbprint information to complete the security
page. If you do not have it handy, open another browser window and, in the Azure portal, do the following:
1. Navigate to your key vault service.
2. Select the "Properties" tab and copy the 'RESOURCE ID' to "Source key vault" in the other browser window.
3. Now, select the "Certificates" tab.
4. Click the certificate thumbprint, which takes you to the Versions page.
5. Click the GUID you see under the current version.
6. Copy the hexadecimal SHA-1 thumbprint to "Certificate thumbprint" in the other browser window.
7. Copy the 'Secret Identifier' to "Certificate URL" in the other browser window.
Check the Configure advanced settings box to enter client certificates for admin client and read-only
client. In these fields, enter the thumbprint of your admin client certificate and the thumbprint of your read-
only user client certificate, if applicable. When administrators attempt to connect to the cluster, they are granted
access only if they have a certificate with a thumbprint that matches the thumbprint values entered here.
4. Summary
Now you are ready to deploy the cluster. Before you do that, download the certificate; look inside the large blue
informational box for the link. Make sure to keep the certificate in a safe place; you need it to connect to your
cluster. Since the certificate you downloaded does not have a password, it is advised that you add one.
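One way to add a password, if the PKI PowerShell module is available, is to import the password-less .pfx and re-export it protected by a password. A sketch; the file paths are hypothetical:

# A sketch (hypothetical paths): import the password-less .pfx, then re-export it with a password
$pfx = Import-PfxCertificate -FilePath "$HOME\Downloads\mycluster.pfx" -CertStoreLocation Cert:\CurrentUser\My -Exportable
$password = Read-Host -AsSecureString -Prompt "New PFX password"
Export-PfxCertificate -Cert $pfx -FilePath "$HOME\Downloads\mycluster-protected.pfx" -Password $password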
To complete the cluster creation, click Create . You can optionally download the template.
You can see the creation progress in the notifications. (Click the "Bell" icon near the status bar at the upper right
of your screen.) If you clicked Pin to Startboard while creating the cluster, you see Deploying Service Fabric
Cluster pinned to the Startboard. This process will take some time.
In order to perform management operations on your cluster using PowerShell or CLI, you need to connect to
your cluster; read more at Connect to your cluster.
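For example, with the downloaded certificate imported into your CurrentUser\My store, a connection sketch looks like this (the endpoint and thumbprint are placeholders):

# Connect using the cluster certificate for both server and client authentication
$thumbprint = "<certificate thumbprint>"
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" `
    -X509Credential -ServerCertThumbprint $thumbprint `
    -FindType FindByThumbprint -FindValue $thumbprint `
    -StoreLocation CurrentUser -StoreName My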
NOTE
Service Fabric clusters require a certain number of nodes to always be up to maintain availability and preserve state -
referred to as "maintaining quorum". Therefore, it is typically not safe to shut down all machines in the cluster unless you
have first performed a full backup of your state.
Next steps
At this point, you have a secure cluster using certificates for management authentication. Next, connect to your
cluster and learn how to manage application secrets. Also, learn about Service Fabric support options.
Continuous deployment to Azure App Service
2/14/2021 • 7 minutes to read • Edit Online
Azure App Service enables continuous deployment from GitHub, BitBucket, and Azure Repos repositories by
pulling in the latest updates. This article shows you how to use the Azure portal to continuously deploy your app
through the Kudu build service or Azure Pipelines.
For more information on the source control services, see Create a repo (GitHub), Create a repo (BitBucket), or
Create a new Git repo (Azure Repos).
RUNTIME    ROOT DIRECTORY FILES
PHP        index.php
To customize your deployment, include a .deployment file in the repository root. For more information, see
Customize deployments and Custom deployment script.
NOTE
If you develop in Visual Studio, let Visual Studio create a repository for you. The project is immediately ready to be
deployed by using Git.
NOTE
To use Azure Repos, make sure your Azure DevOps Services organization is linked to your Azure subscription. For
more information, see Set up an Azure DevOps Services account so it can deploy to a web app.
4. For GitHub or Azure Repos, on the Build provider page, select App Service build service, and then
select Continue. Bitbucket always uses the App Service build service.
5. On the Configure page:
For GitHub, drop down and select the Organization, Repository, and Branch you want to
deploy continuously.
NOTE
If you don't see any repositories, you may need to authorize Azure App Service in GitHub. Browse to your
GitHub repository and go to Settings > Applications > Authorized OAuth Apps. Select Azure App
Service, and then select Grant. For organization repositories, you must be an owner of the organization
to grant the permissions.
For Bitbucket, select the Bitbucket Team, Repository, and Branch you want to deploy
continuously.
For Azure Repos, select the Azure DevOps Organization, Project, Repository, and Branch
you want to deploy continuously.
NOTE
If your Azure DevOps organization isn't listed, make sure it's linked to your Azure subscription. For more
information, see Set up an Azure DevOps Services account so it can deploy to a web app.
6. Select Continue .
7. After you configure the build provider, review the settings on the Summary page, and then select Finish.
8. New commits in the selected repository and branch now deploy continuously into your App Service app.
You can track the commits and deployments on the Deployment Center page.
4. On the Build Provider page, select Azure Pipelines (Preview), and then select Continue.
5. On the Configure page, in the Code section, select the Organization, Repository, and Branch you
want to deploy continuously, and select Continue.
NOTE
If you don't see any repositories, you may need to authorize Azure App Service in GitHub. Browse to your GitHub
repository and go to Settings > Applications > Authorized OAuth Apps. Select Azure App Service, and
then select Grant. For organization repositories, you must be an owner of the organization to grant the
permissions.
In the Build section, specify the Azure DevOps Organization, Project, and language framework that Azure
Pipelines should use to run build tasks, and then select Continue.
6. After you configure the build provider, review the settings on the Summary page, and then select Finish.
7. New commits in the selected repository and branch now deploy continuously into your App Service. You
can track the commits and deployments on the Deployment Center page.
Azure Repos + Azure Pipelines
1. In the Azure portal, search for App Services, and then select the App Service you want to deploy.
2. On the app page, select Deployment Center in the left menu.
3. Select Azure Repos as the source control provider on the Deployment Center page and select
Continue .
4. On the Build Provider page, select Azure Pipelines (Preview), and then select Continue.
5. On the Configure page, in the Code section, select the Organization, Repository, and Branch you
want to deploy continuously, and select Continue.
NOTE
If your existing Azure DevOps organization isn't listed, you may need to link it to your Azure subscription. For
more information, see Define your CD release pipeline.
In the Build section, specify the Azure DevOps Organization, Project, and language framework that Azure
Pipelines should use to run build tasks, and then select Continue.
6. After you configure the build provider, review the settings on the Summary page, and then select Finish.
7. New commits in the selected repository and branch now deploy continuously into your App Service. You
can track the commits and deployments on the Deployment Center page.
Additional resources
Investigate common issues with continuous deployment
Use Azure PowerShell
Git documentation
Project Kudu
Deploy and remove applications using PowerShell
2/14/2021 • 11 minutes to read • Edit Online
Once an application type has been packaged, it's ready for deployment into an Azure Service Fabric cluster.
Deployment involves the following three steps:
1. Upload the application package to the image store.
2. Register the application type with image store relative path.
3. Create the application instance.
Once the deployed application is no longer required, you can delete the application instance and its application
type. Completely removing an application from the cluster involves the following steps:
1. Remove (or delete) the running application instance.
2. Unregister the application type if you no longer need it.
3. Remove the application package from the image store.
If you use Visual Studio for deploying and debugging applications on your local development cluster, all the
preceding steps are handled automatically through a PowerShell script. This script is found in the Scripts folder
of the application project. This article provides background on what that script is doing so that you can perform
the same operations outside of Visual Studio.
Another way to deploy an application is by using external provision. The application package can be packaged as
an sfpkg file and uploaded to an external store. In this case, upload to the image store is not needed. Deployment
needs the following steps:
1. Upload the sfpkg to an external store. The external store can be any store that exposes a REST http or https
endpoint.
2. Register the application type using the external download URI and the application type information.
3. Create the application instance.
For cleanup, remove the application instances and unregister the application type. Because the package was not
copied to the image store, there is no temporary location to clean up. Provisioning from an external store is
available starting with Service Fabric version 6.1.
NOTE
Visual Studio does not currently support external provision.
Connect-ServiceFabricCluster
For examples of connecting to a remote cluster or cluster secured using Azure Active Directory, X509
certificates, or Windows Active Directory, see Connect to a secure cluster.
Upload the application package
Uploading the application package puts it in a location that's accessible by internal Service Fabric components. If
you want to verify the application package locally, use the Test-ServiceFabricApplicationPackage cmdlet.
The Copy-ServiceFabricApplicationPackage command uploads the application package to the cluster image
store.
Suppose you build and package an application named MyApplication in Visual Studio 2015. By default, the
application type name listed in the ApplicationManifest.xml is "MyApplicationType". The application package,
which contains the necessary application manifest, service manifests, and code/config/data packages, is located
in C:\Users\<username>\Documents\Visual Studio 2015\Projects\MyApplication\MyApplication\pkg\Debug.
The following command lists the contents of the application package:
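A sketch, using the Windows tree utility against the default output path shown above:

tree /f 'C:\Users\<username>\Documents\Visual Studio 2015\Projects\MyApplication\MyApplication\pkg\Debug'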
If the application package is large or has many files, you can compress it. Compression reduces the size
and the number of files, which results in faster registering and unregistering of the application type. Upload time
may be slower, especially if you include the time to compress the package.
To compress a package, use the same Copy-ServiceFabricApplicationPackage command. Compression can be
done separately from upload, by using the SkipCopy flag, or together with the upload operation. Applying
compression to an already compressed package is a no-op. To uncompress a compressed package, use the same
Copy-ServiceFabricApplicationPackage command with the UncompressPackage switch.
The following cmdlet compresses the package without copying it to the image store. The package now includes
zipped files for the Code and Config packages. The application and the service manifests are not zipped,
because they are needed for many internal operations (like package sharing, application type name and version
extraction for certain validations). Zipping the manifests would make these operations inefficient.
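A sketch of such a compress-only call, using the package path from the Visual Studio example above:

# Compress the Code and Config packages in place; SkipCopy avoids uploading
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath 'C:\Users\<username>\Documents\Visual Studio 2015\Projects\MyApplication\MyApplication\pkg\Debug' -CompressPackage -SkipCopy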
For large application packages, the compression takes time. For best results, use a fast SSD drive. The
compression times and the size of the compressed package also differ based on the package content. For
example, here are compression statistics for some packages, which show the initial and the compressed package
size, with the compression time.
INITIAL SIZE (MB)    FILE COUNT    COMPRESSION TIME    COMPRESSED PACKAGE SIZE (MB)
Once a package is compressed, it can be uploaded to one or multiple Service Fabric clusters as needed. The
deployment mechanism is the same for compressed and uncompressed packages. Compressed packages are
stored as such in the cluster image store. The packages are uncompressed on the node, before the application is
run.
The following example uploads the package to the image store, into a folder named "MyApplicationV1":
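A sketch of the copy call:

# Upload the package to the image store folder "MyApplicationV1"
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath 'C:\Users\<username>\Documents\Visual Studio 2015\Projects\MyApplication\MyApplication\pkg\Debug' -ApplicationPackagePathInImageStore 'MyApplicationV1'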
If you do not specify the -ApplicationPackagePathInImageStore parameter, the application package is copied into
the "Debug" folder in the image store.
NOTE
Copy-ServiceFabricApplicationPackage will automatically detect the appropriate image store connection string if the
PowerShell session is connected to a Service Fabric cluster. For Service Fabric versions older than 5.6, the -
ImageStoreConnectionString argument must be explicitly provided.
See Understand the image store connection string for supplementary information about the image store and image store
connection string.
The time it takes to upload a package differs depending on multiple factors. Some of these factors are the
number of files in the package, the package size, and the file sizes. The network speed between the source
machine and the Service Fabric cluster also impacts the upload time. The default timeout for Copy-
ServiceFabricApplicationPackage is 30 minutes. Depending on the described factors, you may have to increase
the timeout. If you are compressing the package in the copy call, you need to also consider the compression
time.
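Register the application package
With the package uploaded, register the application type by pointing Register-ServiceFabricApplicationType at the image store folder; a sketch:

Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyApplicationV1'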
"MyApplicationV1" is the folder in the image store where the application package is located. The application type
with name "MyApplicationType" and version "1.0.0" (both are found in the application manifest) is now
registered in the cluster.
Register the application package copied to an external store
Starting with Service Fabric version 6.1, provision supports downloading the package from an external store.
The download URI represents the path to the sfpkg application package from where the application package
can be downloaded using HTTP or HTTPS protocols. The package must have been previously uploaded to this
external location. The URI must allow READ access so Service Fabric can download the file. The sfpkg file must
have the extension ".sfpkg". The provision operation should include the application type information, as found in
the application manifest.
Register-ServiceFabricApplicationType -ApplicationPackageDownloadUri
"https://sftestresources.blob.core.windows.net:443/sfpkgholder/MyAppPackage.sfpkg" -ApplicationTypeName
MyApp -ApplicationTypeVersion V1 -Async
The Register-ServiceFabricApplicationType command returns only after the system has successfully registered
the application package. How long registration takes depends on the size and contents of the application
package. If needed, the -TimeoutSec parameter can be used to supply a longer timeout (the default timeout is
60 seconds).
If you have a large application package or if you are experiencing timeouts, use the -Async parameter. The
command returns when the cluster accepts the register command. The register operation continues as needed.
The Get-ServiceFabricApplicationType command lists the application type versions and their registration status.
You can use this command to determine when the registration is done.
Get-ServiceFabricApplicationType
ApplicationTypeName : MyApplicationType
ApplicationTypeVersion : 1.0.0
Status : Available
DefaultParameters : { "Stateless1_InstanceCount" = "-1" }
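Once the type version shows as Available, you can create an application instance from it with New-ServiceFabricApplication; a sketch that would produce output like the block below:

New-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' -ApplicationTypeName 'MyApplicationType' -ApplicationTypeVersion '1.0.0'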
ApplicationName : fabric:/MyApp
ApplicationTypeName : MyApplicationType
ApplicationTypeVersion : 1.0.0
ApplicationParameters : {}
Multiple application instances can be created for any given version of a registered application type. Each
application instance runs in isolation, with its own work directory and process.
To see which named apps and services are running in the cluster, run the Get-ServiceFabricApplication and Get-
ServiceFabricService cmdlets:
Get-ServiceFabricApplication
ApplicationName : fabric:/MyApp
ApplicationTypeName : MyApplicationType
ApplicationTypeVersion : 1.0.0
ApplicationStatus : Ready
HealthState : Ok
ApplicationParameters : {}
Get-ServiceFabricApplication | Get-ServiceFabricService
ServiceName : fabric:/MyApp/Stateless1
ServiceKind : Stateless
ServiceTypeName : Stateless1Type
IsServiceGroup : False
ServiceManifestVersion : 1.0.0
ServiceStatus : Active
HealthState : Ok
Remove an application
When an application instance is no longer needed, you can permanently remove it by name using the Remove-
ServiceFabricApplication cmdlet. Remove-ServiceFabricApplication automatically removes all services that
belong to the application as well, permanently removing all service state.
WARNING
This operation cannot be reversed, and application state cannot be recovered.
Remove-ServiceFabricApplication fabric:/MyApp
Confirm
Continue with this operation?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"):
Remove application instance succeeded
Get-ServiceFabricApplicationType
ApplicationTypeName : MyApplicationType
ApplicationTypeVersion : 1.0.0
Status : Available
DefaultParameters : { "Stateless1_InstanceCount" = "-1" }
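To finish the cleanup, unregister the application type version once no application instances use it; a sketch:

Unregister-ServiceFabricApplicationType -ApplicationTypeName 'MyApplicationType' -ApplicationTypeVersion '1.0.0'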
Troubleshooting
Copy-ServiceFabricApplicationPackage asks for an ImageStoreConnectionString
The Service Fabric SDK environment should already have the correct defaults set up. But if needed, the
ImageStoreConnectionString for all commands should match the value that the Service Fabric cluster is using.
You can find the ImageStoreConnectionString in the cluster manifest, retrieved using the Get-
ServiceFabricClusterManifest and Get-ImageStoreConnectionStringFromClusterManifest commands:
Get-ImageStoreConnectionStringFromClusterManifest(Get-ServiceFabricClusterManifest)
[...]
<Section Name="Management">
<Parameter Name="ImageStoreConnectionString" Value="file:D:\ServiceFabric\Data\ImageStore" />
</Section>
[...]
See Understand the image store connection string for supplementary information about the image store and
image store connection string.
Deploy large application package
Issue: Copy-ServiceFabricApplicationPackage times out for a large application package (order of GB). Try:
Specify a larger timeout for the Copy-ServiceFabricApplicationPackage command, with the TimeoutSec parameter.
By default, the timeout is 30 minutes.
Check the network connection between your source machine and cluster. If the connection is slow, consider
using a machine with a better network connection. If the client machine is in another region than the cluster,
consider using a client machine in a closer or same region as the cluster.
Check if you are hitting external throttling. For example, when the image store is configured to use Azure
Storage, upload may be throttled.
Issue: Upload package completed successfully, but Register-ServiceFabricApplicationType times out. Try:
Compress the package before copying to the image store. The compression reduces the size and the number
of files, which in turn reduces the amount of traffic and work that Service Fabric must perform. The upload
operation may be slower (especially if you include the compression time), but registering and unregistering
the application type are faster.
Specify a larger timeout for Register-ServiceFabricApplicationType with the TimeoutSec parameter.
Specify the Async switch for Register-ServiceFabricApplicationType. The command returns when the cluster
accepts the command and the registration of the application type continues asynchronously. For this reason,
there is no need to specify a higher timeout in this case. The Get-ServiceFabricApplicationType command lists
all successfully registered application type versions and their registration status. You can use this command
to determine when the registration is done.
Get-ServiceFabricApplicationType
ApplicationTypeName : MyApplicationType
ApplicationTypeVersion : 1.0.0
Status : Available
DefaultParameters : { "Stateless1_InstanceCount" = "-1" }
Next steps
Package an application
Service Fabric application upgrade
Service Fabric health introduction
Diagnose and troubleshoot a Service Fabric service
Model an application in Service Fabric
Tutorial: Create and Manage Linux VMs with the
Azure CLI
11/2/2020 • 8 minutes to read • Edit Online
Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers
basic Azure virtual machine deployment items such as selecting a VM size, selecting a VM image, and deploying
a VM. You learn how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. To open
the Cloud Shell, select Try it from the top of any code block.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
The resource group is specified when creating or modifying a VM, which can be seen throughout this tutorial.
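The examples in this tutorial use a resource group named myResourceGroupVM. If you have not created it yet, a sketch (the location is an example):

az group create --name myResourceGroupVM --location eastus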
az vm create \
--resource-group myResourceGroupVM \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--generate-ssh-keys
It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information
about the VM. Take note of the publicIpAddress; this address can be used to access the virtual machine.
{
"fqdns": "",
"id": "/subscriptions/d5b9d4b7-6fc1-0000-0000-
000000000000/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "52.174.34.95",
"resourceGroup": "myResourceGroupVM"
}
Connect to VM
You can now connect to the VM with SSH in the Azure Cloud Shell or from your local computer. Replace the
example IP address with the publicIpAddress noted in the previous step.
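For example, using the address from the sample output above:

ssh azureuser@52.174.34.95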
Once logged in to the VM, you can install and configure applications. When you are finished, you close the SSH
session as normal:
exit
Understand VM images
The Azure marketplace includes many images that can be used to create VMs. In the previous steps, a virtual
machine was created using an Ubuntu image. In this step, the Azure CLI is used to search the marketplace for a
CentOS image, which is then used to deploy a second virtual machine.
To see a list of the most commonly used images, use the az vm image list command.
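For example, a sketch with table-formatted output:

az vm image list --output table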
A full list can be seen by adding the --all argument. The image list can also be filtered by --publisher or
--offer. In this example, the list is filtered for all images with an offer that matches CentOS.
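A sketch of that filtered query:

az vm image list --offer CentOS --all --output table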
To deploy a VM using a specific image, take note of the value in the Urn column, which consists of the publisher,
offer, SKU, and optionally a version number to identify the image. When specifying the image, the image version
number can be replaced with "latest", which selects the latest version of the distribution. In this example, the
--image argument is used to specify the latest version of a CentOS 6.5 image.
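A sketch of such a deployment (the URN OpenLogic:CentOS:6.5:latest is an example CentOS image, and myVM2 is a hypothetical name):

az vm create \
    --resource-group myResourceGroupVM \
    --name myVM2 \
    --image OpenLogic:CentOS:6.5:latest \
    --generate-ssh-keys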
Understand VM sizes
A virtual machine size determines the amount of compute resources such as CPU, GPU, and memory that are
made available to the virtual machine. Virtual machines need to be sized appropriately for the expected
workload. If the workload increases, an existing virtual machine can be resized.
VM Sizes
The following table categorizes sizes into use cases.
General purpose (B, Dsv3, Dv3, DSv2, Dv2, Av2, DC): Balanced CPU-to-memory. Ideal for dev/test and small to
medium applications and data solutions.
Memory optimized (Esv3, Ev3, M, DSv2, Dv2): High memory-to-core. Great for relational databases, medium to
large caches, and in-memory analytics.
Storage optimized (Lsv2, Ls): High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
GPU (NV, NVv2, NC, NCv2, NCv3, ND): Specialized VMs targeted for heavy graphic rendering and video editing.
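To see the sizes available in a region, you can run the az vm list-sizes command; a sketch (the location is an example):

az vm list-sizes --location eastus --output table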
Partial output:
MaxDataDiskCount    MemoryInMb    Name             NumberOfCores    OsDiskSizeInMb    ResourceDiskSizeInMb
------------------  ------------  ---------------  ---------------  ----------------  --------------------
2                   3584          Standard_DS1     1                1047552           7168
4                   7168          Standard_DS2     2                1047552           14336
8                   14336         Standard_DS3     4                1047552           28672
16                  28672         Standard_DS4     8                1047552           57344
4                   14336         Standard_DS11    2                1047552           28672
8                   28672         Standard_DS12    4                1047552           57344
16                  57344         Standard_DS13    8                1047552           114688
32                  114688        Standard_DS14    16               1047552           229376
1                   768           Standard_A0      1                1047552           20480
2                   1792          Standard_A1      1                1047552           71680
4                   3584          Standard_A2      2                1047552           138240
8                   7168          Standard_A3      4                1047552           291840
4                   14336         Standard_A5      2                1047552           138240
16                  14336         Standard_A4      8                1047552           619520
8                   28672         Standard_A6      4                1047552           291840
16                  57344         Standard_A7      8                1047552           619520
To deploy a VM with a specific size, use the --size argument:

az vm create \
--resource-group myResourceGroupVM \
--name myVM3 \
--image UbuntuLTS \
--size Standard_F4s \
--generate-ssh-keys
Resize a VM
After a VM has been deployed, it can be resized to increase or decrease resource allocation. You can view the
current size of a VM with az vm show:
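A sketch using a JMESPath query:

az vm show --resource-group myResourceGroupVM --name myVM --query hardwareProfile.vmSize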
Before resizing a VM, check if the desired size is available on the current Azure cluster. The az vm list-vm-resize-
options command returns the list of sizes.
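A sketch that lists the available sizes and then resizes in place (the target size is an example):

az vm list-vm-resize-options --resource-group myResourceGroupVM --name myVM --query "[].name" --output tsv
az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_DS4_v2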
If the desired size is not on the current cluster, the VM needs to be deallocated before the resize operation can
occur. Use the az vm deallocate command to stop and deallocate the VM. Note that when the VM is powered back
on, any data on the temp disk may be removed. The public IP address also changes unless a static IP address is
being used.
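A sketch of the deallocate, resize, and restart sequence (the target size is an example):

az vm deallocate --resource-group myResourceGroupVM --name myVM
az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_GS1
az vm start --resource-group myResourceGroupVM --name myVM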
VM power states
An Azure VM can have one of many power states. This state represents the current state of the VM from the
standpoint of the hypervisor.
Power states
POWER STATE    DESCRIPTION
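To get the state of a particular VM, you can use az vm get-instance-view with a JMESPath query; a sketch (the status-array index is an assumption about the instance view layout):

az vm get-instance-view \
    --resource-group myResourceGroupVM \
    --name myVM \
    --query "instanceView.statuses[1]" --output table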
To retrieve the power state of all the VMs in your subscription, use the Virtual Machines - List All API with
parameter statusOnly set to true.
Management tasks
During the life-cycle of a virtual machine, you may want to run management tasks such as starting, stopping, or
deleting a virtual machine. Additionally, you may want to create scripts to automate repetitive or complex tasks.
Using the Azure CLI, many common management tasks can be run from the command line or in scripts.
Get IP address
This command returns the private and public IP addresses of a virtual machine.
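A sketch using az vm list-ip-addresses:

az vm list-ip-addresses --resource-group myResourceGroupVM --name myVM --output table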
Next steps
In this tutorial, you learned about basic VM creation and management such as how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
Advance to the next tutorial to learn about VM disks.
Create and Manage VM disks
Tutorial: Create and Manage Windows VMs with
Azure PowerShell
11/2/2020 • 7 minutes to read • Edit Online
Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers
basic Azure virtual machine (VM) deployment tasks like selecting a VM size, selecting a VM image, and
deploying a VM. You learn how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
New-AzResourceGroup `
-ResourceGroupName "myResourceGroupVM" `
-Location "EastUS"
The resource group is specified when creating or modifying a VM, which can be seen throughout this tutorial.
Create a VM
When creating a VM, several options are available like operating system image, network configuration, and
administrative credentials. This example creates a VM named myVM, running the default version of Windows
Server 2016 Datacenter.
Set the username and password needed for the administrator account on the VM with Get-Credential:
$cred = Get-Credential
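Then create the VM with New-AzVM. A sketch, assuming the Win2016Datacenter image alias and hypothetical network resource names:

New-AzVm `
    -ResourceGroupName "myResourceGroupVM" `
    -Name "myVM" `
    -Location "EastUS" `
    -VirtualNetworkName "myVnet" `
    -SubnetName "mySubnet" `
    -SecurityGroupName "myNetworkSecurityGroup" `
    -PublicIpAddressName "myPublicIpAddress" `
    -ImageName "Win2016Datacenter" `
    -Credential $cred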
Connect to VM
After the deployment has completed, create a remote desktop connection with the VM.
Run the following commands to return the public IP address of the VM. Take note of this IP Address so you can
connect to it with your browser to test web connectivity in a future step.
Get-AzPublicIpAddress `
-ResourceGroupName "myResourceGroupVM" | Select IpAddress
Use the following command, on your local machine, to create a remote desktop session with the VM. Replace
the IP address with the publicIPAddress of your VM. When prompted, enter the credentials used when creating
the VM.
mstsc /v:<publicIpAddress>
In the Windows Security window, select More choices and then Use a different account. Type the
username and password you created for the VM and then click OK.
Use the Get-AzVMImageOffer cmdlet to return a list of image offers. With this command, the returned list is
filtered on the specified publisher named MicrosoftWindowsServer:
Get-AzVMImageOffer `
-Location "EastUS" `
-PublisherName "MicrosoftWindowsServer"
The Get-AzVMImageSku command will then filter on the publisher and offer name to return a list of image
names.
Get-AzVMImageSku `
-Location "EastUS" `
-PublisherName "MicrosoftWindowsServer" `
-Offer "WindowsServer"
This information can be used to deploy a VM with a specific image. This example deploys a VM using the latest
version of a Windows Server 2016 with Containers image.
New-AzVm `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM2" `
-Location "EastUS" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress2" `
-ImageName "MicrosoftWindowsServer:WindowsServer:2016-Datacenter-with-Containers:latest" `
-Credential $cred `
-AsJob
The -AsJob parameter creates the VM as a background task, so the PowerShell prompt returns to you. You can
view details of background jobs with the Get-Job cmdlet.
Understand VM sizes
The VM size determines the amount of compute resources like CPU, GPU, and memory that are made available
to the VM. Virtual machines should be created using a VM size appropriate for the workload. If a workload
increases, an existing virtual machine can also be resized.
VM Sizes
The following table categorizes sizes into use cases.
General purpose (B, Dsv3, Dv3, DSv2, Dv2, Av2, DC): Balanced CPU-to-memory. Ideal for dev/test and small to
medium applications and data solutions.
Memory optimized (Esv3, Ev3, M, DSv2, Dv2): High memory-to-core. Great for relational databases, medium to
large caches, and in-memory analytics.
Storage optimized (Lsv2, Ls): High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
GPU (NV, NVv2, NC, NCv2, NCv3, ND): Specialized VMs targeted for heavy graphic rendering and video editing.
Resize a VM
After a VM has been deployed, it can be resized to increase or decrease resource allocation.
Before resizing a VM, check if the size you want is available on the current VM cluster. The Get-AzVMSize
command returns a list of sizes.
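A sketch:

Get-AzVMSize -ResourceGroupName "myResourceGroupVM" -VMName "myVM"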
If the size is available, the VM can be resized from a powered-on state; however, it is rebooted during the
operation.
$vm = Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-VMName "myVM"
$vm.HardwareProfile.VmSize = "Standard_DS3_v2"
Update-AzVM `
-VM $vm `
-ResourceGroupName "myResourceGroupVM"
If the size you want isn't available on the current cluster, the VM needs to be deallocated before the resize
operation can occur. Deallocating a VM will remove any data on the temp disk, and the public IP address will
change unless a static IP address is being used.
Stop-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" -Force
$vm = Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-VMName "myVM"
$vm.HardwareProfile.VmSize = "Standard_E2s_v3"
Update-AzVM -VM $vm `
-ResourceGroupName "myResourceGroupVM"
Start-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name $vm.name
VM power states
An Azure VM can have one of many power states.
To get the state of a particular VM, use the Get-AzVM command. Be sure to specify a valid name for a VM and
resource group.
Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" `
-Status | Select @{n="Status"; e={$_.Statuses[1].Code}}
Status
------
PowerState/running
To retrieve the power state of all the VMs in your subscription, use the Virtual Machines - List All API with
parameter statusOnly set to true.
Management tasks
During the lifecycle of a VM, you may want to run management tasks like starting, stopping, or deleting a VM.
Additionally, you may want to create scripts to automate repetitive or complex tasks. Using Azure PowerShell,
many common management tasks can be run from the command line or in scripts.
Stop a VM
Stop and deallocate a VM with Stop-AzVM:
Stop-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" -Force
If you want to keep the VM in a provisioned state, use the -StayProvisioned parameter.
Start a VM
Start-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM"
Remove-AzResourceGroup `
-Name "myResourceGroupVM" `
-Force
Next steps
In this tutorial, you learned about basic VM creation and management such as how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
Advance to the next tutorial to learn about VM disks.
Create and Manage VM disks
Quickstart: Azure Blob Storage client library v12 for
.NET
2/14/2021 • 8 minutes to read • Edit Online
Get started with the Azure Blob Storage client library v12 for .NET. Azure Blob Storage is Microsoft's object
storage solution for the cloud. Follow steps to install the package and try out example code for basic tasks. Blob
storage is optimized for storing massive amounts of unstructured data.
Use the Azure Blob Storage client library v12 for .NET to:
Create a container
Upload a blob to Azure Storage
List all of the blobs in a container
Download the blob to your local computer
Delete a container
Additional resources:
API reference documentation
Library source code
Package (NuGet)
Samples
NOTE
The features described in this article are now available to accounts that have a hierarchical namespace. To review
limitations, see the Blob storage features available in Azure Data Lake Storage Gen2 article.
Prerequisites
Azure subscription - create one for free
Azure storage account - create a storage account
Current .NET Core SDK for your operating system. Be sure to get the SDK and not the runtime.
Setting up
This section walks you through preparing a project to work with the Azure Blob Storage client library v12 for
.NET.
Create the project
Create a .NET Core application named BlobQuickstartV12.
1. In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new
console app with the name BlobQuickstartV12. This command creates a simple "Hello World" C# project
with a single source file: Program.cs.
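A sketch of that command:

dotnet new console -n BlobQuickstartV12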
2. Switch to the newly created BlobQuickstartV12 directory:

cd BlobQuickstartV12
3. Inside the BlobQuickstartV12 directory, create another directory called data. This is where the blob data
files will be created and stored.
mkdir data
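While still in the application directory, install the Azure Blob Storage client library for .NET package, then replace the contents of Program.cs with the skeleton below. A sketch of the install command:

dotnet add package Azure.Storage.Blobs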
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System;
using System.IO;
using System.Threading.Tasks;
namespace BlobQuickstartV12
{
class Program
{
static async Task Main()
{
}
}
}
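To make the connection string available to the app, set an environment variable. On Windows, one way is setx (a sketch; substitute your storage account's connection string for the placeholder):

setx AZURE_STORAGE_CONNECTION_STRING "<yourconnectionstring>"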
After you add the environment variable in Windows, you must start a new instance of the command window.
Linux
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
macOS
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
Restart programs
After you add the environment variable, restart any running programs that will need to read the environment
variable. For example, restart your development environment or editor before continuing.
Object model
Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data
that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers
three types of resources:
The storage account
A container in the storage account
A blob in the container
The following diagram shows the relationship between these resources.
Code examples
These example code snippets show you how to perform the following with the Azure Blob Storage client library
for .NET:
Get the connection string
Create a container
Upload blobs to a container
List the blobs in a container
Download blobs
Delete a container
Get the connection string
The code below retrieves the connection string for the storage account from the environment variable created in
the Configure your storage connection string section.
Add this code inside the Main method:
// Retrieve the connection string for use with the application. The storage
// connection string is stored in an environment variable on the machine
// running the application called AZURE_STORAGE_CONNECTION_STRING. If the
// environment variable is created after the application is launched in a
// console or with Visual Studio, the shell or application needs to be closed
// and reloaded to take the environment variable into account.
string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
Create a container
Decide on a name for the new container. The code below appends a GUID value to the container name to ensure
that it is unique.
IMPORTANT
Container names must be lowercase. For more information about naming containers and blobs, see Naming and
Referencing Containers, Blobs, and Metadata.
Create an instance of the BlobServiceClient class. Then, call the CreateBlobContainerAsync method to create the
container in your storage account.
Add this code to the end of the Main method:
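A sketch of the container-creation code, consistent with the usings declared earlier:

// Create a BlobServiceClient object which will be used to create a container client
BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);

// Create a unique name for the container
string containerName = "quickstartblobs" + Guid.NewGuid().ToString();

// Create the container and return a container client object
BlobContainerClient containerClient = await blobServiceClient.CreateBlobContainerAsync(containerName);

The next snippet, from the upload step, creates a local file in the data directory to upload: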
// Create a local file in the ./data/ directory for uploading and downloading
string localPath = "./data/";
string fileName = "quickstart" + Guid.NewGuid().ToString() + ".txt";
string localFilePath = Path.Combine(localPath, fileName);
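A sketch of the write-and-upload step that follows, consistent with the names above (the file content is an example):

// Write text to the local file
await File.WriteAllTextAsync(localFilePath, "Hello, World!");

// Get a reference to a blob and upload the local file, overwriting any existing blob
BlobClient blobClient = containerClient.GetBlobClient(fileName);
await blobClient.UploadAsync(localFilePath, true);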
Console.WriteLine("Listing blobs...");
Download blobs
Download the previously created blob by calling the DownloadAsync method. The example code adds a suffix of
"DOWNLOADED" to the file name so that you can see both files in the local file system.
Add this code to the end of the Main method:
// Download the blob to a local file
// Append the string "DOWNLOADED" before the .txt extension
// so you can compare the files in the data directory
string downloadFilePath = localFilePath.Replace(".txt", "DOWNLOADED.txt");
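// A sketch of the download call itself, consistent with the names above
BlobDownloadInfo download = await blobClient.DownloadAsync();

using (FileStream downloadFileStream = File.OpenWrite(downloadFilePath))
{
    await download.Content.CopyToAsync(downloadFileStream);
}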
Delete a container
The following code cleans up the resources the app created by deleting the entire container by using
DeleteAsync. It also deletes the local files created by the app.
The app pauses for user input by calling Console.ReadLine before it deletes the blob, container, and local files.
This is a good chance to verify that the resources were actually created correctly, before they are deleted.
Add this code to the end of the Main method:
// Clean up
Console.Write("Press any key to begin clean up");
Console.ReadLine();
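// A sketch of the elided cleanup calls: delete the container and the local files
Console.WriteLine("Deleting blob container...");
await containerClient.DeleteAsync();

Console.WriteLine("Deleting the local source and downloaded files...");
File.Delete(localFilePath);
File.Delete(downloadFilePath);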
Console.WriteLine("Done");
dotnet build
dotnet run
Listing blobs...
quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31.txt
Downloading blob to
./data/quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31DOWNLOADED.txt
Before you begin the clean up process, check your data folder for the two files. You can open them and observe
that they are identical.
After you've verified the files, press the Enter key to delete the test files and finish the demo.
Next steps
In this quickstart, you learned how to upload, download, and list blobs using .NET.
To see Blob storage sample apps, continue to:
Azure Blob Storage SDK v12 .NET samples
For tutorials, samples, quick starts and other documentation, visit Azure for .NET and .NET Core developers.
To learn more about .NET Core, see Get started with .NET in 10 minutes.
Develop for Azure Files with .NET
2/14/2021 • 21 minutes to read • Edit Online
Learn the basics of developing .NET applications that use Azure Files to store data. This article shows how to
create a simple console application to do the following with .NET and Azure Files:
Get the contents of a file.
Set the maximum size, or quota, for a file share.
Create a shared access signature (SAS) for a file.
Copy a file to another file in the same storage account.
Copy a file to a blob in the same storage account.
Create a snapshot of a file share.
Restore a file from a share snapshot.
Use Azure Storage Metrics for troubleshooting.
To learn more about Azure Files, see What is Azure Files?
TIP
Check out the Azure Storage code samples repository
For easy-to-use end-to-end Azure Storage code samples that you can download and run, please check out our list of
Azure Storage Samples.
value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;EndpointSuffix=core.windows.net
" />
<add key="StorageAccountName" value="myaccount" />
<add key="StorageAccountKey" value="mykey" />
</appSettings>
</configuration>
NOTE
The Azurite storage emulator does not currently support Azure Files. Your connection string must target an Azure storage
account in the cloud to work with Azure Files.
using System;
using System.Configuration;
using System.IO;
using System.Threading.Tasks;
using Azure;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;
using Azure.Storage.Sas;
// Instantiate a ShareClient which will be used to create and manipulate the file share
ShareClient share = new ShareClient(connectionString, shareName);
// Save the data to a local file, overwrite if the file already exists
using (FileStream stream = File.OpenWrite(@"downloadedLog1.txt"))
{
await download.Content.CopyToAsync(stream);
await stream.FlushAsync();
stream.Close();
}
//-------------------------------------------------
// Set the maximum size of a share
//-------------------------------------------------
public async Task SetMaxShareSizeAsync(string shareName, int increaseSizeInGiB)
{
const long ONE_GIBIBYTE = 1073741824; // Number of bytes in 1 gibibyte (2^30)
// Expires in 24 hours
ExpiresOn = expiration
};
For more information about creating and using shared access signatures, see How a shared access signature
works.
Copy files
Beginning with version 5.x of the Azure Files client library, you can copy a file to another file, a file to a blob, or a
blob to a file.
You can also use AzCopy to copy one file to another or to copy a blob to a file or the other way around. See Get
started with AzCopy.
NOTE
If you are copying a blob to a file, or a file to a blob, you must use a shared access signature (SAS) to authorize access to
the source object, even if you are copying within the same storage account.
if (await destFile.ExistsAsync())
{
Console.WriteLine($"{sourceFile.Uri} copied to {destFile.Uri}");
}
}
}
await destBlob.StartCopyFromUriAsync(sourceFile.Uri);
if (await destBlob.ExistsAsync())
{
Console.WriteLine($"File {sourceFile.Name} copied to blob {destBlob.Name}");
}
}
}
You can copy a blob to a file in the same way. If the source object is a blob, then create a SAS to authorize access
to that blob during the copy operation.
Share snapshots
Beginning with version 8.5 of the Azure Files client library, you can create a share snapshot. You can also list or
browse share snapshots and delete share snapshots. Once created, share snapshots are read-only.
Create share snapshots
The following example creates a file share snapshot.
.NET v12
.NET v11
//-------------------------------------------------
// Create a share snapshot
//-------------------------------------------------
public async Task CreateShareSnapshotAsync(string shareName)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
// Instantiate a ShareServiceClient
ShareServiceClient shareServiceClient = new ShareServiceClient(connectionString);
//-------------------------------------------------
// List the snapshots on a share
//-------------------------------------------------
public void ListShareSnapshots()
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
// Instantiate a ShareServiceClient
ShareServiceClient shareServiceClient = new ShareServiceClient(connectionString);
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);
Console.WriteLine($"Share: {share.Name}");
//-------------------------------------------------
// Recursively list a directory tree
//-------------------------------------------------
public void ListDirTree(ShareDirectoryClient dir)
{
// List the files and directories in the snapshot
foreach (ShareFileItem item in dir.GetFilesAndDirectories())
{
if (item.IsDirectory)
{
Console.WriteLine($"Directory: {item.Name}");
ShareDirectoryClient subDir = dir.GetSubdirectoryClient(item.Name);
ListDirTree(subDir);
}
else
{
Console.WriteLine($"File: {dir.Name}\\{item.Name}");
}
}
}
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
// Get a ShareClient
ShareClient share = shareService.GetShareClient(shareName);
try
{
// Delete the snapshot
await snapshotShare.DeleteIfExistsAsync();
}
catch (RequestFailedException ex)
{
Console.WriteLine($"Exception: {ex.Message}");
Console.WriteLine($"Error code: {ex.Status}\t{ex.ErrorCode}");
}
}
// Instantiate a ShareServiceClient
ShareServiceClient shareService = new ShareServiceClient(connectionString);
If you encounter any problems, you can refer to Troubleshoot Azure Files problems in Windows.
Next steps
For more information about Azure Files, see the following resources:
Conceptual articles and videos
Azure Files: a frictionless cloud SMB file system for Windows and Linux
Use Azure Files with Linux
Tooling support for File storage
Get started with AzCopy
Troubleshoot Azure Files problems in Windows
Reference
Azure Storage APIs for .NET
File Service REST API
Quickstart: Build a Table API app with .NET SDK
and Azure Cosmos DB
11/2/2020 • 9 minutes to read • Edit Online
Prerequisites
If you don’t already have Visual Studio 2019 installed, you can download and use the free Visual Studio 2019
Community Edition. Make sure that you enable Azure development during the Visual Studio setup.
If you don't have an Azure subscription, create a free account before you begin.
Resource Group: Select Create new, then enter a new resource group name for your account. For simplicity,
use the same name as your Azure Cosmos DB account name.
Location: Select a geographic location to host your Azure Cosmos DB account. Use the region closest to your
users to give them the fastest access to the data.
You can leave the Geo-Redundancy and Multi-region Writes options at Disable to avoid additional
charges, and skip the Network and Tags sections.
5. Select Review + Create. After the validation is complete, select Create to create the account.
6. It takes a few minutes to create the account. You'll see a message that states Your deployment is
underway. Wait for the deployment to finish, and then select Go to resource.
Add a table
You can now use the Data Explorer tool in the Azure portal to create a database and table.
1. Select Data Explorer > New Table .
The Add Table area is displayed on the far right; you may need to scroll right to see it.
2. In the Add Table page, enter the settings for the new table.
3. Select OK .
4. Data Explorer displays the new database and table.
2. Now add data to the PartitionKey value box and RowKey value box, and select Add Entity.
You can now add more entities to your table, edit your entities, or query your data in Data Explorer. Data
Explorer is also where you can scale your throughput and add stored procedures, user-defined functions,
and triggers to your table.
md "C:\git-samples"
2. Open a git terminal window, such as git bash, and use the cd command to change to the new folder to
install the sample app.
cd "C:\git-samples"
3. Run the following command to clone the sample repository. This command creates a copy of the sample
app on your computer.
2. Navigate to the folder where you cloned the sample application and open the TableStorage.sln file.
Console.WriteLine();
return table;
}
The following code shows how to insert data into the table:
try
{
// Create the InsertOrReplace table operation
TableOperation insertOrMergeOperation = TableOperation.InsertOrMerge(entity);
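// A sketch of the elided execution step: run the operation and capture the result
TableResult result = await table.ExecuteAsync(insertOrMergeOperation);
CustomerEntity insertedCustomer = result.Result as CustomerEntity;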
if (result.RequestCharge.HasValue)
{
Console.WriteLine("Request Charge of InsertOrMerge Operation: " + result.RequestCharge);
}
return insertedCustomer;
}
catch (StorageException e)
{
Console.WriteLine(e.Message);
Console.ReadLine();
throw;
}
}
The following code shows how to query data from the table:
public static async Task<CustomerEntity> RetrieveEntityUsingPointQueryAsync(CloudTable table, string
partitionKey, string rowKey)
{
try
{
TableOperation retrieveOperation = TableOperation.Retrieve<CustomerEntity>(partitionKey,
rowKey);
TableResult result = await table.ExecuteAsync(retrieveOperation);
CustomerEntity customer = result.Result as CustomerEntity;
if (customer != null)
{
Console.WriteLine("\t{0}\t{1}\t{2}\t{3}", customer.PartitionKey, customer.RowKey,
customer.Email, customer.PhoneNumber);
}
if (result.RequestCharge.HasValue)
{
Console.WriteLine("Request Charge of Retrieve Operation: " + result.RequestCharge);
}
return customer;
}
catch (StorageException e)
{
Console.WriteLine(e.Message);
Console.ReadLine();
throw;
}
}
The following code shows how to delete data from the table:
if (result.RequestCharge.HasValue)
{
Console.WriteLine("Request Charge of Delete Operation: " + result.RequestCharge);
}
}
catch (StorageException e)
{
Console.WriteLine(e.Message);
Console.ReadLine();
throw;
}
}
{
"StorageConnectionString": "<Primary connection string from Azure portal>"
}
3. Click Install to install the Microsoft.Azure.Cosmos.Table library. This installs the Azure Cosmos DB
Table API package and all dependencies.
4. When you run the entire app, sample data is inserted into the table entity and deleted at the end, so you
won't see any data inserted if you run the whole sample. However, you can insert some breakpoints to
view the data. Open the BasicSamples.cs file and right-click on line 52, select Breakpoint, then select
Insert Breakpoint. Insert another breakpoint on line 55.
5. Press F5 to run the application. The console window displays the name of the new table database (in this
case, demoa13b1) in Azure Cosmos DB.
When you hit the first breakpoint, go back to Data Explorer in the Azure portal. Click the Refresh button,
expand the demo* table, and click Entities. The Entities tab on the right shows the new entity that was
added for Walter Harp. Note that the phone number for the new entity is 425-555-0101.
If you receive an error that says Settings.json file can't be found when running the project, you can
resolve it by adding the following XML entry to the project settings. Right-click CosmosTableSamples,
select Edit CosmosTableSamples.csproj and add the following itemGroup:
<ItemGroup>
<None Update="Settings.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
</ItemGroup>
6. Close the Entities tab in Data Explorer.
7. Press F5 to run the app to the next breakpoint.
When you hit the breakpoint, switch back to the Azure portal, click Entities again to open the Entities
tab, and note that the phone number has been updated to 425-555-0105.
8. Press F5 to run the app.
The app adds entities for use in an advanced sample app that the Table API currently does not support.
The app then deletes the table created by the sample app.
9. In the console window, press Enter to end the execution of the app.
Clean up resources
When you're done with your app and Azure Cosmos DB account, you can delete the Azure resources you created
so you don't incur more charges. To delete the resources:
1. In the Azure portal Search bar, search for and select Resource groups .
2. From the list, select the resource group you created for this quickstart.
3. On the resource group Overview page, select Delete resource group.
4. In the next window, enter the name of the resource group to delete, and then select Delete .
Next steps
In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data
Explorer, and run an app. Now you can query your data using the Table API.
Import table data to the Table API
Quickstart: Build a .NET console app to manage
Azure Cosmos DB SQL API resources
11/2/2020 • 12 minutes to read • Edit Online
Prerequisites
Azure subscription - create one for free, or you can Try Azure Cosmos DB for free without an Azure
subscription, free of charge and commitments.
The .NET Core 2.1 SDK or later.
Setting up
This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure
Cosmos DB SQL API client library for .NET to manage resources. The example code described in this article
creates a FamilyDatabase database and family members (each family member is an item) within that database.
Each family member has properties such as Id, FamilyName, FirstName, LastName, Parents, Children, and Address.
The LastName property is used as the partition key for the container.
Create an Azure Cosmos account
If you use the Try Azure Cosmos DB for free option to create an Azure Cosmos account, you must create an
Azure Cosmos DB account of type SQL API. An Azure Cosmos DB test account is already created for you. You
don't have to create the account explicitly, so you can skip this section and move to the next section.
If you have your own Azure subscription or created a subscription for free, you should create an Azure Cosmos
account explicitly. The following code will create an Azure Cosmos account with session consistency. The account
is replicated in South Central US and North Central US.
You can use Azure Cloud Shell to create the Azure Cosmos account. Azure Cloud Shell is an interactive,
authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the
shell experience that best suits the way you work, either Bash or PowerShell. For this quickstart, choose Bash
mode. Azure Cloud Shell also requires a storage account, which you can create when prompted.
Select the Try It button next to the following code, choose Bash mode, select create a storage account, and
sign in to Cloud Shell. Next, copy and paste the following code into Azure Cloud Shell and run it. The Azure Cosmos
account name must be globally unique, so make sure to update the mysqlapicosmosdb value before you run the
command.
# Set variables for the new SQL API account, database, and container
resourceGroupName='myResourceGroup'
location='southcentralus'

# The Azure Cosmos account name must be globally unique, make sure to update the `mysqlapicosmosdb` value before you run the command
accountName='mysqlapicosmosdb'

# Create a SQL API Cosmos DB account with session consistency and multi-region writes enabled
az cosmosdb create \
    --resource-group $resourceGroupName \
    --name $accountName \
    --kind GlobalDocumentDB \
    --locations regionName="South Central US" failoverPriority=0 \
    --locations regionName="North Central US" failoverPriority=1 \
    --default-consistency-level "Session" \
    --enable-multiple-write-locations true
The creation of the Azure Cosmos account takes a while. Once the operation is successful, you can see the
confirmation output. After the command completes successfully, sign in to the Azure portal and verify that the
Azure Cosmos account with the specified name exists. You can close the Azure Cloud Shell window after the
resource is created.
Create a new .NET app
Create a new .NET application in your preferred editor or IDE. Open the Windows command prompt or a
Terminal window from your local computer. You will run all the commands in the next sections from the
command prompt or terminal. Run the following dotnet new command to create a new app with the name
todo. The --langVersion parameter sets the LangVersion property in the created project file.
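The command looks like the following sketch (the console template and C# 8 language version match the published sample, but treat the exact flags as an assumption):

dotnet new console --langVersion:8 -n todo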
Change your directory to the newly created app folder. You can build the application with:
cd todo
dotnet build
The expected output from the build should look something like this:
Build succeeded.
0 Warning(s)
0 Error(s)
Copy your Azure Cosmos account credentials from the Azure portal
The sample application needs to authenticate to your Azure Cosmos account. To authenticate, you should pass
the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by
following these steps:
1. Sign in to the Azure portal.
2. Navigate to your Azure Cosmos account.
3. Open the Keys pane and copy the URI and PRIMARY KEY of your account. You will add the URI and
key values to environment variables in the next step.
Set the environment variables
After you have copied the URI and PRIMARY KEY of your account, save them to new environment variables
on the local machine running the application. To set the environment variables, open a console window and run
the command for your platform. Make sure to replace the <Your_Azure_Cosmos_account_URI> and
<Your_Azure_Cosmos_account_PRIMARY_KEY> placeholder values.
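The commands for each platform look like the following sketch; the EndpointUrl variable name matches what the sample code reads later in this quickstart, and PrimaryKey is assumed from the sample's naming:

Windows:

setx EndpointUrl "<Your_Azure_Cosmos_account_URI>"
setx PrimaryKey "<Your_Azure_Cosmos_account_PRIMARY_KEY>"

Linux / macOS:

export EndpointUrl="<Your_Azure_Cosmos_account_URI>"
export PrimaryKey="<Your_Azure_Cosmos_account_PRIMARY_KEY>"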
Object model
Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB and the
object model used to create and access these resources. Azure Cosmos DB creates resources in the
following order:
Azure Cosmos account
Databases
Containers
Items
To learn more about the hierarchy of different entities, see the Working with databases, containers, and items
in Azure Cosmos DB article. You will use the following .NET classes to interact with these resources:
CosmosClient - This class provides a client-side logical representation for the Azure Cosmos DB service.
The client object is used to configure and execute requests against the service.
CreateDatabaseIfNotExistsAsync - This method creates (if doesn't exist) or gets (if already exists) a
database resource as an asynchronous operation.
CreateContainerIfNotExistsAsync - This method creates (if it doesn't exist) or gets (if it already exists) a
container as an asynchronous operation. You can check the status code from the response to determine
whether the container was newly created (201) or an existing container was returned (200).
CreateItemAsync - This method creates an item within the container.
UpsertItemAsync - This method creates an item within the container if it doesn't already exist or replaces
the item if it already exists.
GetItemQueryIterator - This method creates a query for items under a container in an Azure Cosmos
database using a SQL statement with parameterized values.
DeleteAsync - Deletes the specified database from your Azure Cosmos account. The DeleteAsync method
only deletes the database. Disposing of the CosmosClient instance should happen separately (which it
does in the DeleteDatabaseAndCleanupAsync method).
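Putting these pieces together, the sample's GetStartedDemoAsync method ties the client and the helper methods into one flow. A minimal sketch (the CreateContainerAsync and QueryItemsAsync method names are assumptions consistent with the methods described below):

public async Task GetStartedDemoAsync()
{
    // Create a new instance of the Cosmos Client
    this.cosmosClient = new CosmosClient(EndpointUrl, PrimaryKey);
    await this.CreateDatabaseAsync();
    await this.CreateContainerAsync();
    await this.AddItemsToContainerAsync();
    await this.QueryItemsAsync();
    await this.DeleteDatabaseAndCleanupAsync();
}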
Code examples
The sample code described in this article creates a family database in Azure Cosmos DB. The family database
contains family details such as name, address, location, the associated parents, children, and pets. Before
populating the data to your Azure Cosmos account, define the properties of a family item. Create a new class
named Family.cs at the root level of your sample application and add the following code to it:
using Newtonsoft.Json;

namespace todo
{
    public class Family
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }
        public string LastName { get; set; }
        public Parent[] Parents { get; set; }
        public Child[] Children { get; set; }
        public Address Address { get; set; }
        public bool IsRegistered { get; set; }

        // The ToString() method is used to format the output; it's for demo purposes
        // only and is not required by Azure Cosmos DB.
        public override string ToString()
        {
            return JsonConvert.SerializeObject(this);
        }
    }
}
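The Family class references Parent, Child, Pet, and Address types that the listing above doesn't show. A minimal sketch of those classes, inferred from the properties the sample uses elsewhere in this quickstart; add them inside the same todo namespace:

public class Parent
{
    public string FamilyName { get; set; }
    public string FirstName { get; set; }
}

public class Child
{
    public string FamilyName { get; set; }
    public string FirstName { get; set; }
    public string Gender { get; set; }
    public int Grade { get; set; }
    public Pet[] Pets { get; set; }
}

public class Pet
{
    public string GivenName { get; set; }
}

public class Address
{
    public string State { get; set; }
    public string County { get; set; }
    public string City { get; set; }
}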
In the Program.cs file, add the following using directives:

using System;
using System.Threading.Tasks;
using System.Configuration;
using System.Collections.Generic;
using System.Net;
using Microsoft.Azure.Cosmos;
To the Program.cs file, add code to read the environment variables that you set in the previous step.
Define the CosmosClient, Database, and Container objects. Next, add code to the Main method that calls the
GetStartedDemoAsync method, where you manage Azure Cosmos account resources.
namespace todo
{
    public class Program
    {
        // The Azure Cosmos DB endpoint for running this GetStarted sample.
        private string EndpointUrl = Environment.GetEnvironmentVariable("EndpointUrl");

        // The primary key for the Azure Cosmos account.
        private string PrimaryKey = Environment.GetEnvironmentVariable("PrimaryKey");

        // The Cosmos client instance, and the database and container it manages
        private CosmosClient cosmosClient;
        private Database database;
        private Container container;

        // The names of the database and container we will create
        private string databaseId = "FamilyDatabase";
        private string containerId = "FamilyContainer";

        public static async Task Main(string[] args)
        {
            try
            {
                Console.WriteLine("Beginning operations...\n");
                Program p = new Program();
                await p.GetStartedDemoAsync();
            }
            catch (CosmosException de)
            {
                Exception baseException = de.GetBaseException();
                Console.WriteLine("{0} error occurred: {1}", de.StatusCode, de);
            }
            catch (Exception e)
            {
                Console.WriteLine("Error: {0}", e);
            }
            finally
            {
                Console.WriteLine("End of demo, press any key to exit.");
                Console.ReadKey();
            }
        }
    }
}
Create a database
Define the CreateDatabaseAsync method within the Program class. This method creates the FamilyDatabase
database if it doesn't already exist.
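A minimal sketch of that method, using the CreateDatabaseIfNotExistsAsync call described in the object model (the databaseId field holds "FamilyDatabase"):

private async Task CreateDatabaseAsync()
{
    // Create a new database if it doesn't already exist
    this.database = await this.cosmosClient.CreateDatabaseIfNotExistsAsync(databaseId);
    Console.WriteLine("Created Database: {0}\n", this.database.Id);
}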
Create an item
Create a family item by adding the AddItemsToContainerAsync method with the following code. You can use the
CreateItemAsync or UpsertItemAsync methods to create an item:
private async Task AddItemsToContainerAsync()
{
// Create a family object for the Andersen family
Family andersenFamily = new Family
{
Id = "Andersen.1",
LastName = "Andersen",
Parents = new Parent[]
{
new Parent { FirstName = "Thomas" },
new Parent { FirstName = "Mary Kay" }
},
Children = new Child[]
{
new Child
{
FirstName = "Henriette Thaulow",
Gender = "female",
Grade = 5,
Pets = new Pet[]
{
new Pet { GivenName = "Fluffy" }
}
}
},
Address = new Address { State = "WA", County = "King", City = "Seattle" },
IsRegistered = false
};
try
{
    // Create an item in the container representing the Andersen family.
    // Note we provide the value of the partition key for this item, which is "Andersen".
    ItemResponse<Family> andersenFamilyResponse = await this.container.CreateItemAsync<Family>(andersenFamily, new PartitionKey(andersenFamily.LastName));

    // Note that after creating the item, we can access the body of the item with the Resource
    // property of the ItemResponse. We can also access the RequestCharge property to see the
    // amount of RUs consumed on this request.
    Console.WriteLine("Created item in database with id: {0} Operation consumed {1} RUs.\n",
        andersenFamilyResponse.Resource.Id, andersenFamilyResponse.RequestCharge);
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.Conflict)
{
Console.WriteLine("Item in database with id: {0} already exists\n", andersenFamily.Id);
}
}
Query the items by adding a QueryItemsAsync method that uses GetItemQueryIterator (the query text below is a sketch consistent with the data inserted earlier):

private async Task QueryItemsAsync()
{
    var sqlQueryText = "SELECT * FROM c WHERE c.LastName = 'Andersen'";
    QueryDefinition queryDefinition = new QueryDefinition(sqlQueryText);
    FeedIterator<Family> queryResultSetIterator = this.container.GetItemQueryIterator<Family>(queryDefinition);
    List<Family> families = new List<Family>();

    while (queryResultSetIterator.HasMoreResults)
    {
        FeedResponse<Family> currentResultSet = await queryResultSetIterator.ReadNextAsync();
        foreach (Family family in currentResultSet)
        {
            families.Add(family);
            Console.WriteLine("\tRead {0}\n", family);
        }
    }
}
Delete the database and dispose of the client by adding a DeleteDatabaseAndCleanupAsync method:

private async Task DeleteDatabaseAndCleanupAsync()
{
    await this.database.DeleteAsync();
    // Dispose of CosmosClient
    this.cosmosClient.Dispose();
}
After you add all the required methods, save the Program.cs file. Build and run the application with the following commands:

dotnet build
dotnet run
The following output is generated when you run the application. You can also sign into the Azure portal and
validate that the resources are created:
Created item in database with id: Andersen.1 Operation consumed 11.62 RUs.
Read {"id":"Andersen.1","LastName":"Andersen","Parents":[{"FamilyName":null,"FirstName":"Thomas"},
{"FamilyName":null,"FirstName":"Mary Kay"}],"Children":[{"FamilyName":null,"FirstName":"Henriette
Thaulow","Gender":"female","Grade":5,"Pets":[{"GivenName":"Fluffy"}]}],"Address":
{"State":"WA","County":"King","City":"Seattle"},"IsRegistered":false}
You can validate that the data is created by signing in to the Azure portal and viewing the required items in your
Azure Cosmos account.
Clean up resources
When no longer needed, you can use the Azure CLI or Azure PowerShell to remove the Azure Cosmos account
and the corresponding resource group. The following command shows how to delete the resource group by
using the Azure CLI:
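For example, assuming the resource group name used earlier in this quickstart:

az group delete --name myResourceGroup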
Next steps
In this quickstart, you learned how to create an Azure Cosmos account, and create a database and a container using
a .NET Core app. You can now import additional data to your Azure Cosmos account with the instructions in the
following article.
Import data into Azure Cosmos DB
Quickstart: Create an Azure SQL Database single
database
2/14/2021 • 7 minutes to read • Edit Online
In this quickstart, you create a single database in Azure SQL Database using either the Azure portal, a
PowerShell script, or an Azure CLI script. You then query the database using Query editor in the Azure portal.
Prerequisite
An active Azure subscription. If you don't have one, create a free account.
3. On the Basics tab of the Create SQL Database form, under Project details, select the desired Azure
Subscription.
4. For Resource group, select Create new, enter myResourceGroup, and select OK.
5. For Database name, enter mySampleDatabase.
6. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. So enter something like mysqlserver12345, and the portal lets you
know if it is available or not.
Server admin login: Enter azureuser.
Password: Enter a password that meets requirements, and enter it again in the Confirm password
field.
Location: Select a location from the dropdown list.
Select OK.
7. Leave Want to use SQL elastic pool set to No.
8. Under Compute + storage, select Configure database.
9. This quickstart uses a serverless database, so select Serverless, and then select Apply.
5. Select Run, and then review the query results in the Results pane.
6. Close the Query editor page, and select OK when prompted to discard your unsaved edits.
Clean up resources
Keep the resource group, server, and single database to go on to the next steps, and learn how to connect and
query your database with different methods.
When you're finished using these resources, you can delete the resource group you created, which will also
delete the server and single database within it.
Portal
Azure CLI
PowerShell
To delete myResourceGroup and all its resources using the Azure portal:
1. In the portal, search for and select Resource groups, and then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group.
3. Under Type the resource group name, enter myResourceGroup, and then select Delete.
Next steps
Connect and query your database using different tools and languages:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Want to optimize and save on your cloud spending?
Start analyzing costs with Cost Management
Get started with Azure Queue Storage using .NET
2/14/2021 • 19 minutes to read • Edit Online
Overview
Azure Queue Storage provides cloud messaging between application components. In designing applications for
scale, application components are often decoupled so they can scale independently. Queue Storage delivers
asynchronous messaging between application components, whether they are running in the cloud, on the
desktop, on an on-premises server, or on a mobile device. Queue Storage also supports managing
asynchronous tasks and building process workflows.
About this tutorial
This tutorial shows how to write .NET code for some common scenarios using Azure Queue Storage. Scenarios
covered include creating and deleting queues and adding, reading, and deleting queue messages.
Estimated time to complete: 45 minutes
Prerequisites
Microsoft Visual Studio
An Azure Storage account
Storage Account: All access to Azure Storage is done through a storage account. For more information
about storage accounts, see Storage account overview.
Queue: A queue contains a set of messages. All messages must be in a queue. Note that the queue name
must be all lowercase. For information on naming queues, see Naming Queues and Metadata.
Message: A message, in any format, of up to 64 KB. The maximum time that a message can remain in
the queue is 7 days. For version 2017-07-29 or later, the maximum time-to-live can be any positive
number, or -1 indicating that the message doesn't expire. If this parameter is omitted, the default
time-to-live is seven days. (An example of setting the time-to-live appears after this list.)
URL format: Queues are addressable using the following URL format:
http://<storage account>.queue.core.windows.net/<queue>
The following URL addresses a queue in the diagram:
http://myaccount.queue.core.windows.net/incoming-orders
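As noted in the Message entry above, you can set a message's time-to-live when you send it. A minimal sketch using the Azure.Storage.Queues v12 client (assumes an existing QueueClient named queueClient):

// Send a message that never expires by specifying a time-to-live of -1 seconds
queueClient.SendMessage("Hello, Azure", timeToLive: TimeSpan.FromSeconds(-1));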
NOTE
You can target the storage emulator to avoid incurring any costs associated with Azure Storage. However, if you do
choose to target an Azure Storage account in the cloud, costs for performing this tutorial will be negligible.
For more information about connection strings, see Configure a connection string to Azure Storage.
NOTE
Your storage account key is similar to the root password for your storage account. Always be careful to protect your
storage account key. Avoid distributing it to other users, hard-coding it, or saving it in a plain-text file that is accessible to
others. Regenerate your key by using the Azure portal if you believe it may have been compromised.
The best way to maintain your storage connection string is in a configuration file. To configure your connection
string, open the app.config file from Solution Explorer in Visual Studio. Add the contents of the <appSettings>
element shown here. Replace connection-string with the value you copied from your storage account in the
portal:
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7.2" />
</startup>
<appSettings>
<add key="StorageConnectionString" value="connection-string" />
</appSettings>
</configuration>
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=storagesample;AccountKey=GMuzNHjlB3S9itqZJHHCnRkrokLkcSyW7yK9BRbGp0ENePunLPwBgpxV1Z/pVo9zpem/2xSHXkMqTHHLcx8XRA==;EndpointSuffix=core.windows.net" />
To target the Azurite storage emulator, you can use a shortcut that maps to the well-known account name and
key. In that case, your connection string setting is:
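<add key="StorageConnectionString" value="UseDevelopmentStorage=true" />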
With the connection string in place, the samples create a QueueClient for the target queue:

// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a QueueClient which will be used to create and manipulate the queue
QueueClient queueClient = new QueueClient(connectionString, queueName);
Now you are ready to write code that reads data from and writes data to Queue Storage.
Create a queue
This example shows how to create a queue:
.NET v12
.NET v11
//-------------------------------------------------
// Create a message queue
//-------------------------------------------------
public bool CreateQueue(string queueName)
{
try
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
// Instantiate a QueueClient which will be used to create and manipulate the queue
QueueClient queueClient = new QueueClient(connectionString, queueName);

// Create the queue if it doesn't already exist
queueClient.CreateIfNotExists();

if (queueClient.Exists())
{
Console.WriteLine($"Queue created: '{queueClient.Name}'");
return true;
}
else
{
Console.WriteLine($"Make sure the Azurite storage emulator is running and try again.");
return false;
}
}
catch (Exception ex)
{
Console.WriteLine($"Exception: {ex.Message}\n\n");
Console.WriteLine($"Make sure the Azurite storage emulator is running and try again.");
return false;
}
}
//-------------------------------------------------
// Insert a message into a queue
//-------------------------------------------------
public void InsertMessage(string queueName, string message)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];
// Instantiate a QueueClient which will be used to create and manipulate the queue
QueueClient queueClient = new QueueClient(connectionString, queueName);

// Create the queue if it doesn't already exist
queueClient.CreateIfNotExists();

if (queueClient.Exists())
{
// Send a message to the queue
queueClient.SendMessage(message);
}
Console.WriteLine($"Inserted: {message}");
}
//-------------------------------------------------
// Peek at a message in the queue
//-------------------------------------------------
public void PeekMessage(string queueName)
{
    // Get the connection string from app settings
    string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

    // Instantiate a QueueClient which will be used to manipulate the queue
    QueueClient queueClient = new QueueClient(connectionString, queueName);

    if (queueClient.Exists())
    {
        // Peek at the next message
        PeekedMessage[] peekedMessage = queueClient.PeekMessages();

        // Display the message
        Console.WriteLine($"Peeked message: '{peekedMessage[0].Body}'");
    }
}
//-------------------------------------------------
// Update an existing message in the queue
//-------------------------------------------------
public void UpdateMessage(string queueName)
{
    // Get the connection string from app settings
    string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

    // Instantiate a QueueClient which will be used to manipulate the queue
    QueueClient queueClient = new QueueClient(connectionString, queueName);

    if (queueClient.Exists())
    {
        // Get the message from the queue
        QueueMessage[] message = queueClient.ReceiveMessages();

        // Update the message contents and make it invisible for another 60 seconds
        queueClient.UpdateMessage(message[0].MessageId, message[0].PopReceipt, "Updated contents", TimeSpan.FromSeconds(60.0));
    }
}
The following code dequeues the next message and deletes it after processing (it assumes a queueClient built as in the previous examples):

if (queueClient.Exists())
{
    // Get the next message
    QueueMessage[] retrievedMessage = queueClient.ReceiveMessages();
    // Process (i.e. print) the message, then delete it
    Console.WriteLine($"Dequeued message: '{retrievedMessage[0].Body}'");
    queueClient.DeleteMessage(retrievedMessage[0].MessageId, retrievedMessage[0].PopReceipt);
}
When you use the async APIs, the pattern is similar. For example, the following sketch creates a queue and reports whether it already existed; CreateIfNotExistsAsync returns null when the queue is already there:

// Create the queue if it doesn't already exist
Response response = await queueClient.CreateIfNotExistsAsync();
if (response != null)
{
    Console.WriteLine($"Queue '{queueClient.Name}' created");
}
else
{
    Console.WriteLine($"Queue '{queueClient.Name}' exists");
}
To dequeue messages in batches, pass a message count and a visibility timeout to ReceiveMessages:

if (queueClient.Exists())
{
    // Receive and process 20 messages, keeping them invisible for 5 minutes
    QueueMessage[] receivedMessages = queueClient.ReceiveMessages(20, TimeSpan.FromMinutes(5));
    foreach (QueueMessage message in receivedMessages)
    {
        // Process (i.e. print) the message, then delete it
        Console.WriteLine($"De-queued message: '{message.Body}'");
        queueClient.DeleteMessage(message.MessageId, message.PopReceipt);
    }
}
//-----------------------------------------------------
// Get the approximate number of messages in the queue
//-----------------------------------------------------
public void GetQueueLength(string queueName)
{
    // Get the connection string from app settings
    string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

    // Instantiate a QueueClient which will be used to manipulate the queue
    QueueClient queueClient = new QueueClient(connectionString, queueName);

    if (queueClient.Exists())
    {
        QueueProperties properties = queueClient.GetProperties();

        // Retrieve the cached approximate message count
        int cachedMessagesCount = properties.ApproximateMessagesCount;
        Console.WriteLine($"Number of messages in queue: {cachedMessagesCount}");
    }
}
Delete a queue
.NET v12
.NET v11
To delete a queue and all the messages contained in it, call the Delete method on the queue object.
//-------------------------------------------------
// Delete the queue
//-------------------------------------------------
public void DeleteQueue(string queueName)
{
// Get the connection string from app settings
string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

// Instantiate a QueueClient which will be used to manipulate the queue
QueueClient queueClient = new QueueClient(connectionString, queueName);

if (queueClient.Exists())
{
    // Delete the queue
    queueClient.Delete();
}

Console.WriteLine($"Queue deleted: '{queueClient.Name}'");
}
Next steps
Now that you've learned the basics of Queue Storage, follow these links to learn about more complex storage
tasks.
View the Queue Storage reference documentation for complete details about available APIs:
Azure Storage client library for .NET reference
Azure Storage REST API reference
View more feature guides to learn about additional options for storing data in Azure.
Get started with Azure Table Storage using .NET to store structured data.
Get started with Azure Blob Storage using .NET to store unstructured data.
Connect to SQL Database by using .NET (C#) to store relational data.
Learn how to simplify the code you write to work with Azure Storage by using the Azure WebJobs SDK.
Scale up an app in Azure App Service
11/2/2020 • 2 minutes to read • Edit Online
This article shows you how to scale your app in Azure App Service. There are two workflows for scaling:
scale up and scale out. This article explains the scale up workflow.
Scale up: Get more CPU, memory, disk space, and extra features like dedicated virtual machines (VMs),
custom domains and certificates, staging slots, autoscaling, and more. You scale up by changing the pricing
tier of the App Service plan that your app belongs to.
Scale out: Increase the number of VM instances that run your app. You can scale out to as many as 30
instances, depending on your pricing tier. App Service Environments in Isolated tier further increases your
scale-out count to 100 instances. For more information about scaling out, see Scale instance count manually
or automatically. There, you find out how to use autoscaling, which is to scale instance count automatically
based on predefined rules and schedules.
The scale settings take only seconds to apply and affect all apps in your App Service plan. They don't require you
to change your code or redeploy your application.
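Scaling up can also be scripted. A minimal sketch using the Azure CLI, with illustrative resource names:

# Change the pricing tier of an App Service plan to Standard S1
az appservice plan update --name myAppServicePlan --resource-group myResourceGroup --sku S1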
For information about the pricing and features of individual App Service plans, see App Service Pricing Details.
NOTE
Before you switch an App Service plan from the Free tier, you must first remove the spending limits in place for your
Azure subscription. To view or change options for your Microsoft Azure App Service subscription, see Microsoft Azure
Subscriptions.
2. In the Summary part of the Resource group page, select a resource that you want to scale. The
following screenshot shows a SQL Database resource.
To scale up the related resource, see the documentation for the specific resource type. For example, to
scale up a single SQL Database, see Scale single database resources in Azure SQL Database. To scale up an
Azure Database for MySQL resource, see Scale MySQL resources.
More resources
Scale instance count manually or automatically
Configure PremiumV3 tier for App Service
What are virtual machine scale sets?
2/14/2021 • 4 minutes to read • Edit Online
Azure virtual machine scale sets let you create and manage a group of load balanced VMs. The number of VM
instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets
provide high availability to your applications, and allow you to centrally manage, configure, and update a large
number of VMs. With virtual machine scale sets, you can build large-scale services for areas such as compute,
big data, and container workloads.
| Scenario | Manual group of VMs | Virtual machine scale set |
| --- | --- | --- |
| Add additional VM instances | Manual process to create, configure, and ensure compliance | Automatically create from central configuration |
| Traffic balancing and distribution | Manual process to create and configure Azure load balancer or Application Gateway | Can automatically create and integrate with Azure load balancer or Application Gateway |
| High availability and redundancy | Manually create Availability Set or distribute and track VMs across Availability Zones | Automatic distribution of VM instances across Availability Zones or Availability Sets |
| Scaling of VMs | Manual monitoring and Azure Automation | Autoscale based on host metrics, in-guest metrics, Application Insights, or schedule |
There is no additional cost to scale sets. You only pay for the underlying compute resources such as the VM
instances, load balancer, or Managed Disk storage. The management and automation features, such as autoscale
and redundancy, incur no additional charges over the use of VMs.
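As a quick illustration, a scale set can be created with a single CLI command. A minimal sketch, with illustrative resource names:

# Create a scale set with two Ubuntu instances and generated SSH keys
az vmss create \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --image UbuntuLTS \
    --instance-count 2 \
    --admin-username azureuser \
    --generate-ssh-keys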
Data residency
In Azure, the feature to enable storing customer data in a single region is currently only available in the
Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil
Geo. For all other regions, customer data is stored in Geo. For more information, see Trust Center.
Next steps
To get started, create your first virtual machine scale set in the Azure portal.
Create a scale set in the Azure portal
Scaling in Service Fabric
2/14/2021 • 15 minutes to read • Edit Online
Azure Service Fabric makes it easy to build scalable applications by managing the services, partitions, and
replicas on the nodes of a cluster. Running many workloads on the same hardware enables maximum resource
utilization, but also provides flexibility in terms of how you choose to scale your workloads. This Channel 9 video
describes how you can build scalable microservices applications:
If you increase the number of nodes, Service Fabric will move some of the existing replicas there. For example,
let's say the number of nodes increases to four and the replicas get redistributed. Now the service has three
replicas running on each node, each belonging to a different partition. This allows better resource utilization
since the new node isn't cold. Typically, it also improves performance as each service has more resources
available to it.
Scaling by using the Service Fabric Cluster Resource Manager and
metrics
Metrics are how services express their resource consumption to Service Fabric. Using metrics gives the Cluster
Resource Manager an opportunity to reorganize and optimize the layout of the cluster. For example, there may
be plenty of resources in the cluster, but they might not be allocated to the services that are currently doing
work. Using metrics allows the Cluster Resource Manager to reorganize the cluster to ensure that services have
access to the available resources.
Choosing a platform
Due to implementation differences between operating systems, choosing to use Service Fabric with Windows or
Linux can be a vital part of scaling your application. One potential barrier is how staged logging is performed.
Service Fabric on Windows uses a kernel driver for a one-per-machine log, shared between stateful service
replicas. This log weighs in at about 8 GB. Linux, on the other hand, uses a 256 MB staging log for each replica,
making it less ideal for applications that want to maximize the number of lightweight service replicas running
on a given node. These differences in temporary storage requirements could potentially inform the desired
platform for Service Fabric cluster deployment.
Next steps
For more information on Service Fabric concepts, see the following articles:
Availability of Service Fabric services
Partitioning Service Fabric services
Secure a custom DNS name with a TLS/SSL binding
in Azure App Service
11/2/2020 • 8 minutes to read • Edit Online
This article shows you how to secure the custom domain in your App Service app or function app by creating a
certificate binding. When you're finished, you can access your App Service app at the https:// endpoint for
your custom DNS name (for example, https://www.contoso.com ).
Prerequisites
To follow this how-to guide:
Create an App Service app
Map a domain name to your app or buy and configure it in Azure
Add a private certificate to your app
NOTE
The easiest way to add a private certificate is to create a free App Service Managed Certificate (Preview).
On the App Services page, select the name of your web app.
Check to make sure that your web app is not in the F1 or D1 tier. Your web app's current tier is highlighted by a
dark blue box.
Custom SSL is not supported in the F1 or D1 tier. If you need to scale up, follow the steps in the next section.
Otherwise, close the Scale up page and skip the Scale up your App Service plan section.
Scale up your App Service plan
Select any of the non-free tiers (B1, B2, B3, or any tier in the Production category). For additional options, click
See additional options.
Click Apply.
When you see the following notification, the scale operation is complete.
In Custom Domain, select the custom domain you want to add a binding for.
If your app already has a certificate for the selected custom domain, go to Create binding directly. Otherwise,
keep going.
Add a certificate for custom domain
If your app has no certificate for the selected custom domain, then you have two options:
Upload PFX Certificate - Follow the workflow at Upload a private certificate, then select this option here.
Import App Service Certificate - Follow the workflow at Import an App Service certificate, then select
this option here.
NOTE
You can also Create a free certificate (Preview) or Import a Key Vault certificate, but you must do it separately and then
return to the TLS/SSL Binding dialog.
Create binding
Use the following table to help you configure the TLS binding in the TLS/SSL Binding dialog, then click Add Binding.

| Setting | Description |
| --- | --- |
| Custom domain | The domain name to add the TLS/SSL binding for. |
| TLS/SSL Type | SNI SSL - Multiple SNI SSL bindings may be added. This option allows multiple TLS/SSL certificates to secure multiple domains on the same IP address. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI (for more information, see Server Name Indication). IP SSL - Only one IP SSL binding may be added. This option allows only one TLS/SSL certificate to secure a dedicated public IP address. After you configure the binding, follow the steps in Remap records for IP SSL. IP SSL is supported only in Standard tier or above. |
Once the operation is complete, the custom domain's TLS/SSL state is changed to Secure .
NOTE
A Secure state in the Custom domains means that it is secured with a certificate, but App Service doesn't check if the
certificate is self-signed or expired, for example, which can also cause browsers to show an error or warning.
Test HTTPS
In various browsers, browse to https://<your.custom.domain> to verify that it serves up your app.
Your application code can inspect the protocol via the "x-appservice-proto" header. The header will have a value
of http or https .
NOTE
If your app gives you certificate validation errors, you're probably using a self-signed certificate.
If that's not the case, you may have left out intermediate certificates when you exported your certificate to the PFX file.
Prevent IP changes
Your inbound IP address can change when you delete a binding, even if that binding is IP SSL. This is especially
important when you renew a certificate that's already in an IP SSL binding. To avoid a change in your app's IP
address, follow these steps in order:
1. Upload the new certificate.
2. Bind the new certificate to the custom domain you want without deleting the old one. This action replaces the
binding instead of removing the old one.
3. Delete the old certificate.
Enforce HTTPS
By default, anyone can still access your app using HTTP. You can redirect all HTTP requests to the HTTPS port.
In your app page, in the left navigation, select SSL settings. Then, in HTTPS Only, select On.
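You can also turn this setting on from the command line. A minimal sketch using the Azure CLI, with an illustrative app name:

# Redirect all HTTP requests to HTTPS for the app
az webapp update --name <app-name> --resource-group myResourceGroup --set httpsOnly=true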
When the operation is complete, navigate to any of the HTTP URLs that point to your app. For example:
http://<app_name>.azurewebsites.net
http://contoso.com
http://www.contoso.com
fqdn=<replace-with-www.{yourdomain}>
pfxPath=<replace-with-path-to-your-.PFX-file>
pfxPassword=<replace-with-your-.PFX-password>
resourceGroup=myResourceGroup
webappname=mywebapp$RANDOM
# Create an App Service plan in Basic tier (minimum required by custom domains).
az appservice plan create --name $webappname --resource-group $resourceGroup --sku B1
# Before continuing, go to your DNS configuration UI for your custom domain and follow the
# instructions at https://aka.ms/appservicecustomdns to configure a CNAME record for the
# hostname "www" and point it to your web app's default domain name.
PowerShell
$fqdn="<Replace with your custom domain name>"
$pfxPath="<Replace with path to your .PFX file>"
$pfxPassword="<Replace with your .PFX password>"
$webappname="mywebapp$(Get-Random)"
$location="West Europe"
# Before continuing, go to your DNS configuration UI for your custom domain and follow the
# instructions at https://aka.ms/appservicecustomdns to configure a CNAME record for the
# hostname "www" and point it to your web app's default domain name.
# Upgrade App Service plan to Basic tier (minimum required by custom SSL certificates)
Set-AzAppServicePlan -Name $webappname -ResourceGroupName $webappname `
-Tier Basic
More resources
Use a TLS/SSL certificate in your code in Azure App Service
FAQ: App Service Certificates
Back up your app in Azure
11/2/2020 • 6 minutes to read • Edit Online
The Backup and Restore feature in Azure App Service lets you easily create app backups manually or on a
schedule. You can configure the backups to be retained for an indefinite amount of time. You can restore the
app to a snapshot of a previous state by overwriting the existing app or restoring to another app.
For information on restoring an app from backup, see Restore an app in Azure.
NOTE
Each backup is a complete offline copy of your app, not an incremental update.
NOTE
If you see the following message, click it to upgrade your App Service plan before you can proceed with backups.
For more information, see Scale up an app in Azure.
2. In the Backup page, select Backup is not configured. Click here to configure backup for your
app .
3. In the Backup Configuration page, click Storage not configured to configure a storage account.
4. Choose your backup destination by selecting a Storage Account and Container . The storage account
must belong to the same subscription as the app you want to back up. If you wish, you can create a new
storage account or a new container in the respective pages. When you're done, click Select .
5. In the Backup Configuration page that is still left open, you can configure Backup Database , then
select the databases you want to include in the backups (SQL Database or MySQL), then click OK .
NOTE
For a database to appear in this list, its connection string must exist in the Connection strings section of the
Application settings page for your app.
In-app MySQL databases are automatically backed up without any configuration. If you manually make settings
for in-app MySQL databases, such as adding connection strings, the backups may not work correctly.
NOTE
Individual databases in the backup can be 4 GB max, but the total max size of the backup is 10 GB.
For example, to exclude specific files and folders from backups, list their paths in a _backup.filter file, one path per line:
\site\wwwroot\Images\brand.png
\site\wwwroot\Images\2014
\site\wwwroot\Images\2013
Upload _backup.filter file to the D:\home\site\wwwroot\ directory of your site using ftp or any other method. If
you wish, you can create the file directly using Kudu DebugConsole and insert the content there.
Run backups the same way you would normally do it, manually or automatically. Now, any files and folders that
are specified in _backup.filter are excluded from future backups, scheduled or manually initiated.
NOTE
You restore partial backups of your site the same way you would restore a regular backup. The restore process does the
right thing.
When a full backup is restored, all content on the site is replaced with whatever is in the backup. If a file is on the site, but
not in the backup it gets deleted. But when a partial backup is restored, any content that is located in one of the
restricted directories, or any restricted file, is left as is.
WARNING
Altering any of the files in your websitebackups container can cause the backup to become invalid and therefore non-
restorable.
Next Steps
For information on restoring an app from a backup, see Restore an app in Azure.
An overview of Azure VM backup
2/14/2021 • 12 minutes to read • Edit Online
This article describes how the Azure Backup service backs up Azure virtual machines (VMs).
Azure Backup provides independent and isolated backups to guard against unintended destruction of the data
on your VMs. Backups are stored in a Recovery Services vault with built-in management of recovery points.
Configuration and scaling are simple, backups are optimized, and you can easily restore as needed.
As part of the backup process, a snapshot is taken, and the data is transferred to the Recovery Services vault
with no impact on production workloads. The snapshot provides different levels of consistency, as described
here.
Azure Backup also has specialized offerings for database workloads like SQL Server and SAP HANA that are
workload-aware, offer a 15-minute RPO (recovery point objective), and allow backup and restore of individual
databases.
Backup process
Here's how Azure Backup completes a backup for Azure VMs:
1. For Azure VMs that are selected for backup, Azure Backup starts a backup job according to the backup
schedule you specify.
2. During the first backup, a backup extension is installed on the VM if the VM is running.
For Windows VMs, the VMSnapshot extension is installed.
For Linux VMs, the VMSnapshotLinux extension is installed.
3. For Windows VMs that are running, Backup coordinates with Windows Volume Shadow Copy Service
(VSS) to take an app-consistent snapshot of the VM.
By default, Backup takes full VSS backups.
If Backup can't take an app-consistent snapshot, then it takes a file-consistent snapshot of the
underlying storage (because no application writes occur while the VM is stopped).
4. For Linux VMs, Backup takes a file-consistent backup. For app-consistent snapshots, you need to
manually customize pre/post scripts.
5. After Backup takes the snapshot, it transfers the data to the vault.
The backup is optimized by backing up each VM disk in parallel.
For each disk that's being backed up, Azure Backup reads the blocks on the disk and identifies and
transfers only the data blocks that changed (the delta) since the previous backup.
Snapshot data might not be immediately copied to the vault. It might take some hours at peak times.
Total backup time for a VM will be less than 24 hours for daily backup policies.
6. Changes made to a Windows VM after Azure Backup is enabled on it are:
Microsoft Visual C++ 2013 Redistributable(x64) - 12.0.40660 is installed in the VM
Startup type of Volume Shadow Copy service (VSS) changed to automatic from manual
IaaSVmProvider Windows service is added
7. When the data transfer is complete, the snapshot is removed, and a recovery point is created.
Encryption of Azure VM backups
When you back up Azure VMs with Azure Backup, VMs are encrypted at rest with Storage Service Encryption
(SSE). Azure Backup can also back up Azure VMs that are encrypted by using Azure Disk Encryption.
| Encryption | Details | Support |
| --- | --- | --- |
| SSE | With SSE, Azure Storage provides encryption at rest by automatically encrypting data before storing it. Azure Storage also decrypts data before retrieving it. | Azure Backup uses SSE for at-rest encryption of Azure VMs. Azure Backup supports backups of VMs with two types of Storage Service Encryption: SSE with platform-managed keys: This encryption is by default for all disks in your VMs. See more here. SSE with customer-managed keys: With CMK, you manage the keys used to encrypt the disks. See more here. |
| Azure Disk Encryption | Azure Disk Encryption encrypts both OS and data disks for Azure VMs. Azure Disk Encryption integrates with BitLocker encryption keys (BEKs), which are safeguarded in a key vault as secrets. Azure Disk Encryption also integrates with Azure Key Vault key encryption keys (KEKs). | Azure Backup supports backup of managed and unmanaged Azure VMs encrypted with BEKs only, or with BEKs together with KEKs. Both BEKs and KEKs are backed up and encrypted. Because KEKs and BEKs are backed up, users with the necessary permissions can restore keys and secrets back to the key vault if needed. These users can also recover the encrypted VM. |
For managed and unmanaged Azure VMs, Backup supports both VMs encrypted with BEKs only or VMs
encrypted with BEKs together with KEKs.
The backed-up BEKs (secrets) and KEKs (keys) are encrypted. They can be read and used only when they're
restored back to the key vault by authorized users. Neither unauthorized users, or Azure, can read or use
backed-up keys or secrets.
BEKs are also backed up. So, if the BEKs are lost, authorized users can restore the BEKs to the key vault and
recover the encrypted VMs. Only users with the necessary level of permissions can back up and restore
encrypted VMs or keys and secrets.
Snapshot creation
Azure Backup takes snapshots according to the backup schedule.
Windows VMs: For Windows VMs, the Backup service coordinates with VSS to take an app-consistent
snapshot of the VM disks. By default, Azure Backup takes a full VSS backup (it truncates the logs of
applications such as SQL Server at the time of backup to get an application-level consistent backup). If you're
using a SQL Server database on Azure VM backup, then you can modify the setting to take a VSS Copy
backup (to preserve logs). For more information, see this article.
Linux VMs: To take app-consistent snapshots of Linux VMs, use the Linux pre-script and post-script
framework to write your own custom scripts to ensure consistency.
Azure Backup invokes only the pre/post scripts written by you.
If the pre-scripts and post-scripts execute successfully, Azure Backup marks the recovery point as
application-consistent. However, when you're using custom scripts, you're ultimately responsible for
the application consistency.
Learn more about how to configure scripts.
Snapshot consistency
The following table explains the different types of snapshot consistency:
| Snapshot | Details | Recovery | Consideration |
| --- | --- | --- | --- |
| Application-consistent | App-consistent backups capture memory content and pending I/O operations. App-consistent snapshots use a VSS writer (or pre/post scripts for Linux) to ensure the consistency of the app data before a backup occurs. | When you're recovering a VM with an app-consistent snapshot, the VM boots up. There's no data corruption or loss. The apps start in a consistent state. | Windows: All VSS writers succeeded. Linux: Pre/post scripts are configured and succeeded. |
| File-system consistent | File-system consistent backups provide consistency by taking a snapshot of all files at the same time. | When you're recovering a VM with a file-system consistent snapshot, the VM boots up. There's no data corruption or loss. Apps need to implement their own "fix-up" mechanism to make sure that restored data is consistent. | Windows: Some VSS writers failed. Linux: Default (if pre/post scripts aren't configured or failed). |
NOTE
If the provisioning state is succeeded , Azure Backup takes file-system consistent backups. If the provisioning state is
unavailable or failed , crash-consistent backups are taken. If the provisioning state is creating or deleting , that means
Azure Backup is retrying the operations.
| Consideration | Details |
| --- | --- |
| Preparing backups | Keep in mind the time needed to prepare the backup. The preparation time includes installing or updating the backup extension and triggering a snapshot according to the backup schedule. |
| Data transfer | Consider the time needed for Azure Backup to identify the incremental changes from the previous backup. |
| Initial backup | Although the total backup time for incremental backups is less than 24 hours, that might not be the case for the first backup. The time needed for the initial backup will depend on the size of the data and when the backup is processed. |
| Restore queue | Azure Backup processes restore jobs from multiple storage accounts at the same time, and it puts restore requests in a queue. |
| Restore copy | During the restore process, data is copied from the vault to the storage account. |
Backup performance
These common scenarios can affect the total backup time:
Adding a new disk to a protected Azure VM: If a VM is undergoing incremental backup and a new disk
is added, the backup time will increase. The total backup time might last more than 24 hours because of
initial replication of the new disk, along with delta replication of existing disks.
Fragmented disks: Backup operations are faster when disk changes are contiguous. If changes are spread
out and fragmented across a disk, backup will be slower.
Disk churn: If protected disks that are undergoing incremental backup have a daily churn of more than 200
GB, backup can take a long time (more than eight hours) to complete.
Backup versions: The latest version of Backup (known as the Instant Restore version) uses a more
optimized process than checksum comparison for identifying changes. But if you're using Instant Restore and
have deleted a backup snapshot, the backup switches to checksum comparison. In this case, the backup
operation will exceed 24 hours (or fail).
Restore performance
These common scenarios can affect the total restore time:
The total restore time depends on the Input/output operations per second (IOPS) and the throughput of the
storage account.
The total restore time can be affected if the target storage account is loaded with other application read and
write operations. To improve restore operation, select a storage account that isn't loaded with other
application data.
Best practices
When you're configuring VM backups, we suggest following these practices:
Modify the default schedule times that are set in a policy. For example, if the default time in the policy is
12:00 AM, increment the timing by several minutes so that resources are optimally used.
If you're restoring VMs from a single vault, we highly recommend that you use different general-purpose v2
storage accounts to ensure that the target storage account doesn't get throttled. That is, each VM must
have a different storage account. For example, if 10 VMs are restored, use 10 different storage accounts.
For backup of VMs that are using premium storage with Instant Restore, we recommend allocating 50% free
space of the total allocated storage space, which is required only for the first backup. The 50% free space
isn't a requirement for backups after the first backup is complete.
The limit on the number of disks per storage account is relative to how heavily the disks are being accessed
by applications that are running on an infrastructure as a service (IaaS) VM. As a general practice, if 5 to 10
disks or more are present on a single storage account, balance the load by moving some disks to separate
storage accounts.
To restore VMs with managed disks using PowerShell, provide the additional parameter
TargetResourceGroupName to specify the resource group to which managed disks will be restored. Learn
more here.
Backup costs
Azure VMs backed up with Azure Backup are subject to Azure Backup pricing.
Billing doesn't start until the first successful backup finishes. At this point, the billing for both storage and
protected VMs begins. Billing continues as long as any backup data for the VM is stored in a vault. If you stop
protection for a VM, but backup data for the VM exists in a vault, billing continues.
Billing for a specified VM stops only if the protection is stopped and all backup data is deleted. When protection
stops and there are no active backup jobs, the size of the last successful VM backup becomes the protected
instance size used for the monthly bill.
The protected-instance size calculation is based on the actual size of the VM. The VM's size is the sum of all the
data in the VM, excluding the temporary storage. Pricing is based on the actual data that's stored on the data
disks, not on the maximum supported size for each data disk that's attached to the VM.
Similarly, the backup storage bill is based on the amount of data that's stored in Azure Backup, which is the sum
of the actual data in each recovery point.
For example, take an A2-Standard-sized VM that has two additional data disks with a maximum size of 32 TB
each. The following table shows the actual data stored on each of these disks:
| Disk | Max size | Actual data stored |
| --- | --- | --- |
| OS disk | 32 TB | 17 GB |
| Data disk 1 | 32 TB | 30 GB |
| Data disk 2 | 32 TB | 0 GB |
The actual size of the VM in this case is 17 GB + 30 GB + 0 GB = 47 GB. This protected-instance size (47 GB)
becomes the basis for the monthly bill. As the amount of data in the VM grows, the protected-instance size used
for billing changes to match.
Next steps
Prepare for Azure VM backup.
Enable diagnostics logging for apps in Azure App
Service
2/14/2021 • 8 minutes to read • Edit Online
Overview
Azure provides built-in diagnostics to assist with debugging an App Service app. In this article, you learn how to
enable diagnostic logging and add instrumentation to your application, as well as how to access the information
logged by Azure.
This article uses the Azure portal and Azure CLI to work with diagnostic logs. For information on working with
diagnostic logs using Visual Studio, see Troubleshooting Azure in Visual Studio.
NOTE
In addition to the logging instructions in this article, there's new, integrated logging capability with Azure Monitoring.
You'll find more on this capability in the Send logs to Azure Monitor (preview) section.
| Type | Platform | Location | Description |
| --- | --- | --- | --- |
| Application logging | Windows, Linux | App Service file system and/or Azure Storage blobs | Logs messages generated by your application code. The messages can be generated by the web framework you choose, or from your application code directly using the standard logging pattern of your language. Each message is assigned one of the following categories: Critical, Error, Warning, Info, Debug, and Trace. You can select how verbose you want the logging to be by setting the severity level when you enable application logging. |
| Web server logging | Windows | App Service file system or Azure Storage blobs | Raw HTTP request data in the W3C extended log file format. Each log message includes data such as the HTTP method, resource URI, client IP, client port, user agent, response code, and so on. |
| Detailed Error Messages | Windows | App Service file system | Copies of the .htm error pages that would have been sent to the client browser. For security reasons, detailed error pages shouldn't be sent to clients in production, but App Service can save the error page each time an application error occurs that has HTTP code 400 or greater. The page may contain information that can help determine why the server returns the error code. |
| Failed request tracing | Windows | App Service file system | Detailed tracing information on failed requests, including a trace of the IIS components used to process the request and the time taken in each component. It's useful if you want to improve site performance or isolate a specific HTTP error. One folder is generated for each failed request, which contains the XML log file, and the XSL stylesheet to view the log file with. |
| Deployment logging | Windows, Linux | App Service file system | Logs for when you publish content to an app. Deployment logging happens automatically and there are no configurable settings for deployment logging. It helps you determine why a deployment failed. For example, if you use a custom deployment script, you might use deployment logging to determine why the script is failing. |
NOTE
App Service provides a dedicated, interactive diagnostics tool to help you troubleshoot your application. For more
information, see Azure App Service diagnostics overview.
In addition, you can use other Azure services to improve the logging and monitoring capabilities of your app, such as
Azure Monitor.
To enable application logging for Windows apps in the Azure portal, navigate to your app and select App
Ser vice logs .
Select On for either Application Logging (Filesystem) or Application Logging (Blob) , or both.
The Filesystem option is for temporary debugging purposes, and turns itself off in 12 hours. The Blob option
is for long-term logging, and needs a blob storage container to write logs to. The Blob option also includes
additional information in the log messages, such as the ID of the origin VM instance of the log message (
InstanceId ), thread ID ( Tid ), and a more granular timestamp ( EventTickCount ).
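These settings can also be configured from the command line. A minimal sketch using the Azure CLI, with illustrative names:

# Enable filesystem application logging at verbose level
az webapp log config --name <app-name> --resource-group myResourceGroup --application-logging filesystem --level verbose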
NOTE
Currently only .NET application logs can be written to the blob storage. Java, PHP, Node.js, Python application logs can
only be stored on the App Service file system (without code modifications to write logs to external storage).
Also, if you regenerate your storage account's access keys, you must reset the respective logging configuration to use the
updated access keys. To do this:
1. In the Configure tab, set the respective logging feature to Off . Save your setting.
2. Enable logging to the storage account blob again. Save your setting.
Select the Level, or the level of details to log. The following table shows the log categories included in each
level:

| Level | Included categories |
| --- | --- |
| Disabled | None |
| Error | Error, Critical |
| Warning | Warning, Error, Critical |
| Information | Info, Warning, Error, Critical |
| Verbose | Trace, Debug, Info, Warning, Error, Critical (all categories) |
NOTE
If you regenerate your storage account's access keys, you must reset the respective logging configuration to use the
updated keys. To do this:
1. In the Configure tab, set the respective logging feature to Off . Save your setting.
2. Enable logging to the storage account blob again. Save your setting.
Stream logs
Before you stream logs in real time, enable the log type that you want. Any information written to files ending in
.txt, .log, or .htm that are stored in the /LogFiles directory (d:/home/logfiles) is streamed by App Service.
NOTE
Some types of logging buffer write to the log file, which can result in out of order events in the stream. For example, an
application log entry that occurs when a user visits a page may be displayed in the stream before the corresponding HTTP
log entry for the page request.
In Azure portal
To stream logs in the Azure portal, navigate to your app and select Log stream .
In Cloud Shell
To stream logs live in Cloud Shell, use the following command:
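# Sketch: replace <app-name> and myResourceGroup with your own values
az webapp log tail --name <app-name> --resource-group myResourceGroup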
To filter specific events, such as errors, use the --Filter parameter. For example:
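# Sketch: stream only error events
az webapp log tail --name <app-name> --resource-group myResourceGroup --filter Error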
To filter specific log types, such as HTTP, use the --Path parameter. For example:
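# Sketch: stream only HTTP logs
az webapp log tail --name <app-name> --resource-group myResourceGroup --path http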
In local terminal
To stream logs in the local console, install the Azure CLI and sign in to your account. Once signed in, follow the instructions for Cloud Shell.
You can also download the log files as a ZIP file. For Linux/container apps, the ZIP file contains console output logs for both the docker host and the docker container. For a scaled-out app, the ZIP file contains one set of logs for each instance. In the App Service file system, these log files are the contents of the /home/LogFiles directory.
For Windows apps, the ZIP file contains the contents of the D:\Home\LogFiles directory in the App Service file
system. It has the following structure:
Failed Request Traces (/LogFiles/W3SVC#########/): Contains XML files, and an XSL file. You can view the formatted XML files in the browser. Another way to view the failed request traces is to navigate to your app page in the portal. From the left menu, select Diagnose and solve problems, then search for Failed Request Tracing Logs, then click the icon to browse and view the trace you want.

Detailed Error Logs (/LogFiles/DetailedErrors/): Contains HTM error files. You can view the HTM files in the browser.

Web Server Logs (/LogFiles/http/RawLogs/): Contains text files formatted using the W3C extended log file format. This information can be read using a text editor or a utility like Log Parser. App Service doesn't support the s-computername, s-ip, or cs-version fields.

Deployment logs (/LogFiles/Git/ and /deployments/): Contain logs generated by the internal deployment processes, as well as logs for Git deployments.
AppServiceAppLogs (Application logs): ASP.NET on Windows and Windows containers; Java SE & Tomcat Blessed Images 1 on Linux and Linux containers.
1 For Java SE apps, add "$WEBSITE_AZMON_PREVIEW_ENABLED" to the app settings and set it to 1 or to true.
Next steps
Query logs with Azure Monitor
How to Monitor Azure App Service
Troubleshooting Azure App Service in Visual Studio
Analyze app Logs in HDInsight
Tutorial: Monitor a Windows virtual machine in
Azure
2/14/2021 • 4 minutes to read • Edit Online
Azure monitoring uses agents to collect boot and performance data from Azure VMs, store this data in Azure storage, and make it accessible through the Azure portal, the Azure PowerShell module, and the Azure CLI. Advanced monitoring is delivered with Azure Monitor for VMs, which collects performance metrics, discovers application components installed on the VM, and includes performance charts and a dependency map.
In this tutorial, you learn how to:
Enable boot diagnostics on a VM
View boot diagnostics
View VM host metrics
Enable Azure Monitor for VMs
View VM performance metrics
Create an alert
First, set an administrator username and password for the VM with Get-Credential:
$cred = Get-Credential
Now create the VM with New-AzVM. The following example creates a VM named myVM in the East US location. If they do not already exist, the resource group myResourceGroupMonitor and supporting network resources are created:
New-AzVm `
-ResourceGroupName "myResourceGroupMonitor" `
-Name "myVM" `
-Location "East US" `
-Credential $cred
NOTE
To create a new Log Analytics workspace to store the monitoring data from the VM, see Create a Log Analytics
workspace. The workspace must belong to one of the supported regions.
After you've enabled monitoring, you might need to wait several minutes before you can view the performance
metrics for the VM.
Create alerts
You can create alerts based on specific performance metrics. Alerts can be used to notify you when average CPU
usage exceeds a certain threshold or available free disk space drops below a certain amount, for example. Alerts
are displayed in the Azure portal or can be sent via email. You can also trigger Azure Automation runbooks or
Azure Logic Apps in response to alerts being generated.
The following example creates an alert for average CPU usage; a rough Azure CLI equivalent is sketched after the steps.
1. In the Azure portal, click Resource Groups, select myResourceGroupMonitor, and then select myVM in the resource list.
2. Click Alert rules on the VM blade, then click Add metric alert across the top of the alerts blade.
3. Provide a Name for your alert, such as myAlertRule.
4. To trigger an alert when CPU percentage exceeds 1.0 for five minutes, leave all the other defaults selected.
5. Optionally, check the box for Email owners, contributors, and readers to send email notification. The
default action is to present a notification in the portal.
6. Click the OK button.
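If you prefer the command line, a rough equivalent using the newer metric alerts can be created with the Azure CLI (a sketch, assuming the VM and resource group names used earlier in this tutorial):

# Look up the VM's resource ID, then create a metric alert on it
vmId=$(az vm show --resource-group myResourceGroupMonitor --name myVM --query id --output tsv)
az monitor metrics alert create \
    --name myAlertRule \
    --resource-group myResourceGroupMonitor \
    --scopes $vmId \
    --condition "avg Percentage CPU > 1.0" \
    --window-size 5m \
    --evaluation-frequency 1m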
Next steps
In this tutorial, you configured and viewed performance of your VM. You learned how to:
Create a resource group and VM
Enable boot diagnostics on the VM
View boot diagnostics
View host metrics
Enable Azure Monitor for VMs
View VM metrics
Create an alert
Advance to the next tutorial to learn about Azure Security Center.
Manage VM security
Monitoring and diagnostics for Azure Service Fabric
11/2/2020 • 8 minutes to read • Edit Online
This article provides an overview of monitoring and diagnostics for Azure Service Fabric. Monitoring and
diagnostics are critical to developing, testing, and deploying workloads in any cloud environment. For example,
you can track how your applications are used, the actions taken by the Service Fabric platform, your resource
utilization with performance counters, and the overall health of your cluster. You can use this information to
diagnose and correct issues, and prevent them from occurring in the future. The next few sections will briefly
explain each area of Service Fabric monitoring to consider for production workloads.
NOTE
This article was recently updated to use the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a
Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the
terminology to better reflect the role of logs in Azure Monitor. See Azure Monitor terminology changes for details.
Application monitoring
Application monitoring tracks how features and components of your application are being used. You want to
monitor your applications to make sure issues that impact users are caught. The responsibility of application
monitoring is on the users developing an application and its services since it is unique to the business logic of
your application. Monitoring your applications can be useful in the following scenarios:
How much traffic is my application experiencing? - Do you need to scale your services to meet user demands
or address a potential bottleneck in your application?
Are my service-to-service calls successful and tracked?
What actions are taken by the users of my application? - Collecting telemetry can guide future feature
development and better diagnostics for application errors
Is my application throwing unhandled exceptions?
What is happening within the services running inside my containers?
The great thing about application monitoring is that developers can use whatever tools and frameworks they'd like, since it lives within the context of your application! You can learn more about the Azure solution for application monitoring with Azure Monitor - Application Insights in Event analysis with Application Insights. We also have a tutorial on how to set this up for .NET applications. The tutorial covers how to install the right tools, an example of writing custom telemetry in your application, and viewing the application diagnostics and telemetry in the Azure portal.
The diagnostics provided are in the form of a comprehensive set of events out of the box. These Service Fabric events illustrate actions done by the platform on different entities such as Nodes, Applications, Services, and Partitions. For example, if a node were to go down, the platform would emit a NodeDown event and you could be notified immediately by your monitoring tool of choice. Other common examples include ApplicationUpgradeRollbackStarted or PartitionReconfigured during a failover. The same events are available on both Windows and Linux clusters.
The events are sent through standard channels on both Windows and Linux and can be read by any monitoring tool that supports them. The Azure Monitor solution is Azure Monitor logs. Feel free to read more about our Azure Monitor logs integration, which includes a custom operational dashboard for your cluster and some sample queries from which you can create alerts. More cluster monitoring concepts are available at Platform level event and log generation.
Health monitoring
The Service Fabric platform includes a health model, which provides extensible health reporting for the status of entities in a cluster. Each node, application, service, partition, replica, or instance has a continuously updatable health status. The health status can be "OK", "Warning", or "Error". Think of Service Fabric events as verbs done by the cluster to various entities and health as an adjective for each entity. Each time the health of a particular entity transitions, an event is also emitted. This way you can set up queries and alerts for health events in your monitoring tool of choice, just like any other event.
Additionally, we even let users override health for entities. If your application is going through an upgrade and you have validation tests failing, you can write to Service Fabric Health using the Health API to indicate that your application is no longer healthy, and Service Fabric will automatically roll back the upgrade! For more on the health model, check out the introduction to Service Fabric health monitoring.
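For example, a deployment validation step might report health with the Service Fabric PowerShell module (a minimal sketch; the cluster endpoint and application name are placeholders):

# Connect to the cluster, then report an Error health state for the application
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.eastus.cloudapp.azure.com:19000"
Send-ServiceFabricApplicationHealthReport `
    -ApplicationName fabric:/MyApp `
    -SourceId "UpgradeValidation" `
    -HealthProperty "ValidationTests" `
    -HealthState Error `
    -Description "Validation tests failed during the upgrade."

The upgrade health checks then see the Error state and trigger the automatic rollback described above.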
Watchdogs
Generally, a watchdog is a separate service that can watch health and load across services, ping endpoints, and
report health for anything in the cluster. This can help prevent errors that would not be detected based on the
view of a single service. Watchdogs are also a good place to host code that performs remedial actions without
user interaction (for example, cleaning up log files in storage at certain time intervals). You can find a sample
watchdog service implementation here.
Recommended Setup
Now that we've gone over each area of monitoring and example scenarios, here is a summary of the Azure
monitoring tools and set up needed to monitor all areas above.
Application monitoring with Application Insights
Cluster monitoring with Diagnostics Agent and Azure Monitor logs
Infrastructure monitoring with Azure Monitor logs
You can also use and modify the sample ARM template located here to automate deployment of all necessary
resources and agents.
Next steps
For getting started with instrumenting your applications, see Application level event and log generation.
Go through the steps to set up Application Insights for your application with Monitor and diagnose an
ASP.NET Core application on Service Fabric.
Learn more about monitoring the platform and the events Service Fabric provides for you at Platform level
event and log generation.
Configure the Azure Monitor logs integration with Service Fabric at Set up Azure Monitor logs for a cluster
Learn how to set up Azure Monitor logs for monitoring containers - Monitoring and Diagnostics for
Windows Containers in Azure Service Fabric.
See example diagnostics problems and solutions with Service Fabric in diagnosing common scenarios
Check out other diagnostics products that integrate with Service Fabric in Service Fabric diagnostic partners
Learn about general monitoring recommendations for Azure resources - Best Practices - Monitoring and
diagnostics.
Use cost alerts to monitor usage and spending
2/14/2021 • 3 minutes to read • Edit Online
This article helps you understand and use Cost Management alerts to monitor your Azure usage and spending. Cost alerts are generated automatically when Azure resources are consumed. Alerts show all active cost management and billing alerts together in one place. When your consumption reaches a given threshold, alerts are generated by Cost Management. There are three types of cost alerts: budget alerts, credit alerts, and department spending quota alerts.
Budget alerts
Budget alerts notify you when spending, based on usage or cost, reaches or exceeds the amount defined in the
alert condition of the budget. Cost Management budgets are created using the Azure portal or the Azure
Consumption API.
In the Azure portal, budgets are defined by cost. Using the Azure Consumption API, budgets are defined by cost
or by consumption usage. Budget alerts support both cost-based and usage-based budgets. Budget alerts are
generated automatically whenever the budget alert conditions are met. You can view all cost alerts in the Azure
portal. Whenever an alert is generated, it's shown in cost alerts. An alert email is also sent to the people in the
alert recipients list of the budget.
You can use the Budget API to send email alerts in a different language. For more information, see Supported
locales for budget alert emails.
Credit alerts
Credit alerts notify you when your Azure Prepayment (previously called monetary commitment) is consumed.
Azure Prepayment is for organizations with Enterprise Agreements. Credit alerts are generated automatically at
90% and at 100% of your Azure Prepayment credit balance. Whenever an alert is generated, it's reflected in cost
alerts and in the email sent to the account owners.
ALERT TYPE | ENTERPRISE AGREEMENT | MICROSOFT CUSTOMER AGREEMENT | WEB DIRECT/PAY-AS-YOU-GO
Budget | ✔ | ✔ | ✔
Credit | ✔ | ✘ | ✘
Department spending quota | ✔ | ✘ | ✘
The total number of active and dismissed alerts appears on the cost alerts page.
All alerts show the alert type. A budget alert shows the reason why it was generated and the name of the
budget it applies to. Each alert shows the date it was generated, its status, and the scope (subscription or
management group) that the alert applies to.
Possible statuses are active and dismissed. Active status indicates that the alert is still relevant. Dismissed status indicates that someone has marked the alert as no longer relevant.
Select an alert from the list to view its details. Alert details show more information about the alert. Budget alerts
include a link to the budget. If a recommendation is available for a budget alert, then a link to the
recommendation is also shown. Budget, credit, and department spending quota alerts have a link to analyze in
cost analysis where you can explore costs for the alert's scope. The following example shows spending for a
department with alert details.
When you view the details of a dismissed alert, you can reactivate it if manual action is needed. The following
image shows an example.
See also
If you haven't already created a budget or set alert conditions for a budget, complete the Create and manage
budgets tutorial.
Tutorial: Create a virtual machine scale set and
deploy a highly available app on Linux with the
Azure CLI
11/2/2020 • 7 minutes to read • Edit Online
A virtual machine scale set allows you to deploy and manage a set of identical, auto-scaling virtual machines.
You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage
such as CPU, memory demand, or network traffic. In this tutorial, you deploy a virtual machine scale set in
Azure. You learn how to:
Use cloud-init to create an app to scale
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
View connection info for scale set instances
Use data disks in a scale set
This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. To open the Cloud Shell, select Try it from the top of any code block.
If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version
2.0.30 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
Now create a virtual machine scale set with az vmss create. The following example creates a scale set named
myScaleSet, uses the cloud-init file to customize the VM, and generates SSH keys if they do not exist:
az vmss create \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--custom-data cloud-init.txt \
--admin-username azureuser \
--generate-ssh-keys
It takes a few minutes to create and configure all the scale set resources and VMs. There are background tasks
that continue to run after the Azure CLI returns you to the prompt. It may be another couple of minutes before
you can access the app.
Enter the public IP address into a web browser. The app is displayed, including the hostname of the VM that the load balancer distributed traffic to:
To see the scale set in action, you can force-refresh your web browser to see the load balancer distribute traffic
across all the VMs running your app.
Management tasks
Throughout the lifecycle of the scale set, you may need to run one or more management tasks. Additionally, you
may want to create scripts that automate various lifecycle-tasks. The Azure CLI provides a quick way to do those
tasks. Here are a few common tasks.
View VMs in a scale set
To view a list of VMs running in your scale set, use az vmss list-instances as follows:
az vmss list-instances \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--output table
To view the number of VM instances in your scale set, use az vmss show and query on sku.capacity:
az vmss show \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--query [sku.capacity] \
--output table
You can then manually increase or decrease the number of virtual machines in the scale set with az vmss scale.
The following example sets the number of VMs in your scale set to 3:
az vmss scale \
--resource-group myResourceGroupScaleSet \
--name myScaleSet \
--new-capacity 3
To view connection information, such as the IP address and port for each VM instance, use az vmss list-instance-connection-info:
az vmss list-instance-connection-info \
--resource-group myResourceGroupScaleSet \
--name myScaleSet
When instances are removed from a scale set, any attached data disks are also removed.
Add data disks
To add a data disk to instances in your scale set, use az vmss disk attach. The following example adds a 50 GB disk to each instance:
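A sketch of that command, using this tutorial's resource names (recent Azure CLI releases identify the scale set with --vmss-name; older releases used --name):

az vmss disk attach \
    --resource-group myResourceGroupScaleSet \
    --vmss-name myScaleSet \
    --size-gb 50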
Next steps
In this tutorial, you created a virtual machine scale set. You learned how to:
Use cloud-init to create an app to scale
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
View connection info for scale set instances
Use data disks in a scale set
Advance to the next tutorial to learn more about load balancing concepts for virtual machines.
Load balance virtual machines
Tutorial: Create a virtual machine scale set and
deploy a highly available app on Windows with
Azure PowerShell
11/2/2020 • 7 minutes to read • Edit Online
A virtual machine scale set allows you to deploy and manage a set of identical, autoscaling virtual machines. You
can scale the number of VMs in the scale set manually. You can also define rules to autoscale based on resource
usage such as CPU, memory demand, or network traffic. In this tutorial, you deploy a virtual machine scale set in
Azure and learn how to:
Use the Custom Script Extension to define an IIS site to scale
Create a load balancer for your scale set
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
It takes a few minutes to create and configure all the scale set resources and VMs.
# Use Custom Script Extension to install IIS and configure a basic website
# (assumes $publicSettings, containing the script fileUris and commandToExecute,
# was defined in an earlier step)
Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
-Name "customScript" `
-Publisher "Microsoft.Compute" `
-Type "CustomScriptExtension" `
-TypeHandlerVersion 1.8 `
-Setting $publicSettings
# Update the scale set and apply the Custom Script Extension to the VM instances
Update-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myScaleSet" `
-VirtualMachineScaleSet $vmss
# Update the subnet configuration to apply a network security group
# (assumes $nsgFrontend was created in an earlier step)
$vnet = Get-AzVirtualNetwork `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name myVnet
$frontendSubnet = $vnet.Subnets[0]
$frontendSubnetConfig = Set-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet `
-Name mySubnet `
-AddressPrefix $frontendSubnet.AddressPrefix `
-NetworkSecurityGroup $nsgFrontend
To see your scale set in action, first get the public IP address of your load balancer with Get-AzPublicIPAddress:
Get-AzPublicIPAddress `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myPublicIPAddress" | select IpAddress
Enter the public IP address into a web browser. The web app is displayed, including the hostname of the VM that the load balancer distributed traffic to:
To see the scale set in action, you can force-refresh your web browser to see the load balancer distribute traffic
across all the VMs running your app.
Management tasks
Throughout the lifecycle of the scale set, you may need to run one or more management tasks. Additionally, you
may want to create scripts that automate various lifecycle-tasks. Azure PowerShell provides a quick way to do
those tasks. Here are a few common tasks.
View VMs in a scale set
To view a list of VM instances in a scale set, use Get-AzVmssVM as follows:
Get-AzVmssVM `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
The following example output shows two VM instances in the scale set:
To view additional information about a specific VM instance, add the -InstanceId parameter to Get-AzVmssVM.
The following example views information about VM instance 1:
Get-AzVmssVM `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet" `
-InstanceId "1"
You can then manually increase or decrease the number of virtual machines in the scale set with Update-AzVmss. The following example sets the number of VMs in your scale set to 3:
# Get current scale set
$scaleset = Get-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
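The snippet above only fetches the scale set; a likely completion, following the Update-AzVmss pattern used earlier in this tutorial, sets the capacity and applies it:

# Set the capacity and push the change to the scale set
$scaleset.sku.capacity = 3
Update-AzVmss `
    -ResourceGroupName "myResourceGroupScaleSet" `
    -Name "myScaleSet" `
    -VirtualMachineScaleSet $scaleset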
It takes a few minutes to update the specified number of instances in your scale set.
Configure autoscale rules
Rather than manually scaling the number of instances in your scale set, you define autoscale rules. These rules
monitor the instances in your scale set and respond accordingly based on metrics and thresholds you define.
The following example scales out the number of instances by one when the average CPU load is greater than
60% over a 5-minute period. If the average CPU load then drops below 30% over a 5-minute period, the
instances are scaled in by one instance:
# Define your scale set information
$mySubscriptionId = (Get-AzSubscription)[0].Id
$myResourceGroup = "myResourceGroupScaleSet"
$myScaleSet = "myScaleSet"
$myLocation = "East US"
$myScaleSetId = (Get-AzVmss -ResourceGroupName $myResourceGroup -VMScaleSetName $myScaleSet).Id
# Create a scale-up rule to increase the instance count when average CPU usage exceeds 60% over a 5-minute period
$myRuleScaleUp = New-AzAutoscaleRule `
-MetricName "Percentage CPU" `
-MetricResourceId $myScaleSetId `
-Operator GreaterThan `
-MetricStatistic Average `
-Threshold 60 `
-TimeGrain 00:01:00 `
-TimeWindow 00:05:00 `
-ScaleActionCooldown 00:05:00 `
-ScaleActionDirection Increase `
-ScaleActionValue 1
# Create a scale-down rule to decrease the instance count when average CPU usage drops below 30% over a 5-minute period
$myRuleScaleDown = New-AzAutoscaleRule `
-MetricName "Percentage CPU" `
-MetricResourceId $myScaleSetId `
-Operator LessThan `
-MetricStatistic Average `
-Threshold 30 `
-TimeGrain 00:01:00 `
-TimeWindow 00:05:00 `
-ScaleActionCooldown 00:05:00 `
-ScaleActionDirection Decrease `
-ScaleActionValue 1
# Create a scale profile with your scale up and scale down rules
$myScaleProfile = New-AzAutoscaleProfile `
-DefaultCapacity 2 `
-MaximumCapacity 10 `
-MinimumCapacity 2 `
-Rule $myRuleScaleUp,$myRuleScaleDown `
-Name "autoprofile"
For more design information on the use of autoscale, see autoscale best practices.
Next steps
In this tutorial, you created a virtual machine scale set. You learned how to:
Use the Custom Script Extension to define an IIS site to scale
Create a load balancer for your scale set
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
Advance to the next tutorial to learn more about load balancing concepts for virtual machines.
Load balance virtual machines
Azure consumption API overview
2/14/2021 • 8 minutes to read • Edit Online
The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources.
These APIs currently only support Enterprise Enrollments and Web Direct Subscriptions (with a few exceptions).
The APIs are continually updated to support other types of Azure subscriptions.
Azure Consumption APIs provide access to:
Enterprise and Web Direct Customers
Usage Details
Marketplace Charges
Reservation Recommendations
Reservation Details
Reservation Summaries
Enterprise Customers Only
Price sheet
Budgets
Balances
Balances API
Enterprise customers can use the Balances API to get a monthly summary of information on balances, new
purchases, Azure Marketplace service charges, adjustments, and overage charges. You can get this information
for the current billing period or any period in the past. Enterprises can use this data to perform a comparison
with manually calculated summary charges. This API does not provide resource-specific information; instead, it provides an aggregate view of costs.
The API includes:
Azure role-based access control (Azure RBAC) - Configure access policies on the Azure portal, the
Azure CLI or Azure PowerShell cmdlets to specify which users or applications can get access to the
subscription's usage data. Callers must use standard Azure Active Directory tokens for authentication. Add
the caller to either the Billing Reader, Reader, Owner, or Contributor role to get access to the usage data for a
specific Azure subscription.
Enterprise Customers Only - This API is only available to EA customers. Customers must have Enterprise Admin permissions to call this API.
For more information, see the technical specification for the Balances API.
Budgets API
Enterprise customers can use this API to create either cost or usage budgets for resources, resource groups, or billing meters. Once this information has been determined, alerting can be configured to notify when user-defined budget thresholds are exceeded. A request sketch follows the feature list below.
The API includes:
Azure role-based access control (Azure RBAC) - Configure access policies on the Azure portal, the
Azure CLI or Azure PowerShell cmdlets to specify which users or applications can get access to the
subscription's usage data. Callers must use standard Azure Active Directory tokens for authentication. Add
the caller to either the Billing Reader, Reader, Owner, or Contributor role to get access to the usage data for a
specific Azure subscription.
Enterprise Customers Only - This API is only available to EA customers.
Configurable Notifications - Specify user(s) to be notified when the budget is tripped.
Usage or Cost Based Budgets - Create your budget based on either consumption or cost as needed by
your scenario.
Filtering - Filter your budget to a smaller subset of resources using the following configurable filters: resource group, resource name, and meter.
Configurable budget time periods - Specify how often the budget should reset and how long the budget
is valid for.
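As a sketch, a cost budget with a notification at 80% might be created with a PUT request like the following (the subscription ID, dates, and email address are placeholders; see the technical specification below for the authoritative schema):

PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Consumption/budgets/MyBudget?api-version=2019-10-01

{
  "properties": {
    "category": "Cost",
    "amount": 100,
    "timeGrain": "Monthly",
    "timePeriod": {
      "startDate": "2021-03-01T00:00:00Z",
      "endDate": "2021-12-31T00:00:00Z"
    },
    "notifications": {
      "actual_GreaterThan_80_Percent": {
        "enabled": true,
        "operator": "GreaterThan",
        "threshold": 80,
        "contactEmails": [ "finance@contoso.com" ]
      }
    }
  }
}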
For more information, see the technical specification for the Budgets API.
Scenarios
Here are some of the scenarios that are made possible via the consumption APIs:
Invoice Reconciliation - Did Microsoft charge me the right amount? What is my bill and can I calculate it
myself?
Cross Charges - Now that I know how much I'm being charged, who in my org needs to pay?
Cost Optimization - I know how much I've been charged… how can I get more out of the money I am
spending on Azure?
Cost Tracking - I want to see how much I am spending and using Azure over time. What are the trends?
How could I be doing better?
Azure spend during the month - How much is my current month's spend to date? Do I need to make any
adjustments in my spending and/or usage of Azure? When during the month am I consuming Azure the
most?
Set up alerts - I would like to set up resource-based consumption or monetary-based alerting based on a budget.
Next Steps
For information about using REST APIs to retrieve prices for all Azure services, see Azure Retail Prices overview.
Azure subscription and service limits, quotas, and
constraints
2/14/2021 • 99 minutes to read • Edit Online
This document lists some of the most common Microsoft Azure limits, which are also sometimes called quotas.
To learn more about Azure pricing, see Azure pricing overview. There, you can estimate your costs by using the
pricing calculator. You also can go to the pricing details page for a particular service, for example, Windows VMs.
For tips to help manage your costs, see Prevent unexpected costs with Azure billing and cost management.
Managing limits
NOTE
Some services have adjustable limits.
When a service doesn't have adjustable limits, the following tables use the header Limit . In those cases, the default and
the maximum limits are the same.
When the limit can be adjusted, the tables include Default limit and Maximum limit headers. The limit can be raised
above the default limit but not above the maximum limit.
If you want to raise the limit or quota above the default limit, open an online customer support request at no charge.
The terms soft limit and hard limit often are used informally to describe the current, adjustable limit (soft limit) and the
maximum limit (hard limit). If a limit isn't adjustable, there won't be a soft limit, only a hard limit.
Free Trial subscriptions aren't eligible for limit or quota increases. If you have a Free Trial subscription, you can
upgrade to a Pay-As-You-Go subscription. For more information, see Upgrade your Azure Free Trial subscription
to a Pay-As-You-Go subscription and the Free Trial subscription FAQ.
Some limits are managed at a regional level.
Let's use vCPU quotas as an example. To request a quota increase with support for vCPUs, you must decide how
many vCPUs you want to use in which regions. You then make a specific request for Azure resource group vCPU
quotas for the amounts and regions that you want. If you need to use 30 vCPUs in West Europe to run your
application there, you specifically request 30 vCPUs in West Europe. Your vCPU quota isn't increased in any
other region--only West Europe has the 30-vCPU quota.
As a result, decide what your Azure resource group quotas must be for your workload in any one region. Then
request that amount in each region into which you want to deploy. For help in how to determine your current
quotas for specific regions, see Resolve errors for resource quotas.
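For example, you can check your current vCPU usage against quota in a specific region with the Azure CLI:

az vm list-usage --location "West Europe" --output table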
General limits
For limits on resource names, see Naming rules and restrictions for Azure resources.
For information about Resource Manager API read and write limits, see Throttling Resource Manager requests.
Management group limits
The following limits apply to management groups.
1 You can apply up to 50 tags directly to a subscription. However, the subscription can contain an unlimited number of tags that are applied to resource groups and resources within the subscription. The number of tags per resource or resource group is limited to 50. Resource Manager returns a list of unique tag names and values in the subscription only when the number of tags is 80,000 or less. You can still find a resource by tag when the number exceeds 80,000.
2 If you reach the limit of 800 deployments, delete deployments that are no longer needed from the history. To delete subscription-level deployments, use Remove-AzDeployment or az deployment sub delete.
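For example, a sketch that removes one subscription-level deployment from the history (the deployment name is a placeholder):

az deployment sub delete --name myDeployment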
Resource group limits
RESOURCE | LIMIT
Resources per resource group | Resources aren't limited by resource group. Instead, they're limited by resource type in a resource group. See next row.
Resources per resource group, per resource type | 800 - Some resource types can exceed the 800 limit. See Resources not limited to 800 instances per resource group.

1 Deployments are automatically deleted from the history as you near the limit. Deleting an entry from the deployment history doesn't affect the deployed resources. For more information, see Automatic deletions from deployment history.
Template limits
VALUE | LIMIT
Parameters | 256
Variables | 256
Outputs | 64
Template size | 4 MB
You can exceed some template limits by using a nested template. For more information, see Use linked
templates when you deploy Azure resources. To reduce the number of parameters, variables, or outputs, you can
combine several values into an object. For more information, see Objects as parameters.
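For example, a minimal sketch of an object parameter that bundles several related values (the property names are illustrative):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageSettings": {
      "type": "object",
      "defaultValue": {
        "skuName": "Standard_LRS",
        "location": "westus"
      }
    }
  },
  "resources": []
}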
CATEGORY | LIMIT
Domains | You can add no more than 900 managed domain names. If you set up all of your domains for federation with on-premises Active Directory, you can add no more than 450 domain names in each tenant.
1 Scaling limits depend on the pricing tier. For details on the pricing tiers and their scaling limits, see API Management pricing.
2 Per-unit cache size depends on the pricing tier. To see the pricing tiers and their scaling limits, see API Management pricing.
3 Connections are pooled and reused unless explicitly closed by the back end.
4 This limit is per unit of the Basic, Standard, and Premium tiers. The Developer tier is limited to 1,024.
5 This limit is limited to 16 KiB.
6 Multiple custom domains are supported in the Developer and Premium tiers only.
7 CA certificates are not supported in the Consumption tier.
8 This limit applies to the Consumption tier only. There are no limits in these categories for other tiers.
9 Applies to the Consumption tier only. Includes an up to 2048-byte-long query string.
10 To raise this limit, please contact support.
11 Self-hosted gateways are supported in the Developer and Premium tiers only. The limit applies to the number of self-hosted gateway resources. To raise this limit, please contact support. Note that the number of nodes (or replicas) associated with a self-hosted gateway resource is unlimited in the Premium tier and capped at a single node in the Developer tier.
RESOURCE | FREE | SHARED | BASIC | STANDARD | PREMIUM (V1-V3) | ISOLATED
App Service plan | 10 per region | 10 per resource group | 100 per resource group | 100 per resource group | 100 per resource group | 100 per resource group
Concurrent debugger connections per application | 1 | 1 | 1 | 5 | 5 | 5
Custom domain SSL support | Not supported; wildcard certificate for *.azurewebsites.net available by default | Not supported; wildcard certificate for *.azurewebsites.net available by default | Unlimited SNI SSL connections | Unlimited SNI SSL and 1 IP SSL connections included | Unlimited SNI SSL and 1 IP SSL connections included | Unlimited SNI SSL and 1 IP SSL connections included
Hybrid connections | | | 5 per plan | 25 per plan | 200 per app | 200 per app
Virtual Network Integration | | | | X | X | X
Integrated load balancer | | X | X | X | X | X10
Access restrictions | 512 rules per app | 512 rules per app | 512 rules per app | 512 rules per app | 512 rules per app | 512 rules per app
Always On | | | X | X | X | X
Autoscale | | | | X | X | X
WebJobs11 | X | X | X | X | X | X
Endpoint monitoring | | | X | X | X | X
Staging slots per app | | | | 5 | 20 | 20
Testing in Production | | | | X | X | X
Diagnostic Logs | X | X | X | X | X | X
Kudu | X | X | X | X | X | X
Authentication and Authorization | X | X | X | X | X | X
App Service Managed Certificates (Public Preview)12 | | | X | X | X | X
1 Apps and storage quotas are per App Service plan unless noted otherwise.
2 The actual number of apps that you can host on these machines depends on the activity of the apps, the size of the machine instances, and the corresponding resource utilization.
3 Dedicated instances can be of different sizes. For more information, see App Service pricing.
4 More are allowed upon request.
5 The storage limit is the total content size across all apps in the same App Service plan. The total content size of all apps across all App Service plans in a single resource group and region cannot exceed 500 GB.
6 These resources are constrained by physical resources on the dedicated instances (the instance size and the number of instances).
7 If you scale an app in the Basic tier to two instances, you have 350 concurrent connections for each of the two instances. For Standard tier and above, there are no theoretical limits to web sockets, but other factors can limit the number of web sockets. For example, maximum concurrent requests allowed (defined by maxConcurrentRequestsPerCpu) are: 7,500 per small VM, 15,000 per medium VM (7,500 x 2 cores), and 75,000 per large VM (18,750 x 4 cores).
8 The maximum IP connections are per instance and depend on the instance size: 1,920 per B1/S1/P1V3 instance, 3,968 per B2/S2/P2V3 instance, and 8,064 per B3/S3/P3V3 instance.
9 The App Service Certificate quota limit per subscription can be increased via a support request to a maximum limit of 200.
10 App Service Isolated SKUs can be internally load balanced (ILB) with Azure Load Balancer, so there's no public connectivity from the internet. As a result, some features of an ILB Isolated App Service must be used from machines that have direct access to the ILB network endpoint.
11 Run custom executables and/or scripts on demand, on a schedule, or continuously as a background task within your App Service instance. Always On is required for continuous WebJobs execution. There's no predefined limit on the number of WebJobs that can run in an App Service instance. There are practical limits that depend on what the application code is trying to do.
12 Naked domains are not supported. Only standard certificates are issued (wildcard certificates are not available).
RESOURCE | LIMIT | NOTES
Maximum number of new jobs that can be submitted every 30 seconds per Azure Automation account (nonscheduled jobs) | 100 | When this limit is reached, the subsequent requests to create a job fail. The client receives an error response.
Maximum storage size of job metadata for a 30-day rolling period | 10 GB (approximately 4 million jobs) | When this limit is reached, the subsequent requests to create a job fail.
Maximum job stream limit | 1 MiB | A single stream cannot be larger than 1 MiB.
Job run time, Free tier | 500 minutes per subscription per calendar month |
1 A sandbox is a shared environment that can be used by multiple jobs. Jobs that use the same sandbox are bound by the resource limitations of the sandbox.
RESOURCE | LIMIT
File | 500
Registry | 250
Services | 250
Daemon | 250
Update Management
The following table shows the limits for Update Management.
RESOURCE | LIMIT
Configuration store requests - Standard tier | Throttling starts at 20,000 requests per hour
Databases | 64
Azure Cache for Redis limits and sizes are different for each pricing tier. To see the pricing tiers and their
associated sizes, see Azure Cache for Redis pricing.
For more information on Azure Cache for Redis configuration limits, see Default Redis server configuration.
Because configuration and management of Azure Cache for Redis instances is done by Microsoft, not all Redis
commands are supported in Azure Cache for Redis. For more information, see Redis commands not supported
in Azure Cache for Redis.
1 Each Azure Cloud Service with web or worker roles can have two deployments, one for production and one for staging. This limit refers to the number of distinct roles, that is, configuration. This limit doesn't refer to the number of instances per role, that is, scaling.
Azure Cognitive Search limits
Pricing tiers determine the capacity and limits of your search service. Tiers include:
Free multi-tenant service, shared with other Azure subscribers, is intended for evaluation and small
development projects.
Basic provides dedicated computing resources for production workloads at a smaller scale, with up to three
replicas for highly available query workloads.
Standard , which includes S1, S2, S3, and S3 High Density, is for larger production workloads. Multiple levels
exist within the Standard tier so that you can choose a resource configuration that best matches your
workload profile.
Limits per subscription
You can create multiple services within a subscription. Each one can be provisioned at a specific tier. You're
limited only by the number of services allowed at each tier. For example, you could create up to 12 services at
the Basic tier and another 12 services at the S1 tier within the same subscription. For more information about
tiers, see Choose an SKU or tier for Azure Cognitive Search.
Maximum service limits can be raised upon request. If you need more services within the same subscription,
contact Azure Support.
RESOURCE | FREE 1 | BASIC | S1 | S2 | S3 | S3 HD | L1 | L2
Maximum services | 1 | 16 | 16 | 8 | 6 | 6 | 6 | 6
Maximum scale in search units (SU) 2 | N/A | 3 SU | 36 SU | 36 SU | 36 SU | 36 SU | 36 SU | 36 SU
1 Free is based on shared, not dedicated, resources. Scale-up is not supported on shared resources.
RESOURCE | FREE | BASIC 1 | S1 | S2 | S3 | S3 HD | L1 | L2
Partitions per service | N/A | 1 | 12 | 12 | 12 | 3 | 12 | 12
Replicas | N/A | 3 | 12 | 12 | 12 | 12 | 12 | 12
1 Basic has one fixed partition. Additional search units can be used to add replicas for larger query volumes.
2 Service level agreements are in effect for billable services on dedicated resources. Free services and preview features have no SLA. For billable services, SLAs take effect when you provision sufficient redundancy for your service. Two or more replicas are required for query (read) SLAs. Three or more replicas are required for query and indexing (read-write) SLAs. The number of partitions isn't an SLA consideration.
To learn more about limits on a more granular level, such as document size, queries per second, keys, requests,
and responses, see Service limits in Azure Cognitive Search.
TYPE | LIMIT | EXAMPLE
A mixture of Cognitive Services resources | Maximum of 200 total Cognitive Services resources. | 100 Computer Vision resources in West US 2, 50 Speech Service resources in West US, and 50 Text Analytics resources in East US.
A single type of Cognitive Services resource | Maximum of 100 resources per region, with a maximum of 200 total Cognitive Services resources. | 100 Computer Vision resources in West US 2, and 100 Computer Vision resources in East US.
RESOURCE | CONSUMPTION PLAN | PREMIUM PLAN | DEDICATED PLAN | ASE | KUBERNETES
App Service plans | 100 per region | 100 per resource group | 100 per resource group | - | -
Custom domain SSL support | unbounded SNI SSL connection included | unbounded SNI SSL and 1 IP SSL connections included | unbounded SNI SSL and 1 IP SSL connections included | unbounded SNI SSL and 1 IP SSL connections included | n/a
1 By default, the timeout for the Functions 1.x runtime in an App Service plan is unbounded.
2 Requires the App Service plan be set to Always On. Pay at standard rates.
3 These limits are set in the host.
4 The actual number of function apps that you can host depends on the activity of the apps, the size of the
machine instances, and the corresponding resource utilization.
5 The storage limit is the total content size in temporary storage across all apps in the same App Service plan.
6 For apps in a Premium plan or an App Service plan, you can map a custom domain using either a CNAME or an A record.
7 Guaranteed for up to 60 minutes.
8 Workers are roles that host customer apps. Workers are available in three fixed sizes: One vCPU/3.5 GB RAM; Two vCPU/7 GB RAM; Four vCPU/14 GB RAM.
Maximum nodes per cluster with Virtual Machine Scale Sets and Standard Load Balancer SKU | 1,000 (100 nodes per node pool)
Maximum pods per node: Advanced networking with Azure Container Networking Interface | Azure CLI deployment: 30 1; Azure Resource Manager template: 30 1; Portal deployment: 30
1 When you deploy an Azure Kubernetes Service (AKS) cluster with the Azure CLI or a Resource Manager template, this value is configurable up to 250 pods per node. You can't configure maximum pods per node after you've already deployed an AKS cluster, or if you deploy a cluster by using the Azure portal.
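For example, a sketch that sets the value at cluster creation time (the resource names are placeholders):

az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --max-pods 250 \
    --generate-ssh-keys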
The following table shows the cumulative data size limit for Azure Maps accounts in an Azure subscription. The
Azure Maps Data service is available only at the S1 pricing tier.
For more information on the Azure Maps pricing tiers, see Azure Maps pricing.
Metric alerts (classic) | 100 active alert rules per subscription. | Call support
Activity log alerts | 100 active alert rules per subscription (cannot be increased). | Same as default
Log alerts | 512 active alert rules per subscription. 200 active alert rules per resource. | Call support
Alert rules and Action rules description length | Log search alerts: 4096 characters. All other: 2048 characters | Same as default
Action groups
RESOURCE | DEFAULT LIMIT | MAXIMUM LIMIT
Azure app push | 10 Azure app actions per action group. | Same as default
Autoscale
RESOURCE | DEFAULT LIMIT | MAXIMUM LIMIT
Query language | Azure Monitor uses the same Kusto query language as Azure Data Explorer. See Azure Monitor log query language differences for KQL language elements not supported in Azure Monitor.
Azure regions | Log queries can experience excessive overhead when data spans Log Analytics workspaces in multiple Azure regions. See Query limits for details.
Cross resource queries | Maximum number of Application Insights resources and Log Analytics workspaces in a single query is limited to 100. Cross-resource query is not supported in View Designer. Cross-resource query in log alerts is supported in the new scheduledQueryRules API. See Cross-resource query limits for details.
Time in concurrency queue | 3 minutes | If a query sits in the queue for more than 3 minutes without being started, it will be terminated with an HTTP error response with code 429.
Total queries in concurrency queue | 200 | Once the number of queries in the queue reaches 200, any additional queries will be rejected with an HTTP error code 429. This number is in addition to the 5 queries that can be running simultaneously.
Query rate | 200 queries per 30 seconds | This is the overall rate at which queries can be submitted by a single user to all workspaces. This limit applies to programmatic queries or queries initiated by visualization parts such as Azure dashboards and the Log Analytics workspace summary page.
TIER | LIMIT PER DAY | DATA RETENTION | COMMENT
Current Per GB pricing tier (introduced April 2018) | No limit | 30 - 730 days | Data retention beyond 31 days is available for additional charges. Learn more about Azure Monitor pricing.
Legacy Per Node (OMS) (introduced April 2016) | No limit | 30 to 730 days | Data retention beyond 31 days is available for additional charges. Learn more about Azure Monitor pricing.
CATEGORY | LIMIT | COMMENTS
Maximum records returned by a log query | 10,000 | Reduce results using query scope, time range, and filters in the query.
Maximum size for a single post | 30 MB | Split larger volumes into multiple posts.
Maximum size for field values | 32 KB | Fields longer than 32 KB are truncated.
Search API
CATEGORY | LIMIT | COMMENTS
Maximum request rate | 200 requests per 30 seconds per Azure AD user or client IP address | See Rate limits for details.
Data export | Not currently available | Use Azure Function or Logic App to aggregate and export data.
Data ingestion volume rate
Azure Monitor is a high-scale data service that serves thousands of customers sending terabytes of data each month at a growing pace. The volume rate limit intends to isolate Azure Monitor customers from sudden ingestion spikes in a multitenancy environment. A default ingestion volume rate threshold of 500 MB (compressed) is defined in workspaces; this is translated to approximately 6 GB/min uncompressed -- the actual size can vary between data types depending on the log length and its compression ratio. The volume rate limit applies to data ingested from Azure resources via diagnostic settings. When the volume rate limit is reached, a retry mechanism attempts to ingest the data 4 times in a period of 30 minutes and drops it if the operation fails. It doesn't apply to data ingested from agents or the Data Collector API.
When data sent to your workspace is at a volume rate higher than 80% of the threshold configured in your workspace, an event is sent to the Operation table in your workspace every 6 hours while the threshold continues to be exceeded. When the ingested volume rate is higher than the threshold, some data is dropped, and an event is sent to the Operation table in your workspace every 6 hours while the threshold continues to be exceeded. If your ingestion volume rate continues to exceed the threshold, or you expect it to do so soon, you can request an increase by opening a support request.
See Monitor health of Log Analytics workspace in Azure Monitor to create alert rules to be proactively notified
when you reach any ingestion limits.
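For example, a sketch of a log query that surfaces recent Operation table entries mentioning the volume rate (the filter text is an assumption; inspect your own Operation table for the exact wording):

Operation
| where TimeGenerated > ago(7d)
| where Detail has "volume rate"
| project TimeGenerated, OperationCategory, Detail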
NOTE
Depending on how long you've been using Log Analytics, you might have access to legacy pricing tiers. Learn more about
Log Analytics legacy pricing tiers.
Application Insights
There are some limits on the number of metrics and events per application, that is, per instrumentation key.
Limits depend on the pricing plan that you choose.
RESOURCE | DEFAULT LIMIT | NOTE
Total data per day | 100 GB | You can reduce data by setting a cap. If you need more data, you can increase the limit in the portal, up to 1,000 GB. For capacities greater than 1,000 GB, send email to [email protected].
Availability multi-step test detailed results retention | 90 days | This resource provides detailed results of each step.
For more information, see About pricing and quotas in Application Insights.
If you are using the Learn & Develop SKU, you cannot request an increase on your quota limits. Instead, you should switch to the Performance at Scale SKU.
Performance at Scale SKU
Solver hours | 1,000 hours per month | Up to 50,000 hours per month
If you need to request a limit increase, please reach out to Azure Support.
For more information, please review the Azure Quantum pricing page. For information on third-party offerings,
please review the relevant provider page in the Azure portal.
Backup limits
For a summary of Azure Backup support settings and limitations, see Azure Backup Support Matrices.
Batch limits
NOTE
Default limits vary depending on the type of subscription you use to create a Batch account. Cores quotas shown are for
Batch accounts in Batch service mode. View the quotas in your Batch account.
IMPORTANT
To help us better manage capacity during the global health pandemic, the default core quotas for new Batch accounts in
some regions and for some types of subscription have been reduced from the above range of values, in some cases to
zero cores. When you create a new Batch account, check your core quota and request a core quota increase, if required.
Alternatively, consider reusing Batch accounts that already have sufficient quota.
1 Extra small instances count as one vCPU toward the vCPU limit despite using a partial CPU core.
2 The storage account limit includes both Standard and Premium storage accounts.

Standard sku cores (CPUs) for K80 GPU per region per subscription | 18 1,2
Standard sku cores (CPUs) for P100 or V100 GPU per region per subscription | 0 1,2
Ports per IP | 5
1 To request a limit increase, create an Azure Support request. Free subscriptions including Azure Free Account and Azure for Students aren't eligible for limit or quota increases. If you have a free subscription, you can upgrade to a Pay-As-You-Go subscription.
2 Default limit for Pay-As-You-Go subscription. Limit may differ for other category types.
Webhooks | 2 (Basic) | 10 (Standard) | 500 (Premium)
1 Storage included in the daily rate for each tier. Additional storage may be used, up to the registry storage limit,
at an additional daily rate per GiB. For rate information, see Azure Container Registry pricing. If you need
storage beyond the registry storage limit, please contact Azure Support.
2 ReadOps, WriteOps, and Bandwidth are minimum estimates. Azure Container Registry strives to improve performance as usage requires.
3 A docker pull translates to multiple read operations based on the number of layers in the image, plus the manifest retrieval.
4 A docker push translates to multiple write operations, based on the number of layers that must be pushed. A docker push includes ReadOps to retrieve a manifest for an existing image.
A Content Delivery Network subscription can contain one or more Content Delivery Network profiles. A Content
Delivery Network profile can contain one or more Content Delivery Network endpoints. You might want to use
multiple profiles to organize your Content Delivery Network endpoints by internet domain, web application, or
some other criteria.
Concurrent Data Integration Units 1 consumption per subscription per Azure Integration Runtime region | Region group 1 2: 6,000; Region group 2 2: 3,000; Region group 3 2: 1,500 (default and maximum limits are the same)
ForEach parallelism | 20 (default) | 50 (maximum)
1 The data integration unit (DIU) is used in a cloud-to-cloud copy operation, learn more from Data integration
units (version 2). For information on billing, see Azure Data Factory pricing.
2 Azure Integration Runtime is globally available to ensure data compliance, efficiency, and reduced network
egress costs.
Region group 1 | Central US, East US, East US 2, North Europe, West Europe, West US, West US 2
Region group 3 | Canada Central, East Asia, France Central, Korea Central, UK South
3 Pipeline, data set, and linked service objects represent a logical grouping of your
workload. Limits for these
objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is
designed to scale to handle petabytes of data.
4 The payload for each activity run includes the activity configuration, the associated dataset(s) and linked
service(s) configurations if any, and a small portion of system properties generated per activity type. Limit for
this payload size doesn't relate to the amount of data you can move and process with Azure Data Factory. Learn
about the symptoms and recommendation if you hit this limit.
Version 1
RESOURCE | DEFAULT LIMIT | MAXIMUM LIMIT
Retry count for pipeline activity runs | 1,000 | MaxInt (32 bit)
1 Pipeline, data set, and linked service objects represent a logical grouping of your
workload. Limits for these
objects don't relate to the amount of data you can move and process with Azure Data Factory. Data Factory is
designed to scale to handle petabytes of data.
2 On-demand HDInsight cores are allocated out of the subscription that contains the data factory. As a result, the
previous limit is the Data Factory-enforced core limit for on-demand HDInsight cores. It's different from the core
limit that's associated with your Azure subscription.
3 The cloud data movement unit (DMU) for version 1 is used in a cloud-to-cloud copy operation, learn more
from Cloud data movement units (version 1). For information on billing, see Azure Data Factory pricing.
RESOURCE | LIMIT | COMMENTS
Maximum number of access ACLs, per file or folder | 32 | This is a hard limit. Use groups to manage access with fewer entries.
Maximum number of default ACLs, per file or folder | 32 | This is a hard limit. Use groups to manage access with fewer entries.
Functional limits
The following table lists the functional limits of Azure Digital Twins.
TIP
For modeling recommendations to operate within these functional limits, see Best practices for designing models.
Rate limits
The following table reflects the rate limits of different APIs.
API C A PA B IL IT Y DEFA ULT L IM IT A DJUSTA B L E?
Other limits
Limits on data types and fields within DTDL documents for Azure Digital Twins models can be found within its
spec documentation in GitHub: Digital Twins Definition Language (DTDL) - version 2.
Query latency details and other query limitations can be found in How-to: Query the twin graph.
RESOURCE | LIMIT
Publish rate for a custom or a partner topic (ingress) | 5,000 events/sec or 1 MB/sec (whichever is met first)
Event size | 1 MB
Publish rate for an event domain (ingress) | 5,000 events/sec or 1 MB/sec (whichever is met first)
LIMIT | NOTES | VALUE
Number of consumer groups per event hub 1 | | 20
Namespaces 1 | | 50 per CU

LIMIT | STANDARD | DEDICATED
Consumer groups | 20 per event hub | No limit per CU, 1,000 per event hub
Brokered connections | 1,000 included, 5,000 max | 100 K included and max
The following table lists the limits that apply to IoT Hub resources.
RESOURCE | LIMIT
Maximum size of device-to-cloud batch | AMQP and HTTP: 256 KB for the entire batch. MQTT: 256 KB for each message
Maximum size of device twin | 8 KB for tags section, and 32 KB for desired and reported properties sections each
Maximum message routing rules | 100 (for S1, S2, and S3)
Maximum number of concurrently connected device streams | 50 (for S1, S2, S3, and F1 only)
Maximum device stream data transfer | 300 MB per day (for S1, S2, S3, and F1 only)
NOTE
If you need more than 50 paid IoT hubs in an Azure subscription, contact Microsoft Support.
NOTE
Currently, the total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. If
you want to increase this limit, contact Microsoft Support.
IoT Hub throttles requests when the following quotas are exceeded.
THROTTLE | PER-HUB VALUE
Device connections | 6,000/sec/unit (for S3), 120/sec/unit (for S2), 12/sec/unit (for S1). Minimum of 100/sec.
Device-to-cloud sends | 6,000/sec/unit (for S3), 120/sec/unit (for S2), 12/sec/unit (for S1). Minimum of 100/sec.
Direct methods | 24 MB/sec/unit (for S3), 480 KB/sec/unit (for S2), 160 KB/sec/unit (for S1). Based on 8-KB throttling meter size.
Device twin updates | 250/sec/unit (for S3), maximum of 50/sec or 5/sec/unit (for S2), 50/sec (for S1)
Jobs per-device operation throughput | 50/sec/unit (for S3), maximum of 10/sec or 1/sec/unit (for S2), 10/sec (for S1)
Device stream initiation rate | 5 new streams/sec (for S1, S2, S3, and F1 only)
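When a hub exceeds these quotas it rejects further requests, so device code should retry with backoff rather than hammering the hub. A minimal sketch, assuming the azure-iot-device package and a placeholder device connection string:

```python
# A minimal retry-with-backoff sketch for device-to-cloud sends, assuming
# the azure-iot-device package; connection string and payload are placeholders.
import time

from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

def send_with_backoff(payload: str, max_attempts: int = 5) -> None:
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            client.send_message(Message(payload))
            return
        except Exception:  # the SDK raises transport-specific errors under throttling
            if attempt == max_attempts:
                raise
            time.sleep(delay)  # back off so a throttled hub can recover
            delay *= 2         # exponential backoff: 1, 2, 4, 8 seconds

send_with_backoff('{"temperature": 21.5}')
```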
NOTE
To increase the number of enrollments and registrations on your provisioning service, contact Microsoft Support.
NOTE
Increasing the maximum number of CAs is not supported.
The Device Provisioning Service throttles requests when the following quotas are exceeded.
THROTTLE | PER-UNIT VALUE
Operations | 200/min/service
NOTE
In the previous table, we see that for RSA 2,048-bit software keys, 2,000 GET transactions per 10 seconds are allowed. For RSA 2,048-bit HSM-keys, 1,000 GET transactions per 10 seconds are allowed.
The throttling thresholds are weighted, and enforcement is on their sum. For example, as shown in the previous table, when you perform GET operations on RSA HSM-keys, it's eight times more expensive to use 4,096-bit keys compared to 2,048-bit keys. That's because 1,000/125 = 8.
In a given 10-second interval, an Azure Key Vault client can do only one of the following before it encounters a 429 throttling HTTP status code: 2,000 RSA 2,048-bit software-key GET transactions, 1,000 RSA 2,048-bit HSM-key GET transactions, or 125 RSA 4,096-bit HSM-key GET transactions.
For information on how to handle throttling when these limits are exceeded, see Azure Key Vault throttling
guidance.
1 The subscription-wide limit for all transaction types is five times the per-key-vault limit. For example, HSM-other transactions per subscription are limited to 5,000 transactions in 10 seconds per subscription.
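The Key Vault throttling guidance boils down to backing off when a 429 arrives, honoring the Retry-After header if the service supplies one. A minimal sketch, assuming the azure-keyvault-keys and azure-identity packages and a placeholder vault URL (note that the SDK's built-in retry policy already does much of this):

```python
# A minimal 429-handling sketch for Key Vault reads; the vault URL and
# key name are placeholders.
import time

from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient("https://<your-vault>.vault.azure.net", DefaultAzureCredential())

def get_key_with_backoff(name: str, max_attempts: int = 5):
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return client.get_key(name)
        except HttpResponseError as err:
            if err.status_code != 429 or attempt == max_attempts:
                raise
            # Honor the service's Retry-After header when present.
            retry_after = err.response.headers.get("Retry-After") if err.response else None
            time.sleep(float(retry_after) if retry_after else delay)
            delay *= 2  # exponential backoff between throttled attempts

key = get_key_with_backoff("my-rsa-key")
```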
Azure Private Link integration
NOTE
The number of key vaults with private endpoints enabled per subscription is an adjustable limit. The limit shown below is the default limit. If you would like to request a limit increase for your service, send an email to akv-[email protected]. We will approve these requests on a case-by-case basis.
Account limits
Asset limits
RESOURCE | DEFAULT LIMIT
File size | In some scenarios, there is a limit on the maximum file size supported for processing in Media Services. (1)
1 The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260 GB, your Job will likely fail.
The following table shows the limits on the media reserved units S1, S2, and S3. If your source file is larger than
the limits defined in the table, your encoding job fails. If you encode 4K resolution sources of long duration,
you're required to use S3 media reserved units to achieve the performance needed. If you have 4K content that's
larger than the 260-GB limit on the S3 media reserved units, open a support ticket.
MEDIA RESERVED UNIT TYPE | MAXIMUM INPUT SIZE (GB)
S1 | 26
S2 | 60
S3 | 260
3 This number includes queued, finished, active, and canceled Jobs. It does not include deleted Jobs.
Any Job record in your account older than 90 days will be automatically deleted, even if the total number of
records is below the maximum quota.
Live streaming limits
4 For detailed information about Live Event limitations, see Live Event types comparison and limitations.
5 Live Outputs start on creation and stop when deleted.
6 When using a custom Streaming Policy, you should design a limited set of such policies for your Media Services account, and re-use them for your StreamingLocators whenever the same encryption options and protocols are needed. You shouldn't create a new Streaming Policy for each Streaming Locator.
7 Streaming Locators are not designed for managing per-user access control. To give different access rights to
individual users, use Digital Rights Management (DRM) solutions.
Protection limits
RESOURCE | DEFAULT LIMIT
Licenses per month for each of the DRM types on Media Services key delivery service per account | 1,000,000
Support ticket
For resources that aren't fixed, you can ask for the quotas to be raised by opening a support ticket. Include detailed information in the request on the desired quota changes, use-case scenarios, and regions required. Do not create additional Azure Media Services accounts in an attempt to obtain higher limits.
Media Services v2 (legacy)
For limits specific to Media Services v2 (legacy), see Media Services v2 (legacy).
TIER | FREE | BASIC | STANDARD
API calls | 500,000 | 1.5 million per unit | 15 million per unit
Push notifications | Azure Notification Hubs Free tier included, up to 1 million pushes | Notification Hubs Basic tier included, up to 10 million pushes | Notification Hubs Standard tier included, up to 10 million pushes
For more information on limits and pricing, see Azure Mobile Services pricing.
Networking limits
Networking limits - Azure Resource Manager
The following limits apply only for networking resources managed through Azure Resource Manager per
region per subscription. Learn how to view your current resource usage against your subscription limits.
NOTE
We recently increased all default limits to their maximum limits. If there's no maximum limit column, the resource doesn't have adjustable limits. If you had these limits increased by support in the past and don't see updated limits in the following tables, open an online customer support request at no charge.
1 The limit is up to 150 resources, in any combination of standalone virtual machine resources, availability set resources, and virtual machine scale-set placement groups.
Basic Load Balancer
The following limits apply only for networking resources managed through the classic deployment model per
subscription. Learn how to view your current resource usage against your subscription limits.
RESOURCE | LIMIT
Concurrent TCP or UDP flows per NIC of a virtual machine or role instance | 500,000, up to 1,000,000 for two or more NICs
ExpressRoute limits
RESOURCE | LIMIT
Number of virtual network links allowed per ExpressRoute circuit | See the Number of virtual networks per ExpressRoute circuit table.
CIRCUIT SIZE | NUMBER OF VIRTUAL NETWORK LINKS FOR STANDARD | NUMBER OF VIRTUAL NETWORK LINKS WITH PREMIUM ADD-ON
50 Mbps | 10 | 20
100 Mbps | 10 | 25
200 Mbps | 10 | 25
500 Mbps | 10 | 40
1 Gbps | 10 | 50
2 Gbps | 10 | 60
5 Gbps | 10 | 75
10 Gbps | 10 | 100
40 Gbps* | 10 | 100
NOTE
Global Reach connections count against the limit of virtual network connections per ExpressRoute Circuit. For example, a
10 Gbps Premium Circuit would allow for 5 Global Reach connections and 95 connections to the ExpressRoute Gateways
or 95 Global Reach connections and 5 connections to the ExpressRoute Gateways or any other combination up to the
limit of 100 connections for the circuit.
RESOURCE | LIMIT
Local Network Gateway address prefixes | 1,000 per local network gateway
Throughput per Virtual WAN VPN connection (2 tunnels) | 2 Gbps with 1 Gbps/IPsec tunnel
VNet connections per hub | 500 minus total number of hubs in Virtual WAN
Aggregate throughput per Virtual WAN Hub Router | 50 Gbps for VNet-to-VNet transit
RESOURCE | LIMIT | NOTE
1 For WAF-enabled SKUs, you must limit the number of resources to 40.
Network Watcher limits
RESOURCE | LIMIT | NOTE
Packet capture sessions | 10,000 per region | Number of sessions only, not saved captures.
RESOURCE | LIMIT
Number of IP configurations on a private link service | 8 (this number is for the NAT IP addresses used per PLS)
*May vary due to other ongoing RDP sessions or other ongoing SSH sessions.
**May vary if there are existing RDP connections or usage from other ongoing SSH sessions.
Azure DNS limits
Public DNS zones
RESOURCE | LIMIT
Virtual network links per private DNS zone with auto-registration enabled | 100
1 These limits are applied to every individual virtual machine and not at the virtual network level. DNS queries exceeding these limits are dropped.
Azure Firewall limits
RESOURCE | LIMIT
Public IP addresses | 250 maximum. All public IP addresses can be used in DNAT rules, and they all contribute to available SNAT ports.
FQDNs in network rules | For good performance, don't exceed 1,000 FQDNs across all network rules per firewall.
Timeout values
Client to Front Door
Front Door has an idle TCP connection timeout of 61 seconds.
Front Door to application back-end
After the HTTP request is forwarded to the back end, Front Door waits up to 30 seconds for the first packet from the back end, and then returns a 503 error to the client if nothing arrives. This value is configurable via the field sendRecvTimeoutSeconds in the API. If the response is a chunked response, a 200 is returned if or when the first chunk is received.
For caching scenarios, this timeout is not configurable, so if a request is cached and it takes more than 30 seconds for the first packet from Front Door or from the back end, then a 504 error is returned to the client.
After the first packet is received from the back end, Front Door waits for 30 seconds in an idle timeout, and then returns a 503 error to the client. This timeout value is not configurable.
Front Door to the back-end TCP session timeout is 90 seconds.
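From a client's perspective, the timeouts above surface as 503 or 504 responses, which are usually safe to retry. A minimal sketch using the requests package and a hypothetical Front Door URL:

```python
# A minimal client-side retry sketch for Front Door's 30-second back-end
# timeouts, which surface as 503 (non-cached) or 504 (cached) responses.
# The URL is a hypothetical placeholder.
import time

import requests

URL = "https://<your-frontdoor>.azurefd.net/api/report"  # placeholder

def get_with_retry(url: str, max_attempts: int = 3) -> requests.Response:
    for attempt in range(1, max_attempts + 1):
        resp = requests.get(url, timeout=120)  # allow for Front Door's own waits
        # 503: no first packet from the back end in time; 504: cached-route timeout.
        if resp.status_code not in (503, 504) or attempt == max_attempts:
            return resp
        time.sleep(2 ** attempt)  # back off before giving the back end another try

response = get_with_retry(URL)
print(response.status_code)
```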
Upload and download data limit
 | WITH CHUNKED TRANSFER ENCODING (CTE) | WITHOUT HTTP CHUNKING
Download | There's no limit on the download size. | There's no limit on the download size.
Upload | There's no limit as long as each CTE upload is less than 2 GB. | The size can't be larger than 2 GB.
Other limits
Maximum URL size - 8,192 bytes - Specifies the maximum length of the raw URL (scheme + hostname + port + path + query string of the URL)
Maximum query string size - 4,096 bytes - Specifies the maximum length of the query string, in bytes
Maximum HTTP response header size from health probe URL - 4,096 bytes - Specifies the maximum length of all the response headers of health probes
For more information on limits and pricing, see Notification Hubs pricing.
QUOTA NAME | SCOPE | NOTES | VALUE
Number of topics or queues per namespace | Namespace | Subsequent requests for creation of a new topic or queue on the namespace are rejected. As a result, if configured through the Azure portal, an error message is generated. If called from the management API, an exception is received by the calling code. | 10,000 for the Basic or Standard tier. The total number of topics and queues in a namespace must be less than or equal to 10,000. For the Premium tier, 1,000 per messaging unit (MU).
Number of partitioned topics or queues per namespace | Namespace | Subsequent requests for creation of a new partitioned topic or queue on the namespace are rejected. As a result, if configured through the Azure portal, an error message is generated. If called from the management API, the exception QuotaExceededException is received by the calling code. | Basic and Standard tiers: 100. Partitioned entities aren't supported in the Premium tier. Each partitioned queue or topic counts toward the quota of 1,000 entities per namespace.
Message size for a queue, topic, or subscription entity | Entity | Incoming messages that exceed these quotas are rejected, and an exception is received by the calling code. | Maximum message size: 256 KB for Standard tier, 1 MB for Premium tier. Due to system overhead, this limit is less than these values. Maximum number of header properties in property bag: byte/int.MaxValue.
Number of subscriptions per topic | Entity | Subsequent requests for creating additional subscriptions for the topic are rejected. As a result, if configured through the portal, an error message is shown. If called from the management API, an exception is received by the calling code. | 2,000 per topic for the Standard tier and Premium tier.
Size of SQL filters or actions | Namespace | Subsequent requests for creation of additional filters are rejected, and an exception is received by the calling code. | Maximum length of filter condition string: 1,024 (1 K). Maximum length of rule action string: 1,024 (1 K). Maximum number of expressions per rule action: 32.
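Per the table, requests beyond an entity quota are rejected and reach the caller as an exception (QuotaExceededException at the service). A minimal sketch of handling that rejection, assuming the azure-servicebus management client and a placeholder connection string:

```python
# A minimal sketch of handling a rejected create-queue call once the
# topic/queue quota is hit; the connection string and queue name are
# placeholders.
from azure.core.exceptions import HttpResponseError
from azure.servicebus.management import ServiceBusAdministrationClient

admin = ServiceBusAdministrationClient.from_connection_string(
    "<namespace-connection-string>"
)

try:
    admin.create_queue("orders-queue")
except HttpResponseError as err:
    # Per the table above, requests beyond the namespace quota are rejected
    # and surface to the caller as an exception.
    print(f"Queue creation rejected: {err.message}")
```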
Storage limits
The following table describes default limits for Azure general-purpose v1, v2, Blob storage, and block blob
storage accounts. The ingress limit refers to all data that is sent to a storage account. The egress limit refers to all
data that is received from a storage account.
NOTE
You can request higher capacity and ingress limits. To request an increase, contact Azure Support.
RESOURCE | LIMIT
Maximum request rate¹ per storage account | 20,000 requests per second
Maximum ingress¹ per storage account (regions other than US and Europe) | 5 Gbps if RA-GRS/GRS is enabled, 10 Gbps for LRS/ZRS²
Maximum egress for general-purpose v1 storage accounts (US regions) | 20 Gbps if RA-GRS/GRS is enabled, 30 Gbps for LRS/ZRS²
Maximum egress for general-purpose v1 storage accounts (non-US regions) | 10 Gbps if RA-GRS/GRS is enabled, 15 Gbps for LRS/ZRS²
1 Azure Storage standard accounts support higher capacity limits and higher limits for ingress by request. To
request an increase in account limits, contact Azure Support.
2 If your storage account has read-access enabled with geo-redundant storage (RA-GRS) or geo-zone-redundant
storage (RA-GZRS), then the egress targets for the secondary location are identical to those of the primary
location. For more information, see Azure Storage replication.
NOTE
Microsoft recommends that you use a general-purpose v2 storage account for most scenarios. You can easily upgrade a
general-purpose v1 or an Azure Blob storage account to a general-purpose v2 account with no downtime and without
the need to copy data. For more information, see Upgrade to a general-purpose v2 storage account.
All storage accounts run on a flat network topology regardless of when they were created. For more information
on the Azure Storage flat network architecture and on scalability, see Microsoft Azure Storage: A Highly Available
Cloud Storage Service with Strong Consistency.
For more information on limits for standard storage accounts, see Scalability targets for standard storage
accounts.
Storage resource provider limits
The following limits apply only when you perform management operations by using Azure Resource Manager
with Azure Storage.
RESOURCE | LIMIT
Storage account management operations (write) | 10 per second / 1,200 per hour
Maximum size of a block blob | 50,000 X 100 MiB (approximately 4.75 TiB); 50,000 X 4,000 MiB (approximately 190.7 TiB) (preview)
Target request rate for a single blob | Up to 500 requests per second
1 Throughput for a single blob depends on several factors, including, but not limited to: concurrency, request
size, performance tier, speed of source for uploads, and destination for downloads. To take advantage of the
performance enhancements of high-throughput block blobs, upload larger blobs or blocks. Specifically, call the
Put Blob or Put Block operation with a blob or block size that is greater than 4 MiB for standard storage
accounts. For premium block blob or for Data Lake Storage Gen2 storage accounts, use a block or blob size that
is greater than 256 KiB.
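In the Python SDK (azure-storage-blob), the block size used for uploads is set when the client is constructed, so crossing the 4-MiB threshold described above is a small configuration change. A minimal sketch; the account URL, container, blob name, and the 8-MiB figure are illustrative assumptions:

```python
# A minimal sketch of uploading with blocks larger than 4 MiB to hit the
# high-throughput block blob path on a standard account; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

blob = BlobClient(
    account_url="https://<account>.blob.core.windows.net",
    container_name="uploads",
    blob_name="big-dataset.bin",
    credential=DefaultAzureCredential(),
    max_block_size=8 * 1024 * 1024,       # stage 8-MiB blocks (> 4-MiB threshold)
    max_single_put_size=8 * 1024 * 1024,  # force Put Block for larger payloads
)

with open("big-dataset.bin", "rb") as data:
    blob.upload_blob(data, overwrite=True, max_concurrency=4)
```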
2 Page blobs are not yet supported in accounts that have the Hierarchical namespace setting on them.
The following table describes the maximum block and blob sizes permitted by service version.
SERVICE VERSION | MAXIMUM BLOCK SIZE (VIA PUT BLOCK) | MAXIMUM BLOB SIZE (VIA PUT BLOCK LIST) | MAXIMUM BLOB SIZE VIA SINGLE WRITE (VIA PUT BLOB)
Version 2019-12-12 and later | 4,000 MiB (preview) | Approximately 190.7 TiB (4,000 MiB X 50,000 blocks) (preview) | 5,000 MiB (preview)
RESOURCE | LIMIT
Maximum request rate per storage account | 20,000 messages per second, which assumes a 1-KiB message size
Target throughput for a single queue (1-KiB messages) | Up to 2,000 messages per second
RESOURCE | LIMIT
Number of tables in an Azure storage account | Limited only by the capacity of the storage account
Number of partitions in a table | Limited only by the capacity of the storage account
Number of entities in a partition | Limited only by the capacity of the storage account
Maximum number of properties in a table entity | 255 (including the three system properties, PartitionKey, RowKey, and Timestamp)
Maximum total size of an individual property in an entity | Varies by property type. For more information, see Property Types in Understanding the Table Service Data Model.
Size of an entity group transaction | A transaction can include at most 100 entities, and the payload must be less than 4 MiB in size. An entity group transaction can include an update to an entity only once.
Maximum request rate per storage account | 20,000 transactions per second, which assumes a 1-KiB entity size
Target throughput for a single table partition (1-KiB entities) | Up to 2,000 entities per second
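The 100-entity cap on entity group transactions means bulk writes must be chunked, and every entity in a chunk must share a PartitionKey. A minimal sketch, assuming the azure-data-tables package, a placeholder connection string, and an existing table named Telemetry:

```python
# A minimal sketch that respects the 100-entity cap on an entity group
# transaction by batching same-partition upserts in chunks; the connection
# string and table name are placeholders, and the table is assumed to exist.
from azure.data.tables import TableClient

table = TableClient.from_connection_string("<storage-connection-string>", "Telemetry")

entities = [
    {"PartitionKey": "device-1", "RowKey": f"reading-{i}", "value": i}
    for i in range(250)
]

# All entities in one transaction must share a PartitionKey, and each
# transaction can include at most 100 entities.
for start in range(0, len(entities), 100):
    operations = [("upsert", e) for e in entities[start : start + 100]]
    table.submit_transaction(operations)
```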
For Standard storage accounts: A Standard storage account has a maximum total request rate of 20,000
IOPS. The total IOPS across all of your virtual machine disks in a Standard storage account should not exceed
this limit.
You can roughly calculate the number of highly utilized disks supported by a single Standard storage account
based on the request rate limit. For example, for a Basic tier VM, the maximum number of highly utilized disks is
about 66, which is 20,000/300 IOPS per disk. The maximum number of highly utilized disks for a Standard tier
VM is about 40, which is 20,000/500 IOPS per disk.
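That disk count is simply the account's request-rate cap divided by the per-disk IOPS; a quick sketch of the same arithmetic:

```python
# A quick check of the disk math above: how many highly utilized disks a
# single Standard storage account (20,000 IOPS cap) can serve per VM tier.
ACCOUNT_IOPS_LIMIT = 20_000

for tier, iops_per_disk in {"Basic": 300, "Standard": 500}.items():
    disks = ACCOUNT_IOPS_LIMIT // iops_per_disk
    print(f"{tier} tier VM disks: about {disks}")  # 66 and 40, as stated above
```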
For Premium storage accounts: A Premium storage account has a maximum total throughput rate of 50
Gbps. The total throughput across all of your VM disks should not exceed this limit.
For more information, see Virtual machine sizes.
Disk encryption sets
There's a limitation of 1000 disk encryption sets per region, per subscription. For more information, see the
encryption documentation for Linux or Windows virtual machines. If you need to increase the quota, contact
Azure support.
Managed virtual machine disks
Standard HDD managed disks
STANDARD DISK TYPE | S4 | S6 | S10 | S15 | S20 | S30 | S40 | S50 | S60 | S70 | S80
Disk size in GiB | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767
IOPS per disk | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 1,300 | Up to 2,000 | Up to 2,000
Throughput per disk | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 300 MB/sec | Up to 500 MB/sec | Up to 500 MB/sec
Standard SSD managed disks
STANDARD SSD SIZES | E1 | E2 | E3 | E4 | E6 | E10 | E15 | E20 | E30 | E40 | E50 | E60 | E70 | E80
Disk size in GiB | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767
IOPS per disk | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 2,000 | Up to 4,000 | Up to 6,000
Throughput per disk | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 400 MB/sec | Up to 600 MB/sec | Up to 750 MB/sec
Premium SSD managed disks
PREMIUM SSD SIZES | P1 | P2 | P3 | P4 | P6 | P10 | P15 | P20 | P30 | P40 | P50 | P60 | P70 | P80
Disk size in GiB | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767
Provisioned IOPS per disk | 120 | 120 | 120 | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000
Provisioned throughput per disk | 25 MB/sec | 25 MB/sec | 25 MB/sec | 25 MB/sec | 50 MB/sec | 100 MB/sec | 125 MB/sec | 150 MB/sec | 200 MB/sec | 250 MB/sec | 250 MB/sec | 500 MB/sec | 750 MB/sec | 900 MB/sec
Max burst duration | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | - | - | - | - | - | -
PREMIUM STORAGE DISK TYPE | P10 | P20 | P30 | P40 | P50
Disk size | 128 GiB | 512 GiB | 1,024 GiB (1 TB) | 2,048 GiB (2 TB) | 4,095 GiB (4 TB)
Maximum throughput per disk | 100 MB/sec | 150 MB/sec | 200 MB/sec | 250 MB/sec | 250 MB/sec
Maximum number of disks per storage account | 280 | 70 | 35 | 17 | 8
LIMIT IDENTIFIER | LIMIT | COMMENTS
Maximum number of schedules per bandwidth template | 168 | A schedule for every hour, every day of the week.
Maximum size of a tiered volume on physical devices | 64 TB for StorSimple 8100 and StorSimple 8600 | StorSimple 8100 and StorSimple 8600 are physical devices.
Maximum size of a tiered volume on virtual devices in Azure | 30 TB for StorSimple 8010; 64 TB for StorSimple 8020 | StorSimple 8010 and StorSimple 8020 are virtual devices in Azure that use Standard storage and Premium storage, respectively.
Maximum size of a locally pinned volume on physical devices | 9 TB for StorSimple 8100; 24 TB for StorSimple 8600 | StorSimple 8100 and StorSimple 8600 are physical devices.
Maximum number of snapshots of any type that can be retained per volume | 256 | This amount includes local snapshots and cloud snapshots.
Restore and clone recover time for tiered volumes | <2 minutes | The volume is made available within 2 minutes of a restore or clone operation, regardless of the volume size. The volume performance might initially be slower than normal as most of the data and metadata still resides in the cloud. Performance might increase as data flows from the cloud to the StorSimple device. The total time to download metadata depends on the allocated volume size. Metadata is automatically brought into the device in the background at the rate of 5 minutes per TB of allocated volume data. This rate might be affected by Internet bandwidth to the cloud. The restore or clone operation is complete when all the metadata is on the device. Backup operations can't be performed until the restore or clone operation is fully complete.
Restore recover time for locally pinned volumes | <2 minutes | The volume is made available within 2 minutes of the restore operation, regardless of the volume size. The volume performance might initially be slower than normal as most of the data and metadata still resides in the cloud. Performance might increase as data flows from the cloud to the StorSimple device. The total time to download metadata depends on the allocated volume size. Metadata is automatically brought into the device in the background at the rate of 5 minutes per TB of allocated volume data. This rate might be affected by Internet bandwidth to the cloud. Unlike tiered volumes, if there are locally pinned volumes, the volume data is also downloaded locally on the device. The restore operation is complete when all the volume data has been brought to the device. The restore operations might be long, and the total time to complete the restore will depend on the size of the provisioned local volume, your Internet bandwidth, and the existing data on the device. Backup operations on the locally pinned volume are allowed while the restore operation is in progress.
Maximum client read/write throughput, when served from the SSD tier* | 920/720 MB/sec with a single 10-gigabit Ethernet network interface | Up to two times with MPIO and two network interfaces.
*Maximum throughput per I/O type was measured with 100 percent read and 100 percent write scenarios.
Actual throughput might be lower and depends on I/O mix and network conditions.
Stream Analytics limits
LIMIT IDENTIFIER | LIMIT | COMMENTS
Maximum number of inputs per job | 60 | There's a hard limit of 60 inputs per Azure Stream Analytics job.
Maximum number of outputs per job | 60 | There's a hard limit of 60 outputs per Stream Analytics job.
Maximum number of functions per job | 60 | There's a hard limit of 60 functions per Stream Analytics job.
Maximum number of streaming units per job | 192 | There's a hard limit of 192 streaming units per Stream Analytics job.
Maximum number of jobs per region | 1,500 | Each subscription can have up to 1,500 jobs per geographical region.
1 Virtual machines created by using the classic deployment model instead of Azure Resource Manager are
automatically stored in a cloud service. You can add more virtual machines to that cloud service for load
balancing and availability.
2 Input endpoints allow communications to a virtual machine from outside the virtual machine's cloud service.
Virtual machines in the same cloud service or virtual network can automatically communicate with each other.
Virtual Machines limits - Azure Resource Manager
The following limits apply when you use Azure Resource Manager and Azure resource groups.
RESOURCE | LIMIT
VM total cores per subscription | 20¹ per region. Contact support to increase limit.
Azure Spot VM total cores per subscription | 20¹ per region. Contact support to increase limit.
VM per series, such as Dv2 and F, cores per subscription | 20¹ per region. Contact support to increase limit.
2 Properties such as SSH public keys are also pushed as certificates and count toward this limit. To bypass this limit, use the Azure Key Vault extension for Windows or the Azure Key Vault extension for Linux to install certificates.
3 With Azure Resource Manager, certificates are stored in the Azure Key Vault. The number of certificates is unlimited for a subscription. There's a 1-MB limit of certificates per deployment, which consists of either a single VM or an availability set.
NOTE
Virtual machine cores have a regional total limit. They also have a limit for regional per-size series, such as Dv2 and F. These limits are separately enforced. For example, consider a subscription with a US East total VM core limit of 30, an A series core limit of 30, and a D series core limit of 30. This subscription can deploy 30 A1 VMs, or 30 D1 VMs, or a combination of the two that doesn't exceed a total of 30 cores. An example of a combination is 10 A1 VMs and 20 D1 VMs.
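A small sketch of the combined-quota rule from this note: a deployment fits only if it satisfies both the regional total core limit and each per-series core limit (the limits here are the example values above):

```python
# A small sketch of the combined-quota rule in the note above: a deployment
# must fit the regional total core limit and each per-series core limit.
def fits_quota(requested: dict, series_limits: dict, total_limit: int) -> bool:
    if sum(requested.values()) > total_limit:
        return False
    return all(cores <= series_limits[series] for series, cores in requested.items())

series_limits = {"A": 30, "D": 30}  # example limits from the note
print(fits_quota({"A": 10, "D": 20}, series_limits, total_limit=30))  # True
print(fits_quota({"A": 30, "D": 10}, series_limits, total_limit=30))  # False: 40 total cores
```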