
Azure App Service Auto-Healing

Auto-Healing

Azure App Service provides a feature to auto-heal your service whenever it detects an anomaly with the service.

Auto-Healing allows you to restart, log, or perform custom actions (the action) whenever certain criteria about your App Service are met (the trigger), for example, recycling the service when slow requests are detected.

You define these triggers and actions yourself to determine when and how the service heals.

Azure Web Sites

Before it was renamed to Azure App Service, the service was called Azure Web Sites. To enable auto-heal in Azure Web Sites, you had to add configuration to the web.config file.

<system.webServer>
  <monitoring>
    <triggers>
      <requests count="100" timeInterval="00:05:00" />
    </triggers>
    <actions value="Recycle" />
  </monitoring>
</system.webServer>

To see different scenarios on how to use this, head over to this link. Some of the scenarios are:

  • Recycle based on Request count (code example above).
  • Recycle based on slow requests.
  • Logging an event or recycle based on HTTP status code(s).
  • Take custom actions or recycle / log event based on memory limit.
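For instance, the slow-request scenario can be sketched with the slowRequests trigger in the same monitoring section. A minimal sketch (the timeTaken, count and timeInterval values here are placeholders; adjust them to your needs):

```xml
<system.webServer>
  <monitoring>
    <triggers>
      <!-- Recycle when 10 requests each take longer than 30 seconds within a 5-minute window -->
      <slowRequests timeTaken="00:00:30" count="10" timeInterval="00:05:00" />
    </triggers>
    <actions value="Recycle" />
  </monitoring>
</system.webServer>
```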

Azure App Service

In the new Azure App Service, however, auto-heal works a little bit differently. Announced in September 2018 (https://azure.github.io/AppService/2018/09/10/Announcing-the-New-Auto-Healing-Experience-in-App-Service-Diagnostics.html), auto-heal is now available through the Azure Portal.

To get to Auto-Healing setting, go to Azure App Service > Diagnose and solve problems > Diagnostic Tools > Auto Healing.

azure-app-service-auto-healing-1

azure-app-service-auto-healing-2

Azure App Service offers two types of auto-heal: ProActive Auto-Healing, which is enabled by default, and custom Auto-Healing.

You can turn off ProActive Auto-Healing in the Azure Portal as well.

To add custom Auto-Healing setting, follow this link.

azure-app-service-auto-healing-3

As the screenshot shows, the first tab (“Configure Mitigation Rules”) holds the custom Auto-Healing settings, which are turned off by default. The second tab (“ProActive Auto-Healing”) is ProActive Auto-Healing, which is turned on by default.

Reference

https://docs.microsoft.com/en-us/azure/app-service/overview-diagnostics

https://azure.github.io/AppService/2018/09/10/Announcing-the-New-Auto-Healing-Experience-in-App-Service-Diagnostics.html


Posted on May 20, 2019 in General

 


Azure Resource Manager

Azure Resource Manager Overview

Azure Resource Manager, or ARM, is a way to manage resources in Azure. With ARM, we are able to define our platform, or infrastructure, as code.

One of the benefits of using code to manage platform / infrastructure is that we can check it in to source control and see the changes over time. It also allows us to reuse parts of the code for other deployments.

Azure Resource Manager also has the following benefits:

  • Manage resources. Deploy, add and remove resources with ease.
  • Resource grouping. Group resources into logical sets that make sense to you, e.g. environment, location, etc.
  • Resource dependencies. Handle dependencies between different resources.
  • Repeatable deployments. An ARM template can be used to perform the same, repeatable deployments.
  • Templates. Using templates and code to define platform / infrastructure. Templates are also reusable.

Architecture

The ARM architecture has a few components:

  • Resource providers. These provide the functionality of the resources, such as compute, storage, etc.
  • Resource types. The actual resources of Azure services that will be deployed.
  • ARM REST APIs. Allow invocation of ARM commands through its APIs.

In general, an ARM template has the following structure:

  • Schema (required)
  • Content Version (required)
  • Parameters
  • Variables
  • Resources
  • Outputs
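The structure above maps to a JSON skeleton like the following (a minimal sketch; all sections except the schema and contentVersion may be left empty):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```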

How it Works

You can also abstract the parameters of an ARM template out into their own template parameter file. The ARM template parameter file must also have a schema and content version. The version doesn’t have to be the same as the template file’s.
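A parameter file might look like this (a sketch; siteName is a hypothetical parameter name, and the contentVersion deliberately differs from the template skeleton’s to illustrate that the two don’t have to match):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "2.0.0.0",
  "parameters": {
    "siteName": {
      "value": "my-demo-site"
    }
  }
}
```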

In an ARM template deployment, Azure automatically detects dependencies between resource types and deploys the dependent resource first. For example, if a template specifies deploying an App Service Plan and an App Service Web Site, the ARM REST API will deploy the App Service Plan first, since the Web Site depends on it.

You can also specify dependencies explicitly in an ARM template, as well as deploy resources simultaneously when they don’t depend on each other, e.g. deploying 4 App Service Web Sites.
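An explicit dependency is declared with the dependsOn element. A sketch of a Web Site resource that waits for its App Service Plan (siteName and planName are hypothetical parameters; the apiVersion is just an example):

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2018-02-01",
  "name": "[parameters('siteName')]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', parameters('planName'))]"
  ],
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('planName'))]"
  }
}
```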

Azure Resource Manager will only deploy resources in the template that do not yet exist. When a specified resource already exists, it is skipped.

Within an ARM template, you can also specify to deploy the application itself. This is useful, for example, when the App Service Web Site deployment should include populating the website with the web application from source control.
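One way to do this, assuming the application lives in a Git repository, is a nested sourcecontrols resource inside the Web Site resource (a sketch; repoUrl, branch and siteName are placeholders, and the apiVersion is just an example):

```json
{
  "type": "sourcecontrols",
  "apiVersion": "2018-02-01",
  "name": "web",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('siteName'))]"
  ],
  "properties": {
    "repoUrl": "https://github.com/contoso/sample-app.git",
    "branch": "master",
    "isManualIntegration": true
  }
}
```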

Reference

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

https://github.com/Azure/azure-resource-manager-schemas

 

Posted on January 25, 2019 in General

 


Data Warehouse Solutions in Azure

Data Warehousing Solutions at a Glance

With today’s big data requirements, where data can be structured, unstructured, batch, or stream, and come in many other forms and sizes, a traditional data warehouse is not going to cut it.

Typically, there are 4 data stages:

  • Ingest
  • Store
  • Processing
  • Consuming

Different technology is required at each stage. This also depends heavily on the size and form of the data and the 4 Vs: Volume, Variety, Velocity, Veracity.

Consideration for a solution sometimes also depends on:

  • Ease of management
  • Team skill sets
  • Language
  • Cost
  • Specification / requirements
  • Integration with existing / other systems.

Azure Services

Azure offers many services for data warehouse solutions. Traditionally, a data warehouse has been an ETL process + relational database storage, like SQL Data Warehouse. Today, that may not always be the case.

Some of Azure services for data warehousing:

  • Azure HDInsight
    Azure offers various cluster types that come with HDInsight, fully managed by Microsoft but still requiring some management from users. It also supports Data Lake Storage. More about HDInsight below. HDInsight sits in the “Processing” data stage.
  • Azure Databricks
    Its support for machine learning, AI, analytics and stream / graph processing makes it a go-to solution for data processing. It’s also fully integrated with Power BI and other source / destination tools. Notebooks in Databricks allow collaboration between data engineers, data scientists and business users. Compare to HDInsight.
  • Azure Data Factory
    The “Ingest” part of the data stages. Its function is to bring data in and move it between different systems. Azure Data Factory supports different pipelines across Azure services to connect the data, even on-premises data. Azure Data Factory can be used to control the flow of data.
  • Azure SQL Data Warehouse
    Typically the end destination of data, to be consumed by business users. SQL DW is platform as a service, requires less management from users and is great for teams already familiar with T-SQL and SSMS (SQL Server Management Studio). You can also scale it dynamically and pause / resume the compute. SQL DW uses internal storage to store data and includes the compute component. SQL Data Warehouse sits in the “Consuming” stage.
  • Database services (RDBMS, Cosmos, etc)
    SQL Database, other relational database systems and Cosmos DB are part of the storage solutions offered in Azure. These are typically more expensive than Azure Storage, but also offer other features. Database services are part of the “Store” stage.
  • Azure Data Lake Storage
    Built on top of Azure Storage, ADLS offers unlimited storage and a file system based on HDFS, allowing optimization for analytics purposes, like Hadoop or HDInsight. ADLS is part of the “Store” stage.
  • Azure Data Lake Analytics
    ADLA is a high-level abstraction of HDInsight. Users don’t need to worry about scaling and management of the clusters at all; it scales instantly per job. However, this also comes with some limitations. ADLA supports U-SQL, a SQL-like language that allows custom user-defined functions in C#. The tooling is also what developers are already familiar with: Visual Studio.
  • Azure Storage
  • Azure Analysis Services
  • Power BI

Which one to use?

There’s no right or wrong answer. The right solution depends on many other things, technical and non-technical, as well as the considerations mentioned above.

Simon Lidberg and Benjamin Wright Jones have a really good presentation around this topic; see the link in the reference for their full talk. Basically, the decision flowchart looks like this:

data-warehouse-solutions-in-azure

Reference

https://myignite.techcommunity.microsoft.com/sessions/66581

 

Posted on January 20, 2019 in General

 


What is Azure HDInsight?

Hadoop and Azure HDInsight

Azure HDInsight is Azure’s version of Hadoop as a service. It lives in the cloud, just like other Azure services, and it’s a managed service, so we don’t have to worry about some of the maintenance that’s required with a Hadoop cluster.

Underneath, Azure HDInsight uses Hortonworks Data Platform (HDP)’s Hadoop components.

Each Azure HDInsight version has its own cloud distribution of HDP along with other components. Different versions of HDInsight have different versions of HDP. See the reference link for the technology stack and its versions.

When you create an Azure HDInsight instance, you will be asked to choose the cluster type. The cluster type is the Hadoop technology you want to use: Hive, Spark, Storm, etc. More cluster types are being added; to see what’s currently supported, see the reference link.

Azure HDInsight can be a great data warehouse solution that lives in the cloud.

Azure HDInsight and Databricks

While Azure HDInsight is a fully managed service, there is still some management we as users have to do. HDInsight also supports Azure Data Lake Storage and Apache Ranger integration. The downside to HDInsight is that it doesn’t auto-scale and you can’t pause the deployment. This means you will pay the cost as long as the service lives. The typical model is to spin the service up whenever it’s needed, compute the data, store it in permanent storage and kill the service.

This is as opposed to Databricks, another data warehouse solution offered in Azure, which can auto-scale. Databricks, however, is less about the ETL process and more about processing the data for analytics, machine learning and the like. Needless to say, it has built-in libraries for this purpose.

The language support is also different. Language support in HDInsight depends on which cluster type you choose when you spin up the service; for example, Hive supports HiveQL (a SQL-like language) in its Hive editor. Databricks supports Python, Scala, R, SQL and many others.

Reference

https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning

https://docs.microsoft.com/en-us/azure/hdinsight/

 

Posted on January 15, 2019 in General

 


Application Insights Instrumentation Key in Web.config

When using Azure Application Insights in an ASP.NET application, by default, Visual Studio inserts the InstrumentationKey in ApplicationInsights.config.

To allow tracking multiple environments, move the InstrumentationKey to Web.config by following these steps:

  1. Remove InstrumentationKey from ApplicationInsights.config. If you have an MVC application, don’t forget to modify Application Insights’ script (usually in Views\Shared\_Layout.cshtml), replacing:

    {instrumentationKey:"your instrumentation key"}
    

    with:

    {instrumentationKey:"@Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey"}
    
  2. Add a new app setting for the InstrumentationKey in Web.config under <appSettings>:

    <add key="InstrumentationKey" value="your instrumentation key" />
    
  3. In Global.asax.cs, in the Application_Start() method, add:

    Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.Active.InstrumentationKey = System.Web.Configuration.WebConfigurationManager.AppSettings["InstrumentationKey"];
    

That’s it for the configuration changes. Everything else stays the same, including tracking custom events or page views.
With this configuration, you will be able to define the InstrumentationKey in release management for each environment.

 

Posted on May 18, 2018 in General

 


Configuring Azure Application Insights for Angular 2+ App

1. Obtain an InstrumentationKey from Azure.
-Create a new Application Insights resource.


-Enter the Name, Application Type (choose General for an Angular app) and choose the Resource Group.

-Wait for Azure to finish creating the resource.
-Go to the resource detail by clicking the name of the resource, select the Overview menu and expand the Essentials information to get the Instrumentation Key.

2. In the Angular app, install the applicationinsights-js package.

npm install applicationinsights-js --save

3. In the Angular component you want to track, add the import statement and call the AppInsights.downloadAndSetup() method in the constructor.
Normally, we only need to call AppInsights.downloadAndSetup() once in the entire application lifecycle, so it makes sense to put it in app.component.ts.

4. To track each telemetry item, call the available methods in the AppInsights class. In the example below, we track a page view after the component initializes.

import { Component, OnInit } from '@angular/core';
import { AppInsights } from 'applicationinsights-js';

@Component({
	selector: 'test-app',
	templateUrl: 'test-app.component.html',
	styleUrls: ['test-app.component.scss']
})
export class TestAppComponent implements OnInit {
	constructor() {
		// Download and setup Application Insights
		AppInsights.downloadAndSetup({instrumentationKey: 'xxxx-xxxx-xxxx-xxxx'});
	}

	ngOnInit(): void {
		// Example of how to track page view
		AppInsights.trackPageView('TestAppComponent');
	}
}

For more details on what Application Insights can track and methods to call, please see:
https://github.com/Microsoft/ApplicationInsights-JS/blob/master/API-reference.md

After everything is configured, browse your application for a few minutes and let Azure Application Insights do its magic. You should be able to go back to the Azure portal and see some activity.

 

Posted on May 18, 2018 in General

 


Conceal Sensitive Information with Azure Role-Based Access Control (RBAC)

Use Role-Based Access Control to Hide Access to Configurations, Connection Strings, Account Keys and Certificates

Access to Azure services can be defined at a more granular level. This is useful when you want to grant access to certain services without revealing sensitive information, such as account keys, connection strings or certificates.

RBAC Custom Roles

This can be achieved by defining Custom Roles in RBAC. Built-in roles are not going to be sufficient.

For example, we could restrict access to an Azure Cloud Service’s Configurations and Certificates as shown below:

azure-rbac-1

{
  "Name": "Dev Ops",
  "Id": "<some_guid>",
  "IsCustom": true,
  "Description": "Dev Ops role.",
  "Actions": [
    "Microsoft.ClassicCompute/domainNames/read",
    "Microsoft.ClassicCompute/domainNames/slots/roles/providers/Microsoft.Insights/metricDefinitions/read",
    "Microsoft.ClassicCompute/domainNames/slots/start/action",
    "Microsoft.ClassicCompute/domainNames/slots/state/start/write",
    "Microsoft.ClassicCompute/domainNames/slots/state/stop/write",
    "Microsoft.ClassicCompute/domainNames/slots/stop/action",
    "Microsoft.ClassicCompute/domainNames/swap/action"
  ],
  "NotActions": [
    "Microsoft.ClassicCompute/domainNames/slots/read",
    "Microsoft.ClassicCompute/domainNames/serviceCertificates/operationStatuses/read",
    "Microsoft.ClassicCompute/domainNames/serviceCertificates/read"
  ],
  "AssignableScopes": [
    "/subscriptions/<some_guid>"
  ]
}

What restricts users’ access to the configurations and certificates are the resource provider operations listed in NotActions.

What Resource Provider Is Needed for an Azure Service?

In the example above, I used Azure Cloud Service; the resource provider for Azure Cloud Service is Microsoft.ClassicCompute.

You can find out which resource provider an Azure service uses from the URL. For example, this is the URL for an Azure Cloud Service.

azure-rbac-2

The part that says Microsoft.ClassicCompute is what tells you which resource provider to use.

More

The challenge is finding the resource provider operations that suit your needs.

For more information on how to create custom roles, available built-in roles and list of resource provider operations, see the links in References.

References:
https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles
https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations

 

Posted on April 24, 2018 in General

 

