Saturday, September 15, 2018

Getting Started with Azure API Management - Fundamentals


It’s great to see the evolution taking place in the IT world, much of it focused on sharing existing resources – leading to the advent of cloud computing, where resources are shared and consumed as a service on demand. Based on the type of resource shared, offerings are categorized as IaaS, PaaS, SaaS, iPaaS, FaaS etc. Of course, sharing is not free – you pay for whatever you use, based on charges defined per unit (how long and how much). This ultimately helps in reducing time to market (by using the already existing resources of others) and in monetizing available resources (by sharing them).

Publishing an API (functionality) is just another step along the same line – the intention is to share a piece of functionality to enable faster integration with clients or to monetize it.

An API (Application Programming Interface) is a set of functions/methods/procedures with defined rules enabling interaction between systems, applications, tools etc. Every organization has some API (functionality) which can be used by others. It sounds great to share these capabilities, but as the number of shared resources increases, two challenges also need to be addressed – Security and Governance.

What is Azure APIM?

To cater to the need of managing APIs, Microsoft came up with a management solution – Azure APIM. It is a PaaS offering where you pay based on the tier (a set of capacity features) you opt for.

Microsoft says - API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. Businesses everywhere are looking to extend their operations as a digital platform, creating new channels, finding new customers and driving deeper engagement with existing ones. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use Azure API Management to take any backend and launch a full-fledged API program based on it.

In layman's terms – it is a layer (proxy) behind which the actual APIs are configured: the proxy URL is exposed while the actual API's URL is mapped behind it. Along with middleware capabilities like transformation, it provides an interface to consolidate and manage thousands of APIs across multiple platforms, an authentication and access control mechanism to manage and secure API access, and the ability to monitor the health of APIs, identify errors, configure throttling, rate limits and caching, and gain insight into the utilization of APIs.

Building blocks of APIM

API Gateway (Proxy)

When you create an instance of APIM, you are asked to provide a name for the URL – this is where the proxy gets created. Whatever APIs you create, import and want to expose go through this URL (the backend API URL is mapped in settings). You can also choose to map a custom domain to this URL.
creating APIM Instance

Thus the request is received here, and it is here that you do all the pre-processing required on the request before handing it over to the backend (the actual API).

Management (Publisher) Portal

It is an interface provided to you (the publisher) to manage your APIs and to do the required groundwork before making them available to consumers (internal or external). Earlier there was a dedicated portal for this, but it is now being migrated into the Azure portal itself (at the time of writing, only Analytics remained to be migrated).

Below are a few things you can do (high level) –

· Create API, Edit API, Import API, Export API, Delete API, Add revision, Add version, Clone API

· You can configure how the APIs should behave and control who has access to call them

· Import Operation, delete operation, clone operation

· Add Product, Delete Product, Publish and Unpublish Product, Add APIs to Product, remove APIs from product

· Store constant string values in NamedValue (properties collection of key/value pairs) which can be used in policy statements.

· Add Group, Delete Group

· Add Users, Delete Users

· Add certificates (can be used for validation in policy statement)

· Configuring the APIM instance itself through Settings section

· Configure Alerts, Application Insight using Monitoring section

· Add, Edit, Remove Policies

· Test the APIs before Publishing

· You can view the analytics for the usage and performance of your APIs

· Customize the look and feel of the developer portal

Developer (Consumer) Portal

It is an interface provided for consumers (users who are interested in using the APIs published by you). Consumers can be internal to your organization (say, some other team or department) or your partners, clients etc. It is the place where all the published APIs and Products (collections of APIs) are listed, which consumers can view or use (after they sign up and subscribe).

You (the publisher) can add static content, such as API documentation and terms of use, as well as dynamic content such as blogs and forums. As an API provider, you need a way to expose your APIs, educate developers about them, sign up developers, and let developers register apps. You can also change the look and feel of the developer portal (it is HTML based).


Product

It is through Products that APIs are made available to consumers. A Product can have one or many APIs – it is a logical grouping of APIs which share common policies or belong to the same business process etc. Before using any API from a Product, the consumer needs to subscribe to that particular Product, and in return gets a subscription key (upon administrator approval or auto-approval, based on the approval setting). Whenever the consumer invokes the API, it needs to pass the subscription key in the header, else it will encounter the "Access denied due to missing subscription key" error.

Products are of two types – open and protected. Open (public) Products don't need a subscription, whereas protected ones do. By default there are two Products available when you create an APIM instance – Starter and Unlimited (both protected). You can create Products as per your requirement.


Group

The level of access (view, consume, read, write etc.) to a Product is controlled through groups. Based on the requirement, users can be added to the respective groups – Administrator (can perform all actions), Developer (can consume the APIs) and Guest (can only view); these three are the out-of-the-box groups. New groups can also be created and users added to them. Users can sign up using the developer portal, or can be created or invited by an Administrator.


Policy

Policies are the real meat – it is through policies that APIM becomes a powerful offering with various capabilities. Using policies (sets of statements) you can define the behavior of your APIs, basically providing governance around them. A policy is an XML document with mandatory sequential sections (inbound, backend, outbound), defining the order in which policy statements are executed.

Inbound section – for applying policies at the entry point itself, like from where the API can be called, how many times it can be called, validating certificates etc. (restriction, access, authorization etc.)

Backend section – for applying policies before calling the backend, like replacing elements in the request, converting from XML to JSON etc. (enriching the request message, conforming to the format expected by the backend etc.)

Outbound section – for applying policies at the exit point, before sending the response to the caller, like setting the HTTP status code, converting XML/JSON to JSON/XML etc.

There is one more section, on-error, which can be added. It basically acts as a catch block for all the above sections – if anything goes wrong in any section, only then does the on-error section execute (provided you have added it). It is an out-of-the-box exception handling mechanism.



<policies>
    <inbound>
        <!-- statements to be applied to the request go here -->
    </inbound>
    <backend>
        <!-- statements to be applied before the request is forwarded to the backend service go here -->
    </backend>
    <outbound>
        <!-- statements to be applied to the response go here -->
    </outbound>
    <on-error>
        <!-- statements to be applied if there is an error condition go here -->
    </on-error>
</policies>



Each section can have zero or more statements within it. Statements are the policies (features/functions) which can be added to the appropriate section based on the functionality needed. Currently statements are categorized as follows (based on the purpose they serve):

1. Access Restriction Policies

2. Advanced Policies

3. Authentication Policies

4. Caching Policies

5. Cross domain Policies

6. Transformation Policies
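For instance, a sketch of an inbound section combining an access restriction policy (rate-limit) with a transformation policy (set-header); the limit values and header name here are illustrative, not a recommendation:

```xml
<inbound>
    <base />
    <!-- Access restriction: at most 100 calls per subscription per 60 seconds (illustrative values) -->
    <rate-limit calls="100" renewal-period="60" />
    <!-- Transformation: stamp a custom header (hypothetical name) before the request moves on -->
    <set-header name="x-request-source" exists-action="override">
        <value>apim-gateway</value>
    </set-header>
</inbound>
```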

There are actually four scopes at which policies can be applied:

1. Global scope (applies to all Products)

2. Product scope

3. API scope

4. Operation/method scope

Not all statements are available or valid at each level – a few only make sense at Product level whereas a few only at operation level, thus in the policy editor you might sometimes see some statements greyed out. There are scenarios where a statement applies or is required at more than one level – in that case there is a provision to avoid duplication and simply inherit the statements from the parent scope using the <base/> element. Also, if you want certain policies of the current scope to be applied before the parent-level policies, just change the position of <base/> as follows –

        <inbound>
            <!-- statements of the current scope placed before <base /> run first -->
            <base />
        </inbound>

Although the default behavior is to execute parent-level policies first, the order can be changed by moving the <base/> element within the section. Wherever you use the <base/> element, at runtime it is replaced with the statements of the parent level. There are situations where a statement alone is not enough – in that case there is a provision to use expressions within statements.

Note: <base/> element is not allowed in global scope, as it is the top most level thus no parent scope to inherit from
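To make the ordering concrete, here is a sketch of an API-scope inbound section that applies its own restriction before inheriting the parent (Product/global) policies; the IP range is a made-up example:

```xml
<inbound>
    <!-- API-scope statement, executed first because it appears before <base /> -->
    <ip-filter action="allow">
        <address-range from="10.0.0.1" to="10.0.0.254" />
    </ip-filter>
    <!-- parent-scope policies are substituted here at runtime -->
    <base />
</inbound>
```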

If you have worked with BizTalk, the concept of policies may remind you of pipelines (which are used for pre-processing and post-processing of messages).

How it works

The publisher creates an API or imports an already existing one, adds it to an existing Product or creates a new Product and adds the API to it, and publishes the Product. The publisher then shares the developer portal URL with the consumers.

The consumer signs up using the developer portal (if not already added/invited by the admin). The consumer then subscribes to a Product and gets the subscription key (if it is a protected Product), and also goes through the API documentation to understand the methods supported and the message format/content type.

The consumer sends a request to the APIM URL (gateway/proxy) with the subscription key and content type in the headers; the APIM engine then loads the policies. Policies are XML-based configuration with elements/statements in sequence, denoting the order in which they are executed.

How is security designed?

In APIM, the following things are taken into consideration from a security perspective – preventing unauthorized access, preventing excessive usage, preventing content attacks etc. The very first thing is the subscription: it has a primary and a secondary key, and one of these needs to be passed in the header of the request to APIM, thus enforcing a pre-access check. There is also a provision in the security settings to go with OAuth 2.0 or OpenID Connect, which forces consumers to supply a valid authorization token in the request header. Mutual certificates can be used to limit access to your backend API by sharing a certificate between APIM and the backend. APIM can also be deployed in a Virtual Network.
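As a sketch of the OAuth 2.0 / OpenID Connect option mentioned above, a validate-jwt statement in the inbound section can enforce the token check; the tenant and audience values below are placeholders to substitute with your own:

```xml
<inbound>
    <!-- reject requests that do not carry a valid bearer token -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401"
                  failed-validation-error-message="Unauthorized">
        <openid-config url="https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration" />
        <audiences>
            <audience>{backend-app-client-id}</audience>
        </audiences>
    </validate-jwt>
</inbound>
```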

Read about using APIM for securing apps -- Securing Logic app with Azure APIM

How to check what is happening in and around the APIs?

Analytics - insight into the usage/health of the APIM artifacts, like which Product/API is receiving more requests, from where most traffic is coming, or the response time of a particular API etc.

Activity Log - a log of actions taken on the artifacts of APIM, like when a particular Product was created, when it was deleted etc. It is a subscription-level log.

Diagnostics Logs - a log of actions taken by the artifacts of APIM. It is a resource-level log which captures actions performed within the resource itself, like when an operation was called etc.

Metrics - a built-in set of queries showing what is happening in APIM in near real time, like the total number of requests that came in, the number of failed requests etc.

Alerts - you can set up notifications based on the metrics and logs, sending an email to stakeholders, or you can automate a process by triggering a Logic App from an alert.

Apart from the above, there is also a provision to enable Application Insights to capture telemetry data, and a provision to surface the above logs to OMS.

Good to have

1. One thing which I think would be great to have is IntelliSense in the policy editor :)
2. A provision to enable/disable the APIM instance

Related Post

Tuesday, September 4, 2018

Access denied due to missing subscription key


To test a Function App API which I put behind APIM, I copied the URL and tried to trigger a request using Postman, but got the following error:

access denied due to missing subscription key

Why it happened

It is one of the basic features APIM offers – security: only authorized users can send requests to an API, unless explicitly allowed otherwise. Here the error returned by the APIM engine is about a missing subscription key, which is used to authorize access to the service.

Subscription Key  

In APIM, each set of APIs is part of a Product, and users need to subscribe to that Product before they can access the APIs within it. The subscription has a primary and a secondary key, and one of these needs to be passed in the header of the request to APIM, thus securing your API from being called by anyone without a subscription key.

This happens in either of these scenarios:
  1. The API being called is not part of any Product
  2. The request sent to the APIM URL does not have the subscription key in the header

For me it was the first case – I had missed adding the API to a Product.

What to do

The very first step is to add the API to a Product, get the key and add it to the header while making the call.
copy subscription key

Add key in header 

Requests without a key are stopped at the APIM gateway, never reaching your backend API.
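By default the gateway expects the key in the Ocp-Apim-Subscription-Key header; the host name and path below are placeholders for your own instance:

```http
GET https://your-apim-instance.azure-api.net/your-api/your-operation HTTP/1.1
Ocp-Apim-Subscription-Key: <primary-or-secondary-key>
```

The key can also be passed as the subscription-key query string parameter instead of a header.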

What if you want to allow public access to it?

In that case you simply uncheck the Requires subscription option on the Product, after which calls can be made without a key.
Adding new product  in APIM

Below is the result of calling an API in the Test Product without a subscription key, through Postman.

Related Post

Sunday, March 18, 2018

Getting Started with Logic Apps - Fundamentals

What is Logic App?

It is a workflow, an orchestration in the cloud (hosted on Microsoft Azure), with connections to systems and services.

It is an offering from Microsoft primarily to cater to the need of integrating and designing business workflows/processes by orchestrating SaaS services. It now also extends to services which are on premises.

It is one of the services amongst the other Azure App Services (Web Apps, Mobile Apps, API Apps, Functions) and runs on top of Azure Service Fabric. It is a fully managed iPaaS (integration Platform as a Service) solution which allows developers to build highly scalable workflows that automatically scale to meet demand.

Logic app on Azure Service Fabric

In layman's terms – Microsoft has provided a platform, managed by them, which enables a user to design/create a workflow with provisions to connect services which are cloud based or on premises (integrating various services) – thus the name iPaaS (integration Platform as a Service).

You get a browser-based designer (also available in Visual Studio), where you design the workflow by selecting the appropriate Trigger (the way to start the workflow, based on a certain event) and thereafter adding new steps, called Actions (possibly behind a Condition), where you select Connectors (the way to connect to data, services or systems). After you save the workflow, it gets deployed – ready to use. And you don't have to worry about load scenarios: it auto-scales and you are charged only when it executes (per number of Actions).

It is also marketed by Microsoft as serverless. Serverless doesn't mean there are no servers; it just means the developers do not have to worry about the underlying infrastructure (there is an abstraction) and instead focus on the business logic (faster development). The other two offerings under serverless are Azure Functions and Event Grid.

serverless offerings from Azure

Although Logic Apps are said to be serverless, in reality there are servers/virtual machines which host them, but they are hidden (no direct access) from users. Each region has a set of such VMs, and thus there is a range of IP addresses which you can see in the properties – Runtime outgoing IP addresses, Connector outgoing IP addresses (outgoing from the Logic App) and Access endpoint IP addresses (incoming to the Logic App).

Logic Apps are based on the Workflow Definition Language (WDL) and provide a way to simplify, automate and integrate scalable workflows in the cloud. WDL is based on the following basic structure:
wdl basic structure
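As a sketch, the skeleton of a WDL document has the following top-level properties (bodies left empty here):

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "triggers": {},
  "actions": {},
  "outputs": {}
}
```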

So whatever you create in the Logic App designer gets converted to JSON. JSON is used to define the workflow based on WDL, and the same can be seen in the Code view in the portal or in Visual Studio.

Thus, from a development perspective, a Logic App is nothing but a JSON definition along with an ARM template; and from a runtime perspective, Logic Apps is a job scheduler with a JSON-based DSL describing a dependency graph of actions.

Building Blocks of Logic Apps


Trigger

Logic Apps always start from a trigger and then execute a series of steps. Just as in BizTalk a message creates an instance of an orchestration, likewise a trigger creates an instance of a Logic App.

Push, poll (pull), repeating and manual are the ways to trigger Logic Apps.

Push – this is the reactive type, where the consumer notifies the workflow or raises an event to start it (Logic App endpoint).

Poll (pull) – this is the proactive type, where the workflow polls a system or service for a notification or event (service endpoint).

Repeating (Recurrence) – a prescribed schedule starts the workflow.

Manual – manually starting the workflow (you can click the Run now button in the portal).


Connector

In my perspective, the base of Logic Apps is the connector – everything in Logic Apps revolves around connectors (all components are API apps). All connectors are technically API apps that use a metadata format named Swagger, REST as the pluggable interface and JSON as the data interchange format. A connector can act as a trigger and as an action – connecting to any service is done via either a trigger or an action, and both are API connections. In other words, a connector is an encapsulation of authentication and data validation, in combination with triggers and actions.

This is along the same line as adapters in BizTalk. At the time of writing this post there are 200+ connectors available, and more are added every week.

1. Standard connectors
These are pre-included and available in Logic Apps, and do not cost extra.

2. Integration account connectors
These come at extra cost, as they become available when you create an Integration Account. They enable us to deal with complex integration scenarios where maps, trading partner management etc. are involved.

Integration Account connector

3. On-premises connectors (hybrid connectors)
These are the connectors used for connecting to systems which are on premises, with the help of the on-premises data gateway. For now, connectors for DB2, Oracle DB, SQL Server, File System, SharePoint Server, Informix, WebSphere MQ etc. are available.

4. Enterprise connectors
These come at extra cost, for enterprise-level systems like MQ and SAP.

5. Custom connectors
If none of the above connectors satisfy the need, there is a provision to create a custom connector, just like we have a provision in BizTalk to create a custom adapter.


Action

Every step in a Logic App (even the trigger) is called an Action (a condition can precede it). An Action always maps to an operation in a managed connector or web API. Every Action has an input and an output associated with it.

The output of a particular action is available to all the actions following it and can be used by them; e.g., the trigger's output is the trigger body, and it is available to all actions (as the trigger is the first step in the Logic App).
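For example, a downstream action can reference the trigger's output through the triggerBody() expression; the action name and the name property here are made up for illustration:

```json
{
  "Compose_greeting": {
    "type": "Compose",
    "inputs": "Hello @{triggerBody()?['name']}",
    "runAfter": {}
  }
}
```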

Enterprise Integration Pack

An Integration Account is required for the Enterprise Integration Pack. It enables BizTalk-like power in the cloud, where you get a provision to store artifacts (XSDs, maps, trading partners, certificates, agreements etc.) and use them to build enterprise-level B2B/EAI solutions. It has support for industry standards like AS2, EDI X12, EDIFACT, flat file, XML etc.

All the features around B2B/EAI solutions – validation, transformation, encoding, decoding – are made available through it, enabling solutions to be built in a serverless fashion.

Read through examples of Enterprise Integration Pack

Getting Started with Logic Apps - XML to EDI X12
Getting Started with Logic Apps - EDI X12 Fundamentals
Getting Started with Logic Apps - Enterprise Application Integration

Flow controls

  • Scope
         Logical grouping of actions

  • Response
       For any request that comes in, there can be a response associated with it

  • Condition
     Evaluates an expression and executes the corresponding result branch

  • ForEach
     Will iterate over an array and perform inner actions for each item

  • Until
     Will execute inner actions until a condition evaluates to true

  • Switch Statement
     Only one branch will be executed, corresponding to the matching case condition

  • Calling another Logic App (Nesting)
     Workflows can be nested by making a workflow expose a callable endpoint (reachable over a URL)

  • Calling custom code via Azure Function
     In a scenario where custom code needs to be executed, the code can be added as an Azure Function and called from the Logic App

Read about calling Function App from logic app  -- Calling Active Directory Secured Function App from Logic Apps   and    Using Managed Identity in Logic Apps for Calling Active Directory Secured Function App


Security in a Logic App can be applied in various ways and at different levels, like securing access to triggers, securing access to Run History, securing Logic App editing etc.

Out of the box we have a provision for SAS (shared access signature), and restrictions can be applied as to who can call the Logic App, based on IP addresses (IP filtering); you can find this at Settings -> Access control configuration.

Azure Role-Based Access Control and Azure Resource Locks can be used to prevent accidental/intentional editing or deletion of Logic Apps.

If you think the above aren't enough, there is an option to leverage the power of APIM, where Azure Active Directory, certificates, OAuth, or other security standards can be used.

Read about securing logic app with APIM --  Securing Logic app with Active Directory 


For monitoring there is an out-of-the-box feature which tracks and enables us to view the state of each Logic App instance – Run history. It includes details like the input and output of each step in the workflow. Through this we can check whether the workflow succeeded or failed, how much time it took to complete, what input was received, what the output was, and exception details if there are any.

Along the same line we have tracking for triggers – Trigger history, which keeps the state of triggers: whether they succeeded, failed, were skipped etc.

Apart from the above two, we can leverage Azure Diagnostics, API Management and Azure Alerts for additional monitoring of Logic Apps and for implementing an alerting mechanism on top of it.

Read about Advanced Monitoring  - Getting Started with Logic Apps - What happened to the Request?

Learn More about Logic App