API Gateway Design Patterns: Proxy & Edge
Summary
The API Proxy Pattern acts as a middleman for individual API calls. It enhances each interaction by adding features like rate limiting, logging, and security checks. This makes API communication safer and more efficient, focusing on improving single API interactions.
The API Edge Pattern, commonly used in microservices architectures, provides a single entry point for external users. It hides the complexity of the underlying services and manages various aspects of requests. This includes routing, combining multiple requests, and maintaining system stability. The Edge Pattern is about presenting a unified front to users while managing a complex backend.
Both patterns aim to improve API communication, but they address different needs. The Proxy Pattern enhances individual API calls, while the Edge Pattern manages the complexity of multiple services working together as a system. Each pattern is valuable in its own right, depending on the specific architecture and requirements of a given system.
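Of the per-call features mentioned above, rate limiting is the easiest to picture in code. Below is a minimal sketch of a fixed-window rate limiter of the kind a gateway applies per client; the limit, window size, and client IDs are all illustrative.

```javascript
// Fixed-window rate limiter: allow at most `limit` requests per client
// within each window of `windowMs` milliseconds (values illustrative).
function createRateLimiter(limit, windowMs) {
  const counters = new Map();
  return function allow(clientId, now = Date.now()) {
    // Bucket requests by the window they fall into.
    const windowStart = Math.floor(now / windowMs) * windowMs;
    const key = `${clientId}:${windowStart}`;
    const count = (counters.get(key) || 0) + 1;
    counters.set(key, count);
    return count <= limit;
  };
}

const allow = createRateLimiter(2, 60000);
console.log(allow('client-a', 0)); // true
console.log(allow('client-a', 1)); // true
console.log(allow('client-a', 2)); // false (third call in the same window)
```

A production gateway would also expire old counters and typically return HTTP 429 with a Retry-After header when a client is throttled.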
API Proxy Pattern
The Proxy Pattern involves the API Gateway acting as a proxy between the client and backend services. The API Gateway receives requests from the client and forwards them to the appropriate backend service, and then returns the response to the client.

Using Azure API Management (APIM) as an API Proxy means that APIM acts as an intermediary layer between the client application and the backend services, forwarding client requests to the appropriate backend APIs without performing significant processing, transformation, or orchestration. In this role, APIM primarily handles routing, security enforcement, and basic logging, functioning as a reverse proxy.
Some Examples:
Imagine you have multiple backend services (e.g., user service, order service, product service) each exposing their own APIs. By placing APIM in front of these services as an API Proxy:
- Clients send requests to APIM rather than directly to each service.
- APIM forwards these requests to the appropriate backend service without altering the payload.
- Backend Services process the requests and send responses back through APIM to the clients.
- APIM enforces security checks and logs each transaction but doesn’t modify the data.
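The routing step in the list above amounts to a lookup from path prefix to backend address. A minimal sketch of that lookup (the service names and ports are illustrative):

```javascript
// Map a request path prefix to the backend service that owns it.
const routes = {
  '/users': 'http://user-service:8080',
  '/orders': 'http://order-service:8080',
  '/products': 'http://product-service:8080',
};

// Resolve the full backend URL for an incoming path, or null if no
// route matches (a real gateway would return 404 in that case).
function resolveBackend(path) {
  const prefix = Object.keys(routes).find((p) => path.startsWith(p));
  return prefix ? routes[prefix] + path : null;
}

console.log(resolveBackend('/orders/42')); // http://order-service:8080/orders/42
console.log(resolveBackend('/unknown')); // null
```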
Some Considerations:
- Configuration Management: You’ll need to maintain API configurations within APIM to ensure accurate routing and access control.
- Performance Overhead: While minimal, there’s some latency introduced due to the additional network hop through APIM.
- Limited Functionality: Since APIM is used as a proxy, advanced features like payload transformation or protocol translation aren’t utilized.
Minimal API Implementation
The sketch below implements the proxy behaviour with a plain ASP.NET Core Minimal API; the backend base URL is a placeholder that would normally come from configuration.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

// Base URL of the backend API; replace with your own service address.
const string backendBaseUrl = "https://backend.example.com";

app.MapGet("/proxy/{**path}", async (HttpContext context, string path, IHttpClientFactory httpClientFactory) =>
{
    // Forward the request to the backend and relay the response unchanged.
    var httpClient = httpClientFactory.CreateClient();
    using var response = await httpClient.GetAsync($"{backendBaseUrl}/{path}");
    var responseContent = await response.Content.ReadAsStringAsync();
    context.Response.StatusCode = (int)response.StatusCode;
    await context.Response.WriteAsync(responseContent);
});

app.Run();

In this example, we define a single endpoint for requests under the "/proxy" URL using the MapGet method. The {**path} parameter captures the entire URL path after "/proxy".

Inside the endpoint handler, the captured path is appended to the configured backend base URL, the request is forwarded to the backend API using an HttpClient obtained from IHttpClientFactory, and the response is returned to the client.

Running the app starts a listener (on http://localhost:5000 by default). If you navigate to http://localhost:5000/proxy/<path>, where <path> is a route exposed by your backend API, you should see the response from the backend API returned.

You can add additional endpoints and middleware to the app using the fluent API provided by the WebApplication class. For example, you could add authentication, routing, or logging middleware.
Azure HTTP Function Implementation Example
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HttpTrigger
{
    // Base URL of the backend API; replace with your own service address.
    private const string BackendBaseUrl = "https://backend.example.com";

    // Reuse a single HttpClient across invocations to avoid socket exhaustion.
    private static readonly HttpClient _httpClient = new HttpClient();

    [FunctionName("HttpTrigger")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", "put", "delete", Route = "{**path}")] HttpRequest req,
        ILogger log)
    {
        // Rebuild the incoming request against the backend URL.
        var request = new HttpRequestMessage(new HttpMethod(req.Method), BackendBaseUrl + req.Path);
        if (req.ContentLength > 0)
        {
            request.Content = new StreamContent(req.Body);
        }

        using var response = await _httpClient.SendAsync(request);
        var responseContent = await response.Content.ReadAsStringAsync();
        return new ContentResult
        {
            Content = responseContent,
            ContentType = response.Content.Headers.ContentType?.MediaType,
            StatusCode = (int)response.StatusCode
        };
    }
}

The above code implements the API Proxy pattern using an Azure Function as the forwarding layer.

Here's a brief explanation of what is happening in the code:
- The [HttpTrigger] attribute on the Run method specifies that this is an HTTP trigger function that can handle requests for any HTTP method and any URL path, via the Route parameter.
- The incoming HTTP request is rebuilt as an HttpRequestMessage, combining the configured backend base URL with the incoming request path to form the RequestUri.
- A shared HttpClient instance sends the HttpRequestMessage to the backend API with the SendAsync method.
- The response from the backend API is read as a string using the ReadAsStringAsync method.
- The response is returned to the client as an IActionResult created with the ContentResult class, which carries the response content, content type, and status code obtained from the backend response.

In summary, the Azure Function proxies incoming HTTP requests to a backend API: it forwards each request and returns the backend's response to the client. In a production deployment, Azure API Management would typically sit in front of this function to enforce security, throttling, and logging.
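One refinement common to proxies like the two above is attaching X-Forwarded-* headers so the backend still sees the original client address and scheme. A sketch of that header handling (the header names follow the de facto convention):

```javascript
// Build the header set a proxy sends to the backend: copy the client's
// headers, append the client IP to X-Forwarded-For, and record the
// original scheme in X-Forwarded-Proto.
function buildForwardHeaders(clientHeaders, clientIp, scheme) {
  const headers = { ...clientHeaders };
  const prior = clientHeaders['x-forwarded-for'];
  headers['x-forwarded-for'] = prior ? `${prior}, ${clientIp}` : clientIp;
  headers['x-forwarded-proto'] = scheme;
  return headers;
}

const headers = buildForwardHeaders({ accept: 'application/json' }, '203.0.113.7', 'https');
console.log(headers['x-forwarded-for']); // 203.0.113.7
console.log(headers['x-forwarded-proto']); // https
```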
API Edge Gateway Pattern
The Edge Gateway Pattern involves placing an API Gateway at the outermost boundary of your network, typically within a Demilitarized Zone (DMZ), to serve as a secure entry point for external clients accessing internal services. This setup enhances security by isolating backend services from direct exposure to the internet. The gateway handles tasks such as authentication, SSL/TLS termination, rate limiting, and request routing, ensuring that only authorized traffic reaches your internal network while simplifying client interactions.
For example, configuring NGINX as an edge API Gateway involves setting it up to listen on port 443 for HTTPS traffic and proxy requests to internal services. A basic NGINX configuration might look like this:
server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    ssl_certificate /path/to/yourdomain.crt;
    ssl_certificate_key /path/to/yourdomain.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://internal-service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
In this configuration, NGINX acts as the edge gateway by handling secure client connections, enforcing security protocols, and forwarding requests to internal services while keeping those services shielded from direct external access.
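Besides TLS termination and routing, the summary noted that an edge gateway may combine multiple backend requests into one response. A minimal sketch of that fan-out, with the two service calls mocked as local functions:

```javascript
// Mocked service calls standing in for HTTP requests to internal services.
async function getUser(id) {
  return { id, name: 'Ada' };
}
async function getOrders(userId) {
  return [{ orderId: 1, userId }];
}

// Edge-style aggregation: call both services in parallel and return a
// single combined payload to the client.
async function getUserDashboard(userId) {
  const [user, orders] = await Promise.all([getUser(userId), getOrders(userId)]);
  return { user, orders };
}

getUserDashboard(42).then((dashboard) => {
  console.log(dashboard.user.name, dashboard.orders.length); // Ada 1
});
```

In a real gateway the mocked functions would be HTTP calls to internal services, and partial failures would need a policy (fail the whole response, or return what succeeded).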
Microgateway API Architecture
The Micro-gateway Pattern involves deploying lightweight API gateways alongside individual microservices, providing service-specific functionalities like authentication, authorisation, rate limiting, and logging. Unlike a centralised API Gateway, micro-gateways are decentralised and tailored to the needs of each microservice, offering greater flexibility and scalability. This pattern allows teams to manage and deploy their microservices independently, enabling faster development cycles and more granular control over API behaviours.
For example, you can implement a micro-gateway using Node.js with the Express framework. The micro-gateway handles authentication and proxies requests to the microservice:
// micro-gateway.js
const express = require('express');
const httpProxy = require('http-proxy');

const app = express();
const proxy = httpProxy.createProxyServer();

// Middleware for authentication
app.use((req, res, next) => {
  const token = req.headers['authorization'];
  if (token === 'valid-token') {
    next();
  } else {
    res.status(401).send('Unauthorized');
  }
});

// Proxy requests to the microservice
app.all('/*', (req, res) => {
  proxy.web(req, res, { target: 'http://localhost:4000' });
});

app.listen(3000, () => {
  console.log('Micro-gateway running on port 3000');
});
In this setup, the micro-gateway listens on port 3000, authenticates incoming requests, and forwards them to the microservice running on port 4000. This approach encapsulates gateway functionalities within the service boundary, promoting modularity and easing the management of microservices.
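The token check in the middleware above can be pulled out as a pure function, which makes it easy to unit test and to extend to a set of issued keys (the token values here are illustrative):

```javascript
// Tokens the micro-gateway accepts; in practice these would be issued
// per client and stored securely, not hard-coded.
const validTokens = new Set(['valid-token']);

// Pure authorization check, mirroring the Express middleware above.
function isAuthorized(headers) {
  return validTokens.has(headers['authorization']);
}

console.log(isAuthorized({ authorization: 'valid-token' })); // true
console.log(isAuthorized({ authorization: 'bad-token' })); // false
console.log(isAuthorized({})); // false
```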
Certificate-based Authentication
So, how do you manage the security of service-to-service (S2S) communication via an API Edge Gateway? One approach is certificate-based S2S authentication.
The example below uses Azure APIM; however, the same approach is possible with other API gateways such as Kong and NGINX. The Azure API Management (APIM) service can act as a gateway that validates incoming requests using client certificates. This approach ensures that only trusted clients with valid certificates can access the backend APIs. Unlike secret-based or token-based authentication, client certificates provide an additional layer of security by utilising public-key cryptography during the TLS handshake process.
In this setup, the client application (such as an Azure Function, Logic App, or external client) presents its client certificate when making a request to APIM. APIM verifies the certificate, ensuring that it matches the trusted certificates uploaded to the API Management instance (the CER file). This verification happens seamlessly as part of the HTTPS communication between the client and APIM, ensuring secure and authenticated API interactions.
For example, imagine a scenario where multiple clients need access to APIs managed through APIM. Each client is issued a unique client certificate, and APIM is configured to validate these certificates before forwarding the request to the backend service. When a client sends a request to APIM, the certificate is validated during the TLS handshake. If the certificate is valid and trusted, the request is processed, and APIM routes it to the backend service. This ensures that unauthorised clients without a valid certificate are rejected at the entry point.
Example APIM Policy Configuration
To implement client certificate authentication in APIM, the following steps are involved:
- Upload the client certificate’s public key (.cer file) to the APIM instance under the Certificates section.
- Configure the API in APIM to enforce client certificate validation during inbound requests.
- Clients present their client certificate (backed by the private key they hold) during the TLS handshake when calling the API over HTTPS.
For instance, consider the following policy configuration in APIM, which enforces client certificate validation for an API:
<inbound>
    <base />
    <choose>
        <when condition="@(context.Request.Certificate == null)">
            <return-response>
                <set-status code="403" reason="Forbidden" />
                <set-body>@("Client certificate is required for authentication.")</set-body>
            </return-response>
        </when>
        <otherwise>
            <trace source="APIM" severity="information">
                <message>Client certificate validated successfully.</message>
            </trace>
        </otherwise>
    </choose>
</inbound>
In this configuration, APIM checks for the presence of a client certificate during the inbound request. If no certificate is provided, the request is denied with a 403 Forbidden
response. If a valid certificate is presented, the request proceeds to the backend service. This policy can be customised further to add logging, tracing, or additional checks.
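Note that the policy above only checks that a certificate is present; gateways usually also compare the certificate's thumbprint against a trusted list. The comparison itself is simple, as this sketch shows (the thumbprint values are illustrative):

```javascript
// Normalise a thumbprint (strip separators, uppercase); thumbprints are
// hex digests of the certificate, sometimes written with ':' separators.
function normaliseThumbprint(thumbprint) {
  return thumbprint.replace(/[^0-9a-fA-F]/g, '').toUpperCase();
}

// Check the presented thumbprint against the trusted list.
function isTrusted(presented, trustedThumbprints) {
  const normalised = normaliseThumbprint(presented);
  return trustedThumbprints.some((t) => normaliseThumbprint(t) === normalised);
}

console.log(isTrusted('ab:cd:ef:12', ['ABCDEF12'])); // true
console.log(isTrusted('ffffffff', ['ABCDEF12'])); // false
```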
Use Case Scenario
Suppose an enterprise application requires secure communication between Azure Logic Apps and backend APIs exposed through APIM. To meet security requirements, client certificates are issued to authorised Logic Apps. APIM is configured to validate these certificates, ensuring that only requests from authorised Logic Apps are accepted.
- The Logic App includes its client certificate when calling the APIM endpoint.
- APIM validates the certificate during the TLS handshake and forwards the request to the backend API.
- If the certificate is invalid or missing, the request is rejected, and an error response is returned to the client.
This approach enhances security by combining certificate-based validation with APIM’s existing features such as rate limiting, logging, and request routing.
Some important considerations
This approach is valid as of 2023. I have written documentation on authentication many times; however, at the time of writing, Microsoft is moving away from ADAL to MSAL, so check the MSAL implementation guidelines for more information.
- Certificates must be carefully managed, including rotation before expiry and secure storage of private keys. You will need to implement a PUBLIC KEY rotation strategy.
- Ensure HTTPS is enforced to prevent interception of client certificates during transmission.
- APIM can efficiently handle client certificate validation at scale, making it suitable for enterprise-grade workloads.
- While minimal, validating client certificates adds slight overhead due to the TLS handshake.
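For the rotation point above, the usual trigger is a scheduled check of how close each certificate is to its notAfter date. A sketch of that check (the 30-day threshold is an assumption):

```javascript
// Days remaining until a certificate's notAfter date.
function daysUntilExpiry(notAfter, now = new Date()) {
  return Math.floor((notAfter.getTime() - now.getTime()) / 86400000);
}

// Flag certificates that should be rotated before they expire.
function needsRotation(notAfter, thresholdDays = 30, now = new Date()) {
  return daysUntilExpiry(notAfter, now) <= thresholdDays;
}

console.log(needsRotation(new Date('2023-01-20'), 30, new Date('2023-01-01'))); // true
console.log(needsRotation(new Date('2023-12-01'), 30, new Date('2023-01-01'))); // false
```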
Azure Function to APIM
using System;
using System.IO;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
public static class CertificateAuthFunction
{
    private static readonly HttpClient _httpClient = new HttpClient();

    [FunctionName("ValidateClientCertificate")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "{**path}")] HttpRequest req,
        ILogger log)
    {
        // Reject requests that do not present a client certificate.
        var clientCertificate = req.HttpContext.Connection.ClientCertificate;
        if (clientCertificate == null)
        {
            return new UnauthorizedObjectResult("Client certificate is required for authentication.");
        }

        // Compare the presented thumbprint against the trusted one.
        string validThumbprint = "YOUR_VALID_CERTIFICATE_THUMBPRINT";
        if (!string.Equals(clientCertificate.Thumbprint, validThumbprint, StringComparison.OrdinalIgnoreCase))
        {
            return new UnauthorizedObjectResult("Invalid client certificate provided.");
        }

        string backendUrl = "https://your-apim-instance.azure-api.net/your-backend-api";
        try
        {
            // Rebuild the incoming request against the APIM backend URL.
            var forwardRequest = new HttpRequestMessage
            {
                Method = new HttpMethod(req.Method),
                RequestUri = new Uri(backendUrl),
                Content = new StreamContent(req.Body)
            };
            foreach (var header in req.Headers)
            {
                forwardRequest.Headers.TryAddWithoutValidation(header.Key, header.Value.ToString());
            }

            var response = await _httpClient.SendAsync(forwardRequest);
            string responseBody = await response.Content.ReadAsStringAsync();
            return new ContentResult
            {
                StatusCode = (int)response.StatusCode,
                Content = responseBody,
                ContentType = response.Content.Headers.ContentType?.ToString()
            };
        }
        catch (Exception ex)
        {
            log.LogError($"Error forwarding request to backend: {ex.Message}");
            return new StatusCodeResult(502);
        }
    }
}
NGINX Script
Here are the configuration steps if you need to use NGINX with certificate-based service principals.
(Note: this works as of February 2023.)
#!/bin/bash
KEYVAULT_NAME="your-keyvault-name"
CERT_NAME="your-certificate-name"
CERT_FILE="/etc/nginx/ssl/your-certificate.pfx"
CERT_PASSWORD="your-cert-password"
AZURE_LOGIN_IDENTITY="your-identity"

# Authenticate with a managed identity and download the certificate.
# Key Vault stores the PFX base64-encoded, so decode it on download.
az login --identity --username "$AZURE_LOGIN_IDENTITY"
az keyvault secret download --vault-name "$KEYVAULT_NAME" --name "$CERT_NAME" --file "$CERT_FILE" --encoding base64

# Split the PFX into the private key and certificate files NGINX expects.
openssl pkcs12 -in "$CERT_FILE" -nocerts -nodes -out /etc/nginx/ssl/your-certificate.key -passin pass:"$CERT_PASSWORD"
openssl pkcs12 -in "$CERT_FILE" -clcerts -nokeys -out /etc/nginx/ssl/your-certificate.crt -passin pass:"$CERT_PASSWORD"
chmod 600 /etc/nginx/ssl/your-certificate.*
NGINX Configuration
server {
    listen 443 ssl;
    server_name api.yourdomain.com;

    ssl_certificate /etc/nginx/ssl/your-certificate.crt;
    ssl_certificate_key /etc/nginx/ssl/your-certificate.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    ssl_client_certificate /etc/nginx/ssl/ca-chain.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://your-backend-service;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}