Distributed reporting capabilities for Dynamics 365 Finance & Operations


A recent business scenario popped up where I needed to make some Dynamics 365 Finance & Operations SQL tables available to global teams for ad-hoc reporting. These teams were based in Sydney (Australia), Tokyo (Japan), and New York (USA), and were suffering from network latency.

In this post I’ll discuss the two options that I explored: Azure SQL Data Sync and Azure SQL geo-replication.

The task was to maintain the existing data entity structure for each table, but make them accessible to each region that required access.

The first task was to get the D365 tables to Azure SQL. Microsoft Dynamics 365 Finance & Operations has a Bring Your Own Database (BYOD) feature. Using this feature, it is possible to offload data stored in Dynamics F&O to a custom Azure SQL DB. You can read more about the BYOD feature here: https://docs.microsoft.com/en-us/dynamics365/unified-operations/dev-itpro/analytics/export-entities-to-your-own-database

Solution Architecture

Once in Azure, there are a few options available for distributing the data to different regions:

  1. Use the BYOD service in Dynamics 365 Finance & Operations to export the relevant entities to an Azure SQL DB, then use Azure Data Sync to copy the data to multiple Azure SQL regions.
  2. Leverage Azure geo-replication, which allows the replicated secondary databases to be read from.

The Microsoft online documentation also provides some basic solutions for managing our business scenario:

| Scenario | Some recommended solutions |
| --- | --- |
| Disaster recovery | Azure geo-redundant backups |
| Read scale | Use read-only replicas to load balance read-only query workloads (preview) |
| ETL (OLTP to OLAP) | Azure Data Factory or SQL Server Integration Services |
| Migration from on-premises SQL Server to Azure SQL Database | Azure Database Migration Service |

BYOD & Azure Data Sync – Solution Architecture

SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple SQL databases and SQL Server instances.

Data Sync is useful in cases where data needs to be kept up-to-date across several Azure SQL databases or SQL Server databases. Here are the main use cases for Data Sync:

1. Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications. This capability may appeal to customers who are considering moving to the cloud and would like to put some of their application in Azure.

2. Distributed Applications: In many cases, it’s beneficial to separate different workloads across different databases. For example, if you have a large production database, but you also need to run a reporting or analytics workload on this data, it’s helpful to have a second database for this additional workload. This approach minimizes the performance impact on your production workload. You can use Data Sync to keep these two databases synchronized.

3. Globally Distributed Applications: Many businesses span several regions and even several countries/regions. To minimize network latency, it’s best to have your data in a region close to you. With Data Sync, you can easily keep databases in regions around the world synchronized.


Points 2 and 3 match our scenario exactly. The high-level solution architecture is as follows:

  1. Create an Azure SQL DB
  2. Create an Entity Store record
  3. Publish Entities to the Entity Store
  4. Create an Export record
  5. Create a recurring data job
  6. Create single-instance Azure SQL DBs in the Australia, Japan, and USA regions.
  7. Create a Data Sync Rule.

Create an Azure SQL DB

*For information on setting up BYOD, go directly to the Microsoft Documentation.

Create an Azure SQL Database and copy the connection details in the following format:

Data Source=<logical server name>,1433; Initial Catalog=<your DB name>; Integrated Security=False; User ID=<SQL user ID>; Password=<password>
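As a quick sanity check, the connection string can be assembled programmatically before pasting it into the Entity Store setup. A minimal Python sketch (the server, database, and credential values below are placeholders, not real endpoints):

```python
def build_connection_string(server: str, database: str, user: str, password: str) -> str:
    """Assemble an ADO.NET-style connection string in the format BYOD expects."""
    return (
        f"Data Source={server},1433;"
        f"Initial Catalog={database};"
        "Integrated Security=False;"
        f"User ID={user};"
        f"Password={password}"
    )

# Placeholder values for illustration only.
conn = build_connection_string(
    "myserver.database.windows.net", "d365-byod", "byod_admin", "<password>"
)
print(conn)
```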

Create an Entity Store Record

Note: for the R&D work, I was using a basic Azure SQL DB in the lowest possible tier. I was not able to use the ‘Create clustered columnstore indexes’ option, which is essential for scale and performance, especially if you are going to design a data warehouse.

Supply the connection string. This is the connection to the Azure SQL DB.

Publish Entities to the Entity Store

Publishing Entities to the Azure SQL DB exports the table schema.

Note: when creating the Export job, ensure the correct legal entity has been selected. For instance, if the job was created for the legal entity DAT, then only data belonging to DAT can be exported.

Here I have selected two Entities: Customers V3 and User Information.

After hitting the Publish button, I could see in the Azure SQL DB that the table schemas had been exported:

Create an Export Record

The next step is to export the actual data. This is where an Export job is created; I added the same entities to the Export job.

The Result

Using the Azure Query Editor, the exported records can now be seen.

Create a recurring data job

For updates to occur, a recurring job needs to be set up. For this there are two requirements:

  1. An application ID for the job process.
  2. The recurring job itself.

To create an application ID, there is a good guide here: https://axbloggerblog.blogspot.com/2017/02/d365-recurring-integration-application.html

Creating the job itself is straightforward and is nicely documented here: https://atinkerersnotebook.com/2018/05/25/configuring-a-byodb-and-creating-a-full-and-incremental-entity-export/

What is required, though, is the application ID for setting up the authorization policy.
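The moving parts of that authorization setup can be sketched in code. A minimal Python sketch, assuming an Azure AD app registration using the client-credentials flow and the standard D365 F&O recurring-integrations enqueue endpoint; all IDs and URLs below are placeholders:

```python
def token_request_payload(client_id: str, client_secret: str, resource_url: str) -> dict:
    """Form fields for an Azure AD client-credentials token request.
    client_id is the application ID registered for the job process."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource_url,  # the D365 F&O instance URL
    }

def enqueue_url(d365_base_url: str, activity_id: str) -> str:
    """Recurring-integrations enqueue endpoint for a given recurring job's activity ID."""
    return f"{d365_base_url}/api/connector/enqueue/{activity_id}"

# Placeholder values for illustration only.
payload = token_request_payload(
    "<application id>", "<secret>", "https://contoso.operations.dynamics.com"
)
url = enqueue_url("https://contoso.operations.dynamics.com", "<activity id>")
print(url)
```

The application ID created in the first guide is what goes into both the token request and the D365 authorization policy.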

Setup Data Sync

For replication to other regions, I wanted to experiment with Azure SQL DB Data Sync. Data Sync, as the name suggests, allows SQL tables to be synchronised. Setting up Data Sync is relatively straightforward. There is one caveat to mention:

When creating a sync metadata DB in Azure, it must be stored in the same region as the hub DB. This ensures transaction consistency.

Azure Data Sync Conclusion

Let me be clear up front: this approach did not work. Azure Data Sync relies on primary keys to keep data in sync: if a record is updated on a member database, the sync rule uses the primary key to upsert the change to the hub and the other member databases.

The issue here is that the BYOD data export function creates several primary key columns, one of which is dynamically updated. These changing primary key values cause rows to be deleted.

Microsoft states in their Best Practices page:

Changing the value of an existing primary key will result in the following faulty behavior:

1. Data between hub and member can be lost even though sync does not report any issue.

2. Sync can fail because the tracking table has a non-existing row from source due to the primary key change.
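To make this failure mode concrete, here is a simplified, pure-Python simulation of a primary-key-keyed sync (not the real Data Sync engine): when a member row's key value changes, the sync sees a brand-new key and treats the old key as a deletion.

```python
def sync_by_primary_key(hub: dict, member: dict) -> dict:
    """Naive PK-keyed sync: upsert member rows into the hub, then delete
    hub rows whose key no longer exists in the member. This mimics how a
    key-based sync loses track of rows whose primary key has changed."""
    for pk, row in member.items():
        hub[pk] = row                      # upsert by primary key
    for pk in list(hub):
        if pk not in member:
            del hub[pk]                    # old key looks like a deleted row
    return hub

hub = {"A1": {"name": "Customer 1"}}
member = {"A1": {"name": "Customer 1"}}

# BYOD regenerates a key column: the same logical record now carries PK "B7".
member = {"B7": {"name": "Customer 1"}}

sync_by_primary_key(hub, member)
# The record now exists only under the new key; anything tied to the old
# key "A1" on the hub is silently removed, matching the behaviour above.
```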

Other challenges

  1. Adding new columns to the master table will not propagate to the member tables; these new columns must be added manually. This item is discussed in the MSDN forums: https://social.msdn.microsoft.com/Forums/azure/en-US/5777c3e6-09c8-48c3-86db-b74766ee50da/database-reprovisioning-failed-with-the-exception-quotinvalid-column-name-col4?forum=ssdsgetstarted. There is documentation on how to maintain schema changes between instances here: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-update-sync-schema.
  2. There can be a delay of up to 10 minutes before a change appears in a member database: the lowest interval between synchronisation jobs in Azure Data Sync is 5 minutes, and the lowest interval for batch processing in Dynamics 365 Finance and Operations is also 5 minutes. Data freshness is therefore poor.
  3. There is a performance impact on both the hub and member databases: both contain additional change-tracking tables that are written to constantly.
  4. Each table MUST have a primary key column, and the data in the primary key column must not change.
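The end-to-end delay in point 2 is simply additive; a back-of-the-envelope check:

```python
# Minimum configurable intervals (minutes), per the limits noted above.
D365_BATCH_INTERVAL_MIN = 5   # D365 F&O recurring export job
DATA_SYNC_INTERVAL_MIN = 5    # Azure SQL Data Sync sync group

# Worst case: a change lands just after both jobs have run, so it waits
# a full cycle of each before reaching a member database.
worst_case_staleness = D365_BATCH_INTERVAL_MIN + DATA_SYNC_INTERVAL_MIN
print(worst_case_staleness)  # 10 minutes
```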

Sync issues

Because of the points mentioned above, there were several synchronisation issues.

BYOD & Azure Geo-replication

The final solution was to test the Geo-replication feature. This feature is documented here:


Using the aforementioned documentation, I was able to rapidly create a cluster of geo-replicated databases.

Azure Geo-replication conclusion

This feature just worked straight away. Setup was simple and there were no visible issues. I was able to connect Power BI to each SQL Server instance, making the reporting task easier and also solving the latency issue.
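Each regional report can then point at its local geo-replicated secondary. A sketch of the per-region connection strings (the server names below are placeholders invented for illustration):

```python
# Hypothetical secondary server names for each region (placeholders).
REGIONAL_SERVERS = {
    "Sydney": "byod-au.database.windows.net",
    "Tokyo": "byod-jp.database.windows.net",
    "New York": "byod-us.database.windows.net",
}

def readonly_connection_string(server: str, database: str) -> str:
    """Connection string a regional reporting tool could use. Geo-replicated
    secondaries are read-only, so ApplicationIntent=ReadOnly makes that
    intent explicit to the client driver."""
    return (
        f"Server=tcp:{server},1433;Database={database};"
        "ApplicationIntent=ReadOnly;Encrypt=True;"
    )

for region, server in REGIONAL_SERVERS.items():
    print(region, "->", readonly_connection_string(server, "d365-byod"))
```

Each team connects to the nearest server, which is what removes the latency problem described at the start of the post.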
