BIDA0053 – BC Direct Query Using PowerBI Cloud

Please watch the video in 1080p because it is recorded in High Resolution.
Please note the script of the presentation is below so you can read along with the demonstration if you like.


Hello and welcome to our latest blog post.

I am Sofi, your BIDA AI Assistant.

I will be reading this blog post for you today, on behalf of Mihai.

I am really pleased to be able to bring you some news that I think will really interest you.

If you are a business person using Business Central?

The really simple message for you is this.

We have invented a new and better way for you to get access to all your Business Central, On Premise data, through Power B I, in the cloud.

This new and better way also allows you to integrate any other data that Power B I can read, which is pretty much any data.

How we did this is a bit technical.

So, if you want faster, cheaper, easier access to all your data in Power B I to improve your company's profits?

All you really need to do is to pass this blog post along to your Business Central Technical Support people.

You can ask them if they can take a look at this and use it in your company.

By the way, this is all free.

The only bit you have to pay for is your Power B I Cloud Licenses if you don’t have them already.

Many Business Central On Premise customers want to use Power B I Cloud but it’s kind of hard to use.

We have solved the problem of, “it is kind of hard to use”.

Now, moving right along.

Details of Our Solution

This video is intended for more technical people who work with Business Central.

This is for those technical people whose business community wants to use Power B I to access Business Central On Premise.

So.

Please allow me to start with a statement of what the problems are with using Power B I in the Cloud to access Business Central On Premise data.

The presumption is that the business community see value in having Power B I Cloud over having Power B I for Report Services.

The presumption is that the business users want to use Power B I as one of the tools for Business Central Reporting, and very likely, for reporting on other data not held in Business Central.

There are a number of issues with querying Business Central On Premise data.

Issue One.

The Business Central data model is very complicated.

To be able to answer any question you have to have access to all the data.

But there are over one thousand, eight hundred tables in Business Central. And that number is growing rapidly.

There are many cases where joins are complex.

By this I mean that the same field can be used to join to many different tables depending on the value of another field.

It is simply not possible to give anyone a copy of Power B I, the Business Central Data Model, and say, off you go, you have access to all the tables, go and write whatever report you want.

There are too many tables to manage in Power B I.

The joins between the tables are too complex.

This is why Business Central Programmers have to write code to produce even simple reports out of Business Central.

To even get the simplest of new reports out of Business Central, you need a programmer, not a Power B I developer.

Programmer time is expensive, not to mention you have to wait in the queue to get their time.

Issue Two.

Even if you could deal with issue number one, and somehow find a way to simplify the joins, there are just too many tables to deal with using Power B I.

Even with the O Data interface, it quickly becomes obvious that thousands of O Data sources would be needed to query Business Central in the Cloud, or via the O Data interface for On Premise.

Because there are one thousand eight hundred tables, the vast majority of which are not related to each other, the number of tables presented to Power B I is overwhelming.

Not even the best Power B I developers, even with the best intentions and efforts, are going to be able to deal with that number of tables.

For a new report?

Until now, all roads have led back to the Business Central programming staff, their high expense, and their work backlog queue.

We have promised that we have solved these problems.

So, please allow me to explain.

Solving Issue One.

We have previously released a blog post talking about the idea of putting pseudo dimensional models over the top of the Business Central database.

We have been doing further work on this and we are finding that it works fine.

Sure, it’s going to consume CPU to run queries over such views, but it will be much easier to get the correct answer.

Of course, it will take a lot of time to build up the suite of views for direct query against Business Central.

Also, we have added the numbers zero one to the end of these views.

This is so that each company is able to create their own views for tables to meet their own needs for custom developed reports.

So, a company could add, for example, five one, five two, five three, as the suffixes for their own custom developed views.

Anything that SQL Server supports can be done in the view.

All that has to be presented out to Power B I is the view itself.
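To make that concrete, here is a hedged sketch of what such a pseudo dimensional view might look like. The source table name, the column names, and the key derivation are illustrative assumptions only; the real view definitions are in the download below.

```sql
-- Sketch only: a pseudo dimensional view over a Business Central table.
-- The source table follows the usual "Company$Table" naming pattern and
-- the column names are assumptions; adjust them to your own database.
CREATE VIEW dbo.vm_vendor_01
AS
SELECT
    -- The real views use a stable surrogate key; ROW_NUMBER is only
    -- shown here to illustrate that the view exposes a single key column.
    ROW_NUMBER() OVER (ORDER BY v.[No_]) AS pk_vm_vendor,
    v.[No_]                              AS vendor_number,
    v.[Name]                             AS vendor_name,
    v.[City]                             AS vendor_city
FROM dbo.[CRONUS$Vendor] AS v;
```

Power B I only ever sees the renamed, simplified columns the view exposes, never the raw Business Central table underneath.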

You can download the latest version of the dimensional model views on the button below.

BC Direct Query DB

BC Demo Database

BC023_DM03_01

You can review the prior blog post on the button below.

BIDA0050 Blog Post

By creating these pseudo dimensional views we make it an order of magnitude easier to perform the queries against the business central data.

But we still have the problem that there are just too many views to sensibly manage.

Also, there is the serious problem that you can not define referential integrity constraints between views.

This means the joins in the Power B I Models would have to be added manually.

Solving Issue Two.

So how do we solve issue two?

What we have tested, and found to work, is this.

We created what we have currently termed a “data mart data model data base”.

The “data mart data model data base” contains a set of tables that are commonly used together.

For the sake of demonstration purposes, we have chosen the General Ledger Entry table, and all the tables associated with it.

Now, it is important to note that there can be many “data mart data model data bases”, and their function is to make the Power B I Models simple to use.

So, you only need to put into one of these databases the tables that you need for a specific Power B I report.

If you are only going to use data from 10 tables, then you can create a separate “data mart data model data base” with just those 10 tables in it if you want to.
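As a sketch of that step, assuming the direct query views already exist, each data mart table could be created and loaded from its matching view like this. The database names BC_DirectQuery and BC023_DM03_01 stand in for your own database names.

```sql
-- Sketch: build a table in the "data mart data model data base" from the
-- matching direct query view, keeping the same object name so the
-- semantic model can later be repointed without changes.
USE BC023_DM03_01;

SELECT *
INTO dbo.vf_g_l_entry_01
FROM BC_DirectQuery.dbo.vf_g_l_entry_01;  -- cross database reference
```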

Next.

Once you have created the tables in the “data mart data model data base”, you can remove columns that are empty or never needed by your company.

You can remove the system only columns.

Or you can simply choose not to create them in the tables in the first place.

Then, with the drop and create tables commands, you must alter the primary key of each dimension table to be big integer and not null.

You must alter the columns for all the lookup keys in the fact table to also be big integer and not null.

This is so that the referential integrity constraints will be able to be created.

All the data types for the fields that will be joined have to be the same, and they all have to be not null.
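For example, using the column names that appear elsewhere in this post, the alterations could look like this:

```sql
-- Make the primary key of the dimension table, and the matching lookup
-- key in the fact table, big integer and not null, so that the primary
-- key and referential integrity constraints can be created later.
ALTER TABLE dbo.vm_gl_account_01
    ALTER COLUMN pk_vm_gl_account BIGINT NOT NULL;

ALTER TABLE dbo.vf_g_l_entry_01
    ALTER COLUMN dk_vm_gl_account BIGINT NOT NULL;
```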

Once you have the columns you want in each table you can add the primary key constraints to all the dimension tables.

The following is an example.

Please see the blog post for the typed example.

ALTER TABLE [dbo].[vm_vendor_01] ADD CONSTRAINT [pk_vm_vendor] PRIMARY KEY ([pk_vm_vendor]);

Once you have added all the primary key constraints you can add the referential integrity constraints.

The following is an example.

Please see the blog post for the typed example.

ALTER TABLE [dbo].[vf_g_l_entry_01] WITH CHECK ADD CONSTRAINT [vf_g_l_entry_01_c04] FOREIGN KEY([dk_vm_gl_account]) REFERENCES [dbo].[vm_gl_account_01] ([pk_vm_gl_account]);

We have done this work in a demonstration database for you.

You can click on the button below and download a SQL server backup of the sample database.

This backup was created in SQL Server two thousand and nineteen.

(Update 2024-09-25: The database was updated to change the collation sequence to be the same as the Business Central demos. It was accidentally created with the default collation sequence of the host server it was created on.)

BC023_DM03_01

We will create a video demonstration of all this soon.

If you restore the backup database, you will see that it is just a small database with some test data in it.

You will see that the names of the tables are the same as the names of the views in the direct query database.

More importantly, if you check the tables, you will see the primary keys are defined.

If you check the general ledger entries table, you will see that the referential integrity constraints have been defined.

What this means is that when you develop your dashboard, you can use this database as the test data source for development.

You can put indexes on tables as you like.

You can even try writing your own Power B I report, on this sample database, just to practice how this works.

Now comes the magic part.

You create your dashboard on your desktop using Power B I desktop.

When you connect to a data source, you connect to this test database, wherever it may be.

It has to be visible via a fixed I P address, or server name, from your desktop.

When you create the connection, you say you want direct query to the database.

This database may be on the production business central machine. It may be on a development server.

But you create the Power B I Semantic Model for this dataset using this test database.

You then create the dashboard using this test database, and any other test database that you need.

You test the report and make sure it appears to be working properly, as best as is possible, with the test data you have.

Then, before you publish the report to Power B I Cloud, you repoint the data source to the I P address of your on premise server and the direct query database.

This database must be visible from your desktop using a fixed IP address or server name.

When changing the connection, it is important to go to "advanced options" and turn off "select related tables".

You just want to repoint the desktop report, from the test database where there are just those tables you want in the Power B I Semantic Model, to the direct query database where those tables will be presented as views.

Power B I can not tell the difference between a table and a view in this repointing process.

And because you have told it not to bring in related tables, it will not look for the referential integrity constraints, which will not be there anyway.

You then publish the report to Power B I Cloud.

It will give you a message saying that it does not know where to connect the semantic model to.

This is because you have used the I P address of the business central database inside your organization.

You will run the Power B I Gateway on your Business Central On Premise server, or on a server with linked tables, or a replica, however you have decided to do this.

You can run just one Gateway per machine.

So, your published Semantic Model, or models, for the report will now be in the cloud in Power B I.

But they will not be able to see your on premise business central database.

You go into the semantic model, or models, for your Power B I Dashboard, and you repoint the source of the data from the test database to the direct query database, which will be defined in your gateway.

And then?

Voila!

When you refresh your report it will send the generated SQL into the gateway, and query your direct query database, which will have at least one thousand eight hundred views in it. And possibly many more.

So, what we have figured out is that you can create a small “data mart data model data base”, to create the Power B I Semantic model, that contains just the tables you need.

Then you can repoint that semantic model to your on premise Business Central direct query database, which contains all the possible views for the whole company.

In this way any single dashboard will contain just the semantic models it needs, while all data in Business Central is visible to any Power B I report developer, who is authorized to access that data.

You would implement your security however it is you want to implement your security.

This idea of creating these small test databases, to serve as the input to the creation of the Power B I Semantic Models, means that you can limit the number of tables and number of columns in any given semantic model, to be viable and easy to use for developers.

And then, by repointing the semantic model to the direct query database, the report will have access to all the production data. You can answer any question that the data can support.

Another factor to take into consideration is this.

Because the views in the direct query database can contain common table expressions, these views can be as complex as you like.

You can literally create very sophisticated, and complex views, and present them out as simple views.

You can also put time constraints on some views to say, show last 90 days, last 180 days, last 365 days and perhaps last 2 years.

In this way, a dashboard that only needs the last 90 days of data does not need to have “drill down” to do that. The view for the fact table data could have that constraint in it. You just add another suffix number to the fact table view.
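As an illustration of that suffix idea, a last 90 days variant of the fact view might look like the following. The posting date column name is an assumption for this sketch.

```sql
-- Sketch: a time constrained copy of the fact view, published under
-- its own suffix so a dashboard can pick the window it needs.
CREATE VIEW dbo.vf_g_l_entry_02
AS
SELECT *
FROM dbo.vf_g_l_entry_01
WHERE posting_date >= DATEADD(DAY, -90, CAST(GETDATE() AS DATE));
```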

Just for completeness.

The professional level of Power B I has a limit of 1 million rows being returned for any single query.

If you need to return more than one million rows, you have to buy the higher level licensing for Power B I.

Also, because of the way that the SQL is generated by Power B I, we would encourage you to create views that only display the columns needed in each report.

And if a report is only needed at summary levels, such as weekly, monthly, quarterly and similar, it is better to do the summarization in the database in another view, and not let Power B I do the summarization from the full fact table itself.

This is because it will consume far more temporary space if this is allowed to happen.
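As a hedged sketch of that advice, a monthly summary view might look like the following; the posting date and amount column names are assumptions.

```sql
-- Sketch: summarise to month level in the database, so direct query
-- does not force Power B I to aggregate the full fact table itself.
CREATE VIEW dbo.vf_g_l_entry_monthly_01
AS
SELECT
    dk_vm_gl_account,
    DATEFROMPARTS(YEAR(posting_date), MONTH(posting_date), 1) AS posting_month,
    SUM(amount) AS total_amount
FROM dbo.vf_g_l_entry_01
GROUP BY
    dk_vm_gl_account,
    DATEFROMPARTS(YEAR(posting_date), MONTH(posting_date), 1);
```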

To show you what the SQL looks like for all the work that goes into creating such a report?

We have included a trace of our testing.

If you are familiar with Power B I, and how it creates its semantic models, and how it performs direct query, the trace will make complete sense to you.

When we read the trace, we were very pleased with how easy it was to read and understand.

Please click on the button below, if you want to read the trace created, when we set this up.

Gateway Trace

Simply put.

The Power B I Semantic model is created from the “data mart data model data base”.

Then, when the switch is made to the direct query database, and the select related tables option is turned off, Power B I revalidates the model against the direct query database.

If the revalidation works you can then publish your report.

Just for extra information.

From our testing it appears that the semantic model knows the name of the SQL Server database at publishing time.

We were not able to alter the database name inside Power B I cloud services.

This is why we recommend changing the name of the database prior to publishing.

Once published, you can simply repoint the semantic model to get its data from the On Premise gateway using the correct database name.

Your database gateway support people will know how to set that up.

In Summary

Power B I Cloud offers a very wide range of features to help business managers improve the profit of their businesses.

Power B I Cloud is far in advance of Excel and Power B I on premise.

Excel is great and is better than Power B I for many uses.

Power B I On Premise is also great and very useful.

But it lacks many of the features of Power B I Cloud.

So, if you have Business Central On Premise, and you want to use Power B I Cloud?

There was no really good solution for taking advantage of Power B I Cloud, at least none that we are aware of.

By inventing the following two ideas?

We have made it viable to use Power B I Cloud with Business Central On Premise.

Those two ideas being.

One.

Placing pseudo dimensional models over the top of the direct query database.

Two.

Creating separate dedicated “data mart data model data bases”, which will be used to create the Power B I Semantic Model, for each report that needs to use that set of tables.

Just as further information.

Although we have used Business Central in this example, it should be noted that what we describe works for any SQL Server production system.

This is a fast, simple, cheap and easy way to get started using Power B I against SQL Server production databases, or replicas.

Lastly, we really must add some caveats.

Obviously, direct query against a production database, or replica, has its limitations.

This is not like having a proper staging area.

This is not like having a proper data warehouse.

But this idea is very cheap and easy to get started with.

For thousands and thousands of small companies?

This will be enough.

If you use this idea and you come to understand why separate staging areas and data warehouses offer greater value?

Then you will have more arguments available to put into your business case, to ask for the budget for your data warehouse project.

We see this idea of using Power B I, without any other software purchase necessary, as a good way to get started on your journey of learning how to use data to improve the profitability of your company.

And with that?

I would like to say thank you very much for listening to our video today.

I really appreciate your time and attention.

We hope you found this blog post interesting and informative.

We would love to hear your comments, so please feel free to use the comments section below.

I wish you a great day.

Thank you.

Sofi.

Your BIDA AI Assistant.
