What is a "Cartesian Explosion"?
As implied by the name, it has something to do with a cartesian product, i.e. with JOINs. When performing a JOIN on a one-to-many relationship, the rows of the one-side are replicated N times, where N is the number of matching records on the many-side.
Here is an example of JOIN-ing 1 ProductGroup with 1000 Products.
The corresponding LINQ query would look like:
var groups = Context.ProductGroups
                    .Include(g => g.Products)
                    .ToList();
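For reference, here is a minimal sketch of the entity model such a query assumes. The type and member names (ProductGroup, Product, Products, GroupId, Name, RowVersion) come from the queries in this article; the property types are my assumptions:

public class ProductGroup
{
    public int Id { get; set; }
    public string Name { get; set; }
    public byte[] RowVersion { get; set; }

    // the "many" side: 1 ProductGroup has N Products
    public List<Product> Products { get; set; }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public byte[] RowVersion { get; set; }

    // foreign key pointing back to the "one" side
    public int GroupId { get; set; }
    public ProductGroup Group { get; set; }
}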
The SQL statement is similar to the following one:
SELECT *
FROM ProductGroups
LEFT JOIN
Products
ON Products.GroupId = ProductGroups.Id
And the result set:
ProductGroup Id | Product Id
1 | 1
1 | 2
1 | 3
1 | …
1 | 1000
As we see, the columns of the ProductGroup are replicated 1000 times. Imagine there are additionally 10 Sellers per Product (the same 10 Sellers each time, i.e. a many-to-many relationship): the result set will contain 1 * 1000 * 10 = 10000 rows although we have just 1 + 1000 + 10 = 1011 records in the database.
It should be clear what happens if we add a few more Includes: the result set (i.e. the cartesian product) explodes.
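As a sketch, this is what such a query could look like; Sellers is a hypothetical navigation property for the sellers mentioned above:

var groups = Context.ProductGroups
                    .Include(g => g.Products)
                        .ThenInclude(p => p.Sellers) // hypothetical; multiplies the row count again
                    .ToList();

Every additional included collection multiplies the size of the single JOINed result set.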
EF-forced "ORDER BY"
The larger result set due to JOINs is not the only cause of lower performance. Let's look at the SQL statement generated by EF. By the way, the SQL statement above is not complete; the following one is:
SELECT
[p].[Id], [p].[Name], [p].[RowVersion],
[p0].[Id], [p0].[GroupId], [p0].[Name], [p0].[RowVersion]
FROM
[ProductGroups] AS [p]
LEFT JOIN
[Products] AS [p0]
ON [p].[Id] = [p0].[GroupId]
ORDER BY
[p].[Id], [p0].[Id]
For internal purposes, EF adds an ORDER BY clause to order the entities by their identifiers. With a result set of this size, ordering the data puts considerable load on the database.
Query splitting (back to the roots)
The solution to the Cartesian Explosion problem, which returned with Entity Framework Core 3, is the same as with Entity Framework (non-Core) 6: we split 1 LINQ query into multiple queries if (and only if) the database load rises significantly.
Using our (oversimplified) example from above, the solution is to load Products and ProductGroups separately.
var groups = Context.ProductGroups.ToList();
var products = Context.Products.ToList();
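Both queries are change-tracked, so EF Core's relationship fix-up populates the Products navigation property of each loaded ProductGroup automatically. The same approach works when only a subset of the data is needed; here is a sketch with a made-up filter (the Where conditions are illustrative, not part of the original example):

var groups = Context.ProductGroups
                    .Where(g => g.Name.StartsWith("A")) // hypothetical filter
                    .ToList();

// fetch only the Products belonging to the groups loaded above
var groupIds = groups.Select(g => g.Id).ToList();
var products = Context.Products
                      .Where(p => groupIds.Contains(p.GroupId))
                      .ToList();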
Here are some database statistics (MS SQL Server) I got when loading data with two one-to-many relationships, before and after query splitting. The absolute numbers are not relevant; just look at the relative differences, especially in Reads and Rows.
Metric | Before splitting | After splitting
CPU | 31 | 16
Duration | 75 | 3
Reads | 5300 | 350
Rows | 12000 | 2300
Summary
In this blog article, I wanted to convey two things: there is a new (old) issue we have to be aware of, and this issue can be solved.
The difficulty lies in finding such queries and in determining how to split them. If we split too much, we waste time on unnecessary database round trips; if we split too little, we give performance away. The tools I highly recommend for this task are database statistics and execution plans.