Unnecessary Fuzzy Searches May Hurt Your Entity Framework Core Performance

After talking about performance issues like N+1 queries and the Cartesian Explosion, which made its comeback in Entity Framework Core 3, today we will look at a performance issue that is not tied to any particular Entity Framework version but is a general one.

What do I mean by "unnecessary fuzzy searches"?

In this article, I consider a fuzzy search to be unnecessary if the filter criterion (i.e. the WHERE clause) could be more exact/precise.
For example, when filtering for a Product by name, and the names are unique (and of the same length), we could use either Contains() or the equality operator ==:

var name = "my product";

var product1 = Context.Products.FirstOrDefault(p => p.Name.Contains(name));
// OR
var product2 = Context.Products.FirstOrDefault(p => p.Name == name);

The same applies when comparing numbers or dates: using >, <, >=, or <= where == would be exact, and so on.
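To illustrate with a hypothetical Orders table (the entity and its CreatedAt property are my invention, not from the original example): if we know the exact timestamp, a range comparison is an unnecessary fuzzy search.

var date = new DateTime(2020, 5, 1, 12, 0, 0);

// fuzzy: matches a whole range although we are looking for one exact timestamp
var orders1 = Context.Orders.Where(o => o.CreatedAt >= date).ToList();

// precise: an exact match that an index on CreatedAt can serve with a seek
var orders2 = Context.Orders.Where(o => o.CreatedAt == date).ToList();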

Why would someone do an unnecessary fuzzy search?

Hardly anyone would intentionally do something unnecessary, but unintentionally, it happens…

One of the causes of sub-optimal filters is a lack of exact domain knowledge and/or side effects in the programming model being used. I have seen code like the example above, using Contains() instead of ==, multiple times, and the authors could not fully explain why they took this approach, since both ways work. The best thing we can do in such a situation is to sensitize users of EF Core to issues like these.

Another cause can be unfavorable code sharing. Imagine that in one of our core projects we find a method LoadProducts that takes a few parameters, among them the parameter string name.
Our code could then look like this:

var name = "my product";

var product = someRepository.LoadProducts(name).FirstOrDefault();

After a quick test we are sure that the method returns the correct product, which is true, but let us take a closer look at its implementation.

public List<Product> LoadProducts(
    string name = null,
    string someOtherFilter = null)
{
    var query = Context.Products.AsQueryable();

    if (name != null)
        query = query.Where(p => p.Name.Contains(name));

    if (someOtherFilter != null)
        query = query.Where(...);

    return query.ToList();
}

As we can see, there are a few issues with this method. First, it is clearly not made for our use case but for some kind of global product search, probably for displaying the data on a user interface. Second, it resides in a core project although it is not general-purpose business logic but code for a specific use case that differs from ours.
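A better option for our use case is a small, dedicated method in our own project that filters by equality. A minimal sketch (the method name LoadProductByName is mine, not from the shared codebase):

public Product LoadProductByName(string name)
{
    if (name == null)
        throw new ArgumentNullException(nameof(name));

    // exact comparison instead of Contains(),
    // so the database can use the unique index on Name
    return Context.Products.FirstOrDefault(p => p.Name == name);
}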

Increased database load due to fuzzy searches

In the previous examples, we saw that the LINQ queries, or rather their filter criteria, could be changed without affecting the outcome. So far, however, there has been no evidence that the comparison via == performs better than Contains().

Now we will execute two slightly different LINQ queries, look at the SQL statements generated by EF and analyze the execution plans to determine the winner. An execution plan describes all operations the database performs to handle the SQL request.
I’m using MS SQL Server for the following tests.

Given is a table with 100 Products, having the columns Id and Name and a unique index on the column Name (a sketch of such a model configuration follows below). The LINQ queries are similar to the one in the method LoadProducts and return exactly 1 product.
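The unique index can be declared with EF Core's fluent API; a minimal sketch (the actual setup of my test project may differ in details):

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// inside the DbContext
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Product>()
                .HasIndex(p => p.Name)
                .IsUnique();
}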

var name = "my product";

// query 1
var product1 = Context.Products.Where(p => p.Name.Contains(name)).ToList();

// query 2
var product2 = Context.Products.Where(p => p.Name == name).ToList();

The corresponding SQL statements (simplified for the sake of readability):

-- query 1
SELECT *
FROM Products
WHERE CHARINDEX(@__name_0, Name) > 0

-- query 2
SELECT *
FROM Products
WHERE Name = @__name_0
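If you want to inspect the generated SQL yourself, you do not need a profiler. On EF Core 5.0 and later, ToQueryString() returns the SQL of a LINQ query; on EF Core 3.x, you can register a logger instead. A quick sketch:

var query = Context.Products.Where(p => p.Name == name);

// EF Core 5.0+
Console.WriteLine(query.ToQueryString());

// EF Core 3.x alternative: wire up logging in OnConfiguring
// optionsBuilder.UseLoggerFactory(LoggerFactory.Create(b => b.AddConsole()));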

In both cases, there is just one operation (worth mentioning) being executed by the database. For the query using Contains() the database performs a so-called Index Scan, i.e. the database scans the whole data source record by record. The other query using the == comparison does an Index Seek, i.e. the database uses the index to jump right to the requested record.

Depending on how large the data source is, scanning it can lead to a considerable load on the database, but that is not all.

(Unnecessary) Scanning of the data source is just the tip of the iceberg

The performance loss due to scanning the whole data source is just the beginning of a chain reaction. If the query in the example above is just a small part of a bigger query, the Index Scan will lead to bigger internal working sets the database has to deal with. Due to the bigger working sets, the database may decide on a JOIN order that is sub-optimal from our perspective. The increased complexity may lead to parallelization and/or the use of internal temp tables, neither of which is free. In the end, the database will require more memory to handle the query (a high query memory grant), leaving us with very bad performance.

Summary

When writing database queries with EF Core, the filter criteria should be as precise and as restrictive as possible. Otherwise, not just the query itself but the overall database performance can suffer greatly. If you need numbers to compare when optimizing database requests, I recommend analyzing execution plans.
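Execution plans are easiest to inspect in SQL Server Management Studio, but they can also be fetched programmatically. A sketch using plain ADO.NET with Microsoft.Data.SqlClient and SET STATISTICS XML ON (the connection string and the query are placeholders):

using Microsoft.Data.SqlClient;

public string GetActualPlanXml(string connectionString, string name)
{
    using var connection = new SqlConnection(connectionString);
    connection.Open();

    // ask SQL Server to append the actual plan as an extra result set
    using (var enable = new SqlCommand("SET STATISTICS XML ON", connection))
        enable.ExecuteNonQuery();

    using var command = new SqlCommand(
        "SELECT * FROM Products WHERE Name = @name", connection);
    command.Parameters.AddWithValue("@name", name);

    using var reader = command.ExecuteReader();

    while (reader.Read())
    {
        // consume the regular result set
    }

    // the showplan XML follows as a result set of its own
    reader.NextResult();
    reader.Read();

    return (string)reader[0];
}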
