Unnecessary Fuzzy Searches May Hurt Your Entity Framework Core Performance

After talking about performance issues like N+1 queries and the Cartesian Explosion, which made its comeback in Entity Framework Core 3, today we will look at a performance issue that is not tied to any particular Entity Framework version but is a general one.

Pawel Gerr is an architect and consultant at Thinktecture. He specializes in .NET Core backends and knows Entity Framework inside out.

What do I mean by "unnecessary fuzzy searches"?

In this article, I mean any filter criterion (i.e. WHERE clause) that could be more exact/precise than it is.
For example, when filtering for a Product whose name is unique (and all names have the same length), we could use either Contains() or the equality operator ==:

var name = "my product";

var product1 = Context.Products.FirstOrDefault(p => p.Name.Contains(name));
// OR
var product2 = Context.Products.FirstOrDefault(p => p.Name == name);

The same applies when comparing numbers or dates using >, <, ==, and so on.
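As a small illustration, assume a hypothetical Orders set with a DateTime column OrderedOn (both names are made up for this sketch). Computing on the column forces the database to evaluate the expression for every row, whereas a half-open range can be answered directly by an index:

var from = new DateTime(2020, 1, 1);
var to = from.AddYears(1);

// fuzzy: the year is computed for every row, so an index on OrderedOn cannot be sought
var fuzzyOrders = Context.Orders.Where(o => o.OrderedOn.Year == 2020).ToList();

// precise: a half-open range that an index on OrderedOn can satisfy directly
var preciseOrders = Context.Orders.Where(o => o.OrderedOn >= from && o.OrderedOn < to).ToList();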

Why would someone do an unnecessary fuzzy search?

Hardly anyone would intentionally do something unnecessary, but unintentionally, it happens…

One cause of sub-optimal filters is a lack of exact domain knowledge and/or side effects of the programming model used. I have seen code like the example above, using Contains() instead of ==, multiple times, and the authors could not fully explain why they took this approach, because both ways work. The best thing we can do in such a situation is to make users of EF Core aware of issues like these.

Another cause can be unfavorable code sharing. Imagine that in one of our core projects we find a method LoadProducts that takes a few parameters, among them the parameter string name.
Our calling code could look like this:

var name = "my product";

var product = someRepository.LoadProducts(name).FirstOrDefault();

A quick test confirms that the method returns a correct response, but let us have a closer look at its implementation.

public List<Product> LoadProducts(
    string name = null,
    string someOtherFilter = null)
{
    var query = Context.Products.AsQueryable();

    if (name != null)
        query = query.Where(p => p.Name.Contains(name));

    if (someOtherFilter != null)
        query = query.Where(...); // actual filter omitted here

    return query.ToList();
}

As we can see, there are a few issues with this method. First, it is clearly not made for our use case but for some kind of global product search, probably for displaying the data on a user interface. Second, it resides in a core project, although we are pretty sure it is not general-purpose business logic but serves a specific use case that differs from ours.
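A better fit for our scenario is a small, use-case-specific method. The following is a minimal sketch, assuming we may add such a method next to our calling code; the name LoadProductByName is made up:

public Product LoadProductByName(string name)
{
    if (name == null)
        throw new ArgumentNullException(nameof(name));

    // the name is unique, so an equality comparison is sufficient
    // and lets the database use the index on the column Name
    return Context.Products.FirstOrDefault(p => p.Name == name);
}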

Increased database load due to fuzzy searches

In the previous examples, we saw that the LINQ queries, or rather the filter criteria, could be changed without affecting the outcome, but so far there has been no evidence that the comparison via == performs better than Contains().

Now we will execute two slightly different LINQ queries, look at the SQL statements generated by EF and analyze the execution plans to determine the winner. An execution plan describes all operations the database performs to handle the SQL request.
I’m using MS SQL Server for the following tests.

Given is a table with 100 Products having the columns Id and Name and a unique index on the column Name.
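For completeness, such a unique index can be declared in the EF Core model configuration. The following is just a sketch, assuming the usual OnModelCreating override; the maximum length of 100 characters is an assumption of this sketch:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Product>(builder =>
    {
        builder.Property(p => p.Name)
               .IsRequired()
               .HasMaxLength(100); // assumption: 100 characters suffice

        // the unique index that enables the Index Seek we will see below
        builder.HasIndex(p => p.Name)
               .IsUnique();
    });
}

The LINQ queries are similar to the one in the method LoadProducts and return exactly 1 product.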

var name = "my product";

// query 1
var product1 = Context.Products.Where(p => p.Name.Contains(name)).ToList();

// query 2
var product2 = Context.Products.Where(p => p.Name == name).ToList();

The corresponding SQL statements look as follows (simplified for the sake of readability):

-- query 1
SELECT *
FROM Products
WHERE CHARINDEX(@__name_0, Name) > 0

-- query 2
SELECT *
FROM Products
WHERE Name = @__name_0

In both cases, there is just one operation worth mentioning being executed by the database. For the query using Contains(), the database performs a so-called Index Scan, i.e. it scans the whole data source record by record. The other query, using the == comparison, does an Index Seek, i.e. the database uses the index to jump right to the requested record.

Depending on how large the data source is, scanning it can lead to a considerable load on the database, but that is not all.

(Unnecessary) Scanning of the data source is just the tip of the iceberg

The performance loss due to the scanning of the whole data source is just the beginning of a chain reaction. If the query in the example above is just a small part of a bigger query, the Index Scan will lead to bigger internal working sets the database has to deal with. Due to these bigger working sets, the database may decide on a JOIN order that is sub-optimal from our perspective. The increased complexity may lead to parallelization and/or the use of internal temp tables, neither of which is free. In the end, the database will require more memory to handle the query (a high query memory grant), leaving us with very bad performance.

Summary

When writing database queries with EF Core, the filter criteria should be both as precise and as restrictive as possible. Otherwise, not just the query itself but the overall database performance could suffer greatly. If you need numbers to compare when optimizing database requests, I recommend analyzing the execution plans.
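If you want to see the SQL statement EF Core generates for a LINQ query before analyzing its plan, a logger factory can be plugged into the DbContext. The following is a minimal sketch, assuming EF Core 3 and the package Microsoft.Extensions.Logging.Console; the class name ProductsDbContext is made up:

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public class ProductsDbContext : DbContext
{
    // a shared logger factory; EF Core writes the generated SQL statements through it
    private static readonly ILoggerFactory _loggerFactory
        = LoggerFactory.Create(builder => builder.AddConsole());

    public DbSet<Product> Products { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseSqlServer("...") // connection string elided
            .UseLoggerFactory(_loggerFactory);
    }
}

Additionally, a query can be tagged via TagWith("my tag"); the tag appears as a comment in the generated SQL, which makes the statement easy to find in SQL Server's plan cache.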
