ASP.NET Core Web API Performance – Throughput For Upload And Download

After working with the new ASP.NET Core server Kestrel and the HttpClient for a while in a number of projects, I ran into some performance issues. Actually, it was a throughput issue. It took me some time to figure out whether the server or the client was responsible for the problems. And the answer is: both.

Pawel Gerr is an architect and consultant at Thinktecture. He specializes in .NET Core backends and knows Entity Framework inside out.
Here are some hints to get more out of your web applications and Web APIs. The code for my test server and client is on GitHub: https://github.com/PawelGerr/AspNetCorePerformance

In the following sections we will download and upload data using different schemes, storages and parameters, measuring the throughput.

Download data via HTTP

Nothing special here: we download a 20 MB file from the server using the default FileStreamResult:

[HttpGet("Download")]
public IActionResult Download()
{
    return File(new MemoryStream(_bytes), "application/octet-stream");
}

The throughput on my machine is 140 MB/s.
For the next test we use a CustomFileResult with an increased buffer size of 64 KB and suddenly get a throughput of 200 MB/s.
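The essential idea behind such a result is to copy the stream to the response with an explicitly chosen buffer size instead of relying on the framework default. A minimal sketch of what a CustomFileResult could look like (the actual implementation in the repository may differ):

```csharp
// Sketch: an action result that streams a file with a configurable copy buffer.
public class CustomFileResult : IActionResult
{
    private readonly Stream _stream;
    private readonly string _contentType;
    private readonly int _bufferSize;

    public CustomFileResult(Stream stream, string contentType, int bufferSize = 64 * 1024)
    {
        _stream = stream ?? throw new ArgumentNullException(nameof(stream));
        _contentType = contentType;
        _bufferSize = bufferSize;
    }

    public async Task ExecuteResultAsync(ActionContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = _contentType;

        using (_stream)
        {
            // CopyToAsync lets us pick the buffer size explicitly,
            // which FileStreamResult does not expose.
            await _stream.CopyToAsync(response.Body, _bufferSize);
        }
    }
}
```

From the controller it would then be used like `return new CustomFileResult(new MemoryStream(_bytes), "application/octet-stream");`.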

Upload multipart/form-data via HTTP

ASP.NET Core introduces a new type, IFormFile, that enables us to receive multipart/form-data without any manual work. To use it, we create a model with a property of type IFormFile and take this model as an argument of a Web API method.

public class UploadMultipartModel
{
    public IFormFile File { get; set; }
    public int SomeValue { get; set; }
}

-------------

[HttpPost("UploadMultipartUsingIFormFile")]
public async Task<IActionResult> UploadMultipartUsingIFormFile(UploadMultipartModel model)
{
    var bufferSize = 32 * 1024;
    var totalBytes = await Helpers.ReadStream(model.File.OpenReadStream(), bufferSize);

    return Ok();
}

-------------

public static async Task<int> ReadStream(Stream stream, int bufferSize)
{
    var buffer = new byte[bufferSize];

    int bytesRead;
    int totalBytes = 0;

    do
    {
        bytesRead = await stream.ReadAsync(buffer, 0, bufferSize);
        totalBytes += bytesRead;
    } while (bytesRead > 0);

    return totalBytes;
}

Using IFormFile to transfer 20 MB we get a pretty bad throughput of just 30 MB/s. Luckily, there is another means to read the content of a multipart/form-data request: the MultipartReader. With the new reader we are able to improve the throughput to 350 MB/s.

[HttpPost("UploadMultipartUsingReader")]
public async Task<IActionResult> UploadMultipartUsingReader()
{
    var boundary = GetBoundary(Request.ContentType);
    var reader = new MultipartReader(boundary, Request.Body, 80 * 1024);

    var valuesByKey = new Dictionary<string, string>();
    MultipartSection section;

    while ((section = await reader.ReadNextSectionAsync()) != null)
    {
        var contentDispo = section.GetContentDispositionHeader();

        if (contentDispo.IsFileDisposition())
        {
            var fileSection = section.AsFileSection();
            var bufferSize = 32 * 1024;
            await Helpers.ReadStream(fileSection.FileStream, bufferSize);
        }
        else if (contentDispo.IsFormDisposition())
        {
            var formSection = section.AsFormDataSection();
            var value = await formSection.GetValueAsync();
            valuesByKey.Add(formSection.Name, value);
        }
    }

    return Ok();
}

private static string GetBoundary(string contentType)
{
    if (contentType == null)
        throw new ArgumentNullException(nameof(contentType));

    var elements = contentType.Split(' ');
    var element = elements.First(entry => entry.StartsWith("boundary="));
    var boundary = element.Substring("boundary=".Length);

    boundary = HeaderUtilities.RemoveQuotes(boundary);

    return boundary;
}

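The hand-rolled GetBoundary above works for typical requests, but the framework also ships a proper parser for the Content-Type header. A sketch of the same helper built on Microsoft.Net.Http.Headers (assuming that package is referenced; depending on the framework version, Boundary may be a string or a StringSegment):

```csharp
using Microsoft.Net.Http.Headers;

private static string GetBoundary(string contentType)
{
    // MediaTypeHeaderValue handles parameter parsing and quoting for us.
    var mediaType = MediaTypeHeaderValue.Parse(contentType);
    var boundary = HeaderUtilities.RemoveQuotes(mediaType.Boundary).Value;

    if (string.IsNullOrWhiteSpace(boundary))
        throw new InvalidDataException("Missing content-type boundary.");

    return boundary;
}
```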
Uploading data via HTTP(S)

In this use case we will upload 20 MB using different storages (memory vs. file system) and different schemes (HTTP vs. HTTPS).

The code for uploading data (client is an HttpClient pointing at the test server):

var stream = readFromFs
    ? (Stream) File.OpenRead(filePath)
    : new MemoryStream(bytes);

var bufferSize = 4 * 1024; // default

using (var content = new StreamContent(stream, bufferSize))
{
    using (var response = await client.PostAsync("Upload", content))
    {
        response.EnsureSuccessStatusCode();
    }
}

Here are the throughput numbers:

  • HTTP + Memory: 450 MB/s
  • HTTP + File System: 110 MB/s
  • HTTPS + Memory: 300 MB/s
  • HTTPS + File System: 23 MB/s

Sure, the file system is not as fast as memory, but my SSD is not so slow that it should reach just 23 MB/s. Let's increase the buffer size instead of using the default value of 4 KB:

  • HTTPS + Memory + 64 KB: 300 MB/s
  • HTTPS + File System + 64 KB: 200 MB/s
  • HTTPS + File System + 128 KB: 250 MB/s

With a bigger buffer size we get huge improvements when reading from slow storage like the file system.

Another hint: Setting the Content-Length on the client yields better overall performance.
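With a seekable stream, StreamContent can report the length itself; when it cannot, HttpClient falls back to chunked transfer encoding. Setting the header explicitly avoids that. A minimal sketch based on the upload code above:

```csharp
var stream = File.OpenRead(filePath);

using (var content = new StreamContent(stream, 64 * 1024))
{
    // Announce the body size up front so the client does not have to
    // resort to chunked transfer encoding.
    content.Headers.ContentLength = stream.Length;

    using (var response = await client.PostAsync("Upload", content))
    {
        response.EnsureSuccessStatusCode();
    }
}
```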

Summary

When I started to work on the performance issues, my first thought was that Kestrel was to blame because it had not had enough time to mature yet. I even tried to place IIS in front of Kestrel, so that IIS was responsible for the HTTPS handling and Kestrel for the rest. The improvements were not worth mentioning. After adding a bunch of trace logs, measuring time on the client and the server, and switching between schemes and storages, I realized that the (mature) HttpClient was causing issues as well, and one of the major problems was the default values, like the buffer size.
