Download data via HTTP
Nothing special here: we download a 20 MB file from the server using the default FileStreamResult:
[HttpGet("Download")]
public IActionResult Download()
{
return File(new MemoryStream(_bytes), "application/octet-stream");
}
The throughput on my machine is 140 MB/s.
For the next test we use a CustomFileResult with an increased buffer size of 64 KB and suddenly get a throughput of 200 MB/s.
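The implementation of the CustomFileResult is not shown here, so the following is only a minimal sketch of what such a result could look like; the class name comes from the text, but the implementation details (copying the stream to the response body with an explicit buffer size) are my assumption:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class CustomFileResult : IActionResult
{
    private readonly Stream _stream;
    private readonly string _contentType;
    private readonly int _bufferSize;

    public CustomFileResult(Stream stream, string contentType, int bufferSize = 64 * 1024)
    {
        _stream = stream;
        _contentType = contentType;
        _bufferSize = bufferSize;
    }

    public async Task ExecuteResultAsync(ActionContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = _contentType;

        using (_stream)
        {
            // CopyToAsync accepts the copy buffer size explicitly,
            // so we are no longer tied to the framework default.
            await _stream.CopyToAsync(response.Body, _bufferSize);
        }
    }
}

Used from an action it could look like this:

[HttpGet("DownloadCustom")]
public IActionResult DownloadCustom()
{
    return new CustomFileResult(new MemoryStream(_bytes), "application/octet-stream", 64 * 1024);
}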
Upload multipart/form-data via HTTP
ASP.NET Core introduced a new type, IFormFile, that enables us to receive multipart/form-data without any manual work. For that we create a new model with a property of type IFormFile and use this model as an argument of a Web API method.
public class UploadMultipartModel
{
    public IFormFile File { get; set; }
    public int SomeValue { get; set; }
}
-------------
[HttpPost("UploadMultipartUsingIFormFile")]
public async Task UploadMultipartUsingIFormFile(UploadMultipartModel model)
{
var bufferSize = 32 * 1024;
var totalBytes = await Helpers.ReadStream(model.File.OpenReadStream(), bufferSize);
return Ok();
}
-------------
public static async Task<int> ReadStream(Stream stream, int bufferSize)
{
    var buffer = new byte[bufferSize];
    int bytesRead;
    int totalBytes = 0;

    do
    {
        bytesRead = await stream.ReadAsync(buffer, 0, bufferSize);
        totalBytes += bytesRead;
    } while (bytesRead > 0);

    return totalBytes;
}
Using IFormFile to transfer 20 MB we get a pretty bad throughput of 30 MB/s. Luckily, there is another means to get the content of a multipart/form-data request: the MultipartReader. With this reader we are able to improve the throughput to 350 MB/s.
[HttpPost("UploadMultipartUsingReader")]
public async Task UploadMultipartUsingReader()
{
var boundary = GetBoundary(Request.ContentType);
var reader = new MultipartReader(boundary, Request.Body, 80 * 1024);
var valuesByKey = new Dictionary();
MultipartSection section;
while ((section = await reader.ReadNextSectionAsync()) != null)
{
var contentDispo = section.GetContentDispositionHeader();
if (contentDispo.IsFileDisposition())
{
var fileSection = section.AsFileSection();
var bufferSize = 32 * 1024;
await Helpers.ReadStream(fileSection.FileStream, bufferSize);
}
else if (contentDispo.IsFormDisposition())
{
var formSection = section.AsFormDataSection();
var value = await formSection.GetValueAsync();
valuesByKey.Add(formSection.Name, value);
}
}
return Ok();
}
private static string GetBoundary(string contentType)
{
    if (contentType == null)
        throw new ArgumentNullException(nameof(contentType));

    // Content-Type looks like: multipart/form-data; boundary="----WebKitFormBoundary..."
    var elements = contentType.Split(';');
    var element = elements.Select(entry => entry.Trim())
                          .First(entry => entry.StartsWith("boundary="));
    var boundary = element.Substring("boundary=".Length);
    boundary = HeaderUtilities.RemoveQuotes(boundary);

    return boundary;
}
Uploading data via HTTPS
In this use case we upload 20 MB using different storage sources (memory vs. file system) and different schemes (HTTP vs. HTTPS). The code for uploading the data:
var stream = readFromFs
    ? (Stream) File.OpenRead(filePath)
    : new MemoryStream(bytes);
var bufferSize = 4 * 1024; // default

using (var content = new StreamContent(stream, bufferSize))
{
    using (var response = await client.PostAsync("Upload", content))
    {
        response.EnsureSuccessStatusCode();
    }
}
Here are the throughput numbers:
- HTTP + Memory: 450 MB/s
- HTTP + File System: 110 MB/s
- HTTPS + Memory: 300 MB/s
- HTTPS + File System: 23 MB/s
Sure, the file system is not as fast as memory, but my SSD is not so slow that it can only manage 23 MB/s… let's increase the buffer size instead of using the default value of 4 KB.
- HTTPS + Memory + 64 KB: 300 MB/s
- HTTPS + File System + 64 KB: 200 MB/s
- HTTPS + File System + 128 KB: 250 MB/s
With a bigger buffer size we get huge improvements when reading from slower storage like the file system.
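The exact client code behind these numbers is not shown above, so here is only a sketch of how the bigger buffers could be applied to both the file stream and the StreamContent (the 128 KB value matches the last measurement in the list):

var bufferSize = 128 * 1024;

var stream = readFromFs
    ? (Stream) new FileStream(filePath, FileMode.Open, FileAccess.Read,
                              FileShare.Read, bufferSize, useAsync: true)
    : new MemoryStream(bytes);

using (var content = new StreamContent(stream, bufferSize))
using (var response = await client.PostAsync("Upload", content))
{
    response.EnsureSuccessStatusCode();
}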
Another hint: setting the Content-Length on the client yields better overall performance.
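For example (assuming the StreamContent from the snippet above), the length can be set explicitly on the request content:

// The length is known in advance for both the MemoryStream and the file,
// so the request does not have to fall back to chunked transfer encoding.
content.Headers.ContentLength = stream.Length;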
Summary
When I started to work on the performance issues my first thought was that Kestrel was to blame because it has not had enough time to mature yet. I even tried to place IIS in front of Kestrel so that IIS would be responsible for the HTTPS part and Kestrel for the rest. The improvement was not worth mentioning. After adding a bunch of trace logs, measuring time on the client and the server, and switching between schemes and storage types, I realized that the (mature) HttpClient was causing issues as well, and one of the major problems was the default values, such as the buffer size.