Incremental Roslyn Source Generators In .NET 6: Testing Source Generators, Analyzers & Code Fixes – Part 3

This article is the 3rd part of a series about Roslyn Source Generators & co. In the 1st article of this series, we built an Incremental Source Generator that generates a new property Items for a Smart Enum. In the 2nd article, we added a Roslyn Analyzer and a Code Fix to prevent common mistakes and to help developers when using this Source Generator. All the code we have written so far was tested manually by executing it and looking at the outcome. It is time to implement some automated tests to ensure the correct behavior.

Pawel Gerr is an architect and consultant at Thinktecture. He specializes in .NET Core backends and knows Entity Framework inside out.

Different Testing Approaches

When it comes to testing Roslyn Source Generators, Analyzers, and Code Fixes, there are two different kinds of tests.

  • One is for testing the behavior of the generated code. In our case, it would be a test for the existence of the property Items and whether it returns all defined items.
  • The other kind of test verifies the emitted errors and warnings as well as the generated code, not just the code produced by the Source Generator but also the one produced by the Code Fix. With such tests, we are able to test all the pieces we implemented in the previous articles.

Preparation

First, create a new library project, DemoTests, which references both DemoLibrary and DemoSourceGenerator. Please note that, this time, the reference to DemoSourceGenerator in DemoTests.csproj deliberately omits the attribute ReferenceOutputAssembly="false". For testing, we want both the generated code, i.e. the standard functionality of a Source Generator, and direct access to the classes DemoSourceGenerator, DemoAnalyzer, and DemoCodeFixProvider.

We want to test 2 TargetFrameworks: .NET 5 and .NET 6.

				
<Project Sdk="Microsoft.NET.Sdk">

    <PropertyGroup>
        <TargetFrameworks>net5.0;net6.0</TargetFrameworks>
        <!-- C# 10 features (e.g. file-scoped namespaces) are used in the test code;
             LangVersion is required when targeting net5.0 -->
        <LangVersion>latest</LangVersion>
        <Nullable>enable</Nullable>
    </PropertyGroup>

    <ItemGroup>
        <ProjectReference Include="..\DemoLibrary\DemoLibrary.csproj" />
        <ProjectReference Include="..\DemoSourceGenerator\DemoSourceGenerator.csproj" 
                          OutputItemType="Analyzer" />
    </ItemGroup>

    <ItemGroup>
        <PackageReference Include="Microsoft.CodeAnalysis.CSharp.Analyzer.Testing.XUnit"
                          Version="1.1.0" />
        <PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeFix.Testing.XUnit"
                          Version="1.1.0" />
        <PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeRefactoring.Testing.XUnit"
                          Version="1.1.0" />
        <PackageReference Include="Microsoft.CodeAnalysis.CSharp.Workspaces"
                          Version="4.0.1" />

        <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.0.0" />
        <PackageReference Include="FluentAssertions" Version="6.3.0" />
        <PackageReference Include="xunit" Version="2.4.1" />
        <PackageReference Include="xunit.runner.visualstudio" Version="2.4.3" />
    </ItemGroup>

</Project>

In the following tests, we will be using XUnit and Fluent Assertions.

Testing Behavior of Generated Code

In order to test the expected behavior, we need (at least) one enumeration. Copy the ProductCategory from the project DemoConsoleApplication to DemoTests or create a new one. These kinds of tests are very common, so one positive test should be enough as a starting point. In real scenarios, we should write negative tests and tests for edge cases as well. An edge case would be an enumeration without any items; a sketch of such a test follows the first test below.

Create a new class ItemsPropertyTests in the project DemoTests with the following content and let the test run.

				
using FluentAssertions;
using Xunit;

namespace DemoTests;

public class ItemsPropertyTests
{
   [Fact]
   public void Should_return_all_known_items()
   {
      ProductCategory.Items.Should().HaveCount(2)
                     .And.BeEquivalentTo(new[]
                                         {
                                            ProductCategory.Fruits,
                                            ProductCategory.Dairy
                                         });
   }
}

The test should be green.
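
As mentioned above, an enumeration without any items is an edge case worth covering. The following sketch shows what such a test could look like. It assumes a hypothetical Smart Enum EmptyCategory (not part of the demo solution) decorated with [EnumGeneration] and defining no items, and it assumes the generator still emits an (empty) Items property for such a class; depending on your implementation, the generator might emit nothing at all, in which case the test has to be adjusted:

   [Fact]
   public void Should_return_empty_collection_for_enum_without_items()
   {
      // EmptyCategory is a hypothetical Smart Enum without any items,
      // used here only to illustrate the edge case.
      EmptyCategory.Items.Should().BeEmpty();
   }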

Testing Roslyn Source Generator

Create a new class DemoSourceGeneratorTests in the project DemoTests for testing the output of the DemoSourceGenerator. First, we make a helper method that executes the Source Generator directly and returns the generated output.

				
using System;
using System.Linq;
using FluentAssertions;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Xunit;

namespace DemoTests;

public class DemoSourceGeneratorTests
{
   private static string? GetGeneratedOutput(string sourceCode)
   {
      var syntaxTree = CSharpSyntaxTree.ParseText(sourceCode);
      var references = AppDomain.CurrentDomain.GetAssemblies()
                                .Where(assembly => !assembly.IsDynamic)
                                .Select(assembly => MetadataReference
                                                    .CreateFromFile(assembly.Location))
                                .Cast<MetadataReference>();

      var compilation = CSharpCompilation.Create("SourceGeneratorTests",
                    new[] { syntaxTree },
                    references,
                    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

      // Source Generator to test 
      var generator = new DemoSourceGenerator.DemoSourceGenerator();

      CSharpGeneratorDriver.Create(generator)
                           .RunGeneratorsAndUpdateCompilation(compilation,
                                                              out var outputCompilation,
                                                              out var diagnostics);

      // optional
      diagnostics.Where(d => d.Severity == DiagnosticSeverity.Error)
                 .Should().BeEmpty();

      return outputCompilation.SyntaxTrees.Skip(1).LastOrDefault()?.ToString();
   }
}

The method GetGeneratedOutput parses and compiles the provided sourceCode and executes the DemoSourceGenerator. The output generated by the Source Generator is the last syntax tree in the collection SyntaxTrees. Please note that the generator doesn't produce any output under certain conditions. That's why we Skip the first syntax tree, which is the provided sourceCode, and take the last of the remaining ones, which is null if nothing was generated.
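
For instance, if the class is not decorated with the EnumGeneration attribute, the generator (as implemented in the 1st article) has nothing to do and GetGeneratedOutput returns null. A minimal sketch of such a negative test; the class name NotAnEnum is made up for illustration:

   [Fact]
   public void Should_not_generate_anything_for_class_without_attribute()
   {
      var input = @"
namespace DemoTests
{
   // no [EnumGeneration] attribute, so the Source Generator should stay silent
   public partial class NotAnEnum
   {
   }
}";

      GetGeneratedOutput(input).Should().BeNull();
   }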

With the helper method GetGeneratedOutput, the actual tests are reduced to a comparison of strings. A positive test looks like the following:

				
   [Fact]
   public void Should_generate_Items_property_with_2_items()
   {
      var input = @"
using DemoLibrary;

namespace DemoTests
{
   [EnumGeneration]
   public partial class ProductCategory
   {
      public static readonly ProductCategory Fruits = new(""Fruits"");
      public static readonly ProductCategory Dairy = new(""Dairy"");

      public string Name { get; }

      private ProductCategory(string name)
      {
         Name = name;
      }
   }
}
";
      GetGeneratedOutput(input)
         .Should().Be(@"// <auto-generated />

using System.Collections.Generic;

namespace DemoTests
{
   partial class ProductCategory
   {
      private static IReadOnlyList<ProductCategory> _items;
      public static IReadOnlyList<ProductCategory> Items => _items ??= GetItems();

      private static IReadOnlyList<ProductCategory> GetItems()
      {
         return new[] { Fruits, Dairy };
      }
   }
}
");
   }

Testing Roslyn Analyzers and Code Fixes

For testing the Analyzers and Code Fixes, it is recommended to create a few helper classes. Depending on whether we have Code Fixes or not, we need either an AnalyzerVerifier or an AnalyzerAndCodeFixVerifier. In this article, we will create both helper classes but just use the 2nd one.

Here are the contents of the class AnalyzerVerifier for the sake of completeness.

				
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using DemoLibrary;
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Diagnostics;
using Microsoft.CodeAnalysis.Testing;
using Microsoft.CodeAnalysis.Testing.Verifiers;

namespace DemoTests.Verifiers;

public static class AnalyzerVerifier<TAnalyzer>
   where TAnalyzer : DiagnosticAnalyzer, new()
{
   public static DiagnosticResult Diagnostic(string diagnosticId)
   {
      return CSharpAnalyzerVerifier<TAnalyzer, XUnitVerifier>.Diagnostic(diagnosticId);
   }

   public static async Task VerifyAnalyzerAsync(
      string source,
      params DiagnosticResult[] expected)
   {
      var test = new AnalyzerTest(source, expected);
      await test.RunAsync(CancellationToken.None);
   }

   private class AnalyzerTest : CSharpAnalyzerTest<TAnalyzer, XUnitVerifier>
   {
      public AnalyzerTest(
         string source,
         params DiagnosticResult[] expected)
      {
         TestCode = source;
         ExpectedDiagnostics.AddRange(expected);
#if NET6_0
         ReferenceAssemblies = new ReferenceAssemblies(
            "net6.0",
            new PackageIdentity("Microsoft.NETCore.App.Ref", "6.0.0"), 
            Path.Combine("ref", "net6.0"));
#else
         ReferenceAssemblies = ReferenceAssemblies.Net.Net50;
#endif

         TestState.AdditionalReferences.Add(typeof(EnumGenerationAttribute).Assembly);
      }
   }
}

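Although the AnalyzerVerifier is not used any further in this article, a test based on it could look like the following sketch. The test class name DemoAnalyzerTests is made up; the diagnostic markup {|#0:...|} is explained below in the section about the Code Fix test:

using System.Threading.Tasks;
using DemoSourceGenerator;
using Xunit;
using Verifier = DemoTests.Verifiers.AnalyzerVerifier<DemoSourceGenerator.DemoAnalyzer>;

namespace DemoTests;

public class DemoAnalyzerTests
{
   [Fact]
   public async Task Should_report_diagnostic_for_non_partial_class()
   {
      var input = @"
using DemoLibrary;

namespace DemoTests
{
   [EnumGeneration]
   public class {|#0:ProductCategory|}
   {
   }
}";

      // the analyzer alone should report the diagnostic at the marked location
      var expectedError = Verifier.Diagnostic(DemoDiagnosticsDescriptors.EnumerationMustBePartial.Id)
                                  .WithLocation(0)
                                  .WithArguments("ProductCategory");
      await Verifier.VerifyAnalyzerAsync(input, expectedError);
   }
}
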
What we need for this demo is the AnalyzerAndCodeFixVerifier, so create a new class with the following content:

				
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using DemoLibrary;
using Microsoft.CodeAnalysis.CodeFixes;
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Diagnostics;
using Microsoft.CodeAnalysis.Testing;
using Microsoft.CodeAnalysis.Testing.Verifiers;

namespace DemoTests.Verifiers;

public static class AnalyzerAndCodeFixVerifier<TAnalyzer, TCodeFix>
   where TAnalyzer : DiagnosticAnalyzer, new()
   where TCodeFix : CodeFixProvider, new()
{
   public static DiagnosticResult Diagnostic(string diagnosticId)
   {
      return CSharpCodeFixVerifier<TAnalyzer, TCodeFix, XUnitVerifier>
                .Diagnostic(diagnosticId);
   }

   public static async Task VerifyCodeFixAsync(
      string source,
      string fixedSource,
      params DiagnosticResult[] expected)
   {
      var test = new CodeFixTest(source, fixedSource, expected);
      await test.RunAsync(CancellationToken.None);
   }

   private class CodeFixTest : CSharpCodeFixTest<TAnalyzer, TCodeFix, XUnitVerifier>
   {
      public CodeFixTest(
         string source,
         string fixedSource,
         params DiagnosticResult[] expected)
      {
         TestCode = source;
         FixedCode = fixedSource;
         ExpectedDiagnostics.AddRange(expected);
#if NET6_0
         ReferenceAssemblies = new ReferenceAssemblies(
            "net6.0",
            new PackageIdentity("Microsoft.NETCore.App.Ref", "6.0.0"),
            Path.Combine("ref", "net6.0"));
#else
         ReferenceAssemblies = ReferenceAssemblies.Net.Net50;
#endif

         TestState.AdditionalReferences.Add(typeof(EnumGenerationAttribute).Assembly);
      }
   }
}

The only thing worth mentioning is the last line of the constructor, where we add an assembly reference to DemoLibrary to the testing class CSharpCodeFixTest; otherwise, the EnumGenerationAttribute cannot be resolved and the test fails.

With the helper classes in place, the actual tests become much easier to write.
Create a new test class DemoAnalyzerAndCodeFixTests to test our DemoAnalyzer and the DemoCodeFixProvider for correct handling of non-partial Smart Enums.

				
using System.Threading.Tasks;
using DemoSourceGenerator;
using Xunit;
using Verifier = DemoTests.Verifiers.AnalyzerAndCodeFixVerifier<
   DemoSourceGenerator.DemoAnalyzer,
   DemoSourceGenerator.DemoCodeFixProvider>;

namespace DemoTests;

public class DemoAnalyzerAndCodeFixTests
{
   [Fact]
   public async Task Should_trigger_on_non_partial_class()
   {
      var input = @"
using DemoLibrary;

namespace DemoTests
{
   [EnumGeneration]
   public class {|#0:ProductCategory|}
   {
   }
}";

      var expectedOutput = @"
using DemoLibrary;

namespace DemoTests
{
   [EnumGeneration]
   public partial class ProductCategory
   {
   }
}";

      var expectedError = Verifier.Diagnostic(DemoDiagnosticsDescriptors.EnumerationMustBePartial.Id)
                                  .WithLocation(0)
                                  .WithArguments("ProductCategory");
      await Verifier.VerifyCodeFixAsync(input, expectedOutput, expectedError);
   }
}

The magic lies in the using Verifier = ... at the top of the file and in Verifier.VerifyCodeFixAsync at the bottom, which gets the input source code and expects the expectedOutput along with the expectedError. The rather strange characters {|#0: and |} are markup, so the Verifier knows which location the reported diagnostic must point to.

We talked about locations in the 2nd article; search for the text classDeclaration.Identifier there to get more information.

Let's run all of our tests; they should be green.

Summary

This article concludes the introductory series about Roslyn Source Generators, Analyzers, and Code Fixes. As a matter of fact, writing the tests turned out to be the easiest and fastest task compared to the previous two articles.
