Add C# 12
Please complete the following information:
- Name: C#
- Version: 12
- Release Note/Changelog:
- C# 12: https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-12
- C# 11: https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-11
C# 12 was introduced with .NET 8, which is an LTS version (like .NET 6 and C# 10).
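For a quick, illustrative example of what the upgrade enables (primary constructors and collection expressions, both documented C# 12 features; the types below are just a made-up sample):

```csharp
// Primary constructor: parameters declared directly on the class.
public class Point(int x, int y)
{
    public int X { get; } = x;
    public int Y { get; } = y;
}

public static class Demo
{
    public static void Main()
    {
        int[] numbers = [1, 2, 3];        // collection expression
        int[] more = [..numbers, 4, 5];   // spread element
        var p = new Point(1, 2);
        System.Console.WriteLine($"{p.X},{p.Y} / {more.Length}");
    }
}
```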
It would be good to have these entries in the .csproj:
```xml
<PropertyGroup>
  <ImplicitUsings>enable</ImplicitUsings>
  <Nullable>enable</Nullable>
</PropertyGroup>
```
:+1: reaction might help to get this request prioritized.
List patterns would be great to use in Codewars!
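For context, list patterns shipped with C# 11, so they would come along with this upgrade. A minimal illustration (not tied to any particular kata):

```csharp
public static class ListPatternDemo
{
    // List patterns (C# 11+): match on the shape of a sequence.
    public static string Describe(int[] items) => items switch
    {
        []                        => "empty",
        [var only]                => $"a single element: {only}",
        [var first, .., var last] => $"starts with {first}, ends with {last}",
        _                         => "something else (e.g. null)",
    };

    public static void Main()
    {
        System.Console.WriteLine(Describe([]));         // empty
        System.Console.WriteLine(Describe([7]));        // a single element: 7
        System.Console.WriteLine(Describe([1, 2, 3]));  // starts with 1, ends with 3
    }
}
```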
Possibly superseded by #272.
Updated this instead of closing #272 so we don't lose the :+1: count for the newer C#. We don't always prioritize based on the :+1: count, but it's one of the major factors used to decide what to work on.
C# 12 was added earlier today and enabled for compatible kata. Unfortunately, we have more than 2K incompatible kata at the moment.
The main reason is the breaking changes from NUnit v3 to v4:
1. The Classic Asserts have been moved to a separate library, and their namespace and class name have changed to `NUnit.Framework.Legacy.ClassicAssert`.
2. The standalone assert classes have also been moved to the `NUnit.Framework.Legacy` namespace. These classes are:
   - `CollectionAssert`
   - `StringAssert`
   - `DirectoryAssert`
   - `FileAssert`
3. `Assert.That` overloads with format specification and `params` have been removed in favor of an overload using `FormattableString`.
See https://docs.nunit.org/articles/nunit/release-notes/breaking-changes.html#nunit-40 for more details.
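For illustration, a kata test hit by items 1 and 3 needs either the new Legacy class name or a rewrite to the constraint model (a minimal sketch; the `Answer` method is just a placeholder solution):

```csharp
using NUnit.Framework;
using NUnit.Framework.Legacy;

[TestFixture]
public class ExampleTests
{
    [Test]
    public void Migration_example()
    {
        // NUnit 3 style, no longer compiles as-is under v4:
        // Assert.AreEqual(42, Answer());

        // Option 1: keep the classic style via the Legacy namespace.
        ClassicAssert.AreEqual(42, Answer());

        // Option 2: rewrite to the constraint model.
        Assert.That(Answer(), Is.EqualTo(42));
    }

    // Hypothetical solution under test, for illustration only.
    private static int Answer() => 42;
}
```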
NUnit does provide a migration tool (the NUnit.Analyzers code fix) to convert Classic Asserts to the constraint model (Assert.That) and fix the overload usage, but we currently don't have a convenient way to run the tool against the kata tests, verify the changes, and update the kata automatically.
For 1 and 2, we could try to minimize the compatibility issues by including a file GlobalUsings.cs with the aliases:
```csharp
global using Assert = NUnit.Framework.Legacy.ClassicAssert;
global using CollectionAssert = NUnit.Framework.Legacy.CollectionAssert;
global using StringAssert = NUnit.Framework.Legacy.StringAssert;
global using DirectoryAssert = NUnit.Framework.Legacy.DirectoryAssert;
global using FileAssert = NUnit.Framework.Legacy.FileAssert;
```
We could even generate this file from the test code by detecting the classic asserts and the standalone assert classes, and only include the necessary aliases. However, we'll have a problem when the test also uses the constraint model. I'd also prefer not to do too much implicitly, because that can lead to confusion.
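A rough sketch of that detection step (hypothetical code, not what the runner actually does) could look like this:

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

static class GlobalUsingsGenerator
{
    // Classic/standalone assert classes that moved to NUnit.Framework.Legacy in v4.
    static readonly string[] LegacyClasses =
        { "Assert", "CollectionAssert", "StringAssert", "DirectoryAssert", "FileAssert" };

    // Scans the test source and emits only the aliases that are actually referenced.
    // Caveat (as noted above): aliasing "Assert" also captures constraint-model
    // usages like Assert.That, so mixed tests would still break.
    public static string Generate(string testSource)
    {
        var aliases = LegacyClasses
            .Where(name => Regex.IsMatch(testSource, $@"\b{name}\."))
            .Select(name => name == "Assert"
                ? "global using Assert = NUnit.Framework.Legacy.ClassicAssert;"
                : $"global using {name} = NUnit.Framework.Legacy.{name};");
        return string.Join(Environment.NewLine, aliases);
    }
}
```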
I also considered staying with NUnit v3, but NUnit v4 is almost a year old already, and we'll have more kata to update in the future.
Any suggestions are welcome. Also, please let me know if you notice any issues with C# 12 in general.
I messaged you on Discord, but I'm not sure whether you read it: I managed to create a tool which automatically fetches test snippets (example tests and submission tests), updates the syntax to Assert.That, and verifies the result on the CW runner. Efficiency looks promising so far, and out of ~70 converted kata, ~~two resulted in test suites ending with a compilation error~~ all passed tests on the CW runner successfully (my tool does not yet account for the preloaded snippet, so it cannot verify updated kata which use one, but the updated tests still passed after pasting them into a new fork).
I can provide you with a sample of converted test suites, or with the whole batch of converted test suites for you to review and upload to the DB. If you are interested: I made you a collaborator on the tool's repo, and in this directory you can find some processed snippets for kata I have already updated. Snippets from the fix_csharp_output/reviewed directory pass CW tests, have been reviewed for undesired changes, and can be uploaded to the DB, if possible.
I also have some observations related to setup based on NUnit 4:
- NUnit 4 is more likely than NUnit 3 to change the order of reported tests. This can be solved by adding the `[Order]` attribute (which my tool does).
- NUnit 4 uses some debugging-symbols magic to print the whole expression passed to `Assert.That(actual, ...)`. This makes failures a bit more verbose, but worse, it can sometimes leak reference solutions. The fix would be to factor calls to the reference solution out into an explicit variable before calling `Is.EqualTo`, but it might not be that easy. I will see what I can do. (See the sketch after this list.)
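For illustration, a minimal sketch of both fixes (the solution classes below are hypothetical placeholders):

```csharp
using NUnit.Framework;

[TestFixture]
public class FixedTests
{
    [Test, Order(1)]  // explicit ordering keeps the reported order stable under NUnit 4
    public void Basic_case()
    {
        var input = 5;
        // Factor the reference solution call into a variable so that NUnit 4's
        // expression printing shows "expected" instead of the reference call itself.
        var expected = ReferenceSolution.Solve(input);
        Assert.That(UserSolution.Solve(input), Is.EqualTo(expected));
    }
}

// Hypothetical solution classes, for illustration only.
static class ReferenceSolution { public static int Solve(int n) => n * 2; }
static class UserSolution      { public static int Solve(int n) => n * 2; }
```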
Thanks @hobovsky, awesome work! Using OpenAI for these upgrades is something we've been wanting to explore.
The prompt looks good, and it seems to work really well. Cool to see it handling assertions like StringAssert.AreEqualIgnoringCase too. For this refactoring, it feels pretty safe to apply the change after the test passes, but I still think we need to review the changes because unexpected things can happen.
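For reference, that particular mapping ends up roughly like this (an illustrative snippet, not taken from a specific kata):

```csharp
using NUnit.Framework;

[TestFixture]
public class IgnoreCaseExample
{
    [Test]
    public void Case_insensitive_comparison()
    {
        var actual = "Hello";
        // Classic form (now NUnit.Framework.Legacy.StringAssert in v4):
        // StringAssert.AreEqualIgnoringCase("HELLO", actual);

        // Constraint-model equivalent:
        Assert.That(actual, Is.EqualTo("HELLO").IgnoreCase);
    }
}
```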
Thanks for creating a directory with the ones you reviewed, but I'd rather not update the database from it. (We try to avoid having "write" access to the production database from outside the production cluster. I also don't really want to download those files in a container running the app and run an untested script to update the production database.)
Maybe we should just create a "fork" with the changes. (We should have a system account to use as an owner for automations like this. Might want to list these "forks" separately from other forks.)
Once we establish the workflow and have some wrappers around this, we can accept prompts for other upgrades too. I'll try to prioritize this next week.
If it would make you feel more comfortable with updating tests automatically, I can still research the possibility of using the NUnit analyzer. The advantage would be that the analyzer would apply refactorings in a deterministic, perfectly controlled way, and maybe those would feel "safer" to upload en masse. The disadvantage is that it won't fix the other two major problems: nondeterministic order in test reports, and reference solutions leaking through error messages. Maybe we could apply refactorings for those in a separate round, after fixing assertions? I lean towards fixing assertions + order + leaking refsols in one go, but it might depend on your priorities, and maybe you want to keep these separate?
EDIT: I added functionality to the Katafix tool to view diffs in the tool before creating a fork on CW. Now it is even easier to verify that the refactored code stays as close to the original as possible. This makes the reviewing part even easier; the most problematic part is uploading to CW. Having a way to create a fork would be great, and having a dedicated account for this would be even better. EDIT2: I can create fork drafts now.
> Once we establish the workflow and have some wrappers around this, we can accept prompts for other upgrades too.
The main impulse behind creating the Katafix tool was to provide a way to make assisted (not necessarily fully automated) improvements of various kinds to existing kata. I managed to use it to migrate F# tests from Fuchu to NUnit, to translate kata between closely related languages (C# to VB, or Ruby to Crystal) or between languages with test frameworks of similar structure (JavaScript to Lua, etc.), to clean up and refactor, and so on. I want to research the possibility of using it to apply one improvement to many kata (maybe updating C++ kata from Snowhouse to gtest?), and to apply improvements to a single kata across many languages (harmonizing example tests, fixed tests, ranges of random tests, whatnot). I do not expect it to work great in every case, but I have to admit that so far, the results are unexpectedly good. The bottleneck is creating a fork, because currently I have to create them manually via the web UI.
> I can still research the possibility of using the NUnit analyzer. ...
No, I agree with you and what you've done so far is outstanding.
> The main impulse behind creating the Katafix tool was to provide a way to make assisted (not necessarily fully automated) improvements of various kinds to existing kata.
Yes, exactly. This has been on our roadmap. (We'd like to help our customers in Qualified to maintain their challenges as well.)
This is getting off topic here, so I'll comment in your repo or DM you later. By the way, it's pretty cool to see how you worked around Codewars not having any usable API :)
Every open port can be an API if you try hard enough. --- Paulo Coelho
Back to the topic though, one more observation about the NUnit 4 setup: it seems to truncate actual and expected values, in particular stringified collections, in a way similar to Chai's config.truncateThreshold. I have not found a setting for this yet.