Tuesday, February 25, 2025

Technology Idiocy Log

Although I've mostly retired from software development, I'm still doing a few hours of maintenance and debugging each week for old products. In my previous post about easing into retirement, I mentioned many examples of random, idiotic, wildly counterproductive things that keep going wrong and suck all the enjoyment out of working in IT. Idiotic things are still happening, and this post is a growing log of recent idiotic problems related to IT and technology in general that I can show to friends and colleagues as proof that "everything f***ing doesn't work".


March 2025 — T4 Templates in .NET Core PENDING (3 hours)
If a T4 template in a Visual Studio project attempts to "touch" a .NET Core Assembly, it will fail with reference errors. This wouldn't normally happen, but in my case I was dynamically loading an Assembly so I could reflect over it and generate code. The Assembly was recently changed from targeting netstandard2.0 to net8.0, so I hit the problem.

A modernised TextTransformCore.exe was released in June 2023, but Microsoft states that in-IDE and MSBuild tasks are not supported yet, and after an hour of experiments in March 2025 I can sadly confirm that it's still a limitation. Therefore the only viable technique remaining is to use the tool from a command prompt to manually process the .tt file. You can't have $(ProjectDir) style variables in the .tt file because there is no way for the tool to resolve their values (using <T4ParameterValues> doesn't work). My .bat file in the solution folder looks like this, and it uses the -r switch to tell the processor where the dynamically loaded Assembly is found:

"%ProgramFiles%\Microsoft Visual Studio\2022\Professional\Common7\IDE\TextTransformCore" MyTemplate.tt -r bin\Debug\net8.0\MyAssembly.dll

I pray that they will integrate the new T4 tool into the VS IDE and build process soon, but I'm worried that Microsoft will put more effort into urging people to use Roslyn Source Generators instead. As I said in my C# Compiler Source Generators post a few years ago, writing generators is fragile, tedious and complicated compared to knocking out a .tt file.

March 2025 — Publish Blazor App SOLVED (4 hours)
I need to publish a Blazor app with small changes to match those in a web service it calls. The project fails to compile due to a missing project.assets.json file. An hour of web searching for a fix produces a dozen suggestions that are either nonsensical or have no effect. Cleaning folders, restoring packages and workloads, etc, nothing works (and some commands ran for 20 minutes). It looks like some net60 environment change in my work PC has broken the build process. Out of desperation I upgrade from net60 to net80 and update all the packages and publish settings to match. Now it compiles and runs, but publishing still fails. I eventually see a clue: "the following workloads must be installed: wasm-tools-net8". I run the command dotnet workload install wasm-tools-net8 and restart Visual Studio, and now it publishes.

March 2025 — Update Chrome SOLVED (1.5 hours)
I try to install Adblock Plus in Chrome on my work PC and it says that Chrome is too old. I attempt to update Chrome and it fails with some hex code ending in 2. The update is repeated a few times with several elevation prompts (which seem to work), but the error is unchanged. I log in as local Administrator, which for some reason causes 2 minute pauses between logoff and logon. The update continues to fail. I download the Chrome installer and run it, and the update seems to work, with only one elevation prompt. I restart Chrome and it requests another update, which luckily this time downloads and installs without error. I log back on as my regular user (after another 2 minute black-screen stall) and start Chrome, and my home page fails to load because all the network shares are dead. I reboot and now Chrome seems to be updated and working normally. Now I can install Adblock Plus, which also removes the garish ads plastered by default all over popular news web sites.

March 2025 — PC Speakers fault FAILED (1 hour)
The small Logi z213 speakers on my wife's PC stop working. After an hour of experiments I determine that the wire inside the sealed plastic on the stereo jack is loose, as wiggling it around causes the sound to stop and start. The right speaker is stuck on half volume. This is a classic SPF, a Single (Stupid) Point of Failure, that can't be repaired and renders the whole device practically useless. The $70 speakers only lasted about one year.

February 2025 — Visual Studio 2022 Licensing SOLVED (5 hours)
Azure Storage Explorer was telling me that some accounts needed reauthentication, which I did and it seemed to work. About an hour later I launch Visual Studio 2022 and it tells me that my trial period has expired. I spend an hour entering my usual credentials over and over and over, but the problem remains unchanged. I notice that the account dialog says "This product is licensed to:" followed by the wrong address. For another hour I search for how to change the license address, or cancel it, or do anything useful. The next day, with a fresh mind, I bumble into a fix by changing the Account Options combo from Windows authentication broker to Embedded web browser, relaunching VS and entering my credentials again, which causes everything to come good: the Accounts screen is sensible and shows no alerts. I then revert the combo and relaunch VS, which causes more authentication prompts, but they are also finally satisfied and all problems are solved with the original settings restored. By looking at some files in %TEMP%\servicehub\logs I see a "Could not find a WAM account" message which led to This Community Issue, which is solved using exactly the same workaround that I found by dumb luck.

February 2025 — Dynamic C# Source Compile SOLVED (3 hours)
In July 2014 I posted an article titled Dynamic C# code compilation where I use the CSharpCodeProvider class to compile C# source code at runtime, then invoke a method inside the generated Assembly. Unfortunately that technique only works in the full .NET Framework and is deprecated in .NET Core. I went looking for modern equivalent code so I could update the post. After two hours of searching I found several samples, none of which worked for various reasons; even the MSDN sample code produced a PlatformNotSupportedException, and the docs confusingly confirmed that was expected for .NET Core 2+. I eventually found some (non-working) sample code that used the CSharpCompilation and related classes, which I tweaked until it almost worked. It produced compile errors like Predefined type 'System.Object' is not defined or imported because the basic references were not found. I eventually found the minimum required references to make the compile work. For more information see Dynamic C# Code Compilation.
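For reference, a minimal sketch of the CSharpCompilation technique follows. It requires the Microsoft.CodeAnalysis.CSharp NuGet package; the class and method names are my own illustration, and the reference list shown is the sort of minimum set I mean (the exact assemblies can vary by runtime and by what the compiled source uses):

```csharp
// Minimal runtime compilation with Roslyn for .NET Core.
// Requires the Microsoft.CodeAnalysis.CSharp NuGet package.
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

static class DynamicCompiler
{
    public static Assembly Compile(string source)
    {
        var tree = CSharpSyntaxTree.ParseText(source);

        // Build a minimum set of references from the running runtime's folder.
        // Without these you get "Predefined type 'System.Object' is not defined" errors.
        string runtimeDir = Path.GetDirectoryName(typeof(object).Assembly.Location)!;
        var refs = new[] { "System.Private.CoreLib.dll", "System.Runtime.dll", "System.Console.dll" }
            .Select(name => MetadataReference.CreateFromFile(Path.Combine(runtimeDir, name)));

        var compilation = CSharpCompilation.Create(
            "DynamicAssembly",
            new[] { tree },
            refs,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        // Emit to memory and load the resulting Assembly for reflection/invocation.
        using var ms = new MemoryStream();
        var result = compilation.Emit(ms);
        if (!result.Success)
            throw new InvalidOperationException(string.Join("\n", result.Diagnostics));
        return Assembly.Load(ms.ToArray());
    }
}
```

A typical use is to compile a small static class and invoke a method on it via reflection, just as the old CSharpCodeProvider code did.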

January 2025 — PC Sound Problems SOLVED (2 hours)
Playback through the line-out stops working and only the speakers in the screen work. All sound recording stops because Audacity says there are no input devices. After an hour of inspecting Windows settings and cables I discover that the power cable and another have been pulled out of the small mixer that sits out of sight behind my screen. I think a cat got stuck behind my desk and got tangled in the cables trying to get out. I put all the cables back and reboot. Playback now works, but not recording. After more than an hour of random stuffing around with settings in Windows and Audacity, the recording device is visible again, I can select it, and recording works.

January 2025 — TV Sound Problems PENDING (2 hours)
Our LG 55" TV only has two ways of playing sound: (1) HDMI-ARC, which is typically used with a sound bar or similar; (2) an optical out plug. We don't own an HDMI-ARC compatible playback device, just a $99 set of 5 speakers which have worked pleasantly for over 20 years using their ancient RCA plugs. Luckily, the cheap speakers have an optical input, but connecting it to the TV's optical out produces no sound. I have no way of determining which part is failing. We have no choice but to eventually buy a sound bar or something similar, but they cost anything from $200 to $2000 and it's just a burden of brainpower and money to choose something.

January 2025 — .NET 9 Upgrade SOLVED (3 hours)
Since .NET 9 had been out for a few months, I decided to bulk upgrade my main hobby suite named Hoarder from .NET 8 to 9. I did this purely as an academic exercise to keep up with the latest versions of everything and play with them. The upgrade wizard went through all 10 projects (including a Blazor one) and produced only a few low-level warning messages. The solution compiled okay first time. At runtime everything was working until I got to the WPF desktop program, which died with an incomprehensible Exception deep inside the CodePage loading library. There were no matching results from web searches and I fiddled around for ages without success. I ran out of ideas and gave up in disgust. Six weeks later I revisited the problem, hit run on the WPF program and it ran perfectly. I don't know what was wrong and I don't know what went right.

Tuesday, January 14, 2025

Easing into retirement

I posted the following message into the ozdotnet mailing list in October 2024.


Hello everyone, I have an announcement and tale that might interest you.

I’m easing into retirement.

Companies I’ve been working for are being sold, retired or are no longer developing new software. I only have a few hours of ad-hoc maintenance work each week. Running out of legacy work would drive a regular dev to seek new work, but in my case, I declined to create a LinkedIn page, or send out feelers through contacts for new work, because… I’m burnt out.

Why? That’s what pleases me to imagine might interest you.

I learned to code in 1975 and became an official programmer in 1981. I wrote FORTRAN, ALGOL, COBOL, assemblers and various JCLs and scripting languages on Honeywell, FACOM and IBM mainframes. Things were simpler back then of course because you moved inside the ecosystem of a particular manufacturer and had high-level support and voluminous and accurate documentation. If you wanted to solve a problem or do something edgy, then an answer was nearby. It was a different simpler world, but … everything worked.

Now, well into the 21st century of IT, everything doesn’t work. My wife often hears me shout from the other end of the house “Everything f***ing doesn’t work”. I also only semi-jokingly say I’ll have these words carved into my gravestone: “Everything f***ing doesn’t work all the f***ing time”.

Overall, what has burnt me out is complexity and instability. I’ll break those topics down a bit.

Everything in modern IT is complicated and fragile. Every new toolkit, platform, pattern, library, package, upgrade, etc is unlikely to install and work first time. I seem to spend more time getting things working and updated than I do actually writing software. In a typical working month I might have to juggle Windows, Linux, Android, iOS, macOS, Google, Amazon, Azure, .NET, Python, PowerShell and C++, and they all have different styles and cultures. Software engineering has fractured into so many overlapping pieces that I’m tired of trying to maintain competence in them all.

That leads naturally to the problem of dependencies. Just having so many moving parts with so many different versions available produces dependencies more complex than abstract algebra. How many times have you hit some kind of compile or runtime version conflict and spent hours trying to dig your way out of it? (A special salute to Mr Newtonsoft there!) Or you install A, but it needs B, which needs C, and so on.

I often hit incomprehensible blocker problems for which web searches produce absurd and conflicting suggestions which don’t work anyway. All I can do is futz around and change things randomly until things work again. I don’t know what went wrong and I don’t know what went right.

The Web — Browsers, HTML, CSS, JavaScript, the HTTP protocol, JSON and REST can all burn for eternity in fusing hellfire. About ten years ago I told my customers I refused to write any more web UI apps. However, I was forced to do so a few times and I’m still scarred by the horror. It’s just over 30 years since the web became public and we’re still attempting to render serious business apps using dumb HTML. HTML5 is the joke of the century (so far). I still lament the loss of Silverlight.

Git — Someone is lucky I don’t own a gun.

Fads — An exercise for the reader: name all the platforms, kits, patterns and frameworks that you know were once the coolest thing and now might only be found in history articles. An advanced exercise is to speculate on which currently cool things will be gone soon.

Finally, here is a list of typical things that give me the shits, just as they pop out of my head.

  • Attempting to compile projects that have been idle for a year or more will usually fail due to changed dependencies or deprecations and it can take hours to get them going again.
  • I develop and test something with great care, then deploy it and it crashes. This is part of the general “it works on my machine” disease.
  • I can stop successful work on Friday night, then resume on Monday morning and everything utterly fails.
  • My USB microscope and music recording both stopped working recently, and it took me a week to discover that they were blocked by Windows 11 app security (I thought it was a hardware or incompatibility problem).
  • Security! Walls, barriers and hurdles of security everywhere to crash through. Yes, I know we need security everywhere to stop the black hats, but it’s also stopping developers. Lord knows how many times I’ve hit run or debug on my own PC and I get “Access denied” and hours of research will be required. I’m also fed-up with ceaseless 2FA requests via email or SMS.
  • Everything about mobile devices. The ludicrous variety of devices and brands makes app development a nightmare. Then you must struggle through the variety of publishing processes.
  • My final entry is simply the tiny "thousand cuts" that torture you during development: version mismatches, inconsistent behaviour, strange errors, editor quirks, missing files, etc. All the little personal problems that slip between the cracks of the bigger issues I've previously mentioned. Your mileage may vary.

In summary, being a software engineer is now so exhausting that after 40+ years of a generally enjoyable career immersed in programming and computer technology I’ve reached a point I never thought would arrive… I’m burnt out. Even working on my hobby projects has become a burden because they suffer from many of the impediments previously mentioned.

I still plan to attend some upcoming conventions and Meetups, and I’ll be watching the forum, but my posts will diminish because I’m probably out trying to prevent the garden and house disintegrating back into the earth from whence they came.

Greg Keogh

Saturday, January 11, 2025

Web API Status Codes and Errors

Overview

This post is an update to one I made in 2018 which complained about the clumsy way REST style web services use status codes to report success or failure for various types of requests. The key point I'm going to make is that the convention of using status codes like 201 (Created), 204 (NoContent), 400 (BadRequest), etc, is inappropriate for expressing the results from a typical business service.

The list of HTTP Status Codes contains a bewildering set of values that are mostly related to networking, web server internals, or other esoteric conditions. There is no sensible mapping of these status codes to typical business processing results. You could argue that a 400 (BadRequest) might indicate a bad parameter in a request, but the definition of 400 is far broader than that. If you get a 404 (NotFound), then exactly what is "not found"? Is it some file or database row, or the whole URI? Many other status codes are even trickier to assign some sort of business meaning.

The worst thing about the zoo of possible response codes is that the client has the burden of switching code paths to handle them all, and hoping they haven't missed any. Polite service authors will publish OpenAPI documents describing all their responses, but it can be time-consuming to safely turn large amounts of documentation into code (although there are various tools that convert OpenAPI into client-side code).

Only 200 (OK)

I eventually got fed up with thinking about status codes and decided to return only 200 (OK) from my services. This indicates that the request succeeded without any kind of external problem. It does not indicate whether the request succeeded in the business logic sense; some extra standard properties in the response body convey that (explained shortly).

Any response code other than 200 indicates something went seriously wrong unrelated to the service logic, probably a network or web server failure. In this case the client app would probably show a pink screen or similar to indicate a serious problem.

If your service only returns 200, then how do you indicate if the business logic of the request succeeded or not?

I think the simplest way of returning business processing result information is to have some standard properties present in every response. Here is part of a typical error response from one of my services:

{
  "code": 2,
  "title": "Customer create failed",
  "detail": "Customer with key '806000123' name 'Contoso Pty Ltd' already exists.",
  // See below for more details of what could be here...
  "data": null  // Success data would go here
}

Exactly what standard properties to place in the response is your choice, and there are many articles that argue about this matter. In recent years there have been attempts to standardise error reporting properties, such as RFC 7807. The full RFC error response recommendations may be overkill for most business scenarios, but it's worth considering following some of the naming conventions.

Coding Details

What follows is specific to the C# language, but it can easily be applied to any other modern language.

There are two ways to return standard response properties. Firstly, define a base class with the properties and derive all responses from the base class. This causes the standard properties to merge into the response properties at the root level, which might look a bit confusing. Secondly (my preference) is to have a generic response class like this:

public class ResponseWrap<T>
{
  public ResponseWrap(T data)
  {
    Data = data;
  }
  public ResponseWrap(int code, string? title, string? detail = null)
  {
    Code = code;
    Title = title;
    Detail = detail;
    Data = default!;
  }
  public int? Code { get; set; }
  public string? Title { get; set; }
  public string? Detail { get; set; }
  public bool HasError => Code != null;
  public T Data { get; set; }
}

Note how the standard properties are at root level, and so is a generic Data property which is expected to contain any data that is in a success response. This results in a simple JSON shape common to all service responses. Clients can inspect the HasError property to determine if the business logic of the request succeeded or not. The exact code is flexible and can be adjusted according to coding preferences, but the important fact is that there are some standard root properties and one of them indicates success or failure. In case of success, the Data property will contain the return data and the other root properties will be null; in case of failure the reverse is true.

Returning values from service methods becomes easier as there is no need to construct different response status codes and types. The pair of constructors on the response class simplifies service code to look like this:

if (cust == null)
  return new ResponseWrap<Customer?>(2, $"Customer {id} not found");
else
  return new ResponseWrap<Customer?>(cust);
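On the client side, consuming this shape might look like the following sketch. The endpoint URL and Customer type are hypothetical, and the client declares its own mirror of the response class (with settable properties so System.Text.Json can populate it without constructors):

```csharp
// Sketch of a client consuming the common response shape.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class Customer { public string? Key { get; set; } public string? Name { get; set; } }

// Client-side mirror of the service's ResponseWrap<T>.
public class ResponseWrap<T>
{
    public int? Code { get; set; }
    public string? Title { get; set; }
    public string? Detail { get; set; }
    public T? Data { get; set; }
    public bool HasError => Code != null;
}

static class Client
{
    static readonly JsonSerializerOptions Options = new() { PropertyNameCaseInsensitive = true };

    public static async Task<Customer?> GetCustomerAsync(HttpClient http, string id)
    {
        // Any non-200 status indicates an infrastructure failure, not business logic.
        using var resp = await http.GetAsync($"api/customers/{id}");
        resp.EnsureSuccessStatusCode();
        string json = await resp.Content.ReadAsStringAsync();
        var wrap = JsonSerializer.Deserialize<ResponseWrap<Customer>>(json, Options)!;
        if (wrap.HasError)
            throw new ApplicationException($"{wrap.Title}: {wrap.Detail}");
        return wrap.Data;
    }
}
```

The single branch on HasError replaces the usual switching over many possible status codes.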

.NET Web API Global Errors

Unhandled errors in a .NET Web API controller will result in a 500 (InternalServerError) and a response body that doesn't contain any useful diagnostic information. In .NET Core services you can use a global exception handler to trap unexpected errors and convert them into the standard response so that clients always receive the same shaped JSON response bodies.

This is entirely optional, as letting the error propagate back to the client as a status 500 will clearly indicate that something went seriously wrong, and the response body might be irrelevant anyway. The service should have internally logged the full error details so that developers can diagnose the problem.
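As a sketch only, the global handler wiring in a minimal-hosting Program.cs could look roughly like this. The code value -1 and the title text are illustrative placeholders, not a fixed convention, and returning 200 here simply follows this post's "only 200" approach (keeping 500 is equally valid):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
var app = builder.Build();

// Convert any unhandled exception into the standard response shape so
// clients always receive the same JSON properties.
app.UseExceptionHandler(errorApp =>
{
    errorApp.Run(async context =>
    {
        var feature = context.Features.Get<IExceptionHandlerFeature>();
        context.Response.StatusCode = StatusCodes.Status200OK;
        context.Response.ContentType = "application/json";
        await context.Response.WriteAsJsonAsync(new
        {
            code = -1,                            // Illustrative "unexpected error" code
            title = "Unexpected server error",
            detail = feature?.Error.Message,
            data = (object?)null
        });
    });
});

app.MapControllers();
app.Run();
```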

Summary

By reducing the REST responses down to a single status code and putting standard properties in the response, it could be argued that I'm hijacking the REST conventions and turning them into a toy protocol. I can't argue with that, but as a developer I really need a simple protocol that returns only success or failure information, without the mental bother of mapping status codes onto my business logic (where such a mapping is rarely meaningful). It also simplifies the coding on both the service and client sides.

Ironically, the .NET SOAP protocol I used back in the 2000s was actually similar to what I'm doing now. I'm comforted to know that I'm not the only person who has considered ignoring status codes, as I recently used the API of a parcel shipping company that only returned status 200 and had a root property named code in all their responses to indicate success or failure. In their case, curiously, code=300 indicated success.


Friday, April 5, 2024

ParallelForAsync alternatives

February 2025 Note — I just found a Microsoft article published in January 2023 that discusses how to throttle Task concurrency. I've appended a section below to discuss the technique the example uses.


The Parallel.For and similar methods available in .NET Framework 4.0+ and .NET Standard 2.0+ are designed to simplify parallelism of CPU intensive processing. The Task and related classes (released a bit earlier), when combined with the async and await C# keywords, facilitate simple coding patterns for fine-grained concurrency. Those class families are intended to help with different types of problems, but there are times when it's convenient to combine them, and a typical case I have is to query and update large numbers of Azure Blobs as efficiently as possible.

NOTE: The Parallel.ForEachAsync family in .NET 6+ provides a simple and convenient way to combine parallelism and fine-grained concurrency, but I couldn't use that method because my library was compelled to target .NET Standard 2.0. This article describes the search for an older alternative technique.

ForEach and await NO

A naïve code sample might look like this:

Parallel.ForEach(nameSource, async (name) =>
{
    await InspectBlobAsync(name, cancelToken);
});

Don't do that. I tried this with a loop over 100 Tasks and they all start simultaneously and run in parallel. The async lambda is effectively async void, so ForEach only waits for each body to reach its first await, not to complete, and the ForEach statement quickly drops through as soon as all the Tasks are started. I haven't tried this with 1000 Tasks, or more, but I expect some kind of resource exhaustion or strange crashes to occur eventually as you hit some runtime or operating system limit.

Task WhenAll NO

The following code seems like a natural simple solution:

var tasks = nameSource.Select(name => InspectBlobAsync(name, cancelToken));
await Task.WhenAll(tasks);

Unfortunately, the Task ballistics in this case are similar to the previous use of ForEach. All the Tasks start simultaneously, with the difference being that the WhenAll waits for all the Tasks to finish (by definition of how await works). Using WhenAll is a perfectly valid and popular coding pattern, but only when the number of Tasks is less than some "reasonable" limit, which will depend on what sort of work the Tasks are doing in your environment.

Semaphore throttling NO

You will find many articles that use a SemaphoreSlim to throttle a while loop so that a maximum number of Tasks run at once. It's like a gate that opens to let a new Task in as each leaves. I found this technique worked perfectly and planned to release it, but then I discovered that cancellation was a serious problem. I couldn't find a sensible way to signal a cancellation token and get the loop to gracefully exit and ignore waiting Tasks. There may be a way, but it was too much bother so I abandoned the semaphore technique.
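For completeness, the basic pattern those articles describe looks something like this sketch (the names are my own illustration); as noted above, graceful cancellation is its weak point:

```csharp
// SemaphoreSlim throttling: a gate of size maxConcurrent limits how many
// Tasks run at once; each Task releases the gate as it finishes.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

static class SemaphoreThrottle
{
    public static async Task RunAsync(IEnumerable<string> names, Func<string, Task> work, int maxConcurrent = 4)
    {
        using var gate = new SemaphoreSlim(maxConcurrent);
        var tasks = names.Select(async name =>
        {
            await gate.WaitAsync();          // Wait for a free slot.
            try { await work(name); }
            finally { gate.Release(); }      // Open the gate for the next Task.
        }).ToList();
        await Task.WhenAll(tasks);
    }
}
```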

ConcurrentQueue YES

I finally decided to use the ConcurrentQueue class. I load the queue with the names of the thousands of Blobs, which I think is acceptable and not resource stressful. I then create some Tasks, each containing a simple while loop which dequeues (aka pulls) the names off the queue and asynchronously processes them. The skeleton code looks like this:

int concurrentCount = 4;
using (var cts = new CancellationTokenSource())
{
  var queue = new ConcurrentQueue<string>(nameSource);
  var pullers = Enumerable.Range(1, concurrentCount).Select(seq => QueuePuller(seq, queue, cts.Token));
  await Task.WhenAll(pullers);
}

async Task QueuePuller(int seq, ConcurrentQueue<string> queue, CancellationToken cancelToken)
{
  while (queue.TryDequeue(out string name))
  {
    try
    {
      await InspectBlobAsync(name, cancelToken);
    }
    catch (OperationCanceledException)
    {
      // Report a cancellation. Break out of the loop and end the puller.
      break;
    }
    catch (Exception ex)
    {
      // Inspect the error and decide to report it or break and end the puller.
    }
  }
}

Although there is more code when using ConcurrentQueue, I feel that it's simple and sensible. The concurrentCount value can be adjusted as needed or set according to the number of cores, and it becomes the parallelism throttle (like using a Semaphore, but simpler). Cancellation works simply as well, as signalling the cancel token causes all of the pullers to gracefully exit. The full experimental C# source code is available at https://dev.azure.com/orthogonal/ParallelTasks.

BlockingCollection

It's worth mentioning the BlockingCollection class for more sophisticated scenarios. It can be used in a similar pattern to the ConcurrentQueue class, as each can have items pushed to and pulled from its internal collection, but the former provides more control over how this happens. See the docs for more information and run searches for samples.
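A sketch of the same puller pattern using BlockingCollection might look like this (illustrative names; note that GetConsumingEnumerable blocks a thread-pool thread while waiting, which is the trade-off for supporting producers that keep adding items while the pullers run):

```csharp
// BlockingCollection puller pattern: GetConsumingEnumerable yields items
// until CompleteAdding is called and the collection is drained.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class BlockingPullers
{
    public static async Task RunAsync(IEnumerable<string> names, Func<string, Task> work, int pullerCount = 4)
    {
        using var coll = new BlockingCollection<string>();
        foreach (var name in names)
            coll.Add(name);
        coll.CompleteAdding();   // No more items; pullers drain the collection and exit.

        var pullers = Enumerable.Range(1, pullerCount).Select(async _ =>
        {
            foreach (var name in coll.GetConsumingEnumerable())
                await work(name);
        }).ToList();
        await Task.WhenAll(pullers);
    }
}
```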


UPDATE — Microsoft throttling example

The Throttling section of the Microsoft Learn article Consuming the Task-based Asynchronous Pattern provides an example of how to throttle a large number of asynchronous operations.

The throttling technique is based upon a List of Tasks that is initially loaded with a starter set up to the throttle limit; then, as each Task completes, it is removed and one of the remaining pending operations is started and added to the List. In a sense it's using the collection as an internal queue.

The code is a little bit verbose and unclear to look at, but it does work correctly. The sample code does not mention cancellation, and I haven't got time to test different error handling or cancellation scenarios. I'll leave that as an exercise for the reader. I'll post an update on that if I learn anything useful in the future.
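My own condensed sketch of the technique the article describes looks like this (illustrative names, with error handling and cancellation kept minimal, as per the original sample):

```csharp
// Task.WhenAny throttling: keep at most maxConcurrent Tasks in flight,
// replacing each one as it completes.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class WhenAnyThrottle
{
    public static async Task RunAsync(IEnumerable<string> names, Func<string, Task> work, int maxConcurrent = 4)
    {
        var pending = new Queue<string>(names);
        var inFlight = new List<Task>();
        while (pending.Count > 0 || inFlight.Count > 0)
        {
            // Top up the in-flight list to the throttle limit.
            while (inFlight.Count < maxConcurrent && pending.Count > 0)
                inFlight.Add(work(pending.Dequeue()));

            // Wait for any one Task to finish, then drop it from the list.
            var done = await Task.WhenAny(inFlight);
            inFlight.Remove(done);
            await done;   // Propagate any exception from the completed Task.
        }
    }
}
```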


Friday, February 16, 2024

DateTime Ticks Breakdown

For more than 20 years I've been looking at DateTime.Ticks Int64 values and they always look like 63nnnnnnnnnnnnnnnn, and I was wondering how long I have to wait until the leading digits become 64. A quick calculation reveals that we must wait until Monday, 29 January 2029 17:46:40. If you want to wait until the leading digit changes from 6 to 7 then we have to wait until Saturday, 20 March 2219 04:26:40.

If you're wondering when the 64-bit signed Ticks value will overflow and cause a new epochal form of millennium bug, then we have to wait about 29,200 years.
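These boundary dates are easy to verify by constructing DateTime values directly from the round tick counts:

```csharp
using System;
using System.Globalization;

var t64 = new DateTime(640_000_000_000_000_000L);   // leading digits become 64
var t70 = new DateTime(700_000_000_000_000_000L);   // leading digit becomes 7
Console.WriteLine(t64.ToString("dddd, d MMMM yyyy HH:mm:ss", CultureInfo.InvariantCulture));
// Monday, 29 January 2029 17:46:40
Console.WriteLine(t70.ToString("dddd, d MMMM yyyy HH:mm:ss", CultureInfo.InvariantCulture));
// Saturday, 20 March 2219 04:26:40

// Years until the raw Int64 Ticks value overflows
// (using an average Gregorian year of 31,556,952 seconds).
double overflowYears = (double)long.MaxValue / TimeSpan.TicksPerSecond / 31_556_952;
Console.WriteLine($"{overflowYears:N0}");   // ≈ 29,227
```

Note that DateTime.MaxValue stops at the end of year 9999, long before the raw Int64 would overflow, so the "bug" is purely hypothetical.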

Here is a nice breakdown of the digits in a Ticks value. Remember that the least significant digit (rightmost) is 100 nanoseconds (10⁻⁷ seconds).

UTC Now
Friday, 16 February 2024 06:56:32 ║ 2024-02-16T06:56:32.2446131Z

Ticks
638436633922446131 ║ 638,436,633,922,446,131

Seconds
63,843,663,392

Digit   Power   Seconds         Span                Years
1       -7      0.0000001       00:00:00.0000001
3       -6      0.000001        00:00:00.000001
1       -5      0.00001         00:00:00.00001
6       -4      0.0001          00:00:00.0001
4       -3      0.001           00:00:00.001
4       -2      0.01            00:00:00.01
2       -1      0.1             00:00:00.1
2       0       1               00:00:01
9       1       10              00:00:10
3       2       100             00:01:40
3       3       1000            00:16:40
6       4       10000           02:46:40
6       5       100000          1.03:46:40          0.003
3       6       1000000         11.13:46:40         0.032
4       7       10000000        115.17:46:40        0.317
8       8       100000000       1157.09:46:40       3.169
3       9       1000000000      11574.01:46:40      31.688
6       10      10000000000     115740.17:46:40     316.881

Thursday, December 7, 2023

Visual Studio conditional compile any file contents

The MSBuild tools and Visual Studio provide a variety of techniques for conditionally compiling source code based upon the active configuration. The #if preprocessor directives allow lines of code to be included or excluded based upon defined symbols. The Condition attribute in project files allows whole files to be included or excluded from the build.

Using #if is particularly useful, but it only works inside C# source code files. Over recent years I have increasingly wanted to apply conditional compilation to other types of project text files such as .html, .js, .txt, etc, which need slightly different contents according to the active configuration. Sometimes I have to manually alter the contents of the files before publishing from Visual Studio to different hosts with different configurations. Editing the files manually is tedious and error-prone. In late 2023 I finally got fed-up with this and found a way of automatically editing the contents of arbitrary text files in a build, based upon the configuration.

Note that if the contents of the text files differed significantly for different configuration builds then it could be better to maintain separate files and use Condition to include the desired files. Unfortunately for me, usually only a few lines might change in the text files, so a #if technique would be preferable.

A skeleton sample C# solution is available which demonstrates all the techniques and tricks required to make this work. There are comments prefixed with a ▅ character (easy to see!) in all important parts of the solution files to explain what is happening. Full source is available from this Azure DevOps repository:

ConditionBuildDemo

In a nutshell, the key trick to making this work is to use a T4 template (a .tt file) to generate each text file that must be customised in some way for different build configurations. I feel this is a little bit clumsy, but in consolation, this is exactly why T4 templates were invented and they also blend smoothly into Visual Studio projects.

Another trick is to pass the name of the current $(Configuration) into the templates so they can be used in the generation logic. The rather obscure <T4ParameterValues> project element can be used for that.
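For example, the project-file plumbing looks roughly like this (the parameter name BuildConfig is my own choice):

```xml
<!-- In the .csproj: expose the $(Configuration) build property to T4 templates. -->
<ItemGroup>
  <T4ParameterValues Include="BuildConfig">
    <Value>$(Configuration)</Value>
    <Visible>false</Visible>
  </T4ParameterValues>
</ItemGroup>
```

Inside the .tt file the value is then received with a matching directive such as <#@ parameter name="BuildConfig" type="System.String" #>, after which the template logic can branch on the configuration name.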

Yet another trick is to make the templates generate when the project starts to build, not just when the template files change. The <Touch> and <TransformOnBuild> project elements help with that.

There are a few more small technical details that I have skipped for brevity, but the comments in the solution files explain everything.

Friday, October 13, 2023

Custom build properties and items

Common Properties

Microsoft MSBuild processing provides a convenient way of factoring out common build properties that might be shared by many projects. I read about this feature years ago, then forgot about it, so I'm posting this as a reminder to myself and any other developers who might be interested. See Customize the build by folder for information on how you can place the file Directory.Build.props in a suitable parent folder of projects and it will silently be found and used by all child projects. Many projects in a large solution can share build properties by placing a .props file in the solution folder, with contents like this example:

<Project>
  <PropertyGroup>
    <Version>1.2.3</Version>
    <TargetFrameworks>netstandard2.0;net6.0</TargetFrameworks>
    <Authors>The Big Company</Authors>
    <Company>The Big Company</Company>
    <Copyright>Copyright © 1992-2023 The Big Company</Copyright>
    …etc…
  </PropertyGroup>
</Project>

Be careful though if you have other unrelated child projects that might inherit the properties. I had some test projects of mixed types that were spoiled by the common properties, so I had to either move them to a different non-child folder or manually put the correct local override values into their project files.
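One way to keep a child project out of the inheritance chain (an alternative to moving it, based on the documented rule that the upward search stops at the first Directory.Build.props found) is to give its folder its own file:

```xml
<Project>
  <!-- An empty Directory.Build.props here stops the upward search, so the
       parent folder's common properties are not inherited below this point. -->
</Project>
```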

Common Items

Another common requirement is to put auto-calculated metadata values into the build process. In my case I wanted to put the build time and build machine name into the compiled assembly. After many experiments I discovered the easiest technique is to create a file named Directory.Build.targets in the solution folder next to the custom .props file, with contents like this example:

<Project>
  <ItemGroup>
    <AssemblyAttribute Include="System.Reflection.AssemblyMetadataAttribute">
      <_Parameter1>BuildMachine</_Parameter1>
      <_Parameter2>$(COMPUTERNAME)</_Parameter2>
    </AssemblyAttribute>
    <AssemblyAttribute Include="System.Reflection.AssemblyMetadataAttribute">
      <_Parameter1>BuildTime</_Parameter1>
      <_Parameter2>$([System.DateTime]::Now.ToString("yyyy-MM-dd HH:mm:ss K"))</_Parameter2>
    </AssemblyAttribute>
  </ItemGroup>
</Project>

In this case I'm generating a pair of AssemblyMetadata attributes into the build process. You can generate other attributes as needed and web searches will reveal similar techniques. I wasn't previously aware that a .targets file would be found the same way as the .props file, but I tried it, and it works (I presume it's documented somewhere).
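As a quick sanity check (a sketch, not part of the build files), the generated attributes can be read back at runtime with reflection:

```csharp
using System;
using System.Reflection;

// Enumerate the AssemblyMetadata attributes baked in by Directory.Build.targets.
var attrs = Assembly.GetExecutingAssembly()
    .GetCustomAttributes<AssemblyMetadataAttribute>();
foreach (var a in attrs)
{
    Console.WriteLine($"{a.Key} = {a.Value}");
}
```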

Saturday, August 19, 2023

Enumerate Azure Storage Accounts (New)

In April 2021 I posted an article titled Enumerate Azure Storage Accounts which explained how to enumerate all of the storage accounts in an Azure subscription, then drill down into all the containers and blobs, and tables and rows. This sort of code can be used as the basis of some useful custom reporting tools.

Unfortunately, the old code uses deprecated classes, so after a few concentrated hours of study and suffering I found modern replacement code. The code linked below is a skeleton of the modern way to enumerate storage accounts and their contents. For more details see the Azure SDK Samples.

An example C# console command that uses the new Azure sdk libraries can be downloaded from here:

SubscriptionReaderSample.cs

The .cs file has been renamed as a .txt file to avoid security blocks. Change the dummy Tenant Id, Client Id and Client Secret to match your Azure subscription. See the old article for a description of where the Ids can be found in the Azure Portal.
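For orientation before opening the sample, the skeleton of the modern enumeration looks roughly like this (a sketch assuming the Azure.Identity, Azure.ResourceManager and Azure.ResourceManager.Storage packages; the credential values are placeholders):

```csharp
using System;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Storage;

var cred = new ClientSecretCredential("TENANT-ID", "CLIENT-ID", "CLIENT-SECRET");
var arm = new ArmClient(cred);
await foreach (var sub in arm.GetSubscriptions().GetAllAsync())
{
    await foreach (StorageAccountResource acct in sub.GetStorageAccountsAsync())
    {
        Console.WriteLine($"{sub.Data.DisplayName} / {acct.Data.Name}");
    }
}
```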

Wednesday, August 16, 2023

Azure Table 'batch' (transaction) operations

To bulk insert rows in the old Azure Table Storage API you would create a TableBatchOperation class and fill it with TableOperations, then call ExecuteBatchAsync. The batch could contain different operations, but bulk inserts were my most common need.

The batch related classes do not exist in the new Table API, and finding the replacement code was a dreadful chore. I didn't know the expression batch had been replaced with transaction, so my searches for "batch" produced endless useless old results and discussions. After searching until my fingers bled I stumbled across a hint that batch processing was now called transaction processing. Useful search results then started to arrive, but it took more time to finally find some definitive sample code that actually worked. The best summary page I found was here:

azure-sdk-for-net Transactional Batches

My sanity check code in LINQPad looks like this:

async Task Main()
{
	var tsclient = new TableServiceClient("YOUR STORAGE CONNECT STRING");
	var tclient = tsclient.GetTableClient("TestTable1");
	await tclient.CreateIfNotExistsAsync().Dump();
	var trans = new List<TableTransactionAction>();
	var rows = Enumerable.Range(1, 10).Select(i => new MockRow()
	{
		Id = i,
		Name = $"Name for {i}"
	} ).ToArray();
	trans.AddRange(rows.Select(r => new TableTransactionAction(TableTransactionActionType.Add, r)));
	await tclient.SubmitTransactionAsync(trans).Dump();
}

class MockRow : ITableEntity
{
	public string PartitionKey { get; set; } = "P1";
	public string RowKey { get; set; } = Guid.NewGuid().ToString("N");
	public DateTimeOffset? Timestamp { get; set; }
	public ETag ETag { get; set; }
	public int Id { get; set; }
	public string Name { get; set; }
}

Transaction success returns status 202 and a list of sub-responses with status 204. I deliberately caused an error while inserting a transaction of 5 rows, by setting the second RowKey with invalid characters. As advertised, no rows are inserted and a TableTransactionFailedException is thrown with a detailed message like this:

2:The 'RowKey' parameter of value 'Bad\Key' is out of range.
RequestId:c10901e2-e002-009a-7aba-cf9069000000
Time:2023-08-15T20:55:37.4158811Z
 The index of the entity that caused the error can be found in FailedTransactionActionIndex.
Status: 400 (Bad Request)
ErrorCode: OutOfRangeInput
Additional Information:
FailedEntity: 2
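Catching that failure in code is straightforward; this sketch assumes the trans list from the sample above:

```csharp
try
{
    await tclient.SubmitTransactionAsync(trans);
}
catch (TableTransactionFailedException ex)
{
    // FailedTransactionActionIndex identifies the offending entity in the list.
    Console.WriteLine($"Entity {ex.FailedTransactionActionIndex} failed: {ex.ErrorCode}");
}
```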

Friday, February 3, 2023

localhost has been blocked by CORS policy

For many years I could not debug a Web API service and Blazor app on localhost. I would debug-run the service in one instance of Visual Studio 2022 and the Blazor app in another instance. The first client call to the service would return:

Access to fetch at 'http://localhost:5086/endpoint' from origin 'http://localhost:56709' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

At first this was only an inconvenience and I spent a couple of hours annually trying to overcome the problem. Hundreds of web search results produced a confusing jumble of suggestions, some ridiculously complex, some for the wrong platform, some absurd, and some seemingly sensible ones that did not work! In early 2023 the inability to debug on my desktop over localhost became a serious impediment and I swore to solve it. After approximately 3 solid hours of research and experiments I found the answer.

In the Program.cs code of the .NET 6 Web API project you insert an AddCors statement as soon as the builder is created.

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddCors(options =>
{
  options.AddDefaultPolicy(policy =>
  {
    policy.AllowAnyHeader().AllowAnyMethod().AllowAnyOrigin();
  });
});

Further down, as soon as the app is created, follow it with a UseCors statement.

var app = builder.Build();
app.UseCors();

I'm sure I've seen that fix code over the years, but it never worked. Perhaps I was on an older Framework, or maybe I had the code in the wrong sequence, or maybe any other number of subtle mistakes could have sabotaged my experiments. The most infuriating aspect of the problem is that the client error message tells you that the 'Access-Control-Allow-Origin' response header is missing, which is true, but it's not the core of the problem. I think I wasted hours in futile efforts to add the header.
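If allowing any origin feels too permissive, a tighter variant restricts the default policy to the known client origin (the port number here is just the one from the example error above):

```csharp
builder.Services.AddCors(options =>
{
    options.AddDefaultPolicy(policy =>
    {
        // Only the Blazor client's dev origin is allowed.
        policy.WithOrigins("http://localhost:56709")
              .AllowAnyHeader()
              .AllowAnyMethod();
    });
});
```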

The breakthrough clue about what was really causing the problem was revealed by Fiddler: it showed that the first web service call was an OPTIONS request to the service endpoint. The response was 405 Method Not Allowed, which helped steer me to the fix code listed above. It wasn't clear sailing, because I forgot to add the UseCors statement and I wasted half an hour wondering why the AddCors was having no effect.

You may ask why I didn't use Fiddler years ago … I did, but it would never show me the localhost traffic coming through Visual Studio. During my latest efforts, for some unexplained reason, I could see all the traffic and the problem was revealed.

I'm quite shocked to see that every client call to the web service is silently preceded by an OPTIONS request, which is then followed by the real request verb. Firstly I worry about the overhead, and then I worry about secret requests being made without my knowledge. I'll have to research when and why this happens, and I'll append a note if I find something useful.

UPDATE May 2023

Run a search for words like "OPTIONS CORS PREFLIGHT" and you will find explanations of the mechanism I complained about. The overhead of preflight requests is not as bad as it looks. There are optimisations and the concept of simple requests that make CORS less onerous than it seems. CORS is however a damn curse on developer testing.