Wow, a new Terminal in Windows 10

I just watched this video from Build 2019. It’s about time this happened, but it’s so cool to see what the team is up to


Parkrun, I finally signed up and did my first run…

I’ve previously watched the 5km Parkrun in Watford, Brighton and Hove, and in my hometown of Esbjerg in Denmark, but until Saturday, May 18th, 2019, I hadn’t done a single run 😨

I’ve done plenty of half marathons in training, a few of the Watford Half Marathons in February, and the Berkhamsted Half in March. So, this Saturday gone I went along with my barcode and found a rather tightly knit bunch of happy runners, welcoming newbies like myself. I felt very welcome, especially during the short run, where comments and smiles were exchanged. Just lovely, and I’ll definitely try to take part in these Saturday morning runs, wherever I am in the world. Not having done 5km runs for a long time, I was quite pleased I managed to finish in 22m37s. 🙂

You should try it in a place near you.

SOLID for Dummies

SOLID is a set of five design principles intended to make software designs easier to understand, maintain and extend.

Single Responsibility

Any class should have only a single responsibility, i.e. a single job or task. In simple terms, say you have a Vehicle class that processes any number of vehicle objects. If the processed vehicle objects need to be stored or serialised, both the storage and serialisation tasks could easily be added to the class, but they shouldn’t be. Only a change to one part of the software’s specification should be able to affect the specification of the class, so what happens if the storage type changes from a database to blob or table storage, or the serialisation changes from, say, XML to JSON? You’d be violating the Single Responsibility principle. Creating a new class for each of these tasks, and letting them handle the storage and serialisation respectively, works without violating the principle.
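A minimal sketch of that split (the VehicleStore and VehicleSerializer class names are illustrative, not from any real library):

```csharp
using System;

public class Vehicle {
    public string Registration { get; set; }
}

// Storage lives in its own class, so a move from a database to blob
// or table storage only ever touches VehicleStore.
public class VehicleStore {
    public void Save(Vehicle vehicle) {
        // Persist to the configured storage...
    }
}

// Serialisation lives in another class, so a switch from XML to JSON
// only ever touches VehicleSerializer.
public class VehicleSerializer {
    public string Serialize(Vehicle vehicle) =>
        "{\"Registration\":\"" + vehicle.Registration + "\"}";
}
```

Each class now has exactly one reason to change.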

Open Closed

Any software entity, such as a class, should be open for extension but closed for modification, i.e. the class should be easily extendable without modifying the existing class. In simple terms, say you have a Vehicle class that processes any number of different vehicle objects: Car, Van, Bus etc. For each processed vehicle object, the exhaust emission must be calculated, and assume that the calculation is different for each vehicle type. The class method that does the calculation must check each vehicle object for its actual type and base the calculation on that. Now, if all types of vehicles are known at the time the Vehicle class is written, no problem. In reality, however, new types of vehicles are introduced frequently, so the calculation method violates the Open Closed principle, because adding the calculation for a new vehicle type requires the calculation method to be modified. To address this, each vehicle type class must come with its own calculation method. This way, when you create a new vehicle type class, and assuming you inherit from the same abstract class or interface, you’re extending the Vehicle class, not modifying it.
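The idea can be sketched like this (the emission figures and class names are illustrative only):

```csharp
using System.Collections.Generic;
using System.Linq;

public abstract class Vehicle {
    // Each vehicle type carries its own emission calculation.
    public abstract double ExhaustEmission();
}

public class Car : Vehicle {
    public override double ExhaustEmission() => 120.0; // g/km, illustrative figure
}

public class Bus : Vehicle {
    public override double ExhaustEmission() => 640.0; // g/km, illustrative figure
}

public static class EmissionReport {
    // Adding a Van, Lorry etc. extends the hierarchy;
    // this method never needs to change.
    public static double Total(IEnumerable<Vehicle> vehicles) =>
        vehicles.Sum(v => v.ExhaustEmission());
}
```

A new vehicle type is a new subclass, not an edit to existing code.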

Liskov Substitution

Objects in an application should be replaceable with instances of their subtypes without altering the correctness of the application. If you have a class Vehicle and any number of subtypes thereof, say Car, Bicycle and Bus, you should be able to replace the super class with any of the subtypes. Typically, Inversion of Control (IoC) or Dependency Injection (DI) is used to inject the super type or any of the subtypes into a class, say a Controller. You need to ensure that the injected objects do not alter how the Controller processes and outputs results.
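A small self-contained sketch of that substitution (the Controller shape is illustrative; in practice a DI container would supply the Vehicle):

```csharp
public abstract class Vehicle {
    public abstract int WheelCount();
}

public class Car : Vehicle { public override int WheelCount() => 4; }
public class Bicycle : Vehicle { public override int WheelCount() => 2; }

public class Controller {
    private readonly Vehicle _vehicle;

    // Any Vehicle subtype can be injected here without changing
    // how the controller processes and outputs results.
    public Controller(Vehicle vehicle) => _vehicle = vehicle;

    public string Report() =>
        _vehicle.GetType().Name + " has " + _vehicle.WheelCount() + " wheels";
}
```

Swapping a Car for a Bicycle changes the data, never the Controller’s correctness.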

Interface Segregation
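
No client should be forced to depend on methods it does not use: rather than one fat Vehicle interface, split it into small, focused ones. A minimal sketch, with illustrative interface names:

```csharp
// A bicycle shouldn't have to implement Refuel, so the fat
// interface is split into focused ones.
public interface IDrivable {
    void Drive();
}

public interface IRefuellable {
    void Refuel();
}

public class Car : IDrivable, IRefuellable {
    public void Drive() { /* ... */ }
    public void Refuel() { /* ... */ }
}

public class Bicycle : IDrivable {
    public void Drive() { /* ... */ }
}
```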

Dependency Inversion
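
High-level modules should not depend on low-level modules; both should depend on abstractions. A minimal sketch, with illustrative names, where the storage detail can be swapped without touching the high-level service:

```csharp
// The abstraction both layers depend on.
public interface IVehicleStore {
    void Save(string registration);
}

// A low-level detail; could be a database, blob or table storage.
public class InMemoryVehicleStore : IVehicleStore {
    public string LastSaved { get; private set; }
    public void Save(string registration) => LastSaved = registration;
}

// The high-level module depends only on the abstraction,
// typically supplied by a DI container.
public class VehicleService {
    private readonly IVehicleStore _store;
    public VehicleService(IVehicleStore store) => _store = store;
    public void Register(string registration) => _store.Save(registration);
}
```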

Visual Studio 2019

Visual Studio 2019 was released early April in 2019 of all years… 😱😂 I’ve been using the preview versions and the release candidate for a number of months, and I’ve rarely had any issues. Having said that, I’ve kept Visual Studio 2017 installed all along, just in case 😉

I like the new version, and just days ago an update was released. I didn’t experience any of the issues that were fixed in version 16.01.1. If you’re still on the 2017 edition, please try the 2019 version, as it is really good. It takes a little getting used to, but we’re talking days only.


EntityFramework Core Scaffolding

If you already have a database and want to use EF Core for your database/object mapping, the Scaffold-DbContext command can be run from within Visual Studio, using the Package Manager Console. If it isn’t open, you can get to it from the View/Other Windows menu command. This is an example of how to generate DbContext and entity mapping objects:

Scaffold-DbContext "Server=.\SQLEXPRESS01;Database=***;Trusted_Connection=True" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Objects

The command targets SQL Server using the currently logged-on user account. Make sure you select the correct project, if you have more than one in your solution, from the Default Project list in the Package Manager Console.

Notice how I have a single backslash for the server name. In your connection string, potentially stored in the appsettings.json file, you’ll need two backslashes, Server=.\\SQLEXPRESS01. If you have two backslashes when running the Scaffold-DbContext command, the command will fail, with this exception:

System.InvalidOperationException: Instance failure.
at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, Boolean withFailover)

Don’t forget the -Force option in your command, if you want to replace an existing DbContext and mapping objects.
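
As mentioned above, the connection string in appsettings.json needs the backslash doubled. A sketch, assuming the conventional ConnectionStrings section (the "DefaultConnection" key name is just a common convention, not required):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=.\\SQLEXPRESS01;Database=***;Trusted_Connection=True"
  }
}
```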

Hertz Club DK member?

Are you a Dane living abroad, heading to Denmark around Christmas or New Year? Then have a look here, and if you haven’t already received an email from Club DK, read on below.

We are very proud of Club DK and of having members spread across 149 countries.
We find you fascinating and are curious about your adventures out in the world and your joy at coming home.

As something new, we would like to introduce Club DK interviews in film format.
Would you like to be interviewed about life out on an adventure and that very special feeling of coming home?

Where: The Kastrup/Copenhagen area.
When: In December/January
How: An interview in a car
Duration: Approx. 30 minutes
Edited interview: Approx. 3 minutes

As a thank-you for taking part, we’ll give you a voucher for a weekend rental (car group C), which can be used during 2019.

We will contact those of you who are in Denmark on the dates when it is possible for us to film.
If you have any questions, you are more than welcome to get in touch. Sign up here

Speeding Up Search in Azure Table Storage

Azure Table Storage is cheap and, for some simple uses, as good as Cosmos DB. However, when searching a single storage table (Standard performance) with millions of rows, the key to speeding up searches for specific entities, or traversal through all entities, is to query on both the partition key and row key values, and none of the other field values. Obviously, if you have many different values in either or both of the partition key and row key fields, you’ll have a problem with search speed. In that case, Cosmos DB will be a better option.

The code below shows how to search a storage table using the partition key and row key fields. A lookup table is used for the partition keys, and the date is used for the row key. Obviously, the lookup table, which holds the different partition keys, needs to be maintained. The “duplicated” date in the row key mimics the timestamp field, but you can use pretty much any date type and format instead, as long as you have a simple way of searching this field.

There’s also an upload function, if you want to take advantage of a very cheap storage option, even for millions of entities…

I have an Azure function that takes care of updating the partition keys table, using a timer trigger on a monthly basis.

public class AzureTableStorageReader {
    // Connection string elided in the original post; supply your own.
    private static readonly CloudStorageAccount StorageAccount = CloudStorageAccount.Parse("...");
    private static readonly CloudTableClient TableClient = StorageAccount.CreateCloudTableClient();

    public async Task<IEnumerable<TableLog>> ReadTableLogsByDateTimes(DateTime[] logsDates) {
        if (logsDates == null || logsDates.Length != 2)
            throw new ArgumentException("Log dates are not provided.");

        try {
            var storageTable = TableClient.GetTableReference("logs");
            var storagePartKeysTable = TableClient.GetTableReference("logsPartitionKeys");
            TableContinuationToken contToken = null;

            var filter = string.Empty;

            // Get all partition keys
            foreach (var apk in await storagePartKeysTable.ExecuteQuerySegmentedAsync(new TableQuery<TableLogPartitionKey>(), contToken)) {
                filter += (filter != string.Empty) ? " or " : "(";

                // Is the full partition key text stored in the RowKey?
                filter += (apk.FullText == "0") ? "PartitionKey eq '" + apk.RowKey + "'" : "PartitionKey eq '" + apk.RowKey + "%60'";
            }

            // Add the start date and end date we're searching for (both inclusive)
            filter += ") and (RowKey ge '" +
                      logsDates[0].ToString("yyyy-MM-dd", CultureInfo.InvariantCulture) +
                      "' and RowKey lt '" + logsDates[1].AddDays(1).ToString("yyyy-MM-dd", CultureInfo.InvariantCulture) + "')";

            var storageTableQuery = new TableQuery<TableLog>();
            var fetchedLogs = new List<TableLog>();

            contToken = null;

            // Execute the query async and segmented, fetching rows in chunks, until the last segment is fetched
            do {
                var seq = await storageTable.ExecuteQuerySegmentedAsync(storageTableQuery.Where(filter), contToken);

                contToken = seq.ContinuationToken;
                fetchedLogs.AddRange(seq);
            } while (contToken != null);

            if (fetchedLogs.Count == 0) {
                // Log error...
            }

            // Process the rows as needed...
            return fetchedLogs;
        }
        catch (Exception ex) {
            // Log error...
            return null;
        }
    }
}
public class AzureTableStorageWriter {
    // Connection string elided in the original post; supply your own.
    private static readonly CloudStorageAccount StorageAccount = CloudStorageAccount.Parse("...");
    private static readonly CloudTableClient TableClient = StorageAccount.CreateCloudTableClient();

    public async Task UploadTableLogs() {
        try {
            var storageTable = TableClient.GetTableReference("logs");
            var storagePartKeysTable = TableClient.GetTableReference("logsPartitionKeys");
            TableContinuationToken contToken = null;

            // Get all partition keys
            var lpk = await storagePartKeysTable.ExecuteQuerySegmentedAsync(new TableQuery<TableLogPartitionKey>(), contToken);

            // Insert a large number of rows, picking a random partition key for each
            for (var counter = 0; counter <= 1000000; counter++) {
                var date = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture);
                try {
                    await storageTable.ExecuteAsync(TableOperation.Insert(new TableLog(
                        date, (from l in lpk orderby Guid.NewGuid() select l.PartitionKey).FirstOrDefault()
                        /* remaining TableLog constructor arguments elided in the original */)));
                }
                catch (Exception ex) {
                    // Log error... a "Conflict" means the partition key/row key pair already exists
                    if (ex.Message == "Conflict") continue;
                }
            }
        }
        catch (Exception ex) {
            // Log error...
        }
    }
}