Visual Studio 2019

Visual Studio 2019 was released in early April of 2019, of all years… 😱😂 I’ve been using the preview versions and the release candidate for a number of months, and I’ve rarely had any issues. Having said that, I’ve kept Visual Studio 2017 installed all along, just in case 😉

I like the new version, and an update was released just days ago. I didn’t experience any of the issues that were fixed in version 16.01.1. If you’re still on the 2017 edition, please try the 2019 version, as it is really good. It takes a little getting used to, but we’re talking days only.

Check it out for more.


Microsoft Edge Insider

I signed up for the Microsoft Edge Insider programme, under which Microsoft has based the Edge browser on the Chromium open source project. I’m now on the Dev Channel, and I really like what I see and feel. Check it out.

EntityFramework Core Scaffolding

If you already have a database and want to use EF Core for your database/object mapping, the Scaffold-DbContext command can be run from within Visual Studio, using the Package Manager Console. If it isn’t open, you can get to it from the View/Other Windows menu command. This is an example of how to generate DbContext and entity mapping objects:

Scaffold-DbContext "Server=.\SQLEXPRESS01;Database=***;Trusted_Connection=True" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Objects

The command targets SQL Server using the currently logged on user account. Make sure you select the correct project, if you have more than one in your solution, from the Default Project list in the Package Manager Console.

Notice how I have a single backslash in the server name. In your connection string, potentially stored in the appsettings.json file, you’ll need two backslashes, Server=.\\SQLEXPRESS01. If you use two backslashes when running the Scaffold-DbContext command, it will fail with this exception:

System.InvalidOperationException: Instance failure.
at System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, Boolean withFailover)

Don’t forget the -Force option in your command, if you want to replace an existing DbContext and mapping objects.
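For reference, this is roughly how the doubled backslashes would look in an appsettings.json file. The connection string name and database name below are placeholders of my own, not taken from the command above:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=.\\SQLEXPRESS01;Database=MyDatabase;Trusted_Connection=True"
  }
}
```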

A Hertz Club DK member?

Are you a Dane living abroad, heading to Denmark around Christmas or New Year? Then have a look here, and if you haven’t already received an email from Club DK, read on below.

We are very proud of Club DK, and of having members spread across 149 countries.
We find you fascinating, and we are curious about your adventures out in the world and your joy in coming home.

As something new, we will be introducing Club DK interviews on film.
Would you like to be interviewed about life out on an adventure, and the very special feeling of coming home?

Where: Kastrup/the Copenhagen area.
When: In December/January
How: An interview in a car
Duration: Approx. 30 minutes
Edited interview: Approx. 3 minutes

As a thank you for taking part, we will give you a voucher for a weekend rental (car group C), which can be used during 2019.

We will contact the people who are in Denmark on the dates where it is possible for us to film.
If you have any questions, you are more than welcome to get in touch. Sign up here

Speeding Up Search in Azure Table Storage

Azure Table Storage is cheap and, for some simple use cases, as good as CosmosDB. However, when searching a single storage table (Standard performance) with millions of rows, the key to speeding up searches for specific entities, or traversal through all entities, is to query on both the partition key and row key values, and none of the other field values. Obviously, if your search has to match many different values in either or both of the partition key and row key fields, search speed becomes a problem. In that case, CosmosDB will be a better option.

The code below shows how to search a storage table using the partition key and row key fields. A lookup table is used for the partition keys, and the date is used for the row key. Obviously, the lookup table, which holds the different partition keys, needs to be maintained. The “duplicated” date in the row key mimics the timestamp field, but you can use pretty much any date type and format instead, as long as you have a simple way of searching this field.
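To illustrate the shape of the filter being built, here is a small stand-alone helper that produces the same kind of OData filter string from a set of partition keys and a start/end date. The class and method names are my own, a simplified sketch rather than the code from the classes below (it skips the FullText/RowKey handling of the partition keys table):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

public static class TableLogFilters {
    // Builds an OData filter string of the form:
    // (PartitionKey eq 'a' or PartitionKey eq 'b') and (RowKey ge '2019-01-01' and RowKey lt '2019-02-01')
    public static string Build(IEnumerable<string> partitionKeys, DateTime startDate, DateTime endDate) {
        var filter = string.Empty;

        // One "PartitionKey eq" clause per known partition key, OR'ed together
        foreach (var pk in partitionKeys) {
            filter += (filter != string.Empty) ? " or " : "(";
            filter += "PartitionKey eq '" + pk + "'";
        }

        // The end date is inclusive, so compare with 'less than' the following day
        filter += ") and (RowKey ge '" +
                  startDate.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture) +
                  "' and RowKey lt '" +
                  endDate.AddDays(1).ToString("yyyy-MM-dd", CultureInfo.InvariantCulture) + "')";

        return filter;
    }
}
```

The same string could be produced with TableQuery.GenerateFilterCondition and TableQuery.CombineFilters, but concatenation keeps the example short.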

There’s also an upload function, if you want to take advantage of a very cheap storage option, even for millions of entities…

I have an Azure function that takes care of updating the partition keys table, using a timer trigger on a monthly basis.
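That function isn’t shown in this post; as a rough sketch only, a timer-triggered Azure Function running at midnight on the first of every month could look something like this (the function and class names, and the log message, are my own):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class UpdateLogsPartitionKeys {
    // NCRONTAB schedule "0 0 0 1 * *": at 00:00:00 on day 1 of every month
    [FunctionName("UpdateLogsPartitionKeys")]
    public static void Run([TimerTrigger("0 0 0 1 * *")] TimerInfo timer, ILogger log) {
        log.LogInformation("Refreshing the logsPartitionKeys table...");
        // Rebuild or append the partition key entities in the lookup table here
    }
}
```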

public class AzureTableStorageReader {
    // The actual connection string is not shown here
    private const string ConnectionString = "...";

    private static readonly CloudStorageAccount StorageAccount = CloudStorageAccount.Parse(ConnectionString);
    private static readonly CloudTableClient TableClient = StorageAccount.CreateCloudTableClient();

    public async Task<IEnumerable<TableLog>> ReadTableLogsByDateTimes(DateTime[] logsDates) {
        if (logsDates == null || logsDates.Length != 2)
            throw new ArgumentException("Log dates are not provided.");

        try {
            var storageTable = TableClient.GetTableReference("logs");
            var storagePartKeysTable = TableClient.GetTableReference("logsPartitionKeys");
            TableContinuationToken contToken = null;

            var filter = string.Empty;

            // Get all partition keys
            foreach (var apk in await storagePartKeysTable.ExecuteQuerySegmentedAsync(new TableQuery<TableLogPartitionKey>(), contToken)) {
                filter += (filter != string.Empty) ? " or " : "(";

                // Is the full partition key text stored in the RowKey?
                filter += (apk.FullText == "0") ? "PartitionKey eq '" + apk.RowKey + "'" : "PartitionKey eq '" + apk.RowKey + "%60'";
            }

            // Add start date and end date we're searching for (both inclusive)
            filter += ") and (RowKey ge '" +
                      logsDates[0].ToString("yyyy-MM-dd", CultureInfo.InvariantCulture) +
                      "' and RowKey lt '" + logsDates[1].AddDays(1).ToString("yyyy-MM-dd", CultureInfo.InvariantCulture) + "')";

            var storageTableQuery = new TableQuery<TableLog>();
            var fetchedLogs = new List<TableLog>();

            contToken = null;

            // Execute the query async and segmented, fetching rows in chunks, until the last segment is fetched
            do {
                var seq = await storageTable.ExecuteQuerySegmentedAsync(storageTableQuery.Where(filter), contToken);

                contToken = seq.ContinuationToken;
                fetchedLogs.AddRange(seq);
            } while (contToken != null);

            if (fetchedLogs.Count == 0) {
                // Log error...
            }

            // Process the rows as needed...
            return fetchedLogs;
        }
        catch (Exception ex) {
            // Log error...
            return null;
        }
    }
}

public class AzureTableStorageWriter {
    // The actual connection string is not shown here
    private const string ConnectionString = "...";

    private static readonly CloudStorageAccount StorageAccount = CloudStorageAccount.Parse(ConnectionString);
    private static readonly CloudTableClient TableClient = StorageAccount.CreateCloudTableClient();

    public async Task UploadTableLogs() {
        try {
            var storageTable = TableClient.GetTableReference("logs");
            var storagePartKeysTable = TableClient.GetTableReference("logsPartitionKeys");
            TableContinuationToken contToken = null;

            // Get all partition keys
            var lpk = await storagePartKeysTable.ExecuteQuerySegmentedAsync(new TableQuery<TableLogPartitionKey>(), contToken);

            // Insert the entities one at a time, with the current date/time as the row key
            // and a randomly picked partition key
            for (var counter = 0; counter <= 1000000; counter++) {
                var date = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture);
                try {
                    await storageTable.ExecuteAsync(TableOperation.Insert(new TableLog(
                        date, (from l in lpk orderby Guid.NewGuid() select l.PartitionKey).FirstOrDefault())));
                }
                catch (Exception ex) {
                    // Log error... a duplicate key causes a conflict, so skip to the next entity
                    if (ex.Message == "Conflict") continue;
                }
            }
        }
        catch (Exception ex) {
            // Log error...
        }
    }
}

I did an early run this morning in Cassiobury Park and along the Grand Union Canal towpath, as I have done a good few times before. This time it was really dark, just before 4:30, but I kind of love that. 🙂 Being on your own, meeting nothing but deer, hares, foxes and all sorts of birds, is just a lovely “companionship” when out there. I absolutely love these half marathon runs, they complete me to some extent. Just saying… BTW, I could do with a running partner anywhere near Watford…


Over the last few years I’ve had various people come up to me, asking me if I was Hugh Dennis. 😱 I’m clearly not, but I’ll take it 🙂 Have you ever had that experience, where someone mistakes you for, or thinks you resemble, a celebrity? It’s odd, and I can only apologise to the man himself, he has no need for a doppelgänger. 😎

Windows Insider Programme

I’ve been part of the Windows Insider programme for quite a while now (since January 14, 2015), and I have generally been very pleased with my participation. Today I installed the Windows 10 Insider Preview 18272.1000 (rs_prerelease), from my favourite Pret coffee joint in Watford.

It took a while to download, but the actual installation was very quick, and it seems MS has been working on speeding up new OS upgrades over at least the last year. I’ve been using my trusted Lenovo Flex-2 15 for most of the builds. It is nearly 5 years old and has rarely had any issues with drivers etc., so I’m well pleased with the Windows OS, in particular the Windows Insider builds. I’m now on the Active development of Windows preview build, on the Fast ring. I am running my development tools, Visual Studio 2017, Visual Studio Code etc., on these builds and rarely have any issues. Keep it up MS!