.NET Core config transforms for collections in different environments

The final solution can be found on GitHub if you want to check it out.

.NET Core’s configuration system is really powerful and packed with features. However, it gets complicated when you have a lot of environments to manage and want to transform the values of array properties per environment. For the purpose of this tutorial we are going to look at a config item called “Acivities”, an array of different activities you can do. Below is the default configuration; we want to transform the minimumSpeed and minimumDistance for some environments.

  "Acivities": [
    {
      "name": "Walking",
      "minimumDistance": "20K",
      "minimumSpeed": "10Km/h"
    },
    {
      "name": "Cycling",
      "minimumDistance": "20K",
      "minimumSpeed": "10Km/h"
    },
    {
      "name": "Kayaking",
      "minimumDistance": "20K",
      "minimumSpeed": "10Km/h"
    }
  ],

Let us also create a C# class representing this config item as below.

// Activity.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace ArrayTransforms
{
    public class Activity
    {
        public string Name { get; set; }
        public string MinimumDistance { get; set; }
        public string MinimumSpeed { get; set; }
    }
}

Let’s now add the configuration in “ConfigureServices” in Startup.cs to make it available to the .NET Core DI container.

// This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.Configure<List<Activity>>(Configuration.GetSection("Acivities"));
            services.AddControllers();
        }

I have also gone ahead and created a new Activities API controller so that we can see the activities returned along with the environment.

using System.Collections.Generic;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

namespace ArrayTransforms.Controllers
{
    [ApiController]
    public class ActvitiesController : ControllerBase
    {
        private readonly IEnumerable<Activity> _activities;
        private readonly IWebHostEnvironment _hostingEnvironment;

        /// <summary>
        /// Inject the activities config item and hosting environment
        /// </summary>
        /// <param name="activitiesOptions">Activities options</param>
        /// <param name="hostingEnvironment">To get the current environment name</param>
        public ActvitiesController(IOptions<List<Activity>> activitiesOptions, IWebHostEnvironment hostingEnvironment)
        {
            _activities = activitiesOptions.Value;
            _hostingEnvironment = hostingEnvironment;
        }

        [Route("api/Activities")]
        public dynamic GetAcitivities()
        {
            return new
            {
                _hostingEnvironment.EnvironmentName,
                Activities = _activities
            };
        }
    }
}

The Activities API returns the following data if we run it now. We can see that the current environment is “Development”. The data returned is the default configuration currently stored in our config file.

Let’s now create a new environment called “Harsh”, where all these activities become a lot more difficult, and another environment called “Moderate”, where the minimum distance will increase. We can mimic this by updating “launchSettings.json” to add two more profiles with different ASPNETCORE_ENVIRONMENT values.

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:51056",
      "sslPort": 44357
    }
  },
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "profiles": {
    "IIS Express Development": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "api/activities",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express Moderate": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "api/activities",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Moderate"
      }
    },
    "IIS Express Harsh": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "api/activities",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Harsh"
      }
    }
  }
}

We can now run the different environments from Visual Studio using the different profiles; running each of them returns the following results.

As you can see, the only difference in the responses currently is the environment name. All the activities have the same minimum distance and speed across all the environments.

We can now create environment-specific configuration for the two new environments: for the Moderate environment we will increase the minimum distance, whereas for the Harsh one we also want to increase the minimum speed. For this, let’s duplicate “appsettings.json” to create two new settings files, “appsettings.Moderate.json” and “appsettings.Harsh.json”.

If you now check appsettings.Moderate.json, it looks something like the below; the only value that differs from the default configuration is “minimumDistance”. However, we are duplicating most of the default config to achieve the transform for just “minimumDistance”.

//appsettings.Moderate.json
{
  "Acivities": [
    {
      "name": "Walking",
      "minimumDistance": "50K",
      "minimumSpeed": "10Km/h"
    },
    {
      "name": "Cycling",
      "minimumDistance": "50K",
      "minimumSpeed": "10Km/h"
    },
    {
      "name": "Kayaking",
      "minimumDistance": "60K",
      "minimumSpeed": "10Km/h"
    }
  ]
}

If we check “appsettings.Harsh.json” we can see a similar scenario where we are still duplicating the “name”. Running the API in those two environments returns the newly transformed data.

Simplify the transforms

In order to minimise the transform configs, my first attempt was to see if I could access the array elements using the colon (:)

//appsettings.Moderate.json
{
  "Acivities[0]:minimumDistance": "50K",
  "Acivities[1]:minimumDistance": "50K",
  "Acivities[2]:minimumDistance": "60K"
}

However, this seems to have no effect, and even if it did work, addressing array elements by index position would be a nightmare to manage: if the order of the original config changes, you end up with wrong values.
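For reference, the configuration system flattens arrays into colon-separated numeric indices (keys like Acivities:0:name), so index-based overrides would have to look like the fragment below — which still suffers from the same ordering fragility:

```json
{
  "Acivities:0:minimumDistance": "50K",
  "Acivities:1:minimumDistance": "50K",
  "Acivities:2:minimumDistance": "60K"
}
```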

In order to fix this and to simplify our configuration, we first need to modify the original application settings as below. Instead of an array we now have different nested objects under activities. However, we can still use List&lt;Activity&gt; to access this configuration item; .NET Core’s configuration system is smart enough to map it.

"Acivities": {
    "Walking": {
      "name": "Walking",
      "minimumDistance": "20K",
      "minimumSpeed": "10Km/h"
    },
    "Cycling": {
      "name": "Cycling",
      "minimumDistance": "20K",
      "minimumSpeed": "10Km/h"
    },
    "Kayaking": {
      "name": "Kayaking",
      "minimumDistance": "20K",
      "minimumSpeed": "10Km/h"
    }
  }

Now if you run the Development environment you can see that it is still returning the activities.
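To convince yourself that the object-keyed section still binds to a List&lt;Activity&gt;, here is a minimal standalone sketch (it assumes the Microsoft.Extensions.Configuration and Microsoft.Extensions.Configuration.Binder packages are referenced; note the binder enumerates the section’s children sorted by key, not in file order):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string>
    {
        // Same shape as the object-keyed "Acivities" section above
        ["Acivities:Walking:name"] = "Walking",
        ["Acivities:Walking:minimumDistance"] = "20K",
        ["Acivities:Cycling:name"] = "Cycling",
        ["Acivities:Cycling:minimumDistance"] = "20K",
    })
    .Build();

// The binder iterates the section's children regardless of key names,
// so non-numeric keys still produce list items.
var activities = config.GetSection("Acivities").Get<List<Activity>>();
Console.WriteLine(activities.Count); // 2

public class Activity
{
    public string Name { get; set; }
    public string MinimumDistance { get; set; }
    public string MinimumSpeed { get; set; }
}
```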

Let’s now add transforms to both the Moderate and Harsh configs. As you can see here, we are only listing the config items we are interested in changing.

// these are in 2 different files but for simplicity showed
// here as one
//appsettings.Moderate.json
{
  "Acivities:Walking:minimumDistance": "50K",
  "Acivities:Cycling:minimumDistance": "50K",
  "Acivities:Kayaking:minimumDistance": "60K"
}
//appsettings.Harsh.json
{
  "Acivities:Walking:minimumDistance": "50K",
  "Acivities:Walking:minimumSpeed": "180km/h",
  "Acivities:Cycling:minimumDistance": "50K",
  "Acivities:Cycling:minimumSpeed": "300km/h",
  "Acivities:Kayaking:minimumDistance": "50K",
  "Acivities:Kayaking:minimumSpeed": "200km/h"
}

If you run the application in the Moderate and Harsh environments you can see the new values returned.

ASP.NET Core health check for DynamoDB

ASP.NET Core offers health check middleware for reporting the health of an application and its different components. You can expose the health check of an app as an HTTP endpoint, or you can choose to publish the health of an app at certain intervals to a sink such as a queue.

In this blog we will be exploring getting the health status of an API that uses DynamoDB as its data store. For this tutorial we will be using:

  • Visual Studio 2019
  • .NET Core 3
  • Downloadable DynamoDB, which can be found here and which also needs the latest version of the Java SDK

Create ASP.NET Core Web application

Create a new ASP.NET Core web application using the Visual Studio template, call it DynamoDBHealthCheck, and select the API project template.

Add the required dependencies

Let’s add the following dependencies to the solution.

Add health check to Startup.cs

In Startup.cs we add services.AddHealthChecks(); to ConfigureServices and endpoints.MapHealthChecks(“/health”); to the UseEndpoints middleware.

If you run the API now you can see the health status of the application at the /health URL.

Let’s now also add some code to get a bit more detailed health status for the application.

If you run the application now you can see a bit more information than just the text “Healthy”. Everything up to this point is well documented in the Microsoft documentation for health checks.

Add a health check for DynamoDB

Let’s add a new class called “DynamoOptions.cs” to hold all the DynamoDB configuration.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace DynamoDBHealthCheck
{
    public class DynamoOptions
    {
        public string AWSAcessKey { get; set; }
        public string AWSSecretKey { get; set; }
        public string ConnectionString { get; set; }
        public string AuthenticationRegion { get; set; }
        public string TableName { get; set; }
    }
}

and add the following configuration section to the appsettings.json

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "dynamodb": {
    "aWSAcessKey": "fakeKey",
    "aWSSecretKey": "fakeSecret",
    "connectionString": "http://localhost:8000",
    "authenticationRegion": "localhost",
    "tableName": "TestTable"
  },
  "AllowedHosts": "*"
}

Let’s now add a class “DynamoHealth.cs” that implements the IHealthCheck interface from the “Microsoft.Extensions.Diagnostics.HealthChecks” package.

using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Amazon.Runtime;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

namespace DynamoDBHealthCheck
{
    public class DynamoHealth: IHealthCheck
    {
        private readonly DynamoOptions _options;
        public DynamoHealth(DynamoOptions options)
        {
            _options = options ?? throw new ArgumentNullException(nameof(options));
        }
        public async Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            try
            {
                var credentials = new BasicAWSCredentials(_options.AWSAcessKey, _options.AWSSecretKey);
                var config = new AmazonDynamoDBConfig();
                config.AuthenticationRegion = _options.AuthenticationRegion;
                config.ServiceURL = _options.ConnectionString;
                var client = new AmazonDynamoDBClient(credentials, config);
                await client.DescribeTableAsync(_options.TableName,cancellationToken);
                return HealthCheckResult.Healthy();
            }
            catch (Exception ex)
            {
                return new HealthCheckResult(context.Registration.FailureStatus, exception: ex);
            }
        }
    }
}

Let’s also add an extension method that can be called on services.AddHealthChecks() from Startup.cs.

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using System;
using System.Collections.Generic;

namespace DynamoDBHealthCheck
{
    public static class DynamoDbHealthCheckExtensions
    {
        const string NAME = "dynamodb";
        public static IHealthChecksBuilder AddDynamoDb(this IHealthChecksBuilder builder, DynamoOptions options, string name = default, HealthStatus? failureStatus = default, IEnumerable<string> tags = default, TimeSpan? timeout = default)
        {
            return builder.Add(new HealthCheckRegistration(
                name ?? NAME,
                sp => new DynamoHealth(options),
                failureStatus,
                tags,
                timeout));
        }
    }
}

We now have to update Startup.cs to include the AddDynamoDb extension. If you run the application now you can see that the health check returns an unhealthy status for the overall app and also for DynamoDB, as shown below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Microsoft.Extensions.Hosting;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

namespace DynamoDBHealthCheck
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }
        public IConfiguration Configuration { get;}
        
        public void ConfigureServices(IServiceCollection services)
        {
            // Adding the health check services
            services.AddHealthChecks()
                     .AddDynamoDb(Configuration.GetSection("dynamodb")
                                               .Get<DynamoOptions>());
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseRouting();

            app.UseEndpoints(endpoints =>
            {
                // adding the health check route
                endpoints.MapHealthChecks("/health", new HealthCheckOptions()
                {
                    ResultStatusCodes =
                    {
                        [HealthStatus.Healthy] = StatusCodes.Status200OK,
                        [HealthStatus.Degraded] = StatusCodes.Status200OK,
                        [HealthStatus.Unhealthy] = StatusCodes.Status503ServiceUnavailable
                    },
                    ResponseWriter = WriteResponse
                });
                endpoints.MapGet("/", async context =>
                {
                    await context.Response.WriteAsync("Hello World!");
                });
            });
        }
        private static Task WriteResponse(HttpContext httpContext, HealthReport result)
        {
            httpContext.Response.ContentType = "application/json";

            var json = new JObject(
                new JProperty("status", result.Status.ToString()),
                new JProperty("results", new JObject(result.Entries.Select(pair =>
                    new JProperty(pair.Key, new JObject(
                        new JProperty("status", pair.Value.Status.ToString()),
                        new JProperty("description", pair.Value.Description),
                        new JProperty("data", new JObject(pair.Value.Data.Select(
                            p => new JProperty(p.Key, p.Value))))))))));
            return httpContext.Response.WriteAsync(
                json.ToString(Formatting.Indented));
        }
    }
}

{
  "status": "Unhealthy",
  "results": {
    "dynamodb": {
      "status": "Unhealthy",
      "description": null,
      "data": {
        
      }
    }
  }
}

Let’s now make sure a local DynamoDB instance is running, and create a table called “TestTable” in it. To check DynamoDB’s health we call the DescribeTable method, which throws an exception when the table is not found.

Instructions on how to run DynamoDB locally can be found here. Once we have started the local DynamoDB server and created “TestTable”, the health check returns a healthy status both for the overall system and for DynamoDB.

// https://localhost:44337/health

{
  "status": "Healthy",
  "results": {
    "dynamodb": {
      "status": "Healthy",
      "description": null,
      "data": {
        
      }
    }
  }
}

Additional Information

  • There is actually a collection of health check NuGet packages for different products, including DynamoDB, which can be found here. The DynamoDB health check in that package uses the ListTables method on Dynamo. However, I prefer to check for the existence of the specific table that my app relies on to run.

Authentication UI workflow in Blazor : Redirecting using code

Blazor preview build : 3.0.0-preview5-19227-01

Blazor is now in preview and not experimental anymore. With each preview build there are a lot of breaking changes. The code below works with the latest preview build mentioned above.

Sometimes you have to redirect using code in your client-side application; one common use case is redirecting users to the login page when they are not authenticated. Blazor lets developers write C# on the client side, making use of WebAssembly. To keep the code in a centralised location, we will write the redirection logic in MainLayout.cshtml.

Below is the full code for MainLayout.cshtml. Only two lines are of interest to us here: in Line 1 we inject the IUriHelper, and in Line 17 we use the NavigateTo method to redirect to the login page when the user is not authenticated.

@inject Microsoft.AspNetCore.Components.IUriHelper UriHelper
@inherits LayoutComponentBase

    <div>
        @Body
    </div>

@functions
{
    // TODO : Hard-coding IsAuthenticated to false for the demo
    private bool IsAuthenticated = false;

    protected override void OnInit()
    {
        if (!IsAuthenticated)
        {
            UriHelper.NavigateTo("/login");
        }
    }
}

Additional resources

.NET Core Web development using Blazor on Server and C#

Blazor is an experimental .NET web framework using C# and HTML that runs in the browser. We will be looking into Blazor on Server, which runs on the server using the full .NET Core and uses SignalR to provide a nice SPA feel. Microsoft has announced that server-side Blazor will become a first-class citizen as part of .NET Core 3 under a new name, Razor Components (see Blazor update). Client-side Blazor will continue as an experiment.

In this tutorial we will be using the server-side Blazor that you can install using the below dotnet new command.

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

Solution Setup

Once the template is installed, let’s go and create a new Blazor Server project.

  1. Create an ASP.NET Core Web Application in Visual studio
    CreateProject
  2. Select the Blazor (Server-side in ASP.NET Core) project template. Make sure ASP.NET Core 2.1 or higher is selected. I have used 2.1 in the interest of using a non-preview version, as 2.2 is still in preview at the time of writing.
    BlazorServer
  3. Your solution now contains 2 new projects, the Server and the App
    solution.PNG
  4. Press Ctrl + F5 and you should have a Blazor server app running.
  5. Now let’s go and add a separate Web API project to provide API services for our Blazor UI (you could use the Server project for the same, but to decouple it I will be using a separate Web API project).
  6. Let’s also add a .NET Standard class library and call it FullStack.Client. This will become the API client that can be called from any .NET project. Your project should look something like this at this point.
    project_structure

Generate Swagger definition from FullStack.Api

We will be generating the client library using swagger.json and NSwag.MSBuild. Let’s start by adding the NSwag.AspNetCore NuGet package to the FullStack.Api project so that we can generate the Swagger definition from code.

Install-Package NSwag.AspNetCore -Version 11.20.1

Update ConfigureServices in Startup.cs to call AddSwagger.

// This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc()
             .SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

            services.AddSwagger();
        }

Also update Startup to configure Swagger using the “UseSwaggerUi3WithApiExplorer” extension method.


// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseHsts();
            }

            app.UseHttpsRedirection();

            // Register the Swagger generator and the Swagger UI middlewares
            app.UseSwaggerUi3WithApiExplorer(settings =>
            {
                settings.SwaggerUiRoute = "/api";
                settings.SwaggerRoute = "/api/specification.json";
                settings.GeneratorSettings.DefaultPropertyNameHandling =
                    PropertyNameHandling.CamelCase;
            });

            app.UseMvc();
        }

Now if you run the API project and navigate to /api, the Swagger UI should load.

Swagger UI

Generating Api Client

Navigating to https://localhost:44324/api/specification.json will give you the swagger specification for our Values Api.

Let’s create a new values-swagger.json file in the FullStack.Client project and copy the swagger definition into this file.

Microsoft did announce that in future releases of .NET Core this process will be a lot more automated, like adding a reference to the API project from the client project.

Also add the NSwag.MSBuild NuGet package to the FullStack.Client project

Install-Package NSwag.MSBuild -Version 11.20.1

Let’s now edit the FullStack.Client .csproj and add the following target

<Target Name="NSwag" BeforeTargets="Build">
    <Exec Command="$(NSwagExe) swagger2csclient /input:values-swagger.json /namespace:$(RootNamespace) /InjectHttpClient:true /UseBaseUrl:false /Output:ValuesClient.cs" />
  </Target>

Now if we build the client project we should get ValuesClient.cs generated as seen below.

fullstack client.PNG

The generated client uses Newtonsoft.Json, so we should add the Newtonsoft.Json NuGet package to the client project.

Install-Package Newtonsoft.Json -Version 11.0.2

Using the client in our Blazor server project

Let’s now add a reference to FullStack.Client in FullStack.App and update the Startup class to inject the ValuesClient. For this we also need to add
the NuGet package Microsoft.Extensions.Http, so that we can inject the HttpClient.

Install-Package Microsoft.Extensions.Http -Version 2.1.1

Add the below code to the ConfigureServices section of the Startup class

services.AddHttpClient<ValuesClient>(httpClient =>
            {
                httpClient.BaseAddress = new Uri("https://localhost:44333");
                httpClient.Timeout = TimeSpan.FromMinutes(1);
            });

Replace the URI with the base URL of your API.

Update Index.cshtml to display the values from the values API. Also update the solution properties to run both the API project and the server project.

indexcshtml.PNG
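The Index.cshtml from the screenshot looks roughly like the sketch below; the method name GetAsync is an assumption, as it depends on what NSwag generated from your values-swagger.json:

```cshtml
@page "/"
@using FullStack.Client
@inject ValuesClient ValuesClient

<h1>Values</h1>

<ul>
    @foreach (var value in values)
    {
        <li>@value</li>
    }
</ul>

@functions {
    ICollection<string> values = new List<string>();

    protected override async Task OnInitAsync()
    {
        // GetAsync is assumed; use whatever method NSwag generated for GET api/values.
        values = await ValuesClient.GetAsync();
    }
}
```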

If we run the solution now we should see the home page loaded with the values from the values controller.

Blazorhome.PNG

Notes

  1. Source code for the solution used here can be found on GitHub
  2. Everything here should also work with client-side Blazor, but I used server-side Blazor as it is more production-ready than the client-side one

Further reading/viewing

  1. Blazor Update
  2. Blazor
  3. S207 – Blazor: Modern Web development with .NET and WebAssembly – Daniel Roth
  4. S104 – What’s New in ASP.NET Core?

Calculate Speed using Hall effect sensor

We are preparing for a workshop on V2X and for the actual demo we are building a car that runs on Raspberry Pi. One of the requirements is to calculate the speed of the car. While it is never going to be accurate, it is simple enough to calculate the speed using a Hall effect sensor.

Theory and formula

We will use the Hall effect sensor and a magnet to find the number of times the car wheel rotates in a minute. The sensor and magnet are placed in such a way that at a particular point in every rotation of the wheel the sensor detects the magnet. This tells us how long one rotation of the wheel takes, and from this we can calculate the number of rotations per minute. Multiplying the rotations per minute by the circumference of the wheel (π × diameter) gives the distance traveled in a minute.

formula.PNG

T = Time in minutes

D = Wheel diameter in centimetres
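Written out: one rotation covers the wheel’s circumference π·D, and one rotation takes T minutes, so the speed is

```latex
v = \frac{\pi D}{T} \quad \text{(cm per minute)}
```

For example, with D = 22 cm and one rotation every 2 seconds (T = 1/30 min), v ≈ 2073 cm per minute.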

Materials Needed

  1. Raspberry Pi – We could easily build this using any micro-controller but for this tutorial we are using a Raspberry Pi
  2. Hall Effect sensor
  3. Magnets
  4. DIY Car Chassis

Connecting Hall Effect Sensor to Raspberry Pi

As per the diagram we are connecting the positive terminal of the sensor to Pin 2 (5V) and the negative to Pin 6 (Ground). The signal pin of the Hall effect sensor is connected to Pin 8 (BCM 14); on the Hall effect sensor linked above, the signal pin is marked with an “S” next to it. Here is a link to a Raspberry Pi pin-out diagram for reference.

Hall_Effect_Sensor_bb

Positioning the Sensor and the Magnet

For the sensor to properly detect the magnet they both have to be placed at a certain distance from each other. The chassis that I used had a small shaft on the back side and that seemed the perfect location to stick the magnet.

Inkedphoto_780940922_LI.jpg

Red Circle contains the Hall effect sensor and the Yellow circle contains the Magnets

photo_315497547.jpg

Sensor is activated when the Magnet is closer to the sensor

Code

The original code was taken from here and modified to calculate speed instead of just detecting the magnet. The author of that code, Matt, also has a nicely written blog post on detecting sensor changes with a Hall effect sensor.

The modified code can be found here.

The below function gets called whenever the magnet is detected. One of the drawbacks of this code is that the first speed measurement is always wrong, and it is not very accurate. Basically we are using “start” and “done” to find the time between each magnet detection and then converting it to minutes (Ln 9).

Ln 10 calculates the rpm, on Ln 11 the rpm is multiplied by 22 cm, which was the diameter of the wheel, and on Ln 12 we calculate the speed in cm per minute.

Capture.PNG
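Since the code itself is only visible in the screenshot, here is a small C# rendering of the same calculation (the original is Python; the names “start”, “done” and the 22 cm diameter come from the post, the function name is mine):

```csharp
using System;

// start/done are the timestamps of two consecutive magnet detections,
// i.e. the duration of one full wheel rotation.
static double SpeedCmPerMinute(DateTime start, DateTime done, double wheelDiameterCm)
{
    double minutesPerRotation = (done - start).TotalMinutes; // time for one rotation
    double rpm = 1.0 / minutesPerRotation;                   // rotations per minute
    double circumferenceCm = Math.PI * wheelDiameterCm;      // distance per rotation
    return rpm * circumferenceCm;                            // cm travelled per minute
}

var start = new DateTime(2024, 1, 1, 12, 0, 0);
var done = start.AddSeconds(2);   // one rotation every 2 seconds = 30 rpm
Console.WriteLine(SpeedCmPerMinute(start, done, 22)); // ≈ 2073.45 cm per minute
```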

Azure functions – Blob storage trigger to Process CSV in C#

Every time a CSV file is uploaded to Azure blob storage we want to run an Azure function that will process the CSV and upload data to Azure table storage. We will be using Visual Studio 2017 and Azure Storage explorer for development and testing locally.

Prerequisites

  • Download and install Azure Storage Explorer – here
  • Make sure the Azure workloads and SDKs are installed for your version of Visual Studio. While we are using Visual Studio 2017 and Storage Explorer here, everything we are doing can be done directly from the Azure portal

Creating Azure functions App

In Visual Studio create a new project and select the “Azure Functions” template. Name the project “BlobTrigger”.

Create Azure Functions app

On the next window select “Blob trigger” and for Storage Account select “Storage emulator”. You don’t have to add any connection strings; by default it will connect to the storage account the Azure Functions app is linked to (in our case the storage emulator). For Path, type in “expenses”. This is the blob storage container to which you will upload the CSV files. If you want to add additional types of functions, you can add them to the project later.

Create blob trigger

The default template would have created the below function for you.

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

namespace BlobTrigger
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static void Run([BlobTrigger("expenses/{name}", Connection = "")]Stream myBlob, string name, ILogger log)
        {
            log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
        }
    }
}

Pressing “F5” should run the function locally and pop up a command-line console similar to the one below; you can see all your logs in this window.

Running functions

Add a reference to CsvHelper using NuGet to help with processing the CSV. You can choose to use any CSV library.

Adding CSV helper

Below is the final code that reads data from the CSV and uploads it to table storage. You can choose to upload the data to any type of data storage but for simplicity, we are selecting table storage.

using System;
using System.Collections.Generic;
using System.IO;
using CsvHelper;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage.Table;

namespace BlobTrigger
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static void Run([BlobTrigger("expenses/{name}.csv", Connection = "AzureWebJobsStorage")]Stream myBlob, string name,
            [Table("Expenses", Connection = "AzureWebJobsStorage")] IAsyncCollector<Expense> expenseTable,
            ILogger log)
        {
            log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
            using (var tr = new StreamReader(myBlob))
            using (var csv = new CsvReader(tr))
            {
                if (csv.Read())
                {
                    log.LogInformation("Reading CSV");
                    csv.ReadHeader();
                    while (csv.Read())
                    {
                        var record = new Expense
                        {
                            Amount = double.Parse(csv.GetField("Debit")),
                            Title = csv.GetField("Title"),
                            PartitionKey = "Expenses",
                            RowKey = Guid.NewGuid().ToString()
                        };
                        expenseTable.AddAsync(record).Wait();
                    }
                }
            }
        }
    }

    public class Expense: TableEntity
    {
        public string Title { get; set; }
        public double Amount { get; set; }
    }
}

Function Signature

[FunctionName("Function1")]
public static void Run(
    [BlobTrigger("expenses/{name}.csv", Connection = "AzureWebJobsStorage")] Stream myBlob,
    string name,
    [Table("Expenses", Connection = "AzureWebJobsStorage")] IAsyncCollector<Expense> expenseTable,
    ILogger log)

We have updated the function signature to use a blob trigger path of “expenses/{name}.csv”. Adding the “.csv” suffix ensures that only CSV files trigger the function.
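For reference, the function expects the uploaded CSV to contain at least a “Title” and a “Debit” column, since those are the fields read via csv.GetField. A minimal sample file (the rows below are just illustrative values) might look like this:

```csv
Title,Debit
Groceries,54.20
Petrol,38.75
Coffee,4.50
```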

Another addition to the function is the Table storage output binding, which uses the table name “Expenses”. Rows are written using an “Expense” entity, which extends “TableEntity”.
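Both bindings resolve their connection string from the “AzureWebJobsStorage” setting. When running locally this typically lives in local.settings.json; a minimal sketch, assuming the local storage emulator, looks like this:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```

When deployed to Azure, “AzureWebJobsStorage” would instead hold the connection string of a real storage account.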

Testing

Press F5 in Visual Studio to start running the Azure Function locally. Add a breakpoint to the function so that we can confirm that uploading a CSV triggers the blob trigger function.

Solution

The full solution can be found on GitHub.

Home Assistant – Find who is home

One cool feature of Home Assistant is detecting who is home based on which devices are connected to the WiFi network. This can then be used in a wide variety of situations, such as:

  • Detecting an intruder if a sensor is activated when no one is home.
  • Turning off lights and electrical equipment when no one is home to save power.
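As a sketch of the second use case, presence can drive an automation. The example below is hypothetical and uses the legacy group.all_devices entity that Home Assistant creates from tracked devices:

```yaml
automation:
  - alias: "Turn off lights when everyone leaves"
    trigger:
      # Fires when the last tracked person leaves home
      - platform: state
        entity_id: group.all_devices
        to: "not_home"
    action:
      - service: light.turn_off
        entity_id: all
```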

Setting this up is really simple:

1) Update configuration.yaml with the following and then restart Home Assistant. My Home Assistant instance is currently running on a Windows server, and the config file was found in C:\Users\USERNAME\AppData\Roaming\.homeassistant (this folder structure is for a Windows 10 PC).

device_tracker:
  - platform: ROUTER-PLATFORM
    host: ROUTER-IP
    username: YOURUSERNAME
    password: YOURPASSWORD
    interval_seconds: 10
    consider_home: 180

2) Set track to false for devices that you do not want to track in the newly created known_devices.yaml (this file is created automatically after step 1). In my case, I only wanted to track our mobile phones and not the laptops. Restart Home Assistant again.

devicename:
  hide_if_away: false
  icon: some icon
  mac: your mac
  name: friendly name
  picture:
  track: false
  vendor: ASUSTek COMPUTER INC.

3) Update configuration.yaml to set “track_new_devices” to false. This setting wasn’t added initially so that all our devices could be discovered to start with. Now that the config tracks only the devices we want, no new devices will show up in the dashboard. Without this additional step, all the devices I had set not to track kept coming back every time Home Assistant was restarted.

device_tracker:
  - platform: netgear
    host: 192.168.0.1
    username: admin
    password: password
    interval_seconds: 10
    consider_home: 180
    track_new_devices: false

Below is a screenshot of my wife and me being detected based on the fact that both our phones are home.

[Screenshot: Device_discovery.png]


Windows developer machine build

For the last few months I have been waiting to finalise a custom PC build to use as my developer workstation and also as a home server. I will be using Windows Server 2016 as the main OS and run multiple other OSes as VMs. With this in mind I wanted at least 32 GB of RAM, with the option to upgrade to 64 GB if needed. Below is the configuration I used for the build. Gaming was never considered while building this machine.

1. Intel SSD 540 Series 240GB 2.5in SATA ($119.00)
2. Intel Core i7 6700 Quad Core LGA 1151 3.4GHz CPU ($439.00)
3. Gigabyte GeForce GT 730 2GB Video Card ($89.00)
4. Corsair CS550M 550W ATX Power Supply, 80+ Gold Certified, Semi Modular Design, (4+4)pin EPS ($119.00). The initial plan was a 240W unit, but after talking to the technician at the store I changed it to 550W, which he suggested would be more than enough for my needs.
5. Corsair Carbide Series 200R Compact ATX Case with Window ($95.00). Definitely bigger than my needs, but I also wanted the option to add additional HDDs for media storage.
6. Corsair 32GB (2x16GB) CMK32GX4M2A2400C14 DDR4 2400MHz Vengeance LPX DIMM Black ($249.00)
7. Asus Z170-K LGA1151 ATX Motherboard ($179.00)
8. ASUS PCE-N15 WLAN PCI-Express N300 LP ($18.00). This is a temporary addition, so I just purchased the cheapest one.

Total: $1,307.00
I reused a few items, like a 2TB HDD and my dual monitor setup, from my existing box in the new build, so they are not included in the list above. While I am no expert in PC building, I am fairly happy with this build and so far it is performing great.