Tuesday, April 3, 2018

Building Enterprise Grade Web APIs - Use Feature Folders

One of the most important attributes of any code we write is maintainability.  Over the lifetime of a piece of code, it will be debugged, modified and upgraded numerous times, often by developers who were not on the original team that wrote the code. 

One of the most important aspects of maintainability is that the developer maintaining the code needs to be able to easily find what they are looking for.  This is where feature folders come in.  When using feature folders in our API, we don't use the traditional folders named Models and Controllers.  Instead, we name our folders after the endpoints of our API (the features) and place all the code related to that endpoint (feature) within that folder.  Here is an example:
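Roughly, the layout looks something like this (a sketch; the project name is just a placeholder, and the Locations folder is shown expanded):

    FoodTruckApi/
        FoodTrucks/
        FoodTrucks.Reviews/
        FoodTrucks.Schedules/
        Locations/
            LocationsController.cs
            LocationModel.cs
            CreateLocationModel.cs
            UpdateLocationModel.cs
            CreateLocationModelValidator.cs
            UpdateLocationModelValidator.cs
            LocationsAutoMapperProfile.cs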




As you have probably guessed, this is an API related to Food Trucks.  If you looked at the endpoints for the API you would see URLs like /api/FoodTrucks and /api/Locations.  It is pretty easy to guess where the code is for those endpoints, in the FoodTrucks and Locations folders respectively.  For child endpoints, that is, endpoints like /api/FoodTrucks/{foodTruckId}/Reviews, I use a dot notation on the folder name like FoodTrucks.Reviews.  As you can see though, it is very easy to map from what an endpoint is to where the code is that supports that endpoint.

As you can see in the layout above, I have expanded the Locations folder so we can have a look inside.  We see the following:
  • The controller - LocationsController
  • The view models used by this feature - LocationModel, CreateLocationModel and UpdateLocationModel
  • Two validator classes that are used to implement validation rules related to this feature  - CreateLocationModelValidator and UpdateLocationModelValidator
  • The AutoMapper profile used to map objects related to this feature - LocationsAutoMapperProfile.
Without even looking inside the files at the code, you already have a pretty good idea about what is going on.  And you know where everything is related to this feature.  Contrast this with the traditional folder layout in MVC projects:
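Roughly, that layout looks something like this (again a sketch; only a few of the files are shown):

    FoodTruckApi/
        Controllers/
            FoodTrucksController.cs
            FoodTruckReviewsController.cs
            FoodTruckSchedulesController.cs
            LocationsController.cs
            ...
        ApiModels/
            FoodTruckModel.cs
            LocationModel.cs
            CreateLocationModel.cs
            UpdateLocationModel.cs
            ...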




Here, if we need to debug and then make a change to the Locations endpoint, we have to navigate multiple folders to do so.  In fact, I'm not even showing the validator classes or the AutoMapper profile classes in this layout because the list is already getting long.  Sure, we could put our validators in with our models in the ApiModels folder, but even for this relatively small API, we are starting to get some folders that are unwieldy.

There is another benefit of using feature folders.  Since all of the code used for a feature is in one folder, the folder (and thereby the namespace) forms a boundary in our code that delineates this feature.   If we find ourselves needing to reach across folders, we are crossing a boundary in our code base and we have an indication that something is wrong.  This helps to discourage us from sharing model objects across two or more controllers where the object would have more than one responsibility.  Having boundaries defined like this in our code helps us be more modular and keep everything where it belongs.

Summary

I think feature folders are the way to go.  This is a pattern that you actually see quite a bit in the Angular and React communities, and I think there is no reason not to adopt the same in ASP.NET projects.  I think we end up with a much cleaner organization of the code and it is easier to find what you are looking for.  So for your next project, give feature folders a try.  I think once you complete a project using feature folders you will agree it is the way to go.

Sunday, April 1, 2018

Implementing HATEOAS Functionality in an ASP.NET Core Web API

HATEOAS (Hypermedia As The Engine Of Application State) is a feature of REST-based APIs in which the server provides not just data in its responses, but also hyperlinks to related data and/or actions that a client can access on the API.  This allows the client to discover for itself what operations are available on an API and dynamically navigate through them.

An API response that implements HATEOAS would look like the following:

{
    "foodTruckId": 4,
    "name": "Rice Bowl",
    "description": "Asian favorites served in a bowl of rice",
    "website": "http://foodtrucknation.com/RiceBowl",
    "lastModifiedDate": "2017-10-23T01:57:37.185",
    "tags": [
        "Asian",
        "Chinese Food"
    ],
    "reviewCount": 4,
    "reviewAverage": 4,
    "socialMediaAccounts": [],
    "meta": {
        "self": "http://localhost:8000/api/FoodTrucks/4",
        "reviews": "http://localhost:8000/api/FoodTrucks/4/Reviews",
        "schedules": "http://localhost:8000/api/FoodTrucks/4/Schedules"
    }
}

In this case, I've grouped all of the hyperlinks under a property named meta, and I'm providing links to additional data about the food truck, namely where to find the reviews and schedules for this food truck, as well as the URL for the food truck itself.

Why would you want to implement HATEOAS, when after all many APIs work just fine without it?  One reason is because it is part of the REST spec, but I think there are better reasons than simply "it is part of the spec".  What I really like is the self-discovery aspect, especially for a developer who is new to your API.  They can easily see how the different endpoints and objects relate to each other.  They can follow these links in their browser and walk through your API, discovering the various relationships as they go.  Yes, there are tools like Swagger, but it can be really powerful to walk through an API and see real data as you click on links to different endpoints.  The hope is that one day automated clients will be able to do this, but today, HATEOAS still helps the most important client of your API, the developer who is consuming it.

That said, one of the barriers to implementing HATEOAS is the ability to generate the correct URLs in your response.  In this post, I am going to show how this can be easily done in ASP.NET Core.

Sample Project

All of this code is implemented in my Food Truck Nation API that is available on Github at the following URL.


Assumptions

Let's set the stage with some assumptions so we are all speaking the same language.

  • This blog post and the sample code is currently using ASP.NET Core 2.0.  The solution should work for ASP.NET Core 1.1 as well, but you may need a minor tweak here and there.
  • I am assuming you have separate view model objects that you use to return data back to the client and you aren't just returning your domain objects.  By domain objects, I mean whatever you are querying out of your database and working with in your application.
  • This post and the sample code uses AutoMapper to map between the domain objects and the view models.  If you roll your own mapping code or use a different mapping framework, many of the lessons should still apply, but of course the implementation will be different.

Model Objects

The first thing we need to do is define properties in the model objects that we will return to the client to hold the hyperlinks we want to include.  For my Food Truck Model, here is what my response model object looks like:


    /// <summary>
    /// Model class to represent the food truck data sent back to the client
    /// </summary>
    public class FoodTruckModel
    {
        public int FoodTruckId { get; set; }
        public String Name { get; set; }
        public String Description { get; set; }
        public String Website { get; set; }
        public List<String> Tags { get; set; }
        public int ReviewCount { get; set; }
        public double ReviewAverage { get; set; }
        public DateTime LastModifiedDate { get; set; }
        public FoodTruckLinks Meta { get; set; }

        #region Nested Types

        public class FoodTruckLinks
        {
            public String Self { get; set; }
            public String Reviews { get; set; }
            public String Schedules { get; set; }
        }

        #endregion
    }

The Meta property is the property we want to pay attention to, as it contains a FoodTruckLinks object that groups together all of the hyperlinks we want to send back with this model object.  The Reviews and Schedules properties on the FoodTruckLinks object will all contain hyperlinks to their respective resources while the Self property will contain a link (the resource identifier) for this object.

The idea of using the property name Meta came from another talk I saw, though I don't remember which one.  Whatever name you use though, I think it is a good idea to group all of the links you want to provide together in an object like this and to give them a consistent name on your model objects.  This way the client knows where to look for this information and it is all grouped together in one place.

Finally, you can see that I model my FoodTruckLinks object as a nested (inner) class.  This is because the FoodTruckLinks object (and all my other Links objects) really don't have any purpose outside of the context of their parent class.  Therefore, I modeled them as nested classes to reinforce the notion that this object really belongs to and exists just in the context of its parent (owning) object.  You do not have to model your link objects this way, but I find this a useful technique.

Creating URLs in ASP.NET Core

One of the major challenges is how to generate correct URLs in your application.  Fortunately, ASP.NET Core includes a built-in URL helper class that will create URLs for us.  This class knows about the protocol, the server name and the directory where our API is deployed, so it can create a proper URL for us without us having to piece all of these details together ourselves with string concatenation.  Being able to use a built-in class to create our URLs not only saves us a bunch of work, but is also more reliable, because we can be confident that Microsoft has properly handled all of the corner cases that we might encounter.

The URL helper is exposed as an interface, IUrlHelper, and we can access it from the Url property of our Controller classes.  To generate an absolute URL, we want to use the Link method, like this:
  String selfUrl = this.Url.Link(GET_FOOD_TRUCK_BY_ID, 
      new { foodTruckId = foodTruckId });
  String reviewsUrl = this.Url.Link(FoodTruckReviewsController.GET_ALL_FOOD_TRUCK_REVIEWS, 
      new { foodTruckId = foodTruckId });
  String schedulesUrl = this.Url.Link(FoodTruckSchedulesController.GET_FOOD_TRUCK_SCHEDULE, 
      new { foodTruckId = foodTruckId });

You might be tempted to use the Action method on IUrlHelper, but the Action method will only generate the path portion of the URL (like /api/FoodTrucks/4), not the full URL including the protocol, server name and directory path, unless you pass those in as arguments, and that is what we want to avoid.  So Link is the method we want to use.

The first argument is the name of the route.  If you want to implement HATEOAS in your API, you are going to need to name your routes.  To do this, you include the Name property on the HttpGet, HttpPost, HttpPut and HttpDelete attributes that you use to decorate your action methods, like this:

    /// <summary>
    /// Route name constant for route that gets an individual food truck
    /// </summary>
    public const String GET_FOOD_TRUCK_BY_ID = "GetFoodTruckById";


    [HttpGet("{foodTruckId:int}", Name = GET_FOOD_TRUCK_BY_ID)]
    public IActionResult Get(int foodTruckId)
    {
        // Action Code Here
    }

In this case, I'm using a constant to hold the value of the route name.  More importantly, we see that we are setting the Name property for the route in the HttpGet attribute so we can refer to this route in other parts of our code, namely the places where we need the URL helper to create URLs for us.

The second argument is an anonymous object of the route parameters.  In our case, each of the three routes take just one parameter, foodTruckId, so that is the only parameter we include in our anonymous object.  If however your route required multiple parameters, your anonymous object would have one property for each parameter in the route.  Also note that the name of the property in the anonymous object must match the name of the parameter exactly.  Our FoodTruckReviewsController expects a parameter of foodTruckId, so that is what we name our property.  We can't name it just id and have things work.  We need to match the names exactly.
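For example, if we had a route that took both a food truck id and a review id, the call might look like this (the GET_FOOD_TRUCK_REVIEW_BY_ID route name and its parameters are hypothetical):

  String reviewUrl = this.Url.Link(FoodTruckReviewsController.GET_FOOD_TRUCK_REVIEW_BY_ID, 
      new { foodTruckId = foodTruckId, reviewId = reviewId });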

If we place the code snippet above in one of our controllers and debug through it, we can see we get an actual URL:
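For example, running locally the three variables end up holding values like these (illustrative values that match the sample response at the top of this post):

  selfUrl      = "http://localhost:8000/api/FoodTrucks/4"
  reviewsUrl   = "http://localhost:8000/api/FoodTrucks/4/Reviews"
  schedulesUrl = "http://localhost:8000/api/FoodTrucks/4/Schedules"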
Since I am debugging on my local machine, I get an address that includes localhost and the port number I am running on.  But rest assured that when we are running on a server, IUrlHelper will create a URL with that server's name and the correct path to where the application is deployed.

Using this approach, we could map from our entity object to a model object in our controller like this:
    [HttpGet("{foodTruckId:int}", Name = GET_FOOD_TRUCK_BY_ID)]
    public IActionResult Get(int foodTruckId)
    {
        FoodTruck foodTruck = this.foodTruckService.GetFoodTruck(foodTruckId);

        if (foodTruck == null)
        {
            return this.NotFound(new ApiMessageModel() { Message = $"No food truck found with id {foodTruckId}" });
        }
        else
        {
            String selfUrl = this.Url.Link(GET_FOOD_TRUCK_BY_ID,
                    new { foodTruckId = foodTruckId });
            String reviewsUrl = this.Url.Link(FoodTruckReviewsController.GET_ALL_FOOD_TRUCK_REVIEWS,
                new { foodTruckId = foodTruckId });
            String schedulesUrl = this.Url.Link(FoodTruckSchedulesController.GET_FOOD_TRUCK_SCHEDULE,
                new { foodTruckId = foodTruckId });

            var model = new FoodTruckModel()
            {
                FoodTruckId = foodTruck.FoodTruckId,
                Name = foodTruck.Name,
                Description = foodTruck.Description,
                Website = foodTruck.Website,
                Meta = new FoodTruckModel.FoodTruckLinks()
                {
                    Self = selfUrl,
                    Reviews = reviewsUrl,
                    Schedules = schedulesUrl
                }

            };

            return this.Ok(model);
        }
    }

This code will return a model object like we saw at the beginning of this article which includes the hyperlinks to this resource (the Food Truck), the reviews for the food truck and the schedule for the food truck.

What is not ideal is that this code requires us to do the mapping directly in our controller action, and we would need to do this in each and every one of our actions that returns a model.  Typically though, we use a library like AutoMapper to translate our data from domain objects into model objects.  This is where things get a little tricky, so let's take a look at how we can make that work.

Creating Hyperlinks Using AutoMapper

When we created the URLs for our hyperlinks above, we were in an action method of our Controller, so we had access to the IUrlHelper object.  However, when our objects are being mapped inside of AutoMapper, by default AutoMapper (or any other mapping library) will not know anything about IUrlHelper or how to create URLs.  Fortunately though, there is a way we can inject both the IUrlHelper object and some custom mapping code into the mapping process so that we can create proper URLs when we are mapping our objects.

One of the features of AutoMapper is the ability to define Custom Value Resolvers.  These allow us to take control of the mapping process by writing a custom class that will handle the mapping process.  If you look at the FoodTruckAutoMapperProfile class, you will see this is exactly what I am doing.  Here is the relevant code snippet:

    this.CreateMap<FoodTruck, FoodTruckModel.FoodTruckLinks>()
        .ForMember(
            dest => dest.Self,
            opt => opt.ResolveUsing<UrlResolver, RouteUrlInfo>(src =>
                new RouteUrlInfo()
                {
                    RouteName = FoodTrucksController.GET_FOOD_TRUCK_BY_ID,
                    RouteParams = new { foodTruckId = src.FoodTruckId }
                }
            )
        )
        .ForMember(
            dest => dest.Reviews,
            opt => opt.ResolveUsing<UrlResolver, RouteUrlInfo>(src =>
                new RouteUrlInfo()
                {
                    RouteName = Reviews.FoodTruckReviewsController.GET_ALL_FOOD_TRUCK_REVIEWS,
                    RouteParams = new { foodTruckId = src.FoodTruckId }
                }
            )
        )
        .ForMember(
            dest => dest.Schedules,
            opt => opt.ResolveUsing<UrlResolver, RouteUrlInfo>(src =>
                new RouteUrlInfo()
                {
                    RouteName = Schedules.FoodTruckSchedulesController.GET_FOOD_TRUCK_SCHEDULE,
                    RouteParams = new { foodTruckId = src.FoodTruckId }
                }
            )
        );

Rather than thinking of this code as mapping a FoodTruck object to a FoodTruckLinks object, think of it as the code that creates the FoodTruckLinks object, using the FoodTruck object as input for the data it needs.  The three ForMember calls all follow the same process, just for different URLs, so let's walk through the first mapping, the Self property that gets the hyperlink for the current object.

The dest => dest.Self expression designates that we are populating the Self property.  The opt.ResolveUsing call is where things get interesting.  It tells AutoMapper we want to map this value using a custom value resolver class.  The name of that custom value resolver class is UrlResolver, which we see as the first generic parameter and which we'll take a look at in a moment.  The second generic parameter allows us to specify a custom source object of data we need to pass into the value resolver.  This is critical, because now we can pass additional information into the resolver, like the name of the route and any route parameters we need in our mapping process.  For our purposes, I've defined an object called RouteUrlInfo that just acts as a container for the data we need in our custom mapping process.

Finally, the lambda passed to ResolveUsing tells AutoMapper how to populate this custom source information before it calls the custom value resolver.  In this case, we are creating and populating the RouteUrlInfo object with the route name and the parameters needed for the route.

That may seem complicated on the surface, but all that is really happening is that we are writing a class with some custom mapping code that we need (UrlResolver), and then AutoMapper will invoke a method named Resolve() on that object when it needs to map that property.  In addition, we get the chance to pass some custom data into this mapping process, so we do that by using a RouteUrlInfo object that is created via a lambda function right before the mapping occurs.

So let's take a look at the code for UrlResolver and see what the custom mapping process looks like for creating a link.

UrlResolver

Below is the code for the UrlResolver class:

    public class UrlResolver : IMemberValueResolver<object, object, RouteUrlInfo, String>
    {
        private readonly IUrlHelper urlHelper;

        public UrlResolver(IHttpContextAccessor httpContextAccessor)
        {
            var httpContext = httpContextAccessor.HttpContext;
            this.urlHelper = (IUrlHelper)httpContext.Items["URL_HELPER"];
        }

        public virtual string Resolve(object source, object destination, 
            RouteUrlInfo sourceMember, string destMember, ResolutionContext context)
        {
            return this.urlHelper.Link(sourceMember.RouteName, sourceMember.RouteParams);
        }
    }

As you see, this class implements AutoMapper's IMemberValueResolver interface.  Classes implementing IMemberValueResolver are defined with four generic parameters, which are as follows:
  • The type of the source object being mapped from.  Since we want our resolver class to work for any domain object, we use the type object.
  • The type of the destination object being mapped to.  Since we want to allow any model or links object, again we use the type object.
  • The type of the custom source object AutoMapper will pass in.  This is our custom data carrier object RouteUrlInfo that was discussed earlier.  This parameter allows us to design a custom object that has any other information we need for our mapping and pass it into the custom resolver.  
  • The type of the object to be returned from this custom mapping process.  In our case, we want a URL which is just a string.

The Resolve() method is what gets called when AutoMapper needs to resolve the custom value during the mapping of these objects.  We see that this class has an IUrlHelper object that it grabs out of the HttpContext, and it is then able to call the Link() method on IUrlHelper just like before to create the hyperlink.  To call the Link() method, we need the name of the route and any parameters for the route, and these come out of the RouteUrlInfo class, which is a class we define to help pass additional information into the mapping process.

Here is the source code for the RouteUrlInfo class.
    public class RouteUrlInfo
    {

        public String RouteName { get; set; }


        public object RouteParams { get; set; }

    }

The RouteName property is obvious enough: just the name of the route we want to create a hyperlink for.  The RouteParams property should be populated with a C# anonymous object that contains a property for each parameter needed in the route.  If a route needs just one parameter, then the anonymous object will have just one property.  If the route needs two parameters, then the object will have two properties and so on.  Further, the property names in the anonymous object need to exactly match the names of the parameters on the route.  In this way, this simple object can deliver all of the information needed to let our custom value resolver object know what it needs to know to create our route.

There is one item that we have still not discussed, and that is how the IUrlHelper object got into the HttpContext in the first place.

Making IUrlHelper Available to Your Custom Value Resolver Object

ASP.NET exposes an IUrlHelper object to us as a property on our Controller classes, as we saw in the first part of this post, so our challenge is to get this instance from our controller over to our custom value resolver (UrlResolver).  The vehicle for sharing data throughout a request life cycle in ASP.NET is the Items property of HttpContext.  What we need to do, every time before an action method runs, is put a reference to the IUrlHelper in the Items collection of the current HttpContext.  The best way to do this is to override the OnActionExecuting method on your Controller class:
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        base.OnActionExecuting(context);

        context.HttpContext.Items.Add("URL_HELPER", this.Url);
    }

I actually do this in a BaseController class that I define, and then have all of my controller classes derive from the BaseController class, so I know this is automatically done for every action method I might define.  We see all this is doing is grabbing the IUrlHelper object from the Url property of the controller and putting it in HttpContext's Items collection.
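A rough sketch of what that looks like (only the override matters; the controllers shown earlier would then inherit from BaseController instead of Controller):

    public abstract class BaseController : Controller
    {
        public override void OnActionExecuting(ActionExecutingContext context)
        {
            base.OnActionExecuting(context);

            // Share the controller's IUrlHelper with code that runs outside
            // the controller, like our AutoMapper custom value resolver
            context.HttpContext.Items.Add("URL_HELPER", this.Url);
        }
    }

    public class FoodTrucksController : BaseController
    {
        // Actions as shown above
    }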

Then, UrlResolver can pull the reference out in its constructor so it is available when the Resolve() method gets called.  Here is the constructor for UrlResolver:


    public UrlResolver(IHttpContextAccessor httpContextAccessor)
    {
        var httpContext = httpContextAccessor.HttpContext;
        this.urlHelper = (IUrlHelper)httpContext.Items["URL_HELPER"];
    }

We actually can't inject HttpContext directly, but rather have to inject an IHttpContextAccessor object.  The good news, though, is that ASP.NET Core's built-in DI framework knows how to take care of everything, so we just need to define our constructor like this and everything else is taken care of.  And now, our custom value resolver (UrlResolver) will have access to the IUrlHelper object it needs to create hyperlinks.
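One small caveat: depending on your ASP.NET Core version and project template, you may have to register IHttpContextAccessor yourself, and AutoMapper has to be set up so it can construct the resolver through the container.  A rough sketch of what that might look like in Startup.ConfigureServices (the AddAutoMapper call assumes the AutoMapper.Extensions.Microsoft.DependencyInjection package is referenced):

    public void ConfigureServices(IServiceCollection services)
    {
        // Expose the current HttpContext to classes like UrlResolver
        services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();

        // Scan for AutoMapper profiles and let resolvers be built through DI
        services.AddAutoMapper(typeof(Startup));

        services.AddMvc();
    }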

Summary

This has been a long journey, but when you look back, it actually isn't that hard to implement HATEOAS in your ASP.NET Core APIs.  And you can do this rather seamlessly, where the incremental cost for each response model is just defining the appropriate mappings in your AutoMapper profile.  So to summarize, here are the major steps:

  • Leverage the built in IUrlHelper to create your hyperlinks.  This class knows how to form proper URLs and will take care of creating a link with the correct protocol, server, port, app directory, path and parameters for you.
  • Override the OnActionExecuting() method on your Controller class to put a reference to the IUrlHelper into the Items collection of HttpContext.  This way it will be available to our AutoMapper custom value resolver object later.  I suggest you define a base controller class and perform this logic there so it is consistently done for all your controllers and actions.
  • Create an AutoMapper custom value resolver object that contains the logic for how to create hyperlinks given a route name and its parameters.  This is the UrlResolver class, and this is where the custom mapping code lives that gets run when we need to create a hyperlink during the mapping process.
  • Define a simple data carrier object (RouteUrlInfo) that we can use to pass additional information, like the route name and any parameters, down to our mapping process when we need to map a domain object to a model object.
  • Name your routes.  This needs to be done so IUrlHelper can find your routes by name.  You do this by simply including the Name property in the HttpGet, HttpPost, HttpPut or HttpDelete attributes on your action methods.
  • Define the custom mappings in our AutoMapper profile objects.  Basically, this is just some syntax to tell AutoMapper, for each property in our Links object, what route goes with the link and what the parameters are.
As you have seen, a lot of the code is code that you write once and just include in your project (or you can just copy it out of my project).  Now the only thing you have to do each time is name your routes and implement the correct mapping code in your AutoMapper profile.  And that becomes very boilerplate.  So for not a lot of work, your APIs can support HATEOAS.

Is It Worth It?

This code actually took me a very long Saturday to develop as there were lots of pitfalls along the way.  But now I have it, so all the hard work has been done.  I like having the links in my responses just because I think it makes self-discovery a little easier for a new user of the API.  Being able to walk the tree and understand how everything relates is really invaluable when you are trying to figure out how everything fits together.  There are a few more bytes that end up going over the wire, but I will take that trade-off in order to make the API a little easier to use and understand.  This is especially true since the heavy lifting is done and I have all of the supporting framework developed and ready to go.

So what are your thoughts?  Leave them in the comments below or reach out on Twitter.  I'd love to know if you find this a useful approach.






Sunday, September 10, 2017

MKE Dot Net 2017 Review



On Saturday, September 9, I attended the 3rd annual MKE DOT NET conference here in Milwaukee.  This is a one-day event organized by Centare, a local technology consultancy, that focuses on Microsoft .NET and related technologies.  I was also fortunate enough to be selected as a speaker for the second year in a row.

I said this last year after attending, and I will say it again this year.  This is one of the best organized and executed events that I have ever attended.  The 2016 version of the event was excellent, and yet the MKE DOT NET team found a way to deliver an even better event this year.  I say that as both a speaker and as an attendee.  This is a real first class event, an event that every developer in Wisconsin and the Chicago area should be looking to attend in future years.  You can travel farther and spend more money, but you will not find a better executed event than this one.

For the rest of this post, I'll review various aspects of my experiences in the hopes they are useful to others when looking to attend or speak at this event in future years.

Speaker Experience


The speaker experience at MKE DOT NET is, in a word, superior.

The communication between the organizers and speakers is excellent.  Speaker selections went out on June 22nd, well ahead of the event.  In addition, two weeks before the event they sent out a speaker guide, answering questions about the A/V capabilities of the room, the schedule for the day and other general information about the event that is good to know beforehand.  They also offered to pick up any out of town speakers at the airport if need be.  This level of communication really makes the speaker's job easier, because a lot of the questions you have about an event, they have already proactively answered.

Like last year, they hosted a speaker dinner on Friday night before the event.  This year it was at Third Space Brewing in Milwaukee.  This year, more speakers were able to attend, and the space was more conducive to socializing with your fellow speakers than last year.  Not to say last year wasn't good; it was very good.  But last year was more of just a dinner, whereas this year was a dinner and an opportunity to get to know everyone, so this was excellent.

As for the actual speaking part, this was also excellent.  The event was held at the Potawatomi Casino in Milwaukee, which also has a hotel and conference space.  The A/V in the rooms was excellent.  Projectors were built into the rooms, had a 16:9 aspect ratio, were bright and projected onto a large screen built into the room.  In short, the rooms were built for events like this, with high resolution projectors so everyone could easily see.  This is a big win to have high quality A/V like this.  The rooms sat around 75-80 people, so there was plenty of space for popular sessions.  Finally, while I was setting up, two volunteers from the conference came in to check with me that I had everything I needed and everything was working.  Again, a first class experience on the rooms.



As for an opportunity to improve, this year talks were 45 minutes rather than the normal hour.  I would prefer in future years they go back to the hour-long format for talks.  As a speaker, you often prepare your talks for an hour because that is what most events allocate.  So I needed to do some trimming on my talk to have it fit in the allotted time, and even then, I still felt a little rushed on the last section.

Attendee Experience

Aside from giving my session, I was able to attend a number of other sessions at the event as well.  Again, I think the attendee experience was superior.


The most important factor for attendee experience comes down to who is speaking and whether there are interesting sessions to attend in each time slot.  What I like about this event is that there is a mix of local speakers and speakers from out of the area who have more recognizable names.  I think that is an important balance to keep in the future.  It is nice to see some recognizable names to draw people into the event, but I would hate for all of the area speakers to be crowded out of the event as well.  Being able to learn from and network with both is important, and I hope this continues.

I found Joel Karr's talk on not thinking when you write code very thought-provoking.  He is right, people have different skill levels with different technologies, and we should consider that when assigning out work on a team.  What this also says is that we want to gain enough practice with key technologies so we don't have to think when we work on a task -- and by that I mean we understand the technology and problem so well that we can rely on muscle memory.  Further, he talked about how we need to admit when we are in learning mode.  This is an excellent point.  There is nothing wrong with learning mode.  We all have to do it, and we all do it a lot.  But let's admit this to ourselves and know that progress will be slower and it will be hard to really define the work while learning.  And as such, we need to make time for this learning.  These are all very good thoughts to keep in mind.



I then attended the talk "Components of Good UI: An Intro to ReactJS" by Vince Maiuri and Ryan Feil.  During this talk, they live coded a simple ReactJS application, which was good to see because it really helped me understand how the different pieces of a ReactJS app fit together.  I've looked at some tutorials for ReactJS before, but this talk was the clearest explanation I've seen yet about how to get going with ReactJS.  My team is working on an Angular 4 app right now, so I don't have any plans to start working with ReactJS right away, but it is good to understand the different approaches the two frameworks take.



The next session I attended was on Mocking and Unit Testing by John Wright.  John works at Stack Overflow and is clearly a really sharp guy.  I liked the history he gave of mocking libraries and how he introduced the different things you could do with a mocking library.  The biggest takeaway for me was the capabilities of some of the unconstrained mocking frameworks.  While these are all paid libraries, they will "rewrite code on the fly" during the JIT process and allow you to test things that otherwise you would not be able to test easily, like legacy code.  So something worth keeping in mind.


The final session I attended was Dustin Ewer's session on d3.js.  I've seen Dustin speak a few times before, and he always does a really good job and this time was no exception.  I played around with d3 a couple years ago, so this was a good re-introduction to the capabilities for me.  He also talked about some of the libraries that have now been built on top of d3.js that make consuming the library a little easier.  This is something I wish I had time to play around with more, but is a topic that has to go onto the learning backlog for now.



There were a number of other sessions I wish I could have attended.  I think Jeff Strauss's talk on Open Source software would have been excellent.  So also would the talks on Functional JavaScript by Jonathan Mills and Jane Prusakova's talk on the power of a Nudge.  And I could go on and on; it just speaks to the depth of quality sessions, which is the most important attribute of any conference.

Finally, one of the other nice perks of the event is they host a happy hour at the conclusion where everyone gets a drink ticket, a custom commemorative glass and the opportunity to talk with their fellow attendees about the day's happenings.  This is a really nice touch, and a great chance to catch up with others rather than everyone just taking off after the last session ends.  This year, the commemorative glass was a really heavy mug that I am sure will be great for a big mug of Sprecher root beer or even a root beer float.  Just need to get it washed so I can try it out.


Other Thoughts

I really liked the Potawatomi as the location for the event.  As I covered above, the rooms were first class and really designed for events like this.  I thought the food at both breakfast and lunch was very good and there was plenty of space to host an event of this size.  I also like the location because if someone is looking for something to do after the event, there are lots of options.  Obviously there is the casino itself for those who like that sort of thing, but they also have restaurants and live music at the Potawatomi.  And if that isn't your cup of tea, you are a quick 10-12 minute drive from downtown Milwaukee and the Historic Third Ward with all of their restaurants and entertainment options.  So just a really good location.

This year's event had about 320 attendees (excluding speakers), which is up from last year and is very good.  But for as good as this event is, that number should be more, like 500 or so.  I think most of the attendees were Milwaukee based, but there is no reason that developers from Chicago, Madison and Appleton/Green Bay shouldn't be able to make the quick drive over as well, especially given the quality of the event.  You are just not going to find a better speaker lineup and a better put on event anywhere in the area.  Tickets ranged from $79 to $119, but that is really cheap considering it included two meals, an after party and a t-shirt.  I am happy with the 320 number, but next year I would like to see what I can do to help get this number up even higher.

One other thought is that the event started at 8:00 AM this year.  Unfortunately, this was also complicated by the Brewer's half marathon blocking off most of the roads to the event, so a lot of people including myself arrived late.  For myself, this meant that I missed the keynote presentation which was disappointing.  However, I wonder if it might be smart to push back the keynote to start at say 8:45 or even 9:00 AM.  If an attendee is going to drive in from Chicago, Madison or the Fox Valley, a little later start time makes things easier, so maybe a simple change like this could boost attendance from those other areas.

Wrap Up

All in all, a great event.  Enough credit cannot go to David Pine, Steven Hicks, Rachel Krause, Amanda Daering and the rest of the Centare team for putting this on.  So take this as a big thank you, for putting on the event, for giving me the opportunity to speak and for giving all of us the opportunity to learn.







Saturday, September 2, 2017

Converting an Existing SQL Server Table to a Temporal Table

One of the very useful features added in SQL Server 2016 was temporal tables.  With a temporal table, SQL Server will automatically record a history of all changed data rows to a history table associated with the temporal table.  Further, SQL Server gives us some new syntax to easily query what the data in the table looked like at any point in time or to show the entire history of a row in a table.  If you want more details, you can check out this earlier blog post I wrote on temporal tables.

However, what if we have an existing table in our database that we want to convert to a temporal table?  Let's take a look at how we do that.

For this example, let's assume that we have the following table that already exists in our database.


CREATE TABLE Employees
(
    EmployeeId    INT          NOT NULL,
    FirstName     VARCHAR(20)  NOT NULL,
    LastName      VARCHAR(20)  NOT NULL,
    Email         VARCHAR(50)  NOT NULL,
    Phone         VARCHAR(20)  NULL,
    CONSTRAINT PK_Employees
        PRIMARY KEY (EmployeeId)
);


Converting this table to a temporal table is a two-step process.  The first step is to add ValidFrom/ValidTo columns to the table to represent when the row was active in the table.  We can run the following statement to do this.


ALTER TABLE Employees ADD 
    ValidFrom DATETIME2(3) GENERATED ALWAYS AS ROW START 
        NOT NULL DEFAULT '1900-01-01 00:00:00.000',
    ValidTo   DATETIME2(3)  GENERATED ALWAYS AS ROW END 
        NOT NULL DEFAULT '9999-12-31 23:59:59.999',
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

Here we are adding the two columns needed for when the row is valid and the PERIOD that is required by a temporal table.  Some things to note:

  • We could name the columns ValidFrom and ValidTo anything that we want to, these are just the names that I chose.
  • These columns must be of a DATETIME2 data type.  In this case, I am using DATETIME2(3) to go down to millisecond precision.
  • We need to provide default values for these columns in order to populate the existing rows in the table.  For my ValidFrom column I chose 1/1/1900 as a default starting date.  The ending date in the ValidTo column must be the maximum date/time value for our data type, so in this case, 12/31/9999 at 23:59:59.999.
  • Otherwise, the syntax for the columns look much like the syntax for the columns in the CREATE TABLE statement.
Then, we need to run step 2 of the process:


ALTER TABLE Employees			
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeesHistory));

Here, we turn on SYSTEM_VERSIONING for the table so the ValidFrom and ValidTo dates will be auto-generated and define the name of the history table to use.

And that is all there is to it.  Now, your table has been converted to a temporal table and any changes to your table will be tracked in the history table.  
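For example, once some changes have been made to the table, a point-in-time query looks like this (the date and employee id are just for illustration):

-- What did employee 1's row look like on June 1, 2017?
SELECT EmployeeId, FirstName, LastName, Email, Phone, ValidFrom, ValidTo
    FROM Employees
    FOR SYSTEM_TIME AS OF '2017-06-01 00:00:00'
    WHERE EmployeeId = 1;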







Monday, April 24, 2017

A Scary Incident at Twin Cities Code Camp

We had a scary incident at Twin Cities Code Camp this last weekend.

About 45 minutes into Scott Addie's talk on Reusable UI Components in ASP.NET Core, an attendee collapsed, fell out of his chair and hit the floor completely unconscious.  I was looking down at the time, and by the time I looked up, several other attendees were rushing to aid the gentleman who had passed out.

Fortunately, he came to in about 10 seconds, though he still complained about being light-headed.  Though he was able to talk, he clearly was not right, so another attendee called 911.  Within a few minutes, Campus Safety had arrived, followed by paramedics who took the gentleman to a local hospital to run further tests and keep him under observation.  During the 10-15 minutes from the time he collapsed until the time he was wheeled out on a stretcher, the man was conscious, coherent and able to talk.  But like I said above, he clearly was not right.  I do not have any more details on his present condition, but all of us at the event and in the technology community in general hope he makes a speedy recovery and is able to get whatever underlying issues may be going on addressed.

This was scary, terribly scary.  When I looked up and first realized what was happening, my first thought was that he may have had a heart attack.  I was scared for this gentleman and I felt very helpless at the time.  It was just a few minutes until Campus Safety arrived, but those few minutes ticked by very slowly.  In a moment all of us went from trying to learn something new to wondering what kind of medical help this individual needed and if that help would arrive in time.

It was scary too in that you think of your own health, and in my case, I realize it isn't nearly as good as it should be.  I'm in my mid 40's now, and health isn't something that can be taken for granted.  The last few years, I have been much less active than I used to be, and as a result, I've gained a lot of weight.  When your clothes don't quite fit right or they don't fit at all any more, that is telling you something.  And not just that I have become too sedentary.  It also speaks to my eating habits and how in general, I am just not taking good care of my most important possession, my health.

I don't think I am alone in the technology community.  Like many of us, I sit in a chair all day.  I consume way too much soda while doing so.  And then I go home, eat, and sit in a different chair to code some more.  I know where too little activity and an unhealthy diet leads.  We all do.  Sometimes we need a wake up call though in order to take some action.

What I am Going to Do About It

I had a lot of time to think about this driving back from the Twin Cities, and I decided that I have to change some things.  Now.  These are things that I am changing right away:

  1. I will make sure to get at least 30 minutes of activity a day.  Ideally with summer coming a lot of this time will be on my road bike, and then can be a lot longer in duration.  But no matter what, every day, I need to get out and get some activity.  Tonight, that was a 30 minute walk around the neighborhood after dinner.  That is all it takes.  I'm not looking to be an Olympian, but we all need to be active every day, no exceptions.
  2. I will cut my soda intake in half.  Hate to say it, but soda is an addiction for me like cigarettes are for others.  Yes, I've tried to quit before, and that soda habit comes right back.  I wish I could quit cold turkey today, but I have to be realistic, and I want these goals to be achievable.  Cutting my intake in half will be major progress for me, and my weight and health will improve because of it.
  3. I will sign up for a Red Cross First Aid/CPR class and complete it by the end of summer. The last time I had first aid and CPR training was way back in college.  I hope this is training I never have to use.  But I didn't like that feeling of being helpless.   If a similar situation ever happens in the future, I want to be confident I know what to do and be able to do it.  
On June 1st, a little over a month from now, I'll post again on this topic with my progress.  And I'll post again at the beginning of September.  Mostly that is to help me hold myself accountable.

What I Am Encouraging Everyone Else To Do


I would encourage you to set up your own health goals like I have above.  If you do, leave a comment here.  Or blog about them or tweet about them.  And then do so again in a month.  In the meantime, hold yourself accountable for them.  We can all do a 30-minute walk each and every day.  Yes, it means a little less time glued to our devices or a little less time hacking on our personal projects.  It's OK.  Our teams, our friends and most importantly our families need us, all of us, to be healthy.  Besides, some time away from all of the screens we stare at all day might actually help us improve our creative processes.

On Saturday, everything appears to have turned out OK.  But it was a scary moment.  What we can do is learn from it and take action about it.  We spend a lot of time keeping our codebases healthy.  We need to pay just as much attention to keeping ourselves healthy.  I got that message loud and clear.



Sunday, April 23, 2017

JSON Functionality in SQL Server - Part 2

This is the second part of a series on JSON functionality in SQL Server.  This post will focus on accessing data in arrays that may be part of your JSON data.

You can find the other parts of the series here (I'll update as I go)

  1. Accessing scalar properties on JSON objects stored in SQL Server
  2. Accessing array data in JSON objects stored in SQL Server (this post)


Also, if you want to recreate the database I have here and try the samples out for yourself, you can find everything you need on GitHub at:



In the first part of this series, we had a simple table that contained weather data, and each row of the table contained a single JSON object with the weather observations.

The JSON object in the ObservationData column contains all scalar properties.  That is, if we want a value, we can just write a path expression and use the JSON_VALUE function to get the data element we want back.  But life is not always so simple.  What if instead of a single object, we had an object that contained an array of values or an array of objects?  How would we interrogate the JSON data in our column to get access to those values?

Let's consider the following table.

CREATE TABLE DailyWeatherDataJson
(
    ObservationId int IDENTITY(1,1)    NOT NULL,
    StationCode       VARCHAR(10)      NOT NULL,
    City              VARCHAR(30)      NOT NULL,
    State             VARCHAR(2)       NOT NULL,
    ObservationDate   DATE             NOT NULL,
    ObservationData   VARCHAR(max)     NOT NULL,
    CONSTRAINT PK_DailyWeatherDataJson
        PRIMARY KEY (ObservationId)
);

This is very similar to the table we had before with the following differences:

  • There will be one row in the table for each weather station per day
  • The ObservationData column will now hold a JSON object that contains an array of all of the observations for that day
  • Since our JSON will be larger, we have changed to a VARCHAR(MAX) data type to store our JSON
The JSON in the ObservationData column will look like this:


{
  "stationCode": "14819",
  "observationDate": "2016-07-15T00:00:00",
  "weatherObservations": [
    {
      "stationCode": "14819",
      "location": {
        "city": "Chicago",
        "state": "IL",
        "stationName": "CHICAGO MIDWAY INTL ARPT"
      },
      "observationDate": "20160715",
      "observationTime": "0053",
      "observationDateTime": "2016-07-15T00:53:00",
      "skyCondition": "FEW070",
      "visibility": 10,
      "dryBulbFarenheit": 72,
      "dryBulbCelsius": 22.2,
      "wetBulbFarenheit": 64,
      "wetBulbCelsius": 17.5,
      "dewpointFarenheit": 58,
      "dewpointCelsius": 14.4,
      "relativeHumidity": 62,
      "windSpeed": 7,
      "windDirection": "280",
      "stationPressure": 29.26,
      "seaLevelPressure": null,
      "recordType": " ",
      "hourlyPrecip": null,
      "altimeter": 29.92
    },
    {
      "stationCode": "14819",
      "location": {
        "city": "Chicago",
        "state": "IL",
        "stationName": "CHICAGO MIDWAY INTL ARPT"
      },
      "observationDate": "20160715",
      "observationTime": "0153",
      "observationDateTime": "2016-07-15T01:53:00",
      "skyCondition": "FEW070",
      "visibility": 10,
      "dryBulbFarenheit": 71,
      "dryBulbCelsius": 21.7,
      "wetBulbFarenheit": 63,
      "wetBulbCelsius": 17,
      "dewpointFarenheit": 57,
      "dewpointCelsius": 13.9,
      "relativeHumidity": 61,
      "windSpeed": 8,
      "windDirection": "280",
      "stationPressure": 29.26,
      "seaLevelPressure": null,
      "recordType": " ",
      "hourlyPrecip": null,
      "altimeter": 29.92
    },
    ...
  ]
}

The '...' is meant to represent that in almost all cases, we'll have more than two elements in the weatherObservations array, since most National Weather Service stations (where I downloaded this data from) take observations at least every hour, if not more often.

Accessing Individual Elements with the JSON_VALUE Function
If we know we want to access a value on an individual element of the array, we can do that using the JSON_VALUE function we saw in the last post.  The following SQL statement will get the temperature reading for the first element of the array that represents readings for Chicago on July 15, 2016.

SELECT 
        ObservationId,
        StationCode,
        City,
        State,
        ObservationDate,
        JSON_VALUE(ObservationData, '$.weatherObservations[0].dryBulbFarenheit') As Temperature,
        JSON_VALUE(ObservationData, '$.weatherObservations[0].observationDateTime') As ObservationTime,
        ObservationData
    FROM DailyWeatherDataJson
    WHERE City = 'Chicago'
    AND State = 'IL'
    AND ObservationDate = '2016-07-15';

And here are the results

This is useful, but we are still pretty limited.  We don't really want to go element by element through the array like this, and besides, we aren't even sure how many elements the array will have.  We need to be able to work with the array as an array, so lets look at a way to do that.

Accessing The Array With the JSON_QUERY Function
What I'm going to do here is jump ahead to the solution and then explain what is going on.  Our goal is to get back each element of the JSON array and basically turn it into a row in our result set so that we can operate on the data like it was in a normal relational format.  So let's see the answer:


SELECT 
        ObservationId,
        StationCode,
        City,
        State,
        ObservationDate,        
        CONVERT(datetime, JSON_VALUE(Observations.[Value], '$.observationDateTime'), 126) As ObservationTime,
        JSON_VALUE(Observations.[Value], '$.dryBulbFarenheit') As Temperature,
        JSON_VALUE(Observations.[Value], '$.relativeHumidity') As Humidity,
        JSON_VALUE(Observations.[Value], '$.windDirection') As WindDIrection,
        JSON_VALUE(Observations.[Value], '$.windSpeed') As WindSpeed
    FROM DailyWeatherDataJson d
    CROSS APPLY OPENJSON(JSON_QUERY(ObservationData, '$.weatherObservations')) Observations
    WHERE City = 'Chicago'
    AND State = 'IL'
    AND ObservationDate = ('2016-07-15');

Let's start with the CROSS APPLY line near the bottom of the query, because this is where the magic starts happening.

The JSON_QUERY function lets us pull back a JSON array or object inside of another JSON object as a String, so we use this with the path expression '$.weatherObservations' to pull just the array of observations out of our larger JSON object.

Then, we use the OPENJSON table-valued function, which will parse the JSON we hand it into a rowset.  And then the CROSS APPLY operator is used so we can join these results back to our main data set.

So now we basically have a row for each individual observation in the JSON array that was in our original JSON object.  And in each row, we have an individual JSON object that represents that individual observation.  That object is in the Value property of the Observations alias, and now we can access properties on that object using JSON_VALUE like we did before, which is what the JSON_VALUE calls in the SELECT list are doing.  So basically what we have done is take each element out of the array, turn it into a JSON object that we have a row for, and then extract data out of that individual JSON object like we saw in the last post.

Yes, there is a lot going on here.  But this is just a pattern.  When you have a JSON object that has an array in it, you just look up this pattern, plug in the appropriate pieces and apply it.  And what is really cool is that we can now interact with data that is down in a JSON array just like it was stored as normal rows and columns in our database.  So we can apply filters in the form of a WHERE clause, join to other data, use aggregate functions, whatever we need to do and are familiar with using SQL.
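For example, here is a sketch of an aggregate query over the same data, computing the high and low temperature for the day (same table and sample data as above):

SELECT 
        d.City,
        d.State,
        d.ObservationDate,
        MAX(CAST(JSON_VALUE(Observations.[Value], '$.dryBulbFarenheit') AS INT)) As HighTemperature,
        MIN(CAST(JSON_VALUE(Observations.[Value], '$.dryBulbFarenheit') AS INT)) As LowTemperature
    FROM DailyWeatherDataJson d
    CROSS APPLY OPENJSON(JSON_QUERY(ObservationData, '$.weatherObservations')) Observations
    WHERE d.City = 'Chicago'
    AND d.State = 'IL'
    AND d.ObservationDate = '2016-07-15'
    GROUP BY d.City, d.State, d.ObservationDate;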

Putting a View Over the Top
I think most people would probably agree, they don't want to have to memorize that syntax or write it every time.  So the answer is just to put a view over the top.  The data is still stored as JSON, but you can write traditional SQL that we are all familiar with in order to interact with the data.
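A sketch of what that view might look like, simply wrapping the query from the previous section (the view name is just for illustration):

CREATE VIEW WeatherObservations_vw
AS
SELECT 
        d.ObservationId,
        d.StationCode,
        d.City,
        d.State,
        d.ObservationDate,
        CONVERT(datetime, JSON_VALUE(Observations.[Value], '$.observationDateTime'), 126) As ObservationTime,
        JSON_VALUE(Observations.[Value], '$.dryBulbFarenheit') As Temperature,
        JSON_VALUE(Observations.[Value], '$.relativeHumidity') As Humidity,
        JSON_VALUE(Observations.[Value], '$.windDirection') As WindDirection,
        JSON_VALUE(Observations.[Value], '$.windSpeed') As WindSpeed
    FROM DailyWeatherDataJson d
    CROSS APPLY OPENJSON(JSON_QUERY(ObservationData, '$.weatherObservations')) Observations;

Now a consumer of the view can write something like SELECT * FROM WeatherObservations_vw WHERE City = 'Chicago' and never know the underlying data is stored as JSON.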

I want to come back to one of the themes that I started the last post off with.  A lot of times today, the choice seems to be presented to us that you are either in a relational world or a NoSQL world.  The two camps strongly disagree with each other, and if you do it the other way, someone thinks you are dead wrong.

What I take from this functionality in SQL Server is that really within one database, I can have a foot in both worlds.  If I want to act in a NoSQL way and just store JSON, sure, I can do that.  But if I need to interact with that data in traditional SQL ways, I can do that too.  So the question isn't either/or.  In fact, I don't even think there is a question.  The answer is I can do both, in the same place at the same time.  So whatever works best for my application, I can do, and I find that very exciting.