JWT Authentication with ASP.NET Core 2 Web API, Angular 5, .NET Core Identity and Facebook Login

This is an updated version of a post I did last May on the topic of JWT auth with Angular 2+ and an ASP.NET Core Web API. That post was based on ASP.NET Core 1.x, so it's a little dated and not as relevant now that everyone is hacking on .NET Core 2.0, which brought changes to both the Identity membership system and the JWT implementation.

So, here's an updated guide on implementing user registration and login functionality using ASP.NET Core 2 Web API and Angular 5. As a bonus, I see lots of folks wondering how to combine social login with token-based Web API authentication in SPA apps (no cookies), so I have implemented Facebook login in this demo to show one potential approach.

Facebook login flow
Email login flow

Development Environment

  • Windows 10
  • SQL Server Express 2017 & SQL Server Management Studio 2017
  • Runs in both Visual Studio 2017 & Visual Studio Code
  • Node 8.9.4 & NPM 5.6.0
  • .NET Core 2.0 sdk
  • Angular CLI -> npm install -g @angular/cli https://github.com/angular/angular-cli

The data model

The user is the centerpiece of our demo and luckily the ASP.NET Core Identity provider offers up the IdentityUser class, which provides a convenient entity to store all of our user-related data. Even better, it can be extended to add custom properties you may require all users to possess in your application. I did exactly this with the AppUser class. This class maps directly to the AspNetUsers table in the database.

// Add profile data for application users by adding properties to this class
public class AppUser : IdentityUser
{
  // Extended Properties
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public long? FacebookId { get; set; }
  public string PictureUrl { get; set; }
}

In applications with a number of different user roles we often need additional entities to store the unique bits of data for each role and also have a reference back to their main identity. To simulate this scenario, I created the Customer class. In this class, we have some custom properties and a reference to AppUser via the Identity property. The IdentityId is the foreign key in the database which Entity Framework Core uses to map the relationship between the two.

public class Customer
{
 public int Id { get; set; }
 public string IdentityId { get; set; }
 public AppUser Identity { get; set; }  // navigation property
 public string Location { get; set; }
 public string Locale { get; set; }
 public string Gender { get; set; }
}

The database context

We need to wire up the database and object graph in our application by creating a new DatabaseContext class which Entity Framework Core uses to interact with the database and our application entities. I created ApplicationDbContext which is quite simple as we only have to add the Customers mapping here. Because it inherits from IdentityDbContext it is already aware of IdentityUser and the other identity-related classes/tables so we don't have to map them explicitly.

public class ApplicationDbContext : IdentityDbContext<AppUser>
{
  public ApplicationDbContext(DbContextOptions options)
        : base(options)
  {
  }

  public DbSet<Customer> Customers { get; set; }
}

The last important step is to register the context in the DI container in Startup so it can be automatically injected into other consuming classes.

public void ConfigureServices(IServiceCollection services)
{
  // Add framework services.
  services.AddDbContext<ApplicationDbContext>(options =>
      options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"),
      b => b.MigrationsAssembly("AngularASPNETCore2WebApiAuth")));
...

Spin up a database

With the model and context created in the application, we can use Entity Framework Core's migrations to generate the database and its schema based on the entities, data types, and relationships we defined in our code.

The Entity Framework tooling is available from the .NET Core CLI so we can create an initial migration file by running this from the command line in the project root:

src>dotnet ef migrations add initial

The files are created in the migrations folder.

To create the database, return to the command line and run:

src>dotnet ef database update

This command will pull the connection string from the project's appsettings.json file, connect to SQL Server and create a new database based on the previously generated migrations. If everything worked well, you should see a shiny new database.

Create new user accounts with email registration

We have two flows for creating users in our app, we'll look at email first. The API for creating new users via standard email registration will be the responsibility of the AccountsController.

There's just a single action method here to accept a POST request with the user registration details. It's fairly straightforward: we're just using Identity's UserManager to create a new user in the database and then using the context to create the related customer entity as described earlier. Note there is also some implicit mapping and validation happening here using AutoMapper and FluentValidation to help keep our code a little tidier. I won't go into detail on those aspects, but I encourage you to explore the source code to see how these libraries can help produce cleaner, DRYer code.

// POST api/accounts
[HttpPost]
public async Task<IActionResult> Post([FromBody]RegistrationViewModel model)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    var userIdentity = _mapper.Map<AppUser>(model);

    var result = await _userManager.CreateAsync(userIdentity, model.Password);

    if (!result.Succeeded) return new BadRequestObjectResult(Errors.AddErrorsToModelState(result, ModelState));

    await _appDbContext.Customers.AddAsync(new Customer { IdentityId = userIdentity.Id, Location = model.Location });
    await _appDbContext.SaveChangesAsync();

    return new OkObjectResult("Account created");
}

Finally, with the database and API created, I was able to run the project and test with Postman to verify new users were created in the database by checking the AspNetUsers and Customers tables.
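
For reference, the Postman request is just a JSON body whose field names line up with the RegistrationViewModel (and with the Angular register() call we'll see later in this post); the values here are made up:

{
  "email": "mark@fullstackmark.com",
  "password": "P@ssw0rd!",
  "firstName": "Mark",
  "lastName": "Smith",
  "location": "Halifax"
}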

JWT Authentication

Implementing basic authentication with JSON web tokens on top of an ASP.NET Core Web API is fairly straightforward. Most of what we need is in middleware provided by the Microsoft.AspNetCore.Authentication.JwtBearer package.

To get started, I added a new class JwtIssuerOptions defining some of the claim properties our generated tokens will contain.
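
The full class isn't shown here; a minimal sketch based on the properties used later in Startup.cs and the JwtFactory (the ValidFor name and its default are assumptions) looks something like this:

public class JwtIssuerOptions
{
  public string Issuer { get; set; }
  public string Audience { get; set; }
  public DateTime NotBefore => DateTime.UtcNow;
  public DateTime IssuedAt => DateTime.UtcNow;
  public TimeSpan ValidFor { get; set; } = TimeSpan.FromMinutes(120);
  public DateTime Expiration => IssuedAt.Add(ValidFor);
  public SigningCredentials SigningCredentials { get; set; }

  // Generates a unique value for the Jti (token id) claim
  public Func<Task<string>> JtiGenerator => () => Task.FromResult(Guid.NewGuid().ToString());
}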

I also added a new configuration section to the appsettings.json file and then used the Configuration API in ConfigureServices() to read these settings and wire up JwtIssuerOptions in the IoC container.
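
The new appsettings.json section only needs the issuer and audience values (the values below are just placeholders for local development):

"JwtIssuerOptions": {
  "Issuer": "webApi",
  "Audience": "http://localhost:5000/"
}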

public void ConfigureServices(IServiceCollection services)
{
...
// Get options from app settings
var jwtAppSettingOptions = Configuration.GetSection(nameof(JwtIssuerOptions));

// Configure JwtIssuerOptions
services.Configure<JwtIssuerOptions>(options =>
{
  options.Issuer = jwtAppSettingOptions[nameof(JwtIssuerOptions.Issuer)];
  options.Audience = jwtAppSettingOptions[nameof(JwtIssuerOptions.Audience)];
  options.SigningCredentials = new SigningCredentials(_signingKey, SecurityAlgorithms.HmacSha256);
});
...
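
Note that _signingKey isn't defined in these excerpts - it's just a symmetric key built from a secret string, roughly like the sketch below (in a real app the secret belongs in user secrets or environment configuration, not in source):

// Assumption: SecretKey comes from configuration or a secret store
private static readonly string SecretKey = "some-long-random-secret-used-to-sign-tokens";

private readonly SymmetricSecurityKey _signingKey =
    new SymmetricSecurityKey(Encoding.ASCII.GetBytes(SecretKey));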

Following that, I added some more middleware code to ConfigureServices() which introduced JWT authentication to the request pipeline, specified the validation parameters to dictate how we want received tokens validated and finally, created an authorization policy to guard our API controllers and actions which we'll apply in a bit.

public void ConfigureServices(IServiceCollection services)
{
...
var tokenValidationParameters = new TokenValidationParameters
{
   ValidateIssuer = true,
   ValidIssuer = jwtAppSettingOptions[nameof(JwtIssuerOptions.Issuer)],

   ValidateAudience = true,
   ValidAudience = jwtAppSettingOptions[nameof(JwtIssuerOptions.Audience)],

   ValidateIssuerSigningKey = true,
   IssuerSigningKey = _signingKey,

   RequireExpirationTime = false,
   ValidateLifetime = true,
   ClockSkew = TimeSpan.Zero
};

services.AddAuthentication(options =>
{
   options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
   options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(configureOptions =>
  {
    configureOptions.ClaimsIssuer = jwtAppSettingOptions[nameof(JwtIssuerOptions.Issuer)];
    configureOptions.TokenValidationParameters = tokenValidationParameters;
    configureOptions.SaveToken = true;
  });

// api user claim policy
services.AddAuthorization(options =>
{
  options.AddPolicy("ApiUser", policy => policy.RequireClaim(Constants.Strings.JwtClaimIdentifiers.Rol, Constants.Strings.JwtClaims.ApiAccess));
});
...
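
One easy thing to miss: registering the JwtBearer services only configures authentication; the middleware still has to be added to the request pipeline in Configure(), ahead of MVC, or protected endpoints will never see the bearer token:

public void Configure(IApplicationBuilder app)
{
  // must run before MVC so the JWT is validated and the user principal is set
  app.UseAuthentication();

  app.UseMvc();
}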

The last piece to add was the JwtFactory, which is just a helper to create the encoded tokens we'd like to exchange between the client and backend. This happens in GenerateEncodedToken(), which simply creates a JwtSecurityToken with a combination of registered claims (from the JWT spec) Sub, Jti, Iat and two specific to our app: Rol and Id. We're also using values injected from the JwtIssuerOptions we set up in the previous step.

public async Task<string> GenerateEncodedToken(string userName, ClaimsIdentity identity)
{
  var claims = new[]
  {
    new Claim(JwtRegisteredClaimNames.Sub, userName),
    new Claim(JwtRegisteredClaimNames.Jti, await _jwtOptions.JtiGenerator()),
    new Claim(JwtRegisteredClaimNames.Iat, ToUnixEpochDate(_jwtOptions.IssuedAt).ToString(), ClaimValueTypes.Integer64),
    identity.FindFirst(Helpers.Constants.Strings.JwtClaimIdentifiers.Rol),
    identity.FindFirst(Helpers.Constants.Strings.JwtClaimIdentifiers.Id)
  };

  // Create the JWT security token and encode it.
  var jwt = new JwtSecurityToken(
      issuer: _jwtOptions.Issuer,
      audience: _jwtOptions.Audience,
      claims: claims,
      notBefore: _jwtOptions.NotBefore,
      expires: _jwtOptions.Expiration,
      signingCredentials: _jwtOptions.SigningCredentials);

  var encodedJwt = new JwtSecurityTokenHandler().WriteToken(jwt);

  return encodedJwt;
}

Authenticating Identity users and issuing access tokens

We've got the JWT infrastructure in place so we're ready to start generating tokens for authenticated users. The AuthController is responsible for authenticating users who registered directly with the Identity membership system using their username and password aka the email flow.

It contains a single action to receive the POSTed credentials and validate them by calling GetClaimsIdentity() which is just a helper within the same controller that uses the UserManager to check the passed credentials against the database to determine if we have a valid user in the Identity system. If so, a token is generated and returned in the response.

// POST api/auth/login
[HttpPost("login")]
public async Task<IActionResult> Post([FromBody]CredentialsViewModel credentials)
{
    if (!ModelState.IsValid)
    {
         return BadRequest(ModelState);
    }

    var identity = await GetClaimsIdentity(credentials.UserName, credentials.Password);
    if (identity == null)
    {
        return BadRequest(Errors.AddErrorToModelState("login_failure", "Invalid username or password.", ModelState));
    }

    var jwt = await Tokens.GenerateJwt(identity, _jwtFactory, credentials.UserName, _jwtOptions, new JsonSerializerSettings { Formatting = Formatting.Indented });
    return new OkObjectResult(jwt);
}

private async Task<ClaimsIdentity> GetClaimsIdentity(string userName, string password)
{
    if (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(password))
        return await Task.FromResult<ClaimsIdentity>(null);

    // get the user to verify
    var userToVerify = await _userManager.FindByNameAsync(userName);

    if (userToVerify == null) return await Task.FromResult<ClaimsIdentity>(null);

    // check the credentials
    if (await _userManager.CheckPasswordAsync(userToVerify, password))
    {
        return await Task.FromResult(_jwtFactory.GenerateClaimsIdentity(userName, userToVerify.Id));
    }

    // Credentials are invalid, or account doesn't exist
    return await Task.FromResult<ClaimsIdentity>(null);
}

I tested this action using Postman to make sure I got the expected response when sending valid and invalid credentials to the authentication endpoint. Using the mark@fullstackmark.com account we created earlier, I set up a POST request to /api/auth/login and voila - authentication passes and I get a fresh JWT in the response.
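
The response body is a small JSON payload; auth_token is the property the Angular client reads and stores later (the id and expires_in fields are assumptions about the demo's Tokens.GenerateJwt helper):

{
  "id": "e12d5f8a-...",
  "auth_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expires_in": 7200
}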

Protecting Web API Controllers with claims-based authorization

One important function of claims in token authentication is that we can use them to tell the application what the user is allowed to access. Earlier, in the JwtFactory, we saw a custom claim called Rol added to the token which is just a string representing a role named ApiAccess.

With this role stashed in our token, we can use a claims-based authorization check to give the role access to certain controllers and actions so that only users possessing the role claim may access those resources.

We already enabled claims based authorization as part of the JWT setup we did earlier. The specific code to do that was this bit in ConfigureServices() in Startup.cs where we build and register a policy called ApiUser which checks for the presence of the Rol claim with a value of ApiAccess.

...
// api user claim policy
services.AddAuthorization(options =>
{
   options.AddPolicy("ApiUser", policy => policy.RequireClaim(Constants.Strings.JwtClaimIdentifiers.Rol, Constants.Strings.JwtClaims.ApiAccess));
});
...

We can then apply the policy using the familiar Authorize attribute on any controllers or actions we wish to guard. An example of this is found in the DashboardController which is decorated with [Authorize(Policy = "ApiUser")] meaning that only users with the ApiAccess role claim as part of the ApiUser policy can access this controller.

[Authorize(Policy = "ApiUser")]
[Route("api/[controller]/[action]")]
public class DashboardController : Controller
...
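
The Home action we're about to call isn't shown above; a rough sketch of what it might do (the claim type string, the injected context and the returned fields are assumptions) is to pull the current user's id claim from the validated token and return some of their profile data:

// GET api/dashboard/home
[HttpGet]
public async Task<IActionResult> Home()
{
  // the "id" claim was stamped into the token by the JwtFactory
  var userId = User.Claims.Single(c => c.Type == "id").Value;

  var customer = await _appDbContext.Customers
      .Include(c => c.Identity)
      .SingleAsync(c => c.Identity.Id == userId);

  return new OkObjectResult(new
  {
    Message = "This is secure API and user data!",
    customer.Identity.FirstName,
    customer.Identity.LastName,
    customer.Location
  });
}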

To test the controller authorization I used Postman once again to create a GET request to the /api/dashboard/home endpoint. I also included a request header containing the JWT we obtained in the previous login test. The header key is Authorization with a value formatted as Bearer xxx, where xxx is the JWT. Issuing this request, the Web API responds with a 200 OK status and some secure user data in the body.

I modified the request by changing some characters in the JWT to send an invalid token. This time, the token validation failed and the server responded accordingly with a 401 Unauthorized response when I tried to hit the protected endpoint.

The Angular app

At this point, we've completed the majority of the backend. The remaining bits are for Facebook login which we'll look at shortly. First, we'll build out the frontend in Angular to see how JWT authentication works in a real application.

As you can see from the gifs above, there's not much to this app. It has just 4 functions:

  • Register a new account with an email address and password
  • Log in with email and password
  • Log in with Facebook
  • View a protected dashboard page once logged in

Organizing functionality with modules

Angular modules provide a super-effective way to group related components, directives, and services, in a way that they can be combined with other modules to assemble an application. For this app, I grouped the functions above into two modules by using the Angular CLI to create them within the src\app folder.

src\app>ng g module account
src\app>ng g module dashboard

After running these commands, we can see new folders created for each module. We'll add code to them shortly but we have a few more components to add first.

The registration form component

Next up, we'll add a new form component where users will create their account. Head back to the command line to use the CLI once again within the src\app\account module folder.

src\app\account>ng g component registration-form

A new registration-form folder is generated containing associated .ts, .scss and .html files.

Create the additional components

I repeated the steps above to scaffold out some of the other major components we need:

  • A login-form
  • A home component which is the default view for the app
  • A spinner component to entertain users while the UI is busy

Talking to the backend Web API with UserService

UserService contains the register() and login() methods which use Angular's Http client to invoke the Web API endpoints we built and tested earlier.

register(email: string, password: string, firstName: string, lastName: string, location: string): Observable<UserRegistration>
{
   let body = JSON.stringify({ email, password, firstName, lastName, location });
   let headers = new Headers({ 'Content-Type': 'application/json' });
   let options = new RequestOptions({ headers: headers });

   return this.http.post(this.baseUrl + "/accounts", body, options)
   .map(res => true)
   .catch(this.handleError);
}

login(userName, password) {
    let headers = new Headers();
    headers.append('Content-Type', 'application/json');

    return this.http
      .post(
      this.baseUrl + '/auth/login',
      JSON.stringify({ userName, password }),{ headers }
      )
      .map(res => res.json())
      .map(res => {
        localStorage.setItem('auth_token', res.auth_token);
        this.loggedIn = true;
        this._authNavStatusSource.next(true);
        return true;
      })
      .catch(this.handleError);
}

Note that in the login() method we're storing the authorization token issued by the server in the user's local storage via the localStorage.setItem('auth_token', res.auth_token) call. We'll see shortly how to use the token to make authenticated requests to the backend API.

Finishing up the registration form

With the component and service ready, we have all the pieces to complete the user registration feature. The last few steps involved adding the form markup to registration-form.component.html and binding the submit button on the form to a method in the registration-form.component.ts class.

registerUser({ value, valid }: { value: UserRegistration, valid: boolean })
{
  this.submitted = true;
  this.isRequesting = true;
  this.errors = '';
  if (valid)
  {
    this.userService.register(value.email, value.password, value.firstName, value.lastName, value.location)
      .finally(() => this.isRequesting = false)
      .subscribe(result => {
        if (result) {
          this.router.navigate(['/login'], { queryParams: { brandNew: true, email: value.email } });
        }
      },
      errors => this.errors = errors);
  }
}

This method is pretty simple: it just calls userService.register(), passes along the user data, and handles the observable response accordingly. If the server-side validation returns an error, it is displayed to the user. If the request succeeds, the user is routed to the login view. The isRequesting flag triggers the spinner so the UI can indicate that the app is busy while the request is in flight.

Finishing up the login form

The login and registration forms are nearly identical. I added the required markup to login-form.component.html and wired up an event handler in the login-form.component.ts class.

login({ value, valid }: { value: Credentials, valid: boolean }) {
    this.submitted = true;
    this.isRequesting = true;
    this.errors='';
    if (valid) {
      this.userService.login(value.email, value.password)
        .finally(() => this.isRequesting = false)
        .subscribe(
        result => {         
          if (result) {
             this.router.navigate(['/dashboard/home']);             
          }
        },
        error => this.errors = error);
    }
}

Here we just call userService.login() to make a request to the server with the given user credentials and handle the response accordingly: we either display any errors returned by the server or route the user to the Dashboard component if they've successfully authenticated. The check of valid relates to the form validation provided by the NgForm directive in the form markup. I won't cover this in detail, but check out the code to get a better understanding of binding and validation with NgForm in Angular.

Protecting routes

At this point in our application, users can navigate anywhere. We're going to fix this by restricting access to certain areas to logged-in users only. The Angular router provides a feature specifically for this purpose: navigation guards.

A guard is simply a function added to your route configuration that returns either true or false.

true means navigation can proceed. false means navigation halts and the route is not accessed.

Guards are registered using providers so they can be injected into your component routing modules where needed.

In this app, I created auth.guard.ts to protect access to the dashboard, which acts as an administrative feature only logged-in users can see.

// auth.guard.ts
import { Injectable } from '@angular/core';
import { Router, CanActivate } from '@angular/router';
import { UserService } from './shared/services/user.service';

@Injectable()
export class AuthGuard implements CanActivate {
  constructor(private user: UserService, private router: Router) {}

  canActivate() {
    if (!this.user.isLoggedIn()) {
      this.router.navigate(['/account/login']);
      return false;
    }

    return true;
  }
}

The AuthGuard is simply an @Injectable() class that implements CanActivate. It has a single method that checks the logged in status of the user by calling the isLoggedIn() method on the UserService.

isLoggedIn() is a little naive as it just checks for the presence of the JWT in the browser's localStorage. If it exists, we assume the user is logged in by returning true. If it is not found, the user is redirected back to the login page.

...
this.loggedIn = !!localStorage.getItem('auth_token')
...

To implement the guard in the dashboard's routing module, I simply imported it and added a canActivate guard property to the root dashboard route.

import { ModuleWithProviders } from '@angular/core';
import { RouterModule }        from '@angular/router';

import { RootComponent }    from './root/root.component';
import { HomeComponent }    from './home/home.component'; 
import { SettingsComponent }    from './settings/settings.component'; 

import { AuthGuard } from '../auth.guard';

export const routing: ModuleWithProviders = RouterModule.forChild([
  {
      path: 'dashboard',
      component: RootComponent, canActivate: [AuthGuard],

      children: [      
       { path: '', component: HomeComponent },
       { path: 'home',  component: HomeComponent },
       { path: 'settings',  component: SettingsComponent },
      ]       
    }  
]);

We now have a protected dashboard feature!

Making authenticated Web API requests

Now that we have some level of authorization in place on the frontend, the last piece is to start passing our JWT back to the server for Web API calls that require authentication. This is where the ApiUser authorization policy we created earlier and applied to the DashboardController comes into play.

To achieve this, I created a new dashboard service with a single method that retrieves some data for the Home page by making an authenticated HTTP call to the backend and passing the authorization token along in the request header.

export class DashboardService extends BaseService {

baseUrl: string = ''; 

constructor(private http: Http, private configService: ConfigService) {
   super();
   this.baseUrl = configService.getApiURI();
}

getHomeDetails(): Observable<HomeDetails> {
    let headers = new Headers();
    headers.append('Content-Type', 'application/json');
    let authToken = localStorage.getItem('auth_token');
    headers.append('Authorization', `Bearer ${authToken}`);
  
    return this.http.get(this.baseUrl + "/dashboard/home",{headers})
      .map(response => response.json())
      .catch(this.handleError);
  }  
}

getHomeDetails() simply retrieves the auth_token from localStorage and stashes it in the Authorization header for the request.

With authenticated requests in place, I ran the project again and was able to complete an end-to-end test by creating a new user, logging in, and navigating to a protected route in the dashboard which displayed some highly secure data!

Adding Facebook OAuth authentication

The guide up to now has been based on standard credentials-based user registration and authentication directly with the ASP.NET Core Identity system. Now, we'll see how to incorporate a Facebook login flow into our app so users can signup/login directly with their Facebook credentials and gain access to the secure regions of the application.

The approach I'm showing here is quite simple. Basically, instead of relying on the ASP.NET Core Identity provider to authenticate the user's credentials as we do in the email flow, we integrate with Facebook's OAuth API, and if login succeeds there we issue the user a JWT on our end, which effectively logs them into the application.

Creating a Facebook application

Before we start coding, we need a Facebook application to integrate with. For this demo, I created Fullstack Cafe, which should work fine for you if you're just running the project, but if you wish to use your own app you will need to create and configure it on Facebook's developer portal.

It's fairly quick and painless:

  1. Once you've registered, create a new app

  2. Complete the Create New App Id prompt

  3. Add a product - choose Facebook Login. For the platform choose Web

  4. This step is important: here you must add the URL that Facebook will call back to after the OAuth process completes

  5. The two key values you will need to replace in the demo project are the App Id and App Secret. These live in the appsettings.json file under FacebookAuthSettings, as shown below
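
The key names below match what ExternalAuthController reads later via _fbAuthSettings; the values are placeholders you'd swap for your own app's credentials:

"FacebookAuthSettings": {
  "AppId": "your-facebook-app-id",
  "AppSecret": "your-facebook-app-secret"
}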

Extending the Angular app with Facebook login

Note: there are lots of options for integrating your app with Facebook. For this demo, I'm not using any SDKs or Angular packages. The approach here is probably a bit crude as-is and can certainly be improved upon; it is based on this guide.

To add Facebook's browser-based login flow to the existing UI I created a new facebook-login component. The UI template is pretty simple, it's just a button that will invoke the login dialog to begin the process.

export class FacebookLoginComponent {

 private authWindow: Window;
 failed: boolean;
 error: string;
 errorDescription: string;
 isRequesting: boolean; 

 launchFbLogin() {
    // launch facebook login dialog
    this.authWindow = window.open('https://www.facebook.com/v2.11/dialog/oauth?&response_type=token&display=popup&client_id=1528751870549294&display=popup&redirect_uri=http://localhost:5000/facebook-auth.html&scope=email',null,'width=600,height=400');    
}
...

This will open a new dialog window with Facebook's login page. From there, the user carries out the login process by entering their Facebook creds and connecting with the application. When complete, Facebook will redirect back to our application where we must carry on processing the response to determine if login succeeded and take the appropriate action. The redirect in the demo is handled by facebook-auth.html.

There's not much happening here, just an empty page with some JavaScript to parse the response parameters out of the redirect URL and then use the native window messaging API to send them back to the component via window.opener.postMessage(). The main thing we're interested in is the access_token, which is received on a successful login and required by our backend Web API to carry out further validation.


 // if we don't receive an access token then login failed and/or the user has not connected properly
 var accessToken = getParameterByName("access_token");
 var message = {};
 if (accessToken) {
     message.status = true;
     message.accessToken = accessToken;
 }
 else
 {
     message.status = false;
     message.error = getParameterByName("error");
     message.errorDescription = getParameterByName("error_description");
 }
 window.opener.postMessage(JSON.stringify(message), "http://localhost:5000");

Within the component's handleMessage() method we interrogate the received data to determine authentication status. If a failure is received, we show some UI to indicate there was a problem; otherwise, if the authentication succeeded, we make a call to userService.facebookLogin().

handleMessage(event: Event) {
 const message = event as MessageEvent;
 // Only trust messages from the below origin.
 if (message.origin !== "http://localhost:5000") return;

 this.authWindow.close();

    const result = JSON.parse(message.data);
    if (!result.status)
    {
      this.failed = true;
      this.error = result.error;
      this.errorDescription = result.errorDescription;
    }
    else
    {
      this.failed = false;
      this.isRequesting = true;

      this.userService.facebookLogin(result.accessToken)
        .finally(() => this.isRequesting = false)
        .subscribe(
        result => {
          if (result) {
            this.router.navigate(['/dashboard/home']);
          }
        },
        error => {
          this.failed = true;
          this.error = error;
        });      
    }
}

facebookLogin() passes along the accessToken to a Web API endpoint at /externalauth/facebook and expects an auth_token back. This endpoint is the final piece we need to complete the Facebook login integration.

facebookLogin(accessToken:string) {
  let headers = new Headers();
  headers.append('Content-Type', 'application/json');
  let body = JSON.stringify({ accessToken });  
  return this.http
  .post(
    this.baseUrl + '/externalauth/facebook', body, { headers })
    .map(res => res.json())
    .map(res => {
      localStorage.setItem('auth_token', res.auth_token);
      this.loggedIn = true;
      this._authNavStatusSource.next(true);
      return true;
     })
    .catch(this.handleError);
}

Generating JWT tokens for authenticated Facebook users

To complete the login process I created a new Web API controller named ExternalAuthController with a single action to handle Facebook logins.

// POST api/externalauth/facebook
[HttpPost]
public async Task<IActionResult> Facebook([FromBody]FacebookAuthViewModel model)
{
    // 1. generate an app access token
    var appAccessTokenResponse = await Client.GetStringAsync($"https://graph.facebook.com/oauth/access_token?client_id={_fbAuthSettings.AppId}&client_secret={_fbAuthSettings.AppSecret}&grant_type=client_credentials");
    var appAccessToken = JsonConvert.DeserializeObject<FacebookAppAccessToken>(appAccessTokenResponse);

    // 2. validate the user access token
    var userAccessTokenValidationResponse = await Client.GetStringAsync($"https://graph.facebook.com/debug_token?input_token={model.AccessToken}&access_token={appAccessToken.AccessToken}");
    var userAccessTokenValidation = JsonConvert.DeserializeObject<FacebookUserAccessTokenValidation>(userAccessTokenValidationResponse);

    if (!userAccessTokenValidation.Data.IsValid)
    {
        return BadRequest(Errors.AddErrorToModelState("login_failure", "Invalid facebook token.", ModelState));
    }

    // 3. we've got a valid token so we can request user data from fb
    var userInfoResponse = await Client.GetStringAsync($"https://graph.facebook.com/v2.8/me?fields=id,email,first_name,last_name,name,gender,locale,birthday,picture&access_token={model.AccessToken}");
    var userInfo = JsonConvert.DeserializeObject<FacebookUserData>(userInfoResponse);

    // 4. ready to create the local user account (if necessary) and jwt
    var user = await _userManager.FindByEmailAsync(userInfo.Email);

    if (user == null)
    {
       var appUser = new AppUser
       {
         FirstName = userInfo.FirstName,
         LastName = userInfo.LastName,
         FacebookId = userInfo.Id,
         Email = userInfo.Email,
         UserName = userInfo.Email,
         PictureUrl = userInfo.Picture.Data.Url
       };

       var result = await _userManager.CreateAsync(appUser, Convert.ToBase64String(Guid.NewGuid().ToByteArray()).Substring(0, 8));

       if (!result.Succeeded) return new BadRequestObjectResult(Errors.AddErrorsToModelState(result, ModelState));

       await _appDbContext.Customers.AddAsync(new Customer { IdentityId = appUser.Id, Location = "", Locale = userInfo.Locale, Gender = userInfo.Gender });
       await _appDbContext.SaveChangesAsync();
    }

    // generate the jwt for the local user...
    var localUser = await _userManager.FindByNameAsync(userInfo.Email);

    if (localUser == null)
    {
       return BadRequest(Errors.AddErrorToModelState("login_failure", "Failed to create local user account.", ModelState));
    }

    var jwt = await Tokens.GenerateJwt(_jwtFactory.GenerateClaimsIdentity(localUser.UserName, localUser.Id), _jwtFactory, localUser.UserName, _jwtOptions, new JsonSerializerSettings {Formatting = Formatting.Indented});
  
    return new OkObjectResult(jwt);
}

There's a bit of code here that basically does this:

  1. Calls the Facebook API to generate an app access token we need to make the next request.
  2. Makes another call to Facebook to validate the user access token we received on the initial login.
  3. If the token is valid, uses it to request information about the user from the Facebook graph API: email, name, picture etc.
  4. Uses the UserManager to check if we have this user in our local database; if not, we add them and also add an associated customer entity, in the same manner we did during the email registration flow.
  5. Generates a JWT and returns it in the response back to the client.

With a token returned to the Angular app, the loop is complete. Running the project, we're now able to log in with Facebook, receive a JWT and then hit our protected dashboard which displays some of our user data - sweet!

That's a wrap

Phew...if you're still reading - you rock! If you found this guide helpful or have any questions or feedback I'd love to hear it in the comments below. Finally, securing real applications for real people in production scenarios demands careful consideration, design, and testing. This post is intended as a guide to illustrate how these technologies can potentially be combined as part of a security solution, not as a prescription for one.

Source code here

Get Started Building Microservices with ASP.NET Core and Docker in Visual Studio Code

Containers and microservices are two huge, emerging trends in software development today.

For the uninitiated, containers are a super cool way to package up your application, its dependencies, and configuration in a portable, easily distributable image file. This image can then be downloaded and run in an execution environment called a container on any number of other computers acting as a container host. Microservices represent an architectural style in which the system can be broken up into individual services, each one with a single, narrowly focused capability that is exposed with an API to the rest of the system as well as external consumers like web and mobile apps.

Looking at the characteristics of both concepts, we can start to see why they might work well together to help us develop systems that are easier to deploy, scale, maintain and provide an increased level of stability compared to a traditional monolithic approach.

Two key elements of .NET Core's design are its modularity and lightweight nature. These properties make it ideal for building containerized microservice applications. In this post, we'll see how to combine ASP.NET Core and Docker using a cross-platform approach to build, debug and deploy a microservices-based proof-of-concept using Visual Studio Code, .NET Core CLI and Docker CLI.

Please Note - Both of these topics, particularly microservices, are vast and deep, so there are many very important aspects I'll be skimming over or simply not mentioning here. The goal of this post is to get from zero to off-the-ground with ASP.NET Core-based microservices and Docker. Some of the most critical and challenging exercises in microservice architecture are properly identifying and defining domain and data models, bounded contexts and their relationships; this post does not dive deeply into design and modeling theory. Likewise, for containers, there are many other important areas we will not be exploring in this guide, like the principles of container design and orchestration.


The demo web app we'll build in this post is powered by 3 ASP.NET Core microservices, RabbitMQ, Redis and SQL Server on Linux, all running in Docker containers.

Dev Environment

  • Windows 10 and PowerShell
  • Visual Studio Code - v1.19.0
    • C# for Visual Studio Code extension
    • Docker extension
  • SQL Server Management Studio 17.4
  • .NET Core SDK v2.0.0
  • Docker Community Edition 17.09.1-ce-win42 using Linux containers

Solution setup

Starting with an empty directory, you can create a new solution using the .NET Core CLI.

> dotnet new sln --name dotnetgigs

In the same directory, I created a new directory called services to house the microservices we'll be using: Applicants.Api, Identity.Api and Jobs.Api.

Within each microservice directory, I created a new Web API project. Note that you can omit the name parameter and the new project will inherit the name of the parent directory.

> dotnet new webapi

Next, I added each project to the previously created solution file:

> dotnet sln add services/applicants.api/applicants.api.csproj
> dotnet sln add services/jobs.api/jobs.api.csproj
> dotnet sln add services/identity.api/identity.api.csproj

Welcome Docker

At this point, we'll step away from the code for a bit to introduce Docker into our solution and workflow.

One thing we should understand: since we are using Visual Studio Code and a CLI development approach we need to know many of the steps involved in working with Docker in much greater detail than if we were using Visual Studio. Visual Studio 2017 has excellent support for Docker built-in so it offers much greater productivity and saves you from mucking with Dockerfiles and the CLI directly. Visual Studio Code, on the other hand, is not nearly as refined at the moment and requires a much more hands-on approach. For our purposes, there is still a lot of value in the CLI approach we'll be using as it forces a greater understanding of the tooling and process involved in Docker development with .NET Core. These steps are also largely cross-platform and should work on a Mac or Linux environment with very little adjustment.

Creating Debuggable Containers

One area in particular where the current developer experience with Docker is a bit lacking in Visual Studio Code compared to Visual Studio is debugging. We want to be able to debug our services while they run in Docker. I wasn't quite sure at first how to go about this but a little googling surfaced this thread (thanks galvesribeiro) and the following Dockerfile:

FROM microsoft/aspnetcore:latest
RUN mkdir app

#Install debugger
RUN apt-get update
RUN apt-get install curl -y unzip
RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg

EXPOSE 80/tcp

#Keep the debugger container on
ENTRYPOINT ["tail", "-f", "/dev/null"]

Here's what's going on:

FROM microsoft/aspnetcore:latest

This command simply tells Docker to use Microsoft's official aspnetcore runtime image as its base. This means our microservice images are automatically provisioned with the .NET Core runtime and ASP.NET Core libs required to run an ASP.NET Core application. If we wanted to build our app in the container we would need to base it off the aspnetcore-build image, as it is equipped with the full .NET Core SDK required to build and publish your application. So, when choosing a base image it's important to be aware that they are optimized for different use cases and as such we should look for one that suits our intended usage to avoid unnecessary bloat in our custom image. More info on the official ASP.NET Core images can be found on Microsoft's ASP.NET Core Docker hub repository page. Note also that we are using Linux-based images as per our Docker setup.

#Install debugger
RUN apt-get update
RUN apt-get install curl -y unzip
RUN curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg

This section installs the VSCode debugger in the container so we can remotely debug the application running inside the container from Visual Studio Code - more on this shortly.

#Keep the debugger container on
ENTRYPOINT ["tail", "-f", "/dev/null"]

The ENTRYPOINT command gives you a way to identify which executable should be run when a container is started. Normally, if we were simply running an ASP.NET Core application directly it would look something like ENTRYPOINT ["dotnet", "myaspnetapp.dll"] using the CLI to launch our app. Because this container is used for debugging we want the ability to start/stop the debugger and our application without having to stop and re-start the entire container each time we launch the debugger. To accomplish this we use tail -f /dev/null as the ENTRYPOINT which allows the debugger to start and stop in the background but doesn't stop the container because tail keeps running infinitely in the foreground.

I added the same Dockerfile to the project root of each microservice.

Using Docker-Compose to Organize Multi-Container Solutions

It is relatively easy to work with a single Dockerfile using the CLI commands build and run to create an image and spin up new containers. However, as your solution grows to include multiple containers, working with a collection of Dockerfiles in this fashion will become painful and error-prone. To make life easier, we can leverage Docker Compose to encapsulate these commands along with the configuration data for each container to define a set of related services which can be deployed together as a multi-container Docker application.

With Dockerfiles added to each microservice project, I created a new docker-compose.yml file in the solution root. We'll flesh it out in a bit, but starting out you can see it's just a simple YAML-based file with a section defining each of our application's services and some instructions to tell Docker how we'd like it to build and configure our containers.

version: '3'

services:

  applicants.api:
    image: applicants.api       
    build:
      context: ./services/applicants.api
      dockerfile: Dockerfile.debug
    ports: 
    - "8081:80"
    volumes: 
      - ./services/applicants.api/bin/pub/:/app
    container_name: applicants.api

  identity.api:
    image: identity.api
    build:
      context: ./services/identity.api
      dockerfile: Dockerfile.debug
    ports: 
    - "8084:80"
    volumes: 
      - ./services/identity.api/bin/pub/:/app
    container_name: identity.api     

  jobs.api:
    image: jobs.api
    build:
      context: ./services/jobs.api
      dockerfile: Dockerfile.debug
    ports: 
    - "8083:80"
    volumes: 
      - ./services/jobs.api/bin/pub/:/app
    container_name: jobs.api   

Here's what these options do:

  • image - The image name to start the container from. When specified with the build directive, Docker-Compose will use this as the name when creating the image.
  • build - Contains the path to the dockerfile to use as well as the context docker should use to build the container from.
  • ports - Exposes ports to the host machine and shares them among the different services started by docker-compose. Essentially, it provides the network plumbing so we can talk to services running in containers by mapping to ports on the host.
  • volumes - Allows us to mount paths on our host inside the container. In our case, this is especially useful to support debugging. You can see we point the published output from our app's bin directory /services/jobs.api/bin/pub to a volume labeled /app that the container can access. This allows us to rebuild our application and launch the debugger from Visual Studio Code without touching the container - rad!
  • container_name - Allows us to specify a custom name for the container, rather than a generated default.

Adding a Database

With the core microservices defined for our solution, we can start thinking about data for our application. Microsoft recently launched SQL Server on Linux and associated Docker images, so this is a great opportunity to try it out. SQL Server on Linux - what a time to be alive. I created a Database folder in the solution root along with a new Dockerfile to pull from Microsoft's official image. You'll also notice extra bits in the Dockerfile to run the SqlCmdStartup.sh script. This script provisions the databases required by our microservices. Finally, I extended the docker-compose.yml file to include the new service. Notice that I am mapping local port 5433 to SQL Server's TCP port so I can use SQL Server Management Studio on my desktop to talk to the database running inside the container - rad! If you have a local instance of SQL Server running you'll want to do the same.

...
sql.data:
   image: mssql-linux
   build:
     context: ./Database
     dockerfile: Dockerfile
   ports:
      - "5433:1433"
   container_name: mssql-linux

For production use, it's typically not advisable to put your database in a container; however, there are exceptions to every rule, so if you're thinking about production you'll need to carefully test and evaluate any containerized database.

Adding a Data Access Layer to the Microservice

We talk to the containerized SQL Server from our application the exact same way we would if it were installed normally. However, Docker Compose does provide a method of identifying it as a dependency by adding a depends_on key to each service in the docker-compose.yml file that uses it. This is a handy way to manage startup order and dependencies between services when using Compose.

...
jobs.api:
    ...
    depends_on:
      - sql.data

This tells docker to create and start the sql.data container before jobs.api.

As this is a very small and simple demo, the data access code is pretty straightforward and makes use of the Dapper ORM to interact with the database. Check out the ApplicantRepository.cs or JobRepository.cs classes to see how it is implemented.
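
If you haven't used Dapper before, each repository method boils down to opening a connection and running a parameterized query that Dapper maps onto a POCO. A simplified sketch (not the repo's exact code - the table and column names are assumptions):

public async Task<Job> Get(int id)
{
  using (var connection = new SqlConnection(_connectionString))
  {
    await connection.OpenAsync();

    // Dapper maps the selected columns onto the Job POCO by name
    return await connection.QueryFirstOrDefaultAsync<Job>(
        "SELECT Id, Title, Description FROM Jobs WHERE Id = @Id",
        new { Id = id });
  }
}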

Caching with Redis

Caching is an essential part of any distributed system. We'll add a Redis instance to our architecture that the Identity.Api microservice can use as a backing store for user/session information. This requires only 2 new lines in docker-compose.yml and boom - we have a Redis instance. For this service, we don't require any additional Dockerfile or configuration.

...
user.data:
    image: redis  

The Redis cache is mainly used by IdentityRepository. You can see where the client connection is established and registered with the DI container in Startup.cs, which in turn is injected into the repository. Finally, in Startup you will notice the Redis host address is resolved for the connection using the configuration provider in the line:

configuration.EndPoints.Add(Configuration["RedisHost"]);
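
Put together, the registration looks roughly like this sketch using StackExchange.Redis (the singleton lifetime and AbortOnConnectFail setting are assumptions, not necessarily what the repo does):

services.AddSingleton<ConnectionMultiplexer>(sp =>
{
  var configuration = new ConfigurationOptions { AbortOnConnectFail = false };

  // RedisHost is injected as an environment variable by docker-compose.yml
  configuration.EndPoints.Add(Configuration["RedisHost"]);

  return ConnectionMultiplexer.Connect(configuration);
});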

What makes this special is that the value is set in the docker-compose.yml by adding an environment key to the Identity.Api service definition and passed in when the container is run. This is also used for passing in the connection string to services using the database.

...
identity.api:
    image: identity.api
    environment:
      - RedisHost=user.data:6379
...

Event-Based Communication Between Microservices using RabbitMQ and MassTransit

An important rule for microservices architecture is that each microservice must own its data. In a traditional, monolithic application we often have one centralized database where we can retrieve and modify entities across the whole application often in the same process. In microservices, we don't have this kind of freedom. Microservices are independent and run in their own process. So, if a change to an entity or some other notable event occurs in one microservice and must be communicated to other interested services we can use a message bus to publish and consume messages between microservices. This keeps our microservices completely decoupled from one another and any other external systems they may integrate with.

To add messaging to the solution I first added a RabbitMQ message broker container by extending docker-compose.yml:

...
rabbitmq:
  image: rabbitmq:3-management
  ports:
    - "15672:15672"
  container_name: rabbitmq
...

Next, to publish and consume messages within the microservices I opted to use MassTransit, which is a lightweight message bus framework that works with RabbitMQ and Azure Service Bus. I could've very well used a raw RabbitMQ client, but MassTransit provides a nice, friendly abstraction over it. One shortcut I took is not creating a single component to represent the event bus, so in each microservice's Startup.cs you can see an instance of the bus is created and registered with the container.

builder.Register(c =>
{
   return Bus.Factory.CreateUsingRabbitMq(sbc =>
     {
       sbc.Host("rabbitmq", "/", h =>
       {
          h.Username("guest");
          h.Password("guest");
       });

       sbc.ExchangeType = ExchangeType.Fanout;
     });
})
.As<IBusControl>()
.As<IBus>()
.As<IPublishEndpoint>()
.SingleInstance();

After that, publishing messages to Rabbit is a breeze. Simply inject the instance of the bus, in our case into a controller and you're ready to publish events. An example of this is in the JobsController in Jobs.Api.

[HttpPost("/api/jobs/applicants")]
public async Task<IActionResult> Post([FromBody]JobApplicant model)
{
  // fetch the job data
  var job = await _jobRepository.Get(model.JobId);
  var id = await _jobRepository.AddApplicant(model);
  // dispatch 'ApplicantApplied' event message
  await _bus.Publish<ApplicantAppliedEvent>(new { model.JobId, model.ApplicantId, job.Title });
  return Ok(id);
}
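
The ApplicantAppliedEvent being published is just a message contract. With MassTransit that's typically a simple interface - the anonymous object passed to Publish gets mapped onto its properties by name (the property types here are assumptions):

public interface ApplicantAppliedEvent
{
  int JobId { get; }
  int ApplicantId { get; }
  string Title { get; }
}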

Consuming events is fairly straightforward too. MassTransit provides a nice mechanism for defining message consumers via its IConsumer interface which is where our message handling code goes.

An example of this can be seen in the Identity.Api where a message consumer for the ApplicantApplied event is defined in ApplicantAppliedEventConsumer. These consumer classes can then be registered with the Autofac container and invoked automatically by MassTransit with just a little extra configuration on the bus instance we register in the container.

// register a specific consumer
builder.RegisterType<ApplicantAppliedEventConsumer>();

builder.Register(context =>
{
  var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
  {
     var host = cfg.Host(new Uri("rabbitmq://rabbitmq/"), h =>
     {
       h.Username("guest");
       h.Password("guest");
     });

   // https://stackoverflow.com/questions/39573721/disable-round-robin-pattern-and-use-fanout-on-masstransit
   cfg.ReceiveEndpoint(host, "dotnetgigs" + Guid.NewGuid().ToString(), e =>
   {
      e.LoadFrom(context);                             
   });
});

return busControl;

})
.SingleInstance()
.As<IBusControl>()
.As<IBus>();

If you look in the ApplicantAppliedEventConsumer class you'll see it's not doing much. It just increments a value in the redis cache but it clearly illustrates how asynchronous, event-driven communication between microservices can work.

public async Task Consume(ConsumeContext<ApplicantAppliedEvent> context)
{
   // increment the user's application count in the cache
   await _identityRepository.UpdateUserApplicationCountAsync(context.Message.ApplicantId.ToString());
}

Consuming Microservices with an ASP.NET Core MVC Web App

To consume our microservices and complete the DotNetGigs demo app, I added a new ASP.NET Core MVC application to the solution. The most common method for client web and mobile applications to talk to microservices is over HTTP - commonly via an API gateway. We're not using a gateway in this demo, but I did create a simple HTTP client so the MVC app can talk directly to the different microservices. The extent of the app's functionality is captured in the animated gif above - it can retrieve a list of jobs from Jobs.Api, the user can apply for a job, and messages are dispatched and handled by the other microservices - that's it!

Building and Debugging the Solution

From the project's root folder (where docker-compose.yml resides) use the Docker CLI to build and start the containers for the solution: PS> docker-compose up -d. This step will take a few minutes or more as all the base images must be downloaded. When it completes you can check that all 7 containers for the solution have been built and started successfully by running PS> docker ps.

Additionally, you can connect to the SQL Server on Linux instance in the container using SQL Server Management Studio to ensure the databases dotnetgigs.applicants and dotnetgigs.jobs were created. The server name is: localhost,5433 with username sa and password Pass@word.

At this point, you can run and debug the solution from Visual Studio Code. Simply open the root folder in VSCode and start up each of the projects in the debugger. Unfortunately, they need to be started individually (if you know a way around this, please let me know :). The order they're started in does not matter.

Update: Thanks to Burhan below for pointing out that it is very easy to launch all the projects simultaneously using compound launch configurations. The sample code has been updated with his suggestion so you can launch all the projects in the solution in one shot by selecting the All Projects config in the VSCode debugger. Thanks Burhan!
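
For reference, a compound configuration in .vscode/launch.json is just a named group of the existing per-project launch configurations (the individual configuration names below are assumptions):

"compounds": [
  {
    "name": "All Projects",
    "configurations": [ "Applicants.Api", "Identity.Api", "Jobs.Api", "Web" ]
  }
]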

With all services running in the debugger you can hit the web app in your browser at http://localhost:8080 and set breakpoints in any of the projects to debug directly.

Wrapping Up

If you're still here, you rock! I hope this guide is true to its title and can help you get off the ground and running with ASP.NET Core microservices and Docker. I'd like to finish by recapping a few key benefits these technologies and architectural style can provide developers and organizations.

  • Docker containers ensure consistency across multiple development and release cycles. This helps teams realize cost and time savings by preventing annoying deployment issues, improving DevOps and production stability.
  • .NET Core's modularity and lightweight nature make it ideal for microservice development.
  • Teams can move faster with microservices because they can be developed, deployed and tested independently of each other, unlike traditional monolithic applications.
  • Docker provides a better return on investment than traditional deployment models because it can run the same application with dramatically fewer infrastructure resources.
  • Microservices can produce more scalable and resilient systems. Because they typically only have a single, independent responsibility they are easier to scale and replicate than monolithic services.

Thanks for reading and if you have any questions or feedback I'd love to hear it in the comments below!

source code here

Better Software Design with Clean Architecture

Have you ever produced code that:

  • was bug-laden
  • was painful to debug or enhance with new features
  • was hard/impossible to test without things like a database or web server
  • had presentation logic mixed with business logic or business logic mixed in with data access logic (sql)
  • was hard for other developers to understand because it did not clearly express its intent or purpose within the application it was written for

I know I have. Over time I learned about the various Gang of Four patterns and made a conscious effort to keep the SOLID principles running on a background thread in my mind as I wrote code. While these ideas certainly helped mitigate the problems listed above, they didn't eliminate them. When writing web or desktop software using MVC or MVVM, I still found some of the same old symptoms showing up in my projects: things like business logic leaking into controllers, entity models being used all over the place for different purposes, and large regions of code that had no unit test coverage because they had some sort of direct dependency on a database or HTTP client.

The answer

One day, a colleague sent around this link introducing The Clean Architecture by Uncle Bob. It resonated with me instantly as it presented a solution to the same problems I was seeing. The best part: there's nothing mystical or complicated about Clean Architecture - it is a relatively simple and practical architecture template that can be applied to many application domains if you choose to follow just a few of its basic rules.

How Clean Architecture works

The key rule behind Clean Architecture is: The Dependency Rule. The gist of this is simply that dependencies are encapsulated in each "ring" of the architecture model and these dependencies can only point inward.

Clean Architecture keeps details like web frameworks and databases in the outer layers, while important business rules and policies are housed in the inner circles and have no knowledge of anything outside of themselves. Considering this, you can start to see how it achieves a very clean separation of concerns. Because the business rules and core domain logic in the inner circles are completely devoid of external dependencies and 3rd party libraries, they must be expressed as pure C# POCO classes, which makes testing them much easier.

In fact your business rules simply don’t know anything at all about the outside world.

Robert C. Martin

There are a few other important concepts that I'm going to highlight along the way with an example below but if you're interested in just the theory please go check out Uncle Bob's original post introducing Clean Architecture.

Implementing the "Course Registration" use case

Let's see how this works using a real-world use case. For the folks doing agile scrum, I realize a use case is not the most fashionable way to describe a requirement. But for this post, it's perfect because I'd like to show how all the details of the use case can be modeled within clean architecture. A user story would simply be too vague.

I've typed out the entire use case here for reference so you don't need to digest the whole thing right now. We'll cover its aspects below in detail as we walk through implementing it using clean architecture.

Title: Register for courses
Description: Student accesses the system and views the courses currently available for him to register for. Then he selects the courses and registers for them.
Primary Actor: Student
Preconditions:
  • Student is logged into the system
  • Student has not previously enrolled or registered
  • Student cannot register within 5 days of the course start date
Postconditions: Student is registered for courses
Main Success Scenario:
  1. Student selects "Register New Courses" from the menu.
  2. System displays list of courses available for registering.
  3. Student selects one or more courses he wants to register for.
  4. Student clicks "Submit" button.
  5. System registers student for the selected courses and displays a confirmation message.
Extensions:
  • (2a) No courses are available for this student.
    1. System displays an error message saying no courses are available, along with the reason and how to rectify it if possible.
    2. Student either backs out of this use case, or tries again after rectifying the cause.
  • (5a) Some courses could not be registered.
    1. System displays a message showing which courses were registered and which were not, along with a reason for each failure.
  • (5b) None of the courses could be registered.
    1. System displays a message saying none of the courses could be registered, along with a reason for each failure.

This is a simple use case allowing a student to register for one or more courses and then returning either a success or error result to notify them of the outcome. We'll use clean architecture to write this use case in a fashion that meets the goals and avoids the problems I mentioned in the intro.

Creating the Entities

Entities are the heart of clean architecture and contain any enterprise-wide business rules and logic. Now, you might not be working in the context of an enterprise and that's perfectly fine. If you're writing a standalone application Uncle Bob suggests simply referring to these as Business Objects. The key is that they contain rules that are not application specific - so basically any global or shareable logic that could be reused in other applications should be encapsulated in an entity.

Inspecting our use case there are 2 entities we need: Student and Course.

Using a TDD approach I wrote a couple of tests and just enough code in the Student entity class to get them passing.

The RegisterForCourse() method implements 2 rules from our use case.

public class Student : EntityBase
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public IList<Course> RegisteredCourses { get; }
   public IList<Course> EnrolledCourses { get; set; }

   public Student()
   {
      RegisteredCourses = new List<Course>();
      EnrolledCourses = new List<Course>();
   }

   public bool RegisterForCourse(Course course)
   {
      // student has not previously enrolled
      if (EnrolledCourses.Any(ec => ec.Code == course.Code)) return false;

      // registration cannot occur within 5 days of the course start date
      if (DateTime.UtcNow > course.StartDate.AddDays(-5)) return false;

      RegisteredCourses.Add(course);
      return true;
   }
}

[Fact]
public void CannotRegisterForCourseWithin5DaysOfStartDate()
{
  // arrange
  var student = new Student();
  var course = new Course { Code = "BIOL-1507EL", Name = "Biology II", StartDate = DateTime.UtcNow.AddDays(+3) };

  // act
  var result = student.RegisterForCourse(course);

  // assert
  Assert.False(result);
}

[Fact]
public void CannotRegisterForCourseAlreadyEnrolledIn()
{
  // arrange
  var student = new Student
  {
    EnrolledCourses = new List<Course>
    {
      new Course { Code = "BIOL-1507EL", Name = "Biology II" },
      new Course { Code = "MATH-4067EL", Name = "Mathematical Theory of Dynamical Systems, Chaos and Fractals" }
    }
  };

  // act
  var result = student.RegisterForCourse(new Course { Code = "BIOL-1507EL" });

  // assert
  Assert.False(result);
}
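The Course entity, and the EntityBase class the entities inherit from, aren't listed in the post. Based on the properties used above, a minimal version might look something like this (the real classes may well have more to them):

// Sketch - common base class for entities (an Id property is assumed here)
public abstract class EntityBase
{
  public int Id { get; set; }
}

// Sketch of the Course entity, limited to the properties used in the Student tests
public class Course : EntityBase
{
  public string Code { get; set; }
  public string Name { get; set; }
  public DateTime StartDate { get; set; }
}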

Use Cases

Moving up from the entities we have the Use Case layer. The classes that live here have a few unique features and responsibilities:

  • Contain the application specific business rules
  • Encapsulate and implement all of the use cases of the system. A good rule to start with is a class per use case
  • Orchestrate the flow of data to and from the entities, and can rely on their business rules to achieve the goals of the use case
  • Have NO dependencies on, and are totally isolated from, things like a database, UI or special frameworks
  • Will almost certainly require refactoring if details of the use case requirements change

Use case classes are typically suffixed with the word Interactor. Uncle Bob mentions in this talk that he considered calling them controllers but assumed this would be too easily confused with MVC so Interactor it is!

Our use case is modelled in RequestCourseRegistrationInteractor.cs.

There are a few important aspects of the use case class I'd like to highlight.

First off, it implements the IRequestHandler interface. This interface is an example of the mediator pattern: each implementation handles a specific request object and returns a specific response object in a loosely coupled fashion.

public class RequestCourseRegistrationInteractor : IRequestHandler<CourseRegistrationRequestMessage, CourseRegistrationResponseMessage>
...

There is a single TResponse Handle(TRequest message) method defined on the interface which essentially performs all the work of our use case. Pretty simple huh? Handle() takes a request object as its lone parameter which will typically contain any data passed in from the outer layer (the UI) and returns a response message with both types dictated by the IRequestHandler interface. All of our application specific logic for the use case will go into this method.
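The interface itself isn't shown in the post; a hand-rolled version consistent with that description might look something like this (whether it's a custom interface or one supplied by a library such as MediatR isn't specified, so treat this as a sketch):

// Sketch - marker interface tying a request type to its response type
public interface IRequest<TResponse>
{
}

// Sketch - a handler works with exactly one request/response pair
public interface IRequestHandler<TRequest, TResponse> where TRequest : IRequest<TResponse>
{
  TResponse Handle(TRequest message);
}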

One key aspect of the request/response messages that flow in and out of use case interactors and across boundaries is that they are simple data structures, meaning they contain no special types (i.e. entities, or types provided by 3rd party libraries) - they are pure C# objects.

public class CourseRegistrationRequestMessage : IRequest<CourseRegistrationResponseMessage>
{
  public int StudentId { get; private set; }
  public List<string> SelectedCourseCodes { get; private set; }

  public CourseRegistrationRequestMessage(int studentId, List<string> selectedCourseCodes)
  {
    StudentId = studentId;
    SelectedCourseCodes = selectedCourseCodes;
  }
}

The CourseRegistrationRequestMessage object consists of only a StudentId and a list of course codes selected by the user.

Here's the full implementation of RequestCourseRegistrationInteractor.cs

public class RequestCourseRegistrationInteractor : IRequestHandler<CourseRegistrationRequestMessage, CourseRegistrationResponseMessage>
{
  private readonly IStudentRepository _studentRepository;
  private readonly ICourseRepository _courseRepository;
  private readonly IAuthService _authService;
  public RequestCourseRegistrationInteractor(IAuthService authService, IStudentRepository studentRepository, ICourseRepository courseRepository)
  {
    _authService = authService;
    _studentRepository = studentRepository;
    _courseRepository = courseRepository;
  }

  public CourseRegistrationResponseMessage Handle(CourseRegistrationRequestMessage message)
  {
    // student must be logged into the system
    if (!_authService.IsAuthenticated())
    {
      return new CourseRegistrationResponseMessage(false, null, "Operation failed, not authenticated.");
    }

    // get the student
    var student = _studentRepository.GetById(message.StudentId);

    // save off any failures
    var errors = new List<string>();

    foreach (var c in message.SelectedCourseCodes)
    {
      var course = _courseRepository.GetByCode(c);

      if (!student.RegisterForCourse(course))
      {
        errors.Add($"unable to register for {course.Code}");
      }
    }

    _studentRepository.Save(student);
    return new CourseRegistrationResponseMessage(!errors.Any(), errors);
  }
}
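The CourseRegistrationResponseMessage class isn't listed in the post. Judging by how the interactor constructs it above and how the presenter consumes it below, a minimal version could look roughly like this (a sketch inferred from usage rather than the actual implementation):

// Sketch inferred from usage - a plain POCO carrying the outcome back across the boundary
public class CourseRegistrationResponseMessage
{
  public bool Success { get; private set; }
  public List<string> Errors { get; private set; }
  public string Message { get; private set; }

  public CourseRegistrationResponseMessage(bool success, List<string> errors, string message = null)
  {
    Success = success;
    Errors = errors ?? new List<string>();
    Message = message;
  }
}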

Note the use of _authService, _studentRepository and _courseRepository. These services are typically referred to as Gateways within clean architecture and get injected into the Use Case layer as per the dependency rule. These are the things that deal with the database, rest services or other external agencies and their implementation belongs in the Interface Adapters layer. Interactors only know what behavior these gateways offer by way of their interface definition. They have no idea how they do their work because those details are encapsulated in an outer layer which the Use Cases know nothing about.
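The gateway interfaces themselves aren't shown, but from the calls the interactor makes they might be declared along these lines (again, inferred from usage; the real definitions may expose more members):

// Sketches inferred from how the interactor uses its gateways
public interface IAuthService
{
  bool IsAuthenticated();
}

public interface IStudentRepository
{
  Student GetById(int id);
  void Save(Student student);
}

public interface ICourseRepository
{
  Course GetByCode(string code);
}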

Interface Adapters

The purpose of the interface adapter layer is to act as a connector between the business logic in our interactors and our framework-specific code. For example, in an ASP.NET MVC app, this is where the models, views, and controllers live. Gateways like services and repositories are also implemented here.

It is this layer, for example, that will wholly contain the MVC architecture of a GUI. The Presenters, Views, and Controllers all belong in here.

Also in this layer is any other adapter necessary to convert data from some external form, such as an external service, to the internal form used by the use cases and entities.

Robert C. Martin

In this example I'm using a basic console app to consume my use case so this serves as my interface adapter layer. It contains the concrete implementations of the required Gateways and has Presentation logic to format the response from the Use Case into something friendly for the UI.

In the Main() method we can see the usage of calling the use case and presenting the results.

//*************************************************************************************************
// Here we're connecting our app framework layer with our Use Case Interactors
// This would typically go in a Controller Action in an MVC context or ViewModel in MVVM etc.
//*************************************************************************************************
// 1. instantiate Course Registration Use Case injecting Gateways implemented in this layer
var courseRegistrationRequestUseCase = new RequestCourseRegistrationInteractor(authService, studentRepository, courseRepository);

// 2. create the request message, passing the target student id and a list of selected course codes
var useCaseRequestMessage = new CourseRegistrationRequestMessage(1, new List<string> { userInput.ToUpper() });

// 3. call the use case and store the response
var responseMessage = courseRegistrationRequestUseCase.Handle(useCaseRequestMessage);

// 4. use a Presenter to convert the use case response to a user friendly ViewModel
var courseRegistrationResponsePresenter = new CourseRegistrationResponsePresenter();
var vm = courseRegistrationResponsePresenter.Handle(responseMessage);

Console.Clear();

// render results

if (vm.Success)
{
  Console.BackgroundColor = ConsoleColor.DarkGreen;
  Console.ForegroundColor = ConsoleColor.White;
}
else
{
  Console.BackgroundColor = ConsoleColor.Red;
  Console.ForegroundColor = ConsoleColor.White;
}
Console.WriteLine();
Console.WriteLine(vm.ResultMessage);
Console.WriteLine();

Presentation

We'd like to show something friendly to the user when we get a response back from the interactor. To accomplish this, I created CourseRegistrationResponsePresenter which has the single responsibility of converting a CourseRegistrationResponseMessage into a CourseRegistrationResponseViewModel. I'll mention again that the response message and viewmodel are POCO objects containing no special types or data structures, just everyday collection and value types.

public class CourseRegistrationResponsePresenter
{
  public CourseRegistrationResponseViewModel Handle(CourseRegistrationResponseMessage responseMessage)
  {
    if (responseMessage.Success)
    {
      return new CourseRegistrationResponseViewModel(true, "Course registration successful!");
    }

    var sb = new StringBuilder();
    sb.AppendLine("Failed to register course(s)");
    foreach (var e in responseMessage.Errors)
    {
       sb.AppendLine(e);
    }

    return new CourseRegistrationResponseViewModel(false, sb.ToString());
  }
}
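The CourseRegistrationResponseViewModel isn't listed either; given how the presenter builds it and how Main() reads vm.Success and vm.ResultMessage, it's presumably a small POCO along these lines (a sketch, not necessarily the exact class):

// Sketch - simple viewmodel consumed by the console UI
public class CourseRegistrationResponseViewModel
{
  public bool Success { get; private set; }
  public string ResultMessage { get; private set; }

  public CourseRegistrationResponseViewModel(bool success, string resultMessage)
  {
    Success = success;
    ResultMessage = resultMessage;
  }
}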

Frameworks and Drivers

This layer contains tools like databases or frameworks. By default, we don’t write very much code in this layer, but it’s important to clearly state the place and priority that those tools have in the architecture.

Summary

Clean Architecture provides a simple and effective framework for separating the different aspects of our system, producing a highly decoupled, testable architecture.

Let's recap some key benefits:

  • Use Cases are encapsulated in one place, which makes them very visible and easier to understand. Business rules aren't scattered all over the codebase, where they would make debugging and modifying the code painful.

  • The Dependency Rule and use of abstracted Gateways mean the core business logic in our Interactors and Entities is easily testable and not hampered by external things like databases and RESTful web services. The lack of 3rd party, feature-laden frameworks in our business logic also means the code there is only focused on the important rules and policies of our application.

  • Flexible and portable - because the Use Cases are completely decoupled from any UI or infrastructure it's easy to do things like switch the database or web framework or even port to an entirely new platform. Our example runs in a console app but it could just as easily work on the web, desktop or a phone.

Like most design decisions there are tradeoffs to be made when considering Clean Architecture. For the benefits I highlighted there are also a few disadvantages:

  • Your team needs time to ramp up before it can effectively apply Clean Architecture. There's nothing radically complex here, but there is certainly a learning curve and time required to adapt to any new design or architectural style.

  • Applying Clean Architecture adds some bloat in the form of many separate classes for all the Presenters, Use Case Request/Response DTOs, Use Case Interactors, Entities, Gateways etc., plus all the test cases :). Not a huge deal, but a valid knock on the impact this approach has on the size of your project.

I hope this guide has provided some insight into how Clean Architecture can improve your software design and prevent many of the common pitfalls that hinder projects. Like any pattern, it takes a little familiarity with the concepts and principles before you can apply them effectively. A good exercise to start with might be to think of some use cases near and dear to you right now - can you map them out mentally using Clean Architecture? Do you have a sense of the Entities, what the Use Case Interactor might look like, and what data needs to flow back and forth in the request and response messages? Running your use cases through these questions can help you get started in modeling them with Clean Architecture.

Thanks for reading!

source code