Foreword

"REST-API.ts" is an adaptation of my previous book API on Rails in which I built an API with the Ruby on Rails framework. This book is one of my nice Open-Source experience since the book is very active and has has several contributors (which I thank again here). I also had much positive feedback. So I decided to retry the experience with different technologies than Ruby on Rails, which I appreciate a lot.

So I hope this book will stay alive and evolve, following the best practices of the moment. I also want it to be an entry point for beginners, as other resources were for me when I started developing.

This book is therefore naturally open-source and freely available on GitHub. You will find the PDF and EPUB versions, as well as the sources in Asciidoctor format. A paid version is available on Leanpub if you want to contribute to the project. Of course, this is not the only way to do it: you can also propose improvements by forking the project, talk about this book around you, or thank me by email at contact@rousseau-alexandre.fr.

About the original author

My name is Alexandre Rousseau, and I am a passionate developer. I like to share my experience through my blog and some books like API on Rails or even this one.

I am currently a partner at iSignif, where I build and maintain a SaaS product using Ruby on Rails. I also contribute to the Ruby community by producing and maintaining some gems, which you can see on my RubyGems profile. Most of my projects are on GitHub, so don’t hesitate to follow me there.

Copyrights and license

"REST-API.ts" from Alexandre Rousseau is made available under the terms of the license Creative Commons Attribution - Share Alike 4.0 International.

Thank you

Many thanks to all API on Rails contributors and to everyone who supported the project by buying a paid version of this book.

Introduction

Welcome to REST-API.ts, a tutorial on steroids that teaches you the best way to build your next application with Typescript. The purpose of this book is to provide you with a complete methodology to develop a RESTful API following best practices.

When you are finished with this book, you will be able to create your own API and integrate it with any client, such as a web browser or a mobile application. The generated code is built with Typescript 4, which is the current version at the time of writing.

This book is intended not only to teach you how to build an API but to teach you how to build a scalable and maintainable API with Typescript. On this journey, you will learn how to:

  • Use Git for version control

  • Build JSON responses

  • Use test-driven development

  • Test your entry points with functional tests

  • Set up authentication with JSON Web Tokens (JWT)

  • Use JSON:API specifications

  • Optimize your application

I strongly recommend that you follow all the steps in this book. Try not to skip any chapters, as I will give you tips and tricks to improve yourself throughout the book. You can think of yourself as the main character in a video game that gets a higher level with each chapter.

In this first chapter, I will explain how to configure your environment (if you haven’t already done so). We will then create an application called market_place. I will make sure to teach you the best practices I have learned from my experience. This means that after initializing the project, we will start using Git.

In the following chapters, we will build the application using a test-driven development (TDD) workflow. I will also explain the benefits of using an API for your next project and of choosing a suitable response format such as JSON or XML. Later on, we will get our hands on the code and complete the application’s basics by building all the necessary endpoints. We will also secure access to the API by building authentication through HTTP header exchange. Finally, in the last chapter, we will add some optimization techniques to improve the server’s structure and response times.

The final application will be a marketplace application that will allow sellers to set up their own online store. Users will be able to place orders, download products, and more. There are many options to create an online store, such as Shopify, Spree, or Magento.

Throughout this journey (the pace depends on your expertise), you will improve and better understand how modern frameworks work. I have also drawn on some practices from my own experience as a developer.

Conventions in this book

The conventions in this book are based on those in Ruby on Rails Tutorial. In this section, I will mention a few that you may not be familiar with.

I will use many command-line examples. I’m not going to deal with the Windows cmd (sorry, guys); all the examples use a Unix-style command-line prompt. Here is an example:

An example of a UNIX command
$ echo "A command-line command"
A command-line command

I will try to shorten the code snippets as much as possible, keeping only the interesting lines depending on the context. I’ll hide irrelevant lines with the comment symbols // ... or /* ... */. At the beginning of each piece of code, I will also specify the path and the name of the file concerned. Here is an example.

An example of a Typescript block code
// user.ts
// ...
class User {
  constructor(/* ... */) {
      doInterestingStuff(message);
  }
  // ...
  doInterestingStuff(message: string) {
    console.log(message)
  }
}

Also, I will use some terms like:

  • "Avoid" means you’re not supposed to.

  • "Prefer" indicates that the first is the most appropriate of the two options.

  • "Use" means that you can use the resource.

If you encounter any error while executing a command, I recommend using your search engine to find your solution. Unfortunately, I cannot cover all possible errors. If you encounter any problems with this tutorial, you can always send me an email.

Development environments

For almost all developers, one of the most painful parts is setting up a comfortable development environment. If you do it right, the next steps should be a breeze. I will guide you through this step to make it easier and more motivating.

Text Editors and Terminal

There are two categories of code editors: text editors, which are lightweight and fast, and development environments (IDEs), which are more complete and offer more features but are often much heavier.

There is no bad choice, and it’s really a matter of taste.

I use Visual Studio Code from Microsoft, which is at the crossroads between a text editor and a development environment. Its auto-completion is really very powerful when using Typescript. If you don’t know what to use, you can’t go wrong by using this editor.

Web browser

As for the browser, I recommend Firefox, but other developers use Chrome or even Safari. Any of them will help you build the application you want. They all offer a good DOM inspector, a network analyzer, and many other features you may already be familiar with.

However, I advise you to use at least two web browsers. There are some differences in the interpretation of Javascript or CSS. By using two browsers, you make sure that your developments work correctly for most of your users.

Personally, I use Firefox in everyday life, and I check the proper functioning of my features on Chromium, a derivative of Google Chrome.

Package Manager

  • Mac OS: There are many options to manage how you install packages on your Mac, such as Mac Ports or Homebrew. Both are good options, but I would choose the latter. I’ve had fewer problems installing software with Homebrew. To install brew, just run the command below:

Install Homebrew
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  • Linux: You’re already ready! It doesn’t matter if you use apt, pacman, yum as long as you feel comfortable and know how to install packages.

Git

We will use Git a lot, and you should use it too (not only for this tutorial but for all your projects). It’s straightforward to install it:

  • under Mac OS: $ brew install git.

  • under Linux: $ sudo apt-get install git.

Node.js

There are many ways to install and manage Node.js. You may even already have a version installed on your system. To find out, just type:

Get node.js version
$ node -v

If you haven’t installed it, you can do it with your package manager. However, I recommend that you use Node Version Manager (NVM). The principle of this tool is to let you install several versions of Node.js on the same machine, isolated from any version installed by your operating system, and to switch from one to the other easily.

To install it, follow the official documentation. You have to launch the following script:

Install NVM
$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.0/install.sh | bash

The URL of the script may vary depending on the current version.

Once the installation is complete, you can install the latest version of Node.js with the following command:

Install node.js using NVM
$ nvm install node
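
NVM also makes it easy to switch between Node.js versions later on. As a quick illustration (the version numbers are only examples):

Switch Node.js versions with NVM
$ nvm install 14   # install a specific major version
$ nvm use 14       # switch the current shell to this version
$ nvm ls           # list the versions installed through NVM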

Database

I strongly recommend that you install PostgreSQL to manage your databases. But here, for simplicity, we will use SQLite. If you are using Mac OS you don’t have any additional libraries to install. If you are using Linux, don’t worry, I’ll guide you:

Install SQlite dependencies for Debian based Linux distribution
$ sudo apt-get install libxslt-dev libxml2-dev libsqlite3-dev

Initializing the project

In my opinion, this is one of the most interesting parts because you will discover a way of doing things that is certainly different from yours.

There are plenty of complete frameworks like Nest.js, which is really great. But here, we’re going to start from scratch using a few popular libraries so that we fully master our application.

This method will also allow you to adapt and build the architecture that suits you best. Keep in mind that the architecture I’m going to present to you is the one I like. It is totally personal, and I don’t pretend that it is the best. Always keep a critical mind.

Are you ready? Here we go!

Go to the folder of your choice and create a new folder:

Create folder for new project
$ mkdir node_market_place
$ cd node_market_place

Version control

Remember that Git helps you track and maintain your code history. Version all your projects. Even if it’s a small project.

Initializing Git in your project is as simple as the following command:

Initialize Git
$ git init

However, you need to configure the committer’s information. If it is not already done, go to the directory and run the following commands:

Set basic Git configuration
$ git config user.name "John Doe"
$ git config user.email "john@doe.io"

And there you go. Let’s move on.

NPM Initialization

NPM is the official package manager of Node.js. Since version 0.6.3 of Node.js, NPM is part of the environment and is automatically installed by default.

Initializing your project with Node.js means that you will be able to install any library published on npmjs.com.

So let’s initialize NPM in our project:

Initialize NPM
$ npm init

Several questions will be asked, and in the end, you will see a new package.json file. This file details the information about your project and its dependencies.

Setting up Typescript

Now that we have initialized the Node.js project, we are ready to set up Typescript.

Typescript will bring us strong typing and will perform checks before transpiling Typescript code to Javascript:

Note
We speak of compilation when a program is converted into an executable, and of transpilation when a program is converted from one language to another.

Therefore, we install Typescript as a development dependency because it will only be used to transpile our code. Node.js will execute the resulting Javascript later:

Install Typescript for project
$ npm add typescript @types/node --save-dev

We have added two libraries:

  • typescript, which gives us the tools for transpilation.

  • @types/node, which adds the type definitions for Node.js.

So let’s add our first Typescript file:

Our first typescript code
// src/main.ts
function say(message: string): void {
    console.log(`I said: ${message}`);
}
say("Hello");

This code is really basic and will just be used to check that the transpilation works.
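
To convince yourself that the typing is actually checked, you can (hypothetically) try calling say with a number; the transpiler will refuse it:

A type error caught by the Typescript transpiler (hypothetical)
// src/main.ts
say(42);
// tsc reports something like:
// Argument of type 'number' is not assignable to parameter of type 'string'.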

To use Typescript transpilation, we need to define a configuration file tsconfig.json. Here is a basic one:

Basic Typescript configuration
{
  "compilerOptions": {
    "rootDir": "./",
    "outDir": "dist",
    "module": "commonjs",
    "types": ["node"],
    "target": "es6",
    "esModuleInterop": true,
    "lib": ["es6"],
    "moduleResolution": "node",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}

That is a lot of configuration, but the two directives to remember here are rootDir and outDir. They specify where the Typescript files are located (rootDir) and where the Javascript files resulting from the transpilation go (outDir).

In our case, I put all the Typescript files in the src folder and the result of the transpilation in dist.
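
To make the mapping concrete, the project layout will look roughly like this once the transpilation has run (node_modules omitted):

Project layout after transpilation
.
|-- package.json
|-- tsconfig.json
|-- src/
|   `-- main.ts
`-- dist/
    `-- main.js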

From here, you can test that everything works by executing the following command:

$ ./node_modules/.bin/tsc

You will see a dist/main.js file of the following form.

Transpilation of our first Typescript code
// dist/main.js
function say(message) {
  console.log(`I said: ${message}`);
}
say("Hello");

This is the transpiled version of our Typescript file.

Now that we’ve seen that everything works, we can automate this a bit by adding the commands directly into the package.json file:

Create NPM start script
{
  // ...
  "scripts": {
    "start": "tsc && node dist/main.js"
  },
  // ...
}

So now you can execute the script with the following command:

$ npm run start

Now that everything is working, it’s time to version our changes. Don’t add all the created files; some folders should not be versioned:

  • the node_modules folder, because it contains the libraries retrieved with NPM and will change whenever these libraries are updated.

  • the dist folder, because it is the result of transpiling our code.

To ignore them, create a .gitignore file with the following content:

node_modules
dist

We can now add all our files with Git and commit:

$ git add .
$ git commit -m "Setup Typescript for backend"

Setting up Hot Reload with Nodemon

It’s nice to have a Hot Reload feature during the development phase. This means that our program will be transpiled again and restarted every time our code changes.

The Nodemon library provides this feature. Let’s add it:

$ npm add nodemon --save-dev

Now you have to define a nodemon.json file:

{
  "watch": ["src"],
  "ext": "ts",
  "ignore": ["src/**/*.spec.ts"],
  "exec": "npm run start".
}

A few explanations are necessary:

  • watch specifies the directories in which Nodemon will watch for file changes.

  • ignore prevents Hot Reload for certain files (here, the test files, which we will see later).

  • exec is the command to execute on each change.

Let’s check that everything works by running Nodemon by hand:

Start nodemon
$ ./node_modules/.bin/nodemon
[nodemon] 2.0.6
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): src/**/*
[nodemon] watching extensions: ts
[nodemon] starting `npm run start`
I said: Hello
[nodemon] clean exit - waiting for changes before restart

Our code has been transpiled and executed, and we can see that Nodemon is still running, waiting for a change. So let’s change our main.ts file.
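
For example, we can simply change the message prefix in the say function; Nodemon picks the change up immediately:

Modify main.ts to trigger Nodemon
// src/main.ts
function say(message: string): void {
  console.log(`Nodemon said: ${message}`);
}
say("Hello");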

[nodemon] restarting due to changes...
[nodemon] starting `npm run start`
Nodemon said: Hello
[nodemon] clean exit - waiting for changes before restart

Now that everything works, we can modify the package.json file and add the command nodemon:

Create start:watch NPM script
{
  // ...
  "scripts": {
    "start": "tsc && node dist/main.js",
    "start:watch": "nodemon"
  },
  // ...
}

We can now commit the changes:

$ git add .
$ git commit -m "Setup Nodemon"

Setting up the web server

So far, we have set up an environment that will allow us to avoid syntax and typing errors automatically with Typescript. It’s time to make a real feature: the web server.

There are several libraries for building a web server with Node.js. In my case, I recommend Express.js, simply because it’s the one with the biggest community and it offers all the basic features. It also gives you the freedom to organize your code the way you want and offers a ton of plugins to add features on top of it.

Adding it is very easy:

$ npm add express --save

We will also add the Typescript typings that will help your code editor a little bit:

$ npm add @types/express --save-dev

And now we can instantiate our server in the file main.ts.

Create Express HTTP server
// src/main.ts
import express, {Request, Response} from 'express';

const app = express();
const port = 3000;

app.get("/", (req: Request, res: Response) => res.send("Hello World!"));
app.listen(port, () => console.log(`listen on http://localhost:${port}/`));

You can start the server with Nodemon (if it is not already running) with npm run start:watch, and you will get the following result:

[nodemon] restarting due to changes...
[nodemon] starting `npm run start`
listen on http://localhost:3000/

So you can open your browser at http://localhost:3000 and see that everything works. Here is the result using curl:

$ curl http://localhost:3000
Hello World!

Now that everything is working, let’s commit the changes:

$ git commit -am "Add express.js server"

Conclusion

It has been quite a long chapter. If you have arrived here, allow me to congratulate you. Things will get better from this point on. Let’s start getting our hands on the code!

The API

In this chapter, I will give you an overview of the application. You should have read the previous chapter. If you haven’t, I recommend you do so.

To summarize, we simply generated our Node.js application, set up Typescript, and performed our first commit.

Planning the application

Our application will be quite simple. It will consist of five models. Don’t worry if you don’t understand what’s going on. We will review and develop each of these resources as we move forward with the tutorial.

Diagram of Market Place API models
+---------+     +---------+
| User    +---->+Product  |
+---------+     +---------+
     |               |
     v               v
+---------+     +---------+
|Order    +---->+Placement|
+---------+     +---------+

In short, we have the User, who will be able to create Products. They will also be able to place an Order, which links multiple Products through Placements.

To keep the tutorial focused, we are not going to build an interface for interacting with the API. If you want to build views, there are many options, such as JavaScript frameworks (Angular, Vue.js, React) or mobile libraries (AFNetworking).

At this stage, you should ask yourself this question:

Okay, but I need to explore and visualize the API I’m going to build, right?

That’s right. If you google something related to exploring an API, you’ll find many results. For example, you could use Postman, which has become a must-have. But we’re not going to use it. In our case, we will use cURL, a command-line tool available almost everywhere. This gives us reproducible commands whatever your development environment.

Set up the API

Wikipedia defines an API (Application Programming Interface) as a standardized set of components that serves as a front end through which one piece of software provides services to other software. In other words, it is a way for systems to interact with each other via an interface (in our case, a web service built with JSON). There are other communication protocols, such as SOAP, but we are not talking about them here.

JSON has become a must as a data format for the Internet because of its readability, scalability, and ease of implementation. Many JavaScript frameworks, such as Angular or EmberJS, use it as a default format. Large Objective-C libraries such as AFNetworking (https://github.com/AFNetworking/AFNetworking) or RESTKit also use it. There are probably good solutions for Android, but I can’t recommend anything due to my lack of experience on this development platform.

So we will use the JSON format to build our API. The first idea that might come to mind would be to start creating routes haphazardly. The problem is that they wouldn’t be standardized: a user wouldn’t be able to guess which resource an endpoint returns.

That’s why a standard exists: REST (Representational State Transfer). REST imposes a standard for routes that create, read, update, or delete information on a server using simple HTTP calls. It is an alternative to more complex mechanisms such as SOAP, CORBA, and RPC. A REST call is simply an HTTP request to the server.

And with REST, you can call a URL with a specific HTTP request. In this case, with a GET request:

http://domain.com/resources_name/uri_pattern

RESTful APIs must follow at least three rules:

  • A base URI, such as http://example.com/resources/.

  • A media type to represent the data, commonly JSON, negotiated through the exchange of headers.

  • Standard HTTP methods such as GET, POST, PUT, and DELETE:

  • GET: reads the resource(s) defined by the URI pattern.

  • POST: creates a new entry in the resource collection.

  • PUT: updates a collection or a member of the resources.

  • DELETE: destroys a collection or a member of the resources.

This may sound complicated, but it will become much easier to understand as we go through the tutorial.
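
To make this concrete, here is a preview of how the user endpoints we will build later in this book map onto these conventions (nothing to implement yet):

REST routes for the users resource (preview)
GET    /users       -> list all users
POST   /users       -> create a new user
GET    /users/:id   -> show a single user
PUT    /users/:id   -> update a user
DELETE /users/:id   -> delete a user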

Initialization of the application

Now that we know what conventions we’re going to follow, we can start building our application’s architecture. Therefore, we will continue to set up Typescript and certain libraries, which will help us respect best practices in terms of software development.

Dependency injection

In this section, we will set up the dependency injection system. If you have never heard of it, this is probably the most abstract part of this chapter.

Here I will try to summarize what dependency injection is and what it is used for. Let’s imagine a User class that needs a Database class to be saved. We would try to initialize the database connection in the user’s constructor:

An example of bad design
class Logger {
  log(message: string): void {
    const time = new Date().toISOString();
    console.log(`${time} -- ${message}`);
  }
}

class Database {
  constructor(connectionString: string) {
    // do some stuff here
  }
}

class User {
  private database: Database;

  constructor(public email: string, databaseString: string) {
    this.database = new Database(databaseString);
  }
}

const user = new User('john@doe.io', './user.sqlite')

This causes several problems:

  1. The User class depends on the Database class. If you change the implementation of the Database class, you will have to change the User class.

  2. The code is much less testable because, to test a user, I need to know how the User class works internally and provide it with a real database connection.

To accentuate the problem, let’s add a Logger class that allows you to log events in the app. Let’s say we need to log the database connection. The code becomes:

class Logger {
  log(message: string): void {
    const time = new Date().toISOString();
    console.log(`${time} -- ${message}`);
  }
}

class Database {
  constructor(connectionString: string) {
    const logger = new Logger();
    logger.log(`Connected to ${connectionString}`);
  }
}

class User {
  private database: Database;

  constructor(public email: string, databaseString: string) {
    this.database = new Database(databaseString);
  }
}

const user = new User('john@doe.io', './user.sqlite')

We can see that the situation is getting worse because all classes are becoming dependent on each other. To correct this, we are going to inject the Database class directly into the User constructor:

The Database class is now injected in the constructor.
class Logger {/* ... */}

class Database {
  constructor(logger: Logger, connectionString: string) {
    logger.log(`Connected to ${connectionString}`);
  }
}

class User {
  constructor(private database: Database) {}
}

const logger = new Logger();
const database = new Database(logger, "db.sqlite");
const user = new User(database);

This code becomes stronger because the User, Database, and Logger classes are decoupled.

OK, but it becomes harder to instantiate a User.

Yes, it does. That’s why we use a Container, which registers the classes that can be injected and lets us create instances easily:

An example of a container used to instantiate classes
class Logger {/* ... */}
class Database {/* ... */}
class User {/* ... */}

class Container {
  getLogger(): Logger {
    return new Logger();
  }

  getDatabase(): Database {
    return new Database(this.getLogger(), "db.sqlite");
  }

  getUser(): User {
    return new User(this.getDatabase());
  }
}

const container = new Container();
const user = container.getUser();

The code is longer, but everything is decoupled. Rest assured, we are not going to implement all this by hand. Excellent libraries exist, and the one I chose is Inversify.

In this section, we are going to concretely implement a complete dependency injection system.

We will set up a Logger that can be injected into all the classes of our application. It will allow us to log HTTP requests, for example, but also many other events.

So let’s install inversify:

$ npm install inversify --save

And let’s create a simple event logging class:

Note
We could use a library like Winston or Morgan, but for the example, I will create a fairly basic class:
Create a basic logger
// src/services/logger.service.ts
export class Logger {
  public log(level: 'DEBUG' | 'INFO' | 'ERROR', message: string): void {
    const time = new Date().toISOString();
    console.log(`${time} - ${level} - ${message}`);
  }
}

To make it injectable, you need to add a @injectable decorator to it. This decorator will simply add metadata to our class so that it can be injected into our future dependencies.

Make Logger injectable
import {injectable} from 'inversify';

@injectable()
export class Logger {/* ... */}

And there you go. Now we just have to create the container that will register this service. The documentation recommends creating a TYPES object that simply stores the identifiers of our services. We will create a core folder that will contain all the cross-cutting code of our application.

// src/core/types.core.ts
export const TYPES = {Logger: Symbol.for('Logger')};
Note
A Symbol is a primitive type that allows you to have a unique reference.
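
A short standalone example shows why Symbol.for suits this use case:

Behavior of Symbol and Symbol.for
// Symbol.for() uses a global registry: the same key returns the same reference.
console.log(Symbol.for('Logger') === Symbol.for('Logger')); // => true

// Symbol() always creates a brand new, unique value.
console.log(Symbol('Logger') === Symbol('Logger')); // => false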

Now we can use this symbol to save our logger in a new container.core.ts file. Just instantiate a Container and add our service with the bind() method. We then export this instance for use in the application:

// src/core/container.core.ts
import {Container} from 'inversify';
import {Logger} from '../services/logger.service';
import {TYPES} from './types.core';

export const container = new Container();
container.bind(TYPES.Logger).to(Logger);

And there you go.

Creating a controller

Let’s set this class aside for now; we will use it later in our first controller. Controllers are part of the MVC design pattern: Model, View, Controller. Their purpose is to intercept the request and call the dedicated services. There is an official Inversify library to integrate dependency injection directly into our controllers: inversify-express-utils.

We start by installing the library. We’ll also add body-parser, which will allow us to process the HTTP request parameters (we’ll talk about this later).

To install it, it’s straightforward. Just follow the official documentation. So we start by installing some libraries.

$ npm install inversify-express-utils reflect-metadata body-parser --save
  • reflect-metadata allows Inversify to add metadata to our classes. This import must be located at the very beginning of the first file.

  • body-parser gives us the ability to extract parameters from HTTP requests (we’ll talk about it later).

Before writing our first controller, it is necessary to make some modifications to the creation of our HTTP server. Let’s create a new file, src/core/server.ts, which will simply define our HTTP server with inversify-express-utils:

The definition of our HTTP server with inversify-express-utils.
// src/core/server.ts
import * as bodyParser from 'body-parser';
import {InversifyExpressServer} from 'inversify-express-utils';
import {container} from './container.core';

export const server = new InversifyExpressServer(container);
server.setConfig(app => {
  app.use(bodyParser.urlencoded({extended: true}));
  app.use(bodyParser.json());
});

As you can see, we are now using an instance of InversifyExpressServer. The setConfig method allows you to add middleware (we’ll return to this later). Let’s move on to the main.ts file, which we’ll modify a bit:

// src/main.ts
import 'reflect-metadata';
import {container} from './core/container.core';
import {server} from './core/server';
import {TYPES} from './core/types.core';

const port = 3000;

server
  .build()
  .listen(port, () => console.log(`Listen on http://localhost:${port}/`));

And there you go. Now we can tackle our first controller.

A controller is a class like any other; it simply carries the @controller decorator. This decorator declares the controller as @injectable and also offers us special features.

Let’s go straight to the implementation to make it more meaningful:

Creating the first controller with a single route
// src/controllers/home.controller.ts
import {Request, Response} from 'express';
import {controller, httpGet} from 'inversify-express-utils';

@controller('/')
export class HomeController {

  @httpGet('')
  public index(req: Request, res: Response) {
    return res.send('Hello world');
  }
}

As you can see, the implementation is obvious, thanks to the decorators:

  • The @controller("/") tells us that all the routes of this controller will be prefixed with /.

  • The second decorator, @httpGet(''), defines that this method will be accessible on the URL / via the HTTP GET verb.

Now let’s try to inject the Logger to display a message when this route is used:

// src/controllers/home.controller.ts
// ...
import {inject} from 'inversify';
import {TYPES} from '../core/types.core';
import {Logger} from '../services/logger.service';

@controller("/")
export class HomeController {
  public constructor(@inject(TYPES.Logger) private readonly logger: Logger) {}

  @httpGet('')
  public index(req: Request, res: Response) {
    this.logger.log('INFO', 'Get Home.index');
    return res.send('Hello world');
  }
}

There you go!

The @inject decorator takes care of everything. Just specify the symbol. It’s magic.

The last step is to manually import this controller into the container. It’s really very easy to do:

// src/core/container.core.ts
// ...
import '../controllers/home.controller';

You can now start the server with npm run start or wait for the transpilation to be done automatically if you have not stopped the previous server.

If everything works as before, you can commit the changes:

$ git add .
$ git commit -m "Add inversify"

Conclusion

It was a bit long, I know, but you did it! Don’t give up; this is just our little foundation for something big, so keep going.

Presenting users

In the previous chapter, we managed to set up the basics for the configuration of our application. This chapter will refine this base, add the Model layer that will store the data, and add the first tests.

In the next chapters, we will deal with user authentication using authentication tokens and defining permissions to limit access to connected users. We will then link products to users and give them the ability to place orders.

As you can already imagine, there are many authentication solutions for Node.js, such as Passport.js, Permit, and Currency. These are turnkey libraries, meaning they handle many things for you, like authentication, forgotten-password functionality, validation, etc.

We won’t use them, so that we can better understand the authentication mechanism. This will show you that there is nothing magical behind password hashing and the creation of authentication tokens.

This chapter will be comprehensive. It may be long, but I will try to cover as many topics as possible. Feel free to grab a coffee, and let’s go. By the end of this chapter, you will have built all the user logic, validation, and error handling.

Setting up TypeORM

Here we will build the Model layer of the MVC design pattern. This is the layer related to the database.

To access the database, we will use an ORM (Object Relational Mapper). The purpose of an ORM is to interact with the database and save you from writing SQL queries by hand. It also allows us to add an abstraction layer to the database type and not worry about the differences between PostgreSQL and SQLite, for example.

There are several ORMs for Node.js: Sequelize, Mongoose, and TypeORM. I chose the last one because it is the one that integrates best with Typescript. It also offers both an Active Record and a Data Mapper approach, which I like very much.
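
To give a rough idea of the difference between the two approaches, here is a simplified sketch (not code from our project; it assumes a User entity and an open connection already exist):

Active Record versus Data Mapper (sketch)
// Active Record: the entity knows how to persist itself
// (in TypeORM, the entity would extend BaseEntity)
const user = new User();
user.email = "john@doe.io";
await user.save();

// Data Mapper: a repository handles persistence for the entity
const repository = connection.getRepository(User);
await repository.save(user);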

Installing it is straightforward. We are going to install the TypeORM library along with two additional libraries:

  • sqlite3, which allows us to communicate with our SQLite database.

  • dotenv, which allows us to define environment variables such as the connection to our database.

Here we go:

Adding libraries to install TypeORM
$ npm add typeorm sqlite3 dotenv --save

We will now generate our configuration file. By default, dotenv will look for a file named .env. Let’s create it:

$ touch .env

And let’s start by defining TypeORM environment variables for a basic connection to an SQLite database:

The basic configuration of TypeORM for a connection to SQLite
TYPEORM_CONNECTION=sqlite
TYPEORM_DATABASE=db/development.sqlite
TYPEORM_LOGGING=true
TYPEORM_SYNCHRONIZE=true
TYPEORM_ENTITIES=src/entities/*.entity.ts,dist/entities/*.entity.js

As you can see, we define that we will use SQLite and that the database will be stored in the db/ folder. TYPEORM_SYNCHRONIZE lets us avoid worrying about migrations by letting TypeORM apply modifications to our database schema when necessary. We then specify where our entities are located with TYPEORM_ENTITIES.

All we have to do is configure dotenv to load this file. To do this, I use the Node.js --require flag, which allows us to preload a library. You just have to modify package.json:

Load the dotenv configuration in the NPM start script
{
  // ...
  "scripts": {
    "start": "tsc && node dist/main.js -r dotenv/config",
    "start:watch": "nodemon",
    // ...
  },
  // ...
}

We will now create a DatabaseService that will take care of connecting TypeORM to our database. As we have implemented dependency injection, this service will also be injectable. Here is the complete implementation. Don’t panic. I’ll detail the logic next.

Implementation of DatabaseService
// src/services/database.service.ts
// ...
@injectable()
export class DatabaseService {
  private static connection: Connection;

  public constructor(@inject(TYPES.Logger) private readonly logger: Logger) {}

  public async getConnection(): Promise<Connection> {
    if (DatabaseService.connection instanceof Connection) {
      return DatabaseService.connection;
    }

    try {
      DatabaseService.connection = await createConnection();
      this.logger.log('INFO', `Connection established`);
      return DatabaseService.connection;
    } catch (e) {
      this.logger.log('ERROR', 'Cannot establish database connection');
      process.exit(1);
    }
  }

  public async getRepository<T>(repository: ObjectType<T>): Promise<T> {
    const connection = await this.getConnection();
    return await connection.getCustomRepository<T>(repository);
  }
}

This class has two methods:

  • getConnection: this method initializes a new connection to the database. It calls the createConnection method, which looks for an ormconfig file (in our case, the environment variables loaded by dotenv) and establishes a connection. Once the connection is made, it is stored in a static property, which is returned directly the next time.

  • getRepository: this method allows us to manipulate our models via their repositories. We will talk about it in detail later.

Note
It is good practice to hide the library’s logic behind our own class. This reduces our coupling to the library and makes it easier to migrate if, one day, we want to change it.

Now that our service is created, we need to add it to our container:

Add the Symbol linked to the DatabaseService service.
// src/core/types.core.ts
export const TYPES = {
  // ...
  DatabaseService: Symbol.for('DatabaseService'),
};
Registration of the DatabaseService service in the Inversify container.
// src/core/container.core.ts
import {Container} from 'inversify';
import {DatabaseService} from '../services/database.service';
// ...
export const container = new Container();
// ...
container.bind(TYPES.DatabaseService).to(DatabaseService);

And there you go.

We can now create our first User model. Using the Data Mapper pattern, we will have to create two classes:

  • the entity: it defines the attributes (fields) to be saved in the database. In our case, I will simply create two attributes: email and password (the password will be hashed later).

  • the repository: it adds some logic to save our entities.

To simplify the example, I will put these two classes in the same file, but you can of course separate them:

Creation of user entity and user repository
// src/entities/user.entity.ts
import {/* ... */} from 'typeorm';

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({unique: true})
  email: string;

  @Column()
  password: string;
}

@EntityRepository(User)
export class UserRepository extends Repository<User> {}

And there you go. The result is really very simple, thanks to the @Column decorators offered by TypeORM. They can also define the type of information stored (text, date, etc.); a few options are shown below. The implementation of this model is sufficient for the moment.
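
For illustration, here are some of the column options TypeORM offers; our model does not need them yet, so treat this as a sketch:

Some column options offered by TypeORM (illustration only)
// Hypothetical columns, not part of our User entity
@Column({type: 'text', nullable: true})
biography: string;

@CreateDateColumn()
createdAt: Date;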

Our work is not very visible yet, but hold on, because you will see the result in the next section.

We can commit the changes made so far:

$ git add .
$ git commit -m "Setup TypeORM"

Creating the user controller

Now it’s time to get to the concrete part and create the controller to manage the users. This controller will respect the REST standards and offer the classic CRUD actions, i.e., Create, Read, Update, and Delete.

List users

We will start with the index method, which is the simplest.

As we saw earlier, controllers can inject our services. So we will inject the DatabaseService to be able to retrieve the UserRepository. Then we will just have to call the userRepository.find method to get the list of all users (which is empty for the moment).

Here is the implementation of our controller:

Implementation of user controller index
// src/controllers/users.controller.ts
import {Request, Response} from 'express';
import {inject} from 'inversify';
import {controller, httpGet} from 'inversify-express-utils';
import {TYPES} from '../core/types.core';
import {UserRepository} from '../entities/user.entity';
import {DatabaseService} from '../services/database.service';

@controller('/users')
export class UsersController {
  public constructor(@inject(TYPES.DatabaseService) private readonly database: DatabaseService) {}

  @httpGet('/')
  public async index(req: Request, res: Response) {
    const userRepository = await this.database.getRepository(UserRepository);

    const users = await userRepository.find();
    return res.json(users);
  }
}

And of course, don’t forget to add the import of this new controller in the container:

// src/core/container.core.ts
// ...
import "../controllers/users.controller";

And there you go. Run the command npm run start:watch to start the server if you have stopped it and let’s test the functionality with cURL:

$ curl http://localhost:3000/users

The command’s output is an empty array: this is normal because there are no users yet. On the other hand, the server terminal tells us that a lot has happened:

Output of TypeORM database initialization
query: BEGIN TRANSACTION
query: SELECT * FROM "sqlite_master" WHERE "type" = 'table' AND "name" IN ('user')
query: SELECT * FROM "sqlite_master" WHERE "type" = 'index' AND "tbl_name" IN ('user')
query: SELECT * FROM "sqlite_master" WHERE "type" = 'table' AND "name" = 'typeorm_metadata'
query: CREATE TABLE "user" ("id" integer PRIMARY KEY AUTOINCREMENT NOT NULL, "email" varchar NOT NULL, "password" varchar NOT NULL)
query: COMMIT
2020-11-15T22:09:25.476Z - INFO - Connection established - {}
query: SELECT "User". "id" AS "User_id", "User". "email" AS "User_email", "User". "password" AS "User_password" FROM "user" "user" "User" "User".

These are TypeORM logs. These tell us that:

  1. TypeORM checked whether a table named user existed.

  2. TypeORM created this table since it did not exist.

  3. The connection to the database was established.

  4. The SQL query to retrieve all users was executed.

This tells us that everything is working perfectly! But I feel a bit disappointed because we don’t have a user yet. Let’s move on!

Create

Now that our entire structure has been put in place, the rest will go much faster. Let’s go straight to the implementation, and I’ll explain the code next:

Adding the create method to the users controller
// src/controllers/users.controller.ts
// ...
import {controller, httpGet, httpPost, requestBody} from 'inversify-express-utils';
// ...

interface CreateUserBody {
  email: string;
  password: string;
}

@controller('/users')
export class UsersController {
  // ...
  @httpPost('/')
  public async create(@requestBody() body: CreateUserBody, req: Request, res: Response) {
    const repository = await this.database.getRepository(UserRepository);
    const user = new User();
    user.email = body.email;
    user.password = body.password;
    await repository.save(user);
    return res.sendStatus(201);
  }
}

It’s a bit of code but don’t panic. CreateUserBody is an interface that defines the HTTP parameters that can be received. We take these parameters and send them directly to the repository.

Let’s test that it all works:

Creating a user with cURL.
$ curl -X POST -d "email=test@test.fr" -d "password=test" http://localhost:3000/users

Perfect. You can see that everything is working properly!

Let’s move on to retrieve the information of this user.

Show

The show method will take care of retrieving a user’s information. This method will take the user’s ID. We will then use the repository to retrieve the user.

Here is the implementation :

Adding the show method to the users controller
// src/controllers/users.controller.ts
// ...
@controller('/users')
export class UsersController {
  // ...
  @httpGet('/:userId')
  public async show(@requestParam('userId') userId: number) {
    const repository = await this.database.getRepository(UserRepository);
    return repository.findOneOrFail(userId);
  }
}

The implementation is really very simple. Just return an object, and inversify-express-utils will take care of converting the JavaScript object to JSON.

Let’s try it to see:

$ curl http://localhost:3000/users/1
{"id":1, "email": "test@test.fr", "password": "test"}.

And there you go. Everything is working properly. Now let’s try to update this user.

Update

The update method takes care of retrieving, modifying, and saving the user. As with the previous method, TypeORM makes our task much easier:

Implementation of user update
// src/controllers/users.controller.ts
// ...
interface UpdateUserBody {
  email: string;
  password: string;
}

@controller('/users')
export class UsersController {
  // ...
  @httpPut('/:userId')
  public async update(
    @requestBody() body: UpdateUserBody,
    @requestParam('userId') userId: number,
    req: Request,
    res: Response
  ) {
    const repository = await this.database.getRepository(UserRepository);
    const user = await repository.findOneOrFail(userId);
    user.email = body.email ?? user.email;
    user.password = body.password ?? user.password;
    await repository.save(user);
    return res.sendStatus(204);
  }
  // ...
}

And there you go. As before, let’s see if it works:

Updating an user using cURL
$ curl -X PUT -d "email=foo@bar.com"  http://localhost:3000/users/1

Perfect! Our user has been updated, and the server responds with a 204 status code. You can also see the SQL queries that TypeORM performed in the terminal logs:

query: SELECT "User"."id" AS "User_id", "User"."email" AS "User_email", "User"."password" AS "User_password" FROM "user" "User" WHERE "User"."id" IN (?) -- PARAMETERS: [1]
query: BEGIN TRANSACTION
query: UPDATE "user" SET "email" = ? WHERE "id" IN (?) -- PARAMETERS: ["foo@bar.com",1]
query: COMMIT

Delete

The delete method is the easiest. Just retrieve the user and call the repository.delete method. Let’s do it:

Implementation of user delete
// src/controllers/users.controller.ts
// ...
@controller('/users')
export class UsersController {
  // ...
  @httpDelete('/:userId')
  public async destroy(@requestParam('userId') userId: number, req: Request, res: Response) {
    const repository = await this.database.getRepository(UserRepository);
    const user = await repository.findOneOrFail(userId);
    await repository.delete(user);
    return res.sendStatus(204);
  }
}

As before, let’s check that it works:

Delete an user using cURL
$ curl -X DELETE  http://localhost:3000/users/1

Here again, we can verify that the user has been deleted by looking at the TypeORM logs:

query: SELECT "User"."id" AS "User_id", "User"."email" AS "User_email", "User"."password" AS "User_password" FROM "user" "User" WHERE "User"."id" IN (?) -- PARAMETERS: ["1"]
query: DELETE FROM "user" WHERE "id" = ? AND "email" = ? AND "password" = ? -- PARAMETERS: [1,"foo@bar.com","test"]

And there you go. Now that we are at the end of our controller, we can commit all these changes:

$ git commit -am "Implement CRUD actions on user"

Validation of our users

Everything seems to work, but there is still one problem: we do not validate the data we insert in the database. Thus, it is possible to create a user with a fake email:

Try to creating an invalid user using cURL
$ curl -X POST -d "whatever" -d "password=test" http://localhost:3000/users

Once again, we will use a ready-made library: class-validator. This library will offer us a ton of decorators to check our User instance very easily.

Let’s install it with NPM :

$ npm install class-validator --save

And then just add the @IsEmail and @IsDefined decorators like this:

// src/entities/user.entity.ts
+ import {IsDefined, IsEmail, validateOrReject} from 'class-validator';
- import {/* ... */} from 'typeorm';
+ import {BeforeInsert, BeforeUpdate, /* ... */} from 'typeorm';

@Entity()
export class User {
  // ...
+  @IsDefined()
+  @IsEmail()
  @Column()
  email: string;

+  @IsDefined()
  @Column()
  password: string;

+  @BeforeInsert()
+  @BeforeUpdate()
+  async validate() {
+    await validateOrReject(this);
+  }
}
// ...

It didn’t take a lot of code to add. The most interesting part is the validate method. It has two decorators, BeforeInsert and BeforeUpdate, which automatically call the validate method whenever the repository’s save method is used. This is very convenient, and there is nothing more to do. Now let’s try to create the same invalid user again:

Try to creating an invalid user using cURL
$ curl -X POST -d "whatever" -d "password=test" http://localhost:3000/users
...
<pre>An instance of User has failed the validation:<br> - property email has failed the following constraints: isDefined, isEmail <br></pre>
...

You can see that it is much better. However, we would like to send an error formatted in JSON, with an HTTP status code matching the REST standard. So let’s modify the controller:

Add user validation in the UserController.
// src/controllers/users.controller.ts
// ...
@controller('/users')
export class UsersController {
  // ...
  @httpPost("/")
  public async create(/* ... */): Promise<User | Response> {
    // ...
    const errors = await validate(user);
    if (errors.length !== 0) {
      return res.status(400).json({ errors });
    }

    return repository.save(user);
  }

  @httpPut("/:id")
  public async update(/* ... */): Promise<User | Response> {
    // ...
    const errors = await validate(user);
    if (errors.length !== 0) {
      return res.status(400).json({ errors });
    }
    return repository.save(user);
  }
  // ...
}

Let’s try now:

Try to creating an invalid user using cURL
$ curl -X POST -d "test@test.fr" -d "password=test"  http://localhost:3000/users
{"errors":[{"target":{"password":"test"},"property":"email","children":[],"constraints":{"isDefined":"email should not be null or undefined","isEmail":"email must be an email"}}]}

The result is really complete and will allow an API user to quickly interpret the error.

Let’s commit these changes:

$ git commit -am "Validate user"

Refactoring

Now that we have code that works, it’s time to do a refactoring pass to clean it all up.

During this setup, you may have noticed that the show, update, and destroy methods share a common piece of logic: they all retrieve the user first.

To factorize this code, there would be two solutions:

  1. move the code snippet to a private method and call it (sketched right after this list)

  2. create a Middleware that will be executed before the controller
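
For comparison, the first option would look roughly like this (a sketch only; we will not use it):

Option 1: a private helper method (sketch, not used)
// Hypothetical private method inside UsersController
private async fetchUser(userId: number): Promise<User> {
  const repository = await this.database.getRepository(UserRepository);
  return repository.findOneOrFail(userId);
}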

I chose the second option because it reduces the code and the controller’s responsibility. Moreover, with inversify-express-utils it’s effortless. Let me show you:

// src/middlewares/fetchUser.middleware.ts
import {NextFunction, Request, Response} from 'express';
import {inject, injectable} from 'inversify';
import {BaseMiddleware} from 'inversify-express-utils';
import {TYPES} from '../core/types.core';
import {User, UserRepository} from '../entities/user.entity';
import {DatabaseService} from '../services/database.service';

@injectable()
export class FetchUserMiddleware extends BaseMiddleware {
  constructor(@inject(TYPES.DatabaseService) private readonly database: DatabaseService) {
    super();
  }

  public async handler(
    req: Request & { user: User },
    res: Response,
    next: NextFunction
  ): Promise<void | Response> {
    const userId = req.query.userId ?? req.params.userId;
    const repository = await this.database.getRepository(UserRepository);
    req.user = await repository.findOne(Number(userId));

    if (!req.user) {
      return res.status(404).send("User not found");
    }

    next();
  }
}

Here are some explanations about this code:

  1. inversify-express-utils gives us access to an abstract class BaseMiddleware. We also need to add the @injectable decorator to use it later in our controller.

  2. a middleware is a simple handler method that takes:

    • req: the request sent by the user

    • res: the HTTP response to return.

    • next: a callback to call once our processing is complete.

  3. the handler method takes care of retrieving the user and adding it to the req object for later use.

  4. if the user does not exist, we use res to return a 404 response directly, without even reaching the controller.

Since we have defined a new injectable, we need to add it to our container:

Add FetchUserMiddleware type for inversify
// src/core/types.core.ts
export const TYPES = {
  // ...
  // Middlewares
  FetchUserMiddleware: Symbol.for("FetchUserMiddleware"),
};
Register FetchUserMiddleware to container
// src/core/container.core.ts
// ...
import {FetchUserMiddleware} from '../middlewares/fetchUser.middleware';
// ...
// middlewares
container.bind(TYPES.FetchUserMiddleware).to(FetchUserMiddleware);

Now we can use this middleware in our controller by adding TYPES.FetchUserMiddleware to the decorator. So here is the modification:

Using FetchUserMiddleware into user controller
// src/controllers/users.controller.ts
// ...
@controller('/users')
export class UsersController {
  // ...
  @httpGet('/:userId', TYPES.FetchUserMiddleware)
  public async show(/* ... */) {
    return req.user;
  }

  @httpPut('/:userId', TYPES.FetchUserMiddleware)
  public async update(/* ... */) {
    // ...
    req.user.email = body.email ?? req.user.email;
    req.user.password = body.password ?? req.user.password;
    // ...
  }

  @httpDelete('/:userId', TYPES.FetchUserMiddleware)
  public async destroy(/* ... */) {
    // ...
    await repository.delete(req.user);
    // ...
  }
}

Not bad, right? Let’s commit the modifications before going further:

$ git add .
$ git commit -m "Factorize user controller with middleware"

Password Hash

Theory

We will use Node.js’s built-in Crypto library. Here is an example of a method for hashing a password:

Hash a password with crypto library
import {createHash} from 'crypto';

function hashPassword(password: string): string {
  return createHash("sha256").update(password).digest("hex");
}

console.log(hashPassword("$uper_u$er_p@ssw0rd"));
// => 51e649c92c8edfbbd8e1c17032...

And there it is! To know if the password matches, just check if the hash matches the previous one:

Compare a hashed password
import {createHash} from 'crypto';

function hashPassword(password: string): string {
  return createHash("sha256").update(password).digest("hex");
}

function isPasswordMatch(hash: string, password: string): boolean {
  return hash === hashPassword(password);
}

const hash = hashPassword("$uper_u$er_p@ssw0rd");// => 51e649c92c8edfbbd8e1c17032...

isPasswordMatch(hash, "$uper_u$er_p@ssw0rd");// => true
isPasswordMatch(hash, "wrong password");// => false

Impeccable. However, there is a small problem with this type of method.

If your hashed passwords leak, it will be quite easy to retrieve the original passwords by building a library of hashes. Concretely, a malicious person would take commonly used passwords, hash them one by one with the same algorithm, and compare them to ours. To correct this, a hash salt must be used.

Using a salt consists of adding a fixed piece of text to each password before hashing it. Here is the modification:

Hash a password with a salt
import {createHash} from 'crypto';

const salt = "my private salt";

function hashPassword(password: string, salt: string): string {
  return createHash("sha256").update(`${password}_${salt}`).digest("hex");
}

function isPasswordMatch(hash: string, password: string): boolean {
  return hash === hashPassword(password, salt);
}

const hash = hashPassword("$uper_u$er_p@ssw0rd", salt);// => 3fdd2b9c934cd34c3150a72fb4c98...

isPasswordMatch(hash, "$uper_u$er_p@ssw0rd");// => true
isPasswordMatch(hash, "wrong password");// => false

There you go! The result is the same, but our application is more secure. If someone were to access our database, they would also need the hash salt to retrieve the corresponding passwords.

The implementation

Now that we have seen the theory, let’s put it into practice. We will put the same methods in a password.utils.ts file. Here we go:

Create utilities methods for password hashing
// src/utils/password.utils.ts
import {createHash} from 'crypto';

const salt = "my private salt";

export function hashPassword(password: string): string {
  return createHash("sha256").update(`${password}_${salt}`).digest("hex");
}

export function isPasswordMatch(hash: string, password: string): boolean {
  return hash === hashPassword(password);
}

We will now use the hashPassword method in the User entity. With TypeORM, it’s very easy: we will use a setter, in the same spirit as the hooks we used for validation.

Hash user’s password
// src/entities/user.entity.ts
// ...
import {hashPassword} from '../utils/password.utils';

@Entity()
export class User {
  // ...
  @IsDefined()
  @Column()
  hashedPassword: string;

  set password(password: string) {
    if (password) {
      this.hashedPassword = hashPassword(password);
    }
  }
  // ...
}
// ...

A few explanations are necessary:

  • We have created a hashedPassword attribute, which contains the user’s hashed password. This value will be saved in the database because we added the @Column decorator. We’ll need it later to check whether the password provided by the user matches the one they defined (see the preview after this list).

  • the password attribute becomes a setter. It’s like a virtual attribute that is called during assignment: by doing user.password = 'toto', this method is called. This is perfect because we no longer want to store the plain-text password, in case our database leaks.
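
Later, when we implement authentication, verifying a submitted password could look something like this (a hypothetical preview based on our isPasswordMatch helper):

Verify a password against the stored hash (hypothetical preview)
import {isPasswordMatch} from '../utils/password.utils';

// `user` would be loaded from the database and `submittedPassword` would
// come from the login request; both are placeholders in this sketch.
const isValid = isPasswordMatch(user.hashedPassword, submittedPassword);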

Now let’s try to create a user via the API:

Creating an user with cURL
$ curl -X POST -d "email=test@test.fr" -d "password=test" http://localhost:3000/users
{"email":"test@test.fr","password":"test","hashedPassword":"8574a23599216d7752ef4a2f62d02b9efb24524a33d840f10ce6ceacda69777b","id":1}

Everything seems to work perfectly because we can see that the user has a hashed password. If we change the password, the hash changes accordingly:

Update user’s password with cURL
$ curl -X PUT   -d "password=helloWorld"  http://localhost:3000/users/4
{"id":4,"email":"test@test.fr","hashedPassword":"bdbe865951e5cd026bb82a299e3e1effb1e95ce8c8afe6814cecf8fa1e895d1f"}

Everything works perfectly well. Let’s do a commit before going any further.

$ git add .
$ git commit -m "Hash user password"

Setting up a unit test

We have code that works, and that’s cool. If we can make sure it keeps working every time the code evolves, that’s even better. So this is where unit tests come in.

The role of unit tests is to make sure our methods always behave the way we decided they should. So here we’re going to set up a simple test to check that everything works well.

There are several testing libraries in JavaScript. I chose Mocha because it’s one of the most popular libraries and straightforward to set up. We also install ts-mocha, which will transpile the TypeScript on the fly:

Install mocha library
$ npm install mocha ts-mocha @types/mocha --save-dev

We also need to modify our tsconfig.json to add Mocha’s declarations and tell Typescript not to compile these files:

Add mocha setting to Typescript configuration
{
  "compilerOptions": {
    // ..
    "types": [
      "node",
+      "mocha"
    ],
    // ...
  },
+   "exclude": ["./**/*.spec.ts"]
}

Here we are ready to create our first test:

Create first unit test about hashing password
// src/entities/user.entity.spec.ts
import assert from 'assert';
import {hashPassword} from '../utils/password.utils';
import {User} from './user.entity';

describe("User", () => {
  it("should hash password", () => {
    const user = new User();
    user.password = "toto";
    const expected = hashPassword("toto");
    assert.strictEqual(user.hashedPassword, expected);
  });
});

As I told you, it’s a really simple test. Now let’s add the command that will allow us to run this test in the package.json file:

Add NPM script to run tests
{
  // ...
  "scripts": {
    "start": "tsc && node dist/main.js",
    "start:watch": "nodemon",
+     "test": "DOTENV_CONFIG_PATH=.test.env ts-mocha -r reflect-metadata -r dotenv/config src/**/*.spec.ts",
    "build": "tsc"
  },
  // ...
}

Some explanations on this command:

  • -r reflect-metadata loads the reflect-metadata library so we don’t have to import it manually.

  • -r dotenv/config loads the dotenv library to get the TypeORM environment variables.

  • DOTENV_CONFIG_PATH points dotenv to a dedicated .env file that we will create right afterward.

When we test our application, we don’t want to pollute our database with data created during testing. So it’s good practice to use a dedicated database. In our case, we will use an in-memory SQLite database, which is not stored on the hard disk but lives directly in RAM. Here is the .test.env file:

TypeORM environment variables for testing
TYPEORM_CONNECTION=sqlite
TYPEORM_DATABASE=:memory:
TYPEORM_LOGGING=true
TYPEORM_SYNCHRONIZE=true
TYPEORM_ENTITIES=src/entities/*.entity.ts
Note
The TYPEORM_ENTITIES directive also points to Typescript files because ts-mocha transpiles and executes these files directly.

That’s it. Now we can run this test:

$ npm test

  User
    ✓ should hash password


  1 passing (5ms)

And while we’re at it, we can also add another unit test on the isPasswordMatch password comparison method:

Add unit test about isPasswordMatch
// src/utils/password.utils.spec.ts
import assert from 'assert';
import {hashPassword, isPasswordMatch} from './password.utils';

describe("isPasswordMatch", () => {
  const hash = hashPassword("good");
  it("should match", () => {
    assert.strictEqual(isPasswordMatch(hash, "good"), true);
  });
  it("should not match", () => {
    assert.strictEqual(isPasswordMatch(hash, "bad"), false);
  });
});

Again, this kind of test may seem simplistic to you, but these tests are very fast and provide an extra safety net. Let’s run the tests:

$ npm test
...
  User
    ✓ should hash password

  isPasswordMatch
    ✓ should match
    ✓ should not match


  3 passing (6ms)

Now that you’re warmed up, let’s commit and move on to the next one:

$ git add .
$ git commit -m "Add unit test about password hash"

Add functional tests

Now that we have set up unit tests, it is time to set up the functional tests. This type of test will test functionalities rather than methods.

A good practice I learned while developing with the Ruby on Rails framework is to test the behavior of controllers. This is very easy because you just call an endpoint with parameters and check the result. For example, if I send a GET request to the /users route, I should expect to receive a list of users. The supertest library allows us to do this without even starting the server.

So let’s install this library:

Install supertest library
$ npm install supertest @types/supertest --save-dev

Now let’s create our agent that will be used in all our tests:

Create supertest agent
// src/tests/supertest.utils.ts
import supertest, { SuperTest, Test} from 'supertest';
import {server} from '../core/server';

export const agent: SuperTest<Test> = supertest(server.build());

Now let’s create our first test, for the index method:

Create functional test about GET /users endpoint
// src/controllers/users.controller.spec.ts
import {container} from '../core/container.core';
import {TYPES} from '../core/types.core';
import {UserRepository} from '../entities/user.entity';
import {agent} from '../tests/supertest.utils';

describe("UsersController", () => {
  let userRepository: UserRepository;

  describe("index", () => {
    it("should respond 200", (done) => {
      agent.get("/users").expect(200, done);
    });
  });
});

The test is really simple, and the supertest syntax makes it very readable. This test means "send a GET HTTP request and expect a response with status `200`". Let’s try to run the tests.

$ npm test
...
  UsersController
    index
      ✓ should respond 200
...
Note
TypeORM SQL queries may be displayed in your test output because we left the TYPEORM_LOGGING=true directive. You can set it to false to stop seeing them.

Now here is the same test for create. This one is different because it sends HTTP parameters.

Create functional test about POST /users/ endpoint
// src/controllers/users.controller.spec.ts
// ...
describe("UsersController", () => {
  let userRepository: UserRepository;
  // ..
  describe("create", () => {
    it("should create user", (done) => {
      const email = `${new Date().getTime()}@test.io`;
      agent.post("/users").send({ email, password: "toto" }).expect(201, done);
    });

    it("should not create user with missing email", (done) => {
      const email = `${new Date().getTime()}@test.io`;
      agent.post("/users").send({ email }).expect(400, done);
    });
  });
});
Note
new Date().getTime() returns the number of milliseconds elapsed since 01/01/1970. I use it to get a unique number. We’ll see later how to improve this.

Here we test two things:

  1. if we send the right information, we should get a 201 - Created response.

  2. if we don’t specify a password, we should get a 400 - Bad Request response.

This test is straightforward, and you can add others like "should not create user with invalid email" for example. These tests are easy to set up and validate a global behavior.

You can now commit the changes:

$ git add && git commit -m "Add functional tests"

Conclusion

Oh, you’re here! Well done! I know this was probably the longest chapter but don’t give up!

If you’re not used to writing tests, we’ll see in the next chapter how to use them to define the behavior we want before we even code the features. So we will set up tests for the show, update, and destroy methods that will need authentication. In other words, we will start doing test-driven development (TDD). This is definitely the most important part of the book!

User authentication

In this chapter, things will become more interesting. We are going to set up our authentication mechanism. In my opinion, this will be one of the most interesting chapters because we will introduce a lot of new concepts. In the end, you will have a simple but powerful authentication system. Don’t panic. We’ll get there.

Stateless Sessions

Before going any further, something must be clear: an API does not manage sessions. This may sound a bit crazy if you don’t have any experience creating this kind of application. An API should be stateless. By definition, this means that once the API has answered a request, it requires no further attention: no previous or future state is required for the system to work.

The process of user authentication via an API is straightforward (a short client-side sketch follows the list):

  1. The client requests a session resource with the corresponding credentials (usually an e-mail and a password).

  2. The server returns the user resource with its corresponding authentication token.

  3. For each page that requires authentication, the client must send this authentication token.
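
Here is a minimal client-side sketch of this flow, assuming a recent Node.js (or a browser) where fetch is available and that the server accepts JSON bodies; the /tokens endpoint and the Authorization header are the ones we will build in this chapter:

Sketch of the client-side authentication flow
async function authenticateAndFetchProfile(): Promise<void> {
  // 1. Request a token with the user's credentials.
  const tokenResponse = await fetch("http://localhost:3000/tokens", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email: "test@test.fr", password: "test" }),
  });
  const { token } = await tokenResponse.json();

  // 2. Send that token back on every request that requires authentication.
  const profileResponse = await fetch("http://localhost:3000/users/1", {
    headers: { Authorization: token },
  });
  console.log(await profileResponse.json());
}

authenticateAndFetchProfile();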

This section and the next will focus on building a session controller with its corresponding actions. We will then complete the request flow by adding the necessary authorization access.

Introducing JSON Web Token

When it comes to authentication tokens, a standard exists for the JSON Web Token (JWT).

JWT is an open standard defined in RFC 7519. It allows the secure exchange of tokens between multiple parties. - https://fr.wikipedia.org/wiki/JSON_Web_Token [Wikipedia]

Overall, a JWT token is composed of three parts:

  • a header structured in JSON, which contains, for example, the expiration date of the token.

  • a payload structured in JSON, which can contain any data. In our case, it will contain the identifier of the "connected" user.

  • a signature that allows us to verify that the token was signed by our application and therefore that it is valid.

These three parts are each encoded in base64 and then concatenated using dots (.). This gives us something like this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

Once decoded, this token gives us the following information:

  • the header

{ "alg": "HS256", "typ": "JWT" }
  • the payload

{ "sub": "1234567890", "name": "John Doe", "iat": 1516239022 }

For more information about JWT tokens, I invite you to visit jwt.io.

This has many advantages, such as sending information to the API consumer directly in the token. You can, for example, choose to integrate the user information in the payload.

Setting up the authentication token

The JWT standard has many implementations in various languages. Of course, there is a Nodejs library on this subject: node-jsonwebtoken.

So let’s start by installing it:

Install JWT library
$ npm install jsonwebtoken
$ npm install --save-dev @types/jsonwebtoken

The library is very easy to use with the jwt.sign and jwt.verify methods. Here is an example:

Example of how to use the JWT library with Node.js
import {sign, verify} from 'jsonwebtoken';

const JWT_PRIVATE_KEY = "123456789";
const payload = { userId: 1 };
const token = sign(payload, JWT_PRIVATE_KEY, { expiresIn: "1 day" });

console.log(verify(token, JWT_PRIVATE_KEY));
// => { userId: 1, iat: 1605730716, exp: 1605817116 }

First, we sign a payload with the secret key JWT_PRIVATE_KEY to obtain a token that we can pass around. Then verify decodes the token, and we can see that we get our payload back.

Now we’re going to put all this logic into a JsonWebTokenService class. This will allow us to avoid duplicating code. This class will just encode and decode JWT tokens. Here is the implementation:

Create JsonWebTokenService to encode / decode JWT token
// src/services/jsonWebToken.service.ts
import {injectable} from 'inversify';
import {sign, verify} from 'jsonwebtoken';

@injectable()
export class JsonWebTokenService {
  private readonly JWT_PRIVATE_KEY = "123456789";

  encode(payload: Object): string {
    return sign(payload, this.JWT_PRIVATE_KEY, { expiresIn: "1 day" });
  }

  decode(token: string): Object {
    return verify(token, this.JWT_PRIVATE_KEY);
  }
}

The implementation is straightforward. One method encodes a payload. The other decodes it. As this service is injectable, we have to save it in the container.

Add the Symbol for the JsonWebTokenService service.
// src/core/types.core.ts
export const TYPES = {
  // ...
  JsonWebTokenService: Symbol.for("JsonWebTokenService"),
};
Register JsonWebTokenService service to the container.
// src/core/container.core.ts
// ...
import {JsonWebTokenService} from '../services/jsonWebToken.service';

export const container = new Container();
// ...
container.bind(TYPES.JsonWebTokenService).to(JsonWebTokenService);

And there you go. We can even add a little quick test that will encode and decode a payload and check that we find the content:

Create unit test about JWT encoding / decoding
// src/services/jsonWebToken.service.spec.ts
import assert from 'assert';
import {container} from '../core/container.core';
import {TYPES} from '../core/types.core';
import {JsonWebTokenService} from './jsonWebToken.service';

describe("JsonWebTokenService", () => {
  let jsonWebTokenService: JsonWebTokenService;

  before(() => {
    jsonWebTokenService = container.get(TYPES.JsonWebTokenService);
  });

  it("should encode and decode payload", () => {
    const token = jsonWebTokenService.encode({ userId: 1 });
    const payload = jsonWebTokenService.decode(token) as { userId: number };
    assert.strictEqual(payload.userId, 1);
  });
});

This test is a bit longer than the others because we have to retrieve an instance of JsonWebTokenService from the container. To do so, we use the before hook, which is executed before our test suite runs.

Now let’s see if all our tests pass:

$ npm test
...
  JsonWebTokenService
    ✓ should encode and decode payload
...

It’s perfect. Let’s commit and move on:

$ git add .
$ git commit -m "Create JsonWebTokenService"

The token controller

So we set up the JWT token generation system. It is now time to create a route that will generate this token. The actions we will implement will be handled as RESTful services: the connection will be handled by a POST request to the create action.

Before moving on to the implementation, we will try to write a complete test.

Setting up the functional test

Here we will test the endpoint that we will create next. This endpoint will take the user’s email and password as parameters. So we can test three things:

  1. the user has sent the right information, so we return a token

  2. the password is wrong, so we return the error 400 - Bad request.

  3. the user does not exist, so we return the error 400 - Bad request.

Note
We return a 400 code without further explanation. Indeed, we do not want to tell the user that this email is not present in the database. This is a good practice that makes a brute-force attack a little more complicated.

Obviously, the test will start by creating a user. This is what we will do in the before method.

Creation of a part of the functional test of TokensController.
// src/controllers/tokens.controller.spec.ts
import {container} from '../core/container.core';
import {TYPES} from '../core/types.core';
import {User, UserRepository} from '../entities/user.entity';
import {DatabaseService} from '../services/database.service';

describe("TokensController", () => {
  let user: User;

  before(async () => {
    const databaseService = container.get<DatabaseService>(TYPES.DatabaseService);
    const userRepository = await databaseService.getRepository(UserRepository);

    const newUser = new User();
    newUser.email = `${new Date().getTime()}@test.io`;
    newUser.password = "p@ssw0rd";
    user = await userRepository.save(newUser);
  });
});
Note
We store the user variable outside the before method so that we can use it later.

Now we just have to write our tests.

// src/controllers/tokens.controller.spec.ts
import {container} from '../core/container.core';
import {TYPES} from '../core/types.core';
import {User, UserRepository} from '../entities/user.entity';
import {DatabaseService} from '../services/database.service';
import {agent} from '../tests/supertest.utils';

describe("TokensController", () => {
  // ...
  describe("create", () => {
    it("should get token", (done) => {
      agent
        .post("/tokens")
        .send({ email: user.email, password: "p@ssw0rd" })
        .expect(200, done);
    });

    it("should not get token user with bad password", (done) => {
      agent
        .post("/tokens")
        .send({ email: user.email, password: "bad password" })
        .expect(400, done);
    });

    it("should not create token with nonexisting email", (done) => {
      agent
        .post("/tokens")
        .send({ email: "nonexisting@test.io", password: "p@ssw0rd" })
        .expect(400, done);
    });
  });
});

And there you go. As we work in test-driven development, at this point our tests don’t pass:

$ npm test
...
  1) TokensController
       create
         should get token:
     Error: expected 200 "OK", got 404 "Not Found"
...
  2) TokensController
       create
         should not get token user with bad password:
     Error: expected 400 "Bad Request", got 404 "Not Found"
...
  3) TokensController
       create
         should not create token with nonexisting email:
     Error: expected 400 "Bad Request", got 404 "Not Found"
...

Our goal in the next section will be to pass these tests.

Implementation

So we will create the TokenController. Let’s start by creating the controller with the necessary dependencies:

  1. DatabaseService to retrieve the user that corresponds to the email

  2. JsonWebTokenService to create a JWT token

Creation of the TokensController with the necessary dependencies.
// src/controllers/tokens.controller.ts
import {inject} from 'inversify';
import {controller} from 'inversify-express-utils';
import {TYPES} from '../core/types.core';
import {UserRepository} from '../entities/user.entity';
import {DatabaseService} from '../services/database.service';
import {JsonWebTokenService} from '../services/jsonWebToken.service';

@controller("/tokens")
export class TokensController {
  public constructor(
    @inject(TYPES.JsonWebTokenService) private readonly jsonWebTokenService: JsonWebTokenService,
    @inject(TYPES.DatabaseService) private readonly databaseService: DatabaseService
  ) {}
}

Now we import this controller into the container so that it gets loaded:

// src/core/container.core.ts
// ...
import "../controllers/tokens.controller";
// ...

Now all we have to do is focus on the create method of our controller.

Implement POST /tokens endpoint to get a JWT token
// src/controllers/tokens.controller.ts
// ...
import {Request, Response} from 'express';
import {controller, httpPost, requestBody} from 'inversify-express-utils';
import {isPasswordMatch} from '../utils/password.utils';

@controller("/tokens")
export class TokensController {
  // ...

  @httpPost("")
  public async create(
    @requestBody() body: { email: string; password: string },
    req: Request,
    res: Response
  ) {
    const repository = await this.databaseService.getRepository(UserRepository);
    const user = await repository.findOne({ email: body.email });

    if (!user) {
      return res.sendStatus(400);
    }

    if (isPasswordMatch(user.hashedPassword, body.password)) {
      const token = this.jsonWebTokenService.encode({
        userId: user.id,
        email: user.email,
      });
      return res.json({ token });
    }

    return res.sendStatus(400);
  }
}

Wow! This code looks complicated, but it’s actually straightforward:

  1. we create a create method in the controller that will create a token for the requested user.

  2. this method uses the userRepository to retrieve the user from the given email. If we can’t find the user, we return a 400 - Bad request error.

  3. we use the isPasswordMatch method to check if the password matches the hash we have stored. If it does, we create and return a token with the jsonWebTokenService.encode method.

Still there? Let’s try to run the tests to see if our code works:

$ npm test
...
  TokensController
    create
      ✓ should get token (41ms)
      ✓ should not get token user with bad password
      ✓ should not create token with nonexisting email

Let’s try the logic in the terminal. Let’s create a user (if not already done):

Create a brand new user using cURL
$ curl -X POST -d "email=test@test.fr" -d "password=test" http://localhost:3000/users
{"email":"test@test.fr","hashedPassword":"8574a23599216d7752ef4a2f62d02b9efb24524a33d840f10ce6ceacda69777b","id":1}

Then let’s ask for the token for this one:

Request JWT token for an user using cURL
$ curl -X POST -d "email=test@test.fr" -d "password=test" http://localhost:3000/tokens
{"token": "eyJhbGciOiJIUzI1NiI..."}

Hooray! Let’s try with a wrong password:

Request a JWT token with a bad password using cURL
$ curl -X POST -d "email=test@test.fr" -d "password=azerty" http://localhost:3000/tokens
Bad Request

It’s perfect!

Let’s commit and move on:

$ git add .
$ git commit -m "Create token controller"

User logged in

We set up the following logic: the API returns an authentication token if the authentication parameters passed are correct.

We will now implement the following logic: each time the client requests a protected page, we will retrieve the user from the authentication token that the client passed in the HTTP header.

In our case, we will use the HTTP header Authorization, which is often used for this. I find this the best way because it gives context to the request without polluting the URL with extra parameters.

This action will be central to our application and will be used everywhere. So it’s quite logical to create a dedicated middleware, as we did earlier. But before moving on to the code, we will define the behavior we want.

Setting up the functional test

The operation we wish to set up is as follows:

  • there is no need for a token to create a user because this is the registration step.

  • an authentication token is required to view or modify a user

Now that we have defined that, we can create our functional test.

We’ll take the users.controller.spec.ts test, and we’ll implement the tests for show, update, and destroy.

These three tests require an existing user. We will create a utility method that generates a random user:

Create a method to build a new user with random data for testing purposes
// src/utils/faker.utils.ts
import {randomBytes} from 'crypto';
import {User} from '../entities/user.entity';

export function randomString(size: number = 8): string {
  return randomBytes(size).toString("hex");
}

export function generateUser(user?: User): User {
  const newUser = new User();
  newUser.email = user?.email ?? `${randomString()}@random.io`;
  newUser.password = newUser.email;

  return newUser;
}

This method is straightforward: it relies on randomBytes from the crypto library to generate a completely random email address.

Note
there are libraries like Faker.js that allow you to do this, but here I prefer to do without them to simplify the example.

Now we can go back to our test and create a user in the before method:

Create user in before hook in functional test
// src/controllers/users.controller.spec.ts
// ...
describe("UsersController", () => {
  let userRepository: UserRepository;
  before(async () => {
    const databaseService = container.get<DatabaseService>(TYPES.DatabaseService);
    userRepository = await databaseService.getRepository(UserRepository);
  });
  // ...
  describe("show", () => {
    let user: User;

    before(async () => {
      user = await userRepository.save(generateUser());
    });
  });
});

Now all we have to do is try to access this user via GET /users/1 with and without a JWT token:

Functional tests of the method UsersController.show.
// src/controllers/users.controller.spec.ts
// ...
describe("UsersController", () => {
  let jsonWebTokenService: JsonWebTokenService;
  before(async () => {
    // ...
    jsonWebTokenService = container.get(TYPES.JsonWebTokenService);
  });
  // ...
  describe("show", () => {
    let user: User;
    // ...
    it("should not show user other user", (done) => {
      agent.get(`/users/${user.id}`).expect(403, done);
    });

    it("should show my profile", (done) => {
      const jwt = jsonWebTokenService.encode({ userId: user.id });
      agent
        .get(`/users/${user.id}`)
        .set("Authorization", jwt)
        .expect(200, done);
    });
  });
});

As you can see, the tests are really very simple. We simply check the HTTP status code of the response.

The principle is exactly the same for the update and destroy methods:

Functional tests of the methods UsersController.update and UsersController.destroy.
// src/controllers/users.controller.spec.ts
// ...
describe("UsersController", () => {
  // ...
  describe("update", () => {
    // ... create user on `before`
    it("should not update other user", (done) => {
      agent.put(`/users/${user.id}`)
        .send({ password: "test" })
        .expect(403, done);
    });

    it("should update my profile", (done) => {
      const jwt = jsonWebTokenService.encode({ userId: user.id });
      agent.put(`/users/${user.id}`)
        .set("Authorization", jwt)
        .send({ password: "test" })
        .expect(200, done);
    });
  });

  describe("destroy", () => {
    // ... create user on `before`
    it("should not destroy other user", (done) => {
      agent.delete(`/users/${user.id}`).expect(403, done);
    });

    it("should delete my profile", (done) => {
      const jwt = jsonWebTokenService.encode({ userId: user.id });
      agent.delete(`/users/${user.id}`)
        .set("Authorization", jwt)
        .expect(204, done);
    });
  });
});

And there you go. If you run the tests at this point you’re going to get a bunch of errors:

$ npm test
// ...
UsersController
    index
      ✓ should respond 200
    show
      1) should not show user other user
      2) should show my profile
    create
      ✓ should create user
      ✓ should not create user with missing email
    update
      3) should not update other user
      4) should update my profile
    destroy
      5) should not destroy other user
      6) should delete my profile
// ...
  10 passing (226ms)
  6 failing

This is quite normal because we haven’t implemented the rest yet. Now let’s move on to the implementation.

Creating middleware

So we are going to create a FetchLoggedUserMiddleware middleware to meet our needs: find the user thanks to their authentication token, which is sent with each request.

The principle is pretty much the same as for the middleware we created earlier, so I’ll go straight to the implementation. In the same way as for the TokensController, we inject:

  • the jsonWebTokenService to decode the JWT token

  • the databaseService to retrieve the user associated with the token

Create a brand new middleware to fetch logged user
// src/middlewares/fetchLoggedUser.middleware.ts
import {inject, injectable} from 'inversify';
import {BaseMiddleware} from 'inversify-express-utils';
import {TYPES} from '../core/types.core';
import {DatabaseService} from '../services/database.service';
import {JsonWebTokenService} from '../services/jsonWebToken.service';

@injectable()
export class FetchLoggedUserMiddleware extends BaseMiddleware {
  constructor(
    @inject(TYPES.DatabaseService)
    private readonly databaseService: DatabaseService,
    @inject(TYPES.JsonWebTokenService)
    private readonly jsonWebTokenService: JsonWebTokenService
  ) {
    super();
  }
}

And now here is the implementation of the handler method:

Create logic for FetchLoggedUserMiddleware
// src/middlewares/fetchLoggedUser.middleware.ts
// ...
import {NextFunction, Request, Response} from 'express';
import {User, UserRepository} from '../entities/user.entity';

@injectable()
export class FetchLoggedUserMiddleware extends BaseMiddleware {
  // ...
  public async handler(
    req: Request & { user: User },
    res: Response,
    next: NextFunction
  ): Promise<void | Response> {
    const repository = await this.databaseService.getRepository(UserRepository);
    const token = req.headers.authorization?.replace("bearer", "");

    if (token === undefined) {
      return res.status(403).send("You must provide an `Authorization` header");
    }

    try {
      const payload = this.jsonWebTokenService.decode(token) as { userId: number };
      req.user = await repository.findOneOrFail(payload.userId);
    } catch (e) {
      return res.status(403).send("Invalid token");
    }

    next();
  }
}

Again the code seems long, but it is actually straightforward:

  1. we extract the JWT token from the Authorization header. If it is not defined, we return a 403 - Forbidden error with a short explanation

  2. we decode the JWT token and retrieve the associated user. If an error occurs (the token can’t be decoded or the user doesn’t exist), we return a 403 error as well.

  3. we inject the user in the request so that it can be used in the controller

Of course, we don’t forget to add this middleware to our container:

Add the FetchLoggedUserMiddleware symbol.
// src/core/types.core.ts
export const TYPES = {
  // ...
  FetchLoggedUserMiddleware: Symbol.for("FetchLoggedUserMiddleware"),
};

Register the FetchLoggedUserMiddleware middleware in the container.

// src/core/container.core.ts
// ...
import {FetchLoggedUserMiddleware} from '../middlewares/fetchLoggedUser.middleware';

export const container = new Container();
// ...
container.bind(TYPES.FetchLoggedUserMiddleware).to(FetchLoggedUserMiddleware);

And here is our middleware ready to be used.

Using the middleware

And now we just have to use the middleware in the UsersController. Here is an example for the show method:

Use FetchLoggedUserMiddleware into user controller
// src/controllers/users.controller.ts
// ...
@controller('/users')
export class UsersController {
  // ...
-   @httpGet('/:userId', TYPES.FetchUserMiddleware)
+   @httpGet('/:userId', TYPES.FetchLoggedUserMiddleware)
  public async show(/* ... */) {
+    if (Number(userId) !== req.user.id) {
+      return res.sendStatus(403);
+    }
    return req.user;
  }
  // ...
}

As you can see, the changes are minimal because part of the logic is moved into the middleware. You can also see that I added a straightforward check to prevent a user from viewing another user’s information.

The middleware allowed us to keep the logic in our controller very simple.

The principle is exactly the same for the update and destroy methods.

Adding FetchLoggedUserMiddleware to update and destroy
// src/controllers/users.controller.ts
// ...
@controller('/users')
export class UsersController {
  // ...
-  @httpPut('/:userId', TYPES.FetchUserMiddleware)
+  @httpPut('/:userId', TYPES.FetchLoggedUserMiddleware)
  public async update(/* ... */) {
+    if (Number(userId) !== req.user.id) {
+      return res.sendStatus(403);
+    }
    // ...
    return repository.save(req.user);
  }

-  @httpDelete('/:userId', TYPES.FetchUserMiddleware)
+  @httpDelete('/:userId', TYPES.FetchLoggedUserMiddleware)
  public async destroy(/* ... */) {
+    if (Number(userId) !== req.user.id) {
+      return res.sendStatus(403);
+    }
    const repository = await this.databaseService.getRepository(UserRepository);
    await repository.delete(req.user);
  }
}

If all goes well, our tests should pass:

$ npm test

  TokensController
    create
      ✓ should get token (41ms)
      ✓ should not get token user with bad password
      ✓ should not create token with nonexisting email

  UsersController
    index
      ✓ should respond 200
    show
      ✓ should not show user other user
      ✓ should show my profile
    create
      ✓ should create user
      ✓ should not create user with missing email
    update
      ✓ should not update other user
      ✓ should update my profile
    destroy
      ✓ should not destroy other user
      ✓ should delete my profile

  User
    ✓ should hash password

  JsonWebTokenService
    ✓ should encode and decode payload

  isPasswordMatch
    ✓ should match
    ✓ should not match


  16 passing (201ms)

It’s beautiful all this green, isn’t it?

Let’s try to do the same thing with cURL:

Obtains JWT token and use it with cURL
$ curl -X POST -d "email=test@test.fr" -d "password=test" http://localhost:3000/tokens
{"token": "eyJhbGciOiJIUzI1NiI..."}
$ curl -H "Authorization: eyJhbGciOiJIUzI1NiI..." http://localhost:3000/users/1
{"id":1,"email":"test@test.fr","hashedPassword":"8574a23599216d7752ef4a2f62..."}

Perfect! And what happens if we try to access this route without authorization?

Try to access on protected endpoint without JWT token using cURL
$ curl http://localhost:3000/users/1
You must provide an `Authorization` header

And there you go. We were denied access as planned. It’s time to commit all our changes:

$ git add .
$ git commit -m "Add JWT middleware"

Conclusion

You did it! You’re halfway there! This chapter has been long and difficult, but it’s a big step forward in setting up a solid mechanism to handle user authentication. We’re even starting to scratch the surface for simple authorization rules.

In the next chapter, we will focus on customizing JSON output for the user and adding a product entity by giving the user the ability to create a product and publish it for sale.

User’s products

In the previous chapter, we implemented the authentication mechanism that we will use throughout the application.

At the moment, we have a straightforward implementation of the User model, but the moment of truth has come. We are going to customize the JSON output and add a second resource: the user’s products. These are the elements that the user will sell in the application.

If you are familiar with an ORM, you may already know what I’m talking about. But for those who don’t, we will combine the User model with the Product model using the @ManyToOne and @OneToMany TypeORM decorators.

In this chapter, we will build the Product model from scratch, associate it with the user, and create the necessary entries so that any client can access the information.

Before we start, and as usual, when we start new features, we create a new branch:

$ git checkout -b chapter05

The product model

We will first create a Product model, then we will add some validations to it, and finally, we will associate it with the User model. Like the User entity, the Product will be fully tested and automatically deleted if the user is deleted.

Product basics

The Product entity will need several fields: a price attribute for the price of the product, a published boolean to know whether the product is ready to be sold or not, a title to give the product a catchy name, and last but not least, a userId to associate that particular product with a user.

Let’s go directly to the implementation:

Creation of the entity Product.
// src/entities/product.entity.ts
import {validateOrReject} from 'class-validator';
import {/* ... */} from "typeorm";
import {User} from "./user.entity";

@Entity()
export class Product {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ type: "text" })
  title: string;

  @Column({ type: "float" })
  price: number;

  @Column({ type: "boolean" })
  published: boolean;

  @Index()
  @ManyToOne(() => User, (user) => user.products, { onDelete: "CASCADE" })
  user: User;

  @CreateDateColumn()
  createdAt: Date;

  @UpdateDateColumn()
  updatedAt: Date;

  @BeforeInsert()
  @BeforeUpdate()
  async validate() {
    await validateOrReject(this);
  }
}

@EntityRepository(Product)
export class ProductRepository extends Repository<Product> {}

As you can see, this is very readable. The only new thing here is the appearance of the ManyToOne relationship. This is a decorator that will create a userId column of type int. It takes three parameters:

  1. a function that returns the class corresponding to the association

  2. a function that defines how the connection in the other direction is specified

  3. an object containing various parameters

Note
I also added a @Index decorator to make this column indexed. This is a good practice for association keys because it optimizes database queries. It is not mandatory, but I highly recommend it.

Before we move on, we also need to define the OneToMany association in the User entity.

Add products relation to user model
// src/entities/user.entity.ts
// ...
@Entity()
export class User {
  // ...
  @OneToMany(() => Product, (product) => product.user)
  products: Product[];
  // ...
}
// ...

And there you go. Our association is done, and if you start the server with the TypeORM query logs, you should see the SQL query that creates the table:

server logs in the terminal
...
query: BEGIN TRANSACTION
...
query: CREATE TABLE "product" ("id" integer PRIMARY KEY AUTOINCREMENT NOT NULL, "title" text NOT NULL, "price" float NOT NULL, "published" boolean NOT NULL, "createdAt" datetime NOT NULL DEFAULT (datetime('now')), "updatedAt" datetime NOT NULL DEFAULT (datetime('now')), 'userId' integer)
...
query: CREATE INDEX "IDX_329b8ae12068b23da547d3b479" ON "product" ('userId')
query: COMMIT

And there you go. Let’s make a commit:

$ git add .
$ git commit -m "Generate product model"

Product Validations

As we have seen with the user, validations are an important part of building any application. This allows us to prevent unwanted data from being recorded in the database. For the product, we need to make sure, for example, that a price is a number and that it is not negative.

For this part, we don’t need to set up tests because everything is already available and tested by the library class-validator. We just need to add the corresponding decorators. Here is the result:

// src/entities/product.entity.ts
import {IsDefined, IsPositive, validateOrReject} from 'class-validator';
// ...
@Entity()
export class Product {
  // ...
  @IsDefined()
  @Column({ type: "text", nullable: false })
  title: string;

  @IsPositive()
  @IsDefined()
  @Column({ type: "float", nullable: false })
  price: number;

  @Column({ type: "boolean", default: false })
  published: boolean;

  @Index()
  @ManyToOne(() => User, (user) => user.products, { onDelete: "CASCADE" })
  user: User;
  // ...
}
// ...

Decorators document the code, and there is not much to add here. I added the nullable: false property, which will modify the database schema and add a NOT NULL constraint.
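
If you want to see them in action anyway, here is a tiny sketch, outside any test, that runs class-validator against an invalid product (the import paths assume you run it from src/):

Quick check of the product validations
import {validate} from 'class-validator';
import {Product} from './entities/product.entity';

async function demo(): Promise<void> {
  const product = new Product();
  product.title = "My product";
  product.price = -5; // violates @IsPositive()

  const errors = await validate(product);
  // One ValidationError targeting "price" with an `isPositive` constraint.
  console.log(errors.map((e) => ({ property: e.property, constraints: e.constraints })));
}

demo();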

Let’s make these changes and keep moving forward:

$ git commit -am "Adds some validations to products"

Entry point for our products

Now is the time to start building product entry points. For now, we will just build five REST actions.

First, we need to create the ProductsController. As a warm-up, we’ll start by building the show action for the product.

Product Show Action

Tests

As usual, we start by adding some tests from the product controller. The purpose here is straightforward. Just display a single product and make sure that the server response is what we expect.

But to do this, we will first create a product and a user in the before method. So we’re going to refine our utility to create entities by adding generateProduct:

Creating the generateProduct method
// src/utils/faker.utils.ts
// ...
import {Product} from '../entities/product.entity';

export function randomString(size: number = 8): string {
  return randomBytes(size).toString("hex");
}
// ...

// Small helper used by generateProduct below.
export function randomBoolean(): boolean {
  return Math.random() >= 0.5;
}

export function generateProduct(product?: Partial<Product>): Product {
  const newProduct = new Product();
  newProduct.price = product?.price ?? Math.random() * 100;
  newProduct.published = product?.published ?? randomBoolean();
  newProduct.title = product?.title ?? randomString();
  newProduct.user = product?.user ?? generateUser();

  return newProduct;
}

We will now use this method in the before and beforeEach hooks of the new test below:

Create a product in products.controller.spec
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  let productRepository: ProductRepository;
  let product: Product;

  before(async () => {
    const databaseService = container.get<DatabaseService>( TYPES.DatabaseService);
    productRepository = await databaseService.getRepository(ProductRepository);
  });

  beforeEach(async () => {
    product = await productRepository.save(generateProduct({ user }));
  });
});

And now we can use this product to test whether it can be shown:

// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("show", () => {
    it("should show product", (done) => {
      agent.get(`/products/${product.id}`).expect(200, done);
    });
  });
  // ...
});

Implementation

Now that our test is in place, it’s time to make it pass.

Just like we did with the users, we will create a middleware FetchProductMiddleware. It will just fetch the product according to the productId parameter and inject it into the request:

Creating FetchProductMiddleware
// src/middlewares/fetchUser.middleware.ts
// ...
@injectable()
export class FetchProductMiddleware extends BaseMiddleware {
  constructor(@inject(TYPES.DatabaseService) private readonly databaseService: DatabaseService) {
    super();
  }

  public async handler(
    req: Request & { product: Product },
    res: Response,
    next: NextFunction
  ): Promise<void | Response> {
    const productId = req.query.productId ?? req.params.productId;
    const repository = await this.databaseService.getRepository(ProductRepository);
    req.product = await repository.findOne(Number(productId), { relations: ["user"] });

    if (!req.product) {
      return res.status(404).send("product not found");
    }

    next();
  }
}

The small novelty here is the appearance of the relations parameter of the findOne method. This parameter tells TypeORM to also load the user associated with the product and fill in the product.user property, which will be useful a little further on.
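
For illustration, here is roughly what the difference looks like with the findOne signature used throughout this book. It assumes a ProductRepository obtained from the databaseService, as in the middleware above, and a hypothetical product id of 1 (adjust the import path to where you run it):

Sketch of findOne with and without relations
import {ProductRepository} from '../entities/product.entity';

async function loadExamples(repository: ProductRepository): Promise<void> {
  // Without "relations", the association is simply not loaded.
  const bare = await repository.findOne(1);
  console.log(bare?.user);// => undefined

  // With "relations", TypeORM performs a JOIN and hydrates product.user.
  const withUser = await repository.findOne(1, { relations: ["user"] });
  console.log(withUser?.user?.id);// => the id of the owner
}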

Now we can switch to the controller:

Add FetchProductMiddleware to product controller
// src/controllers/products.controller.ts
// ...
@controller("/products")
export class ProductController {
  public constructor(
    @inject(TYPES.DatabaseService) private readonly databaseService: DatabaseService
  ) {}
  // ...
  @httpGet("/:productId", TYPES.FetchProductMiddleware)
  public async show(req: Request & { product: Product }) {
    return req.product;
  }
}

Wait! Don’t run the tests yet. Don’t forget that we need to add the route to the container:

Add FetchProductMiddleware type for dependency injection
// src/core/types.core.ts
export const TYPES = {
  // ...
  FetchProductMiddleware: Symbol.for("FetchProductMiddleware"),
};
Register product controller and FetchProductMiddleware to container
// src/core/container.core.ts
import "../controllers/products.controller";
// ...

export const container = new Container();
// ...
container.bind(TYPES.FetchProductMiddleware).to(FetchProductMiddleware);

Now we make sure the tests pass:

$ npm test
...
  ProductsController
    show
      ✓ should show product
...

Perfect! We can now move on to the next one.

$ git add .
$ git commit -m "Add logic to show product"

List of products

It is now time to create an entry for a product list that could display the product catalog of a market, for example. For this access point, we do not require the user to be logged in. As usual, we will start writing some tests:

Create functional test for product list endpoint
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("index", () => {
    it("should respond 200", (done) => {
      agent.get("/products").expect(200, done);
    });
  });
});

Now let’s move on to the implementation, which for now is very simple:

Create product list endpoint
// src/controllers/products.controller.ts
// ...

@controller("/products")
export class ProductController {
  // ...

  @httpGet("/")
  public async index() {
    const repository = await this.databaseService.getRepository(ProductRepository);
    return repository.find();
  }
}

In the following chapters, we will improve this entry point and give the possibility to receive parameters to filter them. Let’s go through these changes and keep moving forward:

$ git add. && git commit -m "Add logic to list product"

Product creation

Creating products is a bit trickier because we’ll need some additional configuration. The strategy we will follow is to assign the created product to the user who owns the JWT token supplied in the Authorization HTTP header.

Tests

So our first stop will be the products.controller.spec.ts file. We will create a specific user and retrieve their JWT token in the setup hooks:

Creating a user and a valid JWT token in the functional test
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  let userRepository: UserRepository;
  let productRepository: ProductRepository;
  let jsonWebTokenService: JsonWebTokenService;
  let user: User;
  let jwt: string;
  let product: Product;

  before(async () => {
    jsonWebTokenService = container.get(TYPES.JsonWebTokenService);

    const databaseService = container.get<DatabaseService>(TYPES.DatabaseService);
    userRepository = await databaseService.getRepository(UserRepository);
    productRepository = await databaseService.getRepository(ProductRepository);
  });

  beforeEach(async () => {
    user = await userRepository.save(generateUser());
    product = await productRepository.save(generateProduct({ user }));
    jwt = jsonWebTokenService.encode({ userId: user.id });
  });
  // ...
});

We can now cover three cases:

  1. the case where we create a product with a logged-in user

  2. the case where the product cannot be created because it is incomplete

  3. the case where no JWT token is provided, so the product cannot be created

Here we go:

Complete functional test suite of product creation
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("create", () => {
    it("should create product", (done) => {
      const { title, price, published } = generateProduct();
      agent
        .post("/products")
        .set("Authorization", jwt)
        .send({ title, price, published })
        .expect(201, done);
    });

    it("should not create product without auth", (done) => {
      const { title, price, published } = generateProduct();
      agent
        .post("/products")
        .send({ title, price, published })
        .expect(403, done);
    });

    it("should not create user with missing title", (done) => {
      const { price, published } = generateProduct();
      agent
        .post("/products")
        .set("Authorization", jwt)
        .send({ price, published })
        .expect(400, done);
    });
  });
  // ...
});

Wow! We added a lot of code. If you remember, the tests are actually the same as for user creation, except for a few minor changes.

Implementation

So it’s time to make the tests pass. The implementation is again very similar to the previous one in the users controller, with the difference that here we retrieve the user associated with the JWT token and assign it to the product we are creating:

Implementation of product creation
// src/controllers/products.controller.ts
// ...
@controller("/products")
export class ProductController {
  // ...
  @httpPost("/", TYPES.FetchLoggedUserMiddleware)
  public async create(
    @requestBody() body: Partial<Product>,
    req: Request & { user: User },
    res: Response
  ) {
    const repository = await this.databaseService.getRepository(ProductRepository);
    const product = new Product();
    product.title = body.title;
    product.published = body.published;
    product.price = body.price;
    product.user = req.user;

    const errors = await validate(product);

    if (errors.length !== 0) {
      return res.status(400).json({ errors });
    }

    await repository.save(product);
    return res.sendStatus(201);
  }
}

And there you go. If you do the tests now, they should all pass:

$ npm test
...
  ProductsController
    index
      ✓ should respond 200
    show
      ✓ should show product
    create
      ✓ should create product
      ✓ should not create product without auth
      ✓ should not create product with missing title
...

Product update

I hope that now you understand the logic for building future actions. This section will focus on the update action that will work in a similar way to the creation action. We just need to get the product from the database and update it.

Before we start writing tests, I just want to clarify that we will scope the product to the current user in the same way as for the create action. We want to make sure that the product we are updating belongs to the user, so we’re going to check the product.user association.

Tests

First of all, we add some tests. Here we will test three things:

  1. the case where the user actually owns the product

  2. the case where the user does not own the product and therefore receives a 403 - Forbidden response

  3. the case without authentication

To set up these tests, we will create a product, a user who owns the product, and a stranger user who will be a user not associated with the product:

Create needed variable for functional tests about product controller
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  let user: User;
  let stranger: User;
  let jwt: string;
  let strangerJwt: string;
  let product: Product;

  before(async () => {
    // ...
    stranger = await userRepository.save(generateUser());
    strangerJwt = jsonWebTokenService.encode({ userId: stranger.id });
  });

  beforeEach(async () => {
    user = await userRepository.save(generateUser());
    product = await productRepository.save(generateProduct({ user }));
    jwt = jsonWebTokenService.encode({ userId: user.id });
  });

  // ...
});

This may sound abstract, but look at the implementation of the tests that will use these variables:

Implementation of functional tests about update product endpoint
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("update", () => {
    it("should update product", (done) => {
      const { title, price, published } = generateProduct();
      agent
        .put(`/products/${product.id}`)
        .set("Authorization", jwt)
        .send({ title, price, published })
        .expect(204, done);
    });

    it("should not update product of other users", (done) => {
      const { price, published } = generateProduct();
      agent
        .put(`/products/${product.id}`)
        .set("Authorization", strangerJwt)
        .send({ price, published })
        .expect(403, done);
    });

    it("should not update product without auth", (done) => {
      const { price, published } = generateProduct();
      agent
        .put(`/products/${product.id}`)
        .send({ price, published })
        .expect(403, done);
    });
  });
});

The tests may seem complex, but on closer inspection they are almost identical to those of the users.

Implementation

Now let’s implement the code to pass our tests successfully:

Implementation of product update endpoint
// src/controllers/products.controller.ts
// ...
@controller("/products")
export class ProductController {
  // ...

  @httpPut("/:productId", TYPES.FetchLoggedUserMiddleware, TYPES.FetchProductMiddleware)
  public async update(
    @requestBody() body: Partial<Product>,
    req: Request & { user: User; product: Product },
    res: Response
  ) {
    if (!this.canEditProduct(req.user, req.product)) {
      return res.sendStatus(403);
    }

    req.product.title = body.title;
    req.product.published = body.published;
    req.product.price = body.price;

    const errors = await validate(req.product);

    if (errors.length !== 0) {
      return res.status(400).json({ errors });
    }
    const repository = await this.databaseService.getRepository(ProductRepository);
    await repository.save(req.product);
    return res.sendStatus(204);
  }

  private canEditProduct(user: User, product: Product): boolean {
    return user.id === product.user.id;
  }
}

As you can see, the implementation is quite simple. The Middleware will automatically retrieve the product and the user linked to the JWT token. All we have to do now is to verify that the user owns the product. This is what we do with the canEditProduct method. Then we update the product and save it after checking that it is valid of course.

If we run the tests, they should pass:

$ npm test
...
  ProductsController
    index
      ✓ should respond 200
    show
      ✓ should show product
    create
      ✓ should create product
      ✓ should not create product without auth
      ✓ should not create product with missing title
    update
      ✓ should update product
      ✓ should not update product of other users
      ✓ should not update product without auth
...

Deleting products

Our last stop for the product routes will be the destroy action. Now you can imagine what that would look like. The strategy here is quite similar to the create and update actions: we retrieve the logged-in user, verify that they own the product, and finally remove it, returning a 204 code.

Let’s start by adding some tests:

Functional tests about product delete endpoint
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("destroy", () => {
    it("should destroy product", (done) => {
      const jwt = jsonWebTokenService.encode({ userId: user.id });
      agent
        .delete(`/products/${product.id}`)
        .set("Authorization", jwt)
        .expect(204, done);
    });

    it("should not destroy product without auth", (done) => {
      agent.delete(`/products/${product.id}`).expect(403, done);
    });

    it("should not destroy of other users", (done) => {
      agent
        .delete(`/products/${product.id}`)
        .set("Authorization", strangerJwt)
        .expect(403, done);
    });
  });
});

Now, let’s just add the code needed to run the tests:

Implementation to delete product endpoint
// src/controllers/products.controller.ts
// ...
@controller("/products")
export class ProductController {
  // ...
  @httpDelete("/:productId", TYPES.FetchLoggedUserMiddleware, TYPES.FetchProductMiddleware)
  public async destroy(
    req: Request & { user: User; product: Product },
    res: Response
  ) {
    if (!this.canEditProduct(req.user, req.product)) {
      return res.sendStatus(403);
    }
    const repository = await this.databaseService.getRepository(
      ProductRepository
    );
    await repository.delete(req.product);
    return res.sendStatus(204);
  }
  // ...
}

As you can see, the implementation does the job in three lines of code. We can run the tests to make sure everything is good.

$ npm test
...
  ProductsController
...
    destroy
      ✓ should destroy product
      ✓ should not destroy product without auth
      ✓ should not destroy of other users
...
  27 passing (344ms)

After that, we commit changes.

$ git commit -am "Adds the products create, update and destroy action"

Testing with cURL

Our tests tell us that everything is fine, but it’s always good to make sure. So we’re going to create a user, then we’re going to create a product, update it and then delete it. Here we go.

Start your server with npm start if you haven’t already done so, and let’s start by creating a user:

Creating an user using cURL
$ curl -X POST -d "email=test@test.io" -d "password=test" http://localhost:3000/users
{
  "email": "test@test.io",
  "hashedPassword": "8574a...69777b",
  "id": 1,
  "createdAt": "2020-11-25T20:37:20.000Z",
  "updatedAt": "2020-11-25T20:37:20.000Z"
}

And now let’s get a valid JWT token:

Get a JWT token using cURL
$ curl -X POST -d "email=test@test.io" -d "password=test" http://localhost:3000/tokens
{
  "token": "eyJhbGciOiJ..."
}

Write down this token and save it in a Bash variable:

Initialize Bash variable with JWT token
$ export JWT="eyJhbGciOiJ..."

Now let’s use this token to create a product:

Create a product using cURL
$ curl -X POST -H "Authorization: $JWT" -d "title=my first product" -d "price=1" http://localhost:3000/products
{
  "id": 1,
  "title": "my first product",
  "price": 1,
...
}

We can easily update it with a PUT request:

Update a product using cURL
$ curl -X PUT -H "Authorization: $JWT" -d "title=my first product updated" -d "price=66" http://localhost:3000/products/1

And finally remove this product:

Delete a product using cURL
$ curl -X DELETE -H "Authorization: $JWT" http://localhost:3000/products/1

It’s perfect.

So it’s time to close this chapter and move on.

Conclusion

I hope you enjoyed this chapter. It’s a long job, but the code we’ve created is an excellent foundation for the main application.

In the next chapter, we will focus on customizing user and product entities output using the jsonapi-serializer library. It will allow us to easily filter the attributes to be displayed and manage associations such as embedded objects.

Building JSON

In the previous chapter, we added products to the application and built all the necessary routes. We also associated a product with a user and restricted some of the actions of ProductsController.

Now you should be satisfied with all this work. But we still have a lot of work ahead of us. Right now, our JSON output is not perfect. It looks like this:

Current JSON output
[
  {
    "id": 1,
    "title": "Tag Case",
    "price": 98.77,
    "published": false,
    "userId": 1,
    "createdAt": "2018-12-20T12:47:26.686Z",
    "updatedAt": "2018-12-20T12:47:26.686Z"
  },
]

We want an output that does not contain the userId, createdAt, and updatedAt fields.

Moreover, an important and difficult part of creating your API is to decide on the output format. Fortunately, some organizations have already faced this kind of problem and have established certain conventions you will discover in this chapter.

Let’s start a new branch for this chapter:

$ git checkout -b chapter06

Presentation of JSON:API

As I said earlier, an important and difficult part of creating your API is to decide the output format. Fortunately, some conventions already exist.

One of them, certainly the most used, is JSON:API. The JSON:API documentation gives us some rules to follow regarding the JSON document formatting.

Thus, our document must contain these keys:

  • data which must contain the data we return

  • errors which should contain an array of the errors that have occurred.

  • meta which contains a meta object

The content of the data key is also quite strict (see the sketch after this list):

  • it must have a type key that describes the JSON model’s type (if it’s an article, a user, etc.).

  • the object properties must be placed in an attributes key, and underscores (_) are replaced by dashes (-).

  • the relationships of the object must be placed in a relationships key
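
To make these rules concrete, here is a minimal sketch of what such a JSON:API document could look like (the values are purely illustrative):

{
  "data": {
    "type": "products",
    "id": "1",
    "attributes": { "title": "TV Plosmo Philopp" },
    "relationships": {
      "user": { "data": { "type": "users", "id": "1" } }
    }
  },
  "meta": { "total": 1 }
}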

In this chapter, we will customize the JSON output using the jsonapi-serializer library, which complies with the JSON:API standard.

So let’s install this dependency:

$ npm install jsonapi-serializer
$ npm install @types/jsonapi-serializer --save-dev

You should be ready to continue with this tutorial.

Serialize user

jsonapi-serializer works with serializers: objects responsible for converting one object into another object that complies with the JSON:API standard.

We first need to add a serializers.utils.ts file that will contain all the serializers. Let’s start directly with the implementation of the userSerializer:

Serializer of User entity
// src/utils/serializers.utils.ts
import {Serializer} from 'jsonapi-serializer';

export const userSerializer = new Serializer("users", {
  attributes: ["email"],
  dataLinks: {
    self: (user) => `/users/${user.id}`,
  },
});

This serializer will allow us to convert our User object to JSON that correctly implements the JSON:API standard. We specified the email attribute so it is present in the data attributes. By explicitly listing the fields we want to appear, this library also fixes the problem of our API exposing the hashedPassword attribute.

Now we just have to use this instance in our controller:

Using userSerializer in the user controller
// src/controllers/home.controller.ts
// ...
import {userSerializer} from '../utils/serializers.utils';

@controller('/users')
export class UsersController {
  // ...
  @httpGet("/")
  public async index() {
    // ...
    return userSerializer.serialize(users);
  }
  // ...
  @httpGet('/:userId', TYPES.FetchLoggedUserMiddleware)
  public async show(/* ... */) {
    // ...
    return userSerializer.serialize(req.user);
  }
  // ...
}

As you can see, it doesn’t change much! We simply import our serializer and use its serialize method.

Let’s try all this with cURL:

$ curl http://localhost:3000/users
{
  "data": [
    {
      "type": "users",
      "id": "1",
      "attributes": {
        "email": "test@test.io"
      }
    }
  ]
}

Let’s make these changes and keep moving forward:

$ git add .
$ git commit -am "Adds user serializer for customizing the json output"

Serialize products

Now that we understand how the serialization library works, it’s time to customize the product output. The first step is the same as for the user: we need a product serializer, so let’s do it:

Implementation of productsSerializer
// src/utils/serializers.utils.ts
// ...
export const productsSerializer = new Serializer("products", {
  attributes: ["title", "price", "published", "user"],
});

And there you go. It’s as simple as that. Let’s modify our controller a little bit.

Using productsSerializer in the product controller
// src/controllers/products.controller.ts
// ...
import {productsSerializer} from '../utils/serializers.utils';

@controller("/products")
export class ProductController {
  // ...
  @httpGet("/")
  public async index() {
    // ...
    return productsSerializer.serialize(products);
  }
  // ...
  @httpGet("/:productId", TYPES.FetchProductMiddleware)
  public async show(req: Request & { product: Product }) {
    return productsSerializer.serialize(req.product);
  }
  // ...
}

You can run the tests to check, but they should still pass. Let’s commit these small changes:

$ git add .
$ git commit -m "Adds product serializer for custom json output"

Serialize associations

We have worked with serializers, and you may have noticed that it is very simple. In some cases, the difficult decision is how to name your routes or how to structure the JSON output so that your solution is future-proof. When working with associations between models in an API, there are many approaches you can take.

We don’t have to worry about this in our case, the JSON:API standard did it for us!

To summarize, we have a one-to-many association between the user and the product models.

Add products relationships to user entity
// src/entities/user.entity.ts
// ...
@Entity()
export class User {
  // ...
  @OneToMany(() => Product, (product) => product.user)
  products: Product[];
  // ...
}
// ...
Add user relationships to product entity
// src/entities/product.entity.ts
// ...
@Entity()
export class Product {
  // ...
  @ManyToOne(() => User, (user) => user.products, { onDelete: "CASCADE" })
  user: User;
  // ...
}
// ...

It’s a good idea to embed the users in the JSON output of the products. This makes the output heavier, but it saves the API client from executing further requests to retrieve the user information related to the products. This approach can save you from a huge bottleneck.

Relationship Injection Theory

Imagine a scenario where you will search for products in the API, but in this case, you need to display some of the user information.

A possible solution would be to add the userId attribute to the products serializer and let the client retrieve the corresponding user later. This may sound like a good idea, but if you are concerned about performance, or if your database transactions are not fast enough, you should reconsider this approach: for each product you retrieve, you will have to retrieve its corresponding user.
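
To illustrate this naive approach, here is a hedged sketch of what it could look like (the naiveProductsSerializer name is hypothetical, and this is not what we will build):

// src/utils/serializers.utils.ts (hypothetical variant)
import {Serializer} from 'jsonapi-serializer';

// Expose only the foreign key: for N products, the client would then have to
// issue N additional requests (GET /users/1, GET /users/2, ...) to display
// the user information. This is the classic N+1 problem.
export const naiveProductsSerializer = new Serializer("products", {
  attributes: ["title", "price", "published", "userId"],
});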

Faced with this problem, there are several possible alternatives.

Embedding in a meta attribute

A good solution, in my opinion, is to embed the IDs of the users linked to the products in a meta attribute, so we would have a JSON output like this:

{
  "meta": { "userIds": [1,2,3] },
  "data": [/* ... */]
}

This may require additional work on the client side to retrieve the users from these userIds.

Incorporate the object into the attribute

Another solution is to incorporate the user object into the product object. This can make the first request a little slower, but this way, the client doesn’t need to make another request. An example of the expected results is shown below:

Incorporate user relation into product attributes
{
  "data":
  [
    {
        "id": 1,
        "type": "product",
        "attributes": {
          "title": "First product",
          "price": "25.02",
          "published": false,
          "user": {
            "id": 2,
            "attributes": {
              "email": "stephany@lind.co.uk",
              "created_at": "2014-07-29T03:52:07.432Z",
              "updated_at": "2014-07-29T03:52:07.432Z",
              "auth_token": "Xbnzbf3YkquUrF_1bNkZ"
            }
          }
        }
    }
  ]
}

The problem with this approach is that we have to duplicate User objects for all products that belong to the same user:

{
  "data":
  [
    {
        "id": 1,
        "type": "product",
        "attributes": {
          "title": "First product",
          // ...
          "user": {
            "id": 2,
            "type": "user",
            "attributes": {
              "email": "stephany@lind.co.uk",
              // ...
            }
          }
        }
    },
    {
        "id": 2,
        "type": "product",
        "attributes": {
          "title": "Second product",
          // ...
          "user": {
            "id": 2,
            "type": "user",
            "attributes": {
              "email": "stephany@lind.co.uk",
              // ...
            }
          }
        }
    }
  ]
}

Incorporate relationships into include

The third solution, chosen by the JSON:API standard, is a mixture of the first two.

We will place all the related resources in an included key, which will contain all the relations of the objects mentioned above. Each object will also include a relationships key that defines the relationship and points to the corresponding resource in the included key.

One JSON is worth a thousand words:

{
  "data":
  [
    {
        "id": 1,
        "type": "product",
        "attributes": {/* ... */},
        "relationships": {
          "user": {
            "id": 1,
            "type": "user"
          }
        }
    },
    {
        "id": 2,
        "type": "product",
        "attributes": {/* ... */},
        "relationships": {
          "user": {
            "id": 1,
            "type": "user"
          }
        }
    }
  ],
  "include": [
    {
      "id": 2,
      "type": "user",
      "attributes": {
        "email": "stephany@lind.co.uk",
        "created_at": "2014-07-29T03:52:07.432Z",
        "updated_at": "2014-07-29T03:52:07.432Z",
        "auth_token": "Xbnzbf3YkquUrF_1bNkZ"
      }
    }
  ]
}

Do you see the difference? This solution drastically reduces the size of the JSON and, therefore, the bandwidth used.
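
To see how a client can take advantage of this format, here is a hypothetical TypeScript helper (not part of our API) that resolves a relationship reference against the included array of a JSON:API document:

// Hypothetical client-side helper: look up a relationship reference
// in the `included` array by type and id.
interface ResourceIdentifier {
  type: string;
  id: string | number;
}

interface IncludedResource extends ResourceIdentifier {
  attributes: Record<string, unknown>;
}

function resolveIncluded(
  ref: ResourceIdentifier,
  included: IncludedResource[]
): IncludedResource | undefined {
  return included.find(
    (resource) => resource.type === ref.type && String(resource.id) === String(ref.id)
  );
}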

Application of the relations injection

We will therefore incorporate the related objects into our JSON output. Let’s start by adding some tests.

We will simply modify the UsersController.show test to verify that we also retrieve the user’s products:

Add functional test to test include presence in JSON response
// src/controllers/users.controller.spec.ts
// ...
describe("UsersController", () => {
  // ...
  let productRepository: ProductRepository;

  before(async () => {
    // ...
    productRepository = await databaseService.getRepository(ProductRepository);
  });

  beforeEach(async () => {
    user = await userRepository.save(generateUser());
    const product = await productRepository.save(generateProduct({ user }));
    user.products = [product];
    // ...
  });

  // ...

  describe("show", () => {
    // ...
    it("should show my profile", () => {
      return agent
        .get(`/users/${user.id}`)
        .set("Authorization", jwt)
        .expect(200)
        .then((response) => {
          assert.strictEqual(response.body.data.attributes.email, user.email);
          assert.strictEqual(response.body.included[0].attributes.title, user.products[0].title);
        });
    });
  });
// ...
});

We are now checking two things on the JSON that is returned:

  1. It contains the user’s email in the data attributes

  2. The user’s products are present in the included key

You may also notice that I have created and linked a product to the user saved in the beforeEach method.

To pass this test, we will start by including the relationship in the serializer:

Add relationship to user serializer
// src/utils/serializers.utils.ts
// ...
export const userSerializer = new Serializer("users", {
  attributes: ["email", "products"],
  included: true,
  products: {
    ref: "id",
    attributes: ["title", "price", "published"],
    included: true,
  },
} as any);
// ...
Note
at the time of this writing, I have not found any way to get around the TypeScript typing error other than using as any. Maybe the library’s typings will be updated soon.

This will add a relationships key containing the IDs of the user’s products, and an included key containing those products. Here is an example:

{
  data: {
    type: 'users',
    id: '16',
    attributes: {
      email: 'ddf1bbe99c3a7ee8@random.io'
    },
    relationships: {
      products: {
        data: [
          { type: 'products', id: '15' }
        ]
      }
    }
  },
  included: [
    {
      type: 'products',
      id: '15',
      attributes: {
        title: 'adc643eaa6bc1748',
        price: 72.45882186217555,
        published: false
      }
    }
  ],
}

The implementation is very simple: we only had to modify the serializer. Let’s run the tests to make sure everything is fine:

$ npm test

  ProductsController
...
    show
      ✓ should show product
...

Let’s make a commit to celebrate:

$ git commit -am "Add user relationship to product"

Retrieve the user of a product

Have you understood the principle? Now let’s include the user’s information in the products’ JSON output.

Let’s start with the test:

Add functional test to test include presence in JSON response
// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("show", () => {
    it("should show product", () => {
      return agent
        .get(`/products/${product.id}`)
        .expect(200)
        .then((response) => {
          assert.strictEqual(response.body.data.attributes.title, product.title);
          assert.strictEqual(response.body.included[0].attributes.email, product.user.email);
        });
    });
  });
  // ...
});

Then let’s update the serializer:

Add relationship to serializer
// src/utils/serializers.utils.ts
// ...
export const productsSerializer = new Serializer("products", {
  attributes: ["title", "price", "published", "user"],
  included: true,
  user: {
    ref: "id",
    included: true,
    attributes: ["email"],
  },
} as any);

And finally update controller:

Using serializer in product controller
// src/controllers/products.controller.ts
// ...
@controller("/products")
export class ProductController {
  // ...
  @httpGet("/")
  public async index() {
    // ...
    return productsSerializer.serialize(products);
  }
  // ...
  @httpGet("/:productId", TYPES.FetchProductMiddleware)
  public async show(/* ... */) {
    return productsSerializer.serialize(req.product);
  }
  // ...
}

And there you go. We get a JSON of this shape:

{
  data: {
    type: 'products',
    id: '2',
    attributes: {
      title: 'd358a5c96b94a562',
      price: 56.85800753546402,
      published: false
    },
    relationships: {
      user: {
        data: {
          type: 'users',
          id: '3'
        }
      }
    }
  },
  included: [
    {
      type: 'users',
      id: '3',
      attributes: {
        email: 'ddaf230c3d15a057@random.io'
      }
    }
  ]
}

It was really easy. Let’s make a commit:

$ git commit -am "Add user relationship to ProductsController.show"

Search for products

This last section will continue to strengthen the ProductsController.index action by implementing a straightforward search mechanism to allow any client to filter the results. This section is optional, as it has no impact on the rest of the application. But if you want to practice more TDD, I recommend that you complete this last step.

There are libraries to build advanced search forms extremely quickly. But here, since the goal is to learn and the search we are going to do is straightforward, I think we can build a search engine from scratch. We just need to consider the criteria by which we are going to filter the products. Hang on to your seats, it’s going to be quite a ride.

So we’ll filter the products according to the following criteria:

  • By title

  • By price

  • Sort by creation date

It may seem short and easy, but trust me, it will give you a headache if you don’t plan it.

So we’re going to add a search method to the ProductRepository that will take the filters I just listed above as parameters:

Add ProductRepository.search method
// src/entities/product.entity.ts
// ...
interface ProductSearchFilters {
  // need to be implemented
}

@EntityRepository(Product)
export class ProductRepository extends Repository<Product> {
  public search(filters: ProductSearchFilters): SelectQueryBuilder<Product> {
    // need to be implemented
  }
}

Can you see how we’re going to do it? Let’s start with the first filter.

Published products

As we have done since the beginning of this book, we will start by writing the test for our new method. Here is the basic structure of our test, which should look familiar to you:

Add unit tests skeleton about ProductRepository.search
// src/entities/product.entity.spec.ts
import {container} from '../core/container.core';
import {TYPES} from '../core/types.core';
import {ProductRepository} from '../entities/product.entity';
import {DatabaseService} from '../services/database.service';

describe("ProductRepository", () => {
  let productRepository: ProductRepository;

  before(async () => {
    const databaseService = container.get<DatabaseService>(TYPES.DatabaseService);
    productRepository = await databaseService.getRepository(ProductRepository);
  });

  describe("search", () => {
    // will be implemented
  });
});

This test will require several products to already exist in the database, which we will create by hand. Here is the structure of our test:

Add some fixtures for unit tests
// src/entities/product.entity.spec.ts
// ...
import {Product, ProductRepository} from '../entities/product.entity';
import {generateProduct} from '../tests/faker.utils';

describe("ProductRepository", () => {
  // ...
  describe("search", () => {
    let tvPlosmo: Product;
    let computer: Product;
    let tvCheap: Product;
    let unpublishedProduct: Product;

    before(async () => {
      tvPlosmo = await productRepository.save(generateProduct({
        title: "TV Plosmo Philopp",
        price: 9999.99,
        published: true,
      }));
      computer = await productRepository.save(generateProduct({
        title: "Azos Zeenbok",
        price: 499.99,
        published: true,
      }));
      tvCheap = await productRepository.save(generateProduct({
        title: "Cheap TV",
        price: 99.99,
        published: true,
      }));
      unpublishedProduct = await productRepository.save(generateProduct({
        published: false,
      }));
    });
    // ...
  });
});

As you can see, we have inserted four different products into the database. In our first test, we will call our ProductRepository.search method without parameters, and we will check that no unpublished products are returned. Here is the test:

Implementing unit tests about ProductRepository.search
// src/entities/product.entity.spec.ts
// ...
describe("ProductRepository", () => {
  // ...
  describe("search", () => {
    // ...
    it("should not include unpublished products", async () => {
      const products = await productRepository.search({}).getMany();
      assert.ok(products.every((p) => p.published));
    });
  });
});

So let’s define our method to make this test pass:

// src/entities/product.entity.ts
// ...
interface ProductSearchFilters { }

@EntityRepository(Product)
export class ProductRepository extends Repository<Product> {
  public search(filters: ProductSearchFilters): SelectQueryBuilder<Product> {
    const query = this.createQueryBuilder()
                      .where("published = TRUE")
                      .orderBy("updatedAt", "DESC");

    return query;
  }
}

And there you go. The test should pass. Let’s go to our first filter.

By title

Now that the structure of our testing and implementation is in place, everything will go faster. Here’s the test for the filter, which is very similar to the previous one:

// src/entities/product.entity.spec.ts
// ...
describe("ProductRepository", () => {
  // ...
  describe("search", () => {
    // ...
    it("should filter products by title", async () => {
      const products = await productRepository.search({ title: "tv" }).getMany();
      assert.ok(products.some((p) => p.id === tvPlosmo.id));
      assert.ok(products.some((p) => p.id === computer.id) === false);
    });
  });
});

This test ensures that the method correctly searches for products based on their title. We use the term tv in lowercase to ensure that our search is not case sensitive.

The implementation is straightforward:

// src/entities/product.entity.ts
// ...
interface ProductSearchFilters {
  title?: string;
}

@EntityRepository(Product)
export class ProductRepository extends Repository<Product> {
  public search(filters: ProductSearchFilters): SelectQueryBuilder<Product> {
    // ...
    if (filters.title !== undefined) {
      query.andWhere("lower(title) LIKE :title", { title: `%${filters.title}%` });
    }

    return query;
  }
}

The implementation is sufficient for our tests to pass:

$ npm test
....
  ProductRepository
    search
      ✓ should not include unpublished products
      ✓ should filter products by title
....

By price

To filter by price, things can get a little trickier. We will separate the price-filtering logic into two different filters: one that looks for products more expensive than the given price and one that looks for those cheaper than it. This way, we keep some flexibility and can easily test each filter.

Let’s start by building the tests:

// src/entities/product.entity.spec.ts
// ...
describe("ProductRepository", () => {
  // ...
  describe("search", () => {
    // ...
    it("should filter products by priceMax", async () => {
      const products = await productRepository
        .search({priceMax: 100})
        .getMany();
      assert.ok(products.some((p) => p.id === tvCheap.id));
      assert.ok(products.some((p) => p.id === tvPlosmo.id) === false);
    });

    it("should filter products by priceMin", async () => {
      const products = await productRepository
        .search({priceMin: 500})
        .getMany();
      assert.ok(products.some((p) => p.id === tvPlosmo.id));
      assert.ok(products.some((p) => p.id === tvCheap.id) === false);
    });
  });
});

The implementation is straightforward:

// src/entities/product.entity.ts
// ...
interface ProductSearchFilters {
  title?: string;
  priceMin?: number;
  priceMax?: number;
}

@EntityRepository(Product)
export class ProductRepository extends Repository<Product> {
  public search(filters: ProductSearchFilters): SelectQueryBuilder<Product> {
    // ...
    if (filters.priceMin !== undefined) {
      query.andWhere("price >= :priceMin", { priceMin: filters.priceMin });
    }

    if (filters.priceMax !== undefined) {
      query.andWhere("price <= :priceMax", { priceMax: filters.priceMax });
    }

    return query;
  }
}

The implementation is sufficient for our tests to pass:

$ npm test
...
  ProductRepository
    search
      ✓ should not include unpublished products
      ✓ should filter products by title
      ✓ should filter products by priceMax
      ✓ should filter products by priceMin
...

Great. The last step is to integrate it with our controller.

Integration into the controller

As usual, we will start with the tests. This will help us define the implementation of our endpoint.

As with the previous tests, we will create two specific products to search for using the different filters we have just implemented. The test will, therefore, look very familiar to you.

We will define a new describe block that will group our two tests together. Let’s start with the before hook:

// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("index", () => {
    // ...
    describe("search", () => {
      let computer: Product;
      let tvCheap: Product;

      before(async () => {
        computer = await productRepository.save(
          generateProduct({
            title: "Azos Zeenbok",
            price: 499.99,
            published: true,
          })
        );
        tvCheap = await productRepository.save(
          generateProduct({
            title: "Cheap TV",
            price: 99.99,
            published: true,
          })
        );
      });
    // ...
    });
  });
  // ...
});

Now let’s move on to the tests themselves:

// src/controllers/products.controller.spec.ts
// ...
describe("ProductsController", () => {
  // ...
  describe("index", () => {
    // ...
    describe("search", () => {
      // ...
      it("should find cheap TV", () => {
        const params = new URLSearchParams();
        params.append("title", "tv");
        params.append("priceMin", "50");
        params.append("priceMax", "150");

        return agent
          .get(`/products?${params.toString()}`)
          .expect(200)
          .then((response) => assert.ok(response.body.data.some((row) => row.id === String(tvCheap.id))));
      });

      it("should find computer", () => {
        const params = new URLSearchParams();
        params.append("title", "azos");
        params.append("priceMax", "500");

        return agent
          .get(`/products?${params.toString()}`)
          .expect(200)
          .then((response) => {
            assert.ok(
              response.body.data.some((row) => row.id === String(computer.id)),
              response.body
            );
          });
      });
    });
  });
  // ...
});
Note
we build the parameters with the URLSearchParams class. Then we just call the toString method, which builds the query string.
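
For instance, a quick sketch of what this produces:

const params = new URLSearchParams();
params.append("title", "tv");
params.append("priceMax", "150");
params.toString(); // => "title=tv&priceMax=150"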

When we receive the response, we check that the product we are looking for is present. Simple as that.

The implementation of the controller is straightforward. Just use our new method.

// src/controllers/products.controller.ts
// ...
@controller("/products")
export class ProductController {
  // ...
  @httpGet("/")
  public async index(req: Request) {
    const repository = await this.databaseService.getRepository(ProductRepository);
    const products = await repository.search(req.query).getMany();
    return productsSerializer.serialize(products);
  }
  // ...
}

We can run the entire test suite to make sure the application is healthy so far:

$ npm test
  ProductsController
    index
      ✓ should respond 200 (47ms)
      search
        ✓ should find cheap TV
        ✓ should find computer
...
  33 passing (786ms)

Great! Let’s make these changes:

$ git commit -am "Adds search class method to filter products"

And as we come to the end of our chapter, it’s time to apply all our changes to the master branch by doing a merge:

$ git checkout master
$ git merge chapter06

Conclusion

Until now, thanks to the jsonapi-serializer library, it has been easy. In the coming chapters, we will start building the Order entity that will associate users with products.

Placing Orders

In the previous chapters, we dealt with associations between products and users. We also saw how to serialize them well, optimizing the output to be able to scale (i.e., to easily adapt to high demand on our application). Now it’s time to start placing orders. This will be a more complex situation because we are going to manage associations between three models. We need to be smart enough to handle the JSON output we provide.

In this chapter, we’re going to do several things:

  • Create an order entity with the corresponding specifications

  • Manage the JSON output for the associations between the order, user, and product models

  • Send a confirmation email with the order summary

Now that everything is clear, we can start working. Let’s create a new branch to start working:

$ git checkout -b chapter07

Order modeling

If you remember the entity associations we have set up, the Order entity is associated with the User and Product entities. It’s actually straightforward to manage this with TypeORM. The tricky part comes when serializing these objects. I’ll talk about this in more detail later.

Let’s start by creating the order entity. It will be linked to the user who places it and, through a join entity, to the ordered products. It will also have a total column for the order’s total cost, and then the classic createdAt and updatedAt columns. Here is the full implementation.

First implementation of Order entity
// src/entities/order.entity.ts
import {IsNumber, Min, validateOrReject} from 'class-validator';
import {/* ... */} from 'typeorm';
import {Product} from './product.entity';
import {User} from './user.entity';

@Entity()
export class Order {
  @PrimaryGeneratedColumn()
  id: number;

  @ManyToOne(() => User, (user) => user.orders)
  user: User;

  @IsNumber()
  @Min(0)
  @Column({ type: "float", unsigned: true })
  total: number;

  @CreateDateColumn()
  createdAt: Date;

  @UpdateDateColumn()
  updatedAt: Date;

  @BeforeInsert()
  @BeforeUpdate()
  async validate() {
    await validateOrReject(this);
  }
}

@EntityRepository(Order)
export class OrderRepository extends Repository<Order> {}

As you can see, the implementation brings nothing new compared to what we have already seen.

I took the opportunity to add the @IsNumber and @Min(0) constraints on the total field, which is an unsigned number. This means that it cannot be negative.

But before we forget, we must also define the relationship on the User side:

Add order relationship to user
// src/entities/user.entity.ts
// ...
import {Order} from './order.entity';

@Entity()
export class User {
  // ...
  @OneToMany(() => Order, (order) => order.user)
  orders: Order[];
  // ...
}

Perfect! We are ready to move on. Let’s make a commit before:

$ git add .
$ git commit -m "Generate orders"

Orders and products

We have to establish the link between orders and products. This is done with a many-to-many association because a product can appear in several orders, and an order can contain several products. In this case, we therefore need an additional entity that joins the two others and maps the appropriate association. Here is the implementation:

The new entity that joins order and product
// src/entities/placement.entity.ts
// ...
@Entity()
export class Placement {
  @PrimaryGeneratedColumn()
  id: number;

  @ManyToOne(() => Product, (product) => product.placements)
  product: Product;

  @ManyToOne(() => Order, (order) => order.placements)
  order: Order;

  @BeforeInsert()
  @BeforeUpdate()
  async validate() {
    await validateOrReject(this);
  }
}

@EntityRepository(Placement)
export class PlacementRepository extends Repository<Placement> {}
Addition of the Placements relationship to the Product model.
// src/entities/product.entity.ts
// ...
@Entity()
export class Product {
  // ...
  @OneToMany(() => Placement, (placement) => placement.product)
  placements: Placement[];
  // ...
}
// ...
Addition of the Placements relationship to the Order model.
// src/entities/order.entity.ts
// ...
@Entity()
export class Order {
  // ...
  @OneToMany(() => Placement, (placement) => placement.order)
  placements: Placement[];
  // ...
}
// ...

Good! Let’s commit changes:

$ git add .
$ git commit -m "Associates products and orders with a placements model"

Expose the order model

Now it’s time to prepare the orders controller to expose the right orders. If you remember the previous chapters where we used jsonapi-serializer, you will remember that it was straightforward.

Let’s first define what actions we are going to implement:

  1. An index action to retrieve the current user’s orders

  2. A show action to retrieve a particular order of the current user

  3. A create action to actually place the order

Let’s start with the index action. First, we need to create the order controller. But before we start typing code, we need to ask ourselves:

Should I leave my order routes nested in the UsersController, or should I isolate them?

The answer is straightforward: it depends on how much information you want to expose to the developer.
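
To make the trade-off concrete, here is a hedged sketch of the two options with inversify-express-utils decorators; both class names are hypothetical, and only the flat form will actually be built below:

// Hypothetical sketch: nested versus flat order routes.
import {controller, httpGet} from 'inversify-express-utils';

// Option 1: orders nested under the user resource (GET /users/:userId/orders).
@controller('/users/:userId/orders')
export class NestedOrdersController {
  @httpGet('/')
  public async index() {
    // would expose the orders of the user given in the URL
  }
}

// Option 2: a flat route for the logged-in user (GET /orders). This is our choice.
@controller('/orders')
export class FlatOrdersController {
  @httpGet('/')
  public async index() {
    // will expose the orders of the authenticated user
  }
}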

In our case, we’re not going to nest them because we will retrieve the user’s orders on the /orders route. Let’s start with some tests:

Functional tests of the method OrdersController.index.
// src/controllers/orders.controller.spec.ts
// ...
describe("OrdersController", () => {
  let userRepository: UserRepository;
  let orderRepository: OrderRepository;
  let jsonWebTokenService: JsonWebTokenService;
  let user: User;
  let stranger: User;
  let jwt: string;
  let strangerJwt: string;
  let order: Order;

  before(async () => {
    jsonWebTokenService = container.get(TYPES.JsonWebTokenService);

    const databaseService = container.get<DatabaseService>(TYPES.DatabaseService);
    userRepository = await databaseService.getRepository(UserRepository);
    orderRepository = await databaseService.getRepository(OrderRepository);

    stranger = await userRepository.save(generateUser());
    strangerJwt = jsonWebTokenService.encode({ userId: stranger.id });
  });

  beforeEach(async () => {
    user = await userRepository.save(generateUser());
    order = await orderRepository.save(generateOrder({ user }));
    jwt = jsonWebTokenService.encode({ userId: user.id });
  });

  describe("index", () => {
    it("should forbid orders without auth", () => agent.get("/orders").expect(403));

    it("should get orders of user", () =>
      agent
        .get("/orders")
        .set("Authorization", jwt)
        .expect(200)
        .then(({ body }) => assert.ok(body.data.some(({ id }) => id === String(order.id)))));
  });
});
Implementation of the generateOrder test utility
// src/utils/faker.utils.ts
// ...
export function randomInteger(min: number = 0, max: number = 100): number {
  return Math.floor(Math.random() * (max - min) + min);
}
// ...
export function generateOrder(order?: Partial<Order>): Order {
  const newOrder = new Order();
  newOrder.user = order?.user ?? generateUser();
  newOrder.total = randomInteger(1); // TODO

  return newOrder;
}

The implementation of this test should remind you of products.controller.spec.ts. We try to access the new endpoint with a user who has an order and check that this order appears in the JSON response.

Note
You may have noticed the syntax ({body}) => …. This is destructuring assignment: it allows you to extract a property of an object directly into a variable of the same name. Thus const data = {a: 1}; const a = data.a; can be simplified to const {a} = {a: 1};. This syntax can be confusing at first, which is why I preferred to start using it only from this chapter on.

If we run the test suite now, as you might expect, both tests will fail. This is normal because we haven’t defined the controller or the order-specific serializer yet. So let’s do it.

So let’s start with the serializer:

Create serializer for order
// src/utils/serializers.utils.ts
// ...
export const ordersSerializer = new Serializer("orders", {
  attributes: ["total", "createdAt", "updatedAt"],
} as any);

And now we can use it in our brand new controller:

// src/controllers/orders.controller.ts
// ...
import {ordersSerializer} from '../utils/serializers.utils';

@controller("/orders", TYPES.FetchLoggedUserMiddleware)
export class OrdersController {
  public constructor(
    @inject(TYPES.DatabaseService)
    private readonly databaseService: DatabaseService
  ) {}

  @httpGet("/")
  public async index({ user }: Request & { user: User }) {
    const repository = await this.databaseService.getRepository(OrderRepository);
    const orders = await repository.find({ user });
    return ordersSerializer.serialize(orders);
  }
}

In the @controller decorator, we inject the FetchLoggedUserMiddleware globally. This means that we will have to provide a JWT token to access all of this controller’s actions. It allows us to retrieve the user in the index method and use it directly in the find method. We then use the serializer to format the data and return it.

Let’s not forget to load our controller since it is a brand new controller:

Import order controller into container
// src/core/container.core.ts
// ...
import "../controllers/orders.controller";
// ...

And now our tests should pass:

$ npm test
...
  OrderController
    index
      ✓ should forbid orders without auth (44ms)
      ✓ should get orders of user
...

We like our commits very small. So let’s commit now:

$ git add .
$ git commit -m "Adds the index action for order"

Display a single order

As you can already imagine, this route is straightforward. We just have to set up some configurations (routes, controller action) and a new middleware that will take care of retrieving the order, and that will be all for this section. Later we will include the products related to this order in the output JSON.

Let’s start by adding some tests:

Functional test about get order information endpoint
// src/controllers/orders.controller.spec.ts
// ...
describe("OrdersController", () => {
  // ...
  describe("show", () => {
    it("should forbid show order for other users", () => {
      return agent.get(`/orders/${order.id}`).set("Authorization", strangerJwt).expect(403);
    });

    it("should show order", () => {
      return agent
        .get(`/orders/${order.id}`)
        .set("Authorization", jwt)
        .expect(200)
        .then(({ body }) => assert.strictEqual(body.data.id, String(order.id)));
    });
  });
  // ...
});

Let’s move on to implementation. We will start by creating a middleware that will search for the order according to the parameter. The code is really very similar to FetchProductMiddleware so I’ll skip over it a bit faster:

Creating the FetchOrderMiddleware.
// src/middlewares/fetchOrder.middleware.ts
// ...
@injectable()
export class FetchOrderMiddleware extends BaseMiddleware {
  constructor(
    @inject(TYPES.DatabaseService)
    private readonly databaseService: DatabaseService
  ) {
    super();
  }

  public async handler(req: Request & { order: Order }, res: Response, next: NextFunction): Promise<void | Response> {
    const orderId = req.query.orderId ?? req.params.orderId;
    const repository = await this.databaseService.getRepository(OrderRepository);
    req.order = await repository.findOne(Number(orderId), {
      relations: ["user"],
    });

    if (!req.order) {
      return res.status(404).send("order not found");
    }
    next();
  }
}
Addition of Symbol for injection into the container.
// src/core/types.core.ts
export const TYPES = {
  // ...
  FetchOrderMiddleware: Symbol.for("FetchOrderMiddleware"),
};
Adding FetchOrderMiddleware into container.
// src/core/container.core.ts
// ...
export const container = new Container();
// ...
container.bind(TYPES.FetchOrderMiddleware).to(FetchOrderMiddleware);

All our tests now pass:

$ npm test
  OrderController
    index
      ✓ should forbid orders without auth (44ms)
      ✓ should get orders of user
    show
      ✓ should forbid show order for other users
      ✓ should show order

Let’s commit changes and move on.

$ git commit -am "Adds the show action for order"

Placement and orders

Now it is time to give the user the possibility to place some orders. This will add complexity to the application but don’t worry. We’ll take it one step at a time.

Before launching this feature, let’s take some time to think about the implications of creating an order in the application. I’m not talking about setting up a payment service like Stripe or Braintree, but things like:

  • the management of out-of-stock products

  • the decrease of the product inventory

  • adding validation on order placement to ensure that there are enough products at the time the order is placed

It looks like there’s still a lot to do but trust me: you’re closer than you think, and it’s not as hard as it looks. For now, let’s keep things simple and assume we still have enough products to place any number of orders. We’re just concerned about the response from the server at the moment.

If you remember the order entity, we need three things:

  • a total for the order

  • the user placing the order

  • the order’s products

Given this information, we can start adding some tests:

Add functional tests to order creation endpoint
// src/controllers/orders.controller.spec.ts
// ...
describe("OrderController", () => {
  // ...
  describe('create', () => {
    let product1: Product;
    let product2: Product;

    before(async () => {
      product1 = await manager.save(generateProduct());
      product2 = await manager.save(generateProduct());
    });

    it('should create order', () =>
      agent
        .post('/orders')
        .set('Authorization', jwt)
        .send({productIds: [product1.id, product2.id]})
        .expect(201));

    it('should not create order without auth', () =>
      agent
        .post('/orders')
        .send({productIds: [product1.id, product2.id]})
        .expect(403));

    it('should not create order with missing products', () =>
      agent.post('/orders').set('Authorization', jwt).send({productIds: []}).expect(400));
  });
  // ...
});

Once again, we will create tests that cover all possible cases. Respectively:

  • when everything goes well

  • when the user has not sent the necessary parameters

  • when the user has not specified his JWT token

As you can see, in the first case, the user sends an array of the products he wants to add to his order. So, in the controller, we need to:

  1. retrieve the list of products associated with the given IDs

  2. calculate the total price of these products

  3. create the Order

  4. create the Placements associated with this order

It sounds complicated, but look at the implementation:

Handle multiples products in order creation
// src/controllers/orders.controller.ts
// ...
@controller("/orders", TYPES.FetchLoggedUserMiddleware)
export class OrdersController {
  // ...

  @httpPost('/')
  public async create(@requestBody() body: {productIds: number[]}, {user}: Request & {user: User}, res: Response) {
    const productRepository = await this.databaseService.getRepository(ProductRepository);
    const orderRepository = await this.databaseService.getRepository(OrderRepository);
    const placementRepository = await this.databaseService.getRepository(PlacementRepository);

    if (!body.productIds?.length) {
      return res.status(400).json({errors: {productIds: 'should be an array of products ids'}});
    }

    const products = await productRepository.findByIds(body.productIds);

    const total = products.reduce((sum, product) => sum + product.price, 0);
    const order = await orderRepository.save({user, total});

    const placements = products.map((product) => ({order, product}));
    order.placements = await placementRepository.save(placements);

    return res.sendStatus(201);
  }
  // ...
}

And now our tests should all pass:

$ npm test
...
  OrderController
...
    create
      ✓ should create order
      ✓ should not create order without auth
      ✓ should not create order with missing products

Let’s commit our changes:

$ git commit -am "Adds the create method for the orders controller"

Send a confirmation email

The last section of this chapter will send a confirmation email to the user who has just created an order. If you want, you can skip this step and go to the next chapter! This section is more of a bonus.

We will use the nodemailer library, so let’s install it:

$ npm install nodemailer
$ npm install --save-dev @types/nodemailer

Now let’s create a new service that will sit between the library and our code. As I said before, it’s always a good idea to do this because it will allow us to mock this feature during our tests. Don’t worry, we’ll talk about it later.

Implementation of a service interfacing to nodemailer.
// src/services/mailer.service.ts
import {inject, injectable} from 'inversify';
import {createTestAccount, createTransport, SendMailOptions, Transporter} from 'nodemailer';
import {TYPES} from '../core/types.core';
import {Logger} from './logger.service';

@injectable()
export class MailerService {
  private static transporter: Transporter;

  public constructor(@inject(TYPES.Logger) private readonly logger: Logger) {}

  public async sendEmail(options: SendMailOptions): Promise<void> {
    await this.initializeTransporter();
    await MailerService.transporter.sendMail(options);
  }

  protected async initializeTransporter() {
    if (MailerService.transporter !== undefined) {
      return;
    }

    const { user, pass } = await createTestAccount();

    MailerService.transporter = createTransport({
      host: "smtp.ethereal.email",
      port: 587,
      secure: false,
      auth: { user, pass },
    });
  }
}

As you can see, our service does not do much. Here, we just initialize a transporter that connects to an SMTP account. You can use the mail account of your choice and move the values to the .env file, but I chose to use the createTestAccount method, which creates a test account on the fly.
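
If you prefer to use a real SMTP account, here is a hedged sketch of what the transporter creation could look like with values read from the .env file (the SMTP_HOST, SMTP_PORT, SMTP_USER and SMTP_PASS variable names are assumptions, not defined elsewhere in this book):

// Hypothetical alternative: build the transporter from environment variables.
import {createTransport, Transporter} from 'nodemailer';

export function createTransporterFromEnv(): Transporter {
  return createTransport({
    host: process.env.SMTP_HOST,
    port: Number(process.env.SMTP_PORT ?? 587),
    secure: false,
    auth: {
      user: process.env.SMTP_USER,
      pass: process.env.SMTP_PASS,
    },
  });
}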

And since we just created a service, we need to add it to the container:

Add mailer service to inversify types
// src/core/types.core.ts
export const TYPES = {
  // ...
  MailerService: Symbol.for("MailerService"),
  // ...
};
Bind mailer service to inversify container
// src/core/container.core.ts
// ...
container.bind(TYPES.MailerService).to(MailerService);
// ...

And there you go. I think it’s a good idea to put the order confirmation email logic in the MailerService. On the other hand, we have to be careful that this service doesn’t become too big as we extend our application, and we shouldn’t hesitate to split it if necessary. In our case, this is not a problem. So here is the method:

Implement method to send an email about brand new order
// src/services/mailer.service.ts
// ...
@injectable()
export class MailerService {
  // ...
  public async sendNewOrderEmail(order: Order): Promise<void> {
    const productText = order.placements.map((p) => `- ${p.product.title}`).join("\n");
    const text = `Details of products:\n${productText}\nTOTAL:${order.total}€`;

    await this.sendEmail({
      to: order.user.email,
      text,
      subject: "Thanks for order",
    });
  }
  // ...
}

We can now call this method directly from our controller:

Call MailerService.sendNewOrderEmail into order controller
// src/controllers/orders.controller.ts
// ...
@controller("/orders", /* ... */)
export class OrdersController {
  // ...
  @httpPost("/")
  public async create(/* ... */) {
    // ...
    await this.mailerService.sendNewOrderEmail(order);
    return res.sendStatus(201);
  }
  // ...
}
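
Note that this assumes the MailerService is injected into the controller. A minimal sketch of what the constructor could look like (your constructor may already differ slightly):

// src/controllers/orders.controller.ts (constructor sketch)
// ...
@controller("/orders", TYPES.FetchLoggedUserMiddleware)
export class OrdersController {
  public constructor(
    @inject(TYPES.DatabaseService) private readonly databaseService: DatabaseService,
    @inject(TYPES.MailerService) private readonly mailerService: MailerService
  ) {}
  // ...
}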

And there it is!

Note
If our application grows, it would be more interesting to use a library specialized in job management such as graphile-worker to postpone the email sending. This would also allow us to prioritize the tasks and restart later the tasks that didn’t work. In our case, I didn’t set it up to keep this tutorial simpler.

Let’s run the tests to be sure:

$ npm test
...
  OrderController
...
    create
      1) should create order
      ✓ should not create order without auth
      ✓ should not create order with missing products
...

  1) OrderController
       create
         should create order:
     Error: Timeout of 2000ms exceeded.

We find that our test no longer passes because it exceeds the time allotted for a test. We could increase the timeout of this test with Mocha’s this.timeout method, but that would not be optimal. But don’t worry, we have a straightforward solution offered by the dependency injection we have implemented since the beginning: a mock.

So the idea is to create a class that implements the features of the MailerService but behaves the way we want, specifically in a given context. That is, we want no emails to be sent during tests. It sounds complicated, but it’s actually effortless:

Create a new service that extends mailer service but does nothing
// src/tests/fakeMailer.service.ts
import {injectable} from 'inversify';
import {SendMailOptions} from 'nodemailer';
import {MailerService} from '../services/mailer.service';

@injectable()
export class FakeMailerService extends MailerService {
  public async sendEmail(options: SendMailOptions): Promise<void> {}
  protected async initializeTransporter() {}
}

And just rebind the service at the beginning of our test:

// src/controllers/orders.controller.spec.ts
// ...
describe("OrderController", () => {
  // ...
  before(async () => {
    container.rebind(TYPES.MailerService).to(FakeMailerService);
    // ...
  });
    // ...
});

There you go, our tests should pass again.

Let’s commit everything we just did to finish this section:

$ git add .
$ git commit -m "Adds order confirmation mailer"

And as we come to the end of our chapter, it’s time to apply all our changes to the master branch by doing a merge:

$ git checkout master
$ git merge chapter07

Conclusion

That’s it! You did it! You can give yourself a round of applause. I know it’s been a long road, but it’s almost over, believe me.

In the chapters to come, we will continue to work on the order entity to add validations when placing an order. Some scenarios are:

  • What happens when the products are not available?

  • Decrease the quantity of the current product when placing an order.

The next chapter will be short, but it is essential for the health of the application. So don’t skip it.

Improving orders

Previously, we improved our API to place orders and send a confirmation email to the user (just to improve the user experience). This chapter will take care of some validations on the order entity, namely:

  • Decrease the quantity of the current product when creating an order

  • manage the case where the product is not available

We will also need to update the JSON output for orders a little bit. But let’s not get ahead of ourselves.

Let’s create a new branch to start working:

$ git checkout -b chapter08

Decrease the quantity of product

In this part, we will update the product quantity to ensure that each order will deliver the actual product.

Adding the product.quantity attribute

We will first add a quantity field on the product, representing the available stock of the product.

Add quantity column to products
// src/entities/product.entity.ts
// ...
@Entity()
export class Product {
  // ...
  @Column({type: 'integer', default: 0})
  quantity: number = 0;
  // ...
}
// ...

This field must also be available when creating the product. So we need to update our controller:

Handle quantity attribute in products creation endpoint
// src/controllers/products.controller.ts
// ...
@controller('/products')
export class ProductController {
  // ...
  public async create(/* ... */) {
    // ...
    const product = new Product();
    product.quantity = Number(body.quantity);
    // ...
  }
  // ...
}

We also need to update the generateProduct method, which must handle this new attribute:

Handle quantity attribute in tests utilities
// src/utils/faker.utils.ts
// ...
export function generateProduct(product?: Partial<Product>): Product {
  // ...
  newProduct.quantity = product?.quantity ?? randomInteger(1);
  // ...
}
// ...

Now we have to check that the quantity can never be less than zero. This will secure our application and prevent an order from being placed if there is no stock for the product.

So let’s start by adding a test that will describe the desired behavior:

Implement a tests to ensure a negative quantity is not valid
// src/entities/product.entity.spec.ts
// ...
describe('ProductRepository', () => {
  // ...
  describe('validate', () => {
    it('should have a positive quantity', async () => {
      const product = generateProduct({quantity: -1});
      try {
        await productRepository.save(product);
        assert.fail('Should not validate product');
      } catch (errors) {
        assert.ok(errors.some(error => error.property === 'quantity'));
      }
    });
  });
});

Passing the test is very easy thanks to the class-validator decorators. Just add the decorators @IsInt and @Min like this:

Add product quantity validation
// src/entities/product.entity.ts
// ...
@Entity()
export class Product {
  // ...
  @IsInt()
  @Min(0)
  @Column({type: 'integer', default: 0})
  quantity: number = 0;
  // ...
}
// ...

As you can see, it’s really very simple, and the code is very readable. And that’s it. Let’s commit the changes:

$ git commit -am "Add quantity to products"

Setting up the functional test

Before we go further, we need to change the way we handle order creation because we now have to consider a quantity for each product. If you remember, until now we have been expecting an array of product identifiers. I’ll try to keep things simple: we will now accept an array of objects containing the attributes id and quantity. A quick example would be something like this:

const productOrderParams = [
  { id: 1, quantity: 4 },
  { id: 3, quantity: 5 }
]

So let’s start by modifying our functional test about the order controller:

Update functional test about order creation with new params
// src/controllers/orders.controller.spec.ts
// ...
describe("OrderController", () => {
  // ...
  describe("create", () => {
    let productsParams;

    before(async () => {
      const product1 = await productRepository.save(generateProduct());
      const product2 = await productRepository.save(generateProduct());

      productsParams = [
        {id: product1.id, quantity: 1},
        {id: product2.id, quantity: 1},
      ];
    });

    it('should create order', () =>
      agent
        .post('/orders')
        .set('Authorization', jwt)
        .send({products: productsParams})
        .expect(201));
    // ...
  });
  // ...
});

As you can see, we have simply updated the parameters we pass to the request.

Let’s recap what we need to change in the controller: we need to find the product associated with each id in the array and create the corresponding placements. Let’s look at the implementation of the controller:

Implementation to handle multiple products in orders controller
// src/controllers/orders.controller.ts
// ...
@controller('/orders', TYPES.FetchLoggedUserMiddleware)
export class OrdersController {
  // ...
  @httpPost('/')
  public async create(
    @requestBody() body: {products: {id: number; quantity: number}[]},
    // ...
  ) {
    const {manager} = await this.databaseService.getConnection();

    if (!body.products?.length) {
      return res.status(400).json({
        errors: {
          products: 'should be an array of `{id, quantity}`',
        },
      });
    }

    const order = await manager.save(Order, {
      user,
      total: 0,
      placements: [],
    } as Order);

    for (const {id, quantity} of body.products) {
      const placement = new Placement();
      placement.product = await manager.findOneOrFail(Product, {id});
      placement.order = order;
      placement.quantity = quantity;

      order.placements.push(await manager.save(Placement, placement));
    }
    // ...
  }
  // ...
}

Wow. The code is getting a bit longer and deserves some explanations:

  • we check the user’s data by verifying that req.body.products contains values

  • we create the order with a total equal to zero (we will see in the next section how to make this total update automatically)

  • we loop over req.body.products: for each entry, we retrieve the product, create a Placement, and add it to the order.placements array

  • the rest remains unchanged

The subscriber

It is now time to update the product quantity once an order is placed.

We would be tempted to do this quickly in the OrderController.create action, but that would be a bad idea because we would have to duplicate this logic in the OrderController.update and OrderController.destroy actions, which must also update the product quantity. It also goes against the good practice of keeping the controllers’ responsibility minimal.

That’s why I think a Subscriber from TypeORM is a much better place, for the simple reason that we are sure our subscriber will be called no matter what happens, without us having to worry about it.

Note
It would be possible to use entity listeners such as @AfterInsert, as we did with the UserRepository.validate method, but I really recommend using a subscriber when we want to manipulate several entity types. This lets us split our code better and avoid making one class depend on another.
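
To make the comparison concrete, here is a minimal sketch of what the entity-listener alternative could look like, written directly on the Placement entity (hypothetical code, not used in this book):

A hypothetical entity listener on Placement
// src/entities/placement.entity.ts (hypothetical alternative, not kept)
import {AfterInsert} from 'typeorm';
// ...
export class Placement {
  // ...
  @AfterInsert()
  afterInsertListener() {
    // Only `this` (the placement itself) is available here: no EntityManager
    // is injected, so updating the related Product and Order from this method
    // would couple the entity to extra persistence logic. Hence the subscriber.
  }
}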

The behavior we will implement is the following:

  • when a placement is created

  • we subtract placement.quantity from the attribute product.quantity

  • we recalculate the total cost of the order

  • when a placement is removed

  • we add placement.quantity back to the attribute product.quantity

  • we recalculate the total cost of the order

The subscriber will materialize as a class that implements EntitySubscriberInterface. If we take a closer look at this interface, we see that we have access to a bunch of methods:

Some methods of the EntitySubscriberInterface interface
// node_modules/typeorm/subscriber/EntitySubscriberInterface.d.ts
export interface EntitySubscriberInterface<Entity = any> {
  // ...
  beforeInsert?(event: InsertEvent<Entity>): Promise<any> | void;
  afterInsert?(event: InsertEvent<Entity>): Promise<any> | void;
  beforeUpdate?(event: UpdateEvent<Entity>): Promise<any> | void;
  afterUpdate?(event: UpdateEvent<Entity>): Promise<any> | void;
  beforeRemove?(event: RemoveEvent<Entity>): Promise<any> | void;
  afterRemove?(event: RemoveEvent<Entity>): Promise<any> | void;
  // ...
}

So we can create a brand new class that implements EntitySubscriberInterface:

Create PlacementSubscriber
// src/subscribers/placement.subscriber.ts
import {/*...*/} from 'typeorm';
import {Order} from '../entities/order.entity';
import {Placement} from '../entities/placement.entity';
import {Product} from '../entities/product.entity';

@EventSubscriber()
export class PlacementSubscriber
  implements EntitySubscriberInterface<Placement> {

  listenTo() {
    return Placement;
  }

  async afterInsert({entity, manager}: InsertEvent<Placement>) {/*...*/}
  async beforeRemove({entity, manager}: RemoveEvent<Placement>) {/*...*/}
  async afterRemove({entity, manager}: RemoveEvent<Placement>) {/*...*/}
}

You may also notice that I have implemented the listenTo method, which specifies which entity this subscriber listens to. But before moving on, we need to tell TypeORM where our subscribers are via the following configuration variable, which you need to add to your .env and .test.env files.

Adding the configuration of subscribers
TYPEORM_SUBSCRIBERS=src/subscribers/*.subscriber.ts

We are now ready to move on to the implementation of the methods!

As usual, we will create a test dedicated to this new class. This test will simply create a product with a sufficient quantity, then create a Placement and check that the product’s quantity has been updated. We then do the opposite by removing the placement and checking that the original quantity is restored.

Create test to ensure product.entity is updated
// src/subscribers/placement.subscriber.spec.ts
// ...
describe('PlacementSubscriber', () => {
  let manager: EntityManager;

  before(async () => {
    const databaseService = container.get<DatabaseService>(
      TYPES.DatabaseService,
    );
    const connection = await databaseService.getConnection();
    manager = connection.manager;
  });

  it('should update product.quantity after insert', async () => {
    let product = await manager.save(generateProduct({quantity: 10}));
    const order = await manager.save(generateOrder());

    const placement = await manager.save(
      generatePlacement({order, product, quantity: 2}),
    );

    product = await manager.findOne(Product, product.id);
    assert.strictEqual(product.quantity, 10 - placement.quantity);

    await manager.remove(placement);
    product = await manager.findOne(Product, product.id);
    assert.strictEqual(product.quantity, 10);
  });
});

The implementation of the subscriber is really very simple. We will use the afterInsert and beforeRemove methods to decrement or increment the product quantity and then save the product.

Complete subscriber to update product.quantity
// src/subscribers/placement.subscriber.ts
// ...
@EventSubscriber()
export class PlacementSubscriber
  implements EntitySubscriberInterface<Placement> {
  // ...
  async afterInsert({entity, manager}: InsertEvent<Placement>) {
    const productId = entity.product.id;
    const product = await manager.findOneOrFail(Product, {id: productId});
    product.quantity -= entity.quantity;
    await manager.save(product);
  }

  async beforeRemove({entity, manager}: RemoveEvent<Placement>) {
    const productId = entity.product.id;
    const product = await manager.findOneOrFail(Product, {id: productId});
    product.quantity += entity.quantity;
    await manager.save(product);
  }
}
Note
We retrieve the product via the manager instead of simply retrieving via the entity.product relationship to ensure that we have the latest version stored in the database.

And there you go. It was easy, wasn’t it? Let’s run the tests to be sure.

$ npm test
...
  PlacementSubscriber
    ✓ should update product.quantity after insert (40ms)

Perfect, let’s move on.

Updating the order total

If you understood the previous section correctly, you can guess that updating the order total will be quite similar.

Let’s start by writing the tests. We will create a Product, then an Order, and then a Placement to check that the order total has been updated. We will then remove this Placement and check that the total goes back to zero.

// src/subscribers/placement.subscriber.spec.ts
// ...
describe('PlacementSubscriber', () => {
  // ...
  it('should update order.total after insert', async () => {
    const product = await manager.save(
      generateProduct({quantity: 10, price: 5}),
    );
    let order = await manager.save(generateOrder());

    const placement = generatePlacement({order, product, quantity: 2});
    await manager.save(placement);

    order = await manager.findOne(Order, order.id);
    assert.strictEqual(order.total, 2 * product.price);

    await manager.remove(placement);
    order = await manager.findOne(Order, order.id);
    assert.strictEqual(order.total, 0);
  });
});

And there you go. This test really looks like the previous one. So let’s move quickly to the implementation:

// src/subscribers/placement.subscriber.ts
// ...
@EventSubscriber()
export class PlacementSubscriber
  implements EntitySubscriberInterface<Placement> {
  // ...
  async afterInsert({entity, manager}: InsertEvent<Placement>) {
    // ...
    await this.updateOrderTotal(manager, entity.order);
  }
  // ...
  async afterRemove({entity, manager}: RemoveEvent<Placement>) {
    await this.updateOrderTotal(manager, entity.order);
  }

  private async updateOrderTotal(manager: EntityManager, order: Order) {
    const placements = await manager.find(Placement, {
      where: {order},
      relations: ['product'],
    });

    order.total = placements.reduce(
      (sum, placement) => sum + placement.quantity * placement.product.price,
      0,
    );

    await manager.save(Order, order);
  }
}

Let’s take a closer look at the updateOrderTotal method:

  1. we get all the placements of the order passed as a parameter, together with their associated products

  2. we sum quantity × price over these placements to obtain the order total

The query builder of TypeORM

It is possible to rewrite the previous code with the Query Builder of TypeORM. The Query Builder gives you more control over the generated SQL query. The code becomes a bit more complex but also more efficient because we don’t need to load several objects into memory.

This is the case here, so I wanted to make a little sidebar. Here is the equivalent with the Query Builder.

const result = await manager
  .createQueryBuilder(Placement, 'pl')
  .select('SUM(pl.quantity) * p.price', 'total')
  .innerJoin('pl.order', 'o')
  .innerJoin('pl.product', 'p')
  .where('o.id = :orderId', {orderId: order.id})
  .groupBy('o.id')
  .getRawOne();
order.total = result?.total ?? 0;

This query computes the total directly by multiplying the quantity by the price of the related product. Thus, we obtain the result directly as a number. This avoids loading several JavaScript objects and saves memory.

This code will generate the following SQL query:

SELECT SUM("pl"."quantity") * "p"."price" AS "total"
FROM "placement" "pl"
INNER JOIN "order" "o" ON "o"."id"="pl"."orderId"
INNER JOIN "product" "p" ON "p"."id"="pl"."productId"
WHERE "o"."id" = ?
GROUP BY "o"."id"

Therefore, I strongly advise you to deepen your knowledge of your database engine, as it can be a great ally.

Let’s see if the tests pass:

$ npm test
...
  OrderController
...
    create
      ✓ should create order (74ms)
      ✓ should not create product without auth
      ✓ should not create order with missing products
...
  PlacementSubscriber
    ✓ should update product.quantity after insert (42ms)
    ✓ should update order.total after insert (44ms)
...
  42 passing (1s)

Let’s commit our changes and recap what we’ve just done:

$ git commit -am "Updates the total calculation for order"

And as we come to the end of our chapter, it’s time to apply all our changes to the master branch by doing a merge:

$ git checkout master
$ git merge chapter08

Conclusion

Oh, you are here! Allow me to congratulate you! You have come a long way since the first chapter, and you are almost there: the next chapter will be the last one. So try to make the best of it.

The last chapter will discuss how to optimize the API using paging, caching, and background tasks. So buckle up. It’s going to be an eventful journey.

Optimizations

Welcome to the last chapter of the book. It’s been a long road, but you’re only one step away from the end. In the previous chapter, we finished modeling the order model. We could say that the project is now complete, but I want to cover a few important details about optimization. The topics I will cover here will be:

  • Pagination

  • Caching

  • SQL query optimization

  • the activation of CORS

I will try to go as far as I can by trying to cover some common scenarios. I hope these scenarios will be useful for some of your projects.

Let’s create a new branch for this chapter:

$ git checkout -b chapter09

Pagination

A prevalent strategy for optimizing record retrieval from a database is to load only a limited number of records by paging them. This is quite easy to implement.

The only tricky part here is how to manage the JSON output to give the client enough information about how the collection is paginated. In the previous section, I shared some resources on the practices I’m going to follow here. One of them was JSON:API.

The JSON:API standard imposes a strict but clear format. This allows us not to worry about how it should be implemented. A subsection called Top Level of the official JSON:API documentation mentions something about pagination:

"meta": meta-information about a resource, such as pagination.

This is not very descriptive, but we have a hint on what to look for next about paging implementation. Don’t worry, that’s exactly what we’re going to do here.

Let’s start with the list of products.

The products

We need to provide the pagination links at the top level of the JSON document, as in the following example:

{
  "data": [
    ...
  ],
  "links": {
    "first": "/api/v1/products?page=1",
    "last": "/api/v1/products?page=30",
    "prev": "/api/v1/products",
    "next": "/api/v1/products?page=2"
  }
}

Now that we see what we should return, all we have to do is change our code a little. But before going any further, let’s add a few tests first:

Add functional tests about products pagination
// src/controllers/products.controller.spec.ts
// ...
describe('ProductsController', () => {
  // ...
  describe('index', () => {
    // ...
    it('should paginate results', async () => {
      for (let i = 0; i < 25; i++) {
        await productRepository.save(generateProduct({published: true}));
      }

      await agent
        .get('/products')
        .expect(200)
        .then(response => {
          assert.strictEqual(response.body.data.length, 20);
          assert.ok(response.body.links);
        });
    });
    // ...
  });
  // ...
});

So we are testing two things:

  1. we create 25 products, so the API response must contain only 20 of them because the results are limited to one page

  2. we must find the links attribute we saw previously

So our goal is to get these tests to pass. We are not going to define this behavior controller by controller because we know in advance that we want the same behavior for all of them. So we’re going to create a generic method that will take as parameters:

  • the HTTP request, which will allow us to easily find the page parameter and build the links according to the current URL of the request

  • the SQL query builder, which will be useful to know how many results there are in the database and also to apply the OFFSET and LIMIT clauses to get only part of the results

  • the serializer to serialize the data according to the JSON:API schema

Let’s go!

// src/utils/paginate.utils.ts
import {Request} from 'express';
import {Serializer} from 'jsonapi-serializer';
import {SelectQueryBuilder} from 'typeorm';

const PER_PAGE = 20;

export async function paginate<T>(
  queryBuilder: SelectQueryBuilder<T>,
  serializer: Serializer,
  {query, baseUrl}: Request,
) {
  const page = Number(query.page ?? 1);

  const count = await queryBuilder.getCount();
  const totalPage = Math.ceil(count / PER_PAGE);
  const prevPage = page === 1 ? 1 : page - 1;
  const nextPage = page === totalPage ? page : page + 1;
  const offset = page > 1 ? (page - 1) * PER_PAGE : 0;

  const data = await queryBuilder
    .clone()
    .offset(offset)
    .limit(PER_PAGE)
    .getMany();

  const getUrlForPage = page =>
    `${baseUrl}?${new URLSearchParams({...query, page})}`;

  const response = serializer.serialize(data);
  response.links = {
    first: getUrlForPage(1),
    last: getUrlForPage(totalPage),
    prev: getUrlForPage(prevPage),
    next: getUrlForPage(nextPage),
  };

  return response;
}

The implementation is a bit long, but we will review it together:

  1. queryBuilder.getCount() executes the query passed as a parameter, but only to count the number of results

  2. we use this count to compute the number of pages and deduce the previous and next page numbers

  3. we execute the SQL query of the queryBuilder, adding an OFFSET and a LIMIT

  4. we generate the URLs and add them to the previously serialized result
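
The JSON:API specification quoted earlier also allows pagination counters under a meta key. The book does not use it, but if you want to expose them as well, a small optional addition to paginate could look like this (the key names are my own choice, not imposed by the specification):

Optional meta block for pagination counters
// src/utils/paginate.utils.ts (optional addition, not used in the rest of the book)
// ...
response.meta = {
  'current-page': page,
  'per-page': PER_PAGE,
  'total-count': count,
  'total-pages': totalPage,
};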

Are you still there? The implementation in the controller is much easier:

// src/controllers/products.controller.ts
// ...
import {paginate} from '../utils/paginate.utils';

@controller('/products')
export class ProductController {
  // ...
  @httpGet('/')
  public async index(/* ... */) {
    // ...
    return paginate(repository.search(req.query), productsSerializer, req);
  }
  // ...
}

And there you go. Let’s run the tests to be sure:

$ npm test
...
  ProductsController
    index
      ✓ should paginate results (94ms)
...

Now that we’ve made a nice optimization for the product list route, it’s up to the client to browse the pages.

Let’s commit these changes and continue with the order list.

$ git add .
$ git commit -m "Adds pagination for products index action to optimize response"

List of orders

Now it’s time to do exactly the same for the order list route. This should be very easy to implement. But first, let’s add some tests:

// src/controllers/orders.controller.spec.ts
// ...
describe('OrderController', () => {
  // ...
  describe('index', () => {
    // ...
    it('should paginate results', async () => {
      for (let i = 0; i < 20; i++) {
        await orderRepository.save(generateOrder({user}));
      }

      await agent
        .get('/orders')
        .set('Authorization', jwt)
        .expect(200)
        .then(response => {
          assert.strictEqual(response.body.data.length, 20);
          assert.ok(response.body.links);
        });
    });
  });
  // ...
});

And, as you may already suspect, our tests no longer pass:

$ npm test
...
  1 failing

  1) OrderController
       index
         should paginate results:

      AssertionError [ERR_ASSERTION]: Expected values to be strictly equal:

21 !== 20

      + expected - actual

      -21
      +20

Passing this test is again quite easy.

// src/controllers/orders.controller.ts
// ...
@controller('/orders', TYPES.FetchLoggedUserMiddleware)
export class OrdersController {
  // ...
  @httpGet('/')
  public async index(req: Request & {user: User}) {
    const {manager} = await this.databaseService.getConnection();

    return paginate(
      manager
        .createQueryBuilder(Order, 'o')
        .where('o.user = :user', {user: req.user.id}),
      ordersSerializer,
      req,
    );
  }
  // ...
}

The only difference from the product controller implementation is that here we needed to transform the repository.find call into a query builder.
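
For comparison, a non-paginated version of this action would have looked roughly like the following (just a sketch to show the difference, not necessarily the exact previous code):

// hypothetical pre-pagination version of the same action
const orders = await manager.find(Order, {where: {user: req.user}});
return ordersSerializer.serialize(orders);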

The tests should now pass:

$ npm test
...
  46 passing (781ms)

Let’s do a commit before moving forward

$ git commit -am "Adds pagination for orders index action"

Caching

We can easily set up simple caching for some of our requests, and the implementation will be effortless thanks to TypeORM. TypeORM creates a dedicated table that stores each executed query together with its result. On the next execution of the same query, TypeORM returns the stored result instead of running the query again. This saves precious resources of our database engine (here SQLite) for expensive SQL queries. In our case the gain will not be spectacular because the executed SQL queries remain simple, but we will implement it anyway.

Before seeing a little bit of the cache’s behavior, we will create a script that will insert dummy data in our database. This will be very easy because we just need to use the methods we created during our tests. Here’s a little script that we’re going to create in a new scripts folder:

Create a script to insert lots of data in the development database
// src/scripts/loadFakeData.script.ts
import 'reflect-metadata';
// ...
async function createOrder(manager: EntityManager) {
  const user = await manager.save(User, generateUser());
  const owner = await manager.save(User, generateUser());
  const order = await manager.save(Order, generateOrder({user}));

  for (let j = 0; j < 5; j++) {
    const product = await manager.save(Product, generateProduct({user: owner}));
    await manager.save(Placement, {order, product, quantity: 2});
  }
}

async function main() {
  const {manager} = await container
    .get<DatabaseService>(TYPES.DatabaseService)
    .getConnection();
  const logger = container.get<Logger>(TYPES.Logger);

  for (let i = 0; i < 100; i++) {
    logger.log('DEBUG', `Inserting ${i} / 100`);
    await createOrder(manager);
  }
}

if (require.main === module) {
  main().then().catch(console.error);
}

And there you go. Some explanations:

  • createOrder will, as its name suggests, create an order together with five products and their placements.

  • main calls createOrder in a loop to insert a hundred orders.

  • require.main === module may seem abstract, but it is actually straightforward: it means that main will be executed only if we run this file directly. In other words, it ensures that the method will not be executed if the file is accidentally imported.

Now we can run the script with the following command:

$ npm run build && node dist/scripts/loadFakeData.script.js

We can verify that everything went well by sending a small SQL query directly to the database:

$ sqlite3 db/development.sqlite "SELECT COUNT(*) FROM product"
500

Now let’s try to activate the cache. It’s really very easy. First, we need to add the following environment variable so that TypeORM creates the table dedicated to caching query results:

# .env
# ...
TYPEORM_CACHE=true

Do not forget to deactivate this parameter during tests:

# .test.env
# ...
TYPEORM_CACHE=false
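
Note that once the cache is enabled, any query can opt into it individually, not only the ones inside paginate. For instance, with the repository API the cache duration can be passed through the find options; a quick sketch (illustration only, not code we will keep):

// anywhere a repository is available (illustration only)
const products = await productRepository.find({
  where: {published: true},
  cache: 60 * 1000, // cache this exact query and its parameters for one minute
});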

Now we will add two lines to our paginate method:

// src/utils/paginate.utils.ts
// ...
export async function paginate<T>(/*...*/) {
  // ...
  const count = await queryBuilder.cache(60 * 1000).getCount();
  // ...
  const data = await queryBuilder
    .clone()
    .offset(offset)
    .limit(PER_PAGE)
    .cache(60 * 1000)
    .getMany();
  // ...
  return response;
}

And there you go. The cache method takes care of everything. Let’s try it and see. Start the server with npm start and send an HTTP request:

$ curl -w 'Total: %{time_total}\n' -o /dev/null -s "http://localhost:3000/products?title=42"
Total: 0.019708
Note
The -w option allows us to retrieve the time of the request, -o redirects the response to a file (here /dev/null), and -s hides cURL’s progress output.

The request takes about 20 milliseconds according to cURL. But let’s take a look at the server console, which displays the SQL queries:

...
query: SELECT * FROM "query-result-cache" "cache" WHERE "cache"."query" = ? -- PARAMETERS: ...
query: SELECT COUNT(1) AS "cnt" FROM "product" "Product" WHERE published = TRUE AND lower(title) LIKE ? -- PARAMETERS: ...
query: INSERT INTO "query-result-cache"("identifier", "query", "time", "duration", "result") VALUES (NULL, ?, ?, ?, ?) -- PARAMETERS: ...
...

Here are some explanations for these requests:

  1. a query is made on the query-result-cache table to see if a cache is present

  2. the request is made because the cache did not exist

  3. the result is inserted in the query-result-cache table.

Let’s try to execute the cURL command again:

$ curl -w 'Total: %{time_total}\n' -o /dev/null -s "http://localhost:3000/products?title=42"
Total: 0.007368

We see that the response time is now halved. Of course, this figure is to be taken with a grain of salt, but let’s see in the console what just happened:

query: SELECT * FROM "query-result-cache" "cache" WHERE "cache"."query" = ? -- PARAMETERS: ...

And there you go. The cache was used and nothing more happened: this time, only the cache lookup query was executed. Now it’s up to you to judge which queries can be cached and for how long, as needed.
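
For instance, the cache entry can be given an identifier and a custom duration per query; a quick sketch (the identifier is arbitrary and not used elsewhere in this book):

const count = await queryBuilder
  .cache('products-count', 5 * 60 * 1000) // keep this result for five minutes
  .getCount();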

The gain will be even bigger on expensive queries. Let’s commit our changes before moving on.

$ git commit -am "Adds caching for the serializers"

Activation of CORS

In this last section, I will tell you about one last problem you will surely encounter if you have to work with your API.

The first time you request an external site (via an AJAX request, for example), you will encounter such an error:

Failed to load https://example.com/: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://anfo.pl' is therefore not allowed access. If an opaque response serves your needs, set the request’s mode to 'no-cors' to fetch the resource with CORS disabled.

"But what does Access-Control-Allow-Origin mean? The behavior you are observing is the effect of the CORS implementation of the browsers. Before the CORS standardization, there was no way to call an API terminal under another domain for security reasons. This was (and still is, to some extent) blocked by the policy of the same origin.

CORS is a mechanism that allows legitimate cross-origin requests made on your behalf while blocking certain requests made by rogue scripts. It is triggered when you make an HTTP request to:

  • a different domain

  • a different sub-domain

  • a different port

  • a different protocol

We need to manually enable this feature so that any client can make requests to our API. A simple library already exists, so we will install it:

$ npm install --save cors

And then we just need to modify our server a little bit:

Setup CORS on Express.js
// src/main.ts
import 'reflect-metadata';
import cors from 'cors';
// ...
server
  .setConfig(app => app.use(cors()))
  .build()
  .listen(port, () => console.log(`Listen on http://localhost:${port}/`));
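
By default, cors() accepts requests from any origin, which is what we want here. If you later need to restrict the API to a known front-end, the middleware accepts an options object; here is a minimal sketch assuming a hypothetical FRONTEND_URL environment variable:

Restrict CORS to a single origin (optional)
// src/main.ts (optional variant)
server
  .setConfig(app =>
    app.use(
      cors({
        // FRONTEND_URL is an assumption, e.g. "https://my-frontend.example.com"
        origin: process.env.FRONTEND_URL,
      }),
    ),
  )
  .build()
  .listen(port, () => console.log(`Listen on http://localhost:${port}/`));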

And there it is! Now it’s time to make our last commit and merge our changes on the master branch.

$ git commit -am "Activate CORS"
$ git checkout master
$ git merge chapter09

Conclusion

If you got to this point, it means you are done with the book. Good job! You’ve just become a great Node.js developer, that’s for sure. So together we have built a solid and complete API. This one has all the qualities to dethrone Amazon, rest assured.

Thank you for going through this great adventure with me. Keep in mind that you have just seen one of many ways to build an API with Node.js. I hope that this one will have allowed you to discover new notions and especially that you took as much pleasure in coding as I did.

I would like to remind you that this book’s source code is available in Asciidoctor format on GitHub. So don’t hesitate to fork the project if you want to improve it or correct a mistake I might have missed.

If you liked this book, don’t hesitate to let me know by mail contact@rousseau-alexandre.fr. I’m open to any criticism, good or bad, over a good beer :) .