A Side-by-side Comparison of Express, Koa and Hapi.js

If you’re a Node.js developer, chances are you have, at some point, used Express.js to create your applications or APIs. Express.js is a very popular Node.js framework, and even has some other frameworks built on top of it such as Sails.js, kraken.js, KeystoneJS and many others. However, amidst this popularity, a bunch of other frameworks have been gaining attention in the JavaScript world, such as Koa and hapi.
In this article, we’ll examine Express.js, Koa and hapi.js — their similarities, differences and use cases.
Let’s first introduce each of these frameworks separately.
Express.js is described as the standard server framework for Node.js. It was created by TJ Holowaychuk, acquired by StrongLoop in 2014, and is currently maintained by the Node.js Foundation incubator. With 170+ million downloads in the last year, it’s beyond doubt the most popular Node.js framework.
Development of Koa began in late 2013, led by the same team behind Express. It’s referred to as the future of Express, and is described as a much more modern, modular and minimalistic version of the Express framework.
Hapi.js was developed by the team at Walmart Labs (led by Eran Hammer) after they tried Express and discovered that it didn’t work for their requirements. It was originally developed on top of Express, but as time went by, it grew into a full-fledged framework.
Fun fact: hapi is short for HTTP API server.
Now that we have some background on the frameworks and how they were created, let’s compare each of them based on important concepts, such as their philosophy, routing, and so on.
Note: all code examples are in ES6 and make use of version 4 of Express.js, 2.4 of Koa, and 17 for hapi.js.
Express was built to be a simple, unopinionated web framework. From its GitHub README:

The Express philosophy is to provide small, robust tooling for HTTP servers, making it a great solution for single page applications, web sites, hybrids, or public HTTP APIs.

Express.js is minimal and doesn’t possess many features out of the box. It doesn’t force things like file structure, ORM or templating engine.
While Express.js is minimal, Koa can boast a much more minimalistic code footprint — around 2k LOC. Its aim is to allow developers to be even more expressive. Like Express.js, it can easily be extended with existing or custom plugins and middleware. It’s more futuristic in its approach, in that it relies heavily on relatively new JavaScript features like generators and async/await.
Hapi.js focuses more on configuration, and provides many more features out of the box than Koa and Express.js. Eran Hammer, one of the creators of hapi, explained the reasons for building the framework in his blog post:

hapi was created around the idea that configuration is better than code, that business logic must be isolated from the transport layer, and that native node constructs like buffers and stream should be supported as first class objects.

Starting a Server
Starting a server is one of the basic things we’d need to do in our projects. Let’s examine how it can be done in the different frameworks. We’ll start a server and listen on port 3000 in each example.
const express = require('express');
const app = express();

app.listen(3000, () => console.log('App is listening on port 3000!'));

Starting a server in Express.js is as simple as requiring the express package, initializing the Express app and assigning it to the app variable, and calling the app.listen() method, which is just a wrapper around the native Node.js http.createServer() method.
Starting a server in Koa is quite similar to Express.js:
const Koa = require('koa');
const app = new Koa();

app.listen(3000, () => console.log('App is listening on port 3000!'));

The app.listen() method in Koa is also a wrapper around the http.createServer() method.
Starting a server in hapi.js is quite a departure from what many of us may be used to from Express:
const Hapi = require('hapi');

const server = Hapi.server({
  host: 'localhost',
  port: 3000
});

async function start() {
  try {
    await server.start();
  } catch (err) {
    console.log(err);
    process.exit(1);
  }

  console.log('Server running at:', server.info.uri);
}

start();
In the code block above, first we require the hapi package, then instantiate a server with Hapi.server(), which has a single config object argument containing the host and port parameters. Then we start the server with the asynchronous server.start() function.
Unlike in Express.js and Koa, the server.start() function in hapi is not a wrapper around the native http.createServer() method. It instead implements its own custom logic.
The above code example is from the hapi.js website, and shows the importance the creators of hapi.js place on configuration and error handling.

Link: https://www.sitepoint.com/express-koa-hapi/

Building Apps and Services with the Hapi.js Framework

Hapi.js is described as “a rich framework for building applications and services”. Hapi’s smart defaults make it a breeze to create JSON APIs, and its modular design and plugin system allow you to easily extend or modify its behavior.
The recent release of version 17.0 has fully embraced async and await, so you’ll be writing code that appears synchronous but is non-blocking and avoids callback hell. Win-win.
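To illustrate that difference, here’s a standalone sketch (not hapi-specific) of the same asynchronous operation written both ways:

```javascript
// Callback style: the result only exists inside the nested callback.
function fetchTitleCb(done) {
  setTimeout(() => done(null, 'hapi!'), 10);
}

// Promise style: the same operation, but awaitable.
function fetchTitle() {
  return new Promise(resolve => setTimeout(() => resolve('hapi!'), 10));
}

async function main() {
  const title = await fetchTitle(); // reads top to bottom, yet never blocks the event loop
  console.log(title); // "hapi!"
}

main();
```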
The Project
In this article, we’ll be building the following API for a typical blog from scratch:
# RESTful actions for fetching, creating, updating and deleting articles
GET /articles articles#index
GET /articles/:id articles#show
POST /articles articles#create
PUT /articles/:id articles#update
DELETE /articles/:id articles#destroy

# Nested routes for creating and deleting comments
POST /articles/:id/comments comments#create
DELETE /articles/:id/comments comments#destroy

# Authentication with JSON Web Tokens (JWT)
POST /authentications authentications#create

The article will cover:

Hapi’s core API: routing, request and response
models and persistence in a relational database
routes and actions for Articles and Comments
testing a REST API with HTTPie
authentication with JWT and securing routes
an HTML View and Layout for the root route /.

The Starting Point
Make sure you’ve got a recent version of Node.js installed; node -v should return 8.9.0 or higher.
Download the starting code from here with git:
git clone https://github.com/markbrown4/hapi-api.git
cd hapi-api
npm install

Open up package.json and you’ll see that the “start” script runs server.js with nodemon. This will take care of restarting the server for us when we change a file.
Run npm start and open http://localhost:3000/:
[{ "so": "hapi!" }]

Let’s look at the source:
// server.js
const Hapi = require('hapi')

// Configure the server instance
const server = Hapi.server({
  host: 'localhost',
  port: 3000
})

// Add routes
server.route({
  method: 'GET',
  path: '/',
  handler: () => {
    return [{ so: 'hapi!' }]
  }
})

// Go!
server.start().then(() => {
  console.log('Server running at:', server.info.uri)
}).catch(err => {
  console.log(err)
  process.exit(1)
})
The Route Handler
The route handler is the most interesting part of this code. Replace it with the code below, comment out the return lines one by one, and test the response in your browser.
{
  method: 'GET',
  path: '/',
  handler: () => {
    // return [{ so: 'hapi!' }]
    return 123
    return `<h1><marquee>HTML <em>rules!</em></marquee></h1>`
    return null
    return new Error('Boom')
    return Promise.resolve({ whoa: true })
    return require('fs').createReadStream('index.html')
  }
}

To send a response, you simply return a value and Hapi will send the appropriate body and headers.

An Object will respond with stringified JSON and Content-Type: application/json
String values will be Content-Type: text/html
You can also return a Promise or Stream.
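One way to picture that dispatch is with a toy function (this is only an illustration of the idea, not hapi’s actual implementation):

```javascript
// Toy version of the value-to-header mapping described above.
function inferContentType(value) {
  if (typeof value === 'string') return 'text/html';
  return 'application/json'; // objects, arrays and errors are serialized as JSON
}

console.log(inferContentType('<h1>hi</h1>'));     // text/html
console.log(inferContentType([{ so: 'hapi!' }])); // application/json
```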

The handler function is often made async for cleaner control flow with Promises:
{
  method: 'GET',
  path: '/',
  handler: async () => {
    let html = await Promise.resolve(`<h1>Google</h1>`)
    html = html.replace('Google', 'Hapi')

    return html
  }
}
It’s not always cleaner with async though. Sometimes returning a Promise is simpler:
handler: () => {
  return Promise.resolve(`<h1>Google</h1>`)
    .then(html => html.replace('Google', 'Hapi'))
}
We’ll see better examples of how async helps us out when we start interacting with the database.
The Model Layer
Like the popular Express.js framework, Hapi is a minimal framework that doesn’t provide any recommendations for the Model layer or persistence. You can choose any database and ORM that you’d like, or none — it’s up to you. We’ll be using SQLite and the Sequelize ORM in this tutorial to provide a clean API for interacting with the database.
SQLite comes pre-installed on macOS and most Linux distributions. You can check whether it’s installed with sqlite3 --version. If not, you can find installation instructions at the SQLite website.
Sequelize works with many popular relational databases, such as Postgres or MySQL. We’ll be using SQLite, so we need to install both sequelize and the sqlite3 adapter:
npm install --save sequelize sqlite3

Let’s connect to our database and write our first table definition for articles:
// models.js
const path = require('path')
const Sequelize = require('sequelize')

// configure connection to db host, user, pass - not required for SQLite
const sequelize = new Sequelize(null, null, null, {
  dialect: 'sqlite',
  storage: path.join('tmp', 'db.sqlite') // SQLite persists its data directly to file
})

// Here we define our Article model with a title attribute of type string,
// and a body attribute of type text. By default, all tables get columns
// for id, createdAt and updatedAt as well.
const Article = sequelize.define('article', {
  title: Sequelize.STRING,
  body: Sequelize.TEXT
})

// Create table
Article.sync()

module.exports = {
  Article
}
Let’s test out our new model by importing it and replacing our route handler with the following:
// server.js
const { Article } = require('./models')

{
  method: 'GET',
  path: '/',
  handler: () => {
    // try commenting these lines out one at a time
    return Article.findAll()
    return Article.create({ title: 'Welcome to my blog', body: 'The happiest place on earth' })
    return Article.findById(1)
    return Article.update({ title: 'Learning Hapi', body: `JSON API's a breeze.` }, { where: { id: 1 } })
    return Article.findAll()
    return Article.destroy({ where: { id: 1 } })
    return Article.findAll()
  }
}

If you’re familiar with SQL or other ORMs, the Sequelize API should be self-explanatory. It’s built with promises, so it works great with Hapi’s async handlers too.
Note: using Article.sync() to create the tables, or Article.sync({ force: true }) to drop and create them, is fine for the purposes of this demo. If you want to use this in production, you should check out sequelize-cli and write migrations for any schema changes.
Our RESTful Actions
Let’s build the following routes:
GET /articles fetch all articles
GET /articles/:id fetch article by id
POST /articles create article with `{ title, body }` params
PUT /articles/:id update article with `{ title, body }` params
DELETE /articles/:id delete article by id

Add a new file, routes.js, to separate the server config from the application logic:
// routes.js
const { Article } = require('./models')

exports.configureRoutes = (server) => {
  // server.route accepts an object or an array
  return server.route([{
    method: 'GET',
    path: '/articles',
    handler: () => {
      return Article.findAll()
    }
  }, {
    method: 'GET',
    // The curly braces are how we define params (variable path segments in the URL)
    path: '/articles/{id}',
    handler: (request) => {
      return Article.findById(request.params.id)
    }
  }, {
    method: 'POST',
    path: '/articles',
    handler: (request) => {
      const article = Article.build(request.payload.article)

      return article.save()
    }
  }, {
    // method can be an array
    method: ['PUT', 'PATCH'],
    path: '/articles/{id}',
    handler: async (request) => {
      const article = await Article.findById(request.params.id)
      article.set(request.payload.article)

      return article.save()
    }
  }, {
    method: 'DELETE',
    path: '/articles/{id}',
    handler: async (request) => {
      const article = await Article.findById(request.params.id)

      return article.destroy()
    }
  }])
}

Import and configure our routes before we start the server:
// server.js
const Hapi = require('hapi')
const { configureRoutes } = require('./routes')

const server = Hapi.server({
  host: 'localhost',
  port: 3000
})

// This function will allow us to easily extend it later
const main = async () => {
  await configureRoutes(server)
  await server.start()

  return server
}

main().then(server => {
  console.log('Server running at:', server.info.uri)
}).catch(err => {
  console.log(err)
  process.exit(1)
})


Link: https://www.sitepoint.com/build-apps-services-with-hapi-js/

An Introduction to Sails.js

Sails.js is a Node.js MVC (model–view–controller) framework that follows the “convention over configuration” principle. It’s inspired by the popular Ruby on Rails web framework, and allows you to quickly build REST APIs, single-page apps and real-time (WebSockets-based) apps. It makes extensive use of code generators that allow you to build your application with less writing of code—particularly of common code that can be otherwise scaffolded.
The framework is built on top of Express.js, one of the most popular Node.js libraries, and Socket.io, a JavaScript library/engine for adding real-time, bidirectional, event-based communication to applications. At the time of writing, the official stable version of Sails.js is 0.12.14, which is available from npm. Sails.js version 1.0 has not officially been released, but according to Sails.js creators, version 1.0 is already used in some production applications, and they even recommend using it when starting new projects.
Main Features
Sails.js has many great features:

it’s built on Express.js
it has real-time support with WebSockets
it takes a “convention over configuration” approach
it has powerful code generation, thanks to Blueprints
it’s database agnostic thanks to its powerful Waterline ORM/ODM
it supports multiple data stores in the same project
it has good documentation.

There are currently a few important cons, such as:

no support for JOIN queries in Waterline
no support for SQL transactions until Sails v1.0 (in beta at the time of writing)
until version 1.0, it still uses Express.js v3, which is EOL (end of life)
development is very slow.

Sails.js vs Express.js
Software development is all about building abstractions. Sails.js is a high-level abstraction layer on top of Express.js (which itself is an abstraction over Node’s HTTP modules) that provides routing, middleware, file serving and so on. It also adds a powerful ORM/ODM, the MVC architectural pattern, and a powerful generator CLI (among other features).
You can build web applications using Node’s low-level HTTP service and other utility modules (such as the filesystem module) but it’s not recommended except for the sake of learning the Node.js platform. You can also take a step up and use Express.js, which is a popular, lightweight framework for building web apps.
You’ll have routing and other useful constructs for web apps, but you’ll need to take care of pretty much everything from configuration, file structure and code organization to working with databases.
Express doesn’t offer any built-in tool to help you with database access, so you’ll need to bring together the required technologies to build a complete web application. This is what’s called a stack. JavaScript web developers often use the popular MEAN stack, which stands for MongoDB, Express.js, AngularJS and Node.js.
MongoDB is the preferred database system among Node/Express developers, but you can use any database you want. The most important point here is that Express doesn’t provide any built-in APIs when it comes to databases.
The Waterline ORM/ODM
One key feature of Sails.js is Waterline, a powerful ORM (object-relational mapper) for SQL-based databases and ODM (object-document mapper) for NoSQL document-based databases. Waterline abstracts away the complexities of working with databases and, most importantly, means you don’t have to choose a database system when you’re just starting development, or wait on a client who hasn’t yet decided which database technology to use.
You can start building your application without a single line of configuration. In fact, you don’t have to install a database system at all initially. Thanks to the built-in sails-disk NeDB-based file database, you can transparently use the file system to store and retrieve data for testing your application’s functionality.
Once you’ve decided on the database system you want to use for your project, you can simply switch databases by installing the relevant adapter. Waterline has official adapters for popular relational databases such as MySQL and PostgreSQL, and for NoSQL databases such as MongoDB and Redis. The community has also built numerous adapters for other popular database systems such as Oracle, MSSQL, DB2, SQLite, CouchDB and Neo4j. If you can’t find an adapter for the database system you want to use, you can develop your own custom adapter.
Waterline abstracts away the differences between different database systems and allows you to have a normalized interface for your application to communicate with any supported database system. You don’t have to work with SQL or any low-level API (for NoSQL databases) but that doesn’t mean you can’t (at least for SQL-based databases and MongoDB).
There are situations when you need to write custom SQL, for example, for performance reasons, for working with complex database requirements, or for accessing database-specific features. In this case, you can use the .query() method available only on the Waterline models that are configured to use SQL systems (you can find more information about query() from the docs).
Since different database systems have both common and database-specific features, the Waterline ORM/ODM serves you best as long as you constrain yourself to the common features. Also, if you use raw SQL or native MongoDB APIs, you’ll lose many of the features of Waterline, including the ability to switch between different databases.
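To make that trade-off concrete, here’s a hedged sketch; the Article model and the SQL string are illustrative, and in a real Sails app the model would be supplied by Waterline:

```javascript
// Portable Waterline query: the same call works on any configured adapter.
async function recentArticles(Article) {
  return Article.find({ sort: 'createdAt DESC' }).limit(5);
}

// Raw SQL escape hatch via .query(): SQL adapters only, and this function
// would have to be rewritten if you ever switch database systems.
function recentArticlesRaw(Article, done) {
  Article.query('SELECT * FROM article ORDER BY createdAt DESC LIMIT 5', done);
}
```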
Getting Started with Sails.js
Now that we’ve covered the basic concepts and features of Sails.js, let’s see how you can quickly get started using Sails.js to create new projects and lift them.
Before you can use Sails.js, you need to have a development environment with Node.js (and npm) installed. You can install both of them by heading to the official Node.js website and downloading the right installer for your operating system.

Also make sure to install whatever database management system you want to use with Sails.js (either a relational or a NoSQL database). If you’re not interested in using a full-fledged database system at this point, you can still work with Sails.js thanks to sails-disk, which provides a file-based database out of the box.
Installing the Sails.js CLI
After satisfying the working development requirements, you can head over to your terminal (Linux and macOS) or command prompt (Windows) and install the Sails.js Command Line Utility, globally, from npm:
sudo npm install sails -g

If you want to install the latest 1.0 version to try the new features, you need to use the beta version:
npm install sails@beta -g

You may or may not need sudo to install packages globally depending on your npm configuration.
Scaffolding a Sails.js Project
After installing the Sails.js CLI, you can go ahead and scaffold a new project with one command:
sails new sailsdemo

This will create a new folder for your project named sailsdemo in your current directory. You can also scaffold your project files inside an existing folder with this:
sails new .

You can scaffold a new Sails.js project without a front end with this:
sails new sailsdemo --no-frontend

Find more information about the features of the CLI from the docs.
The Anatomy of a Sails.js Project
Here’s a screenshot of a project generated using the Sails.js CLI:

A Sails.js project is a Node.js module with a package.json and a node_modules folder. You may also notice the presence of Gruntfile.js. Sails.js uses Grunt as a build tool for building front-end assets.
If you’re building an app for the browser, you’re in luck. Sails ships with Grunt — which means your entire front-end asset workflow is completely customizable, and comes with support for all of the great Grunt modules which are already out there. That includes support for Less, Sass, Stylus, CoffeeScript, JST, Jade, Handlebars, Dust, and many more. When you’re ready to go into production, your assets are minified and gzipped automatically. You can even compile your static assets and push them out to a CDN like CloudFront to make your app load even faster. (You can read more about these points on the Sails.js website.)
You can also use Gulp or Webpack as your build system instead of Grunt, with custom generators. See the sails-generate-new-gulp and sails-webpack projects on GitHub.
For more community generators, see this documentation page on the Sails.js site.
The project contains many configuration files and folders. Most of them are self-explanatory, but let’s go over the ones you’ll be working with most of the time:

api/controllers: this is the folder where controllers live. Controllers correspond to the C part in MVC. It’s where the business logic for your application exists.
api/models: the folder where models exist. Models correspond to the M part of MVC architecture. This is where you need to put classes or objects that map to your SQL/NoSQL data.
api/policies: this is the folder where you need to put policies for your application
api/responses: this folder contains server response logic such as functions to handle the 404 and 500 responses, etc.
api/services: this is where your app-wide services live. A service is a global class encapsulating common logic that can be used throughout many controllers.
./views: this folder contains templates used for displaying views. By default, it contains EJS templates, but you can configure any Express-supported engine, such as EJS, Jade, Handlebars, Mustache or Underscore.
./config: this folder contains many configuration files that enable you to configure every detail of your application, such as CORS, CSRF protection, i18n, http, settings for models, views, logging and policies etc. One important file that you’ll frequently use is config/routes.js, where you can create your application routes and map them to actual actions in the controllers or to views directly.
./assets: this is the folder where you can place any static files (CSS, JavaScript and images etc.) for your application.


Link: https://www.sitepoint.com/an-introduction-to-sails-js/

Building a Real-time Chat App with Sails.js

If you’re a developer who currently uses frameworks such as Django, Laravel or Rails, you’ve probably heard about Node.js. You might already be using a popular front-end library such as Angular or React in your projects. By now, you may even be thinking about doing a complete switchover to a server technology based on Node.js.
However, the big question is where to start. The JavaScript world has grown at an incredibly fast pace in the last few years, and it seems to be ever expanding.
If you’re afraid of losing your hard-earned programming experience in the Node universe, fear not, as we have Sails.js.
Sails.js is a real-time MVC framework designed to help developers build production-ready, enterprise-grade Node.js apps in a short time. Sails.js is a pure JavaScript solution that supports multiple databases (simultaneously) and multiple front-end technologies. If you’re a Rails developer, you’ll be happy to learn that Mike McNeil, the Sails.js founder, was inspired by Rails. You’ll find a lot of similarities between Rails and Sails.js projects.
In this article, I’ll teach you the fundamentals of Sails.js, by showing you how to build a simple, user-friendly chat application. The complete source code for the sails-chat project can be found in this GitHub repo.

Before you start, you need to have at least some experience developing applications using the MVC architecture, as this tutorial is intended for intermediate developers. You’ll also need at least a basic foundation in:

Modern JavaScript syntax (ES6+).

To make it practical and fair for everyone, this tutorial will use core libraries that are installed by default in a new Sails.js project. Integration with modern front-end libraries such as React, Vue or Angular won’t be covered here. However, I highly recommend you look into them after this article. Also, we won’t do database integrations. We’ll instead use the default, local-disk, file-based database for development and testing.
Project Plan
The goal of this tutorial is to show you how to build a chat application similar to Slack, Gitter or Discord.
Not really! A lot of time and sweat went into building those wonderful platforms. The current number of features developed into them is quite huge.
Instead, we’ll build a minimum viable product version of a chat application which consists of:

single chat room
basic authentication (passwordless)
profile update.

I’ve added the profile feature as a bonus in order to cover a bit more ground on Sails.js features.
Installing Sails.js
Before we start installing Sails.js, we first need to set up a proper Node.js environment. At the time of writing, the latest stable version of Sails.js is v0.12.14. Sails.js v1.0.0 is also available, but it’s currently in beta and not recommended for production use.
The latest stable version of Node I have access to is v8.9.4. Unfortunately, Sails.js v0.12 doesn’t work properly with the current latest LTS. However, I’ve tested with Node v7.10 and found everything works smoothly. This is still good, since we can use some new ES8 syntax in our code.
As a JavaScript developer, you’ll realize working with one version of Node.js is not enough. Hence, I recommend using the nvm tool to manage multiple versions of Node.js and NPM easily. If you haven’t done so, just purge your existing Node.js installation, then install nvm to help you manage multiple versions of Node.js.
Here are the basic instructions of installing Node v7 and Sails.js:
# Install the latest version of Node v7
nvm install v7

# Make Node v7 the default
nvm alias default v7

# Install Sails.js Global
npm install -g sails

If you have a good internet connection, this should only take a couple of minutes or less. Let’s now go ahead and create our new application using the Sails generator command:
# Go to your projects folder
cd Projects

# Generate your new app
sails new chat-app

# Wait for the install to finish then navigate to the project folder
cd chat-app

# Start the app
sails lift

It should take a few seconds for the app to start. You’ll need to manually open the URL http://localhost:1337 in your browser to see your newly created web app.

Seeing this confirms that we have a running project with no errors, and that we can start working. To stop the project, just press Ctrl + C in the terminal. Use your favorite code editor (I’m using Atom) to examine the generated project structure. Below are the main folders you should be aware of:

api: controllers, models, services and policies (permissions)
assets: images, fonts, JS, CSS, Less, Sass etc.
config: project configuration e.g. database, routes, credentials, locales, security etc.
node_modules: installed npm packages
tasks: Grunt config scripts and pipeline script for compiling and injecting assets
views: view pages — for example, EJS, Jade or whatever templating engine you prefer
.tmp: temporary folder used by Sails to build and serve your project while in development mode.

Before we proceed, there are a couple of things we need to do:

Update EJS package. If you have EJS 2.3.4 listed in package.json, you need to update it by changing it to 2.5.5 immediately. It contains a serious security vulnerability. After changing the version number, do an npm install to perform the update.
Hot reloading. I suggest you install sails-hook-autoreload to enable hot reloading for your Sails.js app. It’s not a perfect solution but will make development easier. To install it for this current version of Sails.js, execute the following:

npm install sails-hook-autoreload@for-sails-0.12 --save

Installing Front-end Dependencies
For this tutorial, we’ll spend as little time as possible building a UI. Any CSS framework you’re comfortable with will do, and I’ll go with the Semantic UI CSS library.
Sails.js doesn’t have a specific guide on how to install CSS libraries, but there are several ways you can go about it. Let’s look at each.
1. Manual Download
You can download the CSS files and JS scripts yourself, along with their dependencies. After downloading, place the files inside the assets folder.
I prefer not to use this method, as it requires manual effort to keep the files updated. I like automating tasks.
2. Using Bower
This method requires you to create a file called .bowerrc at the root of your project. Paste the following snippet:
{
  "directory": "assets/vendor"
}

This will instruct Bower to install to the assets/vendor folder instead of the default bower_components folder. Next, install Bower globally, and your front-end dependencies locally using Bower:
# Install bower globally via npm
npm install -g bower

# Create bower.json file, accept default answers (except choose y for private)
bower init

# Install semantic-ui via bower
bower install semantic-ui --save

# Install jsrender
bower install jsrender --save

I’ll explain the purpose of jsrender later. I thought it best to finish the task of installing dependencies in one go. You should take note that jQuery has been installed as well, since it’s a dependency for semantic-ui.
After installing, update assets/styles/importer.less to include this line:
@import '../vendor/semantic/dist/semantic.css';

Next include the JavaScript dependencies in tasks/pipeline.js:
var jsFilesToInject = [

  // Load Sails.io before everything else
  'js/dependencies/sails.io.js',

  // Vendor dependencies
  // (adjust the paths if Bower placed the packages elsewhere)
  'vendor/jquery/dist/jquery.js',
  'vendor/jsrender/jsrender.js',
  'vendor/semantic/dist/semantic.js',

  // Dependencies like jQuery or Angular are brought in here
  'js/dependencies/**/*.js',

  // All of the rest of your client-side JS files
  // will be injected here in no particular order.
  'js/**/*.js'
];

When we run sails lift, the JavaScript files will automatically be injected into the views/layout.ejs file as per the pipeline.js instructions. The current Grunt setup will take care of injecting our CSS dependencies for us.
Important: add the word vendor to the .gitignore file. We don’t want vendor dependencies saved in our repository.
3. Using npm + grunt.copy
The third method requires a little bit more effort to set up, but will result in a lower footprint. Install the dependencies using npm as follows:
npm install semantic-ui-css jsrender --save

jQuery will be installed automatically, since it’s also listed as a dependency for semantic-ui-css. Next we need to place code in tasks/config/copy.js. This code will instruct Grunt to copy the required JS and CSS files from node_modules to the assets/vendor folder for us. The entire file should look like this:
module.exports = function(grunt) {

  grunt.config.set('copy', {
    dev: {
      files: [{
        expand: true,
        cwd: './assets',
        src: ['**/*.!(coffee|less)'],
        dest: '.tmp/public'
      }, {
        // Copy jQuery
        expand: true,
        cwd: './node_modules/jquery/dist/',
        src: ['jquery.min.js'],
        dest: './assets/vendor/jquery'
      }, {
        // Copy jsrender
        expand: true,
        cwd: './node_modules/jsrender/',
        src: ['jsrender.js'],
        dest: './assets/vendor/jsrender'
      }, {
        // Copy semantic-ui CSS and JS files
        expand: true,
        cwd: './node_modules/semantic-ui-css/',
        src: ['semantic.css', 'semantic.js'],
        dest: './assets/vendor/semantic-ui'
      }, {
        // Copy semantic-ui icon fonts
        expand: true,
        cwd: './node_modules/semantic-ui-css/themes',
        src: ['*.*', '**/*.*'],
        dest: './assets/vendor/semantic-ui/themes'
      }]
    },
    build: {
      files: [{
        expand: true,
        cwd: '.tmp/public',
        src: ['**/*'],
        dest: 'www'
      }]
    }
  });

  grunt.loadNpmTasks('grunt-contrib-copy');
};

Add this line to assets/styles/importer.less:
@import '../vendor/semantic-ui/semantic.css';

Add the JS files to tasks/pipeline.js:
// Vendor dependencies
'vendor/jquery/jquery.min.js',
'vendor/jsrender/jsrender.js',
'vendor/semantic-ui/semantic.js',

Finally, execute this command to copy the files from node_modules to the assets/vendor folder. You only need to do this once for every clean install of your project:
grunt copy:dev

Remember to add vendor to your .gitignore.
Testing Dependencies Installation
Whichever method you’ve chosen, you need to ensure that the required dependencies are being loaded. To do this, replace the code in views/homepage.ejs with the following:

<h2 class="ui icon header">
  <i class="settings icon"></i>
  <div class="content">
    Account Settings
    <div class="sub header">Manage your account settings and set e-mail preferences.</div>
  </div>
</h2>

After saving the file, do a sails lift. Your home page should now look like this:

Always do a refresh after restarting your app. If the icon is missing or the font looks off, please review the steps carefully and see what you missed. Use the browser’s console to see which files are not loading. Otherwise, proceed with the next stage.
Continue reading %Building a Real-time Chat App with Sails.js%

Link: https://www.sitepoint.com/building-real-time-chat-app-sails-js/

Build a Simple Beginner App with Node, Bootstrap & MongoDB

If you’re just getting started with Node.js and want to try your hand at building a web app, things can often get a little overwhelming. Once you get beyond the “Hello, World!” tutorials, much of the material out there has you copy-pasting code, with little or no explanation as to what you’re doing or why.
This means that, by the time you’ve finished, you’ve built something nice and shiny, but you also have relatively few takeaways that you can apply to your next project.
In this tutorial, I’m going to take a slightly different approach. Starting from the ground up, I’ll demonstrate how to build a no-frills web app using Node.js, but instead of focusing on the end result, I’ll focus on a range of things you’re likely to encounter when building a real-world app. These include routing, templating, dealing with forms, interacting with a database and even basic authentication.
This won’t be a JavaScript 101. If that’s the kind of thing you’re after, look here. It will, however, be suitable for those people who feel reasonably confident with the JavaScript language, and who are looking to take their first steps in Node.js.
What We’ll Be Building
We’ll be using Node.js and the Express framework to build a simple registration form with basic validation, which persists its data to a MongoDB database. We’ll add a view to list successful registrations, which we’ll protect with basic HTTP authentication, and we’ll use Bootstrap to add some styling. The tutorial is structured so that you can follow along step by step. However, if you’d like to jump ahead and see the end result, the code for this tutorial is also available on GitHub.
Basic Setup
Before we can start coding, we’ll need to get Node, npm and MongoDB installed on our machines. I won’t go into depth on the various installation instructions, but if you have any trouble getting set up, please leave a comment below, or visit our forums and ask for help there.
Many websites will recommend that you head to the official Node download page and grab the Node binaries for your system. While that works, I would suggest that you use a version manager instead. This is a program which allows you to install multiple versions of Node and switch between them at will. There are various advantages to using a version manager, for example it negates potential permission issues which would otherwise see you installing packages with admin rights.
If you fancy going the version manager route, please consult our quick tip: Install Multiple Versions of Node.js Using nvm. Otherwise, grab the correct binaries for your system from the link above and install those.
npm is a JavaScript package manager which comes bundled with Node, so no extra installation is necessary here. We’ll be making quite extensive use of npm throughout this tutorial, so if you’re in need of a refresher, please consult: A Beginner’s Guide to npm — the Node Package Manager.
MongoDB is a document database which stores data in flexible, JSON-like documents.
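For example, a user registration like the one we'll build later could be stored as a single nested document, with no joins across tables (the field names here are purely illustrative):

```javascript
// A JSON-like document as MongoDB would store it (illustrative fields).
const registration = {
  name: 'Jane Doe',
  email: 'jane@example.com',
  preferences: { newsletter: true, format: 'html' },
  registeredAt: new Date('2018-01-15')
};

// Related data lives inside the document itself, so reading it
// requires no JOINs, just property access.
console.log(registration.preferences.newsletter); // true
```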
The quickest way to get up and running with Mongo is to use a service such as mLabs. They have a free sandbox plan which provides a single database with 496 MB of storage running on a shared virtual machine. This is more than adequate for a simple app with a handful of users. If this sounds like the best option for you, please consult their quick start guide.
You can also install Mongo locally. To do this, please visit the official download page and download the correct version of the community server for your operating system. There’s a link to detailed, OS-specific installation instructions beneath every download link, which you can consult if you run into trouble.
Although not strictly necessary for following along with this tutorial, you might also like to install Compass, the official GUI for MongoDB. This tool helps you visualize and manipulate your data, allowing you to interact with documents with full CRUD functionality.
At the time of writing, you’ll need to fill out your details to download Compass, but you won’t need to create an account.
Check that Everything is Installed Correctly
To check that Node and npm are installed correctly, open your terminal and type:
node -v

followed by:
npm -v

This will output the version number of each program (8.9.4 and 5.6.0 respectively at the time of writing).
If you installed Mongo locally, you can check the version number using:
mongo --version

This should output a bunch of information, including the version number (3.6.2 at the time of writing).
Check the Database Connection Using Compass
If you have installed Mongo locally, you can start the server by typing the following command into a terminal:
mongod

Next, open Compass. You should be able to accept the defaults (server: localhost, port: 27017), press the CONNECT button, and establish a connection to the database server.

MongoDB Compass connected to localhost
Note that the databases admin and local are created automatically.
Using a Cloud-hosted Solution
If you’re using mLabs, create a database subscription (as described in their quick-start guide), then copy the connection details to the clipboard. This should be in the form:
mongodb://<dbuser>:<dbpassword>@<host>.mlab.com:<port>/<dbname>

When you open Compass, it will inform you that it has detected a MongoDB connection string and asks if you would like to use it to fill out the form. Click Yes, noting that you might need to adjust the username and password by hand. After that, click CONNECT and you should be off to the races.

MongoDB Compass connected to mLabs
Note that I called my database sp-node-article. You can call yours what you like.
Initialize the Application
With everything set up correctly, the first thing we need to do is initialize our new project. To do this, create a folder named demo-node-app, enter that directory and type the following in a terminal:
npm init -y

This will create and auto-populate a package.json file in the project root. We can use this file to specify our dependencies and to create various npm scripts, which will aid our development workflow.
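The freshly generated file will look something like this (npm init -y simply accepts all the defaults; the exact values depend on your folder name and npm version):

```json
{
  "name": "demo-node-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```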
Install Express
Express is a lightweight web application framework for Node.js, which provides us with a robust set of features for writing web apps. These features include such things as route handling, template engine integration and a middleware framework, which allows us to perform additional tasks on request and response objects. There is nothing you can do in Express that you couldn’t do in plain Node.js, but using Express means we don’t have to re-invent the wheel and reduces boilerplate.
So let’s install Express. To do this, run the following in your terminal:
npm install --save express

By passing the --save option to the npm install command, Express will be added to the dependencies section of the package.json file. This signals to anyone else running our code that Express is a package our app needs to function properly.
Install nodemon
nodemon is a convenience tool. It will watch the files in the directory it was started in, and if it detects any changes, it will automatically restart your Node application (meaning you don’t have to). In contrast to Express, nodemon is not something the app requires to function properly (it just aids us with development), so install it using:
npm install --save-dev nodemon

This will add nodemon to the devDependencies section of the package.json file.
Create Some Initial Files
We’re almost through with the setup. All we need to do now is create a couple of initial files before kicking off the app.
In the demo-node-app folder create an app.js file and a start.js file. Also create a routes folder, with an index.js file inside. After you’re done, things should look like this:
├── app.js
├── node_modules
│ └── …
├── package.json
├── routes
│ └── index.js
└── start.js

Now, let’s add some code to those files.
In app.js:
const express = require('express');
const routes = require('./routes/index');

const app = express();
app.use('/', routes);

module.exports = app;

Here, we’re importing both the express module and (the export value of) our routes file into the application. The require function we’re using to do this is a built-in Node function which imports an object from another file or module. If you’d like a refresher on importing and exporting modules, read Understanding module.exports and exports in Node.js.
After that, we’re creating a new Express app using the express function and assigning it to an app variable. We then tell the app that, whenever it receives a request from forward slash anything, it should use the routes file.
Finally, we export our app variable so that it can be imported and used in other files.
In start.js:
const app = require('./app');

const server = app.listen(3000, () => {
  console.log(`Express is running on port ${server.address().port}`);
});
Here we’re importing the Express app we created in app.js (note that we can leave the .js off the file name in the require statement). We then tell our app to listen on port 3000 for incoming connections and output a message to the terminal to indicate that the server is running.
And in routes/index.js:
const express = require('express');

const router = express.Router();

router.get('/', (req, res) => {
  res.send('It works!');
});

module.exports = router;

Here, we’re importing Express into our routes file and then grabbing the router from it. We then use the router to respond to any requests to the root URL (in this case http://localhost:3000) with an “It works!” message.
Kick off the App
Finally, let’s add an npm script to make nodemon start watching our app. Change the scripts section of the package.json file to look like this:
"scripts": {
  "watch": "nodemon ./start.js"
},

The scripts property of the package.json file is extremely useful, as it lets you specify arbitrary scripts to run in different scenarios. This means that you don’t have to repeatedly type out long-winded commands with a difficult-to-remember syntax. If you’d like to find out more about what npm scripts can do, read Give Grunt the Boot! A Guide to Using npm as a Build Tool.
Now, type npm run watch from the terminal and visit http://localhost:3000.
You should see “It works!”
Continue reading %Build a Simple Beginner App with Node, Bootstrap & MongoDB%

Link: https://www.sitepoint.com/build-simple-beginner-app-node-bootstrap-mongodb/

An Introduction to MongoDB

MongoDB is an open-source, document-oriented, NoSQL database program. If you’ve been involved with the traditional, relational databases for long, the idea of a document-oriented, NoSQL database might indeed sound peculiar. “How can a database not have tables?”, you might wonder. This tutorial introduces you to some of the basic concepts of MongoDB and should help […]
Continue reading %An Introduction to MongoDB%

Link: https://www.sitepoint.com/an-introduction-to-mongodb/

Create New Express.js Apps in Minutes with Express Generator

Express.js is a Node.js web framework that has gained immense popularity due to its simplicity. It has easy-to-use routing and simple support for view engines, putting it far ahead of the basic Node HTTP server.
However, starting a new Express application requires a certain amount of boilerplate code: starting a new server instance, configuring a view engine, setting up error handling.
Although there are various starter projects and boilerplates available, Express has its own command-line tool that makes it easy to start new apps, called the express-generator.
What is Express?
Express has a lot of features built in, and a lot more features you can get from other packages that integrate seamlessly, but there are three main things it does for you out of the box:

Routing. This is how /home, /blog and /about all give you different pages. Express makes it easy for you to modularize this code by allowing you to put different routes in different files.
Middleware. If you’re new to the term, basically middleware is “software glue”. It processes requests before your routes get them, letting you handle hard-to-do stuff like cookie parsing, file uploads, errors, and more.
Views. Views are how HTML pages are rendered with custom content. You pass in the data you want to be rendered and Express will render it with your given view engine.
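The middleware idea from the list above can be sketched in a few lines of plain JavaScript. This is a toy model of the (req, res, next) chain, not Express's real implementation:

```javascript
// Toy model of the middleware pattern: each function receives
// (req, res, next) and calls next() to hand control to the next one.
function run(middlewares, req, res) {
  let i = 0;
  function next() {
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  }
  next();
}

const req = { url: '/blog' };
const res = {};

run([
  (req, res, next) => { req.cookies = {}; next(); },     // e.g. cookie parsing
  (req, res, next) => { res.body = `Page: ${req.url}`; } // the route handler
], req, res);

console.log(res.body); // "Page: /blog"
```

A middleware that never calls next() short-circuits the chain, which is exactly how error pages and auth checks end a request early.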

Getting Started
Starting a new project with the Express generator is as simple as running a few commands:
npm install express-generator -g

This installs the Express generator as a global package, allowing you to run the express command in your terminal:
express myapp

This creates a new Express project called myapp which is then placed inside of the myapp directory.
cd myapp

If you’re unfamiliar with terminal commands, this one puts you inside of the myapp directory.
npm install

npm is the default Node.js package manager. Running npm install installs all dependencies for the project. By default, the express-generator includes several packages that are commonly used with an Express server.
The generator CLI takes half a dozen arguments, but the two most useful ones are the following:

-v <engine>. This lets you select a view engine to install. The default is jade. Although this still works, it has been deprecated in favor of pug.
-c <engine>. By default, the generator creates a very basic CSS file for you, but selecting a CSS engine (such as less, sass, stylus or compass) will configure your new app with middleware to compile it.

Now that we’ve got our project set up and dependencies installed, we can start the new server by running the following:
npm start

Then browse to http://localhost:3000 in your browser.
Application Structure
The generated Express application starts off with four folders.
The bin folder contains the executable file that starts your app. It starts the server (on port 3000, if no alternative is supplied) and sets up some basic error handling. You don’t really need to worry about this file because npm start will run it for you.
The public folder is one of the important ones: ​everything​ in this folder is accessible to people connecting to your application. In this folder, you’ll put JavaScript, CSS, images, and other assets that people need when they connect to your website.
The routes folder is where you’ll put your router files. The generator creates two files, index.js and users.js, which serve as examples of how to separate out your application’s route configuration.
Usually, here you’ll have a different file for each major route on your website. So you might have files called blog.js, home.js, and/or about.js in this folder.
The views folder is where you have the files used by your templating engine. The generator will configure Express to look in here for a matching view when you call the render method.
Outside of these folders, there’s one file that you should know well.
Continue reading %Create New Express.js Apps in Minutes with Express Generator%

Link: https://www.sitepoint.com/create-new-express-js-apps-with-express-generator/

Forms, File Uploads and Security with Node.js and Express

If you’re building a web application, you’re likely to encounter the need to build HTML forms on day one. They’re a big part of the web experience, and they can be complicated.
Typically the form handling process involves:

displaying an empty HTML form in response to an initial GET request
user submitting the form with data in a POST request
validation on both the client and the server
re-displaying the form populated with escaped data and error messages if invalid
doing something with the sanitized data on the server if it’s all valid
redirecting the user or showing a success message after data is processed.
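In plain JavaScript, the validation and escaping steps from that list might look something like this sketch (the rules and helper names are made up for illustration; a real app would typically lean on a library such as express-validator):

```javascript
// Illustrative sketch of server-side validation and HTML escaping.
// The validation rules and helper names are invented for this example.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function validate(body) {
  const errors = {};
  if (!body.message) errors.message = { msg: 'A message is required' };
  if (!/^\S+@\S+\.\S+$/.test(body.email || '')) {
    errors.email = { msg: "That email doesn't look right" };
  }
  return errors;
}

const errors = validate({ message: '', email: 'not-an-email' });
console.log(Object.keys(errors)); // [ 'message', 'email' ]
console.log(escapeHtml('<script>')); // &lt;script&gt;
```

Escaping on re-display is what stops a submitted value like `<script>` from executing in the visitor's browser.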

Handling form data also comes with extra security considerations.
We’ll go through all of these and explain how to build them with Node.js and Express — the most popular web framework for Node. First, we’ll build a simple contact form where people can send a message and email address securely, and then take a look at what’s involved in processing file uploads.

Make sure you’ve got a recent version of Node.js installed; node -v should return 8.9.0 or higher.
Download the starting code from here with git:
git clone https://github.com/sitepoint-editors/node-forms.git
cd node-forms
npm install
npm start

There’s not too much code in there. It’s just a bare-bones Express setup with EJS templates and error handlers:
// server.js
const path = require('path')
const express = require('express')
const layout = require('express-layout')
const routes = require('./routes')
const app = express()

app.set('views', path.join(__dirname, 'views'))
app.set('view engine', 'ejs')

const middlewares = [
  layout(),
  express.static(path.join(__dirname, 'public')),
]
app.use(middlewares)

app.use('/', routes)

app.use((req, res, next) => {
  res.status(404).send("Sorry can't find that!")
})

app.use((err, req, res, next) => {
  console.error(err.stack)
  res.status(500).send('Something broke!')
})

app.listen(3000, () => {
  console.log(`App running at http://localhost:3000`)
})
The root url / simply renders the index.ejs view.
// routes.js
const express = require('express')
const router = express.Router()

router.get('/', (req, res) => {
  res.render('index')
})

module.exports = router

Displaying the Form
When people make a GET request to /contact, we want to render a new view contact.ejs:
// routes.js
router.get('/contact', (req, res) => {
  res.render('contact')
})
The contact form will let them send us a message and their email address:

<div class="form-header">
  <h2>Send us a message</h2>
</div>

<form method="post" action="/contact" novalidate>
  <div class="form-field">
    <label for="message">Message</label>
    <textarea class="input" id="message" name="message" rows="4" autofocus></textarea>
  </div>
  <div class="form-field">
    <label for="email">Email</label>
    <input class="input" id="email" name="email" type="email" value="" />
  </div>
  <div class="form-actions">
    <button class="btn" type="submit">Send</button>
  </div>
</form>

See what it looks like at http://localhost:3000/contact.
Form Submission
To receive POST values in Express, you first need to include the body-parser middleware, which exposes submitted form values on req.body in your route handlers. Add it to the end of the middlewares array:
// server.js
const bodyParser = require('body-parser')

const middlewares = [
  // …
  bodyParser.urlencoded({ extended: true }),
]

It’s a common convention for forms to POST data back to the same URL as was used in the initial GET request. Let’s do that here and handle POST /contact to process the user input.
Let’s look at the invalid submission first. If invalid, we need to pass back the submitted values to the view so they don’t need to re-enter them along with any error messages we want to display:
router.get('/contact', (req, res) => {
  res.render('contact', {
    data: {},
    errors: {}
  })
})

router.post('/contact', (req, res) => {
  res.render('contact', {
    data: req.body, // { message, email }
    errors: {
      message: {
        msg: 'A message is required'
      },
      email: {
        msg: "That email doesn't look right"
      }
    }
  })
})
If there are any validation errors, we’ll do the following:

display the errors at the top of the form
set the input values to what was submitted to the server
display inline errors below the inputs
add a form-field-invalid class to the fields with errors.

<!-- views/contact.ejs -->
<div class="form-header">
  <% if (Object.keys(errors).length === 0) { %>
    <h2>Send us a message</h2>
  <% } else { %>
    <h2 class="errors-heading">Oops, please correct the following:</h2>
    <ul class="errors-list">
      <% Object.values(errors).forEach(error => { %>
        <li><%= error.msg %></li>
      <% }) %>
    </ul>
  <% } %>
</div>

<form method="post" action="/contact" novalidate>
  <div class="form-field <%= errors.message ? 'form-field-invalid' : '' %>">
    <label for="message">Message</label>
    <textarea class="input" id="message" name="message" rows="4" autofocus><%= data.message %></textarea>
    <% if (errors.message) { %>
      <div class="error"><%= errors.message.msg %></div>
    <% } %>
  </div>
  <div class="form-field <%= errors.email ? 'form-field-invalid' : '' %>">
    <label for="email">Email</label>
    <input class="input" id="email" name="email" type="email" value="<%= data.email %>" />
    <% if (errors.email) { %>
      <div class="error"><%= errors.email.msg %></div>
    <% } %>
  </div>
  <div class="form-actions">
    <button class="btn" type="submit">Send</button>
  </div>
</form>

Submit the form at http://localhost:3000/contact to see this in action. That’s everything we need on the view side.
Continue reading %Forms, File Uploads and Security with Node.js and Express%

Link: https://www.sitepoint.com/forms-file-uploads-security-node-express/

A Beginner Splurge in Node.js

It’s 3 a.m., hands over the keyboard, staring at an empty console. The bright prompt over a dark backdrop is ready, yearning to take in commands. Want to hack on Node.js for a little while? One of the exciting things about Node.js is that it runs anywhere. This opens up the stack to various ways of experimenting with it. For any seasoned veteran, a run through the command line tooling is good fun. What I like most is that we can survey the stack from within the safety net of the command line. The cool thing is that we are still talking about JavaScript, so most of you should not have any problem. So, why not fire up Node in the console?
In this article, I’ll introduce you to Node.js. My goal is to go over the main highlights while hiking up some pretty high ground. This is an intermediate overview of the stack while keeping it all inside the console. If you want a beginner-friendly guide to Node.js, I suggest you watch the SitePoint Premium course Node.js: An Introduction.
Continue reading %A Beginner Splurge in Node.js%

Link: https://www.sitepoint.com/a-beginner-splurge-in-node-js/

How to Use SSL/TLS with Node.js

It’s time! No more procrastination and poor excuses: Let’s Encrypt works beautifully, and having an SSL-secured site is easier than ever.
In this article, I’ll work through a practical example of how to add a Let’s Encrypt-generated certificate to your Express.js server.
But protecting our sites and apps with HTTPS isn’t enough. We should also demand encrypted connections from the servers we’re talking to. We’ll see that there are ways to activate the SSL/TLS layer even when it isn’t enabled by default.
Let’s start with a short review of HTTPS’s current state.
HTTPS Everywhere
The HTTP/2 specification was published as RFC 7540 in May 2015, which means at this point it’s a part of the standard. This was a major milestone. Now we can all upgrade our servers to use HTTP/2. One of the most important aspects is the backwards compatibility with HTTP 1.1 and the negotiation mechanism to choose a different protocol. Although the standard doesn’t specify mandatory encryption, currently no browser supports HTTP/2 unencrypted. This gives HTTPS another boost. Finally we’ll get HTTPS everywhere!
What does our stack actually look like? From the perspective of a website running in the browser (application level) we have roughly the following layers to reach the IP level:

Client browser
HTTP
SSL/TLS
TCP
IP

HTTPS is nothing more than the HTTP protocol on top of SSL/TLS. Hence all the rules of HTTP still have to apply. What does this additional layer actually give us? There are multiple advantages: we get authentication by having keys and certificates; a certain kind of privacy and confidentiality is guaranteed, as the connection is encrypted in an asymmetric manner; and data integrity is also preserved: that transmitted data can’t be changed during transit.
One of the most common myths is that using SSL/TLS requires too many resources and slows down the server. This is certainly not true anymore. We also don’t need any specialized hardware with cryptography units. Even for Google, the SSL/TLS layer accounts for less than 1% of the CPU load. Furthermore, the network overhead of HTTPS as compared to HTTP is below 2%. All in all, it wouldn’t make sense to forgo HTTPS for having a little bit of overhead.

The most recent version is TLS 1.3. TLS is the successor of SSL, whose final release was SSL 3.0. The changes from SSL to TLS preclude interoperability. The basic procedure is, however, unchanged. We have three different encrypted channels. The first is a public key infrastructure for certificate chains. The second provides public key cryptography for key exchanges. Finally, the third one uses symmetric cryptography for data transfers.
TLS 1.3 uses hashing for some important operations. Theoretically, it’s possible to use any hashing algorithm, but it’s highly recommended to use SHA2 or a stronger algorithm. SHA1 has been a standard for a long time but has recently become obsolete.
HTTPS is also gaining more attention for clients. Privacy and security concerns have always been around, but with the growing amount of online accessible data and services, people are getting more and more concerned. A useful browser plugin is “HTTPS Everywhere”, which encrypts our communications with most websites.

The creators realized that many websites offer HTTPS only partially. The plugin allows us to rewrite requests for those sites, which offer only partial HTTPS support. Alternatively, we can also block HTTP altogether (see the screenshot above).
Basic Communication
The certificate’s validation process involves validating the certificate signature and expiration. We also need to verify that it chains to a trusted root. Finally, we need to check to see if it was revoked. There are dedicated trusted authorities in the world that grant certificates. In case one of these were to become compromised, all other certificates from the said authority would get revoked.
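The expiration part of that check is just date arithmetic. Here's a toy sketch of that one step (real clients delegate all of this to the TLS library, which also verifies the signature, the chain of trust and revocation status):

```javascript
// Toy sketch of the expiry check only. Real validation is far more
// involved: signature, chain to a trusted root, revocation.
function isExpired(cert, now = new Date()) {
  return now < cert.validFrom || now > cert.validTo;
}

const cert = {
  validFrom: new Date('2018-01-01'),
  validTo: new Date('2018-04-01') // Let's Encrypt certs last ~90 days
};

console.log(isExpired(cert, new Date('2018-02-15'))); // false (still valid)
console.log(isExpired(cert, new Date('2018-05-01'))); // true (expired)
```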
The sequence diagram for a HTTPS handshake looks as follows. We start with the init from the client, which is followed by a message with the certificate and key exchange. After the server sends its completed package, the client can start the key exchange and cipher specification transmission. At this point, the client is finished. Finally the server confirms the cipher specification selection and closes the handshake.

The whole sequence is triggered independently of HTTP. If we decide to use HTTPS, only the socket handling is changed. The client is still issuing HTTP requests, but the socket will perform the previously described handshake and encrypt the content (header and body).
So what do we need to make SSL/TLS work with an Express.js server?
By default, Node.js serves content over HTTP. But there’s also an HTTPS module which we have to use in order to communicate over a secure channel with the client. This is a built-in module, and the usage is very similar to how we use the HTTP module:
const express = require("express"),
      https = require("https"),
      fs = require("fs");

const options = {
  key: fs.readFileSync("/srv/www/keys/my-site-key.pem"),
  cert: fs.readFileSync("/srv/www/keys/chain.pem")
};

const app = express();

app.use((req, res) => {
  res.end("hello world\n");
});

https.createServer(options, app).listen(8080);

Ignore the /srv/www/keys/my-site-key.pem and /srv/www/keys/chain.pem files for now. Those are the SSL certificates we need to generate, which we’ll do a bit later. This is the part that changed with Let’s Encrypt. Previously, we had to generate a private/public key pair, send it to a trusted authority, pay them and probably wait a bit in order to get an SSL certificate. Nowadays, Let’s Encrypt generates and validates your certificates for free and instantly!
Generating Certificates
A certificate, which is signed by a trusted certificate authority (CA), is demanded by the TLS specification. The CA ensures that the certificate holder is really who he claims to be. So basically when you see the green lock icon (or any other greenish sign to the left side of the URL in your browser) it means that the server you’re communicating with is really who it claims to be. If you’re on facebook.com and you see a green lock, it’s almost certain you really are communicating with Facebook and no one else can see your communication — or rather, no one else can read it.
It’s worth noting that this certificate doesn’t necessarily have to be verified by an authority such as Let’s Encrypt. There are other paid services as well. You can technically sign it yourself, but then the users visiting your site won’t get an approval from the CA when visiting and all modern browsers will show a big warning flag to the user and ask to be redirected “to safety”.
In the following example, we’ll use the Certbot, which is used to generate and manage certificates with Let’s Encrypt.
On the Certbot site you can find instructions on how to install Certbot on your OS. Here we’ll follow the macOS instructions. In order to install Certbot, run
brew install certbot

Webroot is a Certbot plugin that, in addition to Certbot’s default functionality of automatically generating your public/private key pair and an SSL certificate for them, also copies the certificates to your webroot folder and verifies your server by placing verification codes into a hidden temporary directory named .well-known. We’ll use this plugin to skip doing some of these steps manually. It is installed with Certbot by default. To generate and verify our certificates, we’ll run the following:
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com

You may have to run this command as sudo, as it will try to write to /var/log/letsencrypt.
You’ll also be asked for your email address. It’s a good idea to put in a real address you use often, as you’ll get a notification if your certificate is about to expire. The trade-off for Let’s Encrypt being free is that its certificates expire every three months. Luckily, renewal is as easy as running one simple command, which we can assign to a cron job and then not have to worry about expiration. Additionally, it’s a good security practice to renew SSL certificates regularly, as it gives attackers less time to break the encryption. Sometimes developers even set up this cron job to run daily, which is completely fine and even recommended.
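A daily renewal cron entry could look like this (the schedule is just an example; certbot renew only replaces certificates that are close to expiry, so running it often is harmless):

```
# Attempt renewal every day at 03:00; certbot skips certs that aren't due.
0 3 * * * certbot renew --quiet
```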
Keep in mind that you have to run this command on a server to which the domain specified under the -d (for domain) flag resolves — that is, your production server. Even if you have the DNS resolution in your local hosts file, this won’t work, as the domain will be verified from outside. So if you’re doing this locally, it will most likely not work at all, unless you opened up a port from your local machine to the outside and have it running behind a domain name which resolves to your machine, which is a highly unlikely scenario.
Last but not least, after running this command, the output will contain paths to your private key and certificate files. Copy these values into the previous code snippet, into the cert property for certificate and key property for the key.
// …

const options = {
key: fs.readFileSync("/var/www/example/sslcert/privkey.pem"),
cert: fs.readFileSync("/var/www/example/sslcert/fullchain.pem") // these paths might differ for you, make sure to copy from the certbot output

// …

Continue reading %How to Use SSL/TLS with Node.js%

Link: https://www.sitepoint.com/how-to-use-ssltls-with-node-js/