Why I’m Switching from React to Cycle.js

I would guess that most developers these days are using some sort of framework for developing apps. Frameworks are there to help us structure complex apps and save us time. Every day we can see plenty of discussion about which framework is the best, which framework you should learn first, and so on. So today, I would like to share my experience, and why I’m switching from React to Cycle.js.
React is probably the most popular frontend framework these days and it has a great community. I am a big fan of it and it really helped me to change the way I think about web apps and how I develop them. Some developers love it, and some think that it’s not as good as everyone says.
Most people start using React without considering whether there might be a better way to build a web app. That question is what made me try Cycle.js, a new reactive framework that is becoming more popular every day. In this article, I want to explain what reactive programming is, how Cycle.js works, and why I think it’s better than React. So let’s start!
What is Reactive Programming?
Reactive programming (RP) is programming with asynchronous data streams. If you’ve already built a web app, you’ve probably done a lot of reactive programming. For example, click events are an asynchronous data stream: we can observe them and perform side effects in response. The idea behind RP is to give us the ability to create data streams from anything and manipulate them. We then have the same abstraction for all our side effects, which is easier to use, maintain, and test.
You’re probably thinking “why do I need this new reactive programming thing?” The answer is simple: reactive programming will help you unify your code and make it more consistent. You won’t need to think about how things should work and how to properly implement them. Just write the code in the same way, no matter what data you work with (click events, HTTP calls, web sockets…). Everything is a stream of data, and each stream has many functions you can use to work with it, such as map and filter. These functions return new streams that can be used in turn, and so on.
Reactive programming gives you a higher level of abstraction over your code, so you can focus on business logic and create interactive user experiences.

Image taken from https://gist.github.com/staltz/868e7e9bc2a7b8c1f754
Reactive Programming in JavaScript
In JavaScript, we have a couple of awesome libraries for dealing with data streams. The most well-known is RxJS, the JavaScript implementation of ReactiveX, an API for asynchronous programming with observable streams. You can create an Observable (a stream of data) and manipulate it with various functions.
The second is Most.js. It claims the best performance, and its authors back that up with numbers: see their performance comparison.
I would also like to mention one small and fast library, written by the creator of Cycle.js specifically for it: xstream. It has only 26 methods, is approximately 30kB, and is one of the fastest reactive programming libraries for JavaScript.
In the examples below, I will use the xstream library. Cycle.js is meant to be a small framework, so I want to pair it with the smallest reactive library.
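To give a quick feel for the stream API before we dive in (a minimal sketch of my own, not taken from the article), here is xstream creating, transforming, and consuming a stream:
import xs from 'xstream';

// Emit 0, 1, 2, … every second, keep the even values, double them,
// and complete after five results.
const numbers$ = xs.periodic(1000)
  .filter(i => i % 2 === 0)
  .map(i => i * 2)
  .take(5);

numbers$.addListener({
  next: value => console.log(value),
  error: err => console.error(err),
  complete: () => console.log('done')
});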
What is Cycle.js?
Cycle.js is a functional and reactive JavaScript framework. It abstracts your application as a pure function, main(). In functional programming, functions should have only inputs and outputs, without any side effects. In Cycle.js’s main() function, the inputs are read effects (sources) from the external world and the outputs (sinks) are write effects to the external world. Managing side effects is done using drivers: plugins that handle DOM effects, HTTP effects, web sockets, and so on.

Image taken from Cycle.js website
Cycle.js is there to help us build our user interfaces, test them and write reusable code. Each component is just one pure function that can run independently.
The core API has just one function, run.
run(app, drivers);

It takes two arguments: app, the main pure function, and drivers, the plugins that handle side effects.
Cycle.js separates additional functionality into smaller modules. They are:

@cycle/dom – a collection of drivers that work with the DOM; it has a DOM driver and an HTML driver, based on the Snabbdom virtual DOM library
@cycle/history – a driver for the History API
@cycle/http – a driver for HTTP requests, based on superagent
@cycle/isolate – a function for making scoped dataflow components
@cycle/jsonp – a driver for making HTTP requests through JSONP
@cycle/most-run – a run function for apps made with most
@cycle/run – a run function for apps made with xstream
@cycle/rxjs-run – a run function for apps made with rxjs

Cycle.js Code
Let’s see some Cycle.js code. We’ll create a simple app to demonstrate how it works, and a good old counter app is ideal for this example. We’ll see how handling DOM events and re-rendering the DOM works in Cycle.js.
Let’s create two files, index.html and main.js. index.html will just serve our main.js file, where the whole of our logic will be. We are also going to create a new package.json file, so run:
npm init -y

Next, let’s install our main dependencies:
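To give a rough idea of where the counter is heading, here is a minimal sketch of a main.js (my own illustration, assuming the @cycle/run and @cycle/dom packages with xstream, plus a hypothetical #app mount point; the article itself continues with the full walkthrough):
import xs from 'xstream';
import { run } from '@cycle/run';
import { makeDOMDriver, div, button } from '@cycle/dom';

function main(sources) {
  // Read effects: click streams coming in from the DOM driver.
  const action$ = xs.merge(
    sources.DOM.select('.dec').events('click').mapTo(-1),
    sources.DOM.select('.inc').events('click').mapTo(+1)
  );

  // Fold the clicks into a running count.
  const count$ = action$.fold((count, delta) => count + delta, 0);

  // Write effect: a stream of virtual DOM trees for the DOM driver.
  const vdom$ = count$.map(count =>
    div([
      button('.dec', '-'),
      button('.inc', '+'),
      div(`Count: ${count}`)
    ])
  );

  return { DOM: vdom$ };
}

run(main, {
  DOM: makeDOMDriver('#app') // assumes index.html contains a #app element
});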
Continue reading %Why I’m Switching from React to Cycle.js%

Link: https://www.sitepoint.com/switching-from-react-to-cycle-js/

Introduction to Kubernetes: How to Deploy a Node.js Docker App

While container technology has existed for years, Docker really took it mainstream. A lot of companies and developers now use containers to ship their apps. Docker provides an easy-to-use interface to work with containers.
However, for any non-trivial application, you will not be deploying “one container”, but rather a group of containers on multiple hosts. In this article, we’ll take a look at Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications.

Prerequisites: This article assumes some familiarity with Docker. If you need a refresher, check out Understanding Docker, Containers and Safer Software Delivery.

What Problem Does Kubernetes Solve?
With Docker, you have simple commands like docker run or docker stop to start or stop a container. While these commands let you operate on a single container, there is no docker deploy command to push new images to a group of hosts.
Many tools have appeared in recent times to solve this problem of "container orchestration"; popular ones being Mesos, Docker Swarm (now part of the Docker engine), Nomad, and Kubernetes. All of them come with their pros and cons but, arguably, Kubernetes has the most mileage at this point.
Kubernetes (also referred to as ‘k8s’) provides powerful abstractions that completely decouple application operations, such as deployments and scaling, from underlying infrastructure operations. So, with Kubernetes, you do not work with individual hosts or virtual machines on which to run your code; rather, Kubernetes sees the underlying infrastructure as a sea of compute on which to put containers.
Kubernetes Concepts
Kubernetes has a client/server architecture. The Kubernetes server runs on your cluster (a group of hosts), on which you will deploy your application. You typically interact with the cluster using a client, such as the kubectl CLI.
Pods
A pod is the basic unit that Kubernetes deals with: a group of containers. If there are two or more containers that always need to work together, and should be on the same machine, make them a pod. A pod is a useful abstraction, and there was even a proposal to make them a first-class Docker object.
Node
A node is a physical or virtual machine, running Kubernetes, onto which pods can be scheduled.
Label
A label is a key/value pair that is used to identify a resource. You could label all your pods serving production traffic with "role=production", for example.
Selector
Selectors let you search/filter resources by labels. Following on from the previous example, to get all production pods your selector would be "role=production".
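With the kubectl client (introduced below), you would typically pass such a selector via the -l flag, for example:
kubectl get pods -l role=production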
Service
A service defines a set of pods (typically selected by a "selector") and a means by which to access them, such as single stable IP address and corresponding DNS name.
Deploy a Node.js App on GKE using Kubernetes
Now that we’re aware of the basic Kubernetes concepts, let’s see them in action by deploying a Node.js application on Google Container Engine (referred to as GKE). You’ll need a Google Cloud Platform account to follow along (Google provides a free trial with $300 of credit).
1. Install Google Cloud SDK and Kubernetes Client
kubectl is the command line interface for running commands against Kubernetes clusters. You can install it as a part of Google Cloud SDK. After Google Cloud SDK installs, run the following command to install kubectl:
$ gcloud components install kubectl

or brew install kubectl if you are on a Mac. To verify the installation, run kubectl version.
You’ll also need to set up the Google Cloud SDK with credentials for your Google Cloud account. Just run gcloud init and follow the instructions.
2. Create a GCP project
All Google Cloud Platform resources are created under a project, so create one from the web UI.
Set the default project ID while working with CLI by running:
gcloud config set project {PROJECT_ID}

3. Create a Docker Image of your application
Here is the application that we’ll be working with: express-hello-world. You can see in the Dockerfile that we are using an existing Node.js image from Docker Hub. Now, we’ll build our application image by running:
$ docker build -t hello-world-image .

Run the app locally by running:
docker run --name hello-world -p 3000:3000 hello-world-image

If you visit localhost:3000 you should get a response.
4. Create a cluster
Now we’ll create a cluster with three instances (virtual machines), on which we’ll deploy our application. You can do it from the fairly intuitive web UI by going to the Container Engine page, or by running this command:
Continue reading %Introduction to Kubernetes: How to Deploy a Node.js Docker App%

Link: https://www.sitepoint.com/kubernetes-deploy-node-js-docker-app/

Refactor Code in Your Lunch Break: Getting Started with Codemods

Maintaining a codebase can be a frustrating experience for any developer, especially a JavaScript codebase. With ever-changing standards, syntax, and third party package breaking changes, it can be hard to keep up.
In recent years, the JavaScript landscape has changed beyond recognition. Advancements in the core JavaScript language have meant that even the simple task of variable declaration has changed. ES6 introduced let and const, arrow functions, and many more core changes, each bringing improvements and benefits to developers and their applications.
Pressure on developers to produce and maintain code that will stand up to the test of time is on the increase. This article will show you how you can automate large-scale refactoring tasks with the use of codemods and the JSCodeshift tool, allowing you to easily update your code to take advantage of newer language features, for example.
Codemod
Codemod is a tool developed by Facebook to help with refactoring large-scale codebases. It enables a developer to refactor a large codebase in a small amount of time. In some cases, a developer might use an IDE to rename a class or variable, but this is usually scoped to one file at a time. The next tool in a developer’s refactoring tool kit is a global find and replace, which can work in many cases with the use of complex regular expressions. Many scenarios are not suited to this method, however; for example, when there are multiple implementations that need to be changed.
Codemod is a Python tool that takes a number of parameters including the expression you wish to match and the replacement.
codemod -m -d /code/myAwesomeSite/pages --extensions php,html \
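The article goes on to introduce JSCodeshift for transforms that understand JavaScript syntax rather than plain text. As a flavour of what such a transform module looks like (a sketch of my own with a hypothetical rename, not code from the article), this renames every identifier foo to bar:
// rename-foo.js (run with: jscodeshift -t rename-foo.js src/)
module.exports = function transformer(file, api) {
  const j = api.jscodeshift;

  return j(file.source)
    .find(j.Identifier, { name: 'foo' })
    .replaceWith(() => j.identifier('bar'))
    .toSource();
};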

Link: https://www.sitepoint.com/getting-started-with-codemods/

How to Build a Real-Time GitHub Issue To-Do List with CanJS

CanJS is a collection of front-end libraries that make it easier to build complex and innovative web apps that are maintainable over a long period of time. It’s broken up into dozens of individual packages, so you can pick and choose what you’d like in your application without being bogged down by a huge 100kb+ dependency.
CanJS promotes the MVVM (Model-View-ViewModel) architecture with the following key packages:

can-component for custom elements
can-connect for communicating with APIs
can-define for observables
can-stache for Handlebars-like templating

In this tutorial, we’re going to make a to-do list app that uses a GitHub repository’s issue list as its source. Our app will update in real-time thanks to GitHub’s Webhook API and we’ll be able to reorder issues thanks to jQuery UI’s sortable interaction.
You can find the finished source code for this app on GitHub. Here’s what the final app will look like:

If you’re interested in taking your JavaScript skills to the next level, sign up for SitePoint Premium and check out our latest book, Modern JavaScript

MVVM in CanJS
Before we start our project for this tutorial, let’s dive into what MVVM means within a CanJS application.
Data Models
The “Model” in MVVM is for your data model: a representation of the data within your application. Our app deals with individual issues and a list of issues, so these are the data types that we have in our model.
In CanJS, we use can-define/list/list and can-define/map/map to represent arrays and objects, respectively. These are observable types of data that will automatically update the View or ViewModel (in MVVM) when they change.
For example, our app will have an Issue type like this:
import DefineMap from 'can-define/map/map';
const Issue = DefineMap.extend('Issue', {
  id: 'number',
  title: 'string',
  sort_position: 'number',
  body: 'string'
});

Each instance of Issue will have four properties: id, title, sort_position, and body. When a value is set, can-define/map/map will convert that value to the type specified above, unless the value is null or undefined. For example, setting the id to the string “1” will give the id property the number value 1, while setting it to null will actually make it null.
We’ll define a type for arrays of issues like this:
import DefineList from 'can-define/list/list';
Issue.List = DefineList.extend('IssueList', {
  '#': Issue
});

The # property on a can-define/list/list will convert any item in the list to the specified type, so any item in an Issue.List will be an Issue instance.
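As a quick illustration of that conversion (my own snippet, not from the article), passing plain objects into an Issue.List yields Issue instances, given the Issue and Issue.List definitions above:
const issues = new Issue.List([
  { id: 1, title: 'Read SitePoint article', sort_position: 1, body: 'First issue' }
]);

console.log(issues[0] instanceof Issue); // true
console.log(issues[0].id);               // 1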
View Templates
The “view” in a web application is the HTML user interface with which users interact. CanJS can render HTML with a few different template syntaxes, including can-stache, which is similar to Mustache and Handlebars.
Here’s a simple example of a can-stache template:

<ol>
  {{#each issues}}
    <li>
      {{title}}
    </li>
  {{/each}}
</ol>

In the above example, we use {{#each issues}} to iterate through a list of issues, then show the title of each issue with {{title}}. Any changes to the issues list or the issue titles will cause the DOM to be updated (e.g. an li will be added to the DOM if a new issue is added to the list).
View Models
The ViewModel in MVVM is the glue code between the Model and View. Any logic that can’t be contained within the model but is necessary for the view is provided by the ViewModel.
In CanJS, a can-stache template is rendered with a ViewModel. Here’s a really simple example:
import stache from 'can-stache';
const renderer = stache('{{greeting}} world');
const viewModel = {greeting: 'Hello'};
const fragment = renderer(viewModel);
console.log(fragment.textContent); // Logs "Hello world"

Components
The concept that ties all of these things together is a component (or custom element). Components are useful for grouping functionality together and making things reusable across your entire app.
In CanJS, a can-component is made up of a view (can-stache file), a view-model (can-define/map/map), and (optionally) an object that can listen for JavaScript events.
import Component from 'can-component';
import DefineMap from 'can-define/map/map';
import stache from 'can-stache';

const HelloWorldViewModel = DefineMap.extend('HelloWorldVM', {
  greeting: {value: 'Hello'},
  showExclamation: {value: true}
});

Component.extend({
  tag: 'hello-world',
  view: stache('{{greeting}} world{{#if showExclamation}}!{{/if}}'),
  ViewModel: HelloWorldViewModel,
  events: {
    '{element} click': function() {
      this.viewModel.showExclamation = !this.viewModel.showExclamation;
    }
  }
});

const template = stache('<hello-world/>');
document.body.appendChild(template({}));

In the example above, our template will either show “Hello world!” or just “Hello world” (no exclamation mark), depending on whether the user has clicked our custom element.
These four concepts are all you need to know to build a CanJS app! Our example app will use these four ideas to build a full-fledged MVVM app.
Prerequisites for this tutorial
Before getting started, install a recent version of Node.js. We’ll use npm to install a backend server that will handle the communication with GitHub’s API.
Additionally, if you don’t already have a GitHub account, sign up for one.
Set up our local project
Let’s start by creating a new directory for our project and switching to that new directory:
mkdir canjs-github
cd canjs-github

Now let’s create the files we’ll need for our project:
touch app.css app.js index.html

We’ll use app.css for our styles, app.js for our JavaScript, and index.html for the user interface (UI).
CanJS Hello World
Let’s get coding! First, we’re going to add this to our index.html file:
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>CanJS GitHub Issues To-Do List</title>
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
  <link rel="stylesheet" href="app.css">
</head>
<body>

  <script type="text/stache" id="app-template">
    <div class="container">
      <div class="row">
        <div class="col-md-8 col-md-offset-2">
          <h1 class="page-header text-center">
            {{pageTitle}}
          </h1>
        </div>
      </div>
    </div>
  </script>

  <script type="text/stache" id="github-issues-template">
  </script>

  <script src="https://unpkg.com/jquery@3/dist/jquery.min.js"></script>
  <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
  <script src="https://unpkg.com/can@3/dist/global/can.all.js"></script>
  <script src="/socket.io/socket.io.js"></script>
  <script src="app.js"></script>
</body>
</html>

This has a bunch of different parts, so let’s break it down:

The two link elements in the head are the stylesheets for our project. We’re using Bootstrap for some base styles and we’ll have some customizations in app.css
The first script element (with id="app-template") contains the root template for our app
The second script element (with id="github-issues-template") will contain the template for the github-issues component we will create later in this tutorial
The script elements at the end of the page load our dependencies: jQuery, jQuery UI, CanJS, Socket.io, and our app code

In our app, we’ll use jQuery UI (which depends on jQuery) to sort the issues with drag and drop. We’ve included can.all.js so we have access to every CanJS module; normally, you would want to use a module loader like StealJS or webpack, but that’s outside the scope of this article. We’ll use Socket.io to receive events from GitHub to update our app in real-time.
Next, let’s add some styles to our app.css file:
form {
  margin: 1em 0 2em 0;
}

.list-group .drag-background {
  background-color: #dff0d8;
}

.text-overflow {
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}

Lastly, let’s add some code to our app.js file:
var AppViewModel = can.DefineMap.extend('AppVM', {
  pageTitle: {
    type: "string",
    value: "GitHub Issues",
  }
});

var appVM = new AppViewModel();
var template = can.stache.from('app-template');
var appFragment = template(appVM);
document.body.appendChild(appFragment);

Let’s break the JavaScript down:

can.DefineMap is used for declaring custom observable object types
AppViewModel is the observable object type that will serve as the root view-model for our app
pageTitle is a property of all AppViewModel instances that defaults to the value GitHub Issues
appVM is a new instance of our app’s view-model
can.stache.from converts the contents of a script tag into a function that renders the template
appFragment is a document fragment of the rendered template with the appVM data
document.body.appendChild takes a DOM node and appends it to the HTML body

Continue reading %How to Build a Real-Time GitHub Issue To-Do List with CanJS%

Link: https://www.sitepoint.com/real-time-github-issue-list-canjs/

How I Designed & Built a Fullstack JavaScript Trello Clone

A few weeks ago, I came across a developer sharing one of his side projects on GitHub: a Trello clone. Built with React, Redux, Express, and MongoDB, the project seemed to have plenty of scope for working on fullstack JS skills.
I asked the developer, Moustapha Diouf, if he’d be interested in writing about his process for choosing, designing, and building the project and happily, he agreed. I hope you’ll find it as interesting as I did, and that it inspires you to work on ambitious projects of your own!
Nilson Jacques, Editor

In this article, I’ll walk you through the approach I take, combined with a couple of guidelines that I use to build web applications. The beauty of these techniques is that they can be applied to any programming language. I personally use them at work on a Java/JavaScript stack and it has made me very productive.
Before moving on to the approach, I’ll take some time to discuss how:

I defined my goals before starting the project.
I decided on the tech stack to use.
I set up the app.

Keep in mind that since the entire project is on GitHub (madClones), I’ll focus on design and architecture rather than actual code. You can check out a live demo of the current code: you can log in with the credentials Test/Test.

If you’re interested in taking your JavaScript skills to the next level, sign up for SitePoint Premium and check out our latest book, Modern JavaScript

Defining the Goals
I started by taking a couple of hours a day to think about my goals and what I wanted to achieve by building an app. A to-do list was out of the question, because it was not complex enough. I wanted to dedicate myself to at least 4 months of serious work (it’s been 8 months now). After a week of thinking, I came up with the idea to clone applications that I like to use on a daily basis. That is how the Trello clone became a side project.
In summary, I wanted to:

Build a full stack JavaScript application. Come out of my comfort zone and use a different server technology.
Increase my ability to architect, design, develop, deploy and maintain an application from scratch.
Practice TDD (test-driven development) and BDD (behavior-driven development). TDD is a software practice that requires the developer to write tests, watch them fail, then write the minimum code to make the tests pass, and refactor (red, green, refactor). BDD, on the other hand, puts an emphasis on developing with features and scenarios. Its main goal is to be closer to the business and write in a language it can easily understand.
Learn the latest and hottest frameworks. At my job, I use Angular 1.4 and Node 0.10.32 (which is very sad, I know), so I needed to be close to the hot stuff.
Write code that follows the principle of the 3R’s: readability, refactorability, and reusability.
Have fun. This is the most important one. I wanted to have fun and experiment a lot since I was (and still am) the one in charge of the project.

Choosing the Stack
I wanted to build a Node.js server with Express and use a Mongo database. Every view needed to be represented by a document so that one request could get all the necessary data. The main battle was for the front-end tech choice because I was hesitating a lot between Angular and React.
I am very picky when it comes to choosing a framework because only testability, debuggability and scalability really matter to me. Unfortunately, discovering if a framework is scalable only comes with practice and experience.
I started with two proof-of-concepts (POCs): one in Angular 2 and another one in React. Whether you consider one as a library and the other one as a framework doesn’t matter, the end goal is the same: build an app. It’s not a matter of what they are, but what they do. I had a huge preference for React, so I decided to move forward with it.
Getting Started
I start by creating a main folder for the app named TrelloClone. Then I create a server folder that will contain my Express app. For the React app, I bootstrap it with Create React App.
I use the structure below on the client and on the server so that I do not get lost between apps. Having folders with the same responsibility helps me get what I am looking for faster:

src: code to make the app work
src/config: everything related to configuration (database, URLs, application)
src/utils: utility modules that help me do specific tasks. A middleware for example
test: configuration that I only want when testing
src/static: contains images for example
index.js: entry point of the app

Setting up the Client
I use create-react-app since it automates a lot of configuration out of the box. “Everything is preconfigured and hidden so that you can focus on code”, says the repo.
Here is how I structure the app:

A view/component is represented by a folder.
Components used to build that view live inside the component folder.
Routes define the different route options the user has when he/she is on the view.
Modules (ducks structure) are functionalities of my view and/or components.

Setting up the Server
Here is how I structure the app with a folder per domain represented by:

Routes based on the HTTP request
A validation middleware that tests request params
A controller that receives a request and returns a result at the end

If I have a lot of business logic, I will add a service file. I do not try to predict anything; I just adapt to my app’s evolution.
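As a loose illustration of that layout (my own sketch with hypothetical file and route names, not code from the actual project), one domain folder on the Express side might wire up like this:
// boards/routes.js (hypothetical example of the route/middleware/controller split)
const express = require('express');
const { validateBoardParams } = require('./validation');
const controller = require('./controller');

const router = express.Router();

// Route based on the HTTP request, guarded by a validation middleware
// that tests request params, handled by a controller that returns the result.
router.get('/boards/:id', validateBoardParams, controller.getBoard);

module.exports = router;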
Choosing Dependencies
When choosing dependencies, I am only concerned with what I will gain by adding them: if a dependency doesn’t add much value, I skip it. Starting with a POC is usually safe, because it helps you “fail fast”.
If you work in agile development you might know the process, and you might also dislike it. The point here is that the faster you fail, the faster you iterate and the faster you produce something that works in a predictable way. It is a loop between feedback and failure until success.
Client
Here is a list of dependencies that I always install on any React app:
Continue reading %How I Designed & Built a Fullstack JavaScript Trello Clone%

Link: https://www.sitepoint.com/fullstack-javascript-trello-clone/

Exploring ES2017 Decorators in JavaScript

With the introduction of ES2015+, and as transpilation has become commonplace, many of you will have come across newer language features, either in real code or in tutorials. One feature that often has people scratching their heads when they first come across it is JavaScript decorators.
Decorators have become popular thanks to their use in Angular 2+. In Angular, decorators are available thanks to TypeScript, but they are due to be part of the ES2017 language update due out later this year. Let’s take a look at what decorators are, and how they can be used to make your code cleaner and more easily understandable.
Note: If you’re more of a video person, why not sign up for SitePoint Premium and check out some of our popular JavaScript courses?
What is a Decorator?
In its simplest form, a decorator is simply a way of wrapping one piece of code with another – literally “decorating” it.
This is a concept you might well have heard of previously as “Functional Composition” or “Higher-Order Functions”.
This is already possible in standard JavaScript for many use cases, simply by calling one function to wrap another:
function doSomething(name) {
  console.log('Hello, ' + name);
}

function loggingDecorator(wrapped) {
  return function() {
    console.log('Starting');
    const result = wrapped.apply(this, arguments);
    console.log('Finished');
    return result;
  }
}

const wrapped = loggingDecorator(doSomething);

This example produces a new function – in the variable wrapped – that can be called exactly the same way as the doSomething function, and will do exactly the same thing. The difference is that it will do some logging before and after the wrapped function is called.
doSomething('Graham');
// Hello, Graham
wrapped('Graham');
// Starting
// Hello, Graham
// Finished

How to use JavaScript Decorators?
Decorators use a special syntax in ES2017, whereby they are prefixed with an @ symbol and placed immediately before the code being decorated.

Note: At the time of writing, the decorators are currently in "Stage 2 Draft" form, meaning that they are mostly finished but still subject to changes.

It is possible to use as many decorators on the same piece of code as you desire, and they will be applied in the order that you declare them.
For example:
@log()
@immutable()
class Example {
  @time('demo')
  doSomething() {

  }
}

This defines a class and applies three decorators: two to the class itself and one to a property of the class:

@log could log all access to the class.
@immutable could make the class immutable – maybe it calls Object.freeze on new instances.
@time will record how long a method takes to execute and log this out with a unique tag.

At present, using decorators requires transpiler support, since no current browser or Node release has support for them yet.
If you are using Babel, this is enabled simply by using the transform-decorators-legacy plugin.

Note: the use of the word "legacy" in this Plugin is because it supports the way that Babel 5 handled Decorators, which might well be different to the final form when they are standardized.

Why use Decorators?
Whilst functional composition is already possible in JavaScript, it is significantly more difficult – or even impossible – to apply the same techniques to other pieces of code (e.g. classes and class properties).
The ES2017 draft adds support for class and property decorators that can be used to resolve these issues, and future JavaScript versions will probably add decorator support for other troublesome areas of code.
Decorators also allow for a cleaner syntax for applying these wrappers around your code, resulting in something that detracts less from the actual intention of what you are writing.
Different Types of Decorator
At present, the only types of decorator that are supported are on classes and members of classes. This includes properties, methods, getters, and setters.
Decorators are actually nothing more than functions that return another function, and that are called with the appropriate details of the item being decorated. These decorator functions are evaluated once when the program first runs, and the decorated code is replaced with the return value.
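For example, under the legacy proposal that transform-decorators-legacy implements, a member decorator receives the target, the member name, and its property descriptor. A hedged sketch of what the @time decorator used above might look like (my own illustration, not the article's implementation):
function time(tag) {
  // Returns the actual decorator, which is called once with the decorated member's details.
  return function(target, name, descriptor) {
    const original = descriptor.value;

    // Replace the method with a wrapper that measures how long it takes.
    descriptor.value = function(...args) {
      console.time(tag);
      try {
        return original.apply(this, args);
      } finally {
        console.timeEnd(tag);
      }
    };

    return descriptor;
  };
}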
Continue reading %Exploring ES2017 Decorators in JavaScript%

Link: https://www.sitepoint.com/exploring-es2017-decorators-javascript/

Angular and RxJS: Create an API Service to Talk to a REST Backend

This article is part 3 of the SitePoint Angular 2+ Tutorial on how to create a CRUD App with the Angular CLI.

Part 0— The Ultimate Angular CLI Reference Guide
Part 1— Getting our first version of the Todo application up and running
Part 2— Creating separate components to display a list of todo’s and a single todo
Part 3— Update the Todo service to communicate with a REST API
Part 4— Use Angular Router to resolve data
Part 5— Add authentication to protect private content

In part one we learned how to get our Todo application up and running and deploy it to GitHub pages. This worked just fine but, unfortunately, the whole app was crammed into a single component.
In part two we examined a more modular component architecture and learned how to break this single component into a structured tree of smaller components that are easier to understand, re-use and maintain.
In this part, we will update our application to communicate with a REST API back-end.
You don’t need to have followed parts one or two of this tutorial for part three to make sense. You can simply grab a copy of our repo, check out the code from part two, and use that as a starting point. This is explained in more detail below.
A Quick Recap
Here is what our application architecture looked like at the end of part 2:

Currently the TodoDataService stores all data in memory. In this third article, we will update our application to communicate with a REST API back-end instead.
We will:

create a mock REST API back-end
store the API URL as an environment variable
create an ApiService to communicate with the REST API
update the TodoDataService to use the new ApiService
update the AppComponent to handle asynchronous API calls
create an ApiMockService to avoid real HTTP calls when running unit tests

By the end of this article, you will understand:

how you can use environment variables to store application settings
how you can use the Angular HTTP client to perform HTTP requests
how you can deal with Observables that are returned by the Angular HTTP client
how you can mock HTTP calls to avoid making real HTTP request when running unit tests

So, let’s get started!
Up and Running
Make sure you have the latest version of the Angular CLI installed. If you don’t, you can install this with the following command:
npm install -g @angular/cli@latest

If you need to remove a previous version of the Angular CLI, you can:
npm uninstall -g @angular/cli angular-cli
npm cache clean
npm install -g @angular/cli@latest

After that, you’ll need a copy of the code from part two. This is available at https://github.com/sitepoint-editors/angular-todo-app. Each article in this series has a corresponding tag in the repository so you can switch back and forth between the different states of the application.
The code that we ended with in part two and that we start with in this article is tagged as part-2. The code that we end this article with is tagged as part-3.

You can think of tags like an alias to a specific commit id. You can switch between them using git checkout. You can read more on that here.

So, to get up and running (the latest version of the Angular CLI installed) we would do:
git clone git@github.com:sitepoint-editors/angular-todo-app.git
cd angular-todo-app
git checkout part-2
npm install
ng serve

Then visit http://localhost:4200/. If all is well, you should see the working Todo app.
Setting up a REST API back-end
Let’s use json-server to quickly set up a mock back-end.
From the root of the application, run:
npm install json-server --save

Next, in the root directory of our application, create a file called db.json with the following contents:
{
  "todos": [
    {
      "id": 1,
      "title": "Read SitePoint article",
      "complete": false
    },
    {
      "id": 2,
      "title": "Clean inbox",
      "complete": false
    },
    {
      "id": 3,
      "title": "Make restaurant reservation",
      "complete": false
    }
  ]
}

Finally, add a script to package.json to start our back-end:
"scripts": {

  "json-server": "json-server --watch db.json"
}

We can now launch our REST API using:
npm run json-server

which should display:
\{^_^}/ hi!

Loading db.json
Done

Resources
http://localhost:3000/todos

Home
http://localhost:3000

That’s it! We now have a REST API listening on port 3000.
To verify that your back-end is running as expected, you can navigate your browser to http://localhost:3000.
The following endpoints are supported:

GET /todos: get all existing todo’s
GET /todos/:id: get an existing todo
POST /todos: create a new todo
PUT /todos/:id: update an existing todo
DELETE /todos/:id: delete an existing todo

so if you navigate your browser to http://localhost:3000/todos, you should see a JSON response with all todo’s from db.json.
To learn more about json-server, make sure to check out mock REST API’s using json-server.
Storing the API URL
Now that we have our back-end in place, we must store its URL in our Angular application.
Ideally, we should be able to:

store the URL in a single place so that we only have to change it once when we need to change its value
make our application connect to a development API during development and connect to a production API in production

Luckily, Angular CLI supports environments. By default, there are two environments: development and production, both with a corresponding environment file: src/environments/environment.ts and src/environments/environment.prod.ts.
Let’s add our API URL to both files:
// src/environments/environment.ts
// used when we run `ng serve` or `ng build`
export const environment = {
  production: false,

  // URL of development API
  apiUrl: 'http://localhost:3000'
};

// src/environments/environment.prod.ts
// used when we run `ng serve --environment prod` or `ng build --environment prod`
export const environment = {
  production: true,

  // URL of production API
  apiUrl: 'http://localhost:3000'
};

This will later allow us to get the API URL from our environment in our Angular application by doing:
import { environment } from 'environments/environment';

// we can now access environment.apiUrl
const API_URL = environment.apiUrl;

When we run ng serve or ng build, Angular CLI uses the value specified in the development environment (src/environments/environment.ts).
But when we run ng serve --environment prod or ng build --environment prod, Angular CLI uses the value specified in src/environments/environment.prod.ts.
This is exactly what we need to use a different API URL for development and production, without having to change our code.

The application in this article series is not hosted in production, so we specify the same API URL in our development and production environment. This allows us to run ng serve --environment prod or ng build --environment prod locally to see if everything works as expected.

You can find the mapping between dev and prod and their corresponding environment files in .angular-cli.json:
"environments": {
"dev": "environments/environment.ts",
"prod": "environments/environment.prod.ts"
}

You can also create additional environments such as staging by adding a key:
"environments": {
"dev": "environments/environment.ts",
"staging": "environments/environment.staging.ts",
"prod": "environments/environment.prod.ts"
}

and creating the corresponding environment file.
To learn more about Angular CLI environments, make sure to check out The Ultimate Angular CLI Reference Guide.
Now that we have our API URL stored in our environment, we can create an Angular service to communicate with the REST API.
Creating the Service to Communicate with the REST API
Let’s use Angular CLI to create an ApiService to communicate with our REST API:
ng generate service Api --module app.module.ts

which gives the following output:
installing service
create src/app/api.service.spec.ts
create src/app/api.service.ts
update src/app/app.module.ts

The --module app.module.ts option tells Angular CLI to not only create the service but to also register it as a provider in the Angular module defined in app.module.ts.
Let’s open src/app/api.service.ts:
import { Injectable } from '@angular/core';

@Injectable()
export class ApiService {

  constructor() { }

}

and inject our environment and Angular’s built-in HTTP service:
import { Injectable } from '@angular/core';
import { environment } from 'environments/environment';
import { Http } from '@angular/http';

const API_URL = environment.apiUrl;

@Injectable()
export class ApiService {

  constructor(
    private http: Http
  ) {
  }

}

Before we implement the methods we need, let’s have a look at Angular’s HTTP service.

If you’re unfamiliar with the syntax, why not buy our Premium course, Introducing TypeScript.

The Angular HTTP Service
The Angular HTTP service is available as an injectable class from @angular/http.
It is built on top of XHR/JSONP and provides us with an HTTP client that we can use to make HTTP requests from within our Angular application.
The following methods are available to perform HTTP requests:

delete(url, options): perform a DELETE request
get(url, options): perform a GET request
head(url, options): perform a HEAD request
options(url, options): perform an OPTIONS request
patch(url, body, options): perform a PATCH request
post(url, body, options): perform a POST request
put(url, body, options): perform a PUT request

Each of these methods returns an RxJS Observable.
In contrast to the AngularJS 1.x HTTP service methods, which returned promises, the Angular HTTP service methods return Observables.
Don’t worry if you are not yet familiar with RxJS Observables. We only need the basics to get our application up and running. You can gradually learn more about the available operators when your application requires them and the ReactiveX website offers fantastic documentation.

If you want to learn more about Observables, it may also be worth checking out SitePoint’s Introduction to Functional Reactive Programming with RxJS.

Implementing the ApiService Methods
If we think back to the endpoints our REST API back-end exposes:

GET /todos: get all existing todo’s

GET /todos/:id: get an existing todo

POST /todos: create a new todo

PUT /todos/:id: update an existing todo

DELETE /todos/:id: delete an existing todo
we can already create a rough outline of methods we need and their corresponding Angular HTTP methods:

import { Injectable } from '@angular/core';
import { environment } from 'environments/environment';

import { Http, Response } from '@angular/http';
import { Todo } from './todo';
import { Observable } from 'rxjs/Observable';

const API_URL = environment.apiUrl;

@Injectable()
export class ApiService {

  constructor(
    private http: Http
  ) {
  }

  // API: GET /todos
  public getAllTodos() {
    // will use this.http.get()
  }

  // API: POST /todos
  public createTodo(todo: Todo) {
    // will use this.http.post()
  }

  // API: GET /todos/:id
  public getTodoById(todoId: number) {
    // will use this.http.get()
  }

  // API: PUT /todos/:id
  public updateTodo(todo: Todo) {
    // will use this.http.put()
  }

  // DELETE /todos/:id
  public deleteTodoById(todoId: number) {
    // will use this.http.delete()
  }
}

Let’s have a closer look at each of the methods.
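As a taste of what filling one of these in could look like, here is how the getAllTodos method from the outline above might be fleshed out (a sketch assuming the classic Http API, the rxjs map operator and a plain JSON cast; not necessarily the article's final code):
// near the other imports:
// import 'rxjs/add/operator/map';

// API: GET /todos
public getAllTodos(): Observable<Todo[]> {
  return this.http
    .get(API_URL + '/todos')
    .map(response => response.json() as Todo[]);
}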
Continue reading %Angular and RxJS: Create an API Service to Talk to a REST Backend%

Link: https://www.sitepoint.com/angular-rxjs-create-api-service-rest-backend/

Redux vs MobX: Which Is Best for Your Project?

For a lot of JavaScript developers, the biggest complaint with Redux is the amount of boilerplate code needed to implement features. A better alternative is MobX, which provides similar functionality but with less code to write.
For MobX newbies, take a quick look at this introduction written by MobX’s creator. You can also work through this tutorial to gain some practical experience.
The goal of this article is to help JavaScript developers decide which of these two state management solutions is best for their projects. I’ve migrated this CRUD Redux project to MobX to use as an example in this article. I’ll first discuss the pros and cons of using MobX, then I’ll demonstrate actual code samples from both versions to show the difference.

The code for the projects mentioned in this article can be found on GitHub:

https://github.com/sitepoint-editors/redux-crud-example
https://github.com/sitepoint-editors/mobx-crud-example

What do Redux and MobX have in Common?
First, let’s look at what they both have in common. They:

Are open-source libraries
Provide client-side state management
Support time-travel debugging via redux-devtools-extension
Are not tied to a specific framework
Have extensive support for React/React Native frameworks

4 Reasons to use MobX
Let’s now look at the main differences between Redux and MobX.
1. Easy to learn and use
A beginner can learn how to use MobX in just 30 minutes. Once you learn the basics, that’s it. You don’t need to learn anything new. With Redux the basics are easy too. However, once you start building more complex applications you will have to deal with:

Handling async actions with redux-thunk
Simplifying your code with redux-saga
Defining selectors to handle computed values, etc.

With MobX, all these situations are “magically” taken care of. You don’t need additional libraries to handle them.
2. Less code to write
To implement a feature in Redux, you need to update at least four artifacts. This includes writing code for reducers, actions, containers and components. This is particularly annoying if you are working on a small project. MobX, by contrast, only requires you to update at least two artifacts (i.e. the store and the view component).
3. Full support for object-oriented programming
If you prefer writing object-oriented code, then you will be pleased to know you can use OOP to implement state management logic with MobX. Through the use of decorators such as @observable and @observer, you can easily make your plain JavaScript components and stores reactive. If you prefer functional programming, no problem; that is supported as well. Redux, on the other hand, is heavily geared towards functional programming principles. However, you can use the redux-connect-decorator library if you want a class based approach.
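As a small illustration of that decorator-driven style (a generic sketch of my own, not code from the migrated CRUD project), a MobX store and a React component might look like this:
import React from 'react';
import { observable, computed, action } from 'mobx';
import { observer } from 'mobx-react';

class TodoStore {
  @observable todos = [];

  @computed get remaining() {
    return this.todos.filter(todo => !todo.done).length;
  }

  @action addTodo(title) {
    this.todos.push({ title, done: false });
  }
}

const store = new TodoStore();

// Re-renders automatically whenever an observable it reads changes.
const TodoCount = observer(() => <p>{store.remaining} todos left</p>);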
4. Dealing with nested data is easy
In most JavaScript applications, you’ll find yourself working with relational or nested data. To be able to use it in a Redux store, you will have to normalize it first. Next, you have to write some more code to manage tracking of references in normalized data.
In MobX, it is recommended to store your data in a denormalized form. MobX can keep track of the relations for you and will automatically re-render changes. By using domain objects to store your data, you can refer directly to other domain objects defined in other stores. In addition, you can use (@)computed decorators and modifiers for observables to easily solve complex data challenges.
3 Reasons not to use MobX
1. Too much freedom
Redux is a framework that provides strict guidelines on how you write state code. This means you can easily write tests and develop maintainable code. MobX is a library and has no rules on how to implement it. The danger with this is that it’s very easy to take shortcuts and apply quick fixes which can lead to unmaintainable code.
2. Hard to debug
MobX’s internal code “magically” handles a lot of logic to make your application reactive. There’s an invisible area where your data passes between the store and your component, which makes it difficult to debug when you have a problem. If you change state directly in components, without using @action, you will have a hard time pinpointing the source of a bug.
3. There could be a better alternative to MobX
In software development, new emerging trends appear all the time. Within a few short years, current software techniques can quickly lose momentum. At the moment, there are several solutions competing with both Redux and MobX. A few examples are Relay/Apollo & GraphQL, Alt.js and Jumpsuit. Any of these technologies has the potential to become the most popular. If you really want to know which one is best for you, you’ll have to try them all.
Continue reading %Redux vs MobX: Which Is Best for Your Project?%

Link: https://www.sitepoint.com/redux-vs-mobx-which-is-best/

Introduction to Data Management & Visualization in JavaScript

In order to create meaningful visual representations of our data, and the complementary tools to analyze said data, it is important to have a well-conceived data management framework. This requires the right backend storage, a paradigm for data access, and an engaging front-end for presentation and analysis. There are a variety of tools that you can use to build a data access stack in your web applications, which we will be exploring here.
If you are collecting data that is relevant to your visitors, they will want some way to consume it. Our responsibility is to provide transparency to our visitors, give them the best possible experience, and build intuitive and performant tools to allow them access to these insights. The visual representation of that data is only a part of that. The mechanisms that we use to store, transform, and transport that data play as much a part in providing these rich experiences.
Data Storage Options
Data storage has become a huge market in recent years. Deciding which technology you want to use for your application can be a daunting task. There are a few things to think about: performance, scalability, and ease of implementation, as well as the particular skill set of you and your team. This last point is extremely important and often overlooked. If you have a team of SQL developers, the benefits of moving to a MongoDB implementation would have to be overwhelming in order to persuade you to go down that route.
Other than “stick with what you know”, there is no quick and easy answer to which you should use. Flat datasets can be relatively easy to work with. They are structured as a single table (think CSV files) and can be relatively easy to understand. The limitations of these sources show themselves quickly, because they perform poorly as they grow and can be difficult to maintain. If you have a flat dataset, you most likely want to break it apart into one of the other storage options.
Relational Databases (MySQL, SQL Server) are great for storing data in separate tables that can be joined up using unique keys. Advantages of these are that they reduce the size of the datasets, perform better, and can be accessed using a well established querying language (SQL). This solution also requires a good deal of planning, creating unique keys for establishing relationships, and tuning for performance.
Growing in popularity are Document-Oriented Databases (e.g. MongoDB) that allow you to store data in JSON objects. This is also more efficient than flat files in that data is structured to reduce redundancy. There is the added advantage of storing the data in a format that is native to JavaScript, but it can get increasingly complicated if you are trying to join multiple datasets or summarize/create aggregations.
Unstructured databases (e.g. Hadoop) are good for extremely large datasets and outside the scope of this discussion. If you are working with datasets of this size you are likely going to want to use an ETL process to normalize the data before bringing it into your application.
The option to store data client-side is also appealing but it doesn’t come without its disadvantages. File storage and caching data on a client machine has some advantages in certain use cases but it requires a certain level of trust between you and the user. If this is a trusted service, or if the user knows they are going to be working with large volumes of data, then it is reasonable to expect them to allow access to file storage. By default, however, I would not recommend making this an expectation in any but the most demanding of use cases.
Creating Access Layers
There are a few methods for creating access layers into your data. Views have long been the standard way of doing this in relational databases. Views allow you to write queries around your data and present it as a table. Using data aggregation techniques such as GROUP BY, ORDER BY, SUM, etc., you can create smaller, more targeted datasets for your visualizations and analytics.
CREATE VIEW population_vw AS
SELECT country, age, year,
  sum(total) AS TOTAL
FROM census_data
WHERE year IN ('2010')
  AND country IN ('United States')
GROUP BY country, age, year;

Most relational databases also allow for the creation of materialized views that require ETL to create the view but perform better because they only require one table to be accessed.
A hybrid approach can be effective as well. Oftentimes this can be accomplished by creating a more targeted MongoDB layer for your larger dataset that is being stored in SQL Server: offloading the most crucial data to the document-oriented database for quick access and consumption, while storing the full breadth of data in your backend SQL database. If you are using Node you can use Express to manage the creation of these datasets and store them on your MongoDB server.
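A hedged sketch of that kind of Express layer (hypothetical database, collection and route names, using the 2.x-style MongoDB driver callback API):
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();

MongoClient.connect('mongodb://localhost:27017/analytics', (err, db) => {
  if (err) throw err;

  // Serve a smaller, targeted dataset to the front-end.
  app.get('/api/population', (req, res) => {
    db.collection('population')
      .find({ year: 2010, country: 'United States' })
      .toArray((queryErr, docs) => {
        if (queryErr) return res.status(500).send(queryErr.message);
        res.json(docs);
      });
  });

  app.listen(3000);
});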
OLAP also allows you to create datasets that can be aggregated but allow you to pre-establish the dimensions and measures you want to use to represent your data. OLAP uses Multidimensional Expressions (MDX) for accessing data types but is not very well supported in web applications.
Network Dependencies
Aggregating your data before sending it to the client has always been considered best practice. Most likely, you want to reduce the data as much as possible on the server before you present it to your users. This can be troublesome, however, because you will often be tempted to reduce it to its most aggregated form on the server. If the user wants to change the layout of the data, you end up with network thrashing because you constantly need to pull a dataset from the server with the appropriate level of aggregation.

It is critical that you find that medium where the data is aggregated to a size that is responsible from a bandwidth perspective but also provides adequate detail to allow for analysis. This can be accomplished through requirements gathering and establishing the key metrics and dimensions that the end user requires for analysis.
Continue reading %Introduction to Data Management & Visualization in JavaScript%

Link: https://www.sitepoint.com/data-management-visualization-javascript/

Modern JavaScript Development Is Hard

It’s not uncommon these days to see people complaining about just how complex JavaScript development seems to have become. I can have some sympathy with that view when it’s coming from someone new to the language.
If you’re learning JS, it won’t take long for you to be exposed to the enormity of the ecosystem and the sheer number of moving pieces you need to understand (at least conceptually) to build a modern web application.
Package management, linting, transpilation, module bundling, minification, source maps, frameworks, unit testing, hot reloading… it can’t be denied that this is a lot more complex than just including a couple of script tags in your page and FTPing it up to the server.
Some people who have been involved with web development for years are still pining for those ‘good old days’, and it’s this kind of complaining that I have much less sympathy for. One such comment I read this last week claimed that web development had been hijacked by those who enjoy using the command line and writing JSON config files.
For a long time, JavaScript was looked upon by many as a joke; a toy language whose only real use was to add non-essential eye-candy, such as mouseover changes, and was often a source of weird errors and broken pages. The language is still not taken seriously by some today, despite having made much progress since those early days. It’s not hard to have some sympathy with PHP developers.
For better or for worse, JavaScript was (and still is) the only language supported natively by the vast majority of web browsers. The community has worked hard to improve the language itself, and to provide the tooling needed to build production-grade apps. I find it ironic that now people attack JavaScript development for being “too complicated”. Unfortunately, you just can’t have it both ways.
Continue reading %Modern JavaScript Development Is Hard%

Link: https://www.sitepoint.com/modern-javascript-development-hard/