Why I’m Switching from React to Cycle.js

I would guess that most developers these days use some sort of framework for developing apps. Frameworks are there to help us structure complex apps and save us time. Every day there is plenty of discussion about which framework is the best, which one you should learn first, and so on. So today, I would like to share my experience, and why I'm switching from React to Cycle.js.
React is probably the most popular frontend framework these days and it has a great community. I am a big fan of it and it really helped me to change the way I think about web apps and how I develop them. Some developers love it, and some think that it’s not as good as everyone says.
Most people start using React without considering whether there might be a better way to build a web app. That reflection is what made me try Cycle.js, a new reactive framework that is becoming more popular every day. In this article, I want to explain what reactive programming is, how Cycle.js works, and why I think it's better than React. So let's start!
What is Reactive Programming?
Reactive programming (RP) is programming with asynchronous data streams. If you've already built a web app, you probably did a lot of reactive programming. As an example, click events are asynchronous data streams: we can observe them and perform side effects in response. The idea behind RP is to give us the ability to create data streams from anything and manipulate them. We then have the same abstraction for all our side effects, which makes them easier to use, maintain, and test.
You're probably thinking, "why do I need this new reactive programming thing?" The answer is simple: reactive programming will help you unify your code and make it more consistent. You won't need to think about how things should work and how to implement them properly. Just write the code in the same way, no matter what data you work with (click events, HTTP calls, web sockets…). Everything is a stream of data, and each stream has many operators you can use to work with it, such as map and filter. These operators return new streams that can be used in turn, and so on.
Reactive programming gives you a higher level of abstraction over your code, freeing you to create interactive user experiences and focus on business logic.
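To make the idea concrete, here is a plain-JavaScript sketch, with an array of click events standing in for a stream (a real reactive library such as xstream or RxJS does much more than this, of course):

```javascript
// A toy "stream": an array of click events standing in for values
// arriving over time.
const clicks = [
  { target: 'buy-button' },
  { target: 'nav-link' },
  { target: 'buy-button' },
];

// map and filter return new "streams" that can be chained further,
// exactly as described above.
const buyActions = clicks
  .filter(click => click.target === 'buy-button')
  .map(() => ({ type: 'BUY' }));

console.log(buyActions); // [{ type: 'BUY' }, { type: 'BUY' }]
```

The same chaining style works whether the underlying stream carries clicks, HTTP responses, or web socket messages; only the source changes.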

Image taken from https://gist.github.com/staltz/868e7e9bc2a7b8c1f754
Reactive Programming in JavaScript
In JavaScript, we have a couple of awesome libraries for dealing with data streams. The best-known one is RxJS, the JavaScript implementation of ReactiveX, an API for asynchronous programming with observable streams. You can create an Observable (a stream of data) and manipulate it with various functions.
The second one is Most.js. It has the best performance, and they can back that up with numbers: see their performance comparison.
I would also like to mention one small, fast library made by the creator of Cycle.js specifically for it. It's called xstream. It has only 26 methods, weighs in at approximately 30KB, and is one of the fastest reactive programming libraries for JavaScript.
In the examples below, I will use the xstream library. Cycle.js is designed to be a small framework, so I want to pair it with the smallest reactive library.
What is Cycle.js?
Cycle.js is a functional and reactive JavaScript framework. It abstracts your application as a pure function, main(). In functional programming, functions should have only inputs and outputs, without any side effects. In Cycle.js's main() function, the inputs are read effects (sources) from the external world and the outputs (sinks) are write effects to the external world. Side effects are managed using drivers: plugins that handle DOM effects, HTTP effects, web sockets, and so on.

Image taken from Cycle.js website
Cycle.js is there to help us build our user interfaces, test them and write reusable code. Each component is just one pure function that can run independently.
The core API has just one function, run.
run(app, drivers);

It has two arguments, app and drivers. app is the main pure function and drivers are plugins that need to handle side effects.
Cycle.js separates additional functionality into smaller modules. They are:

@cycle/dom – a collection of drivers that work with the DOM; it has a DOM driver and an HTML driver, both based on the Snabbdom virtual DOM library
@cycle/history – a driver for the History API
@cycle/http – a driver for HTTP requests, based on superagent
@cycle/isolate – a function for making scoped dataflow components
@cycle/jsonp – a driver for making HTTP requests through JSONP
@cycle/most-run – a run function for apps made with most
@cycle/run – a run function for apps made with xstream
@cycle/rxjs-run – a run function for apps made with rxjs

Cycle.js Code
Let's look at some Cycle.js code. We'll create a simple app to demonstrate how it works; a good old counter app is ideal for this. We will see how handling DOM events and re-rendering the DOM works in Cycle.js.
Let’s create two files, index.html and main.js. index.html will just serve our main.js file, where the whole of our logic will be. We are also going to create a new package.json file, so run:
npm init -y

Next, let’s install our main dependencies:
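For a counter like this, the dependencies would plausibly be @cycle/dom, @cycle/run, and xstream (an assumption; the exact list isn't shown here). The core dataflow the app will implement, folding a stream of +1/-1 click actions into a running count, can be sketched in plain JavaScript:

```javascript
// Plain-JS sketch of the counter's dataflow. The real app would use
// xstream's fold operator and Cycle's DOM driver; this only shows the idea.
const actions = [+1, +1, -1, +1]; // stand-in for a stream of button clicks

// fold: like reduce, but it emits every intermediate state,
// which is what lets the view re-render after each click.
function fold(stream, reducer, seed) {
  const states = [seed];
  let acc = seed;
  for (const value of stream) {
    acc = reducer(acc, value);
    states.push(acc);
  }
  return states;
}

const counts = fold(actions, (count, delta) => count + delta, 0);
console.log(counts); // [0, 1, 2, 1, 2]
```

In the real app, each emitted state would be mapped to a virtual DOM tree and handed to the DOM driver as a sink.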
Continue reading: Why I'm Switching from React to Cycle.js

Link: https://www.sitepoint.com/switching-from-react-to-cycle-js/

GreenSock for Beginners: a Web Animation Tutorial (Part 1)

My aim in this article is to give you a thorough introduction to GreenSock, also known as GSAP (GreenSock Animation Platform), a super performant, professional-grade HTML5 animation engine for the modern web.
This is the fourth article in the series Beyond CSS: Dynamic DOM Animation Libraries.
Here’s what I covered in the past issues:

Animating the DOM with Anime.js touches on how to make the best use of animation on the web and when you could consider using a JavaScript animation library instead of CSS-only animation. It then focuses on Anime.js, a free and lightweight JavaScript animation library
Fun Animation Effects with KUTE.js introduces you to KUTE.js, a free and feature-rich JavaScript animation library
Make Your Website Interactive and Fun with Velocity.js (No jQuery) shows you how to use Velocity.js, a robust, free, open-source animation library, to create performant web animations.

GSAP has too many features to fit in one article. This is why I’ve opted for a multi-part article all dedicated to various areas of the GreenSock library.
In more detail:

By the end of this first part, you’ll have learned about GreenSock’s capabilities and features, licensing model, core components, and basic syntax to tween and stagger DOM elements
In part 2, I’ll introduce GreenSock’s native timeline functionality
Finally, part 3 will focus on some powerful bonus plugins GreenSock makes available to accomplish complex animation tasks easily with a few lines of code.

If you’re already a GSAP Ninja, check out Animating Bootstrap Carousels with the GSAP Animation Library, where George Martsoukos illustrates an effective use of GreenSock for UI animation.
Without further ado, brace yourself, the journey is about to begin!
What is GreenSock and What Is It Good For?
GreenSock is the de facto industry-standard JavaScript animation platform available today.
It's a mature JavaScript library that has its roots in Flash-based animation. That means the people behind GreenSock know web animation inside out; the library has been around for a long time and isn't going anywhere any time soon.
GSAP includes a comprehensive suite of tools, utilities, and plugins you can leverage to handle any web animation challenge you happen to face, from animating SVGs consistently across multiple browsers to coding complex animation sequences, dragging elements across the screen, splitting and scrambling text, and so much more.
To mention just three things I particularly love about GreenSock:

Although the library is mega rich in terms of features, the learning curve is relatively shallow because it uses an intuitive and consistent syntax across all its various implementations and plugins. In addition, it offers awesome documentation, tutorials, and support via the GSAP Forums. Also, the library itself is continually updated and maintained. These are all crucial factors when building a project which relies on an external library for any of its key features and user experience
It's modular and lightweight, which means it won't add bloat to your web projects
Thanks to its powerful timeline functionality, GSAP affords precise control over the timings not only of single tweens, but also of multiple tweens as part of a whole animation flow.

Core GreenSock Tools
These are GreenSock’s core modules:

TweenLite — the foundation of GSAP, a lightweight and fast HTML5 animation library
TweenMax — TweenLite’s extension, which besides comprising TweenLite itself, includes:


TimelineLite — a lightweight timeline to take control of multiple tweens and/or other timelines
TimelineMax — an enhanced version of TimelineLite, which offers additional, non-essential capabilities like repeat, repeatDelay, yoyo, and more.

You can choose to include TweenLite in your project and then add other modules separately as you need them. Alternatively, you can choose to include TweenMax (the approach I’m going to take in this multi-part series), which packages all the core modules in one optimized file.
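At its core, a tween simply interpolates a property between a start and an end value as time progresses. A toy sketch of the concept (not GSAP's actual implementation):

```javascript
// A tween moves a value from `from` to `to` as progress runs from 0 to 1.
// Libraries like TweenLite layer easing, timing control, and DOM updates
// on top of exactly this kind of interpolation.
function tweenValue(from, to, progress) {
  return from + (to - from) * progress;
}

console.log(tweenValue(0, 100, 0));   // 0   (start of the tween)
console.log(tweenValue(0, 100, 0.5)); // 50  (halfway through)
console.log(tweenValue(0, 100, 1));   // 100 (end of the tween)
```

Timelines, in turn, are what let you coordinate many such interpolations so they start, overlap, and end at precisely controlled moments.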
It’s also worth noting that GSAP offers some paid extras like the DrawSVGPlugin to create animated line drawing effects with SVGs, the powerful MorphSVGPlugin to morph one SVG shape into another (even without requiring that the two shapes have the same number of points), and others. Although you need to be a paying member of the Club GreenSock to use these plugins, GSAP makes available to you for free a special CodePen-based version so that you can try them out. This is a cool offer I’m going to take full advantage of later on in part 3 (and you with me).
GSAP hasn’t adopted a free open-source license like MIT largely for reasons that concern keeping the product’s quality high and its maintenance financially sustainable over the long term.
GreenSock makes available two licenses:

Standard License — use of the library is totally free in digital products that are free to use (even if developers get paid to build them)
Business Green — this license includes three tiers with the middle tier, Shockingly Green, being the most popular. As a Shockingly Green member you’ll gain access to all the bonus plugins and extras, but without the commercial license.

Despite the non-adherence to MIT and similar free use licenses, by letting you peek into its raw code both on the project’s GitHub repo and in your downloads, GreenSock encourages you to learn from its developers’ code and their mastery of JavaScript animation, which is one of the best features of the open source philosophy.
Tweening with GreenSock
Continue reading: GreenSock for Beginners: a Web Animation Tutorial (Part 1)

Link: https://www.sitepoint.com/web-animation-tutorial-part-1/

A Dirty Webpack Trick That Reduced Our gzipped Bundle Size by 55KB

At my day job, we’re using Webpack 2 to make our JavaScript as small as possible.
Bundle splitting ensures web apps get only the code they need, individual vendor files ensure caching stays as stable as possible, dead code elimination and tree shaking make our files small. Everything is minified and gzipped and cached forever. Fingerprinting ensures the code you download is never stale.
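A rough Webpack 2 configuration for the setup described might look like this sketch (entry points, file names, and vendor packages are hypothetical, not the actual config from the article):

```javascript
// webpack.config.js: a hypothetical sketch of the setup described above
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: {
    app: './src/index.js',
    vendor: ['react', 'react-dom'], // third-party code, cached separately
  },
  output: {
    // fingerprinted file names: a new hash means the cache is never stale
    filename: '[name].[chunkhash].js',
    path: path.resolve(__dirname, 'dist'),
  },
  plugins: [
    // split vendor code into its own long-cached bundle
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' }),
    // minify the output
    new webpack.optimize.UglifyJsPlugin(),
  ],
};
```

Gzipping would then typically be handled by the web server or a compression plugin on top of this.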

Link: https://dzone.com/articles/a-dirty-webpack-trick-that-reduced-our-gzipped-bun?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev

BDD in JavaScript: Getting Started with Cucumber and Gherkin

By now, everyone has heard about Test Driven Development (TDD), and the benefits that this can have on your product and your development lifecycle. It’s a no-brainer really. Every time you write a test for a piece of code, you know that code works. And, what’s more, you will know in the future if that code breaks.
Behavior Driven Development (BDD) is an extension to this concept, but instead of testing your code you are testing your product, and specifically that your product behaves as you desire it to.
In this article, I'm going to show you how to get up and running with Cucumber, a framework that runs automated acceptance tests written in a BDD style. The advantage of these tests is that they can be written in plain English and consequently understood by non-technical people involved in a project. After reading, you will be in a position to decide if Cucumber is a good fit for you and your team, and to start writing acceptance tests of your own.
Ready? Then let’s dive in.
BDD vs TDD — so, What’s the Difference?
Primarily in the way that the tests are structured and written.
In a TDD setting, the tests are written, maintained and understood by the developers who wrote the code they are testing. It might well be that nobody else ever needs to read the tests at all, and that’s fine.
In a BDD setting, the tests need to be understood by a lot more than just the developer writing the functionality. There are many more stakeholders who have an interest that the product behaves as it should.
These might include QA people, product analysts, sales, even upper management.
This means that, in an ideal world, BDD tests need to be written in a way that anyone who understands the product will be able to pick up the tests and understand them as well.
It’s the difference between:
const assert = require('assert');
const webdriver = require('selenium-webdriver');

const browser = new webdriver.Builder()
  .withCapabilities({ browserName: 'chrome' })
  .build();

browser.get('https://en.wikipedia.org/wiki/Wiki');
browser.findElements(webdriver.By.partialLinkText('Wiki'))
  .then(links => {
    assert.equal(19, links.length); // Made up number
  });

Given I have opened a Web Browser
When I load the Wikipedia article on "Wiki"
Then I have "19" Wiki Links

The two tests do exactly the same thing, but one is actually human readable and the other is only readable by someone who knows both JavaScript and Selenium.
This article will show you how you can implement BDD Tests in your JavaScript project, using the Cucumber.js framework, allowing you to benefit from this level of testing for your product.
What Is Cucumber / Gherkin?
Cucumber is a testing framework for behavior driven development. It works by allowing you to define your tests in Gherkin form, and makes these gherkins executable by tying them to code.
Gherkin is the Domain Specific Language (DSL) that is used for writing Cucumber tests. It allows for test scripts to be written in a human readable format, which can then be shared between all of the stakeholders in the product development.
Gherkin files are files that contain tests, written in the Gherkin language. These files typically have a .feature file extension. The contents of these Gherkin files are often referred to as simply "gherkins".
In a Gherkin defined test, you have the concept of features and scenarios. These are analogous to test suites and test cases in other testing frameworks, allowing for a clean way to structure your tests.
A scenario is literally just a single test. It should test exactly one thing in your application.
A feature is a group of related scenarios. As such, it will test many related things in your application. Ideally, the features in the Gherkin files will closely map onto the features in the application — hence the name.
Every Gherkin file contains exactly one feature, and every feature contains one or more scenarios.
Scenarios are then comprised of steps, which are ordered in a specific manner:

Given – These steps are used to set up the initial state before you do your test
When – These steps are the actual test that is to be executed
Then – These steps are used to assert on the outcome of the test

Ideally each scenario should be a single test case, so the number of When steps should be kept very small.
Steps are entirely optional. If you have no need to set anything up at all, you might have no Given steps, for example.
Gherkin files are designed to be human readable, and to give benefit to anyone involved in the product development. This includes non-technical people, so the Gherkin files should always be written in business language and not technical language. This means, for example, that you do not refer to individual UI components, but instead describe the product concepts that you are wanting to test.
An Example Gherkin Test
The following is an example Gherkin for searching Google for Cucumber.js
Given I have loaded Google
When I search for "cucumber.js"
Then the first result is "GitHub – cucumber/cucumber-js: Cucumber for JavaScript"

Straight away we can see that this test tells us what to do and not how to do it. It is written in a language that makes sense to anyone reading it, and — importantly — that will most likely be correct no matter how the end product might be tweaked. Google could decide to completely change their UI, but as long as the functionality is equivalent then the Gherkin is still accurate.
You can read more about Given When Then on the Cucumber wiki.
Once you have written your test cases in Gherkin form, you need some way to execute them. In the JavaScript world, there is a module called Cucumber.js that allows you to do this. It works by allowing you to define JavaScript code that it can connect to the various steps defined inside of your Gherkin files. It then runs the tests by loading the Gherkin files and executing the JavaScript code associated with each step in the correct order.
For example, in the above example you would have the following steps:
Given('I have loaded Google', function() {});
When('I search for {stringInDoubleQuotes}', function() {});
Then('the first result is {stringInDoubleQuotes}', function() {});

Don’t worry too much about what all of this means – it will be explained in detail later. Essentially though, it defines some ways that the Cucumber.js framework can tie your code to the steps in your Gherkin files.
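To illustrate the mechanism, here is a self-contained sketch in which stub Given/When/Then registrars stand in for Cucumber's own. This is not Cucumber.js itself, just the idea of tying step text to code:

```javascript
// A minimal sketch of a step-definition layer: it maps step text to functions.
const steps = new Map();
const Given = (pattern, fn) => steps.set(pattern, fn);
const When = Given;  // same registration mechanism
const Then = Given;

// Shared state, playing the role of Cucumber's "World" object.
const state = {};

Given('I have loaded Google', () => { state.page = 'google'; });
When('I search for {stringInDoubleQuotes}', term => { state.query = term; });
Then('the first result is {stringInDoubleQuotes}', expected => {
  state.expectedFirstResult = expected;
});

// The framework would parse the .feature file and invoke the matching
// function for each step, in order:
steps.get('I have loaded Google')();
steps.get('I search for {stringInDoubleQuotes}')('cucumber.js');
steps.get('the first result is {stringInDoubleQuotes}')('GitHub - cucumber/cucumber-js');

console.log(state.query); // 'cucumber.js'
```

Real Cucumber.js additionally matches placeholders like {stringInDoubleQuotes} against the concrete values in the Gherkin text and passes them in as arguments, as shown in the calls above.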
Including Cucumber.js in Your Build
Including Cucumber.js in your build is as simple as adding the cucumber module and then configuring it to run. The first part is just:
$ npm install --save-dev cucumber

The second of these varies depending on how you are executing your build.
Running by Hand
Executing Cucumber by hand is relatively easy, and it’s a good idea to make sure you can do this first because the following solutions are all just automated ways of doing the same.
Once installed, the executable will be ./node_modules/.bin/cucumber.js. When you run it, it needs to know where on the file system it can find all of its required files. These are both the Gherkin files and the JavaScript code to be executed.
By convention, all of your Gherkin files will be kept in the features directory, and if you don’t instruct it otherwise then Cucumber will look in the same directory for the JavaScript code to execute as well. Instructing it where to look for these files is a sensible practice however, so that you have better control over your build process.
For example, if you keep all of your Gherkin files in the directory myFeatures and all of your JavaScript code in mySteps then you could execute the following:
$ ./node_modules/.bin/cucumber.js ./myFeatures -r ./mySteps

The -r flag specifies a directory containing JavaScript files to automatically require for the tests. There are other flags that might be of interest as well — just read the help text to see how they all work: $ ./node_modules/.bin/cucumber.js --help.
These directories are scanned recursively so you can nest files as shallowly or deeply as makes sense for your specific situation.
npm scripts
Once you have Cucumber running manually, adding it to the build as an npm script is trivial. You simply need to add the following command — without the fully qualified path, since npm handles that for you — to your package.json as follows:
"scripts": {
  "cucumber": "cucumber.js ./myFeatures -r ./mySteps"
}

Once this is done, you can execute:
$ npm run cucumber

And it will execute your Cucumber tests exactly as you did before.
Grunt
A Grunt plugin for executing Cucumber.js tests does exist. Unfortunately, it is very out of date and doesn't work with more recent versions of Cucumber.js, meaning you'll miss out on a lot of improvements if you use it.
Instead, my preferred way is to simply use the grunt-shell plugin to execute the command in the exact same way as above.
Once installed, configuring this is simply a case of adding the following plugin configuration to your Gruntfile.js:
shell: {
  cucumber: {
    command: 'cucumber.js ./myFeatures -r ./mySteps'
  }
}

And now, as before, you can execute your tests by running grunt shell:cucumber.
Continue reading: BDD in JavaScript: Getting Started with Cucumber and Gherkin

Link: https://www.sitepoint.com/bdd-javascript-cucumber-gherkin/

Pre-Render a Vue.js App (With Node Or Laravel)

Server-side rendering is all the rage right now. But it’s not without its downsides. Pre-rendering is an alternative approach that may even be better in some circumstances.

Note: this article was originally posted here on the Vue.js Developers blog on 2017/04/01

Server-Side Rendering
One of the downsides to JavaScript-based apps is that the browser receives an essentially empty page from the server. The DOM cannot be built until the JavaScript has been downloaded and run.

Link: https://dzone.com/articles/pre-render-a-vuejs-app-with-node-or-laravel?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev

Introduction to Kubernetes: How to Deploy a Node.js Docker App

While container technology has existed for years, Docker really took it mainstream. A lot of companies and developers now use containers to ship their apps. Docker provides an easy-to-use interface for working with containers.
However, for any non-trivial application, you will not be deploying “one container", but rather a group of containers on multiple hosts. In this article, we’ll take a look at Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications.

Prerequisites: This article assumes some familiarity with Docker. If you need a refresher, check out Understanding Docker, Containers and Safer Software Delivery.

What Problem Does Kubernetes Solve?
With Docker, you have simple commands like docker run and docker stop to start and stop a container. But while these commands operate on a single container, there is no docker deploy command to push new images to a group of hosts.
Many tools have appeared in recent times to solve this problem of "container orchestration"; popular ones being Mesos, Docker Swarm (now part of the Docker engine), Nomad, and Kubernetes. All of them come with their pros and cons but, arguably, Kubernetes has the most mileage at this point.
Kubernetes (also referred to as 'k8s') provides powerful abstractions that completely decouple application operations, such as deployments and scaling, from the underlying infrastructure operations. So, with Kubernetes, you do not work with individual hosts or virtual machines on which to run your code; instead, Kubernetes treats the underlying infrastructure as a sea of compute on which to place containers.
Kubernetes Concepts
Kubernetes has a client/server architecture. The Kubernetes server runs on your cluster (a group of hosts), on which you deploy your application. You typically interact with the cluster using a client, such as the kubectl CLI.
A pod is the basic unit that Kubernetes deals with: a group of containers. If two or more containers always need to work together and should be on the same machine, make them a pod. A pod is a useful abstraction; there was even a proposal to make pods a first-class Docker object.
A node is a physical or virtual machine, running Kubernetes, onto which pods can be scheduled.
A label is a key/value pair that is used to identify a resource. You could label all your pods serving production traffic with "role=production", for example.
Selectors let you search/filter resources by labels. Following on from the previous example, to get all production pods your selector would be "role=production".
A service defines a set of pods (typically selected by a "selector") and a means by which to access them, such as single stable IP address and corresponding DNS name.
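Putting the label, selector, and service concepts together, a hypothetical service definition might look like this (names and ports are illustrative):

```yaml
# service.yaml: expose all pods labeled role=production behind one stable address
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  selector:
    role: production   # the selector from the example above
  ports:
    - port: 80         # the port the service listens on
      targetPort: 3000 # the port the pods' containers listen on
  type: LoadBalancer   # also provision an externally reachable IP
```

Because the service selects pods by label rather than by name, pods can come and go (be rescheduled, scaled up or down) while clients keep using the same stable address.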
Deploy a Node.js App on GKE using Kubernetes
Now that we're familiar with the basic Kubernetes concepts, let's see them in action by deploying a Node.js application on Google Container Engine (referred to as GKE). You'll need a Google Cloud Platform account for this (Google provides a free trial with $300 of credit).
1. Install Google Cloud SDK and Kubernetes Client
kubectl is the command-line interface for running commands against Kubernetes clusters. You can install it as part of the Google Cloud SDK. After the Google Cloud SDK is installed, run the following command to install kubectl:
$ gcloud components install kubectl

or brew install kubectl if you are on a Mac. To verify the installation, run kubectl version.
You'll also need to set up the Google Cloud SDK with credentials for your Google Cloud account. Just run gcloud init and follow the instructions.
2. Create a GCP project
All Google Cloud Platform resources are created under a project, so create one from the web UI.
Set the default project ID while working with CLI by running:
gcloud config set project {PROJECT_ID}

3. Create a Docker Image of your application
Here is the application that we'll be working with: express-hello-world. You can see in the Dockerfile that we are using an existing Node.js image from Docker Hub. Now, we'll build our application image by running:
$ docker build -t hello-world-image .

Run the app locally by running:
docker run --name hello-world -p 3000:3000 hello-world-image

If you visit localhost:3000 you should see the app's response.
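The repository's Dockerfile isn't reproduced here, but a minimal Dockerfile for a Node.js app listening on port 3000 might look roughly like this (base image version and file names are illustrative):

```dockerfile
# Build on an existing Node.js image from Docker Hub
FROM node:8

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package.json .
RUN npm install

# Copy the application code and declare the port the app listens on
COPY . .
EXPOSE 3000

CMD ["node", "index.js"]
```

Copying package.json and running npm install before copying the rest of the code lets Docker reuse the dependency layer whenever only the application code changes.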
4. Create a cluster
Now we'll create a cluster with three instances (virtual machines) on which we'll deploy our application. You can do it from the fairly intuitive web UI by going to the Container Engine page, or by running this command:
Continue reading: Introduction to Kubernetes: How to Deploy a Node.js Docker App

Link: https://www.sitepoint.com/kubernetes-deploy-node-js-docker-app/

JavaScript Threading: The Magic Framework Problem

The Magic Framework Problem
This happens when two ways of thinking collide.

The framework developers create lots of “super smart” components. These components are so good at their job because they hide the underlying complexity and deal with major issues very well.
The designers have an imperfect understanding of the challenges they face. This is normal – design usually happens before most challenges come up. So they design with what they know, and often with a preconceived notion of how things work.

Mix the two together: The designers assume the framework will deal with all the design issues automatically. In other words, magic.

Link: https://dzone.com/articles/javascript-threading-the-magic-framework-problem?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev

Build Desktop Apps Using Electron: The Beginner’s Guide

Electron is a framework for building cross-platform desktop applications using Node.js and Chromium. Electron uses web pages as its GUI, so you could also see it as a minimal Chromium browser, controlled by JavaScript.
Installing Electron
Before installing Electron, make sure that you have Node.js installed on your computer. If you haven't installed it already, get the installer from the Node.js website. Note that this installs npm for you as well. Once the installation is complete, you can confirm that Node and npm are installed by running the following commands in your terminal.

Link: https://dzone.com/articles/build-desktop-apps-using-electron-the-beginners-gu?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev

Refactor Code in Your Lunch Break: Getting Started with Codemods

Maintaining a codebase can be a frustrating experience for any developer, especially a JavaScript codebase. With ever-changing standards, syntax, and third party package breaking changes, it can be hard to keep up.
In recent years, the JavaScript landscape has changed beyond recognition. Advancements in the core JavaScript language mean that even the simple task of variable declaration has changed. ES6 introduced let and const, arrow functions, and many more core changes, each bringing improvements and benefits to developers and their applications.
Pressure on developers to produce and maintain code that will stand up to the test of time is on the increase. This article will show you how you can automate large-scale refactoring tasks with the use of codemods and the JSCodeshift tool, allowing you to easily update your code to take advantage of newer language features, for example.
Codemod is a tool developed by Facebook to help with refactoring large-scale codebases. It enables a developer to refactor a large codebase in a small amount of time. In some cases, a developer might use an IDE to rename a class or variable; however, that is usually scoped to one file at a time. The next tool in a developer's refactoring kit is a global find and replace, which can work in many cases with complex regular expressions. Many scenarios are not suited to this method, though; for example, when there are multiple implementations that need to be changed.
Codemod is a Python tool that takes a number of parameters including the expression you wish to match and the replacement.
codemod -m -d /code/myAwesomeSite/pages --extensions php,html \
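To see the shape of the idea, here is a toy text-based transform in JavaScript. Real tools are far more capable: codemod applies regular expressions interactively across many files, and AST-based tools like JSCodeshift are safer still:

```javascript
// Toy codemod: rewrite `var` declarations to `let` in a source string.
// A regex like this is fine for a demo, but AST-based codemods avoid
// false matches (e.g. the word "var" inside a string or a comment).
function varToLet(source) {
  return source.replace(/\bvar\b/g, 'let');
}

const before = 'var a = 1;\nvar b = a + 1;';
console.log(varToLet(before)); // both declarations become `let`
```

The limitation the comment hints at is exactly why AST-based transforms exist: they operate on the parsed structure of the code, so they never touch strings, comments, or identifiers that merely look like the target.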

Link: https://www.sitepoint.com/getting-started-with-codemods/

New & Upcoming Course Highlights: Introducing Text Editors & Express Basics

Every week, new courses and workshops are published to the growing Treehouse Library! Here’s a short list of what we’ve added recently, upcoming course highlights, and our weekly video update of What’s New at Treehouse. Start learning to code today…
The post New & Upcoming Course Highlights: Introducing Text Editors & Express Basics appeared first on Treehouse Blog.

Link: http://blog.teamtreehouse.com/new-upcoming-course-highlights-text-editors-express-basics