webpack 最佳化:thread-loader

Recently I have been working a lot with webpack, mostly in pursuit of a better developer experience and user experience. The complexity of webpack's configuration is notorious, but after the zero-config trend that parcel kicked off, webpack v4 has greatly improved in this area. But zero config…

Link: https://medium.com/@shinychang/webpack-%E6%9C%80%E4%BD%B3%E5%8C%96-thread-loader-bd18471ffb4c?source=rss——javascript-5

Habits + Google Sheets = Profit!

Because good habits are worth it

What is a New Year Resolution?

A New Year’s resolution is a tradition in which we make a bunch of promises¹ to ourselves, on the last days of the year or so, to do something to improve our behaviour or lifestyle during the year ahead (e.g. quit smoking, eat less junk food, lose weight, do some form of exercise and/or stop saying JS is a real programming language²).
Even though some of them may be just a personal goal or challenge (e.g. travelling solo), almost every promise we make of this kind is, to a greater or lesser degree, about a habit (either making or breaking one).

What is a habit?

A habit is a behaviour pattern, acquired through frequent repetition³, that we perform often unconsciously, unintentionally and uncontrollably, in response to a known cue⁴.

What does Google have to do with all this?

Most of us have at some point promised to break a bad habit as a New Year’s resolution, only to realise after the summer holidays (at best) that we hadn’t even started. And then, once again, another year is gone…

Tracking our habits⁵ every day, or at least once a week, is the best way to keep ourselves motivated, and it gives us a glimpse of how well (or badly) we are doing.

I’ve tried a bunch of applications and websites to track habits, but for one reason or another (pricing, lack of export options, uncertain future, etc.) I haven’t felt comfortable with any of them. Then I talked to two huge organisation experts, Jesus Cerviño & David S. (whom I’m lucky enough to have as co-workers), and I was told that I could do everything I wanted (and even more) with a powerful, widely used tool⁶: Google Sheets.

First we make our habits, then our habits make us

After doing some research I’ve created an improved Habit Tracker template based on the awesome work of Harold Kim.

Log in to your Google account, make a copy of my Habit Tracker template (File > Make a copy…) and fill in the Activity column with your SMART goals and habits.

Then clear the sample data, and you will be able to track your own habits by putting an X (or 😄/🙁 depending on your mood) in every cell you want to mark as Done.
Don’t break the chain!


Streak and Max Streak
Progression based on the value you have put in the Expected column (which has to be in days⁷)
Highlight current day
Alternate colours every month
Autohide columns based on date
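The streak logic the template implements with spreadsheet formulas can be sketched in code; this TypeScript is just an illustration of the idea (with hypothetical helper names), not the sheet's actual formulas:

```typescript
// Each tracked day is either marked done (true) or not (false).
type Day = boolean;

// Current streak: consecutive done-days counted back from the most recent day.
const streak = (days: Day[]): number => {
  let n = 0;
  for (let i = days.length - 1; i >= 0 && days[i]; i--) n++;
  return n;
};

// Max streak: the longest run of consecutive done-days anywhere in the history.
const maxStreak = (days: Day[]): number => {
  let best = 0;
  let run = 0;
  for (const done of days) {
    run = done ? run + 1 : 0;
    if (run > best) best = run;
  }
  return best;
};
```

Marking a day resets or extends the current streak, which is exactly why a visible streak counter makes it painful to "break the chain".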

Tip: you could even use the Habit Tracker as a habit journal through the built-in notes feature.

Don’t forget to set up a recurring reminder (you can also use Google Calendar, btw) so you don’t forget to keep track of your habits. Then make a habit of this too!


Above all, don’t be a slave of your (bad) habits… and be like Bill!
This article was originally published on Medium
[1] The most common New Year’s Resolutions in the UK (December 2015) and the US (December 2017)
[2] Just trolling, don’t feed me 🐟
[3] Developing a new habit takes between 18 and 254 days (66 days on average). Unfortunately, there are no magic numbers.
[4] A great tip about habits and willpower: we should not focus on the behaviour but rather on the cue (i.e. location, time of day, emotional state, belief, other people, etc.) to develop new habits (or to break them)
[5] What gets measured gets done
[6] It’s free, multi-platform, allows us to export the data and works offline 💪
[7] Remember that a common year has 52 weeks (i.e. if you want to work out 3 times a week you just have to put =3*52 in that cell) 👨‍🎓
External links 👀

Link: https://dev.to//hector6872/habits–google-sheets–profit-4if5

Understanding JavaScript/TypeScript Memoization

Originally published at www.carloscaballero.io on February 8, 2019.

What does Memoization mean?

The definition of memoization from Wikipedia is the following:

In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

Memoization is a programming technique which trades the function’s time cost for space cost. That is, memoized functions gain speed in exchange for a higher use of memory space.
Memoization can only be used with pure functions, so the first point is to know what a pure function is.
In the following animation you can see the final result of applying memoization in our code.

What is a pure function?

A pure function is a function that meets the following criteria:

It is a function that always returns the same result when the arguments are the
same. For example, the following functions are impure:

Functions which use random numbers.
Functions which use the current date/time to generate the result.

It is a function that does not produce side effects in the application:

Data mutation or change application state.
Network request.
Database or file request.
Obtaining user input.
Querying the DOM.


Pure functions are widely used in web development due to several benefits, although they are not used only in web development. The main benefits of pure functions are:

Your code is more declarative: it focuses on what must be done, not on how to do it, and the functions focus on how different inputs are related to outputs.
The code is more testable, and finding bugs is easier than in impure code.

But in real life there are side effects, and they are an important part of the code (for example, when you access the database or communicate with different servers to request information about the system). So pure functions are only a part of your code, and you need to know when you can use a pure function and when you can apply memoization in your code.

Pure functions example

Pure functions are frequently used in recursive algorithms; the most classical recursive problem is the factorial.

But the imperative version of factorial is pure too, because purity is about the relation between inputs and outputs: in both cases, when the input is the same, the output is the same.
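The original snippets are not embedded here; as an illustration, both versions could look like this in TypeScript (function names are my own):

```typescript
// Recursive version: pure, because the output depends only on the input.
function factorialRecursive(n: number): number {
  return n <= 1 ? 1 : n * factorialRecursive(n - 1);
}

// Imperative version: also pure; it mutates only local state,
// and the same input always yields the same output.
function factorialImperative(n: number): number {
  let result = 1;
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}
```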

Other interesting examples of pure functions are the following:
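The original examples are not embedded here; a few illustrative pure functions in TypeScript (plus one impure function for contrast) might be:

```typescript
// Pure: same inputs, same output, no side effects.
const add = (a: number, b: number): number => a + b;

// Pure: reads only its argument and returns a new value.
const max = (xs: number[]): number => xs.reduce((a, b) => (a > b ? a : b));

// Impure, for contrast: the result depends on hidden state (the clock).
const timestamp = (): number => Date.now();
```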

Memoization in recursive functions

Memoization is the programming technique which avoids recalculating the value of a pure function. That is, a pure function returns the same value when it has the same inputs, so the returned value can be stored using any cache system (for example a map or an array). So, if you calculate the value of factorial(1) you can store the returned value 1, and the same can be done in each execution. When you run factorial(100) it takes a while the first time, but from the second time on, the time is reduced!
In this case, if you look at the recursive factorial version, you can see that it executes the factorial function several times, and those results can be cached in our system (using memoization); with the imperative factorial version the gain would be smaller. For this reason, memoization is a well-known technique in declarative languages.
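As a sketch of this idea (illustrative TypeScript, assuming a plain object as the cache):

```typescript
// Cache shared across calls: n -> factorial(n).
const factCache: Record<number, number> = {};

function factorial(n: number): number {
  if (n in factCache) {
    return factCache[n]; // reuse the pre-calculated value
  }
  const result = n <= 1 ? 1 : n * factorial(n - 1);
  factCache[n] = result; // store for the next call
  return result;
}

factorial(100); // first call: recurses and fills the cache for 1..100
factorial(100); // second call: a single cache lookup
```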

Memoization Example! — Live code!

In this section I’m going to show you how to implement memoization using closures and the decorator pattern in JavaScript.
The decorator pattern allows adding new features to any object at runtime using composition instead of inheritance. The pattern’s goal is to avoid creating a class hierarchy for our features.
A good example to understand this pattern can be found in Addy Osmani’s writing.
So, a basic implementation of a memoize decorator in JavaScript is the following:
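The original gist is not embedded here; a minimal TypeScript reconstruction that follows the steps described next could be:

```typescript
function memoize<T extends (...args: any[]) => any>(fn: T): T {
  // 1. The cache in which execution results are stored (an object used as a map).
  const cache: Record<string, any> = {};

  // 2. Return a new function with the same behaviour, plus memoization.
  const memoized = (...args: any[]) => {
    // 3. The key of the key-value map is generated by stringifying the args.
    const key = JSON.stringify(args);
    if (!(key in cache)) {
      // 4a. Nothing stored in the cache: execute the original function.
      cache[key] = fn(...args);
    }
    // 4b + 5. Return the (possibly pre-calculated) result.
    return cache[key];
  };
  return memoized as T;
}
```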

Define the cache in which the execution results will be stored. We use an object as a map to store these results.
The decorator returns a new function which has the same behaviour as the original function, but with memoization implemented.
The key of the key-value map is generated by applying stringify to the args of the original function.
The result of the new function will be:
the execution of the original function (fn(…args)) when there is nothing stored in the cache;
the value stored in the cache (when it was pre-calculated previously).
The result is returned.

How to use our memoized decorator?

Using this decorator in JavaScript is very easy:

In this case the add function is the original function without memoization, and the addMemoized function is the new function which has the new feature (memoization) added using the decorator pattern.
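For example (an illustrative sketch; the memoize decorator is re-declared here so the snippet is self-contained):

```typescript
// Minimal memoize decorator, as described in the previous section.
const memoize = <T extends (...args: any[]) => any>(fn: T): T => {
  const cache: Record<string, any> = {};
  return ((...args: any[]) => {
    const key = JSON.stringify(args);
    if (!(key in cache)) cache[key] = fn(...args);
    return cache[key];
  }) as T;
};

// Original function, without memoization.
const add = (a: number, b: number): number => a + b;

// New function with the memoization feature added via the decorator.
const addMemoized = memoize(add);

addMemoized(2, 3); // computed and cached
addMemoized(2, 3); // served from the cache
```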

A real demo using memoization!

Now, I’m going to show you a real demo using memoization. Imagine a complex algorithm that tells you whether an array contains a given value (as Array.prototype.some could), but horribly programmed.

The next step is to run the original code and the code using memoization, and to compare the time used by each function. It is very important to remember that the original code is not modified; the memoization feature is just added on top.
The following function is used to measure the time used in each execution.

The array is generated at the beginning of the script:

And finally, when the user clicks a button, the functions are executed.
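The article’s actual demo code is not embedded here; a rough TypeScript equivalent might look like this (slowIncludes, the busy-wait and the array size are invented for illustration, and the button click is replaced by direct calls):

```typescript
// Memoize decorator, as in the previous section.
const memoize = <T extends (...args: any[]) => any>(fn: T): T => {
  const cache: Record<string, any> = {};
  return ((...args: any[]) => {
    const key = JSON.stringify(args);
    if (!(key in cache)) cache[key] = fn(...args);
    return cache[key];
  }) as T;
};

// A deliberately slow check for whether the array contains a value
// (what Array.prototype.some would do cheaply).
const slowIncludes = (xs: number[], value: number): boolean => {
  let found = false;
  for (const x of xs) {
    for (let i = 0; i < 10_000; i++) { /* burn cycles */ }
    if (x === value) found = true;
  }
  return found;
};

const slowIncludesMemoized = memoize(slowIncludes);

// Measure the time used by each execution.
const measureTime = (label: string, fn: () => void): void => {
  const start = Date.now();
  fn();
  console.log(`${label}: ${Date.now() - start}ms`);
};

// The array is generated at the beginning of the script.
const xs = Array.from({ length: 1000 }, (_, i) => i);

measureTime("no memoization", () => slowIncludes(xs, 999));
measureTime("memoized, 1st call", () => slowIncludesMemoized(xs, 999));
measureTime("memoized, 2nd call", () => slowIncludesMemoized(xs, 999)); // cache hit
```

Only the second memoized call benefits: the first one still pays the full cost while it fills the cache.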

No memoization


The result is shown in the following animation:


Memoization has been widely adopted in web development using TypeScript or JavaScript. The following list of resources can be a starting point to use it in your projects.

Fast Memoize.


Fast-Memoize uses this graph to compare different implementations of memoize:

More Theory:

The GitHub project is

Hi! My name is Carlos Caballero and I’m a PhD in Computer Science from Málaga, Spain, teaching developers and degree/master’s students in computer science how to become experts!

Link: https://dev.to//carlillo/understanding-javascripttypescript-memoization-o7k

What the heck is polymorphism?

Polymorphism is the idea of defining data structures or algorithms in general, so you can use them for more than one data type. The complete answer is a bit more nuanced though. Here I have collected the various forms of polymorphism from the common types that you most likely already used, to the less common ones, and compare how they look in object-oriented or functional languages.

Parametric polymorphism

This is a pretty common technique in many languages, albeit better known as "Generics". The core idea is to allow programmers to use a wildcard type when defining data structures that can later be filled with any type. Here is how this looks in Java for example:
class List<T> {
    class Node<T> {
        T data;
        Node<T> next;
    }

    public Node<T> head;

    public void pushFront(T data) { /* … */ }
}

The T is the type variable because you can later "assign" any type you want:
List<String> myStringList = new List<String>();
myStringList.pushFront(8); // Error: 8 is not a string

Here the list can only contain elements of type string and nothing else. We get helpful compiler errors if we try to violate this. Also, we did not have to define the list for every possible data type again, because we can just define it for all possible types.
But not only imperative or object oriented languages have parametric polymorphism, it is also very common in functional programming. For example in Haskell, a list is defined like this:
data List a = Nil | Cons a (List a)

This definition means: A list takes a type parameter a (everything left of the equals sign defines the type) and is either an empty list (Nil) or an element of type a and a list of type List a. We don’t need any external pushFront method, because the second constructor already does this:
let emptyList = Nil
oneElementList = Cons "foo" emptyList
twoElementList = Cons 8 oneElementList -- Error: 8 is not a string

Ad-hoc polymorphism

This is more commonly known as function or operator overloading. In languages that allow this, you can define a function multiple times to deal with different input types. For example in Java:
class Printer {
    public String prettyPrint(int x) { /* … */ }
    public String prettyPrint(char c) { /* … */ }
}

The compiler will automatically choose the right method depending on the type of data you pass to it. This can make APIs easier to use as you can just call the same function with any type and you do not have to remember a bunch of variants for different types (à la print_string, print_int, etc).
In Haskell, Ad-hoc polymorphism works via type classes. Type classes are a bit like interfaces in object oriented languages. See here for example the same pretty printer:
class Printer p where
prettyPrint :: p -> String

instance Printer Int where
    prettyPrint x = -- …

instance Printer Char where
    prettyPrint c = -- …

Subtype polymorphism

Subtyping is better known as object oriented inheritance. The classic example is a vehicle type, here in Java:
abstract class Vehicle {
    abstract double getWeight();
}

class Car extends Vehicle {
    double getWeight() { return 10.0; }
}

class Truck extends Vehicle {
    double getWeight() { return 100.0; }
}

class Toyota extends Car { /* … */ }

static void printWeight(Vehicle v) {
    // Allowed because all vehicles have to have this method
    System.out.println(v.getWeight());
}

Here we can use any child class of the vehicle class as if it was a vehicle class directly. Note that we cannot go the other way, because not every vehicle is guaranteed to be a car for example.
This relation gets a bit hairy when you are allowed to pass functions around, here for example in TypeScript:
const driveToyota = (c: Toyota) => { /* … */ };
const driveVehicle = (c: Vehicle) => { /* … */ };

function driveThis(f: (c: Car) => void): void { /* … */ }

Which of the two functions are you allowed to pass to driveThis? You might think the first one, after all as we have seen above a function that expects an object can also be passed its subclasses (see the printWeight method). But this is wrong if you pass a function. You can think of it like this: driveThis wants something that can accept any car. But if you pass in driveToyota, the function can only deal with Toyotas which is not enough. On the other hand if you pass in a function that can drive any vehicle (driveVehicle), this also includes cars, so driveThis would accept it.
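This rule can be sketched in runnable TypeScript (hypothetical minimal classes; with strictFunctionTypes enabled, the compiler enforces exactly this):

```typescript
class Vehicle { weight = 1; }
class Car extends Vehicle { doors = 4; }
class Toyota extends Car { hybrid = true; }

// Can only deal with Toyotas.
const driveToyota = (c: Toyota) => c.hybrid;
// Can drive any vehicle, which includes every Car.
const driveVehicle = (c: Vehicle) => c.weight;

// driveThis wants something that can accept any Car.
function driveThis(f: (c: Car) => void): void {
  f(new Car());
}

driveThis(driveVehicle); // OK: every Car is a Vehicle
// driveThis(driveToyota); // Error: a plain Car is not a Toyota
```

Function parameters are contravariant: a function is a valid argument here only if it accepts at least as much as the expected parameter type.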
As Haskell is not object oriented, subtyping would not make much sense.

Row polymorphism

Now we come to the less commonly used types of polymorphism that are only implemented in very few languages.
Row polymorphism is like the little brother of subtyping. Instead of saying every object in the form { a :: A, b :: B } is also a { a :: A }, we allow specifying a row extension: { a :: A | r }. This makes it easier to use in functions, as you do not have to think about whether you are allowed to pass the more specific or the more general type; instead, you just check if your type matches the pattern. So { a :: A, b :: B } matches { a :: A | r } but { b :: B, c :: C } does not. This also has the advantage that you do not lose information. If you cast a Car to a Vehicle, you lose the information about which specific vehicle the object was. With the row extension, you keep all the information.
printX :: { x :: Int | r } -> String
printX rec = show rec.x

printY :: { y :: Int | r } -> String
printY rec = show rec.y

-- type is inferred as `{ x :: Int, y :: Int | r } -> String`
printBoth rec = printX rec ++ printY rec

One of the most popular languages implementing row polymorphism is PureScript and I am also currently working on bringing it to Haskell.

Kind polymorphism

Kinds are sort of the types of types. We all know values; they are the data that every function deals with. 5, "foo", false are all examples of values. Then there is the type level, describing values; this should also be familiar to programmers. The types of the three values above are Int, String and Bool. But there is even a level above that: kinds. The kind of all types is Type, also written as *. So this means 5 :: Int :: Type (:: means "has type"). There are also other kinds. For example, while our list type from earlier is a type (List a), what is List (without the a)? It still needs another type as an argument to form a normal type, therefore its kind is List :: Type -> Type. If you give List another type (for example Int), you get a new type (List Int).
Kind polymorphism is when you can define a type only once but still use it with multiple kinds. The best example is the Proxy data type in Haskell. It is used to "tag" a value with a type:
data Proxy a = ProxyValue

let proxy1 = (ProxyValue :: Proxy Int)  -- a has kind `Type`
let proxy2 = (ProxyValue :: Proxy List) -- a has kind `Type -> Type`

Higher-rank polymorphism

Sometimes, normal ad-hoc polymorphism is not enough. With ad-hoc polymorphism you provide many implementations for different types, and the consumer of your API chooses which type they want to use. But sometimes you, as the producer of an API, want to choose which implementation to use. This is where you need higher-rank polymorphism. In Haskell this looks like this:
-- ad-hoc polymorphism
f1 :: forall a. MyTypeClass a => a -> String
f1 = -- …

-- higher-rank polymorphism
f2 :: Int -> (forall a. MyTypeClass a => a -> String) -> Int
f2 = -- …

Instead of having the forall at the outermost position, we push it inwards and thereby declare: pass me a function that can deal with any type a that implements MyTypeClass. You can probably see that f1 is such a function, so you are allowed to pass it to f2.

Linearity polymorphism

Linearity polymorphism is connected to linear types, i.e. types that track "usage" of data. A linear type tracks the so-called multiplicity of some data. In general you distinguish three multiplicities: "zero", for stuff that exists only at the type level and is not allowed to be used at the value level; "one", for data that is not allowed to be duplicated (file descriptors would be an example of this); and "many", which is for all other data.
Linear types are useful to guarantee resource usage. For example if you fold over a mutable array in a functional language. Normally you would have to copy the array at every step or you have to use low-level unsafe functions. But with linear types you can guarantee that this array can only be used at one place at a time, so no data races can happen.
The polymorphism aspect comes into play with functions like the mentioned fold. If you give fold a function that uses its argument only once, the whole fold will use the initial value only once. If you pass a function that uses the argument multiple times, the initial value will also be used multiple times. Linearity polymorphism allows you to define the function only once and still offer this guarantee.
Linear types are similar to Rust’s borrow checker, but Rust does not really have linearity polymorphism. Haskell is getting linear types and polymorphism soon.

Levity polymorphism

In Haskell, all normal data types are just references to the heap, just like in Python or Java. But Haskell also allows using so-called "unlifted" types, meaning for example machine integers directly. This means that Haskell actually encodes the memory layout and location (stack or heap) of data types at the type level! These can be used to further optimize code, so the CPU does not have to first load a reference and then request the data from (slow) RAM.
Levity polymorphism is when you define functions that work on lifted as well as unlifted types.

Link: https://dev.to//jvanbruegge/what-the-heck-is-polymorphism-nmh

Using CSS Grid the right way

Violet Peña has shared her recommendations for using CSS Grid. They basically boil down to these high-level points:

Use names instead of numbers for setting up our grid columns.
fr should be our flexible unit of choice.
We don’t really need a grid system anymore.

Although this is all great advice and Violet provides a few examples to support her recommendations, I particularly like what she has to say about learning CSS Grid:

“Learning” CSS Grid requires developing working knowledge

The post Using CSS Grid the right way appeared first on CSS-Tricks.

Link: https://vgpena.github.io/using-css-grid-the-right-way/