Node.js Child Processes: Everything you need to know

Single-threaded, non-blocking performance in Node.js works great for a single process. But eventually, one process in one CPU is not going to be enough to handle the increasing workload of your application.

No matter how powerful your server may be, a single thread can only support a limited load.

The fact that Node.js runs in a single thread does not mean that we can’t take advantage of multiple processes and, of course, multiple machines as well.

Using multiple processes is the best way to scale a Node application. Node.js is designed for building distributed applications with many nodes. This is why it’s named Node. Scalability is baked into the platform and it’s not something you start thinking about later in the lifetime of an application.

This article is a write-up of part of my Pluralsight course about Node.js. I cover similar content in video format there.

Please note that you’ll need a good understanding of Node.js events and streams before you read this article. If you haven’t already, I recommend that you read these two other articles before you read this one:

The Child Processes Module

We can easily spin up a child process using Node's child_process module, and those child processes can easily communicate with each other with a messaging system.

The child_process module enables us to access Operating System functionalities by running any system command inside a, well, child process.

We can control that child process input stream, and listen to its output stream. We can also control the arguments to be passed to the underlying OS command, and we can do whatever we want with that command’s output. We can, for example, pipe the output of one command as the input to another (just like we do in Linux) as all inputs and outputs of these commands can be presented to us using Node.js streams.

Note that examples I’ll be using in this article are all Linux-based. On Windows, you need to switch the commands I use with their Windows alternatives.

There are four different ways to create a child process in Node: spawn(), fork(), exec(), and execFile().

We’re going to see the differences between these four functions and when to use each.

Spawned Child Processes

The spawn function launches a command in a new process and we can use it to pass that command any arguments. For example, here’s code to spawn a new process that will execute the pwd command.

const { spawn } = require('child_process');
const child = spawn('pwd');

We simply destructure the spawn function out of the child_process module and execute it with the OS command as the first argument.

The result of executing the spawn function (the child object above) is a ChildProcess instance, which implements the EventEmitter API. This means we can register handlers for events on this child object directly. For example, we can do something when the child process exits by registering a handler for the exit event:

child.on('exit', function (code, signal) {
  console.log('child process exited with ' +
              `code ${code} and signal ${signal}`);
});

The handler above gives us the exit code for the child process and the signal, if any, that was used to terminate the child process. This signal variable is null when the child process exits normally.

The other events that we can register handlers for with the ChildProcess instances are disconnect, error, close, and message.

  • The disconnect event is emitted when the parent process manually calls the child.disconnect function.
  • The error event is emitted if the process could not be spawned or killed.
  • The close event is emitted when the stdio streams of a child process get closed.
  • The message event is the most important one. It’s emitted when the child process uses the process.send() function to send messages. This is how parent/child processes can communicate with each other. We’ll see an example of this below.

Every child process also gets the three standard stdio streams, which we can access using child.stdin, child.stdout, and child.stderr.

When those streams get closed, the child process that was using them will emit the close event. This close event is different than the exit event because multiple child processes might share the same stdio streams and so one child process exiting does not mean that the streams got closed.
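For example, we can register handlers for both events and see which one fires when (a minimal sketch):

child.on('exit', (code, signal) => {
  // the process itself has terminated
  console.log(`exit: code ${code}, signal ${signal}`);
});

child.on('close', (code) => {
  // the stdio streams of the process have been closed
  console.log(`close: code ${code}`);
});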

Since all streams are event emitters, we can listen to different events on those stdio streams that are attached to every child process. Unlike in a normal process though, in a child process, the stdout/stderr streams are readable streams while the stdin stream is a writable one. This is basically the inverse of those types as found in a main process. The events we can use for those streams are the standard ones. Most importantly, on the readable streams, we can listen to the data event, which will have the output of the command or any error encountered while executing the command:

child.stdout.on('data', (data) => {
  console.log(`child stdout:\n${data}`);
});

child.stderr.on('data', (data) => {
  console.error(`child stderr:\n${data}`);
});

The two handlers above will log both cases to the main process stdout and stderr. When we execute the spawn function above, the output of the pwd command gets printed and the child process exits with code 0, which means no error occurred.

We can pass arguments to the command that’s executed by the spawn function using the second argument of the spawn function, which is an array of all the arguments to be passed to the command. For example, to execute the find command on the current directory with a -type f argument (to list files only), we can do:

const child = spawn('find', ['.', '-type', 'f']);

If an error occurs during the execution of the command, for example, if we give find an invalid destination above, the child.stderr data event handler will be triggered and the exit event handler will report an exit code of 1, which signifies that an error has occurred. The error values actually depend on the host OS and the type of error.
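For example, here’s a minimal sketch of handling both failure modes (the invalid path is just for illustration):

const { spawn } = require('child_process');

const child = spawn('find', ['/nonexistent', '-type', 'f']);

child.on('error', (err) => {
  // fires only if the process could not be spawned at all
  console.error('Failed to start child process:', err);
});

child.on('exit', (code) => {
  // a non-zero code here means the command itself failed
  console.log(`child exited with code ${code}`);
});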

A child process stdin is a writable stream. We can use it to send a command some input. Just like any writable stream, the easiest way to consume it is using the pipe function. We simply pipe a readable stream into a writable stream. Since the main process stdin is a readable stream, we can pipe that into a child process stdin stream. For example:

const { spawn } = require('child_process');

const child = spawn('wc');

process.stdin.pipe(child.stdin);

child.stdout.on('data', (data) => {
  console.log(`child stdout:\n${data}`);
});

In the example above, the child process invokes the wc command, which counts lines, words, and characters in Linux. We then pipe the main process stdin (which is a readable stream) into the child process stdin (which is a writable stream). The result of this combination is that we get a standard input mode where we can type something and when we hit Ctrl+D, what we typed will be used as the input of the wc command.

We can also pipe the standard input/output of multiple processes on each other, just like we can do with Linux commands. For example, we can pipe the stdout of the find command to the stdin of the wc command to count all the files in the current directory:

const { spawn } = require('child_process');

const find = spawn('find', ['.', '-type', 'f']);
const wc = spawn('wc', ['-l']);

find.stdout.pipe(wc.stdin);

wc.stdout.on('data', (data) => {
  console.log(`Number of files ${data}`);
});

I added the -l argument to the wc command to make it count only the lines. When executed, the code above will output a count of all files in all directories under the current one.

Shell Syntax and the exec function

By default, the spawn function does not create a shell to execute the command we pass into it. This makes it slightly more efficient than the exec function, which does create a shell. The exec function has one other major difference. It buffers the command’s generated output and passes the whole output value to a callback function (instead of using streams, which is what spawn does).

Here’s the previous find | wc example implemented with an exec function.

const { exec } = require('child_process');

exec('find . -type f | wc -l', (err, stdout, stderr) => {
  if (err) {
    console.error(`exec error: ${err}`);
    return;
  }

  console.log(`Number of files ${stdout}`);
});

Since the exec function uses a shell to execute the command, we can use the shell syntax directly here, making use of the shell pipe feature.

Note that using the shell syntax comes with a security risk if you’re executing any kind of dynamic input provided externally. A user can simply perform a command injection attack using shell syntax characters like ; and $ (for example, command + '; rm -rf ~').
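For example, here’s a sketch of the difference (userInput stands for a hypothetical untrusted value, and execFile is covered later in this article):

const { exec, execFile } = require('child_process');

// DANGEROUS: a value like '; rm -rf ~' would be executed by the shell
exec(`find . -type f -name ${userInput}`, (err, stdout) => {
  console.log(stdout);
});

// Safer: arguments are passed directly to the command, with no shell involved
execFile('find', ['.', '-type', 'f', '-name', userInput], (err, stdout) => {
  console.log(stdout);
});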

The exec function buffers the output and passes it to the callback function (the second argument to exec) as the stdout argument there. This stdout argument is the command’s output that we want to print out.

The exec function is a good choice if you need to use the shell syntax and if the size of the data expected from the command is small. (Remember, exec will buffer the whole data in memory before returning it.)

The spawn function is a much better choice when the size of the data expected from the command is large, because that data will be streamed with the standard IO objects.

We can make the spawned child process inherit the standard IO objects of its parents if we want to, but also, more importantly, we can make the spawn function use the shell syntax as well. Here’s the same find | wc command implemented with the spawn function:

const child = spawn('find . -type f | wc -l', {
  stdio: 'inherit',
  shell: true
});

Because of the stdio: 'inherit' option above, when we execute the code, the child process inherits the main process stdin, stdout, and stderr. This causes the child process data event handlers to be triggered on the main process stdout stream, making the script output the result right away.

Because of the shell: true option above, we were able to use the shell syntax in the passed command, just like we did with exec. But with this code, we still get the advantage of the streaming of data that the spawn function gives us. This is really the best of both worlds.

There are a few other good options we can use in the last argument to the child_process functions besides shell and stdio. We can, for example, use the cwd option to change the working directory of the script. For example, here’s the same count-all-files example done with a spawn function using a shell and with a working directory set to my Downloads folder. The cwd option here will make the script count all files I have in ~/Downloads:

const child = spawn('find . -type f | wc -l', {
  stdio: 'inherit',
  shell: true,
  cwd: '/Users/samer/Downloads'
});

Another option we can use is the env option to specify the environment variables that will be visible to the new child process. The default for this option is process.env, which gives any command access to the current process environment. If we want to override that behavior, we can simply pass an empty object as the env option or new values there to be considered as the only environment variables:

const child = spawn('echo $ANSWER', {
  stdio: 'inherit',
  shell: true,
  env: { ANSWER: 42 },
});

The echo command above does not have access to the parent process’s environment variables. It can’t, for example, access $HOME, but it can access $ANSWER because it was passed as a custom environment variable through the env option.

One last important child process option to explain here is the detached option, which makes the child process run independently of its parent process.

Assuming we have a file timer.js that keeps the event loop busy:

setTimeout(() => {  
  // keep the event loop busy
}, 20000);

We can execute it in the background using the detached option:

const { spawn } = require('child_process');

const child = spawn('node', ['timer.js'], {
  detached: true,
  stdio: 'ignore'
});

child.unref();

The exact behavior of detached child processes depends on the OS. On Windows, the detached child process will have its own console window while on Linux the detached child process will be made the leader of a new process group and session.

If the unref function is called on the detached process, the parent process can exit independently of the child. This can be useful if the child is executing a long-running process, but to keep it running in the background the child’s stdio configurations also have to be independent of the parent.

The example above will run a node script (timer.js) in the background by detaching and also ignoring its parent stdio file descriptors so that the parent can terminate while the child keeps running in the background.

The execFile function

If you need to execute a file without using a shell, the execFile function is what you need. It behaves exactly like the exec function, but does not use a shell, which makes it a bit more efficient. On Windows, some files cannot be executed on their own, like .bat or .cmd files. Those files cannot be executed with execFile and either exec or spawn with shell set to true is required to execute them.
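For example, here’s a minimal sketch that runs the node binary directly with execFile:

const { execFile } = require('child_process');

execFile('node', ['--version'], (err, stdout) => {
  if (err) {
    console.error(`execFile error: ${err}`);
    return;
  }

  console.log(`Node version: ${stdout}`);
});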

The *Sync function

The functions spawn, exec, and execFile from the child_process module also have synchronous blocking versions that will wait until the child process exits.

const { 
  spawnSync, 
  execSync, 
  execFileSync,
} = require('child_process');

Those synchronous versions are potentially useful when trying to simplify scripting tasks or any startup processing tasks, but they should be avoided otherwise.
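For example, a quick script might block until a command finishes (a minimal sketch):

const { execFileSync } = require('child_process');

// blocks the event loop until the command exits
const output = execFileSync('pwd').toString().trim();

console.log(`Current directory: ${output}`);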

The fork() function

The fork function is a variation of the spawn function for spawning node processes. The biggest difference between spawn and fork is that a communication channel is established to the child process when using fork, so we can use the send function on the forked process along with the global process object itself to exchange messages between the parent and forked processes. We do this through the EventEmitter module interface. Here’s an example:

The parent file, parent.js:

const { fork } = require('child_process');

const forked = fork('child.js');

forked.on('message', (msg) => {
  console.log('Message from child', msg);
});

forked.send({ hello: 'world' });

The child file, child.js:

process.on('message', (msg) => {
  console.log('Message from parent:', msg);
});

let counter = 0;

setInterval(() => {
  process.send({ counter: counter++ });
}, 1000);

In the parent file above, we fork child.js (which will execute the file with the node command) and then we listen for the message event. The message event will be emitted whenever the child uses process.send, which we’re doing every second.

To pass down messages from the parent to the child, we can execute the send function on the forked object itself, and then, in the child script, we can listen to the message event on the global process object.

When executing the parent.js file above, it’ll first send down the { hello: 'world' } object to be printed by the forked child process and then the forked child process will send an incremented counter value every second to be printed by the parent process.

Let’s do a more practical example using the fork function.

Let’s say we have an http server that handles two endpoints. One of these endpoints (/compute below) is computationally expensive and will take a few seconds to complete. We can use a long for loop to simulate that:

const http = require('http');
const longComputation = () => {
  let sum = 0;
  for (let i = 0; i < 1e9; i++) {
    sum += i;
  }
  return sum;
};
const server = http.createServer();
server.on('request', (req, res) => {
  if (req.url === '/compute') {
    const sum = longComputation();
    return res.end(`Sum is ${sum}`);
  } else {
    res.end('Ok')
  }
});

server.listen(3000);

This program has a big problem: when the /compute endpoint is requested, the server will not be able to handle any other requests because the event loop is busy with the long for loop operation.

There are a few ways with which we can solve this problem depending on the nature of the long operation but one solution that works for all operations is to just move the computational operation into another process using fork.

We first move the whole longComputation function into its own file and make it invoke that function when instructed via a message from the main process:

In a new compute.js file:

const longComputation = () => {
  let sum = 0;
  for (let i = 0; i < 1e9; i++) {
    sum += i;
  }
  return sum;
};

process.on('message', (msg) => {
  const sum = longComputation();
  process.send(sum);
});

Now, instead of doing the long operation in the main process event loop, we can fork the compute.js file and use the messages interface to communicate messages between the server and the forked process.

const http = require('http');
const { fork } = require('child_process');

const server = http.createServer();

server.on('request', (req, res) => {
  if (req.url === '/compute') {
    const compute = fork('compute.js');
    compute.send('start');
    compute.on('message', sum => {
      res.end(`Sum is ${sum}`);
    });
  } else {
    res.end('Ok')
  }
});

server.listen(3000);

When a request to /compute happens now with the above code, we simply send a message to the forked process to start executing the long operation. The main process’s event loop will not be blocked.

Once the forked process is done with that long operation, it can send its result back to the parent process using process.send.

In the parent process, we listen to the message event on the forked child process itself. When we get that event, we’ll have a sum value ready for us to send to the requesting user over http.

The code above is, of course, limited by the number of processes we can fork, but when we execute it and request the long computation endpoint over http, the main server is not blocked at all and can take further requests.

Node’s cluster module, which is the topic of my next article, is based on this idea of child process forking and load balancing the requests among the many forks that we can create on any system.

That’s all I have for this topic. Thanks for reading! Until next time!

Source

Functional setState is the future of React

Justice Mba

Update: I gave a follow-up talk on this topic at React Rally. While this post is more about the “functional setState” pattern, the talk is more about understanding setState deeply.

React has popularized functional programming in JavaScript. This has led to giant frameworks adopting the Component-based UI pattern that React uses. And now functional fever is spilling over into the web development ecosystem at large.

But the React team is far from relenting. They continue to dig deeper, discovering even more functional gems hidden in the legendary library.

So today I reveal to you new functional gold buried in React, the best-kept React secret — Functional setState!

Okay, I just made up that name… and it’s not entirely new or a secret. No, not exactly. See, it’s a pattern built into React that’s only known by the few developers who’ve really dug in deep. And it never had a name. But now it does — Functional setState!

Going by Dan Abramov’s words in describing this pattern, Functional setState is a pattern where you

“Declare state changes separately from the component classes.”

Huh?

Okay… what you already know

React is a component-based UI library. A component is basically a function that accepts some properties and returns a UI element.

function User(props) {
  return (
    <div>A pretty user</div>
  );
}

A component might need to have and manage its state. In that case, you usually write the component as a class. Then you have its state live in the class constructor function:

class User {
  constructor () {
    this.state = {
      score : 0
    };
  }
  render () {
    return (
      <div>This user scored {this.state.score}</div>
    );
  }
}

To manage the state, React provides a special method called setState(). You use it like this:

class User {
  ...
  increaseScore () {
    this.setState({score : this.state.score + 1});
  }
  ...
}

Note how setState() works. You pass it an object containing part(s) of the state you want to update. In other words, the object you pass would have keys corresponding to the keys in the component state, then setState() updates or sets the state by merging the object into the state. Thus, “set-State”.

What you probably didn’t know

Remember how we said setState() works? Well, what if I told you that instead of passing an object, you could pass a function?

Yes. setState() also accepts a function. The function accepts the previous state and current props of the component which it uses to calculate and return the next state. See it below:

this.setState(function (state, props) {
  return {
    score: state.score - 1
  };
});

Note that setState() is a function, and we are passing another function to it (functional programming… functional setState). At first glance, this might seem ugly: too many steps just to set state. Why would you ever want to do this?

Why pass a function to setState?

The thing is, state updates may be asynchronous.

Think about what happens when setState() is called. React will first merge the object you passed to setState() into the current state. Then it will start that reconciliation thing. It will create a new React Element tree (an object representation of your UI), diff the new tree against the old tree, figure out what has changed based on the object you passed to setState() , then finally update the DOM.

Whew! So much work! In fact, this is even an overly simplified summary. But trust in React!

React does not simply “set-state”.

Because of the amount of work involved, calling setState() might not immediately update your state.

React may batch multiple setState() calls into a single update for performance.

What does React mean by this?

First, “multiple setState() calls” could mean calling setState() inside a single function more than once, like this:

...
state = {score : 0};

// multiple setState() calls
increaseScoreBy3 () {
  this.setState({score : this.state.score + 1});
  this.setState({score : this.state.score + 1});
  this.setState({score : this.state.score + 1});
}
...

Now when React encounters “multiple setState() calls”, instead of doing that “set-state” three whole times, React will avoid that huge amount of work I described above and smartly say to itself: “No! I’m not going to climb this mountain three times, carrying and updating some slice of state on every single trip. No, I’d rather get a container, pack all these slices together, and do this update just once.” And that, my friends, is batching!

Remember that what you pass to setState() is a plain object. Now, assume that anytime React encounters “multiple setState() calls”, it does the batching thing by extracting all the objects passed to each setState() call, merging them together to form a single object, then using that single object to do setState().

In JavaScript merging objects might look something like this:

const singleObject = Object.assign(
  {}, 
  objectFromSetState1, 
  objectFromSetState2, 
  objectFromSetState3
);

This pattern is known as object composition.

In JavaScript, the way “merging” or composing objects works is: if the three objects have the same keys, the value of the key of the last object passed to Object.assign() wins. For example:

const me  = {name : "Justice"}, 
      you = {name : "Your name"},
      we  = Object.assign({}, me, you);
we.name === "Your name"; //true
console.log(we); // {name : "Your name"}

Because you are the last object merged into we, the value of name in the you object — “Your name” — overrides the value of name in the me object. So “Your name” makes it into the we object… you win! 🙂

Thus, if you call setState() with an object multiple times — passing an object each time — React will merge. Or in other words, it will compose a new object out of the multiple objects we passed it. And if any of the objects contains the same key, the value of that key from the last object passed is stored. Right?

That means that, given our increaseScoreBy3 function above, the final result of the function will just be 1 instead of 3, because React did not immediately update the state in the order we called setState(). Instead, React first composed all the objects together, which results in this: {score : this.state.score + 1}, then did “set-state” only once — with the newly composed object. Something like this: User.setState({score : this.state.score + 1}).

To be super clear, passing an object to setState() is not the problem here. The real problem is passing an object to setState() when you want to calculate the next state from the previous state. So stop doing this. It’s not safe!

Because this.props and this.state may be updated asynchronously, you should not rely on their values for calculating the next state.
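In other words, whenever the next state depends on the previous one, swap the object for a function. A minimal sketch of the contrast:

// unsafe: this.state may be stale by the time the update is applied
this.setState({score : this.state.score + 1});

// safe: the function receives the up-to-date state as an argument
this.setState((state) => ({score : state.score + 1}));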

Here is a pen by Sophia Shoemaker that demos this problem. Play with it, and pay attention to both the bad and the good solutions in this pen:

Functional setState to the rescue

If you’ve not spent time playing with the pen above, I strongly recommend that you do, as it will help you grasp the core concept of this post.

While you were playing with the pen above, you no doubt saw that functional setState fixed our problem. But how, exactly?

Let’s consult the Oprah of React — Dan.

Note the answer he gave. When you do functional setState…

Updates will be queued and later executed in the order they were called.

So, when React encounters “multiple functional setState() calls”, instead of merging objects together (of course, there are no objects to merge), React queues the functions “in the order they were called.”

After that, React goes on updating the state by calling each function in the “queue”, passing it the previous state — that is, the state as it was before the first functional setState() call (if it’s the first functional setState() currently executing) or the state with the latest update from the previous functional setState() call in the queue.

Again, I think seeing some code would be great. This time though, we’re gonna fake everything. Know that this is not the real thing, but is instead just here to give you an idea of what React is doing.

Also, to make it less verbose, we’ll use ES6. You can always write the ES5 version later if you want.

First, let’s create a component class. Then, inside it, we’ll create a fake setState() method. Also, our component will have an increaseScoreBy3() method, which will do a multiple functional setState. Finally, we’ll instantiate the class, just as React would do.

class User {
  state = {score : 0};
  //let's fake setState
  setState(state, callback) {
    this.state = Object.assign({}, this.state, state);
    if (callback) callback();
  }
  // multiple functional setState call
  increaseScoreBy3 () {
    this.setState( (state) => ({score : state.score + 1}) );
    this.setState( (state) => ({score : state.score + 1}) );
    this.setState( (state) => ({score : state.score + 1}) );
  }
}
const Justice = new User();

Note that setState also accepts an optional second parameter — a callback function. If it’s present, React calls it after updating the state.

Now when a user triggers increaseScoreBy3(), React queues up the multiple functional setState. We won’t fake that logic here, as our focus is on what actually makes functional setState safe. But you can think of the result of that “queuing” process as an array of functions, like this:

const updateQueue = [
  (state) => ({score : state.score + 1}),
  (state) => ({score : state.score + 1}),
  (state) => ({score : state.score + 1})
];

Finally, let’s fake the updating process:

// recursively update state in the order the updates were queued
function updateState(component, updateQueue) {
  if (updateQueue.length === 1) {
    return component.setState(updateQueue[0](component.state));
  }

  return component.setState(
    updateQueue[0](component.state),
    () => updateState(component, updateQueue.slice(1))
  );
}

updateState(Justice, updateQueue);

True, this is not the sexiest code. I trust you could do better. But the key focus here is that every time React executes the functions from your functional setState, React updates your state by passing it a fresh copy of the updated state. That makes it possible for functional setState to set state based on the previous state.

Here I made a bin with the complete code. Tinker around with it (possibly make it look sexier), just to get more sense of it.

Play with it to grasp it fully. When you come back we’re gonna see what makes functional setState truly golden. Web development services in Hyderabad visit Vivid Designs 

The best-kept React secret

So far, we’ve deeply explored why it’s safe to do multiple functional setStates in React. But we haven’t actually fulfilled the complete definition of functional setState: “Declare state changes separately from the component classes.”

Over the years, the logic of setting state — that is, the functions or objects we pass to setState() — has always lived inside the component classes. This is more imperative than declarative.

Well today, I present you with newly unearthed treasure — the best-kept React secret:

Thanks to Dan Abramov!

That is the power of functional setState. Declare your state update logic outside your component class. Then call it inside your component class.

// outside your component class
function increaseScore (state, props) {
  return {score : state.score + 1}
}
class User {
  ...
  // inside your component class
  handleIncreaseScore () {
    this.setState(increaseScore);
  }
  ...
}

This is declarative! Your component class no longer cares how the state updates. It simply declares the type of update it desires.

To deeply appreciate this, think about those complex components that would usually have many state slices, updating each slice on different actions. And sometimes, each update function would require many lines of code. All of this logic would live inside your component. But not anymore!

Also, if you’re like me and like keeping every module as short as possible, you may feel like your module is getting too long. Now you have the power to extract all your state change logic to a different module, then import and use it in your component.

import {increaseScore} from "../stateChanges";
class User {
  ...
  // inside your component class
  handleIncreaseScore () {
    this.setState(increaseScore);
  }
  ...
}

Now you can even reuse the increaseScore function in a different component. Just import it.

What else can you do with functional setState?

Make testing easy!
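Because a state-change function like increaseScore is pure, you can test it without rendering a single component. Here’s a sketch, assuming a test runner like Jest:

import {increaseScore} from "../stateChanges";

test('increaseScore increments the score by 1', () => {
  expect(increaseScore({score : 1})).toEqual({score : 2});
});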

You can also pass extra arguments to calculate the next state (this one blew my mind… #funfunFunction).
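One way to do that is to write a function that takes the extra arguments and returns a state-change function (increaseScoreBy here is a hypothetical helper, just a sketch):

// outside your component class
const increaseScoreBy = (points) => (state) => ({
  score : state.score + points
});

// inside your component class
this.setState(increaseScoreBy(10));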

Expect even more in…

The Future of React

For years now, the React team has been experimenting with how to best implement stateful functions.

Functional setState seems to be just the right answer to that (probably).

Hey, Dan! Any last words?

If you’ve made it this far, you’re probably as excited as I am. Start experimenting with this functional setState today!

If you feel like I’ve done any nice job, or that others deserve a chance to see this, kindly click on the green heart below to help spread a better understanding of React in our community.

If you have a question that hasn’t been answered or you don’t agree with some of the points here, feel free to drop a comment here or via Twitter.

Happy Coding!

The 100% correct way to do CSS breakpoints

David Gilbertson

For the next minute or so, I want you to forget about CSS. Forget about web development. Forget about digital user interfaces.

And as you forget these things, I want you to allow your mind to wander. To wander back in time. Back to your youth. Back to your first day of school.

It was a simpler time, when all you had to worry about was drawing shapes and keeping your incontinence in check.

Take a look at the dots above. Notice how some of them are clumped together, and some of them spread out? What I want you to do is break them up into five groups for me, however you see fit.

Go ahead. After checking that no one is watching, draw a circle around each of the five groups with your child-like finger.

You probably came up with something like the below, right? (And whatever you do, don’t tell me you scrolled down without doing the exercise. I will face palm.)

Sure, those two dots on the right could have gone either way. If you grouped them together it’s OK, I guess. They say there’s no wrong answer, but I’ve never been wrong, so I’ve never been on the receiving end of that particular platitude.

Before I go on, did you draw something like the below?

Probably not. Right?

But that’s basically what you’d be doing if you set your breakpoints at positions matching the exact width of popular devices (320px, 768px, 1024px).

Have words of the below nature ever entered your ears or exited your mouth?

“Is the medium breakpoint up to 768px, or including 768? I see… and that’s iPad landscape, or is that ‘large’? Oh, large is 768px and up. I see. And small is 320px? What is this range from 0 to 319px? A breakpoint for ants?”

I could proceed to show you the correct breakpoints and leave it at that. But I find it very curious that the above method (“silly grouping”) is so widespread.

Why should that be?

I think the answer to this problem, like so many problems, comes down to misaligned terminology. After all, waterboarding at Guantanamo Bay sounds super rad if you don’t know what either of those things are. (Oh I wish that was my joke.)

I think we mix up “boundaries” and “ranges” in our discussions and implementations of breakpoints.

Tell me, if you do breakpoints in Sass, do you have a variable called $large that is, say, 768px?

Is that the lower boundary of the range you refer to as large, or the upper boundary? If it’s the lower, then you must have no $small because that should be 0, right?

And if it’s the upper boundary then how would you define a breakpoint $large-and-up? That must be a media query with a min-width of $medium, right?

And if you are referring to just a boundary when you say large, then we’re in for confusion later on because a media query is always a range.

This situation is a mess and we’re wasting time thinking about it. So I have three suggestions:

  1. Get your breakpoints right
  2. Name your ranges sensibly
  3. Be declarative

Tip #1: Get your breakpoints right

So what are the right breakpoints?

Your kindergarten self already drew the circles. I’ll just turn them into rectangles for you.

600px, 900px, 1200px, and 1800px if you plan on giving the giant-monitor people something special. On a side note, if you’re ordering a giant monitor online, make sure you specify it’s for a computer. You don’t want to get a giant lizard in the mail.

Those dots your channeled young self has been playing with actually represent the 14 most common screen sizes:


So we can make a pretty little picture that allows for the easy flow of words between the folks dressed up as business people, designers, developers, and testers.

I’m regretting my choice of orange and green, but I’m not redoing all of these pictures now.

Tip #2: Name your ranges sensibly

Sure, you could name your breakpoints papa-bear and baby-bear if you like. But if I’m going to sit down with a designer and discuss how the site should look on different devices, I want it to be over as quickly as possible. If naming a size portrait tablet facilitates that, then I’m sold. Heck, I’d even forgive you for calling it “iPad portrait.”

“But the landscape is changing!” you may shout. “Phones are getting bigger, tablets are getting smaller!”

But your website’s CSS has a shelf life of about three years (unless it’s Gmail). The iPad has been with us for twice that time, and it has yet to be dethroned. And we know that Apple no longer makes new products, they just remove things from existing ones (buttons, holes, etc).

So 1024 x 768 is here to stay, folks. Let’s not bury our heads in the sand. (Fun fact: ostriches don’t live in cities because there is no sand, and thus nowhere to hide from predators.)

Conclusion: communication is important. Don’t purposefully detach yourself from helpful vocabulary.

Tip #3: Be declarative

I know, I know, that word “declarative” again. I’ll put it another way: your CSS should define what it wants to happen, not how it should happen. The “how” belongs hidden away in some sort of mixin.

As discussed earlier, part of the confusion around breakpoints is that variables that define a boundary of a range are used as the name of the range. $large: 600px simply makes no sense if large is a range. It’s the same as saying var coordinates = 4;.

So we can hide those details inside a mixin rather than expose them to be used in the code. Or we can do one better and not use variables at all.

At first I did the below snippet as a simplified example. But really I think it covers all the bases. To see it in action, check out this pen. I’m using Sass because I can’t imagine building a site without it. The logic applies to CSS or Less just the same.
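Something like this (a sketch; the boundary values follow the breakpoints above):

@mixin for-phone-only {
  @media (max-width: 599px) { @content; }
}
@mixin for-tablet-portrait-up {
  @media (min-width: 600px) { @content; }
}
@mixin for-tablet-portrait-only {
  @media (min-width: 600px) and (max-width: 899px) { @content; }
}
@mixin for-tablet-landscape-up {
  @media (min-width: 900px) { @content; }
}
@mixin for-tablet-landscape-only {
  @media (min-width: 900px) and (max-width: 1199px) { @content; }
}
@mixin for-desktop-up {
  @media (min-width: 1200px) { @content; }
}
@mixin for-desktop-only {
  @media (min-width: 1200px) and (max-width: 1799px) { @content; }
}
@mixin for-big-desktop-up {
  @media (min-width: 1800px) { @content; }
}

// usage
.my-sidebar {
  @include for-phone-only {
    display: none;
  }
}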

Note that I’m forcing the developer to specify the -up or -only suffix.

Ambiguity breeds confusion.

An obvious criticism might be that this doesn’t handle custom media queries. Well good news, everybody. If you want a custom media query, write a custom media query. (In practice, if I needed more complexity than the above I’d cut my losses and run into the loving embrace of Susy’s toolkit.)

Another criticism might be that I’ve got eight mixins here. Surely a single mixin would be the sane thing to do, then just pass in the required size, like so:
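// a sketch of the single-mixin alternative
@mixin for-size($size) {
  @if $size == phone-only {
    @media (max-width: 599px) { @content; }
  } @else if $size == tablet-portrait-up {
    @media (min-width: 600px) { @content; }
  } @else if $size == tablet-landscape-up {
    @media (min-width: 900px) { @content; }
  } @else if $size == desktop-up {
    @media (min-width: 1200px) { @content; }
  } @else if $size == big-desktop-up {
    @media (min-width: 1800px) { @content; }
  }
}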

Sure, that works. But you won’t get compile-time errors if you pass in an unsupported name. And to pass in a Sass variable means exposing eight variables just to pass to a switch in a mixin.

Not to mention the syntax @include for-desktop-up {...} is totes more pretty than @include for-size(desktop-up) {...}.

A criticism of both these code snippets might be that I’m typing out 900px twice, and also 899px. Surely I should just use variables and subtract 1 when needed.

If you want to do that, go bananas, but there are two reasons I wouldn’t:

  1. These are not things that change frequently. These are also not numbers that are used anywhere else in the code base. No problems are caused by the fact that they aren’t variables — unless you want to expose your Sass breakpoints to a script that injects a JS object with those variables into your page.
  2. The syntax is nasty when you want to turn numbers into strings with Sass. Below is the price you pay for believing that repeating a number twice is the worst of all evils:
  3. Oh and since I’ve taken on a ranty tone over the last few paragraphs … I pity the fool who does something magical like store breakpoints in a Sass list and loop over them to output media queries, or something similarly ridiculous that future developers will struggle to decipher.

    Complexity is where the bugs hide.

Finally, you may be thinking “shouldn’t I be basing my breakpoints on content, not devices?”. Well I’m amazed you made it this far and the answer is yes … for sites with a single layout. Or if you have multiple layouts and are happy to have a different set of breakpoints for each layout. Oh and also if your site design doesn’t change often, or you’re happy to update your breakpoints when your designs update since you’ll want to keep them based on the content, right?

For complex sites, life is much easier if you pick a handful of breakpoints to use across the site.

We’re done! But this post has not been as furry as I would like, let me see if I can think of an excuse to include some…

Oh, I know!

Bonus tips for breakpoint development

  1. If you need to experience CSS breakpoints for screen sizes bigger than the monitor you’re sitting at, use the ‘responsive’ mode in Chrome DevTools and type in whatever giant size you like.
  2. The blue bar shows ‘max-width’ media queries, the orange bar shows ‘min-width’ media queries, and the green bar shows media queries with both a min and a max.
  3. Clicking a media query sets the screen to that width. If you click on a green media query more than once, it toggles between the max and min widths.
  4. Right-click a media query in the media queries bar to go to the definition of that rule in the CSS.

Hey, thanks for reading! Comment with your top ideas, I’d love to hear them. And click the little heart if you think I deserve it, or leave it hollow and empty, like my sense of self-worth will be if you don’t.

     

A Study Plan To Cure JavaScript Fatigue

Like everybody else, I recently came across Jose Aguinaga’s post “How it feels to learn JavaScript in 2016”.

It’s clear that this post hit a nerve: I saw it reaching the top spot on Hacker News not once but twice. It was the most popular post on /r/javascript as well, and as of right now it has over 10k likes on Medium — which is probably more than all my own posts put together. But who’s counting?

This didn’t come as a surprise though: I’ve known for a long time that the JavaScript ecosystem can be confusing. In fact, the very reason why I ran the State Of JavaScript survey was to find out which libraries were actually popular, and finally sort the signal from the noise.

But today, I want to go one step further. Instead of simply complaining about the state of things, I’m going to give you a concrete, step-by-step study plan to conquering the JavaScript ecosystem.

Who Is This For

This study plan is for you if:

  • You’re already familiar with basic programming concepts like variables and functions.
  • You might have already done back-end work with languages such as PHP and Python, and maybe used front-end libraries such as jQuery for a few simple hacks.
  • You now want to get into more serious front-end development but are drowning in frameworks and libraries before you’ve even started.

Things We’ll Cover

  • What a modern JavaScript web app looks like
  • Why you can’t just use jQuery
  • Why React is the safest pick
  • Why you may not need to “learn JavaScript properly” first
  • How to learn ES6 syntax
  • Why and how to learn Redux
  • What GraphQL is and why it’s a big deal
  • Where to go next

Resources Mentioned Here

Disclaimer: this post will include a few affiliate links to courses by Wes Bos, but the material is recommended because I genuinely think it’s good, and not just because of the affiliate scheme.

If you would rather find other resources, Mark Erikson maintains a great list of React, ES6, and Redux links.

JavaScript vs JavaScript

Before we start, we need to make sure we’re talking about the same thing. If you google “Learn JavaScript” or “JavaScript study plan”, you’ll find a ton of resources that teach you how to learn the JavaScript language.

But that’s actually the easy part. While you can definitely dig deep and learn the intricacies of the language, the truth is most web apps use relatively simple code. In other words, 80% of what you’ll ever need to write web apps is typically covered in the first few chapters of your typical JavaScript book.

No, the hard problem is mastering the JavaScript ecosystem, with its countless competing frameworks and libraries. The good news is, that’s exactly what this study plan focuses on.

The Building Blocks Of JavaScript Apps

In order to understand why modern JavaScript apps seem so complex, you first have to understand how they work.

For starters, let’s look at a “traditional” web app circa 2008:

  1. The database sends data to your back-end (e.g. your PHP or Rails app).
  2. The back-end reads that data and outputs HTML.
  3. The HTML is sent to the browser, which displays it as the DOM (basically, a web page).

Now a lot of these apps also sprinkled in some JavaScript code on the client to add interactivity, such as tabs and modal windows. But fundamentally, the browser was still receiving HTML and going from there.

Now compare this with a “modern” 2016 web app (also known as the “Single Page App”):

Notice the difference? Instead of sending HTML, the server now sends data, and the “data to HTML” conversion step happens on the client instead (which is why you’re also sending along the code that tells the client how to perform said conversion).

This has many implications. First, the good:

  • For a given piece of content, sending only data is faster than sending entire HTML pages.
  • The client can swap in content instantly without having to ever refresh the browser window (thus the term “Single Page App”).

The bad:

  • The initial load takes longer since the “data to HTML” codebase can grow quite large.
  • You now need a place to store and manage the data on the client too, in case you want to cache it or inspect it.

And the ugly:

  • Congratulations — you now have to deal with a client-side stack, which can get just as complex as your server-side stack.

The Client-Server Spectrum

So why go through all this trouble if there are so many downsides? Why not just stick to the good old PHP apps of old?

Well, imagine you’re building a calculator. If the user wants to know what 2 + 2 is, it doesn’t make sense to go all the way back to the server to perform the operation when the browser is perfectly capable of doing it.

On the other hand, if you’re building a purely static site such as a blog, it’s perfectly fine to generate the final HTML on the server and be done with it.

The truth is, most web apps fall somewhere in the middle of the spectrum. The problem is knowing where.

But the key thing is that the spectrum is not continuous: you can’t start with a pure server-side app and slowly move towards a pure client-side app. At some point (the Divide), you’ll be forced to stop and refactor everything, or else end up with a mess of unmaintainable spaghetti code.

This is why you shouldn’t “just use jQuery” for everything. You can think of jQuery like duct tape. It’s amazingly handy for small fixes around the house, but if you keep adding more and more, things will start looking ugly.

On the other hand, modern JavaScript frameworks are more like 3D-printing a replacement piece: it takes more time, but the result is a lot cleaner and sturdier.

In other words, mastering the modern JavaScript stack is a bet that no matter where they start, most web apps will probably end up on the right side of the divide sooner or later. So yes, it’s more work, but better safe than sorry.

Week 0: JavaScript Basics

Unless you’re a pure back-end developer, you probably know some JavaScript. And even if you don’t, JavaScript’s C-like syntax will look somewhat familiar if you’re a PHP or Java developer.

But if JavaScript is a complete mystery to you, don’t despair. There are a lot of free resources out there that will quickly bring you up to speed. For example, a good place to start is Codecademy’s JavaScript lessons.

Week 1: Start With React

Now that you know basic JavaScript syntax, and that you understand why JavaScript apps can appear so complex, let’s talk specifics. Where should you start?

I believe the answer is React.

React is a UI library created and open-sourced by Facebook. In other words, it takes care of that “data to HTML” step (the View Layer).

Now don’t get me wrong: I’m not telling you to pick React because it’s the best library out there (because that’s highly subjective), but because it’s pretty good.

  • React might not be the most popular library, but it’s pretty popular.
  • React might not be the most lightweight library, but it’s pretty lightweight.
  • React might not be the easiest to learn, but it’s pretty easy to learn.
  • React might not be the most elegant library, but it’s pretty elegant.

In other words, React might not be the best choice in every situation, but I believe it’s the safest. And believe me, “just when you’re starting out” is not the right time to take risks with your technological choices.

React will also introduce you to some useful concepts like components, application state, and stateless functions that will prove useful no matter which framework or libraries you end up using during your career.

Finally, React has a large ecosystem of other packages and libraries that work well with it. And its sheer popularity means you’ll be able to find a lot of help on sites like Stack Overflow.

I personally recommend the React for Beginners course by Wes Bos. It’s how I learned React myself, and it’s just been completely overhauled with the latest React best practices.

Should You “Learn JavaScript Properly” First?

If you’re a very methodical learner, you might want to get a good grasp of the fundamentals of JavaScript before you do anything else.

But for others, this feels like learning to swim by studying human anatomy and fluid dynamics. Sure, they both play a huge role in swimming, but it’s more fun to just jump in the pool!

There’s no right or wrong answer here, it all depends on your learning style. The truth is, most basic React tutorials will probably use only a tiny subset of JavaScript anyway, so it’s perfectly fine to focus on only what you need now and leave the rest for later.

This also applies to the JavaScript ecosystem at large. Don’t worry too much about understanding the ins and outs of things like Webpack or Babel for now. In fact, React recently came out with its own little command-line utility that lets you create apps with no build configuration whatsoever.

Week 2: Your First React Project

Let’s assume you’ve just completed a React course. If you’re like me, two things are probably true:

  • You’ve already forgotten half of what you just learned.
  • You can’t wait to put the half you do remember in practice.

I believe the best way to learn a framework or a language is to just use it. And personal projects are the perfect occasion to try out new technologies.

A personal project could be anything from a single page to a complex web app, but I feel like redesigning your own personal site can be a good middle ground. Plus, I know you’ve probably been putting it off for years!

Now I did say earlier that using single-page apps for static content was often overkill, but React actually has a secret weapon: Gatsby, a React static site generator that lets you “cheat” and get all the benefits of React without any of the downsides.

Here’s why Gatsby is a great way to get started with React:

  • A pre-configured Webpack, meaning you get all the benefits without any of the headaches.
  • Automatic routing based on your directory structure.
  • All HTML content is also generated server-side, so you get the best of both worlds.
  • Static content means no server and super-easy hosting on GitHub Pages.

I used Gatsby for the State Of JavaScript site, and not having to worry about routing, build tool configuration, or server-side rendering saved me a ton of time.

Week 3: Mastering ES6

In my own quest to learn React, I soon reached a point where I could get by copy-pasting code samples, but there was still a lot I didn’t understand.

Specifically, I was unfamiliar with all the new features introduced by ES6, such as:

  • Arrow functions
  • Object destructuring
  • Classes
  • The spread operator

If you’re in the same boat, it might be time to take a couple days and learn ES6 properly. If you enjoyed the React for Beginners course, you might want to check out Wes’ excellent ES6 for Everybody videos.

Or if you prefer free resources, check out Nicolas Bevacqua’s book, Practical ES6.

A good exercise for mastering ES6 is going through an older codebase (such as the one you just created in Week 2!) and converting your code to ES6’s shorter, terser syntax whenever possible.

Week 4: Taking On State Management

At this point you should be capable of building a simple React front-end backed by static content.

But real web apps are not static: they need to get their data from somewhere, generally a database of some kind.

Now you could just send data to your individual components, but that quickly gets messy. For example, what if two components need to display the same piece of data? Or need to talk to each other?

This is where State Management comes in. Instead of storing your state (in other words, your data) bit by bit in each component, you store it in a single global store that then dispatches it to your React components:

In the React world, the most popular state management library is Redux. Redux not only helps centralize your data, but it also enforces some strict protocols for manipulating this data.

You can think of Redux as a bank: you can’t go to your local branch and manually modify your account total (“here, let me just add a couple extra zeroes!”). Instead, you fill out a deposit form, then give it to a bank teller authorized to perform the action.

Similarly, Redux also won’t let you modify your global state directly. Instead, you pass actions to reducers, special functions that perform the operation and return the new, updated state as a result.
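To make that concrete, here’s a minimal sketch of an action and a reducer (the names are illustrative, not from a real app):

// an action is a plain object describing what happened
const deposit = (amount) => ({ type: 'DEPOSIT', amount });

// a reducer is a pure function: (previous state, action) => next state
function account(state = { total: 0 }, action) {
  switch (action.type) {
    case 'DEPOSIT':
      return { total: state.total + action.amount };
    default:
      return state;
  }
}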

The result of all this extra work is a highly standardized and maintainable data flow throughout your app, and access to tools such as the Redux Devtools to help you visualize it:

Once again you can stay with our friend Wes and learn Redux with his Redux course, which is actually completely free!

Or, you can check out Redux creator Dan Abramov’s video series on egghead.io, which is free as well.

Bonus Week 5: Building APIs With GraphQL

So far we’ve pretty much only talked about the client, and that’s only half the equation. And even without going into the whole Node ecosystem, it’s important to address one key aspect of any web app: how data gets from the server to the client.

It won’t come as a surprise that this, too, is rapidly changing, with GraphQL (yet another Facebook open-source project) emerging as a serious alternative to the traditional REST APIs.

Whereas a REST API exposes multiple REST routes that each give you access to a predefined dataset (say, /api/posts, /api/comments, etc.), GraphQL exposes a single endpoint that lets the client query for the data it needs.
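For example, a single GraphQL query can ask for a post and its comments in one round trip (a sketch against a hypothetical schema):

{
  post(id: "123") {
    title
    comments {
      body
      author
    }
  }
}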

Think of it as making multiple trips to the butcher shop, bakery, and grocery store, versus giving someone a shopping list and sending them on their way to all three.

This new strategy becomes especially significant when you need to query multiple data sources. Just like with our shopping list example, you can now get data back from all these sources with a single request.

GraphQL has been picking up steam over the past year or so, with many projects (such as Gatsby, which we used in Week 2) planning on adopting it.

GraphQL itself is just a protocol, but its best implementation right now is probably the Apollo library, which works well with Redux. There is still a lack of instructional material around GraphQL and Apollo, but hopefully the Apollo documentation can help you get started.

Beyond React & Co

I recommended you start with the React ecosystem because it’s a safe pick, but it’s by no means the only valid front-end stack. If you want to keep exploring, here are two recommendations:

Vue

Vue is a relatively new library but it’s growing at record speeds and has already been adopted by major companies, especially in China where it’s being used by the likes of Baidu and Alibaba (think Chinese Google and Chinese Amazon). And it’s also the official front-end layer of PHP framework Laravel.

Compared to React, some of its key selling points are:

  • Officially-maintained routing and state management libraries.
  • Focus on performance.
  • Lower learning curve thanks to using HTML-based templates.
  • Less boilerplate code.

As it stands, the two main things that still give React an edge over Vue are the size of the React ecosystem, and React Native (more on this later). But I wouldn’t be surprised to see Vue catch up soon!

Elm

If Vue is the more approachable option, Elm is the more cutting-edge one. Elm is not just a framework, but an entire new language that compiles down to JavaScript.

This brings multiple advantages, such as improved performance, enforced semantic versioning, and no runtime exceptions.

I haven’t tried Elm personally, but it’s been warmly recommended by friends and Elm users generally seem very happy with it (as shown by its 84% satisfaction rating in the State Of JavaScript survey).

Next Steps

By now you should have a pretty good grasp of the entire React front-end stack, and hopefully be reasonably productive with it.

That doesn’t mean you’re done though! This is only the beginning of your journey through the JavaScript ecosystem. Some of the other topics you’ll eventually run into include:

  • JavaScript on the server (Node, Express…)
  • JavaScript testing (Jest, Enzyme…)
  • Build tools (Webpack…)
  • Type systems (TypeScript, Flow…)
  • Dealing with CSS in your JavaScript apps (CSS Modules, Styled Components…)
  • JavaScript for mobile apps (React Native…)
  • JavaScript for desktop apps (Electron…)

I can’t cover all this here but don’t despair! The first step is always the hardest, and guess what: you’ve just taken it by reading this study plan.

And now that you understand how the various pieces of the ecosystem fit together, it’s just a matter of lining up what you want to learn next and knocking down a new technology each month.

Source

GitHub vs. Bitbucket vs. GitLab vs. Coding

flow.ci

Today, repository management services are key components of collaborative software development. They enable software developers to manage changes to source code and related files, and to create and maintain multiple versions in one central place. There are numerous benefits to using them, whether you work in a small team or are a one-man army. Repository management services enable teams to move faster and preserve efficiency as they scale up.

In this article we briefly introduce and compare four popular repository management services: GitHub, Bitbucket, GitLab, and Coding. We touch on multiple aspects, including basic features, relationship to open source, importing repositories, free plans, cloud-hosted plans, and self-hosted plans. The purpose of this article is not to swing opinions, but to serve as a starting point for your research when you are looking for the best solution for your project.

GitHub

GitHub is a Git-based repository hosting platform, originally launched in 2008 by Tom Preston-Werner, Chris Wanstrath, and PJ Hyett. It is the largest repository host, with more than 38 million projects.

Bitbucket

Bitbucket was also launched in 2008 by an Australian startup, originally only supporting Mercurial projects. In 2010 Bitbucket was acquired by Atlassian and from 2011 it also started to support Git hosting, which is now its main focus. It integrates smoothly with other services from Atlassian and their main market is large enterprises.

GitLab

GitLab started in 2011 as a project by Dmitriy Zaporozhets and Valery Sizov, providing an alternative to the repository management solutions available at the time. In 2012 the site GitLab.com was launched, but the company was only incorporated in 2014.

Coding

Coding was founded by Zhang Hai Long (张海龙) in Shenzhen, China in 2014 and received $15 million in funding the same year. Coding is currently used by 300,000 developers and hosts 500,000 projects. Their user base is growing rapidly in the mainland Chinese market, and they have already set their eyes on international users.

Basic Features

Each of the four platforms is one big universe of its own when it comes to features and capabilities. A detailed feature comparison is beyond the scope of this post, but if we look only at basic features, they show a lot of similarities:

  • Pull requests
  • Code review
  • Inline editing
  • Issue tracking
  • Markdown support
  • Two-factor authentication
  • Advanced permission management
  • Hosted static web pages
  • Feature-rich API
  • Fork / clone repositories
  • Snippets
  • Third-party integrations

For more details please visit the feature pages of Bitbucket, GitHub, GitLab, and Coding.

Which one is open source?

Of the four repository management services, only GitLab has an open-source version. The source code of GitLab Community Edition is available on their website; the Enterprise Edition is proprietary.

GitHub, which is famous for its open source friendliness and hosts the largest number (19.4M+) of open source projects, is not itself open source.

Bitbucket is not open source either, but if you buy the self-hosted version, the full source code is provided, along with product customization options.

Coding is also entirely proprietary and the source code is not available in any form.

What is the best place to discover public projects and connect with other developers?

GitHub, GitLab, Bitbucket, and Coding all have public repository discovery functions, and apart from GitLab, each offers the ability to easily follow other users. Coding even lets you add customized tags to personal profiles, which helps you find and connect with other users who share a particular interest.

Even though GitHub is not open source, it is still the hotbed of open source collaboration. It has by far the largest number of public and open source projects and also hosts many of the most significant ones (Docker, npm). With the early adoption of social features and the free hosting of public projects, it is clearly a social hub for professional developers and everyone else interested in software development. What’s more, an active GitHub profile could help you land a great job: in more and more cases, recruiters favor candidates with an active GitHub profile.

Importing Repositories

When you are trying to decide which system to use, the ability to import and use your previous projects is critical. Bitbucket stands out from the other three in this sense, because it is the only one that supports Mercurial repositories.

Coding, GitHub, and Bitbucket support importing repos based on multiple different VCSs; GitLab, on the other hand, only supports Git. Git is the most popular VCS, but moving to GitLab could be complicated if you are currently using Mercurial or SVN repositories. GitLab’s repository importing feature is explicitly geared toward helping users migrate from other, more popular platforms.

GitHub supports:

  • The import of Git, SVN, HG, and TFS repositories.

GitLab supports:

  • The import of Git repositories only.
  • Easy import from other services: GitHub, Bitbucket, Google Code, and Fogbugz.

Coding supports:

  • The import of Git, SVN, and HG repositories.

Bitbucket supports:

  • The import of Git, CodePlex, Google Code, HG, SourceForge, and SVN repositories.

Free Plans

All four providers offer a free plan, but when we look at the details, there are some significant differences.

GitHub’s free plan allows you to host an unlimited number of public repositories, with the ability to clone, fork, and contribute to them. There is no limit on disk usage; however, projects should not exceed 1 GB and individual files 100 MB. If you want to host private projects for free, you need to look at other providers.

Bitbucket’s Small Teams plan lets five members collaborate on an unlimited number of projects. Repositories here have a 1 GB soft size limit; when you reach it, Bitbucket will notify you by email, but your ability to push to the repository is only suspended once the repo’s size reaches 2 GB.

GitLab’s cloud-hosted plan lets an unlimited number of users collaborate on an unlimited number of public and private projects. There is a 10 GB space limit per repository, which is a very generous offer compared to what the other three providers give you.

The free plan from Coding lets 10 members collaborate on an unlimited number of public and private repositories, but it imposes a 1 GB overall storage limit, which feels like a big restriction.

If you are looking for a free cloud-based solution for private projects, GitLab’s offer is probably the most appealing.

GitLab Community Edition is the only self-hosted free plan on our list. This is definitely the best option for those who like to have full control over the code base and have the resources to maintain their own servers. The downside is that it only comes with community support, and some of the more advanced features, such as code search, are not included.

Paid Cloud-Hosted Plans

All the paid cloud-hosted plans offer unlimited private repositories and email support.

A GitHub personal account offers essentially the same functionality as the free account, plus the ability to host an unlimited number of private repositories. There is no limit on how many users with personal accounts can collaborate, but they can’t use organizational features such as team-based access permissions, and billing is done independently. The GitHub Organization plan starts at $25 / month for 5 people, and each additional user costs $9 / month.

Bitbucket’s cloud-hosted Growing Teams plan starts at $10 / month for 10 users, and for $100 / month the limit on the number of team members is removed.

Coding has two paid plans: the Developer plan for a maximum of 20 users, and the Advanced plan for 50 users. In each case, you and your team can host an unlimited number of repositories, with storage limits of 5 GB and 10 GB respectively. It is worth mentioning that Coding has more flexible billing options, competitive prices, and strong support, including live chat and phone support. (These might only be available in Chinese, though.)

Paid Self-Hosted Plans

The GitHub, GitLab, and Bitbucket self-hosted versions provide enhanced features compared to their cloud-hosted counterparts. Each of these providers has created tables comparing the features of the cloud-based and self-hosted editions:

  • GitHub
  • GitLab
  • Bitbucket

Coding is rather mysterious about its Enterprise edition: no details of pricing or features are disclosed on their website. If you are considering hosting their solution behind your firewall, you need to reach out to their team. They assess the client’s needs first, then provide a custom quote based on the assessment.

The GitHub Enterprise plan starts at $2,500 / 10 users, billed annually. If you need more than that, which is likely to be the case, you need to contact their sales team. Apart from servers on your own premises, GitHub Enterprise can also be deployed to AWS and Azure.

One of the best things about Bitbucket Small Teams and Growing Teams is that they only need a one-time payment. Paying $10 for Bitbucket Small Teams once and for all definitely makes GitHub look expensive. The Bitbucket Enterprise version has a limit of 2,000 users; if you need more than that, we suggest you check out Bitbucket Data Center.

The GitLab Enterprise edition costs $39 / user / year and has no minimum number of users. It is more expensive than Bitbucket, but it is still a wallet-friendly option. Adding some of the extra tools and services, however, can make it quite pricey:

  • Premium support $99 / user / year (min 100 users)
  • GitLab Geo $99 / user / year (no min users)
  • Pivotal Tile $99 / user / year (no min users)
  • File Locking $99 / user / year (no min users)

Integration with flow.ci

GitHub, Bitbucket, GitLab, and Coding all work seamlessly with flow.ci. Connecting any of your accounts to flow.ci only takes a few steps.

Summary

We cannot declare one service to be ultimately superior to the others, not only because that would easily start a pub fight, but also because all of them are powerful, feature-rich services. Nevertheless, there are particular scenarios where it is not far-fetched to recommend a certain service:

  • If you want an open source solution you should pick GitLab.
  • If you are using other products from Atlassian (e.g. Confluence, Jira), hosting your repositories on Bitbucket definitely makes sense.
  • If you are working on an open source project then GitHub is definitely a great choice.
  • At this moment we would only recommend Coding for Chinese-speaking teams, since only their Web IDE has an English UI.

It is likely that one of the four repository hosting services can give you what you need. If that is not the case, check out Assembla or CloudForge.


flow.ci is a hosted continuous integration and delivery service, designed for teams who need a flexible and scalable solution but prefer not to maintain their own infrastructure. In flow.ci, development pipelines or automation workflows are simply called flows. In a flow, every step is a plugin that can be added with two clicks. You can add as many steps to your flow as you need, and there is no time limit on builds.

Function as Child Components

Merrick Christensen

Classic Ryan, “Rethinking Best Practices” anyone? If you don’t know what the “Function as Child” pattern is, this article is my attempt to:

  1. Teach you what it is.
  2. Convince you of why it is useful.
  3. Get some fetching hearts, or retweets or likes or newsletters or something, I don’t know. I just want to feel appreciated, you know?

What are Function as Child Components?

“Function as Child Components” are components that receive a function as their child. The pattern is simple to implement, and it can be enforced thanks to React’s property types.

class MyComponent extends React.Component { 
  render() {
    return (
      <div>
        {this.props.children('Scuba Steve')}
      </div>
    );
  }
}
MyComponent.propTypes = {
  children: React.PropTypes.func.isRequired,
};

That is it! By using a Function as Child Component, we decouple our parent component and our child component, letting the composer decide what parameters to apply to the child component, and how. For example:

<MyComponent>
  {(name) => (
    <div>{name}</div>
  )}
</MyComponent>

And somebody else, using the same component could decide to apply the name differently, perhaps to an attribute:

<MyComponent>
  {(name) => (
    <img src='/scuba-steves-picture.jpg' alt={name} />
  )}
</MyComponent>

What is really neat here is that MyComponent, the Function as Child Component, can manage state on behalf of the components it is composed with, without making demands on how that state is leveraged by its children. Let’s move on to a more realistic example.

The Ratio Component

The Ratio Component will use the current device width, listen for resize events and call into its children with a width, height and some information about whether or not it has computed the size yet.

First we start out with a Function as Child Component snippet. This is common across all Function as Child Components, and it just lets consumers know we are expecting a function as our child, not React nodes.

class Ratio extends React.Component {
  render() {
    return this.props.children();
  }
}
Ratio.propTypes = {
  children: React.PropTypes.func.isRequired,
};

Next, let’s design our API. We want a ratio provided in terms of the X and Y axes, which we will combine with the current width to compute a height. Let’s set up some internal state to manage the width and height, and whether or not we have calculated them yet, along with some propTypes and defaultProps to be good citizens for the people using our component.

class Ratio extends React.Component {
  constructor() {
    super(...arguments);
    this.state = {
      hasComputed: false,
      width: 0,
      height: 0, 
    };
  }
  render() {
    return this.props.children();
  }
}
Ratio.propTypes = {
  x: React.PropTypes.number.isRequired,
  y: React.PropTypes.number.isRequired,
  children: React.PropTypes.func.isRequired,
};
Ratio.defaultProps = {
  x: 3,
  y: 4
};

Alright, so we aren’t doing anything interesting yet. Let’s add some event listeners and actually calculate the width (also accommodating for when our ratio changes):

class Ratio extends React.Component {
  constructor() {
    super(...arguments);
    this.handleResize = this.handleResize.bind(this);
    this.state = {
      hasComputed: false,
      width: 0,
      height: 0, 
    };
  }
  getComputedDimensions({x, y}) {
    const {width} = this.container.getBoundingClientRect();
    return {
      width,
      height: width * (y / x),
    };
  }
  componentWillReceiveProps(next) {
    this.setState(this.getComputedDimensions(next));
  }
  componentDidMount() {
    this.setState({
      ...this.getComputedDimensions(this.props),
      hasComputed: true,
    });
    window.addEventListener('resize', this.handleResize, false);
  }
  componentWillUnmount() {
    window.removeEventListener('resize', this.handleResize, false);
  }
  handleResize() {
    this.setState({
      hasComputed: false,
    }, () => {
      this.setState({
        hasComputed: true,
        ...this.getComputedDimensions(this.props),
      });
    });
  }
  render() {
    return (
      <div ref={(ref) => this.container = ref}>
        {this.props.children(this.state.width, this.state.height, this.state.hasComputed)}
      </div>
    );
  }
}
Ratio.propTypes = {
  x: React.PropTypes.number.isRequired,
  y: React.PropTypes.number.isRequired,
  children: React.PropTypes.func.isRequired,
};
Ratio.defaultProps = {
  x: 3,
  y: 4
};

Alright, so I did a lot there. We added event listeners for resize events, as well as code to actually compute the width and height using the provided ratio. Neat. So we’ve got a width and height in our internal state; how can we share them with other components?

This is one of those things that is hard to understand because it is so simple that when you see it you think, “That can’t be all there is to it.” But this is all there is to it.

Children is literally just a JavaScript function.

That means in order to pass the calculated width and height down we just provide them as parameters:

render() {
    return (
      <div ref={(ref) => this.container = ref}>
        {this.props.children(this.state.width, this.state.height, this.state.hasComputed)}
      </div>
    );
}

Now anyone can use the ratio component to provide a full width and properly computed height in whatever way they would like! For example, someone could use the Ratio component for setting the ratio on an img:

<Ratio>
  {(width, height, hasComputed) => (
    hasComputed 
      ? <img src='/scuba-steve-image.png' width={width} height={height} /> 
      : null
  )}
</Ratio>

Meanwhile, in another file, someone has decided to use it for setting CSS properties.

<Ratio>
  {(width, height, hasComputed) => (
    <div style={{width, height}}>Hello world!</div>
  )}
</Ratio>

And in another app, someone is using it to conditionally render different children based on the computed height:

<Ratio>
  {(width, height, hasComputed) => (
    hasComputed && height > TOO_TALL
      ? <TallThing />
      : <NotSoTallThing />
  )}
</Ratio>

Strengths

  1. The developer composing the components owns how these properties are passed around and used.
  2. The author of the Function as Child Component doesn’t enforce how its values are leveraged, allowing for very flexible use.
  3. Consumers don’t need to create another component to decide how to apply properties passed in from a “Higher Order Component”. Higher Order Components typically enforce property names on the components they are composed with. To work around this, many providers of “Higher Order Components” supply a selector function that lets consumers choose their own property names (think of react-redux connect’s select function). This isn’t a problem with Function as Child Components.
  4. Doesn’t pollute the “props” namespace. This allows you to use a “Ratio” component and a “Pinch to Zoom” component together, even though both calculate width. Higher Order Components carry an implicit contract they impose on the components they are composed with; unfortunately, this can mean colliding prop names, making it impossible to compose some Higher Order Components with others.
  5. Higher Order Components create a layer of indirection in your development tools and the components themselves. For example, constants set on a component become inaccessible once it is wrapped in a Higher Order Component:
MyComponent.SomeConstant = 'SCUBA';

Then wrapped by a Higher Order Component,

export default connect(...., MyComponent);

RIP your constant. It is no longer accessible without the Higher Order Component providing a function to access the underlying component class. Sad.

Summary

Most of the time, when you think “I need a Higher Order Component for this shared functionality!”, I hope I have convinced you that a Function as Child Component is a better alternative for abstracting your UI concerns. In my experience it nearly always is, except when your child component is truly coupled to the Higher Order Component it is composed with.

An Unfortunate Truth About Higher Order Components

As an ancillary point, I believe that Higher Order Components are improperly named, though it is probably too late to try to change the name. A higher order function is a function that does at least one of the following:

  1. Takes n functions as arguments.
  2. Returns a function as a result.

Indeed, Higher Order Components do something similar to this, namely taking a Component as an argument and returning a Component. But I think it is easier to think of a Higher Order Component as a factory function: a function that dynamically creates a component, to allow for runtime composition of your components. However, they are unaware of your React state and props at composition time!
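
For comparison, here is a minimal sketch of both shapes (the withLogging and withName names are invented for illustration):

// A higher order function: takes a function, returns a new function.
const withLogging = (fn) => (...args) => {
  console.log('calling with', args);
  return fn(...args);
};

// A "Higher Order Component" is really a component factory: it takes
// a component and returns a new component, at composition time.
const withName = (Component) => (props) => (
  <Component {...props} name='Scuba Steve' />
);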

Function as Child Components allow for similar composition of your components with the benefit of having access to state, props and context when making composition decisions. Since Function as Child Components:

  1. Take a function as an argument.
  2. Render the result of said function.

I can’t help but feel they should have gotten the title “Higher Order Components,” since the pattern is a lot like higher order functions, only using component composition instead of function composition. Oh well; for now we will keep calling them “Function as Child Components,” which is just wordy and gross-sounding.

Examples

  1. Pinch to Zoom — Function as Child Component
  2. react-motion — This project introduced me to this concept after being a long time Higher Order Component convert.

The Ugly

Since Function as Child Components give you this power at render time, you typically can’t optimize them using shouldComponentUpdate without hindering your composition ability.
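
To see why, here is a rough sketch (the PureRatio wrapper is hypothetical): an inline child function is recreated on every parent render, so a shallow check never bails out.

class PureRatio extends React.Component {
  shouldComponentUpdate(nextProps) {
    // {(width) => <div>{width}</div>} is a brand new function object
    // on every render, so this comparison is always true.
    return this.props.children !== nextProps.children;
  }
  render() {
    return <div>{this.props.children(100)}</div>;
  }
}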

However, even Dan Abramov, the Benevolent Leader Of Whatever We Are Going To Be Doing Next, has acknowledged the grey area.

I have personally not found this a hindrance to our application’s performance, since a Function as Child Component is a passthrough component, composed with children it doesn’t know about anyhow. Higher Order Components find themselves in a similar situation, as they are often designed to take unknown properties as well, and therefore need to do “as good as we can, or pass through” optimizations such as the shallow equals you can find in react-redux.

One does not simply learn to code

Quincy Larson

One does not simply learn to code. Because coding isn’t easy. Coding is hard. Everyone knows that. Anyone who’s scoured a stack trace — or git detached their head — can tell you that.

Unfortunately, there are a lot of marketers out there trying to cash in on the notion that “coding is easy!” Or that it will be, if you use their product.

When someone tells you that coding is easy, they are doing you a huge disservice. This can really only play out in one of three ways:

Scenario 1

Person 1: “I tried to learn to code once. I had a hard time. Life got in the way, and I am no longer trying to learn to code.”

Marketer: “Coding is easy!”

Person 1: “What? Oh. Maybe coding is easy after all. Maybe I’m just dumb.”

Scenario 2

Person 2: “I want to learn to code, but it sounds hard.”

Marketer: “Coding is easy!”

Person 2: “Really?”

Marketer: “Yes. Buy my course/program/e-book and you’ll be an elite coder in less than a month.”

Person 2:

Shut up and take my money!

Person 2, one month later: “I thought coding was supposed to be easy. Maybe I’m just dumb.”

Scenario 3

Person 3: I have no interest in ever learning to code. I’m a successful manager. If I ever need something coded, I’ll just pay someone to code it for me.

Marketer: Coding is easy!

Person 3: Oh, OK. Figures. In that case, I guess I won’t pay those code monkeys very much, or hold their work in very high regard.

Brain surgery is easy

Saying “Coding is easy!” is like saying “Brain surgery is easy!” Or saying “Writing novels is easy!”

A brain surgeon at a dinner party says to novelist Margaret Atwood: “I’ve always wanted to write. When I retire and have the time, I’m going to be a writer.”

Margaret Atwood replies: “What a coincidence, because when I retire, I’m going to be a brain surgeon.”

And yet, marketers continue to say: “Coding is easy!”, “Coding isn’t that hard!”, or my personal favorite, “Coding is easy! It’s the <something that makes coding hard> that’s hard!”

And all that these marketers achieve in saying this is to make people feel dumb — sometimes taking their money in the process.

The curse of knowledge

Unfortunately, it’s not just marketers who say coding is easy. I meet experienced developers all the time who also say “coding is easy!”

Why would someone who’s gone through the thousands of hours it takes to get good at coding say that coding is easy? Because they’re suffering from a cognitive bias called the curse of knowledge. They cannot remember what it was like to not know how to code. And even if they can, they’ve probably long forgotten how hard coding was for them at first.

The curse of knowledge prevents many experienced developers from being able to empathize with beginners. And nowhere is this lack of empathy more apparent than everyone’s favorite Google result: the coding tutorial.

How many times have you actually been able to finish a random tutorial you found through Google, without getting derailed by some cryptic error or ambiguity?

And the worst thing about this process is when tutorial authors unconsciously pepper their instructions with words like “obviously,” “easily,” and, most mocking of all, “simply.”

Nothing is more frustrating than being 30 minutes into a tutorial and getting stuck on a step that says “simply integrate with Salesforce’s API,” or “simply deploy to AWS.”

And when this happens, the voice of a thousand marketers echoes through your head: “Coding is easy!”

You’ll remember those experienced developers you met a few weeks ago who tried their best to encourage you by saying: “Coding is easy!”

You’ll even have flashbacks of all those bad Hollywood hacking scenes where they make coding look so easy.

Before you know it, you’ll suddenly hear the sound of your own voice screaming, feel your body rising to its feet and (╯°□°)╯︵ ┻━┻

But it’s OK. Take a deep breath. Coding isn’t easy. Coding is hard. Everyone knows that.

Coding in real life VS coding in the movies

Still, you’ll yearn for those l33+ h@x0r skills. You’ll feel impelled to vanquish bugs with nothing but your wits — and a gratuitous number of green-on-black monitors.

So let’s chase that dragon. Let’s be that elite Hollywood programmer. If only for a moment, let’s feel what that’s like.

Here we go:

Step 1: Turn out the lights, pop your collar, put on some aviator sunglasses

Step 2: Guzzle an energy drink, crush the can, and chuck it over your shoulder

Step 3: Go here and bang on your keyboard as fast as humanly possible

Power fantasy fulfilled.

Do you feel better? Are you laughing at the absurdity of our collective construct of software development?

Now that we’ve gotten that out of our system, let’s talk about the most insidious word in the English language.

Nothing is ever simple

There’s a good chance that if you encounter a word like “simply” in a tutorial, that tutorial will assume a lot about your prior knowledge.

Maybe the author assumes that you’ve previously coded something similar and are just using their tutorial as a reference. Maybe the author wrote the tutorial with themselves in mind as their target audience.

Either way, there’s a good chance that the tutorial will not be designed for someone at your exact level of coding skills.

Hence the “rule of simply”:

Don’t use the word “simply” in your tutorials, and don’t use tutorials that use the word “simply.”

Learn it. Know it. Live it.

Unfortunately, twenty minutes into a desperate Googling session, you are unlikely to remember that you should search the tutorial page to see whether its author presumptively uses words like “simply.”

Well, we’ve got you covered. Albert Meija has created a Chrome extension that will detect the word “simply” in a tutorial and will pop up a notice that the tutorial isn’t designed for beginners.

This Chrome Extension serves as a proverbial canary in the coal mine, notifying you of the presence of the word “simply” — and thus likely assumptions about your prior knowledge — before you get too far into the tutorial.

Albert built this Chrome extension in just a few hours, in response to a challenge I tweeted out. Here are some of the other entries from Free Code Camp campers, whose extensions do similar things:

We could certainly take these Chrome extensions further. Maybe use Natural Language Processing to produce a more accurate assessment of the relative difficulty of a given tutorial, or its “presumptive index.”

But in the meantime, this simple extension may steer you clear of those ship-sinking “simply” icebergs out there in the chilly ocean that is learning to code.

Until we meet again — stay strong and don’t believe the hype. Learning to code is hard. Tune out the noise, stick with it, and profit.

When should I use TypeScript?

This article is now available in Japanese and Chinese.

Last summer we had to convert a huge code base (18,000+ lines of code) from JavaScript to TypeScript. I learned a lot about the strengths and weaknesses of each, and when it makes sense to use one over the other.

When it makes sense to use TypeScript

When you have a large codebase

When your codebase is huge, and more than one person works on the project, a type system can help you avoid a lot of common errors. This is especially true for single-page applications.

Any time one developer could introduce breaking changes, it’s generally good to have some sort of safety mechanism.

The TypeScript transpiler reveals the most obvious mistakes — though it won’t magically eliminate the need for debugging.
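
For example, here is a sketch of the kind of mistake it flags immediately (the getFullName function is invented for illustration):

function getFullName(user: { firstName: string; lastName: string }): string {
  return user.firstName + ' ' + user.lastName;
}

// Caught at transpile time: property 'lastName' is missing.
getFullName({ firstName: 'Ada' });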

If your codebase isn’t all that big, it probably doesn’t make sense to make it larger by adding type annotations. I’ve converted 180+ files from JavaScript to TypeScript, and in most cases it added roughly 30% to the total code size.

When your team’s developers are already accustomed to statically-typed languages

If you or the majority of the team come from a strongly typed language like C# or Java, and don’t want to go all-in on JavaScript, TypeScript is a good alternative.

Even though I recommend learning JavaScript thoroughly, there’s nothing preventing you from using TypeScript without knowing JavaScript. In fact, TypeScript was created by the same guy who made C#, so the syntaxes are similar.

In my company, we had a team of C# developers who were coding a sophisticated desktop application in C# and WPF (which is basically a front end development tool for the desktop world). They were then asked to join a web project as full stack developers. So in short order, they were able to learn TypeScript for the front end, then leverage their C# knowledge for the back end.

TypeScript can serve as a replacement for Babel

The old Microsoft used to take standard tools (Java, for example) and add proprietary, non-standard features to them, in this case resulting in J++. Then they would try to force developers to choose between the two.

TypeScript follows exactly the same approach, this time for JavaScript. By the way, this isn’t Microsoft’s first fork of JavaScript: in 1996, they forked JavaScript to create JScript.

Though it is a less common use case, it’s technically possible to transpile ES6 code into ES5 using the TypeScript transpiler. This is possible because ES6 is essentially a subset of TypeScript, and the TypeScript transpiler generates ES5 code.

TypeScript’s transpiler generates pretty readable JavaScript (ECMAScript 5) code as output. That was one of the reasons the Angular 2 team chose TypeScript over Google’s own Dart language.
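
If you only want this Babel-style transpilation, a minimal tsconfig.json sketch could look like the following (the compiler options are real; the paths are placeholders):

{
  "compilerOptions": {
    "target": "ES5",   // emit ES5 output, as a Babel preset would
    "allowJs": true,   // accept plain .js files as input
    "outDir": "dist"   // write the transpiled output here
  },
  "include": ["src"]
}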

Also, TypeScript has some cool features that are not in ES6, like enums and the ability to initialize member variables in a constructor. I’m not a big fan of inheritance, but I find it useful to have the public, private, protected, and abstract keywords in classes. TypeScript has them and ES6 doesn’t.

Our C# developers thought it was super amazing to be able to write a lambda function as the body of a method, which eliminated the headaches associated with the this keyword.
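
Here is a small sketch of those features together (the Account class and its members are invented for illustration):

enum Status { Active, Archived }

class Account {
  // "Parameter properties": the constructor signature both declares
  // and initializes members, with access modifiers.
  constructor(private owner: string,
              public status: Status = Status.Active) {}

  // An arrow function as the method body keeps `this` bound to the
  // instance, even when the method is passed around as a callback.
  describe = () => `${this.owner} is ${Status[this.status]}`;
}

const account = new Account('Ada');
setTimeout(account.describe, 0); // `this` still refers to the instance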

When a library or framework recommends TypeScript

If you are using Angular 2 or another library that recommends TypeScript, go for it. Take a look at what these developers have to say after using Angular 2 for six months.

Just know that, even though TypeScript can use all JavaScript libraries out of the box, if you want good syntax errors you’ll need to add the type definitions for those libraries externally. Fortunately, the nice folks at DefinitelyTyped have built a community-driven repo with tooling for doing just that. But this is still one extra step when you’re setting up your project.

(On a side note: for all you JSX fans, check out TSX.)

When you really feel the need for speed

This may come as a shock to you, but TypeScript code can in some situations perform better than JavaScript. Let me explain.

In our JavaScript code, we had a lot of type checks. It was a MedTech app, so even a small error could be literally fatal if it wasn’t dealt with properly. So a lot of functions had statements like:

if (typeof name !== 'string') throw 'Name should be string';

With TypeScript, we could eliminate a lot of these type checks altogether.

This especially showed its effect in parts of the code where we previously had a performance bottleneck, because we were able to skip a lot of unnecessary runtime type checking.
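
In sketch form, the before and after look like this:

// Before (JavaScript): the type check runs on every single call.
function greet(name) {
  if (typeof name !== 'string') throw new Error('Name should be string');
  return 'Hello ' + name;
}

// After (TypeScript): the check happens once, at transpile time,
// and costs nothing at runtime.
function greetTyped(name: string): string {
  return 'Hello ' + name;
}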

So when are you better off without TypeScript?

When you can’t afford an extra transpilation tax

There are no plans to support TypeScript natively in browsers. Chrome did some experimentation, but later cancelled support. I suspect this has something to do with unnecessary runtime overhead.

If someone wants training wheels, they can install them. But bikes shouldn’t come with permanent training wheels. This means that you will always have to transpile your TypeScript code before running it in a browser.

For standard ES6, it’s a whole different story. When ES6 is supported by most browsers, the current ES6 to ES5 transpilation will become unnecessary (update: yes indeed!).

ES6 is the biggest change to the JavaScript language, and I believe most programmers will just settle on it. But those brave few who want to try the next version of JavaScript’s experimental features, or features not yet implemented in all browsers, will need to transpile anyway.

Without transpilation, you just modify the file and refresh your browser. That’s it. No watcher, on-demand transpilation, or build system is necessary.

If you choose TypeScript, you will end up doing some extra bookkeeping for the type definitions for your JavaScript libraries and frameworks (by using DefinitelyTyped or writing your own type annotations). That’s something you wouldn’t need to do for a pure JavaScript project.

When you want to avoid weird debugging edge cases

Sourcemaps make it easier to debug TypeScript, but the status quo is not perfect. There are really annoying and confusing edge cases.

Also, there are some problems debugging the “this” keyword and properties attached to it (hint: “_this” works in most cases). That is because Sourcemaps currently don’t have good support for variables, though this may change in the future.

When you want to avoid potential performance penalties

In our project, we had 9,000+ lines of good old ES5 JavaScript that delivered pure horsepower to a 3D WebGL canvas. We kept it that way.

The TypeScript transpiler (just like Babel) has features that require generating extra code (inheritance, enums, generics, async/await, etc.). No matter how good your transpiler is, it can’t surpass the optimizations of a good programmer. So we decided to keep it in plain ES5 for ease of debugging and deployment (no transpilation whatsoever).

That being said, the performance penalty is probably negligible compared to benefits of a type system and more modern syntax for most projects. But there are cases where milliseconds and even microseconds matter, and in those cases transpilation of any kind is not recommended (even with Babel, CoffeeScript, Dart, etc.).

Note that TypeScript doesn’t add any extra code for runtime type checking. All the type checking happens at transpile time, and the type annotations are removed from the generated JavaScript code.

When you want to maximize your team’s agility

It’s quicker to set up something in JavaScript. The lack of a type system makes for agility and ease of changing stuff. It also makes it easier to break things, so make sure you know what you’re doing.

JavaScript is more flexible. Remember, one of the main use cases for a type system is to make it hard to break stuff. If TypeScript is Windows, JavaScript is Linux.

In JavaScript Land, you don’t get the training wheels of a type system, and the computer assumes you know what you’re doing, but allows you to ride much faster and maneuver easier.

This is particularly important to note if you’re still in the prototyping phase. If so, don’t waste your time with TypeScript. JavaScript is so much more flexible.

Remember that TypeScript is a superset of JavaScript. This means that you can easily convert JavaScript to TypeScript later if you need to.

My preference on JavaScript VS TypeScript

There is no one best language overall. But for each individual project, there is probably one objectively best language and library and framework and database and operating system and… you get the picture.

For our project it made sense to use TypeScript. I tried to refactor some of my hobby projects in TypeScript but it sucked. I personally like 4 things about TypeScript:

1. It’s fully compatible with ES6. It is really nice seeing Microsoft playing fair with the other browser vendors. Our ecosystem can benefit from a strong rival to Google, Mozilla, and Apple. Microsoft is spending serious energy on it, such as writing Visual Studio Code from scratch using TypeScript, running on Google Chrome of all platforms.

2. The type system is optional. Coming from a C and Java background, I found the lack of a type system in JavaScript liberating. But I hated losing time when I encountered stupid bugs at runtime. TypeScript allows me to avoid many common bugs so I can focus my time on fixing the real tricky ones. It’s a good balance. I like it. It’s my taste. I use types whenever I can because it gives me peace of mind. But that’s me. If I use TypeScript, I don’t want to limit myself to its ES6 features.

3. The transpiler output is very readable. I am not a fan of Sourcemaps, so I do most of my debugging on the generated JavaScript. It’s absolutely awesome. I can totally understand why Angular 2 chose TypeScript over Dart.

4. TypeScript’s tooling is fantastic. WebStorm is very smart when dealing with JavaScript (some may argue it’s the smartest JS IDE). But TypeScript pushes the limits to a whole new level. The autocompletion and refactoring features in VSCode work much more accurately, and it’s not because the IDE is super smart. That’s all thanks to TypeScript.

TypeScript is not the answer to everything. You can still write terrible code in it.

TypeScript haters are gonna hate, either because of fear of change or because they know somebody who knows somebody who is afraid of it. Life goes on and TypeScript introduces new features to its community anyway.

But like React, TypeScript is one of those influential technologies that is pushing the boundaries of web development.

Whether you use TypeScript or not, it doesn’t hurt to try it out in order to develop your own opinions on it. It has a learning curve, but if you already know JavaScript, it will be a smooth one.

Here is an online realtime TS transpiler with some examples that let you compare TypeScript code with its equivalent JavaScript code.

Here is a quick tutorial, and a very nice guide, but I’m more a language-reference kinda guy. If you like video, here’s a course from Udemy.

John Papa has a short article about ES5 and TypeScript.

There’s an interesting study that shows that, all things being equal, a type system reduces bugs by 15%.

Oh, and if you feel like going on a side mission, read why programming is the best job ever.

Source