Top 8 Key Points for Getting Started With an Ecommerce Business

Starting an ecommerce business is something many people consider, but a much smaller number actually take the plunge. If your idea is strong enough, you’ve tested it, and you’re ready to get up and running, then getting started doesn’t need to be a fraught experience. Key to your ecommerce success and growth is a good grasp of digital marketing, and again, this can seem daunting or overwhelming, but it doesn’t need to be.

Ecommerce marketing simply means using the best channels and most effective digital methods for the benefit of your business. Digital marketing is a package of techniques to help grow your business, extend its reach and raise awareness. As digital marketing becomes more technologically advanced, and marketing automation becomes the norm for the most basic tasks, you can focus on drawing in sales and growing your business.

With more than 206 million predicted shoppers spending money online this year, there’s never been a better time to start an ecommerce business. If you’re thinking about starting an ecommerce business and selling products online, here’s what you need to do to promote it and harness all the data available to you to help you succeed.

1. Start With Your Business Name

The first thing to do (after you decide what you want to sell, of course) is choose a fabulous, memorable business name that no one else is using. You can conduct a corporate name search to make sure it’s not already in use. Once you’ve chosen the name, register it. (If you form an LLC or corporation, this will happen automatically in the state where you file your paperwork.)

2. Secure Your Domain Name and Website

Ideally, you’ll get your business name as your domain name, but if it’s not available, choose a URL that’s easy to say and spell, and relates to your business. So if your business is Karen’s Craft Creations and the matching domain isn’t available, try a similar, easy-to-spell variant.

The design of your eCommerce site may be the biggest business expense you have. But you want to ensure that it’s not only visually appealing, but also functional. There are out-of-the-box eCommerce solutions like Shopify to begin with, but you may require something more custom-made if your needs are more than basic.

3. Know Your Customer

Your customer’s digital experience is entirely down to you. You can deliver and shape the customer journey each person who visits your website experiences, but you need to know them first. Your target customer should be someone you know inside out, and you should do all you can to focus in on their wants and needs.

You can discover all this information through engaging with your customer, using social media to talk directly to them and find their chosen channels for communication and engagement. The more you know about the people you want to sell to, the better you can shape your content and digital presence to suit them.

4. Know Your Industry

You may already think you know the ins and outs of your business and its field, but more research and analysis can never hurt. In the digital age, everything moves quickly. Any new developments should always be followed and explored in depth.

Likewise, recent history should be fully understood so you can forecast for the future. The better your grasp of the current status of your industry, the easier it is to predict where things may go next. Your strategic plan should make the most of your data and insights.

5. Select the Best Business Structure and Register Your Business

You’ve got several options when it comes to your business structure:

• Sole Proprietor

• Partnership (if you have a business partner)

• Limited Liability Company (LLC)

• Corporation

If you don’t choose a business structure like a corporation or LLC, you’ll automatically be considered a sole proprietor (or partnership) by the IRS. However, operating as a sole proprietor puts your personal assets at risk. If your company is ever sued, the court can seize your personal assets if your business doesn’t have enough to cover its debts. Both the corporation and the LLC separate you and your assets from the business, and provide other tax benefits. You can register on your own by filling out the appropriate business structure paperwork with your state yourself, or you can hire a business filing company to do it for you. A lawyer is another option, but that’s often overkill for the average small business owner’s needs.

6. Get Your Employer Identification Number

You’ll need an Employer Identification Number (EIN) to open a business bank account and file your business taxes next April. Your EIN is a bit like your business’ social security number: it’s a unique number that identifies your business and helps you file important paperwork. Every business needs one, whether you’ll have employees or not.

7. Know Your Competitors

The wide range of analytics tools out there makes analyzing and understanding your competitors easier than ever before. Pinpointing their strengths and weaknesses gives you something to build from or work towards.

You can examine their social media to see where the small (or large) knowledge gaps lie. It’s in these gaps that you can find your own space and carve out your position in the market where your competitors aren’t already succeeding.

8. Building Your Digital Presence

Nothing matters more than your digital presence as an ecommerce business. Your website isn’t the be all and end all, but it needs to look its best. It is your virtual shop window and a 24/7 portal for leads and sales for your business. You don’t need to invest thousands to make it appealing but you need to give it your time and attention to succeed.

Design Matters

Designing your website doesn’t necessarily mean bringing in an expensive agency. Design is unique to your business and to get started, you may be able to put something together yourself. Your website is a chance to show off what you’re all about and there is no one-size-fits-all approach to web design.

Keep it simple, get your point across and you can be sure your customers will be keen to find out more. All websites are always in beta so don’t expect perfection, just deliver everything you want in the best way you can, ensuring you update your content regularly.

Don’t Doubt Do It Yourself

There is a wide range of self-serve website tools that allow you to piece together your own design. Ready-made templates exist to let you build a compelling experience with ease. No business in 2018 can get by without a strong web presence. Difficulty creating and managing a proper website shouldn’t be a hurdle if you choose the right tools.

Choosing your Domain

If you’re starting from scratch, then you need to begin by selecting your domain. Without a domain you have nowhere for your website to be held and no way for people to access it. Choosing a domain takes seconds, and within a few minutes, you can have your piece of the internet registered in your name and ready to build.

The 3 Important Strategies to Improve Your Search Engine Rankings


We’ve learned to live with desktop page speed as a Google ranking factor, and now we’ve accepted that mobile page speed is a ranking factor, too. After the search giant announced that they were going to roll out the Speed Update, they changed their approach to measuring page speed via the Google PageSpeed Insights tool as well. My team wanted to know if there was any correlation between a page’s speed and its positions in mobile search results, so we ran experiments to find out. We conducted one before and one immediately after the Speed Update. The results? Surprising.

Based on what we learned, these are the top three ways you can use page speed as an opportunity to improve your ranking.

1: User Engagement

Simply put, user engagement is the measurement of how well your website engages Internet users. A focus on optimizing your website to improve the SEO factors below related to user engagement is the key to this strategy.

SEO ranking factors for user engagement include the following:

• Direct website visits – users who visit your website by typing your URL directly into their Internet browser (in other words, one signal of your website’s popularity)
• Time on site – the amount of time users spend on your website
• Pages per session – the average number of pages users view during a visit to your website
• Bounce rate – the percentage of website visitors who leave your site after viewing only one page, without interacting with it

3 Quick Ways to Improve User Engagement

• Create content that helps users solve the problem or challenge they are trying to address, as expressed by the query they use during a search
• Link to deeper, more specific content within your website, giving the user a path to progress and become more immersed in your content and your expertise
• Create content in multiple forms, such as lists, photos that aid the message, videos, and recommended steps or actions

Your website design should be appealing to your target market, showcase your brand, and create an environment that makes your content highly and easily readable, particularly on a mobile device.

Quickly provide information on how your products/services work and an incentive motivating users to perform a certain action, whether that’s calling you, filling out a contact form, making a purchase, etc.

2: Link Structure

Link building, content marketing, link solicitation: it has many names. The most overlooked and undervalued tactic in all of search engine optimization, link structure is how your website interconnects with itself and with other websites across the Internet.

Essentially, link structure involves how your website links to other pages within itself (internal links), how your website links to other websites (outbound links), and how other websites link to yours (backlinks/inbound links).

A focus on optimizing your website to improve the SEO factors involving link structure is the key to this strategy. The SEO ranking factors involving linking include:

• Total referring domains – the total number of domains that link to your website
• Total backlinks – the total number of times (or links) other websites link to your website
• Total referring IPs – the total number of IP addresses involved in the backlinks to your website
• Total follow backlinks – the total number of backlinks counting towards your website’s PageRank (see the difference between follow vs. nofollow links)

3 Quick Ways to Improve Your Link Structure

• Avoid overloading links to your internal top-level pages
• Create a white hat SEO link solicitation plan to get more high-quality backlinks
• Use appropriately descriptive anchor text for all your links (no more links with clickable text saying, “Click here.”)

Your links should briefly describe the type of content the link goes to, the page’s benefit, OR the action the user should take when they land on that page.

3: Increase Your Optimization Score

The findings of our experiment are curious: while we found no correlation between a mobile site’s position and site’s FCP/DCL metrics, we did find an extremely high correlation (0.97!) between a mobile site’s position in search results and its average Optimization Score.

While there is little we can do to influence FCP/DCL metrics (as they are based not only on the actual speed of your site, but also on the users’ connection speeds and their devices), improving your Optimization Score is crucial. The good news? It’s also totally manageable.

Google provides a list of recommendations on how to deal with the factors that can lower your Optimization Score in PageSpeed Insights.

Here is a quick list of what you can do:

• Avoid landing page redirects. They slow down the rendering of a page, which negatively affects both desktop and mobile experience.
• Enable compression. It cuts the time spent downloading a resource as well as data usage for the client, and it improves the page’s rendering time.
• Improve server response time. 53% of mobile users will leave a page if it does not load in less than 3 seconds.
• Implement a caching policy. Its absence leads to a great number of roundtrips between the client and server while fetching resources, which leads to delays, page rendering blocking, and higher costs for visitors.
• Minify resources (HTML, CSS, and JavaScript). This helps cut redundant data from the resources delivered to visitors.
• Optimize images. They account for about 60% of a page’s size, and heavy images can significantly slow down a site’s rendering.
• Optimize CSS delivery. A page needs to process CSS prior to rendering. When CSS is full of render-blocking external stylesheets, the process requires a great number of roundtrips that delay rendering.
• Prioritize visible content. If above-the-fold content exceeds 14.6 kB compressed, it requires multiple roundtrips between the server and the user’s browser to load and render content.
• Remove render-blocking JavaScript. Every time a browser encounters it in a site’s HTML, it has to stop and execute the script, which slows down the rendering process.

If you’re unsure how to implement any of the above page speed optimizations, talk to your website developer or check out more tips here.


Top 4 Digital Marketing Strategies You Need to Know for 2019

Digital marketing is ever-changing as innovation creates new opportunities for marketers every day. Along with it, your strategy must change and grow with technology to keep ahead of your competitors. Trending digital marketing strategies keep marketers on their toes trying to innovate new and different ways to engage their audiences.

In this blog, we’ll take stock of the digital marketing landscape in 2018, discuss what’s changed and what’s new, and see where you should be investing your energy for 2019.

1. Virtual and Augmented Reality

Smart companies are leveraging mobile cameras to improve their customer experience. Through VR and AR, you can improve brand engagement and help with pre-purchase decisions by bringing your products to life. By allowing customers to engage in more profound ways through immersive experiences, they are better equipped to find what they are looking for and be delighted in doing so.

Consider Amazon, who set up Oculus Rift VR booths around Prime Day, allowing shoppers to experience a wide range of products from nerf guns to refrigerators as they would in physical reality. By empowering potential buyers to literally picture themselves owning products and simulate this potential reality, VR preemptively addresses needs and pain points, greatly enhancing the customer journey.

When getting users to a specific physical location isn’t possible, augmented reality can provide greater flexibility and reach through integration with mobile apps. StubHub executed this masterfully, introducing an AR feature that allowed fans to better understand the city and stadium ahead of this year’s Super Bowl. Potential ticket buyers could click to see a 3D map of the stadium, parking, transit lines, and more, making it easier to envision themselves at the event.

By deploying virtual or augmented reality strategically, you have a unique opportunity to supply consumers with the depth of pre-purchase information they crave, while minimizing the effort they must take to obtain it.

2. Artificial Intelligence

In the past, digital marketers have been hesitant to incorporate artificial intelligence into their strategies. But as AI continues to prove itself useful for simplifying data-based experiences and improving user experience, confidence in it has increased.

KLM has done a great job with this, creating a plug-in with Messenger that streamlines everything from booking to check-in and flight status updates. It’s a win-win for both sides: travelers can access all their travel info from anywhere, and KLM can supply it to them without tying up personnel.

Chatbots can also be valuable for facilitating pre-purchase decisions. For instance, Bing’s Business Bot, which is embedded into search results, allows interested users to have basic questions answered by the businesses around them. If their query is not on the pre-configured list, the bot refers them to a phone number. The bot also asks business owners additional questions based on what users are looking for, so that common requests can be answered faster in the future. By refining responses to fit user needs, artificial intelligence allows you to help users better and faster over time.

3. Visual and Voice Search

Search is evolving beyond its text-only origins, meaning that visual and voice search deserve serious consideration now. Think of visual search as a sort of reverse search, using images to find text-based info instead of the other way around. Google, Microsoft, and Pinterest have all jumped into the fray, and it’s only going to get bigger over time. Marketers can gain an edge here by preparing tailor-made content to await potential customers after their image searches, while also gaining even more insights into their preferences.

Domino’s Voice Search Example

Voice search also continues to grow as a way for consumers to find more information without even having to lift a finger. Domino’s has done well here, teaming up with Amazon Echo to let customers order pizza hands-free. Allowing people to interact with you via voice search makes their life easier, and offers a chance to incorporate brand personality and tone in the way you respond. The dynamics of voice search also present a challenge for digital marketers, who must figure out how to optimize for both humans and devices.

4. Vertical Video

With the shift from desktop to mobile a consistent digital marketing consideration, it should come as no surprise that mobile video ads continue to be hot. Savvy marketers are using videos to both engage their audiences in-app between tasks and on social platforms.

What’s new, however, is the movement towards more vertical video. Instagram’s recent introduction of IGTV continues this trend, allowing users to create long-form vertical videos. While advertising is not available (yet) on IGTV, it’s a great place for brands to share their longer content organically. The success of IGTV and other similar platforms is worth keeping an eye on, as it could cause a major shift in favor of vertical video. If this is the case, marketers will need to create horizontal and vertical assets to reach their audience fully.

Staying Ahead of the Competition

These emerging digital marketing strategies make it easier to both reach customers when and where they are ready to buy and improve their experience post-purchase. Competitive companies know that capitalizing on new forms of content and technology will help them capture new audiences on fresh playing fields.

So which of these hot new digital marketing strategies do you plan on implementing? Let me know in the comments below!

Node.js Child Processes: Everything you need to know

Single-threaded, non-blocking performance in Node.js works great for a single process. But eventually, one process in one CPU is not going to be enough to handle the increasing workload of your application.

No matter how powerful your server may be, a single thread can only support a limited load.

The fact that Node.js runs in a single thread does not mean that we can’t take advantage of multiple processes and, of course, multiple machines as well.

Using multiple processes is the best way to scale a Node application. Node.js is designed for building distributed applications with many nodes. This is why it’s named Node. Scalability is baked into the platform and it’s not something you start thinking about later in the lifetime of an application.

This article is a write-up of part of my Pluralsight course about Node.js. I cover similar content in video format there.

Please note that you’ll need a good understanding of Node.js events and streams before you read this article. If you haven’t already, I recommend that you read these two other articles before you read this one:

The Child Processes Module

We can easily spin up a child process using Node’s child_process module, and those child processes can easily communicate with each other through a messaging system.

The child_process module enables us to access Operating System functionalities by running any system command inside a, well, child process.

We can control that child process input stream, and listen to its output stream. We can also control the arguments to be passed to the underlying OS command, and we can do whatever we want with that command’s output. We can, for example, pipe the output of one command as the input to another (just like we do in Linux) as all inputs and outputs of these commands can be presented to us using Node.js streams.

Note that examples I’ll be using in this article are all Linux-based. On Windows, you need to switch the commands I use with their Windows alternatives.

There are four different ways to create a child process in Node: spawn(), fork(), exec(), and execFile().

We’re going to see the differences between these four functions and when to use each.

Spawned Child Processes

The spawn function launches a command in a new process and we can use it to pass that command any arguments. For example, here’s code to spawn a new process that will execute the pwd command.

const { spawn } = require('child_process');
const child = spawn('pwd');

We simply destructure the spawn function out of the child_process module and execute it with the OS command as the first argument.

The result of executing the spawn function (the child object above) is a ChildProcess instance, which implements the EventEmitter API. This means we can register handlers for events on this child object directly. For example, we can do something when the child process exits by registering a handler for the exit event:

child.on('exit', function (code, signal) {
  console.log('child process exited with ' +
              `code ${code} and signal ${signal}`);
});

The handler above gives us the exit code for the child process and the signal, if any, that was used to terminate the child process. This signal variable is null when the child process exits normally.

The other events that we can register handlers for with the ChildProcess instances are disconnect, error, close, and message.

  • The disconnect event is emitted when the parent process manually calls the child.disconnect function.
  • The error event is emitted if the process could not be spawned or killed.
  • The close event is emitted when the stdio streams of a child process get closed.
  • The message event is the most important one. It’s emitted when the child process uses the process.send() function to send messages. This is how parent/child processes can communicate with each other. We’ll see an example of this below.

Every child process also gets the three standard stdio streams, which we can access using child.stdin, child.stdout, and child.stderr.

When those streams get closed, the child process that was using them will emit the close event. This close event is different than the exit event because multiple child processes might share the same stdio streams and so one child process exiting does not mean that the streams got closed.

Since all streams are event emitters, we can listen to different events on those stdio streams that are attached to every child process. Unlike in a normal process though, in a child process, the stdout/stderr streams are readable streams while the stdin stream is a writable one. This is basically the inverse of those types as found in a main process. The events we can use for those streams are the standard ones. Most importantly, on the readable streams, we can listen to the data event, which will have the output of the command or any error encountered while executing the command:

child.stdout.on('data', (data) => {
  console.log(`child stdout:\n${data}`);
});

child.stderr.on('data', (data) => {
  console.error(`child stderr:\n${data}`);
});

The two handlers above will log both cases to the main process stdout and stderr. When we execute the spawn function above, the output of the pwd command gets printed and the child process exits with code 0, which means no error occurred.

We can pass arguments to the command that’s executed by the spawn function using the second argument of the spawn function, which is an array of all the arguments to be passed to the command. For example, to execute the find command on the current directory with a -type f argument (to list files only), we can do:

const child = spawn('find', ['.', '-type', 'f']);

If an error occurs during the execution of the command, for example, if we give find an invalid destination above, the child.stderr data event handler will be triggered and the exit event handler will report an exit code of 1, which signifies that an error has occurred. The error values actually depend on the host OS and the type of error.

A child process stdin is a writable stream. We can use it to send a command some input. Just like any writable stream, the easiest way to consume it is using the pipe function. We simply pipe a readable stream into a writable stream. Since the main process stdin is a readable stream, we can pipe that into a child process stdin stream. For example:

const { spawn } = require('child_process');

const child = spawn('wc');

process.stdin.pipe(child.stdin);

child.stdout.on('data', (data) => {
  console.log(`child stdout:\n${data}`);
});
In the example above, the child process invokes the wc command, which counts lines, words, and characters in Linux. We then pipe the main process stdin (which is a readable stream) into the child process stdin (which is a writable stream). The result of this combination is that we get a standard input mode where we can type something and when we hit Ctrl+D, what we typed will be used as the input of the wc command.

We can also pipe the standard input/output of multiple processes on each other, just like we can do with Linux commands. For example, we can pipe the stdout of the find command to the stdin of the wc command to count all the files in the current directory:

const { spawn } = require('child_process');

const find = spawn('find', ['.', '-type', 'f']);
const wc = spawn('wc', ['-l']);

find.stdout.pipe(wc.stdin);

wc.stdout.on('data', (data) => {
  console.log(`Number of files ${data}`);
});
I added the -l argument to the wc command to make it count only the lines. When executed, the code above will output a count of all files in all directories under the current one.

Shell Syntax and the exec function

By default, the spawn function does not create a shell to execute the command we pass into it. This makes it slightly more efficient than the exec function, which does create a shell. The exec function has one other major difference. It buffers the command’s generated output and passes the whole output value to a callback function (instead of using streams, which is what spawn does).

Here’s the previous find | wc example implemented with an exec function.

const { exec } = require('child_process');

exec('find . -type f | wc -l', (err, stdout, stderr) => {
  if (err) {
    console.error(`exec error: ${err}`);
    return;
  }

  console.log(`Number of files ${stdout}`);
});

Since the exec function uses a shell to execute the command, we can use the shell syntax directly here making use of the shell pipe feature.

Note that using the shell syntax comes with a security risk if you’re executing any kind of dynamic input provided externally. A user can simply do a command injection attack using shell syntax characters like ; and $ (for example, command + '; rm -rf ~').

The exec function buffers the output and passes it to the callback function (the second argument to exec) as the stdout argument there. This stdout argument is the command’s output that we want to print out.

The exec function is a good choice if you need to use the shell syntax and if the size of the data expected from the command is small. (Remember, exec will buffer the whole data in memory before returning it.)

The spawn function is a much better choice when the size of the data expected from the command is large, because that data will be streamed with the standard IO objects.

We can make the spawned child process inherit the standard IO objects of its parent if we want to, but also, more importantly, we can make the spawn function use the shell syntax as well. Here’s the same find | wc command implemented with the spawn function:

const child = spawn('find . -type f | wc -l', {
  stdio: 'inherit',
  shell: true
});
Because of the stdio: 'inherit' option above, when we execute the code, the child process inherits the main process stdin, stdout, and stderr. This causes the child process data event handlers to be triggered on the main process.stdout stream, making the script output the result right away.

Because of the shell: true option above, we were able to use the shell syntax in the passed command, just like we did with exec. But with this code, we still get the advantage of the streaming of data that the spawn function gives us. This is really the best of both worlds.

There are a few other good options we can use in the last argument to the child_process functions besides shell and stdio. We can, for example, use the cwd option to change the working directory of the script. Here’s the same count-all-files example done with a spawn function using a shell and with a working directory set to my Downloads folder. The cwd option here will make the script count all files I have in ~/Downloads:

const child = spawn('find . -type f | wc -l', {
  stdio: 'inherit',
  shell: true,
  cwd: '/Users/samer/Downloads'
});
Another option we can use is the env option to specify the environment variables that will be visible to the new child process. The default for this option is process.env, which gives any command access to the current process environment. If we want to override that behavior, we can pass an empty object as the env option, or new values there, to be considered as the only environment variables:

const child = spawn('echo $ANSWER', {
  stdio: 'inherit',
  shell: true,
  env: { ANSWER: 42 },
});
The echo command above does not have access to the parent process’s environment variables. It can’t, for example, access $HOME, but it can access $ANSWER because it was passed as a custom environment variable through the env option.

One last important child process option to explain here is the detached option, which makes the child process run independently of its parent process.

Assuming we have a file timer.js that keeps the event loop busy:

setTimeout(() => {  
  // keep the event loop busy
}, 20000);

We can execute it in the background using the detached option:

const { spawn } = require('child_process');

const child = spawn('node', ['timer.js'], {
  detached: true,
  stdio: 'ignore'
});

child.unref();

The exact behavior of detached child processes depends on the OS. On Windows, the detached child process will have its own console window while on Linux the detached child process will be made the leader of a new process group and session.

If the unref function is called on the detached process, the parent process can exit independently of the child. This can be useful if the child is executing a long-running process, but to keep it running in the background the child’s stdio configurations also have to be independent of the parent.

The example above will run a node script (timer.js) in the background by detaching and also ignoring its parent stdio file descriptors so that the parent can terminate while the child keeps running in the background.

The execFile function

If you need to execute a file without using a shell, the execFile function is what you need. It behaves exactly like the exec function, but does not use a shell, which makes it a bit more efficient. On Windows, some files cannot be executed on their own, like .bat or .cmd files. Those files cannot be executed with execFile and either exec or spawn with shell set to true is required to execute them.

The *Sync function

The functions spawn, exec, and execFile from the child_process module also have synchronous blocking versions that will wait until the child process exits.

const { 
  spawnSync, 
  execSync, 
  execFileSync, 
} = require('child_process');

Those synchronous versions are potentially useful when trying to simplify scripting tasks or any startup processing tasks, but they should be avoided otherwise.

The fork() function

The fork function is a variation of the spawn function for spawning node processes. The biggest difference between spawn and fork is that a communication channel is established to the child process when using fork, so we can use the send function on the forked process, along with the global process object itself, to exchange messages between the parent and forked processes. We do this through the EventEmitter module interface. Here’s an example:

The parent file, parent.js:

const { fork } = require('child_process');

const forked = fork('child.js');

forked.on('message', (msg) => {
  console.log('Message from child', msg);
});

forked.send({ hello: 'world' });

The child file, child.js:

process.on('message', (msg) => {
  console.log('Message from parent:', msg);
});

let counter = 0;

setInterval(() => {
  process.send({ counter: counter++ });
}, 1000);

In the parent file above, we fork child.js (which will execute the file with the node command) and then we listen for the message event. The message event will be emitted whenever the child uses process.send, which we’re doing every second.

To pass messages down from the parent to the child, we can execute the send function on the forked object itself, and then, in the child script, we can listen to the message event on the global process object.

When executing the parent.js file above, it’ll first send down the { hello: 'world' } object to be printed by the forked child process and then the forked child process will send an incremented counter value every second to be printed by the parent process.

Let’s look at a more practical example of the fork function.

Let’s say we have an http server that handles two endpoints. One of these endpoints (/compute below) is computationally expensive and will take a few seconds to complete. We can use a long for loop to simulate that:

const http = require('http');

const longComputation = () => {
  let sum = 0;
  for (let i = 0; i < 1e9; i++) {
    sum += i;
  }
  return sum;
};

const server = http.createServer();

server.on('request', (req, res) => {
  if (req.url === '/compute') {
    const sum = longComputation();
    return res.end(`Sum is ${sum}`);
  } else {
    res.end('Ok');
  }
});

server.listen(3000);

This program has a big problem; when the /compute endpoint is requested, the server will not be able to handle any other requests because the event loop is busy with the long for loop operation.

There are a few ways we can solve this problem, depending on the nature of the long operation, but one solution that works for all operations is to just move the computational operation into another process using fork.

We first move the whole longComputation function into its own file and make it invoke that function when instructed via a message from the main process:

In a new compute.js file:

const longComputation = () => {
  let sum = 0;
  for (let i = 0; i < 1e9; i++) {
    sum += i;
  }
  return sum;
};

process.on('message', (msg) => {
  const sum = longComputation();
  process.send(sum);
});

Now, instead of doing the long operation in the main process event loop, we can fork the compute.js file and use the messages interface to communicate messages between the server and the forked process.

const http = require('http');
const { fork } = require('child_process');

const server = http.createServer();

server.on('request', (req, res) => {
  if (req.url === '/compute') {
    const compute = fork('compute.js');
    compute.send('start');
    compute.on('message', sum => {
      res.end(`Sum is ${sum}`);
    });
  } else {
    res.end('Ok');
  }
});

server.listen(3000);


When a request to /compute happens now with the above code, we simply send a message to the forked process to start executing the long operation. The main process’s event loop will not be blocked.

Once the forked process is done with that long operation, it can send its result back to the parent process using process.send.

In the parent process, we listen to the message event on the forked child process itself. When we get that event, we’ll have a sum value ready for us to send to the requesting user over http.

The code above is, of course, limited by the number of processes we can fork, but when we execute it and request the long computation endpoint over http, the main server is not blocked at all and can take further requests.

Node’s cluster module, which is the topic of my next article, is based on this idea of child process forking and load balancing the requests among the many forks that we can create on any system.

That’s all I have for this topic. Thanks for reading! Until next time!


Functional setState is the future of React

Justice Mba

Update: I gave a follow up talk on this topic at React Rally. While this post is more about the “functional setState” pattern, the talk is more about understanding setState deeply

React has popularized functional programming in JavaScript. This has led to giant frameworks adopting the Component-based UI pattern that React uses. And now functional fever is spilling over into the web development ecosystem at large.

But the React team is far from relenting. They continue to dig deeper, discovering even more functional gems hidden in the legendary library.

So today I reveal to you new functional gold buried in React, the best-kept React secret — Functional setState!

Okay, I just made up that name… and it’s not entirely new or a secret. No, not exactly. See, it’s a pattern built into React that’s only known by a few developers who’ve really dug in deep. And it never had a name. But now it does — Functional setState!

Going by Dan Abramov’s words in describing this pattern, Functional setState is a pattern where you

“Declare state changes separately from the component classes.”


Okay… what you already know

React is a component-based UI library. A component is basically a function that accepts some properties and returns a UI element.

function User(props) {
  return (
    <div>A pretty user</div>
  );
}

A component might need to have and manage its state. In that case, you usually write the component as a class. Then you have its state live in the class constructor function:

class User {
  constructor () {
    this.state = {
      score : 0
    };
  }

  render () {
    return (
      <div>This user scored {this.state.score}</div>
    );
  }
}

To manage the state, React provides a special method called setState(). You use it like this:

class User {
  increaseScore () {
    this.setState({score : this.state.score + 1});
  }
}

Note how setState() works. You pass it an object containing part(s) of the state you want to update. In other words, the object you pass would have keys corresponding to the keys in the component state, then setState() updates or sets the state by merging the object into the state. Thus, “set-State”.

What you probably didn’t know

Remember how we said setState() works? Well, what if I told you that instead of passing an object, you could pass a function?

Yes. setState() also accepts a function. The function accepts the previous state and current props of the component, which it uses to calculate and return the next state. See it below:

this.setState(function (state, props) {
  return {
    score: state.score - 1
  };
});

Note that setState() is a function, and we are passing another function to it (functional programming… functional setState). At first glance, this might seem ugly: too many steps just to set state. Why would you ever want to do this?

Why pass a function to setState?

The thing is, state updates may be asynchronous.

Think about what happens when setState() is called. React will first merge the object you passed to setState() into the current state. Then it will start that reconciliation thing. It will create a new React Element tree (an object representation of your UI), diff the new tree against the old tree, figure out what has changed based on the object you passed to setState() , then finally update the DOM.

Whew! So much work! In fact, this is even an overly simplified summary. But trust in React!

React does not simply “set-state”.

Because of the amount of work involved, calling setState() might not immediately update your state.

React may batch multiple setState() calls into a single update for performance.

What does React mean by this?

First, “multiple setState() calls” could mean calling setState() inside a single function more than once, like this:

state = {score : 0};

// multiple setState() calls
increaseScoreBy3 () {
  this.setState({score : this.state.score + 1});
  this.setState({score : this.state.score + 1});
  this.setState({score : this.state.score + 1});
}

Now when React encounters “multiple setState() calls”, instead of doing that “set-state” three whole times, React will avoid that huge amount of work I described above and smartly say to itself: “No! I’m not going to climb this mountain three times, carrying and updating some slice of state on every single trip. No, I’d rather get a container, pack all these slices together, and do this update just once.” And that, my friends, is batching!

Remember that what you pass to setState() is a plain object. Now, assume anytime React encounters “multiple setState() calls”, it does the batching thing by extracting all the objects passed to each setState() call, merges them together to form a single object, then uses that single object to do setState().

In JavaScript merging objects might look something like this:

const singleObject = Object.assign(
  {},
  objectFromSetState1,
  objectFromSetState2,
  objectFromSetState3
);

This pattern is known as object composition.

In JavaScript, the way “merging” or composing objects works is: if the three objects have the same keys, the value of the key of the last object passed to Object.assign() wins. For example:

const me  = {name : "Justice"}, 
      you = {name : "Your name"},
      we  = Object.assign({}, me, you);

we.name === "Your name"; //true

console.log(we); // {name : "Your name"}

Because you are the last object merged into we, the value of name in the you object — “Your name” — overrides the value of name in the me object. So “Your name” makes it into the we object… you win! 🙂

Thus, if you call setState() with an object multiple times — passing an object each time — React will merge them. In other words, it will compose a new object out of the multiple objects we passed it. And if any of the objects contain the same key, the value of that key from the last object is stored. Right?

That means that, given our increaseScoreBy3 function above, the final result of the function will just be 1 instead of 3, because React did not immediately update the state in the order we called setState(). Instead, React first composed all the objects together, which results in this: {score : this.state.score + 1}, and then did “set-state” only once — with the newly composed object. Something like this: User.setState({score : this.state.score + 1}).
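A plain JavaScript sketch of why this happens: each call computed its object from the same stale state (score 0), so the three objects are identical and merging them collapses into a single update:

```javascript
// Each setState() call evaluated this.state.score + 1 against the same
// stale state ({score: 0}), so all three objects are the same:
const update = {score : 0 + 1};

// Merging them the way React batches object-based updates:
const merged = Object.assign({}, update, update, update);

console.log(merged.score); // 1 -- not 3
```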

To be super clear, passing an object to setState() is not the problem here. The real problem is passing an object to setState() when you want to calculate the next state from the previous state. So stop doing this. It’s not safe!

Because this.props and this.state may be updated asynchronously, you should not rely on their values for calculating the next state.

Here is a pen by Sophia Shoemaker that demos this problem. Play with it, and pay attention to both the bad and the good solutions in this pen:

Functional setState to the rescue

If you’ve not spent time playing with the pen above, I strongly recommend that you do, as it will help you grasp the core concept of this post.

While you were playing with the pen above, you no doubt saw that functional setState fixed our problem. But how, exactly?

Let’s consult the Oprah of React — Dan.

Note the answer he gave. When you do functional setState…

Updates will be queued and later executed in the order they were called.

So, when React encounters “multiple functional setState() calls” , instead of merging objects together, (of course there are no objects to merge) React queues the functions “in the order they were called.”

After that, React goes on updating the state by calling each function in the “queue”, passing it the previous state — that is, the state as it was before the first functional setState() call (if it’s the first functional setState() currently executing) or the state with the latest update from the previous functional setState() call in the queue.

Again, I think seeing some code would be great. This time though, we’re gonna fake everything. Know that this is not the real thing, but is instead just here to give you an idea of what React is doing.

Also, to make it less verbose, we’ll use ES6. You can always write the ES5 version later if you want.

First, let’s create a component class. Then, inside it, we’ll create a fake setState() method. Also, our component will have an increaseScoreBy3() method, which will do a multiple functional setState. Finally, we’ll instantiate the class, just as React would do.

class User {
  state = {score : 0};

  // let's fake setState
  setState(state, callback) {
    this.state = Object.assign({}, this.state, state);
    if (callback) callback();
  }

  // multiple functional setState call
  increaseScoreBy3 () {
    this.setState( (state) => ({score : state.score + 1}) );
    this.setState( (state) => ({score : state.score + 1}) );
    this.setState( (state) => ({score : state.score + 1}) );
  }
}

const Justice = new User();

Note that setState also accepts an optional second parameter — a callback function. If it’s present, React calls it after updating the state.

Now when a user triggers increaseScoreBy3(), React queues up the multiple functional setState. We won’t fake that logic here, as our focus is on what actually makes functional setState safe. But you can think of the result of that “queuing” process as an array of functions, like this:

const updateQueue = [
  (state) => ({score : state.score + 1}),
  (state) => ({score : state.score + 1}),
  (state) => ({score : state.score + 1})
];

Finally, let’s fake the updating process:

// recursively update state in the order the updates were queued
function updateState(component, updateQueue) {
  if (updateQueue.length === 1) {
    return component.setState(updateQueue[0](component.state));
  }

  return component.setState(
    updateQueue[0](component.state),
    () => updateState(component, updateQueue.slice(1))
  );
}

updateState(Justice, updateQueue);

True, this code is not so sexy. I trust you could do better. But the key focus here is that every time React executes the functions from your functional setState, it updates your state by passing it a fresh copy of the updated state. That makes it possible for functional setState to set state based on the previous state.

Here I made a bin with the complete code. Tinker around it (possibly make it look sexier), just to get more sense of it.

Play with it to grasp it fully. When you come back we’re gonna see what makes functional setState truly golden.

The best-kept React secret

So far, we’ve deeply explored why it’s safe to do multiple functional setStates in React. But we haven’t actually fulfilled the complete definition of functional setState: “Declare state changes separately from the component classes.”

Over the years, the logic of setting state — that is, the functions or objects we pass to setState() — has always lived inside the component classes. This is more imperative than declarative.

Well today, I present you with newly unearthed treasure — the best-kept React secret:

Thanks to Dan Abramov!

That is the power of functional setState. Declare your state update logic outside your component class. Then call it inside your component class.

// outside your component class
function increaseScore (state, props) {
  return {score : state.score + 1};
}

class User {
  // inside your component class
  handleIncreaseScore () {
    this.setState( increaseScore );
  }
}

This is declarative! Your component class no longer cares how the state updates. It simply declares the type of update it desires.

To deeply appreciate this, think about those complex components that would usually have many state slices, updating each slice on different actions. And sometimes, each update function would require many lines of code. All of this logic would live inside your component. But not anymore!

Also, if you’re like me and like keeping every module as short as possible, you may feel like your module is getting too long. Now you have the power to extract all your state change logic into a different module, then import and use it in your component.

import {increaseScore} from "../stateChanges";

class User {
  // inside your component class
  handleIncreaseScore () {
    this.setState( increaseScore );
  }
}

Now you can even reuse the increaseScore function in a different component. Just import it.

What else can you do with functional setState?

Make testing easy!

You can also pass extra arguments to calculate the next state (this one blew my mind… #funfunFunction).
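A sketch of both ideas, using a hypothetical makeIncreaseScoreBy factory (the name is mine, not from the article): closing over an extra argument yields a state updater, and because the updater is a pure function it can be tested without any component:

```javascript
// Hypothetical factory: the extra argument is captured in a closure,
// and the returned function is a plain (state, props) => partialState updater
function makeIncreaseScoreBy (points) {
  return function (state, props) {
    return {score : state.score + points};
  };
}

// Testing the updater needs no React and no component instance:
const next = makeIncreaseScoreBy(10)({score : 5});

console.log(next.score); // 15
```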

Expect even more in…

The Future of React

For years now, the React team has been experimenting with how to best implement stateful functions.

Functional setState seems to be just the right answer to that (probably).

Hey, Dan! Any last words?

If you’ve made it this far, you’re probably as excited as I am. Start experimenting with this functional setState today!

If you feel like I’ve done any nice job, or that others deserve a chance to see this, kindly click on the green heart below to help spread a better understanding of React in our community.

If you have a question that hasn’t been answered or you don’t agree with some of the points here feel free to drop in comments here or via Twitter.

Happy Coding!

Land Rover Discovery Sport Faclift Review & Test Drive

Land Rover Discovery Sport Overview

Land Rover has been putting in efforts to improve their product line-up and has lately become more aggressive in the compact crossover segment. The result of this aggression is the new Discovery Sport. We checked out this new SUV from Land Rover to get a feel of what it is all about. Land Rover has introduced the Discovery Sport with a 2.0-litre petrol engine that produces 237bhp of power and comes with an eight-speed automatic transmission.


The Discovery Sport comes as a replacement for the Freelander 2. At first glance, you will be excused for thinking it to be the Range Rover Evoque, as it looks identical from afar, especially the front portion. Though the two share most of their underpinnings, there are many differences between them. The major difference is that the Discovery Sport is a more rugged machine with better off-roading capabilities.

Land Rover Discovery Sport Exterior & Style

The Discovery Sport, unlike the Freelander 2, isn’t boxy, and that’s a welcome change. In fact, a quick glance at it is enough to figure out that it draws some inspiration from the Range Rover Evoque. But amazingly, the Disco still manages to make its own identity with its subtle design cues.

Look at the Discovery Sport from the front and it screams Land Rover. That smoothly carved out bonnet and the honeycomb grille with the Discovery badge above it give it a clean look up front. And then, adding a bit of spunk, there are those sweet looking circular daytime running lamps around the projector headlamps. We particularly liked the way the rear end coexists with the front thanks to certain elements in its design. The smoked tail lamp cluster too has circular LEDs and there’s the all-important Discovery badge above the registration plate. Although the oddball rear design doesn’t look as evolved and butch as the front, the overall styling will surely grow on you over time.

Land Rover Discovery Sport Interior & Space

On the inside, the Discovery Sport is straight and simple. Purposeful, yet classy. It cannot be termed very premium, but the build quality is solid and built to last, though the finish in some areas does feel ordinary. The instrument cluster has twin dials, is lit in white and has a simplistic design. A new centre console houses the gear dial, which rises for use only when the ignition is switched on. A new touchscreen infotainment system is easy to operate even while driving; however, it takes time getting used to the interface. The driving position is not very tall but near perfect, and offers good visibility. The seats are firm, well contoured and comfortable, and also get electric adjustment. The vehicle overall is pretty spacious, with air vents for all rows, and the huge panoramic glass roof makes it feel even more so. The unique thing here is that every passenger gets a USB charging point, making a total of seven USB ports.

The Discovery Sport is slightly longer than the Freelander 2 and hence also comes as a seven-seater option, which makes this compact SUV a good option for a larger family. But it’s only kids who can occupy the third row comfortably. The seven-seat version gets a space-saving spare tyre instead of the full-size spare found on the standard five-seat version.

Land Rover Discovery Sport Engine & Gearbox

There are only two engine options on the Discovery Sport – both use the same 2.0-litre turbocharged four-cylinder diesel unit, offered in two outputs – 148bhp and 177bhp.

However, our test car used the Ford-derived 2.2-litre diesel, and the Sport’s most noticeable connection to the past is unmistakably that engine, which currently shadows everything the car does with the clatter and gunsmoke odour of yesteryear. Denying the car the new four-cylinder Ingenium oil-burner from launch was clearly the model’s on-paper Achilles heel and, to a greater or lesser extent, that’s the way it plays out on the road.

However, although the direct-injected 2.2-litre motor is not a paragon of refinement or efficiency, its later-life development has at least ensured that it produces the unmistakable surge expected of a modern blower-equipped diesel.

On stream, its 310lb ft of torque is a plentiful amount, and it feels that way. For a car that tipped the scales on the wrong side of two tonnes when we weighed it, a sub-9.0sec 0-60mph time is very decent. So is the 9.0sec it takes to get from 30mph to 70mph, very slightly bettering the time we recorded for the much-admired 2.2-litre engine in the Mazda CX-5 a couple of years ago.

In fact, the soft underbelly of the package is at times evident less in the 20th century motor and more in the 21st century gearbox to which it has been shackled.

Rather inevitably, the nine-speed automatic transmission’s keenness to keep the engine spinning at its productive mid-range pitch means that you’re going to have to live with a lot of downshifting – particularly on the motorway, where the never-ending 47.5mph per 1000rpm final ratio cannot be trusted with even modest acceleration.

Land Rover Discovery Sport Driving Dynamics

Driving in the city or on the highway, you won’t run out of steam. There is sufficient grunt in the engine, which makes the car easy to drive. Ideally, drive it in D mode; S mode is for when you want to sprint ahead, but the difference isn’t anything major. In D the engine is comfortable, as it is mostly in its power band, and this is sufficient for everyday driving.

The ride quality of the Discovery Sport is composed and pliant. This SUV glides over potholes with just a hint of stiffness that can be felt. All the due credit goes to the engineering team. Drive on bad roads and you won’t complain much. The suspension set-up is balanced and doesn’t feel rough or unsettled. The chassis too is stiffer and the alacrity is much better. There is hardly any body roll compared to the Freelander 2, and this is at par with the German rivals. The steering wheel is light and convenient in the city, and it starts to weigh up once the speed increases. There is nothing to complain about in this department. There are multiple off-roading modes as well, neatly placed on the centre console, freeing up space for more stowage.

Land Rover Discovery Sport Safety & Security

The Land Rover Discovery Sport sets the standard for safety in its class and achieved the full five stars in Euro NCAP’s crash tests. An airbag springs up from the top of the bonnet to help reduce head injuries in the case of a pedestrian collision, while interior airbags include one for the driver’s knee, as well as airbags that cover the head, chest and side areas of those up front. There are head and side airbags for passengers in the middle row of seats, too.

An automatic emergency braking system and lane departure assist are other standard features that don’t feature on all rivals. Traffic sign recognition, which displays the speed limit, is optional on all but base SE trim.

Land Rover Discovery Sport Price in Chennai

Land Rover Discovery Sport Ex-Showroom Price in Chennai ranges from Rs. 42,46,997 (Discovery Sport 2.0L Diesel Pure 5 Seats) to Rs. 52,29,484 (Discovery Sport 2.0L Diesel HSE Luxury 5 Seats).

Land Rover Discovery Sport Bottomline

Should you buy one, then? The biggest draw is clearly the prestigious Land Rover badge. And then, there are a lot of options available in the market at that price point, including its cousin, the Range Rover Evoque. Moreover, when you consider the fact that its arch rivals offer a more powerful motor, things get slightly more difficult. But if you want a car that looks good, handles well, is practical and, most importantly, can take the road unexplored without breaking a sweat, then the Land Rover Discovery Sport surely should be on your list.

Ford Figo Aspire Hatchback Test Drive & Performance

Ford Figo Aspire Overview

The Indian market saw a big shake-up in the 1990s when foreign car makers were invited with open doors to the country. Ford was among the first few to enter, in 1997, and although the Blue Oval has been in the country for almost two decades now, they haven’t been able to dominate with a significant market share. This is set to change as the company brought in the Figo, got a terrific response and decided to invest more money in India. Then the EcoSport came, an even bigger success, and more and more Indians brought home their first Ford. Now the American automaker is set to launch the Figo Aspire, a car which will further boost Ford’s sales numbers in India as it sits in the highly competitive yet volume-churning compact sedan space. Has Ford nailed it yet again? A drive around Udaipur helps us get our answer.


Ford Figo Aspire Exterior & Style

The compact sedan segment generally has cars which look like more of a compromise; good looks take a back seat here. That’s not the case with the Ford Figo Aspire, though. This one is a very good looking car. The Ford Figo Aspire is based on Ford’s Kinetic Design 2.0. The bonnet and boot are neither too long nor too short. The front design is similar to other new Fords like the Fiesta and Mondeo. There is a prominent swage line that runs across the car. The glass area is small, so the car looks larger. This also makes it look well balanced and not too tall. The boot design too is good, with a chrome strip that is well proportioned: neither too flashy nor too thin. The car looks premium even from the rear.

Ford Figo Aspire Interior & Space

After exploring the upmarket exteriors, the interiors continue to impress us considering how the previous generation Figo was. The Aspire has a touch of premium-ness to its cabin thanks to the extensive use of beige and the new dashboard, which is derived from its elder siblings. You will be very familiar with the centre console and steering wheel from the EcoSport and Fiesta. The three-spoke steering as always feels great to hold with those chunky contours, and the piano black inserts look good. There are controls for the audio system and Bluetooth telephony. The stalks have finally been swapped for the Indian driving style! The indicator, trip meter and dipper controls are on the right while the wiper controls are on the left-hand side. The three-pod instrument cluster looks small and is a bit basic with a tiny MID display, but it is quite easy to read. The headlight and fog lamp controls continue to be in the European position, which is convenient, and the boot release button sits there too. The rearview mirror controls are placed on the A-pillar like in the old Figo and Classic, offering electric adjustment and a power folding function with a tap downwards.

The Figo Aspire gets automatic climate control on the Titanium and Titanium+ variants that chills the cabin quite effectively, but at higher fan speeds there is a lot of noise coming from the vents. The SYNC system with AppLink comes on the Titanium+ variant, which has a 4.2-inch screen. It offers CD, AUX, USB and Bluetooth connectivity with a voice-activated handsfree system. It streams music, which sounds good through its 4-speaker audio system, but at high volumes the bass tends to get distorted. There is an emergency assist system that activates when the car experiences a collision and automatically calls the emergency responders, providing location and vehicle information. The AppLink system currently works with four apps, including Glympse, which lets you share your location with contacts, ESPN Cricinfo, which keeps you updated with scores, MapMyIndia to explore new attractions and Burrp to discover new food destinations.

Another interesting new feature which is unheard of in this segment is the MyKey technology available with the SYNC system. This system lets the owners program the key that limits the top speed of the car, music volume, prevents switching TCS off and also ensures the usage of seatbelt by turning off the audio system unless the driver wears the seatbelt. So if you don’t want your car to be mishandled by some other driver then you can programme the key, which offers great peace of mind. The Trend and Titanium variants don’t offer SYNC system and instead come with an innovative MyFord Dock feature. There is a small compartment on the top of the dashboard, where you can mount your phone and charge it with the USB port placed in the same compartment and also the AUX port for music connectivity. This way you can easily access your phone’s navigation system too without fumbling with the device.

The quality inside the cabin is good and never does it feel cheap or built to a cost. The doors are heavy and the car has solid build quality. The controls have a tactile feel, the air vents (none at the rear but the AC is a chiller), audio controls, climate control knobs, window switches, etc. feel built to last. There are more than 20 smart storage spaces inside the cabin to make the cabin look neat and tidy. The front door pockets can hold two bottles including a 1.5-litre and a 1.0-litre bottle with still some extra space left for more things. Then there is the sizeable glovebox with a neat pen holder. Just below the audio system there is a convenient place to park your phone that has rubberised material to keep it in place. Between the front seats there is a compartment that gets three cup holders, coin storage and a bin for the rear passengers. The one we liked the most is a secret side compartment, which is only accessible when the driver’s door is open. There are no door pockets for the rear but there are seatback pockets for newspapers and magazines and a parcel shelf at the back with carved out space to keep tissue boxes and similar stuff. There are no grab handles on the Titanium + trim because of six airbags but other variants get it. There are cabin lights for front passengers but missing for the rear.

Ford Figo Aspire Engine & Performance

There are three engines to choose from in the Ford Figo Aspire. First is the 1.2-litre petrol that churns out 87bhp of power and 112Nm of torque, mated to a five-speed manual transmission. This engine is ideal for city driving, while on the highway it is just fine if you want to cruise. The punch comes in after 3000rpm, so keep it above that mark if you want to extract the best performance.

The Ford Figo Aspire’s second engine is the 1.5-litre petrol. This engine comes only with a six-speed DCT (dual clutch transmission). Simply put, it is a high-end automatic transmission. This is quite a powerful engine and is fun to drive in a spirited fashion. It is very responsive and will satisfy those looking for extra performance with the convenience of an automatic.

The third engine is the 1.5-litre diesel that produces 98bhp of power and 215Nm of torque. This engine has been tuned for better performance; however, there is still some turbo lag. It suits both city and highway driving, and it is the best engine of the lot. We would recommend it for both performance and fuel efficiency.

The diesel is certainly our pick of the lot; it has a strong mid-range, though at low rpms it struggles to pull. The sweet spot for this engine is between 2000-3500rpm, and it will return about 15-17km/l in the city. The Ford Figo Aspire has a good suspension set-up: the rear is softer than the front, so the ride at the back is still decent. The handling is among the best in class, and the electric power steering is light and easy, both in the city and on the highway.

Ford Figo Aspire Driving Dynamics

Ford cars are known to be dynamically rich, and while the Figo Aspire handles very well, better than its rivals in the segment, it doesn’t have the same feel as a Fiesta or even the good old Classic (it uses the same platform, though with some modifications). The company tested the vehicle for 150 hours in the wind tunnel for improved aerodynamics, which helps both performance and efficiency. The steering is now electric (EPAS with pull-drift compensation) and has some vagueness at the centre; it weighs up decently at speed but remains on the lighter side in the interest of easy city driving. The suspension is naturally on the stiffer side, but just enough to keep the car planted at high speeds, while the focus is clearly on comfort, as road, tyre and suspension noise is well insulated.

Due to the body being lighter, it’s not as surefooted as other Ford cars but still inspires enough confidence to drive fast. The petrol Figo offers slightly better handling due to its lower front-end weight while there is some body roll although not much. The Figo Aspire excels in the ride quality department, it glides over roads with authority and takes bad roads in its stride with utmost confidence. Hit a big bump at speed and you will encounter some bounciness but for the most part, the suspension does a fantastic job of ironing out inconsistent tarmac. Braking performance is excellent on the Figo Aspire and the car stops with utmost confidence, even when you stand on the brake pedal hard.

Ford Figo Aspire Safety & Security

Ford has given special attention to occupant safety: the new sedan comes with a robust passenger cage made from high-strength steel, both driver and passenger airbags, and, for the first time in the history of the Indian auto industry, optional side and curtain airbags. The top-end trims also get ABS with EBD, along with hill assist, though the latter is available only on the automatic transmission variant.

Ford Figo Aspire Cost in Chennai

The Ford Figo Aspire’s ex-showroom price in Chennai ranges from Rs. 5,47,069 (Figo Aspire 1.2P Ambiente MT) to Rs. 8,11,098 (Figo Aspire 1.5D Titanium Plus MT). Get the best offers for the Ford Figo Aspire from Ford dealers in Chennai, and check the Figo Aspire price in Chennai at Carzprice.

Ford Figo Aspire Verdict

The Aspire is a very impressive package: arguably the best car to drive in the segment and incredibly spacious for the sub-four-metre category. There are plenty of smart storage spaces around the cabin, and the smart docking station for devices at the top of the centre console, with its easy-to-use charging port, makes it user-friendly for today’s consumers.

The 100% correct way to do CSS breakpoints

David Gilbertson

For the next minute or so, I want you to forget about CSS. Forget about web development. Forget about digital user interfaces.

And as you forget these things, I want you to allow your mind to wander. To wander back in time. Back to your youth. Back to your first day of school.

It was a simpler time, when all you had to worry about was drawing shapes and keeping your incontinence in check.

Take a look at the dots above. Notice how some of them are clumped together, and some of them spread out? What I want you to do is break them up into five groups for me, however you see fit.

Go ahead. After checking that no one is watching, draw a circle around each of the five groups with your child-like finger.

You probably came up with something like the below, right? (And whatever you do, don’t tell me you scrolled down without doing the exercise. I will face palm.)

Sure, those two dots on the right could have gone either way. If you grouped them together it’s OK, I guess. They say there’s no wrong answer, but I’ve never been wrong, so I’ve never been on the receiving end of that particular platitude.

Before I go on, did you draw something like the below?

Probably not. Right?

But that’s basically what you’d be doing if you set your breakpoints at positions matching the exact width of popular devices (320px, 768px, 1024px).

Have words of the below nature ever entered your ears or exited your mouth?

“Is the medium breakpoint up to 768px, or including 768? I see… and that’s iPad landscape, or is that ‘large’? Oh, large is 768px and up. I see. And small is 320px? What is this range from 0 to 319px? A breakpoint for ants?”

I could proceed to show you the correct breakpoints and leave it at that. But I find it very curious that the above method (“silly grouping”) is so widespread.

Why should that be?

I think the answer to this problem, like so many problems, comes down to misaligned terminology. After all, waterboarding at Guantanamo Bay sounds super rad if you don’t know what either of those things are. (Oh I wish that was my joke.)

I think we mix up “boundaries” and “ranges” in our discussions and implementations of breakpoints.

Tell me, if you do breakpoints in Sass, do you have a variable called $large that is, say, 768px?

Is that the lower boundary of the range you refer to as large, or the upper boundary? If it’s the lower, then you must have no $small because that should be 0, right?

And if it’s the upper boundary then how would you define a breakpoint $large-and-up? That must be a media query with a min-width of $medium, right?

And if you are referring to just a boundary when you say large, then we’re in for confusion later on because a media query is always a range.

This situation is a mess and we’re wasting time thinking about it. So I have three suggestions:

  1. Get your breakpoints right
  2. Name your ranges sensibly
  3. Be declarative

Tip #1: Get your breakpoints right

So what are the right breakpoints?

Your kindergarten self already drew the circles. I’ll just turn them into rectangles for you.

600px, 900px, 1200px, and 1800px if you plan on giving the giant-monitor people something special. On a side note, if you’re ordering a giant monitor online, make sure you specify it’s for a computer. You don’t want to get a giant lizard in the mail.

Those dots your channeled young self has been playing with actually represent the 14 most common screen sizes.


So we can make a pretty little picture that allows for the easy flow of words between the folks dressed up as business people, designers, developers, and testers.

I’m regretting my choice of orange and green, but I’m not redoing all of these pictures now.

Tip #2: Name your ranges sensibly

Sure, you could name your breakpoints papa-bear and baby-bear if you like. But if I’m going to sit down with a designer and discuss how the site should look on different devices, I want it to be over as quickly as possible. If naming a size portrait tablet facilitates that, then I’m sold. Heck, I’d even forgive you for calling it “iPad portrait.”

“But the landscape is changing!” you may shout. “Phones are getting bigger, tablets are getting smaller!”

But your website’s CSS has a shelf life of about three years (unless it’s Gmail). The iPad has been with us for twice that time, and it has yet to be dethroned. And we know that Apple no longer makes new products, they just remove things from existing ones (buttons, holes, etc).

So 1024 x 768 is here to stay, folks. Let’s not bury our heads in the sand. (Fun fact: ostriches don’t live in cities because there is no sand, and thus nowhere to hide from predators.)

Conclusion: communication is important. Don’t purposefully detach yourself from helpful vocabulary.

Tip #3: Be declarative

I know, I know, that word “declarative” again. I’ll put it another way: your CSS should define what it wants to happen, not how it should happen. The “how” belongs hidden away in some sort of mixin.

As discussed earlier, part of the confusion around breakpoints is that variables that define a boundary of a range are used as the name of the range. $large: 600px simply makes no sense if large is a range. It’s the same as saying var coordinates = 4;.

So we can hide those details inside a mixin rather than expose them to be used in the code. Or we can do one better and not use variables at all.

At first I did the below snippet as a simplified example. But really I think it covers all the bases. To see it in action, check out this pen. I’m using Sass because I can’t imagine building a site without it. The logic applies to CSS or Less just the same.
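A sketch of what such a set of mixins can look like, built from the 600px/900px/1200px/1800px breakpoints above (mixin names are illustrative, following the -up/-only convention):

```scss
// Breakpoints from the kindergarten exercise: 600, 900, 1200, 1800.
@mixin for-phone-only {
  @media (max-width: 599px) { @content; }
}
@mixin for-tablet-portrait-up {
  @media (min-width: 600px) { @content; }
}
@mixin for-tablet-portrait-only {
  @media (min-width: 600px) and (max-width: 899px) { @content; }
}
@mixin for-tablet-landscape-up {
  @media (min-width: 900px) { @content; }
}
@mixin for-tablet-landscape-only {
  @media (min-width: 900px) and (max-width: 1199px) { @content; }
}
@mixin for-desktop-up {
  @media (min-width: 1200px) { @content; }
}
@mixin for-desktop-only {
  @media (min-width: 1200px) and (max-width: 1799px) { @content; }
}
@mixin for-big-desktop-up {
  @media (min-width: 1800px) { @content; }
}

// Usage: declare *what* should happen; the media query details stay hidden.
.my-box {
  padding: 10px;
  @include for-desktop-up { padding: 20px; }
}
```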

Note that I’m forcing the developer to specify the -up or -only suffix.

Ambiguity breeds confusion.

An obvious criticism might be that this doesn’t handle custom media queries. Well good news, everybody. If you want a custom media query, write a custom media query. (In practice, if I needed more complexity than the above I’d cut my losses and run into the loving embrace of Susy’s toolkit.)

Another criticism might be that I’ve got eight mixins here. Surely a single mixin would be the sane thing to do, then just pass in the required size, like so:
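A sketch of that single-mixin alternative (again with illustrative size names):

```scss
// One mixin, with the size passed in as an argument.
@mixin for-size($size) {
  @if $size == phone-only {
    @media (max-width: 599px) { @content; }
  } @else if $size == tablet-portrait-up {
    @media (min-width: 600px) { @content; }
  } @else if $size == tablet-landscape-up {
    @media (min-width: 900px) { @content; }
  } @else if $size == desktop-up {
    @media (min-width: 1200px) { @content; }
  } @else if $size == big-desktop-up {
    @media (min-width: 1800px) { @content; }
  }
}

// Usage — note that a typo in the size name silently matches nothing.
.my-box {
  @include for-size(desktop-up) { padding: 20px; }
}
```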

Sure, that works. But you won’t get compile-time errors if you pass in an unsupported name. And passing in a Sass variable means exposing eight variables just to feed a switch inside a mixin.

Not to mention the syntax @include for-desktop-up {...} is totes more pretty than @include for-size(desktop-up) {...}.

A criticism of both these code snippets might be that I’m typing out 900px twice, and also 899px. Surely I should just use variables and subtract 1 when needed.

If you want to do that, go bananas, but there are two reasons I wouldn’t:

  1. These are not things that change frequently, nor are they numbers used anywhere else in the code base. No problems are caused by the fact that they aren’t variables (unless you want to expose your Sass breakpoints to a script that injects a JS object with those variables into your page).
  2. The syntax is nasty when you want to turn numbers into strings with Sass. That’s the price you pay for believing that repeating a number twice is the worst of all evils.

Oh, and since I’ve taken on a ranty tone over the last few paragraphs… I pity the fool who does something magical like storing breakpoints in a Sass list and looping over them to output media queries, or something similarly ridiculous that future developers will struggle to decipher.

Complexity is where the bugs hide.

Finally, you may be thinking “shouldn’t I be basing my breakpoints on content, not devices?”. Well, I’m amazed you made it this far, and the answer is yes… for sites with a single layout. Or if you have multiple layouts and are happy to have a different set of breakpoints for each layout. Oh, and also if your site design doesn’t change often, or you’re happy to update your breakpoints when your designs update, since you’ll want to keep them based on the content, right?

For complex sites, life is much easier if you pick a handful of breakpoints to use across the site.

We’re done! But this post has not been as furry as I would like; let me see if I can think of an excuse to include some…

Oh, I know!

Bonus tips for breakpoint development

  1. If you need to experience CSS breakpoints for screen sizes bigger than the monitor you’re sitting at, use the ‘responsive’ mode in Chrome DevTools and type in whatever giant size you like.
  2. The blue bar shows ‘max-width’ media queries, the orange bar is ‘min-width’ media queries, and the green bar shows media queries with both a min and a max.
  3. Clicking a media query sets the screen to that width. If you click on a green media query more than once, it toggles between the max and min widths.
  4. Right-click a media query in the media queries bar to go to the definition of that rule in the CSS.

Hey, thanks for reading! Comment with your top ideas, I’d love to hear them. And click the little heart if you think I deserve it, or leave it hollow and empty, like my sense of self-worth will be if you don’t.


A Study Plan To Cure JavaScript Fatigue

Like everybody else, I recently came across Jose Aguinaga’s post “How it feels to learn JavaScript in 2016”.

It’s clear that this post hit a nerve: I saw it reaching the top spot on Hacker News not once but twice. It was the most popular post on /r/javascript as well, and as of right now it has over 10k likes on Medium — which is probably more than all my own posts put together. But who’s counting?

This didn’t come as a surprise though: I’ve known for a long time that the JavaScript ecosystem can be confusing. In fact, the very reason why I ran the State Of JavaScript survey was to find out which libraries were actually popular, and finally sort the signal from the noise.

But today, I want to go one step further. Instead of simply complaining about the state of things, I’m going to give you a concrete, step-by-step study plan to conquering the JavaScript ecosystem.

Who Is This For

This study plan is for you if:

  • You’re already familiar with basic programming concepts like variables and functions.
  • You might have already done back-end work with languages such as PHP and Python, and maybe used front-end libraries such as jQuery for a few simple hacks.
  • You now want to get into more serious front-end development but are drowning in frameworks and libraries before you’ve even started.

Things We’ll Cover

  • What a modern JavaScript web app looks like
  • Why you can’t just use jQuery
  • Why React is the safest pick
  • Why you may not need to “learn JavaScript properly” first
  • How to learn ES6 syntax
  • Why and how to learn Redux
  • What GraphQL is and why it’s a big deal
  • Where to go next

Resources Mentioned Here

Disclaimer: this post will include a few affiliate links to courses by Wes Bos, but the material is recommended because I genuinely think it’s good, and not just because of the affiliate scheme.

If you would rather find other resources, Mark Erikson maintains a great list of React, ES6, and Redux links.

JavaScript vs JavaScript

Before we start, we need to make sure we’re talking about the same thing. If you google “Learn JavaScript” or “JavaScript study plan”, you’ll find a ton of resources that teach you how to learn the JavaScript language.

But that’s actually the easy part. While you can definitely dig deep and learn the intricacies of the language, the truth is most web apps use relatively simple code. In other words, 80% of what you’ll ever need to write web apps is typically covered in the first few chapters of your typical JavaScript book.

No, the hard problem is mastering the JavaScript ecosystem, with its countless competing frameworks and libraries. The good news is, that’s exactly what this study plan focuses on.

The Building Blocks Of JavaScript Apps

In order to understand why modern JavaScript apps seem so complex, you first have to understand how they work.

For starters, let’s look at a “traditional” web app circa 2008:

  1. The database sends data to your back-end (e.g. your PHP or Rails app).
  2. The back-end reads that data and outputs HTML.
  3. The HTML is sent to the browser, which displays it as the DOM (basically, a web page).

Now a lot of these apps also sprinkled in some JavaScript code on the client to add interactivity, such as tabs and modal windows. But fundamentally, the browser was still receiving HTML and going from there.

Now compare this with a “modern” 2016 web app, also known as a “Single Page App”.

Notice the difference? Instead of sending HTML, the server now sends data, and the “data to HTML” conversion step happens on the client instead (which is why you’re also sending along the code that tells the client how to perform said conversion).

This has many implications. First, the good:

  • For a given piece of content, sending only data is faster than sending entire HTML pages.
  • The client can swap in content instantly without having to ever refresh the browser window (thus the term “Single Page App”).

The bad:

  • The initial load takes longer since the “data to HTML” codebase can grow quite large.
  • You now need a place to store and manage the data on the client too, in case you want to cache it or inspect it.

And the ugly:

  • Congratulations — you now have to deal with a client-side stack, which can get just as complex as your server-side stack.

The Client-Server Spectrum

So why go through all this trouble if there are so many downsides? Why not just stick to the good old PHP apps of old?

Well, imagine you’re building a calculator. If the user wants to know what 2 + 2 is, it doesn’t make sense to go all the way back to the server to perform the operation when the browser is perfectly capable of doing it.

On the other hand, if you’re building a purely static site such as a blog, it’s perfectly fine to generate the final HTML on the server and be done with it.

The truth is, most web apps fall somewhere in the middle of the spectrum. The problem is knowing where.

But the key thing is that the spectrum is not continuous: you can’t start with a pure server-side app and slowly move towards a pure client-side app. At some point (the Divide), you’ll be forced to stop and refactor everything, or else end up with a mess of unmaintainable spaghetti code.

This is why you shouldn’t “just use jQuery” for everything. You can think of jQuery like duct tape. It’s amazingly handy for small fixes around the house, but if you keep adding more and more, things will start looking ugly.

On the other hand, modern JavaScript frameworks are more like 3D-printing a replacement piece: it takes more time, but the result is a lot cleaner and sturdier.

In other words, mastering the modern JavaScript stack is a bet that no matter where they start, most web apps will probably end up on the right side of the divide sooner or later. So yes, it’s more work, but better safe than sorry.

Week 0: JavaScript Basics

Unless you’re a pure back-end developer, you probably know some JavaScript. And even if you don’t, JavaScript’s C-like syntax will look somewhat familiar if you’re a PHP or Java developer.

But if JavaScript is a complete mystery to you, don’t despair. There are a lot of free resources out there that will quickly bring you up to speed. For example, a good place to start is Codecademy’s JavaScript lessons.

Week 1: Start With React

Now that you know basic JavaScript syntax, and that you understand why JavaScript apps can appear so complex, let’s talk specifics. Where should you start?

I believe the answer is React.

React is a UI library created and open-sourced by Facebook. In other words, it takes care of that “data to HTML” step (the View Layer).

Now don’t get me wrong: I’m not telling you to pick React because it’s the best library out there (that’s highly subjective), but because it’s pretty good.

  • React might not be the most popular library, but it’s pretty popular.
  • React might not be the most lightweight library, but it’s pretty lightweight.
  • React might not be the easiest to learn, but it’s pretty easy to learn.
  • React might not be the most elegant library, but it’s pretty elegant.

In other words, React might not be the best choice in every situation, but I believe it’s the safest. And believe me, “just when you’re starting out” is not the right time to take risks with your technological choices.

React will also introduce you to some useful concepts like components, application state, and stateless functions that will prove useful no matter which framework or libraries you end up using during your career.
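To make the “stateless function” idea concrete without any build tooling, here is a framework-free sketch in plain JavaScript. The names are hypothetical and this is not the actual React API (real React components return elements, not HTML strings), but the shape — a pure function from props to markup — is the same:

```javascript
// A "stateless component" is just a function from props (data) to markup.
function Greeting(props) {
  return '<h1>Hello, ' + props.name + '!</h1>';
}

// A parent component composes children, passing data down as props.
function App(props) {
  return '<div>' + props.users.map(function (user) {
    return Greeting({ name: user });
  }).join('') + '</div>';
}

console.log(App({ users: ['Ada', 'Grace'] }));
// <div><h1>Hello, Ada!</h1><h1>Hello, Grace!</h1></div>
```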

Finally, React has a large ecosystem of other packages and libraries that work well with it. And its sheer popularity means you’ll be able to find a lot of help on sites like Stack Overflow.

I personally recommend the React for Beginners course by Wes Bos. It’s how I learned React myself, and it’s just been completely overhauled with the latest React best practices.

Should You “Learn JavaScript Properly” First?

If you’re a very methodical learner, you might want to get a good grasp of the fundamentals of JavaScript before you do anything else.

But for others, this feels like learning to swim by studying human anatomy and fluid dynamics. Sure, they both play a huge role in swimming, but it’s more fun to just jump in the pool!

There’s no right or wrong answer here, it all depends on your learning style. The truth is, most basic React tutorials will probably use only a tiny subset of JavaScript anyway, so it’s perfectly fine to focus on only what you need now and leave the rest for later.

This also applies to the JavaScript ecosystem at large. Don’t worry too much about understanding the ins and outs of things like Webpack or Babel for now. In fact React recently came out with its own little command-line utility that lets you create apps with no build configuration whatsoever.

Week 2: Your First React Project

Let’s assume you’ve just completed a React course. If you’re like me, two things are probably true:

  • You’ve already forgotten half of what you just learned.
  • You can’t wait to put the half you do remember in practice.

I believe the best way to learn a framework or a language is to just use it. And personal projects are the perfect occasion to try out new technologies.

A personal project could be anything from a single page to a complex web app, but I feel like redesigning your own personal site can be a good middle ground. Plus, I know you’ve probably been putting it off for years!

Now I did say earlier that using single-page apps for static content was often overkill, but React actually has a secret weapon: Gatsby, a React static site generator that lets you “cheat” and get all the benefits of React without any of the downsides.

Here’s why Gatsby is a great way to get started with React:

  • A pre-configured Webpack, meaning you get all the benefits without any of the headaches.
  • Automatic routing based on your directory structure.
  • All HTML content is also generated server-side, so you get the best of both worlds.
  • Static content means no server and super-easy hosting on GitHub Pages.

I used Gatsby for the State Of JavaScript site, and not having to worry about routing, build tool configuration, or server-side rendering saved me a ton of time.

Week 3: Mastering ES6

In my own quest to learn React, I soon reached a point where I could get by copy-pasting code samples, but there was still a lot I didn’t understand.

Specifically, I was unfamiliar with all the new features introduced by ES6, such as:

  • Arrow functions
  • Object destructuring
  • Classes
  • The spread operator
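The four features above can be toured in one small runnable sketch (variable names are my own):

```javascript
// Arrow functions: terser syntax for function expressions.
const double = (n) => n * 2;

// Object destructuring: pull fields out of an object in one step.
const user = { name: 'Ada', role: 'admin', city: 'London' };
const { name, role } = user;

// Classes: syntactic sugar over prototype-based inheritance.
class Counter {
  constructor() {
    this.count = 0;
  }
  increment() {
    this.count += 1;
    return this.count;
  }
}

// Spread operator: expand an array in place.
const base = [1, 2, 3];
const extended = [...base, 4, 5];

console.log(double(21));                // 42
console.log(name, role);                // Ada admin
console.log(new Counter().increment()); // 1
console.log(extended);                  // [ 1, 2, 3, 4, 5 ]
```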

If you’re in the same boat, it might be time to take a couple days and learn ES6 properly. If you enjoyed the React for Beginners course, you might want to check out Wes’ excellent ES6 for Everybody videos.

Or if you prefer free resources, check out Nicolas Bevacqua’s book, Practical ES6.

A good exercise for mastering ES6 is going through an older codebase (such as the one you just created in Week 2!) and converting your code to ES6’s shorter, terser syntax whenever possible.

Week 4: Taking On State Management

At this point you should be capable of building a simple React front-end backed by static content.

But real web apps are not static: they need to get their data from somewhere, generally a database of some kind.

Now you could just send data to your individual components, but that quickly gets messy. For example, what if two components need to display the same piece of data? Or need to talk to each other?

This is where State Management comes in. Instead of storing your state (in other words, your data) bit by bit in each component, you store it in a single global store that then dispatches it to your React components:

In the React world, the most popular state management library is Redux. Redux not only helps centralize your data, but it also enforces some strict protocols for manipulating this data.

You can think of Redux as a bank: you can’t go to your local branch and manually modify your account total (“here, let me just add a couple extra zeroes!”). Instead, you fill out a deposit form, then give it to a bank teller authorized to perform the action.

Similarly, Redux also won’t let you modify your global state directly. Instead, you pass actions to reducers, special functions that perform the operation and return the new, updated state as a result.
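That contract is small enough to sketch without the library itself: a reducer is simply a pure function `(state, action) => newState`. The bank-account example below is hypothetical and hand-rolled, but the reducer signature matches what Redux expects:

```javascript
// A reducer never mutates state; it returns a new, updated copy.
function accountReducer(state, action) {
  switch (action.type) {
    case 'DEPOSIT':
      return { ...state, balance: state.balance + action.amount };
    case 'WITHDRAW':
      return { ...state, balance: state.balance - action.amount };
    default:
      return state; // unknown actions leave state untouched
  }
}

// "Fill out a deposit form" = dispatch an action describing the change.
let state = { balance: 100 };
state = accountReducer(state, { type: 'DEPOSIT', amount: 50 });
console.log(state.balance); // 150
```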

The result of all this extra work is a highly standardized and maintainable data flow throughout your app, plus access to tools such as the Redux Devtools to help you visualize it.

Once again you can stay with our friend Wes and learn Redux with his Redux course, which is actually completely free!

Or you can check out Redux creator Dan Abramov’s own video series, which is free as well.

Bonus Week 5: Building APIs With GraphQL

So far we’ve pretty much only talked about the client, and that’s only half the equation. And even without going into the whole Node ecosystem, it’s important to address one key aspect of any web app: how data gets from the server to the client.

It won’t come as a surprise that this, too, is rapidly changing, with GraphQL (yet another Facebook open-source project) emerging as a serious alternative to traditional REST APIs.

Whereas a REST API exposes multiple REST routes that each give you access to a predefined dataset (say, /api/posts, /api/comments, etc.), GraphQL exposes a single endpoint that lets the client query for the data it needs.
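For instance, instead of hitting /api/posts and then /api/comments, the client could send a single query shaped like the data it wants (the field names here are hypothetical):

```graphql
query {
  posts {
    title
    comments {
      body
      author
    }
  }
}
```

The server responds with JSON matching exactly that shape: nothing more, nothing less.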

Think of it as making multiple trips to the butcher shop, bakery, and grocery store, versus giving someone a shopping list and sending them on their way to all three.

This new strategy becomes especially significant when you need to query multiple data sources. Just like with our shopping list example, you can now get data back from all these sources with a single request.

GraphQL has been picking up steam over the past year or so, with many projects (such as Gatsby, which we used in Week 2) planning to adopt it.

GraphQL itself is just a protocol, but its best implementation right now is probably the Apollo library, which works well with Redux. There is still a lack of instructional material around GraphQL and Apollo, but hopefully the Apollo documentation can help you get started.

Beyond React & Co

I recommended you start with the React ecosystem because it’s a safe pick, but it’s by no means the only valid front-end stack. If you want to keep exploring, here are two recommendations:


Vue

Vue is a relatively new library, but it’s growing at record speed and has already been adopted by major companies, especially in China, where it’s used by the likes of Baidu and Alibaba (think Chinese Google and Chinese Amazon). It’s also the official front-end layer of the PHP framework Laravel.

Compared to React, some of its key selling points are:

  • Officially-maintained routing and state management libraries.
  • Focus on performance.
  • Lower learning curve thanks to using HTML-based templates.
  • Less boilerplate code.

As it stands, the two main things that still give React an edge over Vue are the size of the React ecosystem, and React Native (more on this later). But I wouldn’t be surprised to see Vue catch up soon!


Elm

If Vue is the more approachable option, Elm is the more cutting-edge one. Elm is not just a framework but an entirely new language that compiles down to JavaScript.

This brings multiple advantages, such as improved performance, enforced semantic versioning, and no runtime exceptions.

I haven’t tried Elm personally, but it’s been warmly recommended by friends and Elm users generally seem very happy with it (as shown by its 84% satisfaction rating in the State Of JavaScript survey).

Next Steps

By now you should have a pretty good grasp of the entire React front-end stack, and hopefully be reasonably productive with it.

That doesn’t mean you’re done though! This is only the beginning of your journey through the JavaScript ecosystem. Some of the other topics you’ll eventually run into include:

  • JavaScript on the server (Node, Express…)
  • JavaScript testing (Jest, Enzyme…)
  • Build tools (Webpack…)
  • Type systems (TypeScript, Flow…)
  • Dealing with CSS in your JavaScript apps (CSS Modules, Styled Components…)
  • JavaScript for mobile apps (React Native…)
  • JavaScript for desktop apps (Electron…)

I can’t cover all this here but don’t despair! The first step is always the hardest, and guess what: you’ve just taken it by reading this study plan.

And now that you understand how the various pieces of the ecosystem fit together, it’s just a matter of lining up what you want to learn next and knocking down a new technology each month.


GitHub vs. Bitbucket vs. GitLab vs. Coding

Today, repository management services are key components of collaborative software development. They enable software developers to manage changes to the source code and related files, and to create and maintain multiple versions in one central place. There are numerous benefits to using them, even if you work in a small team or are a one-man army. Repository management services enable teams to move faster and preserve efficiency as they scale up.

In this article we briefly introduce and compare four popular repository management services: GitHub, Bitbucket, GitLab, and Coding. We touch on multiple aspects, including basic features, relationship to open source, importing repositories, free plans, cloud-hosted plans, and self-hosted plans. The purpose of this article is not to swing opinions but to serve as a starting point for your own research when you are looking for the best solution for your project.


GitHub

GitHub is a Git-based repository hosting platform originally launched in 2008 by Tom Preston-Werner, Chris Wanstrath, and PJ Hyett. It is the largest repository host, with more than 38 million projects.


Bitbucket

Bitbucket was also launched in 2008, by an Australian startup, originally supporting only Mercurial projects. In 2010 Bitbucket was acquired by Atlassian, and from 2011 it also started to support Git hosting, which is now its main focus. It integrates smoothly with other Atlassian services, and its main market is large enterprises.


GitLab started in 2011 as a project by Dmitriy Zaporozhets and Valery Sizov, providing an alternative to the repository management solutions available at the time. The site was launched in 2012, but the company was only incorporated in 2014.


Coding was founded by Zhang Hai Long (张海龙) in Shenzhen, China in 2014 and received $15 million in funding the same year. Coding is currently used by 300,000 developers and hosts 500,000 projects. Its user base is growing rapidly in mainland China, and the company has already set its eyes on international users.

Basic Features

Each of the four platforms is a big universe of its own when it comes to features and capabilities, and a detailed feature comparison is beyond the scope of this post. Looking only at basic features, though, they show a lot of similarities:

  • Pull requests
  • Code review
  • Inline editing
  • Issue tracking
  • Markdown support
  • Two-factor authentication
  • Advanced permission management
  • Hosted static web pages
  • Feature-rich API
  • Fork / clone repositories
  • Snippets
  • Third-party integrations

For more details please visit the feature pages of Bitbucket, GitHub, GitLab, and Coding.

Which one is open source?

Of the four repository management services, only GitLab has an open source version. The source code of GitLab Community Edition is available on their website; the Enterprise Edition is proprietary.

GitHub, which is famous for its open source friendliness and hosts the largest number of open source projects (19.4M+), is itself not open source.

Bitbucket is not open source either, but if you buy the self-hosted version, the full source code is provided along with product customization options.

Coding is also entirely proprietary, and the source code is not available in any form.

What is the best place to discover public projects and connect with other developers?

GitHub, GitLab, Bitbucket, and Coding all have public repository discovery functions, and apart from GitLab, each offers the ability to easily follow other users. Coding even lets you add customized tags to personal profiles, which helps you find and connect with other users who share a particular interest.

Even though GitHub is not open source, it is still the hotbed of open source collaboration. It has by far the largest number of public and open source projects and also hosts many of the most significant ones (Docker, npm). With its early adoption of social features and free hosting of public projects, it is clearly the social hub for professional developers and everyone else interested in software development. What's more, an active GitHub profile could help you land a great job: in more and more cases, recruiters favor candidates with an active GitHub profile.

Importing Repositories

When you are trying to decide which system to use, the ability to import and use your previous projects is critical. Bitbucket stands out from the other three in this sense, because it is the only one that hosts Mercurial repositories.

Coding, GitHub, and Bitbucket support importing repositories based on multiple different VCSs; GitLab, on the other hand, only supports Git. Git is the most popular VCS, but moving to GitLab could be complicated if you are using Mercurial or SVN repositories at the moment. GitLab's repository importing feature is explicitly geared to help users migrate from other, more popular platforms.

GitHub supports:
– The import of Git, SVN, HG, TFS.

GitLab supports:
– The import of Git.
– Easy import from other services: GitHub, Bitbucket, Google Code, FogBugz.

Coding supports: 
– The import of Git, SVN, HG.

Bitbucket supports:
– The import of Git, CodePlex, Google Code, HG, SourceForge, SVN.
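Whichever direction you migrate, a repository that is already in Git can be moved between any of these hosts with a mirror clone, which carries over every branch and tag. The sketch below simulates the "old" and "new" hosts with local bare repositories (in practice those paths would be, say, a Bitbucket URL and a GitLab URL):

```shell
# Simulate two hosting services with local bare repositories.
workdir=$(mktemp -d)
cd "$workdir"
git init --bare --quiet old-host.git
git init --bare --quiet new-host.git

# Put one commit on the "old" host so there is something to migrate.
git clone --quiet old-host.git work
cd work
git config user.email "you@example.com"
git config user.name "You"
echo "hello" > README.md
git add README.md
git commit --quiet -m "initial commit"
git push --quiet origin HEAD
cd ..

# The actual migration: a mirror clone copies every ref (branches,
# tags), and a mirror push replays them all on the new host.
git clone --quiet --mirror old-host.git migration.git
cd migration.git
git push --quiet --mirror ../new-host.git
cd ..

# Both hosts now hold identical refs.
git --git-dir=new-host.git for-each-ref
```

This is host-agnostic because it is pure Git; the platforms' own importers listed above mostly automate the same idea (plus conversion from SVN, HG, or TFS where supported).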

Free Plans

All four providers offer a free plan, but when we look at the details, there are some significant differences.

GitHub's free plan allows you to host an unlimited number of public repositories, with the ability to clone, fork, and contribute to them. There is no limit on overall disk usage; however, projects should not exceed 1 GB and individual files 100 MB. If you are looking to host private projects for free, you need to look at other providers.

Bitbucket’s Small teams plan lets 5 members collaborate on an unlimited number of projects. Repositories here have a 1 GB soft size limit: when you reach it, you will be notified by email, but your ability to push to the repository is only suspended when the repository’s size reaches 2 GB.
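If you want to know how close an existing project is to a limit like this, Git itself can report the packed size of a repository, which is roughly what hosting providers count. A minimal check (shown here on a freshly created throwaway repository):

```shell
# Create a tiny throwaway repository just to demonstrate the command;
# in practice you would run `git count-objects` inside your own repo.
repo=$(mktemp -d)
cd "$repo"
git init --quiet
git config user.email "you@example.com"
git config user.name "You"
echo "some content" > file.txt
git add file.txt
git commit --quiet -m "initial commit"

# Pack loose objects so size-pack reflects the whole history,
# then print object counts and sizes (size-pack is the key figure).
git gc --quiet
git count-objects -v -H
```

The `size-pack` line in the output is the on-disk size of the packed history, a good approximation of what you will hit a 1 GB soft limit with.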

GitLab’s cloud-hosted plan lets an unlimited number of users collaborate on an unlimited number of public and private projects. They have a 10 GB space limit per repository, which is definitely a very generous proposition compared to what the other three providers offer.

The free plan from Coding lets 10 members collaborate on an unlimited number of public and private repositories, but it imposes a 1 GB overall storage limit, which feels like a big restriction.

If you are looking for a free cloud-based solution for private projects, GitLab’s offer is probably the most appealing.

GitLab Community Edition is the only self-hosted free plan on our list. It is definitely the best option for those who like to have full control over the code base and have the resources to maintain their own servers. The downside is that it only comes with community support, and some more advanced features, such as code search, are not included.

Paid Cloud-Hosted Plans

All the paid cloud-hosted plans offer an unlimited number of private repositories as well as email support.

A GitHub personal account offers essentially the same functionality as the free account, with the ability to host an unlimited number of private repositories. There is no limit on how many users with personal accounts can collaborate, but they can’t use organizational features such as team-based access permissions, and billing is done independently. The GitHub Organization plan starts at $25 / month for 5 people, and each additional user costs $9 / month.

Bitbucket’s cloud-hosted Growing Team plan starts at $10 / month for 10 users, and for $100 / month it removes the limit on the number of team members.

Coding has two paid plans: the Developer plan for a maximum of 20 users, and the Advanced plan for 50 users. In each case, you and your team can host an unlimited number of repositories, with a storage limit of 5 GB and 10 GB respectively. It is worth mentioning that Coding has more flexible billing options, competitive prices, and strong support, including live chat and phone calls. (These might only be available in Chinese, though.)

Paid Self-Hosted Plans

The self-hosted versions of GitHub, GitLab, and Bitbucket provide enhanced features compared to their cloud-hosted counterparts. Each of these providers has created comparison tables for the features of the cloud-based and self-hosted editions:

  • GitHub
  • GitLab
  • Bitbucket

Coding is rather mysterious about its Enterprise edition: they don’t disclose any details of pricing or features on their website. If you are considering hosting their solution behind your firewall, you need to reach out to their team. They assess the client’s needs first and then provide a custom quote based on the assessment.

The GitHub Enterprise plan starts at $2,500 / 10 users, billed annually. If you need more than that, which is likely to be the case, you need to contact their sales team. Apart from servers on your own premises, GitHub Enterprise can also be deployed to AWS and Azure.

One of the best things about Bitbucket Small Teams and Growing Teams is that they only require a one-time payment. Paying $10 for Bitbucket Small Teams once and for all definitely makes GitHub look expensive. The Bitbucket Enterprise version has a limit of 2,000 users; if you need more than that, we suggest you check out Bitbucket Data Center.

The GitLab Enterprise edition costs $39 / user / year and has no minimum limit on the number of users. It is more expensive than Bitbucket, but it is still a wallet-friendly option. Adding some of the extra tools and services can make it quite pricey, though:

  • Premium support $99 / user / year (min 100 users)
  • GitLab Geo $99 / user / year (no min users)
  • Pivotal Tile $99 / user / year (no min users)
  • File Locking $99 / user / year (no min users)

Integration with

GitHub, Bitbucket, GitLab, and Coding all work seamlessly together with it; connecting any of your accounts takes only a few steps.


We cannot declare one service ultimately superior to the others, not only because that would easily start a pub fight, but also because all of them are powerful, feature-rich services. Nevertheless, there are particular scenarios where it is not far-fetched to recommend a certain service:

  • If you want an open source solution you should pick GitLab.
  • If you are using other products from Atlassian (e.g. Confluence, Jira), hosting your repositories on Bitbucket definitely makes sense.
  • If you are working on an open source project then GitHub is definitely a great choice.
  • At this moment, we would only recommend Coding for Chinese-speaking teams, since only their Web IDE has an English UI.

It is likely that one of these four repository hosting services can give you what you need. If that is not the case, check out Assembla or CloudForge. is a hosted continuous integration and delivery service, designed for teams who need a flexible and scalable solution but prefer not to maintain their own infrastructure. In, development pipelines or automation workflows are simply called flows. In a flow, every step is a plugin that can be added in two clicks. You can add as many steps to your flow as you need, and there is no time limit on builds.