Deprecation Warning from 2020: Where to find me now

I just realized there is still a bit of traffic on this blog, which is cool 🙂

The page however is no longer actively maintained or updated.
You can find my more recent content about what I work on in our snapADDY Medium Publication:
snapADDY Tech Blog
On my personal Medium Blog (which I also have not updated for 2 years now 🙈)
Sebastian Metzger Medium Blog
And most recently I started my own YouTube channel where I regularly rant about various topics:
Sebastian Metzger YouTube Channel

Also we are still hiring @snapADDY 🙂

Pumped up about Angular 2.0 and NodeJS in 2016

This was 2015

This year was very eventful for me. It was my first year as a freelancer and I already graduated from that to the next level of entrepreneurship by taking the chance to join the software startup snapADDY as the technical co-founder. Awesome!

If you or your company has a marketing or sales team that researches leads online or struggles with CRM data quality, you will love what we do, so check it out!

At snapADDY we are heavy JavaScript users, mainly AngularJS at the front and NodeJS with Express at the back end. We are a team of 4 great developers right now, which will further grow this year. (Hint: We are hiring 😉 )

Rather than contemplating too much on what happened this year, I took some time to ramble about the most interesting technological developments that will affect us next year.

What will 2016 bring?

Adopting Gulp

Finally saying goodbye to Grunt and adopting Gulp. Grunt was very important in professionalizing JavaScript development, providing a modern task runner for building, testing and process automation.

But after some time the flaws of Grunt became apparent. The Gulp way of using NodeJS streams, which enables in-memory processing and easier task configuration, is way more performant.
Many other projects have already switched, and now we do too. So goodbye Grunt, welcome Gulp!
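
To illustrate the streaming idea, here is a minimal gulpfile sketch; the src/dist paths and the gulp-uglify plugin are just assumptions for illustration:

// gulpfile.js: files are read once, piped through transformations in memory
// and written to disk only at the end (assumes `npm install gulp gulp-uglify`).
var gulp = require('gulp');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src('src/**/*.js')   // read the sources as a stream
    .pipe(uglify())                // transform in memory, no temp files on disk
    .pipe(gulp.dest('dist'));      // write the result once at the end
});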

ECMAScript 2015

The current NodeJS versions support much of it natively, and for browsers there is BabelJS for now.

ECMAScript 2015 is a big step forward in maturing the JavaScript language. I am really looking forward to block scoping, arrow functions, classes, modules and lots of other neat things!

Who knows? I probably won’t use the var or function keyword ever again! 😀
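
As a small taste, here is a plain ES2015 snippet combining const, let, classes, getters and arrow functions (the Invoice class is made up purely for illustration):

// ES2015 in a nutshell: block scoping, classes and arrow functions.
const TAX_RATE = 0.19;                    // block-scoped, cannot be reassigned

class Invoice {
  constructor(items) {
    this.items = items;
  }

  get total() {
    // arrow function instead of the function keyword
    return this.items.reduce((sum, item) => sum + item.price, 0) * (1 + TAX_RATE);
  }
}

let invoice = new Invoice([{ price: 10 }, { price: 20 }]);
console.log(invoice.total);               // 35.7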

Angular 2.0 is coming

Official Angular Logo

I read and tinkered a bit with the Angular 2 beta over the last couple of days. I like the concept of using components as the basic abstraction to structure applications and its use of the web components APIs under the hood.

import {Component} from 'angular2/core';
import {Todo} from './todo';
import {TodoList} from './todo_list';
import {TodoForm} from './todo_form';

@Component({
  selector: 'todo',
  template: `
    <h2>Todo</h2>
    <span>{{remaining}} of {{todos.length}} remaining</span>
    [ <a href="javascript: false" (click)="archive()">archive</a> ]
    
    <todo-list [todos]="todos"></todo-list>
    <todo-form (newTask)="addTask($event)"></todo-form>`,
  directives: [TodoList, TodoForm]
})
export class TodoApp {
  todos: Todo[] = [
      {text:'learn angular', done:true},
      {text:'build an angular app', done:false}
  ];
  
  get remaining(): number {
    return this.todos.reduce((count, todo: Todo) => count + (todo.done ? 0 : 1), 0);
  }
  
  archive(): void {
    var oldTodos = this.todos;
    this.todos = [];
    oldTodos.forEach((todo: Todo) => {
      if (!todo.done) this.todos.push(todo);
    });
  }
  
  addTask(task: Todo) {
    this.todos.push(task);
  }
}

Taken from the official Angular 2 example plunker

With its classes and annotations it now resembles a Java framework such as GWT, and I believe programmers coming from that world will like Angular 2 for it.
For people coming from classic HTML/CSS front end development and jQuery, I believe the hurdle to get into Angular 2 will be higher, as there are more advanced object-oriented programming paradigms to learn. With Angular 1 you could just start annotating your HTML a bit, build a simple “one controller” application and be amazed by two-way data binding.
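
For comparison, a minimal “one controller” Angular 1 page could look roughly like this (it assumes angular.js is already included on the page; the module and controller names are made up for illustration):

<!-- annotate the HTML a bit and two way data binding just works -->
<div ng-app="demo" ng-controller="TodoCtrl">
  <input ng-model="newTask" placeholder="new task">
  <button ng-click="todos.push(newTask)">add</button>
  <ul>
    <li ng-repeat="todo in todos track by $index">{{todo}}</li>
  </ul>
</div>
<script>
  // one simple controller holding the whole application state
  angular.module('demo', []).controller('TodoCtrl', function ($scope) {
    $scope.todos = ['learn angular'];
  });
</script>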

When going full Angular 2, it will probably make sense to use TypeScript, which is something I am not really sure about yet.
I am also curious about the migration from Angular 1; the first blog posts I read on this topic looked promising, suggesting that one will be able to gradually move an application over.
But I am actually not sure if we want to do this in reality, or if we will just start new projects with Angular 2 and keep the old ones in Angular 1 land, as a migration might not really be worth it economically (Angular 1 is still pretty cool, I believe 🙂 )

I also looked a lot at the Polymer project by Google over the last years and I am curious about how the promised interoperability via shared web component roots will play out between Angular 2 and Polymer elements.

Pro tip: Check out the thoughtram blog about everything Angular 2 related, great stuff!

NodeJS micro service architecture

Even though I was certainly sceptical at the beginning, I never regretted choosing NodeJS as the main back end solution. Most other programming languages and frameworks are trying to become more functional, reactive and asynchronous. NodeJS has all of this out of the box!

A requirement for using NodeJS without headaches, though, is to have a firm grasp of JavaScript basics and to be familiar with its functional nature and its asynchronous, event-loop-based approach. You can mess up way more in JavaScript than in PHP, Java or C# if you don’t know what you are doing.

But it is well worth learning in my opinion. It is super easy to deploy and update your server, even without complicated continuous delivery infrastructure: git pull, npm install, service restart -> updated without much downtime.

NPM packages are also a very nice way to structure your application into many small projects, which I highly recommend. It makes your software easier to test and develop in a team.

Besides using NPM packages to structure your code, NodeJS is also great for splitting your application into multiple services, rather than running one monolithic server. This architecture principle recently became popular under the buzzword micro services.
This makes for a more resilient and scalable architecture that, again, is easier to test and to work on in a team. There is a great talk by Martin Fowler on YouTube about this topic, although he disses NodeJS a bit in it.
Another great case study for micro services is the refactoring of the Wunderlist backend led by Chad Fowler.
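
To give a rough idea of how small such a service can be, here is a minimal Express sketch; the /health route and the port are made up for illustration:

// service.js: one small, narrowly scoped service (assumes `npm install express`)
var express = require('express');
var app = express();

app.get('/health', function (req, res) {
  res.json({ status: 'ok' });
});

app.listen(3001, function () {
  console.log('example service listening on port 3001');
});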

Finally, writing automated tests for NodeJS with mocha and chai is very easy, and they execute super fast, which actually encourages you to do more TDD.
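
A tiny sketch of what such a test can look like; the addTask function is a hypothetical helper, inlined here to keep the example self-contained:

// test/addTask.spec.js: run with the mocha CLI (assumes `npm install mocha chai`)
var expect = require('chai').expect;

// hypothetical function under test, inlined for the sketch
function addTask(todos, task) {
  todos.push(task);
  return todos;
}

describe('addTask', function () {
  it('appends the new task to the list', function () {
    var todos = addTask([], { text: 'write tests', done: false });
    expect(todos).to.have.length(1);
    expect(todos[0].text).to.equal('write tests');
  });
});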

PostgreSQL as a hybrid SQL/NoSQL database

To be honest: I did not want to go full NoSQL. Perhaps I am just too used to classic SQL statements and table-structured databases. The possibility of using JSON as a column type in Postgres intrigued me, and we are using it as our database.
We actually have yet to really make use of all the Postgres JSON features, but so far it just works fine! It is a database and it does what it should do 🙂
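
For illustration, here is a small sketch of querying a JSON column with the node-postgres module; the connection string, the contacts table and its data column are made-up examples:

// query.js: reading a field out of a JSON/JSONB column (assumes `npm install pg`)
var pg = require('pg');
var client = new pg.Client('postgres://localhost/exampledb');

client.connect(function (err) {
  if (err) { return console.error(err); }
  // the ->> operator extracts a JSON field as text
  client.query(
    "SELECT data->>'company' AS company FROM contacts WHERE data->>'city' = $1",
    ['Würzburg'],
    function (err, result) {
      if (err) { console.error(err); } else { console.log(result.rows); }
      client.end();
    }
  );
});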

As it has not been necessary yet, we have not put much thought into using a cache in addition to the database, which will probably also become a topic in 2016.

Conclusion

After some quiet time with family over Christmas, reading about the new developments and technologies made me very excited to start working again in 2016. It will be an important year for us at snapADDY, and the JavaScript ecosystem is finally taking a huge maturation step, with ECMAScript 2015 here to stay and Angular 2.0 knocking on the door.

go2016(); 🙂

Handle asynchronous non-blocking IO in JavaScript

Motivation

One of the big “WTF” hurdles for apprentice JavaScript developers who come from languages that mostly embrace synchronous, blocking IO APIs, like Java or PHP, is getting used to thinking asynchronously about everything IO related in JavaScript with its event loop construct.

The mind of a Java developer learning JavaScript

It is actually one of the cool things about JavaScript and the reason NodeJS on the server got so much attention in the beginning, so it is something anyone at least half serious about learning JS should understand.

What does asynchronous vs synchronous IO actually mean?

For any computer program to do something useful, it is important to handle Input/Output (IO) operations.
IO is basically everything that goes in and out of the “container” your program runs in, like mouse or keyboard input, sending a request and receiving a response from a web service, or reading a file from disk.
To handle this there are two different API models: one is synchronous and blocking, the other asynchronous and non-blocking.
The illustration below shows the basic difference between the two approaches.

Blocking IO vs Non-blocking IO

The blocking IO approach on the left ‘waits’ for the response to come back before the program continues. The asynchronous program on the right, however, continues immediately and invokes a callback once the response comes back.

From Callbacks to Generators

While handling blocking IO is pretty straightforward and intuitive, non-blocking IO can be confusing at first. The next part of this article gives concrete code samples of different ways of doing ajax calls in JavaScript, both synchronously and asynchronously. The examples make use of the jQuery library.

Synchronous/Blocking IO

The XMLHttpRequest API actually allows you to do ajax calls synchronously.
This is however almost never a good idea. Due to the single-threaded nature of JavaScript, your complete UI will be blocked while you do your call. It is even flagged as deprecated by recent Chrome versions.

$.ajax({
   url: 'doSth',
   async: false,
   complete: function(data){
      console.log('First');
   }
});
console.log('Second');
Pros
  • People are often more used to blocking IO APIs
Cons
  • You block the UI thread
  • Bad user experience, as your complete UI hangs during the request

Callback Functions: The basis of non-blocking IO

This is the way the browser JS APIs such as DOM event handlers are implemented at base. You register callback functions that get called by the browser once the IO operation returns.
Using this approach directly, however, may lead to pretty ugly code. Novice programmers tend to build endless ‘callbacks inside callbacks inside callbacks’ chains, which are very hard to read, maintain and debug.

$.ajax({
   url: 'doSth',
   success: function(data){
      console.log('Second');
      $.ajax({
         url: 'doSthElse',
         success: function(data){
            console.log('Third');
         },
         error: function(err){
            console.error(err);
         }
      });
   }
});
console.log("First");
Pros
  • Asynchronous IO gives the UI room to breathe
Cons
  • Code can get really ugly with endless callback inside callback chains

Promise objects: The ‘State of the Art’ in non-blocking IO

As a way to solve the ‘callback hell’ problem, a design pattern called promises (jQuery also calls them Deferreds) got widely adopted and integrated by popular frameworks.
When using promises you write your asynchronous function calls not by passing in a callback function, but by directly returning a so-called Promise object.
As the name implies, this object ‘promises’ you a value.
The promise object is now the place where you attach your callback functions. This makes it easier to chain asynchronous calls while staying on the same nesting level.

var promise = $.ajax({url: 'doSth'});
promise.then(function(data){
   console.log('Second');
   return $.ajax({url: 'doSthElse'});
}).then(function(data){
   console.log('Third');
},function(err){
  console.error(err);
});
console.log('First');

Notice the callbacks get registered via the ‘then’ function on the promise object.

Pros
  • Asynchronous calls can be chained on the same nesting level, avoiding the callback pyramid
  • The code reads top to bottom in the order the requests are made
Cons
  • It is a relatively advanced concept for beginners to grasp for solving the simple problem of just chaining two async calls.

ECMAScript 6 Generators: The future of non-blocking IO?

ECMAScript 6 will introduce generator functions to JavaScript. Generators are a programming construct that basically enables a custom way of doing iteration.
These generator functions can be used to write synchronous-looking code that actually gets executed asynchronously in the background. A great blog post that describes in detail how this will work can be found on the StrongLoop website.

run(function*(){
    try {
       var result = yield makeFirstAsyncCall();
       var finalResult = yield makeSecondAsyncCall(result);
    } catch (e) {
       console.error(e);
    }
});

The final code will look something like the snippet above. Notice the “function*” keyword that marks a generator function and the “yield” keyword that marks the asynchronous steps inside the generator.
It will also be possible to do seemingly synchronous error handling with try/catch in this construct, without an extra fail or error callback as with the other async approaches.
You can already play around with all the ECMAScript 6 goodness by using NodeJS 0.11.2+ with the --harmony flag.
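
To make the mechanics a bit less magical, here is a minimal sketch of what such a run() helper could look like, assuming every yielded value is a promise (libraries like co do this in a much more robust way):

// Drives a generator function by resuming it whenever a yielded promise settles.
function run(generatorFunction) {
  var iterator = generatorFunction();

  function step(result) {
    if (result.done) return;                 // the generator has finished
    result.value.then(function (value) {
      step(iterator.next(value));            // resume with the resolved value
    }, function (err) {
      step(iterator.throw(err));             // surface the error at the yield
    });
  }

  step(iterator.next());                     // kick off the first step
}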

Pros
  • Straightforward syntax that looks just like a blocking API.
  • The calls are still really asynchronous, so the UI thread can breathe.
  • Enables programmers to easily write async code, without necessarily needing to understand it.
Cons
  • Requires an environment with generator support (or transpilation) plus a small runner helper.

Conclusion

In retrospect, really understanding asynchronicity and the event loop construct is one of the great things I took away from learning JavaScript.
Besides the obvious benefit of writing better JavaScript applications, it opened my mind when thinking about other languages and frameworks too.
For example, it recently helped me a lot in grasping the concepts behind Akka, a framework written in Scala that implements the actor model for distributed and concurrent computing.