Project Description
This is a WPF library containing a powerhouse of controls, frameworks, helpers, tools, etc. for productive WPF development.
If you have ever heard of Drag and Drop with Attached properties, ElementFlow, GlassWindow, this is the library that will contain all such goodies.
Here is the introductory blog post

At this time the library is available in source-only form and requires .NET Framework 3.5 SP1 or later. To build this project on your machine, you need VS2010.

The library so far ...
  • ImageButton
  • DragDropManager
  • GlassWindow
  • BalloonDecorator
  • ItemSkimmingPanel + SkimmingContextAdorner
  • PennerDoubleAnimation
  • ElementFlow
  • TransitionPresenter
  • GenieAnimation
  • WarpEffect using Pixel Shaders
  • Simple 3D Engine ( New )
  • HalfCirclePanel ( New )

Contributions
  • CogWheelShape, PolygonShape <Boris Tschirner>

If you wish to contribute or share ideas please direct your mail to pavan@pixelingene.com

Screenshots
Here is a quick way to know what these controls look like: Screenshots


Team
  • Pavan Podila ( Blog )

 Pixel in Gene News Feed 
Friday, October 28, 2016  |  From Pixel in Gene

The previous two parts (Part 1, Part 2) focused on the fundamental building blocks of MobX. With those blocks in hand we can now start solving some real-world scenarios through the lens of MobX. This post is going to be a series of examples that applies the concepts we have seen so far.



Of course, this is not an exhaustive list but should give you a taste of the kind of mental-shift you have to make to apply the MobX lens. All of the examples have been created without the @decorator syntax. This allows you to try this out inside Chrome Console, Node REPL or in an IDE like WebStorm that supports scratch files.










No TLDR?



This is a long post. Sorry, no TLDR here. I have 4 examples and it should get faster and easier to read after Example 2. I think :-).



  1. Send analytics for important actions
  2. Kick off operations as part of a workflow
  3. Perform form validation as inputs change
  4. Track if all registered components have loaded

Making the shift in thinking



When you learn the theory behind some library or framework and try to apply it to your own problems, you may draw a blank initially. It happens to average folks like me and even to the best out there. The writing world calls it “Writer’s block”; in the artist’s world, it’s “Painter’s block”.



What we need are examples, from simple to complex, to shape our thinking. It is only by seeing the applications that we can start to imagine solutions to our own problems.



For MobX, it starts by understanding the fact that you have a reactive object-graph. Some parts of the tree may depend on other parts. As the tree mutates, the connected parts will react and update to reflect the changes.




The shift in thinking is about envisioning the system at hand as a set of reactive mutations + a set of corresponding effects.


Effects can be anything that produces output as a result of a reactive change. Let’s explore a variety of real-world examples and see how we can model and express them with MobX.





Example 1: Send analytics for important actions



Problem
We have certain one-time actions in the app that have to be logged to the server. We want to track when these actions are performed and send analytics.



Solution




1
The first step is to model the state. Our actions are limited in number and we only care about the first time each one is performed. We can model this with a map of action-name to boolean. This is our observable state.


const actionMap = observable({
    login: false,
    logout: false,
    forgotPassword: false,
    changePassword: false,
    loginFailed: false
});


2
Next we have to react to changes happening to these action states. Since they only happen once during the lifetime, we are not going to use long-running effects like autorun() or reaction(). We also don’t want these effects lying around after they execute. Well, that leaves us with only one option: ….



….

….

….

….

….



when().


Object.keys(actionMap)
    .forEach(key => {
        when(
            () => actionMap[key],
            () => reportAnalyticsForAction(key)
        );
    });

function reportAnalyticsForAction(actionName) {
    console.log('Reporting: ', actionName);

    /* ... JSON API Request ... */
}


In the above code, we are simply looping over the keys in our actionMap and setting up a when() side-effect for each key. The side-effect will run when the tracker-function (the first argument) returns true. After running the effect-function (second argument), when() will auto-dispose. So there is no issue of multiple reports being sent out from the app!



3
We will also need a MobX action to change the observable state. Remember: never modify your observables directly. Always do it through an action.



For us, this looks as below:


const markActionComplete = action((name) => {
    actionMap[name] = true;
});

markActionComplete('login');
markActionComplete('logout');

markActionComplete('login');

// [LOG] Reporting:  login
// [LOG] Reporting:  logout


Note that even though I am marking the login action twice, the report for it is sent only once. Perfect. That is exactly the behavior we need.



It works for two reasons:



  1. The login flag is already true, so there is no change in value
  2. The when() side-effect has already been disposed, so there is no tracking happening anymore.



Example 2: Kick off operations as part of a workflow



Problem
We have a workflow that consists of several states. Each state is mapped to certain tasks, which are performed when the workflow reaches that state.



Solution




1
From the description above, it seems the only observable value is the state of the Workflow. The tasks that need to run for each state can be stored as a simple map. With this we can model our workflow like so:


class Workflow {

    constructor(taskMap) {
        this.taskMap = taskMap;
        this.state = observable({
            previous: null,
            next: null
        });

        this.transitionTo = action((name) => {
            this.state.previous = this.state.next;
            this.state.next = name;
        });

        this.monitorWorkflow();
    }

    monitorWorkflow() {
        /* ... */
    }
}

// Usage
const workflow = new Workflow({
    start() {
        console.log('Running START');
    },

    process(){
        console.log('Running PROCESS');
    },

    approve() {
        console.log('Running APPROVE');
    },

    finalize(workflow) {
        console.log('Running FINALIZE');

        setTimeout(()=>{
            workflow.transitionTo('end');
        }, 500);
    },

    end() {
        console.log('Running END');
    }
});


Note that we are storing an instance variable called state that tracks the current and previous state of the Workflow. We are also passing the map of state->task, stored as taskMap.



2
Now the interesting part is about monitoring the workflow. In this case, we don’t have a one-time action like the previous example. A Workflow is usually long-running, possibly for the lifetime of the application. This calls for either autorun() or reaction().



The tasks for a state are only performed when you transition into the state. So we need to wait for a change on this.state.next before we can run any side-effects (tasks). Waiting for a change indicates the use of reaction() as it will run only when the tracked observable changes value. So our monitoring code will look like so:


class Workflow {
    /* ... */

    monitorWorkflow() {
        reaction(
            () => this.state.next,
            (nextState) => {
                const task = this.taskMap[nextState];
                if (task) {
                    task(this);
                }
            }
        )
    }
}


The first argument to reaction() is the tracking-function, which in this case simply returns this.state.next. When the return value of the tracking-function changes, it will trigger the effect-function. The effect-function looks at the current state, looks up the task from this.taskMap and simply invokes it.



Note that we are also passing the instance of the Workflow into the task. This can be used to transition the workflow into other states.


workflow.transitionTo('start');

workflow.transitionTo('finalize');

// [LOG] Running START
// [LOG] Running FINALIZE
/* ... after 500ms ... */
// [LOG] Running END


Interestingly, this technique of storing a simple observable, like this.state.next and using a reaction() to trigger side-effects, can also be used for:



  • Routing via react-router
  • Navigating within a presentation app
  • Switching between different views based on a mode

I’ll leave these as a reader exercise to try out; the sketch below shows one way to approach the mode-switching case. Feel free to leave comments if you hit any roadblocks.
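As a minimal sketch (not from the post), here is how mode-based view switching might look; the render helpers are hypothetical placeholders for whatever view layer you use:

import {observable, action, reaction} from 'mobx';

const ui = observable({ mode: 'list' });
const setMode = action((mode) => { ui.mode = mode; });

reaction(
    () => ui.mode,
    (mode) => {
        // renderListView / renderDetailView are hypothetical view helpers
        if (mode === 'list') renderListView();
        else if (mode === 'detail') renderDetailView();
    }
);

setMode('detail'); // the reaction fires and the detail view is rendered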





Example 3: Perform form validation as inputs change



Problem
This is a classic Web form use case where you have a bunch of inputs that need to be validated. When they are valid, you can allow submission of the form.



Solution




1
Let’s model this with a simple form-data class whose fields have to be validated.


class FormData {

    constructor() {
        extendObservable(this, {
            firstName: '',
            lastName: '',
            email: '',
            acceptTerms: false,

            errors: {},

            valid() { // this becomes a computed() property
                return (this.errors === null);
            }
        });

        this.setupValidation(); // We will look at this below
    }

}


The extendObservable() API is something we haven’t seen before. By applying it on our class instance (this), we get an ES5 equivalent of making an @observable class property.


class FormData {
    @observable firstName = '';
    /* ... */
}


2
Next we need to monitor when any of those fields change and run some validation logic. If the validation goes through, we can mark the entity as valid and allow submission. The validity itself is tracked with a computed property: valid.



Since the validation logic needs to run for the lifetime of FormData, we are going to use autorun(). We could have used reaction() as well but we want to run validation immediately instead of waiting for the first change.


class FormData {
    setupValidation() {
        autorun(() => {
            // Dereferencing observables for tracking
            const {firstName, lastName, email, acceptTerms} = this;
            const props = {
                firstName,
                lastName,
                email,
                acceptTerms
            };

            this.runValidation(props, {/* ... */})
                .then(result => {
                    this.errors = result;
                })
        });
    }

    runValidation(propertyMap, rules) {
        return new Promise((resolve) => {
            const {firstName, lastName, email, acceptTerms} = propertyMap;

            const isValid = (firstName !== '' && lastName !== '' && email !== '' && acceptTerms === true);
            resolve(isValid ? null : {/* ... map of errors ... */});
        });
    }

}


In the above code, the autorun() will automatically trigger anytime there is a change to the tracked observables. Note that for MobX to properly track your observables, you have to dereference them inside the tracking function.



runValidation() is an async call, which is why we are returning a promise. In the example above it does not matter, but in the real world you will probably make a server call for some special validation. When the result comes back, we set the errors observable, which in turn updates the valid computed property.



If you have expensive validation logic, you can even use autorunAsync(), which takes an argument to debounce the execution by some delay.
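As a minimal sketch, assuming the MobX 2.x/3.x autorunAsync(fn, delay) signature:

import {observable, autorunAsync} from 'mobx';

const form = observable({ query: '' });

// Runs the (potentially expensive) check at most once per 300ms window,
// after changes to form.query have settled
autorunAsync(() => {
    console.log(`Validating: ${form.query}`);
}, 300);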



3
Alright, let’s put our code into action. We will set up a simple console logger (via autorun()) and track the valid computed property.


const instance = new FormData();

// Simple console logger
autorun(() => {
    // tracking this so autorun() runs for every input change
    const validation = instance.errors;

    console.log(`Valid = ${instance.valid}`);
    if (instance.valid) {
        console.log('--- Form Submitted ---');
    }

});

// Let's change the fields
instance.firstName = 'Pavan';
instance.lastName = 'Podila';
instance.email = 'pavan@pixelingene.com';
instance.acceptTerms = true;


This is the logged output:



1 Valid = false
2 Valid = false
3 Valid = false
4 Valid = false
5 Valid = false
6 Valid = true
7 --- Form Submitted ---



Since autorun() runs immediately, you will see two extra logs at the beginning, one for instance.errors and one for instance.valid (lines 1-2). The remaining four lines (3-6) correspond to each field change.



Each field change triggers runValidation(), which internally returns a new error object each time. This causes a change in reference for instance.errors and then triggers our autorun() to log the valid flag. Finally, when we have set all the fields, instance.errors becomes null (again a change in reference) and that logs the final “Valid = true”.



4

So in short, we are doing form validation by making the form fields observable. We also add an extra errors property and a valid computed property to keep track of the validity. autorun() saves the day by tying everything together.





Example 4: Track if all registered components have loaded



Problem
We have a set of registered components and we want to keep track when all of them get loaded. Every component will expose a load() method that returns a promise. If the promise resolves, we mark the component as loaded. If it rejects, we mark it as failed. When all of them finish loading, we will report if the entire set loaded or failed.



Solution




1

Let’s first look at the components we are dealing with. We are creating a set of components that randomly report their load status. Also note that some are async.


const components = [
    {
        name: 'first',
        load() {
            return new Promise((resolve, reject) => {
                Math.random() > 0.5 ? resolve(true) : reject(false);
            });
        }
    },
    {
        name: 'second',
        load() {
            return new Promise((resolve, reject) => {
                setTimeout(() => {
                    Math.random() > 0.5 ? resolve(true) : reject(false);
                }, 1000);
            });
        }
    },
    {
        name: 'third',
        load() {
            return new Promise((resolve, reject) => {
                setTimeout(() => {
                    Math.random() > 0.25 ? resolve(true) : reject(false);
                }, 500);
            });
        }
    },
];


2

The next step is to design the observable state for the Tracker. The load() of the components will not complete in a specific order. So we need an observable array to store the loaded state of each component. We will also track the reported state of each component.



When all components have reported, we can notify the final loaded state of the set of components. The below code sets up the observables.


class Tracker {

    constructor(components) {
        this.components = components;

        extendObservable(this, {

            // Create an observable array of state objects,
            // one per component
            states: components.map(({name}) => {
                return {
                    name,
                    reported: false,
                    loaded: undefined
                };
            }),

            // computed property that derives if all components have reported
            reported() {
                return this.states.reduce((flag, state) => {
                    return flag && state.reported;
                }, true);
            },

            // computed property that derives the final loaded state 
            // of all components
            loaded() {
                return this.states.reduce((flag, state) => {
                    return flag && !!state.loaded;
                }, true);
            },

            // An action method to mark reported + loaded
            mark: action((name, loaded) => {
                const state = this.states.find(state => state.name === name);

                state.reported = true;
                state.loaded = loaded;
            })

        });

    }
}


We are back to using extendObservable() for setting up our observable state. The reported and loaded computed properties track as and when the components complete their load. mark() is our action-method to mutate the observable state.



3
To kick off the tracking, we will create a track() method on the Tracker. This will fire off the load() of each component and wait for the returned Promise to resolve/reject. Based on that it will mark the load state of the component.



When all the components have reported, the tracker can report the final loaded state. We use when() here since we are waiting on a condition to become true (this.reported). The side-effect of reporting back needs to happen only once, a perfect fit for when().



The code below takes care of the above:


class Tracker {

    /* ... */ 

    track(done) {

        when(
            () => this.reported,
            () => {
                done(this.loaded);
            }
        );

        this.components.forEach(({name, load}) => {
            load()
                .then(() => {
                    this.mark(name, true);
                })
                .catch(() => {
                    this.mark(name, false);
                });
        });
    }

    setupLogger() {
        autorun(() => {
            const loaded = this.states.map(({name, loaded}) => {
                return `${name}: ${loaded}`;
            });

            console.log(loaded.join(', '));
        });
    }
}


setupLogger() is not really part of the solution but is used to log the reporting. It’s a good way to know if our solution works.



4
Now comes the part where we try this out:


const t = new Tracker(components);
t.setupLogger();
t.track((loaded) => {
    console.log('All Components Loaded = ', loaded);
});


And the logged output shows it’s working as expected. As the components report, we log the current loaded state of each component. When all of them report, this.reported becomes true, and we see the “All Components Loaded” message.



1 first: undefined, second: undefined, third: undefined
2 first: true, second: undefined, third: undefined
3 first: true, second: undefined, third: true
4 All Components Loaded =  false
5 first: true, second: false, third: true



Did the Mental Shift happen?



Hope the above set of examples gave you a taste of thinking in MobX.




It’s all about side-effects on an observable data-graph.


  1. Design the observable state
  2. Setup mutating action methods to change the observable state
  3. Put in a tracking function (when, autorun, reaction) to respond to changes on the observable state

The above formula should work even for complex scenarios where you need to track something after something else changes, which can result in a repeat of steps 1-3.






Tuesday, October 18, 2016  |  From Pixel in Gene

In the previous part we looked at how you can setup a MobX state tree and make it observable. With that in place, the next step is to start reacting to changes. Frankly this is where the fun begins!





MobX guarantees that whenever there is a change in your reactive data-graph, the parts that are dependent on the observable properties are automatically synced up. This means you can now focus on reacting to changes and causing side-effects rather than worrying about data synchronization.



Let’s look at some of the ways in which you can take action.



Using @action as an entry point



By default, when you modify observables, MobX will detect the change and keep other dependent observables in sync. This happens synchronously. However, there may be times when you want to modify multiple observables in the same method. This can result in several notifications being fired and may even slow down your app. A better way is to wrap the method you are invoking in an action(). This creates a transaction boundary around your method, and all affected observables will be kept in sync after your method executes. Note that this works only for observables in the current function scope. If you have async actions that modify more observables, you will have to wrap those mutations in runInAction().


class Person {

    @observable firstName;
    @observable lastName;

    @action changeName(first, last) {
        this.firstName = first;
        this.lastName = last;
    }
}
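The class above covers the synchronous case. For the async case mentioned above, here is a minimal sketch; the api.fetchName() call is a hypothetical stand-in for your own data source:

import {observable, runInAction} from 'mobx';

class Profile {

    @observable firstName;
    @observable lastName;

    // After an await, execution is no longer inside the original action,
    // so the mutations are wrapped in runInAction()
    async loadName(api) {
        const {first, last} = await api.fetchName(); // hypothetical API
        runInAction(() => {
            this.firstName = first;
            this.lastName = last;
        });
    }
}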


Using autorun to trigger side-effects



Using reactions to trigger side-effects after first change



Using when to trigger one-time side-effects






Sunday, October 16, 2016  |  From Pixel in Gene

MobX provides a simple and powerful approach to managing client side state. It uses a technique called Transparent Functional Reactive Programming (TFRP) wherein it automatically computes a derived value if any of the dependent values change. Behind the scenes, it transparently sets up a dependency graph and tracks the values as they change.



MobX causes a shift in mindset (for the better) and changes your mental model around managing client side state.



After having used it for more than six months on multiple React projects, I find certain patterns of usage recurring very frequently. This post is a compilation of various techniques I’ve been using to manage client state with MobX. This is not an introductory post; it assumes some familiarity with MobX.





This is going to be a 3-part series. In this first part we will look at shaping the MobX State Tree.



  1. Shaping the observables
  2. Reacting to changes
  3. A Cookbook of common use cases

Shaping the observables




This is the part where you sculpt the shape of your Store.


Modeling the client-state is probably the first step when starting with MobX. This is most likely a direct reflection of your domain-model that is being rendered on the client. Now, when I say client-state, I am really talking about the “Store”, a concept you may be familiar with if you are coming from a Redux background. Although you only have one Store, it is internally composed of many sub-Stores that handle the various features of your application.



The easiest way to get started is to annotate properties of your Store that will keep changing as @observable. Note that I am using the decorator syntax but the same can be achieved with simple ES5 function wrappers.


import {observable} from 'mobx';

class AlbumStore {
    @observable name;
    @observable date;
    @observable description;
    @observable author;
    
    @observable photos = [];
}


Pruning the observability



By marking an object as @observable, you automatically observe all of its nested properties. Now this may be what you want, but many a time it’s better to limit the observability. You can do that with a few MobX modifiers:



  • asReference: This will turn off observing the property completely. This is useful when there are certain properties that will never change.


  • asFlat: This is slightly more loose than asReference. asFlat allows the property itself to be observable but not any of its children. The typical usage is for arrays where you only want to observe the array instance but not its items.







Tip: Start off by making everything @observable and then apply the asReference and asFlat modifiers to prune the observability.



This kind of pruning is something you discover as you go deeper into implementing the various features of your app. It may not be obvious when you start out, and that is perfectly OK! Just make sure to revisit your Store as and when you recognize properties that don’t need deep observability. It can have a positive impact on your app’s performance.


import {observable, asReference, asFlat} from 'mobx';

class AlbumStore {
    @observable name;
    
    // No need to observe date
    @observable date = asReference(null); 
    
    @observable description;
    @observable author;
    
    // Only observing the photos array, not the individual photos
    @observable photos = asFlat([]); 
}


Expanding the observability



This is the symmetric opposite of pruning the observables. Instead of removing observability you can expand the scope/behavior of observability on the object. Here you have three modifiers that can control this:








  • asStructure: This modifies the way equality checks are done when a new value is assigned to an observable. By default only reference changes are considered as a change. If you prefer to compare based on an internal structure, you can use this modifier. This is essentially for value-types (aka structs) that are equal only if their values match.


  • asMap: By default when you mark an object as observable, it can only track the properties initially defined on the object. If you add new properties, those are not tracked. With asMap, you can make even the newly added properties observable.


Instead of using this modifier, you can also achieve the same effect by starting with a regular observable object. You can then add more observable properties using the extendObservable() API.



  • computed: This is such a powerful concept that its importance cannot be emphasized enough. A computed property is not a real property of your domain, rather it is derived (aka computed) using real properties. A classic example is the fullName property of a person instance. It is derived from the firstName and lastName properties.
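As a minimal sketch of the fullName example mentioned above, using @computed alongside the decorator syntax used elsewhere in this post:

import {observable, computed} from 'mobx';

class Person {
    @observable firstName = 'Pavan';
    @observable lastName = 'Podila';

    // Derived from the two observables above; re-evaluated only when they change
    @computed get fullName() {
        return `${this.firstName} ${this.lastName}`;
    }
}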






Shaping the observable tree is an essential aspect of using MobX. This sets up MobX to start tracking the parts of your Store that are interesting and change-worthy!



To be continued…



In Part 2 we will look at how you can take @action when your observables change. These are the side effects of your application!

Tuesday, August 2, 2016  |  From Pixel in Gene

Unit Testing is one of those aspects of a UI project that is often ignored. Not because nobody wants to do it, just that the cost
of setup and maintenance can be taxing for a small team. Even on a large team, it can be ignored during those looming deadlines.
Testing takes far greater discipline and rigor to do it right and to keep doing it right.



This post is not a panacea for Unit Testing but rather distills the process down to a few key areas. Also, note that we
are just focusing on Unit Testing and not on other kinds of testing like Integration (aka end-to-end), Acceptance, Stress/Chaos testing, etc.
I’ll also limit myself to Web UI, but the ideas are applicable to other UI platforms.



I’ll get it out right now and state it boldly:




Unit Testing is Hard


With familiarity, it will become easier, but it is still a multi-step process before you are in cruising mode.



Why is it hard?



1 Setup can be challenging:



  • You have to first pick a testing framework such as Jasmine or Mocha. You may also have to pick other libraries for framework-specific testing.
  • Pick a build tool like Gulp, Browserify, Webpack or plain NPM.
  • Configure the build to run in test mode
  • Set up a harness like Karma and related plugins
  • Set up coverage and reporting
  • Make sure it works on your Continuous Integration server, such as Jenkins or TeamCity

On every project I worked on there was always a bit of fiddling with the settings and doing things differently based on the available infrastructure.



2 Devising a test for certain scenarios can be hard. This can require a lot of mocking or changing the code to be more explicit about its dependencies. This is
probably the best part of testing but can be demotivating at times. The end result is usually cleaner code and a better understanding.



3 Sometimes it makes more sense to do Test-Driven development instead of Test-First, but it takes more experience to know when to pick between the two.
A Test-First approach may not be as rewarding and has a longer lead-time to gratification.
Seeing working code with manual testing can be more gratifying, and adding unit tests at that point will also be more meaningful.



On a separate note, Test First Development works great when doing API Design.



4 Once you are in the middle of the project, a breaking test can result in a lot of investigation.
This is part of the pain of software development but manifests more with a failing test! Of course, those same tests will help you later when doing
serious refactoring.



The above is a decent real-world representation of what you have to go through. At least I’ve never had a silk road experience yet :-)



Stereotypes



Luckily things will get brighter. As you do more Unit Testing, you will start seeing the patterns emerge. After a few projects doing testing,
you will realize the repetition that is happening. It will be the same kind of tests being done, possibly with different frameworks or libraries.
The key is that the types of tests are quite limited. The following is a representative list of all the possible unit-tests you will ever write:



  1. Algorithmic / State-based
  2. Form Validation
  3. Interaction
  4. Network related
  5. Time/Clock based
  6. Async testing

1. Algorithmic / State-based tests








These tests are purely logic-based and have no UI involvement. Most likely these tests focus on the business-logic
of your application. For example, you may have a very specific way of parsing the JSON payload from a network request.
In this case, you will have a separate module, say parseEntity.js that knows how to consume this payload
and make it usable on the client side.



The kind of tests you will write will include the following:



  • Parsing for empty payloads
  • Parsing for really large payloads
  • Parsing for malformed payloads
  • Parsing for various types of payloads

As you can see, it’s purely logic-based and runs through a variety of situations to ensure the parsing module
produces the correct results. One clear advantage of Algorithmic tests is that they can
all be done as data-driven tests. You can define a giant list of input/output cases and simply
run through them. You can even keep this list separately in a JSON file or even a database! As you find
more edge-cases, you can craft a specific data-test, include it in your list, and ensure your module works correctly.





Data-driven tests





Data-driven tests rely on the fact that the only thing changing in the test are the inputs and outputs. This means
you can keep a list of input-output pairs and simply run through them one by one. Every test would read the input,
perform the operation and check if the result matches the corresponding output.



Note that the inputs and outputs can be fairly complex. If you have a large set of input-output pairs,
you can even store them externally, say in a JSON file, CSV or a NoSQL DB!


Operation = Math.pow(2, input)

| Input | Output |
|-------|--------|
| 1     | 2      |
| 2     | 4      |
| 3     | 8      |



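As a minimal sketch of such a data-driven test (Mocha-style), assuming a hypothetical parseEntity module like the one described above and made-up expected outputs:

const assert = require('assert');
const parseEntity = require('./parseEntity'); // hypothetical module under test

const cases = [
    {name: 'empty payload', input: {}, expected: null},
    {name: 'single item', input: {items: [{id: 1}]}, expected: {count: 1}},
    // ... more edge cases, possibly loaded from a JSON file or a DB
];

describe('parseEntity', () => {
    cases.forEach(({name, input, expected}) => {
        it(`handles a ${name}`, () => {
            assert.deepEqual(parseEntity(input), expected);
        });
    });
});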


Other examples of logic-based testing could be for:



  • Mathematical Calculations
  • Data transformations (such as map, sort, filter, group)
  • Search algorithms, Regex
  • Correct translations of strings based on locale
  • Testing services used in the app (eg: persistence, preferences)

2. Form Validation tests








Technically this type of test falls under the Algorithmic category. However, this use case is so frequent in
UI apps that it demands its own category. Here again you can have a data-table for all the input-output pairs.
The types of things you will test include:



  • Validation of field values against a set of constraints. This can be encoded as a data-driven test.
  • Testing for the proper error messages for failed validations
  • Testing for any success messages for successful validations
  • UI feedback for messages and errors
  • Ensuring various form fields are in the correct state (visibility, enabled, etc.)

3. UI interaction tests



This category is probably the easiest to explain. These tests are for the UI components of your
application where you test the behavior and visual feedback. For example, if you had a SearchBar component,
you would have tests such as:








  • Ensuring the textbox has a placeholder text
  • Search button is disabled if there is no text
  • On focus, the style of the textbox changes
  • Entering text, enables the search button
  • Hitting the Return key fires the search callback. Same holds for clicking on the search button.

These tests can also get very tricky for certain scenarios. For example, drag-and-drop is not an easy one. Neither is
testing a combination of hot-keys and mouse/touch operations. For such tests, it’s probably best to wrap the core logic
in a service and expect the service to change the internal state correctly. By reducing these user interactions to
some known, expected state, you can simplify your testing. It essentially becomes a state-based test at that point.



If you’d rather test this more explicitly, you can simulate events via jQuery and check whether the callbacks are getting
fired. Of course, you also need to check that the correct state is being reflected via proper visual feedback.
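A minimal sketch of that explicit style, using a Jasmine spy and jQuery-triggered events; the setupSearchBar() initializer is a hypothetical stand-in for however your component is wired up:

it('fires the search callback when the button is clicked', () => {
    const onSearch = jasmine.createSpy('onSearch');
    const $el = $('<div><input class="query"/><button class="search"/></div>');
    setupSearchBar($el, {onSearch}); // hypothetical component initializer

    $el.find('.query').val('mobx').trigger('input');
    $el.find('.search').trigger('click');

    expect(onSearch).toHaveBeenCalledWith('mobx');
});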







4. Network related tests



These tests can also be treated as service-tests. Usually the network-related activity is performed by the data-layer
of your application, typically wrapped as a service. Making a real network request is not a responsibility of the Unit Test,
nor is it feasible.
This is where you will mock the network backend and ensure the proper call and parameters are being sent.



This test usually involves firing a service method and ensuring the proper network request is being made. You can also
mock the response to return valid / error payloads and ensure the service layer behaves as expected.



5. Time based tests








If you have some functionality in your app that relies on setTimeout() or setInterval(), you have to do a time-based test.
However, simulating or even waiting for the specific period is not feasible as it can slow down your tests. This is a case for
“Mock the Clock”! Yes, literally. Before you can run the test you have to hijack (normally via a library) the window.setTimeout() and window.setInterval()
methods.



Your mock will provide methods to advance the clock by the required time. This is the way to forward time in your test. At this point
you can check your behavior to see if it has performed the required set of operations.
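A minimal sketch using Jasmine’s mock clock (Sinon’s fake timers work in a similar way); the polling code under test is illustrative:

describe('polling', () => {
    beforeEach(() => jasmine.clock().install());
    afterEach(() => jasmine.clock().uninstall());

    it('polls every 5 seconds', () => {
        const poll = jasmine.createSpy('poll');
        setInterval(poll, 5000); // stand-in for the code under test

        jasmine.clock().tick(5001); // advance the mocked clock
        expect(poll).toHaveBeenCalledTimes(1);

        jasmine.clock().tick(5000);
        expect(poll).toHaveBeenCalledTimes(2);
    });
});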



6. Async testing








Most UI code these days relies on async/await, Promises or callback-based asynchronicity. This requires some change in the way you test the functionality.
All testing libraries run synchronously, so they provide hooks for your test to signal back (with a done callback) when it is ready.
Once you signal done, the framework will check the expectations and pass or fail the test.
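A minimal sketch of an async test with the done callback (Mocha/Jasmine style); fetchUser() is a hypothetical promise-returning function under test:

it('loads the user profile', (done) => {
    fetchUser('pavan')
        .then((user) => {
            expect(user.name).toBeDefined();
            done();       // signal the framework that the async work finished
        })
        .catch(done);     // passing an error to done fails the test
});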



Mechanics of a Unit Test



Every unit test follows a standard 3-step process:



  • Prepare the context and environment for the test
  • Run the test code
  • Assert things are working as expected

Libraries (Mocha, Jasmine) will provide you with APIs for these 3 parts of testing. The popular libraries follow a Behavior-Driven approach.
This means the API is more English-like and focuses on user-level behavior for the test cases.








  • describe - used to create a test-suite or a group of test-cases
  • it - run a single test-case
  • before - called once to setup the context for the test suite
  • after - called after completion of the test suite
  • beforeEach - called before each test case
  • afterEach - called after completion of each test case

The important thing to note in BDD frameworks is that you can nest describe and it. This creates a nested context, so the inner-most
it accumulates the state from all of its parent describes.
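A minimal sketch of that nesting, with beforeEach setting up context at each level; the createSearchBar() factory is a hypothetical helper:

describe('SearchBar', () => {
    let searchBar;

    beforeEach(() => {
        searchBar = createSearchBar(); // hypothetical factory for the component under test
    });

    describe('with no text entered', () => {
        it('disables the search button', () => {
            expect(searchBar.isSearchEnabled()).toBe(false);
        });
    });

    describe('with text entered', () => {
        beforeEach(() => searchBar.setText('mobx'));

        it('enables the search button', () => {
            expect(searchBar.isSearchEnabled()).toBe(true);
        });
    });
});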



To help you during the test execution part, there are a few more helpers that you can use. These include:



  • Mocks: Help you simulate time-consuming dependencies like Network, Database, Services, etc.
  • Stubs: Help you provide canned responses for certain API dependencies
  • Spies: Help you spy on dependencies to ensure they are getting called correctly and in a timely fashion

Finally the aforementioned libraries also have APIs to assert your expectations for a test. These assertions will result in passing or failing
the test. This is the last part of the test, where you check that the behavior was correctly performed and as per expectations.



There are also specialized libraries (Chai) that offer more fluent-APIs for performing assertions.



Principles of good Unit tests



Irrespective of the kind of tests you write, there are some golden rules to adhere to for all Unit Tests. Violating these rules will only
make it difficult to scale your codebase as you add more features. It will also degrade your overall Developer Experience. So always
strive to meet these rules!



  • Should run fast. Use mocks where necessary to speed up slow running dependencies (eg: Network)
  • Should be isolated and run independently
  • Keep the assertions limited and focused. Create separate tests if the assertions are different.
  • Give very specific names for your tests. Good naming is monumental.
  • Do not test library code, even if done indirectly!
  • Test the happy path
  • Test the boundary conditions
  • Test the failure conditions

Summary



Testing is hard and takes experience to get it right. By following the above principles and remembering that: “there are only a limited types of tests”,
you can keep the pain of testing to a minimum.



Next Step: Make Testing Fun (MTF) :-)

Tuesday, July 12, 2016  |  From Pixel in Gene






Higher Order Components (HOC) are a direct translation of Higher Order Functions from functional languages. A HOC extends the state / behavior of the inner component in a composable way, so you can add many more extensions without the knowledge of the inner component.



React is particularly suited to support this with minimum friction. The tag structure of JSX also helps in visualizing this in text.



In this blog post, we’ll take a look at many of the use-cases for HOCs. They are all from real projects so you are assured of their practicality!








Creating HOCs



HOCs can be created in a couple of ways:



  1. Stateless functions that return a wrapper class, rendering the inner-component via props.children
  2. Stateless functions that render the component passed via props.children
  3. Regular classes that render the component passed via props.children
  4. Using the @decorator language extension

The code below shows these ways of constructing an HOC for a Guard Component (which we will cover in the next section).



import React from 'react';

// 1. Wrapper function returning a class
export function guardedComponentFunction(condition, Component) {
    return class Guarded extends React.Component {
        render() {
            return condition ? <Component {...this.props} /> : null;
        }
    }
}

// 2. Wrapper function
export const GuardedComponent = ({condition, children}) => {
    return (condition ? children : null);
};

// 3. Class that wraps the component via props.children
export class GuardedComponentClass extends React.Component {

    static get defaultProps() {
        return {
            condition: true
        };
    }

    static get propTypes() {
        return {
            condition: React.PropTypes.bool
        };
    }

    render() {
        return (this.props.condition ? this.props.children : null);
    }
}

// 4. As a @decorator
function guardWith(condition) {
    return function(Component) {
        return guardedComponentFunction(condition, Component);
    }
}

@guardWith(true)
class ComponentToGuard extends React.Component {
    render() {
        return <h2>Advanced Admin Component</h2>;
    }
}





The possibilities…



1. Guard components



Guard components are most useful when you want to render a component only if a certain condition matches. For example, if you have the Admin area which should only be visible to logged-in admin users, you can protect it with a Guard component. Other names for this type of component are Protected or Conditional or Toggle.






2. If/Else components



This is an extension of the Guard component and adds the ability to handle both the true and false conditions. You can also treat this as a Toggle wrapper that shows one or the other depending on the condition. I’ve used this in cases where I show a list of items when the number of items is > 0 and an empty message when it is 0.



The code below shows the use of an IfElse component. It is fairly simple and uses the first child as the “true” component and the second one as the “false” component.



// if-else.jsx

import React from 'react';

export function IfElse({condition, children}) {
    const childrenArray = React.Children.toArray(children);

    const trueChild = childrenArray[0],
        falseChild = childrenArray[1] || null;

    return condition ? trueChild : falseChild;
}


// Somewhere in the app

import {IfElse} from './if-else';
import {ListOfItems, EmptyList} from './list-components';

class SomeAppComponent extends React.Component {
    // ...

    render() {
        const {items} = this.props;

        return (
            <IfElse condition={items && items.length > 0}>
                <ListOfItems items={items}/>
                <EmptyList message="There are no items in the list"/>
            </IfElse>
        );
    }

    // ...
}





Note: You can also extend the IfElse component to be more general with a SwitchCase component! I’ll leave that as a reader exercise :-)
If you feel more adventurous, you can even create looping-constructs as HOCs! Think WhileComponent, ForComponent, etc.



3. Provider components



Provider components (or wrapper functions) allow you to mixin behavior and state and make it available as props on the wrapped component.



If you have used Redux or MobX, the connect() and observer wrappers, respectively, work as Providers. They abstract the details about the connection to the store and make it available as props on the wrapped inner component.



The React-Router is yet another example where a Provider component (RouterContext) takes care of instantiating the inner component(s) and passing the Router details.



The provider component is probably the most versatile of all HOCs and can do a variety of things like:



  • Swapping out components based on the Viewport size (enabling responsive components)
  • Handling Authentication and passing credentials to inner components
  • Perform logging or support debug behaviors based on certain lifecycle events or app-specific events
  • Handling analytics and reporting user behaviors
  • Doing dependency injection and passing shared services or data to inner components
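As a minimal sketch (not from the post) of such a provider, take the first item above: a viewport-size provider that injects the current window size as props on the wrapped component. All names here are illustrative:

import React from 'react';

export function withViewport(Component) {
    return class ViewportProvider extends React.Component {

        constructor(props) {
            super(props);
            this.state = {width: window.innerWidth, height: window.innerHeight};
            this.handleResize = this.handleResize.bind(this);
        }

        componentDidMount() {
            window.addEventListener('resize', this.handleResize);
        }

        componentWillUnmount() {
            window.removeEventListener('resize', this.handleResize);
        }

        handleResize() {
            this.setState({width: window.innerWidth, height: window.innerHeight});
        }

        render() {
            // Inject the viewport as a prop on the wrapped component
            return <Component {...this.props} viewport={this.state} />;
        }
    };
}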

To be continued…



Provide JSFiddles for the examples above.



If you already have examples of the above, I will be happy to link it here.



Summary



HOCs are a powerful concept derived from functional languages. They allow you to create composable components that abstract details and make the component tree more declarative. I hope the examples above give you some ideas to extend and to discover your own patterns of HOCs in your application.

Sunday, July 10, 2016  |  From Pixel in Gene

If you are only using Browserify or only Webpack in your project, you don’t have to worry about consuming external bundles.
Either of them will take care of it for you. However if you are in a situation like mine where you have legacy code in the application,
bundled by Browserify and newer shiny code bundled with Webpack, then this post is all for you!




In Short: We will consume the Browserify-bundled code as externals within Webpack.


How do we do it?



Now the general idea we are going for is to treat the browserify-bundles as externals to our Webpack build.
If you read the documentation for externals, it has options like



  • string
  • object
  • function: function(context, request, callback)
  • RegExp
  • array

which tell you how your external bundle should be resolved at runtime.



In my case, the function-based option was exactly what I needed.


function(context, request, callback) {
    /* ... */
}


With a function, you get to decide how the request should be resolved. For the other types of externals,
Webpack will look at the value for output.libraryTarget.





output.libraryTarget





output.libraryTarget has a bunch of different options like:



  • var
  • this
  • commonjs
  • amd
  • umd

After a bunch of trial and error, commonjs appeared to be the right value. But … in vain. It resulted in a runtime error:



Uncaught ReferenceError: exports is not defined


function-based external was my only remaining hope.






function-based external



Since Browserify provides CommonJS-style behavior on the browser, it also shims a handy utility: the require function, on the window object.
Luckily, this is our savior when trying to load browserify-bundles with Webpack.



If we go with the function-based approach to resolving the external, we will end up with a function like so.


const BROWSERIFY_BUNDLE_PATTERN = /core|services|helpers|(^.*\.bundle)/;
function(context, request, callback) {
    if (BROWSERIFY_BUNDLE_PATTERN.test(request)) {
        return callback(null, `require('${request}')`);
    }

    callback();
}


Since we know that browserify will put the require function on window, we can use that to do the resolution of the bundle (aka request) at runtime.
Note how I am passing the require statement in the callback(). If the request matches the known set of bundle patterns, we will resolve them with the
require statement.
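For context, here is a minimal sketch of how that function slots into the webpack configuration; the entry/output paths are illustrative:

// webpack.config.js
const BROWSERIFY_BUNDLE_PATTERN = /core|services|helpers|(^.*\.bundle)/;

module.exports = {
    entry: './src/index.js',          // illustrative paths
    output: {
        path: __dirname + '/dist',
        filename: 'app.js'
    },
    externals: [
        function (context, request, callback) {
            if (BROWSERIFY_BUNDLE_PATTERN.test(request)) {
                // resolve the bundle at runtime via browserify's window.require
                return callback(null, `require('${request}')`);
            }
            callback();
        }
    ]
};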



In the webpack-generated bundle, we will see some lines like so:


// ...

function(module, exports) {
    module.exports = require('hello.bundle');
}

// ...


where hello.bundle is an external browserify-bundle.



And that’s how we consume browserify bundles with webpack!

Monday, May 30, 2016  |  From Pixel in Gene

Over the past few months I’ve spent a lot of time building video courses. It’s actually quite a long process given
my client commitments but the end results are rewarding.



There is a lot of research required to understand and explain concepts in the best possible way. This means reading a ton of code on Github, blog posts by other folks and finally getting down to structuring the content in a meaningful way.



Luckily I have great support from the Editors at Tuts+ and they help me in reducing scope, cleaning up my narrations and adding the required polish. So far I’ve published a few courses, but this year has been busier with:



  • React Deep Dive (published)
  • Refactoring JavaScript using ES2015 (waiting to be published)
  • Angular 2 Deep Dive (under development)

Process



1 The process starts out by first proposing a topic on the Trello Board. If there is sufficient interest, it will be moved to an assigned state, from which point development begins.



2 Basecamp is the chosen communication platform for developing the course. There is a fixed template for every course and it starts out by first creating an outline. This is a required step to carve out the scope and ensure the topic stays sufficiently focused.



Recently Tuts+ has started with two kinds of course formats: Long and Coffeebreak. All of my earlier courses were in the long format, which is roughly 1.5 hours. The coffee-break format is a shorter format and involves just an intro video followed by a single 15-min lesson.



3 Since most of my courses are code-based, I first create a Github repo and finish building the course material. I then break it down by lessons and start recording.



4 I use ScreenFlow for recording all the lessons. There are some strict requirements around the screen resolution and pace of the video and they are all outlined in the instructor site for Tuts+.



5 After screen-recording, it’s time for the voiceover. Some folks tend to record and narrate at the same time. I find that a little restrictive since you don’t always have the best sentences to say while writing code! Keeping these two activities (recording + voiceover) separate gives me the flexibility to pause the video, slow it down, add additional context or even speed up the video while I narrate the relevant text. I have been following this technique for all my lessons and so far I am happy with this approach.



I use Final Cut Pro X for doing the editing and narration. The magnetic timeline of FCPX makes the whole editing process a breeze. Early on I had to decide between ScreenFlow, Premiere Pro and FCPX for editing the video. I am glad I chose FCPX and the time I spent learning it has been worthwhile.



I post the videos on Basecamp as and when I complete them. My editor would chime in occasionally and give some guidance around the content, narration style or other pointers to watch out.



6 Once all the recording and voiceovers are done, it’s time to prepare the Intro lesson. This is usually done last to take bits and pieces of all the recordings and add preview footage.



7 Finally the last step is to prepare the course notes, descriptions of each lesson and also an overall description of the course. At this stage, the finish line is close. It normally takes a month or so for the course to go live. Tuts+ does some post-production like adding watermarks, animations and preparing the course site.



Takes time



As you can tell, the process is time consuming and requires significant effort. However, doing it a few times makes it seem less so. I have become comfortable recording and narrating without too many redos. The editing process has also become quick with sufficient practice. Between the long and coffee-break formats, I am leaning towards the shorter one.



The first time you start doing these courses, it will seem like an eternity to finish. However, it’s also a lot of fun going through the process. There are some side benefits as well:



  • You become an effective communicator
  • You learn to prioritize and structure content
  • You pick up skills in video editing, narration and overall course production
  • You learn a LOT. In fact, this is the biggest reason I do these courses!
  • You get a sense of achievement when a course goes live! This is hard to describe in words but you will know the feeling is great.

You should try it out!



If you are passionate about a topic and would like to share your viewpoint, you should try building video courses. You can start out with simple, short videos on YouTube and then get into serious course production. It doesn’t hurt to have more people explain topics their own way :-)

Sunday, August 16, 2015  |  From Pixel in Gene

As the application scales in size, controlling communication between components requires enough thought to ensure there isn’t too much or too little of it. In this post we will look at the various ways of communicating in AngularJS 1.x. Some of these techniques may still apply in the Angular 2 world (eg: Events), however the actual mechanics will be different.



The different ways of communication



AngularJS 1.x offers several ways of communicating between components. These are based on the core abstractions that Angular provides, namely: Services, Directives, Controllers and of course the Scope.



Below, we explore these alternatives with some simple examples.





Communicating via Scope








Data binding was the pivotal feature of Angular that earned it its initial popularity. By having a scope (the model) bound to a template you are able to replace placeholder mustache-like strings {{ model.prop }} with their actual values from the scope (aka model). This way of expanding templates to build pages is very convenient and productive. Here, Scope acts as the binding glue to fill in values for the mustache-strings. At the same time, scope also has references to event-handlers that can be invoked by interacting with the DOM.



Note that these placeholders automatically create a two-way binding between the model and the DOM. This is possible, as you already know, via the watchers. Also worth mentioning is that with Angular 1.3, you can create one-time bindings with {{ ::model.prop }} syntax. Make sure you put the ::.



The example below shows the controller and its usage in the template. The key part here is the use of scope (the binding glue) to read model values as well as provide interaction.



angular.module('exemplar')
    .controller('MidLevelController', function ($scope) {

        $scope.midLevelLabel = 'Call Mid Level';
        $scope.midLevelMethod = function () {
            console.log('Mid Level called');
        };
    });



<div class="panel mid-level" ng-controller="MidLevelController">
    Mid Level Panel

    <div class="panel bottom-level">
        <button ng-click="midLevelMethod()">{{ midLevelLabel }}</button>
    </div>
</div>





Communicating implicitly via the Prototypical Scope








Scopes in Angular directives (and Controllers) prototypically inherit from their parent scopes. This means a child directive (or Controller) is able to reference and use properties of its parent scope just by knowing the property names. Although not a recommended approach, this can work well for simple directives that do not use Isolate Scopes. Here there is an implicit contract between the parent and child directives (or Controllers) to share a few properties.



In the example below, you can see that the BottomLevelController is able to invoke a method on the TopLevelController purely because of the prototypical scope.



<div class="panel top-level" ng-controller="TopLevelController">
    Top Level Panel
    <div class="panel mid-level" ng-controller="MidLevelController">
        Mid Level Panel

        <div class="panel bottom-level" ng-controller="BottomLevelController">
            <button ng-click="callMidLevelMethod()">{{ midLevelLabel }}</button>
            <button ng-click="topLevelMethod('Bottom Level')">Call Top Level</button>
        </div>
    </div>
</div>



And here are the controllers:



angular.module('exemplar')
    .controller('TopLevelController', function ($scope) {

        $scope.topLevelMethod = function (sender) {
            console.log('Top Level called by : ' + sender);
        };
    })
    .controller('MidLevelController', function ($scope) {

        $scope.midLevelLabel = 'Call Mid Level';
        $scope.midLevelMethod = function (sender) {
            console.log('Mid Level called by: ' + sender);
        };
    })
    .controller('BottomLevelController', function ($scope) {

        $scope.callMidLevelMethod = function () {
            $scope.midLevelMethod('bottom-level');
        };
    });





Communicating via Controllers










When there is a natural, nested relationship with directives, it is possible to communicate between them by having the child depend on the parent’s Controller. This is generally done within the child-directive by providing a link() function and a dependency on the parent directive’s controller. The dependency is established using the require attribute of the child’s directive-definition-object. You can even depend on more controllers from the parent chain by using the array syntax. They all show up as the fourth parameter in the link() function. Note: for this to work, the parent-directive must have a Controller defined.







Consider the example below where we have nested directive structure:



1 <parent-component>
2     <child-component></child-component>
3 </parent-component>



Here we can wire the <child-component> and <parent-component> with the following directives. Note line#23 where we require the parent controller and line#25 where we take in the instance in the link function.



 1 angular.module('exemplar')
 2     .directive('parentComponent', function () {
 3 
 4         return {
 5             restrict: 'E',
 6             templateUrl: 'parent-child-directive/parent-component.template.html',
 7             transclude: true,
 8             controller: ParentComponentController
 9         };
10 
11         function ParentComponentController($scope) {
12 
13             var vm = this;
14             vm.takeAction = function () {
15                 console.log('The <child-component> called me');
16             }
17         }
18     })
19     .directive('childComponent', function () {
20 
21         return {
22             restrict: 'E',
23             require: '^parentComponent',
24             templateUrl: 'parent-child-directive/child-component.template.html',
25             link: function (scope, element, attrs, parentController) {
26 
27                 scope.notifyParent = function () {
28                     parentController.takeAction();
29                 }
30             }
31         }
32     });





Communicating via Services








Services are the singletons of Angular that are used to capture behavior. However, by virtue of being singletons, they also act as shared storage and can be used to aid communication between disparate components (Directives). The communicating parties depend on the shared service and use its methods to do the communication.



In the example below, you can see a simple clipboardService that provides a shared storage for the copyButton and pasteButton directives.



 1 (function () {
 2     angular.module('exemplar')
 3         .factory('clipboardService', serviceFunction);
 4 
 5     function serviceFunction() {
 6 
 7         var clipboard = {};
 8 
 9         return {
10             copy: function (data, key) { /* ... */ },
11             get: function (key) { /* ... */ }
12         };
13     }
14 })();



 1 angular.module('exemplar')
 2     .directive('copyButton', function (clipboardService) {
 3 
 4         return {
 5             restrict: 'A',
 6             link: function (scope) {
 7 
 8                 scope.performCopy = function () {
 9                     // Invoke Copy
10                     clipboardService.copy({}, 'abc');
11                 };
12             }
13         };
14     })
15     .directive('pasteButton', function (clipboardService) {
16 
17         return {
18             restrict: 'A',
19             link: function (scope) {
20 
21                 scope.performPaste = function () {
22                     // Fetch from clipboard
23                     var data = clipboardService.get('abc');
24 
25                     /* ... Handle the clipboard data ... */
26                 };
27             }
28         };
29     });





Communicating via Events








Events are the cornerstones of all UI Frameworks (or any event-driven framework). Angular gives you two ways to communicate up and down the UI tree. Communicate with parents or ancestors via $emit(). Talk to children or descendants via $broadcast().
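
As a quick illustration (the controller names and event names below are made up; only the $broadcast / $emit / $on calls are real Angular APIs), a parent scope can push an event down while a child pushes one up:

angular.module('exemplar')
    .controller('ParentController', function ($scope) {

        $scope.refreshChildren = function () {
            // Travels DOWN to all child and descendant scopes
            $scope.$broadcast('app.refresh', { reason: 'user-clicked-refresh' });
        };

        $scope.$on('child.done', function (event, args) {
            console.log('A child finished: ', args.id);
        });
    })
    .controller('ChildController', function ($scope) {

        $scope.$on('app.refresh', function (event, args) {
            console.log('Refreshing because: ' + args.reason);
        });

        $scope.finish = function () {
            // Travels UP to parent and ancestor scopes
            $scope.$emit('child.done', { id: 42 });
        };
    });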



As an extension, you can talk to every component in the app via $rootScope.$broadcast(). This is a great way to relay global events.



On the other hand, a more focused $rootScope.$emit() is useful for directed communication. Here $rootScope acts like a shared service. Communicating with events is more like message-passing where you establish the event-strings and the corresponding data that you want to send with that event. With the right protocol (using some event-string convention) you can open up a bi-directional channel to communicate up and down the UI tree.



In the example below, you can see two controllers (DeepChildController and RootController) which communicate using the $rootScope. With $rootScope, you get a built-in shared service to allow any child component to communicate with the root.



 1 angular.module('exemplar')
 2     .controller('DeepChildController', function ($scope, $rootScope) {
 3 
 4         $scope.notifyOnRoot = function () {
 5             $rootScope.$emit('app.action', {name: 'deep-child'});
 6         };
 7     })
 8     .controller('RootController', function ($scope, $rootScope) {
 9 
10         $rootScope.$on('app.action', function (event, args) {
11             console.log('Received app.action from: ', args.name);
12         });
13     });



 1     <div class="root" ng-controller="RootController">
 2     <!-- Some where deep inside the bowels of the app -->
 3         <ul>
 4             <li>One</li>
 5             <li>Two</li>
 6             <li ng-controller="DeepChildController">Three has
 7                 <button ng-click="notifyOnRoot()">Talk to Root</button>
 8             </li>
 9         </ul>
10     </div>





Communication outside Angular








Although you may be using Angular, you are not limited to doing everything the Angular way. You can even have side-channel communication outside of Angular using a shared bus based on the PubSub model. You could also use WebWorkers for running intensive operations and then show the results via Angular.
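
As a sketch of that idea (the bus and the topic names are invented here, not part of any library), a bare-bones PubSub bus is just a map of topic names to handler arrays:

// A minimal shared event bus that both Angular and non-Angular code can use
var bus = (function () {
    var handlers = {};
    return {
        subscribe: function (topic, fn) {
            (handlers[topic] = handlers[topic] || []).push(fn);
        },
        publish: function (topic, data) {
            (handlers[topic] || []).forEach(function (fn) { fn(data); });
        }
    };
})();

// e.g. a WebWorker's onmessage handler could call:
//   bus.publish('prices.updated', e.data);
// and an Angular controller could subscribe, entering the Angular context
// (as described next) before touching any scope properties.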



The catch here is that once you want to display results on the DOM, you will have to enter the Angular context. This is easily done with a call to $rootScope.$apply() at the point where you obtain the results.



Now the question is: how do you get the $rootScope outside of Angular? The snippet below shows how, assuming your ng-app is rooted on document.body.



1 // Get $rootScope
2 var rootScope = angular.element(document.body).injector().get('$rootScope');
3 
4 // Trigger a $digest
5 rootScope.$apply(function(){
6     // Set scope variables for DOM update
7 });



Performance Gotchas in communication



Communicating at scale (inside your app) comes with a few gotchas and can seriously affect performance. For example, if you are listening to a streaming server that is pumping market data every second, you may be running $scope.$digest() or $rootScope.$digest() every second! You can imagine the turmoil this causes in terms of performance. End result: an ultra-sluggish app.








One of the most popular techniques to handle high-volume communication is to debounce the handler. This ensures that the event is only handled once in a defined time interval; for the rest of the interval, events are ignored. Debouncing can be introduced at various places in your data pipeline to control bursts of events.



Note: If you don't want to ignore the data in an event, you can buffer it for use at the end of the interval. In general, batching is a universal technique for controlling volume. It is much more efficient to combine several small activities into one batched activity.
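
To make this concrete, here is a minimal sketch in plain JavaScript (the function, controller and event names are mine): the wrapped handler runs for the first event and then ignores everything else until the interval expires.

// Invoke `fn` for the first event, then ignore further events for `interval` ms
function debounce(fn, interval) {
    var ready = true;
    return function () {
        if (!ready) { return; }            // still inside the quiet interval
        ready = false;
        fn.apply(this, arguments);
        setTimeout(function () { ready = true; }, interval);
    };
}

// Hypothetical usage inside a controller: market ticks may arrive every few
// milliseconds, but the expensive handler body runs at most once per second.
angular.module('exemplar')
    .controller('TickerController', function ($scope, $rootScope) {
        var onTick = debounce(function (event, tick) {
            $scope.latestPrice = tick.price;
        }, 1000);

        $rootScope.$on('market.tick', onTick);
    });

A variant that buffers the intermediate ticks into an array and flushes them at the end of the interval gives you the batching behavior instead.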



In Summary



Communication within an application, just like building software, is a mixture of Art + Science. The science part is the mechanics of communication, many of which we have seen above. The art is knowing when to employ the right one! Using the right methods can make a big difference to the maintainability, stability and overall performance of your application.



Although we have used Angular as a pretext to discuss these communication styles, many of them are universal and apply to any JavaScript application. I’ll leave it as an exercise for you to map to your own favorite framework.



Question: Have I missed out any particular technique here? Something that you have used effectively? Please do share in comments.

Sunday, March 29, 2015  |  From Pixel in Gene

Now we all know AngularJS is an awesome framework to build large Single Page Applications. At the core of angular is a little thing called the HTML Compiler. It takes text as input (most likely from your templates) and transforms it into DOM elements. In the process it also does a bunch of things like:



  • Directive detection
  • Expression interpolation
  • Template expansion
  • Setting up data-bindings, etc.

Let me draw your attention to one area: Template Expansion. I was curious to know if AngularJS could be used purely as a template expander. Something akin to Handlebars or Underscore templates. Is it possible? Can we take that aspect of Angular and use it independently?



Well, it turns out, it’s not that straightforward. But hey, this is an experiment and we are on a path to discover something!



But Why?



A valid question. YMMV: Your Motivations May Vary. For me it was about:



  • Using it as an Isomorphic library on both client and server
  • Use it in a custom control (in non-Angular projects) where you need to expand a template
  • Just for fun
  • Learning something new!

Disclaimer: Now, if you were to ask me point-blank: “Should I be doing this in my project?”, I will say “NO”, without blinking an eyelid.



Elements of Angular



Templates are the fundamental building blocks of modern-day web apps. They enable better structuring by keeping your view separate from the domain logic. They also give you composability: you can compose the UI by combining a set of templates. This is of course better than one giant, monolithic HTML file. So there are real, practical benefits to using templates.



Angular definitely gives you this capability but does so with a few key abstractions:








Let’s see them in turn.



  • $templateRequest: Does the $http request to fetch the template from a URL. Before it does that, it checks the $templateCache to see if the template is already available. If not, it goes out to get it.
  • $templateCache: This is the cache of all your templates. Why make a second request when you can cache it, right?
  • $compile: Does the hard part of converting text to a function. Calling the function with a scope will generate the jqLite element (DOM).
  • $digest(): The scope from the previous step must have $digest invoked to do the real “template expansion”. The $digest is the necessary syncing mechanism between the scope and the DOM.

Extracting Text



At the end of these steps, we should have a jqLite element bound to the right data from the scope. Now, the reason why we are doing this is to generate template-expanded text. So we really need the text part.



This can be done by reading the element's outerHTML property, like so: element[0].outerHTML. Finally, we have what we started out to get. It was a little roundabout, but we used a template and expanded it to real text by supplying a scope (the context for the template). All with AngularJS.



A few gotchas



I must admit, the above is not the complete picture for generating template-expanded text. In order to use services such as $templateRequest, $templateCache and $compile, you have to rely on Angular's $injector. Additionally, the scope has to be an instance of the Angular Scope. It can't be a plain JavaScript object! To create a scope you have to rely on $rootScope, which you also get from $injector.



[Aside]: If you try using a simple object, you will see exceptions being thrown. Also we need to call $digest() to bind the data. As you guessed, we can’t do that on a simple js-object.



The code below is a working example of using Angular to expand a template. You can copy and paste it as a snippet in Chrome DevTools, and execute to see the results. Make sure you run on a web-page that uses the AngularJS library. AngularJS.org is a decent bet.



 1 // The injector knows about all the angular services
 2 var injector = angular.element(document).injector();
 3 
 4 // This template could have been fetched with a $templateRequest
 5 // We are inlining it here for this snippet
 6 var template = '<div>{{ ::firstName }} -- {{ ::lastName }}</div>';
 7 
 8 // $compile creates a template-function that can be invoked with the scope
 9 // to expand the template
10 var templateFunction = injector.get('$compile')(template);
11 
12 // Create the scope. 
13 // Note: this has to be a real angular Scope and not a plain js-object
14 var scope = injector.get('$rootScope').$new();
15 
16 // Set some properties
17 scope.firstName = 'Pavan';
18 scope.lastName = 'Podila';
19 
20 var element = templateFunction(scope);
21 scope.$digest();
22 
23 // Remove the noise around the generated element. This can be disabled
24 // by configuring the $compileProvider.debugInfoEnabled()
25 // Here we take the easy way out
26 element.removeClass('ng-scope ng-binding');
27 
28 // -------- GRAND FINALE ---------
29 // The expanded template as TEXT
30 // -------------------------------
31 var expandedText = element[0].outerHTML; 
32 
33 // Output: 
34 // <div class="">Pavan -- Podila</div>



Grey Matter



As you can see, it is quite roundabout. Definitely not suggested for a real project. If you need such capability, you are better off with Handlebars or Underscore templates. But if you have read this far, hopefully you have put an additional fold in your Angular grey matter!

Friday, March 6, 2015  |  From Pixel in Gene

On March 5th and 6th I attended ng-conf 2015, which was held in Salt Lake City, Utah. It was great to meet several people building Angular apps as well as speaking first hand to the people behind Angular and TypeScript. On the technical front, I found a few resounding themes throughout the conference:








Many of these themes are covered in this Welcome talk on Day 1.



TypeScript



  • What was originally called AtScript has now merged with TypeScript. No more AtScript. It is all TypeScript from now on. In fact, Google and Microsoft are collaborating closely on building TypeScript.
  • TS will be a superset of ES6
  • Angular 2 apps can be written in ES5, ES6 or TypeScript. Writing apps in TypeScript is definitely the recommended approach.
  • Jonathan Turner, from Microsoft, did a great job describing TypeScript and Angular 2.0

WebComponents



  • The team from OpenTable gave a very compelling presentation about the benefits of WebComponents. It is definitely a solid approach to scale large apps.
  • Moving forward, most apps will be composed of components and can be visualized as a component-tree.






  • It is the evolution of the Angular 1.x Directives

Performance



  • Several improvements have been made in the change detection algorithm which dramatically improves the $digest cycle. Some of these include: use of Immutable Data Structures, Unidirectional Change Detection which always completes in one iteration and View Caching.
  • View Caching will lead to faster render
  • Dave Smith did a more in-depth exploration of Angular 2, comparing it to React in this talk. You will be surprised at the results.
  • More details can be seen in these blog posts
  • There is a new framework called Benchpress for doing E2E performance testing. Jeff Cross covers it in his talk on Fast from the Start

Material Design



E2E Testing with Protractor



  • Custom plugins can be added to hook into various lifecycle
    events of the protractor harness

Observables



  • The concept, inspired by Rx.NET, will be part of TC39 and also Angular 2. Although there is already a perf boost just by using Angular 2, the use of Observables as data structures helps in boosting the $digest even more.
  • Using the design principles of Rx Observables, one can create reactive, data-intensive applications far more easily. The Netflix team is relying heavily on this approach to build their internal tools.
  • The Netflix team also unveiled Falcor, which is an evolution of the MVC model for the cloud. The talk by Jafar Husain is worth watching.
  • This talk about using Rx gives a more practical example

Community and Collaboration



There seems to be a greater push to collaborate than compete.



  • We saw an example where Google and Microsoft are collaborating on the TypeScript language
  • The Angular team is also working with the Ember team and taking tips from the Ember-CLI implementation
  • In the Day 2 Panel meeting, we saw Igor mention that they want to consolidate NG-Inspector and Batarang extensions into one.
  • Of course, this means we all benefit from the best work done by various teams, both within and outside Google.

Angular 2 Syntax philosophy



  • The main aim of this syntax change is to make it more consistent and compatible with the HTML spec. Because of this change, several directives are no longer needed and have been removed from Angular 2.0.
  • Attribute syntax is now [property]="expression". This eliminates a bunch of directives like ng-bind, ng-bind-html
  • Event syntax is now (event)="statement". This eliminates a bunch of directives such as ng-mouse*, ng-key* and most of the event related directives.
  • A new reference syntax allows you to reference tags and variables.

1 <div (click)="input.focus()"></div>
2 <input #input type="text">



Note the #input which sets a reference to the input control. It
can now be referenced from the statement above: input.focus().



  • With a fixed set of syntax choices, Angular templates are more amenable to tooling and introspection
  • We should expect some tooling that can do static analysis of templates and catch compile-time issues, especially if TypeScript is the primary language.
  • Do watch the keynote by Misko where he describes more about the philosophy of Angular 2 syntax

Chuckles



“I like to write CoffeeScript. Write some JavaScript and then get Coffee while the script finishes.” (Dr. Gleb Bahmutov)



ng-wat? was the funniest talk. Period.



Keep an eye on



New Website for Angular: Angular.io. This is the place where there will be a lot more information about the future of Angular!



For all the ng-conf videos, check out this YouTube Channel.

Saturday, February 28, 2015  |  From Pixel in Gene

If you see my Archive page, you will notice a complete void for 2014, a year where I did not post at all. So what happened? Well, besides taking those much needed sleep-breaks, I was busy building [QuickLens][quicklens]: a Mac App that provides a set of tools to explore User Interfaces.



“The app was mostly built on the Nights and Weekends plan.”






A Brief History



QuickLens started its life around April 2013. It was an app born out of pure need, with features built around my own workflow. You see, back in early 2013, I was creating video courses and training videos on Web Development. I wanted an app that could help me highlight areas of the screen and possibly dim the rest of the screen. That would be useful to highlight snippets of code in real-time.



Being on the Mac, I found one: OmniDazzle. The description seemed to fit my exact need. Sadly, it was not supported on Mavericks and above. I tried my best to make it work but it was futile. After struggling with it for a few days, I decided to take the next step: build it myself.



And thus, QuickLens was born. The name “QuickLens” itself was a cue for moving a lens around the screen…quickly! I thought I would have this tool ready in a few days, so I could use it in my presentations. That estimate of course went overboard by 365 days. Yes, it took me a complete year to build it and release it on the Mac App Store.



Year-long



“What? A year just to create a little app to highlight an area!” Well, what started as a simple exercise ended up as a fairly sophisticated app with a suite of 7 tools. And it does a lot more than just highlight an area. QuickLens became a tool-set for



  • Magnifying areas
  • Sampling / Exporting colors
  • Inspecting alignments and layouts
  • Measuring dimensions
  • Overlaying Grids
  • Taking snapshots
  • Simulating vision defects and so much more

Sometimes the path you take opens up a world of detours!



It was Monocle



If you have seen QuickLens, you know it has 7 tools:








  • Lens
  • Ruler
  • Frame
  • Guide
  • Tape
  • Monocle
  • Crosshair

The tool I originally started building was Monocle.








Tools to test Tools



Although Monocle looks simple, it required a monumental effort to test and ensure that every pixel was rendered correctly, with precise positioning and alignment. That need forced me to build a bunch of ancillary tools that helped in testing it.



  • Lens was extremely useful to ensure the lines were always pixel-aligned
  • Ruler made sure I wasn’t drawing extra pixels outside of the boundaries
  • Guide was needed to check alignment of controls within a tool
  • Tape was great to check the angle of zoom-labels inside the monocle
  • Crosshair helped in getting the mouse and pixel position on the screen
  • The above tools were also used in combination for some extreme testing

QuickLens in its current state includes all those tools and features. I personally found them super useful while building the Monocle. I think these tools are generally useful and applicable in lots of different areas. If you are a UI Designer/Developer, you owe it to yourself to try [QuickLens][quicklens]!



Dynamic Theming



There are several features of QuickLens that stand out compared to other apps that do similar things. Today, I want to just focus on one feature that I haven’t seen other apps do: dynamic theming.



Since QuickLens works on top of all your apps, it is always visible and easy to access. Sometimes, the design/UI you are working with provides very little contrast against the color of the tool. This makes it hard to work effectively against the backdrop of a similarly colored design. Take a look at the screenshot below and you’ll see what I mean. You can barely separate the Guide tool from the underlying design. The White theme for the tools is not helping here!








To address this, we have dynamic theming that allows you to switch the colors of the app on the fly. Using the shortcut Command+E you can flip through various colors and then pick the one which gives you the most contrast. Use the tool-palette to see all of the choices quickly.








If you are not happy with the choices, you can also pick a custom color! Surely you will find a color in the 16-million choices provided by the color-wheel :-)



With proper contrast, you can see the tools more clearly.








Switch tool colors on the fly to get the best contrast on your designs. ⌘E works too. pic.twitter.com/LOgOKlaHRL

— QuickLens App (@QuickLensApp) June 26, 2014



Last mile to App Store



When I first started in April 2013, I did not expect a ramp-up of a year to get to a release state. A significant amount of time was spent on polishing and the overall fit and finish. It's crazy how the 80-20 rule plays out in reality. The last 20% is always the part where you sweat the most!



I was ready by early April 2014, a year after I started. The next month was spent designing the product website, the Twitter presence and the App Icon, and finalizing some legal stuff. After some late nights and burnt weekends, I was ready to submit the app for review.



Here is my experience with the App Store approval process:



  • May 22, 2014 App submitted for review
  • May 25, 2014 Rejected: wrong folder used for storing the snapshots
  • May 26, 2014 Fixed and resubmitted for review
  • June 04, 2014 App Approved
  • June 15, 2014 App released to public

The next several months were spent on promotions, advertising on Twitter, talking at local User Groups and improving the website. I was also prepping the next version (v1.5), which incorporated a ton of feedback I got from Designers and Developers.



Free Trial



It is quite natural for people to try out something new before buying. It took me a year to realize that! Yes, some lessons are learnt the hard, long way. So without further ado:



You can now download a [7-Day Free Full-featured Trial][quicklens]



I hope you will give it a shot. You never know, it might just fill the need you have in your design/development workflow!



[quicklens]: http://www.quicklensapp.com

Wednesday, February 4, 2015  |  From Pixel in Gene

It’s no secret that QuickLens is built using RubyMotion.




RubyMotion is a fantastic toolset to build your iOS and Mac Apps using the Ruby tool chain. It compiles down to the Objective-C runtime and has no interpreter overhead. The performance profile is also great.


It’s an honor to be featured as a Success Story on RubyMotion’s website.

Sunday, May 12, 2013  |  From Pixel in Gene

Alright, this blog has been quiet for a few months. But that doesn’t mean that I have stopped writing.



NetTuts+



On the contrary, I am doing more of it as a contributing author at NetTuts+. The topics are quite varied but are all related to Web Development in one form or another. A sampling of my articles so far includes:




Thanks to my editor, Jeffrey Way, I was also given the opportunity to create a video course on the latest JS technologies like NodeJS, MongoDB, EmberJS, RequireJS, etc. This should be live soon, and I'll tweet the link once it is.



So, if you find this place a little quiet, be sure to check out NetTuts+.

Saturday, December 22, 2012  |  From Pixel in Gene

A seemingly simple language, yet a tangled mess of complexity. If you are picturing a giant CSS file from your website, you are on the right track. Yes, CSS can start out as a really simple language to learn but can be hard to master. The CSS chaos starts slowly and seems innocuous at first. Over time, as you accumulate features and more variations on your website, you see the CSS explode and you are soon fighting with the spaghetti monster.



CSS Monster



Luckily this complexity can be brought under control. By following a few simple rules, you can bring order and structure to your growing pile of CSS rules.



CSS Monster



These rules, as laid down by Scalable Modular Architecture for CSS (SMACSS), have a guiding philosophy:



  1. Do one thing well
  2. Be context-free (as far as possible)
  3. Think in terms of the entire website/system instead of a single page
  4. Separate layout from style
  5. Isolate the major concerns for a webpage into layout, modules and states
  6. Follow naming conventions
  7. Be consistent

SMACSS in action



The above principles can be translated into practice in the following ways:



  1. Avoid id-selectors since you can only have one ID on a page. Rely on class, attribute and pseudo selectors
  2. Avoid namespacing classes under an ID. Doing so limits those rules to only that section of the page. If the same rules need to be applied to other sections, you will end up adding more selectors to the rule. This seems harmless at the outset but soon becomes a habit. Avoid it with a vengeance.
  3. Modules help in isolating pieces of content on the page. Modules are identified by classes and can be extended with sub-modules. By relying on the fact that you can apply multiple classes to an HTML tag, you can mix rules from modules and sub-modules into a tag.
  4. The page starts out as a big layout container, which is then broken down into smaller layout containers such as header, footer, navigation, sidebar, content. This can go as deep as you wish. For example, the content area will be broken down further on most websites. When defining a layout rule, make sure you don't mix in presentation rules such as fonts, colors, backgrounds or borders. Layout rules should only contain box-model properties like margins, padding, positioning, width, height, etc.
  5. The content inside a layout container is described via modules. Modules can change containers but always retain their default style. Variations in modules are handled as states and sub-modules. States are applied via class selectors, pseudo selectors or attribute selectors. Sub-modules are handled purely via class selectors.
  6. Naming conventions such as below make it easier to identify the type of rule: layout, module, sub-module or state

    • layout: .l-*
    • state: .is-*
    • module: .<name>
    • sub module: .<name> .<name>-<state>
  7. Be conscious of Depth of Applicability. Deeply nesting a rule ties the CSS to your HTML structure, making it harder to reuse and increasing duplicate rules.

An example to tie it all together



Alright, there are a lot of abstract ideas in here. Let's do something concrete and build a simple webpage that needs to show a bunch of contact cards, like below:



Cards



Demo



There are a few things to note here:



  • There are 4 modules: card, pic, company-info and contact-info
  • The card module has a sub-module: card-gov, for contacts who work for the government
  • The card and contact-info module change layouts via media queries.


/* ----- Picture ----- */
.pic {}
.pic-right {}

/* ----- Card ----- */
.card {}
@media screen and (max-width: 640px) {
  .card {  }
}
.card h4 {}

.card-gov {}
.card-gov .contact-info {}

/* ----- Company Info ----- */
.company-info {}

.company-info-title {}
.company-info-name {}

/* ----- Contact Info ----- */
.contact-info {}
@media screen and (max-width: 640px) {
  .contact-info {  }
}

.contact-info-field {}
.contact-info-field:after {}

Parallels to OO languages



To me the whole idea of SMACSS seems like an application of some of the ideas from OO languages. Here is a quick comparison:



  • Minimize or avoid Singletons: minimize or avoid #id selectors
  • Instances: tags in html which have a class applied
  • Single inheritance: Modules and Sub-modules
  • Mixins: context free rules via states and layouts

Summary



SMACSS can save you a lot of maintenance headache by following a few simple rules. It may seem a little alien at first, but after you do a simple project, it will become more natural. In the end, it's all about increasing productivity and having a worry-free sleep ;-)



Some resources to learn more about SMACSS:


Sunday, October 7, 2012  |  From Pixel in Gene

These are some of the common idioms I find myself using again and again. I am going to keep this as a live document and will update as I discover more useful idioms.



Disclaimer: I’ll be using the Underscore library in all of my examples





Use Array.join to concatenate strings



It is quite common to build HTML in strings, especially when you are writing a custom formatter or just plain building simple views in code. Let's say you want to output the HTML for 3 buttons:



var html = '<div class="button-set">' +
  '<span class="button">OK</span>' +
  '<span class="button">Apply</span>' +
  '<span class="button">Cancel</span>' +
'</div>';


This works, but consider the alternate version, where you build the strings as elements of an array and join them using Array.join().



var html = [
  '<div class="button-set">',
      '<span class="button">OK</span>',
      '<span class="button">Apply</span>',
      '<span class="button">Cancel</span>',
  '</div>'
].join('');


It reads a little better and can almost look like real HTML with the indentation ;)




Minimize use of if/else blocks by creating object hashes



Let's say you want to perform a bunch of different actions based on the value of a certain parameter. For example, if you want to show different views based on the weather condition received via an AJAX request, you could do something like below:



function showView(type) {
  if (_.isObject(type)) {
      // read object structure and prepare view
  }
  else if (_.isString(type)) {
      // validate string and show the view
  }
}

function showWeatherView(condition){
  
  if (condition === 'sunny') showView('sunny-01');
  else if (condition === 'partly sunny') showView('sunny-02');
  else if (condition === 'cloudy') showView('cloudy-01');
  else if (condition === 'rain') showView({ type: 'rain-01', style:'dark' })
}

$.get('http://myapp.com/weather/today', function(response){
  
  var condition = response.condition;

  // Show view based on this condition
  showWeatherView(condition);
});


You will notice that in showWeatherView() there is a lot of imperative noise with the if/else statements. This can be removed with an object hash:



function showWeatherView(condition){

  var viewMap = {
      'sunny': 'sunny-01',
      'partly sunny': 'sunny-02',
      'cloudy': 'cloudy-01',
      'rain': { type: 'rain-01', style:'dark' }
  };   

  showView(viewMap[condition]);
}


If you want to support more views, it is easy to add them to the viewMap hash. The general idea is to look at a piece of code and think in terms of data + code: which part is pure data and which part is pure code? If you can make that separation, you can easily capture the data part as an object-hash and write simple code to loop over and process the data.



As a side note, if you want to eliminate the use of if/else and switch statements, you can have Haskell-style pattern-matching with the matches library.




Make the parameter value be of any-type



When you are building a simple utility library/module, it is good to expose an option that can be any of string, number, array or function type. This makes the option more versatile and allows for some logic to be executed each time the option value is needed. I first saw this pattern used in libraries like HighCharts and SlickGrid and found it very natural.



Let’s say you want to build a simple formatter. It can accept a string to be formatted using one of the pre-defined formats or use a custom formatter. It can also apply a chain of formatters, when passed as an array. You can have the API for the formatter as below:



function format(formatter, value) {
  var knownFormatters = {
      '###,#': function(value) {},
      'mm/dd/yyyy': function(value) {},
      'HH:MM:ss': function(value) {}
  },
      formattedValue = value;

  if (_.isString(formatter)) {

      // Lookup the formatter from list of known formatters
      formattedValue = knownFormatters[formatter](value);

  }
  else if (_.isFunction(formatter)) {

      formattedValue = formatter(value);

  }
  else if (_.isArray(formatter)) {

      // This could be a chain of formatters
      formattedValue = value;
      _.each(formatter, function(f) {
          formattedValue = format(f, formattedValue); // Note the recursive use format()
      });

  }

  return formattedValue;
}


As an addendum to a multi-type parameter, it is also common to normalize the parameter value to an object hash and remove type differences.
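
A small sketch of that normalization, with made-up names and building on the format() example above: whatever shape the caller passes in, downstream code only ever sees one object-hash.

// Normalize string / function / array inputs into a single object-hash
function normalizeFormatter(formatter) {
    if (_.isString(formatter) || _.isFunction(formatter)) {
        return { formatters: [formatter] };
    }
    if (_.isArray(formatter)) {
        return { formatters: formatter };
    }
    return { formatters: [] };
}

// format() could then just walk normalizeFormatter(f).formatters with _.each,
// applying each entry in turn, with no further type checks on the outer value.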




Use IIFE to compute on the fly



Sometimes you just need a little bit of code to set the value of an option. You can either do it by computing the value separately or do it inline by writing an Immediately Invoked Function Expression (IIFE):



var options = {
  title: (function(){
      var html = '<h1>' + titleText + '</h1>';
      var icons = '<div class="icon-set"><span class="icon-gear"></span></div>';

      return html + icons;
  })(),
  buttons: ['Apply', 'Cancel', 'OK']
};


In the above code there is a little bit of computation for the title text. For simple code like the above, it is sometimes best to have the logic right in there for improved readability.

Thursday, June 14, 2012  |  From Pixel in Gene

The ExpressJS framework is one of the simpler yet very powerful web frameworks for NodeJS.
It provides a simple way to expose GET / POST endpoints on your web application, which then serves
the appropriate response. Getting started with ExpressJS is easy and the Guides on the
ExpressJS website are very well written to make you effective in short order.


Moving towards a flexible app structure



When you have a simple app with a few endpoints, it is easy to keep everything
self-contained right inside of the top-level app.js. However as you start
building up more GET / POST endpoints, you need to have an organization scheme
to help you manage the complexity. As a simple rule,



When things get bigger, they need to be made smaller ;-)


Fortunately, several smart folks have figured this out earlier and have come up
with approaches that are wildly successful. Yes, I am talking about Rails and
the principle of “Convention over Configuration”. So let's apply them to our
constantly growing app.


Route management



Most of the routes (aka restful endpoints) that you
expose on your app can be logically grouped together, based on a feature. For
example, if you have some endpoints such as:



  • /login
  • /login/signup
  • /login/signup/success
  • /login/lostpassword
  • /login/forgotusername


… you can try grouping them under the “login” feature. Similarly you may have other endpoints
dedicated to handle other workflows in your app, like uploading content, creating users, editing
content, etc. These kinds of routes naturally fit into a group, and that's the first cue for breaking them apart. As a first step, you can put the logically related GET / POST endpoints in
their own file, eg: login.js. Since you may have several groups of routes, you will end up with
lots of route files.



Putting all of these files at the top level is definitely going to cause clutter. So to simplify this further, put all of these files into a sub-folder, eg: /routes. The project structure now looks much cleaner:



project
  |- routes
  |   |- login.js
  |   |- create_users.js
  |   |- upload.js
  |   |- edit_users.js
  |- app.js


Since we are working with NodeJS, each file becomes a module and the objects in the module can be
exposed via the exports object. We can establish a simple protocol that each route module must
have an init function which we call from app.js, passing in the necessary context for the route.
In case of the login this could look like so:



Routes in login.js:
function init(app) {
  
  app.get('/login', function (req, res){

  });

  app.get('/login/signup', function (req, res){

  });

  app.get('/login/signup/success', function (req, res){

  });

  app.get('/login/lostpassword', function (req, res){

  });

  app.get('/login/forgotusername', function (req, res){

  });

}


If you are using a recent version of ExpressJS, 2.5.8 as of this writing, the command-line
interface provides a way to quickly generate the express app. If you type express [options]
name-of-the-app
, it will generate a folder named name-of-the-app in the current working directory. Not surprisingly, express creates the /routes folder for you, which is already taking you in the right direction. I only learnt this recently and have so far been doing the hard work of starting from scratch each time. Sometimes spending a little more time on the manual helps! RTFM FTW.



Once we have the route files as described, it is easy to load them from app.js. Using the filesystem module we can quickly load each module and call init() on each one of them. We do this before the app is started. The app.js skeleton looks like so:



App skeleton, app.js:
var fs = require('fs'),
    path = require('path');

var RouteDir = 'routes',
    files = fs.readdirSync(RouteDir);

files.forEach(function (file) {
    var filePath = path.resolve('./', RouteDir, file),
        route = require(filePath);
    route.init(app);
});


Now we can just keep adding more routes, grouped in their own files, and continue to build several endpoints without severely complicating app.js. The app.js file now follows the Open-Closed Principle (app.js is open for extension but closed for modification).


In short…



As you can see, it is actually a simple idea, but when applied to other parts of your application, it can substantially reduce the maintenance overhead. So in summary:



  • Establish conventions to standardize a certain aspect of the program. In our case it was routes.
  • Group related items into their own module
  • Collect the modules into a logical folder and load from that folder

Sunday, May 6, 2012  |  From Pixel in Gene

It's been a while since I posted anything on this blog. Thought I'd break the calm with a quick post about my recent sketch.



I generally use Autodesk SketchBook Pro (SBP) on my Mac for the initial doodling. I then develop a fairly finished sketch before importing it into Photoshop for any post-processing. Luckily SBP saves the files in PSD format, making it easy to do the Photoshop import. The following sketch was entirely done in SBP:



Rain and Tears



This was done in about 30 mins as a quick sketch to demonstrate the use of SBP and a Wacom tablet to a close friend. He was quite impressed and immediately ordered a bunch of items, including a Wacom Bamboo stylus for the iPad. I guess marketing wouldn’t be a bad alternate career!



BTW, the sketch is called Rain and Tears.
Rain and Tears - Tiles

Tuesday, February 21, 2012  |  From Pixel in Gene

It’s going to be a rather long post, so if you want to jump around, here are your way points:



  1. First steps

    1. A path for the slice
    2. Animating the pie-slice
  2. Raising the level of abstraction

    1. Custom CALayer, the PieSliceLayer
    2. Rendering the PieSliceLayer
  3. It all comes together in PieView

    1. Managing the slices
  4. Demo and Source code


With a powerful platform like iOS, it is not surprising to have a variety of options for drawing. Picking the one that works best may sometimes require a bit of experimentation. Case in point: a pie chart whose slices had to be animated as the values changed over time. In this blog post, I would like to take you through the various stages of my design process before I ended up with something close to what I wanted. So let's get started.




First steps



Let's quickly look at the array of options that we have for building up graphics in iOS:



  • Use the standard Views and Controls in UIKit and create a view hierarchy
  • Use the UIAppearance protocol to customize standard controls
  • Use UIWebView and render some complex layouts in HTML + JS. This is a surprisingly viable option for certain kinds of views
  • Use UIImageView and show a pre-rendered image. This is sometimes the best way to show a complex graphic instead of building up a series of vectors. Images can be used more liberally in iOS and many of the standard controls even accept an image as parameter.
  • Create a custom UIView and override drawRect:. This is like the chain-saw in our toolbelt. Used wisely it can clear dense forests of UI challenges.
  • Apply masking (a.k.a. clipping) on vector graphics or images. Masking is often underrated in most toolkits but it does come very handy.
  • Use Core Animation Layers: CALayer with shadows, cornerRadius or masks. Use CAGradientLayer, CAShapeLayer or CATiledLayer
  • Create a custom UIView and render a CALayer hierarchy


As you can see, there are several ways in which we can create an interactive UI control. Each of these options sits at a different level of abstraction in the UI stack. Choosing the right combination can thus be an interesting thought-exercise. As one gains more experience, picking the right combination becomes more obvious and also a lot faster.




A path for the slice



With that quick overview of the UI options in iOS, let's get back to our problem of building an animated Pie Chart. Since we are talking about animation, it is natural to think about Core Animation and CALayers. In fact, the choice of a CAShapeLayer with a path for the pie-slice is a good first step. Using the UIBezierPath class is easier than making a bunch of CGPathXXX calls.



-(CAShapeLayer *)createPieSlice {
  CAShapeLayer *slice = [CAShapeLayer layer];
  slice.fillColor = [UIColor redColor].CGColor;
  slice.strokeColor = [UIColor blackColor].CGColor;
  slice.lineWidth = 3.0;
  
  CGFloat angle = DEG2RAD(-60.0);
  CGPoint center = CGPointMake(100.0, 100.0);
  CGFloat radius = 100.0;
  
  UIBezierPath *piePath = [UIBezierPath bezierPath];
  [piePath moveToPoint:center];
  
  [piePath addLineToPoint:CGPointMake(center.x + radius * cosf(angle), center.y + radius * sinf(angle))];
  
  [piePath addArcWithCenter:center radius:radius startAngle:angle endAngle:DEG2RAD(60.0) clockwise:YES];
  
//   [piePath addLineToPoint:center];
  [piePath closePath]; // this will automatically add a straight line to the center
  slice.path = piePath.CGPath;

  return slice;
}


  • The path consists of two radial lines originating at the center of the circle, with an arc between the end-points of the lines
  • The angles in the call to addArcWithCenter use the following unit-coordinate system:


Unit Coordinates



  • DEG2RAD is a simple macro that converts from degrees to radians
  • When rendered the pie slice looks like below. The background gray circle was added to put the slice in the context of the whole circle.


UIBezierPath Render




Animating the pie-slice



Now that we know how to render a pie-slice, we can start looking at animating it. When the angle of the pie-slice changes we would like to smoothly animate to the new slice. Effectively the pie-slice will grow or shrink in size, like a radial fan of cards spreading or collapsing. This can be considered as a change in the path of the CAShapeLayer. Since CAShapeLayer naturally animates changes to the path property, we can give it a shot and see if that works. So, let’s say, we want to animate from the current slice to a horizontally-flipped slice, like so:



UIBezierPath Render



To achieve that, let's refactor the code a bit and move the path creation into its own method.



-(CGPathRef)createPieSliceWithCenter:(CGPoint)center
              radius:(CGFloat)radius
              startAngle:(CGFloat)degStartAngle
              endAngle:(CGFloat)degEndAngle {
  
  UIBezierPath *piePath = [UIBezierPath bezierPath];
  [piePath moveToPoint:center];
  
  [piePath addLineToPoint:CGPointMake(center.x + radius * cosf(DEG2RAD(degStartAngle)), center.y + radius * sinf(DEG2RAD(degStartAngle)))];
  
  [piePath addArcWithCenter:center radius:radius startAngle:DEG2RAD(degStartAngle) endAngle:DEG2RAD(degEndAngle) clockwise:YES];
  
  // [piePath addLineToPoint:center];
  [piePath closePath]; // this will automatically add a straight line to the center

  return piePath.CGPath;
}

-(CAShapeLayer *)createPieSlice {
  
  CGPoint center = CGPointMake(100.0, 100.0);
  CGFloat radius = 100.0;

  CGPathRef fromPath = [self createPieSliceWithCenter:center radius:radius startAngle:-60.0 endAngle:60.0];
  CGPathRef toPath = [self createPieSliceWithCenter:center radius:radius startAngle:120.0 endAngle:-120.0];

  CAShapeLayer *slice = [CAShapeLayer layer];
  slice.fillColor = [UIColor redColor].CGColor;
  slice.strokeColor = [UIColor blackColor].CGColor;
  slice.lineWidth = 3.0;
  slice.path = fromPath;

  
  CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"path"];
  anim.duration = 1.0;
  
  // flip the path
  anim.fromValue = (__bridge id)fromPath;
  anim.toValue = (__bridge id)toPath;
  anim.removedOnCompletion = NO;
  anim.fillMode = kCAFillModeForwards;
  
  [slice addAnimation:anim forKey:nil];
  return slice;
}


In the refactored code, createPieSlice: just calls the createPieSliceWithCenter:radius:startAngle:endAngle function for the from and to-paths and sets up an animation between these two paths. In action, this looks like so:



Path Animation



Yikes! That is definitely not what we expected. CAShapeLayer is morphing the paths rather than growing or shrinking the pie slices. Of course, this means we need to adopt stricter measures for animating the pie slices.




Raising the level of abstraction



Clearly CAShapeLayer doesn’t understand pie-slices and has no clue about how to animate a slice in a natural manner. We definitely need more control around how the pie slice changes. Luckily we have an API that gives a hint at the kind of abstraction we need: a pie slice described in terms of {startAngle, endAngle}. This way our parameters are more strict and not as flexible as the points of a bezier path. By making these parameters animatable, we should be able to animate the pie-slices just the way we want.



Applying this idea to our previous animation example, the path can be said to be changing from {-60.0, 60.0} to {120.0, -120.0}. By animating the startAngle and endAngle, we should be able to make the animation more natural. In general, if you find yourself tackling a tricky problem like this, take a step back and check if you are at the right level of abstraction.




Custom CALayer, the PieSliceLayer



If a CAShapeLayer can’t do it, we probably need our own custom CALayer. Let’s call it the PieSliceLayer and give it two properties: … you guessed it… startAngle and endAngle. Any change to these properties will cause the custom layer to redraw and also animate the change. This requires following a few standard procedures as prescribed by Core Animation Framework.



  • Firstly don’t @synthesize the animatable properties and instead mark them as @dynamic. This is required because Core Animation does some magic under the hood to track changes to these properties and call appropriate methods on your layer.


PieSliceLayer.h:
#import <QuartzCore/QuartzCore.h>

@interface PieSliceLayer : CALayer


@property (nonatomic) CGFloat startAngle;
@property (nonatomic) CGFloat endAngle;

@property (nonatomic, strong) UIColor *fillColor;
@property (nonatomic) CGFloat strokeWidth;
@property (nonatomic, strong) UIColor *strokeColor;
@end




PieSliceLayer.m:
#import "PieSliceLayer.h"

@implementation PieSliceLayer

@dynamic startAngle, endAngle;
@synthesize fillColor, strokeColor, strokeWidth;

...

@end


  • Override actionForKey: and return a CAAnimation that prepares the animation for that property. In our case, we will return an animation for the startAngle and endAngle properties.

  • Override initWithLayer: to copy the properties into the new layer. This method gets called for each frame of animation. Core Animation makes a copy of the presentationLayer for each frame of the animation. By overriding this method we make sure our custom properties are correctly transferred to the copied-layer.

  • Finally we also need to override needsDisplayForKey: to tell Core Animation that changes to our startAngle and endAngle properties will require a redraw.



PieSliceLayer.m:
-(id<CAAction>)actionForKey:(NSString *)event {
  if ([event isEqualToString:@"startAngle"] ||
      [event isEqualToString:@"endAngle"]) {
      return [self makeAnimationForKey:event];
  }
  
  return [super actionForKey:event];
}

- (id)initWithLayer:(id)layer {
  if (self = [super initWithLayer:layer]) {
      if ([layer isKindOfClass:[PieSliceLayer class]]) {
          PieSliceLayer *other = (PieSliceLayer *)layer;
          self.startAngle = other.startAngle;
          self.endAngle = other.endAngle;
          self.fillColor = other.fillColor;

          self.strokeColor = other.strokeColor;
          self.strokeWidth = other.strokeWidth;
      }
  }
  
  return self;
}

+ (BOOL)needsDisplayForKey:(NSString *)key {
  if ([key isEqualToString:@"startAngle"] || [key isEqualToString:@"endAngle"]) {
      return YES;
  }
  
  return [super needsDisplayForKey:key];
}


With that we now have a custom PieSliceLayer that animates changes to the angle-properties. However the layer does not display any visual content. For this we will override the drawInContext: method.




Rendering the PieSliceLayer



Here we draw the slice just the way we did earlier. Instead of using UIBezierPath, we now go with the Core Graphics calls. Since the startAngle and endAngle properties are animatable and also marked for redraw, this layer will be rendered each frame of the animation. This will give us the desired animation when the slice changes its inscribed angle.



PieSliceLayer.m:
-(void)drawInContext:(CGContextRef)ctx {
  
  // Create the path
  CGPoint center = CGPointMake(self.bounds.size.width/2, self.bounds.size.height/2);
  CGFloat radius = MIN(center.x, center.y);
  
  CGContextBeginPath(ctx);
  CGContextMoveToPoint(ctx, center.x, center.y);
  
  CGPoint p1 = CGPointMake(center.x + radius * cosf(self.startAngle), center.y + radius * sinf(self.startAngle));
  CGContextAddLineToPoint(ctx, p1.x, p1.y);

  int clockwise = self.startAngle > self.endAngle;
  CGContextAddArc(ctx, center.x, center.y, radius, self.startAngle, self.endAngle, clockwise);

  CGContextClosePath(ctx);
  
  // Color it
  CGContextSetFillColorWithColor(ctx, self.fillColor.CGColor);
  CGContextSetStrokeColorWithColor(ctx, self.strokeColor.CGColor);
  CGContextSetLineWidth(ctx, self.strokeWidth);

  CGContextDrawPath(ctx, kCGPathFillStroke);
}



It all comes together in PieView



When we originally started, we wanted to build a Pie Chart that animated changes to its slices. After some speed bumps we got to a stage where a single slice could be described in terms of start/end angles and have any changes animated.



If we can do one slice, we can do multiples! A Pie Chart is a visualization of an array of numbers, where each number maps to an instance of PieSliceLayer. The size of a slice depends on its relative value within the array. An easy way to get the relative value is to normalize the array and use the normalized value in [0, 1] to arrive at the angle of the slice, i.e. normal * 2 * M_PI. For example, if the normalized value is 0.5, the angle of the slice will be M_PI, or 180°.




Managing the slices



The PieView manages the slices in a way that makes sense for a Pie Chart. Given an array of numbers, the PieView takes care of normalizing the numbers, creating the right number of slices and positioning them correctly in the pie. Since PieView will be a subclass of UIView, we also have the option to introduce some touch interaction later. Having a UIView that hosts a bunch of CALayers is a common approach when dealing with an interactive element like the PieChart.



The PieView exposes a sliceValues property which is an NSArray of numbers. When this property changes, PieView manages the CRUD around the PieSliceLayers. If there are more numbers than slices, PieView will add the missing slices. If there are fewer numbers than slices, it removes the excess. All the existing slices are updated with the new numbers. All of this happens in the updateSlices method.



PieView.h:
#import <UIKit/UIKit.h>

@interface PieView : UIView

@property (nonatomic, strong) NSArray *sliceValues;

-(id)initWithSliceValues:(NSArray *)sliceValues;
@end




PieView.m:
#import "PieView.h"
#import "PieSliceLayer.h"
#import <QuartzCore/QuartzCore.h>

#define DEG2RAD(angle) angle*M_PI/180.0


@interface PieView() {
  NSMutableArray *_normalizedValues;
  CALayer *_containerLayer;
}

-(void)updateSlices;
@end

@implementation PieView
@synthesize sliceValues = _sliceValues;

-(void)doInitialSetup {
  _containerLayer = [CALayer layer];
  [self.layer addSublayer:_containerLayer];
}

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
      [self doInitialSetup];
    }
  
    return self;
}

-(id)initWithCoder:(NSCoder *)aDecoder {
  if (self = [super initWithCoder:aDecoder]) {
      [self doInitialSetup];
  }
  
  return self;
}

-(id)initWithSliceValues:(NSArray *)sliceValues {
  if (self = [super init]) {
      [self doInitialSetup];
      self.sliceValues = sliceValues;
  }
  
  return self;
}

-(void)setSliceValues:(NSArray *)sliceValues {
  _sliceValues = sliceValues;
  
  _normalizedValues = [NSMutableArray array];
  if (sliceValues) {

      // total
      CGFloat total = 0.0;
      for (NSNumber *num in sliceValues) {
          total += num.floatValue;
      }
      
      // normalize
      for (NSNumber *num in sliceValues) {
          [_normalizedValues addObject:[NSNumber numberWithFloat:num.floatValue/total]];
      }
  }
  
  [self updateSlices];
}

-(void)updateSlices {
  
  _containerLayer.frame = self.bounds;
  
  // Adjust number of slices
  if (_normalizedValues.count > _containerLayer.sublayers.count) {
      
      int count = _normalizedValues.count - _containerLayer.sublayers.count;
      for (int i = 0; i < count; i++) {
          PieSliceLayer *slice = [PieSliceLayer layer];
          slice.strokeColor = [UIColor colorWithWhite:0.25 alpha:1.0];
          slice.strokeWidth = 0.5;
          slice.frame = self.bounds;
          
          [_containerLayer addSublayer:slice];
      }
  }
  else if (_normalizedValues.count < _containerLayer.sublayers.count) {
      int count = _containerLayer.sublayers.count - _normalizedValues.count;

      for (int i = 0; i < count; i++) {
          [[_containerLayer.sublayers objectAtIndex:0] removeFromSuperlayer];
      }
  }
  
  // Set the angles on the slices
  CGFloat startAngle = 0.0;
  int index = 0;
  CGFloat count = _normalizedValues.count;
  for (NSNumber *num in _normalizedValues) {
      CGFloat angle = num.floatValue * 2 * M_PI;
      
      NSLog(@"Angle = %f", angle);
      
      PieSliceLayer *slice = [_containerLayer.sublayers objectAtIndex:index];
      slice.fillColor = [UIColor colorWithHue:index/count saturation:0.5 brightness:0.75 alpha:1.0];
      slice.startAngle = startAngle;
      slice.endAngle = startAngle + angle;
      
      startAngle += angle;
      index++;
  }
}
@end
</figure>
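
To see how this comes together, here is a rough, hypothetical sketch of driving PieView from a view controller; only PieView, initWithFrame: and sliceValues come from the code above, while the values and the delayed update are made up for illustration:

<figure class='code'><figcaption>Using PieView (hypothetical sketch)</figcaption>
#import "PieView.h"

// Somewhere inside a UIViewController subclass
- (void)viewDidLoad {
  [super viewDidLoad];

  PieView *pie = [[PieView alloc] initWithFrame:CGRectMake(20, 20, 280, 280)];
  pie.sliceValues = [NSArray arrayWithObjects:
                     [NSNumber numberWithInt:20],
                     [NSNumber numberWithInt:45],
                     [NSNumber numberWithInt:35], nil];
  [self.view addSubview:pie];

  // Assigning a new array later re-runs updateSlices and the slice layers
  // animate from their old angles to the new ones.
  dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
                 dispatch_get_main_queue(), ^{
      pie.sliceValues = [NSArray arrayWithObjects:
                         [NSNumber numberWithInt:5],
                         [NSNumber numberWithInt:30],
                         [NSNumber numberWithInt:65], nil];
  });
}
</figure>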


There is one thing we didn’t do yet, which is enabling some touch interaction. I’ll leave that as a reader exercise for now.
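
If you want a head start on that exercise, one possible approach (purely a sketch, not from the original source) is to convert the touch point into an angle and match it against each slice's start/end angles inside PieView.m:

<figure class='code'><figcaption>Hit-testing a tap to a slice (sketch only)</figcaption>
// Assumes slice angles grow in the same direction as the computed touch angle;
// the radius check (taps outside the pie) is omitted for brevity.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
  CGPoint point = [[touches anyObject] locationInView:self];
  CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));

  CGFloat angle = atan2f(point.y - center.y, point.x - center.x);
  if (angle < 0) angle += 2 * M_PI;   // normalize to [0, 2π)

  for (NSUInteger i = 0; i < _containerLayer.sublayers.count; i++) {
      PieSliceLayer *slice = [_containerLayer.sublayers objectAtIndex:i];
      if (angle >= slice.startAngle && angle < slice.endAngle) {
          NSLog(@"Tapped slice %lu", (unsigned long)i);
          break;
      }
  }
}
</figure>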




Demo and Source code



With all the reading you have done so far, your eyes are probably thirsty for some visuals. Well, treat yourself to the YouTube video and the GitHub source on the side.



Wednesday, December 14, 2011  |  From Pixel in Gene

Unit testing in JavaScript, especially with RequireJS, can be a bit of a challenge. Jasmine, which is our unit testing framework, does not have any out-of-the-box support for RequireJS. I have seen a few ways of integrating RequireJS, but they require hacking the SpecRunner.html file, the main test harness that executes all jasmine tests. That wasn't really an option for us, as we were using a Ruby gem called jasmine to auto-generate this HTML file from our spec files. There is, however, an experimental gem created by Brendan Jerwin that provides RequireJS integration. We did consider that option before ruling it out for lack of official support. After a bit of flailing around, we finally hit upon a little nugget in the core jasmine framework that seemed to provide a solution.


Async tests in Jasmine



For a long time, most of our tests used the standard prescribed procedure in jasmine, which is describe() with a bunch of it()s. This worked well for the most part until we switched to RequireJS as our script loader. Then there was only blood red on our test pages.



Clearly jasmine and RequireJS have no mutual contract, but there is a way to run async tests in jasmine with methods like runs(), waits() and waitsFor(). Out of these, runs() and waitsFor() were the real nuggets, which complement each other when running async tests.



waitsFor() takes in a function that should return true once the work item has completed. Jasmine will keep calling this function until it returns true, with a default timeout of 5 seconds. If the condition is not met by that time, the test is marked as a failure. You can change the error message and the timeout period by passing in additional arguments to waitsFor().
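
For instance (illustration only, not from our test suite), the extra arguments are passed after the latch function; isWorkCompleted() here is the same helper used in the snippet further below:

<figure class='code'><figcaption>waitsFor() with a custom message and timeout (illustrative) </figcaption>
waitsFor(function () {
    return isWorkCompleted();
}, "the work item never completed", 10000);   // fail with this message after 10 seconds
</figure>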



runs() takes in a function that is executed when it is ready to run. If a runs() is preceded by a waitsFor(), it will execute only after the waitsFor() has completed. This is great since it is exactly what we need to make our RequireJS-based tests run correctly. In code, the usage of waitsFor() and runs() looks as shown below. Note that I am using CoffeeScript here for easier readability.




— Short CoffeeScript Primer —

In CoffeeScript, the -> (arrow operator) translates to a function(){} block. Functions can be invoked without parentheses, e.g. foo args is the same as foo(args). The last expression of a function is its return value; thus, () -> 100 becomes function(){ return 100; }.
With this primer, you should be able to follow the code snippets below.
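
As a quick, made-up illustration of those three rules (the equivalent JavaScript is in the comments):

<figure class='code'><figcaption>CoffeeScript to JavaScript (illustrative) </figcaption>
square = (x) -> x * x     # var square = function(x) { return x * x; };
square 4                  # square(4); the last expression is the return value, so this yields 16
</figure>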






<figure class='code'><figcaption>waitsFor() and runs() </figcaption>
    it "should do something nice", ->
        waitsFor ->
          isWorkCompleted()

        runs ->
            completedWork().doSomethingNice()
  
</figure>

Jasmine meets RequireJS





waitsFor() along with runs() holds the key to running our RequireJS based tests. Within waitsFor() we wait for the RequireJS modules to load and return true whenever those modules are available. In runs() we take those modules and execute our test code. Since this pattern of writing tests was becoming so common, I decided to capture that into a helper method, called ait().



<figure class='code'><figcaption>Helper method for running RequireJS tests </figcaption>
ait = (description, modules, testFn)->
    it description, ->
        readyModules = []
        waitsFor ->
            require modules, -> readyModules = arguments
            readyModules.length is modules.length # return true only if all modules are ready

        runs ->
            arrayOfModules = Array.prototype.slice.call readyModules
            testFn(arrayOfModules...)
</figure>


If you are wondering about the name ait(), it is just to keep up with the spirit of jasmine methods like it for a test case and xit for an ignored test case. Hence ait, which stands for "async it". This method takes care of waiting for the RequireJS modules to load (they are passed in via the modules argument) and then proceeds with the call to testFn inside runs(), which has the real test code. The testFn receives the modules as individual arguments. Note the special CoffeeScript splat syntax, arrayOfModules..., which expands an array into individual arguments.



The ait method really reads as: it waitsFor() the RequireJS modules to load and then runs() the test code


To make things a little clearer, here is an example usage:



<figure class='code'><figcaption>Example usage of ait() </figcaption>
describe 'My obedient Model', ->

    ait 'should do something nice', ['obedient_model', 'sub_model'], (ObedientModel, SubModel)->
        subModel = new SubModel
        model = new ObedientModel(subModel)
        expect(model.doSomethingNice()).toEqual "Just did something really nice!"
      
</figure>


The test case, "should do something nice", takes in two modules, obedient_model and sub_model, which resolve to the arguments ObedientModel and SubModel, and then executes the test code. Note that I am relying on the default timeout for the waitsFor() method. So far this works great, but that may change as we build up more tests.

Monday, October 17, 2011  |  From Pixel in Gene

In the world of jQuery, or for that matter any JavaScript library, callbacks are the norm for programming asynchronous tasks. When you have several operations that depend on the completion of some other operation, it is best to handle them as callbacks. At a later point, when the task they depend on completes, all of the registered callbacks are triggered.



This is a simple and effective model and works great for UI applications. With jQuery.Deferred(), this programming model has been codified with a set of utility methods.



$.Deferred() is the entry point for dealing with deferred operations. It creates a "promise" (a.k.a. a Deferred object) that triggers all the registered done() or then() callbacks once the Deferred object is resolved via resolve(). This follows the CommonJS specification for Promises. I am not going to cover all the details of $.Deferred(), since the jQuery docs do a much better job. Instead, I'll jump right into the main topic of this post.
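
As a quick illustration of that behavior (not code from the app): callbacks registered with done() before resolve() are queued, and ones registered afterwards fire immediately with the resolved value.

<figure class='code'><figcaption>$.Deferred() in a nutshell (illustrative) </figcaption>
var task = $.Deferred();

// Registered before resolution: queued until resolve() is called
task.done(function (value) {
    console.log('early callback: ' + value);
});

task.resolve('finished');   // flushes the queued callbacks

// Registered after resolution: fires immediately with the same value
task.done(function (value) {
    console.log('late callback: ' + value);
});
</figure>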


The soup of AMD, $.Deferred and Google Maps





In one of my recent explorations with web apps, the AMD pattern turned out to be extremely useful. AMD, with the RequireJS library, forces a certain structure on your project and makes building large web apps more digestible. Abstractions like the require/define calls allow building apps that are more composable and extensible. It sure is a great way to think about composable JS apps, in contrast to crude <script> tags.
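
For readers who haven't seen AMD before, here is a minimal, made-up example of define()/require(); the module name and contents are purely illustrative:

<figure class='code'><figcaption>A minimal AMD module (illustrative) </figcaption>
// math.js - defines a module with no dependencies
define(function () {
    return {
        double: function (n) { return n * 2; }
    };
});

// elsewhere - require() loads the module before invoking the callback
require(['math'], function (math) {
    console.log(math.double(21));   // 42
});
</figure>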



With these abstractions, it was easier to think of the app as a set of modules. Some modules provide base-level services, while others depend on such service-modules. One particular module, which also happens to be the entry point into the app, was heavily dependent on the Google Maps API. Early on, it was decided to never keep the user waiting for the maps to load and to allow interaction right from the get-go. This meant that users could do some map-related tasks even before the Maps API had loaded. Although this felt impossible at the outset, it turned out to be quite easy, all thanks to $.Deferred().



The first step was to wrap the Google Maps API in a GoogleMaps object. This hides away the details of loading the maps while allowing the user to carry on with map-related tasks.



<figure class='code'><figcaption>Wrapping the Google Maps API </figcaption>
function GoogleMaps() {
  
}

GoogleMaps.prototype.init = function() {
  
};

GoogleMaps.prototype.createMap = function(container) {
};

GoogleMaps.prototype.search = function(searchText) {
};

GoogleMaps.prototype.placeMarker = function(options) {
};
</figure>


The calls to createMap, search and placeMarker need to be queued up until the Maps API has loaded. We start off with a single $.Deferred() object, _mapsLoaded:



<figure class='code'><figcaption>The deferred object </figcaption>
_mapsLoaded = $.Deferred()

function GoogleMaps() {
  // …
}
</figure>


Then in each of the methods mentioned earlier, we wrap the actual code inside a deferred.done(), like so:



<figure class='code'><figcaption>Wrapping calls in deferred.done() </figcaption>
function GoogleMaps() {
    _mapsLoaded.done(_.bind(function() {
        this.init();
    }, this));
}

GoogleMaps.prototype.init = function() {
};

GoogleMaps.prototype.createMap = function(container) {
    _mapsLoaded.done(_.bind(function() {
      // create the maps object
    }, this));
};

GoogleMaps.prototype.search = function(searchText) {
    _mapsLoaded.done(_.bind(function() {
      // search address
    }, this));
};

GoogleMaps.prototype.placeMarker = function(options) {
    _mapsLoaded.done(_.bind(function() {
      // position marker
    }, this));
};
  
</figure>


With this, we can continue making calls to each of these methods as if the Maps API were already loaded. Each time we make a call, it is pushed into the deferred queue. At some point, when the Maps API has loaded, we call resolve() on the deferred object. This causes the queue of calls to be flushed, resulting in the real work being done.



One aside on the code above is the use of _.bind(function(){}, this). This is required because the callback passed to done() is invoked with a different this context. To keep this pointing at the GoogleMaps instance, we employ _.bind().
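
Here is a tiny, self-contained illustration of that (again, not code from the app):

<figure class='code'><figcaption>Keeping this with _.bind() (illustrative) </figcaption>
var maps = { name: 'GoogleMaps' };
var ready = $.Deferred();

// Without _.bind(), `this` inside the callback would be whatever context
// the deferred supplies, not our maps object.
ready.done(_.bind(function () {
    console.log(this.name);   // 'GoogleMaps'
}, maps));

ready.resolve();
</figure>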



<figure class='code'><figcaption>Resolving the deferred object </figcaption>
window.gmapsLoaded = function() {
    delete window.gmapsLoaded;
    _mapsLoaded.resolve();
};

require(['http://maps.googleapis.com/maps/api/js?sensor=true&callback=gmapsLoaded']);
  
</figure>


The Google Maps API has an async loading option, with a callback name specified as a query parameter on the API URL. When the API loads, it calls this function (in our case, gmapsLoaded). Note that this needs to be a global function, i.e. on the window object. A require() call (from RequireJS) makes it easy to load this script.



Once the callback fires, we finally call resolve() on our deferred object, _mapsLoaded. This triggers the enqueued calls, and the user starts seeing the results of their searches.


Summary



In short, what we have really done is:



  1. Abstract the Google Maps API with a wrapper object
  2. Create a single $.Deferred() object
  3. Queue up calls on the Maps API by wrapping the code inside done()
  4. Use the async loading option of the Google Maps API with a callback
  5. In the maps callback, call resolve() on the deferred object
  6. Make the user happy

Demo



In the following demo, you can start searching for an address even before the map loads. Go ahead and try it. I have deliberately put in a 5-second delay on the call that loads the Maps API, just to give you a flavor of 3G connectivity!





Don’t forget to browse the code in your Chrome Inspector. You do use Chrome, don’t you? ;-)

