Project Description
This is a WPF library containing a powerhouse of controls, frameworks, helpers, tools, etc., for productive WPF development.
If you have ever heard of Drag and Drop with attached properties, ElementFlow or GlassWindow, this is the library that contains all such goodies.
Here is the introductory blog post

At this time the library is available in source-only form and requires .NET Framework 3.5 SP1 or later. To build this project on your machine, you need VS2010.

The library so far ...
  • ImageButton
  • DragDropManager
  • GlassWindow
  • BalloonDecorator
  • ItemSkimmingPanel + SkimmingContextAdorner
  • PennerDoubleAnimation
  • ElementFlow
  • TransitionPresenter
  • GenieAnimation
  • WarpEffect using Pixel Shaders
  • Simple 3D Engine (New)
  • HalfCirclePanel (New)

Contributions
  • CogWheelShape, PolygonShape <Boris Tschirner>

If you wish to contribute or share ideas please direct your mail to pavan@pixelingene.com

Screenshots
Here is a quick way to know what these controls look like: Screenshots


Team
  • Pavan Podila (Blog)

 Pixel in Gene News Feed 
Sunday, May 12, 2013  |  From Pixel in Gene

Alright, this blog has been quiet for a few months. But that doesn’t mean that I have stopped writing.



NetTuts+



On the contrary, I am doing more of it as a contributing author at NetTuts+. The topics are quite varied but are all related to web development in one form or another. A sampling of my articles so far includes:




Thanks to my editor, Jeffrey Way, I was also given the opportunity to create a video course on the latest JS technologies like NodeJS, MongoDB, EmberJS, RequireJS, etc. This should be live soon, and I’ll tweet the link once it’s out.



So, if you find this place a little quiet, be sure to check out NetTuts+.



Saturday, December 22, 2012  |  From Pixel in Gene

A seemingly simple language, yet a tangled mess of complexity. If you are picturing a giant CSS file from your website, you are on the right track. Yes, CSS can start out as a really simple language to learn but can be hard to master. The CSS chaos starts slowly and seems innocuous at first. Over time, as you accumulate features and more variations on your website, you see the CSS explode, and you are soon fighting with the spaghetti monster.



CSS Monster



Luckily this complexity can be brought under control. By following a few simple rules, you can bring order and structure to your growing pile of CSS rules.



CSS Monster



These rules, as laid down by Scalable Modular Architecture for CSS (SMACSS), have a guiding philosophy:



  1. Do one thing well
  2. Be context-free (as far as possible)
  3. Think in terms of the entire website/system instead of a single page
  4. Separate layout from style
  5. Isolate the major concerns for a webpage into layout, modules and states
  6. Follow naming conventions
  7. Be consistent

SMACSS in action



The above principles can be translated in the following ways:



  1. Avoid id-selectors since you can only have one ID on a page. Rely on class, attribute and pseudo selectors
  2. Avoid namespacing classes under an ID. Doing so limits those rules to that section of the page. If the same rules need to be applied to other sections, you will end up adding more selectors to the rule. This seems harmless at the outset but soon becomes a habit. Avoid it with a vengeance.
  3. Modules help in isolating pieces of content on the page. Modules are identified by classes and can be extended with sub-modules. By relying on the fact that you can apply multiple classes to an HTML tag, you can mix rules from modules and sub-modules into a tag.
  4. The page starts out as a big layout container, which is then broken down into smaller layout containers such as header, footer, navigation, sidebar and content. This can go as deep as you wish. For example, the content area will be broken down further on most websites. When defining a layout rule, make sure you don’t mix in presentation rules such as fonts, colors, backgrounds or borders. Layout rules should only contain box-model properties like margins, padding, positioning, width, height, etc.
  5. The content inside a layout container is described via modules. Modules can change containers but always retain their default style. Variations in modules are handled as states and sub-modules. States are applied via class selectors, pseudo selectors or attribute selectors. Sub-modules are handled purely via class selectors.
  6. Naming conventions such as below make it easier to identify the type of rule: layout, module, sub-module or state

    • layout: .l-*
    • state: .is-*
    • module: .<name>
    • sub module: .<name> .<name>-<state>
  7. Be conscious of the depth of applicability. Deeply nested rules tie the CSS to your HTML structure, making it harder to reuse and increasing duplicate rules.

An example to tie it all together



Alright, there are a lot of abstract ideas in here. Let’s do something concrete and build a simple webpage that shows a bunch of contact cards, like below:



Cards



Demo



There are a few things to note here:



  • There are 4 modules: card, pic, company-info and contact-info
  • The card module has a sub-module: card-gov, for contacts who work for the government
  • The card and contact-info modules change layouts via media queries.


/* ----- Picture ----- */
.pic {}
.pic-right {}

/* ----- Card ----- */
.card {}
@media screen and (max-width: 640px) {
  .card {  }
}
.card h4 {}

.card-gov {}
.card-gov .contact-info {}

/* ----- Company Info ----- */
.company-info {}

.company-info-title {}
.company-info-name {}

/* ----- Contact Info ----- */
.contact-info {}
@media screen and (max-width: 640px) {
  .contact-info {  }
}

.contact-info-field {}
.contact-info-field:after {}


Parallels to OO languages



To me the whole idea of SMACSS seems like an application of some of the ideas from OO languages. Here is a quick comparison:



  • Minimize or avoid Singletons: minimize or avoid #id selectors
  • Instances: tags in html which have a class applied
  • Single inheritance: Modules and Sub-modules
  • Mixins: context free rules via states and layouts

Summary



By following a few simple rules, SMACSS can save you a lot of maintenance headaches. It may seem a little alien at first, but after you do a simple project it will become more natural. In the end,
it’s all about increasing productivity and having a worry-free sleep ;-)



Some resources to learn more about SMACSS:




Sunday, October 7, 2012  |  From Pixel in Gene

These are some of the common idioms I find myself using again and again. I am going to keep this as a live document and will update as I discover more useful idioms.



Disclaimer: I’ll be using the Underscore library in all of my examples





Use Array.join to concatenate strings



It is quite common to build HTML in strings, especially when you are writing a custom formatter or just plain building simple views in code. Let’s say you want to output the HTML for 3 buttons:



var html = '<div class="button-set">' +
  '<span class="button">OK</span>' +
  '<span class="button">Apply</span>' +
  '<span class="button">Cancel</span>' +
'</div>';



This works, but consider the alternate version, where you build the strings as elements of an array and join them using Array.join().



var html = [
  '<div class="button-set">',
      '<span class="button">OK</span>',
      '<span class="button">Apply</span>',
      '<span class="button">Cancel</span>',
  '</div>'
].join('');



It reads a little better and can almost look like real HTML with the indentation ;)
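
Taken one step further, the button labels themselves can live in an array, so the markup is derived from data rather than repeated by hand. The sketch below uses plain Array.prototype.map instead of Underscore to stay dependency-free; the labels are just illustrative:

```javascript
// Keep the button labels as data and generate the markup from them.
var labels = ['OK', 'Apply', 'Cancel'];

var html = [
  '<div class="button-set">',
  labels.map(function (label) {
    return '<span class="button">' + label + '</span>';
  }).join(''),
  '</div>'
].join('');
```

Adding a fourth button is now a one-element change to the array.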




Minimize use of if/else blocks by creating object hashes



Let’s say you want to perform a bunch of different actions based on the value of a certain parameter. For example, if you want to show different views based on the weather condition received via an AJAX request, you could do something like this:



function showView(type) {
  if (_.isObject(type)) {
      // read object structure and prepare view
  }
  else if (_.isString(type)) {
      // validate string and show the view
  }
}

function showWeatherView(condition){
  
  if (condition === 'sunny') showView('sunny-01');
  else if (condition === 'partly sunny') showView('sunny-02');
  else if (condition === 'cloudy') showView('cloudy-01');
  else if (condition === 'rain') showView({ type: 'rain-01', style:'dark' })
}

$.get('http://myapp.com/weather/today', function(response){
  
  var condition = response.condition;

  // Show view based on this condition
  showWeatherView(condition);
});



You will notice that in showWeatherView() there is a lot of imperative noise from the if/else statements. This can be removed with an object hash:



function showWeatherView(condition){

  var viewMap = {
      'sunny': 'sunny-01',
      'partly sunny': 'sunny-02',
      'cloudy': 'cloudy-01',
      'rain': { type: 'rain-01', style:'dark' }
  };   

  showView(viewMap[condition]);
}



If you want to support more views, it is easy to add them to the viewMap hash. The general idea is to look at a piece of code and think in terms of data + code: which part is pure data and which part is pure code? If you can make that separation, you can capture the data part as an object hash and write simple code to loop over and process it.
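
One practical wrinkle with the hash approach is a condition that is not in the map. Here is a small sketch of the lookup as a standalone helper; viewForCondition and the 'default-01' fallback name are made up for illustration:

```javascript
// Pure data: condition -> view descriptor
var viewMap = {
  'sunny': 'sunny-01',
  'partly sunny': 'sunny-02',
  'cloudy': 'cloudy-01',
  'rain': { type: 'rain-01', style: 'dark' }
};

// Pure code: look up the descriptor, falling back for unmapped conditions
function viewForCondition(condition) {
  return viewMap.hasOwnProperty(condition) ? viewMap[condition] : 'default-01';
}
```

The hasOwnProperty guard also protects against keys inherited from Object.prototype.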



As a side note, if you want to eliminate the use of if/else and switch statements altogether, you can get Haskell-style pattern matching with the matches library.




Make the parameter value be of any-type



When you are building a simple utility library/module, it is good to expose an option that can be any of string, number, array or function type. This makes the option more versatile and allows for some logic to be executed each time the option value is needed. I first saw this pattern used in libraries like HighCharts and SlickGrid and found it very natural.



Let’s say you want to build a simple formatter. It can accept a string to be formatted using one of the pre-defined formats or use a custom formatter. It can also apply a chain of formatters, when passed as an array. You can have the API for the formatter as below:



function format(formatter, value) {
  var knownFormatters = {
      '###,#': function(value) {},
      'mm/dd/yyyy': function(value) {},
      'HH:MM:ss': function(value) {}
  },
      formattedValue = value;

  if (_.isString(formatter)) {

      // Lookup the formatter from list of known formatters
      formattedValue = knownFormatters[formatter](value);

  }
  else if (_.isFunction(formatter)) {

      formattedValue = formatter(value);

  }
  else if (_.isArray(formatter)) {

      // This could be a chain of formatters
      formattedValue = value;
      _.each(formatter, function(f) {
          formattedValue = format(f, formattedValue); // note the recursive use of format()
      });

  }

  return formattedValue;
}



As an addendum to the multi-type parameter, it is also common to normalize the parameter value to an object hash and remove the type differences.
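
As a sketch of that normalization idea (the option shape here is invented, not from any particular library): convert every accepted type into one canonical object up front, so the rest of the module only deals with a single case:

```javascript
// Normalize a 'title' option that may be a string or a function into
// one canonical shape: { compute: fn }. The shape is illustrative only.
function normalizeTitle(title) {
  if (typeof title === 'function') {
    return { compute: title };
  }
  // Strings (or anything else) are wrapped in a constant function
  return { compute: function () { return String(title); } };
}

var a = normalizeTitle('Hello');
var b = normalizeTitle(function () { return 'Hi there'; });
// Both a.compute() and b.compute() are now called the same way.
```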




Use IIFE to compute on the fly



Sometimes you just need a little bit of code to set the value of an option. You can either do it by computing the value separately or do it inline by writing an Immediately Invoked Function Expression (IIFE):



var options = {
  title: (function(){
      var html = '<h1>' + titleText + '</h1>';
      var icons = '<div class="icon-set"><span class="icon-gear"></span></div>';

      return html + icons;
  })(),
  buttons: ['Apply', 'Cancel', 'OK']
};



In the above code there is a little bit of computation for the title text. For simple code like this, it is sometimes best to have the logic right there inline for improved readability.



Thursday, June 14, 2012  |  From Pixel in Gene

The ExpressJS framework is one of the simpler yet very powerful web frameworks for NodeJS.
It provides a simple way to expose GET / POST endpoints on your web application, which then serves
the appropriate response. Getting started with ExpressJS is easy and the Guides on the
ExpressJS website are very well written to make you effective in short order.


Moving towards a flexible app structure



When you have a simple app with a few endpoints, it is easy to keep everything
self-contained right inside of the top-level app.js. However as you start
building up more GET / POST endpoints, you need an organization scheme
to help you manage the complexity. As a simple rule,



When things get bigger, they need to be made smaller ;-)


Fortunately, several smart folks have figured this out before and have come up
with approaches that are wildly successful. Yes, I am talking about Rails and
the principle of “Convention over Configuration”. So let’s apply it to our
constantly growing app.


Route management



Most of the routes (aka restful endpoints) that you
expose on your app can be logically grouped together, based on a feature. For
example, if you have some endpoints such as:



  • /login
  • /login/signup
  • /login/signup/success
  • /login/lostpassword
  • /login/forgotusername


… you can try grouping them under the “login” feature. Similarly, you may have other endpoints
dedicated to handling other workflows in your app, like uploading content, creating users, editing
content, etc. These kinds of routes naturally fit into a group, and that’s the first cue for
breaking them apart. As a first step, you can put the logically related GET / POST endpoints in
their own file, e.g. login.js. Since you may have several groups of routes, you will end up with
lots of route files.



Putting all of these files at the top level is definitely going to cause
clutter. To simplify things further, put them into a sub-folder, e.g. /routes. The project structure now looks much cleaner:



project
  |- routes
  |   |- login.js
  |   |- create_users.js
  |   |- upload.js
  |   |- edit_users.js
  |- app.js



Since we are working with NodeJS, each file becomes a module, and the objects in the module can be
exposed via the exports object. We can establish a simple protocol: each route module must
have an init function, which we call from app.js, passing in the necessary context for the route.
For login, this could look like so:



Routes in login.js

function init(app) {
  
  app.get('/login', function (req, res){

  });

  app.get('/login/signup', function (req, res){

  });

  app.get('/login/signup/success', function (req, res){

  });

  app.get('/login/lostpassword', function (req, res){

  });

  app.get('/login/forgotusername', function (req, res){

  });

}



If you are using a recent version of ExpressJS (2.5.8 as of this writing), the command-line
interface provides a way to quickly generate the express app. If you type express [options]
name-of-the-app, it will generate a folder named name-of-the-app in the current working directory. Not surprisingly, express creates the /routes folder for you, which already takes you in the right direction. I only learnt this recently and have so far been doing the hard work of starting from scratch each time. Sometimes spending a little more time on the manual helps! RTFM FTW.



Once we have the route files as described, it is easy to load them from app.js. Using the filesystem module we can quickly load each module and call init() on each one of them. We do this before the app is started. The app.js skeleton looks like so:



App skeleton app.js

var fs = require('fs'),
    path = require('path'),
    express = require('express');

// Create the app before loading the routes (Express 2.x API)
var app = express.createServer();

var RouteDir = 'routes',
    files = fs.readdirSync(RouteDir);

files.forEach(function (file) {
    var filePath = path.resolve('./', RouteDir, file),
        route = require(filePath);
    route.init(app);
});



Now we can just keep adding more routes, grouped in their own files, and continue to build endpoints without severely complicating app.js. The app.js file now follows the Open/Closed Principle (app.js is open for extension but closed for modification).


In short…



As you can see, it is actually a simple idea, but when applied to other parts of your application, it can substantially reduce the maintenance overhead. So in summary:



  • Establish conventions to standardize a certain aspect of the program. In our case it was routes.
  • Group related items into their own module
  • Collect the modules into a logical folder and load from that folder


Sunday, May 6, 2012  |  From Pixel in Gene

It’s been a while since I posted anything on this blog. I thought I’d break the calm with a quick post about my recent sketch.



I generally use Autodesk SketchBook Pro (SBP) on my Mac for the initial doodling. I then develop a fairly finished sketch before importing it into Photoshop for any post-processing. Luckily, SBP saves its files in PSD format, making the Photoshop import easy. The following sketch was done entirely in SBP:



Rain and Tears



This was done in about 30 mins as a quick sketch to demonstrate the use of SBP and a Wacom tablet to a close friend. He was quite impressed and immediately ordered a bunch of items, including a Wacom Bamboo stylus for the iPad. I guess marketing wouldn’t be a bad alternate career!



BTW, the sketch is called Rain and Tears.
Rain and Tears - Tiles



Tuesday, February 21, 2012  |  From Pixel in Gene

It’s going to be a rather long post, so if you want to jump around, here are your way points:



  1. First steps

    1. A path for the slice
    2. Animating the pie-slice
  2. Raising the level of abstraction

    1. Custom CALayer, the PieSliceLayer
    2. Rendering the PieSliceLayer
  3. It all comes together in PieView

    1. Managing the slices
  4. Demo and Source code


With a powerful platform like iOS, it is not surprising to have a variety of options for drawing. Picking the one that works best may sometimes require a bit of experimentation. Case in point: a pie chart whose slices had to be animated as the values changed over time. In this blog post, I would like to take you through the various stages of my design process before I ended up with something close to what I wanted. So let’s get started.




First steps



Let’s quickly look at the array of options that we have for building up graphics in iOS:



  • Use the standard Views and Controls in UIKit and create a view hierarchy
  • Use the UIAppearance protocol to customize standard controls
  • Use UIWebView and render some complex layouts in HTML + JS. This is a surprisingly viable option for certain kinds of views
  • Use UIImageView and show a pre-rendered image. This is sometimes the best way to show a complex graphic instead of building up a series of vectors. Images can be used more liberally in iOS, and many of the standard controls even accept an image as a parameter.
  • Create a custom UIView and override drawRect:. This is like the chain-saw in our toolbelt. Used wisely it can clear dense forests of UI challenges.
  • Apply masking (a.k.a. clipping) on vector graphics or images. Masking is often underrated in most toolkits, but it comes in very handy.
  • Use Core Animation Layers: CALayer with shadows, cornerRadius or masks. Use CAGradientLayer, CAShapeLayer or CATiledLayer
  • Create a custom UIView and render a CALayer hierarchy


As you can see, there are several ways in which we can create an interactive UI control. Each of these options sits at a different level of abstraction in the UI stack. Choosing the right combination can thus be an interesting thought exercise. As one gains more experience, picking the right combination becomes more obvious and a lot faster.




A path for the slice



With that quick overview of the UI options in iOS, let’s get back to our problem of building an animated pie chart. Since we are talking about animation, it is natural to think of Core Animation and CALayers. In fact, the choice of a CAShapeLayer with a path for the pie-slice is a good first step. Using the UIBezierPath class is easier than making a bunch of CGPathXXX calls.



-(CAShapeLayer *)createPieSlice {
  CAShapeLayer *slice = [CAShapeLayer layer];
  slice.fillColor = [UIColor redColor].CGColor;
  slice.strokeColor = [UIColor blackColor].CGColor;
  slice.lineWidth = 3.0;
  
  CGFloat angle = DEG2RAD(-60.0);
  CGPoint center = CGPointMake(100.0, 100.0);
  CGFloat radius = 100.0;
  
  UIBezierPath *piePath = [UIBezierPath bezierPath];
  [piePath moveToPoint:center];
  
  [piePath addLineToPoint:CGPointMake(center.x + radius * cosf(angle), center.y + radius * sinf(angle))];
  
  [piePath addArcWithCenter:center radius:radius startAngle:angle endAngle:DEG2RAD(60.0) clockwise:YES];
  
//   [piePath addLineToPoint:center];
  [piePath closePath]; // this will automatically add a straight line to the center
  slice.path = piePath.CGPath;

  return slice;
}



  • The path consists of two radial lines originating at the center of the circle, with an arc between the end-points of the lines
  • The angles in the call to addArcWithCenter use the following unit-coordinate system:


Unit Coordinates



  • DEG2RAD is a simple macro that converts from degrees to radians
  • When rendered the pie slice looks like below. The background gray circle was added to put the slice in the context of the whole circle.


UIBezierPath Render




Animating the pie-slice



Now that we know how to render a pie-slice, we can start looking at animating it. When the angle of the pie-slice changes, we would like to animate smoothly to the new slice. Effectively, the pie-slice will grow or shrink in size, like a radial fan of cards spreading or collapsing. This can be considered a change in the path of the CAShapeLayer. Since CAShapeLayer naturally animates changes to the path property, we can give it a shot and see if that works. So, let’s say we want to animate from the current slice to a horizontally-flipped slice, like so:



UIBezierPath Render



To achieve that, let’s refactor the code a bit and move the path creation into its own method.



-(CGPathRef)createPieSliceWithCenter:(CGPoint)center
              radius:(CGFloat)radius
              startAngle:(CGFloat)degStartAngle
              endAngle:(CGFloat)degEndAngle {
  
  UIBezierPath *piePath = [UIBezierPath bezierPath];
  [piePath moveToPoint:center];
  
  [piePath addLineToPoint:CGPointMake(center.x + radius * cosf(DEG2RAD(degStartAngle)), center.y + radius * sinf(DEG2RAD(degStartAngle)))];
  
  [piePath addArcWithCenter:center radius:radius startAngle:DEG2RAD(degStartAngle) endAngle:DEG2RAD(degEndAngle) clockwise:YES];
  
  // [piePath addLineToPoint:center];
  [piePath closePath]; // this will automatically add a straight line to the center

  return piePath.CGPath;
}

-(CAShapeLayer *)createPieSlice {
  
  CGPoint center = CGPointMake(100.0, 100.0);
  CGFloat radius = 100.0;

  CGPathRef fromPath = [self createPieSliceWithCenter:center radius:radius startAngle:-60.0 endAngle:60.0];
  CGPathRef toPath = [self createPieSliceWithCenter:center radius:radius startAngle:120.0 endAngle:-120.0];

  CAShapeLayer *slice = [CAShapeLayer layer];
  slice.fillColor = [UIColor redColor].CGColor;
  slice.strokeColor = [UIColor blackColor].CGColor;
  slice.lineWidth = 3.0;
  slice.path = fromPath;

  
  CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"path"];
  anim.duration = 1.0;
  
  // flip the path
  anim.fromValue = (__bridge id)fromPath;
  anim.toValue = (__bridge id)toPath;
  anim.removedOnCompletion = NO;
  anim.fillMode = kCAFillModeForwards;
  
  [slice addAnimation:anim forKey:nil];
  return slice;
}



In the refactored code, createPieSlice just calls createPieSliceWithCenter:radius:startAngle:endAngle: for the from- and to-paths and sets up an animation between the two. In action, this looks like so:



Path Animation



Yikes! That is definitely not what we expected. CAShapeLayer is morphing the paths rather than growing or shrinking the pie slice. Of course, this means we need to adopt stricter measures for animating the pie slices.




Raising the level of abstraction



Clearly CAShapeLayer doesn’t understand pie-slices and has no clue about how to animate a slice in a natural manner. We definitely need more control around how the pie slice changes. Luckily we have an API that gives a hint at the kind of abstraction we need: a pie slice described in terms of {startAngle, endAngle}. This way our parameters are more strict and not as flexible as the points of a bezier path. By making these parameters animatable, we should be able to animate the pie-slices just the way we want.



Applying this idea to our previous animation example, the path can be said to be changing from {-60.0, 60.0} to {120.0, -120.0}. By animating the startAngle and endAngle, we should be able to make the animation more natural. In general, if you find yourself tackling a tricky problem like this, take a step back and check if you are at the right level of abstraction.




Custom CALayer, the PieSliceLayer



If a CAShapeLayer can’t do it, we probably need our own custom CALayer. Let’s call it the PieSliceLayer and give it two properties: … you guessed it… startAngle and endAngle. Any change to these properties will cause the custom layer to redraw and also animate the change. This requires following a few standard procedures prescribed by the Core Animation framework.



  • First, don’t @synthesize the animatable properties; instead, mark them as @dynamic. This is required because Core Animation does some magic under the hood to track changes to these properties and call the appropriate methods on your layer.


PieSliceLayer.h
#import <QuartzCore/QuartzCore.h>

@interface PieSliceLayer : CALayer


@property (nonatomic) CGFloat startAngle;
@property (nonatomic) CGFloat endAngle;

@property (nonatomic, strong) UIColor *fillColor;
@property (nonatomic) CGFloat strokeWidth;
@property (nonatomic, strong) UIColor *strokeColor;
@end





PieSliceLayer.m
#import "PieSliceLayer.h"

@implementation PieSliceLayer

@dynamic startAngle, endAngle;
@synthesize fillColor, strokeColor, strokeWidth;

...

@end



  • Override actionForKey: and return a CAAnimation that prepares the animation for that property. In our case, we will return an animation for the startAngle and endAngle properties.

  • Override initWithLayer: to copy the properties into the new layer. This method gets called for each frame of animation. Core Animation makes a copy of the presentationLayer for each frame of the animation. By overriding this method we make sure our custom properties are correctly transferred to the copied-layer.

  • Finally we also need to override needsDisplayForKey: to tell Core Animation that changes to our startAngle and endAngle properties will require a redraw.



PieSliceLayer.m
-(id<CAAction>)actionForKey:(NSString *)event {
  if ([event isEqualToString:@"startAngle"] ||
      [event isEqualToString:@"endAngle"]) {
      return [self makeAnimationForKey:event];
  }
  
  return [super actionForKey:event];
}

- (id)initWithLayer:(id)layer {
  if (self = [super initWithLayer:layer]) {
      if ([layer isKindOfClass:[PieSliceLayer class]]) {
          PieSliceLayer *other = (PieSliceLayer *)layer;
          self.startAngle = other.startAngle;
          self.endAngle = other.endAngle;
          self.fillColor = other.fillColor;

          self.strokeColor = other.strokeColor;
          self.strokeWidth = other.strokeWidth;
      }
  }
  
  return self;
}

+ (BOOL)needsDisplayForKey:(NSString *)key {
  if ([key isEqualToString:@"startAngle"] || [key isEqualToString:@"endAngle"]) {
      return YES;
  }
  
  return [super needsDisplayForKey:key];
}



With that, we now have a custom PieSliceLayer that animates changes to the angle properties. However, the layer does not yet display any visual content. For this we will override the drawInContext: method.




Rendering the PieSliceLayer



Here we draw the slice just the way we did earlier, except with Core Graphics calls instead of UIBezierPath. Since the startAngle and endAngle properties are animatable and also marked for redraw, this layer will be rendered on each frame of the animation. This gives us the desired animation when the slice changes its inscribed angle.



PieSliceLayer.m
-(void)drawInContext:(CGContextRef)ctx {
  
  // Create the path
  CGPoint center = CGPointMake(self.bounds.size.width/2, self.bounds.size.height/2);
  CGFloat radius = MIN(center.x, center.y);
  
  CGContextBeginPath(ctx);
  CGContextMoveToPoint(ctx, center.x, center.y);
  
  CGPoint p1 = CGPointMake(center.x + radius * cosf(self.startAngle), center.y + radius * sinf(self.startAngle));
  CGContextAddLineToPoint(ctx, p1.x, p1.y);

  int clockwise = self.startAngle > self.endAngle;
  CGContextAddArc(ctx, center.x, center.y, radius, self.startAngle, self.endAngle, clockwise);

  CGContextClosePath(ctx);
  
  // Color it
  CGContextSetFillColorWithColor(ctx, self.fillColor.CGColor);
  CGContextSetStrokeColorWithColor(ctx, self.strokeColor.CGColor);
  CGContextSetLineWidth(ctx, self.strokeWidth);

  CGContextDrawPath(ctx, kCGPathFillStroke);
}




It all comes together in PieView



When we originally started, we wanted to build a Pie Chart that animated changes to its slices. After some speed bumps we got to a stage where a single slice could be described in terms of start/end angles and have any changes animated.



If we can do one slice, we can do multiples! A pie chart is a visualization for an array of numbers, where each number is represented by an instance of PieSliceLayer. The size of a slice depends on its relative value within the array. An easy way to get the relative value is to normalize the array and use the normalized value in [0, 1] to arrive at the angle of the slice, i.e. normal * 2 * M_PI. For example, if the normalized value is 0.5, the angle of the slice will be M_PI, or 180°.




Managing the slices



The PieView manages the slices in a way that makes sense for a Pie Chart. Given an array of numbers, the PieView takes care of normalizing the numbers, creating the right number of slices and positioning them correctly in the pie. Since PieView will be a subclass of UIView, we also have the option to introduce some touch interaction later. Having a UIView that hosts a bunch of CALayers is a common approach when dealing with an interactive element like the PieChart.



The PieView exposes a sliceValues property which is an NSArray of numbers. When this property changes, PieView manages the CRUD around the PieSliceLayers. If there are more numbers than slices, PieView will add the missing slices. If there are fewer numbers than slices, it removes the excess. All the existing slices are updated with the new numbers. All of this happens in the updateSlices method.



PieView.h
#import <UIKit/UIKit.h>

@interface PieView : UIView

@property (nonatomic, strong) NSArray *sliceValues;

-(id)initWithSliceValues:(NSArray *)sliceValues;
@end





PieView.m
#import "PieView.h"
#import "PieSliceLayer.h"
#import <QuartzCore/QuartzCore.h>

#define DEG2RAD(angle) ((angle) * M_PI / 180.0)


@interface PieView() {
  NSMutableArray *_normalizedValues;
  CALayer *_containerLayer;
}

-(void)updateSlices;
@end

@implementation PieView
@synthesize sliceValues = _sliceValues;

-(void)doInitialSetup {
  _containerLayer = [CALayer layer];
  [self.layer addSublayer:_containerLayer];
}

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
      [self doInitialSetup];
    }
  
    return self;
}

-(id)initWithCoder:(NSCoder *)aDecoder {
  if (self = [super initWithCoder:aDecoder]) {
      [self doInitialSetup];
  }
  
  return self;
}

-(id)initWithSliceValues:(NSArray *)sliceValues {
  if (self = [super init]) {
      [self doInitialSetup];
      self.sliceValues = sliceValues;
  }
  
  return self;
}

-(void)setSliceValues:(NSArray *)sliceValues {
  _sliceValues = sliceValues;
  
  _normalizedValues = [NSMutableArray array];
  if (sliceValues) {

      // total
      CGFloat total = 0.0;
      for (NSNumber *num in sliceValues) {
          total += num.floatValue;
      }
      
      // normalize
      for (NSNumber *num in sliceValues) {
          [_normalizedValues addObject:[NSNumber numberWithFloat:num.floatValue/total]];
      }
  }
  
  [self updateSlices];
}

-(void)updateSlices {
  
  _containerLayer.frame = self.bounds;
  
  // Adjust number of slices
  if (_normalizedValues.count > _containerLayer.sublayers.count) {
      
      int count = _normalizedValues.count - _containerLayer.sublayers.count;
      for (int i = 0; i < count; i++) {
          PieSliceLayer *slice = [PieSliceLayer layer];
          slice.strokeColor = [UIColor colorWithWhite:0.25 alpha:1.0];
          slice.strokeWidth = 0.5;
          slice.frame = self.bounds;
          
          [_containerLayer addSublayer:slice];
      }
  }
  else if (_normalizedValues.count < _containerLayer.sublayers.count) {
      int count = _containerLayer.sublayers.count - _normalizedValues.count;

      for (int i = 0; i < count; i++) {
          [[_containerLayer.sublayers objectAtIndex:0] removeFromSuperlayer];
      }
  }
  
  // Set the angles on the slices
  CGFloat startAngle = 0.0;
  int index = 0;
  CGFloat count = _normalizedValues.count;
  for (NSNumber *num in _normalizedValues) {
      CGFloat angle = num.floatValue * 2 * M_PI;
      
      NSLog(@"Angle = %f", angle);
      
      PieSliceLayer *slice = [_containerLayer.sublayers objectAtIndex:index];
      slice.fillColor = [UIColor colorWithHue:index/count saturation:0.5 brightness:0.75 alpha:1.0];
      slice.startAngle = startAngle;
      slice.endAngle = startAngle + angle;
      
      startAngle += angle;
      index++;
  }
}
@end



There is one thing we didn’t do yet, which is enabling some touch interaction. I’ll leave that as an exercise for the reader for now.




Demo and Source code



With all that reading you did so far, your eyes are probably thirsty for some visuals. Well, treat yourself to the YouTube video and the github source on the side.





Wednesday, December 14, 2011  |  From Pixel in Gene

Unit testing in Javascript, especially with RequireJS, can be a bit of a challenge. Jasmine, our unit testing framework, does not have any out-of-the-box support for RequireJS. I have seen a few ways of integrating RequireJS, but they require hacking the SpecRunner.html file, the main test harness that executes all jasmine tests. That wasn’t really an option for us, as we were using a ruby gem called jasmine to auto-generate this html file from our spec files. There is, however, an experimental gem created by Brendan Jerwin that provides RequireJS integration. We did consider that option before ruling it out for lack of official support. After a bit of flailing around, we finally hit upon a little nugget in the core jasmine framework that seemed to provide a solution.


Async tests in Jasmine



For a long time, most of our tests used the standard prescribed procedure in jasmine, which is describe() with a bunch of it()s. This worked well for the most part until we switched to RequireJS as our script loader. Then there was only blood red on our test pages.



Clearly jasmine and RequireJS have no mutual contract, but there is a way to run async tests in jasmine with methods like runs(), waits() and waitsFor(). Out of these, runs() and waitsFor() were the real nuggets, which complement each other when running async tests.



waitsFor() takes in a function that should return a boolean when the work item has completed. Jasmine will keep calling this function until it returns true, with a default timeout of 5 seconds. If the worker function doesn’t complete by that time, the test will be marked as a failure. You can change the error message and the timeout period by passing in additional arguments to waitsFor().
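Conceptually, waitsFor() boils down to polling a predicate until it returns true or a timeout expires. A rough sketch of that idea in plain JavaScript (my own illustration, not Jasmine’s actual implementation):

```javascript
// Poll `predicate` every `intervalMs`; call `onReady` once it returns true,
// or `onTimeout` if `timeoutMs` elapses first.
function waitFor(predicate, onReady, onTimeout, timeoutMs, intervalMs) {
  var waited = 0;
  (function poll() {
    if (predicate()) {
      onReady();
    } else if (waited >= timeoutMs) {
      onTimeout();
    } else {
      waited += intervalMs;
      setTimeout(poll, intervalMs);
    }
  })();
}
```

Jasmine wires this up with a default timeout of 5 seconds, which you can override through the extra arguments to waitsFor().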



runs() takes in a function that is called whenever it is ready. If a runs() is preceded by a waitsFor(), it will execute only when the waitsFor() has completed. This is great since it is exactly what we need to make our RequireJS based tests to run correctly. In code, the usage of waitsFor() and runs() looks as shown below. Note that I am using CoffeeScript here for easier readability.




— Short CoffeeScript Primer —

In CoffeeScript, the -> (arrow operator) translates to a function(){} block. Functions can be invoked without parentheses, e.g. foo args is equivalent to foo(args). The last statement of a function is its return value; thus, () -> 100 becomes function(){ return 100; }

With this primer, you should be able to follow the code snippets below.






waitsFor() and runs()

    it "should do something nice", ->
        waitsFor ->
            isWorkCompleted()

        runs ->
            completedWork().doSomethingNice()


Jasmine meets RequireJS





waitsFor() along with runs() holds the key to running our RequireJS based tests. Within waitsFor() we wait for the RequireJS modules to load and return true whenever those modules are available. In runs() we take those modules and execute our test code. Since this pattern of writing tests was becoming so common, I decided to capture that into a helper method, called ait().



Helper method for running RequireJS tests

ait = (description, modules, testFn)->
    it description, ->
        readyModules = []
        waitsFor ->
            require modules, -> readyModules = arguments
            readyModules.length is modules.length # return true only if all modules are ready

        runs ->
            arrayOfModules = Array.prototype.slice.call readyModules
            testFn(arrayOfModules...)



If you are wondering about the name ait(), it is just to keep up with the spirit of jasmine methods like it for a test case and xit for an ignored test case. Hence ait, which stands for “async it”. This method takes care of waiting for the RequireJS modules to load (passed in via the modules argument) and then proceeding with the call to testFn in runs(), which has the real test code. The testFn receives the modules as individual arguments. Note the special CoffeeScript syntax arrayOfModules... for the expansion of an array into individual arguments.



The ait method really reads as: it waitsFor() the RequireJS modules to load and then runs() the test code


To make things a little clear, here is an example usage:



Example usage of ait()

describe 'My obedient Model', ->

    ait 'should do something nice', ['obedient_model', 'sub_model'], (ObedientModel, SubModel)->
        subModel = new SubModel
        model = new ObedientModel(subModel)
        expect(model.doSomethingNice()).toEqual "Just did something really nice!"
      



The test case, should do something nice, takes in two modules, obedient_model and sub_model, which resolve to the arguments ObedientModel and SubModel, and then executes the test code. Note that I am relying on the default timeout for the waitsFor() method. So far this works great, but that may change as we build up more tests.



Monday, October 17, 2011  |  From Pixel in Gene

In the world of jQuery or for that matter, any JavaScript library, callbacks are the norm for programming asynchronous tasks. When you have several operations dependent on the completion of some other operation, it is best to handle them as a callback. At a later point when your dependent task completes, all of the registered callbacks will be triggered.



This is a simple and effective model and works great for UI applications. With jQuery.Deferred(), this programming model has been codified with a set of utility methods.



$.Deferred() is the entry point for dealing with deferred operations. It creates a “promise” (a.k.a. Deferred object) that triggers all the registered done() or then() callbacks once the Deferred object moves into the resolved state, per the CommonJS specification for Promises. I am not going to cover all the details of $.Deferred(), since the jQuery docs do a much better job. Instead, I’ll jump right into the main topic of this post.
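To illustrate the model, here is a stripped-down sketch of how a deferred object behaves (my own simplification, not jQuery’s actual implementation): callbacks registered before resolution are queued, and once resolved, both queued and future done() callbacks fire.

```javascript
// Minimal illustration of the Deferred/promise model.
function MiniDeferred() {
  this._callbacks = [];
  this._resolved = false;
}

// Register a callback: run immediately if already resolved, else queue it.
MiniDeferred.prototype.done = function(fn) {
  if (this._resolved) {
    fn();
  } else {
    this._callbacks.push(fn);
  }
  return this;
};

// Resolve: flush the queue; later done() calls will fire right away.
MiniDeferred.prototype.resolve = function() {
  if (this._resolved) return;
  this._resolved = true;
  this._callbacks.forEach(function(fn) { fn(); });
  this._callbacks = [];
};
```

This queue-then-flush behavior is exactly what makes the maps scenario below work.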


The soup of AMD, $.Deferred and Google Maps





In one of my recent explorations with web apps, the AMD pattern turned out to be extremely useful. AMD, with the RequireJS library, forces a certain structure on your project and makes building large web apps more digestible. Abstractions like the require/define calls allow building apps that are more composable and extensible. It sure is a great way to think about composable JS apps, in contrast to crude <script> tags.



With these abstractions, it was easier to think of the app as a set of modules. Some modules provide base-level services, while others depend on such service-modules. One particular module, which also happens to be the entry point into the app, was heavily dependent on the Google Maps API. Early on, it was decided to never keep the user waiting for the maps to load and to allow interaction right from the get-go. This meant that users could do some map-related tasks even before the maps API had loaded. Although this felt impossible at the outset, it turned out to be quite easy, all thanks to $.Deferred().



The first step was to wrap the Google Maps API in a GoogleMaps object. This hides away the details about loading the maps while allowing the user to carry on with the map related tasks.



Wrapping the google maps API

function GoogleMaps() {
  
}

GoogleMaps.prototype.init = function() {
  
};

GoogleMaps.prototype.createMap = function(container) {
};

GoogleMaps.prototype.search = function(searchText) {
};

GoogleMaps.prototype.placeMarker = function(options) {
};



The calls to createMap, search and placeMarker need to be queued up until the maps API has loaded. We start off with a single $.Deferred() object, _mapsLoaded.



The deferred object

var _mapsLoaded = $.Deferred();

function GoogleMaps() {
  // …
}



Then in each of the methods mentioned earlier, we wrap the actual code inside a deferred.done(), like so:



Wrapping calls in deferred.done()

function GoogleMaps() {
    _mapsLoaded.done(_.bind(function() {
        this.init();
    }, this));
}

GoogleMaps.prototype.init = function() {
};

GoogleMaps.prototype.createMap = function(container) {
    _mapsLoaded.done(_.bind(function() {
      // create the maps object
    }, this));
};

GoogleMaps.prototype.search = function(searchText) {
    _mapsLoaded.done(_.bind(function() {
      // search address
    }, this));
};

GoogleMaps.prototype.placeMarker = function(options) {
    _mapsLoaded.done(_.bind(function() {
      // position marker
    }, this));
};
  



With this, we can continue making calls to each of these methods as if the maps API were already loaded. Each time we make a call, it is pushed into the deferred queue. At some point, when the maps API has loaded, we call resolve() on the deferred object. This causes the queue of calls to be flushed, resulting in real work being done.



One aside on the code above is the use of _.bind(function(){}, this). This is required because the callback to done() changes the context of this. To keep it pointing to the GoogleMaps instance, we employ _.bind().
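A quick illustration of why the binding matters, using a hypothetical object (the native Function.prototype.bind behaves like _.bind here):

```javascript
var maps = {
  name: "GoogleMaps",
  whoAmI: function() { return this.name; }
};

// Invoked with a different context, `this` no longer points at `maps` ...
var result1 = maps.whoAmI.call({ name: "someOtherContext" }); // "someOtherContext"

// ... but a bound function keeps pointing at `maps`, whatever the caller does.
var bound = maps.whoAmI.bind(maps);
var result2 = bound.call({ name: "someOtherContext" }); // still "GoogleMaps"
```

Inside done(), the callback is invoked by the deferred machinery, not through the GoogleMaps instance, which is the same situation as the unbound call above.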



Resolving the deferred object

window.gmapsLoaded = function() {
    delete window.gmapsLoaded;
    _mapsLoaded.resolve();
};

require(['http://maps.googleapis.com/maps/api/js?sensor=true&callback=gmapsLoaded']);
  



The google maps API has an async loading option with a callback name specified in a query parameter of the api URL. When the api loads, it will call this function (in our case: gmapsLoaded). Note that this needs to be a global function, i.e. on the window object. A require call (from RequireJS) makes it easy to load this script.



Once the callback is made, we finally call resolve() on our deferred object: _mapsLoaded. This will trigger the enqueued calls and the user starts seeing the results of his searches.


Summary



In short, what we have really done is:



  1. Abstract the google maps API with a wrapper object
  2. Create a single $.Deferred() object
  3. Queue up calls on the maps API by wrapping the code inside done()
  4. Use the async loading option of google maps api with a callback
  5. In the maps callback, call resolve() on the deferred object
  6. Make the user happy

Demo



In the following demo, you can start searching on an address even before the map loads. Go ahead and try it. I have deliberately put in a 5 second delay on the call to load the maps API, just for a flavor of 3G connectivity!





Don’t forget to browse the code in your Chrome Inspector. You do use Chrome, don’t you? ;-)



Wednesday, September 28, 2011  |  From Pixel in Gene

As I blogged about earlier, Octopress is a great framework for writing blog posts and packs in all the features needed for a code-centric blog. Of course, it goes without saying that the blog also looks awesome, as if designed by a true designer. One of the nicer things about writing posts is that there are rake tasks that do most of the grunt work:



  • rake new_post[“Just type the title of the post here in plain English”]

    This will create a new file under source/_posts called 2011-09-29-just-type-the-title-of-the-post-here-in-plain-english.markdown
  • rake new_page[about]

    This will create a new page under source/about, called index.markdown
  • rake preview

    This sets up a local webserver on http://localhost:4000 and starts monitoring the source folder for any changes. It automatically generates the corresponding HTML/CSS for the Markdown/SASS files respectively.

Speed up



If you have just migrated from a Wordpress blog or have lots of posts under your source/_posts, the rake task that generates the HTML output can take a very long time (several minutes). Obviously if you are just working on one post, there is no need to wait for the entire site to generate. What you are looking for is the rake isolate[partial_post_name] task.



Using rake isolate, you can “isolate” only that post you are working on and move all the others to the source/_stash folder. The partial_post_name parameter is just some words in the file name for the post. For example, if I want to isolate the post from the earlier example, I would use



rake isolate[plain-english]



This will move all the other posts to source/_stash and only keep the 2011-09-29-just-type-the-title-of-the-post-here-in-plain-english.markdown post in source/_posts. You can also do this while you are running rake preview. It will just detect a massive change and only regenerate that one post from then on.


All set to publish



When you are ready to publish your site, just run rake integrate and it will pull all the posts from source/_stash and put them under source/_posts. Now you can run rake generate and then rake deploy to publish your updated blog.



If these seem like a lot of commands to remember, don’t worry, they will become second nature once you do it a few times. As a summary, below are all the tasks that we talked about in this post. The description of each task comes from the Rakefile used by Octopress. I just did a rake -T to get a dump of all the tasks.



  • rake new_post[title]: Begin a new post in source/_posts
  • rake new_page[filename]: Create a new page in source/(filename)/index.markdown
  • rake generate: Generate jekyll site
  • rake deploy: Default deploy task
  • rake preview: Preview the site in a web browser
  • rake isolate[filename]: Move all other posts than the one currently being worked on to a temporary stash location (stash) so regenerating the site happens much quicker
  • rake integrate: Move all stashed posts back into the posts directory, ready for site generation


Monday, September 12, 2011  |  From Pixel in Gene



I have been using Wordpress for a few years now and have been very happy with its features. In the past year, I have tried several times to change the theme on my blog and also to semantify my posts by using Markdown as my de facto style. Of course, none of it happened and I was still using a combination of HTML and the Rich Text Editor for formatting my posts. The more I delayed, the more I realized that there were a lot more reasons NOT to like Wordpress:



  • I wanted to use Markdown to write all my posts and Wordpress forced me to use HTML. I could certainly use some plugins to upload a markdown file which would then convert it into html, but that meant I had to store these markdown files in the wordpress database: less than optimal.
  • Code formatting was not an easy task. I used Live Writer as my primary blog editor and it had a few plugins that could give you inline code highlighting. Although you get a real-time view of your syntax-highlighted code, it internally converted everything to HTML and discarded the original code snippet. Also, you had to be careful about editing around that code snippet, as a simple delete in the wrong place would require redoing the whole process. I felt it was too much work just to get some code highlighting.
  • The backup and local testing scenario was involved. For backup, I could either export all my posts in the WXR format or take a dump of my database. Re-creating my blog locally meant getting an installation of MAMP and then importing the WXR or the database backup. I would have preferred a less intrusive approach to trying out my wordpress site locally.
  • The wordpress technology stack was not very exciting for me. I never really enjoyed PHP and learnt it only to maintain my Wordpress site.

Exploring beyond Wordpress



I had seen a few bloggers use GitHub as their blogging engine with the Jekyll framework to auto-generate their HTML pages from their markdown posts. This was very inviting except for the fact that I had to store all my posts publicly on Github. Even if I purchased a private plan from Github, the storage allocated was quite minimal. GitHub for me was definitely not cost effective.



About this time, I saw a tweet from Matt Gemmell where he migrated from Wordpress to a different engine called Octopress. After reading his blog entry, I realized this was exactly the kind of framework I wanted. Matt has a lot more content than I do, and seeing him convert his blog successfully gave me the courage to do the same. Thus began an almost 10-day journey to convert my Wordpress blog to an Octopress blog!





There are many things to like about Octopress:



  • Write all my posts in Markdown
  • Default theme is very beautiful with rich support for styling via Compass/SASS
  • Modifying the theme is simple, as it’s based on Jekyll. If you haven’t explored Jekyll yet, I strongly encourage you to give it a try.
  • Writing plugins is also quite simple and uses the Liquid templating system
  • My entire blog is contained within a folder from which I can generate the HTML
  • Uploading is taken care of with a rake task to deploy (Did I mention Octopress uses Ruby!)
  • I can preview my site locally with a simple rake preview command that starts up a local web server. It monitors changes to my blog and auto generates the html. This is great for composing posts and testing on the fly.
  • Excellent integration with social features like Google+, Twitter, Disqus, etc.
  • The Octopress tagline says it’s “the blogging framework for hackers” :-)

Migrating Wordpress posts



This was the most elaborate part of the process. Octopress requires that you write all your posts in Markdown or Textile; however, my Wordpress posts were all plain html. So I needed a converter to do this transformation for me. Luckily, on Matt’s blog I read about the exitWP plugin that takes care of this conversion. Although not seamless, exitWP did give me a good starting point, since it converts all the posts to a Jekyll-compliant site.



I did have to go in and change several of my posts that used code snippets. I had been using a variety of code prettifiers over the years and the corresponding HTML was not the best for a Markdown conversion. It did mess up a lot of my posts and I spent several hours touching up the Markdown text.



I also got the chance to fix some of my old Urls that were still pointing to my old blog on Live Spaces. I also decided to make all my internal blog links relative and this required a combination of grep/awk and some manual intervention to fix up all the links. Overall it was a fun exercise experimenting with some bash shell commands and a mix of some ruby scripting.


Migrating Wordpress comments to Disqus





Octopress has excellent integration with Disqus, a hosted comment management system. Disqus works by linking all the comments to a specific Url. As long as your posts maintain the same Url, you can just use Disqus to import all of your comments into your octopress blog. In my case, my comments were all on Wordpress and I had to first import them into Disqus. As it turns out, this wasn’t a straightforward process.



I started by exporting my comments from Wordpress in the standard WXR Xml format. When I tried to import this file into Disqus, it choked, complaining that the <link> tags were missing. The <link> tag contains the url that links the post to the comments. To fix that, I wrote some simple ruby code to update the WXR with the proper <link> tags. The import then went through without issues and all my comment threads got pulled in. The threads, however, were using the raw wordpress url (http://blog.pixelingene.com/?p=123) and I wanted to use a more semantic url of the form http://blog.pixelingene.com/year/month/the-post-slug. To fix this, I created a simple Url map (CSV) and used the Disqus Url Mapping Tool to fix these links.



Finally, with all that done, my comments were safe and sound inside Disqus, with the right permalink-Urls. The next part was to link them up with my blog. Luckily, this is as simple as specifying a disqus_short_name in the Octopress config file!


Url Rewrites and other changes



Now that I had chosen to use semantic permalinks for my posts, I also had to make sure my existing links to the posts continued working. This was a matter of having some redirects set up on my website. I used the standard Apache directives (RewriteCond, RewriteRule) in my .htaccess to permanently redirect all of my old urls.
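As an illustration, a redirect of this shape can be expressed in .htaccess roughly like so (the post id and slug here are made up; the real rules depend on your permalink scheme):

```apache
# Redirect an old WordPress query-string url (e.g. /?p=123) to the new
# semantic permalink. RewriteRule alone cannot match the query string,
# hence the RewriteCond; the trailing "?" drops the old query string.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^p=123$
RewriteRule ^$ /2011/09/the-post-slug/? [R=301,L]
```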



A few other things I had to do include:



  • 404 page
  • Plugins (Liquid Tags) to embed Silverlight apps and Youtube videos
  • Change the feed Url from the default /atom.xml to my FeedBurner url


The one thing I haven’t done yet is modify the theme from the default. I’ll probably get to it one of these days.


Epilogue



So that’s my experience with the Wordpress to Octopress migration. Although not a smooth transition, it wasn’t terribly bad and I actually enjoyed the process, using a variety of tools. I have tried my best to make sure that all existing wordpress links, images, download links, demos, etc. continue working, but there is always that infinitesimal probability of missing something. If something does break, I’ll find out one way or another. Until then, enjoy the new blog!



Thursday, August 25, 2011  |  From Pixel in Gene

In JavaScript, if you set a property on the prototype, it is like a static property that is shared by all instances of the Function. This is common knowledge in JavaScript and quite visible in the code. However, if you are writing all your code in CoffeeScript, this fact gets hidden away by the way you declare properties.



Properties in CoffeeScript

class Site extends Backbone.View
  staticProp: "hello"

  initialize: ->
    @instanceProp = "instance hello"



If you declare properties without the @ symbol, you are effectively creating properties on the prototype of the class. This of course works great if you want a shared property, but it is certainly not the way to go if you want per-instance properties. I missed out the @ symbol and my app went bonkers. This simple oversight cost me a fair bit of time debugging. The right thing to do was to use the @property syntax, since I needed per-instance properties. In the code snippet shown above, staticProp is a property on the prototype of the Site function. @instanceProp is an instance property that will be available on each instance of Site. CoffeeScript translates the above source to the following JavaScript:



Compiled JavaScript output

var Site;
var __hasProp = Object.prototype.hasOwnProperty, __extends = function(child, parent) {
  for (var key in parent) { if (__hasProp.call(parent, key)) child[key] = parent[key]; }
  function ctor() { this.constructor = child; }
  ctor.prototype = parent.prototype;
  child.prototype = new ctor;
  child.__super__ = parent.prototype;
  return child;
};
Site = (function() {
  __extends(Site, Backbone.View);
  function Site() {
    Site.__super__.constructor.apply(this, arguments);
  }
  Site.prototype.staticProp = "hello";
  Site.prototype.initialize = function() {
    return this.instanceProp = "instance hello";
  };
  return Site;
})();



As you can see, staticProp is set on Site.prototype and instanceProp is set on the instance (this) of Site. So the takeaway is:



Be careful while declaring properties in CoffeeScript
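The same distinction can be demonstrated in plain JavaScript (hypothetical names, mirroring the snippet above):

```javascript
function Site() {
  this.instanceProp = "instance hello"; // set per instance
}
Site.prototype.staticProp = "hello";    // shared through the prototype

var a = new Site();
var b = new Site();

a.instanceProp = "changed";             // only affects `a`
Site.prototype.staticProp = "changed";  // visible through both `a` and `b`
```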


CoffeeScript offers a lot of syntactic sugar which makes programming in JavaScript a lot more fun. Do check out the website for other interesting features.




Thursday, August 11, 2011  |  From Pixel in Gene

In the previous post we saw how the D3.js library could be used to render tree diagrams. If you haven’t read that post yet, I would encourage reading it as we will be expanding on it in this post. Now that we have a nice tree diagram of a hierarchy, it would be good to [...]

Tuesday, July 19, 2011  |  From Pixel in Gene

In the past few weeks, I have spent some time evaluating some visualization frameworks in Javascript. The most prominents ones include: Javascript InfoVis Tookit, D3 and Protovis. Each of them is feature rich and provides a varieties of configurable layouts. In particular I was impressed with D3 as it gives a nice balance of features [...]

Sunday, July 10, 2011  |  From Pixel in Gene

Of late, I have been building some Html/Javascript apps and exploring a bunch of javascript libraries, including the usual suspects (jQuery, jQuery UI, jQuery template, underscore, etc). The more interesting ones are visualization libraries like d3, isotope, highcharts. In this post, I will focus on a specific scenario in the isotope.js library. Isotope.js Isotope.js is [...]

Wednesday, May 25, 2011  |  From Pixel in Gene

A few days back while I was busy designing some UI for a Silverlight app, I accidentally hit upon this fun hack. If you assign a shared Brush resource to the CaretBrush property of the TextBox control, then you start seeing some crazy blinking-light effects at places where the shared Brush is used. It is [...]

Saturday, March 19, 2011  |  From Pixel in Gene

After having worked full-time for several years in the Corporate world, I have decided to make a career change and jump on to Consulting. I have joined my friends at , where I’ll be working in the Financial district of New York building solutions using Microsoft .Net, C#, WPF, Silverlight and others. I have known [...]

Saturday, March 12, 2011  |  From Pixel in Gene

I have been playing around with Quartz Composer (included as part of the Developer tools installation on Mac OSX) for almost a year. It’s a great tool for creating screen savers, music visualizations and also for quick prototyping of some visual concepts. I personally find the patch-based approach to solving problems quite refreshing and offers [...]

Thursday, October 28, 2010  |  From Pixel in Gene

In this post I want to talk about some interesting ideas regarding a control called TokenizingControl ? What is that you may ask, so lets start with the basics. A Tokenizing control takes in some text, delimited by some character and converts that text to a token, a token that is represented by some UI [...]

Monday, October 25, 2010  |  From Pixel in Gene

The designers in my team use a lot of nested double-Border elements to achieve a nice rounded border-effect around containers. In XAML this looks like so: <Border Background="#FF414141" Padding="3" Width="300" Height="200" CornerRadius="8"> <Border Padding="3" CornerRadius="8"> <Border.Background> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="#FF910000" Offset="1" /> <GradientStop Color="#FFAF5959" /> </LinearGradientBrush> </Border.Background> </Border> </Border> You will notice that there [...]


Last edited May 8, 2010 at 6:42 PM by pavanpodila, version 28