Thursday, July 25, 2013

Vimgrep on steroids - even on Windows

So I was looking at this vim tip for finding text in files from within Vim - while it looks helpful, there are a number of possible improvements:

  1. Why a static binding? Being able to tweak the patterns or the files to search is quite common - so there's much more value if the command is printed on the command line, ready to be edited to your heart's content, or you can just hit Enter to execute the search.
  2. The tip won't work for files without extensions (say .vimrc) - in this case, expand("%:e") returns an empty string.
  3. lvimgrep is cross-platform but slow - let's use MinGW grep with vimgrep too.
  4. And make that MinGW grep integration work on different machines.
It was more of an evening of scratching an itch (a painful one if you're a zero at vimscript :) ). Here's the gist for it - hope someone finds it useful.

Feel free to tweak the mappings - I use the following:

  1. leader+f: normal mode: vimgrep for current word, visual mode: search for current selection
  2. leader+fd: Similar - but look in the directory of the file and below
  3. leader+*: Similar to the above, but use internal grep

Save the file to your .vim folder and source it from .vimrc

    so ~/.vim/grephacks.vim

A few notes:

  1. GNUWIN is an env variable pointing to the folder where you've extracted MinGW findutils, grep and their dependencies
  2. The searches by default work down from whatever vim thinks is your present working directory. I highly recommend vim-rooter if you're using anything like subversion, mercurial or git as vim-rooter automatically looks for a parent folder that contains .git, .hg or .svn (and more - please look it up)

Happy vimming!

Saturday, February 16, 2013

Downloading over an unreliable connection with Wget

This is a part rant, part tip - so bear with me... My broadband connection absolutely sucks over the past week. I upgraded from 2Mbps with a download limit to a 4Mbps with unlimited downloads and since then it has been nothing but trouble... Damn BSNL!! I've probably registered about 30 odd complaints with them to no avail. If there was a Nobel for bad customer service, BSNL would probably win it by a mile. Some examples:
  1. They'll call to find out what the complaint is, and even when I explain what's happening, they hardly hear me out at all.
  2. They call up and say 'We have fixed it at the Exchange' when nothing has changed
  3. They automatically close the complaints :)
Guess they find it too troublesome that someone who's paying for broadband actually expects the said broadband connection to work reliably!

Anyway, Airtel doesn't seem to be any better - they need 10 days to set up a connection and when I was on the phone with them, they didn't seem too interested in increasing their customer count by 1 :).

I also tried calling an ISP called YouBroadband after searching some of the Bangalore forums for good ISP providers. They promised a call in 24 hours to confirm if they have coverage in my area and it was feasible for them to set up the connection and that was 48 hours ago!

At work, I've heard good things about ACTBroadband and they have some ads in TOI as well, but they said they don't have coverage in my area :(.

So how do you download?

Today I needed to download something and doing it from the browser failed each time since my DSL connection would blink out in between!

After ranting and raving and writing the first part above, and still mentally screaming at BSNL, I decided to do something about it... Time for trusty old wget - surely it'll have something?

Turns out that guess was 100% on the money... it took a few tries experimenting with different options, but it finally worked like a charm:

wget -t0 --waitretry=5 -c -T5 url
# where
# -t0            - unlimited retries
# --waitretry=5  - seconds to wait between retries
# -c             - resume partially downloaded files
# -T5            - set all timeouts (connect, read and DNS) to 5 seconds

Sunday, February 03, 2013

Single Page Apps

We released the Scheduler service (cloud hosted cron that does webhooks) on the 18th of Jan. It was our first release (still in beta) and you can sign up for it via the Windows Azure store as an add-on. The upcoming release will have a full portal and the ability to register without going via the Windows Azure portal.

We've been building the user portal for the Scheduler service as a Single Page App (SPA) and I wanted to share some background and insights we've gained.

SPA overview

To review, an SPA is a web app contained in a single page - where 'pages' are nothing but divs being shown/hidden based on the state of the app and user navigation.

The benefits are that you never have a full page refresh at all - essentially, page loads are instantaneous and data is retrieved and shown via AJAX calls. From a UX standpoint, this delivers a 'speedier' experience, since you never see the 'static' portions of your page reload when you navigate around.

All that speediness is great but the downsides are equally important.

SPA - Challenges

  1. Navigation - SPAs by nature break the browser's normal navigation mechanism. Normally, you click a link, the browser fires off a request and updates the URL in the address bar; the response is then fetched and painted. In an SPA, however, a link click is trapped in JS, the state is changed and a different div is shown (with a background AJAX request being launched).
    This breaks Back/Forward navigation, and since the URL doesn't change, bookmarkability is broken to boot.
  2. SEO - SEO also breaks because links are wired to JS handlers and most bots cannot follow such links.
Now, none of this is really new. Gmail was probably the first well-known SPA implementation and that's been around since 2004. What's changed is that there are now better tools and frameworks for writing SPAs. So how do you get around the problems?
  1. Back/Forward nav and bookmarkability: SPAs use hash-fragment navigation - links contain hash fragments. Hash fragments were originally meant for within-page navigation, so while the browser will update the address bar and push an entry onto the history stack, it will not make a request to the server. Client-side routing can listen for changes to the location hash and manipulate the DOM to show the right 'section' of the page.

  2. SEO - Google (and later Bing) support crawling SPA websites provided the links are formatted specifically - see Google's AJAX crawling specification for details.
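To make the hash-routing idea concrete, here's a minimal sketch in plain Javascript - the route names and URL shapes are made up for illustration and aren't from the Scheduler portal. The matching logic is kept as a pure function so it's easy to reason about; in the browser you'd hook it up to the hashchange event.

```javascript
// Hypothetical routes - each maps a hash like "#/jobs/42" to a named 'page'.
var routes = [
  { pattern: /^#\/jobs\/(\d+)$/, name: "jobDetail" },
  { pattern: /^#\/jobs$/,        name: "jobList" },
  { pattern: /^#?\/?$/,          name: "home" }
];

// Pure matching logic: given the current location hash, find the route and
// any captured parameters. Easy to test without a browser.
function matchRoute(hash) {
  for (var i = 0; i < routes.length; i++) {
    var m = routes[i].pattern.exec(hash);
    if (m) return { name: routes[i].name, params: m.slice(1) };
  }
  return { name: "notFound", params: [] };
}

// In the browser you'd wire it up like this (guarded so the sketch runs anywhere):
if (typeof window !== "undefined") {
  window.onhashchange = function () {
    var route = matchRoute(window.location.hash);
    // ...hide all page divs, then show the one corresponding to route.name...
  };
}
```

Libraries like Sammy.js (mentioned below) do essentially this, plus nicer route declaration and parameter handling.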

Why we went the SPA way

When we started out with the Portal, we needed to make some decisions about how to go about it:
  1. The Scheduler REST service is a developer-focused offering and the primary interface for our users is the API itself. While the portal will have Scheduler management features, this is really to give our users a 'manual' interface to the Scheduler. The other important use case for the portal is viewing the history of a task's executions. Given that the API was primary, we wanted to build the UI on top of the APIs to dogfood our API early and often.
  2. It just made sense to have the UI consume the APIs so that we weren't re-writing the same capabilities again just to support the UI.
  3. Getting the portal to work across devices was important. In that sense, going with an approach that reduces page loads makes sense.
  4. We wanted public pages to be SEO friendly - so the SPA experience kicks in only after you login.
  5. Bookmarkability is important and it should be easy to paste/share links within the app.

Tools and frameworks

We evaluated different frameworks for building the SPA. We wrote a thin slice of the portal - a few public pages, a Social login page and a couple of logged in pages for navigation and bookmarkability.
  1. KO+ approach - I'm calling this KO+ as KO is just a library for MVVM binding and we needed a bunch of other libraries for managing the other aspects of the SPA.
    • Knockout.js - MVVM binding
    • Sammy.js - client-side routing
    • Require.js - script dependency management
    • jQuery - general DOM manipulation when we needed it
  2. Angular.js - Google's Angular.js is a full-suite SPA framework that handles all aspects of an SPA
We chose the KO+ approach as there was knowledge and experience of KO on the team. The learning curve is also gentler since each library can be tackled one at a time. While Angular offers a full-fledged SPA framework, it also comes with more complexity to be grappled with and understood - essentially, the 'Angular' way of building apps.

That said, once you get over the initial learning curve, Angular is pleasant to work with, and you don't have to deal with the integration issues that come up when combining different libraries. Given our timelines, though, going with what the team already knew just made sense.

I'll post an update once we have it out of the door and ready for public consumption.

Thursday, December 13, 2012

Rewriting history with Git

What's this about rewriting history?

While developing any significant piece of code, you end up making a lot of incremental advances. Ideally, you'd save your state at each increment with a commit and then proceed. This gives you the freedom to try out approaches, go one way or the other, and at each point have a safe harbor to return to. However, you end up with your history looking messy, and the folks you're collaborating with have to follow your mental drivel as you slowly built up the feature. Now imagine if you could make incremental commits, but before sharing your epic with the rest of the world, clean up your history: reorder commits, drop useless ones, squash a few together (remove those 'oops, missed a change' commits), tidy up your commit messages - and then let it loose on the world! Git's interactive rebase lets you do exactly this!!!

git rebase --interactive to the rescue

Git's magic incantation for rewriting history is git rebase -i. This takes as its argument a commit (or branch) on top of which the rewritten commits will be replayed. Let's see it in operation.

Squashing and reordering commits

Let's say you made two commits, A and B. Then you realize you've missed something that should really have been part of A, so you fix that with an 'oops' commit, C. Your history now looks like A->B->C, whereas you'd like it to look like AC->B. Concretely:

bbfd1f6 C                           # ------> HEAD
94d8c9c B                           # ------> HEAD~1
5ba6c52 A                           # ------> HEAD~2
26de234 Some other commit           # ------> HEAD~3
....
....

You'd like to fix up all commits after 'Some other commit' - that's HEAD~3. Fire up git rebase -i HEAD~3. The HEAD~3 needs some explaining - you made 3 commits, A, B and C, and you'd like to rewrite history on top of the commit just before them, i.e. HEAD~3. The commit you specify as the base of the rebase is not itself included. Alternatively, you could just pick up the SHA1 of that commit from the log and use it in your rebase command. Git will open your editor with something like this:

pick 5ba6c52 A
pick 94d8c9c B
pick bbfd1f6 C
# Rebase 7a0ff68..bbfd1f6 onto 7a0ff68
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

Basically, git is showing you the list of commands it will use to operate on all commits since your starting point, along with instructions on how to pick (p), squash (s), fixup (f) or reword (r) each of your commits. To change the order of history, simply reorder the lines. If you delete a line altogether, that commit is skipped entirely (and if you delete all the lines, the rebase operation is aborted). So here we say that we want to pick A, squash commit C into it and then pick commit B:

pick 5ba6c52 A
squash bbfd1f6 C
pick 94d8c9c B

Save and close the editor and Git will perform the rebase. It will then pop up another editor window allowing you to give a single commit message for AC (helpfully pre-filled with the two original messages from A and C). Once you provide that, the rebase proceeds and your history looks like AC->B, just as you wanted.

Miscellaneous tips

Using GitExtensions

  1. If you use Git Extensions, you can do the rebase though it's not very intuitive. First, select the commit on which you'd like the interactive rebase. Right click and choose 'Rebase on this'.
  2. This opens the rebase window. In this window, click 'Show Options'
  3. In the options, select 'Interactive rebase' and hit the 'Rebase' button on the right
  4. You'll get an editor window populated similarly.

If the editor window comes up blank then the likely cause is that you have both cygwin and msysgit installed and GitExtensions is using the cygwin version of git. Making sure that msysgit is used in GitExtensions will avoid any such problems.

Using history rewriting

Rewrite history only for what you have not pushed. Modifying history for something that's shared with others is going to confuse the hell out of them and cause global meltdown. You've been warned.

Handling conflicts

You could end up with a conflict - in which case, resolve the conflicts and then continue the rebase with git rebase --continue.

Aborting

Sometimes, you just want the parachute to safety in between a rebase. Here, the spell to use is git rebase --abort

Final words

Being able to rewrite history is admittedly a powerful feature. It might even feel a little esoteric at first glance. However, embracing it gives you the best of both worlds - quick, small commits and a clean history. Another, probably more important, effect is that instead of 'waiting to get things in shape' before committing, commits happen all the time. Trying out that ingenious approach that's still taking shape in your head isn't a problem now, since you always have a point in time to go back to in case things don't work out. Being able to work 'messily', commit anytime and be secure in the knowledge that you can fix things up later provides an incredible amount of freedom of expression and security. The mental cycles saved on planning things carefully before you attack your codebase are worth their weight in gold!!!

Wednesday, October 03, 2012

Nexus 7 - First impressions and tips and tricks

So I got my Dad the 8GB Nexus 7. This is an awesome tablet - exactly what a good tablet should be. The UI is buttery smooth and things just fly. The hardware is not a compromise, excellent price point and overall a superb experience.

Of course, there are some things to deal with, like the 8GB storage, lack of mobile data connectivity, lack of expandable storage and no rear camera. These aren't issues at all as far as I'm concerned.

If I'm traveling with the tablet, then I always have the phone's 3G data to tether to using WiFi tethering. The 8GB storage is only an issue if you're playing the heavyweight games or want to carry all your videos or a ton of movies with you. Given the 8GB storage, I'm more than happy to load up a few movies/music before travel. Provided you have a good way to get files/data in and out of the computer and are OK with not carrying your complete library with you always, you don't have to worry about the storage. A camera though would be nice - but then hey - you can't have everything your way :).

File transfer to/from PC

Which brings us to the topic of file transfers to/from your PC. WiFi is really the best way to go - though I couldn't find a way to make WiFi Direct work with Windows 7. So for now, Connectify seems to be the best option. It runs in the background on your PC and makes your PC's wireless card publish its own wireless network. You can connect to this network from your phone, and if you share folders on your PC, you're set to move data around.

Now, on the Android side, ES File Explorer is free and gets the job done from a file management/copying/moving perspective. I also tried File Expert but it's more cumbersome. ES excels at multiple file selection and copying.

Ebooks

The one area where the N7 excels is reading books. The form factor and weight are just right for extended reading sessions. However, Google Play Books doesn't work in India, so you need an alternate app. I tried out Moon+ Reader, FBReader and Reader+ - and of the lot, FBReader was the best. Moon+ has a nicer UI but choked on some of my ebooks. Reader+ didn't get the tags right and felt a little clunky. FBReader provided the smoothest experience of the lot. I'm already through half of my first book - and did not have any issues. I have a decent collection of ebooks on my PC, but once I copied them to the N7, all the metadata was messed up. Editing metadata and grabbing covers is a pain on the tablet and best done on the PC.

This is where Calibre comes in - a full-blown ebook library management app. It does a great job of keeping your ebooks organized and editing their metadata. It can also fetch metadata and covers from Amazon and Google and update your collection. Once you're done, transferring to the N7 is a little tricky. The first time, I just copied the library over to the N7 - but the N7 showed each book thrice. Some troubleshooting later, I found that the best way was to create an export folder and use the 'Connect to folder' feature to mount it as a destination. Then you can select all the books you want and use 'Send to destination in one format' to publish EPUB format to the folder. This generates one .epub file per book with the metadata and covers embedded in it, and you can then copy this folder over to the N7's Books folder using ES File Explorer.

Playing movies on your N7 over WiFi

My movie collection is on XBMC - and XBMC is DLNA/uPnP compatible. Dive into XBMC's system settings and turn on the uPnP/DLNA services. Then, on the N7, you can use uPnPlay. For playing video, it relies on having a video player app installed - I like MX Player. Don't forget to also install the MX Player codec for ARM V7 and to turn on HW decoding in the settings.

Playing movies on your TV from the N7

You won't be doing much of this as there isn't a rear camera - but if you do decide to take a video or pics with the N7's front-facing camera, you can use uPnPlay to project them onto your TV (provided you have a DLNA/uPnP-compatible TV or a compliant media center hooked up to it).
For XBMC, turn on uPnP in settings and you're done. XBMC should be able to discover your tablet and you'll be able to browse and play videos.
If you'd rather use the tablet to control what's played on XBMC, turn on the setting to allow control via uPnP in XBMC's settings. Now, in uPnPlay, you can select XBMC as the 'play to' device, and playing any video/song plays it on the TV.

That's all for now... I'm loving this tablet and the stuff it can do... looks like I'd be buying a few more soon :)

Wednesday, September 26, 2012

Websocket server using Jetty/Cometd

So I just wrote up a Websocket server using CometD/Bayeux. It's a ridiculously simple app - but it went quite a long way in helping me understand the nitty-gritty of putting up a Websocket server with CometD/Bayeux. Thought I'd put it up for reference - it should help in getting a leg up on getting started with CometD.

The sample's up on github at https://github.com/raghur/rest-websocket-sample

Here's how to go about running it:
  1. clone the repo above
  2. run mvn jetty:run
  3. Now browse to http://localhost:8080 to see the front page
  4. There are two parts to the app
    1. A RESTful API at http://localhost:8080/user/{name} - hypothetical user info: GET retrieves a user, PUT creates a user and DELETE, obviously, deletes the user.
    2. The websocket server at localhost:8080/cometd has a broadcast channel at /useractivity which receives events whenever a user is added/deleted. The main page at http://localhost:8080 has a websocket client that updates the page with the user name whenever a user is added or removed.
And here's the nuts and bolts:
  1. BayeuxInitializer - initializes the Bayeux Service and the EventBroadcaster. Puts the EventBroadcaster in the servlet context from where the RESTful service can pick it up to broadcast.
  2. EventBroadcaster - creates a broadcast channel in the ctor. Provides APIs to publish messages on this channel.
  3. HelloService - basic echo service taken from Maven archetype
  4. MyResource - the RESTful resource which responds to GET/PUT/DELETE - nothing major here. If a user is added or deleted, then it pushes a message on the broadcast channel by getting the EventBroadcaster instance from the servlet context.
It's about as simple as you can get (beyond a Hello world or a chat example). Specifically, I wanted a sample where back end changes can be pushed to clients.

Friday, September 21, 2012

Android WordHero - product lessons

So, yesterday I figured that I'm now an addict... fully and totally, to something called WordHero on my phone... it's one of those games where you have a 4x4 grid of letters and you need to find as many words as you can within 2 minutes. Nothing special... and there are tons of look-alikes and also-rans on the Google Play store. I even installed some of them and then removed them...

So what's different? Turns out there are quite a few things - and apart from one, they're all at the detail level. The most significant one is that it's online-only and everyone's solving the same grid at the same time - so you get to see your ranking at the end. No searching for opponents, no setup - just your rank after every game.

Apart from that, the main game idea is the same (form words on a 4x4 grid) so details are the only place where one can innovate... reminds me of Jeff Atwood's post that a product is nothing but a collection of details. So what are these details?
  1. It's online-only. You can play only if you have an Internet connection... otherwise, scoot!
  2. The information level and detail is just right: tracing through the letters highlights the whole word; if you find a word, you see green; wrong word, red; dupe, yellow. At 10s there's a warning, up to 5s - not down to 0... so it warns but doesn't distract. Simple. Effective. Efficient. Brilliant!
Now sample the competition:
  1. Tracing - a line through the letters, shaky squiggly letters when you pass over them and other sorts of UI idiocy; a grid that's too small, a grid that isn't a square, word-check indicators at some other place. Sure, some of this is debatable... especially the bells and whistles. They look great the first time, the second time and a few more times after that. By the time you hit the tenth time (if you do), you start hating them.
  2. Offline mode - this is counterintuitive... in fact, after playing WordHero, I ran off to find a game which had an offline mode. Once I found it though, surprisingly, I did not like it... Turns out that there's little thrill in forming words on a grid; the thrill is in seeing where you stand and whether you're improving.
  3. Timed mode - pretenders to the throne have untimed modes, customizable timers and so on. Didn't work for me - 2 minutes is that absolute sweet spot where you can grab a game anytime... and have that deadline adrenaline rush work for you... I thought I'd do great in the untimed games - but while I scored more, it wasn't significantly more. More importantly, it was missing the fun. Turns out that we want to see where we rank far more than we want to form words :D
So after promising myself one last game at 11 last night and ending up playing until 12:30 AM, I tore myself away from this satanic game. I kept the phone far away to ensure I wouldn't pick it up again in the middle of the night, and started thinking about what makes WordHero tick. There's nothing earth-shaking about the reasons - but the effect of getting them right is surprising:
  1. Figure out what will tickle the right pleasure centers - and optimize like hell for that: This is hard... in WordHero, it's the global rankings per game and the stats... optimizing for this means you take away offline mode entirely. That isn't a small decision - especially when an offline mode is easy to implement and feels like giving the user 'more'. It's tough to argue against, too - but as I've seen myself, something like that would kill the multiplier effect of seeing a large number of people play. Chances are, your users don't know that either - so there's no point asking them. Apple seems to have figured this out very well.
  2. Keep the UI simple and efficient - and show me what I need when I need it: It should look good for the casual user. For power users, it should be efficient and not irritating... so keep all those nice bells and whistles under control.
  3. Keep the options simple - I like options... I like options more than your average Joe does... most of the time, I'm the one finding the options you didn't know were there... but when you're designing a game that's 2:30 minutes from start to finish, I don't want to think about options. More importantly, don't ask me questions about them... just start the damn game...
So does that mean WordHero's perfect? Far from it - but it's successful by anyone's measure. If you're looking for perfection, you won't ever launch :). Some of the stuff I'm sure they'll get to at some point:
  1. Better explanation of the stats
  2. Charts/trends over the stats instead of only the current value
  3. Better explanation of some of the UI color coding on the results screen.

Thursday, September 06, 2012

Google Maps Navigation enabled in India!!

Just came across an awesome piece of news - Google Maps now has turn by turn, voice guided directions officially in India!!

Until now, I used to get the Ownhere mod for Google Maps that enables world navigation - it used to be available on the XDA forums but got taken down once Google frowned on it!

No more of that hassle - just go to Play store and install Maps.

Very cool! Thanks Google.

Tuesday, August 21, 2012

Converting XML to JSON with a few nice touches

During my recent outings in heavyweight programming, one of the things we needed to do was convert a large XML structure from the server into a JSON object on the browser, to make manipulation and inspection easy.

Also, the XML from the server was not the nice kind - the tag names were consistent, but the content was wildly inconsistent. For example, all of the following were received:


<!-- different variations of a particular tag -->
<BgSize>100,23</BgSize>
<BgSize>0,0</BgSize>
<BgSize>,</BgSize>

Ideally, we wanted to parse and validate this node (and all its variations) and convert it to an X,Y pair only if it contained valid data. A lot of these common tags showed up across different entities in the XML, so we wanted all these rules applied centrally and early, rather than having to deal with them at disparate places downstream.

The other reason was that a lot of the nodes had structured data crammed into a single tag - which we ideally wanted parsed into a Javascript object so that we could manipulate it easily:


<!-- xml data with structured content -->
<!-- font, size, color, bold, italic-->
<Font>Arial;Lucida,14,0x0044,True,False</Font>

So that brought up a search for the best way to convert XML to JSON - and of course Stack Overflow had a question. The article in the answer makes for very interesting reading on all the different conditions that have to be handled. The associated script at http://goessner.net/download/prj/jsonxml/ is the solution I picked. There's really not much going on below other than using the xml2json function to convert the XML to a raw JSON object.


@parseXML2Json: (xmlstr) ->
    log xmlstr
    json = $.parseJSON(xml2json($.parseXML(xmlstr)))
    destObj = Utils.__parseTypesInJson(json)
    log "raw and parsed objects", json, destObj
    return destObj

But now for the more interesting part - once the XML is converted to JSON, we need to do our magic on top of it: applying validations and conversions. This is where the Utils.__parseTypesInJson method comes in.

What we're doing here is walking the JSON object recursively. At each step, we keep track of the path of the XML that we have descended into, so that we can apply validations or conversions based on the path. At each step, we also need to check the type of JSON value we're dealing with - undefined, null, string, array or object.

If it's a string, we further delegate to a __parseString function to convert the string to an object if needed.


@__parseTypesInJson: (obj, path = "") ->
    if typeof obj is "undefined"
        return undefined
    else if obj is null
        return null
    else if typeof obj is "string"
        newObj = Utils.__parseString(obj, path)
        validator = _.find Utils.CUSTOM_VALIDATORS, (v) ->
            v.regex.test path
        return validator.fn(newObj) if validator?
        return newObj
    else if Object.prototype.toString.call(obj) is '[object Array]'
        destObj = (Utils.__parseTypesInJson(o, path) for o, i in obj)
        destObj = _.reject destObj, (obj) ->
            obj == null
        return destObj
    else if typeof obj is "object"
        destObj = {}
        destObj[k] = Utils.__parseTypesInJson(obj[k], "#{path}.#{k}") for k of obj
        validator = _.find Utils.CUSTOM_VALIDATORS, (v) ->
            v.regex.test path
        return validator.fn(destObj) if validator?
        return destObj
    else
        return obj


At each step, once the object is formed, we check whether there's a custom validator defined in the array of custom validators. Each validator is a regex and a callback function - if the regex matches the path, the callback is passed the JSON object, which it may manipulate before returning.


@CUSTOM_VALIDATORS = [
    regex: /choice$/
    fn: (obj) ->
        if obj["#text"]?
            return obj
        else
            log "returning null"
            return null
]
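For comparison, the same regex-plus-callback lookup can be sketched in plain Javascript - the names and the pass-through behavior here are illustrative, not lifted from the codebase above.

```javascript
// Each validator pairs a path regex with a callback that may transform
// the node, or reject it by returning null.
var CUSTOM_VALIDATORS = [
  {
    regex: /choice$/,
    fn: function (obj) {
      // Drop <choice> nodes that carry no text content.
      return obj["#text"] != null ? obj : null;
    }
  }
];

// Apply the first validator whose regex matches the node's path;
// nodes with no matching validator pass through unchanged.
function applyValidators(obj, path) {
  for (var i = 0; i < CUSTOM_VALIDATORS.length; i++) {
    if (CUSTOM_VALIDATORS[i].regex.test(path)) {
      return CUSTOM_VALIDATORS[i].fn(obj);
    }
  }
  return obj;
}
```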

The __parseString method, for completeness - you can tweak this to your taste; there's nothing complicated going on in it.


@__parseString: (str, path) ->
    if not str?
        return str
    if _.any(Utils.SKIP_STRING_PARSING_REGEXES, (r) -> r.test path)
        log "Skipping string parsing for:", path, str
        return str
    if /^\d+$/.test str
        return parseInt str
    else if /^\d+,\d+$/.test str
        [first, second] = str.split(",")
        return {"x": parseInt(first), "y": parseInt(second)}
    else if str == ','
        return null
    else if /^true$/i.test str
        return true
    else if /^false$/i.test str
        return false
    else if /^[^,]+,\d+,(0x[0-9a-f]{0,6})?,((True|False),(True|False))?$/i.test str
        log "Matched font: ", str
        return Utils.parseFontSpec(str)
    else
        return str
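Utils.parseFontSpec itself isn't shown above - here's a hypothetical sketch, in plain Javascript, of what it might look like, assuming the 'family;fallback,size,color,bold,italic' layout from the <Font> example earlier.

```javascript
// Hypothetical parser for strings like "Arial;Lucida,14,0x0044,True,False".
// Field names here are assumptions, not the original Utils.parseFontSpec API.
function parseFontSpec(str) {
  var parts = str.split(",");
  return {
    family: parts[0].split(";"),            // e.g. ["Arial", "Lucida"]
    size: parseInt(parts[1], 10),           // point size
    color: parts[2] || null,                // hex string like "0x0044", may be empty
    bold: /^true$/i.test(parts[3] || ""),
    italic: /^true$/i.test(parts[4] || "")
  };
}
```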

Microsoft Releases Git TFS integration tool

Microsoft has released a cross-platform Git-TFS integration tool, Git-TF!! It's definitely a good step and an acknowledgement of the mindshare that Git has.
I took it for a spin - the integration is supposed to be cross-platform, so it should work on Cygwin too. However, the first time I tried it, it did not, and I had to tweak the script a little.

In the script <install folder>/git-tf:

# On cygwin and mingw32, simply run the cmd script, otherwise we'd have to
# figure out how to mangle the paths appropriately for each platform
if [ "$PLATFORM" = "cygwin" -o "$PLATFORM" = "mingw32" ]; then
#exec cmd //C "$0.cmd" "$@"                 #Orig
exec cmd /C "$(cygpath -aw "$0.cmd")" "$@"  #Changed
fi

Anyway, after that, things did seem to work - the only issue is that your Windows domain password is echoed on the Cygwin console :(... Other than that minor irritant, I was able to clone the project and work on it using the Git integration. Going to try it out some more over the next few days and will post if I find anything more. This is definitely a great step from MS - and if it works properly, it will make working with TFS source control much more bearable :D

Friday, August 10, 2012

Coffeescript rocks!

I've been absent from the blog a few weeks. Life got taken over by work - I've been deep in the Javascript jungles, and Coffeescript has been a lifesaver.
Based on my earlier peek at Coffeescript, we went ahead full on with it, and I have to say it's been a pleasant ride for the team. With over 4.7 KLoc of generated Javascript (the Coffeescript source weighing in around 3.7 KLoc including comments), I can now confidently recommend it for any sort of Javascript-heavy development.
I'm going to list down benefits we saw with Coffeescript and hopefully someone else trying to evaluate it might find this useful:
  1. Developers who haven't dived deep into Javascript's prototype-based model find it easier to get up to speed sooner. Yes - once in a while they do get tripped up and then have to look again at what's going on under the covers - but this is normal. The key point is that it's much more productive and enjoyable to use Coffeescript.
  2. The conciseness of Coffeescript definitely goes a long way in improving readability. One of the algorithms implemented was applying a bunch of time-overlap rules. We also used Underscore.js - and between Coffeescript and Underscore.js, the whole routine was within 20 lines, mostly bug free and very easy for new folks to pick up and maintain over time. Correspondingly, the generated JS was much more complicated (though Underscore helped hide some of the loop iteration noise) - and it wouldn't have been too different had we written the JS directly.
  3. Integrating with external frameworks - jquery, jquery ui etc was again painless and simple.
  4. Another benefit was that the easy class structure syntactic sugar helped quickly prototype new ideas and then refine them to production quality. With developers who're still shaky on JS, I doubt the same approach would have worked since they'd have spent cycles trying to get their heads wrapped around JS's prototype based model.
  5. Coffeescript also allows you to split the code to multiple source files and merge all of them before compiling to JS - this allowed us to keep each source file separate and reduce merges required during commits.
  6. Finally, performance is a non-issue - you do have to be a little careful or you might find yourself allocating function objects and returning them when you don't mean to, but this is easily caught in reviews.
One latent doubt I had going into this was the number of times we'd have to jump down to the JS level to debug issues. With a larger Coffeescript codebase spread across multiple files, this was a real concern, since error line numbers wouldn't match the source and we might have had to jump through hoops to fix issues. Luckily, this wasn't a problem at all - over time, for either an error in JS or just inspecting code in the browser, it's easy to map back to the Coffeescript class/function - so you just fix it there and regenerate the JS. Secondly, the generated JS is quite readable - so even when investigating issues, it's quite trivial to drop breakpoints in Chrome and know what's going on.
The one minor irritation: if there's a Coffeescript compile error in the joined file, line number reporting fails and you have to compile each file independently to figure out the error. Easily automated with a script - so that's just being nitpicky.
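That per-file check is easy to script; here's a rough sketch - compile_each and the COMPILER override are my own names, not anything shipped with Coffeescript:

```shell
#!/bin/sh
# Compile each source file on its own so a compile error names the
# offending file directly, instead of a bogus line number in the
# joined output. COMPILER is overridable so the sketch is testable
# without the coffee binary installed.
compile_each() {
    fails=0
    for f in "$@"; do
        if ! ${COMPILER:-coffee -c} "$f" >/dev/null 2>&1; then
            echo "compile error in: $f"
            fails=$((fails + 1))
        fi
    done
    return "$fails"
}
```

After a failed joined build, something like compile_each src/*.coffee points you straight at the broken file.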
Anyway, if you got here looking for advice on using Coffeescript, then you've reached the right place and maybe this post's helped you make up your mind!

Tuesday, July 03, 2012

Media center setup - XBMC-XVBA

I finally got my nettop - AMD E-350 based barebones system. Installed 4G of RAM and the plan was to set it up with XBMCBuntu or XBMC-XvBA. Instead of installing the XBMC-XvBA version directly, I figured that I could start with XBMCBuntu, see how it does and then if necessary move to the XvBA enabled builds.

I don't have a hard drive for the nettop - the plan was to have the system run off an 8Gig pen drive.

Basic Installation - XBMCBuntu

What you need

  1. The nettop with RAM installed.
  2. 2 USB pendrives - One for installation (2GB) and another which is going to act as your HDD (8G)

Steps

  1. Download UNetBootin for windows and the XBMCBuntu iso image
  2. Create a Live USB using UNetBootin: Once you have UNetBootin installed, stick a flash drive in the USB port, start UNetBootin and select the XBMCBuntu iso image as the source distribution iso and the flash drive as the destination.
  3. Boot the nettop using the USB drive: You might have to play around with boot devices and priorities in the BIOS settings to get it to boot from the USB drive. To keep things simple, stick the pendrive into one of the USB2 ports (avoid the USB3)
  4. On the UNetBootin boot menu, you can just try out the XBMCBuntu live image. I did so and things seemed to work well enough for me to do the full install to another USB drive plugged into the system. Note that if you're not able to find the target drive, then just reboot with both the USB drives plugged in - sometimes, newly inserted devices aren't detected.
  5. Install, go through the menus and wait for it to complete.
  6. As you go through the menus, keep in mind to choose a custom partitioning scheme. In my case, I had 4G of RAM and there's no sense in having a swap drive on the pen drive. If you plan on having hibernation support, then use a 2G swap partition (50% of RAM) - else you can skip the swap altogether.
  7. Once done, pull out the installation pen drive and reboot. You should be able to reboot off the USB pendrive that you installed into. The installation pendrive is pretty much done - you won't need it any longer.

XBMCBuntu

At this point, I had XBMCBuntu up and running; however, there were a few problems:

  1. On idle, CPU utilization was very high (~ 60 - 70%) and the unit was running hot.
  2. Display resolution proved troublesome - my LCD's native resolution is 1366x768 but that wasn't available over HDMI.
  3. I was able to get 1360x768 on DVI/D-Sub - but that meant using a separate cable for audio out.

Of these, the high CPU utilization was the biggest worry - so there are a few steps to try:

  1. Within XBMC - set sync to display refresh - always.
  2. Turn off RSS feeds
  3. Tweak .xbmc/userdata/advancedsettings.xml:
<advancedsettings>
    <useddsfanart>true</useddsfanart>
    <cputempcommand>cputemp</cputempcommand>
    <samba>
        <clienttimeout>30</clienttimeout>
    </samba>
    <network>
        <disableipv6>true</disableipv6>
    </network>
    <loglevel hide="false">1</loglevel>
    <gui>
        <algorithmdirtyregions>1</algorithmdirtyregions>
        <visualizedirtyregions>false</visualizedirtyregions>
        <nofliptimeout>0</nofliptimeout>
    </gui>
    <measurerefreshrate>true</measurerefreshrate>
    <videoextensions>
        <add>.dat|.DAT</add>
    </videoextensions>
    <tvshowmatching append="yes">
        <!-- matches title 01/04 episode title and similar.-->
        <regexp>[s]?([0-9]+)[/._ ][e]?([0-9]+)</regexp>
    </tvshowmatching>
    <gputempcommand>/usr/bin/aticonfig --od-gettemperature | grep Temperature | cut -f 2 -d "-" | cut -f 1 -d "." | sed -e "s, ,," | sed 's/$/ C/'</gputempcommand>
</advancedsettings>

Did those, and they dropped the CPU utilization to about 25%, which was quite good. During videos, however, the CPU was still high - because even though XBMCBuntu officially uses hardware acceleration through VAAPI, the support is still spotty.

Getting XvBA

I went over to the XBMC-XvBA installation thread and followed the directions in the first post to add the XBMC-XvBA PPAs. The download took some time, and the XvBA build got installed. Started XBMC and things were much, much better.

sudo apt-add-repository ppa:wsnipex/xbmc-xvba
sudo apt-get update
sudo apt-get install xbmc xbmc-bin    

There are other tweaks that are listed on the XBMC-XvBA installation thread which I also went ahead and applied.

Other tweaks

Optimizing Linux for a flash/pen drive installation

Installing on a pen drive/USB flash drive has its pain points. My boot time was painfully slow (~3.5 minutes). Opening Chromium took forever and even page loads were slow (it would be stuck with the status bar on 'checking cache'...). Also, the incessant writing to disk was probably killing off my pen drive much faster. I ended up doing the following:

  1. Use the noatime and nodiratime flags for the USB drive

    # /etc/fstab
    UUID=39f52ccf-363b-4b6e-abdd-927809618d83 /               ext4    noatime,nodiratime,errors=remount-ro 0       1
  2. Use tmpfs - In memory, reduces writes to disk and is faster. With 4G of RAM, this is a no-brainer.

    # /etc/fstab
    tmpfs /tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
  3. Browsers - use profile-sync-daemon (ported to Ubuntu from Arch Linux) - it will automatically move your browser profile directory from your home folder to tmpfs
  4. Move .xbmc to a NAS/external drive along with your media. It makes a lot more sense to keep your .xbmc folder with your media on an external HDD.
  5. Change to noop or deadline scheduler:

    # Assuming sda is your USB drive. Note that sudo must wrap the whole
    # command - the redirection is performed by the shell, so a plain
    # "sudo echo noop > ..." would fail with permission denied.
    sudo sh -c "echo noop > /sys/block/sda/queue/scheduler"
  6. Change system swappiness. We don't want the OS to use swap drive at all.

    # /etc/sysctl.conf
    vm.swappiness=1

Getting suspend/hibernate to work

I had the greatest trouble here - but was able to get pm-utils working eventually. pm-utils is a framework of shell scripts around suspend/hibernation/wakeup that provides hooks to execute scripts before standby/hibernation and when the computer resumes. First, test if basic suspend/hibernate works:

# check suspend methods supported
cat /sys/power/state
# S3
sudo sh -c "echo mem > /sys/power/state"

If your system goes into standby, then things are good - but it's just a start. In my case, the system would go into standby only the first time after boot. After that, it would go into standby but then resume immediately. It's been asked enough times on Google and I've probably tried all the fixes. The first one is to add a kernel param, acpi_enforce_resources=lax:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_enforce_resources=lax"

After that, make sure to run sudo update-grub. In my case, the magic incantation above didn't help (your mileage may vary), but nothing bad happened so I kept it on. Anyway, I rebooted, suspended and resumed the first time (which works), and took a dump with dmesg > dmesg.1.log. Then I tried to suspend again, and when it came back immediately, I could take another dmesg dump and scan the entries that were new since the first run. It turned out the log had entries related to xhci_hcd - so I decided to unload that module first and then suspend:

sudo modprobe -r xhci_hcd
sudo sh -c "echo mem > /sys/power/state"
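Scanning for log entries that are new since the first run is easy to do mechanically; here's a tiny helper (the name new_lines is mine):

```shell
#!/bin/sh
# Print the lines that appear in the second dmesg capture but not the
# first, e.g.:  new_lines dmesg.1.log dmesg.2.log
new_lines() {
    # diff prefixes added lines with "> "; keep just those, unprefixed
    diff "$1" "$2" | sed -n 's/^> //p'
}
```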

After this, the system was able to go into standby every single time. Now it was time to get pm-utils working. Out of the box, pm-utils came with a config that had a bunch of things I didn't understand (and I doubt they applied to this machine). If standby worked directly, it should have worked through pm-utils too - however, it needed some pushing around before it came to a functional state.

Getting pm-utils to play nice

So now that I had confirmed suspend working, it was time to see why pm-utils was misbehaving. First off, clean up the default configuration: copy /usr/lib/pm-utils/config to /etc/pm/config.d/config and then start editing it:

SLEEP_MODULE="kernel"
# These variables will be handled specially when we load files in
# /etc/pm/config.d.
# Multiple declarations of these environment variables will result in
# their contents being concatenated instead of being overwritten.
# If you need to unload any modules to suspend/resume, add them here.
SUSPEND_MODULES="xhci_hcd"
# If you want to keep hooks from running, add their names  here.
HOOK_BLACKLIST="99_fglrx 99lirc-resume novatel_3g_suspend"

Waking up with the keyboard

If you'd like to wake up with a USB device (e.g. a USB keyboard), you need to find out the USB port your device is connected to. The easiest way is to check the dmesg output, which usually prints this. In my case, my wireless keyboard/trackball is connected on USB3:

# Toggle wakeup for the USB3 entry (see /proc/acpi/wakeup for the list);
# writing to these files needs root, and sudo must wrap the redirection
sudo sh -c "echo USB3 > /proc/acpi/wakeup"
sudo sh -c "echo enabled > /sys/bus/usb/devices/usb3/power/wakeup"

After that, the HTPC could be woken up with a keypress. I haven't found a way to restrict this to the keyboard alone (so that the system doesn't wake up whenever anyone so much as picks up the keyboard) - so for now, I've turned this off. The change won't persist over a reboot - to make it persistent, add the two lines above to /etc/rc.local before the exit 0.
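As a sketch, the resulting /etc/rc.local would look something like this - the USB3 name comes from my machine's wakeup table, and the exact device paths are assumptions, so check yours:

```
#!/bin/sh -e
#
# /etc/rc.local - executed at the end of each multiuser runlevel.
# Re-enable wake-on-USB for the keyboard's port on every boot.
echo USB3 > /proc/acpi/wakeup
echo enabled > /sys/bus/usb/devices/usb3/power/wakeup

exit 0
```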

Fixing up fglrx annoyances (ATI binary driver)

Not much point in an HTPC if the video isn't top quality. And there are a lot of variables involved: your computer hardware, software, drivers, type of connection (HDMI/DVI) and the telly itself. Also, video driver support for ATI on Linux leaves quite a bit to be desired. One of the reasons for going with XBMCBuntu was knowing that there'd be large community support available on ubuntuforums.

Right off the bat, things started at the mildly irritating level. Catalyst Control Center in root mode won't start even though there's a big fat menu item for it. A quick google says the easiest way out is to run gksu amdcccle in the run dialog (Alt-F2).

So where does all this get us

After all this, it's a sea change in the overall experience:

  1. XBMC idles at 15-20% CPU utilization. During playback, it still stays at a comfy 40-50% while playing 720p/1080p videos
  2. Browsers (Chrome and FF) open near instantly; page loads, tab switches etc. feel much nimbler than on my desktop (an AMD 6-core, 12G monster running Win 7 x64)
  3. Total cost - USD 180

More to come

  1. Hibernation support
  2. Torrenting
  3. Scheduled wake up from shutdown/hibernate/suspend state

Saturday, June 30, 2012

Avast! trial expiration kills Internet connection - how bad is that!

So I've used the 'free for personal use' Avast Antivirus at home for the past couple of years. It's been mostly good, though I've had some reservations about it - namely, nag pop-ups and so on. Some months ago (or maybe a year ago?) there was a program update and it wanted me to install 'avast! Internet Security'. Now I had no need for this (I use COMODO firewall, which has been quite good); however, there was no way around it. Avast's update process said I could revert back to the free antivirus version anytime without a re-install or a re-anything!
Not much of an option and you can't blame them for trying to push their products and convert free users to paying ones - so I went ahead with the upgrade. About a fortnight ago, I started getting warnings about 'your trial licenses is about to expire' and so on. The good thing about the internet security product was that it was discreet - in fact, safe to say that I even forgot that I installed it.
Remembering the notices about the trial expiring and reverting back to the free version, I chose to ignore all the warnings till yesterday afternoon, when the wife called me at work about 'internet not working from the home machine'. Now, the wifi dongle on the home PC does once in a while show a 'Limited connection' that's quickly fixed by either disabling and enabling the dongle or unplugging it and putting it back in the USB port. I offered that up as a solution and a few hours later was told that it hadn't fixed the issue.

This morning I finally sat down to see what was up. Turns out the wifi just wouldn't connect. So up comes Device Manager, and under Network Devices I see a whole lot of 'avast! NDIS filter' virtual devices. Opened the Avast! GUI and there are no panels for turning the thing off... It has reverted back to the free version - but has killed my net connection in the process! Not a happy camper at this point - but still not worried, since I figured there had to be tons of users with the same problem and it probably had a simple fix.

Google did not reveal any simple fixes - Avast's community forum had threads like 'help! Avast Internet Security trial expired, no internet connection' and 'Avast Internet Security Trial seems to have affected my internet connection'. The suggestions offered - buy a license, uninstall and re-install Avast etc. - were just not OK. I definitely wasn't the only one affected; it looks like a small but sizable user population was hit. If that was so, Avast! should have done something about it - however, it looks like they don't believe much in that. Agreed, it's a free product and won't merit the level of support you'd get for something you pay for. But:

  1. I did not ask for the installation of the 'Premium' product trial.
  2. There was no option to opt out of the 'trial'
  3. They actively messaged that there's 'nothing to lose' from using the trial.
Given all that, they should have stepped up and either taken care of the issue with an update OR put up steps on how to solve it. It doesn't take that much. Here's how I got back my net connection:
  1. Device manager - Remove all 'Avast!' virtual devices with a right click 'Uninstall'
  2. Restart
  3. No WIFI still... so open 'Network and sharing center-> Change Adapter settings-> Wifi Connection -> properties'. In the 'This connection uses the following items:' list there was one more avast! filter device. Selected and uninstalled that too and restarted again.
  4. Back in business...WIFI is back up and running!
It's time to say goodbye to Avast!. Any recommendations for good, free antivirus solutions?

Wednesday, April 04, 2012

Media center upgrades - part two

So this is a continuation of my last post on my effort to upgrade the media center at home. While I wait for the hardware to arrive, I've been reading through forums and blogs online, and am finding it really hard to get good advice. So I thought it might help to concisely list the situation as it stands, in the hope that it will serve other folks trying to find similar answers.

So what's the fuss all about?

Getting XBMC on Linux with AMD Fusion APUs to work nicely and render hardware-accelerated video. Also, while we're at it, do it booting off a pendrive (i.e. an HDD-less system).

Background

Graphics APIs

To get hardware accelerated video on ATI/AMD hardware on Linux, currently, there are two choices

  1. XvBA - this is AMD's video acceleration API (similar to VDPAU on nVidia). Not very well supported.
  2. VAAPI - this is Intel's API. XBMC Eden is said to work well with VAAPI.

Drivers

  1. Open source Linux drivers for ATI chips lag behind the closed-source ATI proprietary drivers. For HD video, you're pretty much limited to using ATI's proprietary drivers. So, let's emphasize - from now on, 'driver' means ATI Catalyst for Linux.

The Contenders

OpenElec.tv

OpenElec is covered in the earlier post - but essentially you have Fusion-optimized micro builds that can run off an SD card/flash drive. From a video perspective, this should be identical to XBMCBuntu. The upside is that everything is pre-configured, while the downside is that it's pretty limited.

XBMCBuntu

Also covered in my previous post - a lightweight Ubuntu-based distro/LiveCD. XBMC Eden implements VAAPI, and the Catalyst Fusion APU drivers can be used as a backend to provide hardware-accelerated video. There are some cases where this bridging doesn't/may not work well. On the other hand, since this is the officially supported method, it's going to be around and improved upon, and is likely to have more info available in the public domain.

XBMC-XvBA PVR builds

So this is an unofficial build by the community. The promise is that instead of going the VAAPI route, this has direct support for the XvBA API and so offers better performance. The forum thread tracking this is available here. While the build is supposed to be quite usable, from the thread activity it seems it's also heavily under development. The goal is to merge this back to the mainline once it stabilizes.

I plan to go the path of least resistance - OpenElec, then XBMC-XvBA and finally settle on XBMCBuntu - but things might change once I actually get down to it.

Time for the big fat disclaimer - nothing in this post is guaranteed to be correct. This is my read of stuff on the net and it could be wrong. You're welcome to correct it in the comments and I'd be more than happy to fix the post.

Monday, April 02, 2012

Compiling Vim again - Cygwin

Vim installed by Cygwin's setup program does not have Ruby/Python/Perl support enabled by default. As my list of must-have vim plugins has a few which use Ruby and Python, I thought it might be good to do my own Cygwin build of Vim. It turned out to be a little more work than I expected - but that's mostly due to the misleading (at least for me :) ) makefile in the vim source tree called Make_cyg.mak.

Here's how to compile:
  1. Make sure you have python (and ruby, perl and whatever other interpreters you need vim built with) installed.
  2. Do not install vim through cygwin (or uninstall it if you have it)
  3. Download vim source tarball, untar it and go into the vim73/src folder.
  4. Configure

    ./configure --enable-pythoninterp --enable-perlinterp --enable-rubyinterp --enable-gui=no --without-x --enable-multibyte --prefix=/usr
    make && make install
  5. You're off to the races!

Saturday, March 31, 2012

Media center upgrades

I have a small form factor (SFF) machine on the way to take up duties as a media center machine. After waiting for long, finally pulled the trigger on a Foxconn Barebones Book sized system and 4G of RAM. I haven't ordered a hard drive - the plan is to run XBMC completely off a USB drive. As it is, media is on a 1TB external disk and the cost of 2.5" laptop HDDs has gone through the roof.

In terms of software, I've got to figure out which XBMC to use - the contenders are to either install XBMCBuntu or go with one of the specialized builds from OpenElec. I'm still new to both - so will need to do some reading up before I decide.

OpenElec

OpenElec has small-footprint (100MB), customized builds for different chipsets. It's meant to be run from a flash drive - so it has a few optimizations to make sure it doesn't clobber your flash drive. Also, the stable version of OpenElec, based on XBMC 10.0 "Dharma", has native AMD Fusion chipset support. It's also designed to be self-updating; from reading the manuals, it boots right into XBMC, and OpenElec settings are all accessed via an XBMC extension, so you never have to drop down to the Linux machine underneath.

At this point, OpenElec looks really limiting. I would love to run a browser, use the machine for torrenting etc. - and somehow using the XBMC interface for all that doesn't sound too good.

Also, XBMC Eden is supposed to support AMD Fusion natively and OpenElec hasn't been updated yet for Eden (there are nightly builds available though that are based on Eden).

XBMCBuntu

XBMCBuntu is XBMC's official LiveCD - you can use it to install to another USB drive, and provided the system can boot from USB, you're off to the races.
The thing here is that it isn't specific to a 'flash' drive - so there's a small tradeoff in terms of flash drive life. XBMCBuntu is based on Lubuntu 11.10.

Migrating the database

I also have to figure out how to migrate my XBMC database of movie information from Windows XP to the Linux setup - not sure if it's even possible, but it's definitely worth a shot. In any case, if it doesn't work, I'll just let XBMC rebuild its database overnight.
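If it does turn out to be possible, I'd expect it to boil down to copying the Database folder under XBMC's userdata across installs (on XP that lives under %APPDATA%\XBMC\userdata, on Linux under ~/.xbmc/userdata). A hedged sketch - the function name and the assumption that the MyVideos*.db files carry the movie library are mine:

```shell
#!/bin/sh
# migrate_xbmc_db OLD_USERDATA NEW_USERDATA
# Copies the video library databases from one userdata dir to another.
migrate_xbmc_db() {
    mkdir -p "$2/Database"
    # MyVideos*.db is assumed to hold the scraped movie/TV library
    cp "$1"/Database/MyVideos*.db "$2/Database/"
}
```

Thumbnails/cached art would need similar treatment, and XBMC should be stopped on both ends while copying.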

The hardware's supposed to come in the 2nd week of April - can't wait for it :)

Friday, March 23, 2012

Moved to bitbucket

I've been using Git Enterprise for hosting private repositories, since github's free plan doesn't include any private repos. Git Enterprise has worked - but the UI leaves a lot to be desired the few times you actually have to use the web interface.

So the other day, while doing something else, I landed on Bitbucket. Bitbucket is Atlassian's code hosting service - and for some reason I was under the impression that it only supported Mercurial repositories. I was pleasantly surprised to see that not only can you have git repos, you also get unlimited private and public repos with up to 5 collaborators, all for the unbeatable price of free!

Can't ask for more - so it's bye-bye Git Enterprise! and hello Bitbucket... Bitbucket also has a nice, helpful repo import - plug in the URL to your git repo and it gets cloned. Once that was done, it was a simple matter to update the origin URL of my repo with:


git remote set-url origin https://raghur@bitbucket.org/raghur/home.git

Thursday, March 22, 2012

Hey look! A flying pig!

:)

MS has added git support to Codeplex - who'd have thought that such a day would ever dawn.

Kudos to the good souls at MS who made this happen - One can only imagine the kind of conversations that would've taken place to get the necessary approvals for this :).

Still, Git has great mindshare, but native Windows support is pretty bad. Hopefully this might even help bring about a good GUI for git on Windows. After creating an abomination like TFS, MS should realize the benefit of just going with openly available tools rather than creating their own.

Maybe not - that's more like seeing a squadron of pigs flying!

Tuesday, March 13, 2012

Coffeescript looks promising

I've just run across Coffeescript... can't believe what sort of a hole I've been living in.

It's a source-to-source compiler (i.e. when you 'compile' a coffeescript script, you get javascript source).

So why would you want a source to source compiler for Javascript?
Well, as apps become more and more 'front-end' heavy with DHTML/Ajax bling, the javascript that holds it all together also becomes more and more complex. Yeah, sure, you used jQuery (or insert your favourite JS framework) - but that's not even scratching the surface. You're still writing tons of JS code, dealing with its idiosyncrasies and tearing your hair out.

Enter Coffeescript - with clean syntax and elements of style borrowed from Ruby and Python, it is super clean and efficient. You write your code in Coffeescript, which is neat and concise, and what it generates is very idiomatic, clean javascript.

Let's try something - take a guess at what the following does:

    var Animal, Mammal, animal, farm, _i, _len,
      __hasProp = {}.hasOwnProperty,
      __extends = function(child, parent) { for (var key in parent) { if (__hasProp.call(parent, key)) child[key] = parent[key]; } function ctor() { this.constructor = child; } ctor.prototype = parent.prototype; child.prototype = new ctor(); child.__super__ = parent.prototype; return child; };

    Animal = (function() {

      function Animal(name) {
        this.name = name;
      }

      Animal.prototype.speak = function() {
        return console.log("I am a " + this.name);
      };

      return Animal;

    })();

    Mammal = (function(_super) {

      __extends(Mammal, _super);

      function Mammal() {
        return Mammal.__super__.constructor.apply(this, arguments);
      }

      Mammal.prototype.speak = function() {
        Mammal.__super__.speak.apply(this, arguments);
        return console.log("and I'm a mammal");
      };

      return Mammal;

    })(Animal);

    farm = [new Animal("fish"), new Mammal("dog")];

    for (_i = 0, _len = farm.length; _i < _len; _i++) {
      animal = farm[_i];
      animal.speak();
    }
    
And now - see if you like this better:


    class Animal
        constructor: (@name)->
        speak: ->
            console.log "I am a #{@name}"
    
    class Mammal extends Animal
        speak:->
            super
            console.log ("and I'm a mammal")
    
    farm=[ (new Animal "fish"), (new Mammal "dog")]
    
    animal.speak() for animal in farm

The javascript version is generated from the coffeescript version above. Head over to the coffeescript.org page - they have an online interpreter where you can try out coffeescript code and see the equivalent javascript source it generates.


If you're wowed by that (I am) - and just in case you're saying goodbye to javascript, here's the rub: since it's a source-to-source compiler, unless you understand what's going on under the covers, you'll hit a problem soonish when you have to debug something.

So, Javascript isn't optional - but if you have that bit covered, there's no reason to have to 'live' with the iffy side of javascript. Take a look at something like coffeescript and have a little fun along the way.

Friday, February 17, 2012

VIM config updates

Ultisnips has been updated to 2.0. See the video here for the updates and new features. One piece of information - one that I was eagerly waiting for - is that 2.0 works perfectly with the autocomplete popup. This wasn't always the case - in fact, the bug on launchpad for this had been marked as 'wont-fix'. In any case, I was super thrilled to see that it's been fixed.


Zencoding.vim has also been updated... if you're writing any sort of markup the old way, just google zencoding - there are a couple of videos that will blow your socks off. For the truly impatient: you write a CSS-like expression and it's expanded into markup! How cool is that?