Thursday, December 13, 2012

Rewriting history with Git

What's this about rewriting history?

While developing any significant piece of code, you make a lot of incremental advances. Ideally, you'd save your state at each increment with a commit and then proceed. This gives you the freedom to try out approaches, go one way or the other, and at each point have a safe harbor to return to. The downside is that your history ends up looking messy, and the folks you're collaborating with have to follow your mental drivel as you slowly built up the feature. Now imagine if you could make those incremental commits, but before you share your epic with the rest of the world, clean up your history by reordering commits, dropping useless commits, squashing a few together (goodbye, 'oops, missed a change' commits), polishing your commit messages and so on - and then let it loose on the world! Git's interactive rebase lets you do exactly this!!!

git rebase --interactive to the rescue

Git's magic incantation to rewrite history is git rebase -i. It takes as argument a commit or a branch on top of which the rewritten history will be applied. Let's see it in operation:

Squashing and reordering commits

Let's say you made two commits A and B. Then you realize that you've missed something which should really have been a part of A, so you fix it with an 'oops' commit and call it C. Your history now looks like A->B->C, whereas you'd like it to look like AC->B. Concretely, say the log looks like this:

bbfd1f6 C                           # ------> HEAD
94d8c9c B                           # ------> HEAD~1
5ba6c52 A                           # ------> HEAD~2
26de234 Some other commit           # ------> HEAD~3
....
....

You'd like to fix up all commits after 'Some other commit' - that's HEAD~3. Fire up git rebase -i HEAD~3. The HEAD~3 needs some explaining - you made 3 commits A, B and C, and you'd like to rewrite history on top of the commit just before them, which is HEAD~3. The commit you specify as the base of the rebase is itself not included. Alternatively, you could just pick up the SHA1 of that commit from the log and use it in your rebase command. Git will open your editor with something like this:

pick 5ba6c52 A
pick 94d8c9c B
pick bbfd1f6 C
# Rebase 26de234..bbfd1f6 onto 26de234
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

Basically, git is showing you the list of commands it will use to operate on all the commits since your starting point, along with instructions on how to pick (p), squash (s), fixup (f) or reword (r) each of them. To change the order of the commits, simply reorder the lines. If you delete a line altogether, that commit is skipped entirely (and if you delete all the lines, the rebase operation is aborted). So here, we say that we want to pick A, squash commit C into it and then pick commit B:

pick 5ba6c52 A
squash bbfd1f6 C
pick 94d8c9c B

Save and close the editor and Git will perform the rebase. It will then pop up another editor window allowing you to give a single commit message for AC (helpfully pre-filled with the two original messages for A and C). Once you provide that, git rebase proceeds and your history now looks like AC->B, just as you wanted.
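A quick log confirms the rewrite (the SHAs below are made up - rebase rewrites commits, so the new SHAs won't match the originals):

git log --oneline -2
# 8c31f02 B
# 4e9a7d1 A          <- now also contains C's fix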

Miscellaneous tips

Using GitExtensions

  1. If you use Git Extensions, you can do the rebase, though it's not very intuitive. First, select the commit on which you'd like to base the interactive rebase. Right click and choose 'Rebase on this'.
  2. This opens the rebase window. In this window, click 'Show Options'
  3. In the options, select 'Interactive rebase' and hit the 'Rebase' button on the right
  4. You'll get an editor window populated similarly.

If the editor window comes up blank then the likely cause is that you have both cygwin and msysgit installed and GitExtensions is using the cygwin version of git. Making sure that msysgit is used in GitExtensions will avoid any such problems.

Using history rewriting

Rewrite history only for what you have not pushed. Modifying history for something that's shared with others is going to confuse the hell out of them and cause global meltdown. You've been warned.

Handling conflicts

You could end up with a conflict - in which case, resolve the conflicts and then continue the rebase with git rebase --continue.
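The flow looks something like this (the file name below is a placeholder):

# rebase stops and lists the conflicting files
vim conflicted_file.txt        # fix up the conflict markers
git add conflicted_file.txt    # mark the conflict as resolved
git rebase --continue          # carry on with the remaining commits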

Aborting

Sometimes, you just want to parachute to safety in the middle of a rebase. The spell to use here is git rebase --abort

Final words

Being able to rewrite history is admittedly a powerful feature. It might even feel a little esoteric at first glance. However, embracing it gives you the best of both worlds - quick, small commits and a clean history. Another, and probably more important, effect is that instead of 'waiting to get things in shape' before committing, commits happen all the time. Trying out that ingenious approach that's still taking shape in your head isn't a problem now, since you always have a point in time to go back to in case things don't work out. Being able to work 'messily', commit anytime and be secure in the knowledge that you can fix things up later provides an incredible amount of freedom of expression and security. Avoiding the wasted mental cycles spent planning things carefully before you attack your codebase is worth its weight in gold!!!

Wednesday, October 03, 2012

Nexus 7 - First impressions and tips and tricks

So I got my Dad the 8GB Nexus 7. This is an awesome tablet - exactly what a good tablet should be. The UI is buttery smooth and things just fly. The hardware is no compromise, the price point is excellent and overall it's a superb experience.

Of course, there are some things to deal with: 8 GB of storage, lack of mobile data connectivity, lack of expandable storage and no rear camera. None of these are issues as far as I'm concerned.

If I'm traveling with the tablet, I always have the phone's 3G data to tether to using WiFi tethering. The 8GB storage is only an issue if you're playing the heavyweight games or want to carry all your videos or a ton of movies with you. Given the 8GB, I'm more than happy to load up a few movies and some music before travel. Provided you have a good way to get files in and out of the tablet and are OK with not always carrying your complete library, you don't have to worry about storage. A camera, though, would be nice - but then hey, you can't have everything your way :).

File transfer to/from PC

Which brings us to the topic of file transfers to/from your PC. Now wifi is really the best way to go - and I couldn't find a way to make WiFi direct work with Windows 7. So for now, Connectify seems to be the best option. It runs in the background on your PC and makes your PC's wireless card publish its own Wireless network. You can connect to this network from your phone and if you share folders on your PC, you're set to move data around.

Now, on the Android side, ES File Explorer is free and gets the job done from a file management/copying/moving perspective. I also tried File Expert, but it's more cumbersome. ES excels at multiple file selection and copying.

Ebooks

The one area where the N7 excels is reading books. The form factor and weight are just right for extended reading sessions. However, Google Play Books doesn't work in India, so you need an alternate app. I tried out Moon+ Reader, FBReader and Reader+ - and of the lot, FBReader was the best. Moon+ has a nicer UI but choked on some of my ebooks. Reader+ didn't get the tags right and felt a little clunky. FBReader provided the smoothest experience of the lot - I'm already through half of my first book and haven't had any issues. I have a decent collection of ebooks on my PC, but once I copied them to the N7, all the metadata was messed up. Editing metadata and grabbing covers is a pain on the tablet and best done on the PC.

This is where Calibre comes in - it's a full blown ebook library management app. It does a great job of keeping your ebooks organized and editing their metadata. It can also fetch metadata and covers from Amazon and Google and update your collection. Once you're done, transferring to the N7 is a little tricky. The first time, I just copied the library over to the N7 - but the N7 showed each book thrice. Some troubleshooting later, I found that the best way is to create an export folder and use the 'Connect to folder' feature to mount it as a destination. Then you can select all the books you want and use 'Send to destination in one format' to publish EPUB files to the folder. This generates one epub file per book with the metadata and covers embedded in it, and you can then copy this folder over to the N7's Books folder using ES File Explorer.

Playing movies on your N7 over WIFI

My movie collection is on XBMC - and XBMC is DLNA/uPnP compatible. Dive into XBMC's system settings and turn on the uPnP/DLNA services. Then, on the N7, you can use uPnPlay. For playing video, it relies on having a video player app installed - I like MX Player. Don't forget to also install the MX Player codec pack for ARMv7 and to turn on HW decoding in the settings.

Playing movies on your TV from the N7

You won't be doing much of this as there isn't a rear camera - but if you do decide to take a video or pics with the N7's front-facing camera, you can use uPnPlay to project them onto your TV (provided you have a DLNA/uPnP compatible TV or a compliant media center hooked up to your TV).
For XBMC, turn on uPnP in settings and you're done. XBMC should be able to discover your tablet and you'll be able to browse and play videos.
If you'd rather use the tablet to control what's played on XBMC, turn on the setting to allow control via uPnP in XBMC's settings. Now, in uPnPlay, you can select XBMC as the play-to device, and playing any video/song plays it on the TV.

That's all for now... I'm loving this tablet and the stuff it can do... looks like I'll be buying a few more soon :)

Wednesday, September 26, 2012

Websocket server using Jetty/Cometd

So I just wrote up a Websocket server using CometD/Bayeux. It's a ridiculously simple app - but it went quite a long way in helping me understand the nitty gritties of putting up a Websocket server with CometD/Bayeux. Thought I'd put it up for reference - it should give you a leg up on getting started with CometD.

The sample's up on github at https://github.com/raghur/rest-websocket-sample

Here's how to go about running it:
  1. clone the repo above
  2. run mvn jetty:run
  3. Now browse to http://localhost:8080 to see the front page
  4. There are two parts to the app
    1. A RESTful API at http://localhost:8080/user/{name} - hypothetical user info - GET retrieves a user, PUT creates a user and DELETE obviously deletes the user.
    2. The websocket server at localhost:8080/cometd has a broadcast channel at /useractivity which receives events whenever a user is added/deleted. The main page at http://localhost:8080 has a websocket client that updates the page with the user name whenever a user is added or removed.
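
To see the push in action, exercise the REST API from a shell while the front page is open in a browser (a hypothetical session - 'alice' is just a made-up user, and the URLs assume the defaults above):

curl -X PUT http://localhost:8080/user/alice      # create - the page shows 'alice' added
curl http://localhost:8080/user/alice             # retrieve
curl -X DELETE http://localhost:8080/user/alice   # delete - the page shows 'alice' removed
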
And here are the nuts and bolts:
  1. BayeuxInitializer - initializes the Bayeux Service and the EventBroadcaster. Puts the EventBroadcaster in the servlet context from where the RESTful service can pick it up to broadcast.
  2. EventBroadcaster - creates a broadcast channel in the ctor. Provides APIs to publish messages on this channel.
  3. HelloService - basic echo service taken from Maven archetype
  4. MyResource - the RESTful resource which responds to GET/PUT/DELETE - nothing major here. If a user is added or deleted, then it pushes a message on the broadcast channel by getting the EventBroadcaster instance from the servlet context.
It's about as simple as you can get (beyond a Hello world or a chat example). Specifically, I wanted a sample where back end changes can be pushed to clients.

Friday, September 21, 2012

Android WordHero - product lessons

So, yesterday I figured that I'm now an addict... fully and totally, to something called WordHero on my phone... it's one of those games where you have a 4x4 grid of letters and you need to find as many words as you can within 2 minutes. Nothing special... and there are tons of look-alikes and also-rans on the Google Play store. I even installed some of them and then removed them...

So what's different? Turns out there are quite a few things - and apart from one, they're all at the detail level. The most significant one is that it's online only and everyone's solving the same grid at the same time - so you get to see your ranking at the end. No searching for opponents, no clicking around - everyone just plays every game.

Apart from that, the main game idea is the same (form words on a 4x4 grid) so details are the only place where one can innovate... reminds me of Jeff Atwood's post that a product is nothing but a collection of details. So what are these details?
  1. It's online only. You can play only if you have an Internet connection... otherwise, scoot!
  2. The information level and detail is just right: tracing through the letters highlights the whole word; if you find a word, you see green; wrong word, red; dupe, yellow. At 10s remaining there's a warning beep, up to 5s - not down to 0... so it warns, but doesn't distract. Simple. Effective. Efficient. Brilliant!
Now sample the competition:
  1. Tracing - a line through the letters, shaky squiggly letters when you pass over them and other sorts of UI idiocy; a grid that's too small, a grid that isn't a square, word-check indicators at some other place. Sure, some of this is debatable... especially the bells and whistles. They look great the first time, the second time and a few more times after that. By the time you hit the tenth time (if you do), you start hating it.
  2. Offline mode - this one is counter-intuitive... in fact, after playing WordHero, I ran off to find a game which had an offline mode. Once I found one though, surprisingly, I did not like it... Turns out there's little thrill in forming words on a grid; the thrill is in seeing where you stand and whether you're improving.
  3. Timed mode - pretenders to the throne have untimed modes, customizable timers and so on. Didn't work for me - 2 minutes is the absolute sweet spot where you can grab a game anytime... and have that deadline adrenaline rush work for you... I thought I'd do great in the untimed games - but while I scored more, it wasn't significantly more. More importantly, the fun was missing. Turns out that we want to see where we rank far more than we want to form words :D
So after promising myself one last game at 11 last night and ending up playing till 12:30 AM, I tore myself away from this satanic game. I kept the phone far away to ensure I wouldn't pick it up again in the middle of the night, and started thinking about what makes WordHero tick. There's nothing earth-shaking about the reasons - but the effect of getting them right is surprising:
  1. Figure out what will tickle the right pleasure centers - and optimize like hell for that: This is hard... in WordHero, it's the global rankings per game and the stats... optimizing for this means you take away offline mode totally. That isn't a small decision - especially when an offline mode is easy to implement and feels like giving the user 'more'. Tough to argue against it too - but as I've seen myself, something like that would kill the multiplier effect of seeing a large number of people play. Chances are, your users don't know that either - so no point asking them. Apple seems to have figured this out very well.
  2. Keep the UI simple and efficient - and show me what I need when I need it: Should look good for the casual user. For power users, it should be efficient and not irritating... so keep all those nice bells and whistles under control.
  3. Keep the options simple - I like options... I like options more than your average Joe does... most of the time, I'm the one who's found the options you didn't know were there... but when you're designing a game that's 2:30 minutes from start to finish, don't make me think about options. More importantly, don't ask me questions about them... just start the damn game...
So does that mean WordHero's perfect? Far from it - but it's successful by anyone's measure. If you're looking for perfection, you won't ever launch :). Some of the stuff that I'm sure they'll get to at some point:
  1. Better explanation of the stats
  2. Charts/trends over the stats instead of only the current value
  3. Better explanation of some of the UI color coding on the results screen.

Thursday, September 06, 2012

Google Maps Navigation enabled in India!!

Just came across an awesome piece of news - Google Maps now has turn by turn, voice guided directions officially in India!!

Up until now, I used to get the Ownhere mod for Google Maps that enables world navigation - it used to be available on the XDA forums but got taken down once Google frowned on it!

No more of that hassle - just go to Play store and install Maps.

Very cool! Thanks Google.

Tuesday, August 21, 2012

Converting xml to json with a few nice touches

During my recent outings in heavyweight programming, one of the things we needed to do was convert a large XML structure from the server to a JSON object on the browser, to facilitate easy manipulation and inspection.

Also, the XML from the server was not the nice kind - the tag names were consistent, but the content was wildly inconsistent. For example, all of the following were received:


<!-- different variations of a particular tag -->
<BgSize>100,23</BgSize>
<BgSize>0,0</BgSize>
<BgSize>,</BgSize>

Ideally, we wanted to parse and validate this node (and all its different variations) and convert it to an X,Y pair only if it had valid data in it. A lot of these were common tags, as you might expect, that showed up in various entities across the XML - so we wanted all these rules applied early and centrally, rather than having to deal with them at disparate places further down the stream.

The other reason was that a lot of the nodes really had structured data crammed into a single tag - which we ideally wanted parsed into a JavaScript object so that we could manipulate it easily:


<!-- xml data with structured content -->
<!-- font, size, color, bold, italic-->
<Font>Arial;Lucida,14,0x0044,True,False</Font>

That brought up a search for the best way to convert XML to JSON - and of course Stack Overflow had a question on it. The article in the answer makes for very interesting reading on all the different conditions that have to be handled. The associated script at http://goessner.net/download/prj/jsonxml/ is the solution I picked. There's really not much going on below - we just use the xml2json function to convert the XML to a raw JSON object.


@parseXML2Json: (xmlstr) ->
    log xmlstr
    json = $.parseJSON(xml2json($.parseXML(xmlstr)))
    destObj = Utils.__parseTypesInJson(json)
    log "raw and parsed objects", json, destObj
    return destObj

But now to the more interesting part - once the XML is converted to JSON, we need to do our magic on top of it: applying validations and conversions. This is where the Utils.__parseTypesInJson method comes in.

What we're doing here is walking through the JSON object recursively. At each step, we keep track of the XML path we have descended into, so that we can check the path and apply validations or conversions based on it. At each step, we also need to check the type of JSON value we're dealing with - undefined, null, string, array or object.

If it's a string, we further delegate to a __parseString function to convert the string to an object if needed.


@__parseTypesInJson: (obj, path = "") ->
    if typeof obj is "undefined"
        return undefined
    else if obj is null
        return null
    else if typeof obj is "string"
        newObj = Utils.__parseString(obj, path)
        validator = _.find Utils.CUSTOM_VALIDATORS, (v)->
            v.regex.test path
        return validator.fn(newObj) if validator?
        return newObj
    else if Object.prototype.toString.call(obj) is '[object Array]'
        destObj = (Utils.__parseTypesInJson(o, path) for o, i in obj)
        destObj = _.reject destObj, (obj) ->
            obj == null
        return destObj
    else if typeof obj is "object"
        destObj = {}
        destObj[k] = Utils.__parseTypesInJson(obj[k], "#{path}.#{k}") for k of obj
        validator = _.find Utils.CUSTOM_VALIDATORS, (v)->
            v.regex.test path
        return validator.fn(obj) if validator?
        return destObj
    else
        return obj


At each step, once the object is formed, we check if there's a custom validator defined in the array of custom validators. Each validator is a regex and a callback function - if the regex matches the path, the callback is passed the JSON object, which it may manipulate before returning.


@CUSTOM_VALIDATORS = [ choice =
                        regex: /choice$/
                        fn: (obj)->
                            if obj["#text"]?
                                return obj
                            else
                                log "returning null"
                                return null
                        ]

The __parseString method, for completeness - you can really tweak this to your taste, and there's nothing complicated going on in it.


@__parseString : (str, path) ->
    if not str?
        return str
    if _.any(Utils.SKIP_STRING_PARSING_REGEXES, (r)-> r.test path)
        log "Skipping string parsing for:", path, str
        return str
    if /^\d+$/.test str
        return parseInt str
    else if /^\d+,\d+$/.test str
        [first, second] = str.split(",")
        return {"x": parseInt(first), "y": parseInt(second)}
    else if str == ','
        return null
    else if /^true$/i.test str
        return true
    else if /^false$/i.test str
        return false
    else if /^[^,]+,\d+,(0x[0-9a-f]{0,6})?,((True|False),(True|False))?$/i.test str
        log "Matched font: ", str
        return Utils.parseFontSpec(str)
    else
        return str

Microsoft Releases Git TFS integration tool

Microsoft has released Git TF, a cross-platform Git-TFS integration tool!! It's definitely a good step and an acknowledgement of the mindshare that Git has.
I took it for a spin - the integration is supposed to be cross platform, so it should work on cygwin too. However, the first time I tried, it did not, and I had to tweak the script a little.

In the script <install folder>/git-tf

# On cygwin and mingw32, simply run the cmd script, otherwise we'd have to
# figure out how to mangle the paths appropriately for each platform
if [ "$PLATFORM" = "cygwin" -o "$PLATFORM" = "mingw32" ]; then
    #exec cmd //C "$0.cmd" "$@"                  # Original
    exec cmd /C "$(cygpath -aw "$0.cmd")" "$@"   # Changed
fi

Anyway, after that, things did seem to work - the only issue is that your Windows domain password is echoed on the cygwin console :(... Other than that minor irritant, I was able to clone the project and work on it using the Git integration. I'm going to try it out some more over the next few days and will post if I find anything more. This is definitely a great step from MS - and if it works properly, it will make working with TFS source control much, much more bearable :D
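For reference, cloning looks something like this (the collection URL and server path below are placeholders for your own TFS setup):

git tf clone http://myserver:8080/tfs/DefaultCollection $/MyProject/Main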

Friday, August 10, 2012

Coffeescript rocks!

I've been absent a few weeks from the blog. Life got taken over by work - been deep in the Javascript jungles and Coffeescript has been a lifesaver.
Based on my earlier peek at Coffeescript, we went ahead full on with it, and I have to say it has been a pleasant ride for the team. With over 4.7 KLoc of Javascript (the Coffeescript source weighing in at around 3.7 KLoc including comments etc.), I can now confidently recommend it for any sort of Javascript-heavy development.
I'm going to list down benefits we saw with Coffeescript and hopefully someone else trying to evaluate it might find this useful:
  1. Developers who haven't dived deep into Javascript's prototype-based model find it easier to get up to speed sooner. Yes - once in a while they do get tripped up and then have to look again at what's going on under the covers - but this is normal. The key point is that it's much, much more productive and enjoyable to use Coffeescript.
  2. The conciseness of Coffeescript definitely goes a long way in improving readability. One of the algorithms we implemented applied a bunch of time-overlap rules. We also used Underscore.js - and between Coffeescript and Underscore.js, the whole routine was within 20 lines, mostly bug free and very easy for new folks to pick up and maintain over time. Correspondingly, the generated JS was much more complicated (though Underscore helped hide some of the loop iteration noise) - and it wouldn't have been too different had we written the JS directly.
  3. Integrating with external frameworks - jquery, jquery ui etc was again painless and simple.
  4. Another benefit was that the easy class structure syntactic sugar helped quickly prototype new ideas and then refine them to production quality. With developers who're still shaky on JS, I doubt the same approach would have worked since they'd have spent cycles trying to get their heads wrapped around JS's prototype based model.
  5. Coffeescript also allows you to split the code to multiple source files and merge all of them before compiling to JS - this allowed us to keep each source file separate and reduce merges required during commits.
  6. Finally, performance is a non-issue - you do have to be a little careful, otherwise you might find yourself allocating and returning function objects when you don't mean to, but this is easily caught in reviews.
One latent doubt I had going in was the number of times we'd have to jump down to the JS level to debug issues. With a larger Coffeescript codebase spread across multiple files, this is a real concern, since the error line numbers wouldn't match the source and we might have had to jump through hoops to fix issues. Luckily, this wasn't a problem at all - over time, whether for an error in JS or just while inspecting code in the browser, it's easy to map back to the Coffeescript class/function - so you just fix it there and regenerate the JS. Secondly, the generated JS is quite readable - so even when investigating issues, it's quite trivial to drop breakpoints in Chrome and know what's going on.
The one minor irritation was that if there was a Coffeescript compile issue, then with the files joined, the line number reporting fails and you have to compile each file independently to figure out the error. That's easily automated with a script - so I'm just being nitpicky.
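Something like the following is all it takes (a minimal sketch, assuming the sources live under src/ and the coffee compiler is on the PATH):

# compile each file independently; -p prints the generated JS to stdout,
# so we only surface compile errors with correct per-file line numbers
for f in src/*.coffee; do
    coffee -p "$f" > /dev/null || echo "compile error in $f"
done
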
Anyway, if you got here looking for advice on using Coffeescript, then you've reached the right place and maybe this post's helped you make up your mind!

Tuesday, July 03, 2012

Media center setup - XBMC-XVBA

I finally got my nettop - an AMD E-350 based barebones system. I installed 4G of RAM, and the plan was to set it up with XBMCBuntu or XBMC-XvBA. Instead of installing the XBMC-XvBA version directly, I figured I'd start with XBMCBuntu, see how it does, and then if necessary move to the XvBA-enabled builds.

I don't have a hard drive for the nettop - the plan was to have the system run off an 8 gig pen drive.

Basic Installation - XBMCBuntu

What you need

  1. The nettop with RAM installed.
  2. 2 USB pendrives - One for installation (2GB) and another which is going to act as your HDD (8G)

Steps

  1. Download UNetBootin for Windows and the XBMCBuntu iso image
  2. Create a Live USB using UNetBootin: once you have UNetBootin installed, stick a flash drive into a USB port, start UNetBootin and select the XBMCBuntu iso image as the source distribution iso and the flash drive as the destination.
  3. Boot the nettop using the USB drive: You might have to play around with boot devices and priorities in the BIOS settings to get it to boot from the USB drive. To keep things simple, stick the pendrive into one of the USB2 ports (avoid the USB3)
  4. On the UNetBootin boot menu, you can just try out the XBMCBuntu live image. I did so, and things seemed to work well enough for me to do the full install to another USB drive plugged into the system. Note that if you're not able to find the target drive, just reboot with both USB drives plugged in - sometimes newly inserted devices aren't detected.
  5. Install, go through the menus and wait for it to complete.
  6. As you go through the menus, keep in mind to choose a custom partitioning scheme. In my case, I had 4G of RAM and there's no sense in having swap on the pen drive. If you plan on having hibernation support, use a 2G swap partition (50% of RAM) - else you can skip swap altogether.
  7. Once done, pull out the installation pen drive and reboot. You should be able to reboot off the USB pendrive that you installed into. The installation pendrive is pretty much done - you won't need it any longer.

XBMCBuntu

At this point, I had XBMCBuntu up and running; however, there were a few problems:

  1. On idle, CPU utilization was very high (~ 60 - 70%) and the unit was running hot.
  2. Display resolution proved troublesome - my LCD's native resolution is 1366x768 but that wasn't available over HDMI.
  3. I was able to get 1360x768 on DVI/D-Sub - but that meant using a separate cable for audio out.

Of these, the high CPU utilization was the biggest worry - there are a few steps to try:

  1. Within XBMC - set sync to display refresh - always.
  2. Turn off RSS feeds
  3. Tweak .xbmc/userdata/advancedsettings.xml:
<advancedsettings>
    <useddsfanart>true</useddsfanart>
    <cputempcommand>cputemp</cputempcommand>
    <samba>
        <clienttimeout>30</clienttimeout>
    </samba>
    <network>
        <disableipv6>true</disableipv6>
    </network>
    <loglevel hide="false">1</loglevel>
    <gui>
        <algorithmdirtyregions>1</algorithmdirtyregions>
        <visualizedirtyregions>false</visualizedirtyregions>
        <nofliptimeout>0</nofliptimeout>
    </gui>
    <measurerefreshrate>true</measurerefreshrate>
    <videoextensions>
        <add>.dat|.DAT</add>
    </videoextensions>
    <tvshowmatching append="yes">
        <!-- matches title 01/04 episode title and similar.-->
        <regexp>[s]?([0-9]+)[/._ ][e]?([0-9]+)</regexp>
    </tvshowmatching>
    <gputempcommand>/usr/bin/aticonfig --od-gettemperature | grep Temperature | cut -f 2 -d "-" | cut -f 1 -d "." | sed -e "s, ,," | sed 's/$/ C/'</gputempcommand>
</advancedsettings>

I did those, and they dropped the CPU utilization to about 25%, which was quite good. However, during videos the CPU was still high - and that's because even though XBMCBuntu officially uses hardware acceleration through VAAPI, support is still spotty.

Getting XvBA

I went over to the XBMC-XvBA installation thread and followed the directions in the first post to add the XBMC-XvBA PPAs. The download took some time and the XvBA build got installed. Started XBMC and things were much, much better.

sudo apt-add-repository ppa:wsnipex/xbmc-xvba
sudo apt-get update
sudo apt-get install xbmc xbmc-bin    

There are other tweaks that are listed on the XBMC-XvBA installation thread which I also went ahead and applied.

Other tweaks

Optimizing Linux for a flash/pen drive installation

Installing on a pen drive/USB flash drive has its pain points. My boot time was painfully slow (~3.5 minutes). Opening Chromium took forever and even page loads were slow (it would be stuck with the status bar on 'checking cache'...). Also, the incessant writing to disk was probably killing off my pen drive much, much faster. I ended up doing the following:

  1. Use the noatime and nodiratime flags for the USB drive

    # /etc/fstab
    UUID=39f52ccf-363b-4b6e-abdd-927809618d83 /               ext4    noatime,nodiratime,errors=remount-ro 0       1
  2. Use tmpfs - In memory, reduces writes to disk and is faster. With 4G of RAM, this is a no-brainer.

    # /etc/fstab
    tmpfs /tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
  3. Browsers - use profile-sync-daemon (from Arch Linux, available for Ubuntu) - it will automatically move your browser profile directory from your home folder to tmpfs
  4. Move .xbmc to a NAS/external drive along with your media. It makes a lot more sense to keep your .xbmc folder with your media on an external HDD.
  5. Change to the noop or deadline scheduler (see the udev sketch after this list for making it persist):

    # Assuming sda is your USB drive. Note: a plain 'sudo echo ... >' won't
    # work, since the redirection happens in your non-root shell - use tee
    echo noop | sudo tee /sys/block/sda/queue/scheduler
  6. Change system swappiness. We don't want the OS to touch swap at all.

    # /etc/sysctl.conf
    vm.swappiness=1
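
The scheduler change above doesn't survive a reboot. One way to make it stick is a udev rule (a sketch - I haven't battle-tested this one; adjust the device match for your drive):

# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="sda", ATTR{queue/scheduler}="noop"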

Getting suspend/hibernate to work

I had the greatest trouble here - but was able to get pm-utils working eventually. pm-utils is a framework of shell scripts around suspend/hibernation/wakeup that provides hooks to execute scripts before standby/hibernation and when the computer resumes from sleep/shutdown. First, test whether basic suspend/hibernate works:

# check suspend methods supported
cat /sys/power/state
# S3
sudo sh -c "echo mem > /sys/power/state"

If your system goes into standby, things are good - but it's just a good start. In my case, the system would go into standby only the first time after boot. After that, it would go into standby but then resume immediately. This has been asked enough times on Google and I've probably tried all the fixes. The first one is to add a kernel param, acpi_enforce_resources=lax:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_enforce_resources=lax"

After that, make sure to run sudo update-grub. In my case, the magic incantation above failed (your mileage may vary); nothing bad happened, so I kept it on. Anyway, I rebooted, then suspended and resumed the first time (which works) and took a dump with dmesg > dmesg.1.log. After that, I tried to suspend again, and when it came back immediately, I grabbed the dmesg output again and scanned the entries after the first run. Turned out the log had entries related to xhci_hcd - so I decided to unload it first and then try to suspend:

sudo modprobe -r xhci_hcd
sudo sh -c "echo mem > /sys/power/state"

After this, the system was able to go into standby each and every time. Now it was time to get pm-utils working. Out of the box, pm-utils came with a config that had a bunch of things I didn't understand (and I doubt they applied to this machine). If standby was working directly, it should have worked through pm-utils too - but it needed some amount of pushing around before it came around to a functional state.

Getting pm-utils to play nice

So now that I had confirmed suspend was working, it was time to see why pm-utils was being so bad. First off, time to clean up the default configuration: copy /usr/lib/pm-utils/config to /etc/pm/config.d/config and then start editing it:

SLEEP_MODULE="kernel"
# These variables will be handled specially when we load files in
# /etc/pm/config.d.
# Multiple declarations of these environment variables will result in
# their contents being concatenated instead of being overwritten.
# If you need to unload any modules to suspend/resume, add them here.
SUSPEND_MODULES="xhci_hcd"
# If you want to keep hooks from running, add their names  here.
HOOK_BLACKLIST="99_fglrx 99lirc-resume novatel_3g_suspend"
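
With the config in place, suspending via pm-utils verifies that the hooks run and that the xhci_hcd module gets unloaded and reloaded around the sleep:

sudo pm-suspend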

Waking up with the keyboard

If you'd like to wake up with a USB device (a USB keyboard, say), you need to find out the USB port your device is connected to. The easiest way is to check the dmesg output, which usually prints this. In my case, my wireless keyboard/trackball is connected on USB3:

# as root (e.g. from /etc/rc.local)
echo USB3 > /proc/acpi/wakeup
echo enabled > /sys/bus/usb/devices/usb3/power/wakeup

After that, the HTPC could be woken up with a keypress. I haven't found a way to restrict this to actual keypresses - the system wakes up any time anyone so much as picks up the keyboard - so for now, I've turned this off. Note that the above change won't persist over a reboot; to make it persistent, put the two lines above in /etc/rc.local before the exit 0.

Fixing up fglrx annoyances (ATI binary driver)

Not much point in an HTPC if the video isn't top quality. And there are a lot of variables involved there: your computer hardware, software, drivers, type of connection (HDMI/DVI) and the telly itself. Also, video driver support for ATI on Linux leaves quite a bit to be desired. One of the reasons for going with XBMCBuntu was knowing that there would be large community support available on ubuntuforums.

Right off the bat, things started at the mildly irritating level. Catalyst Control Center in root mode won't start, even though there's a big fat menu item for it. A quick Google says the easiest way out is to run gksu amdcccle from the run dialog (ALT-F2).

So where does all this get us

After all this, there's a sea change in the overall experience:

  1. XBMC idles at 15-20% CPU utilization, and stays at a comfy 40-50% while playing 720p/1080p videos
  2. Browsers (Chrome and FF) open near instantly; page loads, tab switches etc. feel much nimbler than on my desktop (an AMD 6-core, 12G monster running Win7 x64)
  3. Total cost - USD 180

More to come

  1. Hibernation support
  2. Torrenting
  3. Scheduled wake up from shutdown/hibernate/suspend state

Saturday, June 30, 2012

Avast! trial expiration kills Internet connection - how bad is that!

So I've been using the 'free for personal use' Avast Antivirus at home for the past couple of years. It's been mostly good, though I've had some reservations about it - namely, nag pop-ups and so on. Some months ago (or maybe it was a year ago?) there was a program update and it wanted me to install 'Avast Internet Security'. Now I had no need for this (I use the COMODO firewall, which has been quite good); however, there was no way around it. Avast's update process said I could revert back to the free Antivirus version anytime without a re-install or a re-anything!
Not much of an option - and you can't blame them for trying to push their products and convert free users to paying ones - so I went ahead with the upgrade. About a fortnight ago, I started getting warnings that 'your trial license is about to expire' and so on. The good thing about the Internet Security product was that it was discreet - in fact, it's safe to say I'd even forgotten that I had installed it.
Remembering the notices about the trial expiring and reverting back to the free version, I chose to ignore all the warnings - till yesterday afternoon, when the wife called me at work about 'internet not working from the home machine'. Now, the wifi dongle on the home PC does once in a while show up with a 'Limited connection' that's quickly fixed by either disabling and re-enabling the dongle or unplugging it and putting it back in the USB port. I offered that up as a solution, and a few hours later was told that it hadn't fixed the issue. This morning, I finally sat down to see what was up. Turns out the WiFi just wouldn't connect. So up comes Device Manager, and under Network Devices I see a whole lot of 'avast! NDIS filter' virtual devices. Opened the Avast! GUI and there are no panels for turning the thing off... It had reverted back to the free version - but killed my net connection in the process!

Not a happy camper at this point - but I still wasn't worried, since I figured there had to be tons of users with the same problem and it probably had a simple fix. Google did not reveal any simple fixes - Avast's community forum had 'help! Avast Internet Security trial expired, no internet connection' and 'Avast Internet Security Trial seems to have affected my internet connection'. The suggestions offered - buy a license, uninstall and re-install Avast etc. - were just not OK. I definitely wasn't the only one affected; it looks like a small but sizable user population was hit. If that's so, then Avast! should have done something about it - however, it looks like they don't believe much in that. Agreed, it's a free product and won't merit the levels of support you'd get for something you pay for. But:

  1. I did not ask for the installation of the 'Premium' product trial.
  2. There was no option to opt out of the 'trial'
  3. They actively messaged that there's 'nothing to lose' from using the trial.
Given all that, they should have stepped up and either taken care of the issue with an update OR put up steps on how to solve it. It doesn't take that much. Here's how I got my net connection back:
  1. Device manager - Remove all 'Avast!' virtual devices with a right click 'Uninstall'
  2. Restart
  3. No WiFi still... so open 'Network and Sharing Center -> Change adapter settings -> WiFi connection -> Properties'. In the 'This connection uses the following items:' list, there was one more avast! filter device. Selected and uninstalled that too and restarted again.
  4. Back in business...WIFI is back up and running!
It's time to say goodbye to Avast!. Any recommendations for good, free antivirus solutions?

Wednesday, April 04, 2012

Media center upgrades - part two

So this is a continuation of my last post on my effort to upgrade the media center at home. While I wait for the hardware to arrive, I've been reading through forums and blogs online and am finding it really hard to get good advice. So I thought it might help to concisely list the situation as it currently stands, in the hope that it will serve other folks who're trying to find similar answers.

So what's the fuss all about?

Getting XBMC on Linux with AMD Fusion APUs to work nicely and render hardware accelerated video. And, while we're at it, doing it booting off a pendrive (i.e. an HDD-less system).

Background

Graphics APIs

To get hardware accelerated video on ATI/AMD hardware on Linux, currently, there are two choices

  1. XvBA - this is AMD's graphics API (similar to VDPAU on nVidia). Not very well supported.
  2. VAAPI - this is Intel's API. XBMC Eden is said to work well with VAAPI.

Drivers

  1. Open source Linux drivers for ATI chips lag behind the closed source ATI proprietary drivers. For HD video, you're pretty much limited to using ATI's proprietary drivers. So, let's emphasize - from here on, 'driver' means ATI Catalyst for Linux.

The Contenders

OpenElec.tv

OpenElec was covered in the earlier post - but essentially you get Fusion-optimized micro builds that can run off an SD card/flash drive. From a video perspective, this should be identical to XBMCBuntu. The upside is that everything is pre-configured, while the downside is that it's pretty limited.

XBMCBuntu

Also covered in my previous post - a lightweight Ubuntu-based distro/LiveCD. XBMC Eden implements VAAPI, and the Catalyst Fusion APU drivers can be used as a backend with it to provide hardware accelerated video. There are some cases where this bridging doesn't/may not work well. On the other hand, since this is the officially supported method, it's going to stick around and be improved upon, and is likely to have more info available in the public domain.

XBMC-XvBA PVR builds

This is an unofficial build by the community. The promise is that instead of going the VAAPI route, it has direct support for the XvBA API and so offers better performance. The forum thread tracking it is available here. While the build is supposed to be quite usable, from the thread activity it seems it's also heavily under development. The goal is to merge it back to the mainline once it stabilizes.

I plan to go the path of least resistance - OpenElec, then XBMC-XvBA and finally settle on XBMCBuntu - but things might change once I actually get down to it.

Time for the big fat disclaimer - nothing in this post is guaranteed to be correct. This is my read of stuff on the net and it could be wrong. You're welcome to correct it in the comments and I'd be more than happy to fix the post.

Monday, April 02, 2012

Compiling Vim again - Cygwin

The Vim installed by Cygwin's setup program does not have Ruby/Python/Perl support enabled by default. As my list of must-have vim plugins has a few which use Ruby and Python, I thought it might be good to do my own Cygwin build of Vim. It turned out to be a little more work than I expected - but that's mostly due to the misleading (at least for me :) ) makefile in the vim source tree called Make_cyg.mak.

Here's how to compile:
  1. Make sure you have python (and ruby, perl and whatever other interpreters you need vim built with) installed.
  2. Do not install vim through cygwin (or uninstall it if you have it)
  3. Download vim source tarball, untar it and go into the vim73/src folder.
  4. Configure

    ./configure --enable-pythoninterp --enable-perlinterp --enable-rubyinterp --enable-gui=no --without-x --enable-multibyte --prefix=/usr
    make && make install
  5. You're off to the races!
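
Once installed, a quick sanity check confirms the interpreters were compiled in (you want a + rather than a - against each feature):

vim --version | grep -E '\+(python|ruby|perl)'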

Saturday, March 31, 2012

Media center upgrades

I have a small form factor (SFF) machine on the way to take up duties as a media center machine. After waiting for long, finally pulled the trigger on a Foxconn Barebones Book sized system and 4G of RAM. I haven't ordered a hard drive - the plan is to run XBMC completely off a USB drive. As it is, media is on a 1TB external disk and the cost of 2.5" laptop HDDs has gone through the roof.

In terms of software, I've got to figure out which XBMC to use - the contenders are to either install XBMCBuntu or go with one of the specialized builds from OpenElec. I'm still new to both - so will need to do some reading up before I decide.

OpenElec

OpenElec has small-footprint (100MB), customized builds for different chipsets. It's meant to be run from a flash drive - so it has a few optimizations to make sure it doesn't clobber your flash drive. Also, the stable version of OpenElec, based on XBMC 10.0 "Dharma", has native AMD Fusion chipset support. It's also designed to be self-updating and, from reading the manuals, boots right into XBMC, with OpenElec settings all accessed via an XBMC extension - so you never have to drop down to the Linux machine underneath it.

At this point, OpenElec looks really limiting. I would really love to run a browser, use the machine for torrenting etc. - and somehow using the XBMC interface for all of that doesn't sound too good.

Also, XBMC Eden is supposed to support AMD Fusion natively and OpenElec hasn't been updated yet for Eden (there are nightly builds available though that are based on Eden).

XBMCBuntu

XBMCBuntu is XBMC's official LiveCD - you can use it to install to another USB drive and, provided the system can boot from USB, you're off to the races.
The thing here is that it isn't specific to a 'flash' drive - so there's a small tradeoff in terms of flash drive life. XBMCBuntu is based on Lubuntu 11.10.

Migrating the database

I also have to figure out how to migrate my XBMC database of movie information from Windows XP to the Linux setup - not sure if it's even possible, but it's definitely worth a shot. In any case, if it doesn't work, I'll just let XBMC rebuild its database overnight.

The hardware's supposed to come in the second week of April - can't wait for it :)

Friday, March 23, 2012

Moved to bitbucket

I've been using Git Enterprise for hosting private repositories, since github's free plan doesn't include any private repos. Git Enterprise has worked - but the UI leaves a lot to be desired the few times you actually have to use the web interface.

So the other day, while doing something else, I landed on bitbucket. Bitbucket is Atlassian's code hosting service - and for some reason I was under the impression that it only supported Mercurial repositories. I was pleasantly surprised to see that not only can you have git repos, you also get unlimited private and public repos with up to 5 collaborators, all for the unbeatable price of free!

Can't ask for more - so it's bye-bye Git Enterprise! and hello Bitbucket... Bitbucket also has a nice, helpful repo import - plug in the URL of your git repo and it gets cloned. Once that was done, it was a simple matter of updating the origin URL of my repo with:


git remote set-url origin https://raghur@bitbucket.org/raghur/home.git
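
And a quick check that the new origin took:

git remote -v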

Thursday, March 22, 2012

Hey look! A flying pig!

:)

MS has added git support to Codeplex - who'd have thought that such a day would ever dawn.

Kudos to the good souls at MS who made this happen - One can only imagine the kind of conversations that would've taken place to get the necessary approvals for this :).

Still, Git has great mindshare but its native Windows support is pretty bad. Hopefully this might even help bring about a good GUI for git on Windows. After creating an abomination like TFS, MS should realize the benefit of just going with openly available tools rather than creating their own.

Maybe not - that's more like seeing a squadron of pigs flying!

Tuesday, March 13, 2012

Coffeescript looks promising

I've just run across Coffeescript... can't believe what sort of a hole I've been living in.

It's a source-to-source compiler (i.e. when you 'compile' a coffeescript script, you get javascript source).

So why would you want a source to source compiler for Javascript?
Well, as apps become more and more 'front-end' heavy with DHTML/Ajax bling bling, the javascript that holds it all together also becomes more and more complex. Yeah, sure you used jQuery (or insert your favourite JS framework here) - but that's not even scratching the surface. You're still writing tons of JS code, dealing with its idiosyncrasies and tearing your hair apart.

Enter Coffeescript - with clean syntax and elements of style borrowed from Ruby and Python, it is super clean and efficient. You write your code in coffeescript, which is neat, clean and concise. What it generates is very idiomatic and clean javascript.

Let's try something - take a guess at what the following does:

    var Animal, Mammal, animal, farm, _i, _len,
      __hasProp = {}.hasOwnProperty,
      __extends = function(child, parent) { for (var key in parent) { if (__hasProp.call(parent, key)) child[key] = parent[key]; } function ctor() { this.constructor = child; } ctor.prototype = parent.prototype; child.prototype = new ctor(); child.__super__ = parent.prototype; return child; };

    Animal = (function() {

      function Animal(name) {
        this.name = name;
      }

      Animal.prototype.speak = function() {
        return console.log("I am a " + this.name);
      };

      return Animal;

    })();

    Mammal = (function(_super) {

      __extends(Mammal, _super);

      function Mammal() {
        return Mammal.__super__.constructor.apply(this, arguments);
      }

      Mammal.prototype.speak = function() {
        Mammal.__super__.speak.apply(this, arguments);
        return console.log("and I'm a mammal");
      };

      return Mammal;

    })(Animal);

    farm = [new Animal("fish"), new Mammal("dog")];

    for (_i = 0, _len = farm.length; _i < _len; _i++) {
      animal = farm[_i];
      animal.speak();
    }
    
And now - see if you like this better:


    class Animal
        constructor: (@name)->
        speak: ->
            console.log "I am a #{@name}"
    
    class Mammal extends Animal
        speak:->
            super
            console.log ("and I'm a mammal")
    
    farm=[ (new Animal "fish"), (new Mammal "dog")]
    
    animal.speak() for animal in farm

The javascript version is generated from the coffeescript version above. Head over to the coffeescript.org page - they have an online interpreter where you can try out coffeescript code and see the equivalent javascript source it generates.


If you're wow'ed by that (I am) - and just in case you're saying goodbye to javascript, here's the rub... since it's a source-to-source compiler, unless you understand what's going on under the covers, you'll hit a problem soonish when you have to debug something.

So, Javascript isn't optional - but if you have that bit covered, there's no reason to have to 'live' with the iffy side of javascript. Take a look at something like coffeescript and have a little fun along the way.

Friday, February 17, 2012

VIM config updates

Ultisnips has been updated to 2.0. See the video here for the updates and new features. One piece of information - and one that I was eagerly waiting for - is that 2.0 works perfectly with the autocomplete popup. This wasn't always the case - in fact, the bug on launchpad for it had been marked as a 'won't fix'. In any case, I was super thrilled to see that it's been fixed.


Zencoding.vim has also been updated... if you're writing any sort of markup the old way, then just google zencoding - there are a couple of videos that will blow your socks off. For the truly impatient: you write a CSS-like expression and it's expanded into markup! How cool is that?
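For a taste (a hypothetical snippet using zen coding's usual #id, .class, > child and * multiplication operators), the expression

div#content>ul.nav>li*3>a

expands to

<div id="content">
    <ul class="nav">
        <li><a href=""></a></li>
        <li><a href=""></a></li>
        <li><a href=""></a></li>
    </ul>
</div>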

Friday, January 27, 2012

IOS-no previous simulator versions!

Ran into a situation today where a mobile web app of ours was reported to be misbehaving on iOS 3.2. The Mac at work has the latest Xcode and the iOS 5 simulator loaded on it. So we thought it would be quite routine to just start a simulator running iOS 3.2 - after all, having simulators for different versions of the OS is pretty routine. Android makes it trivial, and before that, Blackberry always had different simulator versions for different versions of their OS. Truth be told, RIM probably overdid it: they had too many versions, a developer website that would drive even the most persistent BB fanboys to stark raving madness and documentation that took great pains to suck! Hell, that's a separate rant altogether :).


Anyway, after clicking around Xcode for a bit, imagine our surprise when we found that only a device debugging package for iOS 3.2 was available as an update package for Xcode. That didn't seem right - so off to Google, and there's a post on SE: http://apple.stackexchange.com/questions/14128/how-do-i-install-the-3-0-iphone-simulator-on-xcode-4


Apparently, Apple doesn't want you to test on older versions (or at least not 2 major versions back - testing on the last major version, iOS 4, is OK). Now isn't that absolutely ridiculous? Sure, Apple wants people to upgrade their phones to the latest OS versions, and they've done a great job of ensuring that later versions of the OS work on older generation phones - but from a development tool standpoint, making the tools to test your app unavailable is taking things too far. If my site/app doesn't work properly on an iOS 3 device tomorrow, the user isn't going to blame Apple. It's the app developer who gets the bug report :(.


So once I'd made my peace with Apple's decisions and diktats on what simulators I was allowed to play with, I reflected on it a bit. I think the key is that iOS's simulator is really just a simulator (i.e. software running on the host machine but mimicking a device). On the other hand, the Android emulator is actually a full qemu VM that's totally isolated from the host machine. In iOS's case, the simulator shares libraries and tools installed on the base OS - and as such, it would be quite hard to simulate older versions. In Android's case, since each emulator is really like running a VM, you can have all the different versions at your beck and call. On the flip side, the emulator approach really slows things down - everything from booting up the VM to actually running code inside it - whereas the iOS simulator positively zips around. I'm not sure if this is indeed the case - it's just my theory. Looking around on the net, I couldn't find a solid reference - so if you know of one, drop it in the comments. I did find a few on SE/SO - but they were by no means conclusive: http://stackoverflow.com/questions/4544588/difference-between-iphone-simulator-and-android-emulator

VIM macro super powers

So my affair with Vim continues - and I seem to have discovered Vim's macro superpowers. The obvious next step is to shout from the rooftops, hence this blog post (and there's hardly anything original here - apart from the fact that I've just had an 'aha' moment when it comes to macros and thought it might help other budding vimmers out there...)


A little primer: macros let you record and repeat a set of commands. The way to go about it is to press q<macro_letter>, where <macro_letter> is a lowercase letter a-z. This starts recording a macro in Vim (you'll see a 'recording' message at the bottom). Now hit the commands you want to repeat later and press q when done to finish recording. Vim records all the keystrokes you enter in the register you specified as the macro name. To execute the macro, position the cursor where the edits should start and hit @<macro_letter> - Vim will faithfully replay your commands.
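
To make that concrete, here's a tiny example - say you want to append a semicolon to the end of a bunch of lines (keystrokes on the left; the quoted comments are just annotations, not part of the macro):

[sourcecode language="text"]
qa          " start recording into register a
A;<Esc>     " append a ; at the end of the line, drop back to normal mode
j0          " move to the start of the next line
q           " stop recording
@a          " replay the macro on the current line
@@          " @@ repeats the last replayed macro
[/sourcecode]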


It's a great time saver - especially for complex editing tasks where search/replace doesn't cut it. But if you're feeling a bit disappointed after coming this far (after all, I promised an 'aha' moment), then hang on.


Today's discovery was that you can quite easily edit macros you've recorded and save them back!!! THIS IS HUGE. Why so? Because when you record a macro, it's quite normal to jump around a bit or get one or two keystrokes wrong. In fact, it's for this reason that I could never use Emacs's macro facility and failed to just 'get it'. In Vim, however, you can just open a scratch buffer and hit "<macro_letter>p - that's double quote, letter, p - to paste the contents of the register containing your macro. You see your macro's keystrokes as plain text - so go ahead and edit them, then use "<macro_letter>y<movement> to save your edits back to the register. You can now execute the macro with @<macro_letter> as if that's the way it was recorded in the first place.
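
Continuing the example above, assuming your macro lives in register a, the whole round trip looks like this (again, the quoted bits are annotations):

[sourcecode language="text"]
"ap         " in a scratch buffer, paste register a - the raw keystrokes show up as text
            " ...now edit the pasted line to fix the stray keystrokes...
0"ay$       " yank the corrected line (sans newline) back into register a
@a          " replay it - as if it had been recorded right the first time
[/sourcecode]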


Another obvious tip: you can execute the contents of any register as if it were a macro with @<register>. Not sure when that could be helpful - but knowing that it's possible is good.

Tuesday, January 17, 2012

Yoohooo!! Successfully compiled Android from source

Finally!!!


So my last weekend's project was to compile Android ICS from source. Given that the repo weighs in at over 6 gigs, just getting it down took the better part of Friday and Saturday night. By the time I got down to running make on it, it was Sunday afternoon.


Needless to say, things didn't work too well. I'm running this on a 32-bit Ubuntu 10.04 VirtualBox VM with a piddly 1GB of RAM. When make failed the first time, I realized that swap was a measly 300MB. First things first: bumped the VM's memory up to 2GB (that's all I can spare) and increased swap to 2GB.
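
In case it helps anyone, here's one way to do the swap bit on Ubuntu - a sketch, assuming you have the disk space to spare for a 2GB swap file:

[sourcecode language="bash"]
# create a 2GB swap file and enable it
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo mkswap /swapfile
sudo swapon /swapfile
# verify that the new swap is active
free -m
[/sourcecode]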


The next round of compilation failed too - ran out of disk space - and that was Sunday night. Things stayed there until finally, this evening, I resized the disk in VirtualBox to 50 gigs. Started the compilation again and this time ran into linker errors when building webcore. One more round of troubleshooting involved deleting the previously built static library and running make again. Surprisingly, this time make completed successfully - to the point where I wasn't sure if it had succeeded or just failed silently on something else.


The next step was to run the emulator to see if it would really boot up. Over at source.android.com, they oversimplify things when they say you can just run emulator from the Android root folder. That didn't work for me - this time because I hadn't sourced the build/envsetup.sh file... this thread http://groups.google.com/group/android-platform/browse_thread/thread/91ff18e034acf951 helped in tracking that one down.
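
For anyone else stuck at the same point, this is roughly the sequence that got the emulator up for me (a sketch - the source path is whatever your checkout is, and the lunch target full-eng is just the generic emulator build):

[sourcecode language="bash"]
cd ~/android                 # wherever your repo checkout lives
source build/envsetup.sh     # pulls lunch, emulator etc. into the shell
lunch full-eng               # select the generic emulator build
make -j4                     # adjust -j to the cores you can spare
emulator                     # boots the freshly built system images
[/sourcecode]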


So finally, after all that trouble, I have my very own ICS build running!!!!


For now, it's onward ahoy to setting up Eclipse and starting on a fix I've been mulling over for some time now...


Signing off from cloud nine
R

Monday, January 16, 2012

Ubuntu, Console VIM - weird characters in insert mode

Now that I feel quite comfy with Vim, over the weekend I needed to quickly edit a config file on my Ubuntu 10.10 VirtualBox machine. Instead of GVim, I just opened the file in console Vim. As I hit i to get into insert mode, a bunch of weird character boxes were inserted. That was not good at all :( - just when you think you're comfortable with something, it goes and does something totally weird. In any case, I was in too much of a hurry to bother and went about editing my file in gVim. Also, backspace was wonky (same weird characters) - so I felt a little better; at least it was consistent. For reasons I fail to understand, Linux has to make proper backspace and delete handling such a pain! In any case, it's something I've dealt with enough times to know that there'd be something on Google.


Later on, I tried to see what all the fuss was about. Googling around, I found :help :fixdel and that seemed simple enough. Alas, when I tried it out, it didn't fix the issue at all. Also, I was getting weird characters just by pressing i to get into insert mode - and the Vim wiki page didn't have anything about that. Neither did Google turn up anything that seemed related.


So early this morning, on a whim, I read up a little on Vim's terminal handling. I have the following in my .vimrc:
[sourcecode language="text"]
set t_Co=256
[/sourcecode]
Maybe it was the colour escape codes coming through - so I checked :echo &term, which returned xterm under gnome-terminal and builtin_gui under gVim. So I've put the following bit in my .vimrc and it seems to have fixed things nicely:
[sourcecode language="text"]
if &term == "xterm"
    set term=xterm-256color
endif
[/sourcecode]

Wednesday, January 11, 2012

Android Annoyances

So yesterday and today, while driving back from work, I've had to join conference calls. The conference call provider we use at work has 10-digit passcodes. Usually, I have a few bridge numbers saved in my contacts with the DTMF codes, so I can just tap the contact to dial the access number and have the participant passcode typed in for me. However, yesterday's and today's calls were on a different bridge, and I had to try to remember a 10-digit number after dialling the access code - all while driving. Needless to say, it took a few attempts, and I'm sure my attention wasn't where it should have been - i.e. on the road and on the traffic. Besides being thoroughly unsafe on Bangalore roads, it's just frustrating (thankfully, better sense prevailed today and I pulled over, dialled into the bridge and then started driving again).


So the real issue is that the native parser that parses out email and calendar invites doesn't understand access codes and passcodes. That shouldn't be too hard to do - but then I started digging a bit deeper this evening. Granted the parser isn't smart enough; but at the very minimum, if Android handled tel: links properly, then it's just a matter of educating the folks who set up meetings to use something like <a href="tel:23423432233,,9230233#"> so you can click to call - in fact, in Outlook, if you type TEL: and then the number, it automatically gets parsed as a tel: hyperlink. Turns out it's a massive fail - if I click the link, Android shows me the dialler but without the DTMF codes (basically, only the number up to the first comma). TOTAL FAIL.
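
For the record, here's what I'd expect a working invite link to look like (hypothetical numbers; each comma conventionally adds a short pause before the DTMF digits, and the trailing # usually needs to be URL-encoded as %23):

[sourcecode language="html"]
<a href="tel:+918012345678,,1234567890%23">Join the bridge</a>
[/sourcecode]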


So, isn't this something that should have been brain-dead simple to do? I mean, this is 2012 after all - and I'm not asking for much. All I'm asking is that tel: URL parsing/handling be done in such a way that we can use our phones properly!!! Turns out there's been an open ticket, 4575, since Nov '09. And it's marked as an enhancement - I find that laughable, since it's a bug and definitely something that can be fixed quite easily (especially since a contact that has DTMF codes is dialled properly). However, in the two years that the ticket has languished, there have been 73 comments and not a single response from big GOOG :(


At the moment, it doesn't look like this is going to be fixed - so I started browsing through the Android source tree to see if I could find where the handling for tel: URLs lives. However, given the size of the Android source, that's like looking for a needle in a haystack. Guess I'll have better luck seeing if the CyanogenMod folks can fix this in CM9.


On a related note, the other thing that has confounded me is why in the world Android can't bundle a decent T9/smart dialer out of the box. I know there are tons of apps on the market that do this - but seriously, is smart dialing so out of this world that I need an app for it? As expected, there's a ticket but no action.


I think it's safe to assume that Google isn't interested in fixing these issues, as there's no 'benefit' in doing so - though for the life of me, I can't imagine either of them being particularly hard. In any case, I'm eagerly awaiting a CM9 build for the Nexus One (right now, I'm running an ICS build from XDA).

Tuesday, January 10, 2012

Facebook publicize is driving me nuts!

I'm thoroughly frustrated with Wordpress.com's Facebook publicize feature. In theory, it's supposed to post to your Facebook wall whenever you publish a new post and thereby publicize it among your friend circle... if it ever works. I've done all the resets, disconnects and reconnects, and it just doesn't. Now, this could very well be a Facebook problem rather than a Wordpress.com problem - so while my rant might be misdirected, it's a rant anyway against a thoroughly frustrating experience. It's like a bucket of cold water on my enthusiasm to be more active on my blog.


You see, having posted rarely to this blog, I get a measly 70-80 page views per day (yeah - there's no need for the snide looks). So one part of persisting more actively with the blog has been to see if I can get to 100+ page views per day. Modest goals, I admit - and getting the link to a new post onto the FB wall is a big part of it. If only it worked as it says on the tin :( :(


Anyway, this post is a test in itself - I've just jumped through the said hoops, mumbled the magic incantations and, in other words, followed every bit of direction available to make this work right. If this post shows up on my FB wall, well and good. If not, then I'm done trying to get this to work.

Monday, January 09, 2012

A syntax highlighter extension for Deck.js

So for the past few hours, I have been playing with Deck.js. I like the idea of a web-based presentation format rather than a blob like PowerPoint. At the same time, I'm a bit circumspect too, given the state of the tools. At least for my use, there's really no burning need that PowerPoint can't meet (though I get the shivers every time I have to do a presentation), and all the web-based/HTML5 options seem raw at the moment on some much-needed features (slide notes, slide printing, scaling and so on).

Anyway, after a few minutes on Google and StackOverflow, I decided to give Deck.js a spin. Deck.js is really nice and you should take a tour if you haven't done so. After trying the online presentation and the introduction, I downloaded the latest version to give it a more thorough spin. As usual, one of the first things I wanted was to embed code snippets, and I thought it would be nice to integrate Alex Gorbatchev's SyntaxHighlighter. It turned out to be really simple to do (I'm sure there are other syntax highlighter extensions for deck.js out there) - but since I got something working pretty easily, here it is:

Create a file called deck.syntaxhighlighter.js with the code below:

[sourcecode language="javascript"]
(function ($) {
    // pull in SyntaxHighlighter's core and default theme stylesheets
    $("head").append(
        '<link href="http://alexgorbatchev.com/pub/sh/current/styles/shCore.css" rel="stylesheet" type="text/css" />'
    ).append(
        '<link href="http://alexgorbatchev.com/pub/sh/current/styles/shThemeDefault.css" rel="stylesheet" type="text/css" />'
    );

    function setupSyntaxHighlighterAutoloads() {
        // register the brush scripts so they're fetched on demand per language
        SyntaxHighlighter.autoloader(
            'applescript http://alexgorbatchev.com/pub/sh/current/scripts/shBrushAppleScript.js',
            'actionscript3 as3 http://alexgorbatchev.com/pub/sh/current/scripts/shBrushAS3.js',
            'bash shell http://alexgorbatchev.com/pub/sh/current/scripts/shBrushBash.js',
            'coldfusion cf http://alexgorbatchev.com/pub/sh/current/scripts/shBrushColdFusion.js',
            'cpp c http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCpp.js',
            'c# c-sharp csharp http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCSharp.js',
            'css http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCss.js',
            'delphi pascal pas http://alexgorbatchev.com/pub/sh/current/scripts/shBrushDelphi.js',
            'diff patch http://alexgorbatchev.com/pub/sh/current/scripts/shBrushDiff.js',
            'erl erlang http://alexgorbatchev.com/pub/sh/current/scripts/shBrushErlang.js',
            'groovy http://alexgorbatchev.com/pub/sh/current/scripts/shBrushGroovy.js',
            'java http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js',
            'jfx javafx http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJavaFX.js',
            'js jscript javascript http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJScript.js',
            'perl pl http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPerl.js',
            'php http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPhp.js',
            'text plain http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPlain.js',
            'py python http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPython.js',
            'ruby rails ror rb http://alexgorbatchev.com/pub/sh/current/scripts/shBrushRuby.js',
            'sass scss http://alexgorbatchev.com/pub/sh/current/scripts/shBrushSass.js',
            'scala http://alexgorbatchev.com/pub/sh/current/scripts/shBrushScala.js',
            'sql http://alexgorbatchev.com/pub/sh/current/scripts/shBrushSql.js',
            'vb vbnet http://alexgorbatchev.com/pub/sh/current/scripts/shBrushVb.js',
            'xml xhtml xslt html http://alexgorbatchev.com/pub/sh/current/scripts/shBrushXml.js'
        );
        // kick off highlighting for all matching elements on the page
        SyntaxHighlighter.all();
    }

    // load the core script first, then the autoloader, then set everything up
    $.getScript("http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js",
        function () {
            $.getScript("http://alexgorbatchev.com/pub/sh/current/scripts/shAutoloader.js", setupSyntaxHighlighterAutoloads);
        });
})(jQuery);
[/sourcecode]

Throw that in your deck.js extensions folder. In the slide deck where you want to use this extension, include a script line just before the closing body tag:

[sourcecode language="html"]
<script src="../extensions/deck.syntaxhighlighter.js"></script>
</body>
</html>
[/sourcecode]

And you're done. To include code snippets in your deck, just use either of the methods described on the SyntaxHighlighter page (a snippet for the script-tag method is below):

[sourcecode language="html"]
<script type="syntaxhighlighter" class="brush: js">
$(document).ready(function () {
    // this is the function body
});
</script>
[/sourcecode]

That's all there is to it. The code above could do with some improvements (conditionally loading local copies if the remote loading fails, etc.) - but this is just a quickie script, so feel free to modify it to your heart's content.
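
For instance, one way to bolt on that fallback (a sketch - the local path ../lib/shCore.js is hypothetical, and this relies on jQuery 1.5+ where $.getScript returns a promise you can chain .fail() onto):

[sourcecode language="javascript"]
// try the CDN copy first; if that fails, fall back to a local copy
function loadCore(onLoaded) {
    $.getScript("http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js")
        .done(onLoaded)
        .fail(function () {
            // ../lib/shCore.js is a stand-in for wherever a local copy lives
            $.getScript("../lib/shCore.js", onLoaded);
        });
}
[/sourcecode]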

Friday, January 06, 2012

Vim - unmap Esc!!

So I had the bright idea (by no means original, as I later figured out) that it'd be great to avoid the Esc key in Vim, since it's so far away from the home row. The alternative to pressing Esc is Ctrl-[, which I still find hard even though I've mapped CapsLock to Control. So, after some more googling around, I've settled on mapping jk to Esc. It's been a few hours with this setup, and while it's been an absolute pain so far, I think it's a great way to avoid the Esc key jump. I can already feel my finger muscle memory relearning - my hand jumps instinctively for the Esc key much less now.

Here's my setup in case you want to try this out. Bung the following into your .vimrc or _vimrc as the case may be:

[sourcecode language="text"]
inoremap <Esc> <Esc>:echoe "use jk"<CR>
inoremap jk <Esc>
[/sourcecode]

The first mapping makes Vim echo a reminder whenever I hit Esc. It's not friendly since it introduces a pause - but then, the whole idea is to make Esc painful enough that you shy away from hitting it.

Upgraded to XBMC 11.0 Eden beta

So I upgraded the good ole' media center machine at home to XBMC 11.0 Beta. XBMC has been one of those software finds that has just been marvellous - to the point where I can't imagine the telly at home without it. I've pretty much stopped watching regular TV/cable and watch almost exclusively via XBMC.
It's also been a great way to keep the aging laptop (circa 2006 - Core2Duo 1.6 GHz, 2GB RAM and a piddling ATI Radeon X1600) in active duty.
Here's a hat-tip to all the XBMC guys and gals. And if you're not running a media center at home, you should give it a spin - XBMC makes your idiot box smart!