Wednesday, September 04, 2013

What publishing a python module taught me

Programming in Python

So I've mostly used Python for one-off scripts and tools, and at one point for a serious foray into Django - but I'd never been in a situation where I thought of publishing anything.

Hmm - I crossed that bridge this weekend - and it's been a fun journey. In fact, I'm writing this post with the tool I wrote :)

Things I've picked up

Code

  1. A better understanding of Python modules, classes and code organization for libraries.
  2. Good unit tests
  3. Mocking in Python (see the quick sketch below)
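Since mocking was one of the bigger learnings, here's a tiny sketch of the idea - this isn't from my module (the function and names below are made up for illustration), just the mock library (unittest.mock on Python 3.3+) stubbing out a collaborator and asserting on the call:

    # Minimal mocking sketch - module/function names are hypothetical.
    try:
        from unittest import mock   # Python 3.3+
    except ImportError:
        import mock                 # standalone 'mock' package on older Pythons

    def fetch_greeting(client):
        # Imagine this talks to a remote service via the injected client.
        return client.get("/greeting").strip().upper()

    def test_fetch_greeting():
        fake_client = mock.Mock()
        fake_client.get.return_value = "  hello  "
        assert fetch_greeting(fake_client) == "HELLO"
        fake_client.get.assert_called_once_with("/greeting")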

Packaging

  1. Packaging with setup.py
  2. pip, setuptools, easy_install and their idiosyncrasies
  3. Installing a platform-specific script
  4. PyPI - registering and publishing
    • Tutorial - one point to note: if you create the .pypirc manually, you will also need to run python setup.py register manually. If you skip that, python setup.py sdist upload will fail with a 403. (A minimal setup.py sketch follows below.)
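For reference, a minimal setup.py looks roughly like the sketch below - the package name, version and entry point are placeholders, not my actual module:

    # Minimal setup.py sketch - names and versions below are placeholders.
    from setuptools import setup, find_packages

    setup(
        name="mymodule",
        version="0.9.0",
        description="One-line description of the module",
        packages=find_packages(),
        entry_points={
            "console_scripts": [
                # setuptools generates a platform-appropriate wrapper script
                "mymodule = mymodule.cli:main",
            ],
        },
    )

    # Then, roughly:
    #   python setup.py register       # one-time, needed if .pypirc was created by hand
    #   python setup.py sdist upload   # build the source dist and push it to PyPI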

Testing

  1. virtualenv - this link
  2. Testing platform specific scripts installed with tools above
  3. Coverage

Coding/Style/Syntax linting

  1. pyflakes, pylint, pep8 and integration in Vim with Syntastic

What I really, really liked

  1. That I didn't miss a debugger
  2. That tests were short and sweet
  3. Good code coverage out of the box
  4. pip
  5. Overall, how pleasant it was and how much I enjoyed it.

Where I had hiccups

  1. Mocks in python were a little hard to debug/understand
  2. Should have written the tests first - but it came as an afterthought after I decided to publish.
  3. Finding good documentation on packaging - for example, it's hard to find a good walkthrough of how to publish

Just putting the finishing touches and a little polish on a v0.9 release. Basically, this post itself is nothing but a test. Should be out with it in a day or two.

Friday, August 30, 2013

Fixing Wifi sleep of death

The problem

I have a TP-Link WN722N USB Wifi dongle. Linux Mint picked it up during install and seemed like all was good.

Then the other day, I noticed that sometimes WiFi would be flaky as hell - all I'd get was the password prompt. Turns out this is a common problem with USB WiFi. After a few days, the pattern emerged: it recurred after putting the computer to sleep. The fix is easy - just unload the ath9k_htc module before suspending. Edit /etc/pm/config.d/config (create it if needed):

SUSPEND_MODULES="ath9k_htc"

Monday, August 26, 2013

Moving from Wordpress.com to Blogger

Moving from wordpress

So I moved my blog from its old home at http://niftybits.wordpress.com to http://blog.rraghur.in - I've also moved from wordpress.com to Blogger. For quite some time, I've not been happy with Wordpress's abilities for a tech blog. It's a commercial endeavour, so if you need additional features or tweakability, you've got to fork out for the good stuff. I had been intending to get my own domain and Wordpress hosting, which would give me full control over the blog engine and the ability to install any addons I wanted. But there was this niggling feeling in the back of my head that I'd be creating a monster - set it up and then take on its maintenance as well :(.

That's part of the reason I decided to dust off my old Blogger account and see where it stood. The last time I'd touched Blogger, about 10 yrs ago, it was just after Google bought Blogger and it was really more of a mommy-blog engine. Things have changed while I've been living in a hole... Blogger's now much more polished. A few things where it leaves Wordpress.com in the dust:
  1. Custom markup and css.
  2. Custom domains
  3. Google Analytics
  4. Ability to have ads and hence make some money - not that I intend to.
Where it falls behind:
  1. Analytics - really... the built-in site analytics seem a little iffy.
    Update: once you integrate Blogger with Google Analytics, it's much, much nicer than the stats Blogger provides.
  2. Referrer spam - all I saw on the analytics dashboard were entries from the www dot vampire dot stat domain. Turns out this is referrer spam and isn't blocked/can't be blocked. What I don't understand is how this was never a problem with wordpress.com.
  3. Themes - far fewer than WP - but this isn't an issue since you can tweak anything to your heart's content.
Now all of the WP shortcomings can be addressed if you go for paid upgrades OR just host your own WP. Unfortunately, neither is a good option for me. So while I didn't like moving out, it had to be done.

Registering a domain

I went ahead and registered the domain with BigRock.in - here's a referral link that will get you 25% off your domain.

 

Pointing Blogger to your custom domain

This was very simple - just follow the instructions on Blogger. Besides the record for your custom domain itself, you will need to set up 2 CNAME records for Blogger in your DNS management console.

 

Migrating content from WP.com

Next, I had to get the blog content transferred. Went into the Wordpress dashboard, Tools > Export > All Content, and exported the blog. This lets you download an XML file with all your blog content. Now, we have to convert it so that it can be imported by Blogger. Move over to Wordpress2Blogger and upload the file. If all goes well, you get a converted file. It wasn't so simple for me though - it gave an Invalid XML at line xxxx error. Opening the Wordpress XML, I didn't see any issues, and a little googling indicated that WP.com is notorious for generating invalid/malformed XML. Hmm - not a biggie. Pulled out xmllint and ran it over the Wordpress file:
xmllint -v /path/to/wordpress/export/file
And xmllint found the problem - it was an &nbsp; entity that had not been declared. Just removed it from the file, uploaded it again to Wordpress2Blogger, and the conversion went through without a hitch.
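If you don't have xmllint handy, a couple of lines of Python will point at the same spot - this is just a sketch of the idea, not what I actually ran (the file name is a placeholder):

    # Quick sketch: report where the export stops being well-formed XML.
    import xml.etree.ElementTree as ET

    try:
        ET.parse("wordpress-export.xml")   # path is a placeholder
        print("well-formed")
    except ET.ParseError as err:
        # prints something like "undefined entity: line 1234, column 56"
        print(err)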

 

Other Tweaks

Formatting of posts - while the content came over fine, it came in with <br /> tags. Also, all my old source code listings that used the WP.com syntax tags [sourcecode] [/sourcecode] were broken and had to be fixed.
For code syntax highlighting, I went with highlight.js for now. May change later. Also copied the key highlighting CSS from here. You can edit the template and stick the markup in the head section.
Next, spent some time on the theme tweaks on Blogger till I got tired. I'm ok with it for now - but will probably come back to it later.

 

In Closing..

Well, the move was completed. I'm a little dissatisfied with a few things:
  1. What do I do with my Wordpress blog? I cannot redirect it, and while I could delete it, I'm a little hesitant about burning bridges.
  2. Reaching the new blog via Google search seems to be a problem - searching for xbmc xvba nettop rraghur doesn't even show the Blogger link; only the Wordpress links are there. I think this is because of page reputation - so not much I can do.
  3. Blogging with Vim: VimRepress was a good solution for WP.com. For Blogger I have only found Blogger.vim, which I'm yet to try.
Still, the benefits are worth much more: the peace of mind of not having to worry about maintenance, and the freedom to move to a self-hosted blog later on if needed.

Saturday, August 24, 2013

Linux Mint 15 KDE - tweaks and fixes

Additional fixes post installation

SSH connection refused

So today I tried ssh'ing into the desktop and no go. I was getting connection refused and thought it had to do with either SSH not being installed or being blocked by the firewall. When I checked, though, the OpenSSH server was installed and the service was running:

sudo service ssh status
ssh start/running, process 2709

Hmm - this is weird. The next check was iptables, and that was clear too. So the last check was to look at /var/log/auth, and indeed there's a problem. Interestingly, the host keys weren't generated during installation:

Aug 22 20:15:58 desktop sshd[1960]: fatal: No supported key exchange algorithms [preauth]
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_rsa_key
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key

Ok - so the fix is easy - generate the keys with

    sudo ssh-keygen -A

After that, everything's back to normal :)

Power button shuts down computer

This is Kubuntu Bug 1124149. I'll spare you the details, which you can read yourself. The fix needed is to symlink /usr/bin/qdbus:

sudo ln -sf /usr/lib/x86_64-linux-gnu/qt4/bin/qdbus /usr/bin/qdbus

Tuesday, August 20, 2013

Mixing Generics, Inheritance and Chaining

In my last post on unit testing, I wrote about a technique I'd learnt for simplifying test setups with the builder pattern. It provides a higher-level, more readable API, resulting in DAMP tests.

Implementing it though presented a few interesting issues that were fun to solve and hopefully, instructive as well. I for one will need to look it up if I spend a few months doing something else - so got to write it down :).

In the Scheduler user portal, some controllers derive from the MVC4 Controller class whereas others derive from a custom base controller. For instance, controllers that deal with logged-in interactions derive from TenantController, which provides TenantId and SubscriptionId properties. IOW, a pretty ordinary and commonplace setup.
    class EventsController : Controller 
    {
        public ActionResult Post (MyModel model) 
        {
        // access request, form and other http things
        }
    }

    class TenantController: Controller 
    {
        public Guid TenantId {get; set;}
        public Guid SubscriptionId {get; set;}
    }

    class TaskController: TenantController
    {
        public ActionResult GetTasks()
        {
            // Http things and most probably tenantId and subId as well.
        }
    }
So, tests for EventsController will require HTTP setup (request content, headers etc) where as for anything deriving from TenantController we also need to be able to set up things like TenantId.

Builder API


Let's start from how we'd like our API to be. So, for something that just requires HTTP context, we'd like to say:
    controller = new EventsControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();
And for something that derives from TenantController:
    controller = new TaskControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .WithTenantId(theTenantId)
                .WithSubscriptionId(theSubId)
                .Build();
The controller builder will basically keep track of the different options and always return this to facilitate chaining. Apart from that, it has a Build method which builds a Controller object according to the different options and then returns the controller. Something like this:

    class TaskControllerBuilder
    {
        private object[] args;
        private Guid tenantId;
        public TaskControllerBuilder WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public TaskControllerBuilder WithTenantId(Guid id ) 
        {
            this.tenantId = id;
            return this;
        }

        public TaskController Build() 
        {
            var mock = new Mock<TaskController>(MockBehavior.Strict, args);
            mock.Setup(t => t.TenantId).Returns(tenantId);
            return mock.Object;
        }
    }

Generics


Writing XXXControllerBuilder for every controller isn't even funny - that's where generics come in - so something like this might be easier:
    controller = new ControllerBuilder<EventsController>()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();
and the generic class as:
    class ControllerBuilder<T> where T: Controller
    {
        private object[] args;
        private Guid tenantId;
        protected Mock<T> mockController;

        public ControllerBuilder<T> WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public virtual T Build()
        {
            mockController = new Mock<T>(MockBehavior.Strict, args);
            mockController.Setup(t => t.TenantId).Returns(tenantId);
            return mockController.Object;
        }
    }
It takes about 2 seconds to realize that this won't work - since the constraint only specifies that T should be a subclass of Controller, we do not have the TenantId or SubscriptionId properties available in the Build method.

Hmm - so a little refactoring is in order: a base ControllerBuilder that can be used for plain controllers, and a subclass for controllers deriving from TenantController. So let's move tenantId out of ControllerBuilder.
    class TenantControllerBuilder<T>: ControllerBuilder<T>  
     where T: TenantController          // and this allows access to
                                        // TenantId and SubscriptionId
    {
        private Guid tenantId;
        public TenantControllerBuilder<T> WithTenantId(Guid tenantId) 
        {
            this.tenantId = tenantId;
            return this;
        }

        public override T Build()
        {
            // call the base
            var mock = base.Build();
            // do additional stuff specific to TenantController sub classes.
            mockController.Setup(t => t.TenantId).Returns(this.tenantId);
            return mock;
        }
    }
Now, this will work as intended:
/// This will work:
controller = new TenantControllerBuilder<TaskController>()
            .WithTenantId(guid)                             // Returns TenantControllerBuilder<T>
            .WithConstructorParams(mockOpsRepo.Object)      // okay!
            .Build();

But this won't compile: :(

///This won't compile:
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Compiler can't resolve WithTenant method.
            .Build();
This is essentially return type covariance, and it's not supported in C# (and likely never will be). With good reason too - if the base class contract says that you'll get a ControllerBuilder, then a derived class cannot narrow that contract and promise that it will return not just any ControllerBuilder but specifically a TenantControllerBuilder.

But this does muck up our builder API's chainability - telling clients to call methods in some arbitrary sequence is a no-no. This is where extension methods provide a neat solution. It's in two parts:

  • Keep only state in TenantControllerBuilder.

  • Use an extension class to convert from ControllerBuilder to TenantControllerBuilder safely with the extension api.


// Only state:
class TenantControllerBuilder<T> : ControllerBuilder<T> where T : TenantController
{
    public Guid TenantId { get; set; }

    public override T Build()
    {
        var mock = base.Build();
        this.mockController.SetupGet(t => t.TenantId).Returns(this.TenantId);
        return mock;
    }
}

// And extensions that restore chainability
static class TenantControllerBuilderExtensions
{
    public static TenantControllerBuilder<T> WithTenantId<T>(
                                        this ControllerBuilder<T> t,
                                        Guid guid)
            where T : TenantController
    {
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = guid;
        return c;
    }

     public static TenantControllerBuilder<T> WithoutTenant<T>(this ControllerBuilder<T> t)
            where T : TenantController
    {
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = Guid.Empty;
        return c;
    }
}
So, going back to our API:
///This now works as intended
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Resolves to the extension method
            .Build();
It's nice sometimes to have your cake and eat it too :D.

Wednesday, August 14, 2013

Unit Tests: Simplifying test setup with Builders

Had some fun at work today. The web portal to Scheduler service is written in ASP.NET MVC4. As such we have a lot of controllers and of course there are unit tests that run on the controllers. Now, while ASP.NET MVC4 apparently did have testability as a goal, it still requires quite a lot of orchestration to test controllers.

Now, all this orchestration and mock setup only muddies the waters and gets in the way of test readability. By implication, tests are harder to understand and maintain, and eventually it becomes harder to trust them. Let me give an example:

[TestFixture]
public class AppControllerTests
{
    // private set up fields elided

    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        _controller = MvcMockHelpers.CreatePartialMock(_tenantRepoMock.Object, _tenantMapRepoMock.Object);

        guid = Guid.NewGuid();

        // partial mock - we want to test controller methods but want to mock properties that depend on
        // the HTTP infra.
        _controllerMock = Mock.Get(_controller);
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        // Arrange
        _controllerMock.SetupGet(t => t.TenantId).Returns(guid);
        _controllerMock.SetupGet(t => t.SelectedSubscriptionId).Returns(guid);
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";
        _controller.SetFakeControllerContext(formValues);

        // Act
        var result = _controller.Index() as ViewResult;

        // Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}
As you can see, we're setting up a couple of dependencies and then creating the SUT (_controller) as a partial mock in the setup. In the test, we're setting up the request value collection and then exercising the SUT to check that we get redirected to a deep link. This works - but the test setup is too complicated. Yes - we need to create a partial mock and then set up expectations that correspond to a valid user with a valid subscription - but all of this is lost in the details. As such, the test setup is hard to understand and hence hard to trust.

I recently came across this Pluralsight course and there were a few thoughts that hit home right away, namely:
  1. Tests should be DAMP (Descriptive And Meaningful Phrases)
  2. Tests should be easy to review
Test setups require various objects in different configurations - and that's exactly what a Builder is good at. The icing on the cake is that if we can chain calls to the builder, then we move towards evolving a nice DSL for tests. This goes a long way towards improving test readability - tests have become DAMP.

So here's what the Builder API looks like from the client (the test case):

[TestFixture]
public class AppControllerTests
{
    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        guid = Guid.NewGuid();
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";

        var controller = new AppControllerBuilder()
            .WithFakeHttpContext()
            .WithSubscriptionId(guid)
            .WithFormValues(formValues)
            .Build();

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}

While I knew what to expect, it was still immensely satisfying to see that:
  1. We've abstracted away details like setting up mocks, the fact that we're using a partial mock, and even that we're using the MVC mock helper utility - all behind the AppControllerBuilder, leading to simpler code.
  2. The Builder helps readability - it makes it easy to understand what preconditions we'd like set on the controller. This is important if you'd like to get the test reviewed by someone else.
You might think that this is just sleight of hand - after all, have we not moved all the complexity to the AppControllerBuilder? Also, I haven't shown the code - so definitely something tricky is going on ;)?

Well, not really - the Builder code is straightforward since it does one thing (build AppControllers) and does it well. It has a few state properties that track the different options, and the Build method basically uses the same code as in the first snippet to build the object.

Was that all? Well, not really - you see, as always, the devil's in the details. The above code isn't real - it's more pseudocode. Secondly, an example in isolation is easier to tackle. However, IRL (in real life), things are more complicated. We have a controller hierarchy, and writing builders that work with the hierarchy had me wrangling with generics, inheritance and chainability all at once :). I'll post a follow-up covering that.

Sunday, August 11, 2013

And we're back to windows

Well, not really - but I have your attention now... So in my last post, I talked about moving my home computer from Win 7 to Linux Mint KDE. That went OK for the most part, other than some minor issues.
Fast-forward a day and I hit my first user issue :)... my wife's workplace has some video content that is distributed as DRM-protected swf files that will play only through a player called HaiHaiSoft player!

Options

  1. Boot into Windows - painful and slow, and it kills everyone else's sessions.

  2. Wine - thought it'd be worth a try - installed Wine and dependencies through Synaptic. As expected, it wouldn't run the HaiHaiSoft player - crashed at launch.

  3. Virtualization: so the final option was a VM through VirtualBox. Installed VirtualBox and its dependencies (dkms, guest additions etc) and brought out my Win 7 install disk from cold storage.

Virtualbox and Windows VM installation

Went through installation and got Windows up and running. Once I got the OS installed, also installed guest additions and it runs surprisingly well. I'd only used Virtualbox for a linux guest from a Windows host before so it was a nice change to see how it worked the other way around.

Anyway, once the VM was installed, downloaded and installed the player and put a shortcut to virtualbox on the desktop. Problem solved!

Saturday, August 10, 2013

Upgraded to Linux

So after suffering tons of crashes (likely due to AMD drivers) and general system lagginess, I finally decided to ditch windows and move to linux full time.
This is on my home desktop which is more a family computer than something that only I would use.
I was a little apprehensive about driver support as usual, and about tricky stuff like suspend-to-RAM (S3), which always seems highly driver-dependent and problematic on Linux (it is still a pain on my XBMCBuntu box). Anyway, nothing like trying it out.

After looking around a bit, downloaded Linux Mint 15 (default and KDE). Booted with the Live CD and liked the experience - though GNOME seems a bit jaded and old. I liked KDE much better - esp since it seems more power user friendly.

So after testing the hardware stuff (suspend, video drivers and so on) - all of which worked flawlessly, I must say - I decided to go ahead and install it on one of my HDDs. Unfortunately, installation was a bit rocky - I don't know if it was just me, but the Mint installer would progress up to preparing disks and hang there for 10+ minutes without any feedback. I'm assuming it was reading partition tables and so forth - but no idea why it took so long. I thought it had hung a couple of times - so I terminated it, and it was only by accident that I found it was still working, when I left it on its own for some time and came back. It presented me the list of options (guided partitioning on the entire disk, co-locating with another OS etc) - but things actually went worse after this.

What seems to have happened is that my pending clicks on the UI were all processed and it proceeded to install on my media drive before I had a chance to intervene... wiped out my media drive. Thankfully, I had a backup of the important stuff on that drive from before the installation, so it wasn't a biggie...
At this point, I was having serious doubts about continuing with Mint and was ready to chuck it out of the window and go back to Kubuntu, or just back to Windows. However, I hung on - given that I'd already wiped a drive, I might as well install it properly and then wipe it if it wasn't good.

Anyway, long story short, I restarted the install, picked my 1TB drive and partitioned it as 20GB /, 10GB /var, 1GB /boot and the rest unpartitioned. Mint went through the installation and seemed to take quite some time - there were a couple of points where the progress bar was stuck at some percentage for multiple minutes and I wasn't sure if things were proceeding or hung. In any case, after the partitioning window I was more inclined to wait. Good that I did, since the installation did eventually complete.

Feedback to Mint devs - please make the installer be more generous with feedback - esp if the installer goes into something that could take long.

First boot

Post installation, I rebooted and grub showed my Windows boot partition as expected. I still haven't tried booting into Windows, so that's one thing to check. Booted into Mint and things looked good. Set up accounts for my dad and my wife. One thing I had to do was edit /etc/pam.d/common-password to remove password complexity (obscure) and set minlen=1:

     password   [success=1 default=ignore]  pam_unix.so minlen=1 sha512

Next was to set up local disks (2 ntfs and 1 fat32 partition) so that they are mounted at boot and everyone can read and write to them. I decided to go the easy route and just put entries in /etc/fstab

UUID=7D64-XXX  /mnt/D_DRIVE    vfat      defaults,uid=1000,gid=100,umask=0007                   0       2
UUID="1CA4559CXXXXX" /mnt/E_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007 0 2
UUID="82F006D7XXXX" /mnt/C_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007 0 2

That fixed the mount issue, but I still needed to have the disks surface properly in the file manager (Dolphin) - this was actually quite easy: I just added them as Places and removed the device entries from the right-click menu. This worked for me - I'd have liked to make it the default for all users but didn't find a way. Finally I decided to just copy the ~/.local/share/user-places.xbel file to each local user and set the owner.

Android

Other than that, I also need to be able to connect my Nexus 4 and 7 as MTP devices. I had read that this doesn't work out of the box - but it looks like that's been addressed in Ubuntu 13.04 (and hence in Mint).
I also need adb and fastboot - so I just installed them through Synaptic. BTW, that was awesome, since it means I didn't have to download the complete Android SDK just for two tools.

General impressions

Well, I'm still wondering why I didn't migrate full time to Linux all these years. Things have been very smooth - but I need to call out the key improvements that I've seen till now:

  1. Boot - fast - less than a minute. Compare that to up to 3 minutes till the desktop is loaded on Win 7.
  2. Switching users - a huge, huge speed up. On Windows, it would take so long that most of the time we would just continue using someone else's login.
  3. Suspend/resume - works reliably. Back on Windows, for some reason, if I had multiple users logged in, suspend would work but resume was hit and miss.
  4. The GPU seems to work much better. Note here though that I'm not playing any games. I have a Radeon 5670 - but somehow on Windows even Google Maps (the new one) would be slow and sluggish while panning and zooming. Given that on Linux I'm using the open source drivers instead of fglrx, I was expecting the same if not worse. Pleasantly surprised that Maps just works beautifully - panning, zooming in and out is smooth and fluid. Even the photospheres that I had posted to Maps seem to load a lot more quickly.

Well, that's it for now. I know that a lot of it might be 'new system build' syndrome, whereas on Windows gunk had built up over multiple years. However, note that my Windows install was fully patched and up to date. Being a power user, I was even going beyond the default levels of tweaking (page file on a separate disk from the system etc) - but I just got tired of the issues. The biggest trigger was the GPU crashes of course, and here too, updating to the latest drivers didn't seem to help much. I fully realize that it's almost impossible to generalize. My work laptop has Win 7 x64 Enterprise and I couldn't be happier - it remains snappy and fast in spite of a ton of things being installed (actually, maybe not - the Linux boot is still faster) - but it is stable.
And of course, there might be a placebo effect at some places - but in the end what matters is that things work.

Thursday, July 25, 2013

Vimgrep on steroids - even on Windows

So I was looking at this vim tip for finding in files from within Vim - while it looks helpful, there are a number of possible improvements:

  1. Why a static binding? Being able to tweak the patterns or the files to search is quite common - so there's much more value if the command is printed on the command line, ready to be edited to your heart's content, or you can just go ahead and execute the search with Enter.
  2. The tip won't work for files without extensions (say .vimrc) - in this case, expand("%:e") returns an empty string.
  3. lvimgrep is cross-platform but slow - let's use MinGW grep too for vimgrep.
  4. And make that MinGW grep integration work on different machines.
It was more of an evening of scratching an itch (a painful one if you're a zero in vimscript :) ). Here's the gist for it - hope someone finds it useful.

Feel free to tweak the mappings - I use the following:

  1. leader+f: normal mode: vimgrep for current word, visual mode: search for current selection
  2. leader+fd: Similar - but look in the directory of the file and below
  3. leader+*: Similar to the above, but use internal grep

Save the file to your .vim folder and source it from .vimrc

    so ~/.vim/grephacks.vim

A few notes:

  1. GNUWIN is an env variable pointing to some folder where you've extracted mingw findutils and grep and dependencies
  2. The searches by default work down from whatever vim thinks is your present working directory. I highly recommend vim-rooter if you're using anything like subversion, mercurial or git as vim-rooter automatically looks for a parent folder that contains .git, .hg or .svn (and more - please look it up)

Happy vimming!

Saturday, February 16, 2013

Downloading over an unreliable connection with Wget

This is a part rant, part tip - so bear with me... My broadband connection absolutely sucks over the past week. I upgraded from 2Mbps with a download limit to a 4Mbps with unlimited downloads and since then it has been nothing but trouble... Damn BSNL!! I've probably registered about 30 odd complaints with them to no avail. If there was a Nobel for bad customer service, BSNL would probably win it by a mile. Some examples:
  1. They'll call to find out what the complaint is, and even when I explain what's happening, they hardly hear me out at all.
  2. Or they call up and say 'We have fixed it at the exchange' when nothing has changed.
  3. They automatically close the complaints :)
Guess they find it too troublesome that someone who's paying for broadband actually expects the said broadband connection to work reliably!

Anyway, Airtel doesn't seem to be any better - they need 10 days to set up a connection and when I was on the phone with them, they didn't seem too interested in increasing their customer count by 1 :).

I also tried calling an ISP called YouBroadband after searching some of the Bangalore forums for good ISPs. They promised a call within 24 hours to confirm whether they have coverage in my area and whether it was feasible for them to set up the connection - and that was 48 hours ago!

At work, I've heard good things about ACTBroadband and they have some ads in TOI as well, but they said they don't have coverage in my area :(.

So how do you download


Today I needed to download something and doing it from the browser failed each time since my DSL connection would blink out in between!

After ranting and raving and writing the first part above and still mentally screaming at BSNL, decided to do something about it... Time for trusty old wget - surely, it'll have something?

Turns out that guess was 100% on the money... it took a few tries experimenting with different options, but it finally worked like a charm:

wget -t0 --waitretry=5 -c -T5 url
# where
# -t0 - unlimited retries
# --waitretry - seconds to wait between retries
# -c resume partially downloaded files
# -T5 - set all timeouts to 5 seconds. Timeouts here are connect timeout, read timeout and dns timeout

Sunday, February 03, 2013

Single Page Apps

We released the Scheduler service (cloud hosted cron that does webhooks) on the 18th of Jan. It was our first release (still in beta) and you can sign up for it via the Windows Azure store as an addon. Upcoming release will have a full portal and the ability to register without going via the Windows Azure portal.

We've been building the user portal to the Scheduler service as a Single Page App (SPA) and I wanted to share some background and insights we've gained.

SPA overview

To review, an SPA is a web app contained in a single page - where 'pages' are nothing but divs being shown/hidden based on the state of the app and user navigation.

The benefits are that you never have a full page refresh at all - essentially, page loads are instantaneous and data is retrieved and shown via AJAX calls. From a UX standpoint, this delivers a 'speedier' experience, since you never see the 'static' portions of your page reload when you navigate around.

All that speediness is great but the downsides are equally important.

SPA - Challenges

  1. Navigation - SPAs by nature break the normal navigation mechanism of the browser. Normally, you click a link, it launches a request and updates the URL in the address bar; the response is then fetched and painted. In an SPA, however, a link click is trapped in JS, the state is changed, and a different div is shown (with a background AJAX request being launched).
    This breaks Back/Forward navigation, and since the URL doesn't change, bookmarkability is broken to boot.
  2. SEO - SEO also breaks because links are wired up to JS and most bots cannot follow such links.
Now, none of this is really new. Gmail was probably the first well-known SPA implementation and that's been around since 2004. What's changed is that now there are better tools and frameworks for writing SPAs. So how do you get around the problems?
  1. Back/Forward nav and bookmarkability: SPAs use hash fragment navigation - links contain hash fragments. Hash fragments were originally meant for within-page navigation, so while the browser will update the address bar and push an entry onto the history stack, it will not make a request to the server. Client-side routing can listen for changes to the location hash and manipulate the DOM to show the right 'section' of the page.

  2. SEO - Google (and later Bing) support crawling SPA websites provided the links are formatted in a specific way.

Why we went the SPA way

When we started out with the Portal, we needed to take some decisions around how to go about it
  1. The Scheduler REST service is a developer-focused offering and the primary interaction for our users is the API itself. While the portal will have Scheduler management features, this is really to give our users a 'manual' interface to Scheduler. The other important use case for the portal is when you want to see the history of a task's execution. Given that the API was primary, we wanted to build the UI using the APIs to dogfood our API early and often.
  2. It just made sense to have the UI consume the APIs so that we weren't re-writing the same capabilities again just to support the UI.
  3. Getting the portal to work across devices was important. In that sense, going with an approach that reduces page loads makes sense.
  4. We wanted public pages to be SEO friendly - so the SPA experience kicks in only after you login.
  5. Bookmarkability is important and it should be easy to paste/share links within the app.

Tools and frameworks

We evaluated different frameworks for building the SPA. We wrote a thin slice of the portal - a few public pages, a Social login page and a couple of logged in pages for navigation and bookmarkability.
  1. KO+ approach - I'm calling this KO+ since KO is just a library for MVVM binding and we needed a bunch of other libraries for managing the other aspects of the SPA.
    • Knockout.js - for MVVM binding
    • Sammy.js - Client side routing
    • Require.js - script dependency management.
    • Jquery - general DOM manipulation when we needed it.
  2. Angular.js - Google's Angular.js is a full suite SPA framework that handles all the aspects of SPA
We chose the KO+ approach as there was knowledge and experience of KO in the team. The learning curve is also smaller since each library can be tackled one at a time. While Angular offers a full-fledged SPA framework, it also comes with more complexity to be grappled with and understood - essentially, the 'Angular' way of building apps.

That said, once you get over the initial learning curve of Angular, it does have a pleasant experience and you don't have to deal with integration issues that come up when using different libraries. We had prior experience on KO on the team so it just made sense to pick it given our timelines.

I'll post an update once we have it out of the door and ready for public consumption.

Thursday, December 13, 2012

Rewriting history with Git

What's this about rewriting history?

While developing any significant piece of code, you end up making a lot of incremental advances. Now, it would be ideal if you could save your state at each increment with a commit and then proceed. This gives you the freedom to try out approaches, go one way or the other, and at each point have a safe harbor to return to. However, you end up with your history looking messy, and the folks you're collaborating with have to follow your mental drivel as you slowly built up the feature. Now imagine if you could make incremental commits, but at the same time, before you share your epic with the rest of the world, clean up your history by reordering commits, dropping useless commits, squashing a few commits together (removing those 'oops, missed a change' commits), cleaning up your commit messages and so on - and then let it loose on the world! Git's interactive rebase lets you do exactly this!!!

git rebase --interactive to the rescue

Git's magic incantation to rewrite history is git rebase -i. This takes as its argument a commit or a branch on top of which the rewritten history will be applied. Let's see it in operation.

Squashing and reordering commits

Let's say you made two commits A and B. Then you realize that you've missed out something which should really have been a part of A, so you fix that with an 'oops' commit and call it C. So your history looks like A->B->C whereas you'd like it to look like AC->B. Let's say your history looks like this:

bbfd1f6 C                           # ------> HEAD
94d8c9c B                           # ------> HEAD~1
5ba6c52 A                           # ------> HEAD~2
26de234 Some other commit           # ------> HEAD~3
....
....

You'd like to fix up all commits after 'Some other commit' - that's HEAD~3. Fire up git rebase -i HEAD~3. The HEAD~3 needs some explaining - you made 3 commits A, B and C, and you'd like to rewrite history on top of the commit just before them, which is HEAD~3. The commit you specify as the base of the rebase is not included. Alternatively, you could just pick up the SHA1 for the commit from the log and use that in your rebase command. Git will open your editor with something like this:

pick 5ba6c52 A
pick 94d8c9c B
pick bbfd1f6 C
# Rebase 7a0ff68..bbfd1f6 onto 7a0ff68
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

Basically, git is showing you the list of commands it will use to operate on all commits since your starting point. It also gives instructions on how to pick (p), squash (s)/fixup (f) or reword (r) each of your commits. To modify the order of history, you can simply reorder the lines. If you delete a line altogether, then that commit is skipped entirely (however, if you delete all the lines, the rebase operation is aborted). So here we say that we want to pick A, squash commit C into it and then pick commit B:

pick 5ba6c52 A
squash bbfd1f6 C
pick 94d8c9c B

Save and close the editor and Git will perform the rebase. It will then pop up another editor window allowing you to give a single commit message for AC (helpfully pre-filled with the two original messages for A and C). Once you provide that, the rebase proceeds and your history now looks like AC->B, just as you wanted.

Miscellaneous tips

Using GitExtensions

  1. If you use Git Extensions, you can do the rebase though it's not very intuitive. First, select the commit on which you'd like the interactive rebase. Right click and choose 'Rebase on this'.
  2. This opens the rebase window. In this window, click 'Show Options'
  3. In the options, select 'Interactive rebase' and hit the 'Rebase' button on the right
  4. You'll get an editor window populated similarly.

If the editor window comes up blank then the likely cause is that you have both cygwin and msysgit installed and GitExtensions is using the cygwin version of git. Making sure that msysgit is used in GitExtensions will avoid any such problems.

Using history rewriting

Rewrite history only for what you have not pushed. Modifying history for something that's shared with others is going to confuse the hell out of them and cause global meltdown. You've been warned.

Handling conflicts

You could end up with a conflict - in which case you can simply continue the rebase after resolving the conflicts with a git rebase --continue

Aborting

Sometimes, you just want the parachute to safety in between a rebase. Here, the spell to use is git rebase --abort

Final words

Being able to rewrite history is admittedly a powerful feature. It might even feel a little esoteric at first glance. However, embracing it gives you the best of both worlds - quick, small commits and a clean history. Another, probably more important, effect is that instead of 'waiting to get things in shape' before committing, commits happen all the time. Trying out that ingenious approach that's still taking shape in your head isn't a problem now, since you always have a point in time to go back to in case things don't work out. Being able to work 'messily', commit anytime and be secure in the knowledge that you can fix up stuff later provides an incredible amount of freedom and security. Avoiding the wasted mental cycles spent planning things carefully before you attack your codebase is worth its weight in gold!!!

Wednesday, October 03, 2012

Nexus 7 - First impressions and tips and tricks

So I got my Dad the 8GB Nexus 7. This is an awesome tablet - exactly what a good tablet should be. The UI is buttery smooth and things just fly. The hardware is not a compromise, excellent price point and overall a superb experience.

Of course, there are some things to deal with: 8 GB storage, lack of mobile data connectivity, lack of expandable storage and no rear camera. These aren't issues at all as far as I'm concerned.

If I'm traveling with the tablet, then I always have the phone's 3G data to tether to using WiFi tethering. The 8GB storage is only an issue if you're playing the heavyweight games or want to carry all your videos or a ton of movies with you. Given the 8GB storage, I'm more than happy to load up a few movies/music before travel. Provided you have a good way to get files/data in and out of the computer and are OK with not carrying your complete library with you always, you don't have to worry about the storage. A camera though would be nice - but then hey - you can't have everything your way :).

File transfer to/from PC

Which brings us to the topic of file transfers to/from your PC. Now wifi is really the best way to go - and I couldn't find a way to make WiFi direct work with Windows 7. So for now, Connectify seems to be the best option. It runs in the background on your PC and makes your PC's wireless card publish its own Wireless network. You can connect to this network from your phone and if you share folders on your PC, you're set to move data around.

Now, on the Android side, ES file explorer is free and gets the job done from a file management/copying/moving perspective. I also tried File Expert but its more cumbersome. ES excels in multiple file selection and copying.

Ebooks

The one area where the N7 excels is for reading books. The form factor and weight are just right for extended reading sessions. However, Google Play books doesn't work in India and so you need an alternate app. I tried out Moon+ Reader, FBReader and Reader+ - and out of the lot, FBReader was the best. Moon+ has a nicer UI but choked on some of my Ebooks. Reader+ didn't get the tags right and felt a little clunky. FB reader provided the smoothest experience of the lot. I'm already through half of my first book - and did not have any issues. I have a decent collection of e-books on my PC but once I copied them to the N7, all the meta data was messed up. Editing metadata and grabbing covers is a pain on the tablet and best done on the PC.

This is where Calibre comes in - a full-blown ebook library management app. It does a great job of keeping your ebooks organized and editing their metadata. It can also fetch metadata and covers from Amazon and Google and update your collection. Once you're done, transferring to the N7 is a little tricky. The first time, I just copied the library over to the N7 - but the N7 showed each book thrice. Some troubleshooting later, I found that the best way was to create an export folder and use the 'Connect to Folder' feature to mount it as a destination. Then you can select all the books you want and use 'Send to destination in one format' to publish the EPUB format to the folder. This generates one epub file per book with the metadata and cover embedded, and you can then copy this folder over to the N7's Books folder using ES File Explorer.

Playing movies on your N7 over WIFI

My movie collection is on XBMC - and XBMC is DLNA/uPnP compatible. Dive into XBMC's system settings and turn on the uPnP/DLNA services. Then on the N7, you can use uPnPlay. For playing video, it relies on having a video player app installed - I like MX Player. Don't forget to also install the player's codec pack for ARM v7 and to turn on HW decoding in the settings.

Playing movies on your TV from the N7

You won't be doing much of this as there isn't a rear camera - but if you do decide to take a video or pics with the N7's front camera, then you can use uPnPlay to project them onto your TV (provided you have a DLNA/uPnP compatible TV or a compliant media center hooked up to your TV).
For XBMC, turn on uPnP in settings and you're done. XBMC should be able to discover your tablet and you'll be able to browse and play videos.
If you'd rather use the tablet to control what's played on XBMC, then turn on the setting to allow control via uPnP in XBMC's settings. Now, in uPnPlay you can select XBMC as the 'play to' device, and playing any video/song plays it on the TV.

That's all for now... I'm loving this tablet and the stuff it can do... looks like I'd be buying a few more soon :)

Wednesday, September 26, 2012

Websocket server using Jetty/Cometd

So I just wrote up a WebSocket server using CometD/Bayeux. It's a ridiculously simple app - but it went quite a long way in helping me understand the nitty-gritties of putting up a WebSocket server with CometD/Bayeux. Thought I'd put it up for reference - it should help in getting a leg up on getting started with CometD.

The sample's up on github at https://github.com/raghur/rest-websocket-sample

Here's how to go about running it:
  1. clone the repo above
  2. run mvn jetty:run
  3. Now browse to http://localhost:8080 to see the front page
  4. There are two parts to the app
    1. A RESTful API at http://localhost:8080/user/{name} - hypothetical user info - get retrieves a user, put creates a user and delete obviously deletes the user.
    2. The websocket server at localhost:8080/cometd has a broadcast channel at /useractivity which receives events whenever a user is added/deleted. The main page at http://localhost:8080 has a websocket client that updates the page with the user name whenever a user is added or removed.
And here's the nuts and bolts:
  1. BayeuxInitializer - initializes the Bayeux Service and the EventBroadcaster. Puts the EventBroadcaster in the servlet context from where the RESTful service can pick it up to broadcast.
  2. EventBroadcaster - creates a broadcast channel in the ctor. Provides APIs to publish messages on this channel.
  3. HelloService - basic echo service taken from Maven archetype
  4. MyResource - the RESTful resource which responds to GET/PUT/DELETE - nothing major here. If a user is added or deleted, then it pushes a message on the broadcast channel by getting the EventBroadcaster instance from the servlet context.
It's about as simple as you can get (beyond a Hello world or a chat example). Specifically, I wanted a sample where back end changes can be pushed to clients.

Friday, September 21, 2012

Android WordHero - product lessons

So, yesterday I figured that now I'm an addict.. fully and totally to something called wordhero on my phone... it's one of those games where you have a 4x4 grid of letters and you need to find as many words as you can within 2 mins. Nothing special... and there are tons of look alikes and also rans on the Google Play store. Even installed some of them and then removed them...

So what's different? Turns out there are quite a few things - and apart from one, they're all at the detail level. The most significant one is that it's online only and everyone's solving the same grid at the same time - so you get to see your ranking at the end. No searching for opponents, no clicking around - just game after game.

Apart from that, the main game idea is the same (form words on a 4x4 grid) so details are the only place where one can innovate... reminds me of Jeff Atwood's post that a product is nothing but a collection of details. So what are these details?
  1. It's online only. You can play only if you have an Internet connection... otherwise, scoot!
  2. The information level and detail is just right: tracing through the letters highlights the whole word; if you find a word, you see green; wrong word, red; dupe, yellow. At 10 seconds there's a warning, up to 5 seconds - not all the way down to 0... so it warns but doesn't distract. Simple. Effective. Efficient. Brilliant!
Now sample the competition:
  1. Tracing - a line through the letters, shaky squiggly letters when you pass over them and other sorts of UI idiocy, grids that are too small, grids that aren't square, word-check indicators in some other place. Sure, some of this is debatable, especially the bells and whistles. They look great the first time, the second time and a few more times after that. By the time you hit the tenth time (if you do), you start hating it.
  2. Offline mode - this is counter-intuitive... in fact, after playing WordHero, I ran off to find one which had an offline mode. Once I found it though, surprisingly, I did not like it. Turns out that there's little thrill in forming words on a grid; the thrill is in seeing where you stand and whether you're improving.
  3. Timed mode - pretenders to the throne have untimed modes, customizable timers and so on. Didn't work for me - 2 minutes is the absolute sweet spot where you can grab a game anytime... and have that deadline adrenaline rush work for you. I thought I'd do great in the untimed games - but while I scored more, it wasn't significantly more. More importantly, the fun was missing. Turns out that we want to see where we rank far more than we want to form words :D
So after promising myself one last game at 11 last night and ending up playing until 12:30 AM, I tore myself away from this satanic game. Kept the phone far away to ensure that I wouldn't pick it up again in the middle of the night and started thinking about what makes WordHero tick. There's nothing earth-shaking about the reasons - but the effect of getting it right is surprising:
  1. Figure out what will tickle the right pleasure centers - and optimize like hell for that. This is hard... in WordHero, it's the global rankings per game and the stats... and optimizing for this means that you take away offline mode entirely. That isn't a small decision - especially when an offline mode is easy to implement and feels like giving the user 'more'. Tough to argue against it too - but as I've seen myself, something like that would kill the multiplier effect of seeing a large number of people play. Chances are, your users don't know that either - so there's no point asking them. Apple seems to have figured this out very well.
  2. Keep the UI simple and efficient - and show me what I need when I need it: Should look good for the casual user. For power users, it should be efficient and not irritating... so keep all those nice bells and whistles under control.
  3. Keep the options simple - I like options... I like options more than your average Joe does... most of the time, I'm the one who's found the options you didn't even know were there... but when you're designing a game that's 2:30 minutes from start to finish, I don't want to think about options. More importantly, don't ask me questions about them... just start the damn game...
So does it mean that WordHero's perfect? Far from it - but it's successful by anyone's measure. If you're looking for perfection, you won't ever launch :). Some of the stuff that I'm sure they'll get to at some point:
  1. Better explanation of the stats
  2. Charts/trends over the stats instead of only the current value
  3. Better explanation of some of the UI color coding on the results screen.

Thursday, September 06, 2012

Google Maps Navigation enabled in India!!

Just came across an awesome piece of news - Google Maps now has turn by turn, voice guided directions officially in India!!

Up until now, I used to get the Ownhere mod for Google Maps that enables world navigation - it used to be available on the XDA forums but got taken down once Google frowned on it!

No more of that hassle - just go to Play store and install Maps.

Very cool! Thanks Google.

Tuesday, August 21, 2012

Converting xml to json with a few nice touches

During my recent outings in heavyweight programming, one of the things we needed to do was convert a large XML structure from the server into a JSON object in the browser, to facilitate easy manipulation/inspection.

Also, the XML from the server was not the nice kind - what I mean is that the tag names were consistent, but the content was wildly inconsistent. For example, all of the following were received:


<!-- different variations of a particular tag -->
<BgSize>100,23</BgSize>
<BgSize>0,0</BgSize>
<BgSize>,</BgSize>

Ideally, we wanted to parse and validate such a node (and all its variations) and convert it to an X,Y pair only if it contained valid data. A lot of these were common tags, as you might expect, that showed up in various entities in the XML, so we wanted all these rules applied early and centrally rather than having to deal with them at disparate places downstream.

The other reason was that a lot of the nodes had structured data crammed into a single tag - which we ideally wanted parsed as a Javascript object so that we could manipulate it easily.


<!-- xml data with structured content -->
<!-- font, size, color, bold, italic-->
<Font>Arial;Lucida,14,0x0044,True,False</Font>

So that brought up a search for the best way to convert XML to JSON - and of course Stack Overflow had a question. The article in the answer makes for very interesting reading into all the different conditions that have to be handled. The associated script at http://goessner.net/download/prj/jsonxml/ is the solution I picked. Really not much going on below other than using the xml2json function to convert the XML to a raw JSON object.


@parseXML2Json: (xmlstr) ->
    log xmlstr
    # XML string -> DOM -> JSON string (xml2json) -> raw JS object
    json = $.parseJSON(xml2json($.parseXML(xmlstr)))
    destObj = Utils.__parseTypesInJson(json)
    log "raw and parsed objects", json, destObj
    return destObj
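
As a rough usage example, calling it on one of the snippets from earlier would yield something like this (illustrative only - the exact shape depends on the goessner converter's rules for text nodes and attributes):

# illustrative usage - exact output depends on the xml2json conversion rules
parsed = Utils.parseXML2Json "<Item><BgSize>100,23</BgSize><Visible>True</Visible></Item>"
# parsed is now roughly: { Item: { BgSize: { x: 100, y: 23 }, Visible: true } }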

But now to the more interesting part - once the XML is converted to JSON, we need to do our magic on top of it - applying validations and conversions. This is where the Utils.__parseTypesInJson method comes in.

What we're doing here is walking through the JSON object recursively. At each step, we keep track of the XML path we have descended into, so that we can apply validations or conversions based on that path. We also need to check the type of the JSON value we're dealing with - undefined, null, string, array or object.

If it's a string, we delegate to a __parseString function to convert the string to an object if needed.


@__parseTypesInJson: (obj, path = "") ->
    if typeof obj is "undefined"
        return undefined
    else if obj is null
        return null
    else if typeof obj is "string"
        # leaf value - parse the string, then run any validator registered for this path
        newObj = Utils.__parseString(obj, path)
        validator = _.find Utils.CUSTOM_VALIDATORS, (v) -> v.regex.test path
        return validator.fn(newObj) if validator?
        return newObj
    else if Object.prototype.toString.call(obj) is '[object Array]'
        # recurse into each element; drop elements a validator rejected (returned null)
        destObj = (Utils.__parseTypesInJson(o, path) for o in obj)
        destObj = _.reject destObj, (o) -> o is null
        return destObj
    else if typeof obj is "object"
        # recurse into each property, extending the path as we descend
        destObj = {}
        destObj[k] = Utils.__parseTypesInJson(obj[k], "#{path}.#{k}") for k of obj
        # run any validator registered for this path against the parsed object
        validator = _.find Utils.CUSTOM_VALIDATORS, (v) -> v.regex.test path
        return validator.fn(destObj) if validator?
        return destObj
    else
        return obj


At each step, once the object is formed, we see if there's a custom validator defined in the array of custom validators. Each validator is a regex and a callback function - if the regex matches the path, the callback is passed the parsed object, which it may manipulate (or reject by returning null) before returning.


@CUSTOM_VALIDATORS = [
    # drop 'choice' nodes that have no text content
    choice =
        regex: /choice$/
        fn: (obj) ->
            if obj["#text"]?
                return obj
            else
                log "returning null"
                return null
]

The __parseString method, for completeness - you can tweak this to your taste; there's nothing complicated going on here.


@__parseString: (str, path) ->
    if not str?
        return str
    # some paths should never be parsed - leave those strings untouched
    if _.any(Utils.SKIP_STRING_PARSING_REGEXES, (r) -> r.test path)
        log "Skipping string parsing for:", path, str
        return str
    if /^\d+$/.test str
        return parseInt str
    else if /^\d+,\d+$/.test str
        # "100,23" style values become {x, y} pairs
        [first, second] = str.split(",")
        return {"x": parseInt(first), "y": parseInt(second)}
    else if str == ','
        # an empty pair - treat it as missing data
        return null
    else if /^true$/i.test str
        return true
    else if /^false$/i.test str
        return false
    else if /^[^,]+,\d+,(0x[0-9a-f]{0,6})?,((True|False),(True|False))?$/i.test str
        # looks like a font spec - family,size,color,bold,italic
        log "Matched font: ", str
        return Utils.parseFontSpec(str)
    else
        return str
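
One piece not shown above is parseFontSpec. Here's a minimal sketch of what it could look like, assuming the family,size,color,bold,italic layout described earlier (the field names below are my own, not necessarily what we used):

@parseFontSpec: (str) ->
    # "Arial;Lucida,14,0x0044,True,False" -> { family, size, color, bold, italic }
    [family, size, color, bold, italic] = str.split(",")
    return {
        family: family              # may contain fallbacks separated by ';'
        size: parseInt(size, 10)
        color: color or null        # e.g. "0x0044"; may be empty in the feed
        bold: /^true$/i.test bold
        italic: /^true$/i.test italic
    }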

Microsoft Releases Git TFS integration tool

Microsoft released a cross-platform Git TFS integration tool - Git TF!! It's definitely a good step and an acknowledgement of the mindshare that Git has.
I took it for a spin - the integration is supposed to be cross-platform, so it should work on Cygwin too. However, the first time I tried, it did not, and I had to tweak the script a little.

In the script <install folder>/git-tf:

# On cygwin and mingw32, simply run the cmd script, otherwise we'd have to
# figure out how to mangle the paths appropriately for each platform
if [ "$PLATFORM" = "cygwin" -o "$PLATFORM" = "mingw32" ]; then
#exec cmd //C "$0.cmd" "$@"                 #Orig
exec cmd /C "$(cygpath -aw "$0.cmd")" "$@"  #Changed
fi

Anyway, after that, things did seem to work - the only issue is that your Windows domain password is echoed on the Cygwin console :(... other than that minor irritant, I was able to clone the project and work on it using the Git integration. Going to try it out some more over the next few days and will post if I find anything more. This is definitely a great step from MS - and if this works properly, it will make working with TFS source control much more bearable :D
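
For reference, a typical workflow looks roughly like this (the collection URL and server path below are placeholders, not the actual project):

# clone a TFS folder into a new local Git repository (prompts for domain credentials)
# --deep brings down full history instead of the default shallow clone
git tf clone http://tfsserver:8080/tfs/DefaultCollection $/MyProject/Main --deep

# work locally with plain git, then push commits back to TFS
git commit -am "some change"
git tf checkin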

Friday, August 10, 2012

Coffeescript rocks!

I've been absent a few weeks from the blog. Life got taken over by work - been deep in the Javascript jungles and Coffeescript has been a lifesaver.
Based on my earlier peek at Coffeescript, we went ahead full on with it, and I have to say it has been a pleasant ride for the team. With over 4.7 KLoC of Javascript (the Coffeescript source weighing in at around 3.7 KLoC including comments etc.), I can now confidently recommend it for any sort of Javascript-heavy development.
I'm going to list down benefits we saw with Coffeescript and hopefully someone else trying to evaluate it might find this useful:
  1. Developers who haven't dived deep into Javascript's prototype-based model find it easier to get up to speed sooner. Yes - once in a while they do get tripped up and have to look again at what's going on under the covers - but this is normal. The key point is that it's much more productive and enjoyable to use Coffeescript.
  2. The conciseness of the Coffeescript definitely goes a long way in improving readability. One of the algorithms implemented was applying a bunch of time overlap rules. We also used Underscore.js - and between Coffeescript and Underscore.js, the whole routine was within 20 lines, mostly bug free and very easy for new folks to pick up and maintain over time. Correspondingly, the generated JS was much more complicated (though Underscore helped hide some of the loop iteration noise) - and it wouldn't have been too different had we written the JS directly.
  3. Integrating with external frameworks - jquery, jquery ui etc was again painless and simple.
  4. Another benefit was that the easy class structure syntactic sugar helped quickly prototype new ideas and then refine them to production quality. With developers who're still shaky on JS, I doubt the same approach would have worked since they'd have spent cycles trying to get their heads wrapped around JS's prototype based model.
  5. Coffeescript also allows you to split the code to multiple source files and merge all of them before compiling to JS - this allowed us to keep each source file separate and reduce merges required during commits.
  6. Finally, performance is a non-issue - you do have to be a little careful, otherwise you might find yourself allocating function objects and returning things you don't mean to, but this is easily caught in reviews (a small example follows this list).
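One common Coffeescript variant of this accidental work (a generic example, not code from our project) is the implicit return: a function whose body ends in a loop quietly builds and returns an array of every iteration's result.

# the last expression is a loop, so Coffeescript builds and returns an
# array of every listener's return value - usually not what you intended
notifyAll = (listeners, event) ->
    for l in listeners
        l.handle event

# a bare 'return' at the end suppresses the implicit collection
notifyAll = (listeners, event) ->
    for l in listeners
        l.handle event
    return
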
One latent doubt I had going into this was the number of times we'd have to jump down to the JS level to debug issues. With a larger Coffeescript codebase spread across multiple files, this is a real concern, since the error line numbers wouldn't match the source and we might have to jump through hoops to fix issues. Luckily, this wasn't a problem at all - over time, whether it's an error in JS or just inspecting code in the browser, it's easy to map back to the Coffeescript class/function - so you just fix it there and regenerate the JS. Secondly, the generated JS is quite readable - so even when investigating issues, it's quite trivial to drop breakpoints in Chrome and know what's going on.
The one minor irritation was that if there was a Coffeescript compile error in the joined file, the reported line number doesn't map back to the individual source files, and you have to compile each file independently to figure out the error. Easily automated with a script - so that's just being nitpicky.
Anyway, if you got here looking for advice on using Coffeescript, then you've reached the right place and maybe this post's helped you make up your mind!

Tuesday, July 03, 2012

Media center setup - XBMC-XVBA

I finally got my nettop - an AMD E-350 based barebones system. I installed 4GB of RAM, and the plan was to set it up with XBMCBuntu or XBMC-XvBA. Instead of installing the XBMC-XvBA version directly, I figured I could start with XBMCBuntu, see how it does, and then move to the XvBA-enabled builds if necessary.

I don't have a hard drive for the nettop - the plan was to have the system run off an 8GB pen drive.

Basic Installation - XBMCBuntu

What you need

  1. The nettop with RAM installed.
  2. Two USB pen drives - one for installation (2GB) and another which is going to act as your HDD (8GB)

Steps

  1. Download UNetBootin for windows and the XBMCBuntu iso image
  2. Create a live USB using UNetBootin: once you have UNetBootin installed, plug a flash drive into a USB port, start UNetBootin and select the XBMCBuntu ISO image as the source distribution and the flash drive as the destination.
  3. Boot the nettop using the USB drive: You might have to play around with boot devices and priorities in the BIOS settings to get it to boot from the USB drive. To keep things simple, stick the pendrive into one of the USB2 ports (avoid the USB3)
  4. On the UNetBootin boot menu, you can just try out the XBMCBuntu live image. I did so and things seemed to work well enough for me to do the full install to another USB drive plugged into the system. Note that if you're not able to find the target drive, then just reboot with both the USB drives plugged in - sometimes, newly inserted devices aren't detected.
  5. Install, go through the menus and wait for it to complete.
  6. As you go through the menus, keep in mind to choose a custom partitioning scheme. In my case, I had 4GB of RAM and there's no sense in having a swap partition on the pen drive. If you plan on having hibernation support, then use a 2GB swap partition (50% of RAM) - else you can skip the swap altogether.
  7. Once done, pull out the installation pen drive and reboot. You should be able to reboot off the USB pendrive that you installed into. The installation pendrive is pretty much done - you won't need it any longer.

XBMCBuntu

At this point, I had XBMCBuntu up and running however, there were a few problems:

  1. On idle, CPU utilization was very high (~ 60 - 70%) and the unit was running hot.
  2. Display resolution proved troublesome - my LCD's native resolution is 1366x768 but that wasn't available over HDMI.
  3. I was able to get 1360x768 on DVI/D-Sub - but that meant using a separate cable for audio out.

Of these, the high CPU utilization was the biggest worry - so there are a few steps to try:

  1. Within XBMC - set sync to display refresh - always.
  2. Turn off RSS feeds
  3. Tweak .xbmc/userdata/advancedsettings.xml:
<advancedsettings>
    <useddsfanart>true</useddsfanart>
    <cputempcommand>cputemp</cputempcommand>
    <samba>
        <clienttimeout>30</clienttimeout>
    </samba>
    <network>
        <disableipv6>true</disableipv6>
    </network>
    <loglevel hide="false">1</loglevel>
    <gui>
        <algorithmdirtyregions>1</algorithmdirtyregions>
        <visualizedirtyregions>false</visualizedirtyregions>
        <nofliptimeout>0</nofliptimeout>
    </gui>
    <measurerefreshrate>true</measurerefreshrate>
    <videoextensions>
        <add>.dat|.DAT</add>
    </videoextensions>
    <tvshowmatching append="yes">
        <!-- matches title 01/04 episode title and similar.-->
        <regexp>[s]?([0-9]+)[/._ ][e]?([0-9]+)</regexp>
    </tvshowmatching>
    <gputempcommand>/usr/bin/aticonfig --od-gettemperature | grep Temperature | cut -f 2 -d "-" | cut -f 1 -d "." | sed -e "s, ,," | sed 's/$/ C/'</gputempcommand>
</advancedsettings>

Did those, and they dropped the idle CPU utilization to about 25%, which was quite good. However, during videos the CPU was still high - and that's because even though XBMCBuntu officially uses hardware acceleration through VAAPI, support is still spotty.

Getting XvBA

I went over to the XBMC-XvBA installation thread and followed the directions in the first post to add the XBMC-XvBA PPAs. The download took some time and the XvBA build got installed. Started XBMC and things were much, much better.

sudo apt-add-repository ppa:wsnipex/xbmc-xvba
sudo apt-get update
sudo apt-get install xbmc xbmc-bin    

There are other tweaks that are listed on the XBMC-XvBA installation thread which I also went ahead and applied.

Other tweaks

Optimizing Linux for a flash/pen drive installation

Installing on a pen drive / USB flash drive has its pain points. My boot time was painfully slow (~3.5 minutes). Opening Chromium took forever and even page loads were slow (it would be stuck with the status bar on 'checking cache'...). Also, the incessant writing to disk was probably killing off my pen drive much faster. I ended up doing the following:

  1. Use the noatime and nodiratime flags for the USB drive

    # /etc/fstab
    UUID=39f52ccf-363b-4b6e-abdd-927809618d83 /               ext4    noatime,nodiratime,errors=remount-ro 0       1
  2. Use tmpfs - In memory, reduces writes to disk and is faster. With 4G of RAM, this is a no-brainer.

    # /etc/fstab
    tmpfs /tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
  3. Browsers - use profile-sync-daemon (originally from Arch Linux, available for Ubuntu) - it automatically moves your browser profile directory from your home folder into tmpfs
  4. Move .xbmc to NAS/External drive along with your media. Makes a lot more sense to keep your .xbmc folder with your media on an external HDD.
  5. Change to the noop or deadline scheduler (this resets on reboot - see the sketch after this list for making it persistent):

    # Assuming sda is your USB drive. Note that 'sudo echo noop > ...' won't work,
    # since the redirect runs in the unprivileged shell - use tee instead
    echo noop | sudo tee /sys/block/sda/queue/scheduler
  6. Change system swappiness - we don't want the OS to use swap at all.

    # /etc/sysctl.conf
    vm.swappiness=1
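
Since the scheduler change in step 5 doesn't survive a reboot, one minimal way to make it stick (assuming sda remains the right device on your system) is to add the line to /etc/rc.local, which runs as root at the end of boot:

# /etc/rc.local - add before the final 'exit 0'
echo noop > /sys/block/sda/queue/scheduler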

Getting suspend/hibernate to work

I had the greatest trouble here - but was able to get pm-utils working eventually. pm-utils is a framework of shell scripts around suspend/hibernate/wakeup that provides hooks to execute scripts before standby/hibernation and when the computer resumes. First, test whether basic suspend/hibernate works:

# check suspend methods supported
cat /sys/power/state
# suspend to RAM (S3)
sudo sh -c "echo mem > /sys/power/state"
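
Hibernate (suspend to disk) can be checked the same way, provided 'disk' shows up in /sys/power/state and your swap partition is big enough to hold the hibernation image:

# suspend to disk - only if 'disk' is listed in /sys/power/state
sudo sh -c "echo disk > /sys/power/state"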

If your system goes into standby, then things are good. But it's just a good start. In my case, the system would go into standby only the first time after boot. After that, it would go into standby but then resume immediately. It's been asked enough times on Google and I've probably tried all the fixes. The first one is to add a kernel param, acpi_enforce_resources=lax:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_enforce_resources=lax"

After that, make sure to run sudo update-grub. In my case, the magic incantation above didn't help (your mileage might vary), but nothing bad happened so I kept it on. Anyway, I rebooted, suspended and resumed the first time (which works) and took a dump of dmesg > dmesg.1.log. Then I tried to suspend again, and when it came back immediately, I could take another dmesg dump and scan the entries after the first run. Turned out that the log had entries related to xhci_hcd - so I decided to unload it first and then try to suspend:

sudo modprobe -r xhci_hcd
sudo sh -c "echo mem > /sys/power/state"

After this, the system was able to standby each and every time. Now it was time to get pm-utils working. Out of the box, pm-utils came with a config that had a bunch of things that I didn't understand (and I doubt they applied to this machine). If standby was working directly, then it should have worked through pm-utils. However, it needed some pushing around before it came to a functional state.

Getting pm-utils to play nice

So now that I had confirmed suspend working, it was time to see why pm-utils was being so bad. First off, time to clean up the default configuration. So I copied /usr/lib/pm-utils/config to /etc/pm/config.d/config and then started editing it:

SLEEP_MODULE="kernel"
# These variables will be handled specially when we load files in
# /etc/pm/config.d.
# Multiple declarations of these environment variables will result in
# their contents being concatenated instead of being overwritten.
# If you need to unload any modules to suspend/resume, add them here.
SUSPEND_MODULES="xhci_hcd"
# If you want to keep hooks from running, add their names  here.
HOOK_BLACKLIST="99_fglrx 99lirc-resume novatel_3g_suspend"
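
With that config in place, suspend can be exercised through pm-utils directly, so the SUSPEND_MODULES and hook settings actually take effect; the log it writes is handy when a hook misbehaves:

# suspend via pm-utils so the hooks and SUSPEND_MODULES are honoured
sudo pm-suspend

# after resume, see what the hooks did
less /var/log/pm-suspend.log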

Waking up with the keyboard

If you'd like to wake the system up with a USB device (USB keyboard), then you need to find out the USB port where your device is connected. The easiest way might be to check the dmesg output, which usually prints this out. In my case, my wireless keyboard/trackball is connected on USB3:

# toggle ACPI wakeup for the USB3 controller (check 'cat /proc/acpi/wakeup' for the names)
echo USB3 > /proc/acpi/wakeup
# allow the USB3 root hub to wake the system (run as root)
echo enabled > /sys/bus/usb/devices/usb3/power/wakeup

After that, the HTPC could be woken up with a keypress. Now, I haven't been able to find a way to do the same thing with only the keyboard (so that the system doesn't wake up any time anyone picks up the keyboard) - so for now, I have turned this off. The above change won't persist over a reboot - to make it persistent, put the two lines above into /etc/rc.local before the exit 0, as shown below.
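
A minimal sketch of that /etc/rc.local (USB3 is the device name from my setup - substitute whatever shows up for your port in /proc/acpi/wakeup):

#!/bin/sh -e
# /etc/rc.local - runs as root at the end of boot

# re-enable wake-on-USB-keyboard on every boot
echo USB3 > /proc/acpi/wakeup
echo enabled > /sys/bus/usb/devices/usb3/power/wakeup

exit 0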

Fixing up fglrx annoyances (ATI binary driver)

Not much point in an HTPC if the video isn't top quality. And there are a lot of variables involved there - your computer hardware, software, drivers, type of connection (HDMI/DVI) and the telly itself. Also, video driver support for ATI on Linux leaves quite a bit to be desired. One of the reasons for going with XBMCBuntu was knowing that there'd be large community support available on ubuntuforums.

Right off the bat, things started at the mildly irritating level. Catalyst Control Center in root mode won't start even though there's a big fat menu item for it. A quick Google search says the easiest way out is to run gksu amdcccle from the run dialog (ALT-F2).

So where does all this get us

After all this, there's a sea change in the overall experience:

  1. XBMC idles at 15-20% CPU utilization. During playback, it still stays at a comfy 40-50% with 720p/1080p videos
  2. Browsers (Chrome and FF) open near instantly; the browsing experience is better than my desktop - page loads, tab switches etc. feel much nimbler (and that desktop is an AMD 6-core, 12GB monster running Win 7 x64)
  3. Total cost - USD 180

More to come

  1. Hibernation support
  2. Torrenting
  3. Scheduled wake up from shutdown/hibernate/suspend state