Friday, November 15, 2013

Upgrading to KitKat on the Nexus 4 from a rooted, custom 4.3 ROM

KitKat's here!

So Google finally posted the KitKat factory image for the Nexus 4. Saw it on Reddit this morning and started the download before I'd had my morning cuppa.

Hmm - and then to flash. Flashing the factory image wholesale will nuke your device (including all photos etc.), which I didn't want. As I was on AOKP, a data wipe would be needed anyway, but there's no reason to kill my storage too. And while we're at it, why not also root it in the process?

It's been some time since I've flashed anything, and even then it's usually zips. I was running an AOKP nightly on my N4, so I'd have to do a full wipe. First step was backups...

  1. Backup apps via Titanium Backup
  2. Backup the Nova Launcher desktop layout
  3. Backup SMS and call logs
  4. Nandroid backup from TWRP.

Then to move the backups to the PC - just in case... Moving the Nandroid backup off the device was a little bit of an issue since they've implemented security. The general advice is to do an adb pull /sdcard/TWRP/BACKUPS from recovery. Unfortunately, adb wasn't detecting my device in recovery. Some more googling turned up the Nexus 4 drivers for Windows. Boot into recovery, and follow the driver installation directions exactly (pick Android device and 'have driver').
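
Once the drivers are in and adb sees the device in recovery, the pull itself is roughly this (the destination folder is just an example):

    adb devices                                   # the device should show up with state 'recovery'
    adb pull /sdcard/TWRP/BACKUPS ./twrp-backups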

I wanted to remain rooted - so I downloaded Chainfire's SuperSU update zip and pushed it to the device with adb push update-SuperSU.zip /sdcard/

That took care of getting the Nandroid backups onto the PC. Next, extract the factory image file occam-jdq39-factory-345dc199.tgz into a folder. Also open the image-occam-jdq39.zip inside it and extract the files from there into the same folder.
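
Roughly, the extraction on the PC side looks like this - note that the name of the folder the tarball extracts to is my assumption, so check it after extracting:

    tar xzf occam-jdq39-factory-345dc199.tgz
    cd occam-jdq39                     # extracted folder name - verify
    unzip image-occam-jdq39.zip        # unpack the images into the same folder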

  1. Reboot to bootloader
  2. fastboot flash bootloader <bootloader img>
  3. fastboot reboot-bootloader
  4. fastboot flash radio <radio img>
  5. fastboot reboot-bootloader
  6. fastboot flash boot boot.img
  7. fastboot flash system system.img
  8. Reboot into recovery console.
  9. Wipe data
  10. Flash the superSU zip.
  11. Flash anything else that's needed (Titanium Backup for me)
  12. Reboot!
  13. Restore apps from TiBu
  14. Restore SMS and call logs
  15. Restore Nova desktop

AND finally....

Take a break! Have a kitkat!

Now I just need the AOKP 4.4 nightlies....!

Friday, November 08, 2013

Flickr-Uploader is like Google+ Autobackup - only at full resolution.

Auto backup

It's been some time since I've used anything other than my Nexus 4 to take photos. And after a recent scare where I thought I'd lost 10 years' worth of memories to a hard disk failure, I've been very diligent about keeping one or two backups.

Google+ does a great job of automatically uploading photos and then applying the Auto Awesome effects - but with one downside: if you want to upload photos at full resolution, they will most probably count against your storage quota.

With Flickr offering 1 TB of free storage, I wanted to make sure my photos get uploaded to Flickr as well - automatically. And private by default. And only on WiFi (I don't like bill shock). And only when plugged in (no point if I can't take pics because the phone's dead from uploading). And it should let me do manual uploads while at it (you know...). You get the drift of my ideal feature set :).

Searched through the Play Store, and it's really, really hard to find an app that does this well. The official Flickr client doesn't (good luck with the adoption, guys), and while a few other apps state that they have bulk upload features, none of them do auto backup - other than Flickr Uploader. It's available on a 7-day trial and the reviews were encouraging... thought I'd give it a shot.

Installed it, authorized Flickr - which errored out the first time but worked the next - AND BOOM... that's it. The next time I took pics, they went up to Flickr as well, with a nice notification to show for it. Super!

Additional props to the author - he's open-sourced the project on GitHub as well!

Saturday, October 05, 2013

Vim: Making Ultisnips and NeoComplete play nice

Plugin conflict!

If you have both UltiSnips and NeoComplete, you cannot use the same key for expansion. I used to have Tab mapped for both completion (with AutoComplPop) and UltiSnips. I had Tab set as g:UltiSnipsJumpForwardTrigger, but NeoComplete still doesn't like it. So now that's changed to Control + Tab and things are good again.

let g:UltiSnipsExpandTrigger="<C-CR>"
let g:UltiSnipsJumpForwardTrigger="<C-tab>"
let g:UltiSnipsJumpBackwardTrigger="<s-tab>"
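
For completeness, the NeoComplete side of my setup keeps plain Tab for popup completion - this is the mapping commonly recommended in NeoComplete's docs, so adjust to taste:

inoremap <expr><TAB>  pumvisible() ? "\<C-n>" : "\<TAB>"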

I only wish the NeoComplete or UltiSnips maintainers would see this and make the two work together without conflict.

Wednesday, September 04, 2013

What publishing a python module taught me

Programming in Python

So I've mostly used Python for one-off scripts and tools, and at one point for a serious foray into Django - but I'd never been in a situation where I thought of publishing anything.

Hmm - crossed that bridge over this weekend - and it's been a fun journey. I'm writing this post with the very thing I wrote :)

Things I've picked up

Code

  1. A better understanding of Python modules, classes and code organization for libraries.
  2. Writing good unit tests
  3. Mocking in Python

Packaging

  1. Packaging with setup.py
  2. pip, setuptools, easy_install and their idiosyncrasies
  3. Installing a platform-specific script
  4. PyPI - registering and publishing
    • Tutorial. One point to note - if you create the .pypirc manually, you will need to do a python setup.py register yourself. If you skip that, python setup.py sdist upload will fail with a 403 (see the sketch below).
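
    A minimal sketch of that publish flow (assuming ~/.pypirc already holds your PyPI credentials):

        python setup.py register        # one-time: registers the project on PyPI
        python setup.py sdist upload    # builds a source distribution and uploads it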

Testing

  1. virtualenv - this link (quick sketch after this list)
  2. Testing platform-specific scripts installed with the tools above
  3. Coverage
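
The virtualenv loop mentioned in point 1 looks roughly like this (the package filename is a placeholder):

    virtualenv venv                           # isolated environment for testing the install
    . venv/bin/activate
    pip install dist/<your-package>.tar.gz    # install the sdist built earlier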

Coding/Style/Syntax linting

  1. pyflakes, pylama, pep8 and integration in Vim with Syntastic

What I really, really liked

  1. That I didn't miss a debugger
  2. That tests were short and sweet
  3. good code coverage out of the box
  4. pip
  5. Overall, how pleasant it was and how much I enjoyed it.

Where I had hiccups

  1. Mocks in Python were a little hard to debug/understand
  2. I should have written the tests first - but they came as an afterthought after I decided to publish.
  3. Finding good documentation on packaging - for example, it's hard to find a good walkthrough of how to publish

Just putting the finishing touches and a little polish on a v0.9 release. Basically, this post itself is nothing but a test. Should be out with it in a day or two.

Friday, August 30, 2013

Fixing Wifi sleep of death

The problem

I have a TP-Link WN722N USB Wifi dongle. Linux Mint picked it up during install and seemed like all was good.

Then the other day I noticed that the WiFi would sometimes be flaky as hell - all I'd get was the password prompt. Turns out this is a common problem with USB WiFi adapters. After a few days, a pattern emerged: it recurred after putting the computer to sleep. The fix is easy - just unload the ath9k_htc module before suspending. Edit /etc/pm/config.d/config (create it if needed):

SUSPEND_MODULES="ath9k_htc"

Monday, August 26, 2013

Moving from Wordpress.com to Blogger

Moving from wordpress

So I've moved my blog from its old home at http://niftybits.wordpress.com to http://blog.rraghur.in - I've also moved from wordpress.com to Blogger. For quite some time, I've not been happy with Wordpress's abilities for a tech blog. It's a commercial endeavour, so if you need additional features or tweakability, you've got to fork out for the good stuff. I was intending to get my own domain and Wordpress hosting, which would give me full control over the blog engine and the ability to install any addons I wanted. But there was this niggling feeling in the back of my head that I was creating a monster - set it up, and then take on maintaining it as well :(. That's part of the reason I decided to dust off my old Blogger account and see where it stood. The last time I'd touched Blogger, about 10 years ago, it was just after Google bought it and it was really more of a mommy-blog engine. Things have changed while I've been living in a hole... Blogger's now much more polished. A few things where it leaves Wordpress.com in the dust:
  1. Custom markup and css.
  2. Custom domains
  3. Google Analytics
  4. Ability to have ads and hence make some money - not that I intend to.
Where it falls behind:
  1. Analytics - really... the built-in site analytics seem a little iffy.
    Update: once you integrate Blogger with Google Analytics, it's much, much nicer than what Blogger provides.
  2. Referrer spam - all I saw on the analytics dashboard were entries from the www dot vampire dot stat domain. Turns out it's referrer spam and isn't blocked/can't be blocked. What I don't understand is how this was never a problem with wordpress.com.
  3. Themes - far fewer than WP - but this isn't an issue since you can tweak anything to your heart's content.
Now all of the WP shortcomings can be addressed if you go for paid upgrades OR just host your own WP. Unfortunately, neither is a good option for me. So while I didn't like moving out, it had to be done.

Registering a domain

I went ahead and registered the domain with BigRock.in - here's a referral link that will get you 25% off your domain.

 

Pointing Blogger to your custom domain

This was very simple - just follow the instructions on Blogger. You will need to set up 2 CNAME records for Blogger in your DNS management console, in addition to the record for your custom domain itself.
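
The records end up looking something like this - the exact names and targets come from Blogger's settings page (the www record pointed at ghs.google.com at the time; the second, site-verification record is shown purely as a placeholder):

    www                   CNAME   ghs.google.com
    <name-from-blogger>   CNAME   <target-from-blogger>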

 

Migrating content from WP.com

I also had to get the blog content transferred. Went into the Wordpress dashboard, Tools > Export > All Content, and exported the blog. This lets you download an XML file with all your blog content. Now, it has to be converted so that it can be imported by Blogger. Head over to Wordpress2Blogger and upload the file. If all goes well, you get a converted file. It wasn't so simple for me though - it gave an 'Invalid XML at line xxxx' error. Opening the Wordpress XML, I didn't see any issues, and a little googling indicated that WP.com is notorious for generating invalid/malformed XML. Hmm - not a biggie. Pulled out xmllint and ran it over the Wordpress file:
xmllint --noout /path/to/wordpress/export/file
And xmllint found the problem - it was an &nbsp; entity that hadn't been declared. Just removed it from the file, uploaded it again to Wordpress2Blogger, and the conversion went through without a hitch.

 

Other Tweaks

Formatting of posts - while the content came over fine, it came in with <br /> tags. Also, all my old source code listings that used the WP.com syntax tags [sourcecode] [/sourcecode] were broken and had to be fixed.
For code syntax highlighting, I went with highlight.js for now. May change later. Also copied the key highlighting CSS from here. You can edit the template and stick the markup into the head section.
Next, I spent some time on theme tweaks in Blogger till I got tired. I'm OK with it for now - but will probably come back to it later.

 

In Closing..

Well, the move is complete. I'm a little dissatisfied with a few things:
  1. What do I do with my Wordpress blog? I cannot redirect it; I could delete it, but I'm a little hesitant about burning bridges.
  2. Reaching the blog via Google search seems to be a problem - searching for xbmc xvba nettop rraghur doesn't even show the Blogger link; only the Wordpress links are there. I think this is down to page reputation - so not much I can do.
  3. Blogging with Vim: VimRepress was a good solution for WP.com. For Blogger I've only found Blogger.vim, which I'm yet to try.
The benefits are worth much more, though: the peace of mind of not having to worry about maintenance, and the freedom to move to a self-hosted blog later if needed.

Saturday, August 24, 2013

Linux Mint 15 KDE - tweaks and fixes

Additional fixes post installation

SSH connection refused

So today I tried ssh'ing into the desktop and no go. I was getting connection refused and thought it had to do with either SSH not being installed or being blocked by the firewall. When I checked later, the OpenSSH server was installed and the service was running:

sudo service ssh status
ssh start/running, process 2709

Hmm - this is weird. The next check was iptables, and that was clear too. So the last check was to look at /var/log/auth.log and indeed, there's the problem. Interestingly, the machine's host keys weren't generated during installation:

Aug 22 20:15:58 desktop sshd[1960]: fatal: No supported key exchange algorithms [preauth]
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_rsa_key
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key

Ok - so the fix is easy - generate the keys with

    sudo ssh-keygen -A

After that, everything's back to normal :)
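
If sshd still refuses connections at that point, a quick sanity check (assuming the stock service name on Mint/Ubuntu) is:

    ls -l /etc/ssh/ssh_host_*    # the host key files should exist now
    sudo service ssh restart     # restart sshd so it picks them up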

Power button shuts down computer

This is Kubuntu bug 1124149. I'll spare you the details, which you can read yourself. The fix needed is to symlink /usr/bin/qdbus:

sudo ln -sf /usr/lib/x86_64-linux-gnu/qt4/bin/qdbus /usr/bin/qdbus

Tuesday, August 20, 2013

Mixing Generics, Inheritance and Chaining

In my last post on unit testing, I wrote about a technique I'd learnt for simplifying test setups with the builder pattern. It provides a higher-level, more readable API, resulting in DAMP tests.

Implementing it, though, presented a few interesting issues that were fun to solve and, hopefully, are instructive as well. I for one will need to look this up if I spend a few months doing something else - so I've got to write it down :).

In the Scheduler user portal, some controllers derive from the MVC4 Controller class whereas others derive from a custom base controller. For instance, controllers that deal with logged-in interactions derive from TenantController, which provides TenantId and SubscriptionId properties. IOW, a pretty ordinary and commonplace setup.
    class EventsController : Controller 
    {
        public ActionResult Post (MyModel model) 
        {
        // access request, form and other http things
        }
    }

    class TenantController: Controller 
    {
        public Guid TenantId {get; set;}
        public Guid SubscriptionId {get; set;}
    }

    class TaskController: TenantController
    {
        public ActionResult GetTasks()
        {
            // Http things and most probably tenantId and subId as well.
        }
    }
So, tests for EventsController will require HTTP setup (request content, headers etc.), whereas for anything deriving from TenantController we also need to be able to set up things like TenantId.

Builder API


Let's start from how we'd like our API to be. So, for something that just requires HTTP context, we'd like to say:
    controller = new EventsControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();
And for something that derives from TenantController:
    controller = new TaskControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .WithTenantId(theTenantId)
                .WithSubscriptionId(theSubId)
                .Build();
The controller builder basically keeps track of the different options and always returns this to facilitate chaining. Apart from that, it has a Build method which builds a controller object according to the chosen options and returns it. Something like this:

    class TaskControllerBuilder
    {
        private object[] args;
        private Guid tenantId;
        public TaskControllerBuilder WithConstructorParams(params object[] args) 
        {
            this.args = args;
            return this;
        }

        public TaskControllerBuilder WithTenantId(Guid id ) 
        {
            this.tenantId = id;
            return this;
        }

        public TaskController Build() 
        {
            var mock = new Mock<TaskController>(MockBehavior.Strict, args);
            mock.Setup(t => t.TenantId).Returns(tenantId);
            return mock.Object;
        }
    }

Generics


Writing an XXXControllerBuilder for every controller isn't even funny - that's where generics come in. Something like this would be easier:
    controller = new ControllerBuilder<EventsController>()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();
and the generic class as:
    class ControllerBuilder<T> where T: Controller
    {
        private object[] args;
        private Guid tenantId;
        protected Mock<T> mockController;

        public ControllerBuilder<T> WithConstructorParams(params object[] args) 
        {
            this.args = args;
            return this;
        }

        public virtual T Build() 
        {
            mockController = new Mock<T>(MockBehavior.Strict, args);
            mockController.Setup(t => t.TenantId).Returns(tenantId);
            return mockController.Object;
        }
    }
It takes about 2 seconds to realize that this won't work - since the constraint only says T is a subclass of Controller, we don't have the TenantId or SubscriptionId properties available in the Build method.

Hmm - so a little refactoring is in order: a base ControllerBuilder that's used for plain controllers, and a subclass for controllers deriving from TenantController. So let's move tenantId out of ControllerBuilder and into the subclass.
    class TenantControllerBuilder<T>: ControllerBuilder<T>  
     where T: TenantController          // this constraint allows access to
                                        // TenantId and SubscriptionId
    {
        private Guid tenantId;
        public TenantControllerBuilder<T> WithTenantId(Guid tenantId) 
        {
            this.tenantId = tenantId;
            return this;
        }

        public override T Build() 
        {
            // call the base
            var mock = base.Build();
            // do additional stuff specific to TenantController subclasses.
            mockController.Setup(t => t.TenantId).Returns(this.tenantId);
            return mock;
        }
    }
Now, this will work as intended:
/// This will work:
controller = new TenantControllerBuilder<TaskController>()
            .WithTenantId(guid)                             // Returns TenantControllerBuilder<T>
            .WithConstructorParams(mockOpsRepo.Object)      // okay!
            .Build();

But this won't compile: :(

///This won't compile:
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Compiler can't resolve WithTenant method.
            .Build();
This is basically return type covariance, and it's not supported in C# - and likely never will be. With good reason too - if the base class contract says that you'll get a ControllerBuilder back, then the derived class cannot change that contract to promise that it will return not just any ControllerBuilder but specifically a TenantControllerBuilder.

But this does muck up our builder API's chainability - telling clients to call methods in a certain arbitrary sequence is a no-no. And this is where extension methods provide a neat solution. It's in two parts:

  • Keep only state in TenantControllerBuilder.

  • Use an extension class to convert from ControllerBuilder to TenantControllerBuilder safely with the extension api.


// Only state:
class TenantControllerBuilder<T> : ControllerBuilder<T> where T : TenantController
{
    public Guid TenantId { get; set; }

    public override T Build()
    {
        var mock = base.Build();
        this.mockController.SetupGet(t => t.TenantId).Returns(this.TenantId);
        return mock;
    }
}

// And extensions that restore chainability
static class TenantControllerBuilderExtensions
{
    public static TenantControllerBuilder<T> WithTenantId<T>(
                                        this ControllerBuilder<T> t,
                                        Guid guid)
            where T : TenantController
    {
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = guid;
        return c;
    }

     public static TenantControllerBuilder<T> WithoutTenant<T>(this ControllerBuilder<T> t)
            where T : TenantController
    {
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = Guid.Empty;
        return c;
    }
}
So, going back to our API:
///This now works as intended
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Resolves to the extension method
            .Build();
It's nice sometimes to have your cake and eat it too :D.

Wednesday, August 14, 2013

Unit Tests: Simplifying test setup with Builders

Had some fun at work today. The web portal for the Scheduler service is written in ASP.NET MVC4. As such, we have a lot of controllers, and of course there are unit tests that run against those controllers. Now, while ASP.NET MVC4 apparently did have testability as a goal, it still requires quite a lot of orchestration to test controllers.

Now, all this orchestration and mock setup muddies the waters and gets in the way of test readability. By implication, tests are harder to understand and maintain, and eventually it becomes harder to trust them. Let me give an example:

[TestFixture]
public class AppControllerTests {
    // private
    // set up fields elided
    // elided

    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        _controller = MvcMockHelpers.CreatePartialMock(_tenantRepoMock.Object, _tenantMapRepoMock.Object);

        guid = Guid.NewGuid();

        // partial mock - we want to test controller methods but want to mock
        // properties that depend on the HTTP infra.
        _controllerMock = Mock.Get(_controller);
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        // Arrange
        _controllerMock.SetupGet(t => t.TenantId).Returns(guid);
        _controllerMock.SetupGet(t => t.SelectedSubscriptionId).Returns(guid);
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";
        _controller.SetFakeControllerContext(formValues);

        // Act
        var result = _controller.Index() as ViewResult;

        // Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}
As you can see, we're setting up a couple of dependencies, then creating the SUT (_controller) as a partial mock in the setup. In the test, we set up the request value collection and then exercise the SUT to check that we get redirected to the deep link. This works - but the test setup is too complicated. Yes - we need to create a partial mock and then set up expectations that correspond to a valid user with a valid subscription - but all of that is lost in the details. As such, the test setup is hard to understand and hence hard to trust.

I recently came across this Pluralsight course, and a few thoughts hit home right away, namely:
  1. Tests should be DAMP (Descriptive And Meaningful Phrases)
  2. Tests should be easy to review
Test setups require various objects in different configurations - and that's exactly what a Builder is good at. The icing on the cake is that if we can chain calls to the builder, we move towards a nice DSL for tests. This goes a long way towards improving test readability - the tests become DAMP.

So here's what the Builder API looks like from the client (the test case):

[TestFixture]
public class AppControllerTests {
    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        guid = Guid.NewGuid();
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        // Arrange
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";

        var controller = new AppControllerBuilder()
            .WithFakeHttpContext()
            .WithSubscriptionId(guid)
            .WithFormValues(formValues)
            .Build();

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}

While I knew what to expect, it was still immensely satisfying to see that:
  1. We've abstracted away the details - that we're setting up mocks, that we're using a partial mock, even that we're using the MVC mock helper utility - behind the AppControllerBuilder, leading to simpler code.
  2. The Builder helps the readability of the code - it makes it easy to understand what preconditions we'd like set on the controller. This is important if you'd like the test reviewed by someone else.
You might think this is just sleight of hand - after all, haven't we just moved all the complexity into the AppControllerBuilder? Also, I haven't shown its code - so surely something tricky is going on ;)?

Well, not really - the Builder code is straightforward since it does one thing (build AppControllers) and does it well. It has a few state properties that track the different options, and the Build method basically uses the same code as the first snippet to build the object.

Was that all? Well, not really - you see, as always, the devil's in the details. The above code isn't real - it's more pseudo-code. Secondly, an example in isolation is easier to tackle; IRL (in real life), things are more complicated. We have a controller hierarchy, and writing builders that work with that hierarchy had me wrangling with generics, inheritance and chainability all at once :). I'll post a follow-up covering that.

Sunday, August 11, 2013

And we're back to windows

Well, not really - but I have your attention now... So in my last post, I talked about moving my home computer from Win 7 to Linux Mint KDE. That went OK for the most part, other than some minor issues.
Fast-forward a day and I hit my first user issue :)... my wife's workplace has some video content distributed as DRM-protected swf files that will play only through a player called HaiHaiSoft Player!

Options

  1. Boot into Windows - painful and slow, and it kills everyone else's sessions.

  2. Wine - thought it'd be worth a try; installed Wine and its dependencies through Synaptic. As expected, it wouldn't run the HaiHaiSoft player - it crashed at launch.

  3. Virtualization - so the final option was a VM through VirtualBox. Installed VirtualBox and its dependencies (dkms, guest additions etc.) and brought out my Win 7 install disk from cold storage.

Virtualbox and Windows VM installation

Went through the installation and got Windows up and running. Once the OS was installed, I also installed the guest additions, and it runs surprisingly well. I'd only ever used VirtualBox for a Linux guest on a Windows host before, so it was a nice change to see how it worked the other way around.

Anyway, once the VM was installed, I downloaded and installed the player and put a shortcut to VirtualBox on the desktop. Problem solved!

Saturday, August 10, 2013

Upgraded to Linux

So after suffering tons of crashes (likely due to the AMD drivers) and general system lagginess, I finally decided to ditch Windows and move to Linux full time.
This is on my home desktop, which is more a family computer than something only I would use.
I was a little apprehensive about driver support, as usual, and about tricky stuff like suspend-to-RAM (S3), which always seems highly driver dependent and problematic on Linux (it is still a pain on my XBMCBuntu box). Anyway, nothing like trying it out.

After looking around a bit, I downloaded Linux Mint 15 (default and KDE). Booted the Live CD and liked the experience - though GNOME seems a bit jaded and old. I liked KDE much better - especially since it seems more power-user friendly.

So after testing the hardware stuff (suspend, video drivers and so on) - all of which worked flawlessly, I must say - I decided to go ahead and install it on one of my HDDs. Unfortunately, installation was a bit rocky - I don't know if it was just me, but the Mint installer would progress up to preparing disks and hang there for 10+ minutes without any feedback. I'm assuming it was reading partition tables and so forth - but no idea why it took so long. I thought it had hung a couple of times - so I terminated it - and it was only by accident that I found it was actually still working, when I left it on its own for a while and came back. It then presented me with the list of options (guided partitioning of the entire disk, installing alongside another OS, etc.) - but things actually got worse after this.

What seems to have happened is that all my pending clicks on the UI got processed and it proceeded to install on my media drive before I had a chance... wiping out my media drive. Thankfully, I had backed up the important stuff on that drive before installation, so it wasn't a biggie...
At this point I was having serious doubts about continuing with Mint and was ready to chuck it out of the window and go back to Kubuntu, or just back to Windows. However, I hung on - given that I'd already wiped a drive, I might as well install it properly and wipe it later if it wasn't any good.

Anyway, long story short, I restarted the install, picked my 1TB drive and partitioned it as 20GB /, 10GB /var, 1GB /boot and the rest unpartitioned. Mint went through the installation and seemed to take quite some time - there were a couple of points where the progress bar was stuck at some percentage for multiple minutes and I wasn't sure whether things were proceeding or hung. In any case, after the partitioning window I was more inclined to wait. Good that I did, since the installation did eventually complete.

Feedback to the Mint devs - please make the installer more generous with feedback, especially when it goes into something that could take a long time.

First boot

Post installation, I rebooted and grub showed my Windows boot partition as expected. I still haven't tried booting into Windows, so that's one thing left to check. Booted into Mint and things looked good. Set up accounts for my dad and my wife. One thing I had to do was edit /etc/pam.d/common-password to remove the password complexity requirement (obscure) and set minlen=1:

     password   [success=1 default=ignore]  pam_unix.so minlen=1 sha512

Next was to set up the local disks (2 NTFS and 1 FAT32 partition) so that they're mounted at boot and everyone can read and write to them. I decided to go the easy route and just put entries in /etc/fstab:

UUID=7D64-XXX  /mnt/D_DRIVE    vfat      defaults,uid=1000,gid=100,umask=0007                   0       2
UUID="1CA4559CXXXXX" /mnt/E_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007 0 2
UUID="82F006D7XXXX" /mnt/C_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007 0 2
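
To pick up the new entries without a reboot (assuming the mount points already exist - create them first otherwise):

    sudo mkdir -p /mnt/C_DRIVE /mnt/D_DRIVE /mnt/E_DRIVE
    sudo mount -a    # mounts everything in fstab that isn't already mounted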

That fixed the mount issue, but I still needed the drives to surface properly in the file manager (Dolphin). This was actually quite easy - I just added them as Places and removed the device entries from the right-click menu. This worked for me - I'd have liked to make it the default for every user but didn't find a way. Finally decided to just copy the ~/.local/share/user-places.xbel file to each local user and set the owner.
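
The copy itself is just a couple of commands per user (the username below is a placeholder):

    sudo cp ~/.local/share/user-places.xbel /home/<user>/.local/share/
    sudo chown <user>: /home/<user>/.local/share/user-places.xbel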

Android

Other than that, I also need to be able to connect my Nexus 4 and Nexus 7 as MTP devices. I had read that this doesn't work out of the box - but it looks like that's been addressed in Ubuntu 13.04 (and hence in Mint).
I also need adb and fastboot - so I just installed them through Synaptic. BTW, that was awesome, since it meant I didn't have to download the complete Android SDK just for two tools.
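
For reference, the command-line equivalent is the following - the package names are from the Ubuntu 13.04 base that Mint 15 uses, so treat them as my assumption:

    sudo apt-get install android-tools-adb android-tools-fastboot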

General impressions

Well, I'm still wondering why I didn't migrate full time to Linux all these years. Things have been very smooth - but I do want to call out the key improvements I've seen so far:

  1. Boot - fast - less than a minute. Compare that to up to 3 minutes till the desktop is loaded on Win 7.
  2. Switching users - a huge, huge speed-up. On Windows it would take so long that most of the time we'd just continue using each other's login.
  3. Suspend/resume - works reliably. Back on Windows, for some reason, if multiple users were logged in, suspend would work but resume was hit and miss.
  4. The GPU seems to work much better. Note though that I'm not playing any games. I have a Radeon 5670 - but somehow on Windows even Google Maps (the new one) would be slow and sluggish while panning and zooming. Given that on Linux I'm using the open source drivers instead of fglrx, I was expecting the same if not worse. Pleasantly surprised that Maps just works beautifully - panning and zooming in and out is smooth and fluid. Even the photospheres I'd posted to Maps seem to load a lot more quickly.

Well, that's it for now. I know a lot of this might be 'new system build' syndrome, whereas on Windows gunk had built up over multiple years. However, note that my Windows install was fully patched and up to date. Being a power user, I was even going beyond the default levels of tweaking (page file on a separate disk from the system, etc.) - but I just got tired of the issues. The biggest trigger was the GPU crashes, of course, and here too, updating to the latest drivers didn't seem to help much. I fully realize that it's almost impossible to generalize: my work laptop has Win 7 x64 Enterprise and I couldn't be happier - it remains snappy and fast in spite of a ton of things being installed (actually, maybe not - the Linux boot is still faster) - but it is stable.
And of course, there might be a placebo effect in places - but in the end what matters is that things work.

Thursday, July 25, 2013

Vimgrep on steroids - even on Windows

So I was looking at this Vim tip for finding in files from within Vim - while it looks helpful, there are a number of possible improvements:

  1. Why a static binding? Wanting to tweak the pattern or the set of files to search is quite common - so there's much more value in having the command printed on the command line, ready to be edited to your heart's content, or executed as-is with Enter.
  2. The tip won't work for files without extensions (say .vimrc) - in that case, expand("%:e") returns an empty string.
  3. lvimgrep is cross-platform but slow - so let's wire up MinGW grep too, not just vimgrep.
  4. And make that MinGW grep integration work across different machines.
It was more an evening of scratching an itch (a painful one if you're a zero at Vimscript :)). Here's the gist for it - hope someone finds it useful.

Feel free to tweak the mappings - I use the following:

  1. leader+f: normal mode: vimgrep for current word, visual mode: search for current selection
  2. leader+fd: Similar - but look in the directory of the file and below
  3. leader+*: Similar to the above, but use internal grep

Save the file to your .vim folder and source it from .vimrc

    so ~/.vim/grephacks.vim

A few notes:

  1. GNUWIN is an environment variable pointing to a folder where you've extracted MinGW findutils, grep and their dependencies (example below).
  2. The searches by default work down from whatever Vim thinks is your present working directory. I highly recommend vim-rooter if you're using anything like Subversion, Mercurial or Git, as vim-rooter automatically looks for a parent folder that contains .git, .hg or .svn (and more - please look it up).
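
For example, on Windows you could set GNUWIN once from a command prompt - the path below is just an example, point it at wherever you extracted the tools (new cmd/Vim sessions will pick it up):

    setx GNUWIN "C:\tools\gnuwin32"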

Happy vimming!

Saturday, February 16, 2013

Downloading over an unreliable connection with Wget

This is part rant, part tip - so bear with me... My broadband connection has absolutely sucked over the past week. I upgraded from 2Mbps with a download limit to 4Mbps with unlimited downloads, and since then it has been nothing but trouble... Damn BSNL!! I've probably registered about 30-odd complaints with them to no avail. If there were a Nobel for bad customer service, BSNL would probably win it by a mile. Some examples:
  1. They'll call to find out what the complaint is, and even when I explain what's happening, they hardly hear me out at all.
  2. Or they call up and say 'We have fixed it at the exchange' when nothing has changed.
  3. They automatically close the complaints :)
Guess they find it too troublesome that someone who's paying for broadband actually expects the said broadband connection to work reliably!

Anyway, Airtel doesn't seem to be any better - they need 10 days to set up a connection, and when I was on the phone with them, they didn't seem too interested in increasing their customer count by 1 :).

I also tried calling an ISP called YouBroadband after searching some of the Bangalore forums for good ISPs. They promised a call within 24 hours to confirm whether they have coverage in my area and whether it was feasible for them to set up the connection - and that was 48 hours ago!

At work, I've heard good things about ACT Broadband and they have some ads in TOI as well, but they said they don't have coverage in my area :(.

So how do you download


Today I needed to download something, and doing it from the browser failed each time since my DSL connection would blink out in between!

After ranting and raving (and writing the first part above), and still mentally screaming at BSNL, I decided to do something about it... Time for trusty old wget - surely it'll have something?

Turns out that guess was 100% on the money... it took a few tries experimenting with different options, but it finally worked like a charm:

wget -t0 --waitretry=5 -c -T5 url
# where
# -t0 - unlimited retries
# --waitretry - seconds to wait between retries
# -c resume partially downloaded files
# -T5 - set all timeouts to 5 seconds. Timeouts here are connect timeout, read timeout and dns timeout

Sunday, February 03, 2013

Single Page Apps

We released the Scheduler service (cloud-hosted cron that does webhooks) on the 18th of Jan. It was our first release (still in beta) and you can sign up for it via the Windows Azure store as an add-on. An upcoming release will add a full portal and the ability to register without going via the Windows Azure portal.

We've been building the user portal for the Scheduler service as a Single Page App (SPA), and I wanted to share some background and insights we've gained.

SPA overview

To review, an SPA is a web app contained in a single page - where 'pages' are nothing but divs being shown/hidden based on the state of the app and user navigation.

The benefits are that you never have a full page refresh at all - essentially, page loads are instantaneous and data is retrieved and shown via AJAX calls. From a UX standpoint, this delivers a 'speedier' experience, since you never see the 'static' portions of your page reload when you navigate around.

All that speediness is great but the downsides are equally important.

SPA - Challenges

  1. Navigation - SPAs by nature break the browser's normal navigation mechanism. Normally, you click a link, it launches a request and updates the URL in the address bar; the response is then fetched and painted. In an SPA, however, the link click is trapped in JS, the state changes and a different div is shown (with a background AJAX request being launched).
    This breaks Back/Forward navigation, and since the URL doesn't change, bookmarkability is broken to boot.
  2. SEO - SEO also breaks because links are wired up to JS and most bots cannot follow such links.
Now, none of this is really new. Gmail was probably the first well-known SPA implementation and it's been around since 2004. What's changed is that there are now better tools and frameworks for writing SPAs. So how do you get around the problems?
  1. Back/Forward nav and bookmarkability: SPAs use hash fragment navigation - links contain hash fragments. Per the URI standard, a hash fragment identifies a location within the page, so the browser will update the address bar and push an entry onto the history stack, but it will not make a request to the server. Client-side routing can then listen for changes to the location hash and manipulate the DOM to show the right 'section' of the page (a small routing sketch follows this list).

  2. SEO - Google (and later Bing) support crawling SPA websites, provided the links are formatted in a specific way.
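
To make that routing concrete, here's roughly what a hash route looks like with Sammy.js (which we ended up using - see Tools and frameworks below); the view-model calls are made-up placeholders, not our actual code:

    // map location.hash routes to app state; Sammy listens for hash changes
    var app = Sammy('#main', function () {
        this.get('#/tasks', function () {
            viewModel.activeSection('tasks');          // placeholder view-model call
        });
        this.get('#/tasks/:id/history', function () {
            viewModel.showHistory(this.params['id']);  // placeholder view-model call
        });
    });
    $(function () { app.run('#/tasks'); });            // start routing with a default route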

Why we went the SPA way

When we started out with the portal, we needed to make some decisions about how to go about it:
  1. The Scheduler REST service is a developer-focused offering and the primary interaction for our users is the API itself. While the portal has Scheduler management features, these are really there to give users a 'manual' interface to the Scheduler. The other important use case for the portal is viewing the history of a task's executions. Given that the API was primary, we wanted to build the UI on top of the APIs, to dogfood our API early and often.
  2. It just made sense to have the UI consume the APIs so that we weren't rewriting the same capabilities again just to support the UI.
  3. Getting the portal to work across devices was important. In that sense, an approach that reduces page loads makes sense.
  4. We wanted the public pages to be SEO friendly - so the SPA experience kicks in only after you log in.
  5. Bookmarkability is important and it should be easy to paste/share links within the app.

Tools and frameworks

We evaluated different frameworks for building the SPA by writing a thin slice of the portal - a few public pages, a social login page and a couple of logged-in pages to exercise navigation and bookmarkability.
  1. KO+ approach - I'm calling this KO+ since KO is just a library for MVVM binding and we needed a bunch of other libraries to manage the other aspects of the SPA.
    • Knockout.js - for MVVM binding
    • Sammy.js - Client side routing
    • Require.js - script dependency management.
    • Jquery - general DOM manipulation when we needed it.
  2. Angular.js - Google's Angular.js is a full suite SPA framework that handles all the aspects of SPA
We chose the KO+ approach as there was existing knowledge of and experience with KO on the team. The learning curve is also shallower, since each library can be tackled one at a time. While Angular offers a full-fledged SPA framework, it also brings more complexity to be grappled with and understood - essentially, the 'Angular' way of building apps.

That said, once you get over the initial learning curve, Angular does offer a pleasant experience, and you don't have to deal with the integration issues that come up when using separate libraries. We had prior experience with KO on the team, so it just made sense to pick it given our timelines.

I'll post an update once we have it out of the door and ready for public consumption.