Friday, August 30, 2013

Fixing Wifi sleep of death

The problem

I have a TP-Link WN722N USB WiFi dongle. Linux Mint picked it up during install and all seemed good.

Then the other day, I noticed that the WiFi would sometimes be flaky as hell - all I'd get was the password prompt. Turns out this is a common problem with USB WiFi adapters. After a few days a pattern emerged: the problem recurred after putting the computer to sleep. The fix is easy - just unload the ath9k_htc module before suspending. Edit /etc/pm/config.d/config (create it if needed):

SUSPEND_MODULES="ath9k_htc"
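Under the hood, pm-utils simply sources every file in /etc/pm/config.d as shell before suspending, then runs modprobe -r on each module named in SUSPEND_MODULES (and reloads them on resume). A quick sketch of that mechanism, using a scratch directory so no root is needed:

```shell
# Emulate what pm-utils does with the config file (scratch path, not /etc)
mkdir -p /tmp/pm-demo/config.d
echo 'SUSPEND_MODULES="ath9k_htc"' > /tmp/pm-demo/config.d/config

# pm-utils sources the config files as plain shell...
. /tmp/pm-demo/config.d/config

# ...and unloads each listed module before suspend (echoed here, not run)
for m in $SUSPEND_MODULES; do echo "would run: modprobe -r $m"; done
```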

Monday, August 26, 2013

Moving from Wordpress.com to Blogger

Moving from wordpress

So I moved my blog from its old home at http://niftybits.wordpress.com to http://blog.rraghur.in - and with it, from Wordpress.com to Blogger. For quite some time I'd been unhappy with Wordpress's abilities for a tech blog. It's a commercial endeavour, so if you need additional features/tweakability, you've got to fork out the good stuff. I had been intending to get my own domain and Wordpress hosting, which would give me full control over the blog engine and the ability to install any addons I wanted. But there was this niggling feeling in the back of my head that I was probably creating a monster - set it up, and then take on its maintenance as well :(. That's part of the reason I decided to dust off my old Blogger account and see where it stood. Now, the last time I'd touched Blogger, about 10 yrs ago, was just after Google bought it, and it was really more of a mommy-blog engine. Things have changed while I've been living in a hole... Blogger's now much more polished. A few things where it leaves Wordpress.com in the dust:
  1. Custom markup and css.
  2. Custom domains
  3. Google Analytics
  4. Ability to have ads and hence make some money - not that I intend to.
Where it falls behind:
  1. Analytics - really... the site analytics seems a little iffy.
    Update: Once you integrate Blogger with Google Analytics, it's much, much nicer than Blogger's built-in stats.
  2. Referrer spam - all I saw on the analytics dashboard were entries from the www dot vampire dot stat domain. Turns out it's referrer spam and isn't/can't be blocked. What I don't understand is how this was never a problem with Wordpress.com.
  3. Themes - far fewer than WP - but this isn't an issue since you can tweak anything to your heart's content.
Now all of the WP shortcomings can be addressed if you go for paid upgrades OR just host your own WP. Unfortunately, neither is a good option for me. So while I didn't like moving out, it had to be done.

Registering a domain

I went ahead and registered the domain with BigRock.in - here's a referral link that will get you 25% off your domain.


Pointing Blogger to your custom domain

This was very simple - just follow the instructions on Blogger. You will need to set up 2 CNAME records for Blogger in your DNS management console, in addition to your custom domain itself. For example:
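As an illustration only (the blog subdomain, the verification hostname and its target below are placeholders - Blogger's settings page shows the exact values for your blog), the two records look roughly like this in a zone file:

```
; illustrative zone entries for a Blogger custom domain
blog.example.com.       IN  CNAME  ghs.google.com.
abcd1234.example.com.   IN  CNAME  gv-xxxxxxxxxx.dv.googlehosted.com.
```

The first record points the blog's hostname at Google's servers; the second is the unique ownership-verification record that Blogger asks for.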


Migrating content from WP.com

I also had to get the blog content transferred. Went into the Wordpress dashboard, Tools > Export > All Content, and exported the blog. This lets you download an XML file with all your blog content. Next, it has to be converted so that it can be imported by Blogger. Move over to Wordpress2Blogger and upload the file. If all goes well, you get a converted file. It wasn't so simple for me though - it gave an 'Invalid XML at line xxxx' error. Opening the Wordpress XML, I didn't see any issues, and a little googling indicated that WP.com is notorious for generating invalid/malformed XML. Hmm - not a biggie. Pulled out xmllint and ran it over the Wordpress export:
xmllint --noout /path/to/wordpress/export/file
And xmllint found the problem - an &nbsp; entity that had not been declared (XML predefines only five entities; &nbsp; is HTML-only). Just removed it from the file, uploaded again to Wordpress2Blogger, and the conversion went without a hitch.
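If you hit the same error, the recipe generalizes: let xmllint point at the undeclared entity, strip or replace it, and re-validate. A sketch on a toy export (file name and entity are illustrative; the xmllint step is skipped gracefully if it isn't installed):

```shell
# A toy "export" containing the HTML-only &nbsp; entity
printf '<?xml version="1.0"?>\n<rss><item><title>Hello&nbsp;world</title></item></rss>\n' > /tmp/export.xml

# xmllint reports the offending line and entity
command -v xmllint >/dev/null && { xmllint --noout /tmp/export.xml || true; }

# Replace the undeclared entity with a plain space and re-check
sed -i 's/&nbsp;/ /g' /tmp/export.xml
command -v xmllint >/dev/null && xmllint --noout /tmp/export.xml || true
```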


Other Tweaks

Formatting of posts - while the content came over fine, it was littered with <br /> tags. Also, all my old source code listings that used the WP.com syntax tags [sourcecode] [/sourcecode] were broken and had to be fixed.
For code syntax highlighting, I went with highlight.js for now; may change later. Also copied the key highlighting CSS from here. You can edit the template and stick the markup into the head section.
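For reference, the head markup looks roughly like this (the CDN paths and theme name are placeholders - use whichever build and style you copied; hljs.initHighlightingOnLoad() was highlight.js's init call at the time):

```html
<!-- stylesheet for the chosen highlight.js theme (path/theme illustrative) -->
<link rel='stylesheet' href='https://cdn.example.com/highlight.js/styles/default.min.css'/>
<!-- the library itself, then highlight every <pre><code> block on page load -->
<script src='https://cdn.example.com/highlight.js/highlight.min.js'></script>
<script>hljs.initHighlightingOnLoad();</script>
```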
Next, spent some time on the theme tweaks on Blogger till I got tired. I'm ok with it for now - but will probably come back to it later.


In Closing..

Well, the move was completed. I'm a little dissatisfied with a few things:
  1. What do I do with my Wordpress blog? I can't redirect it, and while I could delete it, I'm a little hesitant about burning bridges.
  2. Reach via Google search seems to be a problem - searching for xbmc xvba nettop rraghur doesn't even show the Blogger link; only the Wordpress links are there. I think this is down to page reputation - so not much I can do.
  3. Blogging with Vim: VimRepress was a good solution for WP.com. For Blogger I've only found Blogger.vim, which I'm yet to try.
The benefits are worth much more, though: the peace of mind of not having to worry about maintenance, and the freedom to move to a self-hosted blog later if needed.

Saturday, August 24, 2013

Linux Mint 15 KDE - tweaks and fixes

Additional fixes post installation

SSH connection refused

So today I tried ssh'ing into the desktop - no go. I was getting 'connection refused' and thought it had to do with SSH either not being installed or being blocked by the firewall. When I checked later, though, the OpenSSH server was installed and the service was running:

sudo service ssh status
ssh start/running, process 2709

Hmm - weird. The next check was iptables, and that was clear too. So the last thing was to look at /var/log/auth.log - and indeed, there's the problem. Interestingly, the host keys weren't generated during installation:

Aug 22 20:15:58 desktop sshd[1960]: fatal: No supported key exchange algorithms [preauth]
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_rsa_key
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_dsa_key
Aug 22 20:16:49 desktop sshd[1990]: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key

OK - so the fix is easy - generate the missing host keys with

    sudo ssh-keygen -A

After that, everything's back to normal :)
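For the record, ssh-keygen -A generates any host keys of supported types that are missing under /etc/ssh. You can watch it work without touching your real keys by pointing it at a scratch prefix - a sketch (the demo path is arbitrary):

```shell
# -A generates any *missing* host key types; with -f <prefix> the keys are
# written under <prefix>/etc/ssh instead of /etc/ssh, so no root is needed
mkdir -p /tmp/ssh-demo/etc/ssh
ssh-keygen -A -f /tmp/ssh-demo
ls /tmp/ssh-demo/etc/ssh
```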

Power button shuts down computer

This is Kubuntu Bug 1124149. I'll spare you the details, which you can read yourself. The fix needed is to symlink qdbus into /usr/bin:

sudo ln -sf /usr/lib/x86_64-linux-gnu/qt4/bin/qdbus /usr/bin/qdbus

Tuesday, August 20, 2013

Mixing Generics, Inheritance and Chaining

In my last post on unit testing, I wrote about a technique I'd learnt for simplifying test setups with the builder pattern. It provides a higher-level, more readable API, resulting in DAMP tests.

Implementing it, though, presented a few interesting issues that were fun to solve and, hopefully, instructive as well. I for one will need to look this up if I spend a few months doing something else - so I'd better write it down :).

In the Scheduler user portal, some controllers derive from the MVC4 Controller class whereas others derive from a custom base controller. For instance, controllers that deal with logged-in interactions derive from TenantController, which provides TenantId and SubscriptionId properties. IOW, a pretty ordinary and commonplace setup.
    class EventsController : Controller 
    {
        public ActionResult Post (MyModel model) 
        {
        // access request, form and other http things
        }
    }

    class TenantController: Controller 
    {
        public Guid TenantId {get; set;}
        public Guid SubscriptionId {get; set;}
    }

    class TaskController: TenantController
    {
        public ActionResult GetTasks()
        {
            // Http things and most probably tenantId and subId as well.
        }
    }
So, tests for EventsController will require HTTP setup (request content, headers etc) where as for anything deriving from TenantController we also need to be able to set up things like TenantId.

Builder API


Let's start from how we'd like our API to be. So, for something that just requires HTTP context, we'd like to say:
    controller = new EventsControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();
And for something that derives from TenantController:
    controller = new TaskControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .WithTenantId(theTenantId)
                .WithSubscriptionId(theSubId)
                .Build();
The controller builder basically keeps track of the different options and always returns this to facilitate chaining. Apart from that, it has a Build method which builds a controller object according to the options and returns it. Something like this:

    class TaskControllerBuilder
    {
        private object[] args;
        private Guid tenantId;

        public TaskControllerBuilder WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public TaskControllerBuilder WithTenantId(Guid id)
        {
            this.tenantId = id;
            return this;
        }

        public TaskController Build()
        {
            // partial mock so that TenantId can be stubbed
            var mock = new Mock<TaskController>(MockBehavior.Strict, args);
            mock.Setup(t => t.TenantId).Returns(tenantId);
            return mock.Object;
        }
    }

Generics


Writing an XXXControllerBuilder for every controller isn't even funny - that's where generics come in. Something like this might be easier:
    controller = new ControllerBuilder<EventsController>()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();
and the generic class as:
    class ControllerBuilder<T> where T : Controller
    {
        private object[] args;
        private Guid tenantId;
        protected Mock<T> mockController;

        public ControllerBuilder<T> WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public virtual T Build()
        {
            mockController = new Mock<T>(MockBehavior.Strict, args);
            // won't compile: T is only constrained to Controller, which has no TenantId
            mockController.Setup(t => t.TenantId).Returns(tenantId);
            return mockController.Object;
        }
    }
It takes about two seconds to realize that this won't work - since the constraint only specifies that T should be a subclass of Controller, we do not have the TenantId or SubscriptionId properties available in the Build method.

Hmm - so a little refactoring is in order: a base ControllerBuilder that works for plain controllers, and a subclass for controllers deriving from TenantController. So let's move tenantId out of ControllerBuilder.
    class TenantControllerBuilder<T> : ControllerBuilder<T>
        where T : TenantController      // this constraint allows access to
                                        // TenantId and SubscriptionId
    {
        private Guid tenantId;

        public TenantControllerBuilder<T> WithTenantId(Guid tenantId)
        {
            this.tenantId = tenantId;
            return this;
        }

        public override T Build()
        {
            // call the base
            var mock = base.Build();
            // do additional stuff specific to TenantController subclasses
            mockController.Setup(t => t.TenantId).Returns(this.tenantId);
            return mock;
        }
    }
Now, this will work as intended:
/// This will work:
controller = new TenantControllerBuilder<TaskController>()
            .WithTenantId(guid)                             // Returns TenantControllerBuilder<T>
            .WithConstructorParams(mockOpsRepo.Object)      // okay!
            .Build();

But this won't compile :(

///This won't compile:
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Compiler can't resolve WithTenant method.
            .Build();
This is basically return-type covariance, and it's not supported in C# - and likely never will be. With good reason too: the base class contract says that WithConstructorParams returns a ControllerBuilder<T>, and an override in a derived class cannot narrow that contract to promise a TenantControllerBuilder<T>.

But this does muck up our builder API's chainability - telling clients to call methods in some arbitrary sequence is a no-no. This is where extension methods provide a neat solution. It's in two parts:

  • Keep only state in TenantControllerBuilder.

  • Use an extension class to convert from ControllerBuilder to TenantControllerBuilder safely with the extension api.


// Only state:
class TenantControllerBuilder<T> : ControllerBuilder<T> where T : TenantController
{
    public Guid TenantId { get; set; }

    public override T Build()
    {
        var mock = base.Build();
        this.mockController.SetupGet(t => t.TenantId).Returns(this.TenantId);
        return mock;
    }
}

// And extensions that restore chainability
static class TenantControllerBuilderExtensions
{
    public static TenantControllerBuilder<T> WithTenantId<T>(
                                        this ControllerBuilder<T> t,
                                        Guid guid)
            where T : TenantController
    {
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = guid;
        return c;
    }

     public static TenantControllerBuilder<T> WithoutTenant<T>(this ControllerBuilder<T> t)
            where T : TenantController
    {
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = Guid.Empty;
        return c;
    }
}
So, going back to our API:
///This now works as intended
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Resolves to the extension method
            .Build();
It's nice sometimes to have your cake and eat it too :D.

Wednesday, August 14, 2013

Unit Tests: Simplifying test setup with Builders

Had some fun at work today. The web portal to Scheduler service is written in ASP.NET MVC4. As such we have a lot of controllers and of course there are unit tests that run on the controllers. Now, while ASP.NET MVC4 apparently did have testability as a goal, it still requires quite a lot of orchestration to test controllers.

Now, all this orchestration and mock setup muddies the waters and gets in the way of test readability. By implication, tests are harder to understand and maintain, and eventually it becomes harder to trust them. Let me give an example:

[TestFixture]
public class AppControllerTests
{
    // private set-up fields elided

    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        _controller = MvcMockHelpers.CreatePartialMock(_tenantRepoMock.Object, _tenantMapRepoMock.Object);

        guid = Guid.NewGuid();

        // partial mock - we want to test controller methods but want to mock
        // properties that depend on the HTTP infra.
        _controllerMock = Mock.Get(_controller);
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        // Arrange
        _controllerMock.SetupGet(t => t.TenantId).Returns(guid);
        _controllerMock.SetupGet(t => t.SelectedSubscriptionId).Returns(guid);
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";
        _controller.SetFakeControllerContext(formValues);

        // Act
        var result = _controller.Index() as ViewResult;

        // Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));

        _mockRepo.VerifyAll();
    }
}
As you can see, we’re setting up a couple of dependencies, then creating the SUT (_controller) as a partial mock in the setup. In the test, we’re setting up the request value collection and then exercising the SUT to check that we get redirected to a deep link. This works – but the setup is too complicated. Yes – we need to create a partial mock and then set up expectations that correspond to a valid user with a valid subscription – but all of this is lost in the details. As such, the test setup is hard to understand and hence hard to trust.

I recently came across this pluralsight course  and there were a few thoughts that hit home right away, namely:
  1. Tests should be DAMP (Descriptive And Meaningful Phrases)
  2. Tests should be easy to review
Test setups require various objects in different configurations - and that's exactly what a Builder is good at. The icing on the cake is that if we can chain calls to the builder, then we move towards evolving a nice DSL for tests. This goes a long way towards improving test readability - tests have become DAMP.

So here's what the Builder API looks like from the client (the test case):

[TestFixture]
public class AppControllerTests
{
    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        guid = Guid.NewGuid();
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        // Arrange
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";

        var controller = new AppControllerBuilder()
            .WithFakeHttpContext()
            .WithSubscriptionId(guid)
            .WithFormValues(formValues)
            .Build();

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));

        _mockRepo.VerifyAll();
    }
}

While I knew what to expect, it was still immensely satisfying to see that:
  1. We’ve abstracted away details like setting up mocks, the fact that we’re using a partial mock, and even that we’re using the MVC mock helper utility behind AppControllerBuilder – leading to simpler code.
  2. The Builder helps the readability of the code – it makes it easy to understand what preconditions we’d like set on the controller. This is important if you’d like the test reviewed by someone else.
You might think this is just sleight of hand - after all, haven't we just moved all the complexity into AppControllerBuilder? Also, I haven't shown its code - so surely something tricky is going on ;)?

Well, not really - the Builder code is straightforward since it does one thing (build AppControllers) and does it well. It has a few state properties that track the different options, and its Build method uses essentially the same code as the first snippet to build the object.

Was that all? Well, not really – as always, the devil’s in the details. The above code isn’t real – it’s more pseudocode. Also, an example in isolation is easier to tackle; IRL (in real life), things are more complicated. We have a controller hierarchy, and writing builders that work with the hierarchy had me wrangling with generics, inheritance and chainability all at once :). I'll post a follow-up covering that.

Sunday, August 11, 2013

And we're back to windows

Well, not really - but I have your attention now... So in my last post, I talked about moving my home computer from Win 7 to Linux Mint KDE. That went OK for the most part, apart from some minor issues.
Fast-forward a day and I hit my first user issue :)... my wife's workplace has some video content that is distributed as DRM-protected swf files that will play only through a player called HaiHaiSoft Player!

Options

  1. Boot into Windows - painful and slow, and it kills everyone else's sessions.

  2. Wine - thought it'd be worth a try - installed Wine and its dependencies through Synaptic. As expected, it wouldn't run the HaiHaiSoft player - it crashed at launch.

  3. Virtualization: so the final option was a VM through virtualbox. Installed Virtualbox and its dependencies (dkms, guest additions etc) and brought out my Win 7 install disk from cold storage.

Virtualbox and Windows VM installation

Went through the installation and got Windows up and running. Once the OS was installed, I also installed the guest additions, and it runs surprisingly well. I'd only ever used Virtualbox with a Linux guest on a Windows host before, so it was a nice change to see how it worked the other way around.

Anyway, once the VM was installed, downloaded and installed the player and put a shortcut to virtualbox on the desktop. Problem solved!

Saturday, August 10, 2013

Upgraded to Linux

So after suffering tons of crashes (likely due to the AMD drivers) and general system lagginess, I finally decided to ditch Windows and move to Linux full time.
This is on my home desktop, which is more a family computer than something only I would use.
I was a little apprehensive about driver support, as usual, and about tricky stuff like suspend-to-RAM (S3), which always seems highly driver-dependent and problematic on Linux (it is still a pain on my XBMCBuntu box). Anyway, nothing like trying it out.

After looking around a bit, I downloaded Linux Mint 15 (default and KDE). Booted with the Live CD and liked the experience - though GNOME seems a bit dated. I liked KDE much better - esp since it seems more power-user friendly.

So after testing the hardware stuff (suspend, video drivers and so on) - all of which worked flawlessly, I must say - I decided to go ahead and install it on one of my HDDs. Unfortunately, installation was a bit rocky - I don't know if it was just me, but the Mint installer would progress up to preparing disks and hang there for 10+ minutes without any feedback. I assume it was reading partition tables and so forth - but no idea why it took so long. I thought it had hung a couple of times - so I terminated it, and it was only by accident that I discovered it was actually still working: I left it on its own for some time, and when I got back it presented me with the list of options (guided partitioning of the entire disk, co-locate with another OS, etc.) - but things actually got worse after this.

What seems to have happened is that my pending clicks on the UI were all processed, and it proceeded to install on my media drive before I had a chance ... wiped out my media drive. Thankfully, I had taken a backup of the important stuff on that drive before installation, so it wasn't a biggie...
At this point I had serious doubts about continuing with Mint and was ready to chuck it out of the window and go back to Kubuntu, or just back to Windows. However, I hung on - given that I'd already wiped a drive, I might as well install it properly and wipe it later if it wasn't any good.

Anyway, long story short, I restarted the install, picked my 1TB drive and partitioned it as 20GB /, 10GB /var, 1GB /boot, with the rest left unpartitioned.
Mint went through the installation and seemed to take quite some time - there were a couple of points where the progress bar was stuck at some percentage for multiple minutes and I wasn't sure whether things were proceeding or hung. In any case, after the partitioning episode I was more inclined to wait. Good that I did, since the installation did eventually complete.

Feedback to the Mint devs - please make the installer more generous with feedback, esp if it goes into something that could take long.

First boot

Post installation, I rebooted, and GRUB showed my Windows boot partition as expected. I still haven't tried booting into Windows, so that's one thing left to check. Booted into Mint and things looked good. Set up accounts for my dad and my wife. One thing I had to do was edit /etc/pam.d/common-password to remove password complexity (obscure) and set minlen=1:

     password   [success=1 default=ignore]  pam_unix.so minlen=1 sha512

Next up was setting up the local disks (2 NTFS and 1 FAT32 partition) so that they are mounted at boot and everyone can read and write to them. I decided to go the easy route and just put entries in /etc/fstab:

UUID=7D64-XXX  /mnt/D_DRIVE    vfat      defaults,uid=1000,gid=100,umask=0007                   0       2
UUID="1CA4559CXXXXX" /mnt/E_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007 0 2
UUID="82F006D7XXXX" /mnt/C_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007 0 2

That fixed the mounting, but the drives still needed to surface properly in the file manager (Dolphin). This was actually quite easy - I just added them as Places and removed the device entries from the right-click menu. That worked for me - I'd have liked to make it the default for everyone but didn't find a way, so I finally decided to just copy the ~/.local/share/user-places.xbel file to each local user and set the owner.

Android

Other than that, I also need to be able to connect my Nexus 4 and 7 as MTP devices. I had read that this doesn't work out of the box - but it looks like that's been addressed in Ubuntu 13.04 (and hence in Mint).
I also need adb and fastboot - so I just installed them through Synaptic. BTW, that was awesome, since it meant I didn't have to download the complete Android SDK just for two tools.

General impressions

Well, I'm still wondering why I didn't migrate to Linux full time all these years. Things have been very smooth - but I need to call out the key improvements I've seen so far:

  1. Boot - fast - less than a minute. Compare that to up to 3 mins till the desktop loaded on Win 7.
  2. Switching users - a huge, huge speed-up. On Windows, it would take so long that most of the time we'd just continue in each other's login.
  3. Suspend/resume - works reliably. Back on Windows, for some reason, if multiple users were logged in, suspend would work but resume was hit and miss.
  4. The GPU seems to work much better. Note though that I'm not playing any games. I have a Radeon 5670 - but somehow on Windows even Google Maps (the new one) would be slow and sluggish while panning and zooming. Given that on Linux I'm using the open source drivers instead of fglrx, I was expecting the same, if not worse. Pleasantly surprised that Maps just works beautifully - panning and zooming in and out is smooth and fluid. Even the photospheres I'd posted to Maps seem to load a lot more quickly.

Well, that's it for now. I know a lot of this might be 'new system build' syndrome, whereas Windows gunk had built up over multiple years. However, note that my Windows install was fully patched and up to date. Being a power user, I was even going beyond the default levels of tweaking (page file on a separate disk from the system, etc.) - but I just got tired of the issues. The biggest trigger was the GPU crashes, of course, and here too, updating to the latest drivers didn't seem to help much. I fully realize that it's almost impossible to generalize. My work laptop has Win 7 x64 Enterprise and I couldn't be happier - it remains snappy and fast in spite of a ton of things being installed (actually, maybe not - the Linux boot is still faster) - but it is stable.
And of course, there might be a placebo effect at some places - but in the end what matters is that things work.