Friday, February 17, 2012

VIM config updates

UltiSnips has been updated to 2.0. See the video here for the updates and new features. One piece of information, and one that I was eagerly waiting for, is that 2.0 works perfectly with the autocomplete popup. This wasn't always the case; in fact, the bug on Launchpad for this had been marked as 'wont-fix'. In any case, I was super thrilled to see that it's been fixed.
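For anyone who hasn't tried UltiSnips yet, a snippet definition looks something like this (a made-up example; the trigger name and body are mine, not from the plugin's shipped snippets):

```
snippet for "for loop (javascript)" b
for (var ${1:i} = 0; $1 < ${2:count}; $1++) {
	${0:// body}
}
endsnippet
```

Type `for` at the beginning of a line, hit the expand trigger, and tab through the placeholders - `$1` is mirrored into the loop condition and increment as you type.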


Zencoding.vim has also been updated... if you're writing any sort of markup the old way, just google zencoding - there are a couple of videos that will blow your socks off. For the truly impatient: you write a CSS-like expression and it's expanded into markup! How cool is that?

Friday, January 27, 2012

iOS - no previous simulator versions!

Ran into a situation today where a mobile web app of ours was reported to be misbehaving on iOS 3.2. The Mac at work has the latest Xcode and the iOS 5 simulator loaded. So we thought it would be quite routine to just start a simulator running iOS 3.2 - after all, having simulators for different versions of the OS is pretty standard. Android makes it trivial, and before that, BlackBerry always had different simulator versions for different versions of their OS. Truth be told, RIM probably overdid it: too many versions, a developer website that would drive even the most persistent BB fanboys to stark raving madness, and documentation that took great pains to suck! Hell, it's a separate rant altogether :).


Anyway, after clicking around Xcode for a bit, imagine our surprise when we found that only a device debugging package for iOS 3.2 was available as an update for Xcode. That didn't seem right - so off to Google, and there's a post on SE: http://apple.stackexchange.com/questions/14128/how-do-i-install-the-3-0-iphone-simulator-on-xcode-4


Apparently, Apple doesn't want you to test on previous versions (or at least not two major versions back; testing on the last major version, iOS 4, is OK). Now isn't that absolutely ridiculous? Sure, Apple wants people to upgrade their phones to the latest OS version, and they've done a great job of ensuring that later versions of the OS work on older-generation phones - but from a development tool standpoint, making the tools to test your app unavailable is taking things too far. So tomorrow, if my site/app doesn't work properly on an iOS 3 device, the user isn't going to blame Apple. It's the app developer who gets the bug report :(.


So once I'd made my peace with Apple's decisions and diktats on what simulators I was allowed to play with, I reflected on it a bit. I think the key is that iOS's simulator is really just that - a simulator (i.e. software running on the host machine but mimicking a device). On the other hand, the Android emulator is actually a full QEMU-based VM that's totally isolated from the host machine. In iOS's case, the simulator shares libraries and tools installed on the base OS, and as such it would be quite hard to simulate older versions. In Android's case, since each emulator instance is really a VM, you can have all the different versions at your beck and call. On the flip side, the emulator approach really slows things down - everything from booting the VM to actually running code inside it - whereas the iOS simulator positively zips around. I'm not sure if this is indeed the case; it's just my theory. Looking around on the net, I couldn't find a solid reference, so if you know of one, drop it in the comments. I did find a few on SE/SO, but they were by no means conclusive: http://stackoverflow.com/questions/4544588/difference-between-iphone-simulator-and-android-emulator

VIM macro super powers

So my affair with Vim continues - and I seem to have discovered Vim's macro superpowers. The obvious next step is to shout from the rooftops, hence this blog post. (There's hardly anything original here - it's just that I've had an 'aha' moment with macros and thought it might help other budding Vimmers out there.)


A little primer: macros let you record and repeat a set of commands. To start, press q<macro_letter>, where <macro_letter> is a lowercase letter a-z. This starts recording a macro (you'll see a 'recording' message at the bottom). Now enter the commands you want to repeat, and press q again to finish recording. Vim records all the keystrokes in the register you specified as the macro name. To execute the macro, position the cursor where you want it to run and hit @<macro_letter>; Vim will faithfully replay your commands.
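To make that concrete, here's a tiny made-up session that records a macro into register a which turns the current line into a bullet and moves down:

```
qa         start recording into register a
0          jump to the start of the line
i* <Esc>   type "* " and leave insert mode
j          move down to the next line
q          stop recording
@a         replay on the current line
5@a        replay on the next five lines
@@         repeat the last replayed macro
```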


It's a great time saver - especially for complex editing tasks where search/replace doesn't cut it. But if you're feeling a bit disappointed after coming this far (after all, I promised an 'aha' moment), then hang on.


Today's discovery was that you can quite easily edit macros you've recorded and save them back!!! THIS IS HUGE. Why so? Because when you record a macro, it's quite normal to jump around a bit or get a keystroke or two wrong. In fact, it's for this reason that I could never use Emacs's macro facility and failed to just 'get it'. In Vim, however, you can just open a scratch buffer and hit "<macro_letter>p - that's double quote, letter, p - to paste the contents of the register containing your macro. You see your macro's keystrokes, so go ahead and edit them, and then use "<macro_letter>y<movement> to yank your edits back into the register. You can now execute the macro with @<macro_letter> as if that's the way it was recorded in the first place.
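The whole round trip, assuming the macro lives in register a:

```
"ap       paste register a on a new line in a scratch buffer
          (fix the stray keystrokes right there in the buffer)
0"ay$     from the start of the line, yank the corrected text back into a
@a        run the repaired macro
```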


Another tip that follows from this: you can execute the contents of any register as if it were a macro with @<register>. Not sure when that could be helpful - but knowing that it's possible is good.

Tuesday, January 17, 2012

Yoohooo!! Successfully compiled Android from source

Finally!!!


So my project last weekend was to compile Android ICS from source. Given that the repo itself is in excess of 6 GB, just getting it down took the better part of Friday night and Saturday night. By the time I got down to running make, it was Sunday afternoon.


Needless to say, things didn't work too well. I'm running this in a 32-bit Ubuntu 10.04 VirtualBox VM with a piddly 1 GB of RAM. When make failed the first time, I realized that swap was a measly 300 MB. First things first: I increased the VM's memory to 2 GB (that's all I can spare) and swap to 2 GB.


The next round of compilation failed too - ran out of disk space - and that was Sunday night. Things stayed there until this evening, when I finally resized the disk in VirtualBox to 50 GB. Started the compilation again, and this time ran into linker errors when building WebCore. One more round of troubleshooting involved deleting the previously built static library and running make again. Surprisingly, this time make completed successfully - to the point where I wasn't sure if it had succeeded or just failed silently on something else.


The next step was to run the emulator to see if the build would really boot. Over at source.android.com, they oversimplify things when they say you just run emulator from the Android root folder. That didn't work for me - this time because I hadn't sourced the envsetup.sh file... this thread http://groups.google.com/group/android-platform/browse_thread/thread/91ff18e034acf951 helped in tracking that one down.
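For the record, the sequence that the docs compress into "just run emulator" expands to roughly this, run from the root of the source tree (the lunch target below is just an example):

```
source build/envsetup.sh   # defines lunch and the emulator wrapper
lunch full-eng             # pick a build target
emulator                   # now picks up the freshly built system images
```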


So finally, after all that trouble, I have my very own ICS build running!!!!


For now, it's onward ahoy to setting up Eclipse and starting on a fix I've been mulling over for some time now..


Signing off from cloud nine
R

Monday, January 16, 2012

Ubuntu, Console VIM - weird characters in insert mode

Now that I feel quite comfy with Vim, I needed to quickly edit a config file in my Ubuntu 10.10 VirtualBox machine over the weekend. Instead of GVim, I just opened the file in console Vim. As I hit i to get into insert mode, a bunch of weird character boxes were inserted. That was not good at all :( - just when you think you're comfortable with something, it does something totally weird. In any case, I was in too much of a hurry to bother and went about editing my file with GVim. Backspace was also wonky (same weird characters) - a familiar class of Linux pain, so I felt a little better. Why Linux must make proper backspace and delete handling such an ordeal, I fail to understand. In any case, it's something I've dealt with enough times to know that there'd be something on Google.


Later on, I tried to see what all the fuss was about. Googling around, I found :help :fixdel, and that seemed simple enough. Alas, when I tried it out, it didn't fix the issue at all. Also, I was getting weird characters just pressing i to get into insert mode - and the Vim wiki page didn't have anything about that. Neither did Google turn up anything that seemed related.


So early this morning, on a whim, I read up a little on Vim's terminal handling. I have the following in my .vimrc:
[sourcecode language="text"]
set t_Co=256
[/sourcecode]
Maybe it was the color escape codes coming through - so I checked :echo &term, which returned xterm under gnome-terminal and builtin_gui under GVim. So I've put the following bit in my .vimrc and it seems to have fixed things nicely:
[sourcecode language="text"]
if &term == "xterm"
    set term=xterm-256color
endif
[/sourcecode]

Wednesday, January 11, 2012

Android Annoyances

So yesterday and today, while driving back from work, I've had to join conference calls. The conference call provider we use at work has 10-digit passcodes. Usually, I keep a few bridge numbers with the DTMF codes saved in my contacts, so I can just tap the contact to dial the access number and have the participant passcode typed in for me. However, yesterday's and today's calls were on a different bridge, and I had to try to remember a 10-digit number after dialling the access code - all while driving. Needless to say, it took a few attempts, and I'm sure my attention wasn't where it should have been - i.e. on the road and on the traffic. Besides being thoroughly unsafe on Bangalore roads, it's just frustrating (thankfully, better sense prevailed today and I pulled over, dialled into the bridge, and then started driving again).


So the real issue is that the native parser that scans emails and calendar invites doesn't understand access codes and passcodes. That shouldn't be too hard to fix - but then I dug a bit deeper this evening. Granted the parser isn't smart enough; at the very minimum, if Android handled tel: links properly, it would just be a matter of educating the folks who set up meetings to use click-to-call links like <a href="tel:23423432233,,9230233#">. In fact, in Outlook, if you type TEL: followed by the number, it's automatically parsed as a tel: hyperlink. Turns out that this is a massive fail - if I click the link, Android shows me the dialler, but without the DTMF codes (basically, only the number up to the first comma). TOTAL FAIL.
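The link structure itself is simple enough - here's a sketch of building one. buildTelLink is a made-up helper and the numbers are examples; the commas are the de facto "pause ~2 seconds" convention that most dialers (and Android's own contacts app) understand:

```javascript
// Build a click-to-dial string that includes DTMF pauses: the access
// number, two pauses, then the passcode terminated with #.
// Note: inside an href, the trailing "#" should be escaped as %23.
function buildTelLink(accessNumber, passcode) {
    // strip spaces and dashes from the dialled number
    var number = accessNumber.replace(/[\s-]/g, "");
    return "tel:" + number + ",," + passcode + "#";
}

console.log(buildTelLink("234-234-32233", "9230233"));
// → tel:23423432233,,9230233#
```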


So, isn't this something that should have been brain-dead simple to do? I mean, this is 2012 after all, and I'm not asking for much. All I'm asking is that tel: URL parsing/handling be done in such a way that we can use our phones properly!!! Turns out there's been an open ticket, 4575, since Nov '09. And it's marked as an enhancement - which I find laughable, since it's a bug and definitely something that could be done quite easily (especially since a contact that has DTMF codes is dialled properly). In the two years that the ticket has languished, there have been 73 comments and not a single response from big GOOG :(


At the moment, it doesn't look like this is going to be fixed - so I started browsing through the Android source tree to see if I could find the implementation for tel: URLs. However, given the size of the Android source, that's like trying to find a needle in a haystack. Guess I'll have better luck seeing if the CyanogenMod folks can fix this in CM9.
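The needle-in-a-haystack search at least narrows down nicely with grep's include filters. Here I fake a tiny "source tree" so the command is runnable as-is; on a real AOSP checkout you'd run the same grep over the actual packages/ and frameworks/ folders (the paths are illustrative):

```shell
# set up a toy tree standing in for an AOSP checkout
mkdir -p /tmp/aosp/packages/apps/Contacts
cat > /tmp/aosp/packages/apps/Contacts/Dialer.java <<'EOF'
Uri uri = Uri.parse("tel:23423432233");
EOF

# -r recurse, -n print line numbers, --include restrict to Java sources
grep -rn --include='*.java' 'tel:' /tmp/aosp/packages
```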


While on the subject, the other thing that has confounded me is why in the world Android can't bundle a decent T9/smart dialer out of the box. I know there are tons of apps on the market that do this - but seriously, is smart dialing so out of this world that I need an app for it? As expected, there's a ticket, but no action.


I think it's safe to assume that Google isn't interested in fixing these issues, as there's no 'benefit' in doing so - though for the life of me, I can't imagine either of them being particularly hard. In any case, I'm eagerly awaiting a CM9 build for the Nexus One (right now, I'm running an ICS build from XDA).

Tuesday, January 10, 2012

Facebook publicize is driving me nuts!

I'm thoroughly frustrated with Wordpress.com's Facebook publicize feature. In theory, it's supposed to post to your Facebook wall whenever you publish a new post, and that way publicize your post among your friend circle... if it ever works. I've done all the resets, disconnects, and reconnects, and it just doesn't. Now, this could very well be a Facebook problem rather than a Wordpress.com problem - so while my rant might be misdirected, it's a rant anyway against a thoroughly frustrating experience. It's like a bucket of cold water on my enthusiasm to be more active on this blog.


You see, having posted rarely to this blog, I get a measly 70-80 page views per day (yeah - there's no need for the snide looks). So one part of persisting more actively with the blog has been to see if I can get to 100+ page views per day. Modest goals, I admit - and getting the link to a new post onto the FB wall is a big part of it. If only it worked as it says on the tin :( :(


Anyway, this post is a test in itself - I've just jumped through the said hoops, mumbled the magic incantations, and, in other words, followed every bit of direction available to make this work. If this post shows up on my FB wall, well and good. If not, then I'm done with trying to get this to work.

Monday, January 09, 2012

A syntax highlighter extension for Deck.js

So for the past few hours, I've been playing with Deck.js. I like the idea of a web-based presentation format rather than a blob like PowerPoint. At the same time, I'm a bit circumspect, given the state of the tools. At least for my use, there's really no burning need that PowerPoint can't solve (though I get the shivers every time I have to do a presentation), and all the web-based/HTML5 tools seem raw at the moment on some much-needed features (slide notes, slide printing, scaling, etc.).

Anyway, after a few minutes on Google and StackOverflow, I decided to give Deck.js a spin. Deck.js is really nice, and you should take the tour if you haven't done so. After trying the online presentation and the introduction, I downloaded the latest version to give it a more thorough spin. As usual, one of the first things I wanted was to embed code snippets, and I thought it would be nice to integrate Alex Gorbatchev's SyntaxHighlighter. It turned out to be really simple (I'm sure there are other syntax highlighter extensions for Deck.js out there) - but since I got something working pretty easily, here it is:

Create a file called deck.syntaxhighlighter.js with code below:

[sourcecode language="javascript"]
(function ($) {
    // pull in the SyntaxHighlighter stylesheets
    $("head").append(
        '<link href="http://alexgorbatchev.com/pub/sh/current/styles/shCore.css" rel="stylesheet" type="text/css" />'
    ).append(
        '<link href="http://alexgorbatchev.com/pub/sh/current/styles/shThemeDefault.css" rel="stylesheet" type="text/css" />'
    );

    function setupSyntaxHighlighterAutoloads() {
        console.log("calling SyntaxHighlighter");
        SyntaxHighlighter.autoloader(
            'applescript http://alexgorbatchev.com/pub/sh/current/scripts/shBrushAppleScript.js',
            'actionscript3 as3 http://alexgorbatchev.com/pub/sh/current/scripts/shBrushAS3.js',
            'bash shell http://alexgorbatchev.com/pub/sh/current/scripts/shBrushBash.js',
            'coldfusion cf http://alexgorbatchev.com/pub/sh/current/scripts/shBrushColdFusion.js',
            'cpp c http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCpp.js',
            'c# c-sharp csharp http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCSharp.js',
            'css http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCss.js',
            'delphi pascal pas http://alexgorbatchev.com/pub/sh/current/scripts/shBrushDelphi.js',
            'diff patch http://alexgorbatchev.com/pub/sh/current/scripts/shBrushDiff.js',
            'erl erlang http://alexgorbatchev.com/pub/sh/current/scripts/shBrushErlang.js',
            'groovy http://alexgorbatchev.com/pub/sh/current/scripts/shBrushGroovy.js',
            'java http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js',
            'jfx javafx http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJavaFX.js',
            'js jscript javascript http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJScript.js',
            'perl pl http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPerl.js',
            'php http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPhp.js',
            'text plain http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPlain.js',
            'py python http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPython.js',
            'ruby rails ror rb http://alexgorbatchev.com/pub/sh/current/scripts/shBrushRuby.js',
            'sass scss http://alexgorbatchev.com/pub/sh/current/scripts/shBrushSass.js',
            'scala http://alexgorbatchev.com/pub/sh/current/scripts/shBrushScala.js',
            'sql http://alexgorbatchev.com/pub/sh/current/scripts/shBrushSql.js',
            'vb vbnet http://alexgorbatchev.com/pub/sh/current/scripts/shBrushVb.js',
            'xml xhtml xslt html http://alexgorbatchev.com/pub/sh/current/scripts/shBrushXml.js'
        );
        SyntaxHighlighter.all();
    }

    // load the core, then the autoloader, then wire up the brushes
    $.getScript("http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js",
        function () {
            $.getScript("http://alexgorbatchev.com/pub/sh/current/scripts/shAutoloader.js", setupSyntaxHighlighterAutoloads);
        });
})(jQuery);
[/sourcecode]

Throw that in your deck.js/extensions folder. In the slide deck you want to use this extension with, include a script line before the closing body tag:

[sourcecode language="html"]
<script src="../extensions/deck.syntaxhighlighter.js"></script>
</body>
</html>
[/sourcecode]

And you're done. To include code snippets in your deck, just use either of the methods described on the SyntaxHighlighter page (a snippet using the script-tag method below):

[sourcecode language="html"]
<script type="syntaxhighlighter" class="brush: js">
    $(document).ready(function () {
        // this is the function body
    });
</script>
[/sourcecode]

That's all there is to it. The code above could do with some improvements (conditionally loading local copies if remote loading fails, etc.) - but this is just a quickie script, so feel free to modify it to your heart's content.
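That local-fallback improvement would amount to something like the sketch below. loadWithFallback and loadScript are hypothetical names - loadScript is assumed to be any loader that returns a Promise (e.g. a thin wrapper over $.getScript):

```javascript
// Try the CDN copy first; if that fails (offline, CDN down),
// fall back to a local copy shipped with the deck.
function loadWithFallback(loadScript, remoteUrl, localUrl) {
    return loadScript(remoteUrl).catch(function () {
        return loadScript(localUrl);
    });
}
```

You'd then wrap both shCore.js and shAutoloader.js loads in this, pointing the local URLs at copies in the extensions folder.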

Friday, January 06, 2012

Vim - unmap Esc!!

So I had the bright idea (by no means original, as I later figured out) that it'd be great to avoid the Esc key in Vim, since it's so far from the home row. The alternative to pressing Esc is Ctrl-[, which I still find hard even though I've mapped Caps Lock to Control. So, after some more googling around, I settled on mapping jk to Esc. It's been a few hours with this setup, and while it's been an absolute pain so far, I think it's a great way to avoid the jump to Esc. I can already feel my finger muscle memory relearning; my hand reaches instinctively for the Esc key much less now.

Here's my setup in case you want to try it out. Bung the following into your .vimrc or _vimrc, as the case may be:

[sourcecode language="text"]
inoremap <Esc> <Esc>:echoe "use jk"<CR>
inoremap jk <Esc>
[/sourcecode]
The first mapping makes Vim echo a reminder whenever I hit Esc anyway. It's not friendly, since it introduces a pause. However, the idea is to make Esc so painful that you will shy away from hitting it.

Upgraded to XBMC 11.0 Eden beta

So I upgraded the good ole' media center machine at home to XBMC 11.0 Beta. XBMC has been one of those software finds that is just marvellous - to the point where I can't imagine the telly at home without it. I've pretty much stopped watching regular TV/cable and watch almost exclusively on XBMC.
It's also been a great way to keep the aging laptop (circa 2006: Core 2 Duo 1.6 GHz, 2 GB RAM, and a piddling ATI Radeon X1600) in active duty.
Here's a hat-tip to all the XBMC guys and gals. And if you're not running a media center at home, you should give it a spin - XBMC makes your idiot box smart!

Saturday, December 31, 2011

Unit testing Apache CXF RESTful services - code available

So, the original post on this topic, written about two and a half years ago, had code snippets, but there have been comments and PMs asking for the complete code. Last week, as I resurrected this blog, I decided to get that code out on GitHub. Unfortunately, that was easier said than done; it's been quite some time and, frankly, I'd lost the code. I must have switched machines about three times in the interim and moved from SVN to GitHub for personal projects. Some hunting around ensued and, thankfully, I was able to find the actual code we wrote based on the sample I'd posted. So I cleaned that up, extracted just the unit testing example, and pushed it to GitHub - get it here. I haven't updated any of the dependencies - so this still runs against Spring 2.5 and CXF 2.2.3 (I think), and things might have changed quite a bit since then (I haven't used the JAX-RS bits of CXF much after that).


Running tests:
[sourcecode language="bash"]
mvn test
[/sourcecode]
Running the server:
[sourcecode language="bash"]
mvn jetty:run
[/sourcecode]

Syntax highlighting support in Wordpress.com with markdown

Now that I've cozied up to the Vim/VimRepress combo for posting to this blog, I'm finding a few issues with posting code. With straight Wordpress.com, I used to be able to mark up code with the [sourcecode][/sourcecode] tag and get syntax highlighting. With markdown, indenting a block of code with 4 spaces renders it inside <pre><code></code></pre> tags, but I don't know of a way to tell WP.com what language it is, or any way to use the [sourcecode][/sourcecode] plugin from markdown.


Some googling on the topic didn't turn up any wp.com-specific answers (some folks have posted about using other plugins et cetera with a self-hosted Wordpress - but nothing for wordpress.com).


Any ideas/pointers? Guess I should also post the question on the Unix stackexchange


Update on 1/1/2012


Using the sourcecode language="xxx" tag works - but you can't have any empty lines in your source.

Friday, December 30, 2011

Creating an interstitial login page with JqueryMobile

So, at work, we're building a mobile website using jQuery Mobile. The app has a bunch of publicly visible pages; other pages, however, require the user to be authenticated. We didn't want to force the user to log in on the first page. Instead, whenever a protected page is accessed and the user isn't logged into the app, we want to take him to the login page; once he's successfully authenticated, we take him on to the page he was navigating to. Doing this in a normal webapp is quite standard - but with jQuery Mobile, query params meddle with the hash navigation model. Also, the page the user tries to access could be a div in the same physical page or a different URL that needs to be fetched.


Solving this was interesting, as we were all just getting started with jQuery Mobile - so finding a good solution took a few tries. The solution takes a leaf out of jQuery Mobile's own approach. The outline:



  1. Any page div that's a protected resource is marked with a data-needs-auth="true" attribute.

  2. We hook into the document-level pagebeforechange event to see if the user is transitioning to a page that requires authentication. If so, we check whether the user's authenticated context is available.

  3. If that context isn't available:

    1. Cancel default event handling, since we're going to navigate the user to the login page instead.

    2. Save the toPage object, so that once the user is logged in, we know where to take him.

    3. Navigate to the login page.

  4. In the login page, call the server APIs to authenticate the user. Once the user is authenticated:

    1. If there's a saved returnAfterLogin object, take the user to that page.

    2. If not, take the user to a 'default' page - in our case, the app dashboard page.


Code below
[sourcecode language="javascript"]
var pageVars = {};
$(document).bind("pagebeforechange", function (event, data) {
    if (typeof data.toPage == 'object' && data.toPage.attr('data-needs-auth') == 'true') {
        if (!sessionStorage.getItem("TokenSSKey")) {
            if (!localStorage.getItem("TokenLSKey")) {
                // not logged in: remember where the user was headed,
                // cancel the transition, and show the login page
                pageVars.returnAfterLogin = data;
                event.preventDefault();
                $.mobile.changePage("#Login_Page", { changeHash: false });
            } else {
                // token remembered from a previous session
                sessionStorage.setItem('TokenSSKey', localStorage.getItem("TokenLSKey"));
            }
        }
    }
});
[/sourcecode]
The login event handler that handles the server response received once we've passed the username and password:


[sourcecode language="javascript"]
function SuccessLogin(data) {
    if (data != null && data.LoginResult != null) {
        if (data.LoginResult.Code === 0) {
            localStorage.setItem('UNameLSKey', data.LoginResult.User.AccountName);
            ErrorPanel.html("");
            // session token always; persist it only if 'remember me' is checked
            if ($("#RememberMeChkBx").is(":checked")) {
                localStorage.setItem('TokenLSKey', data.LoginResult.Token);
            }
            sessionStorage.setItem('TokenSSKey', data.LoginResult.Token);
            if (pageVars && pageVars.returnAfterLogin) {
                // back to the page the user originally asked for
                $.mobile.changePage(pageVars.returnAfterLogin.toPage);
            } else {
                $.mobile.changePage("#DashBoard_Page", { changeHash: false });
            }
        }
    }
}
[/sourcecode]

Thursday, December 29, 2011

Learning Vim

Is it worth it?

Definitely seems to be. I've looked at Vim in the past, tried it a couple of times or more, failed miserably (mostly within a day or two), and then wondered why nutheads use vi. That would usually be followed by going back to the comfort of Emacs. I think over the years I've spent more time customizing Emacs than actually getting work done with it - and somewhere, that felt wrong. In that light, the minimalistic Vim looked attractive and worth another try.

So what was different about this time?

So this time, things worked out a bit better. Rather than just firing up Vim, I spent some time reading through others' experiences of picking it up. And the first thing I did right was to disable the arrow keys in normal mode (I still have them in insert mode):

    " disable arrow keys
    noremap   <Up>     <NOP>
    noremap   <Down>   <NOP>
    noremap   <Left>   <NOP>
    noremap   <Right>  <NOP>
Once you have that in place, you're forced to use h/j/k/l. And while the h/j/k/l muscle memory builds up within a week, the nice thing that really happens is that you stop using h/j/k/l much - instead, you move on to more efficient movement commands. There are a ton of resources/cheatsheets on the web, but the approach I followed was to figure out some small keystroke when I needed it. What that meant was that I could get work done - but at the same time get more efficient gradually.

Customizations

Vim out of the box is pretty badly configured - and that's part of the reason people seem to shy away from it. In fact, in all my previous attempts at Vim, I never came close to customizing my .vim. There are folks who have curated Vim dotfiles on GitHub and elsewhere, but my advice is to stay away from them. You should know what goes into your .vim and be in control of it, rather than getting a bunch of things you don't understand. Just so you know: looking at the GitHub history of my vimfiles repo, the initial commit was 3 months ago, but after that, all the commits have come in only the last 4 weeks. In other words, while I put in a vim file initially, I didn't do much with it at first, since I was still getting the hang of the basics. Once you become comfortable with the basics, you move to customizing your Vim environment more and more.

Parting words

To summarize, Vim definitely seems nice once you invest in it. It's easy to drop off in the initial stage and not go any further - and I believe this is what happens to the vast majority of folks who try it out. However, once you build that initial comfort level, it feels light, fast, and easy. Start easy, persist, and customize bit by bit - you'll find yourself going from struggling with Vim, to feeling comfortable, to customizing your environment for an even better experience. I've definitely been more productive with Vim than I ever felt I was with Emacs - and these posts to my blog from Vim are part of that. Besides, I've used Vim effectively on decent-sized JS code, HTML markup, etc., and felt the speed of editing in spite of still being a noob in Vim terms.

A new look

Changed the theme of this blog and moved the widgets around a bit.
Finally, I can bear to look at this blog :) - hope that holds good for you too.


Wednesday, December 28, 2011

Compiling VIM

Running Ubuntu 10.10 here, and the Ubuntu repos have only Vim 7.2. I'm sure there's a PPA out there with 7.3, but I thought compiling Vim from source would be a good exercise - plus I get to compile it with the options I'd like rather than relying on someone else's build.


Here are the options that I enabled:
[sourcecode language="text"]
CONF_OPT_PERL = --enable-perlinterp=dynamic
CONF_OPT_PYTHON = --enable-pythoninterp
CONF_OPT_RUBY = --enable-rubyinterp
CONF_OPT_GUI = --enable-gui=gtk2
CONF_OPT_FEAT = --with-features=huge
BINDIR = /usr/bin
DATADIR = /usr/share
[/sourcecode]
Here are the other dependencies I had to install:
[sourcecode language="bash"]
sudo apt-get install libperl-dev ruby-dev python-dev libgtk2.0-dev
[/sourcecode]
Once you have the deps installed, just run
[sourcecode language="bash"]
make
sudo checkinstall
[/sourcecode]

Blogging with Vim

So now I'm in Vim land, and this is the first time I've gotten far enough to feel a bit comfy. I decided to dust off my blog and start at it again - what better to do it in than Vim?


So - TA-DA - here's the first post, courtesy of Vim on Ubuntu. As usual, though, it was rougher than it's supposed to be. In any case, I'll have forgotten how I got this far by next time, so the next few posts will record how to get Vim to post to WP.com blogs.


But before that - the first thing to do is to get the VimRepress plugin. Better if you have pathogen installed, in which case you can do:
[sourcecode language="bash"]
cd .vim
git submodule add https://github.com/raghur/VimRepress bundle/VimRepress.git
[/sourcecode]
That's my fork on GitHub of https://github.com/connermcd/VimRepress.git, which fixes a few things:



  • Makes VimRepress work properly through a proxy

  • Changes the attachment filename to a '.odt' since Wordpress.com doesn't allow a text file attachment.


I still don't have a clue whether doing this will break the plugin - but nevertheless, the basic case of posting to my blog works, and at this stage that seems good enough for me.


PS: as you can see from this post, I've not yet got the hang of markdown syntax :)


Dec 29th - PPS: a couple of posts later, one more tip for Wordpress.com. WP.com emits <br/> for hard breaks in the markdown text. Obviously, this doesn't leave the post looking very good. I have the following in my .vimrc to get around it:


[sourcecode language="text"]
augroup Markdown
    autocmd FileType markdown set wrap linebreak
augroup END
[/sourcecode]


PPS: you will also need the Python markdown package installed once you have VimRepress running:
[sourcecode language="bash"]
easy_install markdown
[/sourcecode]

Thursday, August 18, 2011

Troubleshooting nandroid backup/restore

I've been having all sorts of weird problems with nandroid backups/restores. Essentially, the symptoms are these: I'd take a nandroid backup and restore it successfully (Amon_RA/CWM would report success) - however, the phone would either get stuck at boot or, if it booted successfully, would have tons of FCs and/or data loss. In most cases, I'd dread seeing the green Android on boot-up asking me to log in to my Google account :(

Essentially, my nandroids were useless... to the extent that I had only one nandroid backup that was known to work - and I was keeping 4 backups of that lest I lose it somehow.

So today, I thought I'd dig deeper and see where the problem was:

  1. It was unlikely to be a problem with CWM/Amon RA - I had an old backup that worked. Since my backups were created and restored successfully with MD5 verification, it seemed that something was wrong in the backup image itself.

  2. Still, that seemed inexplicable, since creating images just doesn't seem that flaky. A couple of times after reboot, I had got a "UID has changed - it is recommended to wipe data" (or similar) message - so I thought something was wrong with permissions after the restore. In any case, I tried the CWM 'fix permissions' menu item - but didn't get anywhere with that. At this point, I was desperate enough to get adb out!!

  3. Now in full-blown investigation mode, I didn't care if I couldn't restore my data - I just wanted to figure this thing out. So I restored a "non working" backup and did an adb logcat while the phone booted... turns out I was seeing tons of messages like so:


[sourcecode language="text"]
I/PackageManager(  205): /system/app/ContactsProvider.apk changed; collecting certs
I/PackageManager(  205): New shared user android.uid.shared: id=10006
W/PackageManager(  205): System package com.android.providers.contacts has changed from uid: 10003 to 10006; old data erased
[/sourcecode]


  4. So that explained what was going wrong... I thought it would be an easy fix to run 'fix permissions' from the CWM advanced menu. Restored again, went over to the advanced menu, fixed permissions and rebooted. What I got for all my work was a big nought - same problems, no resolution. At this point I was stumped - but sheer bull-headedness forced me to look at the log again... and lo, it says 'data erased'. That explains why fixing permissions after boot won't work - the data is erased during boot itself!


At this point, the key to the problem was really understanding how and where android's UIDs are generated,  stored  and regenerated. Headed over to Cyanogen wiki and read up the details on fix permissions which explained the packages.xml file. Somehow the packages.xml was borked in the nandroid (every time) and that was causing it to be regenerated.

Armed with that, got a germ of a solution in place, which is roughly

  1. Restore nandroid with borked packages.xml

  2. Let the system boot. Will lose data but a new packages.xml will be regenerated

  3. Reboot into recovery and adb pull /data/system/packages.xml out.

  4. Do an advanced restore and again restore the data only.

  5. mount /data and adb pull /data/system/packages.xml to compare differences. Found that packages.xml was indeed corrupt.

  6. adb push packages.xml (this is the generated one pulled in step 3) to /data/system. Now you have all the old data but the packages.xml is newly generated one and known to be valid. Obviously UIDs will mismatch - but fix permissions has a valid file to work on.

  7. Still in recovery, run fix permissions. It should fix permissions properly.

  8. Reboot
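In adb terms, the pull/push steps above boil down to something like the sketch below. It's a dry-run by default and assumes adb is on your PATH; the nandroid restores and reboots in between still happen in recovery, of course.

```python
import subprocess

def rescue_packages_xml(dry_run=True):
    """Sketch of the packages.xml rescue steps above (dry run by default)."""
    steps = [
        # step 3: after the first (data-losing) boot, save the freshly
        # regenerated, known-valid packages.xml
        ["adb", "pull", "/data/system/packages.xml", "packages.xml.good"],
        # step 5: after the data-only advanced restore, pull the restored
        # copy to compare - this is the corrupt one
        ["adb", "pull", "/data/system/packages.xml", "packages.xml.restored"],
        # step 6: push the known-valid file back so 'fix permissions'
        # has a sane file to work on
        ["adb", "push", "packages.xml.good", "/data/system/packages.xml"],
    ]
    for cmd in steps:
        print(" ".join(cmd))
        if not dry_run:
            subprocess.check_call(cmd)
    return steps
```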


It worked like a charm!!!! I'm still a little worried as I don't know what else is borked in my nandroid data.img. And I don't know why that image has the exact same problem every time - I've tried 3 different versions of CWM recovery, Amon RA 2.2.1, and ensured that my SD card was clean (ran chkdsk on it). In any case, since I'm able to restore, I'll just use the phone for the next few days and hopefully run into any wonkiness soon.

And going forward, I think I'll take my own backup copy of the /data/system/packages.xml file along with each nandroid.

Tuesday, August 02, 2011

Note To Self: Fixing broken market links on Android after wipe/ROM upgrade

  1. Force stop Market and clear its data.

  2. Launch Market again - it will ask you to accept the terms. Do so.

  3. That should force it to rebuild its database and you should see all your apps linked to Market again.

Thursday, July 28, 2011

Note to self: ROM install/upgrade


  1. nandroid backup - amon ra recovery

  2. Reboot recovery, install zip

  3. Install Link2SD-preinstall.zip (only on cyanogen based ROMs)

  4. Boot

  5. Play around...make sure things work.

  6. Install other niceties/Troubleshoot


    1. Link2SD - database error. Just uninstall and reinstall.

    2. /etc/gps.conf - change to sg.pool.ntp.org


  7. Charge to 100%

  8. Reboot into recovery

  9. wipe battery stats

  10. reboot

  11. Run down the battery

  12. Recharge to 100%


 

Monday, July 19, 2010

A Python mystery

Back after a long time…saw something strange today and think it deserves a post. I was cranking through Problem 47 on Project Euler. As I was optimizing the solution, the optimization actually increased run time – and I’m at a loss to explain it. So here goes:

[sourcecode language="python"]
def problem_47(maxlen = 4):
    found = False
    i = 2*3*5*7 + 1
    while not found:
        # facs = [len([j for j in uniq(prime_fac(i+k))]) for k in xrange(0,maxlen)]
        d = 4
        for t in range(i+3, i-1, -1):
            k = len(list(uniq(prime_fac(t))))
            if k < 4:
                i = t + 1
                break
            else:
                d -= 1
        if d == 0:
            found = True
            print list(xrange(i, i + maxlen))
        # if i % 1000 == 0:
        #     print i
[/sourcecode]

The run time is about 1m2s.
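The snippet leans on two helpers that aren't shown - uniq and prime_fac. If you want to run it yourself, minimal stand-ins (a sketch, not necessarily the exact versions I used) would be:

```python
def prime_fac(n):
    """Yield the prime factors of n, with repetition (simple trial division)."""
    d = 2
    while d * d <= n:
        while n % d == 0:
            yield d
            n //= d
        d += 1
    if n > 1:
        yield n

def uniq(iterable):
    """Yield each distinct element of iterable once, preserving order."""
    seen = set()
    for x in iterable:
        if x not in seen:
            seen.add(x)
            yield x
```

Trial division is slow, but it's enough to reproduce the timing behavior below.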

Now, if I try to optimize it so that I break (line 10) when I find the first number from the end that has fewer than 4 prime factors, the run time should be lower (or at least the same). Right?

Turns out: wrong... now the thing takes more than 3m to run. What is going wrong?

If any of you have a clue, drop me a comment.

Friday, May 28, 2010

Fun with python’s decorators

Was in need of a utility function that can retry an arbitrary function a few times before giving up - essentially something like Gmail's or Google Reader's behavior when there's no network connection.

Thought it would be a few minutes' job to cook up a decorator utility in Python. Boy, was I wrong! I mean, the basic use case is definitely trivially easy in Python - however, once you want something more useful, something that resembles what you'd actually use in production, the complexity goes over the top!

Anyway, I’m figuring out all sorts of fun things about decorators – and all of it the hard way! OTOH, its a  lot of fun to write small test code to test & validate assumptions!

Make no mistake - I'm still a python fanboy :) - just going through some pains with decorators right now. I'll follow this up with a longer, detailed post with whatever useful insights I've gained by then. Thanks for stopping by!
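For the curious, the trivially-easy basic case looks roughly like this - a sketch with simplified error handling and illustrative names, not the production version:

```python
import functools
import time

def retry(times=3, delay=1.0):
    """Retry the wrapped function up to `times` times, sleeping `delay`
    seconds between attempts, before re-raising the last error."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as e:
                    last_error = e
                    if attempt < times - 1:
                        time.sleep(delay)
            raise last_error
        return wrapper
    return decorator

# a flaky function that fails twice, then succeeds
@retry(times=3, delay=0)
def flaky():
    flaky.calls += 1
    if flaky.calls < 3:
        raise IOError("no network")
    return "ok"
flaky.calls = 0
```

The complexity I'm grumbling about shows up once you want per-exception-type policies, backoff, and decorators that also work cleanly on methods.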

Saturday, March 06, 2010

Back after a long time…

Obviously, I’m not writing enough out here… part of the reason being that even though WordPress’s web editor is great, I really like not having to type gobs of text into a textarea.

So eventually, I looked around and found Windows Live Writer. It’s going out on its customary spin :).

So what’s been cooking? Actually a bunch of things over the last several months:

Stuff – on which I mean to put up individual posts

  1. Had a fun exercise benchmarking lighttpd with python wsgi
  2. Been doing some stuff on mysql cluster – mostly around seeing how it compares with the mysql master-master replication setup I had in place.
  3. Dipping my toes into Amazon EC2 finally – though Linode or Rackspace is way easier if you want to just spin up a VM. Amazon’s EC2 does have some interesting stuff (reliability of backups, CDN etc). However, it comes at the cost of a model that’s initially hard to understand.
  4. Resin server – heard good things about it, had to see if it would fit some stuff at work. Disappointed that the free version is really hamstrung.
  5. Apache Wicket: I’ve always hated web UI, and somehow the action-oriented frameworks (Struts and their ilk) never appealed from a coupling/cohesion standpoint. In that respect, ASP.net seemed to get a lot of things right by going the component-oriented way – however, it seems fatally flawed with stuff like viewstate and postback and so on. On the Java end, I tried Tapestry, but it comes with too much baggage for my taste. I’d been reading about Wicket for some time and decided to take the plunge, and was pleasantly surprised doing my contrived example:
    1. Took much less to get off the ground compared to Tapestry
    2. Mentally, a lot easier to understand
    3. Managed to realize my goal of exploiting OO techniques to DWIM – even on a simple contrived example.

Books:

  1. Steve Souders’ excellent “High Performance Websites” book: if you’re doing anything near a high performance website, then grab this book today!
  2. Wicket In Action
  3. Agile Principles, Patterns and Practices by Robert C Martin: read about the SOLID principles first and then buy this. This is a book to own if you aspire to become a good Agile/OO practitioner. Don’t worry about the C# in the title – it applies universally.

Saturday, January 30, 2010

A new tool for the toolbox!

Firstly - my VM setup:

I'm running VirtualBox with Xubuntu 9.10 on a Win7 host - and it's pretty. It's on an office standard-issue Dell D531 - meaning an AMD Turion X2 TL-60 and 2GB of RAM.

Now, the Turion is supposed to have hardware virtualization (AMD-V); however, the moment hardware virtualization was enabled in VirtualBox and I tried starting the VM, the machine would hard reboot!!!

After searching high and low, it turns out it's an issue with Dell BIOSes, and they didn't have any updates. Here's a page that tracks the issue. Imagine my happiness when, a couple of days ago, I found that Dell had released an unofficial BIOS update (T12). Well, it's gone in, and things are running swimmingly well - my VM now has 2 procs, is stable, and I hardly feel I'm in a VM :). In fact, this post is coming from the VM - Firefox with 12 tabs, a few terminals and emacs running on 600 MB of RAM.

Now let me come to the new tool I was talking about

I like to run the VM full screen - feels best that way. After trying enough and more virtual desktop tools, I have finally settled on VirtuaWin - beats the crap out of the other tools: systray integration is great, it has window rules and so on. Over the past couple of weeks, it's come close to the ideal tool - does the job well and you hardly know it's there :-)

Friday, August 28, 2009

Hudson for CI - Tips, Tricks and insights

Just started using Hudson recently and I'm wowed! It's head and shoulders above CruiseControl, and the things I like a lot are:

  1. Snappy web based config - felt great that I could set up a CI build with essentially the repo path alone

  2. Plugin system!

  3. Deep maven2 integration (though read on below - this isn't always what works)

  4. Trending data OOB - essentially giving you nice charts about how your build is doing over time


Now that I've said all the very nice things about it, here are a few things that were hard to figure out or weren't immediately apparent. If your maven build aggregates modules, you'll find the experience a bit challenging:

  1. The generated site doesn't work: basically, the link is to one of the modules' site instead of to the parent project. This apparently is a known issue, and the solution on the Hudson users list is to run the site:deploy goal and put a link in the project description pointing to that URL.

  2. Code coverage: none of the coverage tools (EMMA, clover etc) support code coverage over a multi module build. Since coverage is very important to me, I eventually resorted to having separate build jobs instead of using the default multi module support. Here's how my svn structure looks
    [sourcecode language="sh"]
    /trunk/basebuild #contains the parent pom
    /trunk/project1 # pom refers to ../basebuild/pom.xml
    /trunk/project2 # ditto here
    [/sourcecode]

    With the directory structure above, there are build jobs for project1 and project2. Each build job checks out both the project folder (/trunk/project1) and the basebuild folder so that the POM references work.
    One undesirable effect of this set up is that if project 2 depends on project 1, then project 1 build will have to install the artifact to the local repo for the project2 build to work.

  3. Findbugs plugin - running maven builds with findbugs configured died with an Out of Memory (OOM) error and failed the build. I tried setting MAVEN_OPTS to -Xmx512M in a bunch of places and nothing worked. Eventually, it turned out that the right place to specify it is in the build section of the Hudson 'Configure' job page!

  4. Violations plugin - this is a great little Hudson plugin. However, I couldn't get it to work with the inherited POM setup above. Eventually I resorted to using the Findbugs and PMD Hudson plugins individually.


I should mention that I'm running Hudson 1.321 with the latest plugins. If you have any tips to share on running Hudson - please do drop a link in the comments. Overall, a great big 'thank you' to the Hudson folks!

Wednesday, August 26, 2009

Recipe: Unit testing Apache CXF RESTful services

Recently, I decided to use Apache CXF to expose a service with a RESTful API. Part of the reason for choosing REST was that the client is going to be a mobile client. Though mobile device stacks have come a long way and provide SOAP clients these days, it still seems prudent not to depend on a whole slew of technologies where plain ol' HTTP and JSON might do the trick.
As I started exploring CXF, I liked the JAX-RS implementation and decided to go ahead with it - however, I almost immediately hit a snag when I went on to write test cases. Apache CXF documentation is not quite there, and things do require some investigation - at least initially, till you get the hang of the framework. Since it took some time to figure out the solution, it makes sense to share it. Here's how to go about writing unit tests.

Firstly, the service and the service implementation:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/chat")
@Produces("application/json")
public interface ChatWebService {

    @POST
    @Path("connect")
    public Response connect(@FormParam("user") String username, @FormParam("pass") String password);
}
[/sourcecode]

The service implementation:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

@Produces("application/json")
public class ChatWebServiceImpl implements ChatWebService {

    public Response connect(String username, String password) {
        if (username == null || "".equals(username) ||
                password == null || "".equals(password)) {
            return Response.status(Status.BAD_REQUEST).build();
        }
        String[] response = {username, password};
        return Response.ok(response).build();
    }
}
[/sourcecode]

The corresponding spring context xml (applicationContext.xml) is:

[sourcecode language="xml"]
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
    xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
    xmlns:jaxrs="http://cxf.apache.org/jaxrs" xmlns:cxf="http://cxf.apache.org/core"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
        http://www.springframework.org/schema/tx
        http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
        http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd
        http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd">

    <!-- bean definitions reconstructed from the description in the text;
         bean ids and the provider class name are representative -->

    <import resource="classpath:META-INF/cxf/cxf.xml" />

    <!-- turn on request/response logging -->
    <cxf:bus>
        <cxf:features>
            <cxf:logging />
        </cxf:features>
    </cxf:bus>

    <jaxrs:server id="chatService" address="${service.address}">
        <jaxrs:serviceBeans>
            <bean class="com.aditi.blackberry.web.ChatWebServiceImpl" />
        </jaxrs:serviceBeans>
        <jaxrs:providers>
            <!-- flexjson-based MessageBodyWriter -->
            <bean class="com.aditi.blackberry.web.FlexJsonProvider" />
        </jaxrs:providers>
    </jaxrs:server>
</beans>
[/sourcecode]

A few things to note here: logging is turned on using interceptors, and the jaxrs server is defined. I'm also using flexjson to convert arbitrary objects to JSON - so a MessageBodyWriter bean is injected into the jaxrs server node. The most important thing is that we haven't included either the cxf-servlet.xml config or the cxf-extension-http-jetty.xml. Essentially, what we want is to include cxf-servlet.xml for the actual build, and for the test runs, run the service on the bundled Jetty server.

So, go ahead and define an applicationContext-web.xml:

[sourcecode language="xml"]
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

    <!-- bean definitions reconstructed from the description in the text;
         the property file name is representative -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="location" value="classpath:web.properties" />
    </bean>

    <import resource="classpath:applicationContext.xml" />
    <import resource="classpath:META-INF/cxf/cxf-servlet.xml" />
</beans>
[/sourcecode]

This is the context xml that we'll provide to the ContextLoaderListener in our web.xml.

For the test cases, define applicationContext-test.xml - this is the context xml which we'll load from the test cases.

[sourcecode language="xml"]
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jaxrs="http://cxf.apache.org/jaxrs"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd">

    <!-- bean definitions reconstructed from the description in the text;
         the property file name is representative -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="location" value="classpath:test.properties" />
    </bean>

    <import resource="classpath:applicationContext.xml" />
    <import resource="classpath:META-INF/cxf/cxf-extension-http-jetty.xml" />

    <jaxrs:client id="chatclient" address="${service.address}"
        serviceClass="com.aditi.blackberry.web.ChatWebService" />
</beans>
[/sourcecode]

As you see, we also define a jaxrs:client for the test context xml.

There's one final issue to address - we would ideally like the URLs used to access the service to be the same everywhere. The spring jaxrs:server binding takes an address attribute which defines the URL the service is hosted on. For deployment onto an external container, this takes the form of "/myservice" - a path element relative to the context location. For the internal Jetty-hosted service, it takes the full http path (http://localhost:port/my/path/to/service). The easiest way is to set this via a property reference in spring and have applicationContext-web.xml and applicationContext-test.xml load different property files, as shown above.
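For instance, the two property files could look like this (file names, paths and the port here are illustrative):

```text
# web.properties - context-relative path for deployment in an external container
service.address=/api

# test.properties - full URL for the embedded Jetty instance used by the tests
service.address=http://localhost:9000/api
```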

For completeness, here's the web.xml:

[sourcecode language="xml"]
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

    <display-name>CXF REST Example</display-name>
    <description>CXF REST Example</description>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:/applicationContext-web.xml</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>CXFServlet</servlet-name>
        <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>CXFServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>
[/sourcecode]

And finally, here's a junit test case:

base class:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:/applicationContext-test.xml" })
public abstract class AbstractApiTest {

    @Autowired
    @Qualifier("chatclient")
    protected ChatWebService proxy;
}
[/sourcecode]

A test case for the connect API:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

import org.junit.Assert;
import org.junit.Test;

public class ConnectApiTest extends AbstractApiTest {
    @Test
    public void testConnect() {
        Response resp = proxy.connect("raghu", "password");
        Assert.assertTrue(resp.getStatus() == 200);
        System.out.println(resp.getEntity().toString());
    }
}
[/sourcecode]

Thursday, January 01, 2009

PIL vs Imagemagick

Decided that I want to timestamp my photo collection with the date from the exif data. Many digicams have an option to do this - unfortunately, my Panasonic DMC-LZ8 doesn't seem to do this. I knew imagemagick would do the trick, but thought it would be a good time to play around with PIL and python.

Here's my PIL effort - functional, but one that came with quite some amount of googling and trying to make sense of the PIL documentation which is inadequate at best.

[sourcecode language="python"]

from PIL import Image
from PIL import ImageFont, ImageDraw
from PIL.ExifTags import TAGS
from os.path import basename, dirname,join
import logging
import sys
import datetime
import time

# Important: I set out to write the image annotation in PIL - there's one serious drawback though. When saving
# the image, the exif data isn't preserved.

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')
logger = logging.getLogger()
logger.level = logging.DEBUG

def readExif(image):
    info = image._getexif()
    ret ={}
    for tag,value in info.items():
        ret[TAGS.get(tag,tag)] = value
    dt = datetime.datetime (*time.strptime (ret['DateTime'],"%Y:%m:%d %H:%M:%S")[0:6])
    ret['DateTime'] = dt
    return ret

def annotateImage (file):
    i = Image.open(file)
    font = ImageFont.truetype("/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf", 36)
    exif = readExif(i)
    draw = ImageDraw.Draw(i)
    width, height = i.size
    draw.text((width * 0.7, height - 100),exif['DateTime'].strftime("%a %d-%b-%Y  %l:%M %p"), font=font, fill='orange')
    outfile = join(dirname(file), "Ann_" + basename(file))
    i.save (outfile, quality=98)
    logger.debug (outfile + " saved")

if __name__== "__main__":
    logger.debug ("getting exif for " + sys.argv[1])
    for file in sys.argv[1:]:
        logger.debug ("Annotating " + file)
        annotateImage(file)

[/sourcecode]

Unfortunately, PIL has a fatal flaw - you can annotate the image and save it, but the saved image doesn't retain the original image's exif metadata. I also tried the exiv2 library, but couldn't figure out a way to load the image, annotate it and then copy over the metadata. Googling around didn't turn up any interesting solutions - so if any of you have any ideas, please share.
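For what it's worth, the Pillow fork of PIL does let you hand the raw EXIF blob back to save() - a sketch (Pillow-only; the drawing step is elided, and classic PIL has no equivalent of the exif= argument):

```python
from PIL import Image  # Pillow, the maintained PIL fork

def annotate_keeping_exif(src, dest):
    """Save a copy of src to dest, carrying over the raw EXIF blob."""
    im = Image.open(src)
    exif_blob = im.info.get("exif", b"")  # raw EXIF bytes, if any
    # ... draw the timestamp here, as in annotateImage() above ...
    im.save(dest, quality=98, exif=exif_blob)
```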

Meanwhile, as I was getting tired of coaxing PIL to do what I want, I just wrote a little bash script to do the same in imagemagick. It's as painless as it can be, comes with excellent documentation, hardly any gotchas, a world of options in case you feel creative - and the job gets done in 10 mins. Here's the bash script:

[sourcecode language="bash"]
#! /bin/bash
# script adds a black 18px bottom border to the pic with the Exif datetime tag
# no safety checks :). Original pics are left untouched.
while [ "x$*" != "x" ]
do
    file=$1;
    shift;
    outfile="$(dirname "$file")/Ann_$(basename "$file")"
    echo $outfile
    echo $file
    date=$(identify -verbose "$file" | grep 'DateTime:' | sed 's/ Exif:DateTime: //;s/:/-/;s/:/-/')
    date="$(date -d "$date" +"%a %d-%b-%Y %l:%M %p")"
    convert "$file" -size 1x18 xc:Black -fill White -background Black -append -gravity Southeast -draw "text 0,0 '$date'" "$outfile"
done
[/sourcecode]

Overall, the experience left me disappointed and dissatisfied with PIL.

Tuesday, October 07, 2008

andLinux with Hardy Heron

andLinux is built on top of coLinux (Cooperative Linux) and basically runs side by side with Windows. andLinux packages the whole thing better (coLinux bundled with Xming and a nice systray app that lets you launch Linux apps right in Windows).

Here are the details on getting off the ground - and the reason I have this post is that though andLinux comes with an installer application, it still needs some fiddling under the hood to make it work. This post is just to make sure I can go through the process again when the time comes:

  1. When installing andlinux, choose the COFS option for making your hard drive visible in Linux

  2. Install with the command line option to launch andLinux (do not install it as a service just yet)

  3. Post installation, tweak andLinux's network setup - set up a couple of virtual TAP adapters. You will have to tweak things both on the Linux side and on the Windows side. Basically, you create 2 TAP adapters - one is a loopback and another for sharing your LAN connection. Your wireless network is shared via Slirp (doesn't need a TAP adapter setup).

  4. Keep in mind a gotcha - Slirp won't allow you to ping - so if you have only Slirp working, try a wget www.google.com to check if you have network connectivity.

  5. Start the andLinux server (if it isn't already running) and make sure that your C drive is shared - at the bash prompt you should be able to do ls /mnt/windows

  6. Do an apt-get update to refresh your package list, then run an upgrade. As of this writing, the only prebuilt image on andlinux.org is Gutsy.

  7. Do an apt-get install update-manager-core

  8. Run do-release-upgrade - you should see apt running and upgrading your system to Hardy.

Monday, June 23, 2008

Compact Ubuntu

I've always hated the fact that on Ubuntu with the default themes, there's far too much space wasted. The buttons are too tall, and the treeview wastes so much space that if you're in Eclipse or some other IDE, you see precious few items on the screen.

I've been trying to tweak it to no end - even looking to see if there are any ~/.gtkrc-2.0 tweaks. Found a few links such as this Making Eclipse look good on Linux - Max's blog - however, didn't really satisfy my need.

And so it stayed until today when I came across Clearlooks Compact Gnome Theme.

I love it - one more for my list of must-haves!

Wednesday, June 18, 2008

Enjoy symlinks and hardlinks on NTFS

Can't believe I didn't come across this before - if you've gotten used to taming your HDD by creating links to folders and have been annoyed by the lack of symlinks and hardlinks on NTFS, then despair no more. I've been using Mark Russinovich's (of Sysinternals fame) tool, junction.exe, all this while, and though it works great, I've always wanted something that would integrate with Explorer too. For an in-depth discussion, read http://shell-shocked.org/article.php?id=284. Anyway, I'm extremely happy with NTFS Link - this will surely go into my list of "Must have tools - install immediately on a new machine" :-)

Upgrade blues - upgrading to Firefox 3 final from Firefox RC 3

As evident from other posts here, I have been keenly waiting for the FF 3 final. Imagine my surprise when "Check updates" didn't find an upgrade! (I'm on FF3 RC3.)

Anyway, so off I went to Mozilla.org and downloaded a copy of the final - and did my bit towards FF download day. Happily installed it - all defaults as usual. The installer told me that it was installing into the same location as my current installation (c:\program files\mozilla firefox 3 beta 1 - that's where my FF3 installs have been going, all the way from b1 to b5 and then from rc1 to rc3 - so no surprise).

Well, installation completed successfully, and I started FF 3 - but my title bar still says Build 2008052906 - even the file version has the same build ID.

Something's up - don't know what yet - but has anyone else had a similar experience?

Monday, June 16, 2008

Desultory Monday...

This entry was posted using It's All Text on Firefox 3.0 RC2 on Ubuntu Hardy Heron, with emacs 23 snapshot as the editor. I love it :-)

Well, It's All Text is great if you hate typing into webforms with textboxes that make editing such a big pain in the butt.

It's great to see that It's All Text has been updated to work with FF 3.0 now. The fun will be to see if it works on Windows with cygwin emacs as the editor. Had problems the last time I tried that - but that was some time ago now.

Today's been a desultory Monday. Spent some time getting emacs snapshot with pretty fonts on my Hardy. It's beautiful.

The next thing has been mostly scratching my head over hadoop. What I'd like to do is parse an access log and generate multiple outputs - i.e. a single input of gobs of web access logs and multiple outputs - with, say, requests by country, popular pages, % of client browsers and so on.

  1. parse web log

  2. pull out remote ips and use geo ips to find the originating country

  3. pull out user agent field and figure out browser distribution.

  4. Filter the requested resource and pull out only pages - find pages by popularity


Now there seem to be quite a number of ways of doing this -

  • Code the whole thing in Java - and this is where I'm getting into analysis paralysis.
    Look at ways to generate multiple outputs from MapRed and then use Job and JobControl to setup the pipeline.

  • Use Pig - Pig examples on the Pig overview page seem to suggest that this should be trivial with Pig.

  • Use Cascading - seems to be doing the same thing - will need to do this in JRuby or Groovy though.


Will post an update once I get through the java route

Thursday, June 12, 2008

VPN into Windows VPN Server from Ubuntu *Hardy* Intrepid

** Update 2008/11/17 **: NetworkManager is broken in Intrepid. To get it working, I had to install NetworkManager from the PPA as given here - http://www.ubuntu-forums.com/showpost.php?s=e0d93c09b8c340976477456593ac4cf7&p=6094870&postcount=5

Ok - this was easy - and while there's some resources on google, I had to figure out a few itty bitty things for my work VPN setup.

install

  • network-manager-pptp

  • pptp-linux


Restart network manager with

killall nm-applet
sudo /etc/init.d/dbus restart
nm-applet --sm-disable &


Configure VPN settings

Click on the network manager applet and click on VPN connections

  1. Create a new VPN connection

  2. Ensure that you select Refuse CHAP  in the authentication tab.

  3. In the routing tab, you can give netmasks that need to go through VPN - for my work network, I have: 10.10.5.0/24 172.16.106.0/24


That's it. Now click on the Network applet, and connect to your VPN. In the authentication dialog, use <domain>\username and your windows domain password.

Thursday, May 01, 2008

Drip...

Drip..


IMG_3370_crop, originally uploaded by Raghu Rajagopalan.

Drip....


IMG_3369_crop


Water droplets - dipping a toe in macro photography

he he - so I have a canon S3 IS - got it last year since it allows enough manual control while also having family friendly thingies like video :). Also, with the chdk hack, the S3 IS is good enough for me to experiment.

So, one of these long time itches has been to take a water droplet splash - you know, the immensely close up snaps where you see a single drop splashing...

Here's the snaps after two evenings of trial and error (mostly errors though) - feeling quite smug myself :)

Wednesday, April 02, 2008

Free subversion hosting - What's the best?

All - I've just signed up for an Assembla account - these folks provide free subversion hosting with a 500 meg space and unlimited spaces.

Will see how it goes.

Firefox 3 beta 5 released. Yahoo Mail is still broken.

Firefox 3 Beta 5 released today. Release notes and downloads here.

Installed it as soon as I got to know this morning, and the first thing to check was whether Yahoo Mail still crashed. Initially, Yahoo Mail seemed to work alright - for all of 50 seconds. Quickly moving over items in the inbox caused Firefox to crash :-(

Guess I'll wait some more. I'm sure there's a bug report somewhere on this - Yahoo Mail was broken on Beta 2, got fixed in Beta 3, then broke again in Beta 4 and is still broken on Beta 5.

Will wait for it to be fixed - any idea if this is a Firefox issue or a Yahoo! issue? Seems odd that script can crash the browser so badly.

Monday, March 31, 2008

Hardy heron - first impressions

He he :-) - finally got Ubuntu Hardy heron beta on my home and work laptop. first impressions below:

1. Wubi install from within Windows is easy and works great. If after setting up so many boxes I can still go on and on about it, I'm sure it's a great help for anyone who's on Windoze. I mean, the barrier to entry has never gone down so much.

2. I guess once you've installed via Wubi and configured your system to your liking, you can uninstall and take an image that you finally install to a dedicated partition - isn't that just awesome?

3. Comes with Firefox 3b4 installed - which is awesome. Given that FF crashes badly on Yahoo, though, this might be a bummer for many people. There should probably be some first-time customization that lets you install Opera.

4. Installation is super fast - took about 10 mins for wubi to install, reboot once, finish installation and reboot again. Grub default to Last selected would probably be a better idea.

The not so good

1. Wifi doesn't work out of the box - didn't on my Dell Inspiron 1501 or the Dell Latitude D620. It's ye olde Broadcom problem, and it's really the BIGGEST turn-off. Hope it gets fixed by the time the final release is out. Meanwhile, I had to jump through hoops getting ndiswrapper in. I didn't go the broadcom fwcutter way since, from what I read, that only allows an 802.11b connection. I'm still not sure what fixed the issue - irrespective, I had to update the system and then things started working like a charm.

2. Compiz configuration isn't installed by default. If this is your first time on Ubuntu and you've come this way to see the awesome 3D desktop, then this is a bummer - and finding out what you need to do is a pain too.

I think that's all there is to it. It's great once wifi starts working normally.

Friday, March 28, 2008

Gnuplot, dstat - easy graphing on Linux

Recently started fiddling around with how to monitor and graph performance data on Linux boxes. The usual tools like top and vmstat are either interactive (top) or too textual to do much with directly.

First off, vmstat doesn't lend itself well to graphing without additional scripts to lay out the data for tools like gnuplot. Secondly, and more seriously, it doesn't include a timestamp in the output.

Looking around a bit, I found that dstat seems to be a good replacement for vmstat (and iostat) - and the generated data is consumable with gnuplot.

Here's a quick example of generating graphs for CPU user, system and idle times
dstat -tc 5 500 > dstat.raw

Now fire up gnuplot and plot it:
gnuplot> set xdata time
gnuplot> set timefmt "%s"
gnuplot> set format x "%M:%S"
gnuplot> plot "dstat.raw" using 1:2 title "User" with lines, "dstat.raw" using 1:3 title "Sys" with lines, "dstat.raw" using 1:4 title "Idle" with lines

To make gnuplot generate an output file, you need

gnuplot> set term png

gnuplot> set output "dstat.png"

gnuplot> replot

dstat png - User, system and Idle times

And you're done. Here's the graph generated on my machine. There's loads more that you can do - and admittedly, you can do everything by dumping your file into Excel. However, that doesn't lend itself well to a completely automated process. When you're doing performance testing and suchlike, you will likely repeat this often enough. Not having to do it manually helps big time!
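If you find yourself repeating those gnuplot commands, the whole thing can be scripted for an automated run. Here's a minimal sketch - file names match the ones used above, and it assumes dstat.raw has an epoch timestamp in column 1 and user/sys/idle percentages in columns 2-4, as in this post; adjust the columns if your dstat invocation differs:

```shell
#!/bin/sh
# Write the gnuplot commands from the interactive session to a file
cat > dstat.gp <<'EOF'
set xdata time
set timefmt "%s"
set format x "%M:%S"
set term png
set output "dstat.png"
plot "dstat.raw" using 1:2 title "User" with lines, \
     "dstat.raw" using 1:3 title "Sys" with lines, \
     "dstat.raw" using 1:4 title "Idle" with lines
EOF

# Render only if gnuplot is installed and the data file exists
if command -v gnuplot >/dev/null 2>&1 && [ -f dstat.raw ]; then
    gnuplot dstat.gp
fi
```

Drop this into a cron job or a test harness and the png regenerates itself after every run.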

Thursday, March 27, 2008

Working with huge XML files - tools of the trade.

XMLStarlet is great for slicing and dicing huge XML files. Had a run-in recently - an 80 MB XML file on a single line :D. Guess what, most editors that I tried balked and fell over. This was on a 2 GB Core 2 Duo machine.

XMLSpy, vi, emacs, notepad++ all died - and trying to do something with an 80 MB XML file where the 80 megs are on a single line isn't much fun. So the first order of business was to pretty print the XML. XMLStarlet worked great -
xmlstarlet fo file.xml > output.xml

and you're done.

The next order of business was to validate the XML document against a schema. Our first attempt was with Sun's Multi Schema Validator (MSV). MSV does not validate the whole document but instead stops after a certain number of failures. So, MSV out, XMLStarlet in. XMLStarlet can validate documents against a W3C schema, a DTD or a RELAX NG schema.
xmlstarlet val --err --xsd schema.xsd input.xml >  errors.txt

And presto! - you get an error report that you can slice and dice with sed/awk or anything else at all.
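For example, assuming the errors come out in libxml2's usual "file:line: message" shape (check your actual errors.txt - the sample below is made up for illustration), a quick awk pipeline can summarize which elements fail most often:

```shell
#!/bin/sh
# Fabricated sample of validation errors - the real output from
# xmlstarlet val may differ slightly, so check your errors.txt first
cat > errors.txt <<'EOF'
input.xml:12: element price: Schemas validity error : missing child
input.xml:40: element sku: Schemas validity error : bad value
input.xml:87: element price: Schemas validity error : missing child
EOF

# The second ': '-separated field is the element name - count occurrences,
# most frequent failure on top
awk -F': ' '{ print $2 }' errors.txt | sort | uniq -c | sort -rn
```

Handy when the report runs to thousands of lines and you need to know where to start fixing.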

XMLStarlet also allows you to write XPaths to query the XML - however, I found the syntax too weird and roundabout. A better alternative is a Perl-based solution - XSH2 - a command-line XML editing shell. You can install it under cygwin and it supports basic command pipelining and redirection.

So go ahead and launch XSH. At your cygwin prompt
[~]xsh
---------------------------------------
 xsh - XML Editing Shell version 2.1.1
---------------------------------------

Copyright (c) 2002 Petr Pajas.
This is free software, you may use it and distribute it under
either the GNU GPL Version 2, or under the Perl Artistic License.
Using terminal type: Term::ReadLine::Gnu
Hint: Type `help' or `help | less' to get more help.
$scratch/>

Now, let's load up our document - type
$scratch/>$x:=open formatted.xml

Your prompt changes to
$x/>

So go ahead and try a few xpaths
$x/> ls /path/to/node

and XSH prints out the matching nodes. Now what if you need to create a document fragment of nodes matching a certain XPath? Piece of cake - go ahead:
$x/> ls /path/to/node | tee fragment.xml

XSH2 has many, many more features - but this should be good enough to get you off the ground.

Saturday, February 09, 2008

Yahoo! mail fixed for Firefox 3 beta 2

Used to get an error on opening Yahoo Mail beta in Firefox 3 beta 2 - and had to switch to plain 'ole Yahoo Mail. Here's the bug report.

Was pleasantly surprised today morning to see that Yahoo! mail beta now works properly in FF3b2. Thanks!

Wednesday, January 23, 2008

Pesky little bash quoting problem

Have to admit it - this happens every time I sit down to write some shell script that manipulates paths on Windows (where path names often end up with spaces). Soon I find my nifty little script running into problems when it doesn't handle spaces properly, and I find myself reading up on bash quoting rules once again...

Anyway, this post is mostly for self-reference :) and to put down some simple rules in the hope that writing them down will help commit them to memory.

The latest (mis)adventure was making IrfanView run under wine, with a little script to allow IrfanView to open a file provided on the command line. IrfanView being a windoze executable, it's necessary to cd to the folder and then pass the file as an argument. Trivial, isn't it... until I found that the script fell over when it got a path like /path/to/a folder with spaces/image.jpeg.

#! /bin/bash
DIRNAME=$(dirname "$1")       # double quotes necessary - $1 could have embedded spaces

FILENAME=$(basename "$1")
echo "$DIRNAME"
echo "$FILENAME"
cd "$DIRNAME"                 # once more, double quotes necessary
irfanview "$FILENAME"         # and again - quote the filename too

Golden Rule

When passing a path as argument, always enclose in double quotes.
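A quick way to convince yourself of the rule - this little sketch builds a path with embedded spaces (under /tmp; the names are made up) and shows the quoted calls doing the right thing:

```shell
#!/bin/sh
# Make a path with embedded spaces to test against
mkdir -p "/tmp/a folder with spaces"
touch "/tmp/a folder with spaces/image.jpeg"

P="/tmp/a folder with spaces/image.jpeg"

DIRNAME=$(dirname "$P")      # quoted: dirname sees exactly one argument
FILENAME=$(basename "$P")

echo "$DIRNAME"              # -> /tmp/a folder with spaces
echo "$FILENAME"             # -> image.jpeg

# Unquoted, dirname $P would receive several whitespace-split arguments
# and error out or mangle the path - try it and see.
```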

Thursday, January 10, 2008

Firefox 3 Beta 2 on Ubuntu Gutsy

I'm having weird problems with Firefox 3b2 on Ubuntu Gutsy - and as far as I can tell, I seem to be the only one. Did not find anything similar on the Ubuntu forums either.

Installed firefox 3 beta 2 from Mozilla to /usr/lib/firefox3b2 folder and created
lrwxrwxrwx 1 root root 27 2007-12-30 23:44 /usr/bin/firefox-3b2 -> /usr/lib/firefox3b2/firefox

When I launch firefox3b2, I get Firefox alright; however, if I type a URL in the location bar and press Enter, nothing happens - absolutely nothing at all. I have to click the green arrow for the browser to open the URL. The search box is even weirder - neither the Enter key nor the mouse works!

I'm at a loss - and I can't find any similar experiences on forums etc. either. Ideas welcome :D

SOLVED 01/20: Backed up my .mozilla folder and started Firefox 3b2 afresh - no problems now :D

HOWTO: Access your machine from the internet without a static IP

For a machine to be accessible on the internet, you usually need a static IP leased from your ISP, so that packets addressed to it can be routed to your machine. However, getting a static IP is costly, and for the most part internet users have a dynamic IP address that the ISP allocates each time the user connects. Since the IP address changes on each connection, there's no straightforward way to reach the machine without knowing the address that's been allocated - or at least there wasn't, till Dynamic DNS came along (it isn't new - it's been around for ages, but for some reason isn't that well known).

Typically, when you type www.google.com into your browser, your machine performs a DNS (Domain Name System) lookup with your ISP's DNS servers to find the IP address corresponding to www.google.com. With DDNS (Dynamic DNS), this is made to work with your dynamically allocated IP address too. Here's how it works:

  1. Register with a DDNS service provider. Providers offer free accounts for personal use - go to www.dyndns.org.

  2. Once you've created your account, go ahead and set up your hostname. DDNS service providers will have some domains that you can choose from and you get to choose the host part. For a fee, you can also use a domain name of your choice.

  3. If your setup has a router at your end, check your router administration page to see if it supports Dynamic DNS. If it does, you need to enter the hostname, account and password. Every time your router connects to the internet, it sends an update to the DDNS service with the new IP obtained from your ISP, and the DDNS service updates the DNS records so that lookups of your hostname resolve to the new IP.

  4. If you don't have a router, then download the DDNS client software from the service provider. Most DDNS providers have Windows, Mac and Linux clients. These run on your machine and do the same thing - notify the DDNS provider of your new IP whenever you establish a connection with your ISP.

  5. If you've got all this set up, then you can reach your machine from the net - try ping <your host name>


If you're running Linux/Ubuntu, make sure you're running the SSH service and try ssh <your host name>. If you have a router setup, you'll need an additional step - the DDNS name refers to your router's IP, not the machine behind the router that you wish to reach. You will also need to make sure that your machine has a static IP from your router. To set this up, go to your router administration page.

  1. Go to the LAN section and set the DHCP range so it excludes the static IP you plan to use. Most routers hand out LAN addresses like 192.168.x.y - 192.168.x.z. If you want your host to have an IP address of 192.168.1.100, then give a DHCP range that does not include this IP - say 192.168.1.110 - 192.168.1.200.

  2. Save and reboot your router.

  3. Now go to your machine's network settings and enter your static IP (192.168.1.100), netmask 255.255.255.0 and gateway (usually 192.168.1.1).

  4. Go to your router administration page and look for a section like "virtual server" - your router will allow you to forward packets received on a particular port to a host and port within your LAN. Enter the external port (we'll use 22), the internal machine to forward to (192.168.1.100) and the port to forward to (22). With this in place, any packets received on port 22 (ssh) at your router will be forwarded to the 192.168.1.100 machine on its ssh port.

  5. Save and reboot your router.

  6. Give it a spin.


From a different machine (or from the same one - doesn't matter), try ssh <your host> and you should be able to log in to your machine - via the internet.
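Under the hood, the router or client from steps 3 and 4 is just hitting the DDNS provider's update URL on each reconnect. Here's a rough dry-run sketch of that call against dyndns.org's update endpoint - the hostname, credentials and IP below are placeholders, and the endpoint and parameter names are assumptions to verify against your provider's documentation:

```shell
#!/bin/sh
# Placeholders - substitute your own DDNS hostname and account
HOSTNAME="myhost.dyndns.org"
USER="myuser"
PASS="mypass"
NEWIP="203.0.113.42"    # normally auto-detected; hardcoded for the sketch

# dyndns.org-style update URL (assumed from their docs - double-check)
URL="https://${USER}:${PASS}@members.dyndns.org/nic/update?hostname=${HOSTNAME}&myip=${NEWIP}"

echo "$URL"             # dry run - uncomment the next line to actually send it
# curl -s "$URL"
```

Routers and the official clients do exactly this for you, which is why entering the hostname/account/password on the router page is all it takes.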

Thursday, January 03, 2008

Back in circulation

I'm on vacation in Bangalore, and guess what - fixing my home computer. Mostly things like lost drivers, screen resolution, cruft in the drives - its an old machine - a P4/512 Meg, but good enough for surfing the net.

Did a few fun things in the midst, and it's been ages since I've added anything to this blog. Will summarize for now and put in longer posts with more details in case anyone's interested.

  1. Fixed my windows C drive which was running out of space - used trusty old windirstat for that.

  2. Set up wifi at home with the ADSL modem from BSNL - an MT800. Again, wasn't as straightforward as I'd thought.

  3. Replaced old PCQ Linux 2006 with Ubuntu Gutsy - without losing stuff :D. You need /home in a separate partition, but otherwise this is a breeze.

  4. Having fun with compiz-fusion. It's great - however, the documentation isn't easily locatable/consumable enough for first-timers (me).

  5. Set up a DNS caching proxy on my Linux box - has improved my net/web experience a hundredfold. Was a piece of cake too.

  6. Set up Dynamic DNS and remote SSH access to my box - this has been the single most important utility/maintenance action.


More later.

Tuesday, July 03, 2007

Sluggish Firefox - and what a hog!

Okay - for the past few days I've been irritated with the browsing experience at home. Pages (Google Reader, Yahoo Mail etc.) seemed inexplicably slower than before - they'd load alright, just that teeny weeny bit slower that's enough to leave you suspicious.

I first suspected my ISP (Verizon) for frequent dropped connections (saw the DSL modem lights reset a couple of times a day), then my wifi modem (not a high-end one), then spyware/malware. So after the usual barrage of tests - wifi interference sources, antivirus, antispyware, cable tests etc. - I still hadn't nailed it.

Finally, used procexp (you can use plain 'ole Task Manager too - this is just flashier) and saw that FF was using 330 MB of RAM with 3 or 4 tabs open. Also, just clicking on a text box was slow, and typing into a text field would echo characters after a noticeable delay - so this was definitely a browser problem.

When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.


-- Sherlock Holmes


Ain't that an apt quote??? I love the addons I have - and probably have one too many. This page on problematic addons was a life-saver - after disabling a bunch of infrequently used addons (StumbleUpon toolbar, Google toolbar, Browser Sync, Adblock Filterset.G, FoxyTunes and some more), I'm back in browsing heaven. The only addons I have enabled now are




  • diigo toolbar

  • all in one gestures

  • Adblock

  • Flashgot

  • Piclens


What a relief!

BTW, addons are also the latest attack vector. So be wary of who you let into your browser!

Thursday, June 28, 2007

Piclens - full screen slideshows with flickr (and others)

Discovered Piclens.

It's a great add-on for Firefox - and integrates with Flickr to give you full-screen slideshows a la Picasa slideshows on your machine!! It's a bit tricky to figure out how to get it to work - just hover over any picture on any page and click the blue bubbly overlay button that appears.

Python, cygwin, TurboGears, mysql hell

Ok - here goes - I've always liked Python, though I'm definitely a noob. I was interested in Python on the web, and after a bit of googling, it seems like TurboGears is the way to go.

First things first - decided to use MySQL as the database (I already have it on my machine and didn't want to install one more database like PostgreSQL or SQLite). Now it turns out that MySQL doesn't have a cygwin package. More googling - the MySQL server can't run on cygwin due to something to do with pthreads. You can compile the MySQL client on cygwin though.

That's what I decided to do - grabbed the Linux tar.gz source from mysql.com, untarred it into a directory and ran ./configure --without-server, followed by make && make install. All went through fine - other than the fact that it was time-consuming and pretty boring (more so since I had to download and install gcc and binutils first in cygwin).

I thought I'd got through the hard part and all that remained was to install the Python MySQLdb package. Off I went:

easy_install MySQLdb

No luck there - the package build failed with a missing library: -lmysqlclient_r. Turns out that the 'thread-safe' version of the mysql client (mysqlclient_r) is preferred, but the mysql build doesn't build it by default. What a shame!!!

Anyway, I wasn't going to redo the whole mysql client library build - more README files and googling later, grabbed the mysql-python-1.2.2 tarball, untarred it into a folder, edited site.cfg and set threadsafe to false. The next run of python setup.py build worked properly, with the Python mysql module linking against the non-threadsafe mysqlclient library.

Think troubles are over yet? No way.

Off I went to test - started the Python interpreter, did an import MySQLdb, and got a Permission Denied in some 'egg' file!!! What the heck are these egg files anyway? Well, I didn't have much of a clue, and more googling later got educated that these are install packages used by the easy_install system. The more I looked, the more it seemed that easy install is anything but easy :(. Anyway, this one had me floored - I couldn't get to the line of source where the error was and had no clue how to view the contents of an 'egg' (they're zips - but I didn't know that, and very helpfully there's hardly any place where they say that they're zips with an extension of .egg!!! Baaah!! - why couldn't they just use .zip?)

More and more hard googling - the info's really sketchy this time - till I eventually found a post from a guy who asked the exact same question. Guess what: the easy_install system unzips the eggs to some folder (pointed to by the PYTHON_EGG_CACHE env var), and there I needed to do a chmod a+x on the _mysql.dll. So I did echo $PYTHON_EGG_CACHE - and the var isn't set!!! Admittedly, at this point I'm not looking sharp either - what started out as a quick spin has become a quagmire of installation issues - but I'll be damned if I let it sink me!!! Eventually had the Eureka moment and checked ~/.python-eggs, and sure enough found the truant _mysql.dll. Quick chmod a+x and presto - import MySQLdb worked!!! YAHOOOOOOOOOOOOOOOOOOOO!!!!
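For anyone hitting the same wall, the fix boils down to a couple of commands. This sketch fakes the egg cache layout so it runs standalone - the MySQL_python directory and dll names are hypothetical stand-ins; on a real box you'd chmod whatever easy_install actually extracted under ~/.python-eggs:

```shell
#!/bin/sh
# Use PYTHON_EGG_CACHE if set, else the default ~/.python-eggs location
EGG_CACHE="${PYTHON_EGG_CACHE:-$HOME/.python-eggs}"

# Fake a cached egg so this snippet is self-contained (names below are
# made-up stand-ins for what easy_install extracts on a real system)
mkdir -p "$EGG_CACHE/MySQL_python.egg-tmp"
touch "$EGG_CACHE/MySQL_python.egg-tmp/_mysql.dll"

# The actual fix - mark the extracted dll executable
chmod a+x "$EGG_CACHE/MySQL_python.egg-tmp/_mysql.dll"

# And since eggs are just zips, you can peek inside the original with:
# unzip -l /path/to/MySQL_python-1.2.2-py2.5.egg
```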

And now back to where I started - went back to TurboGears, did a tg-admin quickstart, set up a mysql database and started with python start-testproject.py. Guess what - no luck yet. Turns out that the mysql client can't connect to my Windows server over a socket.

More googling - really desperate this time - and more enlightenment: the Windows mysqld doesn't do Unix sockets, so how do I force TCP/IP? Simple - use 127.0.0.1 as the hostname in the connection settings instead of localhost!! Finally something that was easy to fix. After 8 hours of on-and-off hacking away at installation issues, I'm glad to see a TurboGears web page.
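For the record, in a TurboGears quickstart project that connection setting lives in dev.cfg - something like the line below (the database name and credentials are placeholders; the part that matters is 127.0.0.1 instead of localhost):

```
# dev.cfg - force a TCP/IP connection by using 127.0.0.1
sqlobject.dburi="mysql://user:password@127.0.0.1:3306/testproject"
```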

Bottomline: Python's great, and from the looks of it, TurboGears seems well designed. MySQL is a great database - a cygwin-native server would be great, or at least a client package. But if one has to run through all these hoops just to get a 'quick spin', then adoption's going to be difficult.

I haven't tried RoR - but has someone tried a similar thing on cygwin (cygwin, Ruby, RoR, MySQL)? How does the experience compare - is it any easier to get off the ground?