Saturday, December 31, 2011

Unit testing Apache CXF RESTful services - code available

So, the original post on this topic, written about two and a half years ago, had code snippets, but there have been comments and PMs asking for the complete code. So last week, as I resurrected this blog, I decided to get that code out on github. Unfortunately, that was easier said than done; it has been quite some time and, frankly, I'd lost the code. I must've switched machines about 3 times in the interim and gone from SVN to github for personal projects. Some hunting around ensued and, thankfully, I was able to find the actual code we wrote based on the sample I'd posted. So I cleaned that up, extracted just the unit testing example out of it, and pushed it to github - get it here. I haven't updated any of the dependencies - so this is still running against spring 2.5 and cxf 2.2.3 (I think) and things might've changed quite a bit since then (I haven't used the JAX-RS bits of CXF much after that).


Running tests:
[sourcecode language="bash"]
mvn test
[/sourcecode]
Running the server:
[sourcecode language="bash"]
mvn jetty:run
[/sourcecode]

Syntax highlighting support in Wordpress.com with markdown

Now that I've cozied up to the Vim/VimRepress combo for posting to this blog, there are a few issues I'm finding with posting code. With straight wordpress.com, I used to be able to mark up code with the [sourcecode][/sourcecode] tag and get syntax highlighting. With markdown, indenting a block of code with 4 spaces renders it inside <pre><code></code></pre> tags, but I don't know if there's a way to let WP.com know what language it is, or any way to use the [sourcecode][/sourcecode] plugin from markdown.


Some googling on the topic didn't turn up any wp.com-specific answers (some folks have posted on using other plugins et cetera with a self-hosted Wordpress - but nothing for wordpress.com).


Any ideas/pointers? Guess I should also post the question on the Unix stackexchange


Update on 1/1/2012


Using the sourcecode language="xxx" tag works - but you can't have any empty lines in your source.

Friday, December 30, 2011

Creating an interstitial login page with JqueryMobile

So, at work, we're building a mobile website using JqueryMobile. The app has a bunch of publicly visible pages; however, other pages require the user to be authenticated. We didn't want the user to be forced to log in on the first page. Instead, whenever a protected page is accessed and the user isn't logged into the app, we'd like to take him to the login page. Once he's successfully authenticated, we take him to the page he was navigating to. Doing this in a normal webapp is quite standard - however, with JqueryMobile, query params meddle with the hash navigation model. Also, the page that the user tries to access could be a div in the same physical page or a different url that needs to be fetched.


Trying to solve this was interesting as we were all really just getting started with JqueryMobile - so finding the ideal solution required a few tries. The solution takes a leaf out of JqueryMobile's approach. The outline of the solution is:



  1. Any page div that's a protected resource is marked with a data-needs-auth="true" attribute

  2. We hook into the document level pagebeforechange event to see if the user is trying to transition to a page requiring authentication. If so, then check if we have the user's authenticated context available.

  3. If the said context isn't available:

    1. Cancel default event handling since we're now going to navigate the user to the login page.

    2. Save the toPage object - so once the user is logged in, we know where to take him.

    3. Navigate to the login page.



  4. In the login page, the page can call the server apis to authenticate the user. Once the user is authenticated, then:

    1. See if there's a valid returnTo object; if so, take the user to that page.

    2. If not, take the user to a 'default' page - in our case, this is the app dashboard page.




Code below:
[sourcecode language="javascript"]
var pageVars = {};
$(document).bind("pagebeforechange", function (event, data) {
    if (typeof data.toPage == 'object' && data.toPage.attr('data-needs-auth') == 'true') {
        if (!sessionStorage.getItem("TokenSSKey")) {
            if (!localStorage.getItem("TokenLSKey")) {
                pageVars.returnAfterLogin = data;
                event.preventDefault();
                $.mobile.changePage("#Login_Page", { changeHash: false });
            }
            else {
                sessionStorage.setItem('TokenSSKey', localStorage.getItem("TokenLSKey"));
            }
        }
    }
});
[/sourcecode]
The login event handler that handles the server response received once we pass the username and password:


[sourcecode language="javascript"]
function SuccessLogin(data) {
    if (data != null && data.LoginResult != null) {
        if (data.LoginResult.Code === 0) {
            localStorage.setItem('UNameLSKey', data.LoginResult.User.AccountName);
            if ($("#RememberMeChkBx").is(":checked")) {
                ErrorPanel.html("");
                localStorage.setItem('TokenLSKey', data.LoginResult.Token);
                sessionStorage.setItem('TokenSSKey', data.LoginResult.Token);
            }
            else {
                ErrorPanel.html("");
                sessionStorage.setItem('TokenSSKey', data.LoginResult.Token);
            }
            if (pageVars && pageVars.returnAfterLogin) {
                $.mobile.changePage(pageVars.returnAfterLogin.toPage);
            }
            else {
                $.mobile.changePage("#DashBoard_Page", { changeHash: false });
            }
        }
    }
}
[/sourcecode]

Thursday, December 29, 2011

Learning Vim

Is it worth it?

Definitely seems to be. I've looked at VIM in the past, tried it out a couple of times or more, failed miserably (mostly within a day or two) and then wondered why nutheads use vi. This would usually be followed by going back to the comfort of Emacs. I think over the years, I've spent more time customizing Emacs than actually getting any work done with it. And somewhere that felt wrong. In light of that, the minimalistic VIM looked attractive and worth another try.

So what was different about this time?

So this time things worked out a bit better. Rather than firing up VIM right away, I spent some time reading through others' experiences of picking up VIM. And the first thing I did right was to disable the arrow keys in normal mode (I still have them in insert mode):

    " disable arrow keys
    noremap   <Up>     <NOP>
    noremap   <Down>   <NOP>
    noremap   <Left>   <NOP>
    noremap   <Right>  <NOP>
Once you have that bit, you're forced to use h/j/k/l. And while h/j/k/l muscle memory is built up within a week, the nice thing that really happens is that you don't use h/j/k/l much - instead you move to using more efficient movement commands. There are a ton of resources/cheatsheets on the web - but the approach I followed was to figure out some small keystroke when I needed it. What that meant was that I could get work done - but at the same time get more efficient gradually.
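For reference, a sampler of the motions that tend to displace h/j/k/l early on (these are standard Vim commands - see :help motion.txt for the full list):

```text
w / b     - forward/back one word
0 / $     - start/end of line
gg / G    - top/bottom of file
f{char}   - jump to the next {char} on the current line
} / {     - forward/back one paragraph
```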

Customizations

VIM out of the box is pretty badly configured - and that's part of the reason people seem to shy away from it. In fact, all the times I tried out VIM before, I didn't even come close to customizing my .vim. There are folks who have curated vim dotfiles on github etc - but my advice is to stay away from them. You should know what goes in your .vim and be in control of that, rather than getting a bunch of things in your .vim that you don't understand. Just so you know, looking at the github history for my vimfiles repo, the initial commit was 3 months ago - but after that, all the commits have come in only in the last 4 weeks. What that means is that while I put in a vim file initially, I didn't do much with it since I was just getting a hang of the basics. Once one becomes comfortable with the basics, one moves to customizing the vim environment more and more.

Parting words

To summarize, VIM definitely seems nice once you invest in it. It's easy to drop off in the initial stage and not go any further - and I believe this is what happens to the vast majority of folks who try it out. However, once you build that initial comfort level, it feels light, fast and easy. Start easy, persist, and customize bit by bit - you'll find yourself going from struggling with Vim to feeling comfortable, and then to customizing your environment for an even better experience with VIM. I've definitely been more productive with VIM than I ever felt I was with Emacs - and these posts to my blog from Vim are part of that. Besides that, I've used VIM effectively with a decent sized js codebase, html markup etc. and felt the speed of editing in spite of still being a noob in Vim terms.

A new look

Changed the theme of this blog and moved around the widgets a bit.
Finally, I can bear looking at this blog :) - hope that holds good for you too.


Wednesday, December 28, 2011

Compiling VIM

Running ubuntu 10.10 here, and the ubuntu repos have only vim 7.2. I'm sure there's a ppa out there that has 7.3, but I thought
that compiling vim from source would be a good exercise - plus I get to compile it with the options that I'd like
rather than relying on someone else's build.


Here are the options that I enabled:
[sourcecode language="text"]
CONF_OPT_PERL = --enable-perlinterp=dynamic
CONF_OPT_PYTHON = --enable-pythoninterp
CONF_OPT_RUBY = --enable-rubyinterp
CONF_OPT_GUI = --enable-gui=gtk2
CONF_OPT_FEAT = --with-features=huge
BINDIR = /usr/bin
DATADIR = /usr/share
[/sourcecode]
Here are the other dependencies I had to install:
[sourcecode language="bash"]
sudo apt-get install libperl-dev ruby-dev python-dev libgtk2.0-dev
[/sourcecode]
Once you have the deps installed, just run
[sourcecode language="bash"]
make
sudo checkinstall
[/sourcecode]

Blogging with Vim

So now I'm in Vim land, and this is the first time I've gotten far enough to feel a bit comfy. Decided to dust off my blog and start at it again - what better to do it in than VIM.


So - TA-DA - here's the first post - courtesy of VIM on ubuntu. However, as usual, it was rougher than it's supposed to be. In any case, I'll forget how I got this far by the next time, so the next few posts will be around recording how to get VIM to post to WP.com blogs.


But before that - the first thing to do is to get the VimRepress plugin. It's better if you have pathogen installed, in which case you can do:
[sourcecode language="bash"]
cd .vim
git submodule add https://github.com/raghur/VimRepress.git bundle/VimRepress
[/sourcecode]
That's my fork on Github of https://github.com/connermcd/VimRepress.git which fixes a few things:



  • Makes VimRepress work properly through a proxy

  • Changes the attachment filename to a '.odt' since Wordpress.com doesn't allow a text file attachment.


I still don't have a clue if doing this will break the plugin - but nevertheless, the basic case of posting to my blog works, and at this stage that seems good enough for me.


PS as you can see from this post - I've not yet got a hang of markdown syntax :)


Dec 29th - PPS: a couple of posts in, one more tip for Wordpress.com. WP.com renders hard breaks in the markdown text as <br/>. Obviously, this doesn't leave the post looking very good. I have the following in my .vimrc to get around this:


[sourcecode language="text"]
augroup Markdown
autocmd FileType markdown set wrap
\ linebreak
augroup END
[/sourcecode]


PPS
You will also need to have python markdown installed once you have VimRepress running.
[sourcecode language="bash"]
easy_install markdown
[/sourcecode]

Thursday, August 18, 2011

Troubleshooting nandroid backup/restore

Have been having all sorts of weird problems with Nandroid backup/restores. Essentially, here are the symptoms: I'd take a nandroid backup and restore it successfully (Amon RA/CWM would report success) - however, the phone would either get stuck at boot or, if it booted successfully, would have tons of FCs and/or data loss. In most cases, I would dread seeing the green Android on boot up asking me to log in to my google account :(

Essentially, my nandroids were useless... to the extent that I had only one nandroid backup that was known to work - and I was keeping 4 backups of that lest I lose it somehow.

So today, I thought I'd dig deeper into it and see where the problem was:

  1. It was unlikely to be a problem with CWM/Amon RA - I myself had an old working backup. Since my backups were created and restored successfully with MD5 verification, it seemed that something was wrong in the backup image itself.

  2. Still, that seemed inexplicable, since creating images just doesn't seem that flaky. A couple of times after reboot, I had got a "UID has changed - it is recommended to wipe data" message or something similar - so I thought something was wrong with permissions after the restore. In any case, I tried the CWM 'fix permissions' menu item - but didn't get anywhere with that. At this point, I was desperate enough to get adb out!!

  3. Now in full blown investigation mode, I didn't care if I couldn't restore my data - I just wanted to figure this thing out. So I restored a "non-working" backup and did an adb logcat while the phone booted... turns out that I was seeing tons of messages like so:

    I/PackageManager(  205): /system/app/ContactsProvider.apk changed; collecting certs
    I/PackageManager(  205): New shared user android.uid.shared: id=10006
    W/PackageManager(  205): System package com.android.providers.contacts has changed from uid: 10003 to 10006; old data erased

  4. So that explained what was going wrong... I thought it would be an easy fix to do the 'fix permissions' thing in the CWM advanced menu. Restored again, went over to the advanced menu, did fix permissions and rebooted. What I got was a big naught for all my work - same problems and no resolution. At this point I was stumped - but sheer bull-headedness forced me to look at the log again... and lo, it says 'data erased'. So that explains why fix permissions after boot won't work, since the data is erased during boot itself!


At this point, the key to the problem was really understanding how and where android's UIDs are generated, stored and regenerated. Headed over to the Cyanogen wiki and read up the details on fix permissions, which explained the packages.xml file. Somehow packages.xml was borked in the nandroid (every time) and that was causing it to be regenerated.

Armed with that, I got a germ of a solution in place, which is roughly:

  1. Restore nandroid with borked packages.xml

  2. Let the system boot. Will lose data but a new packages.xml will be regenerated

  3. Reboot into recovery and adb pull /data/system/packages.xml out.

  4. Do an advanced restore and again restore the data only.

  5. mount /data and adb pull /data/system/packages.xml to compare differences. Found that packages.xml was indeed corrupt.

  6. adb push packages.xml (this is the generated one pulled in step 3) to /data/system. Now you have all the old data, but packages.xml is the newly generated one, known to be valid. Obviously UIDs will mismatch - but fix permissions now has a valid file to work on.

  7. Still in recovery, run fix permissions. It should fix permissions properly.

  8. Reboot


It worked like a charm!!!! I'm still a little worried as I don't know what else is borked in my nandroid data.img. And I don't know why that image has the exact same problem every time - I have tried 3 different versions of CWM recovery, Amon RA 2.2.1 and ensured that my sd card was clean (ran chkdsk on it). In any case, since I'm able to restore, I'll just use the phone for the next few days and hopefully I'll run into any wonkiness soon.

And going forward, I think I'll take my own backup copy of the /data/system/packages.xml file along with each nandroid.
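Incidentally, the comparison in step 5 can also be scripted. A rough sketch in Python (the `userId`/`sharedUserId` attribute names here follow the common packages.xml layout, but the exact format varies across Android versions - treat this as an illustration, not the tool I used):

```python
import xml.etree.ElementTree as ET

def package_uids(path):
    """Map package name -> uid from a packages.xml dump.

    Assumes each <package> element carries a 'userId' (or 'sharedUserId')
    attribute, as in common AOSP packages.xml layouts.
    """
    uids = {}
    for pkg in ET.parse(path).getroot().iter('package'):
        name = pkg.get('name')
        uid = pkg.get('userId') or pkg.get('sharedUserId')
        if name and uid:
            uids[name] = uid
    return uids

def diff_uids(fresh, borked):
    """Return packages whose uid differs between the two files."""
    a, b = package_uids(fresh), package_uids(borked)
    return {name: (a[name], b[name])
            for name in a.keys() & b.keys() if a[name] != b[name]}
```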

Tuesday, August 02, 2011

Note To Self: Fixing broken market links on Android after wipe/ROM upgrade

  1. Force stop market and clear data.

  2. Launch market again - it will ask you to accept terms. Do so.

  3. This should force it to rebuild the database, and you should see all your apps linked to market again.

Thursday, July 28, 2011

Note to self: ROM install/upgrade


  1. nandroid backup - amon ra recovery

  2. Reboot recovery, install zip

  3. Install Link2SD-preinstall.zip (only on cyanogen based ROMs)

  4. Boot

  5. Play around...make sure things work.

  6. Install other niceties/Troubleshoot


    1. Link2SD - database error. Just uninstall and reinstall.

    2. /etc/gps.conf - change to sg.pool.ntp.org


  7. Charge to 100%

  8. Reboot into recovery

  9. wipe battery stats

  10. reboot

  11. Run down the battery

  12. Recharge to 100%


 

Monday, July 19, 2010

A Python mystery

Back after a long time…saw something strange today and think it deserves a post. I was cranking through Problem 47 on Project Euler. As I was optimizing the solution, the optimization actually increased run time – and I’m at a loss to explain it. So here goes:

[sourcecode language="python"]
def problem_47(maxlen = 4):
    found = False
    i = 2*3*5*7 + 1
    while not found:
        # facs = [len([j for j in uniq(prime_fac(i+k))]) for k in xrange(0,maxlen)]
        d = 4
        for t in range(i+3, i-1, -1):
            k = len(list(uniq(prime_fac(t))))
            if k < 4:
                i = t + 1
                break
            else:
                d -= 1
        if d == 0:
            found = True
            print list(xrange(i, i + maxlen))
        # if i % 1000 == 0:
        #     print i
[/sourcecode]
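The snippet relies on `prime_fac` and `uniq` helpers that aren't shown in the post. A minimal sketch of what they presumably look like (trial-division factorization and an order-preserving dedup - my reconstruction, not the original code):

```python
def prime_fac(n):
    # yield the prime factors of n, with repetition, by trial division
    d = 2
    while d * d <= n:
        while n % d == 0:
            yield d
            n //= d
        d += 1
    if n > 1:
        yield n

def uniq(seq):
    # yield each distinct element once, preserving first-seen order
    seen = set()
    for x in seq:
        if x not in seen:
            seen.add(x)
            yield x
```

With these, `len(list(uniq(prime_fac(t))))` counts the distinct prime factors of `t`, which is what Problem 47 asks about.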

The run time is about 1m2s.

Now, if I try to optimize it such that I break (line 10) when I find the first number from the end that has fewer than 4 prime factors, the run time should be lower (or at least the same). Right?

Turns out I was wrong… now the thing takes more than 3m to run. What is going wrong?

If any of you have a clue, drop me a comment.

Friday, May 28, 2010

Fun with python’s decorators

Was in need of a utility function that can retry an arbitrary function a few times before giving up - essentially something like Gmail's or Google Reader's behavior when there's no network connection.

Thought it would be a few minutes' job to cook up a decorator utility in Python. Boy, was I wrong! I mean, the basic use case is definitely trivially easy with Python - however, once you want something more useful than that, something that resembles what you'd actually use in production, the complexity goes over the top!

Anyway, I’m figuring out all sorts of fun things about decorators - and all of it the hard way! OTOH, it's a lot of fun to write small test code to test & validate assumptions!

Make no mistake - I’m still a python fanboy :) - just that I'm going through some pains with decorators right now. Will follow this up with a longer/more detailed post that may have some useful insights I’ve gained by then. Thanks for stopping by!
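For the basic use case described above - retrying a function a few times before giving up - a minimal sketch might look like the following. The names, the fixed delay, and the exception handling here are illustrative choices, not the production-grade version alluded to in the post:

```python
import functools
import time

def retry(times=3, delay=1.0, exceptions=(Exception,)):
    """Retry the wrapped function up to `times` times before giving up.

    A bare-bones sketch: a production version would also want logging,
    backoff, and careful handling of decorator stacking.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except exceptions as exc:
                    last_exc = exc
                    if attempt < times - 1:
                        time.sleep(delay)  # wait before the next attempt
            raise last_exc  # all attempts failed; re-raise the last error
        return wrapper
    return decorator
```

Usage is then just `@retry(times=5, delay=2)` above the flaky function.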

Saturday, March 06, 2010

Back after a long time…

Obviously, I’m not writing enough out here… part of the reason being that even though wordpress’s web editor is great, I’d really rather not type gobs of text into a text area.

So eventually, I looked around and found Windows Live Writer. It’s going out on its customary spin :).

So what’s been cooking? Actually a bunch of things over the last several months:

Stuff on which I mean to put up individual posts:

  1. Had a fun exercise benchmarking lighttpd with python wsgi
  2. Been doing some stuff on mysql cluster – mostly around seeing how it compares with the mysql master-master replication setup I had in place.
  3. Dipping my toes into Amazon EC2 finally - though Linode or Rackspace is way easier if you want to just spin up a VM. Amazon’s EC2 does have some interesting stuff (reliability of backups, CDN etc). However, it comes at the cost of a model that is initially hard to understand.
  4. Resin server - heard good things about it, and had to see if it would fit some stuff at work. Disappointed that the free version is really hamstrung.
  5. Apache Wicket: I’ve always hated web UI, and somehow the action oriented frameworks (Struts and their ilk) never appealed from a coupling/cohesion standpoint. In that respect, it seemed like ASP.net got a lot of things right going the component oriented way. However, it seems fatally flawed with stuff like viewstate, postback and so on. On the Java end, I tried Tapestry out, but it comes with too much baggage for my taste. Had been reading about Wicket for some time now and decided to take the plunge - and was pleasantly surprised doing my contrived example:
    1. Took much less to get off the ground compared to Tapestry
    2. Mentally, a lot easier to understand
    3. Managed to realize my goal of exploiting OO techniques to DWIM – even on a simple contrived example.

Books:

  1. Steve Souders' excellent "High Performance Websites" book: if you’re doing anything near a high performance website, then grab this book today!
  2. Wicket In Action
  3. Agile Principles, Patterns and Practices by Robert C Martin: read about the SOLID principles first and then buy this. This is a book to own if you aspire to become a good Agile/OO practitioner. Don’t worry about the C# in the title – it applies universally.

Saturday, January 30, 2010

A new tool for the toolbox!

Firstly - my VM setup:

I'm running Virtualbox with Xubuntu 9.10 on a Win7 host - and it's pretty. It's on an office standard-issue Dell D531 - meaning an AMD Turion X2 TL-60 and 2GB of RAM.

Now, the Turion's supposed to have hw virtualization (AMD-V); however, the moment hw virtualization was enabled in virtualbox and I tried starting the vm, the machine would hard reboot!!!

After searching high and low, it turns out that it's an issue with Dell BIOSes and they don't have any updates. Here's a page that tracks the issue. Imagine my happiness when, a couple of days ago, I found that dell had released an unofficial bios update (T12). Well, it's gone in, and things are running swimmingly well - my VM now has 2 procs, is stable and I hardly feel I'm in a VM :). In fact, this post is coming from the VM - firefox with 12 tabs, a few terminals and emacs running on 600 MB of RAM.

Now let me come to the new tool I was talking about

I like to run the VM full screen - feels best that way. After trying out enough and more virtual desktop software, I have finally settled on VirtuaWin - beats the crap out of other tools, systray integration is great, it has window rules and so on. Over the past couple of weeks, it's come close to the ideal tool - does the job well and you hardly know it's there :-)

Friday, August 28, 2009

Hudson for CI - Tips, Tricks and insights

Just started using Hudson recently and I'm wowed! It's head and shoulders above CruiseControl, and the things that I like a lot are:

  1. Snappy web based config - felt great that I could set up a CI build with essentially the repo path alone

  2. Plugin system!

  3. Deep maven2 integration (though read on below - this isn't always what works)

  4. Trending data OOB - essentially giving you nice charts about how your build is doing over time


Now that I've said all the very nice things about it, here are a few things that were hard to figure out or weren't immediately apparent. If your maven build aggregates modules, you'll find the experience a bit challenging:

  1. The generated site doesn't work: basically, the link is to one of the modules' site instead of a link to the parent project. This is apparently a known issue, and the solution on the hudson user list is to run the site:deploy goal and have a link in the project description point to that url.

  2. Code coverage: none of the coverage tools (EMMA, Clover etc.) support code coverage over a multi module build. Since coverage is very important to me, I eventually resorted to having separate build jobs instead of using the default multi module support. Here's how my svn structure looks:
    [sourcecode language="sh"]
    /trunk/basebuild #contains the parent pom
    /trunk/project1 # pom refers to ../basebuild/pom.xml
    /trunk/project2 # ditto here
    [/sourcecode]

    With the directory structure above, there are build jobs for project1 and project2. Each build job checks out both the project folder (/trunk/project1) and the basebuild folder so that the POM references work.
    One undesirable effect of this setup is that if project2 depends on project1, then the project1 build has to install its artifact to the local repo for the project2 build to work.

  3. Findbugs plugin - running maven builds with findbugs configured hit an Out of Memory (OOM) error and failed the build. I tried setting MAVEN_OPTS to -Xmx512M in a bunch of places and nothing worked. Eventually, it turned out that the right place to specify it is the build section of the Hudson Configure job page!

  4. Violations plugin - this is a great little hudson plugin. However, I couldn't get it to work with the inherited POM setup above. Eventually resorted to using the Findbugs and PMD hudson plugins individually.


I should mention that I'm running hudson 1.321 with the latest plugins. If you have any tips to share on running hudson - please do drop a link in the comments. Overall, a great big 'thank you' to the Hudson folks!

Wednesday, August 26, 2009

Recipe: Unit testing Apache CXF RESTful services

Recently, I decided to use Apache CXF to expose a service with a RESTful API. Part of the reason for choosing REST had more to do with the fact that the client is going to be a mobile client. These days, even though mobile device stacks have come a long way and provide SOAP clients, it still seems prudent not to depend on a whole slew of technologies where plain 'ole HTTP and JSON might do the trick.
As I started exploring CXF, I liked the JAX-RS implementation and decided to go ahead with it - however, almost immediately, I hit a snag when I went on to write test cases. Apache CXF documentation is not quite there, and things do require some investigation - at least initially, till you get a hang of the framework. As it took time to figure out the solution, it makes sense to share it on the blogosphere. Here's how to go about writing the unit tests:

Firstly, the service and the service implementation:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/chat")
@Produces("application/json")
public interface ChatWebService {

    @POST
    @Path("connect")
    public Response connect(@FormParam("user") String username, @FormParam("pass") String password);
}
[/sourcecode]

The service implementation:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

@Produces("application/json")
public class ChatWebServiceImpl implements ChatWebService {

    public Response connect(String username, String password) {
        if (username == null || "".equals(username) ||
                password == null || "".equals(password)) {
            return Response.status(Status.BAD_REQUEST).build();
        }
        String[] response = {username, password};
        return Response.ok(response).build();
    }
}
[/sourcecode]

The corresponding spring context xml (applicationContext.xml) is:

[sourcecode language="xml"]

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p"
    xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
    xmlns:jaxrs="http://cxf.apache.org/jaxrs" xmlns:cxf="http://cxf.apache.org/core"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
        http://www.springframework.org/schema/tx
        http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
        http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd
        http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd">

    <!-- the body of this file (the logging interceptors, the jaxrs:server
         definition with the service bean, and the flexJson MessageBodyWriter
         bean described below) did not survive the conversion of this post -->

</beans>
[/sourcecode]

A few things to note here - logging is turned on using interceptors, and the jaxrs server is defined. I'm also using flexJson to convert arbitrary objects to json - so a MessageBodyWriter bean is also injected into the jaxrs server node. The most important thing is that we haven't included either the cxf-servlet.xml config or the cxf-extension-http-jetty.xml. Essentially, what we want to do is include cxf-servlet.xml for the actual build and, for the test runs, run the service on the bundled jetty server.

So, go ahead and define an applicationContext-web.xml:

[sourcecode language="xml"]
<beans xmlns="http://www.springframework.org/schema/beans">
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <!-- the rest of this file did not survive the conversion of this post -->
    </bean>
</beans>
[/sourcecode]

This is the context xml that we'll provide to the ContextLoaderListener in our web.xml.

For the test cases, define applicationContext-test.xml - this is the context xml which we'll load from the test cases.

[sourcecode language="xml"]
<beans xmlns="http://www.springframework.org/schema/beans">
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <!-- the rest of this file, including the jaxrs:client definition
             mentioned below, did not survive the conversion of this post -->
    </bean>
</beans>
[/sourcecode]

As you can see, we also define a jaxrs:client in the test context xml.

There's one final issue to address - we would ideally like the urls we use to access the service to be the same. The spring jaxrs:server binding takes an address attribute which defines the url the service is hosted on. For deployment onto an external container, this takes the form of "/myservice" - a path element relative to the context location. For the internal jetty hosted service, it takes the full http path (http://localhost:port/my/path/to/service). The easiest way is to set this using a property reference in spring and have applicationContext-web.xml and applicationContext-test.xml load different property files, as shown above.
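For instance, the two property files might look like this (the file names and the property key here are illustrative assumptions - the actual names from the project weren't preserved in this post):

```text
# service-web.properties, loaded by applicationContext-web.xml
# (path relative to the servlet context)
service.address=/chatservice

# service-test.properties, loaded by applicationContext-test.xml
# (full URL of the embedded Jetty endpoint)
service.address=http://localhost:9080/chatservice
```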

For completeness, here's the web.xml:

[sourcecode language="xml"]

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

    <display-name>CXF REST Example</display-name>
    <description>CXF REST Example</description>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:/applicationContext-web.xml</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>CXFServlet</servlet-name>
        <servlet-class>org.apache.cxf.transport.servlet.CXFServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>CXFServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

</web-app>
[/sourcecode]

And finally, here's a junit test case:

base class:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:/applicationContext-test.xml" })
public abstract class AbstractApiTest {

    @Autowired
    @Qualifier("chatclient")
    protected ChatWebService proxy;
}
[/sourcecode]

A test case for the connect API:

[sourcecode language="java"]
package com.aditi.blackberry.web;

import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

import org.junit.Assert;
import org.junit.Test;

public class ConnectApiTest extends AbstractApiTest {

    @Test
    public void testConnect() {
        Response resp = proxy.connect("raghu", "password");
        Assert.assertTrue(resp.getStatus() == 200);
        System.out.println(resp.getEntity().toString());
    }
}
[/sourcecode]

Thursday, January 01, 2009

PIL vs Imagemagick

Decided that I want to timestamp my photo collection with the date from the exif data. Many digicams have an option to do this - unfortunately, my Panasonic DMC-LZ8 doesn't seem to do this. I knew imagemagick would do the trick, but thought it would be a good time to play around with PIL and python.

Here's my PIL effort - functional, but one that came with quite some amount of googling and trying to make sense of the PIL documentation, which is inadequate at best.

[sourcecode language="python"]

from PIL import Image
from PIL import ImageFont, ImageDraw
from PIL.ExifTags import TAGS
from os.path import basename, dirname,join
import logging
import sys
import datetime
import time

# Important: I set out to write the image annotation in PIL - there's one serious drawback though. When saving
# the image, the exif data isn't preserved.

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

def readExif(image):
    info = image._getexif()
    ret = {}
    for tag, value in info.items():
        ret[TAGS.get(tag, tag)] = value
    dt = datetime.datetime(*time.strptime(ret['DateTime'], "%Y:%m:%d %H:%M:%S")[0:6])
    ret['DateTime'] = dt
    return ret

def annotateImage(file):
    i = Image.open(file)
    font = ImageFont.truetype("/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf", 36)
    exif = readExif(i)
    draw = ImageDraw.Draw(i)
    width, height = i.size
    draw.text((width * 0.7, height - 100), exif['DateTime'].strftime("%a %d-%b-%Y  %l:%M %p"), font=font, fill='orange')
    outfile = join(dirname(file), "Ann_" + basename(file))
    i.save(outfile, quality=98)
    logger.debug(outfile + " saved")

if __name__ == "__main__":
    logger.debug("getting exif for " + sys.argv[1])
    for file in sys.argv[1:]:
        logger.debug("Annotating " + file)
        annotateImage(file)

[/sourcecode]

Unfortunately, PIL has a fatal flaw - you can annotate the image and save it, but the saved image doesn't retain the original image's EXIF metadata. I also tried the exiv2 library, but couldn't figure out a way to load the image, annotate it, and then copy over the metadata. Googling around didn't turn up any interesting solutions - so if any of you have ideas, please share.
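In fairness to the Python side, PIL's successor Pillow (which came along well after this post) grew an answer: `Image.open()` keeps the raw EXIF bytes in `img.info`, and JPEG `save()` accepts an `exif=` argument, so the blob can simply be carried over. A sketch under those assumptions (recent Pillow with the `Image.Exif` class; `annotate_keep_exif` is my own name, not a library function):

```python
import os
import tempfile

from PIL import Image, ImageDraw


def annotate_keep_exif(path, text):
    """Annotate a JPEG and write an Ann_ copy that keeps the original EXIF."""
    img = Image.open(path)
    exif_bytes = img.info.get("exif", b"")  # raw EXIF blob, if present
    ImageDraw.Draw(img).text((10, 10), text, fill="orange")
    out = os.path.join(os.path.dirname(path), "Ann_" + os.path.basename(path))
    img.save(out, quality=98, exif=exif_bytes)  # re-attach the EXIF on save
    return out


# Quick self-check against a synthetic JPEG carrying a DateTime tag.
src = os.path.join(tempfile.mkdtemp(), "photo.jpg")
exif = Image.Exif()
exif[306] = "2009:01:01 12:00:00"  # tag 306 = DateTime
Image.new("RGB", (200, 100), "black").save(src, exif=exif.tobytes())

out = annotate_keep_exif(src, "Thu 01-Jan-2009")
print(Image.open(out).getexif()[306])  # -> 2009:01:01 12:00:00
```

The same one-liner idea (`img.save(out, exif=img.info["exif"])`) would have dropped straight into the `annotateImage` function above.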

Meanwhile, as I was getting tired of coaxing PIL into doing what I wanted, I wrote a little bash script to do the same with ImageMagick. It's about as painless as it can be: excellent documentation, hardly any gotchas, a world of options in case you feel creative, and the job got done in 10 minutes. Here's the script:

[sourcecode language="bash"]
#!/bin/bash
# Adds a black 18px bottom border to the pic with the Exif DateTime tag.
# No safety checks :). Original pics are left untouched.
while [ $# -gt 0 ]
do
    file=$1
    shift
    outfile="$(dirname "$file")/Ann_$(basename "$file")"
    echo "$outfile"
    echo "$file"
    date=$(identify -verbose "$file" | grep 'DateTime:' | sed 's/ Exif:DateTime: //;s/:/-/;s/:/-/')
    date="$(date -d "$date" +"%a %d-%b-%Y %l:%M %p")"
    convert "$file" -size 1x18 xc:Black -fill White -background Black -append -gravity Southeast -draw "text 0,0 '$date'" "$outfile"
done
[/sourcecode]

Overall, the experience left me disappointed and dissatisfied with PIL.

Tuesday, October 07, 2008

andLinux with Hardy Heron

andLinux is built on top of coLinux (Cooperative Linux) and basically runs side by side with Windows. andLinux packages the whole thing better (coLinux bundled with Xming and a nice systray app that lets you launch Linux apps right inside Windows).

Here are the details on getting off the ground - and the reason for this post is that though andLinux comes with an installer application, it still needs some fiddling under the hood to make it work. This post is just to make sure I can go through the process again when the time comes.

  1. When installing andlinux, choose the COFS option for making your hard drive visible in Linux

  2. Install with the command line option to launch andLinux (do not install it as a service just yet)

  3. Post installation, tweak andLinux's network setup - set up a couple of virtual TAP adapters. You will have to tweak things on both the Linux side and the Windows side. Basically, you create two TAP adapters - one is a loopback and the other is for sharing your LAN connection. Your wireless network is shared via Slirp (which doesn't need a TAP adapter).

  4. Keep in mind a gotcha - Slirp won't let you ping - so if you have only Slirp working, try a wget www.google.com to check whether you have network connectivity.

  5. Start the andLinux server (if it isn't already running) and make sure that your C drive is shared - at the bash prompt you should be able to do ls /mnt/windows

  6. Do an apt-get update to refresh your package list, then run an upgrade. As of this writing, the only prebuilt image on andlinux.org is Gutsy.

  7. Do an apt-get install update-manager-core

  8. Run do-release-upgrade - you should see apt running and upgrading your system to Hardy.

Monday, June 23, 2008

Compact Ubuntu

I've always hated the fact that Ubuntu's default themes waste far too much space. The buttons are too tall, and the treeview wastes so much space that if you're in Eclipse or some other IDE, you see precious few items on the screen.

I've been trying to tweak it to no end - even looking for ~/.gtkrc-2.0 tweaks. Found a few links such as Making Eclipse look good on Linux - Max's blog - but nothing that really satisfied my need.
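For anyone wanting to try the ~/.gtkrc-2.0 route anyway, the kind of tweak I was experimenting with looks roughly like this (a sketch only - the style name is arbitrary and the values are a matter of taste):

```
# ~/.gtkrc-2.0 - shrink padding on GTK2 widgets
style "compact" {
  xthickness = 1
  ythickness = 1
  GtkButton::inner-border           = { 1, 1, 1, 1 }
  GtkTreeView::vertical-separator   = 0
  GtkTreeView::horizontal-separator = 0
}
class "GtkWidget" style "compact"
```

The catch, as I found, is that themes can override these, which is why a theme built for compactness works better than per-user tweaks.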

And so it stayed until today when I came across Clearlooks Compact Gnome Theme.

I love it - one more for my list of must-haves!

Wednesday, June 18, 2008

Enjoy symlinks and hardlinks on NTFS

Can't believe I didn't come across this before - if you've gotten used to taming your HDD by creating links to folders and have been annoyed by the lack of symlinks and hardlinks on NTFS, then despair no more. I've been using Mark Russinovich's (of Sysinternals fame) junction.exe all this while, and though it works great, I've always wanted something that would integrate with Explorer too. For an in-depth discussion, read http://shell-shocked.org/article.php?id=284. Anyway, I'm extremely happy with NTFS Link - this will surely go into my list of "must-have tools - install immediately on a new machine" :-)