Saturday, January 30, 2010
A new tool for the toolbox!
I'm running VirtualBox with Xubuntu 9.10 on a Win7 host - and it's pretty. It's on an office standard-issue Dell D531 - meaning an AMD Turion X2 TL-60 and 2 GB of RAM.
Now, the Turion is supposed to have hardware virtualization (AMD-V); however, the moment hardware virtualization was enabled in VirtualBox and I tried starting the VM, the machine would hard reboot!
After searching high and low, it turns out that it's an issue with Dell BIOSes, and they don't have any updates. Here's a page that tracks the issue. Imagine my happiness when, a couple of days ago, I found that Dell had released an unofficial BIOS update (T12). Well, it's gone in, and things are running swimmingly well - my VM now has 2 procs, is stable, and I hardly feel I'm in a VM :). In fact, this post is coming from the VM - Firefox with 12 tabs, a few terminals and emacs running on 600 MB of RAM.
Now let me come to the new tool I was talking about.
I like to run the VM full screen - feels best that way. After trying out more than enough virtual desktop tools, I have finally settled on VirtuaWin - beats the crap out of the other tools: systray integration is great, it has window rules, and so on. Over the past couple of weeks, it's come close to the ideal tool - does the job well and you hardly know it's there :-)
Friday, August 28, 2009
Hudson for CI - Tips, Tricks and Insights
- Snappy web-based config - felt great that I could set up a CI build with essentially just the repo path
- Plugin system!
- Deep Maven2 integration (though, as noted below, this isn't always what works best)
- Trending data OOB - essentially giving you nice charts about how your build is doing over time
Now that I've said all the very nice things about it, here are a few things that were hard to figure out or weren't immediately apparent. If your Maven build aggregates modules, you'll find the experience a bit challenging.
- The generated site doesn't work: basically, the link points to one of the modules' site instead of the parent project's. This is apparently a known issue, and the solution on the Hudson user list is to run the site:deploy goal and put a link to that URL in the project description.
- Code coverage: none of the coverage tools (EMMA, Clover, etc.) support code coverage over a multi-module build. Since coverage is very important to me, I eventually resorted to separate build jobs instead of using the default multi-module support. Here's how my svn structure looks:
[sourcecode language="sh"]
/trunk/basebuild #contains the parent pom
/trunk/project1 # pom refers to ../basebuild/pom.xml
/trunk/project2 # ditto here
[/sourcecode]
With the directory structure above, there are build jobs for project1 and project2. Each build job checks out both the project folder (/trunk/project1) and the basebuild folder so that the POM references work.
One undesirable effect of this setup is that if project2 depends on project1, then the project1 build has to install its artifact to the local repo for the project2 build to work.
- Findbugs plugin - Running Maven builds with Findbugs configured threw an Out of Memory (OOM) error and failed the build. I tried setting MAVEN_OPTS to -Xmx512M in a bunch of places and nothing worked. Eventually, it turned out that the right place to specify it is the build section of the Hudson Configure job page!
- Violations plugin - This is a great little Hudson plugin. However, I couldn't get it to work with the inherited POM setup above. I eventually resorted to using the Findbugs and PMD Hudson plugins individually.
I should mention that I'm running Hudson 1.321 with the latest plugins. If you have any tips to share on running Hudson, please do drop a link in the comments. Overall, a great big 'thank you' to the Hudson folks!
Wednesday, August 26, 2009
Recipe: Unit testing Apache CXF RESTful services
As I started exploring CXF, I liked the JAX-RS implementation and decided to go ahead with it. However, I almost immediately hit a snag when I went on to write test cases. The Apache CXF documentation is not quite there, and things require some investigation - at least initially, till you get the hang of the framework. Since it took time to figure out the solution, it makes sense to share it. Here's how to go about writing the unit tests:
Firstly, the service and the service implementation:
[sourcecode language="java"]
package com.aditi.blackberry.web;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;
@Path("/chat")
@Produces("application/json")
public interface ChatWebService {
    @POST
    @Path("connect")
    public Response connect(@FormParam("user") String username, @FormParam("pass") String password);
}
[/sourcecode]
The service implementation:
[sourcecode language="java"]
package com.aditi.blackberry.web;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;
@Produces("application/json")
public class ChatWebServiceImpl implements ChatWebService {
    public Response connect(String username, String password) {
        if (username == null || "".equals(username) ||
                password == null || "".equals(password)) {
            return Response.status(Status.BAD_REQUEST).build();
        }
        String[] response = { username, password };
        return Response.ok(response).build();
    }
}
[/sourcecode]
The corresponding spring context xml (applicationContext.xml) is:
[sourcecode language="xml"]
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:jaxrs="http://cxf.apache.org/jaxrs" xmlns:cxf="http://cxf.apache.org/core"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
http://www.springframework.org/schema/aop
http://www.springframework.org/schema/aop/spring-aop-2.5.xsd
http://www.springframework.org/schema/tx
http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd
http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd">
<!-- interceptor, jaxrs:server and MessageBodyWriter bean definitions elided -->
</beans>
[/sourcecode]
A few things to note here: logging is turned on using interceptors, and the jaxrs server is defined. I'm also using FlexJSON to convert arbitrary objects to JSON - so a MessageBodyWriter bean is injected into the jaxrs:server node. The most important thing is that we haven't included either the cxf-servlet.xml config or the cxf-extension-http-jetty.xml. Essentially, what we want is to include cxf-servlet.xml for the actual build, and for the test runs, run the service on the bundled Jetty server.
So, go ahead and define an applicationContext-web.xml:
[sourcecode language="xml"]
[/sourcecode]
This is the context xml that we'll provide to the ContextLoaderListener in our web.xml.
For the test cases, define applicationContext-test.xml - this is the context xml which we'll load from the test cases.
[sourcecode language="xml"]
[/sourcecode]
As you see, we also define a jaxrs:client for the test context xml.
There's one final issue to address - ideally, we'd like the URLs used to access the service to be the same in both setups. The Spring jaxrs:server binding takes an address attribute which defines the URL the service is hosted on. For deployment onto an external container, this takes the form "/myservice" - a path element relative to the context location. For the internal Jetty-hosted service, it takes the full HTTP path (http://localhost:port/my/path/to/service). The easiest way is to set this using a property reference in Spring and have applicationContext-web.xml and applicationContext-test.xml load different property files.
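To make the property trick concrete, here's a minimal sketch - note that the bean id, property name and file names below are invented for illustration; only the PropertyPlaceholderConfigurer class and the jaxrs:server address attribute are the real moving parts:

```xml
<!-- in both contexts: resolve ${...} placeholders from a per-context file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <!-- applicationContext-web.xml would point at service-web.properties instead -->
    <property name="location" value="classpath:service-test.properties"/>
</bean>

<!-- the address differs per environment, e.g.:
     service-web.properties:  service.address=/chatservice
     service-test.properties: service.address=http://localhost:9000/chatservice -->
<jaxrs:server id="chatServer" address="${service.address}">
    <jaxrs:serviceBeans>
        <ref bean="chatServiceImpl"/>
    </jaxrs:serviceBeans>
</jaxrs:server>
```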
For completeness, here's the web.xml:
[sourcecode language="xml"]
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
<!-- listener and servlet definitions elided -->
</web-app>
[/sourcecode]
And finally, here are the JUnit test cases. The base class:
[sourcecode language="java"]
package com.aditi.blackberry.web;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:/applicationContext-test.xml" })
public abstract class AbstractApiTest {
    @Autowired
    @Qualifier("chatclient")
    protected ChatWebService proxy;
}
[/sourcecode]
A test case for the connect API:
[sourcecode language="java"]
package com.aditi.blackberry.web;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;
import org.junit.Assert;
import org.junit.Test;
public class ConnectApiTest extends AbstractApiTest {
    @Test
    public void testConnect() {
        Response resp = proxy.connect("raghu", "password");
        Assert.assertEquals(200, resp.getStatus());
        System.out.println(resp.getEntity().toString());
    }
}
[/sourcecode]
Thursday, January 01, 2009
PIL vs Imagemagick
Here's my PIL effort - functional, but it came with a fair amount of googling and trying to make sense of the PIL documentation, which is inadequate at best.
[sourcecode language="python"]
from PIL import Image
from PIL import ImageFont, ImageDraw
from PIL.ExifTags import TAGS
from os.path import basename, dirname,join
import logging
import sys
import datetime
import time
# Important: I set out to write the image annotation in PIL - there's one serious
# drawback though. When saving the image, the exif data isn't preserved.
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')
logger = logging.getLogger()
logger.level = logging.DEBUG

def readExif(image):
    info = image._getexif()
    ret = {}
    for tag, value in info.items():
        ret[TAGS.get(tag, tag)] = value
    dt = datetime.datetime(*time.strptime(ret['DateTime'], "%Y:%m:%d %H:%M:%S")[0:6])
    ret['DateTime'] = dt
    return ret

def annotateImage(file):
    i = Image.open(file)
    font = ImageFont.truetype("/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans-Bold.ttf", 36)
    exif = readExif(i)
    draw = ImageDraw.Draw(i)
    width, height = i.size
    draw.text((width * 0.7, height - 100), exif['DateTime'].strftime("%a %d-%b-%Y %l:%M %p"),
              font=font, fill='orange')
    outfile = join(dirname(file), "Ann_" + basename(file))
    i.save(outfile, quality=98)
    logger.debug(outfile + " saved")

if __name__ == "__main__":
    logger.debug("getting exif for " + sys.argv[1])
    for file in sys.argv[1:]:
        logger.debug("Annotating " + file)
        annotateImage(file)
[/sourcecode]
Unfortunately, PIL has a fatal flaw - you can annotate the image and save it, but the saved image doesn't retain the original image's exif metadata. I also tried the exiv2 library, but couldn't figure out a way to load the image, annotate it and then copy over the metadata. Googling around didn't turn up any interesting solutions - so if any of you have ideas, please share.
Meanwhile, as I was getting tired of coaxing PIL to do what I wanted, I just wrote a little bash script to do the same in ImageMagick. It's as painless as can be, comes with excellent documentation, hardly any gotchas, and a world of options in case you feel creative - and the job gets done in 10 minutes. Here's the bash script:
[sourcecode language="sh"]
#! /bin/bash
# script adds a black 18px bottom border to the pic with the Exif datetime tag
# no safety checks :). Original pics are left untouched.
while [ "x$*" != "x" ]
do
    file=$1
    shift
    outfile="$(dirname "$file")/Ann_$(basename "$file")"
    echo "$outfile"
    echo "$file"
    date=$(identify -verbose "$file" | grep 'DateTime:' | sed 's/ Exif:DateTime: //;s/:/-/;s/:/-/')
    date="$(date -d "$date" +"%a %d-%b-%Y %l:%M %p")"
    convert "$file" -size 1x18 xc:Black -fill White -background Black -append -gravity Southeast -draw "text 0,0 '$date'" "$outfile"
done
[/sourcecode]
Overall, the experience left me disappointed and dissatisfied with PIL.
Tuesday, October 07, 2008
andLinux with Hardy Heron
Here are the details on getting off the ground. The reason for this post is that, though andLinux comes with an installer application, it still needs some fiddling under the hood to make it work. This post is mostly so I can go through the process again when the time comes.
- When installing andlinux, choose the COFS option for making your hard drive visible in Linux
- Install with the command line option to launch andLinux (do not install it as a service just yet)
- Post-installation, tweak andLinux's network setup - set up a couple of virtual TAP adapters. You will have to tweak things both on the Linux side and on the Windows side. Basically, you create two TAP adapters - one as a loopback and another for sharing your LAN connection. Your wireless network is shared via Slirp (which doesn't need a TAP adapter).
- Keep in mind a gotcha - Slirp won't allow you to ping - so if you only have Slirp working, try a wget www.google.com to check if you have network connectivity.
- Start the andLinux server (if it isn't already running) and make sure that your C drive is shared - at the bash prompt you should be able to do ls /mnt/windows
- Do an apt-get update to refresh your package list. As of this writing, the only prebuilt image on andlinux.org is Gutsy.
- Do an apt-get install update-manager-core
- Run do-release-upgrade - and you should see apt running and updating your system to Hardy.
Monday, June 23, 2008
Compact Ubuntu
I've been trying to tweak the Ubuntu GTK look to no end - even digging for ~/.gtkrc-2.0 tweaks. Found a few links, such as Making Eclipse look good on Linux - Max's blog - but nothing really satisfied my need.
And so it stayed, until today, when I came across the Clearlooks Compact Gnome Theme.
I love it - one more for my list of must-haves!
Wednesday, June 18, 2008
Enjoy symlinks and hardlinks on NTFS
Upgrade blues - upgrading to Firefox 3 final from Firefox RC 3
Anyway, off I went to Mozilla.org and downloaded a copy of the final - and did my bit towards FF download day. Happily installed it - all defaults as usual. The installer told me that it was installing into the same location as my current installation (c:\program files\mozilla firefox 3 beta 1 - that's where my FF3 installs have been going, all the way from b1 to b5 and then from rc1 to rc3 - so no surprise).
Well, the installation completed successfully, and I started FF3 - but my title bar still says Build 2008052906 - even the file version has the same build ID.
Something's up - don't know what yet - but has anyone else had a similar experience?
Monday, June 16, 2008
Desultory Monday...
Well, It's All Text is great if you hate typing into web forms with textboxes that make editing such a big pain in the butt.
It's great to see that It's All Text has been updated to work with FF 3.0 now. The fun will be seeing if this works on Windows with cygwin emacs as the editor. Had problems the last time I tried that - but that was some time ago now.
Today's been a desultory Monday. Spent some time getting emacs snapshot with pretty fonts on my Hardy. It's beautiful.
The next thing has been mostly scratching my head over Hadoop. What I'd like to do is parse an access log and generate multiple outputs - i.e. a single input of gobs of web access logs and multiple outputs - with, say, requests by country, popular pages, % share of client browsers and so on.
- parse web log
- pull out remote ips and use geo ips to find the originating country
- pull out user agent field and figure out browser distribution.
- Filter the requested resource and pull out only pages - find pages by popularity
Now there seem to be quite a number of ways of doing this:
- Code the whole thing in Java - and this is where I'm getting into analysis paralysis. Look at ways to generate multiple outputs from MapReduce and then use Job and JobControl to set up the pipeline.
- Use Pig - the examples on the Pig overview page seem to suggest that this should be trivial.
- Use Cascading - seems to do the same thing - will need to do this in JRuby or Groovy though.
Will post an update once I get through the Java route.
Thursday, June 12, 2008
VPN into Windows VPN Server from Ubuntu *Hardy* Intrepid
Ok - this was easy - and while there's some resources on google, I had to figure out a few itty bitty things for my work VPN setup.
install
- network-manager-pptp
- pptp-linux
Restart network manager with
killall nm-applet
sudo /etc/init.d/dbus restart
nm-applet --sm-disable &
Configure VPN settings
Click on the network manager applet and click on VPN connections
- Create a new VPN connection
- Ensure that you select Refuse CHAP in the authentication tab.
- In the routing tab, you can give netmasks that need to go through VPN - for my work network, I have: 10.10.5.0/24 172.16.106.0/24
That's it. Now click on the Network applet, and connect to your VPN. In the authentication dialog, use <domain>\username and your windows domain password.
Thursday, June 05, 2008
Thursday, May 01, 2008
Water droplets - dipping a toe in macro photography
So, one of these long time itches has been to take a water droplet splash - you know, the immensely close up snaps where you see a single drop splashing...
Here are the snaps after two evenings of trial and error (mostly errors, though) - feeling quite smug with myself :)
Wednesday, April 02, 2008
Free subversion hosting - What's the best?
Will see how it goes.
Firefox 3 beta 5 released. Yahoo Mail is still broken.
Installed it as soon as I got to know this morning, and the first thing to check was whether Yahoo Mail still crashed. Initially, Yahoo Mail seemed to work alright - for all of 50 seconds. Quickly moving over items in the inbox caused Firefox to crash :-(
Guess I'll wait some more. I'm sure there's a bug report somewhere on this - Yahoo Mail was broken on Beta 2, got fixed in Beta 3, then was broken in Beta 4 and is still broken on Beta 5.
Will wait for it to be fixed - any idea if this is a Firefox issue or a Yahoo! issue? It seems odd that script can cause the browser to crash so badly.
Monday, March 31, 2008
Hardy heron - first impressions
1. Wubi install from within Windows is easy and works great. If, after setting up so many boxes, I can go on and on about it, I'm sure it's a great help for anyone on Windoze. The barrier to entry has never been so low.
2. I guess once you've installed via Wubi and configured your system to your liking, you can uninstall and take an image that you finally install to a dedicated partition - isn't that just awesome?
3. Comes installed with Firefox 3b4 - which is awesome. Given that FF crashes badly on Yahoo, this might be a bummer for many people. There should probably be some first-time customization that lets you install Opera.
4. Installation is super fast - took about 10 mins for Wubi to install, reboot once, finish installation and reboot again. Defaulting Grub to 'last selected' would probably be a better idea.
The not so good
1. Wifi doesn't work out of the box - it didn't on my Dell Inspiron 1501 or the Dell Latitude D620. It's ye olde Broadcom problem. This is really the BIGGEST turn-off. Hope it gets fixed by the time the final release is out. Meanwhile, I had to jump through hoops getting ndiswrapper in. I didn't go the Broadcom fwcutter way, since from what I read, that only allows an 802.11b connection. I'm still not sure what fixed the issue - irrespective, I had to update the system, and then things started working like a charm.
2. Compiz configuration isn't installed by default. If this is your first time on Ubuntu and you've come this way to see the awesome 3D desktop, then this is a bummer. Finding out what you need to do is a pain too.
I think that's all there is to it. It's great once wifi starts working normally.
Friday, March 28, 2008
Gnuplot, dstat - easy graphing on Linux
First off, vmstat doesn't lend itself well to graphing without additional scripts to lay out the data so that tools like gnuplot can be used. Secondly, and more seriously, it doesn't include a timestamp in the output.
Looking around a bit found that dstat seems to be a good replacement to vmstat (and iostat) - and the generated data is consumable with gnuplot.
Here's a quick example of generating graphs for CPU user, system and idle times:
dstat -tc 5 500 > dstat.raw
Now fire up gnuplot and plot it:
gnuplot> set xdata time
gnuplot> set timefmt "%s"
gnuplot> set format x "%M:%S"
gnuplot> plot "dstat.raw" using 1:2 title "User" with lines, "dstat.raw" using 1:3 title "Sys" with lines, "dstat.raw" using 1:4 title "Idle" with lines
To make gnuplot generate an output file, you need:
gnuplot> set term png
gnuplot> set output "dstat.png"
gnuplot> replot
And you're done - here's the graph generated on my machine. There's loads more that you can do - and admittedly, you could do everything by dumping your file into Excel. However, that doesn't lend itself to a completely automated process. When you're doing performance testing and the like, you will likely repeat this often enough. Not having to do it manually helps big time!
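To that end, the whole capture-and-plot cycle can be scripted. Here's a minimal sketch under some assumptions: the script name, file names and the stand-in sample data are made up, and the dstat column numbers may differ on your dstat version - the gnuplot commands themselves are the ones from above.

```shell
#!/bin/bash
# plot_dstat: render a dstat capture (epoch-time first column) to a PNG.
plot_dstat() {
    local raw=${1:-dstat.raw} png=${2:-dstat.png}
    command -v gnuplot >/dev/null 2>&1 || { echo "gnuplot not installed" >&2; return 0; }
    gnuplot <<EOF
set xdata time
set timefmt "%s"
set format x "%M:%S"
set term png
set output "$png"
plot "$raw" using 1:2 title "User" with lines, "$raw" using 1:3 title "Sys" with lines, "$raw" using 1:4 title "Idle" with lines
EOF
}

# In real use, capture first: dstat -tc 5 500 > dstat.raw
# Stand-in sample data so the sketch is self-contained:
printf '1 10 5 85\n2 20 10 70\n3 15 5 80\n' > dstat.raw
plot_dstat dstat.raw dstat.png
```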
Thursday, March 27, 2008
Working with huge XML files - tools of the trade.
XMLSpy, vi, emacs and notepad++ all died - and trying to do something with an 80 GB XML file where the 80 gigs are on a single line isn't much fun. So the first order of business was to pretty-print the XML. XMLStarlet worked great -
xmlstarlet fo file.xml > output.xml
and you're done.
The next order of business was to validate the XML document against a schema. Our first attempt was with Sun's Multi Schema Validator (MSV). MSV does not validate the whole document; instead, it stops after a certain number of failures. So, MSV out, XMLStarlet in. XMLStarlet can validate documents against a W3C schema, a DTD or a RELAX NG schema.
xmlstarlet val --err --xsd schema.xsd input.xml > errors.txt
And presto! - you get an error report that you can slice and dice with sed/awk or anything else at all.
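For instance, a quick cut/sort/uniq pipeline gives a frequency table of the distinct error messages. The sample lines below only mimic the "file:line: message" shape of the report - check the actual format your xmlstarlet version emits before relying on the field numbers:

```shell
# Build a small stand-in error report in the shape "file:line: message"
cat > errors.txt <<'EOF'
input.xml:12: Element 'item': This element is not expected.
input.xml:90: Element 'item': This element is not expected.
input.xml:204: Element 'price': Missing child element(s).
EOF

# Drop the file:line prefix, then count each distinct message
cut -d: -f3- errors.txt | sort | uniq -c | sort -rn
```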
XMLStarlet also allows you to write XPaths to query the XML - however, I found the syntax weird and roundabout. A better alternative is a Perl-based solution - XSH2, a command-line XML editing shell. You can install it under cygwin, and it supports basic command pipelining and redirection.
So go ahead and launch XSH. At your cygwin prompt
[~]xsh
---------------------------------------
xsh - XML Editing Shell version 2.1.1
---------------------------------------
Copyright (c) 2002 Petr Pajas.
This is free software, you may use it and distribute it under
either the GNU GPL Version 2, or under the Perl Artistic License.
Using terminal type: Term::ReadLine::Gnu
Hint: Type `help' or `help | less' to get more help.
$scratch/>
Now, let's load up our document. Type
$scratch/>$x:=open formatted.xml
Your prompt changes to
$x/>
So go ahead and try a few xpaths
$x/> ls /path/to/node
and XSH prints out the matching nodes. Now, what if you need to create a document fragment of the nodes matching a certain XPath? Piece of cake - go ahead:
$x/> ls /path/to/node | tee fragment.xml
XSH2 has many, many more features - but this should be good enough to get you off the ground.
Saturday, February 09, 2008
Yahoo! mail fixed for Firefox 3 beta 2
Was pleasantly surprised today morning to see that Yahoo! mail beta now works properly in FF3b2. Thanks!
Wednesday, January 23, 2008
Pesky little bash quoting problem
Anyway, this post is mostly for self-reference :) and to put down some simple rules in the hope that writing them down will help commit them to memory.
The latest (mis)adventure was making IrfanView run under wine, with a little script to let IrfanView open a file provided on the command line. IrfanView being a windoze executable, it's necessary to cd to the folder and then pass the file as an argument. Trivial, isn't it... until I found that the script fell over when it got a path like /path/to/a folder with spaces/image.jpeg.
[sourcecode language="sh"]
#! /bin/bash
DIRNAME=$(dirname "$1")   # double quotes necessary - $1 could have embedded spaces
FILENAME=$(basename "$1")
echo "$DIRNAME"
echo "$FILENAME"
cd "$DIRNAME"             # once more, double quotes necessary
irfanview "$FILENAME"     # quoted too - the filename itself may contain spaces
[/sourcecode]
Golden Rule
When passing a path as argument, always enclose in double quotes.
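A two-line experiment makes the rule concrete - the same path counts as four arguments unquoted but one argument quoted (the path below is made up):

```shell
# count_args prints how many arguments it received
count_args() { echo $#; }

path="/path/to/a folder with spaces/image.jpeg"
count_args $path     # unquoted: word-split into 4 arguments
count_args "$path"   # quoted: passed through as 1 argument
```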
Thursday, January 10, 2008
Firefox 3 Beta 2 on Ubuntu Gutsy
Installed firefox 3 beta 2 from Mozilla to /usr/lib/firefox3b2 folder and created
lrwxrwxrwx 1 root root 27 2007-12-30 23:44 /usr/bin/firefox-3b2 -> /usr/lib/firefox3b2/firefox
When I launch firefox3b2, I get firefox alright, however, in the location bar if I type in a url and press Enter, nothing happens - absolutely nothing at all. I have to go and click the green arrow for the browser to open the URL. The search box is even weirder - neither the Enter key works nor does the mouse!
I'm at a loss - nor can I find any similar experiences on the forums - ideas welcome :D
SOLVED 01/20: Backed up my .mozilla folder and started Firefox 3 b2 afresh - no problems now :D
HOWTO: Access your machine from the internet without a static IP
Typically, when you type in www.google.com in your browser, your machine performs a DNS (Domain name service) lookup with the DNS servers from your ISP to find out the IP address corresponding to www.google.com. With DDNS (dynamic DNS) this is made to work with your dynamically allocated IP address also. Here's how it works
- Register with a DDNS service provider. Service providers offer free accounts for personal use - go to www.dyndns.org
- Once you've created your account, go ahead and set up your hostname. DDNS service providers have some domains that you can choose from, and you get to pick the host part. For a fee, you can also use a domain name of your choice.
- If your setup has a router, check your router administration page to see if it supports dynamic DNS. If it does, you need to enter the hostname, account and password. Every time your router connects to the internet, it sends the DDNS service an update with the new IP obtained from your ISP, and the DDNS service updates the DNS record for your hostname.
- If you don't have a router, download the DDNS client software from the service provider. Most DDNS providers have Windows, Mac and Linux clients. These run on your machine and do the same thing - notify the DDNS service provider of your new IP whenever you establish a connection with your ISP.
- If you've got all this set up, then you can reach your machine from the net - try ping <your host name>
If you're running Linux/Ubuntu, make sure you're running the SSH service and try ssh <your host name>. If you have a router, you will need an additional step - the DDNS name refers to your router's IP, not the machine behind the router that you wish to reach. You will also need to make sure that your machine has a static IP on your LAN. To set this up, go to your router administration page.
- Go to the LAN section and give the DHCP pool a range that excludes your static IP. Most routers have LAN addresses like 192.168.x.y. If you want your host to have an IP address of 192.168.1.100, then give a DHCP range that does not include this IP - say 192.168.1.110 - 192.168.1.200.
- Save and reboot your router.
- Now go to your machine's network settings and enter your static IP (192.168.1.100), netmask 255.255.255.0 and gateway (usually 192.168.1.1).
- Go to your router administration page and look for a section like 'virtual server' - your router can forward packets received on a particular port to a host and port within your LAN. Enter the external port (we'll use 22), the internal machine to forward to (192.168.1.100) and the port to forward to (22). With this in place, any packets received on port 22 (ssh) on your router will be forwarded to the 192.168.1.100 machine on the ssh port.
- Save and reboot your router.
- Give it a spin.
From a different machine (or the same one - doesn't matter), try ssh <your host> and you should be able to log in to your machine - via the internet.
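Once it's working, a quick way to confirm the mapping from a shell is to resolve the name yourself. A small sketch, assuming getent is available (as on most Linux boxes) - the hostname below is made up:

```shell
# resolve_ip prints the first address a hostname resolves to
resolve_ip() { getent hosts "$1" | awk '{ print $1; exit }'; }

# e.g. check that your DDNS name resolves to your router's current IP
resolve_ip myhome.dyndns.org
```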
Thursday, January 03, 2008
Back in circulation
Did a few fun things in the interim, and it's been ages since I've added anything to this blog. Will summarize for now and put up longer posts with more details in case someone's interested.
- Fixed my windows C drive which was running out of space - used trusty old windirstat for that.
- Set up wifi at home with the ADSL modem from BSNL - an MT800. Again, not as straightforward as I'd thought.
- Replaced old PCQ Linux 2006 with Ubuntu Gutsy - without losing stuff :D. You need /home in a separate partition, but otherwise this is a breeze.
- Having fun with compiz-fusion. It's great - however, the documentation isn't easily locatable/consumable for first-timers (me).
- Set up a DNS caching proxy on my Linux box - has improved my net/web experience a hundredfold. Was a piece of cake too.
- Set up dynamic DNS and remote SSH access to my box - this has been the single most important utility/maintenance action.
More later.
Tuesday, July 03, 2007
Sluggish Firefox - and what a hog!
I first suspected my ISP (verizon) for frequent dropped connections (saw the DSL modem lights reset a couple of times a day), then my wifi modem (not a high end one), then spyware/malware. So after the usual barrage of tests - wifi interference sources/anti virus/anti spyware/cable tests etc, I still hadn't nailed it.
Finally, I used procexp (you can use plain 'ole Task Manager too - this is just flashier) and saw that FF was using 330 MB of RAM with 3 or 4 tabs. Also, just clicking on a text box was slow, and typing into a text field would echo characters after a noticeable delay - so this was definitely a browser problem.
When you have eliminated the impossible, whatever remains, however improbable, must be the truth.
-- Sherlock Holmes
Ain't that an apt quote? I love the addons I have, and probably I have one too many. This page on problematic addons was a life-saver - after disabling a bunch of infrequently used addons (StumbleUpon toolbar, Google toolbar, Browser Sync, Adblock Filterset.G, FoxyTunes and some more), I'm back in browsing heaven. The only addons I now have enabled are
- diigo toolbar
- all in one gestures
- Adblock
- Flashgot
- Piclens
What a relief!
BTW, addons are also the latest attack vector. So be wary of who you let into your browser!
Thursday, June 28, 2007
Piclens - full screen slideshows with flickr (and others)
It's a great add-in for Firefox - and integrates with Flickr to give you full-screen slideshows à la Picasa on your machine! It's a bit tricky to figure out how to get it to work - just hover over any picture on any page and click on the blue bubbly overlay button that appears.
Python, cygwin, TurboGears, mysql hell
First things first - I decided to use MySQL as the database (I already have it on my machine and didn't want to install another database - PostgreSQL/SQLite). Now, it turns out that MySQL doesn't have a cygwin package. More googling - the MySQL server can't run on cygwin due to something to do with pthreads. You can compile the MySQL client on cygwin, though.
That's what I decided to do - grabbed the Linux tar.gz source from mysql.com, unpacked it into a directory and ran ./configure --without-server, followed by make && make install. All went through fine - other than being time-consuming and pretty boring (more so since I had to download and install gcc and binutils first in cygwin).
I thought I'd got through the hard part, and what remained was to install the Python MySQLdb package. Off I went:
easy_install MySQLdb
No luck there - the package build failed with a missing library, -lmysqlclient_r. Turns out that the 'thread-safe' version of the MySQL client (mysqlclient_r) is preferred, but the MySQL build doesn't produce it by default. What a shame!
Anyway, I wasn't going to redo the whole MySQL client library build - more README files and googling later, I grabbed the mysql-python-1.2.2 tarball, unpacked it into a folder, edited site.cfg and changed 'threadsafe' to 'false'. The next run of python setup.py build worked properly, with the Python MySQL module linking against the non-thread-safe mysqlclient library.
Think troubles are over yet? No way.
Off I went to test - started the python interpreter, did an import MySQLdb, and got a Permission Denied in some 'egg' file! What the heck are egg files anyway? I didn't have much of a clue, and more googling later got educated that these are install packages used by the easy_install system. The more I looked, the more it seemed that easy_install is anything but easy :(. Anyway, this one had me floored - I couldn't get to the line of source where the error was, and had no clue how to view the contents of an 'egg'. (They're zips - but I didn't know that, and there's hardly anyplace where they tell you that eggs are just zips with a different extension! Baah! Why couldn't they just use .zip?)
More hard googling - the info's really sketchy this time - till I eventually found a post from a guy who asked the exact same question. Guess what: easy_install unzips the eggs to a folder (pointed to by the PYTHON_EGG_CACHE env var), and there I needed to do a chmod a+x on _mysql.dll. So I did echo $PYTHON_EGG_CACHE - and the var isn't set! Admittedly, at this point I'm not looking sharp either - what started out as a quick spin has become a quagmire of installation issues - but I'll be damned if I let it sink me! Eventually had the Eureka moment and checked ~/.python-eggs, and sure enough found the truant _mysql.dll. A quick chmod a+x and presto - import MySQLdb worked! YAHOO!
And now back to where I started - went back to turbogears, did a tg-admin quickstart, set up a mysql database and started with python start-testproject.py. Guess what - no luck yet - turns out that the mysql client can't connect to my windows server over a socket.
More googling - really desperate this time - and more enlightenment: the windows mysqld doesn't do unix sockets. So how do I force tcp/ip? Simple - use 127.0.0.1 as the hostname in the connection settings instead of localhost! Finally, something that was easy to fix. After 8 hours of on-and-off hacking away at installation issues, I'm glad to see a turbogears web page.
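To make that concrete, here's a minimal sketch (pure string munging; the dburi value and names below are made up for illustration) of what the fix amounts to in an SQLObject-style connection string - swap the host so the client goes over TCP instead of a unix socket:

```python
def force_tcp(dburi):
    """Rewrite the host in a mysql://user:pass@host/db style dburi so the
    client connects over TCP/IP (windows mysqld doesn't do unix sockets)."""
    return dburi.replace("@localhost/", "@127.0.0.1/")

# hypothetical dev.cfg value for the quickstart project
print(force_tcp("mysql://tguser:secret@localhost/testproject"))
# mysql://tguser:secret@127.0.0.1/testproject
```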
Bottom line: Python's great, and from the looks of it, turbogears seems well designed. MySQL is a great database - a cygwin native server would be great, or at least a client package. But if one has to run through all these hoops just to get a 'quick spin', then adoption's going to be difficult.
I haven't tried RoR - but has someone tried a similar setup on cygwin (cygwin, ruby, RoR, mysql)? How does the experience compare - is it any easier to get off the ground?
Thursday, May 24, 2007
Maven Emma plugin - filters don't work
Posted the bug and the fix to Maven's sourceforge developer forum. You can find it here
The fixed plugin.jelly's here - Maven emma plugin 0.5 fixed
Friday, May 04, 2007
Building a FC6 server for Java development
- OS installation was smooth. I installed with KDE since some folks prefer the GUI.
- During the installation, I chose Java Development - and all the gcj components got installed. Post installation, I downloaded the JDK from Sun and installed it too. The (gcj) tomcat of course didn't work very well in this mess. Fixing it was easy - just fired up yum via Add or Remove Programs and removed gcj, which removed all the other gcj tools and libraries too.
- Setting the host name - unexpectedly, had trouble doing this - finally used system-config-network.
- Environment variables - again, a little digging around to figure out how to set environment variables so that they are effective for all users on the system - essentially JAVA_HOME, JRE_HOME, CATALINA_HOME, MAVEN_HOME etc. Found that these are best set in /etc/environment, where they apply to all users.
- Downloaded java tools and libraries - housed tomcat and maven under /usr/lib/java/maven-x.x and /usr/lib/java/apache-tomcat-x.xx and created symlinks. Placed symlinks to tomcat's startup.sh and shutdown.sh in /usr/bin. Placing a symlink to maven doesn't work - but an alias works just as well; to make it work for all users, simply put it in /etc/profile:
alias maven=/usr/lib/java/maven/bin/maven
- Created a group java and added users to the group. Set permissions on the maven installation folder so that the java group has write access (this is so that maven plugin:download for additional plugins works properly and can write to the maven plugins folder).
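Putting the last few bullets together, this is roughly what the two files end up looking like (the exact paths here are examples, not gospel):

```
# /etc/environment - picked up for all users
JAVA_HOME=/usr/lib/java/jdk
JRE_HOME=/usr/lib/java/jdk/jre
CATALINA_HOME=/usr/lib/java/tomcat
MAVEN_HOME=/usr/lib/java/maven

# /etc/profile - an alias, since a symlink to maven doesn't work
alias maven=/usr/lib/java/maven/bin/maven
```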
That's all there is to it. Just make sure that you do everything as the 'application' user - don't do it with your personal account or the root account.
Tuesday, May 01, 2007
Ant - Debugging classpath
Essentially, flatten the path reference into a property with pathconvert and echo it before compiling:
<?xml version="1.0"?>
<project name="project" default="default">
<property name="lib" value="web/WEB-INF/lib"/>
<property name="src" value="src"/>
<property name="dist" value="dist"/>
<path id="classpath">
<fileset dir="${lib}">
<include name="**/*.jar"/>
</fileset>
</path>
<target name="default" description="--> compile, echoing the classpath first">
<pathconvert property="classpath.echo" refid="classpath"/>
<echo message="classpath = ${classpath.echo}"/>
<javac srcdir="${src}" destdir="${dist}">
<classpath refid="classpath"/>
</javac>
</target>
</project>
Sunday, April 22, 2007
Cobertura vs EMMA
We've been using Cobertura at work till now and it's done its job nicely - the reports look great and the maven 1.x integration, while not neat, is functional. We knew that someday we'd have to merge coverage data and produce a single report across multiple test methods (junit, selenium tests and manual), but the cobertura documentation stated that this was possible, so we weren't really bothered.
Thought I'd give it a whirl and set it up - and that's when the trouble started. At least, with the maven integration.
First of all - there's no way to just instrument code. You can generate the report (which will instrument classes and run the tests), but if you just want to instrument classes so that the final deployable contains instrumented classes, it's a no-go with the maven plugin goals.
Obviously, no point giving up there - so I thought I'd just include the ant tasks and go the ant way in my maven goal. Turns out there's no 'plugin init' kind of goal that can be called post build:start to set up dependencies and import the cobertura ant tasks. You have to do it all yourself. Fine - went that way too - so now my maven.xml uses the cobertura ant tasks and finally I'm able to generate an instrumented build. YIPPEE... or wait... let's just make sure this thing works...
Does it?
Turns out - no, it doesn't. I dropped the WAR into tomcat, accessed the login page of the application and then shut down tomcat nicely. There's even a cobertura.ser created in the tomcat bin folder, and I'm thinking that probably this will all work together finally...
So I go ahead and tweak my maven.xml further with a coverage task that merges the data from the junit runs and the servlet container runs. Turn the switch on... and lo and behold... an exception reading the merged data file. Back to google, and after hunting around for some time, found this: Bug while merging reports
So finally I was ready to give up on cobertura and give Emma a try... and it couldn't have been better:
1. Goals are nicely set up
2. You can init the emma system with the emma:init goal and then use the ant tasks if you want flexibility for things like merging reports.
3. The merging works :))
One sticky issue I did run into: for the same source and test cases, the coverage reported by cobertura and emma differs widely. With Cobertura we were at 40% coverage, while with EMMA the number's up at 60% - and while EMMA has some literature on how it does things, I'd be glad if someone explained why or how the reported numbers can be so different for the same base code and unit test suite.
Tuesday, April 17, 2007
Tulip festival at Skagit county
Anyway, had a lovely time driving up to Mount Vernon with friends - a memorable weekend.
[Photo album: TulipFestival]
Saturday, March 17, 2007
Ninotech Path Copy 4
Novell Cool Solutions: Cool Tool
Was looking for a shell extension to copy filenames from explorer. Thought they'd be a dime a dozen - and it turns out that
a) they do 20 other things that I don't want, and
b) they're trialware.
Finally, after searching high and low, came across this one - and it's GREAT!
powered by performancing firefox
Thursday, March 15, 2007
Production Eclipse Configuration
At a minimum, it helps to have a central Eclipse installation that has all the tools configured and setup - so each person doesn't have to do it. Alternatively, there should be a reference set of plugins and their configuration files available in the source control repository. Here's my eclipse configuration
Wednesday, March 14, 2007
Tips for using Eclipse
Great Article - full of very very useful tips!
Firefox - not so obvious search
In Firefox, you can use / (forward slash) to search for text or links - and if it's a link, you can follow it just by hitting Enter! That is way cool - especially when you're on a laptop and don't have a mouse around. I've always found it a pain to do a link find (' - single quote) and a text find (ctrl-f) separately... Never knew there was a shortcut that did both - and what's more, it's completely natural if you're used to 'less'!
To complete the keys for Firefox mouseless browsing,
- space - page down
- shift - space - page up
Sunday, February 25, 2007
Performancing and Wordpress - initial impression - very nice!
Performancing | Firefox Add-ons | Mozilla Corporation
Performancing for Firefox is a full featured blog editor that sits right in your Firefox browser and lets you post to your blog easily. You can drag and drop formatted text from the page you happen to be browsing, and take notes as well as post to your blog.
powered by performancing firefox
Saturday, February 24, 2007
Diigo - a hidden gem
I've been using it for quite some time now (after trying out del.icio.us and google notepad) and I'm not budging! Recently I was doing a bunch of research online and diigo excelled!
- Online bookmarking - will also simultaneously post to your del.icio.us account
- Firefox toolbar
- tagging
- Web clipping
- Web annotation
Now to the "hidden" part - I've been on Diigo for more than 8 months now, and I'm yet to meet someone who knew about it!
Diigo rocks!
Imagining the tenth dimension
Great flash movie explaining dimensions 4 (time), 5, 6 and all the way up to 10. And why they stop at 10!
I've never come across a clearer explanation, and the great visualization does the trick.
Instant messenger client utopia
I hate running 3 different chat clients (yahoo, msn, gtalk) on my machine - that's 3x the memory, the startup programs and the useless advertising.
And I hate msn and yahoo's advertising that's built into their clients... especially MSN, which insists on opening a mini page on login...
Use GAIM - and do yourself a favour. Of course, that's if you don't need the very specialized features :)).
Installation is a breeze - use the plugins you like and avoid the bloat... The plugins I like best are guifications and text replace. Lastly, if you move between machines as much as I do, it's a breeze to keep your settings across all of them.
Friday, April 29, 2005
XSLT Analog to sysouts
Tried a couple of IDEs - Stylus Studio (free edition) and Marrowsoft XSelerator. Stylus studio did a graceful exit, Xselerator went purple in the face and died a gruesome death :-(
Hmm... so after some time I was wondering: if I could annotate the XSL output with information on the templates matched, it would at least help partway. I was thinking perl/C#/regular expressions, and then suddenly the penny dropped - "for each xsl:template node, include a comment with the template match/mode" - hang on! That sounds like a job for XSLT itself...
Anyway, there are a couple of quirks - the first one you hit will be when you try to output a template like this
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
<xsl:template match="xsl:stylesheet" >
<!-- generate an output xsl:stylesheet node -->
<xsl:stylesheet></xsl:stylesheet>
</xsl:template>
</xsl:stylesheet>
Oops! The XSLT processor cribs (and with good reason too)! It doesn't know which xsl:template is for the current stylesheet and which is intended to be output to the result document. There are a couple of approaches around this. One is to use xsl:element, like this:
<xsl:element name="xsl:template">
</xsl:element>
But this results in enormously wordy documents. Thankfully there's a neater way out: xsl:namespace-alias. Basically, it lets you use a dummy namespace in your xslt. You set up the dummy prefix (let's say gen) to map to the real namespace (xsl) in the result document, then use the dummy prefix throughout your XSLT; when generating output, the processor replaces all references to the dummy namespace with the real one. For example:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:gen="http://www.w3.org/1999/XSL/Transform/2">
<xsl:namespace-alias stylesheet-prefix="gen" result-prefix="xsl"/>
<xsl:template match="xsl:stylesheet">
<gen:stylesheet>
<xsl:for-each select="@*">
<xsl:attribute name="{name(.)}">
<xsl:value-of select="."/>
</xsl:attribute>
</xsl:for-each>
<xsl:apply-templates></xsl:apply-templates>
<xsl:if test="not(xsl:template[@name='pseudo-xpath-to-current-node'])" >
<xsl:text>
&#10;&#10;
</xsl:text> <xsl:copy-of select="document('')/xsl:stylesheet/xsl:template[ @name='pseudo-xpath-to-current-node']"/>
<xsl:text>
&#10;&#10;
</xsl:text>
</xsl:if>
</gen:stylesheet>
</xsl:template>
</xsl:stylesheet>
Note the usage of xsl:namespace-alias and the code for generating an xsl:stylesheet element in the result document.
I've included my efforts here - along with a simple books.xml, a books.xsl which generates a table, and finally an instrument.xsl that instruments books.xsl to generate an instrumented version. Transforming books.xml with the instrumented xslt generates output annotated with custom nodes that highlight which template got called when.
After I was mostly done with the code, I came across an article in IBM developerWorks which discusses the same topic. Rather than cover the same material again, you can find the article here. What's different is that I generate custom nodes (which I thought would be useful to view in an XML IDE that allows a hierarchical display). I've also shamelessly borrowed the code to generate the XPath of the node (part of what you see in the snippet).
Thursday, April 14, 2005
I love the premise of Test Driven Development - I've even used it a few times in the line of duty ;-) (I do have to admit, I've been naughty and left out the step of seeing the test case fail a few times)... Anyway, I end up working on web applications more often than not, and while you can use TDD for your class libraries, a web app is a totally different animal. The fact that you can use TDD for class libraries makes the whole thing even more frustrating - you have a bit that works for sure (the class libraries with their tests) and then you hit this piece (the aspx) on which you don't have the same level of confidence.
I've been working around it by making sure that pages generate nice logs, so that during development, whenever I find a sticky piece, I put in an additional log statement. This works, but at best it's a poor cousin to automated testing a la nunit.
Enter NUnitAsp - it promises to do for web applications what nunit does for class libraries - a pretty stiff goal indeed! I looked at it about a year ago for a similar project but had to decide against its use after going through the feature list. As a result, though I understand its aims, I haven't got my hands into it. These days I'm planning a deep dive into it - just to make a more diligent evaluation of whether it'll actually work.
Monday, April 11, 2005
For the most part, I find ASP.NET far easier to use than Java. But the ONE BIG THING where I've found ASP.NET sorely lacking is support for page templates.
Page templates, if you need to brush up, let you define common layout and contents for a web site. Once defined, it's easy to change the layout or move your default items around.
Basically, what you need is to be able to define a template page with the different areas (header, left pane, main content, footer etc). So the template page controls what is shown where. In addition, you also define the default content for all these areas.
Now each page in the application just overrides the content for the main area (assuming that the defaults are fine for the rest of it). WOW!!!
Java's had this for quite some time - Jakarta Struts has something called Tiles which does exactly this.
For .NET, as I mentioned, the need's going to be fulfilled with v2.0 of ASP.NET. Meanwhile, if you feel the idea's great and there's no point waiting for the v2.0 release, do take a look at the MasterPage control at the www.asp.net control gallery. Note that since the asp.net team released this control, there's a good chance most of its features will end up in asp.net 2.0.
There are a few shortcomings of the control though - you'll get a hang of them if you read the posts. Paul Wilson has a version which overcomes these - and best of all, he releases it with source :). You can find it here.
For the most part, I find ASP.NET far more easi...
For the most part, I find ASP.NET far more easier to use than Java. But the ONE BIG THING where I've found ASP.NET sorely lacking is in the support for page templates.
Page templates, if you need to brush up, allow you to define common layout and contents for a web site. Furthermore, once defined, its easy to change the layout and or move your default items around the place.
Basically, what you need is to be able to define a template page with the different areas (header, left pane, main content, footer etc). So the template page controls what is shown where. In addition, you also define the default content for all these areas.
Now each page in the application just overrides the content for the main area (assuming that the defaults are fine for the rest of it). WOW!!!
Java's had this quite some time - Jakarta Struts has something called Tiles which does exactly this.
For .NET, as I mentioned, the need's going to be fulfilled with v2.0 of ASP.NET. Meanwhile, if you feel the idea's great and there's no point in waiting for v2.0, release, do take a look at MasterPage as www.asp.net control gallery. Do note that since the asp.net team has released this control, there's a good chance that most of the features will end up in asp.net 2.0.
There are a few shortcomings of the control though - you'll get a hang of them if you read the posts. Paul Wilson has a version which overcomes these - and best of all, he releases the control with source :). You can find it here.
Friday, April 08, 2005
The ASP.NET validation summary is great for displaying all the errors on a page. It would be nice, though, to be able to use a validation summary to display errors that occur on the server side.
A typical scenario is a page that does a search and then displays the results. For the case where no results are found, it would be nice to surface that message through the validation summary - it takes away the need to handle the display of the error message separately.
This is what you can do about it - implement the IValidator interface:
public class CustomErrorMessage : IValidator
{
    private string message;

    public CustomErrorMessage()
    {
    }

    #region IValidator Members

    public void Validate()
    {
        // Nothing to compute - this object just carries a server-side error.
    }

    public bool IsValid
    {
        // Always invalid, so the message always shows up in the summary.
        get { return false; }
        set { }
    }

    public string ErrorMessage
    {
        get { return message; }
        set { this.message = value; }
    }

    #endregion
}
And here's code to use the validator at runtime in response to a server-side error:
CustomErrorMessage msg = new CustomErrorMessage();
msg.ErrorMessage = "No rows found";
ServerErrors.Visible = true;
ValidationSummary1.Visible = false;
Page.Validators.Add(msg);
Page.Validate();
Been tinkering with getting a nice paging algorithm out. To get a basic hang of the problem, do take a look at
Do take a look at the second query given there. I've modified it a little so that you can sort by a given field, and removed a bit of the cruft (the au_lname like '%A%' bit). Here the table used is called Pager, with a column called Name.
Some of my bare bones requirements for a paging system are:
1. Should allow sorting
2. Should not impose any requirements on the table schema/ resultset.
3. Should be done on SQL Server as much as possible. Definitely not default paging that results in all rows being sent to the middle layer.
4. Ideally, should not require dynamic queries. (Though note that this conflicts with 1 & 3 as these two requirements almost make dynamic queries mandatory).
5. Should not use temp tables.
declare @pagenum int
declare @pagesize int
set @pagenum = 1
set @pagesize = 10
set rowcount @pagesize
select *
from Pager P
where (select count(*)
       from Pager P2
       where P2.Name <= P.Name) > @pagesize * @pagenum
order by P.Name
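Just to sanity-check the predicate, here's a toy re-run of the same logic in memory (the names are made up): a row lands on the page when the count of names at or below it exceeds pagesize * pagenum, and the rowcount caps the page size.

```python
def page(names, pagenum, pagesize):
    """Mimic the SQL above: keep rows whose rank exceeds pagesize * pagenum,
    order by name, and cap the result at pagesize rows (the rowcount)."""
    ordered = sorted(names)
    qualifying = [n for n in ordered
                  if sum(1 for n2 in names if n2 <= n) > pagesize * pagenum]
    return qualifying[:pagesize]

names = ["Ann", "Bob", "Cid", "Dan", "Eve", "Fay"]
print(page(names, 0, 2))  # first page: ['Ann', 'Bob']
print(page(names, 1, 2))  # second page: ['Cid', 'Dan']
```

Note the quadratic counting subquery - fine for small tables, but it's the price of avoiding temp tables and dynamic SQL.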