
5 posts from November 2008


Extended Validation SSL Certificates: An Easy Way to Bust the Green Bar

It has been a little over a month since I posted some questions regarding Extended Validation SSL Certificates (EV SSL). Since posting, I have had some time to think about this particular issue further and I am still pretty skeptical about these new certificates.

Based upon the comments on the initial post and the marketing material from many of the EV SSL certificate vendors, the claim is not only that EV certificates increase trust but, in many case studies, that they improve conversion or registration rates. My main concern with using a bold visual cue to evoke safety and trust is that users will start to equate safety and trust with the green bar itself, so that when the bar disappears they lose that trust even though the site may still be SSL-encrypted (and, for all intents and purposes, still secure). Your organization can go through the two-to-three-week vetting process to get that new certificate, pay the extra (in some cases substantial) price for it, and be considered PCI compliant, and all that trust can vanish simply because someone in-house (!) added a link on the site to an image that uses an unencrypted absolute URL. EV-aware browsers (except Safari 3.2) are currently unanimous in their handling of a mix of secure and insecure elements within an EV SSL encrypted page: the green bar vanishes, and I start wondering what's wrong with the site.

Perhaps an example to illustrate my point is in order. A very well known CA that sells Extended Validation SSL Certificates operates a site of corporate blogs. If you access that site over SSL (https), you are presented with an Extended Validation certificate. If you then click on any of the blogs hosted by that site, with one exception, the green bar vanishes in your browser even though the page is still encrypted. This all appears to be due to the inclusion of images on those sites that are being delivered over an unencrypted (http-only) channel, probably because someone input an absolute URL in the HTML. If this behavior occurred on an electronic commerce site where I was getting ready to submit an order, I'd be reluctant to submit it. Even on a site like this corporate blog, it made me stop for a few seconds and wonder what was going on. The bold indicator makes the problem that much more prominent when it fails.
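Mixed content like this is easy to catch before it bites you. As a rough illustration (not the CA's site; the page below is made up), here is a minimal Python sketch that walks a page's HTML and flags embedded resources loaded over plain http, which is exactly what kills the green bar:

```python
from html.parser import HTMLParser

class InsecureResourceFinder(HTMLParser):
    """Collect src/href attributes that load embedded content over plain http."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http://"):
                # Only embedded resources (images, scripts, stylesheets) break
                # the EV indicator; plain <a href> links do not, so skip anchors.
                if tag != "a":
                    self.insecure.append((tag, value))

# Hypothetical page with one insecure absolute image URL.
page = """
<html><body>
  <img src="http://example.com/logo.gif">
  <img src="https://example.com/secure.gif">
  <a href="http://example.com/">a link (harmless)</a>
</body></html>
"""

finder = InsecureResourceFinder()
finder.feed(page)
for tag, url in finder.insecure:
    print(tag, url)
```

Running something like this against your templates, or grepping rendered pages for `src="http://`, would catch the in-house absolute-URL mistake before your customers see a missing green bar.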

I worry that the over-emphasis on providing visual indicators for trust and security may have unintended consequences for site owners when that indicator fails. The aforementioned corporate blog site has behaved like this for at least the past week and, like I said, if this were an electronic commerce site, I might not be buying.


JBoss: Remotely Generating Thread Dumps With JMXConsole

If you ever need to generate a thread dump to figure out what might be going wrong with your JBoss application server, you can generate one remotely using the built-in JMX Console.

A few months ago, I really hated the built-in JMX Console--mostly, it turns out, due to ignorance on my part. Now that I'm getting more acclimated to it, it's actually quite useful.

Simply access your server using the jmx-console URL: http://$server_name/jmx-console/ (which I hope you've password protected!). Then scroll down to the section labeled jboss.system. Select the type=ServerInfo link and you will see a page with a table listing lots of useful information: default paths, JVM version information, etc. If you then scroll down beyond the table, there are some interesting MBean operations that you can invoke. The one that will generate the remote thread dump is labeled: java.lang.String listThreadDump(). Click the Invoke button and the thread dump will be generated and displayed in-browser. Pretty neat!
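You can also script the same invocation. On the JBoss versions I've worked with, the JMX Console's HtmlAdaptor accepts the MBean name and operation as request parameters; the sketch below builds that URL in Python (treat the parameter names as my recollection rather than gospel, and add your own authentication since the console should be password protected):

```python
from urllib.parse import urlencode

def thread_dump_url(server):
    """Build the HtmlAdaptor URL that invokes listThreadDump on the
    jboss.system:type=ServerInfo MBean. Parameter names are as I
    recall them from the JBoss 4.x jmx-console; verify against yours."""
    params = urlencode({
        "action": "invokeOpByName",
        "name": "jboss.system:type=ServerInfo",
        "methodName": "listThreadDump",
    })
    return "http://%s/jmx-console/HtmlAdaptor?%s" % (server, params)

print(thread_dump_url("localhost:8080"))
# Fetching that URL (e.g. with urllib.request.urlopen, plus HTTP auth
# credentials for the console) returns the thread dump as HTML.
```

Handy for cron jobs that capture periodic dumps when you're chasing an intermittent hang.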


BigIP: Botkilling iRule

Below is an iRule for when you need to quickly kill connections from nasty robots (or scripters) hitting your site. We use something like this to kill connections from folks who have scripted POST requests against the data search features on our sites, to keep them from mining data.

class badbots {
  "spider slayers"
  "master mold"
}

when HTTP_REQUEST {
  set myPool [LB::server pool]
  if { [matchclass [string tolower [HTTP::header User-Agent]] contains $::badbots] } {
    log -noname local0. "Robot '[HTTP::header User-Agent]' blocked. IP of agent is [IP::client_addr]"
    reject
  } else {
    pool $myPool
    # log -noname local0. "Robot '[HTTP::header User-Agent]' Allowed."
  }
}
The badbots class is a simple datagroup containing user-agents from your web server logs that you consider abusive. When creating the class, I used all-lowercase values for the user-agents. The set myPool [LB::server pool] statement stores the default pool assigned to your VIP so you can refer to it later in the rule. The [matchclass [string tolower [HTTP::header User-Agent]] contains $::badbots] test converts the incoming user-agent to lowercase and matches it against the list of bad 'bots in the datagroup. If it gets a match, the rule logs the offender and issues a TCP reset on the connection (the reject command); otherwise, the request goes through to the default pool. Uncomment the second log statement temporarily for debugging purposes, but fair warning: it will fill up your LTM log very quickly because that statement logs every user-agent the iRule allows through.
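For anyone who doesn't read iRules, the matching logic above boils down to a case-insensitive substring check against the datagroup. A minimal Python sketch of the same decision (the bot names mirror the sample datagroup entries and are, of course, made up):

```python
# Lowercase entries, matching the convention used when building the datagroup.
BADBOTS = ["spider slayers", "master mold"]

def allow_request(user_agent):
    """Mimic the iRule: lowercase the incoming User-Agent and block it if it
    contains any entry from the badbots list; otherwise let it through."""
    ua = user_agent.lower()
    return not any(bot in ua for bot in BADBOTS)

print(allow_request("Mozilla/5.0"))          # True  -> send to default pool
print(allow_request("Master Mold/1.0 bot"))  # False -> log and reject
```

Note that, like the iRule, this is a substring match, so an entry such as "master mold" catches any user-agent containing it regardless of version strings tacked on the end.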

The main advantage of an iRule like this is that it is quick, easy, and effective; the main disadvantage is that anyone who knows LWP::UserAgent (or something similar) could cook up a modified user-agent and bypass the rule fairly easily. However, they would first need to realize that the reset they are being issued is due to the user-agent they are sending, and not something like their IP address being blocked. I'm using this type of rule on some of my sites while I try to figure out how to get something more elaborate and cool working, like a resource obfuscation iRule.