7 posts categorized "siteadmin"

04/30/2010

What's on Your "Sorry" Server?

There comes a time when we need to redirect everything to a single page on an Apache-driven site.  There could be several reasons for doing this: a data center migration, retiring a site, or notifying users of scheduled maintenance by way of a "sorry" server.

A "sorry" server is an industry term, which is a web site meant to convey temporary outage notifications to our users.  There are some guidelines we should follow when setting up a sorry server.  It isn't necessarily something as mundane as uploading an html page that says that we are going to be down for some period of time because our sites are constantly getting accessed by things other than humans.  And we don't want these things to penalize us simply because we're going to be offline for a few hours.

This post assumes that you've already got Apache up and running.  It's natural for us as busy administrators not to want to go overboard setting up a web server that may not be accessed all that frequently (hopefully!), but we really should treat our sorry servers just like any other production server.  We certainly wouldn't want our sorry servers to get hacked or defaced, and we wouldn't want one of our maintenance pages to show up in Google's listings.

The HTML below is a no-frills page meant to be used as an example and not for production use. We want our designers to create an actual, branded HTML page that looks just as nice as any of the other pages on our sites.

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>Site Maintenance</title>
</head>
<body>
<p>We're sorry.  We're down.</p>
</body>
</html>

There are a few things missing from this page that we should probably add. We generally don't want this page cached, because we don't have any control over when a third-party device caching our maintenance page would release it. If we know how long we're going to be offline, we could set an 'expires' header or meta tag; on a really busy site, allowing caching with an expiration is a good idea because it takes some load off the sorry server. We also don't want robots indexing the page, because it shouldn't show up in search engines. Below is the updated HTML from above, sprinkling in some meta tags to prevent caching and indexing by robots:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<meta http-equiv="Cache-Control" content="no-cache"/>
<meta http-equiv="Pragma" content="no-cache"/>
<meta name="robots" content="noindex,nofollow"/>
<meta name="googlebot" content="noarchive"/>

<title>Site Maintenance</title>
</head>
<body>
<p>We're sorry.  We're down.</p>
</body>
</html>
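Meta tags only go so far, since proxies and other intermediaries often ignore them. If you would rather enforce the caching policy at the server, mod_headers and mod_expires can send the equivalent response headers. This is just a sketch, assuming both modules are loaded; adjust the expiration to match your planned outage window:

# Send no-cache headers with the maintenance page (requires mod_headers)
Header set Cache-Control "no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
# Or, if you know the outage window, allow brief caching instead (requires mod_expires)
# ExpiresActive On
# ExpiresDefault "access plus 2 hours"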

I decided that I didn't want to cache any requests because I want to see every page request show up in the web server logs. Now that we've got our page ready, save it as index.html and upload it to the sorry server's document root. Next, upload a robots.txt file into the site's document root as well. The robots.txt file only needs to include the following lines:

User-agent: *
Disallow: /

The usual caveats apply with this file: It will only be honored by robots that actually conform to the robots.txt standard.

Now we're ready to modify our Apache configuration. In our example site, we want ANY POSSIBLE page request to get redirected to our site-down index.html page. The mod_rewrite rules are fairly simple:

# Turn on mod_rewrite
# if not already on
RewriteEngine On
# Don't rewrite image requests
# or stylesheet requests
# Add js if necessary
RewriteRule \.(css|jpe?g|gif|png)$ - [L]
# Rewrite everything
RewriteCond %{REQUEST_URI} !^/index.html
RewriteRule /(.*) /index.html [R=302,L]

This set of rewrite rules is pretty self-explanatory, but if you are new to mod_rewrite, here is a walk-through. RewriteEngine On turns on mod_rewrite. RewriteRule \.(css|jpe?g|gif|png)$ - [L] tells mod_rewrite not to touch requests for images or stylesheets and not to process any more rules for those requests. RewriteCond %{REQUEST_URI} !^/index.html means "if the request is not for /index.html". RewriteRule /(.*) /index.html [R=302,L] means "issue a 302 redirect that sends the browser to /index.html, and don't process any more rules after this one."

I like compressing at the Apache level, and even though we are working with one HTML page that will probably be relatively small, I'm going to compress it anyway. Here are the settings I use as a starting point (requires that mod_deflate be installed):

## HTTP compression on html, js and css files
    DeflateBufferSize 8096
    DeflateCompressionLevel 4
    DeflateFilterNote Input instream
    DeflateFilterNote Output outstream
    DeflateFilterNote Ratio ratio

    SetOutputFilter DEFLATE
    Header append Vary User-Agent env=!dont-vary
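Note that SetOutputFilter DEFLATE compresses every response that passes through it. If you would rather limit compression to the content types mentioned in the comment above, AddOutputFilterByType is an alternative; a sketch, again assuming mod_deflate is installed:

# Compress only HTML, plain text, CSS and JavaScript responses
AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript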

We aren't finished yet. We should add some customized logging to Apache so we can see how many visitors are coming to the site. Normally the combined log format would be fine, but I'm also interested in seeing the Host header from the browser so I can tell which site visitors were trying to reach (in case multiple sites are offline for whatever reason). I also want to log compression ratios in case later optimization is necessary. Finally, I want to capture the X-Forwarded-For IP if one is being passed instead. I'm using the logging setup from a recent How-To I published:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Host}i\" \"%{Referer}i\" \"%{User-Agent}i\" %{outstream}n/%{instream}n (%{ratio}n%%) " combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Host}i\" \"%{Referer}i\" \"%{User-Agent}i\" %{outstream}n/%{instream}n (%{ratio}n%%) " proxy

SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded

CustomLog "logs/access_log" combined env=!forwarded
CustomLog "logs/access_log" proxy env=forwarded

What's on your sorry server? I'd love to compile everyone's best practices that could be used as an implementation template.

07/23/2009

Apache: Forcing the Server's SSL Cipher on the Client

Normally, in an SSL negotiation, the client presents its ciphers in order of preference, and as long as the server it is negotiating with supports the client's top choice, that cipher will be used.  Suppose, though, that you don't want to leave this up to the client and you want the strongest encryption available between client and server.  A little-known Apache configuration directive, and by little-known I mean I just started playing with it a few days ago, called SSLHonorCipherOrder will allow you to do just that.

Simply set the value of the directive to On and you are all set. Be wary, however: Apache's preferred SSL cipher appears to be DHE-RSA-AES256-SHA, and a 256-bit cipher could be costly in terms of CPU.
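In context, the directive is a single line alongside the rest of the SSL virtual host configuration. A minimal sketch; the certificate paths and cipher list below are placeholders, not recommendations:

SSLEngine on
SSLCertificateFile /path/to/server.crt
SSLCertificateKeyFile /path/to/server.key
SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
# Prefer the server's cipher order over the client's
SSLHonorCipherOrder On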

When connecting with Safari 4 prior to making this change, the cipher Safari used was AES128-SHA; after making the change, connections negotiated the 256-bit cipher. (Interestingly, with the 3.5 version of Firefox, the client's preferred cipher and Apache's preferred cipher seem to be the same: DHE-RSA-AES256-SHA.)

10/14/2008

TRACE Method Handling - Disabling TRACE

I have many articles posted on this blog about handling the HTTP TRACE method. Since they are among the most popular on this site, I thought it would be helpful to consolidate them into something better suited for reference. Also, since it is a vulnerability that frequently turns up on PCI compliance scans, I thought it would be useful to provide one base page that searches can be directed to. Fewer page views for me, but hopefully easier searching for everyone else.

To start, I updated the Ruby script I wrote to test how a site responds to HTTP TRACE requests. The previous script can be found here, and below is the update:

#!/usr/bin/env ruby
require 'net/http'

puts "\nEnter the URL for the site"
puts "(Format should be 'http://siteURL/')"
# Strip the trailing newline so URI.parse gets a clean URL
name = gets.chomp

url = URI.parse(name)
begin
  req = Net::HTTP::Trace.new(url.path)
  resp = Net::HTTP.start(url.host, url.port) { |http|
    http.request(req)
  }
rescue Errno::ECONNRESET, Errno::ECONNABORTED
  puts "Your iRule Works!! Connection Dropped!"
  exit
end

# Case statement - delivers response based on status code
# returned from site assuming irule not used or needed
statuscode = resp.code
result = case statuscode
  when "200" then "TRACE is **probably** enabled"
  when "301" then "Site is responding with a 301 - Redirect for \"/\""
  when "302" then "Site is responding with a 302 - Redirect for \"/\""
  when "403" then "TRACE is disabled. 403 Forbidden"
  when "404" then "TRACE is probably disabled. 404 Not Found"
  when "405" then "TRACE is disabled. 405 Method Not Allowed Response"
  when "501" then "TRACE is disabled. 501 Not Implemented Response."
  else "Unexpected Response."
end

puts result

From what I've learned in my research over the past few months as well as experimentation on my own sites, you can expect several different types of responses to TRACE requests based upon site platform and site layout architecture.

If you are running IIS4 or IIS5 or any version of Apache, the TRACE method is enabled by default. The best way to disable it in IIS4 and IIS5 is by installing URLScan. Remediation in Apache can be done in two different ways. One method involves setting the TraceEnable directive to Off, which only works if you are running a more recent version (Apache 1.3.34, 2.0.55 and later); it is shown after the rewrite rules below. The other mechanism, which works on all Apache versions that support mod_rewrite, involves writing a condition and rule to forbid TRACE requests:

RewriteEngine on 
RewriteCond %{REQUEST_METHOD} ^TRACE
RewriteRule .* - [F]
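The TraceEnable approach mentioned above is even simpler: on a new enough release it is a single directive in the server configuration:

# Disable TRACE (Apache 1.3.34 / 2.0.55 and later)
TraceEnable off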

If you are running IIS6 or IIS7, the TRACE method is disabled. If you are running Tomcat, Glassfish, or JBoss Application Server as the front end for your sites, the TRACE method is disabled by default. The status code returned by IIS6/7 is different from Tomcat's, however: IIS responds with a 501 status code while Tomcat responds with a 405.

If you load-balance your websites with a BigIP, you can write an iRule that will discard all TRACE methods and assign that rule to all of your VIPs. A basic iRule that drops TRACE requests looks something like this:

when HTTP_REQUEST {
    set default_pool [LB::server pool]
    if { [HTTP::method] equals "TRACE" } {
        reject
    } else {
        pool $default_pool
    }
}

The action the rule takes when the condition is met is to reject the connection. From a network sniffer standpoint, you will see the BigIP issue a connection reset when a TRACE request is sent to a VIP that is using this particular iRule.

The script above allows me to determine which of my sites are vulnerable to TRACE requests and, depending upon the remediation steps taken, gives me a mechanism for validating those changes. Since all of my sites are front-ended with a BigIP, I can simply use the iRule to drop those requests and validate by trapping the connection reset. With the improved handling of pre- and post-remediation responses, though, the script becomes portable enough to validate remediation for a larger number of possible site configurations. Want to validate that Apache is responding as expected after setting TraceEnable to Off? This should do it. Want to verify that URLScan is working? This script should do it.

The script does not follow redirects, which is helpful for ruling out false positives. When a redirect is issued, though, you should re-run the script against the redirect target. In other words, if "http://yoursite.yourdomain.com/" redirects to "http://yoursite.yourdomain.com/home/", running the script against the former will only result in a 301 or 302 response, so execute the script against "http://yoursite.yourdomain.com/home/" directly instead.

Instead of coming out with a page for each platform and another page for updates, this page will serve as the anchor page for TRACE, and any updates will be made directly here. All the other pages will be updated to link to this one.