Category Archives: Programmer’s Toolbox

Responsive iframe elements

I hate rush jobs. “Haste makes waste” was already two centuries old before Ben Franklin co-opted it, and it hasn’t lost any of its truth even as parts of our world have sped up. It is, alas, an easy little rhyme that we too often forget to apply.

Recently I had an awful rush job–a web page that varied significantly from our standard site templates, had been poorly scoped and spec’d, and that was in flux even as we neared a deadline that had been pushed up by a week. I had to cut a lot of corners to get it to work without having to tinker with code that might have affected other pages; one of the corners I cut was on a video embedded in the page. Our site shows videos in a lightbox (we use the jQuery Fancybox implementation), but the code for doing so is deeply entangled with lots and lots of things that weren’t appropriate for this page. The page owners didn’t want the lightbox, and with time running out I decided that digging into our tangled code for one part of the page wasn’t worth the effort; instead, I went with an iframe from a video sharing site and moved on to other things. I hate iframes that show external content, but sometimes you need to compromise to meet a deadline.

The problem with a quick fix like this, though, is that it will eventually come back to bite you. Especially if there are other shortcuts taken, like very poor attention to testing. Anyone who’s been doing software development for more than a few months learns quickly that testing is something that you don’t rush; indeed, the importance of solid software is something that has even made the news recently. But “test early and often” is another of those old proverbs we ignore all too often.

In post-deployment testing (the worst kind), the site owners discovered that the iframe didn’t scale on devices like the iPhone. And they really wanted it to scale. So while flurries of emails labeled “URGENT” (a word that should be banned from email headers; if I got to do a rewrite of the Outlook client, it would have a routine that automatically trashes any message with that word in its subject) flew around, I did some web searches to see what smart people have done. And I found a nice, simple, smart answer.

The best implementation of a responsive iframe that I found was from Anders Andersen’s Responsive embeds article, based on this A List Apart article. In short, the iframe is wrapped in a responsive div, replacing the fixed height and width attributes that video sharing sites typically include in the embed code they hand you.

For example, this is what the code Vimeo provides for the video below looks like, with the width and height parameters for the iframe set in absolute pixels:

<iframe src="//" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe> <p><a href="">Moments in Morocco</a> from <a href="">The Perennial Plate</a> on <a href="">Vimeo</a>.</p>

Moments in Morocco from The Perennial Plate on Vimeo.

The video looks fine in a full-screen browser, but look at it on a smaller device (or even shrink your browser window), and you’ll see that it quickly becomes unusable. (I’ve bumped up the height and width a bit from the original to make the behavior more obvious.)

Using Anders’ solution, though, the video works much, much better as its space shrinks:
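For reference, a minimal sketch of that wrapper technique (the class name and the Vimeo URL placeholder are mine; the padding-bottom percentage encodes the aspect ratio, 56.25% for 16:9 video):

```html
<style>
/* The wrapper's bottom padding reserves vertical space proportional to its
   width, so the box keeps its aspect ratio as the page scales. */
.video-wrapper {
  position: relative;
  padding-bottom: 56.25%; /* 16:9; use 75% for 4:3 */
  height: 0;
  overflow: hidden;
}
/* The iframe is stretched to fill the wrapper exactly. */
.video-wrapper iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
</style>
<!-- Strip the fixed width/height from the embed code and wrap it: -->
<div class="video-wrapper">
  <iframe src="//player.vimeo.com/video/VIDEO_ID" frameborder="0" allowfullscreen></iframe>
</div>
```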

The thing I most like about this is that it’s a pure CSS solution. I’ve also seen some JavaScript solutions, but dropping JavaScript onto a page to solve a problem like this makes me nervous, especially a page that already has quite a bit of JavaScript lurking on it. Weakly-typed, non-compiled scripting languages need to be used with great caution …

Of course, the real solution is to apply the same kind of discipline to website development that other aspects of software development receive: attention to requirements, testing, project management, and cross-functional coordination would go a long way toward saving us from bad rush jobs and unmaintainable spaghetti code. But that’s a harder fix to apply than some CSS and HTML …

Conditional Comments Revisited: the “document mode” thicket

I’ve continued to lean on conditional comments to deliver browser-specific CSS to different versions of Internet Explorer. Recently, though, I’ve discovered a new wrinkle that complicates things: it’s not so much the browser mode as the document mode that determines how Internet Explorer renders things.

I ran into this problem when my manager was testing a page that uses conditional comments in the traditional way, something like this:

<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="/css/ie7.css">
<![endif]-->
<!--[if IE 8]>
<link rel="stylesheet" type="text/css" href="/css/ie8.css">
<![endif]-->
<!--[if IE 9]>
<link rel="stylesheet" type="text/css" href="/css/ie9.css">
<![endif]-->

Pretty straightforward: depending on the browser version, a different CSS file is loaded that deals with the particular quirks of that browser.
But my manager was using Internet Explorer 9 in IE9 Compatibility View Mode, with the document mode set to IE9 Standards. And the page looked entirely wrong!

The problem, it turns out, is that the compatibility view mode caused the IE7 conditional comment to fire, but the document mode caused the page to be rendered according to IE9 rules: we loaded IE7 CSS to IE9 rendering.

The fix isn’t too horrible, though it does complicate things a bit, requiring some JavaScript to get the job done. Instead of loading the CSS based on browser mode, we load it based on document mode, so browsers that have a mismatch between the two will get something they can handle:

<!--[if gte IE 7]>
<script type="text/javascript">
var ieCss = "ie7.css";
try {
    if (document.documentMode == 8) { ieCss = "ie8.css"; }
    else if (document.documentMode == 9) { ieCss = "ie9.css"; }
    else { ieCss = "ie7.css"; }
} catch (exception) { /* no documentMode support; keep the IE7 default */ }
document.writeln('<link rel="stylesheet" type="text/css" href="/css/' + ieCss + '">');
</script>
<![endif]-->

Using the new Derby drivers in the old WebSphere server

Have some new functionality that isn’t showing up when deployed on an old Java server? You might need to investigate your classloader settings!

I have an application that needs a fast cache layer that provides better retrieval features than most of the common cache libraries provide; ideally, the same kinds of queries that I could run against a database, but without all the overhead of having a full-blown (or even lightweight) RDBMS. We’re already using embedded Derby for another project, so the new in-memory support that Derby offers seemed the ideal solution: I could tap into existing Derby expertise, while using a robust and thoroughly tested package.

Creating a simple web application based on the sample code that came with Derby wasn’t too hard. Getting it to work in embedded mode, where the database is written to a local disk, was a breeze. But no matter what I did, the in-memory option just wasn’t working. This worked great:

Connection conn = DriverManager.getConnection("jdbc:derby:myDb;create=true");

But this failed with a “no suitable driver” error:

Connection conn = DriverManager.getConnection("jdbc:derby:memory:myDb;create=true");

I dug through my project’s WEB-INF/lib, went through the Derby installation guide several times, put the DriverManager through all sorts of interrogation, all to no avail. I simply couldn’t find a way to load a driver that would be “suitable,” nor figure out why the freshly downloaded driver was not.

Then one of my co-workers noted that WebSphere itself is using Derby to save its configurations, and so the version of Derby installed with the WAS 7 server we’re using–Derby 10.3, it appears–is loaded first.

To fix this running on a full install of WebSphere Application Server, you need to go to the class loader settings of your web application and set the class loader order to “Classes loaded with application class loader first.” This will allow the newer version of derby.jar that you place in the web application WEB-INF/lib directory to load instead of the older version loaded at server startup.

In the WebSphere 7 runtime hooked into Rational, you can’t change this setting in the admin console (at least I couldn’t figure out how to change it), but you can manually swap the derby.jar that WebSphere uses. You can find its exact location by starting the admin console and navigating to the class loader viewer (go to Applications\Application Types\WebSphere enterprise applications, find your EAR, find your WAR, and click “View Module Class Loader”). I found mine under ../runtimes/base_v7/derby/lib.

Simply stop your server, rename the derby.jar file to derby.jar.old, move in your new derby.jar file with in-memory database support, and start the server up. There’s more than a little danger in swapping out key libraries, of course, so be prepared to swap the old derby.jar back in if you run into problems persisting server settings.

If you’re trying to use new capabilities in older Java environments, you may find this tip helpful when you run into problems with other core features; I would expect similar issues with, for example, XML and logging packages that are used by WebSphere.

Minding your 0s and Os

Some font choices, good and bad

I was recently burned by a very simple failure of a programming tool I didn’t even know I was using.

We were debugging a SQL statement that was embedded in a Java class (we’ll set aside for now the wisdom of embedding SQL statements into your code; yes, it’s best to parameterize at the very least). The SQL statement looked something like this:


When I ran the code in the Java class, it consistently returned zero rows. But when I ran the query in a SQL console, it returned significantly more records. We spent a lot of time looking at permissions, table structures, Java DAO models, but nothing was explaining the problem.

Until I pasted that code into the SQL console, that is; and then the console returned exactly the same results as the Java class. I pasted the code into a text editor, and pasted the working SQL statement right under it, and fiddled with the font size until it was obvious what the problem was:



Do you see it?

“0” and “O” were not obviously different in my IDE’s default font. Side by side, I could see the difference, but otherwise they were indistinguishable. At some point, this query–which had been sent back and forth through various e-mails–was mistyped into the code, and the expected “O” was replaced with “0”. The default font–the old standby Courier New, as it turns out–was a poor choice for a programmer’s font.

“0” and “O” aren’t the only characters that can make a programmer’s life difficult. “I”, “l”, and “1” have a habit of blending together, and the results can be annoyingly difficult to untangle. And “( )” and “{ }” characters can be devilishly difficult to distinguish in a JSP with a tangle of JSTL, jQuery, and scriptlets.

When I think of my development tools, I seldom think of my font selections. And while an IDE, text editor, browser plugins, and other software are obvious parts of the toolkit, font choices ought to be, too. In my experience, a good programming font should:

  • Distinguish between easily confused characters, like “0” and “O”
  • Make formatting code easy by exaggerating the differences between parentheses and curly braces
  • Belong to the “monospace” family, for easy layout on the screen
  • Be easy on the eyes, scaling well to different monitor resolutions

In my hunt for a good replacement font, I landed on Ubuntu Mono. It puts a dot inside the zero, adds curly flourishes to the lower-case “L” and “I”, has nice big serifs on the capital “I”, and gives the “1” a stroke and a bottom serif. Curly braces, parentheses, and square brackets are easily distinguished. And as an added bonus, its proportionally spaced relative, Ubuntu, is a nice-looking font for dialogues and titles inside the IDE. The result is something like this (assuming my @font-face code is working …):

for (int l = 0; l < O; l++) {
    System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
    if (l == 1) {
        System.out.println("l==1 but does not equal i or I");
    }
}

It’s pretty easy to tell which character is which.

I also like Anonymous, which features a slash in the “0”:

for (int l = 0; l < O; l++) {
    System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
    if (l == 1) {
        System.out.println("l==1 but does not equal i or I");
    }
}

And Anonymous Pro, which has an exaggerated stroke on the “1”:

for (int l = 0; l < O; l++) {
    System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
    if (l == 1) {
        System.out.println("l==1 but does not equal i or I");
    }
}

All are clearly preferable to Courier New:

for (int l = 0; l < O; l++) {
    System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
    if (l == 1) {
        System.out.println("l==1 but does not equal i or I");
    }
}

There are a good many other programming fonts that fit my criteria; a search for “programming fonts” turns up handy lists, like “Top 10 Programming Fonts” at Hivelogic and a rundown of programming fonts by Jeff Atwood. Whichever font you choose, though, do choose, and choose wisely: you could end up saving yourself quite a bit of trouble and eye strain down the line.
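Serving a font like this on a web page, incidentally, is just a bit of CSS; a rough sketch (the font path and class name are placeholders of my own):

```css
/* Hypothetical @font-face rule serving Ubuntu Mono for inline code samples */
@font-face {
  font-family: "Ubuntu Mono";
  src: url("/fonts/UbuntuMono-Regular.woff") format("woff");
}
/* Applied to the sample blocks; browsers without the font fall back to a
   generic monospace face */
.font-sample {
  font-family: "Ubuntu Mono", monospace;
}
```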

chipping away at stylesheet lava with Dust-Me Selectors

I’ve been working on a web project that has gone through multiple iterations, and many hands, before landing on my desk. Over that time (and while it’s been in my care, as well as before), the stylesheets have acquired quite a few geological layers. The primary CSS file has had over 2,000 lines, with more than 600 selectors spread over almost a dozen stylesheets. Since the site has gone through multiple revisions and adjustments–at least five major versions in the month or so I’ve been working on it–a lot of these selectors aren’t being used anymore.

This is a classic case of the Lava Flow anti-pattern:

When code just kind of spews forth and becomes permanent, it becomes an architectural feature of the archeological variety. Things are built atop the structure without question and without hope of changing what is beneath them. The existing code is seen as an historical curiosity.

The best thing to do with lava is to blast it away, leaving just the useful bedrock. But removing unused CSS selectors is a tedious and error-prone undertaking; my Java IDE can tell me which methods and variables are no longer in use, and profile an application for redundancies and deprecated components, but I didn’t have a comparable development tool for styles. Luckily, my core development philosophy–somebody smarter than me has already solved this problem, and has probably put the information out on the Internet–panned out, and I discovered the Dust-Me Selectors Firefox add-on.

Dust-Me Selectors can look at an individual page, or at an entire site, and tell you which selectors are actually in use. It generates a nice report, broken down by stylesheet file and including line numbers, that will direct your blasting caps and jackhammers at the problem areas. After running it through my pages, I was able to reduce that main CSS file to just over 1,000 lines (it’s a complex site), with less than 200 selectors defined.

The tool can be run against individual pages or against a sitemap; for a large site, a sitemap is by far the best approach. If you don’t have a sitemap.xml file yet, it’s relatively easy to generate one, either with your development tools or CMS, by hand for a short list of pages, or with a simple script. I found that it was less reliable in single-page mode: your coverage won’t be as good, and things like Ajax windows are going to get in the way of accurate results. A comprehensive sitemap ensures coverage.
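If you do write one by hand, a minimal sitemap following the standard sitemap protocol looks like this (the URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/products.html</loc></url>
  <url><loc>http://www.example.com/contact.html</loc></url>
</urlset>
```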

It’s also a good idea to use caution when pruning the styles; I went through the list of flagged selectors and disabled them with comments, visually testing the site as I went along. There were a handful of cases where Dust-Me flagged styles that were actually in use, primarily within jQuery code that was manipulating element classes on the fly. After I verified that my leaner, meaner CSS files were working, I moved all of the commented code out into a new file just in case some of those selectors need to be resurrected in the future.

There are a few reasons to tidy up your CSS files. Download performance, though not as much of an issue in the broadband world, is a factor: my old core file was about 41KB, and my new one is 17KB. Browser performance, too, is a factor: parsing unneeded CSS code may be a very tiny drag on performance for faster machines and browsers, but every millisecond you save on unnecessary parsing is time you can put the browser to work doing something more interesting and useful.

The best argument for cleaning up the lava flow, though, is maintainability. Scrolling through those 2,000 lines of code in search of an errant style, or trying to debug a rendering issue when looking across a dozen files for a class or ID, is a drag on productivity. In the heat of battle, it’s far too easy just to add another selector to the bottom of the file, which only contributes to the chaos. Pruning the excess weight early in the development process makes it much easier to maintain clean code as the project moves along.

What I’ve been working on: The Architecture

I’ve found that putting together a very rough architectural sketch is the best way to start when you’re building an application from scratch. While it’s a lot more fun to write code and see it executed, you save yourself a lot of grief, and create a more robust and flexible system, if you think things through up front and draw some boxes and arrows.

For this project, I knew that I would have at least three tiers: the user interface, which would initially run on WebSphere Application Server but should be portable to WebSphere Portal Server; the business logic layer, which would do the heavy lifting of taking the user’s input and sending it to the right back end screens; and the back end screen interface itself, built on an existing Java bean architecture.

I also knew, from painful previous experience, that keeping those tiers separated was critical. Java makes n-tiered development easy, with the ability to control scope and encapsulate functions, but it also makes breaking a clean separation of tiers very easy, too. Sloppiness early on, in making too many methods public rather than protected or giving one class too much access to things outside its logical scope, can doom your application. So that initial architectural sketch should include an attempt to block out the domains of interest and identify what limited interfaces should exist between different parts of the application.
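To make that concrete, here’s a hypothetical sketch (the class and method names are mine, not from any real project) of keeping a tier’s public surface narrow:

```java
// Hypothetical sketch: one deliberate, public entry point into a tier,
// with the internals hidden from the layers above it.
public class OrderService {

    // Public: part of the narrow, intentional interface between tiers.
    public String submitOrder(String orderId) {
        return isValid(orderId) ? "ACCEPTED" : "REJECTED";
    }

    // Package-private: internal logic the client tier can never call directly,
    // so it can change without rippling across the tier boundary.
    boolean isValid(String orderId) {
        return orderId != null && orderId.length() > 0;
    }
}
```

Everything that is not part of the contract stays package-private (or protected), so a later change to the validation rules never forces a change in the client layer.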

I ended up with something roughly like this:

4GL Service Layer

  • Back Office Beans
  • Business Logic
  • Configuration Logic
  • Services (JAX-RPC)

Client Layer

  • Service Client
  • Business Logic (limited to validation rules)
  • Configuration Logic
  • User Interface (JSP, CSS, JavaScript)

User’s browser

The service and client layers are implemented as separate web applications: though they are packaged in the same enterprise application for ease of deployment, they are independent of each other and could be deployed as individual WAR files. Each domain in the sketch above represents at least one Java package, with most methods set to protected rather than public: this limits the points where changes in one layer can affect the code in another, and makes swapping out different services (caching, configuration, etc.) relatively easy. Building clear walls between application layers in the beginning will help you in building out and maintaining the application in the future.

In the original version of the application, developed on Eclipse and running on Tomcat (I didn’t yet have access to the licensed versions of IBM’s tools), the interaction between the service and client was handled through Axis, a Jakarta web service infrastructure. In the final version, running on WebSphere, the interface is handled through JAX-RPC, the default mode for WebSphere. My first attempt to deploy the application to WebSphere was, in fact, botched by the use of Axis: WebSphere wants to do JAX-RPC when it does web services between WebSphere servers, and was very unhappy about Axis. Luckily, both Eclipse and Rational (IBM’s version of Eclipse) have handy wizards for generating the web service bindings, so switching the web service layer was relatively painless and, since I started with the practice of separating the application into distinct domains, required no code changes.

In developing a SOA application, a lot of attention should be paid up front to designing your service interfaces. Keeping your changes to the service interfaces to a minimum will make your development a lot smoother–no need to re-generate the service and client bindings if you keep method signatures in place from version to version–so it pays to do as much analysis up front as possible.

For this application, I knew that I would be passing a user-generated data bean from the client layer to the service, sending that bean through various permutations in the service layer to do all of the back office work, and then sending that bean back up to the client to show the user the fruits of their labor (and any error messages that might have been picked up along the way). So the first order of business was designing the data bean, which would act as the primary transfer object, and the methods that would accept it.

The data object had to be very flexible, since the back end system is so denormalized: multiple fields and value types would have to be carried by the bean, not all of them well known at design time. The solution was to make that bean into the carrier of a collection of field objects, which have name, value, and type (since passing raw Objects over web services is forbidden) attributes. This allows the bean to pick up whatever fields it needs to feed the back office system, without having to change the code any time a new field is discovered. With simple getField(String fieldName) and setField(FieldObject field) methods, any value can be put on the bean.
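A rough sketch of that carrier-bean idea (all of the names here are my own illustrations, not the project’s code):

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical transfer bean: carries an open-ended set of named fields. */
public class DataBean implements Serializable {

    /** One named, typed value; only simple types cross the service boundary. */
    public static class FieldObject implements Serializable {
        private final String name;
        private final String value;
        private final String type; // e.g. "String", "int" -- raw Objects can't go over the wire

        public FieldObject(String name, String value, String type) {
            this.name = name;
            this.value = value;
            this.type = type;
        }
        public String getName()  { return name; }
        public String getValue() { return value; }
        public String getType()  { return type; }
    }

    private final Map<String, FieldObject> fields = new HashMap<String, FieldObject>();

    /** Any field the back office needs can be added without a code change. */
    public void setField(FieldObject field) {
        fields.put(field.getName(), field);
    }

    public FieldObject getField(String fieldName) {
        return fields.get(fieldName);
    }
}
```

Because the bean is just a map of simple field objects, discovering a new back-office field means adding a new entry at runtime, not recompiling and redeploying both tiers.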

Another consideration in developing your service interfaces is data typing. As noted, you can’t send a plain Object over the wire and expect to be able to cast it to the specific type you want. Any object that is transferred over the service needs to be mapped through XML, and so should consist of serializable or primitive members. I’ve found that sticking to Strings, ints, longs, decimals, and such is by far the safest way to work with web services; more complex objects can be generated from the primitive transfer objects after they’ve reached their destination.

Once the service interfaces were in place, and the demarcations within each application were defined, it was time to start wiring things together. And that’s the fun part!

What I’ve been working on: the anatomy of an application

Since September, I’ve been developing a Java application for a client in St. Paul. As noted in an earlier post, this has been a very successful project; I’m quite proud of the application, and of the quality of the work that we’ve done as a team. It’s unusual for a corporate IT project to be on time, under budget, and to deliver real functionality, so being involved in a project that hits all three is quite a treat.

The code in this project was bought and paid for by the client, for use in an application that runs behind the firewall, so I won’t be sharing any of the source here. But I think there’s value in sharing some of the patterns and principles, the architectural strategies, and the methodologies that have led to our success, and also in calling out some of the open source projects from which we’ve borrowed. In this series of posts, I’ll provide highlights of the project that have made it especially fun.

My role on the project is winding down at the end of the month, and I’ve started searching for a new gig. I can only hope that my next project will be half as enjoyable and successful as this one has been.

The Problem: So Many Screens, So Little Time

The application is used by a business unit responsible for setting up the licensing, maintenance agreements, and billing for customers. Before this application went into production, they used a series of browser screens that expose a 4GL application (extended, I believe, from a terminal screen application). To set up a customer order, the users would have to visit a dozen or more screens, often entering the same information into each one. Behind the screens is a highly de-normalized database.

Though the 4GL application contains some validation rules, other business rules were really more like uncodified traditions: the users could very easily enter “incorrect” data into the screens, because there were no program controls in place to check their input. Data entered in one screen is dependent on data entered in other screens, resulting in a lot of back-and-forth in the system to gather up all the information that needs to be submitted. Training a new user in the Byzantine ways of the process was difficult, and errors were frequent and painful to correct.

The primary goal for the application I’ve been working on, then, is to increase the users’ productivity and accuracy by giving them one form to complete. All of the manual lookup steps, all of the rules for deriving certain values and codes, needed to be automated to simplify and streamline the process.

Despite its limitations, though, the 4GL system is of great value to the business processes surrounding the data entry. It’s used to generate reports and billing, it serves as a repository of customer contact and relationship information, and it feeds into the business’s bottom line. Replacing it was not an option; the new solution would have to work with the 4GL layer, and be compatible with the rest of the business processes.

Another requirement going into the project was the separation of the business logic and UI layers. There was a strong interest in eventually using WebSphere Portal Server as the front end (not yet implemented, alas), but in the interest of meeting the schedule the first version of the application would be delivered as a stand-alone web application running on WebSphere Application Server. To avoid reinventing the wheel and introducing a lot of regression errors, we started off with a service-oriented architecture: the business logic would be implemented as a web service, with the front-end web application driven entirely by the web service layer.

The biggest challenge in getting this application started–the communication with the 4GL layer–had luckily been handled in a different project, so I was able to build on a successful set of APIs instead of developing my own methods for interacting with the back end screens. The 4GL programs expressed by the screens have an XML-over-HTTP interface, and a developer had already built a very nice tool for converting the XML into Java beans. The challenge of wiring together all of those screens, then, fell to me. And that’s the real heart of this application: marshalling a collection of disparate Java beans into an orchestrated process that does complex things using simple tools.

every day, in every way, getting better and better

Pardon our improvements? by John Morton

I’ve worked on a variety of projects in my IT career, from small Lotus Notes applications where I’ve been the business analyst, developer, tester, system administrator, and support engineer, to multi-year enterprise initiatives with far-flung teams and huge project management systems where my role has been very narrowly defined. I’ll admit to a preference for the former–I like to write code, and waiting weeks or months for the first solid requirements to trickle in is painfully dull. But all of the projects, like most IT projects, have been plagued by the usual disconnects, missed opportunities, and frustrations with delivering what the users really want on time and on budget.

My current project, though, has been surprisingly successful. We released our first version in November, after two months of development and testing on top of about six months of thorough analysis (most of which happened before I joined the project), and since then we’ve released new and improved versions on a monthly schedule. The customer has been pleased, the application has been solid, and we continue to meet the users’ expectations.

What’s the secret?

To a great extent, it’s due to a very talented team of developers, testers, project managers, analysts, and business users. We work together well, have open and honest communication, and set up realistic and reachable goals for each release. The problem with talent, though, is that it’s not necessarily reproducible; you can’t bank on having good people in every role, or even on having good people at the top of their game most days. A project that relies on talent alone is bound to fail eventually.

What has really worked for this project is a philosophy of continual improvement. Our driving principle has been, to borrow a line from Jeff Atwood, version 1 sucks; ship it anyway.

My current workplace doesn’t have a formal “methodology” for development, no waterfall gate-checks or SCRUM masters, at least that I’ve encountered. There are rudimentary project controls and such to meet corporate governance requirements, but development teams are left largely to organize their own efforts. As a result, we’ve landed on some practices that borrow heavily from various flavors of “agile” development without professing the full “agile” theology; the guidelines that I’ve found work best on this project, and that may be reproducible on other projects, are pragmatic and contingent, flexibly implemented within a loose framework. This may not work everyplace, on every project, and it doubtless has some scalability issues, but for a mid-sized project with an aggressive schedule, these are some practices that have worked for us:

Manage the requirements to the schedule: hit the dates by containing the enhancements

We have a huge list of things we’d like the application to do, ranging from simple tweaks to pipe-dream fantasies. They’re all good requirements, all worth meeting because they represent what the users really want. But they’re not all going to go into the first, second, or third release.

Instead, we’ve promised a monthly release with at least one major system enhancement, and as many smaller enhancements as can be realistically squeezed into the time frame. Like the big rocks fable suggests, we focus on the one big thing first, and then categorize the other requirements as hard, challenging, or low-hanging fruit. Once the big requirement for the next release is ready, we knock off the smaller requirements as time permits, always mindful that no small enhancement should jeopardize the big one. It sucks to leave low fruit on the branch, but we keep our spirits up in the knowledge that we’ll have a long harvest season if we keep the customer happy.

A little spice and sizzle helps, though

The “one big rock” is usually a meat-and-potatoes affair, and it’s always filling and nutritious. But we’re also sure to include a little spice among the smaller enhancements. Refreshing the style sheet, adding a more attractive screen layout, or providing an extra screen of administrative information on the application’s performance is often cheap, easy, and low risk, but it’s very useful for maintaining customer satisfaction. The users may not notice that you’ve shaved an average two seconds off the web service response time and implemented a really nifty sorting algorithm–indeed, you’d better hope they don’t notice those things, because their only evidence should be when they fail–but they’ll ooh and ahh over a nicer interface.

Track every requirement, no matter how small

Indeed, make your requirement-tracking as granular as possible. Break the big requirements up into bite-sized chunks, and build good estimates for them (this is where something like the Pomodoro Technique can really shine). You don’t know which rocks are big and which are small unless you track them, and you don’t know if you need to scale back your release features unless you do estimates.

Open up the black box and let everyone see the work list

Having a good requirements and bug-tracking system is critical to managing in a progressive-enhancement environment. We’re using FogBugz, but other tools–Roundup and Bugzilla come to mind–are also useful. Even a shared spreadsheet is better than nothing. The key requirement is that everything is on the table and visible to the entire project team; having the project progress available at a glance, and maintained in real time, is the only way to keep everyone honest and ensure that releases happen on schedule.

Build plenty of testing time into the schedule

I’ve known thin-skinned developers who don’t like testers. Personally, I’d rather have someone on my team find my bugs before a customer does: it’s easier and cheaper to fix problems before your release date, and a good, thorough tester can be the difference between a product that people love and one that makes their jobs harder. In our current project schedule, we have a code cut-off date a week before the release date, after about three weeks of development; this should be adjusted for larger and more complicated projects, with even more time dedicated to serious testing.

Release early and often

That “three weeks of development” in our project is really three weeks of development and testing. As soon as you have something to show, even if you know it’s not ready for release, get it out there for your testing team to break. If you’ve got users who can spend time looking at things that are in development, so much the better: unvarnished responses to early iterations can flesh out requirements and ensure that you’re meeting the customer’s needs. There’s nothing worse than releasing something that’s been carefully developed, thoroughly tested, and still misses the customer’s core requirements. During the last two weeks of development on my current project, I’m deploying something to the shared development environment nearly every day (and if I had an automated build and deploy system, I’d be checking in updates even more often).

Build a solid architecture in the beginning, and build out modularly

My current project lends itself well to continual improvement, because it was architected from the beginning to be modular. It’s a service-oriented architecture that uses the Apache Commons Configuration framework to abstract the business logic into XML documents. It’s developed to Java interfaces and abstract classes as much as possible, with an eye toward identifying and reusing patterns; if something can be accomplished through XML rather than code, that’s the direction we go.

SOA is a good fit for continual enhancements because the application layers can be clearly separated from each other; you’re less likely to break something if you don’t have to touch it. But the same principles apply to any development platform: make code small, abstract, and reusable, and avoid great big tangles of spaghetti. If you can’t see the whole method on your screen without scrolling, don’t adjust your monitor resolution: break the code up and look for the patterns.
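The coding-to-interfaces point can be sketched in a few lines of Java. The names here (`TaxRule`, `FlatTaxRule`) are invented for illustration, not taken from the project; the idea is simply that callers depend on a small interface, so a new rule implementation–perhaps selected through XML configuration–can be swapped in without touching the code that uses it:

```java
import java.util.List;

public class RuleDemo {
    // Small and abstract: callers depend on this interface, never on a concrete class.
    interface TaxRule {
        double apply(double amount);
    }

    // One concrete strategy; new rules can be added (or chosen via configuration)
    // without modifying total() or any other caller.
    static class FlatTaxRule implements TaxRule {
        private final double rate;

        FlatTaxRule(double rate) {
            this.rate = rate;
        }

        public double apply(double amount) {
            return amount * (1 + rate);
        }
    }

    // Reusable logic that knows nothing about which rule it is given.
    static double total(List<Double> amounts, TaxRule rule) {
        double sum = 0;
        for (double amount : amounts) {
            sum += rule.apply(amount);
        }
        return sum;
    }

    public static void main(String[] args) {
        TaxRule rule = new FlatTaxRule(0.10);
        System.out.printf("%.2f%n", total(List.of(100.0, 200.0), rule)); // prints 330.00
    }
}
```

Keeping each piece this small also makes the spaghetti test easy to pass: every method fits on one screen, and the pattern (a strategy behind an interface) is visible at a glance.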

The next release will be better

Whether the customer is pleased as punch, or grinding their teeth in angst, that’s the appropriate response: the next release will be better. The requirements we missed in this release are first on the docket for the next; the new requirements that have emerged from the testing rounds have been captured and scheduled for future deployments; there’s nowhere to go but up.

Provided that you set a pattern of actually delivering on this promise, the customer will be willing to accept that they won’t have the perfect system out of the gate. And if the customer is involved at every stage, and has a hand in the requirements triage and testing, they’ll be happy to play along with incremental enhancements. Something is almost always better than nothing, and unless they’ve got the self-control of toddlers they’ll be willing to defer some of their gratification, especially if they get a little taste of real improvement at each release.

I can imagine projects where these guidelines would fail to deliver: big enterprise initiatives with lots of interrelated parts are hard to release quickly in small pieces. At the same time, though, this may be as much a matter of perspective as of scale: if the project is too big for continual enhancement, maybe it’s really two or three or ten projects that need to be broken up and managed independently, with a longer test and integration period set aside to mesh the components. If it’s possible to deliver a little bit more often, rather than a lot after a really long time, my bias is toward the former: it keeps everyone working, ensures that real requirements are identified early, and shines some light into IT’s darkest boxes.

A box in the clouds

[Image: Cardboard Box PC (Top), by TimRogers]

I’m embarking on a little experiment with netbook operating systems; there’s a lot of interesting activity in the lightweight OS world–Ubuntu, Jolicloud, Chrome, Android, etc.–and I’m curious how the different systems stack up. Each weekend from last weekend until I run out of operating systems, patience, or time (or land on the ideal OS paradise and never want to leave), I’ll be installing a new platform on my Dell Mini and rating it on usability, stability, features, and other criteria specific to the netbook space.

One of the things that has kept me from making big OS changes on my netbook in the past is that getting Firefox reconfigured is such a hassle. While I’ve been using the Xmarks plugin for bookmark and password synchronization between home, work, and my netbook since the plugin was called Foxmarks, getting all of the other plugins installed and configured after wiping out the netbook has been a tedious chore. About 90% of what I use the netbook for is browser-based, so this is a relatively big deal for a little computer.

The solution that I’ve landed on is actually pretty simple, and uses two nice utilities in concert.

First, there’s FEBE, the “Firefox Environment Backup Extension,” a nice Firefox plugin. FEBE will create backups of whatever Firefox components you choose–plugins, themes, bookmarks, cookies, etc.–and restore them. You can set it up to do scheduled backups, restore settings into a new profile, and manage selective backup configurations.

And then there’s Dropbox, an online file storage and synchronization service. I’ve been using it to easily synchronize writing projects between my Windows PC and netbook, and it works like a charm: it silently synchronizes the files that I place into its directories, and seamlessly integrates with the file systems on both my Windows and Linux computers.

Before I uninstalled Ubuntu on my netbook, I ran a full backup of Firefox from FEBE to a directory under Dropbox’s control. Then when I installed Jolicloud, I added the FEBE plugin and installed Dropbox. In just a few clicks, I had all of my other plugins plus bookmarks, passwords, and other browser settings back in place.

I admit, it was a little disconcerting to be suddenly confronted with more than a dozen Firefox tabs for each installed plugin after the FEBE restore ran. But it was a lot easier to close tabs than it would have been to reinstall all of those plugins.

The same concept could, of course, be used with other combinations of tools. FEBE natively supports other backup destinations, for example, and there are some other tools for doing Firefox backups (I’ve used MozBackup before, which handles the whole Mozilla suite, but it’s a Windows-only utility and therefore not terribly helpful on my netbook).

When I move on from Jolicloud in a few days, I’ll be going through the same steps again, perhaps with a few refinements. Simple is good.

Making a list and checking it twice

In keeping with my recent interest in the Pomodoro Technique, I’ve been on the lookout for other dead simple techniques to make my work life easier. This time, again via a couple of podcasts (the New York Times Book Review and Lucy Kellaway of The Financial Times), I’ve discovered the power of check lists.

In The Checklist Manifesto, Atul Gawande explores how the lowly check list can save lives in the operating room. By having everyone on the operating team follow a straightforward list of pre-conditions and procedures, the World Health Organization was able to cut deaths from surgical complications by 47%. The problem, as Dr. Gawande explains it, is one of extreme complexity: there are so many steps required, and so many places where one error can cause severe failure, that the human brain is simply ill-equipped to handle it all by itself.

I don’t have nearly as complex a job as a surgeon or an airline pilot, and I’m very happy to say that no lives are held in the balance by my ability to execute my tasks without error. But there are a lot of details in my work, and a lot of places where things can go terribly wrong (on a small scale), and though no lives will be lost, there’s a good chance that my day (or the day of my customer) will be ruined by failure. I speak from bitter experience of overlooking a simple detail and seeing things spin badly out of control.

As a test of the manifesto’s thesis, I put together a check list for my application build process. It’s a terribly manual process, with lots of chances to make a bad build. Yes, it should be done on a build server directly from the source repository (at my last job, we used Hudson: I highly recommend it to anyone looking for a clean, simple, flexible build infrastructure), but as a contractor I’m a guest here and can’t really demand that kind of change. And yes, I could make things easier for myself with an ANT script, but I could also make things easier for myself with DTDs for my XML, more consistent use of jQuery, and some Eclipse plugins for my configuration system, but somehow I have to find time to implement features too; we must choose our battles, and I’m willing to defer this one. We’re running the development of this project on an aggressively iterative time line, so in the midst of development I may have one or more daily builds to release to the customer for review, and monthly deployments to production. That’s a lot of opportunity to miss a step and waste several hours of time.
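For what it’s worth, the deferred ANT script wouldn’t have to be elaborate to pay for itself. A minimal skeleton for this kind of manual build–with all target, property, and path names invented for illustration, since the real project layout isn’t shown here–might look something like this:

```xml
<project name="myapp" default="war" basedir=".">
  <property name="src.dir"   value="src"/>
  <property name="build.dir" value="build"/>
  <property name="dist.dir"  value="dist"/>

  <!-- Compile the Java sources -->
  <target name="compile">
    <mkdir dir="${build.dir}/classes"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}/classes"
           includeantruntime="false"/>
  </target>

  <!-- Package the web application; the web.xml location is an assumption -->
  <target name="war" depends="compile">
    <mkdir dir="${dist.dir}"/>
    <war destfile="${dist.dir}/myapp.war" webxml="web/WEB-INF/web.xml">
      <classes dir="${build.dir}/classes"/>
      <fileset dir="web" excludes="WEB-INF/web.xml"/>
    </war>
  </target>
</project>
```

Even a two-target script like this would replace several manual steps, and a well-maintained check list is itself a ready-made specification for which steps to automate first.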

Originally, inspired by the Pomodoro Technique’s paper system, I thought I’d print out a check list for each deployment. Having a physical object to mark the steps on seemed like a good way to ensure focus. But one of the hallmarks of a good check list is its flexibility and adaptability: my first list was obsolete after the second time I used it.

I looked at a few on-line tools–Toodledo and Remember the Milk seemed like the hot properties in to-do lists–but they didn’t quite fit the bill. The thing that makes a check list different from a to-do list is that it lives in multiple instances: the same steps repeated over and over, rather than tasks that are assigned and executed once. One of my requirements was that my check list be cloneable, while at the same time offering flexibility in adding and removing steps as the process changed.

I finally ended up at Checkvist, which met all my needs:

  1. I can create a master check list and copy it any number of times as needed
  2. I can add and remove steps on the master template, and can also modify the working copies
  3. Tasks can be arranged, rearranged, and put into hierarchies
  4. The interface is quick, clean, and simple, and a joy to use
  5. It’s free (though there is a paid version, which supports encryption and secure sharing of lists)

I have a master build check list that I’ve used for 10 builds so far; and so far, none of my builds have failed. The list is largely obvious, possibly even pedantic, but there’s a tale of woe behind almost every line: a build that failed because the application.xml file was corrupt, or because an external properties file was for the wrong environment, or because a network hiccup caused the compiled application to copy incorrectly to the deployment server. Most of the steps take only a few seconds to perform, but can save hours of effort to correct.

Dr. Gawande found that surgeons were initially resistant to using a check list. They consider themselves (perhaps rightly) above such things: they are as much artists as they are technicians, and going through a check list seems anathema to their skills and knowledge. And I had a little resistance to overcome myself in giving it a try; I think of myself less as a software engineer than a software sculptor, shaping the formless bits and bytes of data and code into a beautiful machine. But after giving the check list a chance, I found that it actually freed me up to think more about the creative parts of my work and less about the mundane but critical details. The benefits have included:

  1. More consistent success in releasing working builds.
  2. Less anxiety: I’m both absent-minded and anxious, the sort of person who returns home after getting half way to work to make sure I really did unplug the coffee maker and lock the doors. The check list gives me confirmation that I did in fact take care of the details.
  3. Less time thinking about the details: I don’t have to remember where in the configuration settings I have pieces to switch for deployments to dev vs. cert, or which files point to service endpoints. I’ve spelled out the specifics in the check list, and I update the check list when the details change.
  4. Better development practices overall: I’ve been using the Apache Commons Configuration API to control the application’s setup, and the check list encourages me to put environment-specific information into consistent spots in the application. I am also far less tempted to hard-code things that might have to change at deployment if I know that they need to go into the check list; externalization is good for maintainability, and also good for keeping the check list speedy.
  5. A concrete log of my work: Checkvist lets me archive my check lists after I’ve finished with them, so I can go back over them and verify that each step was completed, and also see how the process has changed over time.
  6. A better understanding of the process: when I do get around to using ANT to simplify things, I’ll have a template to work from; steps that could benefit from improvement stand out in the list, and can be addressed before they become entrenched.

Using a check list for this particular set of tasks has encouraged me to identify other things that can benefit from this discipline. Adding configurable components to the application, branching the project in the repository, organizing requirements for future releases: all repeatable processes with the possibility of failure if a detail is missed. And all candidates for freeing up my brain to think of more interesting things while the check list serves as my memory.