I’ve worked on a variety of projects in my IT career, from small Lotus Notes applications where I’ve been the business analyst, developer, tester, system administrator, and support engineer, to multi-year enterprise initiatives with far-flung teams and huge project management systems where my role has been very narrowly defined. I’ll admit to a preference for the former–I like to write code, and waiting weeks or months for the first solid requirements to trickle in is painfully dull. But all of the projects, like most IT projects, have been plagued by the usual disconnects, missed opportunities, and frustrations with delivering what the users really want on time and on budget.
My current project, though, has been surprisingly successful. We released our first version in November, after two months of development and testing on top of about six months of thorough analysis (most of which happened before I joined the project), and since then we’ve released new and improved versions on a monthly schedule. The customer has been pleased, the application has been solid, and we continue to meet the users’ expectations.
What’s the secret?
To a great extent, it’s due to a very talented team of developers, testers, project managers, analysts, and business users. We work together well, have open and honest communication, and set up realistic and reachable goals for each release. The problem with talent, though, is that it’s not necessarily reproducible; you can’t bank on having good people in every role, or even on having good people at the top of their game most days. A project that relies on talent alone is bound to fail eventually.
What has really worked for this project is a philosophy of continual improvement. Our driving principle has been, to borrow a line from Jeff Atwood, “version 1 sucks; ship it anyway.”
My current workplace doesn’t have a formal “methodology” for development, no waterfall gate-checks or Scrum masters, at least that I’ve encountered. There are rudimentary project controls and such to meet corporate governance requirements, but development teams are left largely to organize their own efforts. As a result, we’ve landed on some practices that borrow heavily from various flavors of “agile” development without professing the full “agile” theology; the guidelines that I’ve found work best on this project, and that may be reproducible on other projects, are pragmatic and contingent, flexibly implemented within a loose framework. This may not work everywhere, on every project, and it doubtless has some scalability issues, but for a mid-sized project with an aggressive schedule, these are some practices that have worked for us:
Manage the requirements to the schedule: hit the dates by containing the enhancements
We have a huge list of things we’d like the application to do, ranging from simple tweaks to pipe-dream fantasies. They’re all good requirements, all worth meeting because they represent what the users really want. But they’re not all going to go into the first, second, or third release.
Instead, we’ve promised a monthly release with at least one major system enhancement, and as many smaller enhancements as can be realistically squeezed into the time frame. Like the big rocks fable suggests, we focus on the one big thing first, and then categorize the other requirements as hard, challenging, or low-hanging fruit. Once the big requirement for the next release is ready, we knock off the smaller requirements as time permits, always mindful that no small enhancement should jeopardize the big one. It sucks to leave low fruit on the branch, but we keep our spirits up in the knowledge that we’ll have a long harvest season if we keep the customer happy.
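To make the triage concrete, here’s a minimal sketch in Java of the “one big rock, then whatever fits” rule. The names, categories, and estimates are invented for illustration (in practice this lives in the tracking system, not in code), but the invariant is the one that matters: the big rock’s budget is spoken for before any small enhancement gets a look.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ReleasePlanner {

    // Easiest first, so the natural enum order favors the low fruit.
    enum Difficulty { LOW_HANGING_FRUIT, CHALLENGING, HARD }

    record Enhancement(String name, int estimateDays, Difficulty difficulty) {}

    static List<Enhancement> plan(Enhancement bigRock, List<Enhancement> backlog,
                                  int capacityDays) {
        List<Enhancement> release = new ArrayList<>();
        release.add(bigRock);                       // the one big thing goes in first
        int remainingDays = capacityDays - bigRock.estimateDays();

        // Sort the smaller items easiest-first, then take whatever fits;
        // nothing is allowed to eat into the big rock's budget.
        List<Enhancement> sorted = new ArrayList<>(backlog);
        sorted.sort(Comparator.comparing(Enhancement::difficulty)
                              .thenComparingInt(Enhancement::estimateDays));
        for (Enhancement e : sorted) {
            if (e.estimateDays() <= remainingDays) {
                release.add(e);
                remainingDays -= e.estimateDays();
            }
        }
        return release;                             // everything else waits for next month
    }
}
```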
A little spice and sizzle helps, though
The “one big rock” is usually a meat-and-potatoes affair, and it’s always filling and nutritious. But we’re also sure to include a little spice among the smaller enhancements. Refreshing the style sheet, adding a more attractive screen layout, or providing an extra screen of administrative information on the application’s performance is often cheap, easy, and low risk, but it’s very useful for maintaining customer satisfaction. The users may not notice that you’ve shaved an average of two seconds off the web service response time and implemented a really nifty sorting algorithm–indeed, you’d better hope they don’t notice those things, because their only evidence should be when they fail–but they’ll ooh and ahh over a nicer interface.
Track every requirement, no matter how small
Indeed, make your requirement-tracking as granular as possible. Break the big requirements up into bite-sized chunks, and build good estimates for them (this is where something like the Pomodoro Technique can really shine). You don’t know which rocks are big and which are small unless you track them, and you don’t know if you need to scale back your release features unless you do estimates.
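As a toy illustration (the task names and numbers are made up), the arithmetic that granular estimates buy you is trivial, but you can’t do it at all without the bite-sized chunks:

```java
import java.util.Map;

public class RequirementBreakdown {
    public static void main(String[] args) {
        // One "big rock" broken into chunks, each estimated in
        // 25-minute pomodoros (hypothetical tasks and figures).
        Map<String, Integer> chunks = Map.of(
                "Design approval-routing schema", 4,
                "Build routing service interface", 6,
                "Wire routing rules into XML config", 3,
                "Unit tests for the edge cases", 5);

        int estimated = chunks.values().stream().mapToInt(Integer::intValue).sum();
        int capacity = 16; // what's realistically left before the code cut-off

        System.out.printf("Estimated %d pomodoros against %d available.%n",
                          estimated, capacity);
        if (estimated > capacity) {
            System.out.println("Over budget: scale back the release now, not on release day.");
        }
    }
}
```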
Open up the black box and let everyone see the work list
Having a good requirements and bug-tracking system is critical to managing in a progressive-enhancement environment. We’re using FogBugz, but other tools–Roundup and Bugzilla come to mind–are also useful. Even a shared spreadsheet is better than nothing. The key requirement is that everything is on the table and visible to the entire project team; having the project progress available at a glance, and maintained in real time, is the only way to keep everyone honest and ensure that releases happen on schedule.
Build plenty of testing time into the schedule
I’ve known thin-skinned developers who don’t like testers. Personally, I’d rather have someone on my team find my bugs before a customer does: it’s easier and cheaper to fix problems before your release date, and a good, thorough tester can be the difference between a product that people love and one that makes their jobs harder. In our current project schedule, we have a code cut-off date a week before the release date, after about three weeks of development; this should be adjusted for larger and more complicated projects, with even more time dedicated to serious testing.
Release early and often
That “three weeks of development” in our project is really three weeks of development and testing. As soon as you have something to show, even if you know it’s not ready for release, get it out there for your testing team to break. If you’ve got users who can spend time looking at things that are in development, so much the better: unvarnished responses to early iterations can flesh out requirements and ensure that you’re meeting the customer’s needs. There’s nothing worse than releasing something that’s been carefully developed, thoroughly tested, and still misses the customer’s core requirements. During the last two weeks of development on my current project, I’m deploying something to the shared development environment nearly every day (and if I had an automated build and deploy system, I’d be checking in updates even more often).
Build a solid architecture in the beginning, and build out modularly
My current project lends itself well to continual improvement, because it was architected from the beginning to be modular. It’s a service-oriented architecture that uses the Apache Commons Configuration framework to abstract the business logic into XML documents. It’s developed to Java interfaces and abstract classes as much as possible, with an eye toward identifying and reusing patterns; if something can be accomplished through XML rather than code, that’s the direction we go.
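As a sketch of that pattern (assuming Commons Configuration 1.x, with the class name, XML keys, and threshold rule invented for illustration), coding to an interface and reading the rule values from XML means a business change is a config edit rather than a rebuild:

```java
import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.XMLConfiguration;

// The rest of the application only knows this interface.
interface ApprovalPolicy {
    boolean requiresReview(double amount);
}

// rules.xml (the root element is omitted from the keys):
// <rules>
//   <approval><threshold>5000</threshold></approval>
// </rules>
class ConfiguredApprovalPolicy implements ApprovalPolicy {
    private final double threshold;

    ConfiguredApprovalPolicy(String configFile) throws ConfigurationException {
        XMLConfiguration config = new XMLConfiguration(configFile);
        this.threshold = config.getDouble("approval.threshold");
    }

    @Override
    public boolean requiresReview(double amount) {
        return amount >= threshold; // the rule lives in XML, not here
    }
}
```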
SOA is a good fit for continual enhancements because the application layers can be clearly separated from each other; you’re less likely to break something if you don’t have to touch it. But the same principles apply to any development platform: make code small, abstract, and reusable, and avoid great big tangles of spaghetti. If you can’t see the whole method on your screen without scrolling, don’t adjust your monitor resolution: break the code up and look for the patterns.
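A toy example of the payoff (the names are invented): once each step has its own name, the pattern reads top to bottom at a glance, and each piece can be changed or tested without touching the others:

```java
class Order { /* fields elided for the example */ }

public class OrderProcessor {

    // This used to be one scrolling method; now it reads like the requirement.
    public void process(Order order) {
        validate(order);
        applyDiscounts(order);
        persist(order);
        notifyCustomer(order);
    }

    private void validate(Order order) { /* ... */ }
    private void applyDiscounts(Order order) { /* ... */ }
    private void persist(Order order) { /* ... */ }
    private void notifyCustomer(Order order) { /* ... */ }
}
```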
The next release will be better
Whether the customer is pleased as punch, or grinding their teeth in angst, that’s the appropriate response: the next release will be better. The requirements we missed in this release are first on the docket for the next; the new requirements that have emerged from the testing rounds have been captured and scheduled for future deployments; there’s nowhere to go but up.
Provided that you set a pattern of actually delivering on this promise, the customer will be willing to accept that they won’t have the perfect system out of the gate. And if the customer is involved at every stage, and has a hand in the requirements triage and testing, they’ll be happy to play along with incremental enhancements. Something is almost always better than nothing, and unless they’ve got the self-control of toddlers they’ll be willing to defer some of their gratification, especially if they get a little taste of real improvement at each release.
I can imagine projects where these guidelines would fail to deliver: big enterprise initiatives with lots of interrelated parts are hard to release quickly in small pieces. At the same time, though, this may be as much a matter of perspective as of scale: if the project is too big for continual enhancement, maybe it’s really two or three or ten projects that need to be broken up and managed independently, with a longer test and integration period set aside to mesh the components. If it’s possible to deliver a little bit more often, rather than a lot after a really long time, my bias is toward the former: it keeps everyone working, ensures that real requirements are identified early, and shines some light into IT’s darkest boxes.