Responsive iframe elements

I hate rush jobs. “Haste makes waste” was already two centuries old before Ben Franklin co-opted it, and it hasn’t lost any of its truth even as parts of our world have sped up. It is, alas, an easy little rhyme that we too often forget to apply.

Recently I had an awful rush job–a web page that varied significantly from our standard site templates, had been poorly scoped and spec’d, and that was in flux even as we neared a deadline that had been pushed up by a week. I had to cut a lot of corners to get it to work without having to tinker with code that might have affected other pages; one of the corners I cut was on a video embedded in the page. Our site shows videos in a lightbox (we use the jQuery Fancybox implementation), but the code for doing so is deeply entangled with lots and lots of things that weren’t appropriate for this page. The page owners didn’t want the lightbox, and with time running out I decided that digging into our tangled code for one part of the page wasn’t worth the effort; instead, I went with an iframe from a video sharing site and moved on to other things. I hate iframes that show external content, but sometimes you need to compromise to meet a deadline.

The problem with a quick fix like this, though, is that it will eventually come back to bite you, especially when it’s compounded by other shortcuts, like skimping on testing. Anyone who’s been doing software development for more than a few months learns quickly that testing is something that you don’t rush; indeed, the importance of solid software is something that has even made the news recently. But “test early and often” is another of those old proverbs–seanfhocail, as the Irish would say–we ignore all too often.

In post-deployment testing (the worst kind), the site owners discovered that the iframe didn’t scale on devices like the iPhone. And they really wanted it to scale. So while flurries of emails labeled “URGENT” (a word that should be banned from email headers; if I got to do a rewrite of the Outlook client, it would have a routine that automatically trashes any message with that word in its subject) flew around, I did some web searches to see what smart people have done. And I found a nice, simple, smart answer.

The best implementation of a responsive iframe that I found was from Anders Andersen’s Responsive embeds article, based on this A List Apart article. In short, the iframe is wrapped in a div whose padding preserves the video’s aspect ratio, overriding the fixed height and width parameters that video sharing sites frequently include in the code they hand you.

For example, this is what the code Vimeo provides for the video below looks like, with the width and height parameters for the iframe set in absolute pixels:

<iframe src="//player.vimeo.com/video/79788407" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe> <p><a href="http://vimeo.com/79788407">Moments in Morocco</a> from <a href="http://vimeo.com/theperennialplate">The Perennial Plate</a> on <a href="https://vimeo.com">Vimeo</a>.</p>

Moments in Morocco from The Perennial Plate on Vimeo.

The video looks fine in a full-screen browser, but look at it on a smaller device (or even shrink your browser window), and you’ll see that it quickly becomes unusable. (I’ve bumped up the height and width a bit from the original to make the behavior more obvious.)

Using Anders’ solution, though, the video works much, much better as its space shrinks:
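Anders’ technique boils down to a few lines of CSS. Here’s a sketch of it as I applied it (the video-wrapper class name is my own label, not his; the padding percentage comes from the embed’s height ÷ width ratio):

```html
<style>
/* The wrapper's bottom padding is a percentage of its WIDTH, so the box
   always keeps the video's aspect ratio as the page shrinks or grows. */
.video-wrapper {
    position: relative;
    padding-bottom: 56.25%; /* height ÷ width: 281/500 ≈ 56%; 56.25% is pure 16:9 */
    height: 0;
    overflow: hidden;
}
/* The iframe is stretched absolutely to fill the wrapper, replacing
   the fixed pixel dimensions from the embed code. */
.video-wrapper iframe {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
}
</style>

<div class="video-wrapper">
    <iframe src="//player.vimeo.com/video/79788407" frameborder="0"
            webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
</div>
```

Note that the width and height attributes are simply dropped from the iframe; the wrapper’s CSS takes over both dimensions.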

The thing I most like about this is that it’s a pure CSS solution. I’ve also seen some JavaScript solutions, but dropping JavaScript onto a page to solve a problem like this makes me nervous, especially a page that already has quite a bit of JavaScript lurking on it. Weakly-typed, non-compiled scripting languages need to be used with great caution …

Of course, the real solution is to apply the same kind of discipline to website development that other aspects of software development receive: attention to requirements, testing, project management, and cross-functional coordination would go a long way toward saving us from bad rush jobs and unmaintainable spaghetti code. But that’s a harder fix to apply than some CSS and HTML …

Google Hassles, part 2

Feedly – via hplusk on Flickr

The previous post about a hassle with Google Calendar and Windows 8 was started a couple weeks ago and just finished up today, in the aftermath of the Google Reader news. The only big change is my increasing lack of love for Google …

I’ve been a Google Reader junkie for years; after briefly experimenting with iGoogle, I’ve maintained somewhere between 200 and 400 RSS feeds in Google Reader. They range from the big news sites (New York Times, The Guardian) to a Google Alerts feed for news about my father’s town in western Maine; from the big literature sites like The Millions and Bookslut to my favorite small presses like Small Beer and Graywolf; and of course a smattering of tech blogs and vendor news that have proven useful in my work life. Most mornings I start off my day with a cup of coffee, a dish of yogurt, and a quick spin through my favorite funnies (the great thing about using Google Reader has been that my comic strip reading isn’t cluttered by Family Circus and Pluggers; I can read classics like Cul de Sac and Bloom County, and mix in xkcd and Bad Machinery, without being assaulted by Not Me and Barfy).

And so it came as a very rude wake-up indeed on Thursday when I was interrupted by a prompt telling me that Google Reader will be going away on July 1, 2013. It made Tom the Dancing Bug slightly less funny to ponder how this was going to add more hassle to my computing life, after already coming to terms with a forced calendar change.

There are plenty of articles out there now about Google Reader alternatives, and about what made Google Reader–in all its stripped-down dullness–so great. I mostly agree with Kevin Drum that social media is a poor replacement for raw news, and I’m drifting back toward Feedly, a service that sits on top of Google Reader and has been anticipating its demise for some time. I briefly tried it a couple years ago, but a few glitches made it harder to use and drove me back to the bare-bones glory of Google Reader; over the last couple days, at least, the glitches have been minor, which is pretty impressive considering the amount of traffic they’re getting.

What strikes me as interesting about this is what it suggests about Google and its future, and what it tells me about how I use the Internet, which is apparently not the way most people use it (or, at least, is a way of using it that is hard to monetize).

Google is an uneasy blend of a public utility and an advertising broker. On the one hand, it has pioneered and improved many of the things that make the Internet so useful: search, both general and specific (like Google Scholar); mail and calendaring; project and code management. But its profits don’t come from those things; its money comes from its marriage of search algorithms with targeted advertising, all those “sponsored links” in search results and YouTube videos. The margins on services like RSS readers and e-mail are razor thin, if they can even be made profitable; web advertising, it seems, is much more profitable, especially if your products, AdWords and AdSense, are the most widely adopted on the Internet.

The advertisements creep into Google’s other products, not just search: Gmail’s mail and calendar pages sport little advertising banners, too, creepily based on the content of your email (until I implemented a little custom CSS in Firefox to hide them, I was offered deals on camping supplies and books whenever I checked my mail). But for some reason advertising never made it into Google Reader. A few of the RSS feeds I read have ads inserted through FeedBurner (another Google product), but for the most part my Google Reader experience has been blessedly advertisement-free.
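(For the curious, the Firefox trick amounts to a few lines of userContent.css. The selector below is purely illustrative–Gmail’s real class names are obfuscated and change often, so you’d have to find the current ad container with the DOM inspector.)

```css
/* userContent.css – hide Gmail's ad banners.
   .ad-container is a made-up placeholder; inspect the page for the real class. */
@-moz-document domain(mail.google.com) {
    .ad-container { display: none !important; }
}
```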

This is in stark contrast to both Facebook and Twitter, where sponsored posts and tweets (including some from beyond the grave) are rampant; the bar for advertising in social media seems much lower than it is for RSS readers. And that may be because of the audience: RSS is a somewhat nerdy tool, beloved by people who want a very specific and targeted reading experience on the web (even if the topics to which they subscribe can be rather broad). It’s much easier to prune your RSS feeds than your Facebook friends and Twitter follows, and also much easier to know what you’re getting into when you subscribe to a feed: I’m seldom surprised by the sites whose RSS I consume, but I find that the people behind Facebook and Twitter accounts can be surprisingly disappointing in their inanity. I have not hesitated to unsubscribe from RSS feeds that were chock-a-block with “sponsored posts” (all the more reason to stay away from BuzzFeed and HuffPo), and if Google Reader had become cluttered with advertising I would probably have chucked it as well (or done some CSS tricks to hide it).

Google would like very much to direct us away from things that don’t put advertising in front of us. (And apparently has some interest in keeping us from avoiding the ads as well.) Google+, which is where Google would like its Google Reader refugees to land, is promoting sponsored content and leveraging the AdWords and AdSense products. The future, according to the big social media players, is in targeted advertising.

Brian Solomon has a nice article that contrasts the Google model with Kickstarter, where a Veronica Mars movie was funded to the tune of over $3 million in a few hours. I’d suggest that this isn’t so different from another model that I happily support every month as a sustaining member of Minnesota Public Radio and Twin Cities Public Television: because I value an ad-free source of news, information, and entertainment, I gladly pony up a few dollars to keep it “free” (there being many different kinds of “free”, of course).

Would I pay for Google Reader, or a similar way of keeping up with a large and varied collection of web sites? Probably. I do, after all, subscribe to Flickr and Dropbox. Would I tolerate advertising in Google Reader if that kept it around? Probably not; I’d resent the imposition of the ads, and would either hack it like I have Gmail, Google Search, and (with varying success) Facebook, or flee to a different service. And I think it comes down to how I like to use the Internet: I’m much happier choosing my reading myself, rather than letting someone else–Google, Facebook, an advertiser, or even a social media contact–choose it for me. And in Google’s model, that’s a lot harder to monetize.

Google Hassles, part 1

Death and the Lansquenet

I’ve been on my best behavior financially for a while now, and after spending about three months pondering an upgrade to my laptop I finally settled on the Lenovo Yoga 13, a Windows 8 laptop/tablet hybrid with a clever design and not-too-awful price considering its abilities. While I would probably have preferred to go with something that runs a Linux flavor instead of paying the Windows OS premium, I’ve got various levels of OS-lock-in, so for my home computer, which I use for balancing the family budget, editing photos and videos, managing my music collection, and reading the news, Windows makes sense. And I was intrigued enough by Windows 8 that I thought I might as well give it a shot.

When I got the laptop a couple weeks ago, I started going through the setup and configuration process: deleting the bloatware that came pre-installed, setting up my favorite browser plugins, migrating data. Though it’s come in for some abuse, I thought that the Windows 8 Start page looked like something worth a try: a nice, clean, up-front place to launch my most-accessed tools. It’s not a UI that I would want for work (I’m much more cluttered at work, with various IDEs, editors, browsers, and other tools taking up space), but for home it looked good. And the most-accessed tools on my plate are, of course, mail and calendaring.

For a few years now I’ve been using Gmail (pulling in legacy e-mail accounts to a single, efficient inbox) and Google Calendar. I was pleased to see that my Gmail inbox came right over onto my new laptop, but for the life of me I couldn’t configure the calendar. Every time I tried to add my Gmail account to the calendar, I got an error saying that my account wasn’t “available.” After checking and re-checking my credentials, I did a little investigation, and lo and behold I found that it wasn’t my fault at all: as of January 30, 2013, Google had stopped supporting the Exchange ActiveSync technology that drives the Windows 8 Calendar integration with Google Calendar. Had I made my purchase decision a few weeks earlier, though, I would never have seen this problem–”[s]tarting January 30, 2013, consumers won’t be able to set up new devices using Google Sync; however, existing Google Sync connections will continue to function.”

Although the (partially-buried) announcement puts a technical gloss on things–“Google now offers similar access via IMAP, CalDAV and CardDAV, making it possible to build a seamless sync experience using open protocols,” implying that the open protocols not supported by the Windows 8 app are superior to the proprietary EAS protocol–this hardly seems to be a technical decision, since legacy accounts will continue to be supported and no end-of-life indication has been given. Google has also shut down some other synchronization interfaces, but those are for platforms in their waning days, not for platforms that are just starting to ramp up.

In short, this appears to be a tactical business decision (with more than a touch of Realpolitik) aimed at throwing a spanner into the Microsoft works. I suspect that Microsoft will scramble to come up with a solution for their apps that uses one of the open protocols, but it won’t be ready in time to make for a seamless transition for the many Google Calendar users who are looking at their upgrade options. Faced with difficulties in synchronizing their important data with a new laptop, phone, or tablet, many of those users may postpone a change or consider a non-Microsoft platform. With Microsoft just moving into the hardware space, any disruption could have a significant effect.

For those of us not battling it out at the Clash-of-the-Titans level, but rather just trying to make technology work on a daily basis, this tactical maneuver is a frustrating time-waster. It made me more than a little annoyed at Google, a company that (until very recently–more on the Google Reader hassle in a bit …) I’ve always championed in arguments about open APIs and software services; and, perhaps perversely, it threw me into the arms of Microsoft. My solution was to export my Gmail calendar to a new Hotmail account (I’ll grant that Google does make your cloud-based data portable and easy to extract), which synchronized very nicely with Windows 8 and iOS, proprietary APIs be damned. There are still hassles to come–synchronizing it with my wife’s calendar, making sure all the items came over cleanly, tweaking the settings–but more annoying is the lack of trust in Google all this produced.

A lack of trust made all the more intense this past Thursday morning, when my 5:00 AM reading of the funnies was interrupted by a prompt warning that Google Reader would wind down on July 1; more on that to come …

Conditional Comments Revisited: the “document mode” thicket

I’ve continued to lean on conditional comments to deliver browser-specific CSS to different versions of Internet Explorer. Recently, though, I’ve discovered a new wrinkle that complicates things: it’s not so much the browser mode as the document mode that determines how Internet Explorer renders things.

I ran into this problem when my manager was testing a page that uses conditional comments in the traditional way, something like this:

<!--[if IE 7]>
<link rel="stylesheet" type="text/css" href="/css/ie7.css">
<![endif]-->
<!--[if IE 8]>
<link rel="stylesheet" type="text/css" href="/css/ie8.css">
<![endif]-->
<!--[if IE 9]>
<link rel="stylesheet" type="text/css" href="/css/ie9.css">
<![endif]-->

Pretty straightforward: depending on the browser version, a different CSS file is loaded that deals with the particular quirks of that browser.
But my manager was viewing the page in Internet Explorer 9 with Compatibility View turned on and the document mode set to IE9 Standards. And the page looked entirely wrong!

The problem, it turns out, is that Compatibility View caused the IE7 conditional comment to fire, while the document mode caused the page to be rendered according to IE9 rules: we served IE7 CSS to an IE9 rendering engine.

The fix isn’t too horrible, though it does complicate things a bit, requiring some JavaScript to get the job done. Instead of loading the CSS based on browser mode, we load it based on document mode, so browsers that have a mismatch between the two will get something they can handle:

<!--[if gte IE 7]>
<script type="text/javascript">
var ieCss = "";
try {
    if (document.documentMode == 8) {
        ieCss = "ie8.css";
    } else if (document.documentMode == 9) {
        ieCss = "ie9.css";
    } else {
        // IE7, or any mode where documentMode is missing
        ieCss = "ie7.css";
    }
} catch (exception) {
    ieCss = "ie7.css";
}
document.writeln('<link rel="stylesheet" type="text/css" href="/css/' + ieCss + '">');
</script>
<![endif]-->

Using the new Derby drivers in the old WebSphere server

Have some new functionality that isn’t showing up when deployed on an old Java server? You might need to investigate your classloader settings!

I have an application that needs a fast cache layer that provides better retrieval features than most of the common cache libraries provide; ideally, the same kinds of queries that I could run against a database, but without all the overhead of having a full-blown (or even lightweight) RDBMS. We’re already using embedded Derby for another project, so the new in-memory support that Derby offers seemed the ideal solution: I could tap into existing Derby expertise, while using a robust and thoroughly tested package.

Creating a simple web application based on the sample code that came with Derby wasn’t too hard. Getting it to work in embedded mode, where the database is written to a local disk, was a breeze. But no matter what I did, the in-memory option just wasn’t working. This worked great:

Connection conn = DriverManager.getConnection("jdbc:derby:myDb;create=true");

But this failed with a “no suitable driver” error:

Connection conn = DriverManager.getConnection("jdbc:derby:memory:myDb;create=true");

I picked through my project’s WEB-INF/lib, went through the Derby installation guide several times, and put the DriverManager through all sorts of interrogation, all to no avail. I simply couldn’t find a way to load a driver that would be “suitable,” nor figure out why the freshly-downloaded driver was not.

Then one of my co-workers noted that WebSphere itself uses Derby to store its configuration, and so the version of Derby installed with the WAS 7 server we’re using–Derby 10.3, it appears–is loaded first.
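A quick way to confirm which copy of a class the server is actually handing you is to ask the class itself where it was loaded from. This is a standalone diagnostic sketch; on the server you’d pass org.apache.derby.jdbc.EmbeddedDriver as the class name, while the default here is a JDK class so the sketch runs anywhere:

```java
// Diagnostic: report where the JVM actually loaded a class from,
// which exposes parent-first classloading surprises like the Derby one.
public class WhichJar {
    static String describe(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Core JDK classes report no code source; application classes
            // report the jar or directory they came from.
            return className + " loaded from: "
                    + (src == null ? "the bootstrap/system class loader" : src.getLocation());
        } catch (ClassNotFoundException e) {
            return className + " not found on the classpath";
        }
    }

    public static void main(String[] args) {
        // e.g. java WhichJar org.apache.derby.jdbc.EmbeddedDriver
        System.out.println(describe(args.length > 0 ? args[0] : "java.sql.DriverManager"));
    }
}
```

If the location printed is the server’s own derby directory rather than your WEB-INF/lib, you’ve found your “no suitable driver” culprit.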

To fix this running on a full install of WebSphere Application Server, you need to go to the class loader settings of your web application and set the class loader order to “Classes loaded with application class loader first.” This will allow the newer version of derby.jar that you place in the web application WEB-INF/lib directory to load instead of the older version loaded at server startup.

In the WebSphere 7 runtime hooked into Rational, you can’t change this setting in the admin console (at least I couldn’t figure out how to change it), but you can manually swap the derby.jar that WebSphere uses. You can find its exact location by starting the admin console and navigating to the class loader viewer (go to Applications\Application Types\WebSphere enterprise applications, find your EAR, find your WAR, and click “View Module Class Loader”). I found mine under ../runtimes/base_v7/derby/lib.

Simply stop your server, rename the derby.jar file to derby.jar.old, move in your new derby.jar file with in-memory database support, and start the server up. There’s more than a little danger in swapping out key libraries, of course, so be prepared to swap the old derby.jar back in if you run into problems persisting server settings.

If you’re trying to use new capabilities in older Java environments, you may find this tip helpful when you run into problems with other core features; I would expect similar issues with, for example, XML and logging packages that are used by WebSphere.

Minding your 0s and Os

Some font choices, good and bad

I was recently burned by a very simple failure of a programming tool I didn’t even know I was using.

We were debugging a SQL statement that was embedded in a Java class (we’ll set aside for now the wisdom of embedding SQL statements into your code; yes, it’s best to parameterize at the very least). The SQL statement looked something like this:

SELECT * FROM CUSTOMERS WHERE ACCOUNT_STATUS='0'
When I ran the code in the Java class, it consistently returned zero rows. But when I ran the query in a SQL console, it returned significantly more records. We spent a lot of time looking at permissions, table structures, and Java DAO models, but nothing explained the problem.

Until I pasted that code into the SQL console, that is; and then the console returned exactly the same results as the Java class. I pasted the code into a text editor, and pasted the working SQL statement right under it, and fiddled with the font size until it was obvious what the problem was:

SELECT * FROM CUSTOMERS WHERE ACCOUNT_STATUS='0'

SELECT * FROM CUSTOMERS WHERE ACCOUNT_STATUS='O'
Do you see it?

“0” and “O” were not obviously different in my IDE’s default font. Side by side, I could see the difference, but otherwise they were indistinguishable. At some point, this query–which had been sent back and forth through various e-mails–was mistyped into the code, and the expected “O” was replaced with “0”. The default font–the old standby Courier New, as it turns out–was a poor choice for a programmer’s font.

“0” and “O” aren’t the only characters that can make a programmer’s life difficult. “I”, “l”, and “1” have a habit of blending together, and the results can be annoyingly difficult to untangle. And “( )” and “{ }” characters can be devilishly difficult to distinguish in a JSP with a tangle of JSTL, jQuery, and scriptlets.
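One way to catch this sort of thing before it bites is to scan suspect string literals for the usual troublemakers and print each one with its code point, which never lies. This is a throwaway sketch, not part of any real codebase:

```java
// Scan a string for characters that blur together in bad programming fonts,
// reporting each with its Unicode code point.
public class ConfusableScan {
    static final String SUSPECTS = "0O1lI";

    static String report(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char ch = s.charAt(i);
            if (SUSPECTS.indexOf(ch) >= 0) {
                // U+0030 is zero, U+004F is the letter O: impossible to confuse.
                out.append(String.format("index %d: '%c' (U+%04X)%n", i, ch, (int) ch));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(report("SELECT * FROM CUSTOMERS WHERE ACCOUNT_STATUS='0'"));
    }
}
```

Running it over the two versions of our query would have exposed the U+0030 / U+004F swap in seconds, no font change required.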

When I think of my development tools, I seldom think of my font selections. And while an IDE, text editor, browser plugins, and other software are obvious parts of the toolkit, font choices ought to be, too. In my experience, a good programming font should:

  • Distinguish between easily confused characters, like “0” and “O”
  • Make formatting code easy by exaggerating the differences between parentheses and curly braces
  • Belong to the “monospace” family, for easy layout on the screen
  • Be easy on the eyes, scaling well to different monitor resolutions

In my hunt for a good replacement font, I landed on Ubuntu Mono. It puts a dot inside the zero, adds curly flourishes to the lower-case “l” and “i”, has nice big serifs on the capital “I”, and gives the “1” a stroke and a bottom serif. Curly braces, parentheses, and square brackets are easily distinguished. And as an added bonus, its proportional-space relative, Ubuntu, is a nice-looking font for dialogues and titles inside the IDE. The result is something like this (assuming my @font-face code is working …):

for (int l=0; l < O; l++)
{
     System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
     if(l==1)
     {
        System.out.println("l==1 but does not equal i or I");
     }
}

It’s pretty easy to tell which character is which.

I also like Anonymous, which features a slash in the “0”:

for (int l=0; l < O; l++)
{
     System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
     if(l==1)
     {
        System.out.println("l==1 but does not equal i or I");
     }
}

And Anonymous Pro, which has an exaggerated stroke on the “1”:

for (int l=0; l < O; l++)
{
     System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
     if(l==1)
     {
        System.out.println("l==1 but does not equal i or I");
     }
}

All are clearly preferable to Courier New:

for (int l=0; l < O; l++)
{
     System.out.println("SELECT * FROM CUSTOMERS WHERE STATUS='O' AND ID='" + l + "'");
     if(l==1)
     {
        System.out.println("l==1 but does not equal i or I");
     }
}

There are a good many other programming fonts that fit my criteria; a search for “programming fonts” turns up handy lists, like “Top 10 Programming Fonts” at Hivelogic and a rundown of programming fonts by Jeff Atwood. Whichever font you choose, though, do choose, and choose wisely: you could end up saving yourself quite a bit of trouble and eye strain down the line.

Experimenting with the new Facebook Questions

As part of the online community-building project for my book, I’m administering a Facebook page. So far I’ve just been cross-posting links from the book’s blog: it’s still very early (the book doesn’t hit the shelves until May 15), so I don’t want to risk boring people too soon!

I was very interested, though, in the new Facebook Questions feature, which seems like a hybrid of something like Quora and all those ubiquitous Facebook poll/quiz applications. I wanted to add a quiz and poll feature to the Facebook page, but I wasn’t thrilled with the apps that seem to exist mostly to plunder user data; I experimented with a quiz on Goodreads, and probably will use it again, but Goodreads doesn’t have the penetration that Facebook has.

So I logged in to the Facebook page, enabled the Questions option, and created a question. It was pretty simple–compose the question, select the options, and voila! a simple poll goes out without all of the application-permissions hassles of a quiz. There was also an option to forward the question to friends, making it easy to get it out to people who might not yet be fans of the page. When a fan does answer a question, it shows up in their updates, which can be very handy for extending the reach of your page; I managed to pick up one new fan through this quick foray into Questions: not world-shaking, but not bad for a couple minutes of work.

When viewing a question set up as a poll, with pre-defined options, only the first three options are displayed, with a “more” button to show the rest. On this first outing I gave five options; not surprisingly, though, the first three options have been the most popular, no doubt in part because they’re “above the fold.”

It also doesn’t appear to be possible to force people to pick one and only one answer; people can choose as many of the options as they wish. You can, however, control whether people can add their own options, or whether the options you define when setting up the question are the only ones that will ever be available.

The poll display is nice, with a bar graph of the responses and profile icons shown next to the bar for the people who answered. Nothing especially flashy, and nicely integrated into the overall Facebook look and feel.

Is Facebook Questions a replacement for a Facebook-based quiz/poll app, or an external solution? Probably not in every case. But for quickly taking the pulse of a page’s fans, generating some discussion topics, and mixing up the content that goes out on a page, it’s a really nice feature. I expect to make quite a bit of use of this as this project progresses.

chipping away at stylesheet lava with Dust-Me Selectors

I’ve been working on a web project that has gone through multiple iterations, and many hands, before landing on my desk. Over that time (and while it’s been in my care, as well as before), the stylesheets have acquired quite a few geological layers. The primary CSS file had grown to over 2,000 lines, with more than 600 selectors spread over almost a dozen stylesheets. Since the site has gone through multiple revisions and adjustments–at least five major versions in the month or so I’ve been working on it–a lot of these selectors aren’t being used anymore.

This is a classic case of the Lava Flow anti-pattern:

When code just kind of spews forth and becomes permanent, it becomes an architectural feature of the archeological variety. Things are built atop the structure without question and without hope of changing what is beneath them. The existing code is seen as an historical curiosity.

The best thing to do with lava is to blast it away, leaving just the useful bedrock. But removing unused CSS selectors is a tedious and error-prone undertaking; my Java IDE can tell me which methods and variables are no longer in use, and profile an application for redundancies and deprecated components, but I didn’t have a comparable development tool for styles. Luckily, my core development philosophy–somebody smarter than me has already solved this problem, and has probably put the information out on the Internet–panned out, and I discovered the Dust-Me Selectors Firefox add-on.

Dust-Me Selectors can look at an individual page, or at an entire site, and tell you which selectors are actually in use. It generates a nice report, broken down by stylesheet file and including line numbers, that will direct your blasting caps and jackhammers at the problem areas. After running it through my pages, I was able to reduce that main CSS file to just over 1,000 lines (it’s a complex site), with less than 200 selectors defined.

The tool can be run against individual pages or against a sitemap; for a large site, a sitemap is by far the best approach. If you don’t have a sitemap.xml file yet, it’s relatively easy to generate one, either with your development tools or CMS, by hand for a short list of pages, or with a simple script. I found that it was less reliable in single-page mode: your coverage won’t be as good, and things like Ajax windows are going to get in the way of accurate results. A comprehensive sitemap ensures coverage.
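For a short, hand-maintained list of pages, even a few lines of throwaway code will do for the sitemap. A sketch (the base URL and paths are placeholders):

```java
import java.util.List;

// Build a minimal sitemap.xml for a hand-maintained list of pages,
// following the sitemaps.org 0.9 schema.
public class SitemapSketch {
    static String sitemap(String base, List<String> paths) {
        StringBuilder xml = new StringBuilder();
        xml.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        xml.append("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n");
        for (String p : paths) {
            xml.append("  <url><loc>").append(base).append(p).append("</loc></url>\n");
        }
        xml.append("</urlset>\n");
        return xml.toString();
    }

    public static void main(String[] args) {
        // Placeholder site; substitute your own page list (or walk your CMS).
        System.out.print(sitemap("http://www.example.com",
                List.of("/", "/about.html", "/products.html")));
    }
}
```

Point Dust-Me Selectors at the generated file and it will crawl every page you listed, which is what makes the coverage reliable.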

It’s also a good idea to use caution when pruning the styles; I went through the list of flagged selectors and disabled them with comments, visually testing the site as I went along. There were a handful of cases where Dust-Me flagged styles that were actually in use, primarily within jQuery code that was manipulating element classes on the fly. After I verified that my leaner, meaner CSS files were working, I moved all of the commented code out into a new file just in case some of those selectors need to be resurrected in the future.

There are a few reasons to tidy up your CSS files. Download performance, though not as much of an issue in the broadband world, is a factor: my old core file was about 41KB, and my new one is 17KB. Browser performance, too, is a factor: parsing unneeded CSS code may be a very tiny drag on performance for faster machines and browsers, but every millisecond you save on unnecessary parsing is time you can put the browser to work doing something more interesting and useful.

The best argument for cleaning up the lava flow, though, is maintainability. Scrolling through those 2,000 lines of code in search of an errant style, or trying to debug a rendering issue when looking across a dozen files for a class or ID, is a drag on productivity. In the heat of battle, it’s far too easy just to add another selector to the bottom of the file, which only contributes to the chaos. Pruning the excess weight early in the development process makes it much easier to maintain clean code as the project moves along.

Netbook OS Roundup: Fedora LXDE

Fedora is one of the scions of the Linux world; the first Red Hat release came out in 1994, so there’s a lot of expertise behind the current Fedora builds. There are multiple “spins” of Fedora available, one of which–LXDE–is specifically targeted to low-power, lightweight systems like the Dell Mini netbook.

When I went looking for a Fedora distribution to try out, though, I started with the base Fedora 12 release. This was very much the wrong distribution for a netbook: it comes packed with software and services (everything from OpenOffice to font sets for Tajik script); within a day it had consumed all available space on my drive, and no amount of pruning could get it down to size. While a fine approach for a powerful desktop machine, the everything-plus-the-kitchen-sink strategy of OS distribution was a recipe for frustration on a low-profile machine.

LXDE takes the opposite approach:

LXDE is not designed to be powerful and bloated, but to be usable and slim. A main goal of LXDE is to keep computer resource usage low. It is especially designed for computers with low hardware specifications like netbooks, mobile devices (e.g. MIDs) or older computers.

And with a few caveats, it is a successful netbook platform.

Ease of installation

LXDE can be installed from a USB stick, which you can download and build with Fedora’s LiveUSB tool. The wizard-driven installation process is easy to use, and installation is fast (especially compared to the full Fedora installation time).

Not everything works “out of the box,” though. The Dell Mini uses a Broadcom wireless card, and no free driver is distributed for it. Immediately after installing LXDE, I had to attach my netbook to a wired connection and install the wireless driver (good instructions here). This took a little more time in the console than the casual user would likely want to take; I can understand the reluctance to include proprietary software in an open source distribution, but this is an area where the hardware manufacturers, system builders, and software developers need to find some common ground before an OS like LXDE can gain wider acceptance.

Application support

Because LXDE is so stripped down, there’s room even on the Dell Mini to install some more applications. It comes with Midori, a lightweight and minimalist web browser, and AbiWord, a basic word processor, but not much else. My requirements on the netbook are simple, and AbiWord proved sufficient; Firefox 3.5 installs easily, as does Dropbox, so I was able to restore my bookmarks and plugins.

Stability

Under normal use, LXDE is a stable platform; I haven’t had any unexpected crashes. Power management, though, is a drawback, and that’s a significant issue for netbooks, which are typically saddled with a less-than-optimal battery. The idle hibernation options (accessible through the screen saver settings) have never worked for me: I have to manually hibernate a session, or risk returning to a dead netbook if it’s unplugged. And without a native battery monitor, it’s hard to tell when the power will go out. Mint and Ubuntu were much more reliable in this regard.

Performance

LXDE is quick to start and connect; it’s not an instant-on OS, but it’s certainly faster than Windows. Under normal use, it is responsive and smooth within the constraints of the netbook’s limited hardware.

Appearance

Like Mint, LXDE uses the desktop metaphor: an easy transition for Windows and Mac users, but a bit of a real estate waste for the netbook: I still prefer the Moblin UI for small computers, with its compact and space-sensitive layout. LXDE does offer multiple desktops, though, and a useful task bar and application menu, so users who are happy with the Windows interface will find much to like in LXDE.

Overall Assessment

LXDE is a solid OS for netbooks; it’s stable and easy to use, though its lack of native support for the Mini’s wireless card and its poor power management and monitoring make it less than ideal. Though not a groundbreaking approach to mobile computing, it is an easy bridge from Windows to Linux: if you’re looking for a good operating system and want to avoid the Microsoft tax, LXDE is a good fit for small computers.

What I’ve been working on: The Architecture

I’ve found that putting together a very rough architectural sketch is the best way to start when you’re building an application from scratch. While it’s a lot more fun to write code and see it executed, you save yourself a lot of grief, and create a more robust and flexible system, if you think things through up front and draw some boxes and arrows.

For this project, I knew that I would have at least three tiers: the user interface, which would initially run on WebSphere Application Server but should be portable to WebSphere Portal Server; the business logic layer, which would do the heavy lifting of taking the user’s input and sending it to the right back end screens; and the back end screen interface itself, built on an existing Java bean architecture.

I also knew, from painful previous experience, that keeping those tiers separated was critical. Java makes n-tiered development easy, with the ability to control scope and encapsulate functions, but it also makes breaking a clean separation of tiers very easy, too. Sloppiness early on, in making too many methods public rather than protected or giving one class too much access to things outside its logical scope, can doom your application. So that initial architectural sketch should include an attempt to block out the domains of interest and identify what limited interfaces should exist between different parts of the application.
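As a sketch of that discipline (the class and method names here are invented for illustration, not taken from the actual application): expose one narrow public method per tier boundary and keep the helpers package-private, so the compiler itself enforces the wall between tiers.

```java
// Hypothetical service-layer class: only lookupAccount() is part of the
// tier's public interface; normalize() is package-private, so classes in
// other packages (i.e., other tiers) cannot call it directly.
public class AccountService {
    public String lookupAccount(String id) {
        return normalize(id);
    }

    // Package-private helper — invisible outside this package.
    String normalize(String id) {
        return id == null ? "" : id.trim();
    }
}
```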

I ended up with something roughly like this:

4GL Service Layer
    Back Office Beans
    Business Logic
    Configuration Logic
    Services (JAX-RPC)

Client Layer
    Service Client
    Controller
    Business Logic (limited to validation rules)
    Configuration Logic
    User Interface (JSP, CSS, JavaScript)

User’s browser

The service and client layers are implemented as separate web applications: though they are packaged in the same enterprise application for ease of deployment, they are independent of each other and could be deployed as individual WAR files. Each domain in the sketch above represents at least one Java package, with most methods set to protected rather than public: this limits the points where changes in one layer can affect the code in another, and makes swapping out different services (caching, configuration, etc.) relatively easy. Building clear walls between application layers in the beginning will help you in building out and maintaining the application in the future.
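Packaging two independent WARs in one EAR comes down to a few lines in the deployment descriptor; a sketch of an application.xml along those lines (the module and context-root names are invented, not from the actual application):

```xml
<!-- META-INF/application.xml: one EAR, two independent web modules -->
<application>
  <display-name>MyEnterpriseApp</display-name>
  <module>
    <web>
      <web-uri>service.war</web-uri>
      <context-root>/service</context-root>
    </web>
  </module>
  <module>
    <web>
      <web-uri>client.war</web-uri>
      <context-root>/client</context-root>
    </web>
  </module>
</application>
```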

In the original version of the application, developed on Eclipse and running on Tomcat (I didn’t yet have access to the licensed versions of IBM’s tools), the interaction between the service and client was handled through Axis, a Jakarta web service infrastructure. In the final version, running on WebSphere, the interface is handled through JAX-RPC, the default mode for WebSphere. My first attempt to deploy the application to WebSphere was, in fact, botched by the use of Axis: WebSphere wants to do JAX-RPC when it does web services between WebSphere servers, and was very unhappy about Axis. Luckily, both Eclipse and Rational (IBM’s version of Eclipse) have handy wizards for generating the web service bindings, so switching the web service layer was relatively painless and, since I started with the practice of separating the application into distinct domains, required no code changes.

In developing a SOA application, a lot of attention should be paid up front to designing your service interfaces. Keeping your changes to the service interfaces to a minimum will make your development a lot smoother–no need to re-generate the service and client bindings if you keep method signatures in place from version to version–so it pays to do as much analysis up front as possible.

For this application, I knew that I would be passing a user-generated data bean from the client layer to the service, sending that bean through various permutations in the service layer to do all of the back office work, and then sending that bean back up to the client to show the user the fruits of their labor (and any error messages that might have been picked up along the way). So the first order of business was designing the data bean, which would act as the primary transfer object, and the methods that would accept it.

The data object had to be very flexible, since the back end system is so denormalized: multiple fields and value types would have to be carried by the bean, not all of them well known at design time. The solution was to make that bean into the carrier of a collection of field objects, which have name, value, and type (since passing raw Objects over web services is forbidden) attributes. This allows the bean to pick up whatever fields it needs to feed the back office system, without having to change the code any time a new field is discovered. With simple getField(String fieldName) and setField(FieldObject field) methods, any value can be put on the bean.
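A minimal sketch of that transfer-object pattern (the accessor names follow the text, but the other details are assumptions): each field travels as a name/value/type triple of Strings, and the bean is just a serializable collection of them keyed by name.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical field carrier: name, value, and type all travel as Strings
// so the object maps cleanly to XML over the web service.
class FieldObject implements Serializable {
    private final String name;
    private final String value;
    private final String type;

    FieldObject(String name, String value, String type) {
        this.name = name;
        this.value = value;
        this.type = type;
    }

    String getName()  { return name; }
    String getValue() { return value; }
    String getType()  { return type; }
}

// The transfer bean: a collection of fields looked up by name, so a newly
// discovered back-office field requires no code changes — just a new entry.
public class DataBean implements Serializable {
    private final Map<String, FieldObject> fields =
        new HashMap<String, FieldObject>();

    public FieldObject getField(String fieldName) {
        return fields.get(fieldName);
    }

    public void setField(FieldObject field) {
        fields.put(field.getName(), field);
    }
}
```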

Another consideration in developing your service interfaces is data typing. As noted, you can’t send a plain Object over the wire and expect to be able to cast it to the specific type you want. Any object that is transferred over the service needs to be mapped through XML, and so should consist of serializable or primitive members. I’ve found that sticking to Strings, ints, longs, decimals, and such is by far the safest way to work with web services; more complex objects can be generated from the primitive transfer objects after they’ve reached their destination.

Once the service interfaces were in place, and the demarcations within each application were defined, it was time to start wiring things together. And that’s the fun part!