My local government has gone open source

When most folks in Arlington were out enjoying the fine weekend weather last Saturday morning, the County Board met and voted to let government developers publish county software as open source. Whether the software the county shares with the open source community proves useful to other developers is a secondary issue. The exciting news is that the board's unanimous vote is good both for the open source community and for the Arlington County government itself.

Before I get into the details of why open-sourcing is a good idea for the county, let me explain what the county actually did and intends to do. The county is in the midst of redesigning its main website and converting it from Active Server Pages, a commercial technology from Microsoft, to WordPress, a free and open source website management tool. During the conversion, county technology staff have made, or will make, modifications to parts of WordPress or to some of the publicly available WordPress plugins that add features to the basic content management capabilities of vanilla WordPress. Since WordPress is licensed under the GNU General Public License, version 2, the county cannot distribute its changes without making those changes open source under the GPLv2. What the county board did on Saturday was approve releasing county-developed source code under the GPLv2. Since county developers are also using WordPress plugins licensed under version 3 of the GPL, the board approved releasing source code under the GPLv3 as well.

So why is releasing county-developed website code as open source a good idea?

First, it will cost the county close to nothing. There are free code-sharing repositories, like GitHub, that the county can use to host its open source code. The time developers will spend uploading code to a public repository is insignificant.

Second, changes to WordPress or its plugins that county developers make might actually catch on in the open source community. If the county has a need for an enhancement in the code, it seems likely someone else will, too. If other developers pick up some of the county’s changes and run with them, those developers might make their own open source improvements (and bug fixes) that can feed back into the county’s website. 

Third, when county developers find and fix bugs in WordPress or its plugins, they previously could not contribute those fixes back to the original developers. When county developers create software, the county owns it, and developers were not at liberty to give their code away, even to fix bugs. With the county board's action, they can now release their bug fixes and improvements back to the open source community. Not only should these contributions be good for the open source community, but the county also escapes the software upgrade dilemma: if you upgrade to the next version after modifying the current one, your changes (and bug fixes) are lost, which pressures you to stagnate with old but working software. If the county can contribute its changes back to the original source, then new upgrades will already include the county's changes, making the upgrade path much easier.

Fourth, the county loses nothing. The changes the county makes to GPL software cannot be shared with others (like other local governments) without those changes also being licensed under the GPL. (This is the viral nature of the GPL that some businesses bemoan and the open source community cheers.) Since the county is using GPL software, the changes it makes can't somehow be leveraged into a revenue stream to benefit taxpayers. As the county staff's report to the board put it, "Since County code is partially derived from open-source code, it cannot be released under any other license other than an open-source license. The County’s choices are to release the code under an open-source license, or to not release the code at all." So why not go ahead and release it?

Fifth, the county should actually save money. Assuming Arlington's new website has nice features, other local governments around the country might take notice and email the Arlington County technology division asking for copies of its WordPress customizations. Each of those requests would have to be handled by a staffer, with the source code zipped up and emailed to the requesting agency. Now, with the open source policy, Arlington can simply refer interested agencies to its public source code repository. Easy.

Sixth, and this is the political upside, the taxpayers have already paid for the source code. By having the county release the code to the public, the county is saying, “Here is software we created for the purpose of serving you. If you can further reuse it for your own personal benefit, go for it.” Since WordPress is one of the most popular web content management systems out there, it isn’t unreasonable to assume that a few Arlington County taxpayers (including businesses) might actually derive benefit from the county’s source code.

That last point complements a trend among governments to release information gathered at taxpayer expense. In February, the White House announced that federally funded scientific research would be made available free to the public, though with a one-year delay so subscriber-funded academic journals can still earn money from the research. The trend is a good sign that more people, including more people in government, believe that what the government creates in the name of the public interest should be owned by the people.

So, this month my local government has gone open source — or at least is starting in that direction. Has yours?

Space Shuttle Discovery’s retirement flyover D.C.

Here are some of the photos I captured of Space Shuttle Discovery flying over D.C. today on board its 747 carrier. The shuttle circled D.C. a few times before heading for its final resting place at the National Air and Space Museum's Steven F. Udvar-Hazy Center next to Dulles International Airport.

Two reasons to prefer Hibernate JPA over EclipseLink on GlassFish

EclipseLink is the default JPA provider on the GlassFish Java EE server, so I figured it would work well as our persistence provider. We began a recent Java EE 6 project on GlassFish v3 with Hibernate as the JPA provider because of the team's familiarity with Hibernate 3. We later switched to EclipseLink to be compatible with another application running on the server -- and immediately encountered two annoying problems that never occurred with Hibernate. This article documents those problems in case it helps others with similar EclipseLink issues on GlassFish.

The first problem we saw was a sporadic ClassCastException after a module redeploy. The second problem we found was that entity methods marked with @PrePersist were not being called when the entity was being saved to the database. Instead, those fields would remain null when EclipseLink executed the INSERT SQL statement, resulting in constraint violations for our non-nullable columns.

The ClassCastException occurs at a line in our code that assigns an entity to a reference variable declared as the type of its parent class, which is a JPA mapped superclass. A widening cast like that should cause no problem, so this was a head-scratcher. Adding to the mystery, we had no problem with this code when using Hibernate JPA. The entity class in question, ActionTypeLookup, as shown in the stack trace below, directly extends the parent class type it is being assigned to.

Here are the partial class definitions. The parent class type, CodeLookupValues, is a @MappedSuperclass:
@MappedSuperclass
public abstract class CodeLookupValues implements Serializable {
The entity ActionTypeLookup directly extends it:
@Entity
public class ActionTypeLookup extends CodeLookupValues implements Serializable {
The persistence code is defined in a module (an EAR file) containing the JPA persistence unit definition. The ClassCastException occurs only after that same module has been unloaded and reloaded on the server several times -- something you do a lot during development. We had loaded and unloaded this EAR file many times on GlassFish with no problem when using Hibernate. Shortly after the switch to EclipseLink, we would see this stack trace in the log at deploy time:
Exception while loading the app
javax.ejb.EJBException: javax.ejb.CreateException: Initialization failed for Singleton LookupSessionFacade
at com.sun.ejb.containers.AbstractSingletonContainer$SingletonContextFactory.create(
at com.sun.ejb.containers.AbstractSingletonContainer.instantiateSingletonInstance(
at org.glassfish.ejb.startup.SingletonLifeCycleManager.initializeSingleton(
at org.glassfish.ejb.startup.SingletonLifeCycleManager.initializeSingleton(
at org.glassfish.ejb.startup.SingletonLifeCycleManager.doStartup(
at org.glassfish.ejb.startup.EjbApplication.start(
... [removed several classes involved in installing the EAR] ...
at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(
at com.sun.grizzly.util.AbstractThreadPool$
Caused by: javax.ejb.CreateException: Initialization failed for Singleton LookupSessionFacade
at com.sun.ejb.containers.AbstractSingletonContainer.createSingletonEJB(
at com.sun.ejb.containers.AbstractSingletonContainer.access$100(
at com.sun.ejb.containers.AbstractSingletonContainer$SingletonContextFactory.create(
... 36 more
Caused by: java.lang.ClassCastException: my.customer.package.shared.datamodel.reference.ActionTypeLookup cannot be cast to my.customer.package.shared.datamodel.reference.CodeLookupValues
at my.customer.package.shared.datamodel.persistence.LookupSessionFacadeBean.reloadCommonLookupValues(
at my.customer.package.shared.datamodel.persistence.LookupSessionFacadeBean.readLookupTables(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at com.sun.ejb.containers.interceptors.BeanCallbackInterceptor.intercept(
... [removed several more classes involved in installing the EAR] ...
at com.sun.ejb.containers.interceptors.CallbackChainImpl.invokeNext(
at com.sun.ejb.containers.interceptors.InterceptorManager.intercept(
at com.sun.ejb.containers.interceptors.InterceptorManager.intercept(
at com.sun.ejb.containers.AbstractSingletonContainer.createSingletonEJB(
... 38 more
We see the error at deploy time because the application has a @Singleton EJB annotated with @Startup, so the server instantiates it as soon as the module deploys. I don't know if a non-startup EJB or a non-singleton EJB would make a difference.

It turns out this was the easiest problem to work around -- the ClassCastException goes away if you restart the domain. But I never discovered what causes the exception. At first, I thought the problem must be caused by the two classes being loaded by separate classloaders due to the way we had structured the modules in our application. However, both classes are:
  • Deployed in the same Java package
  • Inside the same JAR
  • Packaged and deployed inside the same EAR.
The ClassCastException does not occur all the time. It just suddenly pops up after a series of deploy/undeploy cycles and does not go away until we restart the GlassFish domain.

My best theory so far as to what causes the ClassCastException is that it is a bug in GlassFish's OSGi bundle class loading. It appears as if some GlassFish classloader keeps the parent @MappedSuperclass class loaded even after module undeploy, but not the child class. When the updated EAR file is deployed again, the new module's newly assigned Archive classloader sees that the parent class is already loaded but loads the child classes from the new EAR file's JAR. When the code then tries a widening assignment of a child entity to a reference of the parent class type -- kaboom! -- a ClassCastException, because the two classes are now unrelated, having come from different classloaders.
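As a sanity check on that theory, here is a small, self-contained sketch (the class names are mine, not from our application) showing how one class defined by two different classloaders yields unrelated runtime types:

```java
import java.io.InputStream;

// Demonstrates how the same class bytes defined by two classloaders
// produce incompatible types -- the kind of mismatch that can surface
// as a ClassCastException after a partial redeploy.
public class LoaderDemo {
    public static class Payload {}

    // A loader that defines Payload itself instead of delegating to
    // its parent, simulating a stale module classloader.
    public static class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            if (name.equals(Payload.class.getName())) {
                try (InputStream in = LoaderDemo.class.getResourceAsStream(
                        "/" + name.replace('.', '/') + ".class")) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (Exception e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> stale = new IsolatingLoader().loadClass(Payload.class.getName());
        Object obj = stale.getDeclaredConstructor().newInstance();
        System.out.println(stale == Payload.class); // false: distinct Class objects
        System.out.println(obj instanceof Payload); // false: a cast here would throw
    }
}
```

The real GlassFish/OSGi mechanics are far more involved, but the failure mode is the same: identical class names, unrelated Class objects.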

The GlassFish OSGi implementation gets my suspicion only because EclipseLink is deployed in GlassFish as an OSGi bundle, and the ClassCastException never occurred when using Hibernate, which is deployed as a set of JAR files on the server's shared library classpath.

Better theories are welcome in the comments. I poked around but found no one else reporting the same bug, nor any indication that the problem has been fixed in a GlassFish update (we are not running the latest updates of v3). This ClassCastException is only an annoyance because the workaround is simple: restart the GlassFish domain/server. A restart always clears the problem, until another series of deploy/undeploy cycles brings it back.

@PrePersist problem

The second and more vexing problem is with the way EclipseLink synchronizes its entity session cache with the database. If I understand the problem correctly from reading various forum postings, if you instantiate a new entity object and then call an EntityManager.find() method before that new, unsaved entity has been persisted with EntityManager.persist(), EclipseLink synchronizes the pending entity state with the database before performing the query. That synchronization makes sense: the database needs to know about the new rows in the tables if the SELECT query is going to find the correct data.

The problem, at least for me, is that when EclipseLink performs the INSERT statement to persist the unsaved entities, it does not call the @PrePersist methods on the entity. I was counting on @PrePersist methods to set values like created-date and last-updated-date, which cannot be null in our schema. OK, I can understand that EclipseLink opted not to call @PrePersist methods when it is merely inserting the data for its own convenience and not because EntityManager.persist() was called. But the unexpected behavior required a logic change in our entity code, because Hibernate always seemed to call our @PrePersist methods before performing an INSERT. After the switch to EclipseLink, we started seeing database constraint violations on these @PrePersist-set columns.
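One defensive approach (a sketch with the JPA annotations omitted so the snippet stands alone; the class and field names here are illustrative, not our real entity) is to give the audit fields eager defaults instead of relying solely on the lifecycle callback:

```java
import java.util.Date;

// Sketch: give non-nullable audit columns eager defaults so a
// provider flush that skips lifecycle callbacks cannot INSERT nulls.
public class AuditedEntity {
    // Set at construction; survives even if @PrePersist is never called.
    private Date createdDate = new Date();
    private Date lastUpdatedDate = new Date();

    // Would carry @PrePersist/@PreUpdate in the real entity; still
    // refreshes the values whenever the provider does invoke callbacks.
    public void touchTimestamps() {
        if (createdDate == null) {
            createdDate = new Date();
        }
        lastUpdatedDate = new Date();
    }

    public Date getCreatedDate() { return createdDate; }
    public Date getLastUpdatedDate() { return lastUpdatedDate; }
}
```

With defaults like these, the columns are populated no matter which provider flushes the entity, and the callback merely refreshes them.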

This second problem with EclipseLink, then, cannot be called a bug, but it was unexpected behavior worth documenting here. I assumed incorrectly that @PrePersist methods, by definition, always would be called before an entity got persisted. Wrong.

Those are the only two unusual problems we saw after switching from Hibernate to EclipseLink. The only other issues were a few HQL-specific queries that failed, but those were all converted to standard JPQL with a little alteration.

After complaining about EclipseLink issues, I will profess one preference for EclipseLink: its debugging and error log messages seemed clearer and more straightforward than many of Hibernate's. When one of our HQL queries failed, or I needed to trace which queries were being called and when, I could clearly see queries being translated from JPQL to Oracle SQL and see what values were being bound to query substitution parameters. I do hand it to EclipseLink for making its database interactions more transparent than I am used to with Hibernate. So when we hit the @PrePersist problem, at least I was able to debug and diagnose it fairly easily.

Astrogeeks and photographers: Look eastward this weekend

Moonrise over D.C., Feb. 28, 2010

The full moon will rise this weekend at nearly 90 degrees azimuth for those in Washington, D.C. That means the moon will be almost directly to the east. Since the National Mall and many of its famous monuments and buildings align along an east-west axis, the astronomical phenomenon promises stunning moonrises from places like the Washington Monument, the Lincoln Memorial, and the Netherlands Carillon as the moon slowly rises behind or next to the U.S. Capitol. If the expected clouds abate on Friday and the weather holds out, that is.

As you can see from these photos of near-90 azimuth moonrises last year, the moon looks great near the horizon when looking east across the Mall. Here are the stats for the weekend:

On Friday, the nearly-full moon will rise in D.C. at 6:23 p.m. EDT at 89 degrees azimuth (source). On Saturday, the full moon rises at 7:39 p.m. at 97 degrees azimuth.

And if the full moon due east isn't enough to pique your geeky astronomical interest, this weekend's moon will be big and bright. On Saturday, the full moon is at perigee -- its closest approach to Earth. This perigee is the closest the moon will get to Earth in all of 2011: 221,575 miles (356,575 kilometers). That's 31,064 miles closer than the moon was on March 6, according to EarthSky. EarthSky says Saturday's full moon will be the moon's closest encounter with Earth since Dec. 12, 2008, and the closest until Nov. 14, 2016.

For photographers interested in capturing the event, outside the Lincoln Memorial and the Netherlands Carillon should be good places to set up your tripod. (Tip: You might want to get to the Carillon early to stake out an unobstructed spot for your tripod. It's a popular place for full moon photos. Note that the path in front of the bronze lions is a popular one with joggers, bikers and pedestrians, so set up your sightline with that in mind.) As far as weather goes, the current forecast calls for partly cloudy skies Friday around moonrise with a slight chance of rain. But sometimes a low cloud layer can make for great photos if the clouds aren't dense. Saturday also promises to be partly cloudy with some rain possible in the morning, but the clouds are supposed to clear by moonrise. Saturday promises to be the better day for weather, though with the 97 degrees azimuth, perhaps less-stunning photos (more like the one below).
Moonrise over D.C., Aug. 25, 2010
I'm looking forward to some beautiful moonrises this weekend. Share the moment with someone you love -- but hey, take the camera. If you grab some good photos, please send me a link in the comments.

Working around ddclient’s “bad hostname” and “network is unreachable” problems

I have had continuing problems with ddclient being able to connect to the network and make an HTTP call to check my current IP address. If you use ddclient and see this problem too, this workaround might work for you.

The ddclient bug exhibits itself with two errors that appear in the system log and are also kindly emailed to me by the ddclient daemon itself:
WARNING:  cannot connect to socket: IO::Socket::INET: Bad hostname ''
or the more generic error:
WARNING:  cannot connect to socket: IO::Socket::INET: connect: Network is unreachable
The issue seems to be that ddclient, a Perl client that talks to dynamic DNS services, has problems either making network connections or caching a bad address at system start, when networking services might not yet be up. The problem seems longstanding, with a bug filed in 2003 on the Debian list and a bug filed in 2009 on the Red Hat list.

The Red Hat bug was closed May 29 with a fix (ddclient- posted to update sites for Fedora 11 and later. But if you have not or cannot update, or still see the bug, here's my workaround: Instead of using ddclient's built-in web client to connect to your dynamic-DNS service, call a shell script that uses curl to make the network call. Specifically, I replaced this line in my /etc/ddclient.conf configuration file:
use=web,, web-skip='IP Address' # found after IP Address
with this line:
use=cmd, cmd=/home/tom/bin/, cmd-skip='IP Address' # found after IP Address
Here is my shell script, stored in my home "bin" directory:
#!/bin/sh
# A script to fill in for what ddclient
# can't seem to do: reliably connect to
# the IP-check service. (The URL below is one well-known
# checker; any service works as long as its output contains
# "IP Address" before the address, to match cmd-skip above.)
curl --silent http://checkip.dyndns.org/
That's it. The only extra steps you need to take are to ensure the user that runs your ddclient daemon (typically user "ddclient") has access to the script. That means in my case making sure the script itself is executable, e.g. chmod 755 ~/bin/, and that my home directory and bin directory are world executable, e.g. chmod --recursive o+x ~/bin/

When I eventually upgrade my system to the patched version of ddclient, I look forward to seeing whether this longstanding networking bug is really gone.

Clojure’s inventor and author make a case for the new language

Rich Hickey, inventor of the Clojure language, and Stuart Halloway, author of "Programming Clojure," presented introductory and advanced concepts of the young JVM language at Wednesday's Northern Virginia Java Users Group meeting. These are some of my notes from the session, which served to whet interest in learning Clojure; the notes therefore do not include much code or explain Clojure's unusual syntax. There are many other sources for that.

Clojure, a Lisp-like language that compiles to Java bytecode and runs on the Java virtual machine, was created as a general-purpose programming language that embraces a functional style of software design rather than the imperative style typical of languages like Java -- and most other general-purpose languages in use today. Functional programming languages like Clojure, Scheme and Erlang have been getting a lot of attention at technology conferences over the last few years, which first brought Clojure to my attention. Its functional style and its ability to run alongside and integrate with existing Java code made me want to learn more. The fact that its inventor and a technology instructor I highly respect were presenting a free session compelled me to attend the JUG meeting.

Rich Hickey released the first version of Clojure in October 2007, with version 1.0 following on May 1, 2009 -- we are talking about a young language. Still, from what I learned last night, it looks like a powerful language with potential. Clojure is released as open source under the Eclipse Public License 1.0, which makes it easy to use in commercial, non-open-source environments.

Stuart Halloway, author of "Programming Clojure"
Stu Halloway, co-founder of the top-notch professional training and agile consulting company Relevance Inc., began with an introduction to Clojure's features and why a Java developer might want to learn it. Rich then took over and introduced three new features of Clojure (Protocols, Reify and Datatypes) that can be downloaded from the latest source tree but are not part of the current 1.1 release of the language.

According to Stu, some of the compelling features of Clojure are its:
  • Easy interoperability with Java
  • Lisp syntax
  • Functional style
  • Ability to run in a multi-threaded environment with no coding overhead
To demonstrate the syntax benefit, Stu "refactored" the StringUtils.isBlank method from the Apache Commons lang library. He started by showing the full Java source code, then removed all the ceremonial scaffolding to expose the core logic, and finally simplified that Java code into the definition of an equivalent Clojure function:
(defn blank? [s]
  (every? #(Character/isWhitespace %) s))
I'm not a Clojure programmer (yet) but I think I captured the above syntax correctly. Like Ruby, Clojure uses the question mark to replace the traditional "is" prefix in boolean functions. The # symbol introduces an anonymous function. From what Stu described, the functional programming paradigm in Clojure handles most (all?) corner cases for you. There is no need to write special-case "if" statements to deal with a null parameter, for instance.
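For comparison, the core Java logic Stu distilled out of the library method looks roughly like this (a sketch from memory, not Commons' exact source):

```java
// Sketch of the distilled core logic of Commons-lang's
// StringUtils.isBlank, including the null corner case that
// Java must handle explicitly and Clojure's every? handles free.
public class BlankCheck {
    static boolean isBlank(CharSequence s) {
        if (s == null) {
            return true; // explicit null guard
        }
        for (int i = 0; i < s.length(); i++) {
            if (!Character.isWhitespace(s.charAt(i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isBlank("   ")); // true
        System.out.println(isBlank("a b")); // false
        System.out.println(isBlank(null));  // true
    }
}
```

Next to the two-line Clojure version, the loop, the index variable and the null guard are exactly the "ceremonial scaffolding" Stu was talking about.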

As for Clojure's interoperability with Java, Clojure code can call Java, and Java code can call Clojure functions. (According to Rich, the integration is implemented with little or no use of Java reflection at runtime, keeping runtime overhead low.)

As for the advantage of Clojure's Lisp syntax, Stu referred everyone to Paul Graham's 2001 article, "What Made Lisp Different," as the best explanation. Most languages have "special forms" -- imports, scopes, protection definitions, metadata, keywords. These are language features you can use, but you cannot create them yourself and add them to the language; they are thus unavailable for reuse. Lisp abandoned this restriction. In a Lisp-like language, special forms look like anything else in the language: all forms of the language are created equal. In Lisp (and Clojure), scope definitions, control flow, method calls, operators, functions, import mechanisms -- they are all lists. Stu said a language's "special forms" restrictions cause a programming language to "crap out," and joked that those restrictions bring about the magical cut-and-paste reuse workarounds we call "design patterns."

As for Clojure's advantage in being a functional language, Clojure encourages you to write small pieces of code that work well together. Good code has the same shape as pseudo code, Stu said, and Clojure's functional style lets you write real code that looks more like pseudo code. According to Stu, functional languages are simpler to understand. They let you write code that eliminates or reduces what he called the "incidental complexity" required by non-functional languages:
  • Corner cases
  • Class definitions
  • Internal exit points
  • Variables
  • Branches
The resulting code is less complex, he said, and simpler to understand by orders of magnitude.

The final benefit he talked about is Clojure's inherent ability to run in a multi-threaded environment with no special concurrency-handling code from the developer. Clojure and other functional programming languages perform this feat by treating data as immutable and producing a new copy of a data structure when data needs to change. Two threads never see a data structure in a partially modified state, so there is no need to synchronize code that reads and writes data. Clojure's solution, Stu said, is to separate identity from value. He went on to explain what this means, but maybe the late hour caused me to miss the details.
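The copy-on-change idea can be sketched in plain Java (with the caveat that Clojure's persistent data structures share structure rather than copying wholesale, so they are far cheaper than this naive version):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the immutable, copy-on-change style: "updating" a
// structure returns a new structure, so concurrent readers of the
// old value never need synchronization.
public final class ImmutableDemo {
    // Named after Clojure's conj: returns a new list with the item
    // appended, leaving the source list untouched.
    public static List<String> conj(List<String> src, String item) {
        List<String> copy = new ArrayList<>(src);
        copy.add(item);
        return Collections.unmodifiableList(copy);
    }

    public static void main(String[] args) {
        List<String> v1 = Collections.unmodifiableList(new ArrayList<>());
        List<String> v2 = conj(v1, "a");
        System.out.println(v1.size()); // 0 -- the original value is unchanged
        System.out.println(v2.size()); // 1
    }
}
```

A thread holding v1 can keep reading it safely no matter how many "updates" other threads perform, because every update yields a fresh value.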

Rich Hickey

After Stuart set the stage for why one should learn and use Clojure, Rich Hickey took over to talk about new features he is adding to the language: Protocols, Reify and Datatypes. He said, quite truthfully, that for those in the audience who don't already know Clojure, what he was about to say would not make a lot of sense. Given my own newness to Clojure, I will pass along what I thought Rich said and hope he and the Clojure crowd forgive my ignorance.
Rich Hickey, inventor of Clojure, speaking March 17, 2010 at the Northern Virginia Java Users Group meeting. [taken from my phone]

Rich, for an open source programming language inventor, was a refreshingly clear advocate for his new language. Maybe I'm jaded from years of slogging through open-source code, but from my experience, most open source projects release their code with little explanation of how or even why to use it, and then treat users like they are the ones who failed if they misunderstand how to use the code correctly. Rich actually understood where most of us in the audience were coming from. "I know it's a big deal to try to learn a new programming language," he said, but he believes Clojure is worth taking the time to learn and will make our jobs as programmers easier.

Before delving into the new features he is adding, Rich provided a summary of how Clojure is implemented. Part of it is written in Java for performance, and the rest is written in Clojure itself. He said his goal is to eventually write most of Clojure in Clojure once he can get performance boosted to an equivalent level.

Clojure is built using abstractions, with those abstractions written as Java interfaces. The fundamental implementation objectives of Clojure (or at least the ones I picked up on), he said, are to leverage high-performance polymorphism mechanisms of the host environment, to write to abstractions not concrete types, and to enable extension and interoperability with Java.

From what I understood of the new language features, Protocols are named sets of generic functions. Reify allows developers to use the "cool code generation" in the built-in fn function. "I put a lot of work into 'fn' and I wanted to make it reusable," he said. Even though it went over my head, Rich said Reify allows developers to create an instance of an unnamed type that implements protocols, like proxy for protocols. For the new Datatypes feature, if I understood correctly, he said he added a new construct, deftype, to define a name for a type and list of fields in that type.

Additional details that might make sense if you know Clojure:
  • Datatypes fields can be primitives
  • Datatypes support metadata and value-based equality by default
  • In-line method definitions are true methods with no indirection or lookup, and calls can be inlined by just-in-time compilers like HotSpot
  • Keyword-style field lookups can be inlined just like (.field x) calls
Rich concluded by offering more reasons to explore and begin using Clojure. "Clojure has dramatically less implicit complexity than other languages," he said. You don't need to write a lot of code simply to support the needs of the language; you spend your time with Clojure focusing on domain complexity, not language complexity, he said. "It has a lot of newness, so the unfamiliarity level is high," he said. "But it is very, very simple."

The Lost Symbol: Nix It From Your Christmas List

Let me start this review of Dan Brown's latest novel by saying I read Angels & Demons and The Da Vinci Code and thoroughly enjoyed the stories and the storytelling. Second, although The Lost Symbol was at times painful to read, I do not join the critics who pan its preachy, moralistic ending; sometimes we need a reminder to return to the basics of our morality. Finally, I plan to reveal minor details of the book here, but I won't disclose any plot twists or surprises.

The Lost Symbol reads as if Dan Brown had been kidnapped and tortured by the Masons, just like one of the characters in the book is kidnapped and tortured by an evildoer, and forced to write this book under duress. Each chapter, while revealing frat-boy antics committed by the Masons during their rituals, also includes what seem to be apologies to the reader for those antics. Brown constantly reminds the reader that the Masons have included the geniuses of history, the rich and the politically powerful -- including, he says, most of the high-ranking members of all three branches of the U.S. government. Whenever a character criticizes a Masonic activity, the hero of the book reminds us how warm and cuddly the Masons really are, to the point that the subtitle of the book could have been "Hug a Mason Today."

The constant apologies for the Masons are not why I thought this book was a Brown dud. I actually learned what I hope are facts about Masonic history, which I found enlightening and interesting. No, the worst part of this book is the amateurish writing and the forced, silly narrative. Brown wanted to ladle so much history and symbology onto the pages that the hero of the story, Robert Langdon, has to constantly stop and lecture one or more of the other characters on the history of Freemasonry and all the wonderful contributions the world has received from Masons. We're 30 seconds from the clutches of the bad guys, from whom we are running so we can save someone's life -- but wait, let's stop a moment so I can explain a particular symbol in historic detail, or show you this nifty, magical number sequence and spell out why it pertains to our rescue mission. Those stop-and-explain moments clue the reader in early that the tension the author is trying so hard to build must not really be all that tense if the main characters have so much time to marvel over history while being hotly pursued.

To add to the amateurish narrative, the characters, all portrayed as very smart and world-wise, are shocked, shocked! at every predictable turn of events. The characters actually exclaim, quite regularly, "Oh my God!" when something occurs that the readers will have predicted five pages earlier, pandering to our egos so we can constantly pat ourselves on the back for how smart we are. Langdon, who is surprised the most, has devolved from a savvy, likeable university professor in The Da Vinci Code to a naive, gullible idiot savant. What? You mean this secret package as heavy as a bowling ball, the one my good friend and mentor (and, gasp, a 33rd degree Mason) told me years ago to keep safe and guard with my life because evil people across the entire globe would kill for it, and for which I got a mysterious phone call this morning telling me to bring this vital package to Washington, D.C., this heavy package I have been carrying over my shoulder, which I completely forgot I was carrying even though my shoulder is aching from the weight, might have something to do with why my friend and mentor has been kidnapped? Oh my God! How could this be? I'm shocked! Shocked! And sadly, I'm not exaggerating.

Another example of the irritating writing packed inside The Lost Symbol is that nearly every chapter begins with a retelling of what has occurred up to this point -- just in case the previous section had lulled you into a deep case of neurasthenia and you lost all memory of the previous dozen pages. Why Dan Brown felt he had to constantly summarize previous events is a mystery. If you ignore my suggestion to pass on this book, you will remark to yourself at each chapter that you haven't seen such great recapping of events since watching the first three minutes of Batman reruns from the 1960s, where they summarize the previous week's cliffhanger.

As the final reader irritation (especially to us in Washington, D.C.), Brown gets some of his D.C. geography, details and landmarks wrong. Here are some of the more obvious factual indiscretions:
  • His limo driver takes him from Dulles Airport to the Capitol via an unlikely route: the Dulles toll road to the beltway to the George Washington Parkway, then finally over the Memorial Bridge. Unless I-66 was closed, the limo driver would not have taken the beltway.
  • The book says the trip from the airport took a half hour. Not by taking the GW Parkway to the Memorial Bridge it doesn't.
  • When Langdon's limo crosses the Potomac, Langdon looks to the left of the Lincoln Memorial to see the Jefferson Memorial. Didn't Brown check a map? Or did his researcher mistake the Kennedy Center for the Jefferson? The Jefferson is way over to the right.
  • Langdon enters the Capitol Visitor Center on a Sunday and sees tour groups inside the Rotunda. The visitor center is closed on Sundays. There are no public tours.
  • Langdon crosses the street from Freedom Plaza and enters the Metro system to get away from the bad guys. The closest Metro station to Freedom Plaza is a couple of blocks away, not across the street.
  • When the bad guys try to arrest Langdon as the Metro train pulls into the station, the train conductor is driving from the third car. Metrorail conductors always drive from the first car.
  • The metro conductor exits the car without opening the doors. I guess he could have squeezed out the side window, but I think Brown would have included that contortionist trick in the narrative.
Those are a few of the errors a D.C. resident, regular visitor or observant tourist would notice. Having pointed out the book's D.C.-centric errors, I should note that, to his credit, Brown does have Langdon notice the hum of the limo's wheels change as he approaches the Memorial Bridge, a sign that Brown knows the road is cobblestone between the Parkway exit and the roundabout approaching the bridge.

Since Brown's previous two books were so much better, I have to ask, What happened? That's why I had to conclude from reading The Lost Symbol that Brown must have been kidnapped by some group intent on rehabilitating the public's view of the Masons after Brown's previous books made these types of secret societies look evil. The real lost symbol of the book is hidden in plain sight. The words on the page, those everyday alphabetic symbols, are Dan Brown's way of crying out to the reader: "Can't you tell from this stilted writing and my obvious mistakes of D.C. geography that any tourist would pick up on that I've been kidnapped and forced to write this? Help me!"

If indeed Dan Brown has been seen in public since the book's publication in September, and he isn't a prisoner of the Masons, the only other reasons I can see for this book being so bad after two previous entertaining novels are:
  • The Lost Symbol was a contractual obligation book. Maybe the book was motivated by Doubleday reminding Brown of the $5 million advance and the promise of another $10 million upon delivery of the manuscript.
  • This book reflects Dan Brown's actual writing ability, and he got in a major tiff with his editor. The Lost Symbol is the editor's revenge.
Overall, if you still feel compelled to read this book, do as I did and buy the ebook version. At least no tree will have been required to share your suffering. My plea to the Masons: Free Dan Brown before he writes another book.

Impressed with Manning’s marketing push and discounts

For the past few months, tech publisher Manning Publications has impressed me with its marketing push by offering quick-strike discounts on print and ebooks. Until Manning's recent marketing and discounts, I was buying a Manning book maybe once a year, and I almost never bought it directly from the publisher. Instead, I'd usually check sites like BestBookBuys to find who had the title I was looking for at the best price. But with its steep short-term discount offers, and my newfound fondness for ebooks, I have purchased Manning books in recent months on Groovy, Grails, Spring and Ext JS, almost always buying the ebook version for $10 to $15 -- a great price for a tech book.

As part of its marketing push, Manning offers daily and weekly discount codes on its website and Twitter feed. Discounts are often 50% or more off its regular price. Tuesday, for example, the Ext JS In Action ebook for which I paid about $15 a few weeks ago (on discount from $27.50) was on sale for $10. (The book, by Jesus Garcia and not yet in print, is a great introduction to and explanation of the Ext JS 3.0 component library, and the only book I found available at the time covering version 3.0.)

In addition to the book discounts, following Manning's marketing message won me an additional $300. In one of Manning's emails in August, I learned that Manning was holding a monthlong technology quiz in September. Manning posted a question daily on a technology topic related to one of its books, with a $300 grand prize to the contestant who could answer the most questions correctly. The tech quiz was great marketing because it brought me and hundreds of others to the Manning website daily. As a quiz incentive, Manning gave away two ebooks every day to two contestants and offered a daily discount on one or more of its books. After answering 30 technical questions, on topics as diverse as features of ActiveMQ, Clojure and Silverlight, I'm proud to say I walked away as the grand prize winner. The competition was stiff. Manning said it had 1,500 contestants. Toward the end of September, there were still about a dozen people with perfect scores with just days left in the contest. After the final question, only two contestants remained with perfect scores, me and Belgian developer Renaud Florquin. I was lucky to be randomly selected as the grand prize winner. (Thanks again, Manning.)

In addition to improving its marketing and pricing, Manning also has impressed me recently by expanding its ebook file formats. Previously, Manning offered its ebooks only in PDF format. Earlier this month, Manning announced it will begin offering its books in the mobi and EPUB file formats. That's great for me because I like reading books in the mobi format on my BlackBerry using the free Mobipocket reader. Ebooks have won me over from the paper version of tech books because of their searchability, the ability to cut and paste code, and their ultra portability by being on my phone and laptop when I visit customer offices. The mobi format is also supported by the Kindle, while the EPUB format is popular with devices like Sony Reader, the nook and the iPhone.

Keep it up, Manning. If you keep offering good technology books at great prices in flexible formats, I will continue to be a regular customer.

Adding shutdown hooks to a desktop Griffon application

Technical pride prompted me to write my first Griffon application Tuesday. Griffon is a Groovy-based framework to write Java desktop applications. Groovy takes some of the sting out of writing Java Swing applications and Griffon relieves more of the burden. My pride came into the picture when Manning Publications released its daily Pop Quiz yesterday asking what technique one would use to process the shutdown of a Griffon application running on OS X. Manning posts a new question each day of September, and as of today, I'm running a perfect score. I couldn't let a little question about Griffon stop me. However, since Griffon is so new (its stable release is 0.1.2) and developers are only now starting to play with it, googling around for a simple answer didn't turn up much.

After failing to find sample Griffon code that described the application shutdown process (especially with the question's wrinkle of using OS X), I figured I'd write a simple Griffon desktop application and give the technology a spin. In the category of famous last words, "How hard could it be?" Turns out, thankfully, not that hard.

After downloading the Griffon 0.2-BETA zip file, setting my GRIFFON_HOME environment variable to point to the folder where I unzipped the files, and adding the $GRIFFON_HOME/bin directory to my PATH for convenience, I was ready to create my first Griffon application. I followed the instructions on the Griffon Quick Start page and ran the command:
griffon create-app
and typed in my project name (quiz) to create all the files needed for a basic application. The create-app command generates the application scaffolding along the lines of other modern frameworks like Rails, Grails, AppFuse and even Maven.

Once the create-app command created the skeletal application files, I followed the sample code on the Griffon Quick Start page to augment the files with code to create a simple desktop application. The application provides a window that lets you type in and execute code in the Groovy shell. Griffon structures its files around the Model-View-Controller pattern, creating subdirectories for "models", "views" and "controllers". Here are the directories in the project's "griffon-app" folder:
$ ls griffon-app
conf/  controllers/  i18n/  lifecycle/  models/  resources/  views/
Figure 1 shows the resulting application in action, which you can build and run using the command griffon run-app. I typed the two Groovy statements into the window and clicked the Execute button.
Figure 1: Griffon Quick Start application window
One of my first tweaks to the sample application code was to put in place what I learned during my googling around for an answer. I followed the advice of Josh Reed, one of Griffon's six committers. Josh, who uses Griffon in his day job, wrote a blog post this month about how to intercept window closing events that proved quite helpful. I edited the file griffon-app/views/QuizView.groovy to define application properties for defaultCloseOperation and windowClosing so the top of my QuizView.groovy now looked like this:
import javax.swing.WindowConstants

application(
    // ... generated application attributes (title, pack, etc.) unchanged ...
    iconImage: imageIcon('/griffon-icon-48x48.png').image,
    iconImages: [imageIcon('/griffon-icon-48x48.png').image],
    defaultCloseOperation: WindowConstants.DO_NOTHING_ON_CLOSE, // ADDED PROPERTY HERE
    windowClosing: { evt ->                                     // AND HERE
        println "QuizView.groovy: My windowClosing event called!"
        app.shutdown()
    }) {
    // ... rest of the generated Quick Start view unchanged ...
}
In addition to the println statement to tell me my shutdown hook was invoked, I needed to add the call to app.shutdown(), since I was now telling Java not to end the application when its main window was closed by setting the defaultCloseOperation property to DO_NOTHING_ON_CLOSE. I also followed Josh's tip on editing the griffon-app/conf/Application.groovy file to set the autoShutdown property to false. This flag is needed so my window-closing event code runs instead of the default auto-shutdown behavior. (Thanks for the tip, Josh.)
application {
    startupGroups = ['quiz']
    // Should Griffon exit when no Griffon created frames are showing?
    autoShutdown = false  // CHANGED FROM the default of true
    // If you want some non-standard application class, apply it here
    //frameClass = 'javax.swing.JFrame'
}
mvcGroups {
    // MVC Group for "quiz"
    'quiz' {
        model = 'QuizModel'
        controller = 'QuizController'
        view = 'QuizView'
    }
}
Now when I run the application and close the window, the console shows:
$ griffon run-app
Welcome to Griffon 0.2-BETA -
Licensed under Apache Standard License 2.0
Griffon home is set to: /home/tom/Projects/Griffon/griffon-0.2-BETA
Base Directory: /home/tom/Projects/ManningQuiz/quiz
Running script /home/tom/Projects/Griffon/griffon-0.2-BETA/scripts/RunApp.groovy
Environment set to development
Warning, target causing name overwriting of name default
[groovyc] Compiling 3 source files to /home/tom/.griffon/0.2-BETA/projects/quiz/classes
QuizView.groovy: My windowClosing event called!
That's one way to add a shutdown hook to a Griffon application, by adding a listener to fire when the application's window closes. However, this discovery didn't answer the Manning quiz. None of the available answers showed this technique.

More searching around the web pointed me to the compelling-sounding griffon-app/lifecycle files created by the create-app scaffolding command. One of these auto-generated files is called Shutdown.groovy. It couldn't get more obvious or easier than that, I suppose. The file contains helpful comments describing how to add shutdown hooks to your application:
/*
 * This script is executed inside the EDT, so be sure to
 * call long running code in another thread.
 *
 * You have the following options
 * - SwingBuilder.doOutside { // your code  }
 * - Thread.start { // your code }
 * - SwingXBuilder.withWorker( start: true ) {
 *      onInit { // initialization (optional, runs in current thread) }
 *      work { // your code }
 *      onDone { // finish (runs inside EDT) }
 *   }
 *
 * You have the following options to run code again inside EDT
 * - SwingBuilder.doLater { // your code }
 * - SwingBuilder.edt { // your code }
 * - SwingUtilities.invokeLater { // your code }
 */
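The advice in those comments -- keep long-running work off the event dispatch thread, and hop back onto it for UI updates -- is standard Swing wisdom that Griffon is wrapping in builder syntax. A minimal plain-Java sketch of my own (the class name EdtDemo is made up, not from the Griffon template) of the same two-sided pattern:

```java
import java.util.concurrent.CountDownLatch;
import javax.swing.SwingUtilities;

public class EdtDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(2);

        // Long-running work belongs on a background thread --
        // what Griffon's doOutside and Thread.start options give you.
        Thread worker = new Thread(() -> {
            System.out.println("off the EDT: "
                    + !SwingUtilities.isEventDispatchThread());
            done.countDown();
        });
        worker.start();

        // UI updates go back onto the EDT -- what doLater, edt,
        // and SwingUtilities.invokeLater give you.
        SwingUtilities.invokeLater(() -> {
            System.out.println("on the EDT: "
                    + SwingUtilities.isEventDispatchThread());
            done.countDown();
        });

        done.await();
    }
}
```

Griffon's doOutside and doLater are, as far as I can tell, just convenience wrappers over exactly this thread hand-off.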
I thought I'd edit this file and add some custom shutdown code. I added this to the end of the above file:
import groovy.swing.SwingBuilder

def swing = new SwingBuilder()
swing.doOutside {
    println "doOutside called in the Shutdown.groovy lifecycle"
}
With these few extra lines of code, running the application (griffon run-app) and closing the window resulted in these lines on the console (Griffon startup output omitted):
QuizView.groovy: My windowClosing event called!
doOutside called in the Shutdown.groovy lifecycle
Interesting to see that the application's window-closing event occurred before the application shutdown event. That makes perfect sense.

But Wait, There's More

Unfortunately, this solution didn't seem to satisfy any of the available options in the Manning quiz. (Except for the tantalizingly tempting "None of the above" fourth option.) I didn't want to give up yet on finding a solution. The available quiz answers that seemed worth looking into talked about defining event handlers for the "ShutdownStart" event or the "ShutdownEnd" event. According to the Release Notes for version 0.1, runtime event handlers may be added to the controller class. The notes list all the events that may be fired by the application during its life cycle:
  • BootstrapEnd
  • StartupStart
  • StartupEnd
  • ReadyStart
  • ReadyEnd
  • ShutdownStart
Since no event for ShutdownEnd is in the list, I figured the Manning quiz answer was probably defining an event handler for ShutdownStart. Since I wanted to be sure, I added a tiny event handler, with code borrowed from the sample in the Release Notes, to my controller class in griffon-app/controllers/QuizController.groovy:
def onShutdownStart = { app ->
    println "Controller onShutdownStart says ${app.config.application.title} is shutting down."
}
I re-ran the application and shut it down, and the console now showed:
QuizView.groovy: My windowClosing event called!
Controller onShutdownStart says Quiz is shutting down.
doOutside called in the Shutdown.groovy lifecycle
The lines show all of my shutdown code successfully got called. So here's what I learned in my foray into Griffon:
  • There are at least THREE ways to handle events that fire when an application is shut down
  • Writing event listeners in Groovy/Griffon is a lot easier than in plain Swing
  • There is no requirement to register the runtime event with the source of the event
  • Griffon (and Groovy) do their share to ease programming by defining conventions over requiring configuration
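An aside of my own, beyond what the quiz covered: underneath all three Griffon-level techniques sits a fourth, framework-agnostic option -- the JVM's own shutdown hook, which fires on exit no matter how the application was built. A minimal Java sketch (the class name JvmHookDemo is mine):

```java
public class JvmHookDemo {
    public static void main(String[] args) {
        // Register a JVM-level shutdown hook; it runs when the JVM
        // exits normally, via System.exit(), or on Ctrl-C -- with no
        // dependence on Swing, Griffon, or any other framework.
        Thread hook = new Thread(() ->
                System.out.println("JVM shutdown hook called"));
        Runtime.getRuntime().addShutdownHook(hook);

        System.out.println("main finished");
        // As main returns and the JVM begins shutting down,
        // the hook's message is printed during that shutdown.
    }
}
```

This is the bluntest instrument of the bunch -- it can't interact with the UI and runs very late -- but it's handy for flushing logs or releasing external resources.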
The nice bonus in playing with Griffon was that the scaffolding-building create-app command got me started and running quickly. I was able to create a Griffon desktop Java application, add three ways to capture runtime events, and compile and test the application several times -- all in less time than it took me to write this blog post documenting it. I don't know whether Griffon can win the hearts of developers who want to write desktop applications, but I sure think it can win the hearts of Java developers who would otherwise be stuck writing a straight Swing application. If you're a Swing developer, definitely check out what Groovy and now Griffon have to offer in ease of development and simpler code writing. I look forward to seeing what Griffon becomes once it reaches the 1.0 milestone.

A pre-dawn visit to Thomas Jefferson for the Cherry Blossom Festival

Jefferson Memorial at dawn this morning during the D.C. Cherry Blossom Festival
The bloom of the Japanese cherry trees in Washington, D.C. is at its peak, so Renee and I went over to the Tidal Basin at dawn this morning to watch the sun come up behind the Jefferson Memorial. We got some nice photos.

I was surprised at how popular the Tidal Basin was at 6 a.m. During the Cherry Blossom Festival, D.C. has turned Ohio Drive SW into a one-way street going north, with parking available on the west side along the Potomac. By sunrise at 6:47 a.m., there was hardly a parking spot left. Photographers lined the Tidal Basin walking path, all prepared with their tripods and telephotos. Renee set up her tripod near one tree, while I roamed around shooting hand-held, which made for a lot of blurry photos in the pre-dawn twilight. I shot at ISO 800 initially, then switched to ISO 200 in the hope of blowing the photos up extra-large without as much graininess. Still, I was shooting at 1/30 of a second and slower for a lot of the early photos. That's what I like about shooting digital: I deleted about 60% of my photos with no thought to all the "film" I wasted.

Visiting the Tidal Basin before dawn to enjoy the cherry blossoms was a good idea. The area around the basin was packed a couple of hours later, with the usual gridlock traffic on Independence Avenue SW and the Memorial Bridge entering the district from Virginia. If you're in D.C. and plan to visit the cherry blossoms on Sunday, definitely arrive early. I saw a lot of cars idling along the Memorial Bridge, slowly crawling toward D.C. -- and probably not finding a close space to park.
Framing Thomas Jefferson through the cherry blossoms

I uploaded several of my photos from today and from last weekend to Picasa Web Albums.

Some cherry tree facts: There are 1,678 cherry trees around the Tidal Basin, with more surrounding neighboring roads and parks. Trees originally were planted around the Tidal Basin in 1912 as a gift of friendship from the people of Japan. About 400 of the present trees were propagated from the original 1912 trees. The health of the trees often suffers as a result of their beauty. The crowds who visit the area often tromp around the base of the trees, compacting the soil. The drainage in the area could use some improvement, too, as you'll notice when you have to walk around some of the flooded areas along the Tidal Basin path -- forcing you to compact the soil even more around those trees. New trees need to be planted regularly to replace the suffering ones, which is probably one reason none of the trees you see there are ancient.

If you are interested in planting a Yoshino cherry tree at your home like the ones along the Tidal Basin, the non-profit American Forests sells them online. My "green" plug for the planet.