No surprises please

This is a retrospective on a painful project launch.

Today, CIR published a big investigation on the nuclear weapons industry. The report includes three interactive graphics: two slippy maps and a timeline of sorts. The launch was far from perfect – I wrote a 2,000-word internal report on everything that went wrong and how to avoid similar issues next time – and I wanted to riff on some of those topics here as well.

But first, a little background information.

Without getting too specific, the main problem with the launch was that the graphics didn't display on initial page load. The underlying cause was that the site's SSL implementation prevented Pym from loading, and differences between our testing and production environments prevented anyone from noticing the issue before the article went live.

Not good.

Additionally, no one at CIR has authorization to fix issues with the server because the website is managed by an outside firm. As such, we couldn't resolve the real problem and had to resort to hotfixes. For two of the graphics, I could simply swap out Pym for static iframes, which wasn't a huge deal. On the other hand, my Revolving Door graphic was very vertically variable – and it relied heavily on Pym. I had to rewrite all the code to make it work directly in WordPress's text editor, which is ugly as hell and horrific from a maintainability perspective.
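For context, here's a rough sketch of the difference between the two embed styles (the URLs and element ids below are placeholders, not the actual project's). Pym needs its script to load in the parent page before anything renders, while a static iframe needs no JavaScript at all – at the cost of a hard-coded height:

```html
<!-- Responsive embed via Pym: the child page reports its rendered height
     to the parent, so the iframe grows and shrinks with its content.
     But if the pym script fails to load (as it did for us under SSL),
     nothing renders at all. -->
<div id="graphic-parent"></div>
<script src="https://pym.nprapps.org/pym.v1.min.js"></script>
<script>
  var pymParent = new pym.Parent('graphic-parent', 'https://example.com/graphic/', {});
</script>

<!-- Static-iframe hotfix: no script dependency, but the height is fixed.
     That's fine for a graphic of known size, and painful for anything
     vertically variable. -->
<iframe src="https://example.com/graphic/" width="100%" height="800"
        frameborder="0" scrolling="no"></iframe>
```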

Again, not good.

So I got the hotfixes in place fairly quickly and no one died. But the fun doesn't stop there. We also identified a handful of smaller bugs with the slippy maps post-launch that can be traced back to two main issues with our development and editorial processes (the real topic of this post):

  1. Incomplete Requirements Gathering
  2. Insufficient QA Testing

1. Incomplete Requirements Gathering

The slippy map graphics were both designed to be full-screen desktop applications. They were built as stand-alone applications and had lots of features that come along with stand-alone apps – like internal scrolling, anchor links, full window width and height, a vertical sidebar area, and external linkage. They were very obviously supposed to function on their own rather than embedded in an article page. (They were also never optimized for mobile, but that's another story.) And yet, where can you find these apps? Embedded in an article.

This illustrates a fundamental misunderstanding of requirements.

I'm sure the graphics would've been designed differently if it had been clear from the beginning that they would be published inside the story rather than as supporting stand-alone apps.

So, here's a list of things to clarify with your editors at the beginning of a project:

  • What's the objective for this graphic?
    Mark all that apply. If more than one, please specify priority.
    • Provide additional information that isn't in the report
    • Display information from the report in an alternative way
    • Provide a tool for the public to use
    • Make something "fun" or "delightful"
    • Get clicks on social media
  • Where will the graphic run?
    Mark all that apply. If more than one, please specify priority.
    • Embedded in an article on our main website
    • In its own page on our main website
    • As an entirely separate app, not on the main site
    • Embedded in an article on partner websites
  • When is the story running?
    Please choose one.
    • This week
    • Next week
    • Next month
    • Next year
    • Other, please specify ________

Answering questions like these will help ensure everyone is on the same page from the beginning. This allows the news apps folks to produce the right kind of product, and it helps set editors' expectations.

2. Insufficient QA Testing

Stemming from the requirements problem comes insufficient QA testing. In the case of this nukes project, almost all of the editing and testing happened while the graphics were still in their stand-alone form. They had been cycling through the editorial process since April; the marching orders to embed the graphics in an article page didn't come until July. By then, everyone involved was experiencing testing burnout, we had lost our slippy map developer (he's busy being awesome at The Chronicle now!), and details were overlooked.

Additionally, no one at the organization had identified the discrepancies between the preview and production environments – so when we were testing, we weren't testing the right things.

The obvious solution is twofold. First, identify testing procedures that more closely replicate our production environment. Second, hold off on significant review of graphics until they're in their final format, presented exactly as they will be published. That would help save everyone from burning out too early and overlooking obvious bugs.

In conclusion

I felt like these graphics were handled with too editorial a focus. There were no official requirements, no set design process, no project checkpoints, no meetings, no defined testing strategy, no integration process. But at the end of the day, we were dealing with code and we should have approached it as such.

We need to approach the creation of our interactive graphics as technology products with legitimate development processes in place.

This way we can document different procedures (like how to test things appropriately) and define workflows for design, development, testing, and deployment. We can make sure the right people are included and consulted at the right times in the process, and we can identify and resolve pain points earlier and more comprehensively. We can present our best work without delay.

And now that there's attention on the issue, we can take steps to improve.

A fun aside! Today also birthed the best and worst backhanded compliment I've ever received:

Yours is less sophisticated, but it looks far better on mobile.