Data visualization for the World Council on City Data

Revising the interface of a powerful tool to make it easier for new users to access city data


The World Council on City Data came to Laurier's UX students with usability issues in their data visualization platform. They had done no previous user research on the site.


Validate assumptions about the current interface offerings. Suggest solutions to improve scalability and make it easier to get started using the platform.


Working in a small team of students through 8 weeks of sprints, conduct feedback sessions with students and faculty in Laurier's Digital Media program. Develop an understanding of user needs and jobs to be done. Generate interface mockups for improvements that can be made.


Presented work to stakeholders, which went over well. Received 100 marks out of 100 for work on this project.



The World Council on City Data (WCCD) Open Data Portal showcases data for ISO 37120 certified cities. Specifically, it allows users to visualize complex data sets and to compare, benchmark, and explore data trends the world over.

We were tasked with conducting usability studies and building prototypes of recommendations that would improve the usability of the site. Working over the course of 8 weeks, our group of 7 students collaborated in modified GV Sprints to analyze the current offering, create mockups of improvements, test those mockups, and make revisions based on our feedback sessions.

Literature review and background research

We began with a competitive analysis and literature review to develop our understanding of the problem space, focusing on other well-designed websites built around data visualization. We also conducted stakeholder interviews to get a better sense of their priorities.

Here are some highlights from what we learned:

  • Focus on the stories and comparisons
  • Important to effectively utilize grouping / ranking
  • Understand and respect our audience and how they process visual representations of info
  • Clutter and confusion are a failure of the interface design, not a failure of the data


Working with such a limited budget (surprise: it was literally $0) and limited time, we had to be mindful of scope when creating our project plan. Ultimately, we decided to move forward with one round of testing / feedback sessions in the hopes of learning about user needs directly from users themselves. After the feedback sessions, we would create mockups / recommendations based on what we heard, and encourage further testing.

Problem statement

How might we create a simpler, easier-to-understand interface that removes barriers to entry for accessing city data? Further, how might we build more engaging visualizations that address the needs of advanced users?


Building a testing framework

Now that we had a reasonable frame of reference for the project, we moved forward into building a testing framework. We started by analyzing our research and agreeing on some common takeaways. We then analyzed the current platform to better understand the opportunities for improvement.

Once that was completed, we moved on to ideation exercises. We decided to format our feedback sessions as a hybrid interview / prototype walkthrough.

Ideation and brainstorming

Figure 1.2: Four step sketches

Figure 1.3: Crazy 8 sketches

Our team worked through a number of activities in ideation: four step sketches, 'Crazy 8s', and solution sketching. Each team member took what they learned from the competitive analysis and suggested improvements to the current platform, sketching ideas over a 2 hour workshop.

Ultimately, each team member drew out their best idea into a 3 panel storyboard and taped their ideas up onto a whiteboard, art museum style. We then heat mapped our favourites by placing dot stickers beside the parts of each solution that seemed the most promising, and wrote down some questions based on these ideas.

Figure 1.4: Solution sketching

From there, we went into a speed critique: discussing each solution sketch to decide what should be included in the first round of prototyping. We took the best of everyone's solution sketches, and built those into a storyboard we could show users in testing. We understood our gaps and priorities at this point, and all that was left to do before the round was write the script and complete the prototype.

Figure 2.1: Prototype sketches as part of work planning


I personally built out the screens for the prototype shown to users, and for the final recommendations. The prototype was built in Sketch over a short time frame, and included over 50 artboards. I was responsible for leading the prototyping in both the initial round and the recommendations.

Figure 3.1: Editing individual visualizations

Figure 3.2: Initial rework of onboarding process

Some of the features we added to the platform in the first round of testing are below:

  • Saving filters and parameters for use at a later time
  • Viewing more data at once
  • An easier to understand and more engaging on-boarding experience

Testing our proposed changes

We hosted interviews and observed users walking through our prototype. We also asked generative research questions prior to the walkthrough, and used A/B testing to validate some assumptions that our stakeholders had not previously validated with users.

Figure 3.3: Screen recording of onboarding testing

Figure 3.4: Data visualization in the prototype

Users completed pre-determined tasks, and were timed and recorded as they walked through. We observed users' clicks and eye movements, and we heard lots of feedback. Here are some highlights of what we heard:

  • The number-based labelling system (aka the ISO section headers) was not effective and confused users
  • Information architecture of the site needed to be simplified even further than was done in prototype
  • The fewer clicks per task, the better (some actions took too many clicks)
  • No users asked about or utilized the search feature; generally, users were indifferent to the option to search specific data points
  • Users valued bright, dense and captivating visuals. The majority of users were willing to spend more time interacting with and learning the platform if the visuals were worth the investment

Our initial plan for our solution involved adjusting the scatter-plot layout of the data as presented in the current site visualization. We heard overwhelmingly from users that the way the current data was visualized was actually “refreshing,” despite its somewhat clear usability issues, so in our final prototype we elected to focus on addressing the usability issues we found through testing rather than the data visualizations themselves. We recommended looking into the data visualizations in more depth in a further sprint.

Additional prototyping based on findings from feedback sessions

We adjusted our onboarding experience based on our testing feedback, trimming the fat off of our initial prototype and taking into account the needs and wants of our users.

Figure 4.1: Redesigned splash page for new visitors

Some things we brought over from the current site that worked well for our users in testing:

  • Colours and icons work as identifiers for each of the ISO standard categories
  • Users are able to see their selected metrics at (essentially) all times during their use of the site
  • It takes two clicks (and some light scrolling) to add a metric from the main screen, down from about 4 in our first round of usability tests for our prototype

Some key changes we made from the current site to now:

  • Users are presented a list of categories with all category and metric names available to them at once, as opposed to having to scroll over or click the icons to view the category and metric labels
  • Users aren’t forced into using an arbitrary, circular layout of these categories
  • Adding cities and metrics can be done via search functionality, as well as a clearly organized list of cities with multiple sorting options
  • Tasteful drop-shadow effects on white (and slightly off-white) backgrounds help users differentiate portions of the site, as well as indicating interactive elements

Figure 4.2: Redesigned onboarding view

Figure 4.3: Redesigned visualization screen

* Visualization views weren't fully redesigned (based on user feedback), but instead adapted from the current design to the new design language of our proposed solution.

Notes on the design of our recommendations

  • Users are presented with the ISO Standard metric labeling scheme on each individual metric, but it is far less prominent than in previous prototypes
  • Users are presented a small icon and colour to maintain visual consistency across all instances of metrics and category cards on the site
  • Users are able to scroll through the list of categories and list of metrics in individual columns (in opposition to full-page scrolling), maintaining a balance of visual density for usability and aesthetic considerations
  • The selected metrics section live-updates in real time, displaying this key info regardless of how far down the rabbit hole a user may find themselves, and can be clicked to edit regardless of a user's place on the site
  • The category cards stay the same size and shape regardless of a user's place on the site, reinforcing a consistent visual language
  • City labels can be turned off, and are all sorted to the right by default when viewing a visualization (to avoid the mess that the site currently has in place when it comes to labeling data on the scatter-plot)



Final recommendations

Continuing to explore the ways complex data gets represented is key to the long-term sustainability of the site. The people we tested with generally understood the language of the city-based scatter-plot, but problems like labeling and density still exist in the current visualization.

While we initially planned to tackle that challenge, our focus quickly shifted to accessing information (selecting filters) once we realized that was a key roadblock that came before the visualization in the process. In the future, it would be worth exploring ways to display information that work in denser situations.

As mentioned previously (during the Initial Prototyping stage), we attempted to sort out an option where a user could log in and save a list of filters for later reference, but we simplified our prototype for the sake of testing our core functional changes to the system's usability rather than this change. Moving forward, it may be worth revisiting the ability to save and select visualizations from a user profile or list.

I received a 100% final grade for this project.


Looking back on this project years later... it's clear to me we over-thought a lot of what we recommended. We were in way over our heads with the amount of ground we tried to cover, and rather than focus on a truly impactful portion of the site that we would be able to affect, we spread ourselves too thin and delivered surface-level recommendations. I would love to go back and cut out a ton of the scope creep that plagued this project.

I'd also spend less time with high-fidelity prototyping than we did. Although medium and high fidelity prototyping could be helpful with a project like this, we were much too early in the process to extract any value from that time. Our time would've been better spent writing more effective research questions, and really getting to the bottom of what people want from a service like this.

It's also worth mentioning that the company we were partnering with experienced some turnover mid-project, which limited the advising we received from the community partner meant to liaise between our group and the World Council on City Data. Looking back, we definitely could've used a firmer hand to push us in the right direction to make an impact.

Thank you!

I would personally like to thank the World Council on City Data for being gracious partners, and for affording our group the opportunity to work on a project that challenged us as students and designers in new and exciting ways.

Our group would also like to thank Abby Goodrum for her continued guidance and support throughout the duration of the project, as well as our testers from the Digital Media and Journalism program at Laurier Brantford for being kind enough to let us test our ideas on them.