Understanding the needs of lower-income users when collecting e-Signatures

Collecting e-signatures from low-income individuals and people who are new to Canada to reduce application friction and abandonment

Problem

Canada Revenue Agency requires a "wet" signature to validate the user's consent to perform income verification. Post-COVID, Ontario needed to modernize how this standard is accepted and delivered, and to offer it entirely online.

Goal

Make informed recommendations to Ontario's Ministry of Finance in support of a user-friendly e-Signature implementation. Take on a teaching role with project stakeholders so that they can continue to conduct user research as the project progresses.

Action

Conduct early-stage feedback sessions with real users to validate assumptions and test potential user flows for collecting e-Signatures. Additionally, work to understand what language is needed to guide the user through the process quickly and easily.

Impact

The findings from this work informed recommendations that were given to the vendor of record for the project. The e-Signature solution is under development and will be deployed by the Ministry of Finance across multiple programs (reaching thousands of users).

Problem

Context

Many things have changed since COVID-19 began to impact our way of life, and the virus has undeniably accelerated the transition to a digital-first society. The effects on the way governments engage with citizens cannot be overstated; many projects that might otherwise have taken years to be prioritized became the immediate focus of public servants.

For Ontario's Ministry of Finance and the Canada Revenue Agency, collecting electronic signatures has long been a topic of conversation, but it came into the spotlight in a post-COVID world. Working with the Ontario Digital Service Lab as a product designer, I was tasked with an early-stage research project to help solidify our understanding of user expectations for digitally signing documents.

I worked closely with stakeholders from the Ministry of Finance to develop prototypes of current-state and future-state implementations of e-Signature collection, and independently facilitated user feedback sessions that helped to validate and challenge assumptions.

The program area of focus for the project was specifically the Healthy Smiles Ontario program, which provides dental care to people 17 years of age or younger who come from low-income households.        

Figure 1.1: Research timeline

Goal

Core research objectives              

Because of legal limitations at this early stage of the project, we were initially unable to view any sort of demo of what a realistic potential solution from the previously hired vendor could look like. Still, our mandate was to provide thorough recommendations that would help the Ministry of Finance implement a user-friendly e-Signature solution.

A graphic example of a simple, typical user journey. Step 1, user learns about the Healthy Smiles Ontario program. Users are generally happy at this point, and excited to see that there is a support program offered to connect kids with dental care. Step 2, user visits the website to learn more. The instructions seem complicated, and require a paper form submission. There are lots of eligibility requirements. Step 3, the user decides to apply to the program. Users are willing to put up with some difficulty to get their kids' dental care, so they go through the application and try to fill out the required information accurately. Step 4, the user tries to submit their application. The user is required to submit an additional form by mail to the CRA, and many users do not complete this additional step. Users are often frustrated by this point, and if a user submits their info incorrectly they are required to apply again with corrected info.
Figure 1.2: An example of the type of journey typical for users

I did not have the luxury of delaying the project until the vendor could work with me more closely, so I began by facilitating a two-hour workshop with a diverse set of project stakeholders to develop a plan of action and clearly define our priorities.

Figure 1.3: Screenshots from the workshop activities conducted with stakeholders

Business goals:

  • Reducing the time it takes to apply to the program
  • Enabling users to apply entirely online
  • Removing the requirement for paper forms
  • Improving the user experience

Our anticipated delivery must include:

  • A secure way to sign required documents online
  • A faster, more convenient way to submit required documents to get children the care they need (specifically relating to the Healthy Smiles Program)
  • An inclusive approach to e-signature collection (considering users of different abilities, English proficiencies and income levels)

We initially aimed to quantify our changes by:

  • Tracking the rate of enrollment in the program
  • Tracking the abandonment rate at the signature collection stage
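
As a rough, purely illustrative sketch of how these two metrics might be computed once the service is instrumented, the snippet below calculates an enrollment rate and a signature-step abandonment rate from funnel counts. The project had no analytics pipeline at this stage, so every name and number here is hypothetical.

# Hypothetical illustration only: counts, field names and values are invented.

def abandonment_rate(reached_step: int, completed_step: int) -> float:
    """Share of users who reach a step but never complete it."""
    if reached_step == 0:
        return 0.0
    return (reached_step - completed_step) / reached_step

# Example funnel counts for a single reporting period (made up for illustration)
started_applications = 1200      # users who began an HSO application
reached_signature_step = 900     # users who reached the e-Signature step
completed_signature_step = 630   # users who signed successfully
submitted_applications = 600     # users who submitted a complete application

print(f"Enrollment (submission) rate: {submitted_applications / started_applications:.1%}")
print(f"Signature-step abandonment:   {abandonment_rate(reached_signature_step, completed_signature_step):.1%}")

Tracking these two numbers over time, rather than a single satisfaction score, is what would let the team see whether an e-Signature flow actually reduces drop-off at the consent step.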

Problem statement                        

We observed that the process for signing and submitting an application to Healthy Smiles Ontario does not enable users to sign up entirely online. How might we enable users to submit signatures online so that we can reduce the time and effort it takes to enrol in HSO? Further, how can we design a simple, secure and inclusive experience that reduces the abandonment rate of users attempting to enrol?

Typically, we would hope to conduct sessions in person to reach as broad a range of participants as possible, but with everyone working from home we were limited to online recruitment methods. It should be noted that everyone we spoke to needed some level of technology literacy, as sessions were conducted virtually. Special attention was paid to outreach for low-income individuals, and we reached out to community health centers and other local community organizations for help reaching this user group.

Action

Leading effective research

After clearly defining our problem space and objectives, I created a research plan for our feedback sessions that included a script and specific scenarios for users to walk through. Special attention was given to writing open-ended questions that avoided leading the user. If I were to directly ask the user what we were hoping to learn, it wouldn't make for great conversation or promote honest answers.    

Figure 1.4: Expanding on our understanding of the user journey for submitting an application to the program

While building the script independently, I worked in parallel on the prototype we would show to users. I worked with another designer on the prototyping and took on a teaching role, as they had no experience in Figma prior to this project. Together, we built a recreation of the current process, as well as a future state for the application that adopted the Ontario Design System. We tested using the current-state visual standard, as we decided we would be best served by showing users representations as close to the current state as possible.

As a team, we went over the script and prototype together to make sure it aligned with our research goals. I then facilitated user feedback sessions via Zoom, asking generative introductory questions and walking users through the prototype using Zoom's remote access features and our Figma document.

Figure 1.5: An example of the (unreasonably) complex initial prototype flow proposed by stakeholders

Asking good questions during feedback sessions                          

When creating scripts for user testing sessions, it's important to understand that directly asking what we want to know doesn't always make for a great conversation. We also want to avoid leading the user with our questioning, and instead allow them to freely offer up an answer to the question we ask.

Something that has been a challenge for me as I've learned and grown as a designer is becoming comfortable with the awkward silence. Usually the user offers up valuable feedback to fill it, so it's best to let a question sit until you're asked to clarify.

As a simple example, we often ask "what are your thoughts on (a feature)?" or "what would you expect from (a product or service)?" rather than "do you like this?" Our hope is that this allows the user to offer up their own thoughts with less of a lead-in to their answer.

A graphic example of what good research questions sound like. Examples of bad questions: “Does the content on this page make sense?” “Do you like how this page looks and feels?” Examples of better, more open-ended questions: “What do you understand this page to be telling you?” “What do you understand you’re being asked here? What are your thoughts on being asked for this info?”
Figure 1.6: An example of revising research questions to make for more open-ended sessions

What we learnt in our first round of feedback sessions

Because we were using the Healthy Smiles Ontario program as a testing ground for the e-Signature implementation, some feedback was specific to that program. A lot of the feedback from users was on the content design and instructions they saw when working through the process. We leaned into this focus while maintaining our priority to establish user preference for how their electronic signature is collected.                

A graphic representation of our insights from round one. 1: Users showed no preference between terms e-Signature and digital signature. 2: Consent form is long and users do not read long sets of text. 3: Users want it to be made clear at the beginning of the application that spouse must be present at time of signing. 4: Language of instructions needs to be revisited and made more clear. 5: Users feel uncertain about the accuracy of their signature if drawn with a mouse. 6: “Applicant information” confused a user who thought that information would be the child's, rather than their own.
Figure 2.1: Insights from our first round of research

Based on the feedback to this point, it was clear we needed to go back to the drawing board on the content design. After our first round of sessions, we made improvements to the instructions on the page.

At this point in our testing, users found the overall process simple and easy. Some concerns were expressed specifically about the accuracy of a signature drawn with a mouse, but overall feedback was quite positive. As a team, we worked to drill down on some of the core criticisms users had with the process. Once our prototypes were properly built in Figma, I led another round of feedback sessions to solidify the findings.

Figure 2.2: An example of the intentionally simple screens we showed users for the signature-specific portion of the follow up testing

Speaking to internal verification agents                              

Between rounds of testing, I took the opportunity to speak to some internal folks. The verification agents we spoke to had close-up knowledge of the types of users who typically apply for the Healthy Smiles program, and of their behaviors. Our project stakeholders gave us a great foundation, and after our feedback sessions we had a really solid set of questions for the people who work most closely with these users.

We learnt a lot from the feedback session with the verification agents, but here are some highlights:                  

  • Overall positive sentiment toward prototype and e-Signature as a concept
  • Agents hoped the e-Signature implementation would reduce the rate of applications submitted incorrectly, or without completed consent forms attached
  • Following up with users so they can edit their application, rather than requiring them to submit a new application altogether, would be the ideal future state
  • Agents wondered if there was an opportunity to break up longer pages (such as long sets of input fields) into shorter pages to keep users' attention
  • Because we are reaching such a wide variety of users, we need to be incredibly careful with language (current form language often confuses users)

We then pushed forward with a second round of testing with the public.        

Summarizing the feedback from both testing rounds

I pride myself on being a feedback-driven designer. In practice, that means taking findings from feedback sessions, analyzing their causes, and making improvements to the prototypes to better serve users. This is the core of the work that I do. I actively communicated with my stakeholders throughout the feedback analysis so that they understood my interpretation of the findings. Qualitative research findings can be difficult to parse, but removing ego from the equation and remembering that good designers thrive on feedback makes for a much more effective team.

After carefully considering all user feedback, we identified core areas where we could improve our prototype designs and make actionable recommendations. Although we were very early in the process and the vendor had not yet been on-boarded, these recommendations form a solid foundation for the project to move forward while prioritizing user needs. Ultimately, the results of this work are mostly conceptual: the recommendations serve as best practices, and exact interface designs will need further testing as they are developed.

A graphic representation of our insights from the second round of testing. 1: Proceed with in-line signature collection option, with email link for edge-cases. 2: Stop the user before their submission if they enter incorrect info. 3: Reduce the amount of information shown on the page, and break up the application across multiple pages. 4: Write simple instructions, in plain language to avoid user abandonment. 5: Revise the legal language at the end of the form to be simpler and more readable. 6: Reduce the amount of information we collect from the user wherever possible.
Figure 2.3: Insights compiled from both rounds of testing

Impact

Based on the feedback we received from business stakeholders, verification agents and the public, we were able to make informed recommendations for best practices. Stakeholders were pleased to have an informed set of recommendations to give to the vendor once development began.

The project will continue to evolve over time; as we learn more from users, improvements will be made. Because the service is pre-launch, it is hard to quantify the reach it will have. However, I can report that it received strong support internally, and the team of stakeholders I worked with is moving forward with the principles we recommended based on user feedback.

Note that my work on this project was done very early on in the development process: the eventual final product that launches may look different from what is shown here.

Reflection

With more time and resources, I would have loved to stay involved with usability testing. Having a vendor in place before understanding your users' real needs is also not ideal, but it is an unfortunately common reality in government.