
4 principles for fruitful learning and evaluation design

In my role as WellAhead’s Knowledge Manager, I’m constantly gathering insights to advance our understanding of how to meaningfully and sustainably integrate wellbeing into K-12 education. In the 2015–16 school year, we asked educators and school district staff their opinions on the value of data. Their answer was simple: data collection is both a blessing and a curse.

The Curse: Let’s face it: people (especially those working in the education system) are busy, and taking time to fill out evaluation tools is not typically at the top of their priority list. Adding time to already-packed schedules to reflect on practice and complete evaluation tools felt like a burden to educators and district staff, and this came through clearly in their reflections.

The Blessing: Despite the burden of filling out evaluation tools, there is a plus side – educators found the process could generate valuable insights about what they do, and give them evidence to better understand the impacts of practices they valued.

In reflecting on our approach to data collection for WellAhead, I’ve developed four guiding principles to try to shift the pendulum toward the blessings of learning and evaluation processes, and limit the burdens or ‘curse’. Tools should:

1) Be intuitive
Evaluation work can be complex. Part of an evaluator’s skillset should be the ability to distill complex concepts into simple language that resonates with your users. Where possible, it’s best to work with tools and questions that have already been validated. The more intuitive and straightforward your tools are, the more valid and reliable your results will be. Not sure if the language resonates? Test it with your users before going to scale.

2) Be aesthetically pleasing
The era of smartphones and social media has dramatically increased competition for people’s attention. Especially when using electronic tools, it’s increasingly important to use software platforms with engaging and attractive design (e.g. try Typeform). My yet-to-be-tested hypothesis is that the more engaging your survey design is, the more likely participants are to complete the tool, and to encourage their peers to do so as well. If participants respond to the question “how was the survey?” with “it was beautiful!”, this is a good thing.

3) Be as concise as possible
Initiatives with a scope as broad as WellAhead’s may have a correspondingly high number of relevant questions to explore. While evaluators may be inspired to ask a thousand questions, participants simply don’t have the time to answer them, evaluators don’t have the time to analyze the mountain of data that would ensue, and (to follow the metaphor) in all likelihood the real story you’re trying to uncover will get lost somewhere in a tree-well. Ask only what is most relevant.

4) Bring value to both you and your users
Health research in K-12 settings has a history of inconveniencing teachers and students to collect data, with the promise of a glossy report for their efforts. Unfortunately, many of these processes end up generating much more benefit for the collectors of the data (and the related research community) than for the users who filled out the form. Challenges in bringing value locally include the considerable lag time between collecting the data and receiving reports (often over a year for large-scale surveys), and limited support to help local communities understand results and embed them into their strategies.

What do you find works best when collecting data from educators? Are there any guiding principles you would add to this list?