Using evidence to make decisions about edtech purchases: a practical guide

Dr Fiona Aubrey-Smith, director of One Life Learning, gives her top tips on how you can make informed choices when it comes to procuring edtech

As school leaders we are undoubtedly becoming better at using research evidence to inform our decision-making, both individually and collectively. However, 42% of buying decisions are still made on the basis of informal ‘word of mouth’ recommendations from other schools (NFER, 2018), which suggests we still have a long way to go.

That said, there is an increasing number of sources of evidence to draw upon when making buying decisions about edtech. Whilst historically many suppliers produced case studies from advocate schools and soundbites from enthusiasts, many now recognise the need for more robust evidence of impact. School leaders deserve to know what value an edtech product adds to existing teaching and learning experiences.

Edtech suppliers are increasingly working in partnership with academic researchers to undertake objective analysis – identifying precisely how their products directly improve teaching and learning. Suppliers are also following the online retail trend of providing customer ratings: ventures such as EdTech Impact have been set up where suppliers list their products and existing customers provide validated reviews against pre-determined criteria, while sources of support such as Educate give schools comprehensive guidance about what to consider.

It is vital that we, as school leaders, interpret the evidence presented to us – challenging bias within it and being absolutely clear on what it might mean for our school, our teachers and, most importantly, our students.

Every school has its own unique flavour – a combination of size, catchment, strategic priorities, characteristics of teaching and learning, improvement or innovation priorities, policies, experience and expertise of staff, and a great many other variables; even within the same school, a department, phase, year group or class can have a very different personality to its neighbour. We must remember that these kinds of variables affect the relationship of a particular product with a particular school and, moreover, the relationship between a particular product, the teachers and children using it, and the specific context that they are using it within (Aubrey-Smith, 2021).

Ask the right questions

When receiving recommendations from other schools, from comparison websites or through supplier marketing materials, you are encouraged to ask:

  • What proportion of staff and students are using the product – and why are those staff and those students the ones using it? This will help to bring to the surface the other influences affecting its successful use.
  • What prompted the decision to use this particular product, and which others were considered? This will help to identify whether it’s the general concept of the product that is perceived as successful – such as automated core subject quizzes – or whether it is the specific product itself.
  • How long has the product been in use – and, if it has been renewed, what informed that decision? This will help reveal how embedded the product is.
  • Since this product has been introduced to the school, what other improvement strategies have been implemented – either whole-school or within this particular subject/phase/department? This will help you to ascertain whether any improvements seen relate to the product, other teaching and learning strategies, or a combination of the two.
  • Once students are used to using this product, what evidence is there to show that their learning translates into the same levels of mastery in other contexts? (E.g. if they score ‘x’ or do ‘y’ when using this product, can you be confident that they would later score ‘x’ or do ‘y’ when applying the same skill in an unrelated context?) Are the attainment increases about the child’s knowledge, or the child’s familiarity with the product?
  • What evidence is there of students’ long-term knowledge or skill retention – over a week, a term, a year and beyond? Note: this is not the same as progression through units of work, but about retaining knowledge over time. Is the product securing long-term knowledge, or targeting short-term test preparation or skill validation?

Part of a school becoming an effective professional environment for all staff is about everyone engaging meaningfully with available evidence, and embedding specific types of strategic thinking and evaluative focus into practice (Twining and Henry, 2014). In other words, all of us need to be using robust evidence to inform our thinking, and to be clear on how we use that evidence, meaningfully, to make future decisions. There are three key lines of enquiry which will help you to challenge evidence meaningfully:

  1. Correlation is not the same as causation

In other words, just because a school using a product saw improved attainment outcomes, increased engagement, reduction in workload or improved accountability measures, it doesn’t mean that it was the product that led to this. Most schools implementing a new product do so as part of a broader strategy focused on improving specific priorities. One would, therefore, expect improvements to be seen regardless of which products were chosen because of the underlying strategic prioritisation given to the issue. Instead, focus on how the product brings about changes to behaviours – for example, increased precision within teaching and learning dialogue; this is where meaningful impact will be found.

  2. For every research finding that argues one approach, there will be research elsewhere arguing for something different

Your role is to identify which research relates most closely to your specific context. You can do this by asking:

  1. Who produced the material that I am reading? What bias might they have? Have they acknowledged this bias and shown how they have mitigated it?
  2. What evidence led to the recommendations? What data are the findings based on – and are these large scale and surface level, or smaller scale probing more meaningfully?
  3. What is the vision for teaching and learning here, and how does it align with the vision of what good learning and good teaching look like in our own school?
[PULL OUT: Did you know that there are at least 23 different types of bias that we all bring to our decision-making? (Hattie and Hamilton, 2020, pp. 6-9)]
  3. Plan for impact before you commit to investing

A vital part of decision-making is planning, from the outset, how you will evaluate what works and why; you will then remain forensically focused on what matters most to your school throughout procurement, implementation and review. You will also be able to identify and recalibrate when ideas do not work as intended, so that future practice improves. Guskey (2016) encourages us to think about impact through five levels: reactions to something, personal learning about it, consequent organisational change, embedding ideas within new practices and, finally, creating a positive impact on the lives of all those involved.

These levels apply to both teachers and students (as well as leaders, parents and other stakeholders, depending on the product). Embedding meaningful review of the impact of your product choice connects your intentions to the lived experiences of the students whose needs and futures you are serving. The two vital questions to ask yourself and your team are:

  1. What evidence is there that our intentions for this product are being lived out in reality by our young people?
  2. What evidence is there that our provision (through this product) is making a tangible difference to how students view themselves, their learning and their future?

Any decision made in school should always be rooted in improving the quality of teaching and learning

This can easily be lost amongst conversations about requirements and procurement. To help with this, identify three to five ‘personas’ – short descriptions of the people whom the product is ultimately intended to support. For example:

  • High attaining pupil premium students.
  • White working-class boys in KS2.
  • KS3 girls disengaged with STEM.
  • Children with EAL in KS1.

At every point, keep coming back to these personas: how would each product, feature, piece of research, impact finding or sample of evidence relate to these specific students? In this way, we keep a forensic eye on what matters most – our students and their learning.
