When choosing new edtech for your school, value for money and impact are key. Al Kingsley breaks down his key steps to achieving this
This is an edited version of an article that originally appeared on Headteacher Update
There is no doubt about it – selecting new software for a school is a daunting process. There are many similar, but competing, solutions to sift through, as well as the significant time and effort needed to evaluate them – not just to see how they work, but to ensure they will cover all bases. All of this is made even more difficult by the added pressure of ensuring that the vast sum of money your school is about to spend will not be in vain.
Let’s break it down and outline some of the positives and initial steps to help you overcome any roadblocks to action.
- First, your school has, most likely, set out a framework for its digital strategy, so you are clear on your overall objectives.
- Second, within that, you have the specific requirements for each area, which means you already know what you are looking for in any potential new solutions.
- Third, making a list of possible contenders and seeking out evidence as to how they are performing in other schools will get you started and begin to reveal which ones stand out enough to make it onto your shortlist.
- Finally, once you have narrowed things down, the trialling period comes into play. This is the time when you test the product, get familiar with how it works, talk to the vendor, and decide whether it is a good fit for your school context and infrastructure.
We all shop online these days, and it is common to read product reviews in order to get a steer on whether something is a good purchase or not. It just makes sense. So, the next thing to do is collect third-party evidence for your shortlisted software solutions. There are four types of evidence to look for:
- Users’ impressions and anecdotes – often found in blog posts, articles, testimonials, videos and recommendations.
- Descriptive evidence of the potential impact a solution could have – found in vendors’ marketing materials and white papers.
- Correlational evidence, which compares users and non-users of a solution – often featured in comparison charts and white papers (but it is important to note that all kinds of factors make this an imperfect science, so you cannot reliably apply results to different contexts).
- Finally, if it exists, causal evidence in the form of dedicated research papers, peer-reviewed articles and independently commissioned reports will be the most insightful.
Regardless of whether your information is direct from the vendor, other schools in your trust or area, or from peers on social media, views and experiences from other people will help to guide you in your quest to ensure that the solution is fit for your purpose.
Choosing the right solution
Earlier this year I read a great article from Brian Seymour, director of instructional technology for Pickerington Local School District in Ohio, USA. In it, he provides a comprehensive description of how his district chooses edtech solutions – and it is one that I absolutely endorse (Seymour, 2021).
His team has devised a two-stage process of checks to work through before buying new software for mass roll-out. The first stage is a flowchart of options to ensure technical compatibility, asking questions such as:
- Does the item work with the school’s current and future devices?
- Is it device agnostic?
- Does it work with the current infrastructure?
- Which student information systems does it integrate with?
- What data does it capture and where does it keep it?
The flowchart is designed to expose problems and it is infinitely better to have those highlighted up front than after purchase.
The second stage is a series of checkpoints to ensure that any potential solution will align with the school’s curriculum – confirming what its intended purpose is and how it is going to enhance teaching and learning.
The two stages marry together perfectly to provide a 360-degree assessment of all potential aspects and impacts of implementing new edtech within a school. I thoroughly recommend you check them out.
Tips for evaluation
So now you have reached the point of evaluation – where do you start? Begin by looking at the features you have identified that your school needs, the ones that attracted you to the solution in the first place. There is no need to overwhelm yourself at this stage by committing to an analysis of every single tool in the box.
Take notes on how easy the solution is to use. Is it intuitive? Can you remember what to do when you come back to it after time away? Is it awkward or cumbersome in parts? If you find yourself frustrated by having to click too many times to access features, you can bet others will be too. Another good tip is to include screenshots in your notes – this is a good way to differentiate between the various solutions when you are drawing your conclusions later on.
You may wish to consider ‘parallel testing’ during this time. This involves running two comparable class groups – one using the software and one not – and then assessing at the end of the trial whether you can see any evidence of impact, and what key benefits the software has brought.
Finally, having trialled the software for the standard 30 days, do the teachers who used it believe it was beneficial for teaching and learning – and, most importantly, do they endorse it? The final decision on which solution to implement must come from the staff who will be using it every day – and not be overturned at the eleventh hour on grounds of cost.
Although it takes considerable effort to test software thoroughly, it is well worth it to know that you have done the job properly and given your school, staff and students the best chance of making it work. By following the advice, asking the right questions, collecting a range of evidence, and getting hands-on experience, your school will be in the best position to select a product that is fit for the job and, what’s more, you will have gained knowledge and confidence ready for the next time around.