Why should EdTech be different?

In most areas of life, consumers expect products to be tested before use. In many countries it is illegal to market a product without rigorous testing, and even where it is not, consumers demand that companies test products before releasing them onto the market. Imagine booking a flight on a newly designed aircraft and discovering, the day before departure, that the manufacturer had never completed any rigorous testing. Research to test any product must be designed with the most robust approach that is feasible.

In education, and in particular the development of EdTech, we appear to accept new products and interventions without demanding the rigour we expect from other products we use in our daily lives. Perhaps this is because people rarely die or fall ill as a direct result of poor-quality research designs or the absence of any testing. Many well-intentioned interventions are rolled out on the basis that they seem plausible and are unlikely to do harm; yet when such interventions have been evaluated robustly, some have been found to be ineffective or even harmful. Ineffective interventions divert funding, time and other resources away from effective solutions. We have a moral responsibility to provide the most robust evidence available to policy makers, school leaders and parents, and to ensure decisions are made on the best possible evidence at the time.

In EdTech, small start-up companies seeking investment to scale are rarely focused primarily on the impact of their software; the focus is on metrics that demonstrate usage and user engagement with the platform. At present, start-ups rarely have the expertise or funding to commission robust research, so it is only the larger, well-established companies that can develop an evidence base. Even then, funding does not guarantee high-quality, robust research: many of these studies are single-group designs without a comparator group.

In education, organisations such as the Education Endowment Foundation (EEF) fund robust studies to evaluate the effectiveness of interventions used by schools. In 2015, the EEF funded an efficacy trial of Accelerated Reader (AR), a web-based programme to support secondary-aged pupils with reading. The study found that Year 7 pupils who were offered Accelerated Reader made 3 months’ additional progress in reading compared to other similar pupils; for pupils eligible for free school meals, the figure was 5 months’ additional progress (1). This led the EEF to commission a large-scale effectiveness trial of AR to assess the impact of the programme on Year 5 (6,116 pupils) and Year 4 (6,311 pupils) in 181 primary schools. The evaluation cost £889,950 and took 5 years to complete. “The independent evaluation found children who started AR in Year 5, on average, made no additional progress in reading compared to children in the comparison schools. Similarly, children who started AR in Year 4, on average, made no additional progress in reading compared to children in comparison schools” (2). Yet thousands of primary schools use the AR programme, even though the research evidence suggests it may be best deployed as a reading catch-up intervention for secondary-aged pupils.

The problems we encounter in education and EdTech are the barriers to creating robust evidence, such as the high cost of evaluations and the length of time they take. In a fast-moving EdTech environment, a product is likely to go through numerous code changes and new features, and the thought of locking code down for years is a non-starter. Yet these are not justifiable reasons to avoid robustly evaluating the impact of new products.

At present, the major technology companies are positioning themselves and investing in the ‘Metaverse’, which could transform how we interact, work and learn over the next two decades. As new technology arrives, we must, as consumers, demand that the evidence underpinning any new type of educational intervention is made available. If the goal of EdTech is to improve educational outcomes, we should expect venture capital firms to evaluate the evidence before committing to scale new educational products. Surely it is a safer bet to invest in a company that is building an evidence base than in one that can only promise a potential impact?

At WhatWorked, we are developing a new methodology to support teachers and school leaders to evaluate interventions using small-scale mini-randomised controlled trials. Because each intervention follows the same protocol, resources and assessments, we are able to aggregate the data into a cumulative meta-analysis. Initially developed as a PhD project, the methodology has now demonstrated proof of concept: we can support schools to run robust evaluations at minimal cost and on a rapid timescale.
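
To illustrate the aggregation step, here is a minimal sketch of how effect sizes from individual mini-trials might be pooled as results arrive, assuming a fixed-effect, inverse-variance model; the function, the effect sizes and the variances are illustrative, not our production pipeline.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect, inverse-variance pooling of per-trial effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Cumulative meta-analysis: re-pool the evidence base after each new trial.
effects = [0.45, 0.62, 0.38]       # illustrative Hedges' g values from three mini-trials
variances = [0.050, 0.065, 0.040]  # illustrative sampling variances
for k in range(1, len(effects) + 1):
    g, se = pooled_effect(effects[:k], variances[:k])
    print(f"After trial {k}: pooled g = {g:.2f} "
          f"(95% CI {g - 1.96 * se:.2f} to {g + 1.96 * se:.2f})")
```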

In the summer term, we piloted a short peer tutoring programme in which Year 5 students tutored Year 3 pupils in multiplication and division in mathematics. The research evidence highlights peer tutoring as a high-impact intervention, yet very few schools run peer tutoring programmes. We allowed teachers to choose from three evaluation strategies: a teacher observation (1-star rating), a pre-/post-test single-group design (2-star rating) or a mini-randomised controlled trial (3-star rating), with random allocation along the lines sketched below. The programme provided teachers with step-by-step guidance through micro-lectures to support the planning, implementation and evaluation of the programme, and we analysed the data for the schools.
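
For the 3-star option, pupils are allocated to groups at random. A minimal sketch of one simple way a class list might be split, assuming simple randomisation with a fixed seed for reproducibility (the pupil names and group sizes are illustrative):

```python
import random

def randomise(pupils, seed=2024):
    """Randomly split a class list into intervention and control groups."""
    rng = random.Random(seed)  # fixed seed so the allocation can be audited and reproduced
    shuffled = pupils[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

pupils = [f"Pupil {i:02d}" for i in range(1, 21)]  # illustrative class of 20
tutored_group, control_group = randomise(pupils)
print("Tutored:", tutored_group)
print("Control:", control_group)
```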

The pre-/post-test single-group evaluations and, more importantly, the mini-randomised controlled trials (RCTs) returned large effect sizes (3), suggesting the intervention had a positive impact on learning. As more schools deliver and evaluate the intervention, we can develop a live, continuously updating evidence base for each programme.
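
As a rough illustration of how an effect size might be computed from a mini-RCT’s post-test scores, here is a sketch using Hedges’ g, a standardised mean difference with a small-sample correction that matters for trials of this size; the scores are invented for the example.

```python
import math
from statistics import mean, stdev

def hedges_g(treatment, control):
    """Hedges' g: standardised mean difference with a small-sample bias correction."""
    n_t, n_c = len(treatment), len(control)
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_t - 1) * stdev(treatment) ** 2 +
                           (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2))
    d = (mean(treatment) - mean(control)) / sd_pooled
    # Correction for upward bias in small samples
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))

# Invented post-test scores for a mini-RCT
tutored = [18, 22, 19, 25, 21, 23, 20, 24]
control = [16, 19, 17, 20, 18, 15, 19, 17]
print(f"Hedges' g = {hedges_g(tutored, control):.2f}")
```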

As we develop our understanding of the methodology behind small-scale RCTs, we can apply it to support EdTech companies and venture capital firms in conducting rapid, robust evaluations in a relatively short timescale and at a fraction of the cost of traditional large-scale RCTs. Rapid iterations can also support product development, helping to refine programmes so they maximise their impact.

So, as the tech giants move into developing and creating the ‘Metaverse’ over the next decade, we should ask two key questions before we invest our resources in these new types of interventions: “are these interventions effective?” and “what evidence do we have to decide if they are effective?”

References

(1) https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/accelerated-reader
(2) https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/accelerated-reader-effectiveness-trial

Footnote:
(3) We are aware of the methodological limitations involved in small-scale studies and explain to teachers that the effect sizes cannot be directly compared to those from large-scale RCTs. However, within the context of our evidence base, they help us identify which interventions show the most promise.