In vying for attention and recognition in a competitive EdTech landscape, organizations claim to be “evidence-based”. Evidence has become shorthand for everything to do with research, impact, or outcomes. How can EdTech organizations approach evidence as a mindset rather than a buzzword?
As a 2022 LEAP Challenge judge and a reviewer for Solve's 2023 Global Challenges, I have seen various evidence statements in pitches and applications. Often, organizations feel pressured to present solutions in a way that maximizes appeal to funders. However, big evidence claims can quickly backfire. The EdTech evidence field has developed considerably since the Covid-19 pandemic, with both funders and EdTech users demanding proof that a solution “works”. Navigating these evolving expectations can feel overwhelming, especially given the many competing definitions of evidence in circulation.
What is evidence in the context of EdTech?
From a funder’s perspective, evidence is independent scientific proof that a solution works. While a teacher or a child provides useful indicators of evidence (in the form of reviews or feedback from user testing), researchers can provide scientific evidence of whether the solution works according to established parameters. These parameters vary depending on the national context.
For example, India's Tulna standards of evidence are research-based standards for the quality design of EdTech products in the Indian context. In the United States, the Office of Educational Technology has set out levels of evidence in the form of ESSA tiers, where Tier 4 requires EdTech organizations to demonstrate a rationale for their solution, and Tier 1, the highest level, requires evidence from a randomized controlled trial (RCT).
In addition to the various kinds of evidence, it is important to be aware of the strength of evidence. In research, this is typically represented as a hierarchy: pedagogical evaluations in the form of teachers’ reviews are ranked lower than validation studies that experimentally test a solution. Here, educational researchers’ views on evidence often diverge, with some positioning RCTs as the strongest form of evidence while others reject any hierarchy and advocate choosing an evaluation method based on the specific research question.
The figure below summarizes the most common types of evidence and the extent to which they are represented in the current EdTech landscape (most solutions have evidence from classrooms rather than completed validation studies).
An organization’s evidence needs vary depending not only on the content and maturity of their solutions but also on the market they operate in. Nevertheless, there are some common core principles for gathering and documenting evidence when communicating it to funders and users. Here are my top three.
Specificity over buzzwords
The word “evidence” has been used rather loosely in the industry, so it is worth specifying how your organization defines it. Do not simply write “our solution is evidence-based”; be specific about what you mean by it. For example, did you conduct your own research or an independent evaluation of the product? Did you gather data from users alone, or were non-users also involved to understand your overall impact on the target community? What is your plan for gathering evidence as you scale your solution?
Gathering evidence and acting on the results is an ongoing process—evaluators will not expect a comprehensive validation of a start-up but are likely to expect more from a Series A company. So, don’t worry about presenting effect sizes if you have just launched your product—make sure you present evidence of results proportionate to your company’s maturity. The notion of an evidence portfolio, where an organization gathers various types of studies over time, is a good way to showcase the ongoing nature of evidence-gathering.
Efficacy versus effectiveness
In layman's terms, effectiveness is often used to mean the quality of a research study, and you may often hear the term efficacy in the context of quantitative research. The two terms tend to be used interchangeably by EdTech entrepreneurs, but they actually designate two different types of evidence. Efficacy tells us how your solution performs under ideal and controlled circumstances. Effectiveness tells us how your solution works under 'real-world' conditions. The two are often distinguished by how a study is run: effectiveness research is conducted in partnership with users in authentic settings, while efficacy research randomly assigns users to active and control conditions. Judges with a research background will expect both types of evidence in your evidence plans and will be interested in knowing how the studies were conducted.
Evidence as the North Star
Funders will want to know whether you work hand-in-hand with educators and community users and how you gained access to users’ insights and feedback. A commitment to evidence involves transparent documentation of both what works and what doesn’t, with both positive and negative user feedback and results. Transparent evidence builds credibility and strengthens your case for winning a challenge or VC investment.
Ultimately, embracing evidence as a guiding principle requires a fundamental shift in mindset. It means recognizing that EdTech must be grounded in research, data, and rigorous evaluation. Rather than relying solely on intuition or assumptions, we must commit ourselves to a thorough examination of the evidence, allowing it to guide our decision-making and product development processes. By choosing your words carefully around evidence, you demonstrate a genuine dedication to continuous improvement and the betterment of education as a whole.
Are you interested in learning more about bridging research and practice to promote evidence-based education solutions? Learn more about LEAP, an initiative by the Jacobs Foundation and MIT Solve.