Moderating the panel on July 1, Jaymes Hanna, senior program officer of market dynamics for the Gates Foundation, said that a lot of data — on ed-tech products, on pedagogical approaches, on students — has been made available to decision-makers. Yet what data gets used, and when, is inconsistent and unclear to those providing it, and even when school decision-makers use data to select tools, the implementations that follow often ignore the components or factors that correlate with the promised impact.
“We all know there’s a lot of evidence, there’s a lot of data, there’s been a lot of money spent on research for at least a decade,” he said. “What we’re trying to figure out is, how do we get that to inform better decisions from basically all the parties involved?”
EVERYBODY WANTS THE SAME THING — OR DO THEY?
Doug Lynch, a professor and senior fellow at the University of Southern California, offered a note of optimism that the education sector is moving past the point where the various parties involved — teachers, superintendents, vendors, researchers — think the others are evil. In the broadest sense, he said, everybody cares about effective, usable tools that scale. But on a granular level, they care about slightly different things: Teachers worry about classroom implementation and effects on students; superintendents worry about all of that, plus accountability, legal issues and vendor contracts; vendors worry about their business; and researchers focus on the quality of their output.
“We’re constrained by three horrible things: money, time and expertise,” he said. “[And] how you ask a question and how you analyze it depends on what you want to know … so there’s a lot of miscommunication, misinterpretation.”
Jeff Livingston, CEO of consulting firm EdSolutions, quibbled slightly with the idea that the aforementioned parties all want the same thing. He said district leaders want evidence that a tool is effective, aligns with standards and is interoperable with the district’s existing learning management system, while teachers want something that engages students and doesn’t require a lot of extra labor and expertise on their part.
Representing ISTE+ASCD, Managing Director of R&D Tal Havivi concurred, pointing out that many district decision-makers don’t know how to represent teachers in discussions and don’t always know what evidence teachers need.
“They understand what the teacher is saying, but they probably can’t regurgitate it the same way,” he said.
INCENTIVES AND TRANSLATION
Hanna said the issues around effective ed-tech research boil down to two factors: the alignment of incentives, or who needs what data for what; and translation, or how the data is transmitted to decision-makers.
On the question of what it would mean for ed-tech research to be truly designed with district leaders in mind, Livingston said districts really want to know what’s working for districts and contexts like theirs. What are similar districts finding helpful for the kids they serve?
“A system that is more attuned to that user would be answering that question constantly, with enough evidence for the user, not enough evidence for the tenure committee,” he said, though he added the caveat that this approach would have to translate to sales in order for ed-tech companies to care about it.
“It’s going to be harder for product developers to invest in research if customers don’t actually buy because of that research,” he said.
Representing the ed-tech software company Edmentum, Senior VP of Research Policy and Impact Michelle Barrett offered an example of how ed-tech companies could share such research. She said Edmentum automates a lot of valuable research into analytics as part of the regular impact reports it provides to district leaders, showing them the relationship between the skills students are supposed to be learning and the outcomes measured by summative assessment scores.
Asked what advice she would give to ed-tech startups today, Barrett — speaking for a company that started in 1960 at the University of Illinois under a different name — said they should think about what outcomes they’re designing for, avoid setting the plan in stone and invest in data infrastructure.
“It becomes a working model for you to iterate: You do a little bit of research, you see which pieces of it seem to be true, you see what pieces of it seem not to be true, and you allow yourself the space to iterate, and then … be sure that you’re starting to craft the data infrastructure that you’re going to need in order to be able to rapidly iterate,” she said. “A lot of researchers will tell you like 80 percent of their work is cleaning data and 20 percent is actually conducting the analysis, so do the pieces as a startup that help you get to the place that you have clean, available data for your researchers, so that they can spend their time actually conducting the studies and not going through a bunch of really dirty data.”
Lynch added that successful startups tend to come from people who were students of the space their technology is targeting.
INFORMING THE ED-TECH SELECTION PROCESS
Hanna asked the panelists what they would tell district leaders at the conference about selecting ed-tech tools, and each made a different case.
Havivi said evaluating ed tech is difficult to do well or be certain about, and it’s an iterative process. He recommended the five quality indicators emphasized by a consortium of ed-tech leaders in 2024: is it safe, inclusive, usable, evidence-based and interoperable? But above all, he said, one consideration is necessary without exception.
“A product has to be usable by a teacher and a student. If it’s not, nothing else matters,” he said. “So I’ve pushed that in district leaders’ decision-making, to put a little more weight into that, because if that isn’t met, everything else is for naught.”
Lynch said he worries about how to coach people to ask the right questions, to which Livingston offered an answer: Teachers trust other teachers, he said, but most of them are not equipped to parse the language of research. Therefore, researchers and ed-tech companies must create ways for the few teachers who really do want to look under the hood to dive in, so they can share what they find with their colleagues, who will trust them before they’ll trust a salesperson or superintendent.
Going back to the question of data overload, Havivi recalled the trend of giving teachers a ton of data dashboards and thinking that solved a problem.
“It was the same logic model as, ‘All we need is for teachers to add another skill, which is data analysis, and if they understand that, then they can act on that data,’” he said. “When in truth, they were just like, ‘I just need to know what to do.’”
AI IS HERE. NOW WHAT?
When the conversation inevitably turned to artificial intelligence — but not until 50 minutes in, as Havivi proudly noted, given its ubiquity at the conference — he expressed concern that while the products are increasingly powerful, pedagogical approaches aren’t growing with them.
“The pedagogical impressiveness of tools, I think, generally is remaining constant and in some cases is actually declining, in that a lot of tools are solidifying not just what is a pedagogically sound approach, but whatever pedagogical approach the person wants to do. So if I have a subpar pedagogical approach, I can do that frictionlessly,” he said. “Somehow, we need to bridge the gap between the pedagogical soundness … and the technical ability, and knowing that the rate of change is only going to increase … is concerning. And I don’t think we really have an answer for that yet.”
Livingston offered one, a familiar refrain in this and other discussions around ed tech: Educators must insist on being part of the conversation.