The vital role of measuring impact in media literacy initiatives

This post is based on EDMO Advisory Board member Professor Sonia Livingstone’s presentation at EDMO’s training session on ‘Evaluating the impact of media literacy initiatives’, where she discussed the importance and complexity of measuring impact in media literacy work.

Let me begin with a definition. Of all the definitions of media literacy, I still work a lot with the classic one: the ability to access, analyse, evaluate and create communications in any form of one’s choosing. This emphasises a range of crucial processes that come to the fore at different times and for different reasons. For example:

  • in the disinformation age, what matters is competence in evaluation;
  • in much of the global South, it’s still the knowledge, resources and skills required to access communications;
  • for children, the ability to create communications is crucial if they are to be active participants in society;
  • for all of us, what matters is the ability to analyse the very nature of the digital ecology, with its data ecosystem and algorithmically driven forms of connection and visibility.

This definition works for both individuals and groups and at the level of society – for ultimately, we seek not only to equip individuals but also to build a media literate society. That means not just testing impacts on individuals but also asking about the media literacy of our communities – our libraries and schools, for instance – and our democracy, including our politicians and policymakers.

I’m also happy with a more pragmatic definition of media literacy: it’s whatever we need to know to participate as agents and citizens in a digital society. That highlights that media literacy is a moving target, and that it depends on the nature of the digital infrastructure of our society, which is increasingly global, commercial, and weaponised. Additionally, it emphasises the importance of context: different groups need to know different things, depending on the specifics of their lives, so media literacy can also mean different things.

The challenges of measuring impact

I started thinking about measuring the impact of media literacy first as an academic researcher who began work in a department of experimental psychology and so knows how hard it is to establish the impact of anything. As a researcher, I worry about several things:

  • The lack of baseline measures against which to measure improvement.
  • The multiple and interconnecting causal factors that may lead to observable changes – which are responsible for what? Have we measured them?
  • The vagueness regarding the goals of media literacy initiatives – E-safety? Escaping the filter bubble? Not falling for disinformation? More creative self-expression? Getting a digital job that hasn’t been invented yet?
  • A lack of specificity about the target audience – Everyone? The usual suspects? The hard-to-reach?
  • A lack of a clear and accountable language with which to explain what an intervention does, in operational terms – what was it about the educational materials used that makes the difference? Can we explain it in terms that others can apply to other circumstances? Was it just a Hawthorne effect?
  • Weak outcome measures – for example, self-reporting of ‘liking’ or ‘learning something’, with measures taken five minutes later, rarely five months later.
  • What is the scale of the outcome? An improvement for 5% of participants or 50%?
  • Experimental design – is the effect measured against a baseline for the target group (a before-and-after study) or a control group (matched to the experimental group but not in receipt of the intervention)? The sketch below illustrates the difference.
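
To make the last two points concrete, here is a minimal sketch in Python using invented numbers: it contrasts a naive before-and-after estimate with a control-group (difference-in-differences) estimate, and reports the scale of the outcome as the share of participants who improved meaningfully. The score scale, group sizes, average gains and the ten-point threshold are illustrative assumptions only, not figures from any real evaluation.

# Illustrative sketch with invented data: a before-and-after evaluation versus
# a control-group (difference-in-differences) design for one media literacy initiative.
import random
import statistics

random.seed(42)  # reproducible invented data

def simulate_scores(n, pre_mean, gain):
    """Return (pre, post) media literacy scores for n participants."""
    pre = [random.gauss(pre_mean, 10) for _ in range(n)]
    # Everyone drifts upward a little over time; only `gain` is intervention-specific.
    post = [score + random.gauss(5 + gain, 8) for score in pre]
    return pre, post

# Intervention group receives the initiative; the matched control group does not.
pre_i, post_i = simulate_scores(n=200, pre_mean=50, gain=6)
pre_c, post_c = simulate_scores(n=200, pre_mean=50, gain=0)

# Naive before-and-after estimate: attributes ALL observed change to the intervention.
before_after = statistics.mean(post_i) - statistics.mean(pre_i)

# Difference-in-differences: subtracts the change the control group shows anyway.
did = before_after - (statistics.mean(post_c) - statistics.mean(pre_c))

# Scale of the outcome: the share of participants improving by ten points or more.
improved = sum(1 for p, q in zip(pre_i, post_i) if q - p >= 10) / len(pre_i)

print(f"Before-and-after change:    {before_after:.1f} points")
print(f"Difference-in-differences:  {did:.1f} points")
print(f"Improved by 10+ points:     {improved:.0%}")

On these simulated scores, the before-and-after figure overstates the intervention’s effect because it also counts the improvement the control group shows anyway; the difference-in-differences figure isolates what the intervention itself added.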

The second reason I started thinking about the impact of media literacy comes from my experience as a participant in multistakeholder events concerned with improving media literacy. Over the years I have listened to confident presentations from well-resourced organisations claiming successful interventions in media literacy because:

  • They had attracted funding.
  • They claimed scale (citing large numbers reached, however modest a proportion of the population those numbers might represent).
  • They claimed the moral high ground – devoting the time of well-meaning volunteer employees doing their level best to counter the exploitative power of big tech (even if they were employed by, or partnered with, or funded by big tech) by taking colourful and expensively designed materials into public spaces to educate the public and make the world a better place.

These well-meaning interventions claimed successful impact, but they didn’t answer my questions. How do they know if the interventions worked? On what scale did they work? For whom? Compared with what? What made them work?

I began asking further questions. Is the methodology public? Are the data open? Has the study been independently reviewed? Most importantly, have the right evaluation questions been asked? Impact may rely on attendance and reach, but such metrics are just a means to an end. The end is to improve media literacy – in people’s real lives, after the intervention is over, in ways that have positive benefits. For this, outcome measures are vital.

Why we need to know what works

It’s vital to improve media literacy for the entire population at this time of rapid digital innovation and, more worryingly, profitable and even weaponised digital disruption. It’s urgent that we figure out how to combat disinformation through media literacy initiatives. We don’t have time to develop glossy materials, or to spend so much of the budget on design that there’s none left for evaluation, or to fuss about getting approvals for the right logos or exact colours, or to check that our resources reflect the organisation’s brand.

Rather, we need to know that our initiatives work:

  • We need to know which initiatives work best for which audiences, and how to target our resources efficiently.
  • We need to know what outcomes we are trying to achieve, so as to figure out whether we’ve succeeded.
  • We need to share what works with others, rather than keep our methodology proprietary.
  • And we need to share what didn’t work so that others don’t waste their resources making the same mistakes, even if that means we might be seen to have got things wrong.

From researching the impact of media literacy initiatives over the years, we know that it is vital to deploy effective measures and careful research design, to eliminate common mistakes, and to learn from the good practice and mistakes of others. It is also vital to target specific audiences, to define the intended outcomes clearly, and to make our methods and findings available to public scrutiny.

For example, for the European Commission-funded ySKILLS project, my colleagues and I recently conducted a systematic evidence review of recent peer-reviewed research on the outcomes for adolescents of gaining different dimensions of digital skills. This sizeable body of research reveals many findings, but I can summarise them as showing that efforts to teach adolescents just functional or technical skills could result in negative as well as positive outcomes. This chimes with what we know from the EU Kids Online project – you may think you are promoting a particular skill, but people may put it to uses you didn’t expect, like pursuing risky opportunities with their newfound skills, and potentially getting into new kinds of trouble online precisely because they are now more media literate. It is therefore important to measure the unintended consequences of your intervention as well.

More positively, the systematic review also found that if young people gain multiple dimensions of media literacy, especially critical and informational skills, along with communicative, functional and creative skills, they seem to gain a deeper knowledge that brings more positive outcomes and fewer negative ones.

Optimising for impact

I will end with some general conclusions about the best way to optimise impact. The tricky thing is that the answers vary according to your chosen approach, target audience, and desired outcome, but there are some generalisations we can make, based on past experience:

  1. If you deliver your intervention to a general audience, then those who are already advantaged (privileged, motivated, knowledgeable, interested) will benefit more. And those who are busy, distracted, anxious about other things, not to mention those who don’t speak your language, lack connectivity or can’t read your tiny fonts – they just won’t get the message as well. So, the overall effect may be to improve media literacy, but it could also exacerbate inequalities at the same time. Do target the audience who really needs your intervention.
  2. If you deliver your intervention to an audience you haven’t consulted, worked with, and listened to, then you run a serious risk of missing your mark, being misunderstood, being regarded as patronising or lacking in understanding, or focusing on the wrong priority. So, think about consulting with your audience, listening carefully to what they say, and co-designing your intervention with them. Partner with civil society and advocacy groups who have long experience of representing underserved groups, and involve them in your evaluation.
  3. People are good at learning stuff when well-meaning people tell them about it, but there’s a vast gap between knowledge and practice. In the heat of the moment, when they can’t be bothered, when no-one else is being sensible, when they fancy a thrill – yes, they’ll share disinformation or act foolishly, even though they may know better, even when you’ve told them better. So where are you seeking impact – on their knowledge or their actions? Hoping that it’s the latter, how will you measure it?
  4. Media literacy is a matter of education. It takes time, it involves progression in learning, it is multidimensional. It engages people’s faculties and changes who they are and how they relate to the world, digital and otherwise. As with learning print literacy – reading and writing, one’s ABC – media literacy cannot be learned sustainably through a one-off campaign, and it rarely delivers “quick wins”, for there is no silver bullet. So it is crucial to make a serious and costed plan to educate people about the digital environment in ways that respect their learning and meet their needs, and in ways that have a fair chance of proving sustainable and transferable to future circumstances.
  5. However, people can only learn what is learnable, what someone can reasonably teach in the available time. And what is learnable often depends not only on the recipients but also on the digital technology – if it is designed to be opaque, non-transparent, even deceptive, or if it is highly complex and constantly changing, or if the legal and technical experts themselves don’t really understand it, then it will be hard to provide an effective media literacy initiative or to establish its successful impact. In such cases, we might better devote our resources to changing the digital technology through regulation or design.

It is not an easy task: neither to improve media literacy nor to evaluate initiatives. But it is a task to be taken seriously, and I have high hopes of what can be achieved.

Sonia Livingstone