Aaand… following right up on the KT planning webinar with Melanie Barwick, KTDRR recently produced an ambitious, information-packed three-day online seminar on measuring and evaluating KT.
Whether an online conference with voiceover and PowerPoint slides spread over three days was the best way to tackle a subject this densely packed is a topic for another conversation. Perseverance was rewarded with access to a broad spectrum of worldwide expertise, and some cool tools which demonstrated that it is possible to evaluate the impact of your knowledge translation efforts.
Some of the cool tools were:
CIHR’s KT Planning Report – A comprehensive workbook prepared in 2012 under the supervision of Ian Graham, PhD.
RE-AIM – A framework for evaluating health interventions with more than 14 years of documentation.
Payback Framework – A data collection and cross-case analysis tool originally formulated by Stephen Hanney, PhD at Brunel University.
Go and browse the archived materials, and let me know in the comments what you found particularly useful.
My neighbors at KTDRR have excellent timing! Shortly after I blogged about developing KT curricula, and got a “not so in Canada” comment from Melanie Barwick, they posted a new webinar featuring Melanie’s KT planning course and her pioneering KT Planning Template.
Thanks to folks like Melanie, KTDRR, and PHSSR, among others, we’re finally having increasingly productive conversations about the importance of KT here in the U.S.
Melanie’s planning template is pretty comprehensive, but after going through her recent webinar, I have some questions about its efficacy for public health in the U.S. For instance:
What’s the true value of research synthesis in public health knowledge translation? In her template, Melanie asserts that all published research deserves a KT effort. Since so much of public health research has public policy implications, does a standalone research result really cut it?
How do we overcome the acute obstacles of budget, time, and energy when there’s virtually no incentive to do KT here in the U.S.? Public health researchers here are still incentivized by the traditional “publish or perish” model, with nothing comparable to the Canadian Institutes of Health Research driving change.
Is merely “generating awareness” ever enough? During the webinar, Melanie leads the discussion about goal-setting for KT with “generating awareness” and “sharing knowledge.” Is this ever enough? Social marketers here in the U.S. will tell you no.
Conversations about the importance of knowledge translation in public health will have to include the broader needs of public health researchers, policy-makers, and the public. Where do we start? What do you think?
Airline pilots do it. Engineers do it. Surgeons are known for resisting it.
“It” is the checklist. Could the use of checklists move knowledge translation toward measurable, reliable results?
The folks at KTDRR recently published a new webinar, “Assessing the Quality and Applicability of Systematic Reviews (AQASR),” which focused on a single facet of this issue.
Public health researchers who are interested in the effective knowledge translation of their research often rely on systematic reviews to bolster their own results. KTDRR’s webinar points out the wide variation in quality and reliability amongst systematic reviews, creating very much a caveat emptor situation for researchers who would like to use them. Their AQASR instrument is a comprehensive checklist, and that very comprehensiveness may be a strike against it: it takes training, time, and energy to use effectively.
There is much discussion amongst KT practitioners about the difficulty of reliably measuring the results of KT efforts. The only tool I’m aware of right now that attempts to tackle this issue is Melanie Barwick’s KT planning template, which you can see here.
Melanie’s checklist only exhorts the user to consider the variety of measures available, and how they might be applied to the particular KT effort. What would a quality assessment checklist for KT results look like? Are there any already available? Tell me in the comments if you know of any other tools that can help us move toward measurable, reliable knowledge translation results.
I sat in last week on a webinar sponsored by the Center on Knowledge Translation for Disability and Rehabilitation Research (KTDRR).
KTDRR is the latest iteration of knowledge translation research sponsored by the National Institute on Disability and Rehabilitation Research (NIDRR), part of the Department of Education.
Their webinar focused on using plain language in research summaries. The presenter, Merete Konnerup, admitted that plain language summaries are a small cog in a large, unwieldy knowledge translation machine.
Working with the Campbell Collaboration in Denmark, Konnerup is particularly focused on the use of systematic reviews as a resource and tool for policy makers at all levels. She spent a good amount of time during the webinar talking about the theoretical underpinnings of the National Research Council’s (NRC) 2012 policy paper, “Using Science as Evidence in Public Policy.”
She emphasized NRC’s focus on studying the mechanics of policy argumentation and the psychology of decision-making. Despite this emphasis, the underlying assumption of both her talk and the ensuing questions and follow-up discussion was that good research yielding good, supportable conclusions was sufficient unto itself for policy-making. It’s a start, but it’s not enough by itself.
I’ve blogged several times recently about this issue, and how the real world of self-interested politicians and politically motivated institutions can impede our progress if we don’t take politics and self-interest into account.
This is an aspect of knowledge translation that needs to be added to any discussion of KT and policy-making. What do you think?