Why use SurveyMonkey for surveys when NeuroDevNet has REDCap? Create a REDCap survey in 10 easy steps

by Anneliese Poetz, KT Manager, NeuroDevNet

The KT Core provides evaluation services, mainly for KT events and KT products.  Did you know that REDCap can also be used for collecting evaluation/survey data?  As a NeuroDevNet researcher or trainee, you have free access to REDCap using your login and password.  Use REDCap to inform your research and/or for KT evaluation purposes.


What kind of surveys would you want to use REDCap for?

  • Data collection for research
  • Evaluation of your KT event (either in-person or webinar)
  • Evaluation of your KT product (summary, infographic, video, etc.)
  • Needs assessment for informing capacity building, research grant applications, etc.
  • Any survey you would normally create with SurveyMonkey, Zoho, or any other free survey tool

Example: I used REDCap recently to conduct a needs assessment for an upcoming workshop on evaluation of knowledge translation.  I used a mix of multiple choice and open-ended questions, and for one question I used the “slider”.  I wanted to know how attendees rated their ability to do KT evaluation prior to the workshop, so by choosing the “slider” type of question I gave respondents the ability to slide a horizontal bar between “0” on one end and “100” on the other, sort of like a visual analog scale.  The data from these responses are represented as a scatter plot in the data report. It’s a neat feature I have never seen in any other survey software.  I customized a report of the open-ended text-based answers, as well as a separate report that gave me bar charts or pie charts showing the data from the multiple choice responses.  This was very useful data for informing our approach to the workshop.


What are the benefits of using REDCap for surveys?

  • Has the same features as the paid version of SurveyMonkey
  • Data is housed in Canada by the NeuroDevNet Neuroinformatics Core, not in the United States as with SurveyMonkey
  • Can export your data into SPSS, SAS, R or MS Excel for analysis, or as a .csv file
  • Can customize reports for data export so you can see trends at a glance (e.g. a bar or pie chart for multiple choice answers, text entries for open-ended questions)
  • Can enter email addresses of respondents directly into the survey, so you can track survey responses by user
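That last export option means you can analyze your responses with whatever tool you already use. As a small illustration (not a REDCap feature, just ordinary Python with made-up file contents and column names), here is how you might tally one multiple choice variable from an exported .csv file:

```python
import csv
from collections import Counter
from io import StringIO

# A tiny stand-in for a REDCap .csv export; the column names
# (record_id, role, rating) are hypothetical.
export = StringIO(
    "record_id,role,rating\n"
    "1,Researcher,80\n"
    "2,Trainee,65\n"
    "3,Researcher,90\n"
)

rows = list(csv.DictReader(export))

# Tally a multiple choice variable, much like the counts behind
# REDCap's bar and pie chart reports.
counts = Counter(row["role"] for row in rows)
print(counts)  # Counter({'Researcher': 2, 'Trainee': 1})
```

In practice you would pass the path of your downloaded export file to `open()` instead of the `StringIO` stand-in.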

How can you get started using REDCap?

  1. Go to: https://neurodevnet.med.ualberta.ca/
  2. Enter your NeuroDevNet username and password
  3. Watch the tutorial videos, if needed, to learn how to use REDCap to develop your survey
  4. Enter your questions, and design the survey using multiple choice options or open-ended text-based answers
  5. Work with Neuroinformatics (Justin Leong, jleong [at] neurodevnet.ca) to finalize and launch your survey
  6. Retrieve your data (using the export/report options listed above)

Contact the KT Core if you’d like help drafting evaluation questions.  Here is a step-by-step example of how you can create your own survey in REDCap:

Step 1: Log into REDCap


Step 2: In this window, you will see surveys you have already created. Click on an existing “Project Title” in the list or, to create a new survey, click the “Create New Project” tab.


Step 3: When you click the “Create New Project” tab, you will see the project setup screen. Type in a title for your survey and choose an appropriate “purpose” for it (I usually choose “Quality Improvement”); I also usually leave the default choice to start the project from scratch.  Then scroll to the bottom of the page and click the “Create Project” button.


Step 4: Set up your survey (project).  Click “Enable” for “Use surveys in this project?”, then click it to open a page that will allow you to upload a logo/photo and edit a message to your survey participants (such as an ethics agreement if the survey is for research, an introduction/overview of the survey if you are doing a needs assessment or evaluation, information about how the data will be used, etc.). When you are done, click the “I’m done!” button.  To design your survey, click the “Online Designer” button, then “I’m done!”.


Step 5: Design your survey.  You will see an entry under “Instrument Name” called “My First Instrument”.  Click the “Edit” button to rename this instrument. If you don’t, and you instead create a new survey via “Add new instrument”, you will have problems later that you will have to contact Justin to sort out.  After you have edited the title to something more meaningful (it can be the same as your project title), click “Save”.


Step 6: Click on the “Instrument name” title you just edited.  Click “Add Field” to add your first question to your survey.


Step 7: Design your survey questions.  Choose the type of field you want: for example, text box, multiple choice, true/false, slider (visual analog scale), etc.  The slider is great if you want survey respondents to rate something based on how they felt about it; it can give you more precise information than a scale from 1 to 10. For example, you can ask “how would you rate your knowledge about XYZ after this workshop?” and give them a scale from 0 to 100. Text boxes are good for open-ended questions, and multiple choice can be either “choose one only” or “choose multiple”.


Step 8: Create your questions. Here is how you would create a multiple choice question, for example. Type the question you want to ask into “Field Label”. Type the choices for your multiple choice question into the “Choices” box, one choice per line; don’t type numbers, as REDCap will add them automatically.  Type in a meaningful name for the variable so that, when you view the report of your data, you’ll know which question it is.  I usually append “_text” to the variable names of open-ended questions so I can create one report for the text-based answers and another for the multiple choice answers.  Choose “yes” or “no” depending on whether you want the question to be mandatory for the user to answer (the default is “no”).  Accept the defaults for everything else and click “Save”.


Step 9: When you are done creating questions for your survey, click the “Project Setup” tab.  Click “I’m done!” for the “Design your data collection instruments” item.  Accept the default values and click “I’m done!” for the next 3 items “Enable optional modules and customizations”, “Set up project bookmarks” and “User Rights and Permissions”.  The next item asks you to test the instrument thoroughly before entering “Production” mode. Once you enter “Production” you will be limited in terms of what you can edit/change.

When you are done testing (and have clicked “I’m done!”), click the “Move project to production” button to get the survey link that you can send to participants by copying/pasting it into the body of an email.


Step 10: Get your survey link so you can send it in the body of an email to your participants (for anonymous survey data collection).  You may have to contact Justin for help with this.  If you want to try it on your own, look under the “Data Collection” heading on the left-hand side of the project page and click “Manage Survey Participants”.  You should then be able to copy/paste the link for your survey into an email, a tweet, or another social media post.  Note: you must have enabled the survey (Step 4) in order to get its link.


This blog post is not exhaustive in terms of REDCap’s survey functionality.  You can also create custom reports so that, as people fill out your survey, you can look at the data in whatever way is easiest for you. For example, I usually create two reports: one for the text-based/open-ended answers and one for the multiple choice answers, which are usually displayed as bar or pie charts.  It’s up to you.
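Because of the “_text” naming convention described in Step 8, that two-way split into reports is also easy to reproduce outside REDCap after export. A minimal sketch in plain Python (the variable names here are hypothetical, not from a real project):

```python
# One exported record, keyed by REDCap variable name.
# Appending "_text" to open-ended variables lets us separate them.
record = {
    "record_id": "1",
    "workshop_rating": "75",             # slider / multiple choice answer
    "feedback_text": "Great workshop!",  # open-ended answer
}

# Open-ended answers: everything ending in "_text".
text_report = {k: v for k, v in record.items() if k.endswith("_text")}

# Everything else (minus the record id) goes to the chartable report.
choice_report = {k: v for k, v in record.items()
                 if not k.endswith("_text") and k != "record_id"}

print(text_report)    # {'feedback_text': 'Great workshop!'}
print(choice_report)  # {'workshop_rating': '75'}
```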

REDCap is a great tool for surveys.  You may have used it for research-based surveys, but perhaps not for conducting needs assessments before a workshop, course or presentation; for assessing your end-users’ needs before designing your KT products (summaries, infographics, videos, etc.); for evaluating an event (an in-person workshop, stakeholder engagement event, conference, or webinar); or for evaluating your KT products (survey your end-users to find out how they have used your KT product, and what impact it may have had on practice or policy).

If you are a NeuroDevNet researcher or trainee and would like help with evaluation of your events and/or KT activities, contact the KT Core. If you need technical assistance setting up a REDCap survey, contact Justin Leong (jleong [at] neurodevnet.ca) from NeuroDevNet’s Neuroinformatics Core.

Sustainability and Knowledge Translation: sessions at Canadian Knowledge Mobilization Forum 2015

by Anneliese Poetz, KT Manager, NeuroDevNet

On Thursday May 14 and Friday May 15, 2015, the 2015 Canadian Knowledge Mobilization Forum took place at the Grande Bibliothèque in Montreal, QC. The Canadian Knowledge Mobilization Forum is the national conversation on KT/KMb practice and an excellent way not only to build our own skills but also to brand NeuroDevNet as a leading KT organization.  In fact, it was during this event that David Phipps (NeuroDevNet KT Lead) received the “2015 President’s Award for innovation” in “recognition of his extraordinary contribution to the field and practice of Knowledge Mobilization in Canada and internationally”.


Peter Levesque presents Knowledge Translation award to David Phipps at CKF 15
Photo credit: Pedro Ruiz

The overall theme of the conference was “Creativity as Practice: Mobilizing Diverse Ways of Thinking”. I both learned from other presenters, and shared my own knowledge.

In the workshop “Narratives, video and smartphones as KT tools for youth” (by Sean Muir) I learned that the ‘formula’ for maximizing effectiveness of KT with youth is: grab their attention with a shocking image or story, present your content/message, and then end with something positive. Sean used examples of videos and posters to illustrate this point. In the workshop on “Mobilizing your message through documentary video: research findings as cinematic narrative” (Callista Haggis et al.) the takeaways for creating KT videos were “done is better than perfect”, “show don’t tell” and “think about what you want your target audience to think, feel, do”. In this case, the documentary was both to present research findings in an alternative format, as well as to inspire discussion about the issues presented in the video toward possible infrastructure changes to accommodate the needs of an aging population.

NeuroDevNet’s KT Core Lead, David Phipps, participated in leading 2 sessions. One session was with Purnima Sundar (Ontario Centre of Excellence for Child and Youth Mental Health) and Renee Leduc (NCE Secretariat).

The audience gained insight into the three common reasons why research grant applications fail: 1) lack of meaningful end-user engagement, 2) an unclear pathway to impact, and 3) poor evaluation of KM (Knowledge Mobilization) and of impact. The NCE Secretariat provided tips on how to prepare a successful research funding application, and held an interactive session asking the audience for ideas about what the NCE Secretariat could do to help applicants be more successful. Ideas included: successful applicants mentoring new applicants, creating how-to videos to accompany written grant application instructions, and providing examples.

David moderated the session on “the paths of sustainability for KMb” in which I was one of the 4 presenters. I presented on the KT Core’s evaluation framework, indicators, and 3 factors relating to sustainability: relevance (how does what we’re doing fit with our priorities), leadership (who is responsible for ensuring outcomes are met), and financial (can cost-effective strategies be used).   The presentations were 10 minutes each. When the presentations were over, each presenter took their discussion question to a corner of the room and invited attendees to join their group (depending on which question most interested them) and discuss it further in terms of their own context.

The questions were:

– How are people attempting to influence sustainability across diverse settings with the use of tools?
– How can we sustain KT implementation through strategic planning?
– How can team capacity and culture be shaped over time to best meet the needs of knowledge users?

And my question was:

– What factors should be considered with respect to sustainability?

I had about 12 people in my breakout discussion group. Although I had a discussion question prepared, I received several questions about what NeuroDevNet’s KT Core does in terms of evaluation and also about database design and development. After the breakout discussions we returned to the large group and each presenter did a ‘report back’ about what their group discussed.

“Anneliese provided a great overview of the process she developed to measure the relevance and impact of knowledge translation products. Her experience was very relevant as our organization is currently exploring different methods of evaluating our work. We look forward to learning more about Anneliese’s indicators and database.”
– Sheena Gereghty, Canadian Centre on Substance Abuse

If you are a NeuroDevNet researcher or trainee and would like help with KT videos, advice on event evaluations and/or evaluation of your other KT activities and products, contact the KT Core to find out how we can help.

Research partners, research users and research impact

By: David Phipps, KT Lead, NeuroDevNet

“If you want your research to have an influence on early childhood literacy practice you’d better not be partnering with the fire department”

David Phipps leads discussion during workshop for research administrators in the UK

On April 15 I led a workshop for the UK Association of Research Managers and Administrators (ARMA). This workshop was for research administrators (university staff managing research applications, among other things) who were implementing the Research Excellence Framework (REF). REF 2014 was a research assessment exercise that assessed both research excellence and the impacts of research. For the REF, impact was defined as:

“an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia”

– (see page 26, REF Assessment Framework and Guidance on Submissions)

The REF officers and other research administrators interested in research impact gathered for a one-day ARMA workshop to look beyond REF 2014. This included looking towards REF 2020, as well as beyond the narrowly construed REF frameworks, to university research expertise (faculty and graduate students) that is engaged beyond the academy.

I used Melanie Barwick’s KT Planning Guide (click the link and enter your e-mail address to get access to the tool) as a tool to help the UK impact officers look beyond REF reporting on past impacts and start to create the conditions to enable future impacts. This planning guide asks researchers to consider 13 elements of a KT framework. Working through those 13 elements provides the raw material to then craft the KT strategy.

Melanie Barwick's KT Planning Tool

The KT planning guide (elements 1-3) asks the researcher to consider the types and roles of partners in the research. Partners are the individuals/organizations who are along for the ride. They are co-producers of research. They help disseminate research results. They co-supervise students. They provide cash and in-kind (space, data, populations, equipment) resources to the research project.

The KT planning guide also asks the researcher to consider types of research users (element 5). These are individuals/organizations that take up the research evidence and use it to inform decisions about public policy, professional practice and social services. The NCE Secretariat calls them “receptors” or “knowledge users (KUs)”. Both partners and receptors/users are critically important to the research-to-impact process. The Co-Produced Pathway to Impact outlines the path from research to impact on the lives of children with neurodevelopmental disorders and their families. Partners collaborate throughout, but receptors only become involved after dissemination.

Phipps' Co-Produced Pathway to Impact, the evaluation framework adopted by NeuroDevNet NCE

Research partners will likely be research users but research users are not always research partners.

In the ARMA impact workshop, one Impact Officer was convinced that research partners and research users were the same. After I explained the difference she remained unconvinced. That’s when I said, “If you want your research to have an influence on early childhood literacy practice you’d better not be partnering with the fire department”. Research users need to be aligned with research partners because one informs and/or has access to the other.

For NeuroDevNet’s Social ABCs intervention, led by Dr. Jessica Brian from Holland Bloorview as part of the Autism Discovery Program, the research partner is Humber College, which has two full-time community-based childcare settings. Humber College’s practitioners-in-training will help develop and evaluate the intervention. The knowledge users will be early childhood centres and day care centres across Canada, who will put the research evidence into practice by using it to support early childhood learning. The KT Core will work with Dr. Brian and her partners to help identify these receptors/KUs and broker collaborations so that the Social ABCs will be implemented and evaluated beyond the research project setting.

If you would like the KT Core to help you find partners and receptors/users to help translate your research into early diagnosis, validated interventions and supports throughout the life span, please contact us.

What is “Impact” and how do you measure it?

by Anneliese Poetz, KT Manager, NeuroDevNet

For NeuroDevNet, impacts of research and training are achieved when children with neurodevelopmental disorders:
• Are diagnosed sooner
• Receive validated interventions as soon as possible
• And their families are supported through the life span

Related to these, impact is achieved when we make a difference – changes to existing policies or the implementation of new ones, or changes in the way caregivers and/or health practitioners approach their work with children and families. Impact is also helping improve the quality of life for children and families in unexpected ways. In the field of Knowledge Translation (KT), there is still ambiguity about how to measure and report on ‘impacts’ of KT. NeuroDevNet frames its Knowledge & Technology Exchange and Exploitation (KTEE) activities using Phipps’ Co-Produced Pathway to Impact evaluation framework which encompasses the Network’s KT activities as well. If impact is what we are trying to achieve, then KT is one of the means to help us achieve it.

Phipps' Co-Produced Pathway to Impact Evaluation Framework
When thinking about KT in terms of evaluation and reporting on KT activities, several quantitative measures easily come to mind: # of peer-reviewed publications, # of citations of one’s research publications, # of conference presentations, # of KT products created, etc. However, these measures do not go far enough – notice that they are all indicators in the ‘dissemination’ phase of the CPPI. Typically, this is where KT activities ‘stop’ – it is the point at which researchers move on to the next research project. But stopping at the “dissemination” stage (otherwise referred to as end-of-grant KT) doesn’t help you measure the impact of your research.

In order to find out whether your research has been considered useful (in practice, or policy, or otherwise) you have to go and ask the people 1) who you engaged in your research process (integrated Knowledge Translation), and 2) who you imagined would find your research useful once it was completed even if they did not directly participate in informing your research questions or process. Yes, qualitative interviews!

The KT Core conducts qualitative interviews with its researchers, trainees and most importantly its collaborators and partners. There is a lot of good work going on in the Network, and these interviews are for the purpose of discovering stories about how NeuroDevNet’s research and training have made a difference. Some of them might not have otherwise been discovered and/or reported on. The first interview is always with the researcher or trainee. Then, we ask them who their collaborators/partners were, and whether they would be willing to broker an invitation for an interview so we can ask questions about the impact of NeuroDevNet’s work from their perspective. How have we changed things for them in their organization? Their practice? For the children and families they serve?

An example of a story we discovered came through one of our trainees, Angelina Paolozza, in the FASD program of research. Angelina was invited to present at Adopt Ontario after someone from that organization saw her present her research at a local hospital. Angelina was able to adapt her presentation style to be compatible with an audience of prospective parents. After her presentation (on both occasions she was invited), the audience had the same response: many parents said that, now that they understood FASD after hearing her describe her research, they would revisit the files they had reviewed on children with FASD. Talk about impact – a child in a stable home has a much improved life trajectory and quality of life. The basic underpinning of any effective KT activity is relationships, and this impact was achieved through the relationship built between NeuroDevNet and Adopt Ontario.

Getting these stories is not just useful for reporting purposes, but it is also valuable for us as a Network to learn what works and what needs more attention/improvement in terms of our collective KT activities. By learning how we can best achieve impact, we can maximize the chances that we can repeat and scale our efforts.

If you are a NeuroDevNet researcher, trainee or collaborator/partner and you have a success story you would like to share, please contact the KT Core and we can help draft it into a formal ‘success story’ to be placed on the NeuroDevNet website as part of a series.

Bringing NCEs together to share KT Best Practices

by Anneliese Poetz, KT Manager, NeuroDevNet

During plenary: David Phipps, NeuroDevNet KT Core Lead, commenting on one of the presentations

During the week of January 26-29, 2015, MEOPAR NCE hosted a symposium in Halifax, Nova Scotia, for all NCEs to gather and share their “best practices” for KT within their networks. There were presentations in the morning, and the afternoons were allocated to 3-hour workshops on various topics.


From one of the workshops: Different ways to convey the same message about coastal erosion

I learned something important from one of the workshops I attended: providing the same message in different formats is key to people understanding and remembering it (which is the first step toward being able to apply the message in practice/policy). One format might be a photo that illustrates what might happen in a certain situation; another way to convey the same message could be an interactive display, either an online tool or a hands-on model that can be physically manipulated to see what happens in different scenarios; yet another option is to hold a community event and encourage broad participation.

There were approximately 100 attendees, including representatives from NCEs at different stages of maturity. GRAND NCE had just finished its first 5 years and provided information about an open-source tool they created called the “forum”. It lets project leads do collaborative reporting with their trainees, upload their presentations and publications, and export citations directly to their common CV. Mike Smit from GRAND said they wished they’d had this at the beginning; however, it took them several years to develop – it is open source and available for any NCE (especially new ones!) to use. TREKK described their quick-reference sheets for ER physicians working in a ‘regular’ ER (not one specifically for pediatric patients) who need reliable, evidence-informed and quick information about how to treat the most common ailments children are brought to the ER for. These evidence-informed tools for practitioners go through a rigorous process before they are finalized. New NCEs such as Glyconet, SERENE-RISC and CellCan commented that this event was a good opportunity to learn from more experienced NCEs about KT practices and management systems.

NeuroDevNet’s KT Core (David Phipps and I) co-presented with the NCE Secretariat (Renee Leduc). Renee presented on progress reporting and KTEE expectations from the perspective of the NCE Secretariat.

She also led an exercise with participants that helped them link their Network’s goals with outputs and outcomes.

David and I presented on the Co-Produced Pathway to Impact KTEE evaluation framework, indicators for measuring KT services and impact, and the database system created over the past 16 months for tracking data on our suite of indicators.

We also provided a hands-on exercise for participants that acted as a “part 2” to Renee’s exercise: after setting goals, outputs and outcomes comes the need to create indicators, so this second handout was a worksheet that helps to fully define them.

‘your presentation was the most valuable of all the sessions…it was your session alone that made the conference worthwhile attending’
– CellCan NCE

Booths set up in main area

This event provided a great opportunity to network and get to know other NCEs in the NCE Program. Part of networking included the opportunity to set up a booth at no cost. Across from the NeuroDevNet and ResearchImpact booths was the CYCC NCE booth. I tweeted about, and picked up copies of, the checklists they produced on involving children and youth in research, having an impact on policy, and other topics that could be useful to NeuroDevNet’s work as we approach Cycle II. Several attendees found the materials at the NeuroDevNet (and ResearchImpact) booth(s) interesting, particularly the ResearchSnapshots and the brochures explaining our services. Many NCEs expressed interest in emulating NeuroDevNet’s KT Core model, including the CPPI framework and associated services, as well as our staffing model of a KT Lead, KT Manager and KT Coordinator.

The KT Core live-tweeted from the event via @anneliesepoetz and @neurodevnetKT, and several of these tweets were retweeted by @neurodevnet and @MEOPAR_NCE.

If you are a NeuroDevNet researcher or trainee, or if you represent one of Canada’s NCEs and would like to know more about NeuroDevNet’s KT Core services please visit our website and/or contact the KT Core.