Planning and Executing Data Collection
Practical guidance for collecting high-quality data – from designing instruments to managing your data collection process.
Data collection is the stage where your dissertation shifts from theoretical planning to real-world engagement. You have defined your questions, reviewed the literature, designed your methodology, and received ethical approval. Now you need to go out and gather the evidence that will answer those questions. It is simultaneously one of the most exciting and most anxiety-producing phases of the dissertation journey.
This guide covers the practical dimensions of data collection – from finalizing your instruments through managing the messy realities of fieldwork. Whether you are distributing surveys, conducting interviews, observing classrooms, or pulling records from an existing database, the principles here will help you collect high-quality data efficiently and ethically.
Planning Your Approach
Before you collect your first data point, invest time in detailed planning. The more precisely you have thought through the logistics, the fewer surprises you will encounter in the field.
Creating a Data Collection Protocol
A data collection protocol is a step-by-step document that describes exactly what you will do, in what order, and how you will handle common situations. For a survey study, this might include the email invitation text, the number and timing of reminder emails, how you will track response rates, and when you will close the survey. For an interview study, it might include how you will schedule interviews, the script you will use to open each session, how you will handle technical problems with recording equipment, and how you will store recordings after each session.
Writing a protocol forces you to anticipate problems before they occur. It also ensures consistency if you are collecting data over weeks or months – your procedures on day one should be identical to your procedures on day sixty.
Aligning Instruments with Research Questions
Every data collection instrument should trace directly back to your research questions. For each question on a survey or each prompt in an interview guide, you should be able to identify which research question it addresses. If a question does not connect to any research question, consider whether it belongs in your study. Unnecessary questions burden your participants and complicate your analysis without adding value.
The Research Timeline Tool can help you map your data collection activities against your overall dissertation schedule, ensuring that your instrument development, pilot testing, and collection phases are allocated realistic timeframes.
Instrument Design
The quality of your data depends directly on the quality of your instruments. Whether you are building a survey, developing an interview protocol, or designing an observation rubric, careful instrument design is essential.
Surveys and Questionnaires
If you are using an established, validated instrument, your primary task is to secure permission to use it and to administer it exactly as designed. Modifying validated instruments can undermine their psychometric properties – if you need to make changes, document them and plan to report reliability statistics for the modified version.
If you are developing your own survey, the process is more involved. Start by defining exactly what constructs you are measuring. Write items that are clear, unambiguous, and free of double-barreled questions (questions that ask about two things at once). Use consistent response scales and avoid leading language. Have colleagues and subject matter experts review your items before pilot testing.
Consider the practical dimensions of survey administration. How long will the survey take to complete? (Anything over 20 minutes risks significant dropout.) Will it be administered online or on paper? If online, which platform will you use? Qualtrics, SurveyMonkey, and similar tools each have strengths and limitations. Verify that your chosen platform meets your IRB’s data security requirements.
Interviews
Interview protocols range from highly structured (every participant is asked the same questions in the same order) to unstructured (conversations guided by the participant’s responses). Most dissertation interviews use a semi-structured approach: a set of prepared questions supplemented by follow-up probes that allow you to explore unexpected themes.
Write your interview questions to be open-ended and non-leading. “Tell me about your experience with the mentoring program” invites a rich response. “Did you find the mentoring program helpful?” invites a yes or no. Prepare probes for each question – follow-up prompts like “Can you tell me more about that?” or “What was that experience like for you?” that encourage participants to elaborate.
Think carefully about the logistics of interviews. Where will they take place? In-person interviews offer richer nonverbal data but require a private, comfortable space. Virtual interviews (via Zoom or similar platforms) offer convenience and geographic flexibility but can feel less personal. Phone interviews are the least resource-intensive but sacrifice visual cues entirely.
Plan your recording setup. Use two recording devices whenever possible – technology fails at the worst moments, and losing an interview to a dead battery or a corrupted file is devastating. Test your equipment before every session.
Observation Protocols
If your study involves direct observation, develop a structured protocol that defines what you will observe, how you will record your observations, and how you will manage your role as an observer. Will you be a passive observer or a participant-observer? How will you take field notes – during the observation or immediately after? What specific behaviors, interactions, or events will you record?
Observation research requires particular attention to consistency. If you are observing multiple settings or multiple sessions, your protocol must ensure that you are capturing the same types of data each time.
Secondary and Archival Data
If you are using existing data – administrative records, publicly available datasets, historical documents – your “instrument design” phase involves defining your data extraction procedures. Which variables will you extract? How will you handle missing data or inconsistent coding in the original dataset? How will you verify the accuracy of your extraction?
Document every decision you make about the data. Future readers (including your committee) will want to understand exactly how you moved from the raw dataset to the variables in your analysis.
Pilot Testing
Pilot testing is not optional. It is the stage where you discover that your survey question about “institutional support” means something different to participants than it does to you, that your interview prompts produce one-word answers, or that your observation protocol cannot be completed in the time available.
What to Test
Test everything: the instruments themselves, the administration procedures, the technology, and the logistics. For surveys, recruit five to ten people similar to your target population to complete the survey and provide feedback. How long did it take? Were any questions confusing? Did the response options capture their views?
For interviews, conduct two or three practice interviews. Record them and review the recordings. Did your questions elicit the type of data you need? Were there awkward transitions? Did you talk too much or interrupt participants?
Incorporating Pilot Feedback
Revise your instruments based on pilot feedback. This may mean rewriting confusing questions, reordering sections, adjusting the length, or adding prompts you did not anticipate needing. If you make substantial changes, consider running a second pilot to verify the improvements.
If your pilot testing reveals the need for changes to your IRB-approved protocol, submit an amendment before proceeding. Do not skip this step – collecting data under an unapproved protocol is an ethical violation regardless of how minor the changes seem.
Recruitment
Recruitment is the stage where optimistic timelines meet stubborn reality. Almost every dissertation student discovers that recruiting participants is harder and slower than expected.
Building a Recruitment Strategy
Start recruitment as early as your IRB approval allows. Use multiple channels: email invitations, social media posts, flyers in relevant locations, announcements at meetings or classes, and personal networks. Each channel reaches a different subset of your target population, and diversifying your approach increases your chances of reaching your sample size.
Craft your recruitment materials carefully. Potential participants decide within seconds whether to engage with your invitation. Lead with what matters to them – the topic, the time commitment, any incentives – not with what matters to you (your degree requirements).
Managing Response Rates
For survey research, plan for a response rate lower than you hope. A 30 percent response rate is common for email-distributed surveys, and rates below 20 percent are not unusual. If you need 200 completed surveys and expect a 25 percent response rate, you need access to at least 800 potential participants.
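The arithmetic above generalizes to any target and expected rate, and it is worth rounding up rather than trusting a back-of-the-envelope figure. A minimal sketch (the function name is illustrative):

```python
import math

def invitations_needed(target_completes: int, expected_rate: float) -> int:
    """Minimum number of invitations to reach a target sample,
    rounding up so the estimate never falls short."""
    return math.ceil(target_completes / expected_rate)

# 200 completed surveys at an expected 25% response rate
print(invitations_needed(200, 0.25))  # → 800
```

Running the same calculation for a few plausible response rates (say 20, 25, and 30 percent) gives you a realistic range for how large your sampling frame needs to be.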
Send reminders. Research consistently shows that reminder emails significantly boost response rates. Two to three reminders, spaced about a week apart, are standard practice. After three reminders, additional contacts rarely produce meaningful returns and may irritate potential participants.
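If it helps to fix the reminder dates before launch, the weekly spacing described above can be sketched programmatically (the function and its defaults are illustrative, not a standard):

```python
from datetime import date, timedelta

def reminder_schedule(launch: date, n_reminders: int = 3,
                      spacing_days: int = 7) -> list[date]:
    """Dates for follow-up reminders, spaced about a week
    apart after the survey launch."""
    return [launch + timedelta(days=spacing_days * i)
            for i in range(1, n_reminders + 1)]

for d in reminder_schedule(date(2026, 9, 1)):
    print(d)  # 2026-09-08, 2026-09-15, 2026-09-22
```

Putting the dates in your calendar at launch removes one more decision from the middle of a busy collection period.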
Incentives
Incentives improve participation rates but must be appropriate. Gift cards in the range of five to twenty dollars are common for survey studies. For interviews requiring an hour or more, larger incentives (twenty-five to fifty dollars) are reasonable. Check your IRB approval – your incentive plan should be described in your approved protocol.
For students whose dissertation research connects to grant-funded projects or who are seeking supplemental funding for participant incentives and research costs, Grant Writing Consultant provides guidance on identifying and applying for small research grants that can support these expenses.
When Recruitment Stalls
If you are not reaching your target, adapt. Can you expand your recruitment to additional sites or populations (with an IRB amendment)? Can you extend your collection window? Can you add an incentive or increase an existing one? Consult with your advisor early if recruitment is falling short – they may have connections or strategies you have not considered.
Data Management
How you manage your data during collection affects everything that follows. Poor data management leads to lost files, confused versions, and analytical errors that can undermine your entire study.
Organizing Files
Create a clear file structure before you begin collecting data. Separate raw data from processed data. Use consistent, descriptive file names that include dates (e.g., “Interview_P03_2026-04-15.wav”). Back up everything in at least two locations – your university’s cloud storage and an encrypted external drive are a common combination.
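A naming convention is easiest to keep consistent when it is generated rather than typed by hand. A small sketch of the pattern above (the folder names are hypothetical examples, not a prescribed layout):

```python
from datetime import date

def data_filename(kind: str, participant: str,
                  session_date: date, ext: str) -> str:
    """Build a consistent, date-stamped file name that sorts
    chronologically, e.g. 'Interview_P03_2026-04-15.wav'."""
    return f"{kind}_{participant}_{session_date.isoformat()}.{ext}"

print(data_filename("Interview", "P03", date(2026, 4, 15), "wav"))
# → Interview_P03_2026-04-15.wav
```

Using ISO dates (YYYY-MM-DD) in file names means an alphabetical sort is also a chronological sort, which makes gaps in your collection easy to spot.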
Maintaining a Research Log
Keep a running log of your data collection activities: when you collected data, from whom, any unusual circumstances, and any decisions you made in the field. This log becomes invaluable when writing your methodology chapter and when answering committee questions about your procedures.
For qualitative researchers, the research log also serves as a reflexivity journal – a place to record your reactions, assumptions, and evolving interpretations. These reflections contribute to the trustworthiness of your analysis.
Data Security
Follow your IRB-approved data security plan precisely. De-identify data as soon as possible. Store consent forms separately from data so that participant identities cannot be linked to their responses. If you are using cloud storage, ensure it meets your institution’s security requirements.
Be especially careful with qualitative data. Interview recordings contain identifiable information by their nature (participants’ voices, and often their names and specific experiences). Transcribe recordings promptly and store transcripts under participant codes, not names. Delete recordings once transcription is verified, unless your protocol specifies otherwise.
Staying Organized During Collection
Data collection can stretch over weeks or months. Without systems to keep you on track, it is easy to lose momentum or let quality slip.
Tracking Progress
Create a tracking spreadsheet that shows your progress toward your target sample. For surveys, track invitation dates, reminder dates, response counts, and completion rates. For interviews, track scheduled dates, completed interviews, transcription status, and any follow-up needed.
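A spreadsheet works well for this, but the structure can also be sketched in code. The column names and participant rows below are hypothetical, assuming an interview study:

```python
import csv
from io import StringIO

# Hypothetical tracking sheet: one row per participant
FIELDS = ["participant", "scheduled", "completed", "transcribed", "follow_up"]
rows = [
    {"participant": "P01", "scheduled": "2026-04-10", "completed": "yes",
     "transcribed": "yes", "follow_up": "none"},
    {"participant": "P02", "scheduled": "2026-04-12", "completed": "no",
     "transcribed": "no", "follow_up": "reschedule"},
]

# Write the sheet as CSV so it opens in any spreadsheet program
buf = StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

done = sum(r["completed"] == "yes" for r in rows)
print(f"{done}/{len(rows)} interviews completed")
```

Whatever tool you use, the point is the same: one row per participant, one column per status, updated immediately after each session.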
Review your tracking sheet weekly. Are you on pace to meet your timeline? Are certain recruitment channels producing better results than others? Use this data to adjust your strategy in real time.
Maintaining Quality
Quality control during collection is far easier than trying to fix problems after the fact. For surveys, monitor responses for patterns that suggest careless responding (straight-lining, impossibly fast completion times). For interviews, review your recordings periodically to check whether you are following your protocol consistently and whether your probing is effective.
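The two survey-quality checks mentioned above can be automated. A minimal sketch, with an illustrative completion-time threshold that you would calibrate from your own pilot data:

```python
def flag_careless(responses: list[int], seconds: float,
                  min_seconds: float = 120.0) -> list[str]:
    """Flag two common careless-responding patterns:
    straight-lining (identical answers to every item) and
    implausibly fast completion. The threshold is illustrative."""
    flags = []
    if len(set(responses)) == 1:   # every item answered identically
        flags.append("straight-lining")
    if seconds < min_seconds:      # faster than a plausible read time
        flags.append("too fast")
    return flags

print(flag_careless([4, 4, 4, 4, 4], seconds=45))   # both patterns flagged
print(flag_careless([2, 4, 3, 5, 1], seconds=300))  # clean response
```

Flagged responses should be reviewed, not automatically deleted: your protocol should state in advance how flagged cases will be handled.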
If you have a research assistant helping with data collection, train them thoroughly on the protocol and check their work regularly. Inconsistent data collection across researchers is a threat to the validity of your findings.
Troubleshooting Common Problems
Low Response Rates
If your survey response rate is lower than expected, try personalizing your invitation emails, simplifying the survey (if any questions are non-essential), sending reminders at different times of day, or recruiting through additional channels. Sometimes a brief, friendly email from a gatekeeper (a department chair or organizational leader who endorses your study) can dramatically increase responses.
Participant No-Shows
For interview studies, no-shows are inevitable. Confirm each interview 24 hours in advance. Offer flexible scheduling, including evenings and weekends if your participants have demanding schedules. Overrecruit slightly – if you need 15 interviews, schedule 18 to account for dropouts.
Technology Failures
Record with two devices. Save frequently. Back up daily. These three habits prevent most technology-related data loss. When failures do occur – and they will – document what happened and how much data was lost. Your committee will want to know, and transparency is always better than silence.
Unexpected Findings During Collection
Sometimes early data reveals something you did not anticipate – a theme in interviews that your literature review did not prepare you for, or survey responses that cluster in unexpected ways. Resist the urge to modify your instruments mid-collection unless the issue is severe (such as discovering that a question is genuinely confusing). Document the unexpected finding in your research log and address it during analysis.
When Things Do Not Go as Planned
The gap between your proposal and your actual data collection experience is inevitable. Recruitment takes longer than expected. Participants give shorter answers than you hoped. The dataset you planned to use turns out to have more missing data than anticipated.
These challenges are normal, and they do not mean your study is failing. What matters is how you respond. Document every deviation from your proposed plan. Discuss challenges with your advisor promptly. Adjust your approach within the bounds of your IRB approval (or submit amendments for changes that fall outside it).
Your dissertation’s methodology chapter will eventually describe what you actually did, not just what you planned to do. Honest reporting of challenges and adaptations strengthens your credibility as a researcher.
Completing Data Collection
When you have reached your target sample size (or when you and your advisor agree that you have sufficient data), close your data collection formally. For surveys, close the survey link. For interviews, stop scheduling new sessions. For observations, complete your final session.
Secure all your data according to your IRB protocol. Verify that your files are complete, properly labeled, and backed up. Create a final inventory of your dataset: how many participants, how many data points, any missing data, and any anomalies.
With data collection complete, you have the raw material for your findings. The next stage – data analysis – is where you transform that raw material into the evidence that answers your research questions.