Pilot error?

flickr photo by Keenan Pepper https://flickr.com/photos/keenanpepper/543546959 shared under a Creative Commons (BY-SA) license

For my Confirmation of Candidature Report, I’m currently drawing together the findings from, and reflections on, my pilot study. In a meeting a few weeks ago, I mentioned to my supervisor that I’d analysed the findings from three of the methods as assessments for the Research Masters modules I’d been studying. When I mentioned the remaining three methods, he suggested that full analyses of them might not be a good use of my time. That bothered me at the time, because I’d begun a preliminary analysis and had initially coded the data in NVivo. Now that I’ve thought about it more, however, he was, as usual, probably right. I have indeed taken things as far as I need to.

A pilot study can serve a number of purposes, depending on your methodology. In a natural science or clinical study, you might want to test the apparatus you intend to use or the logistics surrounding the processes. In the social sciences, you might want to design your research protocol or verify that it is realistic, test the adequacy of your research instruments, or establish whether your recruitment strategies are appropriate (van Teijlingen and Hundley, 2001).

Although a pilot study can be used to collect preliminary data, consideration has to be given to how those data will be used. The methods I used to collect data in the pilot were not themselves preceded by a preliminary study to establish that they were appropriate, which means those data may be at best unhelpful and, at worst, misleading. What I wanted from my pilot study was to reveal issues and barriers related to recruiting potential participants, explore the use of oneself as a researcher in a culturally appropriate way, and test and modify interview questions (Kim, 2011).

Let’s now look at the methods in turn:

  1. Interviews – the in-depth, semi-structured interview went as planned and provided indicators of which question areas were more informative than others (for this particular interviewee). It revealed a few omissions I will need to remedy when I move forward, and also that my interview protocol might need adjusting from one interview to the next. This is one amongst many reasons for transcribing and beginning the analysis immediately after each interview, rather than waiting until all interviews are complete.

I found the blog interview much harder to gauge. Perhaps this was due to having little access to how the questions were affecting the participant – were they confused, irritated, or excited by my questions? Even though the semi-structured interview was conducted by phone, being able to hear a voice in real time allowed a better sense of the participant’s reaction to the questions. On the other hand, the blog commenting format allowed the participant to respond at their leisure and afforded them greater thinking time; time to ruminate and, some might argue, craft a response which reflects a particular discourse rather than their own initial reaction. Nevertheless, it doubtless takes longer to type out a response than to give one verbally. As a researcher, however, you also have access to the post itself as data in its own right. The post can also serve as stimulus material for interview questions, in much the same way a participant research diary might.

  2. Participant observation – this really stretched me, and I’m not at all convinced the method I chose for conducting the observations worked well. Such is the value of a pilot study! Although my field notes produced some data I could doubtless have used, what the exercise achieved far more significantly was to alert me that I needed better access to richer data; that the fieldwork will need to take place over a longer period than just three one-hour slots; that I should use a variety of routes into ‘the field’; and that I should try to visually capture some sense of where I’ve roamed – a map to give an overview, rather than reams (kBs?) of field notes alone.

I was also aware that this wasn’t precisely participant observation. Although I was active in the way I normally would be on Twitter, I didn’t ask the searching questions that an ethnographer in a community might do … but I’m OK with that for the pilot. Ethnographers do need time to orient themselves before leaping in. In the full study, however, I will need to remedy that and attempt to engage people who are tweeting things pertinent to the study. (I also need to ensure that I do not lose the context of those encounters.)

  3. Focus group – my intention had been to seek permission to conduct a hashtag chat as a focus group, but with ethical approval only coming through as the summer holidays (northern hemisphere) began, that proved problematic. I did approach a couple of moderators of chats in the southern hemisphere, but received no response (perhaps they were on midyear break too?). However, in the course of casting around for appropriate chats (there are a lot to choose from!), I came across a recent chat which discussed the areas I would have wished to cover. Although I might have used slightly different phrasing, and I didn’t get the chance to follow up responses in the way that participating in a live chat would allow (is it appropriate, or even meaningful, to reply to a tweet made several months ago?!), at least there was a corpus of tweets captured in Storify available for me to analyse. There are clearly ethical issues here of privacy, consent and the expectations of the uses to which one’s tweets might be put. I discussed these at greater length in a series of posts prior to making my ethics submission.

There were technical challenges to overcome in capturing the tweets from Storify in a form which lent itself to analysis – another tick in the benefits column for conducting a pilot study (a sketch of one possible capture approach follows below). However, having seen the responses in the chat I captured, I’m now less convinced that a hashtag focus group would be able to produce sufficiently rich data, so this may be a method I drop. If, however, during the course of my fieldwork I come across chats discussing the areas of professional learning, then I’ll drop by and attempt to participate.
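For illustration only, here is a minimal sketch of how the tweets embedded in a locally saved copy of the chat page might be pulled into a CSV for analysis. The filename and the CSS selector are assumptions – standard Twitter embeds use blockquote elements with the class twitter-tweet, but the actual structure of a saved Storify page would need checking before relying on this.

```python
# A minimal sketch: extracting embedded tweets from a saved copy of the
# chat page into a CSV for analysis. The selector is an assumption --
# standard Twitter embeds use <blockquote class="twitter-tweet">, but the
# real page structure would need checking in the browser's inspector.
import csv
from bs4 import BeautifulSoup

with open("storify_chat.html", encoding="utf-8") as f:  # hypothetical local export
    soup = BeautifulSoup(f, "html.parser")

rows = []
for quote in soup.select("blockquote.twitter-tweet"):
    text = quote.get_text(" ", strip=True)   # tweet text plus author/date line
    link = quote.find("a", href=True)        # permalink back to the original tweet
    rows.append({"tweet": text, "url": link["href"] if link else ""})

with open("chat_tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["tweet", "url"])
    writer.writeheader()
    writer.writerows(rows)
```

A flat CSV like this imports straightforwardly into NVivo as a dataset, which is one reason to flatten the capture early rather than working from the web page itself.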

  4. Focused observation – this involved collecting the tweets of a single person (a Twitter advocate), with their consent, for a fixed period of time; one month in this case (a sketch of the capture appears below). Given that this was a pilot study, I could only conduct a preliminary analysis to establish the feasibility of the technique. Technically, there were no problems, but I’m not sure the data revealed anything more than earlier studies have done, either the one which used a similar technique (King, 2011) or others which used similar corpora of tweets. I’m starting to wonder whether ripping the data from its natural setting loses much of the context, and whether this technique answers the questions I want to ask. If I want to confirm that teachers share things, that they communicate or collaborate, that they reflect on their practice and so forth, then I could probably do that, but all of that has been done in earlier studies. I’m starting to think that the emphasis of my study is shifting subtly away from simply providing evidence that teachers are learning professionally … but I’ve much more thinking to do on that yet.
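As an illustration of that capture, here is a minimal sketch assuming Twitter API access via the tweepy library. The credentials, the participant’s handle and the date window are all placeholders, and API access rules have changed over the years, so treat it as indicative rather than as the method actually used.

```python
# A minimal sketch of the focused-observation capture: one participant's
# tweets over a fixed window, written to CSV. Credentials, handle and
# dates are placeholders; Twitter API access terms vary over time.
import csv
from datetime import datetime

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

start = datetime(2016, 6, 1)  # hypothetical one-month window (UTC)
end = datetime(2016, 7, 1)

with open("participant_tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["created_at", "text"])
    for status in tweepy.Cursor(api.user_timeline,
                                screen_name="participant_handle",  # placeholder
                                tweet_mode="extended").items():
        created = status.created_at.replace(tzinfo=None)  # normalise to naive UTC
        if created < start:   # timeline is newest-first; stop once past the window
            break
        if created < end:
            writer.writerow([created.isoformat(), status.full_text])
```

The point the paragraph above makes still stands, though: a capture like this rips the tweets from their conversational setting, so whatever the tooling, the context of replies, threads and surrounding activity is lost.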

Overall then, what have I learned from the pilot?

flickr photo by popular17 https://flickr.com/photos/65498285@N08/6040732669 shared under a Creative Commons (BY) license

I think I learned that recruiting participants may not be as simple as chucking a shoutout on Twitter and waiting for the responses to flood in. Of the dozen or so authors of blog posts discussing professional learning whom I approached, only one followed through with a full set of responses. Over half never even replied, though I appreciate there might have been mitigating circumstances in some cases. This has encouraged me to think far more carefully about my participant recruitment strategy. Bound up in that is also my choice of sample; where initially I thought participants would be self-selecting from the population of educators to which I have access through Twitter, I now feel I might need to be more direct in my approach and, as a consequence, establish a set of criteria for choosing potential participants. Should I cover different phases of education, teachers from different disciplines, different geographic regions and educational systems, and perhaps even some from outside the classroom who have a particular interest in Twitter for professional development?

For the interview I naturally developed an interview protocol, but I prepared nothing for the other methods; they were, after all, more open. I now wonder whether it might be wise to have a set of pre-prepared generic questions that I would like answered, even though I would probably adapt them to each set of circumstances. It might also help me see which aspects of my study are being answered in most detail and where the gaps are.

Although I didn’t perform a full analysis, I was grateful for the opportunity to test out NVivo and see what might be the best strategy for bringing together the different forms of data from different sources. This also encouraged me to think more carefully about my coding strategy and how I build that into my NVivo project.

It was only when I began to draw things together, however, that the sociomaterial aspects of my research began to become apparent – not those in the fieldwork and findings, but those in my activities as a researcher. Choosing the twitter.com interface as my window on the field brought me to particular elements of data and led me to behave in a particular way when making field notes. I was also ‘tied’ to the desktop computer (and the desk!) in a way I wouldn’t normally be; how did those actors influence my actions? Reading back through my methodological comments, my frustration with the experience is clear, as is how different this was from my usual, less formal, wanderings in the field. This highlighted an area I’d not really considered – the emotional response to events as they unfold, and how that response might influence subsequent events or behaviour. After three fieldwork sessions, and a couple of tweaks to make them more fruitful, I recognised that the interface did not suit my needs; as a result I will use Tweetdeck for this kind of fieldwork – a different interface, with a different materiality, which will doubtless affect me (and the results?) in a different way. I also decided on a completely different method (more about that in a future post) which might provide insights into a part of the field hidden from the ethnographer in these circumstances.

 

Kim, Y. (2011). The pilot study in qualitative inquiry: identifying issues and learning lessons for culturally competent research. Qualitative Social Work, 10(2), 190–206.

King, K. P. (2011). Transformative professional development in unlikely places: Twitter as a virtual learning community.

van Teijlingen, E., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, 35.
