Pilot error?

flickr photo by Keenan Pepper https://flickr.com/photos/keenanpepper/543546959 shared under a Creative Commons (BY-SA) license

For my Confirmation of Candidature Report, I’m currently drawing together the findings from, and reflections on, my pilot study. In a meeting a few weeks ago, I’d mentioned to my supervisor that I’d analysed the findings from three of the methods, as assessments for the Research Masters modules I’d been studying. When I mentioned the remaining three methods, he suggested that full analyses for them might not be a good use of my time. That bothered me at the time, because I’d begun a preliminary analysis and had initially coded the data in NVivo. Now that I think about it more, however, he was, as usual, probably right. I have indeed taken things as far as I need to.

A pilot study can serve a number of purposes, depending on your methodology. In a natural science or clinical study, you might want to test the apparatus you intend to use or the logistics surrounding the processes. In the social sciences, you might want to design your research protocol or verify that it is realistic, test the adequacy of your research instruments, or establish whether your recruitment strategies are appropriate (van Teijlingen and Hundley, 2001).

Although a pilot study can be used to collect preliminary data, consideration has to be given to how those data will be used. The methods I used to collect data clearly weren’t preceded by a preliminary study to establish that they were appropriate. This means those data may be at best unhelpful, or at worst, misleading. What I wanted from my pilot study was to reveal issues and barriers related to recruiting potential participants, explore the use of oneself as a researcher in a culturally appropriate way, and test and modify interview questions (Kim, 2011).

Let’s now look at the methods in turn:

  1. Interviews – the in-depth, semi-structured interview went as planned, and provided indicators regarding which question areas were more informative than others (for this particular interviewee). It revealed a few omissions I will need to remedy when I move forward, and also that my interview protocol might need adjusting from one interview to the next. This is one amongst many reasons for transcribing and beginning the analysis immediately following each interview, rather than waiting until all interviews are complete.

I found the blog interview much harder to gauge. Perhaps this was due to having little access to how questions were affecting the participant – were they confused, irritated, excited by my questions? Even though the semi-structured interview was conducted by phone, being able to hear a voice, in real time, allowed a better sense of the participant’s reaction to the questions. On the other hand, the blog commenting format allowed the participant to respond at their leisure, and afforded them greater thinking time; time to ruminate and perhaps craft a response which reflects a particular discourse, rather than their own initial reaction, some might argue. Nevertheless, it doubtless takes longer to type out a response than to do so verbally. As a researcher however, you also have access to the post itself as data in its own right. The post can also serve as stimulus material for interview questions, in much the same way a participant research diary might do.

  2. Participant observation – this really stretched me, and I’m not at all convinced the method I chose to conduct the observations worked well. Such is the value of a pilot study! Although my field notes produced some data I could doubtless have used, what the exercise achieved much more significantly was to alert me that I need better access to richer data; that the fieldwork will need to take place over a longer period than just three one-hour slots; that I should use a variety of routes into ‘the field’; and that I should try to capture visually some sense of where I’ve roamed – a map to give an overview, rather than reams (kBs?) of fieldnotes alone.

I was also aware that this wasn’t precisely participant observation. Although I was active in the way I normally would be in Twitter, I didn’t ask the searching questions that an ethnographer embedded in a community might do … but I’m OK with that for the pilot. Ethnographers do need time to orient themselves before leaping in. In the full study however, I will need to remedy that and attempt to engage people who are tweeting things pertinent to the study. (I also need to ensure that I do not lose the context of those encounters.)

  3. Focus group – my intention had been to seek permission to conduct a hashtag chat as a focus group, but with ethical approval only coming through just as the summer holidays (northern hemisphere) started, that proved problematic. I did approach a couple of moderators of chats in the southern hemisphere, but received no response (perhaps they were on midyear break too?). However, in the course of casting around for appropriate chats (there are a lot to choose from!), I came across a recent chat which discussed the areas I would have wished to cover. Although I might have used slightly different phrasing, and I didn’t get the chance to follow up responses in the way participating in a live chat would allow (wondering whether it’s appropriate/meaningful to reply to a tweet that was made several months ago?!), at least there was a corpus of tweets captured in Storify available for me to analyse. There are clearly ethical issues here of privacy, consent and the expectations of the uses to which one’s tweets might be put. I discussed these at greater length in a series of posts, prior to making my ethics submission.

There were technical challenges to overcome to capture the tweets from Storify in a form which lent itself to analysis – another tick in the benefits column for conducting a pilot study. However, having seen the responses in the chat I captured, I’m now less convinced that a hashtag focus group would be able to produce sufficiently rich data, so this may be a method I drop. If however, during the course of my fieldwork, I come across chats which are discussing the areas of professional learning, then I’ll drop by and attempt to participate.
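For the curious, the capture boiled down to something like the sketch below. This is illustrative rather than the script I actually used, and it assumes the tweets in a locally saved Storify page sit inside standard embedded-tweet blockquote elements; the real page structure may well differ.

```python
# A minimal sketch (not my actual script): pull the tweet text and links out of a
# saved Storify page and write them to CSV for analysis. Assumes tweets appear as
# embedded-tweet <blockquote> elements, which may not match the real page markup.
import csv
from bs4 import BeautifulSoup

with open("storify_chat.html", encoding="utf-8") as f:       # hypothetical filename
    soup = BeautifulSoup(f.read(), "html.parser")

rows = []
for block in soup.find_all("blockquote"):                    # one embedded tweet per blockquote
    text = block.get_text(" ", strip=True)
    links = [a["href"] for a in block.find_all("a", href=True)]
    rows.append({"text": text, "links": "; ".join(links)})

# A flat CSV can then be imported into NVivo or a spreadsheet for coding.
with open("chat_tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "links"])
    writer.writeheader()
    writer.writerows(rows)
```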

  4. Focused observation – this involved collecting the tweets of a single person (a Twitter advocate), with their consent, for a fixed period of time; one month in this case. Given that this was a pilot study, I could only conduct a preliminary analysis to establish the feasibility of the technique. Technically, there were no problems, but I’m not sure the data revealed anything more than earlier studies have done, either the one which used a similar technique (King, 2011), or others which used similar corpora of tweets. I’m starting to wonder whether ripping the data from its natural setting loses much of the context and whether this technique answers the questions I want to ask. If I want to confirm that teachers share things, that they communicate or collaborate, that they reflect on their practice and so forth, then I could probably do that, but all that’s been done in earlier studies. I’m starting to think that the emphasis of my study is shifting subtly away from simply providing evidence that teachers are learning professionally … but I’ve much more thinking to do on that yet.
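(As an aside, the collection step itself is straightforward; something along the lines of the sketch below would do it. This is purely illustrative – it uses the tweepy library with placeholder credentials and an invented handle, not my actual script.)

```python
# Illustrative only: collect one participant's tweets for a fixed window using
# tweepy (Twitter REST API v1.1). Credentials, handle and dates are placeholders.
from datetime import datetime
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

start, end = datetime(2016, 6, 1), datetime(2016, 7, 1)       # the one-month window
collected = []
for status in tweepy.Cursor(api.user_timeline,
                            screen_name="participant_handle",  # hypothetical handle
                            tweet_mode="extended").items():
    created = status.created_at.replace(tzinfo=None)           # normalise to naive UTC
    if created < start:       # timeline arrives newest-first, so we can stop here
        break
    if created < end:
        collected.append((created, status.full_text))

print(f"{len(collected)} tweets collected for the period")
```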

Overall then, what have I learned from the pilot?

flickr photo by popular17 https://flickr.com/photos/65498285@N08/6040732669 shared under a Creative Commons (BY) license

I think I learned that recruitment of participants may not be as simple as chucking a shoutout on Twitter and waiting for the responses to flood in. Of the dozen or so authors of blog posts discussing professional learning whom I approached, only one followed through with a full set of responses. Over half never even replied, though I appreciate there might have been mitigating circumstances in some cases. This has encouraged me to think far more carefully about my participant recruitment strategy. Bound up in that is also my choice of sample; where initially I thought participants would be self-selecting from the population of educators to which I have access through Twitter, I now feel I might need to be more direct in my approach and, as a consequence, establish a set of criteria for choosing potential participants. Should I cover different phases of education, teachers from different disciplines, different geographic regions and educational systems, and perhaps even some from outside the classroom who have a particular interest in Twitter for professional development?

For the interview I naturally developed an interview protocol, but I prepared nothing for the other methods; they were after all more open. I wonder, though, whether it might be wise to have a set of pre-prepared generic questions I would like answered, even though I might not use them without adapting them to each set of circumstances. It might also help me see which aspects of my study are being answered in most detail and where the gaps are.

Although I didn’t perform a full analysis, I was grateful for the opportunity to test out NVivo and see what might be the best strategy for bringing together the different forms of data from different sources. This also encouraged me to think more carefully about my coding strategy and how I build that into my NVivo project.

It was only when I began to draw things together however, that the sociomaterial aspects of my research began to become apparent. Not those in the fieldwork and findings, but in my activities as a researcher. Choosing the twitter.com interface as my window on my field brought me to particular elements of data and led me to behave in a particular way in making field notes. I was also ‘tied’ to the desktop computer (and the desk!) in a way I wouldn’t normally be; how did those actors influence my actions? Reading back through my methodological comments, my frustration with the experience is clear, and how different this was from my usual, but less formal, wanderings in the field. This highlighted for me an area I’d not really considered – the emotional response to events as they unfold, and how that response might influence subsequent events or behaviour. After three fieldwork sessions, and trying a couple of tweaks to make them more fruitful, I later recognised that the interface did not suit my needs and as a result will use TweetDeck for this kind of fieldwork – a different interface, with different materiality, which will doubtless affect me (and the results?) in a different way. I also decided on a completely different method (more about that in a future post) which might provide insights on a part of the field hidden from the ethnographer in these circumstances.

 

Kim, Y. (2011). The pilot study in qualitative inquiry: Identifying issues and learning lessons for culturally competent research. Qualitative Social Work, 10(2), 190-206.

King, K. P. (2011). Transformative Professional Development in Unlikely Places: Twitter as a Virtual Learning Community.

Van Teijlingen, E., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, 35.

“But it’s only 300 words”

flickr photo by Chrispl57 https://flickr.com/photos/chrispl57/5321817498 shared under a Creative Commons (BY-NC-ND) license

I’ve spent a substantial part of this weekend writing less than three hundred words … and I’m still not happy.

In addition to the report I need to produce as part of the Confirmation of Candidature process (more to come on that!), I also need to give a verbal report. Usually this is in the form of a presentation to a small group composed of your supervisory team and a rapporteur, followed by a Q&A. A mini viva in effect. My supervisor asked if I’d prefer to do a seminar; much the same format, but invitations would be extended more widely within the Institute. That seemed like a good opportunity to speak to a wider audience, perhaps people I’ve not met before, and possibly attract a wider range of feedback. So I went for it.

We’ve now reached the point however where that seminar needs to be publicised, so I need to produce an abstract, which is where this weekend’s three hundred words came in. This is my first abstract proper and I think it’s safe to say I’ve never sweated so much over so few words … even when writing job applications. In order to plan and develop what I needed to say, I followed the advice of my supervisor and turned to the abstract writing section in Rowena Murray’s ‘Writing for Academic Journals’. She points in particular to Brown’s (1994) advice, a framework of eight questions. I’d come across these before and saved them for future use, so it looked like that future had arrived. Although Brown suggests that the task should be completed in 30 minutes, it took me twice that … at the first pass. I then needed to take that and redraft it into a format suitable for an abstract, which is where the ‘fun’ really started. Struggling to find my mojo, I turned to more experienced people than me; those whose writings had already passed scrutiny and been published. I must have looked at fifty or sixty abstracts in academic papers and tried to pull out some themes and similarities. To me, the best ones seemed to be the simplest – here’s the problem, this is my context, here’s what I did, here’s what I found and these are the implications. Simple.

I tried to take what I’d learned from the literature and apply it to my answers to Brown’s questions, but really struggled to get a writing flow. This then turned into worrying about whether I was struggling to articulate what I’d learned because what I was researching had little value or meaning. Or maybe it had, but I wasn’t good enough to be able to express it (now I see why they say Imposter Syndrome never really goes away!). So I found myself agonizing not just over sentences, but phrases and even words!

Well, I said I would have it ready by Monday (tomorrow), and despite my reservations about the standard, I believe it’s important to ‘ship.’ Whatever the shortcomings, if it stays in my digital folder it achieves nothing, so getting something out there is better than nothing. Here it is:

What should we make of teachers’ claims that Twitter provides them with powerful professional learning?

Social media have become a significant feature of many people’s everyday lives; teachers are no exception and some are using those media in educational contexts. This preliminary study explores the ways in which teachers have appropriated Twitter to support their professional learning. What are they doing, why are they doing it and what are they getting out of it?

With much of the activity taking place online, a digital ethnographic approach was chosen; this incorporated a semi-structured interview, participant observation within Twitter, and analysis of blog posts and tweets. Consistent with previous research, the findings confirm that teachers share (resources and ideas), discuss educational issues and their practice, develop and maintain connections with one another which combat isolation, and grow professionally. The study also indicates they celebrate the work of their peers, cultivate offline contact and activity, appear to have a predominantly positive outlook and are emotionally invested in their experience. Adopting a sociomaterial sensibility yielded insights into the enabling technology. The Internet, wireless access, portable devices and the applications which run on them were working together to enable teachers to personalise their professional learning. The previously silent materiality now ‘pings’ for attention. In association with technology, some teachers have become more self-motivated, self-organised and able to exploit informal opportunities for their professional learning.

Schools may need to consider how to acknowledge, accommodate and nurture the learning some colleagues are undertaking of their own volition. Those experiences may represent an untapped resource which could be harnessed for the wider good of school communities.

Despite the fact that I’m posting this ‘in public,’ I know the reality is that few people will read it. I think I’m particularly concerned about it being ‘right’ because this will be the first thing I am putting out to a local academic audience. It’s important for me to have the respect of my peers, so if my abstract is naive and poorly constructed, how will they view me, let alone will they feel sufficiently interested to give up an hour and come to listen to me talk? What I perhaps ought to remind myself is that I’m still only in my first year and have much to learn … but unfortunately that’s not making me feel any less queasy!

If you have any advice to offer on how I might improve my abstract, do please add a comment. Thanks.

 

Brown, R. (1994). Write right first time. Literati Newsline, 95, 1-8.

Tracing the field

In the preceding post, I was casting around for a tool to trace and display the paths I take through ‘the field.’ My search came up short and it became apparent I would need one tool to record the places travelled and another to display those traversals. In my search for contenders, and looking for mind/concept mapping tools, I came across Draw.io. Although there are other, more fully featured applications of this kind, they are often desktop-bound or limited (in the free versions). Draw.io seemed to suit my needs, so I thought I’d give it a try with a few short visits to the field to see how it functioned in context.

The field configures itself in different ways depending on the approach you take, so I elected to use a couple of Twitter searches as springboards, then to simply follow the twitterstream I normally see, but using the TweetDeck application. In the latter case, I also hoped to see whether the different material environment of TweetDeck (compared with the standard twitter.com interface) might influence what I see, or how I see it. With raw twitter.com, tweets entering the timeline are put in a queue and await your click to release them. As I found during one of my pilot studies, if it’s taken a few minutes to read and process a few tweets, there can be a substantial number in the awaiting queue. These all then drop in on your click and processing this next batch can be a challenge. In contrast, TweetDeck is a more dynamic interface, allowing tweets to appear in your stream at the moment they have been posted. The effects are twofold: firstly it makes the task of processing them at least appear to be more manageable; and secondly you get a better sense of the flow of the ‘stream. TweetDeck also has the additional feature of allowing you to view filtered versions of the ’stream in different columns – these could be your notifications, a person’s posts, general hashtags or other search terms.

[Screenshot: the TweetDeck interface]

On three separate occasions then, I spent about an hour ‘in the field’ and followed interesting threads by recording them on a Draw.io canvas. The features of Draw.io that proved useful in representing those threads included symbols to indicate the nature of the point of interest (tweet, blog post, website, article etc); being able to add the url of each point to the symbol as a hyperlink, so I or anyone else could revisit it in the future; and being able to add a ‘tooltip’ with either the tweet or a brief summary, to give a flavour of the content. Importantly, each of the points of interest on the growing map could be joined by a connector – a dynamic line which moves with the graphic if it is moved. This makes the process of editing and appending to the map so much easier. The following image will give you an idea of what the map looks like, though it doesn’t provide the interactivity. However, Draw.io also allows files to be downloaded as html files, so I’ve posted the full-fat, interactive version here.

flickr photo by IaninSheffield https://flickr.com/photos/ianinsheffield/28898253850 shared under a Creative Commons (BY-NC-SA) license

If you hover over the interactive online version, you can follow the links to the places I visited. In addition, you should also see (depending on your browser) a menu at the top which, for simplicity, allows the three different visits to be shown individually. You can also zoom in and out should you need to. (You might find the visualisation of the connections in a #chat in the bottom right quite interesting).

 

The perennial question, then, is ‘so what?’ We need first to bear in mind that these were only relatively brief sessions, simply a proof of concept. However, we are immediately aware of the visual element the graphic brings and the capability to see quickly the complexity of the paths traced and what led to what. We can also compare the depth of exploration with the breadth – was this a single topic being pursued relentlessly, or a skim to capture the zeitgeist? Were the stopping-off points similar or varied? How is the materiality expressing itself? These are some of the things the map is telling me, but how does it stand as an artefact to represent my journeys to someone else?

I found myself inexorably drawn to attempting to use the map as an analytical tool to assist in interpreting the data. But is this fair? That wasn’t the purpose behind its creation, and anyway the points of interest weren’t chosen with analysis in mind. What that did make me wonder though was how useful a similar map might be of the activity of a potential participant. As a researcher, the process of analysis and interpretation might then take on much greater significance. The map you see above is unique; no-one else would have produced an identical one, and at a different time the map I produced would have been completely different. It transpires that this is a research method that is already, if not widely, used (Emmel, 2008) – maps are drawn by participants, usually during an interview where they narrate the map as they produce it. To do this in my online context would, I feel, need some adjustment, and because potential participants would be remote, there would be greater technical overhead for them. Since they would also be flicking back and forth (as I did) between different spaces, might that interfere with the paths they would usually have taken as part of their natural activity? So perhaps I inadvertently stumbled on an appropriate technique for capturing participants’ journeys when I discussed the audio method in an earlier post? Maybe I can learn something from the literature on Participatory mapping that will help me firm up my technique. I’ll add it to my ‘to do’ list.

 

Emmel, N. (2008). Participatory mapping: An innovative sociological method.

Is there a digital ethnographer’s Strava?

flickr photo by splorp https://flickr.com/photos/splorp/357376583 shared under a Creative Commons (BY-NC-ND) license

As I was rewriting my ethics submission and reviewing the methods I had used in my pilot study, I got to thinking about being ‘in the field.’ When I undertook more formal participant observation and made meticulous field notes, I wondered how they might be viewed as data. I also found myself wondering about the process itself and how effective it was in providing me with something from which to make meaning. I was convinced that I wasn’t following the actor-network theory exhortation to ‘follow the actors.’ With all that in mind, I felt I needed a better way to record, trace out and make visible the paths I was taking whilst in the field. Who or what were the actors I was following? Where did they go and what did they do?

Although the starting point for most of these expeditions was Twitter, wanting to record where things flowed from there essentially meant capturing the sequence of hyperlinks or prompts … not unlike the instructions a sat nav system provides. It was important too to record the forms and natures of those links and details about the stopover places. Would there be destinations or endpoints in the paths traced, or would they simply be the places where particular expeditions came to a close? If this just meant capturing hyperlinks, then a bookmarking tool of some sort might do the job … and there are plenty of options from which to choose there. What I also wanted however, was to:

  1. capture some sense of what was at each location; a snapshot if you will
  2. capture the whole set of interlinks, and be able to represent it as a pathway
  3. capture metadata which would later allow me to search, sort and filter the results
  4. present the results visually, allowing an overview and the ability to drill down to the detail

Most of the contenders (like Diigo, Delicious, Symbaloo, ScoopIt, NetVibes) do one, two, or perhaps even three of these, but since none do them all, I was then faced with using two tools in concert to fulfil my needs list. That’s when things got complicated.
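To make those requirements a little more concrete, here is the sort of record I have in mind for each ‘stopover’ – purely a sketch of the data I would want a tool (or pair of tools) to capture. The field names are my own invention, not the schema of any of the services mentioned above.

```python
# A sketch of what one 'stopover' in the field might need to hold, written as a
# simple dataclass and logged as JSON lines. The structure is hypothetical: my
# wish-list made concrete, not anything an existing bookmarking tool produces.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime
from typing import Optional

@dataclass
class Stopover:
    url: str                        # where I landed
    kind: str                       # tweet, blog post, article, website ...
    note: str                       # a snapshot of what was there
    visited_at: str = field(default_factory=lambda: datetime.now().isoformat())
    came_from: Optional[str] = None  # the node that led here, i.e. the pathway
    tags: list = field(default_factory=list)   # metadata for later search/filter

def log(stop: Stopover, path: str = "field_trail.jsonl") -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(stop)) + "\n")

# e.g. log(Stopover(url="https://twitter.com/...", kind="tweet",
#                   note="thread on #TootlingTuesday", came_from="https://..."))
```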

The really tricky bit is when it comes to visually representing the paths taken; this began to lead towards a concept or mind mapping tool of some sort. Again there is a whole raft from which to choose, but the crucial thing is I didn’t want to have to build the map from scratch; much better, quicker and more accurate to have the process automated – clicking on a button should capture all the aforementioned data and add a node to the map at the right point, but also link it with the node from which the jump was initiated. Perhaps there were some mindmapping tools which, in addition to visualising the data, could also capture the extra information I wanted? Errr … no. Although there are plenty to choose from, when you begin to dig down into their features, the field soon gets whittled down. Crucially, the missing link is in fact … the missing link, i.e. getting the data from a bookmarking application directly and dynamically into a mind map is distinctly non-trivial. One possibility, which only a scant few feature, is to import an xml file. Plenty export xml, but few offer the import option.

Whilst I was casting around for options, I came across a potential contender. The mind mapping application VUE (created by Tufts University) could be linked directly with Zotero, through the Zotero Firefox plugin. The good news is that I have the latter enabled already and am familiar with it; the less good news was twofold: firstly, that VUE is an offline application you need to download and install (which for a bunch of reasons is less than optimal); and secondly, that Zotero is, strictly speaking, a bibliographic application for managing references, rather than for bookmarking. Short on options, I decided to give it a shot. Whilst working at uni, and therefore within an enterprise environment with things generally locked down, it wasn’t going to happen, but at home I made much better progress. Download and install VUE – check. Create a new Zotero account so I don’t foul up my current references – check. Capture some typical data to make sure Zotero will work to manage bookmarks – check (even tweets can be captured!). Set up the link from my Zotero library into VUE – computer says no!

Unfortunately the straightforward instructions in this video were made before the architecture of Firefox changed, rendering the plugin which performs the setup redundant. As open source projects, VUE and the Firefox/Zotero plugin rely on volunteers to update them to accommodate changes like this, and I guess the will, the expertise or the inclination was no longer there.

At that point, all was not entirely lost however. VUE can be configured to generate and update mind maps from incoming RSS feeds, or to import a csv file, both of which Zotero can provide. I found that both work, although less than optimally. The RSS feed brings all the data in, and will update as the data in Zotero updates. Unfortunately some of the field metadata seems to get lost on the way, so turning the feed into a map doesn’t work quite so well, especially where the interconnections between nodes are concerned. The csv import is much better in this respect, though of course it won’t update automatically. And in fact neither method pulls across the ‘relations’ created between the sources in Zotero.
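By way of illustration, the manual route needn’t be too painful: a Zotero library can be exported to CSV and reshaped into a graph file that Graphviz can draw. The sketch below assumes the export contains ‘Title’ and ‘Url’ columns and that I record each item’s ‘parent’ url in the ‘Extra’ field – a convention of my own, not something Zotero does automatically.

```python
# A sketch, not my actual workflow: reshape a Zotero CSV export into a DOT graph
# file that Graphviz can render as a map of the trail. Assumes 'Title' and 'Url'
# columns exist, and that the linking/parent url was noted in the 'Extra' field.
import csv

nodes, edges = {}, []
with open("zotero_export.csv", encoding="utf-8-sig") as f:   # hypothetical filename
    for row in csv.DictReader(f):
        url = (row.get("Url") or "").strip()
        if not url:
            continue
        nodes[url] = (row.get("Title") or url).replace('"', "'")[:60]
        parent = (row.get("Extra") or "").strip()
        if parent.startswith("http"):
            edges.append((parent, url))

with open("field_map.dot", "w", encoding="utf-8") as f:
    f.write("digraph field {\n  rankdir=LR;\n")
    for url, label in nodes.items():
        f.write(f'  "{url}" [label="{label}", URL="{url}"];\n')  # clickable in SVG output
    for parent, child in edges:
        f.write(f'  "{parent}" -> "{child}";\n')
    f.write("}\n")
```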

It appears then that there is no ideal solution and that if I want a visual representation of my activity in the field, then I’m going to need to generate it manually, from data I’ve captured and stored in Zotero. When I recounted this tale to Chris, a fellow student, his first observation was ‘Well, what have you learned?’ There’s no doubt I’m now more familiar with VUE and can see how powerful it could potentially be in helping to manipulate, filter, sort and visualise data. The process you go through in doing that becomes an integral part of your analysis. Whilst the technical issues mean I probably won’t use VUE, that principle of analysing and interpreting as a function of constructing a visualisation seems to have some merit. It’s that principle I think I shall take forward as I attempt to map the field.

Green light

flickr photo by My Buffo https://flickr.com/photos/mybuffo/311483225 shared under a Creative Commons (BY-SA) license

If potential research participants gave their permission, what would be the implications of posting interview recordings online? That was essentially the theme of the preceding post. I wasn’t so sure of which way to jump, but the encouraging and supportive comments I received there and on Twitter prompted me to take the trickier route of writing a new ethics submission. In addition to rewriting the University pro forma submission document, I had to rewrite a couple of consent forms and their associated participant information sheets, in order to accommodate the possibility that participants might give their permission to ‘publish’ their recording. I also had to write a consent form and participant information sheet for a new, additional method I want to use. I then had to amend and extend the matrix I composed which summarises the ethical issues for each method. Finally, and perhaps most importantly, I felt I should attempt to justify the rather radical notion that interview recordings might be posted as podcasts. Here then is that supplement to my ethics submission:

Why am I proposing a change?

The usual position is that interview participants are afforded confidentiality and anonymity – that the data they provide will only be available to those specified, and that all features which might identify them will be removed before making the findings more public. In the interests of speed and given the small scale of my pilot study, I adopted the aforementioned approach. As I move forward into my main study, I would like to propose a different stance, building on those issues discussed in Appendix 02(?): Anonymity. This also contributes to the University’s and wider Open Access policies.

The arena from which potential participants will be drawn is highly participatory, where members generally adopt a performative approach. The norms of the space include a sense of sharing what you have and what you know; where people acknowledge and give credit to those who have supported or helped them. I’d like to suggest that this participatory space invites a more participatory research approach. As Grinyer (2002) noted, researchers have to balance the need to protect participants from harm by hiding their identity against the loss of ownership that entails, negotiated ‘on an individual basis with each respondent.’ This is manageable, provided the sample size is small, as it will be in this study.

It’s perhaps helpful at this stage to reiterate that the topic of this research is not sensitive, participants are not vulnerable and the data they share will not be ‘sensitive personal data.’

How this differs from the interviews in the pilot study

In the pilot study, the participant was assured confidentiality, anonymity, that the transcript would be deleted at the end of the study and that the findings would not be reported (only used to inform the next stage of research).

For the main study I propose a shift in emphasis from ‘human subject’ to ‘authored text.’ This would be achieved by allowing interviews to contribute to the participatory agenda: releasing the interview recordings as podcasts (streamed online audio files), if participants give their permission. Links to the audio files would be embedded in a web page associated with the research project, and the interviewees would be named and their contribution credited. This represents an attempt to move beyond the notion that participants are merely sources of data to be mined. In Corden and Sainsbury’s (2006) study, participants responded positively when offered a copy of the audio recording of their interviews and were given the option to amend their responses, though few chose to exercise that control.

This is a very different approach to that found in most studies, but is not without precedent. The ‘edonis’ project, part of an EdD study by David Noble, included a series of interviews with teachers on the theme of leadership in educational technologies. The interviews from those people who gave permission were posted online. It could be argued that this proposed approach is only one step further on from conducting ‘interviews’ in visible online public spaces like blog comments, forums, and some chat rooms.

Risks and benefits

Once participants’ identities are no longer disguised, both potential risks and benefits become more significant. Table xxx summarises possible risks and benefits:

| Risks | Benefits |
| --- | --- |
| Loss of privacy which could lead to exposure to ridicule and/or embarrassment. | Direct: Increase in participant agency, moving beyond the notion of participants merely as sources from which researchers abstract data. |
| Change in future circumstances which renders what participants originally said to be viewed in a less-positive light. | Direct: Makes provision for participants to amend or extend what they said in the original interview. |
|  | Indirect: Increasing the awareness and understanding of the wider community of issues associated with professional learning and social media. |

Increased attention through increased exposure could be perceived as either a risk or a benefit, depending on the participant’s preferred online behaviours.

Mitigation

As with conventional approaches, in order to make an informed decision, potential participants would need to be made fully aware of:

  1. Purpose and potential consequences of the research
  2. Possible benefits and harms
  3. The right to withdraw
  4. Anticipated uses of the data
  5. How the data will be stored, secured and preserved for the longer term.

With items 4 and 5 the circumstances will be different, depending on whether participants accede to their interview recording being released. This distinction needs to be made absolutely clear at the outset so participants are able to decide whether to be involved at all and whether they want to take that additional step.

At the start of an interview, participants who have agreed to allow the recording to be posted would be reminded of the above once more and their verbal consent captured in the recording. In the debriefing after the interview is complete, participants will be asked whether they wish to change their minds, and reminded of how they can make their views known should they wish to do so subsequently.

Procedures

As in the pilot study, potential participants would be provided with a participant information sheet, but one extended to include the additional considerations (see Appendix xxx). The form through which they provide their consent will also be amended to offer options for the different levels of involvement (see Appendix xxx) and whether they are prepared to allow the recording to be released under a Creative Commons license (see next section).

Given the small number of interviewees (<5), coping with different levels of involvement should be a manageable process.

Copyright and Intellectual Property

These issues will also need to be made clear to participants through the participant information sheet.

…for data collected via interviews that are recorded and/or transcribed, the researcher holds the copyright of recordings and transcripts but each speaker is an author of his or her recorded words in the interview.

(Padfield, 2010).

Rather than seeking formal copyright release from participants, it is proposed that the interview recordings will be released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. Participants will be asked at the point of providing consent to state whether they agree to that release; if they don’t, then the recording would not be released. Once more, potential participants are likely to be familiar with the principles of CC licensing; many of them release their own materials under these licenses.

Van den Eynden et al. (2014) recommend the use of Open Data Commons licenses for data released through research; however, this licensing system is more appropriate where data are stored in databases and the database itself needs licensing separately from the content. CC licensing was chosen since the content will not be wrapped within a database; at least not one which the public will be able to manipulate (copy, remix, redistribute).

References

Corden, A., & Sainsbury, R. (2006). Using verbatim quotations in reporting qualitative social research: Researchers’ views. York, UK: University of York.
Van den Eynden, V., Corti, L., Woollard, M., Bishop, L., & Horton, L. (2014). Managing and sharing research data: A guide to good practice. SAGE Publications Ltd.
Grinyer, A. (2002). The anonymity of research participants: Assumptions, ethics and practicalities. Social Research Update, 36(1), 4.
Padfield, T. (2010). Copyright for archivists and records managers (4th ed.). London: Facet Publishing.

 

flickr photo by mherzber https://flickr.com/photos/mherzber/500917537 shared under a Creative Commons (BY-SA) license

I’m delighted to be able to report that my revised submission has passed the ethics review process. It’s highly unusual for interviews to be allowed to be published in this way; standard practice is to afford anonymity to interviewees. Perhaps it’s indicative of the need to make our research more open, or the more performative behaviours of potential participants … or perhaps a bit of both. Whatever the case, I’m chuffed to bits, as we’d say up here in the ‘North.’ Now all that remains is to find participants sufficiently confident and generous enough to give it a shot. Know anyone …?

The only way is ethics

flickr photo by cybass https://flickr.com/photos/cybass/176867465 shared under a Creative Commons (BY-NC) license

Right from the outset, one of the options I’ve tried to keep in mind is that of ‘publishing’ those data that are amenable. Publishing in this sense refers to sharing interview recordings, as podcasts, back with the community. This feels like the right thing to do; when teachers experiment with new techniques that someone else showed or explained to them, they often share those insights more widely. If that is the norm, why should my research study, conducted within this environment, be any different? Well, there are a number of reasons, mostly arising from a researcher’s ethical sensitivities and obligations towards potential participants.

The default ethical stance is to maintain participants’ anonymity and confidentiality; with an interview transcript, this isn’t too difficult. If on the other hand the audio file of the interview is shared, the potential for the participant to be identified is so much greater, even if personal identifiers are edited out of the audio. However, it could be argued (as I began to discuss here) that in the online performative space with which participants are comfortable, anonymising what they have created actually does them a disservice. Much better to acknowledge their co-authorship and give credit where it’s due. I wonder how many researchers conducting interviews as one of their methods discuss the issue of ownership, copyright or intellectual property with their interviewees, beyond explaining where their data will be stored and how it will be used. In fact, ‘for data collected via interviews that are recorded and/or transcribed, the researcher holds the copyright of recordings and transcripts but each speaker is an author of his or her recorded words in the interview.’ So I find myself speculating on what the implications and potential consequences of that are. As Van den Eynden et al (2011) explain, an author could at some time in the future assert their rights over the words they provided and you would be obliged to comply. It is possible however for the researcher to seek ‘transfer of copyright or a licence to use the data’, ideally at the outset of the project. There are even templates available through the Data Archive to make things easier. I wonder whether Creative Commons licensing might provide a way forward? Potential participants are likely to be familiar with it; many will indeed use it with their own material. But that then has me wondering whether that’s permissible under the University regulations for PhD research (which of course I could doubtless find out), and also what the implications might be if you subsequently wish to publish your research through conventional commercial channels.

My work this morning has been with the apparently less sticky technical issues – where would the audio files be stored, how would they be served/streamed, and so on. In the past I’ve used the free versions of various podcast services like AudioBoom, SoundCloud and Spreaker, but they’re of course limited in some way and would not be adequate for several hour-long podcasts. Paying for upgrades is an option, but I don’t fancy picking up a tab of tens of pounds per annum just for this project. Online storage can be bought for a much more manageable outlay through services like Amazon S3, or perhaps more ethically(?) through Reclaim Hosting, though these of course demand a higher level of technical capability to configure, manage and maintain the site. I probably have enough background to cope with that, especially if supplemented by online tutorials … and I have been considering securing a new domain name anyway. But then what happens in the longer term? How long will I need to maintain the site and content?
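For what it’s worth, the self-hosted route may be less daunting than it sounds; pushing an audio file to an S3 bucket so it can be streamed from a web page is only a few lines. The sketch below uses boto3 with an invented bucket and filename, and assumes credentials are already configured and the bucket allows public objects.

```python
# A sketch only: upload an interview recording to an (invented) S3 bucket so it
# can be streamed from a web page. Assumes boto3 credentials are configured and
# the bucket permits public-read objects.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="interview_01.mp3",                  # hypothetical local file
    Bucket="my-research-podcasts",                # invented bucket name
    Key="podcasts/interview_01.mp3",
    ExtraArgs={"ACL": "public-read", "ContentType": "audio/mpeg"},
)
print("https://my-research-podcasts.s3.amazonaws.com/podcasts/interview_01.mp3")
```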

I can’t help but be drawn back to ethical principles, specifically those of non-malfeasance and beneficence. Would sharing podcasts of interviews be likely to result in any harm befalling participants, and are there ways in which they might benefit? It is not easy to speculate on what harms an interviewee might incur, but they are perhaps not dissimilar to those arising from any online activity. In most cases (assuming the material is not inflammatory or illegal) the most likely harm is reputational damage from an inappropriate or ill-judged comment. It is possible that future employers might be put off by opinions or ideas expressed – if, as a teacher, you expressed favour for particular pedagogical approaches and they were at odds with the views of a potential employer who heard your interview, then s/he might be less inclined to offer you a job interview. Again though, if you hold a particular set of values and have an online presence, it’s likely you’ll have already burnt that bridge. This can of course be flipped and work in your favour, as it did for Daniel Needlestone – a benefit? For those who share widely and seek exposure and an audience, being provided with an opportunity for that through an interview might indeed be considered to be in their interests. And of course, as for many research participants, but perhaps particularly for teachers, there is the sense that their participation is contributing to the pool of knowledge from which we all sip … or gulp.

I’m obliged to also ask myself why I might want to do this; what do I stand to gain? Am I being selfish and actually seeking kudos from the community? Am I attempting to follow in the spirit of making research more open and more accessible? Am I attempting to be more faithful to my participants in seeking to ensure their voice is not lost through my transcription? Is this one way in which I can be more transparent about my analysis and interpretation? Is this an additional channel through which I can make my research accessible to a wider audience? Perhaps a little of all of the above?

So which way do I go? My easy route is to stick with the ethical issues I’ve already had ratified for my pilot study and go with participant anonymity. The difficult route, for all of the aforementioned reasons, is to seek to ‘publish’ the data and therefore have to write a new ethics submission incorporating all those issues and explaining how I would address them. That might be time consuming (both in the composition and in the approval process), but is not impossible; the edonis project by David Noble has already set a precedent in fact. Which option would you choose if a) you were me, and b) you were a potential participant – what would your preference be?

Van den Eynden, V., Corti, L., & Woollard, M. (2014). Managing and sharing data.

Tootling along

When you approach your research with an actor-network sensibility, the one thing you’re pretty much guaranteed to have absorbed through your reading is to ‘follow the actors’. The principles in virtual or digital ethnographies similarly encourage you to follow connections and flows; an arguably much easier proposition in the online, hyperlinked world than in the offline. It was these approaches that led me to #TootlingTuesday.

Using NVivo, I was working through my first coding pass of a corpus of tweets when a particular tweet caught my eye. A single click on the url of that tweet took me out of NVivo and into my browser so I had a better chance to see it in context. The tweeter’s bio suggested this might be someone I could benefit from following, so, following my usual algorithm, I did a quick check of their last few tweets to confirm that they tweeted interesting material. In their stream I spotted a reference to #TootlingTuesday which further piqued my interest. This was a hashtag I’d not come across before, so I clicked on it to initiate the Twitter search page. A scan through the returned tweets revealed them mainly to be celebrating or praising what others had done or said or shared. But I was keen to know more and see whether my interpretation was correct, so #TootlingTuesday next migrated into a Google search. Although the search results didn’t provide much background, one image which was returned helped a little:

(If you know the origin of this image, please let me know in the comments)

Different search engines were even less helpful, so on the basis of the ten minutes I spent, and somewhat ironically, I’m unable to credit the originator … or even the designer of the image. If I desperately needed to know, my next step would be to follow it up with some of the folks who’ve been using the hashtag.

When I reflected back, what was interesting was the way in which my actions had been influenced by the materiality within the environments. Initially a tweet appropriated my interest and took me to a person’s Twitter account, where I sought out the standard elements I always draw on; in this case the bio and the twitterstream. From a tweet within there, the #TootlingTuesday hashtag mobilised me into further action to seek its origins. I now needed to employ several search engines. Most of these acted only as intermediaries, briefly taking my inputs but failing to transform them into anything more meaningful. Google Images however became a mediator, serving up further information which transformed my knowledge and understanding of the hashtag – I was changed as a result of the output of the Google search. Are the #TootlingTuesday hashtag and I now part of each other’s actor-networks?

I find myself speculating on each of the transition points where that sequence of activity might have broken down after seeing the original tweet. If the person’s bio, or subsequently their twitterstream, had not satisfied my criteria for sustaining interest (perhaps I ought to lay them out at some point?), or if I had not scrolled down sufficiently far, then I would not have seen the tweet containing the hashtag. If it had not been a Wednesday (i.e. just after Tuesday), then the tweet, or one similar, might have been too far back in the temporal flow of the stream. If the hashtag had not been of interest, or not a hyperlink through which I could immediately access Twitter’s search page and thereby instantly form an impression. If at least one search engine had been unable to provide a significant piece of the puzzle. Is it coincidence that these elements all lined up? Or serendipity? I wondered too about the ways in which other people are enrolled by the #TootlingTuesday hashtag, the different paths they take and the outcomes which result. Perhaps that’s all part of the richness and variety of learning experiences on Twitter … or anywhere else?

Finally in a more methodological reflection, one might assume that when dealing with a tweet corpus, you’ve left the field and are back in ‘the office’ analysing the data. In one sense that’s of course true, but in digital ethnography, you’re never more than a click away from being back in the field.

Unsettlement

flickr photo by derekbruff https://flickr.com/photos/derekbruff/27336142234 shared under a Creative Commons (BY-NC) license

Just had a meeting with my supervisory team; one that I called. By the end of September, I will need to have completed what we locally call the RF2, or ‘Confirmation of PhD.’ It’s a checkpoint through which you can pass if you’ve made sufficient progress and what you’re proposing to undertake is worthy of study in pursuit of a doctorate. There are two parts: the first is a 6000 word report and the second a presentation to a panel. I guess the combined process serves (at least) two functions; whilst forming that obligatory passage point, it also provides an opportunity to experience, on a smaller scale, what the end of the doctoral process is like – the production of a summative report together with an oral examination. It all makes sense and hangs together.

In preparation for the meeting I produced and circulated a set of notes – points I wanted to cover. Some were simply procedural, but the main thrust was to get some feedback on the pilot phase of my study. Although not quite complete, I have enough data and have undertaken sufficient analysis to begin to make some tentative observations. I wanted the meeting to provide some sense of reassurance that my interpretations held water and to help bring some clarity to some of the rather fuzzy and less coherent preliminary thoughts I’ve had. It was not to be … as has been the case in most of the meetings I’ve had so far. As I recounted my thoughts, my senseis pushed the points I was making that little bit further. ‘If you’re saying xxx, then you’ll need to consider yyy.’ ‘If it’s the case that xxx, then might it be that yyy?’ In many ways, rather than sharpening the focus, issues became more blurred as possibilities expanded. I found the experience most unsettling.

As I began the process of mulling things over, I realised that my expectations of supervisory meetings might have been opposite to what they’re actually intended to do. They’re not there to bring forth order from chaos; that’s my job as a doctoral researcher. Instead, they’re about being unsettled; having your cage rattled. You arrive at a meeting with a set of thoughts, some fully formed and others mere fledglings. What supervisors then do is test the strength, flexibility and elasticity of those ideas – do they stand up to scrutiny and do they fully represent the phenomenon or situation you’ve been studying? Supervisors are there to pose the questions you, as a student, are too inexperienced to have thought of, or too insecure to have articulated. It should be an unsettling process; if it isn’t, your work may not be moving forward or achieving the standard it needs to.

Based on the data and initial interpretations I offered, there are a number of considerations I need to take away and questions I need to address.

  • Professional development, professional learning, CPD – what significance does the terminology have and how big a deal do I want to make it? Do I define the term I’m going to use throughout my study and therefore set out my stall from the beginning, or is that a question better left open until the data begin to speak?
  • In trying to ‘tame’ the research ‘site,’ I need to take care that I don’t massage out the very essence of Twitter. It’s a messy, intense, unruly, unbounded, chaotic space; some of that might be what helps to generate the benefits and outcomes that people have begun to describe.
  • There are some tentative indications that ‘identity’ might be a topic which needs addressing, though I got the impression that that carries with it a whole other set of baggage.

During the course of my summary, I offered up a variety of possible avenues, each of which might make a fruitful area of exploration, but I now need to decide which thread, running through the whole study, I want to gradually pull and tease out. I also need to begin to set myself some limits, especially if I intend to continue with multiple methods. If I’m unable to reassure those assessing my capability to continue that I can conduct and complete my study within the timescale, then I may not be allowed to move forward.

So yes, I’ve definitely been unsettled, but that was needed to encourage me to confront and make sense of the raft of possibilities, and to bring some coherence to my unfolding research.

SM&Society: Final Thoughts

flickr photo by ianguest https://flickr.com/photos/ianinsheffield/28018637830 shared under a Creative Commons (BY-NC-SA) license

So the conference I’ve been looking forward to for about a year now has drawn to a close and the daily commute on the Underground has turned back into a bike ride into Sheffield. Time then to reflect back on my impressions.

My first comment would have to be how incredibly well organised everything was; from the initial call for papers, right through to the final session. Every last ‘i’ was dotted and ‘t’ crossed. Simple things like having printed 5- and 2-minute warning cards for session moderators give some idea of the attention to detail – lots of useful tips for me as I organise our doctoral conference for later this year. The conference team, in conjunction with the local hosts, are due a great deal of praise here. That said, the atmosphere was incredibly friendly and inclusive; you always felt as though you could approach anyone and talk about anything.

I felt the structure and content worked well. Although in a couple of sessions I found it difficult to choose an appropriate theme, and perhaps didn’t always get it right, within each session there would nevertheless be one or two papers which provided unanticipated gems. I was grateful for the opportunity to listen to some of the foremost academics in the field and hadn’t appreciated precisely how accessible they are at events like these. I did get the sense that I attended more sessions presenting work built on quantitative methods than qualitative, but that could doubtless be down to the choices I made, rather than a reflection of the overall spread. It struck me how much research into social media seems to use survey methods as the primary instrument, or of course SNA. Given the nature of the medium, it’s easy to see why that is, though each further study I listened to where a survey had been used left me wondering why digital ethnographic techniques aren’t used more.

Maybe it’s my age, but I found the days quite exhausting; running from 8.30 through to 5.00 (longer on the poster day) whilst trying to keep your mind sharp proved quite fatiguing. For me, I think it was the sheer rate of mental processing needed to maintain focus in sessions where four presentations followed in rapid succession. Even during break times you invariably had to stay sharp as you discussed previous or forthcoming sessions with fellow attendees, or indeed your work or theirs. In many of the talks presenters spoke incredibly quickly, attempting to squeeze in as much detail as possible, thus requiring you to maintain an incredibly high level of focus. I’m sure that on some occasions I failed, as I hurriedly tried to capture a few notes and keep up with the barrage of fresh information. In part this was down to the brevity of the allocated time slots, but then if the number of sessions were reduced, fewer people would have the opportunity to present. I guess I just felt it was a shame that those who were presenting ‘work in progress’ and seeking feedback had a scant couple of minutes in which to receive it; one question at best. Not an easy one to resolve, but it’s encouraged me to think carefully about how I might structure the messages I wish to convey in sessions I deliver. If I’m only given 20 minutes in total and I really want some feedback, then I need to think carefully about how I divide up the time.

Presenting certainly occupied my mind in more ways than one. Now that data from my pilot has started to come in, I’ve started thinking of possible places I might present my findings. The SM&Society conference would have certainly been an option had it been somewhat later, but what other forums might also be appropriate? Clearly those which have themes dedicated to professional development or professional learning. Possibly digital methods? Social media? Or could I make a case for more general sociological events? So in addition to all the other benefits I’ve enjoyed at the Conference, it’s also encouraged me to consider exactly where my research is positioned.

I’d love to be going to Toronto next year, but I’m afraid the pockets are unlikely to be that deep.

SM&Society Day 3: Graveyard slot

It’s never easy retaining an audience and maintaining their interest at the end of the day, let alone at the end of a three day programme. Nevertheless the group presenting the papers in the ‘Organisations and Workplaces’ session did a great job.

In the opening paper, Halvdan Haugsbakken reviewed research into social media use in organisations. Anita Greenhill and Jamie Woodcock discussed a project looking at crowdsourcing practice in ‘Zooniverse;’ clearly a very different kind of organisational practice. From the Netherlands, Anita Batenburg considered Virtual Communities of Practice created by organisations in the health care sector, and Lene Pattersen looked at the ‘villages’ which formed in a globally connected organisation.

With the exception of Halvdan’s study, the online spaces we might usually recognise as social media were absent in these papers; instead, the platforms were created by the organisations to provide social media functionality. Although it didn’t crop up (and I only thought about it while writing this post!), I wonder how far we can claim institutional platforms as social media when, although they provide some of the social functionality, they’re internally facing?

There were a number of elements which came together to make this one of the most successful sessions of the conference for me. First, although my research is focused more closely on individuals, the activity with which they’re involved is clearly related to their workplace, so everything I heard had relevance. Secondly, the papers covered a broad range of issues and topics, and did so through a variety of methods. We had a review of the literature, network analysis, surveys, interviews and an ethnography, all of which spoke to my multiple-methods inclinations. Finally, and I’m sure this is something the conference organisers aim for, there was a clear theme which ran through all the talks and drew them together: knowledge sharing practice, motivations and value for all involved. This allowed the speakers to reference what was emerging in each other’s talks.

Most importantly for me, this package of talks suggested a number of avenues that might be fruitfully explored, including voluntaristic materialism, how and where value is being produced, self-determination theory, technostress and key informant methodology. (I list them here so I can’t forget them!)