Interview – @johnmayo

CPDin140 – John Heffernan

With a broad range of experiences, educator John Heffernan (@johnmayo on Twitter) currently finds himself transplanted from Ireland, his home, into Virginia, United States. John discusses the part that Twitter played in that, connecting him with ‘interesting, smart people’ and exposing him to people who ‘have different views and different lifestyles.’…

via CPD in 140 – John Heffernan — EDUtalk

Transcribing … a pain in the neck?

“Pain in the Neck” by RebeccaSaylor https://flickr.com/photos/rebeccasaylor/115892449 is licensed under CC BY-NC

Most researchers who conduct interviews will have a tale or two related to transcription; the process whereby you turn audio recordings into typed text. There’s no doubting how labour intensive it is. Depending on your typing speed, the quality of your audio and the equipment you’re using, it’s often likely to be around a 6 or 7 to 1 ratio – one hour’s interview will take six or more hours to transcribe. For me, that slow processing of the words of my participants is the first opportunity to begin to analyse what they’ve said within the broader context of the whole study. Since your pace is slow, you begin to become more intimate with the data. On the other hand, it can be backbreaking work … quite literally. Hours spent hunched over a keyboard can place demands on your physical well-being in so many different ways, even if you do know all the health and safety advice. So any tools which might ease some of that burden (short of farming out the transcription, as some researchers do) are definitely of interest to me.
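To put that arithmetic in concrete terms, here’s a trivial sketch; the 6.5:1 ratio is just an illustrative midpoint, and your own figure will vary with typing speed and audio quality:

```python
# Back-of-the-envelope transcription workload, assuming the commonly
# quoted 6-7:1 ratio scales linearly with the length of the recording.

def transcription_hours(audio_minutes: float, ratio: float = 6.5) -> float:
    """Estimated typing time (in hours) for a recording of audio_minutes."""
    return audio_minutes * ratio / 60

# Six one-hour interviews at 6.5:1 is roughly 39 hours at the keyboard.
total = sum(transcription_hours(60) for _ in range(6))
print(f"Estimated transcription time: {total:.1f} hours")
```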

Opening the field …

“open day” flickr photo by NapaneeGal https://flickr.com/photos/kingstongal/1562197658 shared under a Creative Commons (BY-NC) license

The notion of what constitutes my ethnographic ‘field’ continues to reappear in various situations. Sometimes this is from people who know better than me that I need to articulate precisely what I mean by it, and sometimes it’s from people less familiar with ethnography who can’t conceive what an online field might be. Traditionally, ethnographic fieldwork, and more specifically participant observation, is marked by a number of factors. It assumes the ethnographer will be resident in a limited geographical locale in which they experience face-to-face relationships (Wittel, 2000) with an ‘object’ of study – an ‘Other.’ There will be clearly identified boundaries where it is straightforward to establish what is included and also what is excluded. In an online, digital, virtual or cyberethnography, residence and geography have less meaning, interaction is mediated and boundaries blur. The ethnography becomes one of movement and flexibility, responding to the ebb and flow of the people and practice under study. My field then becomes one of the people I follow and those they bring into view; the learning practices (and others?) in which they’re engaged; and the areas into which they take that practice. Twitter usually, though not always, provides the point of entry to that field; there I might remain, or be whisked off elsewhere as I follow the actors.

Weighing Anchor

“Anchor” flickr photo by MarcieLew https://flickr.com/photos/91724286@N04/15909947948 shared under a Creative Commons (BY-SA) license

During a recent interview, Joe Dale mentioned a useful new app he’d found which offered some potential in the context of professional sharing – Anchor. It’s a free (as of Jan 2017) smartphone app (Android & Apple) through which you can create a two-minute audio posting (a ‘Wave’) which others can listen to, then respond, again in audio. Joe (with Rachel Smith) had experimented with it by posting a question posed by one of the #mfltwitterati, then crowdsourcing responses from Anchor users. The final combined thread is then presented as a single, stitched audio stream, where the question and responses form a coherent whole.

Sentiment Analysis 1 – ‘sentiment viz’

Following the preceding post, I’ve dug a little deeper into sentiment viz to explore more carefully what it might offer in terms of revealing the emotional components within Twitter and tweets. Like before, I used a chat hashtag as the search term and perhaps unsurprisingly got a similarly shaped visualisation which expressed sentiment as generally positive and somewhat relaxed. Probing a little further and clicking on a few individual circles provides the data which located each tweet at that point on the chart. Here we see the overall sentiment rating expressed as ‘v’ for valence (how pleasant) and ‘a’ for arousal (how activated). Then there’s a breakdown of those words which contributed to that sentiment rating, with their individual scores. We therefore have multiple ways to compare the emotional content of one tweet with another, and can make a judgement as to whether those ratings make sense – more on that later.
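As an aside, the mechanics behind placing a tweet on such a chart are straightforward to sketch. The snippet below is a minimal illustration, with a tiny hand-made lexicon standing in for the ANEW-style word lists a tool like sentiment viz draws on; only the look-up-and-average logic mirrors what’s described above.

```python
# Minimal sketch: rate a tweet's valence (v, pleasantness) and arousal
# (a, activation) by averaging the scores of words found in an affective
# lexicon. The values below are illustrative stand-ins, not real data.
LEXICON = {
    "love":    {"v": 8.7, "a": 6.4},
    "learn":   {"v": 7.0, "a": 5.0},
    "boring":  {"v": 2.8, "a": 2.6},
    "excited": {"v": 8.1, "a": 7.7},
}

def tweet_sentiment(text):
    hits = [(w, LEXICON[w]) for w in text.lower().split() if w in LEXICON]
    if not hits:
        return None  # no rated words, so the tweet can't be placed
    v = sum(s["v"] for _, s in hits) / len(hits)
    a = sum(s["a"] for _, s in hits) / len(hits)
    return {"v": round(v, 2), "a": round(a, 2), "words": hits}

print(tweet_sentiment("So excited to learn with my PLN tonight"))
# prints the overall v/a pair plus the per-word scores behind it
```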


Sentiment … ality?

During my pilot studies, a couple of findings suggested areas for further exploration I’d not previously considered. One of these was the degree to which people talking or writing about Twitter seemed to be ‘affected.’ Although it was not a topic I had gone looking for, nor had asked questions about, and although people rarely mentioned it explicitly, the language and terms they used implied some element of emotional response. Before I could take this much further, I needed to return to the literature and see how people have discussed and/or researched the affective side of teacher learning.


Pilot error?

flickr photo by Keenan Pepper https://flickr.com/photos/keenanpepper/543546959 shared under a Creative Commons (BY-SA) license

For my Confirmation of Candidature Report, I’m currently drawing together the findings from, and reflections on, my pilot study. In a meeting a few weeks ago, I’d mentioned to my supervisor that I’d analysed the findings from three of the methods, as assessments for the Research Masters modules I’d been studying. When I mentioned the remaining three methods, he suggested that full analyses for them might not be a good use of my time. That bothered me at the time, because I’d begun a preliminary analysis and had initially coded the data in NVivo. Now that I think more about it, however, he was, as usual, probably right. I have indeed taken things as far as I need to.

A pilot study can serve a number of purposes, depending on your methodology. In a natural science or clinical study, you might want to test the apparatus you intend to use or the logistics surrounding the processes. In the social sciences, you might be wanting to design your research protocol or verify that it was realistic, test the adequacy of your research instruments or establish whether your recruitment strategies are appropriate (van Teijlingen and Hundley, 2001).

Although a pilot study can be used to collect preliminary data, consideration has to be given to how those data will be used. The methods I used to collect data clearly weren’t preceded by a preliminary study to establish that they were appropriate. This means those data may be at best unhelpful, or at worst, misleading. What I wanted from my pilot study was to reveal issues and barriers related to recruiting potential participants, explore the use of oneself as a researcher in a culturally appropriate way, and test and modify interview questions (Kim, 2011).

Let’s now look at the methods in turn:

  1. Interviews – the in-depth, semi-structured interview went as planned, and provided indicators regarding which question areas were more informative than others (for this particular interviewee). It revealed a few omissions I will need to remedy when I move forward, and also that my interview protocol might need adjusting from one interview to the next. This is one amongst many reasons for transcribing and beginning the analysis immediately following each interview, rather than waiting until all interviews are complete.

I found the blog interview much harder to gauge. Perhaps this was due to having little access to how the questions were affecting the participant – were they confused, irritated, excited by my questions? Even though the semi-structured interview was conducted by phone, being able to hear a voice, in real time, allowed a better sense of the participant’s reaction to the questions. On the other hand, the blog commenting format allowed the participant to respond at their leisure, and afforded them greater thinking time; time to ruminate and perhaps craft a response which reflects a particular discourse, rather than their initial reaction, some might argue. Nevertheless, it doubtless takes longer to type out a response than to deliver it verbally. As a researcher, however, you also have access to the post itself as data in its own right. The post can also serve as stimulus material for interview questions, in much the same way a participant research diary might.

  2. Participant observation – this really stretched me, and I’m not at all convinced the method I chose to conduct the observations worked well. Such is the value of a pilot study! Although my field notes produced some data I could doubtless have used, what the exercise achieved much more significantly was to alert me that I needed better access to richer data; that the fieldwork will need to take place over a longer period than just three one-hour slots; to use a variety of routes into ‘the field’; and to try to capture visually some sense of where I’ve roamed – a map to give an overview, rather than reams (kBs?) of fieldnotes alone.

I was also aware that this wasn’t precisely participant observation. Although I was active in the way I normally would be in Twitter, I didn’t ask the searching questions that an ethnographer in a community might … but I’m OK with that for the pilot. Ethnographers do need time to orient themselves before leaping in. In the full study, however, I will need to remedy that and attempt to engage people who are tweeting things pertinent to the study. (I also need to ensure that I do not lose the context of those encounters.)

  3. Focus group – my intention had been to seek permission to conduct a hashtag chat as a focus group, but with ethical approval only coming through just as the summer holidays (northern hemisphere) started, that proved problematic. I did approach a couple of moderators of chats in the southern hemisphere, but received no response (perhaps they were on midyear break too?). However, in the course of casting around for appropriate chats (there are a lot to choose from!), I came across a recent chat which discussed the areas I would have wished to cover. Although I might have used slightly different phrasing, and I didn’t get the chance to follow up any responses as participating in a live chat would have allowed (wondering whether it’s appropriate/meaningful to reply to a tweet that was made several months ago?!), at least there was a corpus of tweets captured in Storify that was available for me to analyse. There are clearly ethical issues here of privacy, consent and the expectations of the uses to which one’s tweets might be put. I discussed these at greater length in a series of posts, prior to making my ethics submission.

There were technical challenges to overcome to capture the tweets from Storify in a form which lent itself to analysis (see the first sketch after this list) – another tick in the benefits column for conducting a pilot study. However, having seen the responses in the chat I captured, I’m now less convinced that a hashtag focus group would be able to produce sufficiently rich data, so this may be a method I drop. If, however, during the course of my fieldwork, I come across chats which are discussing the areas of professional learning, then I’ll drop by and attempt to participate.

  4. Focused observation – this involved collecting the tweets of a single person (a Twitter advocate), with their consent, for a fixed period of time; one month in this case (see the second sketch after this list). Given that this was a pilot study, I could only conduct a preliminary analysis to establish the feasibility of the technique. Technically, there were no problems, but I’m not sure the data revealed anything more than earlier studies have done, either the one which used a similar technique (King, 2011), or others which used similar corpora of tweets. I’m starting to wonder whether ripping the data from its natural setting loses much of the context, and whether this technique answers the questions I want to ask. If I want to confirm that teachers share things, that they communicate or collaborate, that they reflect on their practice and so forth, then I could probably do that, but all that’s been done in earlier studies. I’m starting to think that the emphasis of my study is shifting subtly away from simply providing evidence that teachers are learning professionally … but I’ve much more thinking to do on that yet.
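On the first of those technical challenges: below is a minimal Python sketch of getting a captured chat into an analysable, spreadsheet-friendly form. It assumes the Storify capture has already been saved as a JSON file containing a list of tweet-like records; the field names (‘author’, ‘text’, ‘posted_at’) are hypothetical stand-ins, so the real export would need inspecting first.

```python
# Hypothetical flattening of a saved chat capture (JSON) into CSV for
# analysis. The field names are assumptions, not Storify's actual schema.
import csv
import json

FIELDS = ["author", "text", "posted_at"]

with open("chat_capture.json", encoding="utf-8") as f:
    tweets = json.load(f)  # expected: a list of dicts, one per tweet

with open("chat_capture.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    for tweet in tweets:
        writer.writerow({field: tweet.get(field, "") for field in FIELDS})
```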
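And for the focused observation, the collection step might look something like the sketch below, using the Tweepy library against Twitter’s v1.1 REST API (as it stood around the time of the pilot). The credentials and handle are placeholders, and this is only one way of bounding the one-month window, not a record of the method actually used in the study.

```python
# Sketch: collect one month of a consenting participant's tweets.
# API keys and the screen name are placeholders.
from datetime import datetime, timedelta

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

since = datetime.utcnow() - timedelta(days=30)
collected = []

for status in tweepy.Cursor(api.user_timeline,
                            screen_name="participant_handle",
                            tweet_mode="extended").items():
    # Normalise to naive UTC; some tweepy versions return aware datetimes.
    created = status.created_at.replace(tzinfo=None)
    if created < since:
        break  # the timeline is reverse-chronological, so we can stop
    collected.append(status)

print(f"Collected {len(collected)} tweets from the last 30 days")
```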

Overall then, what have I learned from the pilot?

flickr photo by popular17 https://flickr.com/photos/65498285@N08/6040732669 shared under a Creative Commons (BY) license

I think I learned that recruitment of participants may not be as simple as chucking a shoutout on Twitter and waiting for the responses to flood in. Of the dozen or so authors of blog posts discussing professional learning whom I approached, only one followed through with a full set of responses. Over half never even replied, though I appreciate there might have been mitigating circumstances in some cases. This has encouraged me to think far more carefully about my participant recruitment strategy. Bound up in that is also my choice of sample; where initially I thought participants would be self-selecting from the population of educators to which I have access through Twitter, I now feel I might need to be more direct in my approach and, as a consequence, establish a set of criteria for choosing potential participants. Should I cover different phases of education, teachers from different disciplines, different geographic regions and educational systems, and perhaps even some from outside the classroom who have a particular interest in Twitter for professional development?

For the interview I naturally developed an interview protocol, but prepared nothing for the other methods; they were, after all, more open. I wonder, though, whether it might be wise to have a set of pre-prepared generic questions that I would like answered, even though I might not use them without adapting them to each set of circumstances. It might also help me see which aspects of my study are being answered in most detail and where the gaps are.

Although I didn’t perform a full analysis, I was grateful for the opportunity to test out NVivo and see what might be the best strategy for bringing together the different forms of data from different sources. This also encouraged me to think more carefully about my coding strategy and how I build that into my NVivo project.

It was only when I began to draw things together, however, that the sociomaterial aspects of my research began to become apparent. Not those in the fieldwork and findings, but in my activities as a researcher. Choosing the twitter.com interface as my window on my field brought me to particular elements of data and led me to behave in a particular way in making field notes. I was also ‘tied’ to the desktop computer (and the desk!) in a way I wouldn’t normally be; how did those actors influence my actions?

Reading back through my methodological comments, my frustration with the experience is clear, and how different this was from my usual, but less formal, wanderings in the field. This highlighted for me an area I’d not really considered – the emotional response to events as they unfold, and how that response might influence subsequent events or behaviour. After three fieldwork sessions, and trying a couple of tweaks to make them more fruitful, I eventually recognised that the interface did not suit my needs and will as a result use TweetDeck for this kind of fieldwork – a different interface, with a different materiality, which will doubtless affect me (and the results?) in a different way. I also decided on a completely different method (more about that in a future post) which might provide insights on a part of the field hidden from the ethnographer in these circumstances.

 

Kim, Y. (2011). The pilot study in qualitative inquiry: Identifying issues and learning lessons for culturally competent research. Qualitative Social Work, 10(2), 190-206.

King, K. P. (2011). Transformative Professional Development in Unlikely Places: Twitter as a Virtual Learning Community.

Van Teijlingen, E., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, 35.

Tracing the field

In the preceding post, I was casting around for a tool to trace and display the paths I take through ‘the field.’ My search came up short and it became apparent I would need one tool to record the places travelled and another to display those traversals. In my search for contenders, and looking at mind/concept mapping tools, I came across Draw.io. Although there are other, more fully featured applications, they are often desktop-bound or limited (in the free versions). Draw.io seemed to suit my needs, so I thought I’d give it a try with a few short visits to the field to see how it functioned in context.
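As an aside, the record/display split needn’t be onerous on the recording side. The sketch below is purely illustrative – the hops are invented, and it emits Graphviz’s plain-text DOT format rather than anything Draw.io-specific, simply because DOT is easy to show in a post. The idea is just to log each hop as a pair, then hand the result to whichever visualiser you prefer.

```python
# Illustrative only: record field traversals as (from, to) hops, then
# emit DOT text that a graph visualiser can render as a map of the field.
hops = [
    ("Twitter #chat", "blog post on classroom practice"),
    ("blog post on classroom practice", "YouTube tutorial"),
    ("Twitter #chat", "shared Google Doc"),
]

lines = ["digraph field {"]
lines += [f'  "{src}" -> "{dst}";' for src, dst in hops]
lines.append("}")
print("\n".join(lines))  # paste the output into any DOT renderer
```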

Green light

flickr photo by My Buffo https://flickr.com/photos/mybuffo/311483225 shared under a Creative Commons (BY-SA) license

If potential research participants gave their permission, what would be the implications of posting interview recordings online? That was essentially the theme of the preceding post. I wasn’t so sure of which way to jump, but the encouraging and supportive comments I received there and on Twitter prompted me to take the trickier route of writing a new ethics submission. In addition to rewriting the University pro forma submission document, I had to rewrite a couple of consent forms and their associated participant information sheets, in order to accommodate the possibility that participants might give their permission to ‘publish’ their recording. I also had to write a consent form and participant information sheet for a new, additional method I want to use. I then had to amend and extend the matrix I composed which summarises the ethical issues for each method. Finally, but perhaps most importantly, I felt it was important to attempt to justify the rather radical notion that interview recordings might be posted as podcasts. Here then is that supplement to my ethics submission:

Why am I proposing a change?

The usual position is that interview participants are afforded confidentiality and anonymity – that the data they provide will only be available to those specified, and that all features which might identify them will be removed before the findings are made more public. In the interests of speed, and given the small scale of my pilot study, I adopted that approach. As I move forward into my main study, I would like to propose a different stance, building on the issues discussed in Appendix 02(?): Anonymity. This also aligns with the University’s Open Access policy and the wider Open Access agenda.

The arena from which potential participants will be drawn is highly participatory, and its members generally adopt a performative approach. The norms of the space include a sense of sharing what you have and what you know, and of acknowledging and giving credit to those who have supported or helped you. I’d like to suggest that this participatory space invites a more participatory research approach. As Grinyer (2002) noted, researchers have to balance the need to protect participants from harm by hiding their identity against the loss of ownership this can cause, negotiating ‘on an individual basis with each respondent.’ This is manageable, provided the sample size is small, as it will be in this study.

It’s perhaps helpful at this stage to reiterate that the topic of this research is not sensitive, participants are not vulnerable and the data they share will not be ‘sensitive personal data.’

How this differs from the interviews in the pilot study

In the pilot study, the participant was assured confidentiality, anonymity, that the transcript would be deleted at the end of the study and that the findings would not be reported (only used to inform the next stage of research).

For the main study I propose a shift in emphasis from ‘human subject’ to ‘authored text.’ This would be achieved by allowing interviews to contribute to the participatory agenda, by releasing the interview recordings as podcasts (streamed online audio files), if participants give their permission. Links to the audio files would be embedded in a web page associated with the research project, the interviewees would be named and their contribution credited. This represents an attempt to move beyond the notion that participants are merely sources of data to be mined. In Corden and Sainsbury’s (2006) study, participants responded positively when offered a copy of the audio recording of their interviews and were given the option to amend their responses, though few chose to exercise that control.

This is a very different approach to that found in most studies, but is not without precedent. The ‘edonis’ project, part of an EdD study by David Noble, included a series of interviews with teachers on the theme of leadership in educational technologies. The interviews from those people who gave permission were posted online. It could be argued that this proposed approach is only one step further on from conducting ‘interviews’ in visible online public spaces like blog comments, forums, and some chat rooms.

Risks and benefits

Once participants’ identities are no longer disguised, both potential risks and benefits become more significant. Table xxx summarises possible risks and benefits:

| Risks | Benefits |
| --- | --- |
| Loss of privacy, which could lead to exposure to ridicule and/or embarrassment. | Direct: increase in participant agency, moving beyond the notion of participants merely as sources from which researchers abstract data. |
| Change in future circumstances which causes what participants originally said to be viewed in a less positive light. | Direct: makes provision for participants to amend or extend what they said in the original interview. |
|  | Indirect: increases the awareness and understanding of the wider community of issues associated with professional learning and social media. |

Increased attention through increased exposure could be perceived as either a risk or a benefit, depending on the participant’s preferred online behaviours.

Mitigation

As with conventional approaches, in order to make an informed decision, potential participants would need to be made fully aware of:

  1. Purpose and potential consequences of the research
  2. Possible benefits and harms
  3. The right to withdraw
  4. Anticipated uses of the data
  5. How the data will be stored and secured and preserved for the longer term.

With items 4 and 5 the circumstances will be different, depending on whether participants accede to their interview recording being released. This distinction needs to be made absolutely clear at the outset so participants are able to decide whether to be involved at all and whether they want to take that additional step.

At the start of an interview, participants who had agreed to allow the interviews to be posted would be reminded of the above once more and their verbal consent captured in the recording. In the debriefing after the interview is complete, participants will be asked whether they wish to change their minds, and reminded how, should they wish to do so later, they can make those views known.

Procedures

As in the pilot study, potential participants would be provided with a participant information sheet, but one extended to include the additional considerations (see Appendix xxx). The form through which they provide their consent will also be amended to offer options for the different levels of involvement (see Appendix xxx), and to ask whether they are prepared to allow the recording to be released under a Creative Commons license (see the next section).

Given the small number of interviewees (<5), coping with different levels of involvement should be a manageable process.

Copyright and Intellectual Property

These issues will also need to be made clear to participants through the participant information sheet.

…for data collected via interviews that are recorded and/or transcribed, the researcher holds the copyright of recordings and transcripts but each speaker is an author of his or her recorded words in the interview.

(Padfield, 2010).

Rather than seeking formal copyright release from participants, it is proposed that the interview recordings will be released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. Participants will be asked, at the point of providing consent, to state whether they agree to that release; if they don’t, then the recording will not be released. Once more, potential participants are likely to be familiar with the principles of CC licensing; many of them release their own materials under these licenses.

Van den Eynden et al. (2014) recommend the use of Open Data Commons licenses for data released through research; however, that licensing system is more appropriate where data are stored in databases and the database itself needs licensing separately from its content. CC licensing was chosen since the content will not be wrapped within a database; at least not one which the public will be able to manipulate (copy, remix, redistribute).

References

Corden, A., & Sainsbury, R. (2006). Using verbatim quotations in reporting qualitative social research: Researchers’ views. York, UK: University of York.
Grinyer, A. (2002). The anonymity of research participants: Assumptions, ethics and practicalities. Social Research Update, 36(1), 4.
Padfield, T. (2010). Copyright for archivists and records managers (4th ed.). London: Facet Publishing.
Van den Eynden, V., Corti, L., Woollard, M., Bishop, L., & Horton, L. (2014). Managing and sharing research data: A guide to good practice. SAGE Publications Ltd.

 

flickr photo by mherzber https://flickr.com/photos/mherzber/500917537 shared under a Creative Commons (BY-SA) license

I’m delighted to be able to report that my revised submission has passed the ethics review process. It’s highly unusual for interviews to be allowed to be published in this way; standard practice is to afford anonymity to interviewees. Perhaps it’s indicative of the need to make our research more open, or the more performative behaviours of potential participants … or perhaps a bit of both. Whatever the case, I’m chuffed to bits, as we’d say up here in the ‘North.’ Now all that remains is to find participants sufficiently confident and generous enough to give it a shot. Know anyone …?

The only way is ethics

flickr photo by cybass https://flickr.com/photos/cybass/176867465 shared under a Creative Commons (BY-NC) license

Right from the outset, one of the options I’ve tried to keep in mind is that of ‘publishing’ those data that are amenable. Publishing in this sense refers to sharing interview recordings, as podcasts, back with the community. This feels like the right thing to do; when teachers experiment with new techniques that someone else showed or explained to them, they often share those insights more widely. If that is the norm, why would my research study, conducted within this environment, be any different? Well, there are a number of reasons, mostly arising from a researcher’s ethical sensitivities and obligations towards potential participants.

The default ethical stance is to maintain participants’ anonymity and confidentiality; with an interview transcript, this isn’t too difficult. If, on the other hand, the audio file of the interview is shared, the potential for the participant to be identified is so much greater, even if personal identifiers are edited out of the audio. However, it could be argued (as I began to discuss here) that in the online performative space with which participants are comfortable, anonymising what they have created actually does them a disservice. Much better to acknowledge their co-authorship and give credit where it’s due. I wonder how many researchers conducting interviews as one of their methods discuss the issue of ownership, copyright or intellectual property with their interviewees, beyond explaining where their data will be stored and how it will be used. In fact, ‘for data collected via interviews that are recorded and/or transcribed, the researcher holds the copyright of recordings and transcripts but each speaker is an author of his or her recorded words in the interview’ (Padfield, 2010). So I find myself speculating on what the implications and potential consequences of that might be. As Van den Eynden et al. (2014) explain, an author could at some time in the future assert their rights over the words they provided and you would be obliged to comply. It is possible, however, for the researcher to have sought ‘transfer of copyright or a licence to use the data’, ideally at the outset of the project. There are even templates available through the Data Archive to make things easier.

I wonder though whether taking the route towards Creative Commons licensing might provide a way forward? Potential participants are likely to be familiar with it; many will indeed use it with their own material. But that then has me wondering whether that’s permissible under the University regulations for PhD research (which of course I could doubtless find out), and also what the implications might be if you subsequently wish to publish your research through conventional commercial channels.

My work this morning has been with the apparently less sticky technical issues – where the audio files would be stored, how they would be served/streamed, and so on. In the past I’ve used the free versions of various podcast services like AudioBoom, SoundCloud and Spreaker, but they’re of course limited in some way and would not be adequate for several hour-long podcasts. Paying for upgrades is an option, but I don’t fancy picking up a tab of tens of pounds per annum just for this project. Online storage can be bought for a much more manageable outlay through services like Amazon S3, or perhaps more ethically(?) through Reclaim Hosting, though these of course demand a higher level of technical capability to configure, manage and maintain the site. I probably have enough background to cope with that, especially if supplemented by online tutorials … and I have been considering securing a new domain name anyway. But then what happens in the longer term? How long will I need to maintain the site and content?
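For what it’s worth, the S3 route really is a modest amount of code once a bucket exists. Here is a minimal sketch, assuming a bucket already configured to allow publicly readable files; the bucket and file names are placeholders, not anything from the project:

```python
# Sketch: push one interview recording to S3 and print a public URL.
# Assumes AWS credentials are configured and the bucket permits public
# reads; all names below are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "interview_01.mp3",               # local recording (placeholder)
    "my-research-podcasts",           # hypothetical bucket name
    "podcasts/interview_01.mp3",      # key the project page will stream
    ExtraArgs={"ContentType": "audio/mpeg", "ACL": "public-read"},
)

print("https://my-research-podcasts.s3.amazonaws.com/podcasts/interview_01.mp3")
```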

I can’t help but be drawn back to ethical principles, specifically those of non-maleficence and beneficence. Would sharing podcasts of interviews be likely to result in any harm befalling participants, and are there ways in which they might benefit? It is not easy to speculate what harms an interviewee might incur, though they are perhaps not dissimilar to those arising from almost any online activity. In most cases (assuming the material is not inflammatory or illegal) the most likely harm is reputational damage from an inappropriate or ill-judged comment. Potential future employers might be put off by opinions or ideas expressed – if, as a teacher, you described favouring particular pedagogical approaches which were at odds with the views of a potential employer who heard your interview, then they might be less inclined to offer you an interview. Again though, if you hold a particular set of values and have an online presence, it’s likely you’ll have already burnt that bridge. This can of course be flipped and work in your favour, as it did for Daniel Needlestone – a benefit? For those who share widely and seek exposure and an audience, being provided with such an opportunity through an interview might indeed be considered to be in their interests. And of course, as for many research participants, but perhaps particularly for teachers, there is the sense that their participation is contributing to the pool of knowledge from which we all sip … or gulp.

I’m obliged also to ask myself why I might want to do this; what do I stand to gain? Am I being selfish and actually seeking kudos from the community? Am I attempting to follow in the spirit of making research more open and more accessible? Am I attempting to be more faithful to my participants in seeking to ensure their voice is not lost through my transcription? Is this one way in which I can be more transparent about my analysis and interpretation? Is this an additional channel through which I can make my research accessible to a wider audience? Perhaps a little of all of the above?

So which way do I go? The easy route is to stick with the ethical stance I’ve already had ratified for my pilot study and go with participant anonymity. The difficult route, for all of the aforementioned reasons, is to seek to ‘publish’ the data and therefore write a new ethics submission incorporating all those issues and explaining how I would address them. That might be time-consuming (both in the composition and in the approval process), but it is not impossible; in fact, the edonis project by David Noble has already set a precedent. Which option would you choose if a) you were me, and b) you were a potential participant – what would your preference be?

Van den Eynden, V., Corti, L., & Woollard, M. (2014). Managing and sharing data.