In the preceding post I considered one possible way to visualise a Twitter exchange, but expressed concern that the temporal separation of events had become lost. Seeking to redress this shortfall, I thought that timeline tools might offer a way forward. So finding myself needing to ‘kick the tyres’ of TimelineJS, that’s where I turned. Here the data that you use to compose your timeline is kept in a Google sheet, which means that adjusting or amending your timeline only requires a change to the contents of a spreadsheet cell. It also makes consistency across the elements of the timeline relatively easy by cutting and pasting cell contents. Adding each tweet is no more complex than pasting its url into a cell.
Earlier this week I was engaged in a spatially interesting exchange, in which I was discussing the contents of a blog post with Chris, next to whom I sit here at SHU. The post, authored by Naomi Barnes, had been brought to my attention in a tweet by Aaron, made accessible by the url he had included. Knowing Chris’ interests and current writing, I thought the post might interest him, so I mentioned it … old school … the spoken word. He asked if I would forward the details, which I did by sending an email which contained only the url. Having read the post, he then tweeted his interest and thanked both me and Naomi … which then set in train a flurry of replies, retweets and likes. And one hat-tip.
Following the preceding post, I’ve dug a little deeper into sentiment viz to explore more carefully what it might offer in terms of revealing the emotional components within Twitter and tweets. As before, I used a chat hashtag as the search term and, perhaps unsurprisingly, got a similarly shaped visualisation which expressed sentiment as generally positive and somewhat relaxed. Probing a little further and clicking on a few individual circles provides the data which locates each tweet at its point on the chart. Here we see the overall sentiment rating expressed as ‘v’ for valence (how pleasant) and ‘a’ for arousal (how activated). Then there’s a breakdown of those words which contributed to that sentiment rating, with their individual scores. We therefore have multiple ways to compare the emotional content of one tweet with another, and can make a judgement on whether those ratings make sense – more of that later.
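The per-word scoring described above can be sketched as a simple lexicon lookup. To be clear, this is only an illustration of the valence/arousal idea in general, not sentiment viz’s actual word list or code, and the ratings below are invented for the example:

```python
# Minimal sketch of lexicon-based sentiment scoring: each recognised word
# carries a valence (v, how pleasant) and arousal (a, how activated) rating,
# and a tweet's overall score is the average over its rated words.
# The 1-9 scale and the values here are illustrative assumptions.

LEXICON = {
    "great":  {"v": 7.8, "a": 6.1},
    "thanks": {"v": 7.3, "a": 4.3},
    "boring": {"v": 2.9, "a": 2.6},
}

def tweet_sentiment(text):
    """Average the v/a ratings of any recognised words in the tweet."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    if not hits:
        return None  # no rated words, so no overall score
    v = sum(h["v"] for h in hits) / len(hits)
    a = sum(h["a"] for h in hits) / len(hits)
    return {"v": round(v, 2), "a": round(a, 2), "words": len(hits)}

print(tweet_sentiment("great chat thanks everyone"))
```

This also hints at why a rating might not ‘make sense’: only the words the lexicon recognises contribute, and context, irony and negation are invisible to an average.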
During my pilot studies, a couple of findings suggested areas for further exploration I’d not previously considered. One of these was the degree to which people talking or writing about Twitter seemed to be ‘affected.’ Although it was not a topic I had gone looking for, nor had asked questions about, and although people rarely mentioned it explicitly, the language and terms they used implied some element of emotional response. Before I could take this much further, I needed to return to the literature and see how people have discussed and/or researched the affective side of teacher learning.
As I was rewriting my ethics submission and reviewing the methods I had used in my pilot study, I got to thinking about being ‘in the field.’ When I undertook more formal participant observation and made meticulous field notes, I wondered how they might be viewed as data. I also found myself wondering about the process itself and how effective it was in providing me with something from which to make meaning. I was convinced that I wasn’t following the actor-network theory exhortation to ‘follow the actors.’ With all that in mind, I felt I needed a better way to record, trace out and make visible the paths I was taking whilst in the field. Who or what were the actors I was following? Where did they go and what did they do?
Although the starting point for most of these expeditions was Twitter, wanting to record where things flowed from there essentially meant capturing the sequence of hyperlinks or prompts … not unlike the instructions a sat nav system provides. It was important too to record the form and nature of those links, and details about the stopover places. Would there be destinations or endpoints in the paths traced, or would they simply be the places where particular expeditions came to a close? If this just meant capturing hyperlinks, then a bookmarking tool of some sort might do the job … and there are plenty of options from which to choose there. What I also wanted, however, was to:
- capture some sense of what was at each location; a snapshot if you will
- capture the whole set of interlinks, and be able to represent it as a pathway
- capture metadata which would later allow me to search, sort and filter the results
- present the results visually, allowing both an overview and the ability to drill down to the detail
Most of the contenders (like Diigo, Delicious, Symbaloo, ScoopIt, NetVibes) do one, two, or perhaps even three of these, but since none do them all, I was then faced with using two tools in concert to fulfil my needs list. That’s when things got complicated.
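That wishlist amounts to a fairly simple data structure: each ‘stop’ in an expedition needs a location, a snapshot note, some metadata, and a link back to the stop it was reached from. A minimal sketch, with field names and example urls entirely of my own invention rather than any bookmarking tool’s schema:

```python
# Hedged sketch of the wishlist as a data structure: a path through the
# field is a chain of stops, each recording where I was, what was there,
# tags for later filtering, and which stop the jump was made from.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Stop:
    url: str
    note: str                                  # snapshot: what was at this location
    tags: list = field(default_factory=list)   # metadata for search/sort/filter
    came_from: Optional[str] = None            # url of the previous stop in the path

path = [
    Stop("https://twitter.com/example/status/1", "starting tweet", ["twitter"]),
    Stop("https://example.org/blog-post", "linked blog post", ["blog"],
         came_from="https://twitter.com/example/status/1"),
]

# The pathway is then just the chain of came_from links:
print(" -> ".join(s.url for s in path))
```

Capturing this by hand is exactly the chore I was hoping a tool would automate.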
The really tricky bit comes in visually representing the paths taken; this began to lead towards a concept or mind mapping tool of some sort. Again there is a whole raft from which to choose, but crucially I didn’t want to have to build the map from scratch; much better, quicker and more accurate to have the process automated – clicking a button should capture all the aforementioned data and add a node to the map at the right point, but also link it with the node from which the jump was initiated. Perhaps there were some mind mapping tools which, in addition to visualising the data, could also capture the extra information I wanted? Errr … no. Although there are plenty to choose from, when you begin to dig down into their features, the field soon gets whittled down. Crucially, the missing link is in fact … the missing link, i.e. getting the data from a bookmarking application directly and dynamically into a mind map is distinctly non-trivial. One possibility, which only a scant few tools feature, is importing an xml file. Plenty export xml, but few offer the import option.
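To make the xml route concrete, here is a sketch of serialising captured stops as a simple xml file of nodes and links. The element and attribute names are generic placeholders of my own, not any particular mind mapping tool’s import schema:

```python
# Hedged sketch of the 'missing link': writing captured stops out as xml
# that an import-capable mapping tool could, in principle, consume.
import xml.etree.ElementTree as ET

stops = [
    {"id": "1", "url": "https://twitter.com/example/status/1", "parent": None},
    {"id": "2", "url": "https://example.org/blog-post", "parent": "1"},
]

root = ET.Element("map")
for s in stops:
    node = ET.SubElement(root, "node", id=s["id"], url=s["url"])
    if s["parent"]:
        # Record which node the jump was made from
        ET.SubElement(node, "link", to=s["parent"])

print(ET.tostring(root, encoding="unicode"))
```

Even then, each tool expects its own schema, which is precisely why the export/import mismatch whittles the field down so quickly.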
Whilst I was casting around for options, I came across a potential contender. The mind mapping application VUE (created by Tufts University) could be linked directly with Zotero, through the Zotero Firefox plugin. The good news is that I have the latter enabled already and am familiar with it; the less good news was twofold: firstly that VUE is an offline application you need to download and install, which for a bunch of reasons is less than optimal; and secondly that Zotero is, strictly speaking, a bibliographic application for managing references, rather than for bookmarking. Short on options, I decided to give it a shot. Whilst working at uni, and therefore within an enterprise environment with things generally locked down, it wasn’t going to happen, but at home I made much better progress. Download and install VUE – check. Create a new Zotero account so I don’t foul up my current references – check. Capture some typical data to make sure Zotero will work to manage bookmarks – check (even tweets can be captured!). Set up the link from my Zotero library into VUE – computer says no!
Unfortunately the straightforward instructions in this video were made before the architecture of Firefox changed, rendering the plugin which performs the setup redundant. As open source projects, VUE, Firefox and Zotero rely on volunteers to update applications to accommodate changes like this, and I guess the will, the expertise or the inclination was no longer there.
At that point, all was not entirely lost, however. VUE can be configured to generate and update mind maps from an incoming RSS feed, or to import a csv file, both of which Zotero can provide. I found that both work, although less than optimally. The RSS feed brings all the data in, and will update as the data in Zotero updates. Unfortunately some of the field metadata seems to get lost on the way, so turning the feed into a map doesn’t work quite so well, especially where the interconnections between nodes are concerned. The csv import is much better in this respect, though of course it won’t update automatically. And in fact neither method pulls across the ‘relations’ created between the sources in Zotero.
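The csv route amounts to reading the export and turning each row into a node record. A minimal sketch follows; the column names (“Key”, “Title”, “Url”) are my assumption about roughly what a Zotero csv export contains, so check your own file’s header, and note that the missing ‘relations’ mean the links between nodes would still have to be added by hand:

```python
# Hedged sketch: parse a Zotero-style csv export into node records that a
# mapping tool could consume. The sample data and column names are assumed.
import csv
import io

sample = io.StringIO(
    '"Key","Title","Url"\n'
    '"ABC123","A blog post","https://example.org/blog-post"\n'
    '"DEF456","A tweet","https://twitter.com/example/status/1"\n'
)

nodes = [
    {"id": row["Key"], "label": row["Title"], "url": row["Url"]}
    for row in csv.DictReader(sample)
]
print(len(nodes), nodes[0]["label"])
```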
It appears then that there is no ideal solution and that if I want a visual representation of my activity in the field, then I’m going to need to generate it manually, from data I’ve captured and stored in Zotero. When recounting this tale to Chris, a fellow student, his first observation was ‘Well, what have you learned?’ There’s no doubt I’m now more familiar with VUE and can see how powerful it can potentially be in helping to manipulate, filter, sort and visualise data. The process you go through in doing that becomes an integral part of your analysis. Whilst the technical issues mean I probably won’t use VUE, that principle of analysing and interpreting as a function of constructing a visualisation seems to have some merit. It’s that principle, I think, that I shall take forward as I attempt to map the field.
Yesterday the ESRC Festival of Social Science came to town; well OK, it’s been running for a couple of days now, but yesterday brought the first events that I attended.
What can data visualisation do?
At the Showroom, this event arranged by The University of Sheffield provided four provocations on different topics, followed by a panel discussion of some of the issues raised. Alan Smith opened the batting by making ‘The Case for Charts,’ questioning the way charts are so often used simply to break up blocks of text.
Using a bar chart from a UNESCO report entitled “Gender Parity Index …,” typical of the kinds of charts we often encounter in reports of this nature, Alan showed how it could be quickly amended to improve accessibility and ease of interpretation, and simply allow the data to tell a more powerful story. A few alterations meant that, instead of being relegated to the tenth page, the chart could foreground the research from the front page; in essence, ‘start with a chart.’ He took us back to first principles and used Anscombe’s Quartet to illustrate why we need to use charts with care, and how a well-designed, carefully chosen chart can obviate the need for swathes of text.
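The Anscombe’s Quartet point is easy to verify: the four datasets (values from Anscombe’s 1973 paper) share near-identical summary statistics yet plot completely differently, which is exactly why a chart can reveal what a table of summaries hides. Two of the four:

```python
# Datasets I and II of Anscombe's Quartet share the same x values and
# near-identical means, yet one is roughly linear and the other a curve -
# a difference only a chart makes visible.
from statistics import mean

x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

print(round(mean(y1), 2), round(mean(y2), 2))  # both 7.5
```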
Next up was Thomas Clever, who offered the contention that ‘data visualisation was dead,’ of course qualifying that by showing how Big Data has disappeared from the downward slope of the Gartner Hype Cycle, but has actually become embedded within many other aspects of our everyday lives. We’re now becoming accustomed to being presented with visualisations in the media, through politics, at work and of course in advertising, although we’re perhaps not yet sufficiently sophisticated in the way we interpret and interrogate the data. It’s also becoming less visible and less controllable – MacData perhaps?
After the break the new presenters were charged with taking us from the general to the more specific. Valentina D’Efilippo described four of the projects with which she had recently been involved; there was even one I was familiar with – ‘Field of Commemoration.’ Though I would love to provide an image, I can’t find one with the right licensing, but I’d encourage you to try out the interactive version at Poppy Fields, commissioned as part of the Great War Centenary commemorations.
The final session was from William Allen at The Migration Observatory, who illustrated how data on a particularly hot topic needed to be portrayed in as accessible a way as possible, for a multitude of audiences. The data at the Observatory can be displayed in any one of a number of ways the user chooses; however, despite the compelling messages it delivers, Will was dismayed by the extent to which people were unwilling to accept the evidence in front of them. Even an impressive visualisation might struggle to unseat deeply held beliefs.
In the closing session hosted by Andy Kirk, the panel discussed what data visualisations aren’t (they were all unequivocal that infographics aren’t data visualisations – learned something new there), how they can help with memorability (making information stick), that good visualisations evoke feelings as well as facilitating understanding, but that the audience for visualisations isn’t yet sufficiently critical – ought data literacy to have a place in schools? (I offered an opinion of course!)
A couple of things struck me. Firstly that there are those in social media circles who are highly critical and dismissive when people make claims about ‘preparing today’s students for jobs which don’t exist yet.’ The four presenters today had titles which meant they could be considered ‘data visualisation specialists.’ I’m pretty sure that option wasn’t on the list of careers I could have chosen from, or even a potential career for the last cohort of students I taught. Secondly, how refreshing it was to see PowerPoint being used by people with more than a little understanding of the elements of design. Instead of delivering information, that much maligned tool was helping to tell a story; a lesson for us all there perhaps?
Foundational 21st Century Literacies
This session was perhaps targeted more precisely at educators, outlining ‘The Street,’ a project aimed at developing the literacy skills of primary aged pupils. This was a collaborative venture between Sheffield Hallam University and teachers in a couple of local primary schools (Will Baker, Adam Bamber and Dan Power). The session was facilitated by Professor Cathy Burnett and Professor Guy Merchant, but I’m sure they’d concede that the stars were the primary teachers involved in the project, who conveyed their enthusiasm with such aplomb.
The premise hinged on teachers undertaking a collaborative project across two schools, but one in which their pupils would also collaborate, as they developed their literacy skills through the medium of a blog. ‘The Street’ was conceived by the teachers themselves, simply as an imaginary location from which a story would unfold; a story crafted by the students themselves. Rudimentary information in the form of a brief audio soundscape was given to the pupils, who could then decide what they thought the sounds evoked, in the context of a street. Working in groups, they then posted their thoughts to a blog, shared between the two schools. With a few occasional prompts to ensure the unfolding story addressed the teachers’ learning intentions, the pupils had free rein over which directions their story should take. The sense of empowerment and agency was one of the most powerful elements to arise from the project, together with how inclusive it had been in drawing in pupils who might normally be considered reluctant writers. This and many other positive aspects were used as a hook to encourage teachers in the audience from other schools to consider becoming involved in extending the project; as small clusters collaborating on their own projects, rather than extending ‘The Street’ to the point where it might become unmanageable.
In the discursive session where, in small groups, we were encouraged to consider the project from our perspectives, and in the plenary which followed, a few things piqued my interest. One was that there are still teachers unaware of what ‘green-screening’ entails and how it might be used in an educational context. I’m not sure why that surprised me; it is after all only a couple of months since I was working with teachers full time! It was somewhat disheartening to hear how poorly provided for some primary schools are in terms of technology – one school of over 500 pupils had nothing more than a single IT suite and sixteen iPads. That must be so tough for any teachers there who have the will and desire to push what they’re doing with digital technologies and media, yet don’t have the wherewithal to follow that passion. It is clearly too long since I was in our state system, but if that’s what folks are having to cope with these days (amongst other pressures!), I’m not disappointed to be out of it. I was heartened though that those teachers involved in the start of the project were all Twitter users, didn’t make a big deal of it and acted as though it was a natural part of their practice. Perhaps there is hope after all.