Wednesday 23 March 2011

Comment moderation - the Rugby Poetry rule

I have, at times, been less than complimentary about certain types of forum users. I know I'm not alone in this as there are various index, category, definition and analysis pieces to be found online which look in detail at the psychology of the WUMs and trolls - and if I didn't point you at the excellent Flame Warriors site right now, it would be a grievous omission.

However, sometimes those who live in the spaces between the 0s and 1s add a little sunshine to my day. This was one of those days...
Thank you, Ruggerpoet and Anonymous:



Achtung Troll! Image via Wikipedia

Sunday 6 March 2011

In praise of Pixelpipe (Updated post on Cardiff industrial estate fire)

Explosion at an industrial site off Rover Way in Cardiff. Locals said a gas pipe blew up about 8.30am, but it was still blazing away when I drove round there. Lots of smoke - and lots of people watching it burn.

UPDATE... (6/3/11)
Nipped out to buy the papers this morning and saw a black plume of smoke rising from the city centre. Two minutes later the paper-mission was abandoned and husband was driving towards said smoke.
We found it, along with around 30 other people who'd gone to see what was happening. One told me he lived nearby and had been woken by an explosion at around 8.30am. The WalesOnline report is here.
The first photo was taken with an iPhone 3 on zoom, the second with a Nokia N86 - personally, I think the iPhone image is the more striking.
I was uploading pix using Twitpic, and it ate three of the four I sent. So I switched to my Posterous site, and that's where my loyalties will lie in future - Twitpic has let me down three times now, and it's only force of habit that kept me using it. Enough. I've deleted it from my phone.

Fire in Cardiff

I also tried to shoot some video with the Nokia, uploading it to my YouTube channel via Pixelpipe, but the fire was a bit far away and I was being jostled by the crowd that had gathered, so it really wasn't worth it.
However, crappy video aside, it brought home to me again just how useful Pixelpipe is to a journalist on the go, without recourse to any kit other than a phone. I could have potentially done photos, video and audio and uploaded them all via the Pixelpipe Share app on my phone. Really useful for when you're in a hurry - or for when you need to get media off your phone to somewhere it can be accessed by another person quickly.
I would have loved to have tried out the N8 I now have for work, but it has so far thwarted my attempts to get Pixelpipe to work; I think it's more to do with the contract than the phone, but it needs further investigation. An awesome camera on a phone is a fine thing, but the ability to get the photos out to the wider world is a finer one.






Enhanced by Zemanta

Friday 4 March 2011

Journalism 101: Your readers are the toughest sub editors ever

As a journalist, you should never underestimate the smartness, recall and grammatical abilities of your reader.
When the subject matter is a niche interest, that warning goes double, as the Guardian discovered (*links to the two articles are at the foot of this post):



and, just to compound the fun, someone else spotted this:





Whatever your profession, you can't be an expert in everything (as a journalist, some knowledge, an enquiring mind and a willingness to bother those with the expertise are probably what many of us would admit to).
But you can be either a) more honest or b) more devious when it comes to recycling: you can link to your sources, or at least reference them in articles, and make transparency and pathways a virtue; or you can change the words so it reads differently. At the very least you will have made some effort.

But a word of warning on the latter choice - I know of a few instances when articles had to be purged from electronic archives because one elderly (and subsequently corrected) error kept being repeated. This was, of course, because journos working on the ongoing story - maybe years later - were cutting and pasting bits of the original.


Incidentally, one of the best things about writing this (not-very-illuminating) blog post was that Zemanta threw me a fantastic website as a link option.
It's called 43things.com and is a space for people to list their life goals. The link in question was "Quote Roy Batty's dying speech at most social functions or awkward moments at least once". (Yes, that's their life goal: not 'be a good parent' or 'scuba dive on the Barrier Reef'; just to shoehorn a quote from a character in a 1980s movie into an inappropriate scenario.)
So there's a list of people explaining how they movingly incorporated "C-beams glitter in the dark near the Tannhauser gate" into a eulogy.
As one respondent who managed to achieve that particular life goal pointed out: "AWESOME SAUCE!"

*The link to the Guardian article is below, with the rest of Zemanta's suggestions, if you want to read the full story of how remaking/reimagining/making a prequel to Blade Runner has caused such consternation. The article referred to in the comment is http://www.guardian.co.uk/science/2004/aug/26/sciencenews.sciencefictionspecial


Rick Deckard - image via Wikipedia
Enhanced by Zemanta

More ripples in the Twitter API clampdown

Interesting email from the 140kit team, not least because I didn't realise TwapperKeeper - where you can archive, export and download your own tweets - was affected.
However, there are still good people out there; 140kit has come up with a workaround that satisfies Twitter's new guidelines and helps non-coders access once-freely available data:
"...we plan on re-structuring this system to a point where it is trivial to download a scratch copy of our service, test one’s own analytics locally, then send the analytical process to the site for vetting, which would be a simple process. If the language you work with isn’t included in our system yet, we’ll add it. If you don’t know how to code, tell us the general algorithm and we’ll code it if we have the time and resources."

The email below explains it in more detail but I was particularly struck by the last few pars on why 140kit was established:

"[we] realized that if we generalized the process of data collection and analysis, we could open the door to doing very meaningful comparative analysis of datasets, which in turn could help us actually figure out A. If Twitter matters, B. If it does, what its impacts are, and C. What this implies for the internet and social networks as a whole. We have never been in this for money - we have never looked for funding, this has never been our job, and our systems were given to us by the Web Ecology Project and are hosted at Harvard’s Berkman Center for Internet and Society. We have one machine we pay for, which in May will be coming out of our own pockets (the machine was purchased for a year as part of a class Ian and I slapped together at Bennington College). We are solely interested in the data and its implications, and this is a labor of love. We are more than happy to continue on this project"
 
Cool people.
 
 
---------- Forwarded message ----------
From: 140Kit Team 
Date: Fri, Mar 4, 2011 at 12:29 AM
Subject: 140kit: Regarding Twitter's API Change

Hello,

You’re receiving this e-mail because you signed up for our service, 140kit, sometime in the last 8 months. We are writing you to inform you about the current state of data exports, as well as our solution to the problem currently being presented. 

A few weeks ago, Twitter caused some news by publicly stating that no more whitelisted IPs would be granted for any purposes - this essentially ends any REST based data collection for new researchers (doing collections of tweets based on User names, for instance, requires this access). Within a few days, they also sent a letter to TwapperKeeper, another major data collector, which compelled their leadership to turn off all export services as of March 20th. The same has basically happened for all other collectors, including ours. In short, the time where a researcher could export a full, unfiltered, unadulterated dataset, is completely over. 

The particular section of the TOS that is violated by export clearly states (Section I.4.a., at http://bit.ly/9LD7XQ): 

I. Access to Twitter Content

4. You will not attempt or encourage others to:

a. sell, rent, lease, sublicense, redistribute, or syndicate the Twitter API or Twitter Content to any   third party for such party to develop additional products or services without prior written approval from Twitter;

Where Twitter Content is defined as: All use of the Twitter API and content, documentation, code, and related materials made available to you on or through Twitter

Meaning that 140kit, as a service, cannot provide the datasets wholesale, where they use products/services basically to mean anything, even academic reports. For many of our users, this effectively shuts them out of the ability to research the platform. If one doesn’t know how to code, it’s very difficult to do this alone - this problem is compounded when you don’t have the access levels needed to research a given subject. We at 140kit have more than enough access, however, and still retain the right to keep our data, so we came up with a novel solution, which Twitter has agreed to. 

On our site, we have a library of analytical process, which in turn have their own online viewers, and a few of which contain their own exports. All of our services, from CSV export to gender analysis, runs via a modular library of analytics which have their own administrative structure. We built this system with a view that someday, we would open up our system for researchers to build out their own analytics, add them to our site, and all researchers would have access to these processes as well. We wrote our project in Ruby, but want to make this plugin system work with any language, which should actually be quite easy. 

Over the next few months, then, we plan on re-structuring this system to a point where it is trivial to download a scratch copy of our service, test one’s own analytics locally, then send the analytical process to the site for vetting, which would be a simple process. If the language you work with isn’t included in our system yet, we’ll add it. If you don’t know how to code, tell us the general algorithm and we’ll code it if we have the time and resources. 

In this way, as the library increases, we will be able to answer more of the most core questions researchers are interested in, and at a certain threshold, all the important questions will have their analysis on the site already. Since we can keep our data, we would be able to re-calculate analysis on any previous dataset. In short, we can’t give you the exports of data, but we can answer any question you want answered. It’s not the best solution, but it will save many projects from the grief of doing this alone.

This project was started in October 2009, between two people, myself (Devin Gaffney) and Ian Pearce. We were profoundly interested in analysis I was doing about the Iran Election, and realized that if we generalized the process of data collection and analysis, we could open the door to doing very meaningful comparative analysis of datasets, which in turn could help us actually figure out A. If Twitter matters, B. If it does, what its impacts are, and C. What this implies for the internet and social networks as a whole. We have never been in this for money - we have never looked for funding, this has never been our job, and our systems were given to us by the Web Ecology Project and are hosted at Harvard’s Berkman Center for Internet and Society. We have one machine we pay for, which in May will be coming out of our own pockets (the machine was purchased for a year as part of a class Ian and I slapped together at Bennington College). We are solely interested in the data and its implications, and this is a labor of love. We are more than happy to continue on this project, and are glad you have used our service. Our hope is to be more on the ball with tickets, issues, and other problems as we go through this re-structuring, and come out of this making analysis even easier for people. Thank you for reading this admittedly long e-mail - A more full description of the current situation is located on our front page currently, if you need any more details. For any other questions, feel free to personally reach out to us or contact us via this email account.



Read the full report here: http://bit.ly/ddarvF


Thanks much, 


Devin Gaffney and Ian Pearce
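The modular set-up the email describes is easy enough to picture in code. Here's a minimal, entirely hypothetical sketch in Ruby (the language the team says 140kit is written in) of how a registry of pluggable analytics might return aggregate results instead of raw tweet exports - none of these class, method or analytic names come from 140kit's actual codebase:

```ruby
# Hypothetical sketch only - not 140kit's real code. The idea: analytics
# register themselves in a shared library, and users receive the *result*
# of a vetted analytic rather than the raw dataset.
module Analytics
  REGISTRY = {}

  # Each analytic registers itself under a name; a newly vetted process
  # (in whatever language, wrapped appropriately) would be added the same way.
  def self.register(name, &block)
    REGISTRY[name] = block
  end

  # Run a named analytic over a dataset (here, an array of tweet hashes)
  # and return its output - the raw tweets themselves are never exported.
  def self.run(name, tweets)
    REGISTRY.fetch(name).call(tweets)
  end
end

# Example analytic: count tweets per user, the kind of aggregate result
# that can be shared when full exports cannot.
Analytics.register(:tweets_per_user) do |tweets|
  tweets.each_with_object(Hash.new(0)) { |t, counts| counts[t[:user]] += 1 }
end

tweets = [
  { user: "alice", text: "hello" },
  { user: "bob",   text: "hi" },
  { user: "alice", text: "again" },
]

p Analytics.run(:tweets_per_user, tweets)
```

The appeal of the design, as I read the email, is that the data never leaves the service; only the answer to a vetted question does.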


Wednesday 2 March 2011

Newspaper choices: Build paywalls or build bridges with audiences?

I see another regional title is going to have a bash at paywalling content. Yes, I did just verb 'paywall', but if that was the only thing that bothered you about the opening sentence then well done, you're an excellent sub. Now go correct lolcat spelling or something.

But kudos to the Wolverhampton Express & Star for marking, to the very month, the anniversary of Johnston Press giving a midnight burial to its paywall scheme...by launching a paywall scheme.
The axed Johnston Press plan, which saw a £5 subscription levied on audiences who wanted to read stories on some local sites (the experiment was not rolled out across all group sites), was done away with in March 2010, although JP declined to talk about the whys and wherefores.
I’d imagine the why was because no one paid a bean towards looking at the content, but that’s just me speculating.