Is this BIG DATA which I see before me?



DESCRIPTION

Given for an OCLC member symposium on shared data, May 31, 2013.

TRANSCRIPT

Page 1: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

Photo: Stefan Insam, “Deadly carvings” CC-BY-SA http://www.flickr.com/photos/ramsesoriginal/6652582259/

So, hi, I’m Dorothea Salo from the School of Library and Information Studies at the University of Wisconsin at Madison, and the first thing I’m going to do is apologize for the talk title in the day’s agenda, which is a horrific MISquotation of Shakespeare’s Macbeth. Totally my fault, not Eric’s or OCLC’s, sorry about that, it’s correct on the slide!

So, Big Data.

Page 2: Is this BIG DATA which I see before me?

VOLUME

VELOCITY

VARIETY

This is a well-known characterization of “big data”: volume, velocity, and variety. Big VOLUMES of data are probably the first thing to spring to mind when somebody says “big data” -- don’t think I need to explain that -- but size is not everything! (CLICK) VELOCITY matters too: how fast do these data pile up? How fast do they need to be cleaned up and used? How fast does interaction with the data need to be? How easy is it to get data where they’re going, in the form they need to be in?

(CLICK) And that gets to the third vee, VARIETY. From a computational perspective -- and computers are notoriously persnickety and dumb about this -- how clean are the data in the first place? How much effort does it take to clean them, and how much of that effort can be automated? Note that high variety is not a good thing! Ideal data for analysis are clean and CONSISTENT (this one’s important!), easy to understand, and simple to mess around with computationally. In the real world, though, big data tends to mean more variety than anybody wants. So a bit of hope here for libraries as we struggle with variety in our data: it’s not just us! We’re not alone!
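(If you want that cleanup-automation point made concrete: here’s a minimal Python sketch, with entirely invented date strings, of the kind of normalization code variety forces you to write. Every new data source tends to mean one more format bolted onto the list.)

    from datetime import datetime

    # Hypothetical: the same publication date, recorded three ways by three sources.
    raw_dates = ["2013-05-31", "05/31/2013", "May 31, 2013"]

    # Variety in a nutshell: this list only ever grows.
    KNOWN_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%B %d, %Y"]

    def normalize(value):
        """Return an ISO 8601 date string, or None if no known format matches."""
        for fmt in KNOWN_FORMATS:
            try:
                return datetime.strptime(value, fmt).date().isoformat()
            except ValueError:
                continue
        return None  # unparseable: a human has to look at it

    print([normalize(d) for d in raw_dates])
    # -> ['2013-05-31', '2013-05-31', '2013-05-31']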

So keep these vees in mind as I go on talking. None of them is more important than the others; they all factor into making the best use of Big Data.

Page 3: Is this BIG DATA which I see before me?

Where they most breed and haunt...

Photo: CERN, “The Large Hadron Collider/ATLAS at CERN” CC-BY http://www.flickr.com/photos/11304375@N07/2046228644/

http://gigaom.com/2013/04/04/why-facebook-home-bothers-me-it-destroys-any-notion-of-privacy/

So where’s big data? It’s everywhere. It’s in science -- oops, the Large Hadron Collider twitched, that’s another petabyte. It’s on the web, of course, from Google to Facebook to Amazon.

Page 4: Is this BIG DATA which I see before me?

Why, I can buy me twenty at any market...

http://www.mckinsey.com/insights/business_technology/big_data_the_next_frontier_for_innovation

What need we fear who knows it, when none can call our power to account?

Regalado and Leber, MIT Technology Review, http://www.technologyreview.com/news/514386/intel-fuels-a-rebellion-around-your-data/

And even beyond the online giants, Big Data has hit business, where the hype cycle is highest, and where “big data” seems to mean something like “anything we can collect about our customers or users and their behavior to correlate with other companies’ data in flagrant violation of any notion of privacy.” And I think it’s important to watch how that debate evolves, as academe and its libraries keep getting told “behave like a business!” and businesses keep behaving so horrendously.

The top quote, incidentally, is said by Lady Macduff, and it’s about husband acquisition. That Lady Macduff, business genius for our time!

Page 5: Is this BIG DATA which I see before me?

But in these cases
We still have judgment here; that we but teach
Bloody instructions, which, being taught, return
To plague the inventor:

Examples via: Inside Higher Ed, http://insidehighered.com/

Big data is in education, who knew? And we in academic libraries should be watching this, as well as folk who have served on IRBs, because it’s troubling from a student-privacy perspective and I don’t know who has more authority in academe to speak truth to power about privacy than academic librarians.

Page 6: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

Photo: Stefan Insam, “Deadly carvings” CC-BY-SA http://www.flickr.com/photos/ramsesoriginal/6652582259/

So, of course libraries have data, and we use data in decisionmaking, in asserting our value, in collection-development and service decisions, and so on. All I need to do is say “LibQual,” right? The question I was asked to address today, though, is whether libraries have, or will have, “Big Data.”

Page 7: Is this BIG DATA which I see before me?

Your face, my thane, is as a book where men
May read strange matters.

BIG DATA IN LIBRARIES?

Kind gentlemen, your pains
Are register’d where every day I turn
The leaf to read them.

And I have several answers to that question.

- YES, libraries have big data. Of course we do.
- YES, libraries have or could have big data, BUT its collection or use is somehow problematic.
- NO, sometimes what libraries have there isn’t big data. It might be big, it might be important, but it’s not actually data, and that is often problematic.
- Some library data could be big data, but NOT YET it’s not.
- And finally... Big Data, SIGH. We could have big data and it’d be super-cool if we did, but something completely unnecessary is in the way.

Page 8: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

YES. Do we really even need to ask?

So some of you are looking at me right now all like “what a dumb question! Of course libraries have Big Data, where have you been for the last twenty years?!”

Page 9: Is this BIG DATA which I see before me?

5 billion web-archive files

LIBRARY OF CONGRESS

50 billion tweets

5 million newspaper pages

(x)00,000 e-journal articles

digital audio, video, etc.

Leslie Johnston, Library of Congress, as reported by Lorcan Dempsey in “Big data... big trend,” http://orweblog.oclc.org/archives/002196.html

And you know, those people are quite right. National libraries and some major research libraries have been in the big-volume data game for some time because of digitization, and more recently, conscious collection of large volumes of born-digital materials. Here’s what Leslie Johnston claimed last year the Library of Congress is hanging onto digitally: five million newspaper pages, some hundreds of thousands of e-journal articles, five billion web-archive files, scads of digital audio and video, and what by now is probably close to if not more than a hundred billion tweets.

Interestingly, I’ve seen news stories that hint that the Library of Congress’s Twitter database is running into a serious velocity problem! They have all the tweets, just not the computational power to let researchers or anybody else DO anything with them. And it’s too big a dataset to be downloadable, so the combination of high volume and a hoped-for high velocity is pretty deadly.

Page 10: Is this BIG DATA which I see before me?

HATHI TRUST

So we’re all familiar with this page by now; in fact, a lot of the institutions represented in this room are Hathi Trust members. It’s worth remembering that Hathi Trust came about in order to solve a classic big data volume problem: where the heck to PUT all those page scans and OCRed texts from the Google Books project!

And as Hathi grows and changes, we see people tackling more problems that would sound really familiar to a big-data analyst in business or a so-called data scientist: what can we find out from this gigantic pile of bits? How do we best clean up the OCR so that linguistic and literary analysis is reliable, and how do we deal with language variation over time?

And I have to tell you, as a historical-linguist-in-a-past-life and a sometime computer programmer, a lot of the analyses I see Ph.D.s proudly trotting out these days are pretty weak. I don’t just mean “the digital humanities,” either, though there’s plenty of eye-rolly work there -- that “culturomics” stuff coming out of Google’s comp-sci people draws some pretty obviously overbroad conclusions from a failure to consider the limitations of its evidence base. But, you know, there’s a lesson in that: with big data, we’re all learning by doing. We’ll get better at it; just give us time, and room to monkey around.

Page 11: Is this BIG DATA which I see before me?

Threescore and ten I can remember well:
Within the volume of which time I have seen
Hours dreadful and things strange...

Sarah Jones (Digital Curation Centre, University of Glasgow), “Developments in Research Funder Data Policy,” International Journal of Digital Curation, Volume 7, Issue 1 (2012), 114–125. doi:10.2218/ijdc.v7i1.219

Abstract: This paper reviews developments in funders’ data management and sharing policies, and explores the extent to which they have affected practice. The Digital Curation Centre has been monitoring UK research funders’ data policies since 2008 (see http://www.dcc.ac.uk/resources/policy-and-legal/funders-data-policies). There have been significant developments in subsequent years, most notably the joint Research Councils UK’s Common Principles on Data Policy and the Engineering and Physical Sciences Research Council’s Policy Framework on Research Data. This paper charts these changes and highlights shifting emphases in the policies. Institutional data policies and infrastructure are increasingly being developed as a result of these changes. While action is clearly being taken, questions remain about whether the changes are affecting practice on the ground.

So yeah, libraries COLLECTIVELY have big data and have had it for a long time! not new at all. What’s changing is that INDIVIDUAL libraries are starting to run into high-volume and high-variety data problems. In academic libraries, for example, faculty are starting to look to us to help with research-data management. Some digital libraries are seriously getting into targeted web archiving, too.

And here’s where I go all finger-shaky at us: right now, in May twenty-thirteen, most of us are not investing NEARLY enough in computing infrastructure and development to be able to keep up well. We heard this morning from Sarah Pritchard that data management and curation is a thing in research institutions; I’m here to tell you that the opportunity for libraries to stake a claim to research-data management and archiving in particular is a TIME-LIMITED one. If academic libraries don’t prove we can help -- and that means a lot more than putting together a committee or hiring one person -- researchers SHOULD and WILL go elsewhere.

So we can have Big Data... but only if we decide we want it badly enough.

Page 12: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

YES... BUT.

Page 13: Is this BIG DATA which I see before me?

So if you weren’t watching, you missed this one: Harvard Library for a very brief time piloted a service called Library Hose that tweeted the titles of books that had been checked out of the library, shortly after that checkout. Eyes were rolled, fusses were fussed, and the Library Hose was shut down, because honestly, it’s kind of a bad idea. But that’s only a funny example of extremely serious questions about ethical uses of the data that libraries could and sometimes do collect about patrons, individually and in aggregate, on-purpose and inadvertently: search data, patron-computer-use data, patron-behavior data.

And we discussed this earlier in the Q&A, but in my mind at least, this one’s easy. We want to differentiate ourselves from Google, our search competitor? We want to differentiate ourselves from Facebook, our social-activity competitor? We want to differentiate ourselves from Amazon, our content-purveying competitor? Easy. WE DO NOT SELL OUT OUR PATRONS THROUGH THEIR DATA. EVER. FOR ANY REASON. Even if they invite us to. No matter how tempting it is, how many nifty things we could build, or how hard our patrons push us to do things that we KNOW could turn around and bite them, in this age of increased surveillance from government and business and black-hat hackers everywhere. “Political problems,” rather than technical ones, yeah, sure, but you can’t just wish political problems away. I’m avoiding the obvious cheap shot here out of respect for the dead, but I’m sure all of you can fill it in for me. In lieu of that, I’ll just say that AOL and Netflix both learned really quickly that “sanitizing” data doesn’t, and “deidentified” data isn’t.

We don’t sell out our patrons. We just don’t. That’s our first requirement whenever we talk about using or even KEEPING certain kinds of patron data, or patron-traceable data. And the only way to keep data safe is often to destroy it or refuse to keep it in the first place. Fact of the computing life. MOVING ON...

Page 14: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

NO. This isn’t even data.

This is a subtle point, but one that governments are particularly struggling with as OPEN data becomes a thing for them: it’s possible to turn data into something that looks like data but isn’t. Which often defeats the purpose of collecting or sharing the data in the first place. Does this happen in libraries? You betcha. And often, it happens with exactly the kind of data we’ve been discussing today.

Page 15: Is this BIG DATA which I see before me?

...thereby shall we shadow
The numbers of our host and make discovery
Err in report of us.

So, this web page I’ve taken a screenshot of here, a sort of library-activity infographic thinger, is brilliant and I love it. When it made the rounds of my online librarian friends, there was a chorus of I WANT MY LIBRARY TO DO THIS.

But it’s not data. There’s data underneath it somewhere, but as presented, this is not data. It could be -- and if we could collect this information from libraries all over the place, it could even be BIG data -- but it’s not. The problem is that third vee, variety. If I wanted to compute on these numbers, I’d have to grab the HTML and laboriously write code to extract the numbers from it, and as soon as Traverse Area District Library changes their content-management system or does a redesign, my code breaks. Multiply this by all the libraries in all the cities and towns in all the states everywhere, and you see the problem.
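(For the non-programmers in the room, here’s roughly what that scraping code looks like, sketched in Python with BeautifulSoup. The URL, the tag structure, and the class names are pure guesswork on my part -- which is exactly the point: every library’s page would need its own guesswork, and a redesign invalidates all of it.)

    import re
    import urllib.request

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    # Hypothetical markup: guessing each statistic sits in a <div class="stat">
    # holding a <span class="stat-label"> and a <span class="stat-number">.
    html = urllib.request.urlopen("https://example.org/library-stats").read()
    soup = BeautifulSoup(html, "html.parser")

    stats = {}
    for block in soup.select("div.stat"):
        label = block.select_one(".stat-label").get_text(strip=True)
        number = block.select_one(".stat-number").get_text(strip=True)
        stats[label] = int(re.sub(r"[^\d]", "", number))  # "1,234" -> 1234

    # Works until the next redesign renames a class; then it breaks, silently.
    print(stats)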

So, acknowledging that qualitative data is often-though-not-always an exception to this rule, take this rule away with you: *if it’s not computable, it’s not data.* Big or otherwise. Libraries have treated the computability of the data we create and collect as a low-priority consideration for far too long.

Page 16: Is this BIG DATA which I see before me?

Photo: Luz, “amber” http://www.flickr.com/photos/nieve44/3800137286/ CC-BY

up, up, and see
The great doom’s image!

Making an infographic or a pie chart or an HTML data table takes pieces of the data -- usually not even all of them -- and reduces them to something that tells a story, because graphs and charts and tables almost always tell stories much better than the raw data do.

So a graph or a table or a chart or an infographic is data trapped in amber. It’s very beautiful, and human beings appreciate that beauty, BUT... you can’t get those little particles of data back out, much less do anything useful with them if you did! They’re just not computable any more. You’ve doomed your data!

Any data you’re putting out there in PDFs, incidentally? It’s not data any more! Stop that! We in libraries should be setting the example here! And we should lean on our vendors about this, too. There’s just no point in them providing data that we can’t use for our purposes.

Page 17: Is this BIG DATA which I see before me?

The sacred storehouse of his predecessors,
And guardian of their bones.

Image: http://library.music.indiana.edu/tech_s/manuals/training/marc/record1.html

Which brings me to the skeleton in the closet (speaking of bones): MARC. If I had a nickel for every cataloger who’s asked me what the problem is with MARC and AACR2 and ISBD, I would never need to work a day in my life again.

Here’s the problem in a nutshell, and it’s not news, because Kim alluded to it earlier with respect to harmonizing serials holdings in the CIC. The records we put into our library catalogs are marginally computable at best. If you don’t believe me, ask any programmer anywhere who’s worked with MARC records. And you heard Kim talk about Google Books and library metadata -- look, Google has the smartest engineers anywhere; if THEY can’t compute on our data, it’s NOT computable. That uncomputability is costing us untold amounts of money in systems and cleanup programmers, not to mention mindshare on the larger information web that libraries are only a part of. We have GOT to do better.

Another aspect of the MARC problem gets back to the third vee I talked about, “variety.” Local practice, rule interpretations and other changes over time that don’t get retroactively fixed in old records, places where AACR2 just throws up its hands and says “as long as it’s human-readable, do what you want” -- all this INCREASES the variety in our catalog records, which DECREASES their computability and reuse value. Whatever happens with RDA and BIBFRAME and similar efforts, if we end up with yet another sloppy tower of Babel, it’s not solving the problems we have.
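(A small illustration of what that variety costs, sketched with pymarc, a real Python MARC library; the file name here is hypothetical. Just to find series information, code has to check three different fields -- 440, obsolete but still sitting in legacy records, plus 490 and 830 -- and then strip whatever ISBD punctuation happens to be trailing along.)

    from pymarc import MARCReader  # pip install pymarc

    # Series information can live in any of three fields, depending on
    # cataloging era and local practice: 440 (obsolete), 490, and 830.
    SERIES_TAGS = ("440", "490", "830")

    with open("records.mrc", "rb") as fh:  # hypothetical file of MARC21 records
        for record in MARCReader(fh):
            if record is None:  # pymarc yields None for records it can't parse
                continue
            series = []
            for tag in SERIES_TAGS:
                for field in record.get_fields(tag):
                    if field["a"]:
                        # rstrip is crude; real cleanup code grows and grows.
                        series.append(field["a"].rstrip(" ;:/."))
            title_field = record["245"]
            print(title_field["a"] if title_field else "(no 245!)", series)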

Cataloging for your users -- COMPUTERS, THEIR PROGRAMMERS, AND THEIR USERS *ARE* YOUR USERS.

Page 18: Is this BIG DATA which I see before me?

Strange things I have in head, that will to hand;
Which must be acted ere they may be scann’d.

Photo: Mike Linksvayer, “dsc02977.jpg” http://www.flickr.com/photos/mlinksva/2254052444/ CC-BY

Digital librarians, among whom I include myself -- come on, we know we’re not off the hook here! I ran institutional repositories for six years, I got an entire ARTICLE out of one authority-control mishap where one author had eight different name variants in the IR. Our data isn’t clean and consistent. It isn’t computable, and it can’t be aggregated usefully or consistently. Let’s not pretend!

What we can do, though, is watch the Big Data pioneers and the techniques they use to cut through the chaos. Natural-language processing. Fuzzy matching. If you haven’t played with OpenRefine, which used to be Google Refine, you really need to grab some random data from your catalog or digital library or wherever and do exactly that; it’s genuinely fun, if only so that you see what the possibilities are.
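(OpenRefine’s simplest trick, “fingerprint” key-collision clustering, is easy enough to sketch in plain Python. The name strings below are invented stand-ins for the kind of variants my repository actually accumulated.)

    import unicodedata
    from collections import defaultdict

    def fingerprint(name):
        """OpenRefine-style key: strip accents, lowercase, keep only word
        tokens, then dedupe and sort them, so reorderings and stray
        punctuation collide onto the same key."""
        ascii_name = (
            unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
        )
        tokens = "".join(c if c.isalnum() else " " for c in ascii_name.lower()).split()
        return " ".join(sorted(set(tokens)))

    # Invented author strings of the kind an institutional repository collects.
    variants = [
        "Salo, Dorothea", "Dorothea Salo", "SALO, DOROTHEA",
        "Salo, Dorothea.", "salo dorothea", "Salo, D.",
    ]

    clusters = defaultdict(list)
    for v in variants:
        clusters[fingerprint(v)].append(v)

    for key, members in clusters.items():
        print(key, "->", members)
    # Five variants collapse onto one key; "Salo, D." survives as its own
    # cluster, which is where fuzzier matching has to take over.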

Page 19: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

NOT YET.

Libraries also have data that doesn’t look all that big -- or all that powerful -- when you only have it from a single library, but if you add together that same data from a whole BUNCH of libraries, suddenly you have something super-interesting.

Page 20: Is this BIG DATA which I see before me?

AGGREGATION

Ay, in the catalogue ye go for men;
As hounds and greyhounds, mongrels, spaniels, curs,
Shoughs, water-rugs and demi-wolves, are clept
All by the name of dogs: the valued file
Distinguishes the swift, the slow, the subtle,
The housekeeper, the hunter, every one
According to the gift which bounteous nature
Hath in him closed; whereby he does receive
Particular addition.

The term of art for this, of course, is “aggregation,” and it happens all over the place already; it’s nothing new. Any data, any data at ALL, can be aggregated... in theory. In practice, a successful aggregation depends a LOT on keeping a lid on that third Big Data vee, variety. It may also depend on velocity: keeping things current, fixing errors quickly, and similar speed-dependent concerns.

Page 21: Is this BIG DATA which I see before me?

We shall not spend a large expense of time
Before we reckon with your several loves...

All the cataloguers in the room know this already, of course, because of WorldCat. I’m not a cataloguer and definitely no expert, but I do know that OCLC does its level best to enforce certain kinds of consistency in contributed MARC records, above and beyond what MARC and AACR2 and RDA insist on, because if they don’t, the search engine doesn’t work! And, you know, we all know they don’t do a perfect job of it... but to some extent that’s on us, because of the MARC closet skeletons I mentioned earlier.

Page 22: Is this BIG DATA which I see before me?

OAISTER

I see thee compass’d with thy kingdom’s pearl...

Any Michigan folks here? Here’s a blast from the past for you: OAIster, which now belongs to our good hosts at OCLC. See, we’ve tried large-scale aggregation with HIGHLY heterogeneous metadata -- far more variable than the MARC coming from skilled cataloguers -- before. With OAIster, it didn’t work out so well. Variety in our data bit us yet again, as did some really pretty stupid and evitable structural flaws in the harvesting protocol, OAI-PMH: total lack of error reporting, for one, and no flag for metadata-only records so that searches could exclude them.
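(For anyone who never had the pleasure, here’s roughly what an OAIster-style OAI-PMH harvest looks like, sketched in Python against a placeholder endpoint. Notice what the protocol hands you: records, maybe a deleted flag, and that’s about it -- nothing that marks a metadata-only record, and nothing that tells a data provider their feed is broken.)

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    # Placeholder endpoint; any OAI-PMH repository answers the same verbs.
    base_url = "https://example.org/oai"
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

    with urllib.request.urlopen(base_url + "?" + urllib.parse.urlencode(params)) as resp:
        tree = ET.parse(resp)

    for record in tree.iter(OAI + "record"):
        header = record.find(OAI + "header")
        if header.get("status") == "deleted":
            continue  # one of the few quality signals the protocol provides
        for title in record.iter(DC + "title"):
            print(title.text)

    # Paging via resumptionToken is omitted here; so is error handling,
    # which is more or less the protocol's own attitude toward it.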

So what have we learned from the wonderful, bizarre, epic mess that is OAIster? Let’s see.

Page 23: Is this BIG DATA which I see before me?

O proper stuff!
This is the very painting of your fear...

DPLA

We have another chance to try aggregation, in the guise of the Digital Public Library of America. It’s very early days yet, but I did want to call out one thing that I think DPLA is doing right: cutting the Gordian knot of intellectual-property rights in metadata. Long story short, some metadata is too factual to qualify for copyright protection in the US; other metadata such as abstracts clearly does qualify.

But DPLA isn’t playing that game. They say very clearly, if you want to play with us, you do NOT play intellectual-property games with your metadata. You start up with that, we kick you out. They’re gambling, of course, that they become enough of a name to conjure with that they can make this stick. As I said, it’s early days, but I’m not betting against them -- and I appreciate this approach very, very much.

Here’s what I want to know, though. Can DPLA get past the metadata-quality issues that made a mess out of the National Science Digital Library, never mind OAIster? They seem to be leaving training and quality control to their Service Hubs. Maybe that’ll work. But I don’t see any kind of feedback loop being built in here, and it worries me some.

Page 24: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

SIGH. It could be, but...

And that question leads me to what, if I were the Porter in Macbeth, I’d call something simultaneously grandiloquent and obscene; but since I’m not the Porter in Macbeth, I’ll just call it “the graveyard of missed big-data opportunities.”

Page 25: Is this BIG DATA which I see before me?

The multiplying villanies of nature
Do swarm upon him

There’s so much we should know about books that we don’t. And we don’t know it IN SPITE OF all the effort we spend cataloging books! The above is from a public-librarian friend of mine, Laura Crossett, and what she was trying to do was make sure her library had all the books in any series where any book in the series was circulating well. And the series information stumped her. And this is just a STUPID problem to have, and honestly, I think we have it because our ideas and practices around cataloging are so fragmented and so calcified.

And digital librarians, we only get to gloat about this because we’re often describing unique materials. Otherwise, we’re just as bad.

We need to build Big Data together -- it’s not just the responsibility of the Library of Congress or the New York Public Library or Harvard or OCLC, it’s everybody’s responsibility. And one of the ways we do it is by eliminating redundant labor, well beyond copy cataloging even, so that we can actually do things like record series information and relationship information... so that we can further embiggen and enrich our data! I think the linked-data infrastructures that several national libraries are building can do that... if we let them!

Page 26: Is this BIG DATA which I see before me?

...there cannot be
That vulture in you, to devour so many

I can’t really add anything to the combination of Les Carr and Shakespeare. I’m just going to admire this for a second... and no, I don’t know what the fourth and fifth vees are either.

But seriously, Les is right. We saw it with serials and their metadata, we’re seeing it now with e-textbooks, we’re even starting to see it with a few kinds of research data, and I don’t know who’s gonna stop it and build a real Big Data commons if it’s NOT academic libraries. So if you need a reason to get involved with open access and open data, this is it: it beats the heck out of the velociraptor alternatives.

And because Deb Blecic mentioned it earlier: Non-disclosure agreements are a velociraptor indicator. I don’t like them, and I don’t think any of us should. Just sayin’.

Page 27: Is this BIG DATA which I see before me?

As two spent swimmers, that do cling together
And choke their art.

And here’s where, as I generally do, I bite the hand that’s feeding me. OCLC, you are clearly of two minds on this Big Data thing, and I think you’re hurting yourself by it. On the one hand, there’s OCLC Research, which is making amazing Big Data things like the Virtual International Authority File and working hard -- and pretty successfully -- to embed them in the larger information world.

And on the other hand, there’s the dog-in-the-manger intellectual-property shenanigans OCLC proper keeps trying to pull with the records contributed to WorldCat, which made the National Library of Sweden pull out of WorldCat altogether, and is infuriating those of us who are paying attention to where the Big Data world is going.

Please, OCLC, get your act together. If you’re going to insist on being a velociraptor, please spin off OCLC Research so that you don’t drown it when we drown you -- and we will. It will take time, just as the open-access movement took time, but we can destroy you and we will. Or take OCLC Research as your model and stop being a dang velociraptor.

Page 28: Is this BIG DATA which I see before me?

when worldcat.org do come to DPLAne

And it’s not just libraries who want to treat WorldCat as a big juicy Big Data-store, either. This is a FriendFeed comment from a librarian quoting what OCLC actually lets affiliates do and why. You can’t read the first comment here, so I will: “I can, barely, stretch this definition to include the work that I’m doing on my research project... but the grad student who wants to use WorldCat data for a bibliographic study of the spread of publishing in New Spain is pretty much out of luck.”

Stop it, OCLC. Just stop it. You are shutting yourself out of Big Data-land, and when you do that, you shut us libraries out too. Hathi Trust is willing to fight in federal court to allow researchers to do research on its corpus, and OCLC comes at researchers with legalese? Stop it. DPLA insists that all contributed metadata be available for any meditated reuse, within reasonable limits of bandwidth, and OCLC gives researchers static? Stop it. Bring worldcat.org to DPLAne, instead.

Page 29: Is this BIG DATA which I see before me?

when worldcat.org do come to DPLAne

Think upon what hath chanced, and, at more time,
The interim having weigh’d it, let us speak
Our free hearts each to other.

All right, I’m done lecturing OCLC, and now I’m going to lecture everyone else in this room, because I clearly haven’t made enough enemies, right? To some extent, OCLC is doing what it’s doing because it knows that a lot of academic libraries love to free-ride. We heard about this from Kim today briefly with regard to collective print management -- “can the CIC let California do it?” -- and, you know, I come out of open access and open source, so I’ve seen it firsthand. Contribute programmer time to an open-source project? Nope. Pay for a membership in an open-source foundation, or participate in a collective digital-preservation system? Fuhgeddaboudit; who has money for that? Put actual acquisitions money toward open access? Bah, we have an institutional repository... under somebody’s desk... in the third sub-basement... somewhere; that’s enough, right?

But at some point, free-riding prevents useful collective action, and I think Big Data is one of those points. Big Data isn’t free. Open data, big or small, isn’t free. It’s really tempting to pretend it is and free-ride anyway, I get that. But free riding is slimy and lazy and unethical and we need to stop doing it. No one library can talk OCLC down off the ledge; heaven knows Sweden tried. But maybe we can talk OCLC down together, as a community. Shouldn’t we try?

Page 30: Is this BIG DATA which I see before me?

If this is BIG DATA which I see before me...

NOW WHAT?

Page 31: Is this BIG DATA which I see before me?

SKILLS

So I was asked to talk about what skills and scaffolding we need to make and use big data in our libraries. And I’m sorry, Eric, but I tried and tried to make a slide answering that question and I just couldn’t.

Page 32: Is this BIG DATA which I see before me?

SKILLS

Every one that does so is a traitor, and must be hanged.

And the reason I couldn’t is that I know what way too many academic libraries DO with lists of skills -- they think they can just hire some poor Macduff with a random grab-bag list of skills and call him a “Big Data Coordinator” or some such thing, and then they’ve solved the big data problem and they can go home and have a drink.

I don’t work in libraries any more in part because my own career was badly hurt by that kind of “skills thinking” with respect to scholarly communication and open access. I don’t think thinking about library services in terms of laundry lists of skills works! And I KNOW it hurts people, because I’ve had former students come back to me for advice over it, and I’ve seen it hurt much better librarians than I ever had a hope of being.

So now that my job is preparing people for librarianship, I explicitly warn my students about skills thinking and how it manifests in job descriptions, and I tell them not to apply for those laundry-list, unsupported-single-person-in-a-disregarded-corner jobs. The dice are just too loaded against them.

So if you think you’re going to hire a Research Data Coordinator, or a Digital Humanities Librarian, or one bioinformaticist, or one statistician, somebody with serious skills, and you’re going to wind that person up and turn them loose and miracles will happen? (CLICK) Well, I’m with Lady Macduff on this one -- hang the traitors!

Page 33: Is this BIG DATA which I see before me?

SCAFFOLDING

What bets should we make, now and future?

What do we build? Fix?

Who cares, right now? Who else should?

What can we use, right now?

How can we experiment?

I hate the word “infrastructure,” because it’s impersonal and overused, so I’m going to suggest “scaffolding” instead, by way of a more holistic, less skills-focused mode of thinking about the opportunities Big Data might have for us, and what we’ll have to be and do to capitalize on those opportunities.

I like “scaffolding” because -- look, you can hire a Michelangelo, but if you don’t put that scaffolding under him, he’s not painting you no Sistine Chapel. So here are some questions I think are worth asking.

- note that “who cares” means you look at your existing staff as well as your environment, because ignoring your current people is stupid and counterproductive.

- bets: there’s no such thing as a sure thing. you have to bet. betting means risk, risk means failure. fail fast and often.

Page 34: Is this BIG DATA which I see before me?

Is this BIG DATA which I see before me?

This presentation is available under a Creative Commons Attribution 3.0 United States license.

Dorothea Salo, University of Wisconsin–Madison

And that’s where I’m at on this just now! Hope it helped, and I’m findable on Twitter and the web if you have questions.