Computer magazine



Page 1: Computer magazine

MASSIMO BANZI: BUILDING ARDUINO, P. 11

THE PRESENT AND FUTURE OF ELECTRONIC SCIENTIFIC PUBLICATIONS, P. 64

A COMPUTER PRIVACY PRIMER, P. 78

JANUARY 2014

http://www.computer.org


Page 2: Computer magazine

IEEE CLOUD COMPUTING

computer.org/cloudcomputing

IEEE Computer Society’s newest magazine tackles the emerging technology of cloud computing.

Subscribe today!

Coming March 2014



Page 5: Computer magazine

JANUARY 2014 1

Editorial Staff
Carrie Clark Walsh, Managing Editor, [email protected]
Nelson, Senior Editor
Mark Gallaher, Staff Editor
Lee Garber, Senior News Editor

Contributing Editors: Camber Agrelius, Christine Anthony
Staff Multimedia Editors: Brian Brannon, Ben Jones
Design: Larry Bauer

Production: Larry Bauer (lead), Camber Agrelius, Jennie Zhu-Mai, Monette Velasco
Cover Design: David Angel, Kate Wojogbe

Products and Services Director: Evan Butterfield
Senior Manager, Editorial Services: Robin Baldwin
Manager, Editorial Services: Jennifer Stout
Senior Business Development Manager: Sandy Brown
Senior Advertising Coordinator: Marian Anderson

Editor in Chief
Ron Vetter, University of North Carolina [email protected]

Associate Editor in Chief
Sumi Helal, University of [email protected]

Associate Editor in Chief, Research Features
Kathleen Swigger, University of North [email protected]

Associate Editor in Chief, Special Issues
Bill N. [email protected]

Computing Practices
Rohit [email protected]

Perspectives
Bob [email protected]

Multimedia Editor
Charles R. Severance, [email protected]

2014 IEEE Computer Society President
Dejan S. Milojičić, [email protected]

Area Editors
Computer Architectures: David H. Albonesi, Cornell University; Greg Byrd, North Carolina State University
Graphics and Multimedia: Oliver Bimber, Johannes Kepler University Linz
Health Informatics: Upkar Varshney, Georgia State University, Atlanta
High-Performance Computing: Vladimir Getov, University of Westminster
Information and Data Management: Naren Ramakrishnan, Virginia Tech
Internet Computing: Simon Shim, San Jose State University
Multimedia: Savitha Srinivasan, IBM Almaden Research Center
Ying-Dar Lin, National Chiao Tung University
Security and Privacy: Rolf Oppliger, eSECURITY Technologies
Software: Renée Bryce, University of North Texas; Jean-Marc Jézéquel, University of Rennes

Column Editors
Cloud Cover: San Murugesan, BRITE Professional Resources
Computing and the Law: Brian Gaff, McDermott Will & Emery
Computing Conversations: Charles R. Severance, University of Michigan
Computing Education: Ann E.K. Sobel, Miami University
Discovery Analytics: Naren Ramakrishnan, Virginia Tech
Entertainment Computing: Kelvin Sung, University of Washington, Bothell
The Errant Hashtag: David Alan Grier, George Washington University
Green IT: Kirk W. Cameron, Virginia Tech
Identity Sciences: Karl Ricanek, University of North Carolina Wilmington
Invisible Computing: Albrecht Schmidt, University of Stuttgart
Out of Band: Hal Berghel, University of Nevada, Las Vegas

Science Fiction Prototyping: Brian David Johnson, Intel
Security: Jeffrey M. Voas, NIST
Software Technologies: Mike Hinchey, Lero—the Irish Software Engineering Research Centre
32 & 16 Years Ago: Neville Holmes

Advisory Panel
Carl K. Chang (Editor in Chief Emeritus), Iowa State University
Jean Bacon, University of Cambridge
Hal Berghel, University of Nevada, Las Vegas
Doris L. Carver, Louisiana State University
Naren Ramakrishnan, Virginia Tech
Theresa-Marie Rhyne, Consultant
Alf Weaver, University of Virginia

2014 Publications BoardJean-Luc Gaudiot (VP for Publications), Alain April, Laxmi N. Bhuyan, Angela R. Burgess, Greg Byrd, David S. Ebert, Frank Ferrante, Paolo Montuschi, Linda I. Shafer, H.J. Siegel, Per Stenström

2014 Magazine Operations CommitteePaolo Montuschi (chair), Erik R. Altman, Maria Ebling, Miguel Encarnação, Lars Heide, Cecilia Metra, San Murugesan, Shari Lawrence Pfleeger, Michael Rabinovich, Yong Rui, Forrest Shull, George K. Thiruvathukal, Ron Vetter, Daniel Zeng

Circulation: Computer (ISSN 0018-9162) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 2001 L Street NW, Suite 700, Washington, DC 20036. IEEE Computer Society membership includes $19 for a subscription to Computer magazine. Nonmember subscription rate available upon request. Single-copy prices: members $20; nonmembers $228.75. Postmaster: Send undelivered copies and address changes to Computer, IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Canadian GST #125634188. Canada Post Corporation (Canadian distribution) publications mail agreement number 40013885. Return undeliverable Canadian addresses to PO Box 122, Niagara Falls, ON L2E 6S8 Canada. Printed in USA. Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in Computer does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.


Page 6: Computer magazine

CONTENTS

COVER FEATURES

24 What the Future Holds for Solid-State Memory
Karin Strauss and Doug Burger
The memory industry faces significant disruption due to challenges related to scaling. Future memory systems will have more heterogeneity at individual levels of the hierarchy, with management support from multiple layers across the stack.

32 The Emergence of RF-Powered Computing
Shyamnath Gollakota, Matthew S. Reynolds, Joshua R. Smith, and David J. Wetherall
Extracting power “from thin air” has a quality of science fiction about it, yet technology trends make it likely that in the near future, small computers in urban areas will use ambient RF signals for both power and communication.

40 Enabling the Rapid Development and Adoption of Speech-User Interfaces
Anuj Kumar, Florian Metze, and Matthew Kam
Speech-user interfaces offer truly hands-free, eyes-free interaction, have unmatched throughput rates, and are the only plausible interaction modality for illiterate users across the world, but they are not yet developed in abundance to support every type of user, language, or acoustic scenario. Two approaches present exciting opportunities for future research.

48 The Future of Social Learning in Software Engineering
Emerson Murphy-Hill
Building on time-honored strengths of person-to-person social learning, new technologies can help software developers learn from one another more efficiently and productively. In particular, continuous social screencasting is a promising technique for sharing and learning about new software development tools.

ABOUT THIS ISSUE

In this annual Outlook issue, we look at emerging technologies that promise to have a major impact on computing in both the near and distant futures. Topics addressed include what the future holds for solid-state memories, the emergence of RF-powered computing, ways to facilitate the rapid development and adoption of speech-user interfaces, the future of social learning in software engineering, and a big data vision for tapping into the insights of population informatics.

56 Social Genome: Putting Big Data to Work for Population Informatics
Hye-Chung Kum, Ashok Krishnamurthy, Ashwin Machanavajjhala, and Stanley C. Ahalt
Data-intensive research using distributed, federated, person-level datasets in near real time has the potential to transform social, behavioral, economic, and health sciences—but issues around privacy, confidentiality, access, and data integration have slowed progress in this area. When technology is properly used to manage both privacy concerns and uncertainty, big data will help move the growing field of population informatics forward.

PERSPECTIVES

64 Augmented Reading: The Present and Future of Electronic Scientific Publications
Paolo Montuschi and Alfredo Benso
As technological, economic, and social factors drive scientific publishing toward electronic formats, opportunities open beyond traditional reading and writing frameworks. Journal articles now, and in the future, can increasingly include a variety of supplemental multimedia and interactive materials for augmented reading that will impact both the nature and presentation of scientific research. The IEEE Computer Society is preparing for this evolution.

www.computer.org/computer


MULTIMEDIA

Page 7: Computer magazine

We welcome your letters. Send them to [email protected]. Letters are subject to editing for style, clarity, and length.

Reuse Rights and Reprint Permissions: Educational or personal use of this material is permitted without fee, provided such use: 1) is not made for profit; 2) includes this notice and a full citation to the original work on the first page of the copy;

and 3) does not imply IEEE endorsement of any third-party products or services. Authors and their companies are permitted to post the accepted version of their IEEE-copyrighted material on their own Web servers without permission, provided that the IEEE copyright notice and a full citation to the original work appear on the first screen of the posted copy. An accepted manuscript is a version which has been revised by the author to incorporate review suggestions, but not the published version with copyediting, proofreading and formatting added by IEEE. For more information, please go to: http://www.ieee.org/publications_standards/publications/rights/paperversionpolicy.html.

Permission to reprint/republish this material for commercial, advertising, or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to the IEEE Intellectual Property Rights Office, 445 Hoes Lane, Piscataway, NJ 08854-4141 or [email protected]. Copyright © 2014 IEEE. All rights reserved.

Abstracting and Library Use: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit www.ieee.org/web/aboutus/whatis/policies/p9-26.html.

COLUMNS

9 Spotlight on Transactions
Fixing the Mercator Projection for the Internet Age
Raghu Machiraju

11 Computing Conversations
Massimo Banzi: Building Arduino
Charles Severance

13 Computing and the Law
Corporate Risks from Social Media
Brian M. Gaff

16 32 & 16 Years Ago
Computer, January 1982 and 1998
Neville Holmes

75 Green IT
Cloud-Based Execution to Improve Mobile Application Energy Efficiency
Eli Tilevich and Young-Woo Kwon

78 Out of Band
Privacy Informatics: A Primer on Defensive Tactics for a Society under Siege
Hal Berghel

83 Security
Japan’s Changing Cybersecurity Landscape
Nir Kshetri

87 Science Fiction Prototyping
Utopia Rising
Brian David Johnson

104 The Errant Hashtag
Just Out of It
David Alan Grier

January 2014, Volume 47, Number 1

IEEE Computer Society: http://computer.org
Computer: http://computer.org/[email protected]
IEEE Computer Society Publications Office: +1 714 821 8380

Flagship Publication of the IEEE Computer Society

NEWS

18 News Briefs
Lee Garber

MEMBERSHIP NEWS

6 CS President’s Message

22 EIC’s Message

90 IEEE Computer Society Connection

91 Call and Calendar

DEPARTMENTS

4 Elsewhere

10 Computer Society Information
94 Career Opportunities

For more information on computing topics, visit the Computer Society Digital Library at www.computer.org/csdl.

See www.computer.org/computer-multimedia for multimedia content related to the features in this issue.


Page 8: Computer magazine

4 COMPUTER Published by the IEEE Computer Society 0018-9162/14/$31.00 © 2014 IEEE

ELSEWHERE IN THE CS

Computer Highlights Society Magazines

The IEEE Computer Society’s lineup of 12 peer-reviewed technical magazines covers cutting-edge topics in computing, including scientific applications, Internet computing, machine intelligence, pervasive computing, security and privacy, digital graphics, and computer history. Select articles from recent issues of other Computer Society magazines are highlighted below.

A dramatic evolution of the humble user interface has occurred in the past two decades, shaped by continuing change over the course of the desktop, Internet, and mobile computing eras. What seems simple, even obvious, on the surface has required a great deal of work on the part of developers and designers in shaping how we interact with our devices. “A Retrospective on User Interface Development Technology,” from IEEE Software’s November/December 2013 issue, traces this evolution, looks at the concerns formulated and addressed as it has taken place, and predicts some future developments for the field.

Students rarely have an opportunity to integrate computer science technology with other disciplines in a direct, hands-on manner. In “Animatronics Workshop: A Theater + Engineering Collaboration at a High School,” from IEEE CG&A’s November/December 2013 issue, teachers from a Texas K–12 school along with a senior Microsoft researcher describe an innovative pilot educational program in animatronics that allows students to experience the interplay of creativity across the technical and traditional arts.

To make cities smarter, we must find ways of effectively connecting citizens to local government, letting them contribute to their community’s general well-being. In “CrowdSC: Building Smart Cities with Large-Scale Citizen Participation,” from the November/December 2013 issue of IEEE Internet Computing, researchers from Inria and the University of Lorraine present a crowdsourcing framework designed to assist in data collection, selection, and assessment activities in smart city governance. The authors also describe CrowdSC’s process model and evaluate three different execution strategies.

The various branches of the US government generate huge amounts of data. Can big data analytics help improve e-government? In “Big Data and Transformational Government,” a feature article from IT Pro’s November/December 2013 issue, Rhoda Joseph of Penn State Harrisburg and Norman Johnson of the University of Houston describe some of the drivers and barriers affecting the use of big data in e-government and illustrate big data’s potential for increasing e-government efficiency and effectiveness using data from the US Department of Veterans Affairs.

For hundreds of years, physicians have recognized the importance of maintaining health information privacy and security; the Hippocratic Oath, in fact, demands that information about patients’ health be “secret.” But as we move toward a healthcare system that embraces IT because of its opportunities for increasing quality and decreasing costs, how do we ensure patient privacy? IEEE S&P’s November/December 2013 special issue on security and privacy in health IT offers five articles addressing issues such as patient privacy in emergency medical situations and secure use of mobile devices for patient record interaction.

The convergence of mobile computing and cloud computing is predicated on a reliable, high-bandwidth, end-to-end network. In “The Role of Cloudlets in Hostile Environments,” from IEEE Pervasive Computing’s October-December 2013 issue, researchers from Carnegie Mellon University and the Software Engineering Institute discuss some of the difficulties of this computing paradigm when connectivity to the cloud isn’t stable. They then explore a solution to the problem of unstable cloud


Page 9: Computer magazine


connectivity utilizing virtual-machine-based cloudlets in close proximity to associated mobile devices.

New music is often on the cutting edge of technology—as witnessed by the fact that Nintendo’s Wii Mote and Microsoft’s Kinect were barely released before applications for them had been hacked and adapted for musical purposes. In the October-December 2013 issue of IEEE MultiMedia, Garth Paine of Arizona State University writes in “New Musical Instrument Design Considerations” about the current proliferation of new instruments and interfaces for computer-based musical performance and suggests ways that preexisting practices can provide design guidelines for the developing field of digital musical instruments.

The decade between 1955 and 1965 brought a revolution in academic computing, both technologically and socially: computing centers became increasingly essential facilities on campuses, and computer science began to gain acceptance as a legitimate academic discipline. In “Burroughs Algol at Stanford University, 1960-1963,” from IEEE Annals’ October-December 2013 issue, Robert Braden recalls how a now nearly forgotten system helped build a community of computer-literate students and faculty at Stanford in the early 1960s and played a central role in spreading computing much more widely.

How does the online world provide public goods? In “Digital Public Goods,” the Micro Economics department piece for IEEE Micro’s September/October 2013 issue, Shane Greenstein, chair of information technology at Northwestern University’s Kellogg School of Management, starts with a description of the economics that underlie the concept of a public good and then considers digital examples of public goods ranging from the technical protocols operating the Internet to individual sites such as Yelp and YouTube. He concludes, “There are still plenty of public goods waiting for more effective means to provide them.”

Space weather can have a significant effect on Earth’s climate, but measuring the impact of space environmental disturbances presents a unique set of challenges for scientists. “Space Weather Prediction and Exascale Computing,” from the September/October 2013 issue of CiSE, presents the ExaScience Lab’s efforts in this endeavor. By combining exascale computing and new visualization tools, researchers are working on predicting the arrival and impact of space events on Earth. They focus on the steps necessary to achieve a true physics-based capability for predicting space weather storms.

Game theory and evolution might seem unlikely bedfellows, but over the past three decades, game-theoretic ideas have shed much light on a range of problems in biology, while ideas from evolution have been applied with great success in economics. In “Game Theory and Evolution,” from IEEE Intelligent Systems’ July/August 2013 AI and Game Theory department, Steve Phelps from the University of Essex and Michael Wooldridge from the University of Oxford describe key concepts from the area now known as evolutionary game theory and introduce some important applications of these concepts.

EXTRA

CS president emeritus Sorel Reisman’s blog on computer science education topics, “Musings from the Ivory Tower,” is online at www.computer.org/portal/web/Musings-from-the-Ivory-Tower. The blog is a feature of the Computing Now Education page (www.computer.org/portal/web/computingnow/education). Also included here are a range of instructional materials on a growing set of technical topics drawn from CS conference tutorials, extracts from CS e-Learning courses, book reviews, audio-video presentations, and interviews with leading computer science experts and technology innovators.

NEXT ISSUE

SOFTWARE TESTING


Page 10: Computer magazine

PRESIDENT’S MESSAGE


Over the years, the IEEE Computer Society has served both researchers and practitioners in numerous technical fields by organizing conferences and section/chapter meetings, publishing journals and standards, and offering professional and educational materials. In that same time span, technologists have become younger, more intent on solving specific practical problems, and increasingly virtually connected. Some of the Society’s services can now be found elsewhere, simply by searching the Web or professional portals such as Stackoverflow. Members now physically gather at hackathons and focused conferences—such gatherings are of direct use, have a high “cool” factor, and attract various IT professionals, computer engineers, and students.

These trends confront the IEEE Computer Society leadership with a key challenge: How can we retain the rigor of our traditional scientific and engineering principles and values, yet appeal to young, tech-savvy professionals who aren’t members? How can we reinvent the Society to make it more relevant to contemporary researchers and practitioners?

ONE YEAR, THREE INITIATIVES

The IEEE CS presidency lasts only one year, so the president has to initiate activities while still in the president-elect role.

As president-elect, I focused on three efforts. First, with a team of renowned experts (Hasan Alkhatib, Paolo Faraboschi, Eitan Frachtenberg, Hironori Kasahara, Danny Lange, Phil Laplante, Arif Merchant, and Karsten Schwan), I undertook writing the CS 2022 Report, addressing 22 technologies that are likely to be disruptive by 2022. This report, freely available to everyone, will be published in early 2014 and is intended to provide technology input into the 2014 CS strategic plan.

Second, with a team of conference organizers (Tom Conte, Paul Croll, Sven Dietrich, David Ebert, Jean-Luc Gaudiot, Frank Huebner, David Lomet, Cecilia Metra, Hausi Muller, and Bill Pitts), I began a thorough analysis of current CS conferences—in particular, their best practices, outcomes, core evaluation criteria, governance practices, open access policies, and relationship to technical committees and journal publication—as well as how they compare to ACM conferences. Conferences are extremely important for the CS because they’re where most innovative work is presented first; they’re also a source of financial stability. This ad hoc committee will continue through next year, and the results of our analysis will help the new vice president of the Technical and Conference Activities Board to strengthen these gatherings.

Third, I met with the IEEE CS executive committee and staff directors in a full-day event to plan for the following year. We addressed three major topics: technology, the Society’s focus and impact, and our relationship to IEEE. In the rest of this message I elaborate on our findings, which will also be my own priorities in the coming year.

Reinventing Relevance

Today’s IT professionals, computer engineers, and students share knowledge directly through the Internet and gather at “cool” events like hackathons. How can the Computer Society reinvent its relevance with contemporary researchers and practitioners?

IEEE Computer Society 2014 President

Page 11: Computer magazine

TECHNOLOGY

Recent years have brought multiple disruptive technologies that will revolutionize the way we manufacture goods (3D printers), program and use computers (nonvolatile memory), and interact with and through computers (voice- and gesture-based human-computer interfaces). In addition, incremental innovations, such as in cloud computing, big data analytics, and the Internet of Things, have enabled tremendous growth in unrelated fields including medicine, bioinformatics, and life sciences.

To address these changes, we need agile communities that can quickly start, grow, shrink, and stop once both technology and practitioner needs are met. Special Technical Communities (STCs; http://stc.ieee.net) can help in this regard. We’ve already started 15 STCs in various areas such as cloud computing, smart grids, social networking, sustainable computing, multicore systems, and wearable computing, to name a few.

Another excellent example of a technology- and practitioner-driven initiative is our upcoming cybersecurity effort, which has just been approved as an IEEE initiative. The primary benefits of this initiative to practitioners include identifying the top 10 cybersecurity flaws, architectural risk analyses, threat-modeling processes, a security component library, an open source framework library, a design patterns and attack patterns library, and building codes for security.

We’re a Society of engineers and practitioners, and we always need to drive our contributions from the technology itself. In particular, we’ve done this with Computing Now (http://computingnow.computer.org), which covers cloud computing, big data, mobile computing, networking, security, and software engineering. We’ve also started new publications in the areas of cloud computing and big data. Staff-organized conferences focus on the latest technologies, with a successful big data event in Silicon Valley last fall and a mobile cloud computing event in 2014.

IEEE CS FOCUS AND IMPACT

As the technology changes, so does its use. Today, typical IEEE CS members, or potential members, are substantially different than they used to be. They’re younger and much more versed in technology, having grown up with multiple devices and applications; English is their second language; and they need much faster and more practical benefits from the community. Even the notion of community has changed—it’s much less formal but far nimbler and more hands-on. Driven by the Silicon Valley culture, young professionals prefer to gather at events where they can get directly exposed to technology and the people creating and using it.

Current technologies have also changed the way practitioners search for information, collaborate, publish, and network as a community. In the past, conferences and journal articles were the primary venue for publishing and learning about innovative results. To interactively address solutions for technical problems, engineers gathered at annual conferences, section/chapter events, and standards meetings.

Today, professionals disseminate their results instantly on websites, which their colleagues around the globe evaluate in real time. They meet virtually more frequently than face to face. They communicate using Skype and social networking tools. What are the needs of these new professionals, and how can we meet them in the context of our traditional activities? We have some ideas:

- Conferences need to be of more practical and immediate value to practitioners. New practitioners live in the moment; they want to learn insights and techniques to create solutions now and for their immediate needs.
- Publications, in the form of traditional journals or magazines, require years to start and can live for many years beyond their usefulness. This is out of step with the pace of modern technological developments, which can start quickly but also evolve and vanish rapidly. Moreover, today’s authors want to publish almost instantly, and practitioners want to read their work at that same pace.
- The traditional mechanism for codifying research results for practitioners is to incorporate them into standards. The necessity for achieving broad consensus may slow the use of standards-making for fast-moving, disruptive technologies, though. Complementing standards-making with open source publication, living libraries, and patterns may be an appropriate approach. The cybersecurity initiative may offer an opportunity to try these out.
- Professional activities and education similarly need to be of practical and immediate use to potential students. Metro area workshops exploring hot topics, such as cloud computing and



Page 12: Computer magazine



price of printing and distributing paper, and volunteers undertake some of the editing usually performed by staff. New membership models can be motivated by new services, such as filtering new publications and conferences for papers that match a member’s profile.

Generally speaking, there’s no magic wand. We need to address every activity we undertake as a Society and focus on only those that bring value and are financially sustainable. We can only afford to sustain a few small, strategically important or incubating activities longer term without a sizable return on financial investment.

The Society’s top leader-ship isn’t enough to lead the Society—we need

every current and future member to contribute for us to be successful. One of my major focuses will be on recruiting the true technical leaders in their field who can elevate us to the next level. Being an avid soccer coach, I’m fully aware of the saying that games are won by players and lost by coaches. I invite all members of the IEEE CS to help me win this game! I plan to continue the efforts of past presidents Kathy Land, Jim Isaak, Sorel Resiman, John Walz, and David Alan Grier, and pledge to work closely with the next president-elect, Tom Conte. Only together will we make a difference.

Dejan Milojic ic is a senior researcher and manager at Hewlett Packard Labs. He received a PhD in computer science from Kaiser-slautern University of Technology, Germany. An IEEE Fellow, Milojic icis the author of several books, many papers, and a dozen patents. Contact him at [email protected].

IEEE AND FINANCIAL SUSTAINABILITY

The IEEE CS isn’t a business, but a volunteer-led organization. However, a healthy, financially sustainable Society is crucial for its operation. The CS is by far the largest soci-ety within IEEE, both in terms of the number of members (75,000 CS members out of 400,000 IEEE members, of which 200,000 belong to a technical society) and in rev-enue from events and publications. But despite being a high revenue-generating society, we haven’t been profitable over the past few years.

There have been many attempts to return to profitability, such as cost cutting and adjusting to IEEE revenue distribution, but none were sufficient. We even discussed some extreme moves, such as organizing the IEEE CS into smaller, focused units, or even broadening the scope of cooperation with other societies, but such approaches didn’t seem promising.

During the next year, we’ll attempt to radically change our financial situation to revert to profitability by carefully evaluat-ing every facet of the IEEE CS. New confer-ences, organized by staff, have proven to attract different kinds of attendees, people with more business interests. These conferences are profitable compared to traditional confer-ences, which typically only break even. New publication models, such as myComputer (www.computer.org/portal/web/myCom-puter) and Computing Now, substantially reduce costs because they eliminate the

multicore system design, attract many attendees because of their relevance.

Increasing membership of young practitioners has been an IEEE CS goal for many years, but our efforts haven’t been effective yet. I’m a firm believer that we first need to provide the value to potential members, and then they will join. However, not all of them will prefer traditional full membership, so we must explore new business models, including lightweight membership models with selective benefits, while retain-ing traditional membership with full benefits.

We’ve discussed all kinds of plans, but we should refuse to abandon the rigorous quality pro-cesses of traditional products we’ve delivered to our members over the decades—rather, we should enhance and complement them with varied, dynamic offerings better matched to modern members’ needs.

Selected CS articles and columns are available for free at

http://ComputingNow.computer.org.



0018-9162/14/$31.00 © 2014 IEEE Published by the IEEE Computer Society JANUARY 2014 9

SPOTLIGHT ON TRANSACTIONS

The first installment of Computer's series highlighting the work published in IEEE Computer Society journals comes from IEEE Transactions on Visualization and Computer Graphics.

John Noble Wilford, in his delightful work The Mapmakers (Pimlico, 2002), traces the adventures and discoveries of mapmakers and cartographers over the centuries. It's a fascinating story still unfolding in this age of the Internet. Mapmaking has undergone many changes, with demands on both ends of the user spectrum, from the precise maps required to chart the ocean floor and the surfaces of planets to the more prosaic and ubiquitous maps employed to navigate the streets and waterways of our cities and rivers.

Today's maps are viewed on a plethora of devices and browsers of every imaginable form and function, all made available through the auspices of various Web-based mapmaking services. The staple of map-displaying methods is still the age-old projection of Mercator. However, there's a problem that this master mapmaker could never have anticipated: the Mercator projection is a poor choice for maps of the globe in its entirety or for large landmasses on digital displays. The higher latitudes suffer from undue distortion and convey a false sense of proximity to the user, while the polar latitudes are completely missing in the Web-based Mercator projection. Is there an alternative?
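The distortion described here follows directly from the Mercator math. As a rough back-of-the-envelope sketch (standard spherical-Mercator formulas, not code from the paper discussed below; the latitude and area figures are approximate), the projection stretches distances at latitude φ by sec(φ), so areas inflate by sec²(φ), and the northing diverges toward the poles:

```python
import math

def mercator_y(lat_deg):
    """Northing of a point under the spherical Mercator projection (unit sphere)."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4.0 + phi / 2.0))

def area_inflation(lat_deg):
    """Factor by which Mercator inflates areas at a given latitude: sec^2(lat)."""
    phi = math.radians(lat_deg)
    return 1.0 / math.cos(phi) ** 2

# Greenland's landmass is centered near 72 degrees north; Africa straddles the equator.
print(round(area_inflation(72), 1))  # 10.5 -- Greenland is drawn about 10x too large
print(round(area_inflation(0), 1))   # 1.0 -- no inflation at the equator

# True areas, in millions of km^2: Africa ~30.4, Greenland ~2.2 (about 14x smaller).
# On a Mercator map, their apparent areas differ by only about 30 percent:
apparent_ratio = (30.4 * area_inflation(0)) / (2.2 * area_inflation(72))
print(round(apparent_ratio, 1))      # 1.3

# The northing diverges as latitude approaches 90 degrees, so online maps clip the
# poles entirely; the usual cutoff (~85.05 degrees) makes the world map square.
print(round(mercator_y(85.05113), 2))  # 3.14, i.e. pi -- height matches the width
```

Running this shows Greenland, really about 14 times smaller than Africa, drawn only about 1.3 times smaller, and it shows why Web-based Mercator maps clip the map near ±85° of latitude rather than reach the poles.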

Bernhard Jenny at Oregon State University offers a well-crafted solution called Adaptive Composite Map Projection, which resorts to algorithms and the guiding principles of information visualization design ("Adaptive Composite Map Projections," IEEE Trans. Visualization and Computer Graphics, vol. 18, no. 12, 2012, pp. 2575-2582; doi:10.1109/TVCG.2012.192). Map scale, the pertinent geography, the aspect ratio, and nuances of particular cartographic visualization impact the final display of regional features and outlines. Jenny's composite projection combines several recommended projections and adaptively and seamlessly morphs the map space under scrutiny as the user changes scale or the region of interest. It also adapts the underlying geometry to scale, to the map's aspect ratio, and to the displayed area's salient central latitude. Most importantly, Jenny's projection can display the globe in its entirety—including the South and North Poles—with very little areal distortion. The Jenny projection's main benefit is that areas are displayed true to scale: Greenland doesn't seem similar in area to the entire continent of Africa! This is essential, given the widespread use of maps beyond navigation. Raging debates pertaining to zoning, natural resource management, habitat preservation, climate, and so on are often based on compelling displays that use maps.

Jenny's work can potentially have a tremendous impact on multiple user communities. It provides a framework for adapting to the growing needs of modern mapmakers and enables the correct use of the venerable Mercator projection. The inclusion of several other projections du jour makes it a method that might actually stand the test of time.

Another important contribution to note is that Jenny's work ushers the projection operator into the working list of graphical variables and adds projection to the semiology of images. Jacques Bertin, cartographer and author of the book Semiology of Graphics (Esri Press, 2010), never included projection in his list of viable graphical variables, but Jenny offers compelling reasons why we should do so. In an onscreen digital environment, the ability to alter projections exists and thus can make available a rich set of metaphors to all cartographers and practitioners of data visualization. In no form or manner is this work complete—it is most definitely a work in progress, and systematic user studies are required before this framework can be used in practice. However, Jenny's ideas hold much promise and provide a way for the work of a master mapmaker to be used in modern times.

Raghu Machiraju is a professor of computer science and engineering at the Ohio State University. Contact him at [email protected].

Fixing the Mercator Projection for the Internet Age
Raghu Machiraju, Ohio State University


PURPOSE: The IEEE Computer Society is the world's largest association of computing professionals and is the leading provider of technical information in the field.
MEMBERSHIP: Members receive the monthly magazine Computer, discounts, and opportunities to serve (all activities are led by volunteer members). Membership is open to all IEEE members, affiliate society members, and others interested in the computer field.
COMPUTER SOCIETY WEBSITE: www.computer.org
OMBUDSMAN: To check membership status or report a change of address, call the IEEE Member Services toll-free number, +1 800 678 4333 (US) or +1 732 981 0060 (international). Direct all other Computer Society-related questions—magazine delivery or unresolved complaints—to [email protected].
CHAPTERS: Regular and student chapters worldwide provide the opportunity to interact with colleagues, hear technical experts, and serve the local professional community.
AVAILABLE INFORMATION: To obtain more information on any of the following, contact Customer Service at +1 714 821 8380 or +1 800 272 6657:

• Membership applications
• Publications catalog
• Draft standards and order forms
• Technical committee list
• Technical committee application
• Chapter start-up procedures
• Student scholarship information
• Volunteer leaders/staff directory
• IEEE senior member grade application (requires 10 years practice and significant performance in five of those 10)

PUBLICATIONS AND ACTIVITIES
Computer: The flagship publication of the IEEE Computer Society, Computer publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications.
Periodicals: The society publishes 13 magazines, 16 transactions, and one letters journal. Refer to the membership application or request information as noted above.
Conference Proceedings & Books: Conference Publishing Services publishes more than 175 titles every year.
Standards Working Groups: More than 150 groups produce IEEE standards used throughout the world.
Technical Committees: TCs provide professional interaction in more than 45 technical areas and directly influence computer engineering conferences and publications.
Conferences/Education: The society holds about 200 conferences each year and sponsors many educational activities, including computing science accreditation.
Certifications: The society offers two software developer credentials. For more information, visit www.computer.org/certification.

NEXT BOARD MEETING
4–7 February 2014, Long Beach, Calif., USA

EXECUTIVE COMMITTEE
President: Dejan S. Milojicic
President-Elect: Thomas M. Conte
Past President: David Alan Grier
Secretary: David S. Ebert
Treasurer: Charlene ("Chuck") J. Walrad
VP, Educational Activities: Phillip Laplante
VP, Member & Geographic Activities: Elizabeth L. Burd
VP, Publications: Jean-Luc Gaudiot
VP, Professional Activities: Donald F. Shafer
VP, Standards Activities: James W. Moore
VP, Technical & Conference Activities: Cecilia Metra
2014 IEEE Director & Delegate Division VIII: Roger U. Fujii
2014 IEEE Director & Delegate Division V: Susan K. (Kathy) Land
2014 IEEE Director-Elect & Delegate Division VIII: John W. Walz

BOARD OF GOVERNORS
Term Expiring 2014: Jose Ignacio Castillo Velazquez, David S. Ebert, Hakan Erdogmus, Gargi Keeni, Fabrizio Lombardi, Hironori Kasahara, Arnold N. Pears
Term Expiring 2015: Ann DeMarle, Cecilia Metra, Nita Patel, Diomidis Spinellis, Phillip Laplante, Jean-Luc Gaudiot, Stefano Zanero
Term Expiring 2016: David A. Bader, Pierre Bourque, Dennis Frailey, Jill I. Gostin, Atsuhiro Goto, Rob Reilly, Christina M. Schober

EXECUTIVE STAFF
Executive Director: Angela R. Burgess
Associate Executive Director & Director, Governance: Anne Marie Kelly
Director, Finance & Accounting: John Miller
Director, Information Technology & Services: Ray Kahn
Director, Membership Development: Eric Berkowitz
Director, Products & Services: Evan Butterfield
Director, Sales & Marketing: Chris Jensen

COMPUTER SOCIETY OFFICES
Washington, D.C.: 2001 L St., Ste. 700, Washington, D.C. 20036-4928
Phone: +1 202 371 0101 • Fax: +1 202 728 9614
Email: [email protected]
Los Alamitos: 10662 Los Vaqueros Circle, Los Alamitos, CA 90720
Phone: +1 714 821 8380
Email: [email protected]

MEMBERSHIP & PUBLICATION ORDERS
Phone: +1 800 272 6657 • Fax: +1 714 821 4641 • Email: [email protected]
Asia/Pacific: Watanabe Building, 1-4-2 Minami-Aoyama, Minato-ku, Tokyo 107-0062, Japan
Phone: +81 3 3408 3118 • Fax: +81 3 3408 3553
Email: [email protected]

IEEE BOARD OF DIRECTORS
President: J. Roberto de Marca
President-Elect: Howard E. Michel
Past President: Peter W. Staecker
Secretary: Marko Delimar
Treasurer: John T. Barr
Director & President, IEEE-USA: Gary L. Blank
Director & President, Standards Association: Karen Bartleson
Director & VP, Educational Activities: Saurabh Sinha
Director & VP, Membership and Geographic Activities: Ralph M. Ford
Director & VP, Publication Services and Products: Gianluca Setti
Director & VP, Technical Activities: Jacek M. Zurada
Director & Delegate Division V: Susan K. (Kathy) Land
Director & Delegate Division VIII: Roger U. Fujii

revised 17 Dec. 2013


COMPUTING CONVERSATIONS

Massimo Banzi: Building Arduino
Charles Severance

Massimo Banzi describes the origins and evolution of the Arduino microcontroller.

Most computer scientists focus on developing software and leave hardware development to a few specialist engineers. Designing and building hardware takes skill, patience, and time, which is why many software developers simply write code and use hardware designed and built by someone else.

A microcontroller such as Arduino shifts this traditional separation, making it much easier for anyone to build hardware—developing something like a thermostat that senses when someone enters the room, for example, is well within the reach of any computer scientist. Not only is building hardware much easier and more fun with microcontrollers, it's also relatively inexpensive, which lets a wide range of engineers solve problems using a combination of custom-developed hardware and software.

I met with Massimo Banzi, one of the cofounders of the Arduino project, at his office in Lugano, Switzerland, to understand how Arduino was developed. To view our discussion in full, visit www.computer.org/computingconversations.

THE INITIAL IDEA

In 2005, Banzi was working as a faculty member at Interaction Design Institute Ivrea and teaching courses on interaction design for physical devices that increasingly needed electronic components:

"When you're doing interaction design, you need to be able to build a prototype because you need to test your designs with people. You want a mockup of a website to see how people react; we need the same thing for physical devices. Making prototypes of physical devices means that you need to learn about electronics, so we created different courses that would make electronics approachable to people who don't have that background or even skills in software development."

Because the course goals avoided teaching hardware development, Banzi wanted to make creating the electronic components for student prototypes as straightforward as possible. He also wanted the designers to be able to build, tinker, and evolve the electronic aspects of their work without depending on electronics experts:

"We had to make something that would run on a Mac yet easy to use and cheap. We had this programming language that we inherited from MIT called Processing, which was used to teach programming to artists and designers. So we thought, 'Why don't we try to make that run on a microcontroller?'"

After several prototypes and a student thesis project on a product called Wiring that connected a microcontroller to a computer via USB and incorporated an API for easy programming, the first Arduino design was produced:

"The first version of Arduino was based on Hernando Barragán's Wiring project. We re-implemented Arduino from scratch, reusing the Wiring APIs so that Arduino would be completely open source. We wanted something that would be easy for people to reproduce and build upon."

Because the initial goal was simply to meet the needs of design students, there was no plan to ramp up manufacturing in those early days. The team published the plans as open source and made a few printed circuit boards for their own use:

"We didn't want to set up a classic manufacturing company or go to a venture capitalist because back then, nobody would have even talked to us. Instead, we decided to release the hardware as open source so people could build it if they wanted to. We made a few printed circuit boards and gave them away as gifts. And some people started to assemble them. They went to the website and got instructions, downloaded the code, and soldered the components to the boards."

GETTING BIGGER

Once the team had a solid design for Arduino, they wanted to share their ideas more broadly. The next step was a first production run of preassembled Arduinos for their classes and workshops:

"I started this project with a friend of mine, David Cuartielles; he teaches design in Sweden. We met Gianluca Martino, an engineer working in Ivrea who had experience with manufacturing electronics. I asked if we could manufacture 200 complete microcontrollers that we could just send to people. David and I managed to convince Interaction Design Institute Ivrea and his school in Malmö to buy 50 each, so we presold 100. We sold the other 100 when we ran workshops; we'd take 20 boards and sell them to the attendees."

Arduino got additional exposure when Tom Igoe started using it in his physical design classes at New York University:

"We met Tom one summer in Italy and showed him Arduino. He took a few prototypes back to NYU and started to use them with some of his second-year students. At some point in 2006, the platform became very solid, and you could do good projects with it. The first-year students saw what the second-year students were doing and said, 'We want that Arduino, too!' That gave us 120 power users—people who made beautiful projects. Designers tend to produce nice documentation for their projects and put them online as part of their portfolios, which was very helpful."

As the clever design projects based on Arduino made their way around the Internet, the demand for the microcontroller began to grow very quickly. Banzi made arrangements to distribute Arduino through the SparkFun online electronics store, which made the microcontroller readily available in the US. In a sense, Arduino is a self-marketing product:

"The real growth comes from people making projects, documenting them, and putting them online—basically, sharing information about how they built those projects."

The Maker and DIY movements have also adopted Arduino, and it's increasingly used to introduce young students to technology to give them a sense that they too can understand how hardware and software combine to produce new technologies:

"The idea is that you download a file, plug the board in, and in the space of an hour or two, you have working hardware. The blinking LED is the 'Hello, World!' of physical computing."

Arduino's worldwide popularity lets Banzi and his co-creators spend time thinking about how to get young people more involved in the design of our everyday technological devices:

"I think that it's important especially for kids to understand the world we live in. Clearly, if you know how to design and build things, you can affect the world that surrounds you. If you aren't able to participate in the world of creation in the digital space, you're left out. Somebody else is going to design your world. At some point, if there's no innovation or even renovation in the marketplace, then one company will decide that there's one way you do a certain thing. It becomes the only answer to a certain question, and nobody thinks about alternatives. I think that it's important to be masters of the technology."

While Arduino was originally conceived and designed to help in the creation of design prototypes with electronic components, it has the potential to bring a hardware element to teaching at all levels of computational thinking and computer science.

Charles Severance, Computing Conversations column editor and Computer's multimedia editor, is a clinical associate professor and teaches in the School of Information at the University of Michigan. Follow him on Twitter @drchuck or contact him at [email protected].


COMPUTING AND THE LAW

Corporate Risks from Social Media
Brian M. Gaff, McDermott Will & Emery, LLP

Social media has expanded the ability of people to interact and share information, but without appropriate guidelines, a company might encounter trouble in cyberspace.

Social media continues to expand at a rapid pace. Although it arguably began with bulletin board systems decades ago, its use has become much more widespread with the arrival of Facebook, LinkedIn, and Twitter. The unbridled flow of information on social media sites presents significant opportunities as well as significant risks.

Being able to distribute information quickly and broadly can be very valuable. For example, companies typically need to communicate product advisories, software updates, and the like to customers rapidly. On the other hand, a dissatisfied customer can spread harmful information just as swiftly, and the negative effects on the company's business can be serious.

The flip side of this relates to how a company's past and present employees personally use social media. A company needs to be concerned if someone is disseminating information about the company that's inaccurate, harmful, or includes confidential or proprietary material.

For an expanded discussion on this topic, listen to the podcast that accompanies this column at www.computer.org/portal/web/computingnow/computing-and-the-law.

SOCIAL MEDIA BASICS

Generally speaking, the term social media is used to describe Internet-based platforms for people to gather together in virtual communities and share information. Many individuals have established online profiles and routinely engage in discussions with one or more other people that they know or have met in cyberspace.

Companies have recognized the value of having a presence on social media platforms. Indeed, many companies rely heavily on social media to market their products and communicate with customers and potential customers. Current social media platforms are capable of distributing much more than plain text. Rich content such as audio, images, and video pervades social media, and sophisticated companies exploit it to place themselves and their products in the best light.

Individuals using social media are unlikely to have any self-imposed policies that dictate how they will use the platform. Instead, the platform's terms of use will set a minimum bar on what's acceptable. A company, however, should have a clear policy on how it will present itself online. It's important for a company to be consistent in its messaging and branding.

SOME RISKS

The omnipresence of social media and the easy access to it present a distraction. While at work, employees can be drawn into spending significant time on social media platforms, resulting in lost productivity. Consequently, a company policy on social media should

Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next PageComputerComputer

Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next PageComputerComputer

qqM

Mq

qM

MqM

Qmags®THE WORLD’S NEWSSTAND

qqM

Mq

qM

MqM

Qmags®THE WORLD’S NEWSSTAND

_________________

_______________

AU

DI

O

Page 18: computer magacine

COMPUTING AND THE L AW

14 COMPUTER

any regulatory bodies that set the re-quirements for the dissemination of that material.

The examples above relate to the inadvertent disclosure of material. The converse of that is the calcu-lated disclosure of information that’s designed to harm a company. Typi-cal sources of those disclosures are disgruntled employees and ex-employees, as well as competitors and unhappy customers.

One common type of disparaging material is the purported customer review. There has been much dis-cussion about online reviews, particularly about their legitimacy—that is, determining whether they’re written by an actual customer who has a proper basis to write a review. It’s possible that a party intent on harming a company’s reputation could manufacture phony nega-tive reviews and other derogatory commentary and distribute them through social media platforms. In some cases, the affected company might be able to pursue legal action against the poster, especially if the reviews are part of a larger scheme to disparage the company.

Parties hostile to a company might post misleading information by design. For example, misin-formation about the company’s management or business strategy that gets posted might propagate rapidly through cyberspace. The company needs to be vigilant for the appearance of unfounded rumors of this type—ones that can affect com-pany performance—and act quickly to quash them rapidly. This takes time and other valuable resources.

SOME REMEDIESSocial media platforms, by their

very nature of promoting wide-spread and open communication, present significant risks to a com-pany’s security and reputation. One of the first ways a company can pre-vent or at least minimize damage is to monitor the platforms actively

weak security and communication with “trusted” social media users represents a significant threat. Accordingly, a company must be vigilant about protecting itself from infections that originate from social media platforms.

Another group of risks that a com-pany faces are those that originate from employees’ or third parties’ personal social media postings. These can be the result of an honest mistake or a calculated attempt to do damage. For example, a current employee might post information about his or her job on a personal social media account without real-izing that the information includes company confidential or proprietary

material. Competitors who scour the Internet for data on that company might find and use that informa-tion to gain a business advantage. In addition, that information might include clues about breaching the company’s security.

In the US, the posting of a compa-ny’s financial information presents special risks. That’s because of the oversight that the Securities and Exchange Commission (SEC) pro-vides. Employee postings that are inaccurate or untimely might result in scrutiny by the SEC. In general, unauthorized posting of company information by employees can create significant problems in the form of the company needing to retract the information (if it’s even possible to do so) and to respond to

include standards of conduct for its employees’ use of the various plat-forms. It’s probably unrealistic to block or declare an outright ban on employees’ use of social media during the workday. In the age of BYOD—Bring Your Own Device—employees with smartphones will likely use them to maintain their access whenever and wherever they want.

One group of risks that a company faces from social media consists of attacks on the company’s online presence. For example, there have been instances where a company’s Twitter account was hacked and bogus messages sent. Those mes-sages can include misinformation about the company’s financial per-formance, product plans, and the like, which can create havoc with the company’s stock price. It’s likely that these events will damage the com-pany’s brand as well. An interesting question is whether a person harmed by the misinformation—an investor who lost money, for example—could successfully sue the company for not having reasonable security in place to prevent the hacking.

Identity theft is another possi-bility. A company’s hijacked social media account might open the door to a data breach where customers’ personal information is exposed. Many companies are obligated by privacy laws to take swift action to minimize the wrongful exploitation of customer data. If a data breach occurs, a company is faced with expensive and lengthy remedial pro-cesses, as well as the potential of penalties (in the form of fines) for its inability to prevent the breach.

Viruses and malware have always been threats, and most companies do a reasonable job of preventing at-tacks that come through the usual routes, such as email. But a com-pany’s presence on social media platforms creates additional poten-tial entry points for viruses. The combination of a platform with

Social media platforms, by their very nature of promoting widespread and open communication, present significant risks to a company’s security and reputation.

Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next PageComputerComputer

Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next PageComputerComputer

qqM

Mq

qM

MqM

Qmags®THE WORLD’S NEWSSTAND

qqM

Mq

qM

MqM

Qmags®THE WORLD’S NEWSSTAND

Page 19: computer magacine

JANUARY 2014 15

Selected CS articles and columns are available for free at

http://ComputingNow.computer.org.

for mentions of the company. A reasonable level of ongoing scrutiny is important to ensure that the company maintains an awareness of what's happening in cyberspace that could affect it. A company should act to remove improper postings quickly, and it should use legal tools like the Digital Millennium Copyright Act or the Computer Fraud and Abuse Act, if appropriate.

There's no substitute, however, for developing and having in place a comprehensive social media policy. All employees should be trained and periodically retrained on that policy, which should include reviewing the policy on the first and last days of employment.

While most social media policies would likely describe the obvious limits on use, other elements should be considered for inclusion in the policy as well. For example, in the interest of preventing a hostile work environment, think about whether the company can (or could) restrict employees' postings about coworkers. Also contemplate whether employee postings about vendors or affiliates, or connections between employees and vendors' or affiliates' personnel, are permissible.

Developing a company-wide social media policy isn't an easy task. IT personnel should be involved to help identify the risks, their potential impacts, and corresponding countermeasures. Sales and marketing need to contribute as well to ensure that the company's planned social media uses are put into action. In view of the potential liability for breaches in security and privacy, the company's legal department or an outside attorney with experience in preparing corporate policies needs to be involved. The attorney can ensure that the policy doesn't include any elements that are unintended or unenforceable.

Companies shouldn't overlook the important role that social media has today. Although it can be a useful tool that companies can exploit to market themselves, there are serious downsides that need to be considered and anticipated, typically by having precautionary measures in place. The old adage that an ounce of prevention is worth a pound of cure holds true.

Brian M. Gaff is a senior member of IEEE and a partner at the McDermott Will & Emery, LLP, law firm. Contact him at [email protected].





16 COMPUTER Published by the IEEE Computer Society 0018-9162/14/$31.00 © 2014 IEEE

32 & 16 YEARS AGO

JANUARY 1982
www.computer.org/csdl/mags/co/1982/01/index.html

PRESIDENT’S MESSAGE (p. 4) “For those who are not impressed by qualitative information let me give some statistics: five years ago the society had a budget of $1.2 million with 26,539 members who paid $6 for their society membership. Our budget for last year was $4.2 million with a membership (as of September) of 55,767 who paid $8 each for society dues.”

SURVEY (p. 11) “The advent of VLSI and very recently developed automated design tools have removed a fundamental constraint from computer architecture. Computer designers are no longer rigidly bound by the cost of processing logic. … The designer can now attack problems that once were computationally intractable by implementing systems in which thousands or even tens of thousands of processors cooperate to solve a single problem.”

NETWORKS AND ALGORITHMS (p. 27) “Although data flow machines have been discussed for several years, no optimal architecture has yet emerged. Later in this article, we show how a data flow language can be executed with maximum parallelism on the more conventional parallel machines described here.”

SYSTOLIC ARCHITECTURE (p. 37) “In a systolic system, data flows from the computer memory in a rhythmic fashion, passing through many processing elements before it returns to memory, much as blood circulates to and from the heart. … Moreover, to implement a variety of computations, data flow in a systolic system may be at multiple speeds in multiple directions—both inputs and (partial) results flow, whereas only results flow in classical pipelined systems. Generally speaking, a systolic system is easy to implement because of its regularity and easy to reconfigure (to meet various outside constraints) because of its modularity.”

POLYMORPHISM (p. 47) “As we are confronted with the potential for highly parallel computers made possible by very-large-scale integrated circuit technology, we may ask: What is the role of polymorphism in parallel computation? To answer this question, we must review the characteristics of parallel processing and the benefits and limitations of VLSI technology.”

SIGNAL PROCESSING (p. 65) “With the advent of VLSI, many processing elements can now be realized on a single chip, and large collections of processors have therefore become economically feasible. In this article, we present the reader with several highly concurrent, pipelined computing structures—structures that are realizable in VLSI and that exhibit large throughputs.”

SYSTEM DESIGN (p. 87) "The next critical step in the evolution of highly parallel systems will be the introduction of VLSI components with architectures specifically created for applications in parallel structures. Most of these architectural innovations will come from skilled designers trained in VLSI design."

CAD/CAM (p. 105) “Integration of CAD/CAM systems is the general shape of things to come. An early step toward that end would be for one system to be able to translate its particular data format into a neutral format which another system could then interpret and use. Next, a series of steps can be envisaged that would integrate one system with a closely related one, as groups of systems are brought together in engineering and as the division between CAD and CAM is breached.”

ADA (p. 120) “The First Law of Programming: Compared with the dedicated work of talented programmers, the benefits of all techniques of management, organization, documentation, and language are insignificant.”

CONFERENCING (p. 129) “First Edition, by the CommuniTree Group, is a computer-conferencing software package with general data base and ‘electronic mail’ features. … Each new message can be attached to any other already in the data base, allowing users to organize a ‘conference tree’ of messages, any one of which can ‘grow’ into a new conference or subconference.”

A RADIO TELESCOPE (p. 140) “Resembling a giant ‘Y’ with arms 13 miles long, the Very Large Array of the National Radio Astronomy Observatory is a three-pronged array of 27 radio antennas that is controlled and monitored by 11 computer systems. The 212-ton, 82-foot-wide parabolic dish antennas are synthetically equivalent to a single reflector 17 miles across.”

HANDICAPPED HELP (p. 147) “The First National Search, announced in November 1980, was an effort to bring grass-roots initiatives to bear on the task of finding a variety of methods to apply the personal computer to the needs of the handicapped. It was highlighted by a national competition for ideas, devices, methods, and computer programs to help handicapped people overcome difficulties in learning, working, and successfully adapting to home and community settings.”




Editor: Neville Holmes; [email protected]

JANUARY 1998
www.computer.org/csdl/mags/co/1998/01/index.html

PRESIDENT’S MESSAGE (p. 4) “The Computer Society Digital Library ‘went live’ in August. … The library offers full-text viewing and searching of articles as well as on-screen figure manipulation. Access to the digital library is now available on a subscription basis, making the Society among the first scientific and engineering publishers in the world to offer electronic subscriptions to its periodicals via the World Wide Web.”

CHEAP CHIPS (p. 10) “Digital signal processors were once the backwater of the chip industry. … However, the embedded systems revolution has placed the relatively inexpensive chips in many consumer devices, including cars, consumer electronics products, and even medical equipment.”

EMPLOYMENT IN 1998 (p. 14) “January 1, 2000 is two years away, but already it is casting a giant shadow on the computer industry, in the form of the Year 2000 Problem.”

THE FUTURE OF COMPUTING (p. 29) “In this excerpt from ‘Visions for the Future of the Fields,’ a panel discussion held on the 10th anniversary of the US Computer Science and Telecommunications Board, experts identify critical issues for various aspects of computing. In the accompanying sidebars, some of the same experts elaborate on points in the panel discussion in mini essays.”

PROCESSOR DESIGN (p. 39) “As part of this outlook issue, Computer invited six computer architects to participate in a virtual roundtable. … [T]hese six architects shared several insights of interest to those of us not intimately connected with processor design.”

PREDICATED EXECUTION (p. 50) “In the future, integrating control and data speculation with predicated execution will enable advanced compiler techniques to increase the performance of future processors. With the adoption of advanced full predication support in IA-64 and perhaps many other architectures, predicated execution may become one of the most significant advances in the history of computer architecture and compiler design.”

MICROSOFT RESEARCH (p. 51) “A quiet migration of talent to Redmond, Washington, may shape the way all of us use computers in the next few years. What are 250 top researchers from academia and industry working on at Microsoft Research?”

GADGET NETOPIA (p. 59) "While successful Information Appliances do multimedia, e-mail, fax, and other functions inherited from their PC parents, they must do it much better than the current machines. If they are to be truly viable alternatives to traditional technology, there must be no setup complexities nor any computer jargon required to use them."

WHEELS ON THE WEB (p. 69) “An open system that conforms to standard Internet protocols for communication to and from automobiles could greatly enhance driving. …Existing Internet resources can be leveraged to integrate a car into the Internet. Service providers will subsequently produce innovative services for drivers and passengers that will improve safety and security as well as provide infotainment.”

CURRICULUM INTEGRATION (p. 78) “Raising the level of abstraction in the design of electronic systems has a major effect on the relationship of traditional electrical engineering with computer science. The classical overlap area, often called ‘computer engineering,’ captures only certain aspects of this relationship. … However, there is an equally important overlap area that deals with systems at a higher level of abstraction, and centers more on their application than on their hardware.”

DIGITAL LIBRARIES (p. 93) “Here we briefly address content-based retrieval and the issues of representation, storage, and retrieval of multimedia objects in digital libraries. We then very briefly identify some open areas of research.”

STANDARDS (p. 138) “While conflict ensures that technology will continue to change and grow stronger, it also ensures a certain forced honesty. As one organization ‘invades’ the turf of another—especially when it comes to standards activities—we get to see the cards held in the hands of the players.”

OBJECTS (p. 140) “In the future, object technology will not be confined to a niche. Objects will be pervasive; very little serious software will not be object-oriented at least in some way in 1998 and beyond.”

BROWSER WARS (p. 151) “The 1995 consent decree allowed Microsoft to monopolize the desktop OS market, but prevented it from strong-arming vendors into buying other Microsoft applications to keep their Windows licenses. … This decree will be important in the legal browser wars that will take place in 1998.”




NEWS BRIEFS


Hackers Hijack Internet Traffic

Hackers have redirected large amounts of Internet communications from organizations such as government agencies, financial institutions, and network service providers to various countries, perhaps to read or modify the information they contain.

Experts with Renesys—which provides intelligence to customers based on its monitoring of Internet traffic—say this is the largest attack of its type they have seen. They say they are unsure of the attackers' identities, intentions, or exact attack techniques but suspect the hackers looked at or altered data before sending it on to the intended destination.

Renesys researchers say they have seen traffic for organizations in the Czech Republic, Germany, Iran, Libya, Lithuania, South Korea, and the US diverted to Belarusian or Icelandic service providers' routers on about 40 occasions.

The Internet route hijacking attacks exploit the border gateway protocol (BGP), which enables the transfer of routing information between gateway hosts and the Internet or a network of autonomous systems.

Security analysts have long contended it is too easy to manipulate BGP and—in a type of man-in-the-middle attack—change or delete authorized routes for Internet communications, or even create entirely new ones. They say this capability lets hackers send traffic to their own systems and then possibly access data before sending it on to the intended recipient, with few signs of the diversion.
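Why a bogus announcement wins is easiest to see in the longest-prefix-match rule that routers apply when forwarding. The sketch below is an illustrative model of that rule only, not of BGP itself (which also weighs path attributes and policy); the prefixes and next-hop names are hypothetical.

```python
# Minimal model of longest-prefix-match forwarding, the lever that
# more-specific prefix hijacks pull. Not a real BGP implementation.
from ipaddress import ip_address, ip_network

def best_route(table, destination):
    """Return the next hop of the longest (most specific) matching prefix."""
    addr = ip_address(destination)
    candidates = [(net, hop) for net, hop in table.items() if addr in net]
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

# Legitimate state: one provider originates the whole /24 (hypothetical prefix).
table = {ip_network("203.0.113.0/24"): "legitimate-ISP"}
print(best_route(table, "203.0.113.77"))  # -> legitimate-ISP

# A hijacker announces a more-specific /25 covering the same hosts;
# longest-prefix matching now steers the traffic to the hijacker.
table[ip_network("203.0.113.0/25")] = "hijacker"
print(best_route(table, "203.0.113.77"))  # -> hijacker
```

If the hijacker then forwards the traffic onward to its real destination, the diversion is nearly invisible to end users, which is the scenario Renesys describes.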

A traceroute would not necessarily reveal a hijacking because Internet traffic frequently takes circuitous routes to travel from one place to another. In addition, the hackers obscured their redirections.

So far, they have grabbed data sent to 150 cities worldwide.

One diversion involved a block of traffic that was supposed to travel from Guadalajara, Mexico, to Washington, DC, through Laredo, Texas, via Mexican and US ISPs.

However, the hackers redirected the traffic via various ISPs from Guadalajara through countries such as the UK, Russia, Belarus, and Germany before sending it on to Washington.

Another communication between two locations in Denver was routed through places in the US, Canada, the UK, and Iceland.

Renesys spotted the first set of hijackings in February and March 2013. Initially, hackers redirected traffic through Belarusian ISP GlobalOneBel.

The attacks occurred again briefly in May, one diverting traffic to Belarus and a second to Iceland. They happened again on a larger scale in July and August, this time with all redirections through Iceland.

Officials with the Icelandic ISPs involved say the incidents resulted from a software bug they have since fixed.

According to Renesys, security administrators must now take man-in-the-middle BGP hijackings very seriously. The company recommends that they monitor all of their important online communications and work together to develop ways to keep such incidents from occurring.

Internet-traffic monitoring company Renesys discovered that hackers have been diverting Internet communications through Belarus and Iceland, possibly to read or modify data, before sending it on to the intended recipients. In these two cases, one block of traffic from Guadalajara, Mexico, to Washington, DC, was redirected through Belarus, and another between two points in Denver was rerouted through Iceland.

Study Says Government Policies Threaten Web’s Positive Impact

A recent study says that issues such as government surveillance, content controls, and access limitations—occurring even in democratic countries with highly developed economies—threaten the positive role the Web could play in societies worldwide.

The second annual Web Index report (http://thewebindex.org) points to problems such as low Web availability in less-developed countries, and government snooping like that exposed recently as having taken place in the UK and US. Other issues raised include widespread censorship and a lack of content on issues of importance to women—such as reproductive health—in many nations.

The World Wide Web Foundation conducted the Web Index study. Web inventor and World Wide Web Consortium director Tim Berners-Lee started the foundation as a way to create, as its website says, "an open Web available, usable, and valuable for everyone."

The recent report ranked 81 countries based on how much the Web has contributed to social, economic, and political development.

Sweden and Norway finished at the top, while the UK and US ranked third and fourth, respectively.

Among countries with emerging economies—those experiencing rapid growth and industrialization—Mexico ranked first (30th overall), followed by Colombia (32nd), Brazil (33rd), Costa Rica (34th), and South Africa (35th).

And for countries with less-developed economies, the Philippines was on top (38th), followed by Indonesia (48th), Kenya (53rd), Morocco (54th), and Ghana (55th).

In ranking nations, the report took into account factors such as government policies that affect Web openness and the quality of a country's technical infrastructure.

It also took into consideration Web-related issues such as

• universal access;
• freedom and openness of online access and use;
• relevant content, which reflects matters such as the degree of government censorship and the availability of information in local languages; and
• empowerment, which addresses the degree to which the Web enables people to receive information, voice their opinions, participate in public affairs, and take action.

"One of the most encouraging findings … is how the Web and social media are increasingly spurring people to organize, take action, and try to expose wrongdoing in every region of the world," said Berners-Lee. "But some governments are threatened by this, and a growing tide of surveillance and censorship now threatens the future of democracy."

"Bold steps are needed now," he continued, "to protect our fundamental rights to privacy and freedom of opinion and association online."

RESEARCHERS USE GPS TO TRACK HUGE ICEBERG

A team of UK scientists plans to use GPS technology to follow the movements of a mammoth iceberg that could threaten busy shipping lanes.

University of Sheffield and University of Southampton researchers received a grant of £50,000—about $80,500—to spend half a year tracking the B-31 iceberg and predicting its future movements.

The iceberg is 700 square kilometers (270 square miles) in area—about the size of Singapore—and will move an estimated 10 centimeters per second (19.7 feet per minute).

B-31 was part of the Pine Island Glacier in Antarctica until it broke loose. There is a chance that ocean currents could move it into heavily used South Atlantic shipping lanes near South America. The UK Natural Environment Research Council thus issued the emergency grant.

In conducting their research, the scientists will use information from two GPS devices that the British Antarctic Survey attached to the iceberg. They will also analyze images from US and German satellites and consult Brigham Young University's Antarctic Iceberg Tracking Database.

The Singapore-sized B-31 iceberg has separated from the Pine Island Glacier in Antarctica. UK scientists will employ GPS as a way to track the iceberg and predict whether it will move into sea lanes used by ships.
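The unit conversions quoted for the B-31 iceberg check out; as a quick back-of-the-envelope sketch (not from the article):

```python
# Verify 10 cm/s against the quoted 19.7 feet per minute.
speed_m_per_min = 10 / 100 * 60            # 10 cm/s -> 6.0 metres per minute
speed_ft_per_min = speed_m_per_min / 0.3048  # 1 ft = 0.3048 m
print(round(speed_ft_per_min, 1))          # -> 19.7

# Verify 700 square kilometers against the quoted 270 square miles.
area_mi2 = 700 / 2.589988                  # 1 sq mile = 2.589988 sq km
print(round(area_mi2))                     # -> 270
```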

Using Sensors to Help People Who Suffer Accidental Falls

Two US engineers have developed a sensor-based technology that could better detect whether a senior citizen or some other person has fallen. The system could then summon help.

This could be very important because, according to the United Nations' World Health Organization, falling is a leading cause of injury or death for people at least 65 years old, an increasingly large segment of the global population.

Some current products that help people who fall require them to put on a sensor and push a button to call for assistance. However, they must remember to wear the sensor.

Other products use a video camera and software to detect falls and call for help. However, the fall must occur within the camera’s field of view. And not everyone wants to be videotaped at all times while at home.

Two University of Utah researchers—assistant professor Neal Patwari and graduate student Brad Mager—have developed a technology designed to address these shortcomings.

Their approach uses a wireless network of sensors that is installed on or in walls. Patwari and Mager decided to work with RF sensors because their signals can penetrate walls.

The researchers sought to develop a system that could determine a person's horizontal and vertical orientation, including whether an individual fell or lay down deliberately.

During testing, they placed 12 sensors on walls at a low level and 12 higher up. The sensors use their transceivers to send data to a computer for processing. If there is data from only lower-level sensors, that means a person is on the floor.

The RF sensors communicate with one another and thus can detect the nature of a person's movements. For example, they can measure how fast someone who was once horizontal became vertical, thereby identifying whether the person fell or lay down.
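The two decision rules just described can be sketched in a few lines. This is an illustrative model only; the threshold value and function names are assumptions, not the researchers' code.

```python
# Illustrative sketch of the wall-sensor fall-detection logic described above.

def person_is_on_floor(lower_row_detects, upper_row_detects):
    """Only the low wall-mounted sensors see the person -> they are on the floor."""
    return lower_row_detects and not upper_row_detects

def classify_transition(seconds_upright_to_floor, fall_threshold_s=1.0):
    """A fast vertical-to-horizontal transition suggests a fall;
    a slow one suggests a deliberate lie-down."""
    return "fall" if seconds_upright_to_floor < fall_threshold_s else "lay_down"

# Example: only the lower sensor row detects the person, 0.4 s after
# the upper row last saw them upright.
if person_is_on_floor(lower_row_detects=True, upper_row_detects=False):
    event = classify_transition(0.4)
    print(event)  # -> fall; the real system would then summon help
```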

If a fall occurred, the system could contact a designated individual, agency, or monitoring company for help.

Patwari and Mager received a US National Science Foundation grant and have six months to show their approach's commercial potential, at which point they'd receive a second grant.

They are still testing the accuracy of their system, which they hope to release commercially via Patwari's Xandem Technology within three years.

Robot Withstands Collisions, Could Help Search-and-Rescue Missions

A new flying robot is designed to be able to crash into objects and keep functioning, making it very useful for working in disaster sites.

A team of scientists from the Swiss Federal Institute of Technology in Lausanne's Laboratory of Intelligent Systems developed GimBall, a spherical flying robot in a protective cage.

Experts say building flying rescue robots is important because ground-based robots can't easily climb stairs or navigate rubble-strewn disaster sites. Ground-based robots sent to search the World Trade Center site in New York City right after the 11 September 2001 terrorist attacks became bogged down in debris.

Countries finishing highest in the 2013 Web Index report.

Country        Overall  Universal    Freedom and    Relevant      Empowerment
               rank     access rank  openness rank  content rank  rank
Sweden            1         3            6              5             2
Norway            2         6            1              4             4
UK                3         8           24              1             3
US                4        12           27             10             1
New Zealand       5        11            8              3             5
Denmark           6         2            7              7            12
Finland           7         9            2             13            10
Iceland           8         1            3              9            17
France            9        16            5              8             6
South Korea      10         4           33              6             8

Source: World Wide Web Foundation

However, flying robots run the risk of hitting walls, girders, trees, or other objects commonly found in disaster zones, and either breaking down or having important components—such as cameras—damaged.

The Swiss Federal Institute scientists said they were inspired to create GimBall by insects that can fly around buildings despite having poor eyesight. The researchers explained that insects are able to do this because their outer shells let them survive crashing into surfaces.

They designed GimBall as a 37-centimeter (14.6-inch)-diameter robot weighing just 370 grams (13.1 ounces). It has a rigid inner frame and two propellers that allow it to fly at 5 km (3.1 miles) per hour. Its batteries enable five minutes of flight.

The aircraft can fly either autonomously or via remote control, and has a camera that can send images to rescue crews.

GimBall has a rotating outer frame consisting of 90 flexible carbon rods that absorb the force of running into a surface and thereby let the inner workings avoid damage.

During tests in forested areas, the scientists say, their robot flew and rolled along the ground without experiencing problems despite hitting objects.

They express hope GimBall will be ready for use in rescue missions sometime next year.

Experts Fear ATM Malware Will Spread from Mexico

Security vendor Symantec has found that hackers have upgraded malware previously found in automated teller machines (ATMs) in Mexico and have created an English-language version, indicating the thieves might be about to spread the Trojan horse to other countries.

Symantec analysts discovered two versions of the Ploutus malware, both designed to work with a specific type of ATM, which the company has declined to name.

The first version contained function names in Spanish, indicating that Spanish-speaking coders might have written the software, Symantec says. The second version is more robust, makes its malicious activities harder to detect, and is in English.

Attackers install Ploutus by picking the lock of an ATM running Windows and inserting a CD boot disk. This indicates that the hackers might be singling out stand-alone ATMs in areas with little traffic.

Within 24 hours after the cybercriminals install Ploutus, they enter a set of numbers on the ATM's keypad or, in the case of the initial version, an external keyboard. They then access an interface through which they can interact with the machine's software and withdraw money.

Symantec said the nature of the malware indicates that the hackers are very familiar with how their target ATM works.

The company said businesses concerned about their ATMs should use good locks on their machines so that thieves can't access their CD drives, change the BIOS boot order so that the machines boot only from the hard disk, and use BIOS passwords so that hackers can't alter the boot options.

Scientists: Biometric Systems Need More than Face Recognition

A recent study by scientists at the University of Texas at Dallas (UTD) and the US National Institute of Standards and Technology (NIST) indicates that biometric systems designed to recognize people should analyze their bodies and not, as is currently the case, just their faces.

"For 20 years, the assumption in the automatic face-recognition community has been that all important identity information is in the face," said NIST electronics engineer Jonathon Phillips. "These results should point us toward exploring new ways to improve automatic recognition systems by incorporating information about the body."

The researchers showed study participants pairs of images—displaying the head and upper body—of the same person in some cases or different people in others. They asked viewers whether the photos were of the same individual.

At different times, the scientists showed viewers only the face, only the upper body, or both. They tracked the participants' eye movements to see what they looked at while making their determinations.

The study found that participants who viewed the face and upper body or just the upper body did a better job of recognizing whether image pairs were of the same person than those who viewed only the face.

"Eye movements revealed a highly efficient and adaptive strategy for finding the most useful identity information in any given image of a person," concluded the study's lead author, UTD professor Alice O'Toole.

Editor: Lee Garber, Computer; [email protected]

The GimBall flying robot has a strong, flexible outer frame that enables it to keep functioning even if it crashes into surfaces, making it well-suited for disaster-related rescue work.





EIC’S MESSAGE


Happy New Year! I'm excited to begin the year with our annual "Outlook" issue, highlighting emerging technologies that promise to have a major impact on computing in both the near and distant futures. We cover a wide range of topics in this issue, including the emergence of RF-powered computing, the future of social learning in software engineering, ways to facilitate the rapid development and adoption of speech-user interfaces, what the future holds for solid-state memories, and a big data vision for tapping into the insights of population informatics. I hope that you enjoy reading these articles and that you find the topics interesting and thought provoking.

In my previous EIC messages, I stressed the importance of Computer's digital transition. This move makes sense both for electronic distribution through traditional means as well as through emerging mobile platforms, and I'm pleased to report that the IEEE Computer Society continues to make important strides in this area. In 2007, the IEEE CS began offering Computer as a PDF file; in 2011, we offered multimedia-enhanced PDF files; and as of 2012, Computer is available as a mobile app for both the iOS and Android platforms. At a recent meeting of the IEEE CS Magazines Operations Committee, editors supported a transition for all CS publications to move to a digital format (with print available on demand) tentatively scheduled for 2015. This is an exciting time to be editor in chief of Computer, as our content is published in new and convenient formats for our readers.

Select content from Computer, and other Society-related activities and publications, including IEEE Software, is also featured on the IEEE CS YouTube channel (www.youtube.com/user/ieeeComputerSociety). Multimedia interviews and content, captured along with traditional print articles, not only provide value-added capabilities to the print publication, but also provide opportunities for reaching audiences beyond IEEE CS members through new and emerging electronic models of distribution.

Finally, as I reflect on the last year, it amazes me how far our digital publishing initiatives have come in just three short years. The publishing world is changing in many ways—whether it's open access, digital-only distribution, or the emergence of social media, we must continue to evolve our publishing processes and platforms to engage our current and future readership. 2014 is my final year as EIC of Computer. It has been a rewarding experience and, like many events in one's life, it's also gone by too quickly!

EDITORIAL BOARD CHANGES

As is customary in this annual message, I want to take the opportunity to thank the editorial board members who serve in a variety of capacities, from the advisory panel to area and column editors. The masthead lists the people who help make Computer possible each month. These volunteers and staff members are critical to providing the expertise required to produce such a high-quality publication.

Digital Magazines: The Future of Publishing Is Here

Computer examines the advance of RF-powered computing, social learning in software engineering, speech-recognition technologies, solid-state memories, and population informatics through big data.

Ron Vetter, University of North Carolina Wilmington

JANUARY 2014 23

I also want to sincerely thank individuals who have completed their terms of service to Computer, and to welcome new members of the editorial board for 2014. Board members who retired from Computer in 2013 include column editor Chris Huntley, software editor David Weiss, and network editor Ahmed Helmy. They contributed significant time and expertise, and Computer is a better publication because of their efforts. In addition, Computer's long-time managing editor, Judi Prow, retired in May 2013. Judi worked tirelessly to improve the magazine and to uphold its well-known rigor and high standards. Computer misses her dedication, delightful spirit, and reassuring energy, and congratulates her on her retirement at the conclusion of a terrific career.

Finally, I want to take this opportunity to especially remember column editor John Riedl. John died on 15 July 2013 after a three-year battle with cancer (https://cse.umn.edu/admin/comm/features/2013_7_17_memoriam_john_riedl.php). I first met John at the University of Minnesota while I was a PhD candidate there. I asked him to join Computer's editorial board a couple of years ago to edit the Social Computing column; he was a wonderful editor, colleague, and friend. He is greatly missed.

Computer welcomes several new additions to the publication this year. First, I’m pleased to introduce Carrie Clark Walsh, Computer’s new managing editor. Carrie is very highly regarded for her work with volunteers at the IEEE Computer Society, especially for the way she has handled various Technical Committee and Special Technical Community efforts. She brings more than 10 years of editorial experience in the medical field as well as several years of experience in senior-level positions for high-profile programs. Please join me in welcoming her to our team!

We also have a new area editor and column editor joining us in 2014. Upkar Varshney, of Georgia State University, is the new area editor for health informatics, and Christian Timmerer, with Alpen-Adria-Universität Klagenfurt, will edit the Social Computing column. I look forward to working with them.

GET INVOLVED

Opportunities to become involved with the IEEE CS and Computer include becoming an author, serving as a guest editor of a special issue, or volunteering as a reviewer. Visit www.computer.org/portal/web/volunteercenter/getinvolved to find detailed information about these opportunities and others.

As always, I welcome your comments and encourage you to submit suggestions for topics to be covered in future issues of Computer.

Ron Vetter is the Interim Associate Provost for Research and Dean of the Graduate School at the University of North Carolina Wilmington and cofounder of Mobile Education, LLC (www.mymobed.com), a technology company that specializes in developing interactive short message service applications. Vetter received a PhD in computer science from the University of Minnesota, Minneapolis. Contact him at [email protected].

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.

New Board Members

Upkar Varshney is an associate professor of computer information systems at Georgia State University, Atlanta. His research interests include wireless networks, pervasive healthcare, ubiquitous computing, and mobile commerce. Varshney received a PhD in telecommunications and computer networking from the University of Missouri. In addition to authoring more than 160 papers, he has presented at numerous workshops, including keynote addresses at wireless, computing, and information systems conferences. Varshney has received several teaching awards, including the Myron T. Greene Outstanding Teaching Award (2000 and 2004) and the RCB College Distinguished Teaching Award (2002). He is serving or has served as editor or guest editor for major journals including IEEE Access, IEEE Transactions on IT in Biomedicine, Mobile Networks and Applications (MONET), Computer, and Decision Support Systems (DSS).

Christian Timmerer is an assistant professor in the Department of Information Technology (ITEC) and a member of the Multimedia Communications Group at Alpen-Adria-Universität Klagenfurt, Austria. His research interests include immersive multimedia communication, streaming adaptation, quality of experience, and sensory experience. Timmerer received an MSc (Dipl.-Ing.) and PhD (Dr.techn.) from Alpen-Adria-Universität Klagenfurt. In addition to publishing more than 100 technical papers, he's an associate editor for Computing Now responsible for social media technologies and the inaugural chair of the Special Technical Community on Social Networking. He was the general chair of WIAMIS 2008 and QoMEX 2013 and participated in the work of the International Organization for Standardization (ISO) and the Moving Picture Experts Group (MPEG) for more than 10 years. He's a member of the IEEE Computer Society, the IEEE Communications Society, and ACM SIGMM.

24 COMPUTER Published by the IEEE Computer Society 0018-9162/14/$31.00 © 2014 IEEE

COVER FEATURE

Karin Strauss and Doug Burger, Microsoft Research

The memory industry faces significant disruption due to challenges related to scaling. Future memory systems will have more heterogeneity at individual levels of the hierarchy, with management support from multiple layers across the stack.

The computer industry's phenomenal growth over the last half-century could not have been achieved without continuous improvement in solid-state memory density—a direct result of scaling chip memory to smaller cell sizes. But growing application storage and memory requirements, along with the demand for big data in an increasing number of domains, will pose enormous future challenges.

Think about memory for a moment: each chip contains billions of bits—working constantly (or nearly so)—stored in tiny memory cells, each of which holds a relatively small number of electrons. The semiconductor industry has done an amazing job so far of developing memory technologies, such as DRAM and flash, that provide this abstraction of stable, precise, "always correct" memory, with durability guarantees comparable to, or many times longer than, those of other system components. Under the covers, however, increased density requires copious mechanisms to compensate for failures. In DRAM, for example, refreshes maintain constantly leaking data values while error correction codes catch and correct bits that occasionally flip (soft errors); flash uses a translation layer to delay and hide transient and permanent (hard) errors. Up to now, innovative fixes like these have made possible rapid advances in the semiconductor industry.
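The soft-error correction mentioned above can be illustrated with the smallest classic code of this family. The sketch below is a minimal Hamming(7,4) encoder and corrector, purely illustrative: production DRAM controllers use wider SECDED codes such as (72,64), and every function name here is our own.

```python
# Minimal Hamming(7,4) sketch: 4 data bits plus 3 parity bits, so that any
# single flipped bit can be located and corrected. Illustrative only; real
# DRAM ECC uses wider codes, but the principle is the same.

def encode(data):  # data: list of 4 bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]   # codeword positions 1..7

def correct(code):  # code: list of 7 bits, at most one of them flipped
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # recompute each parity group
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4       # position of the bad bit (0 = none)
    if syndrome:
        c[syndrome - 1] ^= 1              # flip it back
    return [c[2], c[4], c[5], c[6]]       # extract d1, d2, d3, d4

word = [1, 0, 1, 1]
cw = encode(word)
cw[5] ^= 1                  # simulate a soft error in one cell
assert correct(cw) == word  # the single-bit flip is corrected
```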

Unfortunately, scaling memory to ever-higher densities while maintaining the "always correct" interface is becoming so difficult and costly that radical departures from current practices will likely have disruptive implications for software and system designers. As cells in charge-based memory get smaller, they also become less stable; even the smallest disturbances on their stored charge might be sufficient to change their state, resulting in decreased reliability. Possible solutions include new—but costly—fabrication techniques, 3D cell stacking, alternative storage principles, more heterogeneous memory hierarchies, and enhanced system support that will tolerate degraded memory and increased error rates.

Here, we describe some of the challenges to be overcome in scaling memories to higher densities, and predict even more heterogeneous memory hierarchies (including special-purpose memories) in the future that will require new levels of cooperation between hardware and software and significant changes in applications, development tools, and system software support.

MEMORY CLASSES AND CHARACTERISTICS

Memory has two main classes: working and storage. Working memory temporarily buffers inputs, intermediary values, and results during active computation; examples include DRAM, used as main memory, and SRAM, used for on-chip caches. Both technologies use charge to hold a stable state, but DRAM is based on storing charge (electrons) in memory cells, while SRAM uses carriers temporarily stored in circuit nodes to stabilize a cross-coupled circuit. Storage memory saves data for later reference and processing or as backing storage when contents do not fit in working memory. Magnetic hard disk drives (HDDs) have traditionally been used for this purpose, but in the past 10 to 15 years, flash, a charge-based memory technology,

What the Future Holds for Solid-State Memory

has emerged as a crucial component in storage memory systems. Flash was initially introduced in the mobile market—thumb drives, memory cards for digital cameras and other gadgets, and music players—but has made its way up the device chain, with flash-based solid-state drives (SSDs) now displacing HDDs in larger systems such as high-end laptops and even appearing side by side with HDDs in datacenter servers.

Table 1 provides a basic overview of these two types of memory. Essentially, working memory—such as DRAM—is byte-addressable and volatile, with lower capacities and lower access latencies; it functions to allow processors to quickly manipulate elements at a fine granularity for a relatively limited amount of time, after which they are saved in more stable storage memory (or no longer needed). Storage memory—such as flash—has higher capacity and is nonvolatile, with retention times on the order of years; low access latencies are not as critical for storage memory, and coarser access granularities are acceptable because data goes into working memory before being processed. Durability is important for both because both types of memory need to last roughly as long as the computing device that employs them. Energy concerns should be considered as well, but these are highly device- and application-specific: the energy and power requirements of a smartphone, for example, are very different from those of a datacenter server.

NEW CHALLENGES

Both DRAM and flash memory are charge-based: charges (electrons) are forced into their cells to store data, and voltages are used to sense the value stored in the cell. Electrons can be moved with low energy, which is both good and bad: this mobility enables cells to be read and written quickly and with little energy expenditure, but it makes long retention a challenge because electrons can easily escape cells. Flash has been designed to prevent this by adding a thick oxide insulator layer to the cells that holds charge longer, but this requires shooting electrons through the oxide to program the cells, creating a durability problem because the process leads to oxide degradation, eventually affecting cells' charge-holding capacity.

Achieving increasingly higher densities requires shrinking cells to smaller sizes. As cells shrink, they hold proportionately smaller charge. Consequently, the charge of a single electron represents a relatively higher proportion of the total charge stored by the cell, and a small number of electrons escaping or entering a cell results in a corruption of the cell's stored value. Manufacturing variation—that is, cell-to-cell fabrication differences—aggravates this problem in relatively smaller cells. These differences affect, for example, cell geometry and composition, so certain cells might be able to hold many fewer charges or experience oxide breakdown much earlier than others.

Historically, the memory industry has innovated to address these problems by focusing on the memory technologies themselves—specifically, by using different materials, manufacturing processes, or cell geometries. Increasingly, some desirable characteristic is sacrificed to obtain higher densities. A notable example is, rather than storing a single bit per cell (single-level cells, or SLCs), storing multiple bits in a single flash cell (creating multilevel cells, or MLCs) by dividing the analog range of voltages sensed to read and write the cell into more than two regions. This division increases the memory's usable levels and thus density, but causes faster wear and thus lower durability. Another example is refreshing DRAM: left untouched, DRAM cannot maintain the values it stores, so cells have to be periodically read and rewritten to make sure the appropriate charge levels are maintained, but these continuous refreshes reduce performance and consume additional energy.
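The MLC tradeoff above can be sketched numerically: a cell storing b bits divides its sensing range into 2^b regions, so each region, and hence the tolerance to charge disturbance, shrinks exponentially with b. The snippet below assumes an idealized cell with a normalized 0-to-1 sensing range; the names and range are illustrative, not taken from any datasheet.

```python
# Illustrative MLC sensing model: a b-bit cell divides its analog range
# into 2**b regions. Smaller regions mean less tolerance to charge drift.

def program_level(value, bits):
    """Target level (center of its region) for storing `value` in a b-bit cell."""
    levels = 2 ** bits
    assert 0 <= value < levels
    return (value + 0.5) / levels          # normalized 0..1 sensing range

def read_level(sensed, bits):
    """Map a sensed analog level back to the stored value."""
    levels = 2 ** bits
    return min(int(sensed * levels), levels - 1)

# An SLC cell (1 bit) tolerates drift of up to 0.25 of the range before
# misreading; a 3-bit cell misreads after only ~0.0625 of drift.
v = program_level(5, 3)                    # store value 5 in a 3-bit cell
assert read_level(v, 3) == 5               # clean read
assert read_level(v + 0.07, 3) == 6        # a small disturbance flips the value
```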

Without an obvious solution for efficiently retaining electrons in smaller DRAM cells, the overhead of handling memory errors could become prohibitively high. For example, increasing refresh rates might consume so many memory cycles that too few cycles remain for actual reads and writes. Increased error-correction capability, another common solution used in the server DRAM memory market, requires greater design complexity and additional storage for more complex codes, which could offset any density gains and result in higher cost. Error correction in working memory also raises issues of latency:

Table 1. Basic characteristics of memory technologies.

Characteristic | Description | Working memory (DRAM) | Storage memory (flash)
Capacity | How many bits of data it can store | Gbytes | Hundreds of Gbytes
Granularity | How much data is read/written in one access | Bytes | Kbytes
Read latency | How fast data can be read from memory | Nanoseconds | Microseconds
Write latency | How fast data can be written into memory | Nanoseconds | Microseconds
Retention | How long cells can store uncorrupted data | Milliseconds | Years
Durability | How many writes on average before a cell fails | 10^15 | 10^4-10^5

the performance requirements of working memory are much more demanding than those of storage memory; reliability solutions that work for storage memory, such as the addition of a translation layer, are not suitable for working memory due to their long latencies.

Another challenge for working memory, especially in the server market, is providing sufficient memory bandwidth to processing cores. Multicore CPUs have a greater number of processing units per chip. Unfortunately, memory technology has not kept up with the increasing memory bandwidth needs created by these additional cores, and the resulting sharing of memory resources among cores results in inefficiencies.

IMPACT OF ALTERNATIVE TECHNOLOGIES

To address such challenges, the memory industry and its academic counterparts are developing new memory alternatives. Table 2 shows several proposed candidates to replace today's working and storage memory technologies, along with the principle each uses for storing information and its specific storage mechanism.

The path currently pursued by charge-based manufacturers for both DRAM (hybrid memory cube; HMC) and flash (vertical flash) is to stack memory vertically, thereby gaining density per area, improving cost per bit, and improving performance. Atomic (resistive) memory technologies are based on changing the physical configuration of atoms or vacancies within a cell, thus affecting the cell's resistance. Magnetic (resistive) memory models are based on changing the relative orientation of a floating-orientation ferromagnetic material with respect to a fixed-orientation material, which again affects cell resistance. We leave out other storage approaches, such as photonic storage, as we are not aware of any current, practical large-scale implementations for them. Yoon-Jong Song and his colleagues offer detailed comparisons of some of these technologies (PCM, STT-RAM, and CB-RAM) elsewhere.1 Additional information on competing technologies is available from other publications.2-6

Vertical flash is the likely technology to be used as storage memory because its manufacture most resembles the flash memory used today. Its higher density, which results from using a third dimension, will allow manufacturers initially to use larger cells, temporarily mitigating the reliability problems described earlier. Density scaling can continue for some time by increasing the number of cells vertically, but manufacturing costs will ultimately limit the number of layers. Once this limit is reached, manufacturers will have to resume reducing cell sizes, incurring the associated challenges. After that, we speculate that atomic memories will follow, assuming that their large-scale fabrication costs can be reduced to levels comparable to those of flash. Two articles in the August 2013 issue of this magazine cover the impact of these storage-memory improvement technologies on software systems.7,8

Solutions for working memory are more difficult to predict because of its stricter performance requirements. In the short term, we believe that HMC arrays are likely to be adopted in server markets due to their increased performance. However, HMC's lower capacity might mean that it will not replace current DRAM chips and memory modules and will instead be used side by side with them as working memory. Longer term, each candidate technology poses challenges yet to be surmounted. Magnetic memory has desirable characteristics, but high densities have proven difficult to achieve and are still not comparable to DRAM densities. Atomic memory technologies seem to have

Table 2. Alternative memory technologies, their storage principles, and specific storage mechanisms.

Technology | Principle | Storage mechanism
Hybrid memory cube (HMC) | Electronic (charge-based) | Multiple layers of DRAM technology chip-stacked on top of a high-performance logic layer; trades total memory capacity for better performance
Vertical flash | Electronic (charge-based) | Charges are trapped in a floating-gate transistor; cells are vertically integrated (3D), significantly increasing density and allowing the use of larger cells
Phase-change memory (PCM) | Atomic (resistive) | Cell material can be crystallized or put into an amorphous state by controlled heating and cooling; material has different resistances when crystalline or amorphous
Memristors | Atomic (resistive) | Memristors use a thin film of materials such as titanium dioxide; applying high currents moves oxygen vacancies around the film, changing its resistance
Conductive-bridging RAM (CB-RAM) | Atomic (resistive) | Metal ions in a cell migrate when a current is applied and form a conductive path within a nonconductive material; this changes the resistance of the cell
Spin-transfer torque memory (STT-RAM) | Magnetic (resistive) | Cell includes a permanent and a floating ferromagnetic material; the polarity of the latter can be changed by a polarized electrical current, and their relative alignment can be determined by running a current through them and observing their resistance
Racetrack memory | Magnetic (resistive) | Multiple ferromagnetic domains share a smaller set of read/write ports; to be read or written, these domains have to be shifted into the port region

good density scaling and reasonable read latencies, but because their storage principle is based on changing the memory cell's atomic configuration, the time and energy required to program these cells might be prohibitively high. Furthermore, programming strains cell material, causing degradation and resulting in premature wear that gets cells stuck at high or low resistance and could compromise device durability. Certain mechanisms developed for DRAM, such as certain error-correcting codes, may aggravate this problem, reducing atomic memory durability even further.

A MORE COMPLEX FUTURE

Despite their likely ability to scale to higher densities, none of the multiple contenders for working memory match DRAM on every relevant dimension, with the following implications:

• Memory component heterogeneity might create further system heterogeneity.

• Future memory technologies might sacrifice one dimension to satisfy others, which could expose certain properties beyond the memory module interface, requiring further cooperation across system layers.

• Multiple points of operation for the same memory, either fixed at design time or tunable at runtime, might be required.

These developments could in turn have repercussions at multiple layers of the system stack: memory interfaces, hardware memory controllers, the operating system, and runtimes, as well as for languages and how applications are written. As engineers and designers, we must decide where most profitably to expose information and tunable knobs to the rest of the system. Greater exposure generates more opportunity but also introduces more complexity and disruption into the computing and memory ecosystem.

To illustrate these implications and their resulting tradeoffs, we describe a hypothetical system that uses three memory technologies side by side as components of total working memory: standard DRAM, HMC (a higher-performance, lower-capacity memory), and MLC phase-change memory (PCM; a higher-capacity, lower-performance memory that wears out as it is written). The three occupy the same flat physical address space; consequently, this organization requires more support across the system stack than traditional working memory does. Figure 1 shows such a hierarchy (bottom left: three types of memory modules: DRAM, HMC, and PCM), along with the pertinent system layers (middle and top left: memory controllers, operating system, runtimes, programming language, and applications), and identifies which of these layers must offer additional support for each of five scenarios, the requirements of which we will describe more fully later.

Memory heterogeneity furthering system heterogeneity

Increased heterogeneity is the most straightforward implication of multiple emerging memory technologies. As a variety of memory technologies become available—and lacking a single technology that performs optimally across all relevant dimensions—designers might choose to equip systems with multiple memory types, which combine to form a more effective working memory. These would differ from the strictly linear hierarchies we are used to, which employ only one type of working memory within a system.

In this world, software might have to be tailored in order to take advantage of different memory characteristics

Figure 1. Five scenarios based on a hypothetical system, and the system layers that must offer additional support in each scenario:

Layer | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5
Application | No | No | Yes | No | Yes
Programming language | No | No | Yes | No | Yes
Runtime | No | No | Yes | Yes | Yes
Operating system | Yes | No | Yes | Yes | Yes
Memory controller | Yes | Yes | Yes | Yes | Yes
Memory module | No | No | No | Yes | Yes

In scenario 1, the operating system is responsible for allocating three types of memory, each with its own advantages and disadvantages. Scenario 2 shows that protecting nonvolatile data from cold-boot attacks might require modifications to the memory controller. Scenario 3 shows that exposing nonvolatility via loads and stores to the regular application address space could require modifications to multiple system layers. Scenario 4 shows how exposing permanent failures can affect multiple system layers. Finally, scenario 5 shows layers affected by exposing performance versus reliability tradeoffs to programmers.

across the hierarchy (Scenario 1 in Figure 1): time-critical applications with reasonably small memory footprints, for example, would use HMC for maximum performance; applications with much larger memory footprints, but that do not write to memory very frequently, would use PCM, which provides higher capacity and benefits from the absence of writes by avoiding the wear that would otherwise result. Write-intensive applications that do not fit in HMC might use regular DRAM. In a scenario like this, the operating system must be aware of the memory's heterogeneity so that it can allocate the best memory for each application, and the memory controllers must coordinate to steer memory requests accordingly; no other system layer needs to be involved.
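A toy version of such an operating-system placement policy might look as follows. The capacity thresholds and all names are our own illustrative assumptions, not drawn from the article.

```python
# Sketch of a Scenario-1-style placement policy: the OS steers an
# application to HMC, DRAM, or PCM based on its footprint and write
# behavior. Thresholds are invented for illustration.

HMC_CAPACITY_GB = 4        # small but fast
DRAM_CAPACITY_GB = 32

def place(footprint_gb, write_intensive, time_critical):
    if time_critical and footprint_gb <= HMC_CAPACITY_GB:
        return "HMC"       # maximum performance for small, hot workloads
    if not write_intensive:
        return "PCM"       # high capacity; few writes means little wear
    if footprint_gb <= DRAM_CAPACITY_GB:
        return "DRAM"      # write-heavy workloads avoid wearing out PCM
    return "DRAM+PCM"      # write-heavy and huge: spill colder pages to PCM

assert place(2, write_intensive=True, time_critical=True) == "HMC"
assert place(100, write_intensive=False, time_critical=False) == "PCM"
assert place(16, write_intensive=True, time_critical=False) == "DRAM"
```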

Scenario 2 in Figure 1 considers the nonvolatility dimension. While working memory has traditionally been volatile, PCM in our hypothetical system adds nonvolatility, which at this level starts to blur the line between working memory and storage memory and has implications across the stack—and particularly on system security: part of working memory is now nonvolatile, and thus much more susceptible to cold-boot attacks.9 Countering these attacks might require additional system support, such as requiring that memory controllers encrypt data before storing it in PCM.
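One way a controller might realize such encryption can be sketched as follows. This toy derives a keystream from SHA-256 keyed by line address purely for illustration; a real memory controller would use hardware AES, and every name and constant here is an assumption.

```python
# Sketch of Scenario 2's countermeasure: the memory controller encrypts
# each line before it reaches nonvolatile PCM, so a cold-boot attacker
# who reads the chips sees only ciphertext.

import hashlib

KEY = b"controller-secret-key"   # held inside the controller, never in PCM

def keystream(addr, n):
    """Derive n keystream bytes for a given line address (counter mode)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(KEY + addr.to_bytes(8, "little")
                              + ctr.to_bytes(8, "little")).digest()
        ctr += 1
    return out[:n]

def pcm_write(addr, line):       # XOR with keystream on the way out to PCM
    ks = keystream(addr, len(line))
    return bytes(a ^ b for a, b in zip(line, ks))

def pcm_read(addr, stored):      # XOR again to decrypt on the way back
    return pcm_write(addr, stored)

line = b"secret working-set data!"
stored = pcm_write(0x1000, line)
assert stored != line                      # PCM contents are not plaintext
assert pcm_read(0x1000, stored) == line    # round-trip recovers the data
```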

Another challenge for working memory nonvolatility is how to expose it to software. The most straightforward way is to expose PCM as a fast mass-storage volume, which requires only operating system support. Although relatively simple to implement, this solution misses some potential benefits, as the software stack is not tuned to fast storage and could introduce high access overhead.10

An alternative is to access working data as part of the virtual address space of applications and persist these data in place, without any need for serialization into a file (Scenario 3 in Figure 1). At first glance, this possibility seems ideal because data is "automatically" persisted, but, to be exposed to programmers, mapping nonvolatility directly into the virtual address space requires language support and carries several correctness risks. First, common software bugs such as dangling pointers and other memory-management errors can have persistent effects—restarting an application or a system might not make an abnormal state disappear. Second, if both volatile and nonvolatile memories are available and in use by the same application, additional pointer safety bugs arise. Imagine a pointer in nonvolatile memory that points to a datum in volatile memory; when the application is restarted, the volatile datum is gone and the nonvolatile pointer still points to it. Finally, recovering from unexpected system failures could be more complicated: even if applications are correctly written, events such as power failures can corrupt data. Some recent work in the area of nonvolatile heaps addresses these issues,11,12 but we view this as a difficult, long-term problem.
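The pointer-safety hazard just described can be modeled in a few lines, with two dictionaries standing in for the volatile and nonvolatile halves of the address space; all addresses and names are illustrative.

```python
# Model of the Scenario 3 hazard: a pointer stored in nonvolatile memory
# survives a restart, but its volatile target does not.

volatile = {}                 # DRAM: cleared on restart
nonvolatile = {}              # PCM: survives restart

volatile[0x100] = "temp buffer"
nonvolatile[0x200] = 0x100    # persistent pointer into volatile memory

# ... power cycle: DRAM state is lost, PCM state persists ...
volatile = {}

ptr = nonvolatile[0x200]      # the pointer itself survived the restart
assert ptr not in volatile    # but its target is gone: a "persistent
                              # dangling pointer" no restart can clear
```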

Exposing feature sacrifices beyond the memory module interface

Sacrificing some features to benefit other features is not new to the memory industry. SSDs built from SLC NAND flash guarantee longer endurances than higher-capacity, lower-endurance SSD successors built from MLC NAND

Table 3. Some existing proposals for hardware-only wearable memory management.

Proposed solution | Cell type | Mechanism
Differential writes | SLC/MLC | Writes only those cells that have changed values
Flip-n-write | SLC | Flips groups of bits if the flipped group requires fewer bit flips to write than the original group
Dynamically replicated memory | SLC | Once a block in a page wears out, pairs pages to make two bad pages into one good page
Start-gap wear leveling | SLC/MLC | Rotates blocks within pages to spread wear more uniformly and prevent software attacks that intentionally direct excessive wear to certain regions of memory
Error-correcting pointers | SLC | Uses error-correction metadata storage in a block to point to worn bits and replace them
Stuck-at-fault error recovery | SLC | Divides a block into areas that contain only one worn-out bit each and flips all bits in an area if needed to match the value the cell is stuck at
Fine-grained remapping with error-correcting code and embedded pointers | SLC | Points to a replacement block from a worn-out block
Pay-as-you-go | SLC | Uses error-correction metadata storage to replace the first failed bit, then uses a common pool of replacement entries for any further worn-out bits
Coset coding | SLC | Provides multiple encodings for a data word, then uses a convenient encoding to minimize wear and tolerate errors
Zombie memory | SLC/MLC | Uses memory regions with uncorrectable worn-out bits to extend error-correction capabilities of blocks nearing end of life

flash. Here, the sacrifice affects only the product specification because failures and their resulting fragmentation of memory are hidden by the flash translation layer (FTL). However, disruptive aspects of new memory technologies can be difficult to hide completely from software, such as wear-out in a PCM-based working memory.

Wearable memories and opportunities for cooperation

We call memories subject to wear wearable memories. The issue of wear in atomic (as opposed to charge-based) working memories attracts our attention because it directly impacts the correct operation of a device, rather than just its performance or initial specification. If working memory wears out during the memory's lifetime and certain regions become unusable, this breaks the existing, convenient abstraction of a contiguous physical address space.

One could argue that the same problem afflicts flash, but wear only affects its product specification (the number of write cycles the flash module will last), not its correct operation. The reason shorter longevity affects only the product specification for flash is that flash is used as storage memory, and, as such, higher access latencies are tolerable. The higher acceptable latencies make possible the inclusion of the FTL in the memory hierarchy, which provides an illusion of contiguous physical address space. The FTL detects and corrects errors with complex error-handling mechanisms and performs wear leveling with an additional level of indirection in addressing. This indirection also remaps accesses to healthy memory when certain portions of the flash memory become unusable due to wear. However, the additional latencies inserted by the FTL are unacceptable for working memory.

Table 3 shows multiple solutions proposed for detecting and correcting errors, as well as for performing wear leveling on wearable working memory, including a few we ourselves have proposed (along with our collaborators Engin Ipek, Jeremy Condit, Edmund Nightingale, Thomas Moscibroda, Stuart Schechter, Gabriel Loh, Rodolfo Azevedo, John Davis, Parikshit Gopalan, Mark Manasse, and Sergey Yekhanin).13-22 The goal of these techniques is to provide fast and durable working memory with low hardware complexity. However, these proposals focus on solving the reliability problem mostly in hardware, hiding as much as possible from software layers. In particular, none of the solutions performs the kind of aggressive remapping of failed blocks that FTLs do, because the overhead would be prohibitive for working memory. Because arbitrary remapping may be too expensive, an alternative approach exposes working memory wear-out failures to software.

An example of cross-layer cooperation to manage wear-out comes from work we did with our colleagues Tiejun Gao, Steven Blackburn, Kathryn McKinley, and James Larus.23

Our work uses managed runtimes, such as those running C# or Java, to tolerate failures in wearable working memories (Scenario 4 in Figure 1). Because managed runtimes offer an abstracted view of memory, they are capable of allocating or moving objects to arbitrary locations. This capability allows managed runtimes to dodge "holes" in memory created by block failures due to wear. Another advantage is the granularity at which managed runtimes can handle memory. Objects can be quite small, which is a good match for the granularity at which unusable memory chunks are discarded, typically the size of a single cache block (64 bytes). This solution relies on the assumption that the system still has a region of memory that does not wear out (such as current-generation DRAM), so that the operating system and

applications that do not tolerate memory failures can run seamlessly. Programs that can tolerate failures must indicate they are able to use memory that can fail during execution or that has already partially failed. The motivation to use such memory is that it will likely be cheaper and more plentiful.

This cooperation between hardware and software works as follows. The hardware can detect wear-out failures when it attempts to write back cached data into the wearable memory. If the intended value cannot be written, a failure is detected. The hardware then buffers this value, and the operating system is invoked via an interrupt. Note that wear-out failures result in unwritable bits, instead of the random flips common to DRAM. Once a wear-out failure is detected, the corresponding memory location will no longer be used, so reads will not expose new wear-out failures. The operating system is responsible for recovering the unwritten data from the hardware buffer. The operating system (already in charge of managing physical memory) must now maintain a failure map that specifies which memory blocks have failed. For memory allocations from a managed runtime, the operating system relays the failure map covering the memory being allocated. The managed runtime's memory manager then ensures that no objects are allocated on failed blocks.
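To make the division of labor concrete, here is a toy sketch of the runtime side of this cooperation: a bump allocator that consults an OS-provided failure map at cache-block (64-byte) granularity and never places objects on failed blocks. The class name and interface are hypothetical; a real managed heap is far more sophisticated.

```python
BLOCK = 64  # failure granularity in bytes: one cache block

class FailureAwareHeap:
    """Toy bump allocator over a region with an OS-supplied failure map.

    failure_map is a set of failed block indices, as relayed by the
    operating system for this region (hypothetical interface).
    """

    def __init__(self, size, failure_map):
        self.size = size
        self.failed = failure_map
        self.cursor = 0

    def alloc(self, nbytes):
        """Return the start address of nbytes of healthy memory, or -1."""
        addr = self.cursor
        while addr + nbytes <= self.size:
            blocks = range(addr // BLOCK, (addr + nbytes - 1) // BLOCK + 1)
            bad = next((b for b in blocks if b in self.failed), None)
            if bad is None:
                self.cursor = addr + nbytes
                return addr
            addr = (bad + 1) * BLOCK  # restart just past the failed block
        return -1
```

Skipping failed blocks this way is what creates the fragmentation discussed next: a hole in the middle of a region forces allocations to restart past it.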

Tolerating failures does not come without cost for managed runtimes. The location of failures is arbitrary, especially if wear leveling is in use, so failures will cause fragmentation. Contiguous regions of memory that could



once store an object might now be fragmented by failures, making it impossible to store that same object. Lower-level support can mitigate this problem; we devised hardware that can defragment itself within groups of pages, greatly decreasing fragmentation issues. Overall, for the benchmarks we used (DaCapo), the memory manager we modified (Immix) only incurs performance losses after failures occur. Even for high levels of failures (50 percent memory-capacity degradation), our experiments showed performance losses of only 12 percent on average.

Static and dynamic selection of memory operating points

At design time, engineers can make multiple tradeoffs to obtain a memory part with the desired characteristics. For example, using different materials in PCM fabrication will affect read and write performance by modifying how easily the material can change its physical configuration. Once an engineering team has determined the material to be used and the memory has been fabricated, the choice cannot be changed, fixing read and write performance.

Other memory tradeoffs can change dynamically during memory operation. Along with our colleagues Adrian Sampson, Jacob Nelson, and Luis Ceze, we observed that it is possible to increase memory density if the application can tolerate small errors.24 Certain applications, such as sensor data processing, machine learning, and image processing, have error-tolerant data structures; these applications can produce acceptable output even if some bits of error-tolerant data structures are incorrect (Scenario 5 in Figure 1). Some of these applications have large capacity needs, so there is an incentive to increase density at the cost of errors. To exploit larger but error-prone memories, we need cross-system support. Memory modules provide a "knob" that allows portions of memory to be tuned for higher density at higher error rates. Memory controllers expose this knob to the operating system, which must control this knob and how these portions of memory are allocated. Languages should allow application programmers to identify data that can be stored in these dense but error-prone portions of memory, and runtimes must be aware of and manage these different types of memory.
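As a toy illustration of such a knob, the sketch below simulates stores into a hypothetical dense-but-lossy region that may flip each bit with a given probability; at an error rate of zero it behaves like ordinary memory. The interface is invented for illustration and is not the mechanism from the cited work.

```python
import random

def alloc_approx(nbytes, bit_error_rate):
    """Simulate a dense, error-prone memory region: returns the backing
    buffer and a store function that may flip bits on each write."""
    mem = bytearray(nbytes)

    def store(index, byte):
        for bit in range(8):
            if random.random() < bit_error_rate:
                byte ^= 1 << bit  # injected storage error
        mem[index] = byte & 0xFF

    return mem, store
```

An error-tolerant application would place only data such as image pixels or sensor samples in a region like this, keeping pointers and control state in reliable memory.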

System support for future working memory hierarchy

As noted earlier, multiple layers of the system stack, both hardware and software, are likely to be affected by future disruptions in semiconductor memory technologies. Designers must decide where to expose information such as performance, nonvolatility, and failures, and which knobs (for example, dynamic tradeoffs between density, reliability, performance, and power) to provide to software. Exposing optimizations only to the

lower-level layers is less disruptive to the system stack but requires that optimizations be widely applicable; those targeting specific cases will remain unexploited. Exposing more about memories to other system layers introduces complexity but could better leverage specific behaviors observed at these layers.

Many alternatives are possible for memory hierarchy designs and for the support provided to each, and in choosing among these alternatives, designers will have to find the right tradeoffs between functionality and complexity. We believe that the best designs will emerge from tight cooperation between multiple layers across the system and that the best tradeoffs will be made when these layers are designed in tandem.

Multiple experimental memory technologies are under development, creating tremendous uncertainty about how memory hierarchies will evolve. DRAM and flash face difficulties, but which technologies will succeed them and how these technologies might be organized remain unclear. We believe that systems will migrate toward a hierarchy of specialized memories, much like counterpart processing technologies migrated toward specialized processors. Rethinking the abstractions and increasing the cooperation among multiple software and hardware layers in the stack will be imperative to further memory density scaling.

References
1. Y.-J. Song et al., "What Lies Ahead for Resistance-Based Memory Technologies?," Computer, Aug. 2013, pp. 30-36.
2. Y. Li and K.N. Quader, "NAND Flash Memory: Challenges and Opportunities," Computer, Aug. 2013, pp. 23-29.
3. "International Technology Roadmap for Semiconductors 2011 Edition, Executive Summary," Int'l Tech. Roadmap for Semiconductors, 2011; http://www.itrs.net/Links/2011itrs/2011Chapters/2011ExecSum.pdf.
4. S. Williams, "How We Found the Missing Memristor," IEEE Spectrum, Dec. 2008, pp. 28-35.
5. S.S.P. Parkin, M. Hayashi, and L. Thomas, "Magnetic Domain-Wall Racetrack Memory," Science, vol. 320, no. 5873, 2008, pp. 190-194.
6. "About Hybrid Memory Cube," Hybrid Memory Cube Consortium, 2013; http://hybridmemorycube.org/technology.html.
7. A. Badam, "How Persistent Memory Will Change Software Systems," Computer, Aug. 2013, pp. 45-51.
8. S. Swanson and A.M. Caulfield, "Refactor, Reduce, Recycle: Restructuring the I/O Stack for the Future of Storage," Computer, Aug. 2013, pp. 52-59.
9. K. Bailey et al., "Operating System Implications of Fast, Cheap, Non-Volatile Memory," Proc. 13th Workshop Hot Topics in Operating Systems (HotOS 11), Usenix, 2011; https://www.usenix.org/legacy/events/hotos11/tech/final_files/Bailey.pdf.
10. A.M. Caulfield et al., "Providing Safe, User Space Access to Fast, Solid State Disks," Proc. 17th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS 12), ACM, 2012, pp. 387-399.

11. J. Coburn et al., "NV-Heaps: Making Persistent Objects Fast and Safe with Next-Generation, Non-Volatile Memories," Proc. 16th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS 11), ACM, 2011, pp. 105-117.
12. H. Volos, A.J. Tack, and M.M. Swift, "Mnemosyne: Lightweight Persistent Memory," Proc. 16th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS 11), ACM, 2011, pp. 91-103.
13. S. Schechter et al., "Use ECP, not ECC, for Hard Failures in Resistive Memories," Proc. 37th Ann. Int'l Symp. Computer Architecture (ISCA 10), ACM, 2010, pp. 141-152.
14. D.H. Yoon et al., "FREE-p: Protecting Non-Volatile Memory against both Hard and Soft Errors," Proc. 17th Int'l Symp. High Performance Computer Architecture (HPCA 11), IEEE, 2011, pp. 466-477.
15. M.K. Qureshi, "Pay-As-You-Go: Low-Overhead Hard-Error Correction for Phase Change Memories," Proc. 44th Ann. IEEE/ACM Int'l Symp. Microarchitecture (MICRO 11), ACM, 2011, pp. 318-328.
16. A.N. Jacobvitz, A.R. Calderbank, and D.J. Sorin, "Coset Coding to Improve the Lifetime of Memory," Proc. 19th Int'l Symp. High Performance Computer Architecture (HPCA 13), IEEE, 2013, pp. 222-233.
17. R. Azevedo et al., "Zombie Memory: Extending Memory Lifetime by Reviving Dead Blocks," Proc. 40th Ann. Int'l Symp. Computer Architecture (ISCA 13), ACM, 2013, pp. 464-474.
18. E. Ipek et al., "Dynamically Replicated Memory: Building Resilient Systems from Unreliable Nanoscale Memories," Proc. 15th Int'l Conf. Architectural Support for Programming Languages and Operating Systems (ASPLOS 10), ACM, 2010, pp. 3-14.
19. S. Cho and H. Lee, "Flip-N-Write: A Simple Deterministic Technique to Improve PRAM Write Performance, Energy and Endurance," Proc. 42nd Ann. IEEE/ACM Int'l Symp. Microarchitecture (MICRO 09), ACM, 2009, pp. 347-357.
20. N.H. Seong et al., "SAFER: Stuck-at-Fault Error Recovery for Memories," Proc. 43rd Ann. IEEE/ACM Int'l Symp. Microarchitecture (MICRO 10), ACM, 2010, pp. 115-124.
21. W. Zhang and T. Li, "Characterizing and Mitigating the Impact of Process Variations on Phase Change Based Memory Systems," Proc. 42nd Ann. IEEE/ACM Int'l Symp. Microarchitecture (MICRO 09), ACM, 2009, pp. 2-13.
22. M.K. Qureshi et al., "Enhancing Lifetime and Security of PCM-Based Main Memory with Start-Gap Wear Leveling," Proc. 42nd Ann. IEEE/ACM Int'l Symp. Microarchitecture (MICRO 09), ACM, 2009, pp. 14-23.
23. T. Gao et al., "Using Managed Runtime Systems to Tolerate Holes in Wearable Memories," Proc. ACM SIGPLAN Conf. Programming Language Design and Implementation (PLDI 13), ACM, 2013, pp. 297-308.
24. A. Sampson et al., "Approximate Storage in Solid-State Memories," Proc. 46th Ann. IEEE/ACM Int'l Symp. Microarchitecture (MICRO 13), ACM, 2013, pp. 25-36.

Karin Strauss is a researcher at Microsoft Research and an affiliate faculty member at the University of Washington. Her research interests include computer architecture, mobile and cloud systems, and living systems in-cell computation. Strauss received a PhD in computer science from the University of Illinois at Urbana-Champaign. She is an IEEE and ACM Senior Member. Contact her at [email protected].

Doug Burger is a director in Microsoft Research's Extreme Computing Group. His research interests include computer architecture, large-scale machine learning, new computing paradigms, and advanced natural user interfaces. Burger is an IEEE and ACM Fellow. Contact him at [email protected].

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.



Published by the IEEE Computer Society. 0018-9162/14/$31.00 © 2014 IEEE

COVER FEATURE

The Emergence of RF-Powered Computing

Shyamnath Gollakota, Matthew S. Reynolds, Joshua R. Smith, and David J. Wetherall, University of Washington

Extracting power "from thin air" has a quality of science fiction about it, yet technology trends make it likely that in the near future, small computers in urban areas will use ambient RF signals for both power and communication.

Over the past decade, personal computers have been transformed into small, often mobile devices that are rapidly multiplying. Aside from the ever-present smartphone, a growing set of computing devices has become part of our everyday world, from thermostats and wristwatches to picture frames, personal activity monitors, and even implantable devices such as pacemakers. All of these devices bring us closer to an "Internet of Things," but supplying power to sustain this future is a growing burden. Technological advances have so far largely failed to improve power delivery to these machines. Power cords tie devices down, prohibiting their free movement, while batteries add weight, bulk, cost, the need for maintenance, and an undesirable environmental footprint.

Fortunately, running small computing devices using only incident RF signals as the power source is increasingly possible. We call such devices RF-powered computers. As might be expected, the amount of power that can be harvested from typical RF signals is small. However, the energy efficiency of the computers themselves has improved exponentially for decades, a lesser-known consequence of Moore's law. This relentless improvement has recently brought the power requirements of small computational workloads into the microwatt realm, roughly equal to the power available from RF sources in practical settings.

Figure 1 shows the increasing energy efficiency that has accompanied decreasing transistor geometry.1

Importantly, this trend favors the emergence of RF-powered computing over time: the size of the workload that can be run from a fixed RF power source is increasing exponentially, and the operating range of an RF-powered device from a fixed RF power source is increasing at half the rate of Moore's law, assuming 1/r² propagation losses.2

RF signals are a compelling power source for energy harvesting, even though solar cells can often generate more power. RF signals have three key benefits. First, and most important, they can be reused for communication. Nearly all useful computing devices need to communicate, but conventional communication is very expensive in terms of power: 1 μW is a tiny fraction of the roughly 10 mW consumed by a low-power transmitter such as ZigBee. While it is possible to duty-cycle transmitters to lower their power requirements, a very low duty cycle makes communications largely unavailable.

Second, RF signals for TV broadcasts, cellular communications, and so on are ubiquitous in most urban environments. Unlike energy sources such as solar or mechanical vibration, RF signals are capable of providing power to devices day and night, inside and outside buildings, regardless of whether the receiving device is stationary or mobile. With existing RF transmission power levels, it is feasible to harvest power several kilometers from TV towers and several hundred meters from cellular base stations.3 Although Wi-Fi is another obvious source, it has a much lower energy density. Moreover, when devices are powered solely by ambient RF signals, there is zero added energy cost.

Finally, RF signals are attractive from an industrial design perspective. They propagate through plastic housings, while solar energy harvesting requires covering large areas of the device exterior with light-absorbing (and thus dark-colored) glass or plastic. Additionally, most mobile



devices already have antennas for communication. By reusing them, power can be harvested from RF signals without compromising the device aesthetics.

We expect RF-powered computing to move from a research curiosity to consumer products over the next few years. We have developed the necessary technologies over the past five years to build a series of prototypes of increasing functionality. To demonstrate the viability of our vision, we describe two of our latest research prototypes, both of which combine sensing, computing, and communication in ways that highlight the unique advantages of using RF signals for power and communication. The first is a telemetry system that enables collection of data from a dragonfly's brain and muscle activity while it is in flight.4 This application is not possible using batteries or power cables, because their weight would prevent the insect from flying. The second prototype demonstrates a new communication technique called ambient backscatter that lets nearby battery-free devices exchange messages using ambient TV signals as a communication medium instead of generating their own radio waves.5 Both prototypes demonstrate that RF-powered computing can support rich functionalities and applications that were previously infeasible.

HARVESTING AND BACKSCATTER

The techniques for harvesting power from RF signals and using such signals for backscatter communication are well established and form the basis of modern RFID systems, such as Electronic Product Code tags that can be placed on objects and identified from a distance. This kind of RFID uses UHF (ultrahigh frequency) radio waves and is known in the industry as the EPC Gen 2 standard. RF-powered computing builds on this heritage.

Harvesting power from RF signals

When a propagating radio wave encounters an antenna, its electric field causes electrons to move, creating an alternating current in the antenna. RF power-harvesting circuits convert the antenna's AC into a more useful DC form by rectifying it to a steady current that charges a storage capacitor. Figure 2 shows a representative energy-harvesting circuit. The harvester is composed of diodes and capacitors, with multiple stages that boost the output voltage. The diodes are the rectifying elements: they pass current in only one direction. The resulting DC is then used to charge a capacitor, storing energy that can be drawn down to supply power at a stable voltage level to run a computer. It is the presence of the RF signal itself that turns on the power-harvesting circuit.

The amount of power arriving at a device depends on the transmission power of the RF source and the distance over which the signal propagates. The initial transmission power of typical RF sources varies greatly, from around 10⁶ W for TV stations, to on the order of 10 W for cellular and RFID systems, to roughly 0.1 W for Wi-Fi systems. Only a very small fraction of this power will arrive at a device in normal usage because the RF signal is spread over a large area. The amount of power that can be captured depends on the design of the harvester, and it is a significant engineering challenge to capture energy efficiently at low power levels and high frequencies.
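A rough feel for these numbers comes from the free-space Friis relation, Pr = Pt Gt Gr (λ / 4πd)², an idealization that ignores the fading and obstruction losses of real urban propagation. The antenna gains and scenario values below are illustrative assumptions, not measurements.

```python
import math

def friis_received_power_w(p_tx_w, freq_hz, distance_m, g_tx=1.0, g_rx=1.0):
    """Free-space received power: Pr = Pt * Gt * Gr * (lambda/(4*pi*d))**2."""
    wavelength_m = 3e8 / freq_hz
    return p_tx_w * g_tx * g_rx * (wavelength_m / (4 * math.pi * distance_m)) ** 2

# A 10^6-W TV broadcast at 600 MHz, received 4 km away with unity-gain
# antennas, yields on the order of 100 microwatts under this idealization.
p_rx = friis_received_power_w(1e6, 600e6, 4000)
```

Even this idealized figure shows why microwatt-class workloads are the natural match for ambient RF harvesting.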

[Figure 1 is a log-scale plot of computations per kWh versus year, from 1940 to 2010, with data points for machines ranging from ENIAC, EDVAC, and the Univac series through the IBM PC family, the Cray 1, and 2008-2009 laptops.]

Figure 1. Computing's energy efficiency has improved by 12 orders of magnitude between 1950 and 2010.1


Backscatter communication

Conventional wireless communications such as Wi-Fi require each device to generate its own RF carrier. This method has been tremendously successful, but it is less practical for RF-powered computers because typical RF transmitters require orders of magnitude more power for carrier generation than is available.

Backscatter communication works by modulating the reflection of an existing RF signal. When an RF signal is incident on a solid object, the signal is reflected; radar works by tracking these reflections. Conversely, when an RF signal is incident on a well-matched antenna, it is largely absorbed. By changing whether the RF signal is reflected or absorbed in a time-varying pattern, backscatter communication encodes messages at very low power consumption. The change between absorbing and reflecting states can be made simply by switching the load presented to the antenna. The reflected RF signal propagates to a receiver, where it shows up as a small ripple on the more powerful RF signal. As an analogy, imagine conventional communications to be shining a flashlight at a receiver and turning the light on and off in Morse code. For backscatter communications, imagine holding a mirror and changing its orientation to reflect sunlight at a receiver.

Figure 3 shows an example of a backscatter signal. Because it leverages an external power source, backscatter communication can be four orders of magnitude more energy efficient than conventional communications such as Wi-Fi.
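The ripple in Figure 3 can be mimicked with a toy amplitude-only model: the tag toggles between reflecting and absorbing, adding (or not) a small offset to the carrier level seen at the receiver, which then thresholds each bit period. This ignores phase, noise, and the actual coding used; it only illustrates the principle.

```python
def backscattered_samples(bits, carrier_amp=1.0, ripple=0.05, spb=4):
    """Received amplitude when a tag reflects (bit 1) or absorbs (bit 0),
    superimposed on a constant reader carrier; spb = samples per bit."""
    samples = []
    for bit in bits:
        samples.extend([carrier_amp + (ripple if bit else 0.0)] * spb)
    return samples

def demodulate(samples, carrier_amp=1.0, spb=4):
    """Recover bits by thresholding each bit period against the carrier."""
    bits = []
    for i in range(0, len(samples), spb):
        chunk = samples[i:i + spb]
        bits.append(1 if sum(chunk) / len(chunk) > carrier_amp else 0)
    return bits
```

Because the tag only switches its antenna load, the transmit-side cost is essentially that of flipping a switch, which is the source of the energy advantage.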

BIOTELEMETRY FROM FLYING DRAGONFLIES

We developed an RF-powered computer to gather data from insects while they are in flight. The device’s tiny size and weight demonstrate that smaller and less intrusive wearable and implantable computing devices could be feasible for other species, including humans.4

Scenario

The study of neural activity during animal behavior is of intense interest for advancing neuroscience, but progress is slow due to the difficulty of obtaining data in naturalistic settings. The scenario we target measures the activity of multiple neurons in a dragonfly (or another flying insect) during prey capture flights.

To be feasible, the size and weight of the instrumentation must be so small that it does not interfere with the animal's behavior. Dragonflies of the species Libellula lydia weigh about 400 mg and can maintain their interception behavior with payloads of up to 33 percent of their body weight. This limits instrumentation to about 130 mg, a severe restriction easily exceeded by today's batteries. The device we developed is battery-free and weighs only 36 mg.

Another challenge for this scenario is that the biotelemetry must support a relatively high data rate. The dragonfly has 16 target-selective descending neurons that appear to control wing steering for prey interception. At flight temperatures of 35°C, these neurons have action potential spikes of approximately 250 microseconds in duration. The action potentials must be sampled at 25 to 40 kHz to have sufficient resolution for accurate identification. The total required data rate is approximately 5 Mbps.

System design

Figure 4a shows the system's architecture, which comprises a powered base station and a tiny RF-powered

[Figure 2 schematic: an antenna feeds a multistage ladder of diodes and 10-pF MIM capacitors, with an external 0.1-μF smoothing capacitor and an output to an LDO voltage regulator.]

Figure 2. RF power-harvesting circuit, used in the dragonfly telemetry prototype. The diodes pass current from the antenna in only one direction to charge the capacitors and provide a steady voltage level at the output.

[Figure 3 plots amplitude versus time (10.5 to 13.5 ms), showing a reader message followed by a backscatter tag response.]

Figure 3. Varying RF carrier signal (left) and backscatter signal superimposed on a constant carrier (right). The carrier varies greatly in amplitude as the reader sends a message, while the backscatter signal shows up as a small amplitude ripple on top of the carrier.


telemetry instrument carried on the dragonfly. Figure 4b shows a close-up of the telemetry package including the antenna mounted on a dragonfly. Conceptually, this arrangement is similar to that of a powered UHF RFID reader and a passive RFID tag, though at a much higher level of capability.

The base station is connected to an antenna near a perch in the insect enclosure. The antenna simultaneously transmits an RF signal from the base station and receives a reflected RF signal from the environment. The transmitted signal is sent in the 902- to 928-MHz Industrial, Scientific and Medical (ISM) band at +36 dBm (4 W), the maximum radiated power allowed by the US Federal Communications Commission's (FCC's) Part 15 regulations, to maximize the power arriving at the telemetry device. The received signal is processed by the base station to separate it from the transmitted signal and decode transmissions from the telemetry chip.

The telemetry device is the more interesting of the two components. It receives the base station signal and uses it to harvest operating power using the circuit in Figure 2. The harvested power runs all functions of the telemetry package, including neural sensor sampling, encoding of measurement data, and backscatter communication, to wirelessly stream data to the base station.

To meet size and power targets, the telemetry device is fabricated as a chip in a commercial 0.35-μm complementary metal-oxide semiconductor (CMOS) process. The die measures 2.36 × 1.88 mm and is wire bonded into a flex circuit assembly measuring 4.6 × 6.8 mm.

When it is in range of the antenna, the device operates as follows. Neural signals are amplified by low-noise biopotential amplifiers that can capture weak signals in the μV range. Each sample is digitized using an 11-bit analog-to-digital converter (ADC). Error coding is then added with an extended Hamming code, bringing the sample to 16 bits. This coding mitigates data transmission errors by allowing a single-bit error to be corrected and two-bit errors to be detected.
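An extended Hamming (16,11) SECDED code of the kind described can be sketched generically as follows; this is a textbook construction, not the chip's actual encoder circuit.

```python
def hamming16_encode(data11: int) -> int:
    """Encode 11 data bits into a 16-bit extended Hamming codeword
    (SECDED: corrects single-bit errors, detects double-bit errors)."""
    assert 0 <= data11 < (1 << 11)
    word = [0] * 16                                      # bit 0 = overall parity
    data_pos = [p for p in range(1, 16) if p & (p - 1)]  # non-powers-of-two
    for i, p in enumerate(data_pos):
        word[p] = (data11 >> i) & 1
    for p in (1, 2, 4, 8):                               # Hamming parity bits
        word[p] = sum(word[i] for i in range(1, 16) if i & p) % 2
    word[0] = sum(word) % 2                              # overall (even) parity
    return sum(bit << i for i, bit in enumerate(word))

def hamming16_correct(code: int):
    """Return (data11, status): status is 'ok', 'corrected', or 'double-error'."""
    word = [(code >> i) & 1 for i in range(16)]
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(word[i] for i in range(1, 16) if i & p) % 2:
            syndrome |= p
    overall = sum(word) % 2
    if syndrome and not overall:          # parity even but syndrome set
        return None, 'double-error'
    status = 'ok'
    if syndrome:                          # single error at position `syndrome`
        word[syndrome] ^= 1
        status = 'corrected'
    elif overall:                         # error in the overall parity bit
        word[0] ^= 1
        status = 'corrected'
    data_pos = [p for p in range(1, 16) if p & (p - 1)]
    data = sum(word[p] << i for i, p in enumerate(data_pos))
    return data, status
```

For an 11-bit ADC sample, 4 Hamming parity bits plus 1 overall parity bit yield exactly the 16-bit coded sample the article describes.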

Frames comprising 192 samples are sent as complete messages using backscatter communication, as outlined earlier. The frame is delimited by a unique 48-bit marker so that the base station can detect when it starts. We use phase-shift keying for the backscatter, rather than the traditional amplitude-shift keying, because it has minimal effect on power-harvesting efficiency.

Results and lessons

The total weight of the telemetry device is 36 mg, much lighter than state-of-the-art alternatives using batteries. (Existing battery-powered devices described in the literature weigh at least 170 mg.) The total power draw for the package is 1.31 mW, due mostly to the biopotential amplifiers. The backscatter communication scheme consumes less than 2 percent of the total operating power. For the 4-W transmitted power level, this gives the system an operating range of 1.5 m: when the dragonfly enters the range above the perch, the system turns on and continuously streams neural data.

The telemetry system shows that RF-powered computing can deliver significant functionality in a small package. In other work, we have shown that even higher data rates can be supported. By using a 16-state quadrature amplitude modulation (QAM) backscatter signaling scheme and a semi-passive design, we have achieved backscatter communication rates of 96 Mbps with a power consumption of 15.5 pJ/bit and a theoretical maximum operating range of 17 m.6 This speed is comparable to Wi-Fi, but our device is over 50 times more energy efficient than a Wi-Fi client.
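These energy figures translate directly into power: link power is data rate times energy per bit, so 96 Mbps at 15.5 pJ/bit implies roughly 1.5 mW of radio power while streaming, versus the tens of milliwatts typical of a conventional Wi-Fi client (the comparison figure is approximate).

```python
def radio_power_w(rate_bps, energy_per_bit_j):
    """Average radio power implied by a link rate and per-bit energy."""
    return rate_bps * energy_per_bit_j

# 96 Mbps at 15.5 pJ/bit: about 1.49 mW of radio power while streaming.
backscatter_mw = radio_power_w(96e6, 15.5e-12) * 1e3
```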

Figure 4. Dragonfly telemetry. (a) Architecture of the telemetry system, comprising a powered base station connected to an antenna near the perch and an RF-powered telemetry device carried on a dragonfly. The telemetry device turns on when the dragonfly nears the perch. (b) The telemetry chip is small and mounted on the back of the insect; the longer wires are an antenna.



For comparison, the highest data rate of UHF RFID tags is 640 kbps.

AMBIENT BACKSCATTER COMMUNICATION

We developed a prototype that has two architectural advances compared to the telemetry system: power is harvested from ambient TV signals, and the RF-powered computers can communicate directly by piggybacking on existing signals. We call this capability ambient backscatter communication.5

WISP and WARP

Our ambient backscatter system has two key ancestors: WISP (Wireless Identification and Sensing Platform)7 and WARP (Wireless Ambient Radio Power).3

WISP. A flexible platform that lets researchers explore RF-powered programs, sensing, and backscatter communication, WISP was refined over a period of five years at Intel's former Seattle research lab and the University of Washington. WISP is powered and read by commercial RFID readers that implement the popular EPC Gen 2 standard. To ease experimentation, we also developed a highly flexible reader based on the Universal Software Radio Peripheral (USRP) software-defined radio platform. Like ordinary RFID tags, WISP has no batteries. Unlike RFID tags, it is fully programmable, can execute arbitrary code, and is easily augmented with sensors.

WISP harvests power from RF signals in the 902- to 928-MHz band, storing energy in a capacitor and using it to operate an MSP430 microcontroller when there is sufficient energy at a workable voltage. The microcontroller runs software that can be changed on the fly. Downlink signals from an RFID reader are decoded using envelope-detecting amplitude-shift keying demodulation, and uplink backscattered signals to the RFID reader are encoded via the protocol specified in the EPC Gen 2 standard. A small amount of data storage, both volatile and flash, is provided, and the system can be interfaced to low-power external sensors. We have successfully integrated WISP with acceleration, temperature, light, and capacitive touch sensors.
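The harvest-into-a-capacitor, run-when-charged behavior described above can be sketched with a toy energy model. All constants (capacitor size, voltage thresholds, power levels) are invented for illustration and are not WISP's actual component values:

```python
# Toy simulation of duty-cycled RF-powered operation: energy trickles into
# a storage capacitor, and the microcontroller runs a task only while the
# capacitor voltage stays within a workable window. Constants are invented.
CAP_FARADS = 10e-6        # storage capacitor
V_WAKE = 2.0              # voltage needed to start the MCU task
V_SLEEP = 1.2             # voltage at which the task must stop
HARVEST_W = 50e-6         # average harvested power near the reader
TASK_W = 500e-6           # power drawn while sensing/computing
DT = 0.001                # simulation step, seconds

def simulate(seconds):
    """Count how many times the device wakes up in `seconds` of harvesting."""
    v, running, wakeups = 0.0, False, 0
    for _ in range(int(seconds / DT)):
        energy = 0.5 * CAP_FARADS * v * v      # energy stored in capacitor
        energy += HARVEST_W * DT               # harvest every step
        if running:
            energy -= TASK_W * DT              # task drains faster than harvest
        v = max(0.0, 2 * energy / CAP_FARADS) ** 0.5
        if not running and v >= V_WAKE:
            running, wakeups = True, wakeups + 1
        elif running and v <= V_SLEEP:
            running = False                    # sleep and recharge
    return wakeups

print(simulate(2.0), "wake-ups in 2 s of harvesting")
```

Because the task draws an order of magnitude more power than the harvester supplies, the device necessarily operates in bursts, which is why real platforms in this class duty-cycle their sensing and communication.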

To encourage experimentation, the WISP hardware designs and software systems are open source (http://wisp.wikispaces.com). More than 30 research groups worldwide have experimented with it, producing many papers.

WARP. The RF-harvesting circuit found in WISP or the telemetry device can, in principle, be used to harvest ambient, rather than reader-generated, RF signals. The WARP device harvests power from ambient TV signals to operate a temperature sensor with an LCD display, a microcontroller with short-range radio, and a segmented E Ink display. These experiments were performed several kilometers from a 1-MW TV tower. We have also recently operated WARP sensor nodes 200 m from cellular base stations, using a new and improved harvester.3

Scenario

The next logical question is whether backscatter communication (as well as power harvesting) can be generalized from planted to wild RF signals. We have shown that it can by developing the ambient backscatter communication primitive5 to let nearby RF-powered computers exchange messages without dedicated infrastructure. The simple ability to exchange messages is a hallmark of conventional computers that are networked with technologies such as Wi-Fi, yet prior to our work this ability had not been shown for RF-powered computers.

Figure 5 shows the architecture for ambient backscatter, where we use a TV signal as the ambient RF source. One RF-powered computer, Alice, sends a message to another RF-powered computer, Bob, using ambient backscatter. Also shown is a legacy TV receiver as a reminder that the ambient signal exists for a primary purpose—in this case, to broadcast television. Ambient backscatter signals also reach nearby legacy receivers, so backscatter devices must be designed not to interfere with legacy service.

The device-to-device communication provided by ambient backscatter enables many formerly difficult-to-build applications. We have prototyped two examples: battery-free smart cards that can perform transactions with one another, and "tagged" items that can detect if they are filed in the wrong position on a shelf.

System design

We selected digital broadcast TV as the ambient RF source because of its relatively high energy density and excellent coverage, particularly in urban areas: the top four broadcast TV networks in America reach 97 percent of households, and the average American household receives 17 broadcast TV stations.

Figure 5. Ambient backscatter architecture. RF-powered devices communicate without either device generating radio signals. One device, Alice, backscatters signals from the TV tower that can be decoded by a nearby device, Bob. To legacy receivers, this signal is simply an additional source of multipath interference.

To harvest power, we used the WARP energy harvester with a 258-mm dipole antenna to target UHF TV broadcast signals in a 50-MHz band centered at 539 MHz. We work with ATSC-encoded TV broadcasts in the US, though our ideas apply to other widely used standards such as DVB-T and ISDB-T that use different forms of modulation at the physical layer.

Sending an ambient backscatter signal occurs in the same manner as backscatter with a dedicated power signal, but it uses the ambient RF signal instead. Because the antenna is not frequency selective within the TV band, it does not matter which 6-MHz TV channel is broadcast—signals from all channels in the band will be backscattered. Ensuring that the backscatter does not interfere with legacy devices is an important consideration. Fortunately, to a legacy device, a backscatter transmitter is simply another feature of the environment that contributes to multipath distortion. Modern receivers are already built to measure and compensate for multipath, so there will be no degradation, even to nearby receivers, as long as the backscatter device does not modulate faster than the legacy receiver can adapt.

Decoding an ambient backscatter signal at the intended receiver is another matter. The TV broadcast signal has a rapidly varying amplitude (since it has a high information rate), and the backscatter signal is merely a ripple on this signal. This means that a receiver cannot simply subtract a fixed baseline to expose the backscatter signal (as is done in an RFID reader). Instead, we use the insight that the backscatter signal changes more slowly than the TV signal (because of its much lower information rate). This insight lets us expose the backscatter signal by time-averaging the received signal: with suitable averaging periods, the variations in the TV signal will be smoothed out, while the variations in the backscatter signal remain. Figure 6 shows this effect with experimental data.
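The time-averaging insight can be demonstrated with a toy simulation. The rates and ripple depth below are invented for illustration, not the paper's parameters: the raw samples look random, but per-bit averages separate cleanly into zeros and ones.

```python
# Toy demonstration of decoding by time averaging: a slow backscatter
# ripple rides on a fast-varying "TV" envelope; averaging over each bit
# period smooths out the TV variation and exposes the hidden bits.
import random

random.seed(0)              # deterministic toy data
SAMPLES_PER_BIT = 1000      # backscatter bit rate << TV information rate
RIPPLE = 1.25               # reflecting state slightly raises the envelope

def transmit(bits):
    """Backscatter `bits` as a slow ripple on a fast-varying TV envelope."""
    rx = []
    for b in bits:
        scale = RIPPLE if b else 1.0
        rx += [abs(random.gauss(0, 1)) * scale for _ in range(SAMPLES_PER_BIT)]
    return rx

def receive(rx, nbits):
    """Time-average each bit period, then threshold at the midpoint."""
    avgs = [sum(rx[i * SAMPLES_PER_BIT:(i + 1) * SAMPLES_PER_BIT]) / SAMPLES_PER_BIT
            for i in range(nbits)]
    threshold = (max(avgs) + min(avgs)) / 2
    return [1 if a > threshold else 0 for a in avgs]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert receive(transmit(bits), len(bits)) == bits
```

In the prototype, as the text explains next, this averaging is done not in software but with passive resistor-capacitor circuits, so the receiver itself draws almost no power.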

To work in an RF-powered computer, the decoding operation must also consume extremely little energy. This is problematic, as power costs for receiving messages are often high and difficult to reduce in conventional receivers due to the need for power-intensive amplifiers and analog-to-digital converters. Instead, we implement much of the decoding with passive components such as capacitors and resistors that perform the time-averaging operations. Once the signal is time averaged, the information is decoded by sampling at a low rate to distinguish between the zero and one bits.

Finally, we need to sense when a message is received to enable bidirectional communication between devices. Conventional solutions such as energy detection do not apply given the fast-changing TV signal, and correlation with a known preamble is not power efficient. Our insight, once again, is that time averaging reveals the backscattered signal. In the presence of backscatter, a short-term time average smooths out the variations in the TV signal but not those of the backscatter signal, so the short-term and long-term time-averaged signals differ from each other. In the absence of backscatter, these signals are very similar. To perform this check efficiently, we use the analog processing of a comparator to sample message bits; the comparator requires only a small voltage to turn on, with the added benefit that it activates only in the presence of an incoming message.

Figure 6. Revealing the ambient backscatter signal. The combined TV and backscatter signal is shown in the top subplot and appears to be random. After time averaging, the structure of the hidden backscatter signal is exposed in the bottom subplot.



Combining these techniques, we have a complete ambient backscatter communication system. The prototype illustrated in Figure 7 is similar to WISP and includes a capacitive touch sensor for input and an LED for output.

Results and lessons

Our prototype achieved ambient backscatter communication at rates of up to 1 kbps with bit-error rates below 10^-2 at distances of 0.75 m outdoors and 0.5 m indoors, all with the TV tower up to 4 km from the devices. Although this performance level is modest, the result is significant because it represents a new capability. We expect that the performance of these devices will improve quickly over time with better designs.

It is also significant that we have demonstrated all the key components—including carrier sense—needed to enable a network based on ambient backscatter, in which multiple devices communicate with one another and can ultimately connect to the Internet. This technology could be used to construct wireless sensor networks in which nodes sense, compute, and communicate with each other, all without the need to replace batteries.

RESEARCH DIRECTIONS

RF-powered computing is an emerging technology, so there are many opportunities for it to advance. As its capability increases, we expect that RF-powered computing will push the boundaries of ubiquitous computing, sensor networks, mobile apps, embedded computing, and the Internet of Things.

Power

Gregory Durgin and his colleagues have shown that some signal waveforms can be harvested more efficiently than others: for a fixed transmit power, a "peaky" waveform will deliver more harvested power than a signal with constant amplitude.8 This is because the diode used as a component in the harvester does not "turn on" until a sufficiently large voltage is applied. This observation suggests the notion of coding for the "power channel." In conventional channel coding, sets of waveforms are chosen to maximize information delivered per channel use or per unit time; in power coding, the goal is to maximize energy delivered per channel use or per unit time. Power coding is an interesting challenge because of the inherent nonlinearity of the diodes in the energy harvester.
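A toy model makes the "peaky waveform" effect concrete. The numbers below (diode drop, amplitudes, duty cycle) are invented for illustration: two envelopes carry the same average power, but only the peaky one ever exceeds the diode's turn-on voltage.

```python
# Illustrative sketch of why peaky waveforms harvest better: a rectifying
# diode conducts only above a turn-on voltage, so concentrating the same
# average power into brief peaks pushes more signal over that threshold.
V_TURN_ON = 0.3          # diode turn-on voltage (volts, invented)

def harvested(envelope):
    """Crude harvester model: energy is collected only above the diode drop."""
    return sum(max(0.0, v - V_TURN_ON) for v in envelope) / len(envelope)

n = 1000
constant = [0.25] * n                       # constant-envelope waveform
duty = 0.1                                  # peaky: on 10% of the time
peaky = [0.25 / duty ** 0.5] * int(n * duty) + [0.0] * int(n * (1 - duty))

# Both waveforms carry the same average power (mean-square amplitude)...
assert abs(sum(v * v for v in constant) / n -
           sum(v * v for v in peaky) / n) < 1e-9
# ...but the constant envelope never crosses the diode threshold.
print(harvested(constant), harvested(peaky))
```

The real problem is harder than this sketch suggests, precisely because of the diode nonlinearity the text mentions, but the qualitative conclusion, that waveform shape matters for a fixed average power, carries over.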

A complementary technique is to focus the power of RF signals in the spatial region of a device. Depending on the device's ability to provide feedback to the transmitter, this can be accomplished with extensions to well-known multiple-input, multiple-output (MIMO) techniques. In preliminary indoor experiments exploring this concept, Matthew Reynolds and his colleagues increased the incident RF signal power by over 8 dB, nearly an order of magnitude, using an 8×8-element MIMO setup.

Both of these concepts rely on the ability to customize the transmission of RF signals. To increase power efficiency when using ambient RF signals, it is likely that we will need harvesters that adapt themselves to the characteristics of existing ambient signals.

Communications

The range of ambient backscatter communication we have demonstrated to date is small compared to Wi-Fi or cellular ranges. Larger ranges can be obtained when one endpoint is powered, as is the case for infrastructure with connectivity to the Internet. This is because the powered endpoint can transmit at a higher power level and use sophisticated signal-processing techniques to better decode weak received signals.

A key powered endpoint for RF-powered computers is the cellular base station. The ability to communicate with this endpoint at moderate distances (say, 50 m) would greatly enhance the value of RF-powered computers. Thus we plan to explore ambient backscatter communication that uses cellular signals and communicates with a cellular base station. Cellular base stations employ multiple antennas and sophisticated signal processing that can be used to recover weak signals. It should be noted, however, that cellular protocols are heavily tailored to powered devices—mobile phones—and require prolonged interactions to send a message. Continued evolution of cellular protocols may lead to better support for RF-powered clients.

Tradeoffs

RF-powered computers inextricably link energy and information because both quantities are carried by a single RF signal. This introduces interesting tradeoffs, such as the act of backscattering reducing the power that can be harvested. As yet, there is no result extending Shannon's capacity theorem to quantify energy delivery alongside information delivery. Thus an important theoretical problem in RF-powered computing is to characterize the maximum rates at which energy and information can be simultaneously transferred over a channel given some constraints. In more practical terms, it is natural to design waveforms that optimize delivery of either energy or information, or trade them off in a controlled fashion to best suit an application's needs.

Figure 7. Photo of ambient backscatter communication node. The device can harvest, transmit, and receive without a battery or powered reader. It also includes touch sensors (the A, B, and C buttons) and LEDs (placed near the two arrows) that operate using harvested energy, as well as a programmable onboard microcontroller.

Extracting power "from thin air" has a science-fiction-like quality about it, yet technology trends make it likely that in the near future small computers in urban areas will indeed use ambient RF signals for both power and communication. It is natural to wonder if this is simply the next generation of RFID, the only widely used technology that harvests power from RF signals. Yet the capabilities we have demonstrated—running programs, rich sensing, high-rate data streaming, harvesting from ambient RF sources, and device-to-device communication—go far beyond the vision and abilities of modern RFID. They are more akin to a new form of computing. As this technology advances, it promises to embed computing into the everyday world in ways that were previously infeasible.

Acknowledgments

We thank our collaborators, past and present, who helped us develop our prototypes and vision. We especially acknowledge Daniel Arnitz, Hari Balakrishnan, Michael Buettner, Luis Ceze, Reid Harrison, Dina Katabi, Anthony Leonardo, Vincent Liu, Mark Oskin, Aaron Parks, Alanson Sample, Vamsi Talla, and Dan Yeager. This work has been supported by the National Science Foundation, Intel, the Howard Hughes Medical Institute, and Google.

References
1. J.G. Koomey et al., "Implications of Historical Trends in the Electrical Efficiency of Computing," IEEE Annals of the History of Computing, vol. 33, no. 3, 2011, pp. 46–54.
2. J.R. Smith, "Range Scaling of Wirelessly Powered Sensor Systems," Wirelessly Powered Sensor Networks and Computational RFID, J.R. Smith, ed., Springer, 2013, pp. 3–12.
3. A.N. Parks et al., "A Wireless Sensing Platform Utilizing Ambient RF Energy," Proc. 2013 IEEE Radio and Wireless Symp. (RWS 13), IEEE, 2013, pp. 331–333.
4. S.J. Thomas et al., "A Battery-Free Multichannel Digital Neural/EMG Telemetry System for Flying Insects," IEEE Trans. Biomedical Circuits and Systems, vol. 6, no. 5, 2012, pp. 424–436.
5. V. Liu et al., "Ambient Backscatter: Wireless Communication Out of Thin Air," Proc. ACM SIGCOMM 2013 Conf., ACM, 2013, pp. 39–50.
6. S.J. Thomas and M.S. Reynolds, "A 96 Mbit/sec, 15.5 pJ/bit 16-QAM Modulator for UHF Backscatter Communication," Proc. 2012 IEEE Int'l Conf. RFID (RFID 12), IEEE, 2012, pp. 185–190.
7. A.P. Sample et al., "Design of an RFID-Based Battery-Free Programmable Sensing Platform," IEEE Trans. Instrumentation and Measurement, vol. 57, no. 11, 2008, pp. 2608–2615.
8. M.S. Trotter, J.D. Griffin, and G.D. Durgin, "Power-Optimized Waveforms for Improving the Range and Reliability of RFID Systems," Proc. 2009 IEEE Int'l Conf. RFID (RFID 09), IEEE, 2009, pp. 80–87.

Shyamnath Gollakota is an assistant professor in the Department of Computer Science and Engineering at the University of Washington. He is interested in designing future wireless systems for higher performance and security, and in leveraging wireless technology in new domains such as HCI and power systems. Gollakota received a PhD in electrical engineering and computer science from MIT. Contact him at [email protected].

Matthew S. Reynolds is an associate professor in the Departments of Computer Science and Engineering and of Electrical Engineering at the University of Washington. His research interests include the physics of sensors and actuators, RFID, and biomedical applications of wireless technology. Reynolds received a PhD in media arts and sciences from MIT. He is a senior member of IEEE. Contact him at [email protected].

Joshua R. Smith is an associate professor in the Departments of Computer Science and Engineering and of Electrical Engineering at the University of Washington. He is interested in inventing new sensors, creating new ways to power and communicate with them, and applying them in ubiquitous computing, medical devices, and robotics. Smith received a PhD in media arts and sciences from MIT. He is a senior member of IEEE. Contact him at [email protected].

David J. Wetherall is a professor in the Department of Computer Science and Engineering at the University of Washington. He is interested in network systems, including mobile computing and Internet protocols. Wetherall received a PhD in computer science from MIT. He is a Fellow of IEEE and ACM. Contact him at [email protected].

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.


Published by the IEEE Computer Society, 0018-9162/14/$31.00 © 2014 IEEE

COVER FEATURE

Anuj Kumar and Florian Metze, Carnegie Mellon University

Matthew Kam, American Institutes for Research

Speech-user interfaces offer truly hands-free, eyes-free interaction, have unmatched throughput rates, and are the only plausible interaction modality for illiterate users across the world, but they are not yet widely developed to support every type of user, language, or acoustic scenario. Two approaches present exciting opportunities for future research.

Speech-based user interfaces have become increasingly popular, especially since the introduction of Apple's Siri in 2010. Google and Samsung quickly followed suit with their own speech-driven services, called Google Now and S Voice, respectively. The boom in speech-recognition applications is not surprising, given that they offer several potential advantages. First, speech-based interaction offers truly hands-free, eyes-free interaction, a dream that has evaded us for many years. Second, speech is faster than typing on a keyboard, and without the need for an onscreen keyboard, there is much greater flexibility in terms of screen real estate. Finally, speech-driven applications present important opportunities for the 800 million or so illiterate users in developing regions, giving them a feasible way to access computing.

However, beyond a few success stories, we have not yet seen large numbers of functional and sufficiently accurate speech-recognition services. There are several reasons for this, although two key challenges in particular stand out.

First, to deploy a usable speech recognizer, developers must build something that can be optimized for every new user group, language, or acoustic situation; and from the developer's perspective, because he or she is not typically an expert in speech recognition or acoustical engineering, this task is daunting.1 Consequently, many application developers conduct Wizard of Oz studies, in which they simulate automatic speech recognition (ASR) instead of deploying a functional ASR system for the purpose of early experimentation or data collection. Unfortunately, simulated ASR cannot uncover real recognition errors,2 which are the leading source of usability issues once the product is released.

Second, reaping the real benefits of speech-recognition applications requires their successful deployment on mobile phones, or similar devices that users can interact with while on the go; however, most current mobile-based speech services require access to a reliable network connection and a data plan. For instance, Siri only works if you have high-speed network connectivity, which is usually unavailable in many rural regions of developing nations or in certain areas of developed ones, such as an underground subway station or inside a moving bus. One exception is Google, which has recently started to ship its general-purpose recognition models for local use on devices. However, these models aren't fine-tuned, even over time, to specific user or usage requirements for improved accuracy.

Enabling the Rapid Development and Adoption of Speech-User Interfaces



Nevertheless, ASR research has made great strides in recent years, reaching a point where, given enough speech expertise and in-domain data, a sufficiently accurate speech recognizer can be developed for any scenario; this includes cases where non-native accents, background noises, children's voices, or other similar challenges may be present. However, if a speech recognition system does not work as intended, it is generally impossible for a non-expert to pinpoint the exact reasons for failure. For example, recognition failure can result from a user speaking too slowly (speaking rate is a frequent cause of mismatch) or too clearly (if the person hyper-articulates), from unexpected background noises, or for some other reason, and these are error patterns that experts can usually analyze quickly. Adaptation (applying one or more optimizations to an existing recognizer) can generally mitigate these errors and will often result in a functional system,3 but such adaptation requires that developers have substantial expertise and experience in speech recognition development. Application developers typically do not have such expertise or experience and find it difficult to identify people who do, which makes their task even more challenging.

Looking forward to where this technology is headed, we describe the design and development of both a speech toolkit that embeds expert knowledge into speech applications and a new model for mobile-device ASR systems that eliminates the requirement of a reliable network connection. Put together, these two innovations have the potential to solve ASR's fundamental problems, thereby enabling rapid development and adoption of speech services in newer domains and languages, and for users whom it was not previously possible to assist.

A TOOLKIT FOR NON-EXPERTS

Several resources are currently available for non-expert speech-application developers. SUEDE, for example, lets any designer rapidly mock up a prompt-and-response speech interface for testing in a Wizard of Oz study.4 It does not, however, support the development of a working recognizer. SPICE, on the other hand, supports rapid development of a baseline recognizer for new languages by allowing any researcher to input a set of audio files, a corresponding phoneme set, and a dictionary to generate the baseline acoustic model.5 However, SPICE does not automatically perform acoustic- or language-specific optimizations, which are key to achieving reasonable accuracy.

Open source and commercial speech toolkits such as Sphinx (http://sourceforge.net/p/cmusphinx/discussion), Janus,6 or Kaldi (http://kaldi.sourceforge.net) support both the development of a baseline recognizer and an adapted version. However, they do not provide any automatic guidance on what adaptations to perform, leaving it to the developer's expertise to understand the application's context and apply the appropriate improvement techniques. Consequently, non-expert researchers find these toolkits extremely difficult to use. In 2012 alone, a discussion forum for Sphinx had more than 6,000 posts with over 1,000 unique topics from non-experts asking for help on various issues related to speech recognition.

Industry leaders such as Google and Microsoft also offer various free APIs to facilitate the integration of speech recognition into applications. For instance, applications can send an audio file to a preconfigured server and receive a decoded transcript in real time. Although this solution works well for typical speech applications—those developed for native speakers for use in clean, quiet environments—it is less robust in contexts where the acoustics or language patterns differ slightly from those they were developed for. Moreover, developers cannot access the background acoustic and language models installed on the server to perform any adaptations that might be needed for the recognizer to work in their own application's scenario.

We turned to speech experts to learn how they build accurate and usable speech interfaces. These experts are well trained with years of experiential knowledge that guides them intuitively in building recognizers for new languages, acoustic situations, or users. This knowledge is undoubtedly hard to transfer to non-experts directly, but by observing these experts in action, we can study and formalize their tacit knowledge, and build a toolkit for non-experts. This formalized knowledge can then be used to help novices in automatic analysis and to recommend appropriate optimization techniques.

To do this, we interviewed five experts at Carnegie Mellon University. First, we asked them to describe a general adaptation process, the common challenges they face, the people they consult, and the tools they use. In a second phase, we observed them in action on an actual speech recognition optimization task. We gave each expert a dataset that contained utterances from Indian children, recorded in a noisy background on a mobile phone. We then asked the expert to explain the steps (similar to a retrospective) that he or she would take to build the best recognizer for this dataset. The transcripts of these interviews became the basis for a line-by-line open-coding process to identify relevant concepts and themes that enhanced our understanding of the optimization process and the associated intuition.




Based on these interviews, we simplified our goals as follows:

• goal 1—come to a fine-grained understanding of why the speech recognizer is not working;
• goal 2—provide guidance to non-experts on steps they can take to make it work (including optimizations of the underlying recognizer and the user interaction style); and
• goal 3—enable performance visualization of competing setups to better understand the tradeoffs involved in each setup.

Accordingly, our speech toolkit for non-experts (or, as we call it, SToNE) has four modules: a feature extractor, an error visualizer, a knowledge base, and an optimization advisor. Figure 1 details how they are all tied together.

Feature extractor

The feature extractor's aim is to support goal 1—that is, help with analysis of why current recognizers might be failing or are unusable. To do so, this module extracts several lexical and prosodic features that correlate well with popular reasons for error, such as pronunciation score, speaking rate, and noise, and, by using regression analysis, it identifies the most significant features that impact recognition accuracy.
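The final step the feature extractor performs can be sketched in miniature. Here a simple Pearson correlation stands in for the toolkit's regression analysis, and the feature names and values are invented for illustration:

```python
# Toy version of ranking features by their impact on recognition accuracy:
# compute each feature's correlation with per-utterance word accuracy and
# sort by strength. Feature names and numbers are invented examples.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One value per test utterance: hypothetical prosodic/acoustic features.
features = {
    "speaking_rate": [4.1, 6.0, 3.8, 5.5, 4.4, 6.2],     # syllables/second
    "noise_level_db": [30, 55, 28, 60, 35, 58],          # background noise
}
word_accuracy = [0.92, 0.61, 0.95, 0.55, 0.90, 0.58]

ranked = sorted(features,
                key=lambda f: -abs(pearson(features[f], word_accuracy)))
print("most significant feature:", ranked[0])
```

A real deployment would use multivariate regression over many correlated features, as the text describes, but the output is the same in spirit: a ranked list telling the developer where recognition is losing accuracy.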

Error visualizerThe error visualizer supports goals 1 and 3 by helping

the developer perform semiautomatic error analysis to un-derstand recognition-error patterns. For a more detailed understanding of why the recognition might be failing, the error visualizer assists the user in doing three tasks: un-derstanding the data distribution across variables, creating meaningful subsets and only analyzing that part of the data

Figure 1. Process-flow diagram of the developer’s interaction with the SToNE toolkit’s modules. The numbers indicate the developer’s sequence of steps, and the arrows indicate the direction of information flow. The developer uses a preconfigured virtual machine to conduct all experiments and interact with the website to avoid any hassles of recognizer installation.

(for example, all females under age 10, because of poor recognition accuracy), and comparing two or more subsets to understand differences between them. The latter would be particularly useful when comparing a subset giving high accuracy and a subset giving low accuracy to understand where exactly the two differ.
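The subset-and-compare idea can be approximated in a few lines. In this sketch the record fields (gender, age_group) and the numbers are invented for illustration; they are not SToNE's schema.

```python
def subset(records, **criteria):
    """Select records matching all field=value criteria."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

def mean_wer(records):
    """Average word-error rate of a subset."""
    return sum(r["wer"] for r in records) / len(records)

# Toy per-utterance results (hypothetical).
utterances = [
    {"gender": "F", "age_group": "child", "wer": 0.62},
    {"gender": "F", "age_group": "adult", "wer": 0.21},
    {"gender": "M", "age_group": "child", "wer": 0.58},
    {"gender": "M", "age_group": "adult", "wer": 0.19},
]

low  = subset(utterances, age_group="adult")   # high-accuracy subset
high = subset(utterances, age_group="child")   # low-accuracy subset
print(round(mean_wer(high) - mean_wer(low), 2))  # → 0.4
```

The gap between the two subsets points the developer at the variable (here, age group) most worth investigating.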

Knowledge base

The knowledge base (KB) supports goal 2 and consists of a set of rules extracted from an analysis of the interviews about the specific instructions or intuitions the experts had while analyzing a particular development situation. Upon formalization as "if, then, else" rules, the same experts vetted the KB for consistency and accuracy of formulation.

Optimization advisor

The optimization advisor also supports goal 2, acting as a front end to the KB for the developer. Given a specific acoustic situation, a new user group, or a new language, it queries the KB for the appropriate steps needed to build an accurate recognizer and communicates the answer to the developer. Once a step-by-step strategy has been recommended, the developer can then refer to tutorials in the virtual machine to obtain guidance on actually performing the optimizations and building the required speech recognizer or, in some cases, the interface itself.
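As a concrete sketch of how such "if, then, else" rules and the advisor's query might look in Python (the conditions, recommendations, and priorities below are illustrative, not the actual KB contents):

```python
# Each KB rule pairs an "if" condition over the development context with a
# recommended optimization; a rule's priority starts as the number of experts
# who recommended that option. All rule contents here are hypothetical.
RULES = [
    {"if": lambda c: c["matched_acoustic_data"],
     "then": "use existing acoustic models", "priority": 3},
    {"if": lambda c: not c["matched_acoustic_data"],
     "then": "train new acoustic models", "priority": 2},
    {"if": lambda c: c["noise_db"] > 40,
     "then": "apply noise filtering before training", "priority": 2},
    {"if": lambda c: c["non_native_speakers"],
     "then": "add pronunciation variants to the dictionary", "priority": 3},
]

def advise(context):
    """Return matching recommendations, highest expert priority first."""
    hits = [r for r in RULES if r["if"](context)]
    return [r["then"] for r in sorted(hits, key=lambda r: -r["priority"])]

print(advise({"matched_acoustic_data": False,
              "noise_db": 55,
              "non_native_speakers": True}))
```

Because priorities are plain numbers attached to rules, later expert feedback can bump a rule up or down, or append a new rule, without touching the advisor logic.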

Initial results

The crux of our work is in understanding whether expert knowledge can be properly formalized, and if so, the extent to which it can be used for the benefit of non-experts. Our work so far has focused on answering this question and evaluating the KB's quality in terms of its ability to predict correct techniques for both seen and unseen circumstances.7

First, to see which rules get triggered and how well they actually model expert intuition, we evaluated the KB with the same dataset used in the expert interviews. Table 1 lists the results in terms of word-error rate improvements. On the whole, the KB's recommendations either mirrored or outperformed those of all five experts. The major point of difference arose in a step in which the experts themselves had conflicting opinions (step 1). To deal with such cases, we incorporated the

conflicting alternatives from all experts with a priority ranking for each option. Initially, this priority ranking was the number of experts who had recommended a particular option, but, over time, other experts could provide additional feedback based on new test runs to either update these rankings or add more recent techniques to the KB.

Second, to see how well our KB performed for previously unseen datasets or users, we tested it on a second, new dataset that contained speech from poor readers of English recorded in noisy environments. This dataset differed from the first by focusing on recognizing full sentences, not isolated words. For this evaluation, we hired two additional experts for their recommendations on techniques, as we also wanted to see how the techniques recommended by our KB compared with those of other experts who did not contribute to the KB's development. Table 2 shows the results.

On the second task, our KB outperformed one expert (E6) and underperformed another (E7); E7 recommended a technique that was not initially covered by the KB but was significantly better than the KB’s alternatives. As the boundaries of speech recognition expand, such situations are likely to arise in the future as new techniques become available or old ones become obsolete. Fortunately, with

Table 1. Word-error rates (%) for the first test dataset. The techniques in the first column were recommended by the experts (E1–E5) or the knowledge base (KB); the final row gives each advisor's end result in the order E1, E2, E3, E4, E5, KB.

    Optimization technique                                    Word-error rates (%)
    Baseline: use existing acoustic models (AMs)              94.7, 94.7, 94.7, 94.7
    Baseline: train new AMs                                   25.4, 25.4, 25.4
    Maximum likelihood linear regression (MLLR) adaptation    77.3, 77.3, 77.3
    Maximum a posteriori (MAP) adaptation                     78.3, 78.3
    Vocal tract length normalization (VTLN)                   76.9, 76.9, 24.9
    Adding pronunciation variants to dictionary               71.7, 22.2, 22.2
    Frame rate adaptation                                     20.8
    Final                                                     78.3, 76.9, 71.7, 22.2, 24.9, 20.8

Table 2. Word-error rates (%) for the second test dataset. Techniques were recommended by the two additional experts (E6, E7) or the knowledge base (KB); the final row gives each advisor's end result.

    Optimization technique                           Word-error rates (%)
    Add half-words to dictionary
    Correct spelling errors
    Baseline: train new AM                           56.1, 56.1
    Baseline: train new AM + model bootstrapping     42.2
    Cepstral variance normalization (CVN)            40.9, 54.3
    Frame-rate adaptation                            54.8, 52.2
    Final                                            40.9, 54.8, 52.2

its rule-based design, the KB is easy to update with newer techniques.

Although the KB performed at least as well as any expert in the first task—and it can be expanded to incorporate the techniques from E7 for the second task—our work is limited to defining "if, then, else" rules based on datasets as presented to the seven experts. The KB would benefit from a more comprehensive approach. Therefore, for future work, we plan to go beyond manual analysis and development to an automated meta-analysis of a large number of comparable, published experiments. This will help increase the knowledge representation's scalability, robustness, and portability.

ASR FOR MOBILE DEVICES

When designers and developers build a speech interface, the job is only half done: it must then be deployed and monitored. To continuously improve the interface,

developers need access to real data, which can only be collected by deploying an initial system. As mobile devices proliferate, many speech-recognition interfaces are now developed for such platforms. However, when compared with desktop ASR systems or those on central servers, implementation of accurate speech recognizers on mobile devices faces several challenges, including limited available storage space (language and acoustic models must be smaller, which leads to low performance), cheap and variable microphones that are often far from the speaker's mouth, low processing power without support of parallel processing (algorithms trade off speed and accuracy), and highly variable acoustic environments. Moreover, mobile devices consume a lot of energy during algorithm execution;8 this is an important consideration for speech applications in the developing world, where electricity-supply issues could hamper the use of mobile applications in everyday settings.9

Before describing our proposed approach for these devices, we review two existing mobile ASR architectures that offer distinct advantages and disadvantages in light of these challenges. As alluded to earlier, the speech recognition process consists of two major steps: feature extraction, where the audio file is converted into features that accurately represent the acoustic information but take up much less memory than the raw audio; and ASR search, in which these features are used to identify the most likely text they might represent. Although feature extraction is a computationally intensive operation, it only consumes 2 percent of the processing time in the entire speech recognition process—98 percent is invested in search.8

Architecture 1: embedded mobile speech recognition

In the first architecture we describe, both processes involved in speech recognition—feature extraction and ASR search—happen locally on the mobile device.

Advantages. The main advantage of this mode is that it does not rely on any communication with a central server, and hence the applications that use such ASR can work in areas without any network connectivity, such as rural areas in developing regions or subway stations in developed ones. In other words, the ASR system is always "ready for use." Moreover, there is no cost or latency associated with transmitting and receiving information to and from the server, as in other modes. While monthly data plans can mitigate cost issues, network latency is a major issue in developing regions—especially the most rural—where cellular data connections are slow and unreliable, with frequent call drops. This mode also protects user privacy—no speech is transmitted to a central site.

Disadvantages. The disadvantage of this approach is that many mobile devices are not comparable to the high-end servers that can perform complex computations such as personalization algorithms for user or acoustic contexts. They fall somewhat short in terms of speed, runtime memory, and persistent memory, which restricts the type of applications that can be supported with this architecture. Also, no user speech is readily available for application developers to measure performance and iteratively improve the system.

Architecture 2: network speech recognition

In networked or cloud-based recognition, ASR search is shifted to a central server, with the mobile device sending the encoded audio to the server and getting back the recognition result. A slight variation of this approach is when the feature extraction is done locally to reduce the amount of information sent over the network, but the bulk of the computation still goes to the server.

Advantages. This mode moves the burden of audio postprocessing as well as ASR search to a high-configuration server capable of executing real-time speech recognition systems. It can also utilize a much larger (and potentially more accurate) acoustic and language model, thereby offering significant advantages in terms of accuracy. Additionally, realistic user data is available to the application developer, making it much easier to improve the overall system. Most importantly, in the context of developing countries and regions, it can support speech applications on low-end mobile devices, such


as cell phones that are not capable of running a local ASR system.

Disadvantages. Despite increased resources in the form of a powerful central server, this mode has a number of drawbacks: it requires a continuous, reliable network connection to use the speech application; it loses acoustic information when the audio is encoded using low bit-rate codecs and packetized transmission; and the server needs to account for all variations in device, channel, speaker, or condition within a global set of parameters. Needless to say, this makes user-based adaptation difficult. Existing network-based ASRs from Apple and Google implement speaker-independent models, thereby compromising on accuracy for nontraditional cases, such as dialectal variations of a particular user group.

Hybrid approach: decode locally, supervise remotely, then adapt

Our approach combines the other two modes—the major ASR subsystems (including feature extraction and ASR search) are on the mobile device, so it can perform recognition locally; and, as in Figure 2, applications do not break down under conditions of zero network connectivity. Whenever there is an intermittent cellular connection available, the mobile device sends the (stored) extracted feature vectors with metadata information about the user and device, such as noise levels, channel information, decoded outputs, and so on, to the server. This enables the server to evaluate the recognition performance for each user independently with a much larger vocabulary and acoustic database, to recommend user- and context-based adaptations, and to send updated models. For instance, a user accessing the application in a noisy background is likely to need noise-filtering adaptations in the acoustic models stored locally on the device. Similarly, a non-native speaker of a language is more likely to need adaptations to the local ASR's pronunciation dictionary. Hence, even though the server is not responsible for decoding the speech signal while the application is in use, the server's processing power is used for compute-intensive functions such as user- and context-specific error analysis, and adaptations using larger, shared resources.
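The client-side loop just described can be sketched as follows. This is a minimal illustration, not our implementation: the recognize, network_up, and upload callables are hypothetical stand-ins for a real on-device recognizer, a connectivity probe, and a server endpoint.

```python
import json
import queue

class HybridASRClient:
    """Sketch of 'decode locally, supervise remotely, then adapt' (client side)."""

    def __init__(self, recognize, network_up, upload):
        self.recognize = recognize      # local, offline ASR search
        self.network_up = network_up    # connectivity probe
        self.upload = upload            # send one JSON record to the server
        self.pending = queue.Queue()    # records awaiting an intermittent link

    def handle_utterance(self, features, metadata):
        """Recognize locally; queue features plus metadata for later supervision."""
        hypothesis = self.recognize(features)
        self.pending.put({"features": features,
                          "hypothesis": hypothesis,
                          "meta": metadata})
        self.flush()                    # opportunistic: ship if a link exists
        return hypothesis

    def flush(self):
        """Drain the queue whenever a cellular connection is available."""
        while self.network_up() and not self.pending.empty():
            self.upload(json.dumps(self.pending.get()))

# Offline: recognition still works; records accumulate for later upload.
sent = []
client = HybridASRClient(recognize=lambda f: "call home",
                         network_up=lambda: False,
                         upload=sent.append)
print(client.handle_utterance([0.1, 0.7], {"noise_db": 52}))  # → call home
```

Because recognition never blocks on the network, the application keeps working with zero connectivity; the queue simply drains the next time a link appears.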

We propose an architecture that combines the most important advantages of the server-based approach (high recognition accuracy, updatability, maintainability, and moderate mobile hardware requirements) with the advantage of local ASR, which is the ability to function without network connectivity. Centralized server power is used not to perform online recognition but to ensure that

over time recognition on local devices achieves high accuracy. The assumption, however, is that each mobile phone is being used by a few users in a few acoustic conditions, thereby enabling reasonable user- and context-based adaptation. As an additional benefit, provisioning server capacity for average and not peak use is acceptable because adaptation does not need to be in real time.

However, we do not expect that industry leaders such as Google or Apple will implement this architecture, as their experience permits efforts in directly collecting datasets for minority languages without deploying an existing system.10 We therefore expect to see highly accurate recognizers in languages of interest to these companies. However, many academic researchers and freelance application developers do not have the resources to collect data on the scale of Google and Apple. Our proposed architecture is of immense utility for them to easily and rapidly bootstrap an existing, high-quality recognizer for a new domain or user group.

Initial results

For the proposed architecture to be successful, it is important to understand the limitations of different speech recognizers when they run locally on a mobile device. This information would be useful during the development phase to pick the recognizer that best meets the application requirements.

To this end, we examined two mobile recognizers, pocketSphinx and SphinxTiny,11 which are mobile versions of the popular, open source speech recognizer Sphinx. To our knowledge, these are the only two open source mobile recognizers scaled down for efficient performance on a wide number of mobile platforms. A few hacks can make other popular recognizers, such as Kaldi, work on select mobile devices as well. We used a Nokia N800 Internet tablet running Maemo Linux OS2008 with a TI OMAP 2420 ARM processor clocked at 330 MHz with 128 Mbytes of RAM, which is reflective of the kinds of devices that

Figure 2. Hybrid approach. Automatic speech recognition (ASR) is done on the local mobile device, which benefits from user- and context-based personalization guided by intermittent server connection, whenever available.

most households in developing regions might own, say, five years from now.

Our results suggest that while it is possible to achieve real-time recognition using either of these recognizers,11 a design choice must be made based on functionality requirements. We find pocketSphinx is superior if real-time recognition—minimizing delay—is key. It also works best for recognition tasks that have a fixed, predefined grammar or for small vocabulary tasks with fewer than 1,000 words. For open-ended, large vocabulary tasks such as dictation of emails or text messages, or tasks that might permit larger delays in exchange for better accuracy, SphinxTiny is a better choice. To take this further, we anticipate that with the support of a server—even under intermittent cellular connection—recognition accuracy can be drastically improved over time using online adaptation methods, or even the KB we described earlier.
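These findings boil down to a simple selection heuristic. The function below is our own shorthand for the tradeoffs just described, not part of either toolkit:

```python
def choose_recognizer(vocab_size, fixed_grammar, latency_critical):
    """Pick a mobile recognizer per the tradeoffs above: pocketSphinx when
    delay matters or the task uses a fixed grammar or a small (<1,000-word)
    vocabulary; SphinxTiny when accuracy can be bought with extra delay."""
    if latency_critical or fixed_grammar or vocab_size < 1000:
        return "pocketSphinx"
    return "SphinxTiny"

# Open-ended dictation task that tolerates delay:
print(choose_recognizer(vocab_size=50_000, fixed_grammar=False,
                        latency_critical=False))  # → SphinxTiny
```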

The success of our proposed "decode locally, supervise remotely, then adapt" approach depends on the degree to which the local speech recognizer can be reconfigured and optimized, depending on the server's analysis. Given the numerous types of optimizations possible, the exact and most relevant choice depends on the use case—noisy backgrounds versus non-native speech, for example. Research is therefore needed to identify the type of optimizations that will be most beneficial to a user or device if only a certain amount of data can be transmitted over the network. Our architecture can adapt as the user's context changes over time. For instance, if a non-native speaker achieves native-speaker-like pronunciation, the server analysis can identify another metric such as noise, accounting for overall user history as well.

Therefore, some of the primary research questions we will explore in the future are as follows:

• How do we perform context- and user-specific error analysis on speech data received from a mobile device?

• How do we identify which adaptation techniques are best suited for a user’s context?

• Without compromising accuracy, what type of trade-offs do we perform to minimize the amount of data to be transmitted back to a mobile device?

• How do we determine the maximum size of updated models that can be transmitted from the server to the client?

• What steps do we take when more than one user accesses the device?

To address these questions, in addition to metadata such as caller ID and acoustic features, we plan to also send the recognition results (that is, the hypothesis) from the mobile recognizer to the server. The server then independently analyzes the acoustic features using a much larger acoustic database as reference to compute a more accurate recognition hypothesis. Having both types of hypothesis has two advantages: first, it helps identify factors that affect recognition performance most in the user's context, and second, the more accurate hypothesis at the server can be used as a speech label to build and adapt future versions of the model before shipping them back to the device.

As computing devices continue to shrink in size and proliferate to millions of users, speech input/output interfaces will also continue to grow in popularity.

However, designing, developing, and deploying speech recognition applications is still rife with major challenges. While it is possible to treat ASR as a commodity for mainstream American-accented English speakers, speech recognition research is likely to add the most value for non-English-language users, foreign-accented speakers of English, and nontraditional use-case scenarios. Unfortunately, there is no easy way to provide ASR functionality without a dedicated development effort each and every time, and such repeated effort is unlikely to happen in practice.

Two developments will define state-of-the-art speech recognition over the next couple of years. The first is the dramatic improvement that can be achieved with deep learning techniques such as deep neural networks; academia and industry have adopted them with breathtaking speed, and they help improve ASR in particular for big data scenarios. The second is a push toward low-data or "zero resource" scenarios that enable speech recognition to be quickly developed for new languages using very little transcribed data, as is the goal in the Intelligence Advanced Research Projects Activity (IARPA)-sponsored Babel research program (www.iarpa.gov/Programs/ia/Babel/babel.html). Both these developments bode well for the proposed approach: they facilitate providing better initial models, and they provide more techniques to adapt recognizers to specific use cases or scenarios.

We believe that speech input and output technologies can be fundamentally disruptive, enabling new research that will benefit low-literacy users, children, the disabled, and so on. But even as it becomes easier for non-experts in core speech technologies to bootstrap a new speech recognizer for a specific scenario, constant monitoring is the key to deploying ASR broadly and successfully in a larger research context. We invite anyone to join us in our efforts to lower the barriers to entry or to simply use our systems, once fully developed, to create the next-generation speech-recognition interfaces.

References

1. G.P. Laput et al., "PixelTone: A Multimodal Interface for Image Editing," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 13), ACM, 2013, pp. 2185–2194.

2. J. Thomason and D. Litman, "Differences in User Responses to a Wizard-of-Oz versus Automated System," Proc. North American Chapter Assoc. Computational Linguistics: Human Language Technologies (NAACL-HLT 13), 2013, pp. 796–801.

3. C. Abras et al., “User-Centered Design,” Encyclopedia of Human-Computer Interaction, W. Bainbridge, ed., Sage Publications, 2004.

4. S.R. Klemmer et al., "SUEDE: A Wizard of Oz Prototyping Tool for Speech User Interfaces," Proc. 13th Ann. ACM Symp. User Interface Software and Technology (UIST 01), ACM, 2001, pp. 1–10.

5. T. Schultz et al., “SPICE: Web-Based Tools for Rapid Language Adaptation in Speech Processing Systems,” Proc. Interspeech, Int’l Speech Communication Assoc., 2007; http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.123.3902.

6. H. Soltau et al., "A One-Pass Decoder Based on Polymorphic Linguistic Context Assignment," Proc. Automatic Speech Recognition and Understanding Workshop (ASRU 01), IEEE, 2001, pp. 214–217.

7. A. Kumar et al., "Formalizing Expert Knowledge for Developing Accurate Speech Recognizers," to appear in Proc. Interspeech, Int'l Speech Communication Assoc., 2013.

8. A. Schmitt, D. Zaykovskiy, and W. Minker, "Speech Recognition for Mobile Devices," Int'l J. Speech Technology, vol. 11, no. 2, 2008, pp. 63–72.

9. A. Kumar et al., "An Exploratory Study of Unsupervised Mobile Learning in Rural India," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 10), ACM, 2010, pp. 743–752.

10. T. Hughes et al., "Building Transcribed Speech Corpora Quickly and Cheaply for Many Languages," Proc. Interspeech, Int'l Speech Communication Assoc., 2010, pp. 1914–1917.

11. A. Kumar et al., “Rethinking Speech Recognition on Mobile Devices,” Proc. Int’l User Interfaces for Developing Regions (IUI4DR 11), ACM, 2011, pp. 10–15.

Anuj Kumar is a PhD candidate in the Human-Computer Interaction Institute at Carnegie Mellon University. His research interests are in developing voice-user interfaces, applications of machine learning, and mobile computing. Kumar is a Siebel Fellow and a member of ACM and the International Society for Computers and Their Applications (ISCA). Contact him at [email protected].

Florian Metze is an assistant research professor at Carnegie Mellon University's Language Technologies Institute. His current research interests include low-resource speech recognition, multimedia analysis and summarization, and increasing the uptake of speech recognition in other disciplines. Metze received a PhD in computer science from Universität Karlsruhe (now Karlsruhe Institute of Technology), Germany. He is a member of ACM, IEEE, ISCA, and GI. Contact him at [email protected].

Matthew Kam is a senior researcher at the American Institutes for Research. He works on technology for broadening access to economic opportunities. Kam received a PhD in computer science from the University of California, Berkeley. Contact him at [email protected].

48 COMPUTER Published by the IEEE Computer Society 0018-9162/14/$31.00 © 2014 IEEE

COVER FEATURE

The Future of Social Learning in Software Engineering

Emerson Murphy-Hill, North Carolina State University

Building on time-honored strengths of person-to-person social learning, new technologies can help software developers learn from one another more efficiently and productively. In particular, continuous social screencasting is a promising technique for sharing and learning about new software development tools.

As they design, build, and maintain the software increasingly crucial to daily life, developers face multiple challenges: users' diverse and growing expectations of what software can accomplish, the need to maintain legacy systems even as new software is being written, the demand for larger and more complex software, and the fact that software must be integrated into progressively critical applications. Even as developers' cognitive and perceptual abilities remain relatively fixed, they are under greater pressure to write software that runs correctly, reliably, and safely at all times.

To meet these challenges, software developers must improve their practice, with both long-term skills that will benefit them throughout their careers and short-term skills that will help them on a single, immediate task. Although many learning techniques have evolved over the centuries, one of the most basic and natural is social learning. In the context of software development, social learning is the practice of harnessing the efforts of software engineers past and present to streamline the effort involved in current development work. The following scenarios offer some examples:

• A developer's coworker notices him using a particular sequence of development environment commands to restructure his code. The coworker says she would have done the same restructuring using the development environment's refactoring tools instead, which would have saved a significant amount of time. Now aware of these refactoring tools, the developer tries using them on his next restructuring task.

• Preparing for a project that needs to store and retrieve data quickly, a developer learns about a new database technology by searching Google, finds and reads a book about the technology, tries it in a small application, uses it when she starts her next project, and makes it part of several future projects.

• A developer regularly reads a blog on software development. In one post, the author talks about his team's communication challenges and how they addressed those challenges using Scrum, frequent meetings in which people share status updates. The developer recognizes that her current team faces many of the same communication challenges and decides to try Scrum.

As these examples suggest, social learning in software engineering is not new; yet it is a powerful discovery tool for finding solutions to our most pressing challenges. In each case, social learning is facilitated by a specific technique, from books to blogs to face-to-face conversations, but it generally occurs in the following sequence:

1. Individuals perform a software engineering task.
2. Information about that task is recorded, even if that record is only a memory.
3. Another person later performs or plans to perform a new software engineering task.


4. Elements of the new task are compared against the record of prior tasks.
5. Relevant elements of the prior tasks are extracted and presented to the person performing the new task in the form of a recommendation, improving the accomplishment of that task or future tasks.

Although this sequence of social learning is relatively constant, individual learning techniques are not equally appropriate in every situation.

PRINCIPLES OF EFFECTIVE SOCIAL LEARNING

What makes one technique that facilitates social learning more effective than another? The following nine principles of effective social learning techniques suggest some answers.

Recording efficiency

Techniques that facilitate social learning should reduce as much as possible the overhead required to record task information. For example, a developer might learn from her past mistakes by looking at old versions of her files. Some development environments maintain old versions of files automatically—this recording of file history requires no extra effort on the developer's part, although some overhead in terms of disk space results. Writing a blog post about Scrum does not require much disk space overhead, but it can entail significant overhead on the part of the developer who must take time to write the post.

Learning efficiency

Techniques that facilitate social learning should, to the extent possible, reduce the overhead involved in learning. Consider again the developer consulting a book about databases: to learn from the book, she must incur some learning overhead as she takes time out of her day for the necessary reading. At the same time, the author of the book incurs no overhead at this stage—he can teach any number of learners without additional overhead, once the book is written. As another example, suppose a manager notices that his team consistently misses its deadlines, then pores over the source code and discovers that the team is incurring significant technical debt, where the software's design is compromised due to external pressures. He solves this problem by showing the team how to refactor to pay down that debt. In this case, the process of learning to refactor in response to technical debt incurs significant overhead on the part of the manager, who had to spend time recognizing the problem, but the team incurs relatively little overhead to learn about the problem and its solution from the manager.

Privacy preservation
Techniques that facilitate social learning should preserve privacy as much as possible. For example, if a blog author writes about her experiences implementing Scrum, the company she works for might not want others to know about the company’s failures. In such cases, the author might be able to post the experience anonymously to preserve the company’s privacy.

Targeting
Techniques that facilitate social learning should target those people who will benefit most. A blog post about Scrum has a good opportunity to reach the right people, because it will likely be searchable on the Internet. If a user realizes she needs Scrum, she can search for the blog post. Conversely, if a developer does not realize that her team is having communication problems, she might not think to search for Scrum as a solution, and thus might not learn of the blog post at all.

Trust
Techniques that facilitate social learning should encourage trust in the recommendation. At one end of the spectrum, a developer who learns from a coworker about a tool might have a high degree of trust because the two have worked together before and have similar goals. At the opposite end, a developer who learns about database technology from a book might not trust the author because she suspects his aim is only to sell books.

Rationale
Techniques that facilitate social learning should provide a useful rationale for the recommendation’s relevance to the learner. For example, a blog post about Scrum can explain the problem the author was trying to solve and detail why she believed Scrum helped. A reader of the post with similar problems might relate to that rationale for implementing Scrum. However, if there are multiple possible rationales for implementing Scrum and the blog post explains just one, a reader might not see why Scrum would be the most useful choice.

Feedback efficiency
Techniques that facilitate social learning should allow learners to provide feedback about why a recommendation was or was not useful to them. Commenting on a Scrum blog post is efficient in the sense that a reader can easily add text, but inefficient in that it might take significant effort on the part of the reader to add a comment that fully expresses why the recommendation worked or did not work. Figure 1 illustrates the importance of efficient feedback.

Bootstrapping
Techniques that facilitate social learning should afford community members the benefits of social learning, even when the community starts out very small. As an example, learning from peers does not need to take place in a large company to work effectively; instead, only one peer is required. Consider another example: on the question-and-answer site Stack Overflow (http://stackoverflow.com), new topic sites (such as cryptography) are initially deployed in a private beta mode to determine whether a critical mass of community members can be assembled. During private beta, a small set of users bootstrap the site with content, so if the site is opened to the public, a sufficient amount of content exists to encourage social learning.

Generality
Techniques that facilitate social learning should be general enough to allow developers to learn a variety of things. For example, blogs can teach readers about any kind of software engineering innovation, from tools to processes. However, such ideal generality cannot always be achieved; Stack Overflow, for example, discourages developers from asking subjective or open-ended questions because its hosts did not design it to organize information of that type.

AN EXAMPLE: FACILITATING TOOL DISCOVERY THROUGH SOCIAL SCREENCASTING

Future social learning in software engineering will develop and refine techniques to balance and maximize these core principles. It is unlikely that a single technique will maximize them fully; different techniques will be appropriate in different situations and for different software engineering tasks. Blogs provide one example of how technology can help facilitate and accelerate social learning in software engineering, but there are potentially many others. Let us examine another example my research group is currently exploring.

The problem
One way software developers can cope with the challenges of building increasingly complex and sophisticated software is by using software development tools. Such tools come in many forms: shortcuts in editors, stand-alone command-line programs, and plug-ins, features, and views in integrated development environments (IDEs), to name a few. Tools range from very high-level and broad, such as profilers, to very low-level and specific, such as hotkeys for navigating to a variable definition. Both research and practice suggest that tools can improve software quality and reduce development time. For example, Andrew Ko and Brad Myers showed that the Whyline debugger can reduce the time it takes to successfully debug programs.1

Despite the benefits such tools offer, many software developers only use a small subset of them. For instance, based on data collected automatically in the Eclipse IDE from hundreds of thousands of software developers, of the more than 1,100 commands available in Eclipse, the average developer uses only 42 (www.eclipse.org/org/usagedata/reports/data). While we obviously should not expect developers to use 100 percent of available tools, even broadly useful tools remain underused. Consider the Open Resource tool in Eclipse that enables developers to open files with only a few keystrokes. The benefits of this tool are suggested by the fact that it is listed as the first command on the popular blog post “10 Eclipse Navigation Shortcuts Every Java Programmer Should Know” (http://goo.gl/v7PXd), and elsewhere called “one of the most useful tools in Eclipse” (http://blog.zvikico.com/2009/07/eclipse-35-hidden-treasures.html). Despite its apparent usefulness, though, of the more than 120,000 people who used Eclipse in May 2009, only 12 percent used Open Resource.

Barriers to developers’ adoption of tools include issues of reliability, usability, and interoperability, but the most common barrier for all tools is probably discoverability—that is, a software developer is simply not aware of a tool, either because she does not realize the tool exists to solve a problem she faces or because she does not realize she has a problem the tool could solve. This barrier is significant and—given the thousands of tools, both built-in and plug-in, for modern development environments—growing. Making matters more difficult, developers sometimes have to choose between several alternative tools designed to solve the same problem.2

Figure 1. An xkcd comic illustrating the importance of feedback. If you read the comic online and mouse over the image on the original webpage, the author’s comment reads, “All long help threads should have a sticky globally-editable post at the top saying ‘DEAR PEOPLE FROM THE FUTURE: Here’s what we’ve figured out so far...’” (Source: http://xkcd.com/979)



My research suggests that one of the most effective ways software developers discover new tools is from their peers.3 Specifically, during peer interaction, one developer learns from another as part of their normal work activities together. Peer interaction can take two main forms: observation or recommendation. During peer observation, a developer observes another developer using a tool that she did not know about; in peer recommendation, a developer notices that a peer is not employing a pertinent tool and recommends its use.

Peer interaction is effective because developers who interact closely tend to trust one another; for the purposes of tool discovery, trust means a developer can quickly assess a particular tool’s relevance by comparing her own working style with her peer’s. Unfortunately, despite peer interaction’s effectiveness, my research also suggests that the activity occurs only rarely relative to other modes of discovery, such as exploring an IDE’s user interface. This is because developers often work in different locations or at different hours than their peers do.

Continuous social screencasting

One idea that capitalizes on the benefits of peer interaction, and yet allows developers to discover tools from remote and asynchronous peers, is continuous social screencasting: developers sharing automatically recorded screencasts of themselves that depict tools being used in real development situations, thus encouraging developers not physically together to learn about new tools from one another. Viewing such screencasts is already common on video sites such as PeepCode (http://peepcode.com), a collection of professionally produced screencasts for Web development.

Consider as an example three hypothetical software developers—Archibald, Cuthbert, and Obediah—using a system that implements continuous social screencasting, as shown in Figure 2.

Recommendations for Cuthbert
Let us focus first on how this system could work from Cuthbert’s perspective, looking at the way tools might be recommended for him.

Tool-use recording. To begin, client software on each developer’s machine continually monitors and records two streams of information: which specific tools the developers use at what point in time, and a screencast of the developer’s work. (This builds on recent work in the D-Macs system, created to help designers avoid repetition by capturing and sharing action sequences;4 the continuous monitoring aspect of the process builds on life-logging technologies in the field of pervasive computing.5) The screencast is captured by taking screenshots of specific events in a developer’s work, such as when she presses a button or clicks the mouse, and both streams are stored locally on the developer’s machine (Figure 2a). The tool-use stream is also stored on a central server, along with tool-use streams from other developers (Figure 2b), much as community knowledge repositories store other types of software development data, like reusable components.6

Figure 2. A system that illustrates continuous social screencasting, in two phases: data collection (top) and sharing (bottom). The system collects a tool stream and screencast on (a) each developer’s computer and stores (b) a copy of each developer’s tool stream in a central server. A developer can (c) ask the server for a tool recommendation and (d) contact someone in the community who uses the recommended tool. Alternately, the developer can get a recommendation for a tool he is an expert at, then (e) recommend that tool proactively to a peer or (f) post it to social media.

Each developer is assigned a unique network identifier (in the example case, cm1, cm2, and cm3), enabling the developers to contact one another.
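As a sketch of what such client-side recording might look like, the following Python fragment keeps a local tool-use stream of (timestamp, tool) pairs and a parallel index of captured screencast frames. All names here are hypothetical illustrations, not taken from any real implementation.

```python
import time

class ToolUseRecorder:
    """Minimal sketch of the client-side recorder described in the text:
    it appends a (timestamp, tool_id) pair to the local tool-use stream
    each time a tool is used, and notes that a screencast frame was
    captured at that moment."""

    def __init__(self, network_id):
        self.network_id = network_id  # e.g., "cm2" for Cuthbert
        self.tool_stream = []         # [(timestamp, tool_id), ...]
        self.frame_index = []         # timestamps of captured frames

    def on_tool_used(self, tool_id, now=None):
        # Record the tool event; a real client would also grab an
        # actual screenshot of the developer's work here.
        now = time.time() if now is None else now
        self.tool_stream.append((now, tool_id))
        self.frame_index.append(now)

    def tools_used(self):
        # The set of tools this developer knows, inferred from use.
        return {tool for _, tool in self.tool_stream}

recorder = ToolUseRecorder("cm2")
recorder.on_tool_used("Tj", now=100.0)
recorder.on_tool_used("Ta", now=105.0)
print(sorted(recorder.tools_used()))  # ['Ta', 'Tj']
```

The timestamped tool stream is what would be uploaded to the central server; the frame index stays local so that screencast episodes can later be cut out on demand.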

Generating tool recommendations. Based on Cuthbert’s workflow, his client software asks for recommendations from the server. When the server receives this request, it runs a recommender algorithm, such as collaborative filtering or sequential data mining: in the same way that Amazon.com recommends to its customers, “People who liked books X and Y also liked book Z,” this proposed system can recommend tool Z to a developer who already uses tools X and Y. The algorithm produces two different recommendation sets. The first is a set of tools that Cuthbert does not know (call it “UnKn”), but that other developers in the community do know. The second is a set of tools that Cuthbert does know (call it “Kn”), but that other developers in the community do not.
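The two recommendation sets can be illustrated with a small sketch using the tool streams from Figure 2. The function and variable names are hypothetical, and a real server would run a proper collaborative-filtering or sequence-mining algorithm rather than this plain set arithmetic.

```python
def recommend(requester, tool_streams):
    """Compute the two sets described in the text: UnKn (tools the
    requester does not know but some peer does) and Kn (tools the
    requester knows but some peer does not)."""
    mine = tool_streams[requester]
    others = {u: ts for u, ts in tool_streams.items() if u != requester}
    unkn = set().union(*others.values()) - mine
    kn = {t for t in mine if any(t not in ts for ts in others.values())}
    return unkn, kn

# Tool streams mirroring Figure 2's three developers.
streams = {
    "cm1": {"Tb", "Tc", "Td", "Te", "Tj", "Tf"},  # Archibald
    "cm2": {"Tj", "Ta", "Tk", "Td", "Tf", "Te"},  # Cuthbert
    "cm3": {"Tc", "Tb", "Td", "Te", "Tn", "Tk"},  # Obediah
}
unkn, kn = recommend("cm2", streams)
print(sorted(unkn))  # ['Tb', 'Tc', 'Tn']
print(sorted(kn))    # ['Ta', 'Tf', 'Tj', 'Tk']
```

Consistent with the figure, Tc lands in Cuthbert’s UnKn (Archibald knows it) and Ta lands in his Kn (Obediah does not know it).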

Choosing user recommendations. For each tool returned that Cuthbert does not know (UnKn), the server includes the network identifier for a user who does know that tool. The returned network identifiers correspond to the system’s estimate of which community members the requesting developer will trust most, giving higher rankings to members in the same community subgroups, such as developers on the same team or in the same company, with a previous history of screencast sharing, and with higher community-provided reputation scores. As a simple example, the server might return the recommendations shown in Figure 2c.
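One plausible, purely illustrative way to encode that ranking is as a sort key over the three trust signals just named: shared subgroup, prior sharing history, and reputation. The profile schema here is an assumption, not part of any real system.

```python
def rank_candidates(tool, profiles, requester):
    """Rank community members who know `tool`, preferring (in order)
    peers in one of the requester's subgroups, peers who have shared
    with the requester before, and peers with higher reputation."""
    req = profiles[requester]

    def score(member):
        p = profiles[member]
        same_group = bool(req["groups"] & p["groups"])
        shared_before = requester in p["shared_with"]
        # Tuples sort lexicographically, so the signals apply in order.
        return (same_group, shared_before, p["reputation"])

    knowers = [m for m, p in profiles.items()
               if m != requester and tool in p["tools"]]
    return sorted(knowers, key=score, reverse=True)

profiles = {
    "cm1": {"tools": {"Tc"}, "groups": {"teamA"},
            "shared_with": {"cm2"}, "reputation": 7},
    "cm2": {"tools": {"Ta"}, "groups": {"teamA"},
            "shared_with": set(), "reputation": 3},
    "cm3": {"tools": {"Tc"}, "groups": {"teamB"},
            "shared_with": set(), "reputation": 9},
}
print(rank_candidates("Tc", profiles, "cm2"))  # ['cm1', 'cm3']
```

Here Archibald (cm1) outranks the higher-reputation Obediah (cm3) because he shares a team with Cuthbert and has shared screencasts with him before.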

Tool-use episode recommendation. Using UnKn, Cuthbert’s client software retrieves a screencast on his behalf in three steps:

1. Cuthbert’s client software contacts one of the people in his community to ask whether Cuthbert can see an example of that person using a tool Cuthbert does not know. For instance, the client might ask whether cm1 (Archibald) is willing to share a subset of his screencast, which I call an episode, showing him using tool Tc (Figure 2d).
2. If Archibald consents, his client software searches through his tool-use stream, finds an instance of Archibald using Tc recently, uses the timestamp in the tool-use stream as an index to the video stream, and extracts an episode showing that tool being used.
3. Finally, Archibald’s client software sends the episode of the tool being used back to Cuthbert’s client. Cuthbert can then see the tool being used in a real situation on a real codebase.
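The indexing in the second step, where a tool-use timestamp selects a window of screencast frames, might be sketched like this. The frame representation and the episode margin are assumptions made for illustration.

```python
def extract_episode(tool_stream, frames, tool, margin=5.0):
    """Use the timestamp of the most recent use of `tool` in the
    tool-use stream as an index into the screencast, and return the
    frames within `margin` seconds of that moment (the "episode")."""
    uses = [ts for ts, t in tool_stream if t == tool]
    if not uses:
        return []  # tool never used; nothing to share
    anchor = max(uses)  # most recent use
    return [f for f in frames if anchor - margin <= f[0] <= anchor + margin]

# Archibald's streams: tool events and (timestamp, frame) pairs.
stream = [(100.0, "Tb"), (104.0, "Tc"), (130.0, "Td")]
frames = [(t, f"frame@{t}") for t in (99.0, 101.0, 104.0, 108.0, 131.0)]
episode = extract_episode(stream, frames, "Tc")
print([f[0] for f in episode])  # [99.0, 101.0, 104.0, 108.0]
```

Only the extracted window, not the full screencast, would be sent back to Cuthbert’s client.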

Recommendations from Cuthbert
The system can also enable developers to initiate sharing of their own screencasts. Based on Cuthbert’s Kn, Cuthbert’s client software can suggest that he share his expertise with his community by contacting one of its members and offering to share with that member an episode of Cuthbert’s tool use. For instance, Cuthbert might make this offer to cm3 (Obediah), sending Obediah a recent episode of his own use of the tool (Figure 2e). Obediah then learns about the tool from Cuthbert via the episode. In addition to contacting specific people within his community, Cuthbert may feel his knowledge of the tool could be useful to others more broadly; he can share the episode with a wider community by publishing it to the server. The server, acting as a screencast repository, can then share the episode through websites such as Facebook, a blog, or Vimeo (Figure 2f).

Applying the principles
So how does continuous social screencasting fare in terms of the nine principles of effective social learning? First, recording efficiency is good from the developer’s perspective because she does not have to do anything special to make screencasts; and while continuous screencasts might require significant overhead in terms of long-term storage, compression and decreasing memory costs make such storage increasingly feasible.

Learning efficiency can pose challenges from the perspective of the developer who makes the screencast: for example, authorizing access to the screencast for each new learner who wishes to view it could result in a relatively high cost. From the learner’s perspective, though, such efficiency is quite good: he only has to watch a short screencast to learn something new—with the added benefit that he can learn from developers who are not co-located. However, if developers view tool recommendations as interruptions, they might find the system annoying and not use it again. Other systems that recommend tools, such as Microsoft Clippy (http://en.wikipedia.org/wiki/Office_Assistant), have faced significant criticism because they frequently interrupt the user’s workflow, delivering recommendations at the wrong times.

The system could implement several possible mechanisms to make sure recommendations are not perceived as interruptions or delivered at the wrong time. For example, it could use negotiated interruption,7 where the user is not forced to deal with an interruption immediately. Alternatively, the system could identify quiescent periods (times when the developer is inactive) and displacement activities8 (those undertaken as a break from difficult tasks) as unobtrusive times to present recommendations.
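A quiescent-period detector of the kind just described could be as simple as scanning the developer’s input-event timestamps for long gaps. The 30-second threshold is an assumption for illustration only.

```python
def quiescent_periods(event_times, min_gap=30.0):
    """Return (start, end) gaps of at least `min_gap` seconds between
    consecutive input events, during which a recommendation could be
    shown without interrupting active work."""
    gaps = []
    for a, b in zip(event_times, event_times[1:]):
        if b - a >= min_gap:
            gaps.append((a, b))
    return gaps

# Timestamps (seconds) of a developer's keystrokes and clicks.
events = [0.0, 2.0, 3.5, 40.0, 41.0, 100.0]
print(quiescent_periods(events))  # [(3.5, 40.0), (41.0, 100.0)]
```

A fuller implementation would also need to recognize displacement activities, which requires knowing which applications or views the developer has switched to, not just when input stopped.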

Privacy preservation is perhaps the most significant challenge posed for continuous social screencasting. While modern developers are accustomed to sharing information such as bug reports and source code, some might balk at the technological leap to sharing screencasts. Specifically, developers might wish to share screencasts only with certain people—and even then share only certain kinds of information.

To facilitate selectivity in terms of who can and cannot view their screencasts, a system would initially require developers either to grant or deny sharing requests on an individual basis. Because explicit granting at this level can be tedious, the system could be further designed to support user-defined community subgroups, allowing users the choice to grant or deny access to entire groups.

To facilitate selectivity in terms of what information is shared in screencasts, the system could support blacklisting and whitelisting of tools or tool groups as well as a choice of manual or automatic obfuscation of source code in screencasts to maintain the privacy of a developer’s code. This is especially important in situations where developers working on closed source code wish to share tool knowledge with outside developers.
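The two selectivity mechanisms, group-based grants governing who may view and tool blacklists or whitelists governing what may be shared, might combine in a single policy check like the following sketch. The policy schema is hypothetical.

```python
def may_share(owner, viewer, tool, policy, groups):
    """Decide whether `owner` will share an episode of `tool` with
    `viewer`: the viewer must belong to a subgroup the owner granted,
    the tool must not be blacklisted, and, if a whitelist is defined,
    the tool must be on it."""
    p = policy[owner]
    if tool in p["blacklist"]:
        return False
    if p["whitelist"] is not None and tool not in p["whitelist"]:
        return False
    # Who: viewer must be in at least one granted subgroup.
    return any(viewer in groups.get(g, set()) for g in p["granted_groups"])

groups = {"teamA": {"cm1", "cm2"}, "teamB": {"cm3"}}
policy = {"cm1": {"granted_groups": {"teamA"},
                  "blacklist": {"Tsecret"},
                  "whitelist": None}}
print(may_share("cm1", "cm2", "Tc", policy, groups))       # True
print(may_share("cm1", "cm3", "Tc", policy, groups))       # False
print(may_share("cm1", "cm2", "Tsecret", policy, groups))  # False
```

Obfuscation of on-screen source code would be a separate step applied to the frames themselves before an episode leaves the owner’s machine.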

Unlike privacy preservation, targeting in social screencasting poses relatively few challenges, depending on the quality of the recommender system algorithm. Additionally, social screencasting enables developers to discover tools that they do not already know they need.

Trust factors likewise could be quite high, assuming there is already at least someone in the community that a developer trusts and depending on the degree to which she can build new trust relationships with other community members.

Rationale, on the other hand, could pose another challenge: automatically created screencasts might not be detailed enough to help a viewer immediately and fully understand the usefulness of a new tool. However, the system could implement several mechanisms to ensure that screencasts are sufficiently informative. First, in addition to depicting a tool in use, the screencast might also present a few seconds of before-and-after contextualizing—obviously, understanding a tool from a cause-and-effect perspective is important.3 Second, screencasts could collect or automatically generate metadata, such as which keystrokes invoke a tool, as well as links to helpful resources.

The system could fairly easily be augmented to provide efficient feedback for future users. Specifically, if a learner views a screencast and starts using the tool that screencast demonstrates, one can assume the learner found the screencast and the tool useful: future tool use constitutes positive feedback for both the tool itself and for the video; lack of future use constitutes negative feedback. This feedback could then be attached to the screencast (which perhaps was poorly executed) or to the tool (which perhaps was not as useful as intended), and subsequently be incorporated into the recommender system algorithm.
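This implicit feedback rule, where tool use after viewing counts as a positive signal and its absence as a negative one, can be sketched as follows. The data shapes are illustrative assumptions.

```python
def implicit_feedback(view_log, tool_streams):
    """For each (viewer, tool, viewed_at) record, emit +1 if the viewer
    used the tool after viewing its screencast, and -1 otherwise."""
    signals = {}
    for viewer, tool, viewed_at in view_log:
        used_after = any(t == tool and ts > viewed_at
                         for ts, t in tool_streams[viewer])
        signals[(viewer, tool)] = 1 if used_after else -1
    return signals

# Cuthbert viewed screencasts for Tc and Tb at t=200, but only
# used Tc afterward (at t=260); his earlier Tb use does not count.
view_log = [("cm2", "Tc", 200.0), ("cm2", "Tb", 200.0)]
tool_streams = {"cm2": [(150.0, "Tb"), (260.0, "Tc")]}
print(implicit_feedback(view_log, tool_streams))
# {('cm2', 'Tc'): 1, ('cm2', 'Tb'): -1}
```

Such signals could then be fed back into the recommender as ratings attached to either the screencast or the tool.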

Continuous social screencasting might be difficult to bootstrap because recommendation algorithms rely on many users to make good recommendations as well as trust among users to make recommendations stick.

Finally, in terms of generality, this system can recommend any type of software engineering tool, regardless of function; but it only helps developers learn about tools, not necessarily how to use them efficiently or how to use software development practices outside the IDE.

OTHER TECHNIQUES TO FACILITATE FUTURE SOCIAL LEARNING

What else does the future hold for social learning in software engineering? Certainly, we should expect the continuation of “traditional” social learning. It is difficult to imagine any technology that could replace the richness that comes with watching a trusted peer work or having her make recommendations based on watching you work. Nonetheless, future developers will likely augment traditional social learning with technology.

Two existing communities offer potential paradigms for future technology-mediated social learning: Stack Overflow on the one hand, and GitHub (https://github.com) on the other. Stack Overflow enables developers to learn from one another by asking and answering explicit questions; GitHub notifies developers when their peers make open source changes of interest. In these areas, the future likely holds improved search functionality for relevant questions (that is, improved targeting), as well as greater facilitation for learning about other developers’ activities beyond code changes (improved generality). Compared to social screencasting, though, these techniques will likely have relatively low recording efficiency.

What makes continuous social screencasting promising is that software engineering activities are recorded automatically and then shared automatically in a targeted way. Beyond helping software developers discover new tools and learn how to use their existing toolset more effectively, other types of software engineering knowledge can be disseminated using these mechanisms.

For example, some developers have repurposed compiler warnings in development environments as convenient mechanisms for helping them refactor; screencasts might be a good way to help share these techniques with other developers. The immediate challenge is determining what a tool usage technique looks like, when a developer is and is not using it, and how to measure the effectiveness and optimum goals of different techniques.

Another extension of continuous screencasting could help developers learn more about language features or libraries. For example, for a developer doing casting when using collections, the system might find examples of other people who have successfully used generics in similar coding situations to avoid casts. Similarly, if a developer is implementing a new piece of functionality similar to one already provided by an existing library, the system might find an example of someone using that API for a comparable programming task. In these cases, screencasting might be an overly complicated medium for sharing knowledge among developers—simple code snippets might work well. (Like social screencasting for tool discovery, privacy remains a significant challenge when sharing artifacts in this way.)

Technologically mediated social learning could also help software developers outside the IDE. For example, social information could help developers use their browsers to find documentation on the Internet that most relates to their current task and that has helped software developers in similar situations. Other tools, like bug trackers, could help developers reporting bugs find bugs similar to ones they have fixed in the past.

Social learning has always been an element of human interaction, but new advances in technology can help us leverage it as never before. Building on the strengths of person-to-person learning, technology can help software developers learn from one another in both relatively traditional ways and in ways not imaginable before now. These advances will take the form both of refinements to existing technologies, like Stack Overflow and continuous screencasting, and completely new innovations that help connect developers together.

Software engineering is full of challenges that make building and maintaining software difficult, yet we can meet these challenges by combining what we naturally do well with the enormous power that technology brings.

Acknowledgments
For their help improving this article, thanks to the anonymous reviewers, as well as Jim Witschey and Kevin Lubick of the Developer Liberation Front (http://research.csc.ncsu.edu/dlf). Thanks also to those who provided help for a prior version of this paper, which described the social screencasting idea. This material is based upon work supported by the National Science Foundation under grant number 1252995.

References
1. A.J. Ko and B.A. Myers, “Designing the Whyline: A Debugging Interface for Asking Questions about Program Behavior,” Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 04), ACM, 2004, pp. 151–158.
2. G.C. Murphy, D. Notkin, and E.S.-C. Lan, “An Empirical Study of Static Call Graph Extractors,” Proc. 18th Int’l Conf. Software Eng. (ICSE 96), IEEE CS, 1996, pp. 90–99.
3. E. Murphy-Hill and G.C. Murphy, “Peer Interaction Effectively, Yet Infrequently, Enables Programmers to Discover New Tools,” Proc. ACM Conf. Computer Supported Cooperative Work (CSCW 11), ACM, 2011, pp. 405–414.
4. J. Meskens, K. Luyten, and K. Coninx, “D-Macs: Building Multi-Device User Interfaces by Demonstrating, Sharing and Replaying Design Actions,” Proc. 23rd Ann. ACM Symp. User Interface Software and Technology (UIST 10), ACM, 2010, pp. 129–138.
5. M. Blum, A. Pentland, and G. Troster, “Insense: Interest-Based Life Logging,” IEEE Multimedia, vol. 13, no. 4, 2006, pp. 40–48.
6. Y. Ye, G. Fischer, and B. Reeves, “Integrating Active Information Delivery and Reuse Repository Systems,” ACM SIGSOFT Software Eng. Notes, Nov. 2000, pp. 60–68.
7. D.C. McFarlane, “Coordinating the Interruption of People in Human-Computer Interaction,” Proc. Int’l Conf. Human-Computer Interaction (Interact 99), IOS Press, 1999, pp. 295–303.
8. C. Potts and L. Catledge, “Collaborative Conceptual Design: A Large Software Project Case Study,” Computer Supported Cooperative Work, vol. 5, no. 4, 1996, pp. 415–445.

Emerson Murphy-Hill is an assistant professor at North Carolina State University. His research interests include the intersection between human-computer interaction and software engineering. Murphy-Hill received a PhD in computer science from Portland State University. Contact him at [email protected] or via http://people.engr.ncsu.edu/ermurph3.

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.




Published by the IEEE Computer Society. 0018-9162/14/$31.00 © 2014 IEEE

COVER FEATURE

Hye-Chung Kum, Texas A&M Health Science Center

Ashok Krishnamurthy, University of North Carolina at Chapel Hill

Ashwin Machanavajjhala, Duke University

Stanley C. Ahalt, University of North Carolina at Chapel Hill

Data-intensive research using distributed, federated, person-level datasets in near real time has the potential to transform social, behavioral, economic, and health sciences—but issues around privacy, con-fidentiality, access, and data integration have slowed progress in this area. When technology is properly used to manage both privacy concerns and uncertainty, big data will help move the growing field of population informatics forward.

N early all of our activities from birth until death leave digital traces. Health records, wages earned, schools attended—these and countless other data capturing the details of our daily lives

serve as our digital social footprint. Collectively, these digital traces—across a group, town, county, state, or nation—form a population’s social genome, the footprints of our society in general. If properly integrated, analyzed, and interpreted, social genome data could offer crucial insights into how best to serve our greatest societal priori-ties: healthcare, economics, education, and employment.

Social scientists have long drawn on data collections from governments and elsewhere to track demographic trends, inflation, employment rates, and so on. Now,

however, our daily activities leave digital crumbs all over cyberspace, and we have the technology to gather and analyze these crumbs to reveal previously hidden trends. This newfound ability to examine deep analysis-rich ques-tions in near real time using distributed datasets that are large, complex, and diverse has the potential to transform social, behavioral, economic, and health sciences. Popula-tion informatics is the burgeoning field at the intersection of social sciences, health sciences, computer science, and statistics that applies quantitative methods and computa-tional tools to answer questions about human populations. Just as bioinformatics has revolutionized biological research, population informatics could catalyze significant advances in our understanding of trends in society, health, and human behavior.

The use of big data has spurred major advances in many areas, from climatology to bioinformatics to business ana-lytics. Unfortunately, social and health sciences are much more complex, relying on person-level information across a population. So far, challenges associated with maintain-ing privacy and confidentiality, access, data integration, and data management have constrained the use of micro data—person-level data—in these areas of research, leav-ing rich databases largely untapped.

But improving our capacity to analyze big data col-lections that involve person-level information is not just interesting science: the results could lead to more informed and effective policy decisions and management of social programs. Social genome data can tell us about how people live, work, respond to change, and make decisions, as well

Social Genome: Putting Big Data to Work for Population Informatics

Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next PageComputerComputer

Previous Page | Contents | Zoom in | Zoom out | Front Cover | Search Issue | Next PageComputerComputer

qqM

Mq

qM

MqM

Qmags®THE WORLD’S NEWSSTAND

qqM

Mq

qM

MqM

Qmags®THE WORLD’S NEWSSTAND

Page 61: computer magacine

JANUARY 2014 57

as the collective impact of these individual decisions. Such insights help us understand the root causes of social and public health problems, predict the downstream effects of different policy options, and allocate our collective resources for the greatest impact.

DEFINING SOCIAL GENOME DATA

Information about individual people is critical to understanding society in the same way that the physical genome is critical to understanding an organism. In many ways, the information held in social genome data represents the social being, whereas our physical genome data represent our physical being. A child is born not only with a certain genome sequence but also into a certain social environment—parents, siblings, town, economic status—that influences the life path the child will take. Data on these social environments are just as important in understanding the overall well-being of a person as his or her physical genome. Being able to study the social genome at scale will enable data-driven understanding of important sociological questions such as the long-term effect of the social genome at birth.

To decipher patterns about the ways societies behave and evolve, social scientists must examine how individuals live and interact. The social genome thus represents a core set of data that information scientists can use to explore connections, build theories, and propel breakthroughs in managing a society. But just as with the physical genome, the social genome does not provide the full story; it also contains some useless and erroneous information, so the problem of extracting insights from these data is very challenging.

The field of bioinformatics—now virtually inextricable from the practice of biology as a whole—was catalyzed largely by a single endeavor: the Human Genome Project. Although bioinformatics now includes a plethora of methods and tools beyond genome sequencing, the Human Genome Project provided the focus and structure needed to develop key bioinformatics tools and principles. We need a similar Social Genome Project to catalyze population informatics. The solution we envision and describe here includes a series of region-based social genome projects that could serve as a springboard for developing the tools, analytical methods, and oversight mechanisms needed to take population informatics to the next level.

CASE STUDIES

Big data is being harnessed for powerful new person-level applications in many areas already. Health informatics analyzes electronic records to improve healthcare delivery and health outcomes for a population. Education informatics relies on school records for education research and delivery. Transit informatics uses real-time GIS data to facilitate public transportation. Business analytics turns operations data into meaningful information for key business functions, such as marketing and client profiling.

Although the data are distinct in each of these fields, the common theme is the application of informatics to process, manage, and analyze individual-level data for group-level insight. Population informatics—accessing existing collections of raw data for secondary purposes—helps drive a deeper understanding of the social genome. However, the key factors that make population informatics difficult are that the data capture many features about a large number of individuals (volume), the data are continuously updated to reflect changes (velocity), the data exist in heterogeneous systems that are redundant yet inconsistent (variety), and the data are incomplete and erroneous (veracity). These four Vs of big data are common to all data-intensive science. Nevertheless, the advantages of being able to integrate, analyze, and interpret massive person-level data collections are clear, as illustrated in the following examples.

Economics application

One example of the power of data integration is the Longitudinal Employer-Household Dynamics (LEHD) program at the US Census Bureau. LEHD integrates data from censuses, surveys, and administrative records from national and state-based databases across all 50 states to generate information about labor markets. The project enabled economists to use real-world US data to test their models of unemployment dynamics and to model churning behavior, earning some of them a Nobel Prize.

Worker churning, a ubiquitous feature of the US labor market, refers to companies hiring and firing at the same time. Research on worker churning requires the more detailed person-level data available only in the administrative data, not just the net job loss or creation per company that was traditionally available before this project launched. LEHD vertically integrates data from across multiple geographic areas in one domain—labor. A project that horizontally integrates data across multiple domains—labor, education, and health, for example—would be even more powerful, even if it were confined to one geographic area.
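To see why person-level flows matter, worker churning can be sketched with a toy calculation. The measure below (hires plus separations, minus the absolute net change) is our own illustrative definition, not LEHD's official statistic.

```python
def churn(hires, separations):
    """Toy churning measure: worker flows in excess of the net employment
    change. (Illustrative only; LEHD publishes more detailed measures.)"""
    net = hires - separations
    return hires + separations - abs(net)

# A firm that hires 10 workers and separates 8 in the same quarter shows a
# net change of only +2 jobs, yet 16 of the 18 worker flows are churning.
print(churn(10, 8))  # 16
```

Net job counts alone would report this firm as simply "+2 jobs," hiding the simultaneous hiring and firing that only person-level records reveal.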


Health application

Another successful integration of different facets of the social genome to gain population insights in near real time is the Google Flu Trends project (www.google.org/flutrends). By combining information about physician visits with individuals’ search queries, Google, in collaboration with the US Centers for Disease Control and Prevention, was able to predict incidences of flu more quickly and accurately than the CDC could with the physician visit information alone.

A SOCIAL GENOME PROJECT

Researchers are already using person-level data to study population trends. However, many countries lack a national framework for secondary use of such data, leaving each project to develop its own privacy protection and thereby leaving potential vulnerabilities that can degrade public trust in such work. Developing the technology and policies for a virtual “hot cell”—analogous to the shielded rooms used for working with radioactive material—to provide a safe environment for conducting sensitive person-level data research is of critical importance.

In our vision, each social genome project would establish a regional data gateway—a social genome center—for data relevant to population-level studies in a certain region, such as a state. Generally speaking, these gateways would provide a common portal to multiple databases, such as birth, tax, or criminal records, where data could be safeguarded while research is conducted. This would be a virtual repository in that data would still be housed physically where most appropriate. Access to data would still be controlled by different data custodians, but the center would facilitate and streamline the process of obtaining access.

The center would not be responsible for integrating or cleaning the heterogeneous data for a particular study. Rather, it would provide the tools that researchers need to clean and integrate the data to meet their requirements by building a hybrid human-machine system that researchers can easily plug into. This would allow each study to optimize the utility of the data for particular research questions. From the users’ perspective, these social genome centers would function much like a public library or the federal government’s database collection (http://data.gov). But on the back end, the centers would add value to the databases they can access by making the full process—from raw data to summarized statistics—transparent and available to authorized users in appropriately protected environments as needed. Each center would also be responsible for developing systems, both technical and governance-related, to protect personal privacy and confidentiality, overseeing processes for providing access to data, and developing the software required for such research.

Our envisioned informatics architecture would begin with the data ingestion layer, responsible for getting data into the system securely and creating the loose connections to other data in the system for later use. Loose connections tolerate inconsistencies across datasets that can only be resolved based on the application. The second layer—data access and analysis—would be responsible for giving diverse system users privacy-preserving levels of access and views of the data appropriate for the tasks required to safely turn the social genome data into useful information. Finally, the topmost layer—information delivery—would provide a library of customizable visualization tools that content experts could use to deliver relevant, evidence-based information that could then be included as part of their results and conclusions (for example, real-time reports, graphs, and summary data tables). This layer could easily be integrated with other efforts to make government data more accessible, such as http://data.gov.
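The three layers just described can be sketched as follows. The class names, the date-of-birth matching rule, and the toy access levels are all hypothetical; the point is only to show how the responsibilities separate.

```python
# Hypothetical sketch of the three-layer social genome architecture.
class IngestionLayer:
    """Layer 1: securely ingest records, keeping only loose connections
    (candidate links) that each application resolves later."""
    def __init__(self):
        self.records = []
        self.loose_links = []

    def ingest(self, source, record):
        rid = len(self.records)
        # Record *candidate* matches (here, same date of birth) without
        # resolving them; resolution is deferred to the application.
        for other_id, (_, other) in enumerate(self.records):
            if record.get("dob") == other.get("dob"):
                self.loose_links.append((rid, other_id))
        self.records.append((source, record))
        return rid

class AccessLayer:
    """Layer 2: privacy-preserving views appropriate to each access level."""
    def view(self, store, level):
        if level in ("open", "monitored"):
            return {"record_count": len(store.records)}  # aggregates only
        return store.records  # micro data for restricted/controlled access

class DeliveryLayer:
    """Layer 3: turn analysis output into deliverable information."""
    def report(self, stats):
        return f"Summary report: {stats}"

store = IngestionLayer()
store.ingest("birth_records", {"name": "Bob", "dob": "1990-05-01"})
store.ingest("school_rosters", {"name": "Robert", "dob": "1990-05-01"})
print(DeliveryLayer().report(AccessLayer().view(store, "open")))
```

Keeping the layers separate means the ingestion logic never needs to know which access level a downstream user holds.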

THE CHALLENGE

To put big data to work for population informatics, we must overcome some unique challenges. Investing in infrastructures to propel population informatics forward in a coordinated, responsible way can help us unleash the power of big data for the nation’s collective benefit. Many more topics, such as secure data ingestion, auditing, and data version control, are not covered here due to space constraints.

Building a knowledge base platform with uncertainty management

Whereas a company like Amazon owns much of its customer data and can centrally manage and analyze that information, the data sources of greatest value to population informatics research are managed by disparate bodies, including hundreds of departments within local, state, and federal governments—birth and death records, Medicare and Medicaid rolls, school enrollment rosters, and criminal records, to name a few. Each agency has its own approach to collecting, labeling, managing, and providing access to data, making it challenging for researchers—even those within government bodies—to integrate data for in-depth analysis. Thus the social genome platform must provide tools to ingest and manage diverse kinds of data, including structured repositories such as medical records, summary statistics such as socioeconomic indicators from the US Census, and real-time unstructured data such as medical notes or tweets. Because social genome centers would primarily deal with individual-level data, two tasks are essential: disambiguating individuals, such as identifying that Bob in education records, Robert in the Social Security dataset, and bob1234 on Twitter are the same person; and enriching individual information from multiple datasets, such as knowing that Bob volunteers at a retirement home based on his tweets. This process of continuously ingesting, disambiguating, and enriching entities from disparate sources of information is referred to as knowledge base synthesis.1
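A naive version of the disambiguation step might look like the following sketch. The nickname table, record fields, and matching rule are all hypothetical stand-ins for the hybrid human-machine pipelines a real center would need.

```python
# Hypothetical rule-based disambiguation across two record sources.
NICKNAMES = {"bob": "robert", "rob": "robert", "liz": "elizabeth"}

def canonical(name):
    """Normalize a given name and expand common nicknames."""
    n = name.strip().lower()
    return NICKNAMES.get(n, n)

def same_person(rec_a, rec_b):
    """Naive match: canonical first name plus exact date of birth.
    A real system would score many fields and defer hard cases to a human."""
    return (canonical(rec_a["first"]) == canonical(rec_b["first"])
            and rec_a["dob"] == rec_b["dob"])

education = {"first": "Bob", "dob": "1990-05-01"}
social_security = {"first": "Robert", "dob": "1990-05-01"}
print(same_person(education, social_security))  # True
```

Even this toy rule illustrates the uncertainty involved: two different people can share a name and birth date, which is why ambiguous links must ultimately be flagged for human review.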

An important difference between knowledge base synthesis and typical ETL (extract-transform-load) tools available for data warehousing is that it must integrate information from multiple domains and maintain multiple versions of the data to satisfy the disparate information needs of multiple users. For instance, a social genome center might host information from three different state agencies (or domains): education, child welfare, and health. Each agency (or user) is interested in maintaining and curating its own data, as well as in enriching datasets with information from other databases, requiring the disambiguation of people and entities such as hospitals and schools. These agencies might have different uses for the data—the education department might only release statistical summaries, while the health department might want to utilize child-level data for medical interventions (and thus have a lower tolerance for errors in disambiguation and enrichment). Finally, the social genome center has to support multiple versions representing different time points to understand temporal trends.

As the number of domains, users, and versions increases, so does the complexity of managing, disambiguating, and enriching individual information. Moreover, users of the social genome data must be able to bring together (disambiguate, consolidate, and analyze) chaotic data into a view that can address particular research questions on the fly. To support multiple data uses, the system cannot create a single, consolidated, and clean view of the data; rather, it must provide a framework and tools so that users can manipulate their own views easily and at will. This is fundamentally different from the goals of the typical ETL process, which maintains one clean collection of data. A better approach is to abstract domain-independent algorithms into a platform layer and to expose a set of plug-ins that different users can customize for knowledge base synthesis.1 It is important that such plug-ins support efficient human decision making by quickly pointing out inconsistencies that need to be resolved in the federated data, along with interactive visualization that supports multiple levels of detail.

Secure data access

There is a direct relationship between data usability and risk to privacy: greater access to data generally leads to a higher privacy risk but more usability of the data, while more restricted access generally provides better privacy protection at the cost of less usability. The key is to understand data use requirements in order to design a flexible paradigm that balances the two competing requirements of usability and protection for particular needs. This is sometimes called the privacy-by-design approach to privacy protection. Privacy by design looks beyond the narrow view of privacy as anonymity and tailors privacy principles and data protection to the full system, thereby building a safe environment—consisting of secure computer systems and policy frameworks—in which data can be analyzed safely. The fundamental design principles for privacy and usability are the minimum-necessary standard—which states that maximum privacy protection is provided when the minimum information needed for the task is accessed at any given time—and the maximum-usability principle—which states that data are most usable when access to them is least restrictive; in other words, direct remote access is most usable. If we apply these design principles to a secure laboratory for population informatics, the three components of that laboratory must be a well-designed secure computer system, secure software and data to carry out the research in a privacy-preserving manner, and a governance framework.2

Broadly speaking, the purpose of population informatics is to transform raw administrative data beyond operations into insights that can inform decision making. Table 1 details the computer system, software, and data for four data-access levels—restricted, controlled, monitored, and open—designed around the workflow from raw data to decision, based on the four most common activities.2

These access levels offer optimum privacy protection while still providing maximum usability for the given data and activity, and they help define a comprehensive system for privacy protection for most secondary data analysis in population informatics research.
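The four access levels could be encoded as a simple policy map, sketched below. The flag names mirror the usability dimensions in Table 1 but are otherwise hypothetical.

```python
# Hypothetical encoding of the four data-access levels and the
# usability dimensions from Table 1 (software, outside data, remote access).
ACCESS_POLICY = {
    "restricted": {"any_software": False, "outside_data": "none",
                   "remote_access": False, "monitoring": "on and off computer"},
    "controlled": {"any_software": False, "outside_data": "preapproved",
                   "remote_access": True,  "monitoring": "on computer"},
    "monitored":  {"any_software": True,  "outside_data": "any",
                   "remote_access": True,  "monitoring": "on computer"},
    "open":       {"any_software": True,  "outside_data": "any",
                   "remote_access": True,  "monitoring": "none"},
}

def permitted(level, dimension):
    """Look up what a given access level allows along one dimension."""
    return ACCESS_POLICY[level][dimension]

print(permitted("restricted", "remote_access"))  # False
print(permitted("monitored", "any_software"))    # True
```

Machine-readable policy of this kind is what would let a social genome center enforce the table's restrictions automatically rather than by manual review alone.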


Privacy-preserving data integration

Integrating data from heterogeneous and uncoordinated systems requires record linkage—the critical task of identifying record pairs that belong to the same real-world entity. But considerations of privacy make this a difficult issue for population informatics. Privacy-preserving record linkage is fundamentally different from most privacy-preserving data operations in that the goal of record linkage is precisely to identify the entity represented by the data so that the linkage can be made accurately. For example, it is very important to distinguish between twins in a dataset so that their two records are not treated as a duplicate record for one person and are not cross-linked. Incorrect identification has the potential to harm the subjects and can also result in serious legal and clinical consequences.

It is critical to understand the distinction between identity disclosure (who is this person?) and sensitive attribute disclosure (does this person have cancer?). Identity disclosure has little potential for harm on its own, but sensitive attribute disclosure is another matter.3,4 If we define the privacy goal of privacy-preserving record linkage as a guarantee against attribute disclosure, we can develop systems that allow both privacy protection and high-quality record linkage.4

Private record linkage computes the set of linked records given a mapping function and outputs the linked records to the two private parties without revealing anything about the nonlinked records. The goal of private record linkage is to securely compute a known mapping function. The first generation of private record linkage methods was made up of hash-based algorithms, which provided strong privacy guarantees but were limited to exact matching. The second generation of methods was built on secure approximate string comparison operations, such as Bloom filters, to support approximate record linkage. Major challenges here are that, in reality, the mapping function is typically not known, and there is a requirement to manually refine the ambiguous links for high-quality data integration.5
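A second-generation scheme of the kind described above can be sketched with bigram Bloom filters compared by a Dice coefficient. The filter size, hash count, and shared salt below are arbitrary illustrative choices, not parameters from any deployed system.

```python
import hashlib

def bigrams(s):
    """Character bigrams of a lowercased string."""
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(value, size=1000, k=10, salt="shared-secret"):
    """Hash each bigram k times into a bit set; `salt` stands in for a
    key agreed on by both data-holding parties (hypothetical)."""
    bits = set()
    for gram in bigrams(value):
        for i in range(k):
            digest = hashlib.sha256(f"{salt}:{i}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % size)
    return bits

def dice(a, b):
    """Dice similarity of two bit sets: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

# Approximate matching survives small spelling variations, which is what
# hash-based exact matching (the first generation) could not do.
print(dice(bloom_encode("Robert"), bloom_encode("Robert")))  # 1.0
print(dice(bloom_encode("Robert"), bloom_encode("Roberta")) >
      dice(bloom_encode("Robert"), bloom_encode("Susan")))   # True
```

Each party exchanges only bit sets, never names, yet similar names still yield high similarity scores; this is the property that makes approximate linkage possible without revealing the underlying identifiers.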

In practice, trusted third parties with access to all the data perform data linkage and integration. In the US, federal and state health statistics departments and selected research entities are the trusted parties. Several countries operate a data linkage center to support population research. In these centers, the most important protocol for privacy protection is the separation of identifying data and sensitive data to protect against attribute disclosure.6,7

Table 1. Comparison of risk and usability.*

Restricted access
- Example system: RDC (Research Data Center)
- Type of data: Decoupled micro data (high-risk data)
- Privacy-protection methods used: Encryption for decoupling; locked-down computer with physical restriction
- Oversight protocol: Based on the risk and benefit of the research, approval is required
- Monitor use: On and off the computer
- Usability I (U1.1, software): Only preinstalled data integration and tabulation software; no query capacity
- Usability II (U1.2, other data): No outside data allowed
- Usability III (U2, access): No remote access
- Risk I (R1, cryptographic attack): Highly difficult
- Risk II (R2, data leakage): Very difficult; memorize data and take out

Controlled access
- Example system: Secure medical workspace
- Type of data: De-identified micro data (medium-risk data)
- Privacy-protection methods used: Locked-down virtual machine (VM) to restrict software on the computer and data channels
- Oversight protocol: Based on the risk and benefit of the research, approval is required
- Monitor use: On the computer
- Usability I (U1.1, software): Requested and approved statistical software only
- Usability II (U1.2, other data): Only preapproved outside data allowed
- Usability III (U2, access): Remote access
- Risk I (R1, cryptographic attack): Fairly difficult; would have to break into the virtual machine
- Risk II (R2, data leakage): Physical data leakage (take a picture of the monitor)

Monitored access
- Example system: Secure Unix servers
- Type of data: Aggregate data (low-risk data)
- Privacy-protection methods used: Information accountability
- Oversight protocol: Must file what and how data are being used, including for what purpose, in advance, but does not require approval; will still support information accountability when a breach is suspected
- Monitor use: On the computer
- Usability I (U1.1, software): Any software
- Usability II (U1.2, other data): Any data
- Usability III (U2, access): Remote access
- Risk I (R1, cryptographic attack): Easy to run sophisticated software with outside data
- Risk II (R2, data leakage): Electronically take data off the system

Open access
- Example system: Public website
- Type of data: Sanitized data (minimal-risk data)
- Privacy-protection methods used: Disclosure limitation methods
- Oversight protocol: Honor system; no registration or details of use required, but user signs a general agreement with guidelines for appropriate use
- Monitor use: No monitoring
- Usability I (U1.1, software): Any software
- Usability II (U1.2, other data): Any data
- Usability III (U2, access): Remote access
- Risk I (R1, cryptographic attack): NA
- Risk II (R2, data leakage): NA

*Shaded boxes represent no restriction in a particular dimension; they depict how access levels are fully opened up one at a time from restricted access to open access (allowing for usability 1.1 and 1.2 results in risk 1, and allowing for usability 2 results in risk 2).


High-quality data integration requires human involvement to manage the errors inevitably introduced by imperfect real data. Errors that are not properly managed propagate to subsequent data analyses, leading to incorrect analyses and decisions.8

Recently, researchers have proposed Secure Decoupled Linkage (SDLink) for privacy-preserving interactive record linkage. SDLink is a computerized third-party linkage system that offers safe and high-quality data integration by using a hybrid human-machine system based on three core privacy principles.4 First, as shown in Figure 1, SDLink decouples the identifying data from the sensitive data via encryption. Second, through chaffing (adding fake data) and universe manipulation (changing the dataset label), SDLink prevents the attribute inference that can occur in group disclosure. For example, if someone you know is on the cancer registry (group disclosure), she must have cancer (attribute disclosure), but this disclosure can be prevented if you know that the list has fake data—that people who do not have cancer are also on the list—or if you did not know this is a cancer registry. Third, any identity disclosure is additionally minimized by recoding the variables in the GUI (see the top of Figure 1). Only the information that is essential for record linkage is revealed during the linkage process. More research is needed to understand the useful and meaningful differences among the variable types, as well as what people infer from information displayed for linkage. The key is to understand the minimum information required for acceptable linkage and then to design protocols to securely reveal only that information.
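The decoupling and chaffing principles can be sketched in a few lines. The table schemas and the use of random linkage keys below are illustrative assumptions, not SDLink's actual design.

```python
import secrets

def decouple(records, chaff=2):
    """Split records into an identifying table and a sensitive table joined
    only by random keys, then add fake identities (chaffing) so that
    appearing in the identifying table discloses no sensitive attribute.
    Field names are illustrative, not a real registry schema."""
    id_table, sensitive_table = [], []
    for rec in records:
        key = secrets.token_hex(8)  # random join key, meaningless alone
        id_table.append({"key": key, "name": rec["name"], "dob": rec["dob"]})
        sensitive_table.append({"key": key, "diagnosis": rec["diagnosis"]})
    for _ in range(chaff):  # fake rows with no sensitive counterpart
        id_table.append({"key": secrets.token_hex(8),
                         "name": "(chaff)", "dob": "(chaff)"})
    return id_table, sensitive_table

registry = [{"name": "Ann", "dob": "1980-01-02", "diagnosis": "C50"}]
ids, sens = decouple(registry)
# Linkage work can use `ids` alone; seeing a name there no longer
# implies membership in the real registry.
print(len(ids), len(sens))  # 3 1
```

Because the identifying table contains chaff, an observer who recognizes a name in it cannot conclude anything about that person's sensitive attributes, which is exactly the group-disclosure protection described above.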

Privacy-preserving data analysis

Although several privacy and security challenges arise from unauthorized access or malicious dissemination of data, the results of valid data analyses can also lead to the disclosure of sensitive information about individuals, and thus a confidentiality breach. There is a fine line between an adversary’s ability to infer sensitive attributes of an individual and a researcher’s ability to learn trends in the population. Hence, mathematically formulating what it means for a data analysis to not breach the privacy of individuals is a challenging task. Understanding these risks well is especially important for data released as open access or as monitored access in the four-level model discussed earlier.

Figure 1. Secure Decoupled Linkage (SDLink). The SDLink GUI (top) applies data-recoding techniques that display the differences between the attributes that are meaningful for record linkage instead of the actual data. For example, the gender field only indicates same (_), different (D), or missing (M) in one or both fields. Internally, the data are stored in a decoupled data system (bottom), which separates out the identifying attributes from the sensitive attributes and introduces fake data (chaffing). Decoupled data have the same level of privacy protection as de-identified data (middle right), but are much more powerful because researchers can link multiple decoupled datasets safely. Decoupled data, along with chaffing, allow for accurate record linkage with no attribute disclosure.

Another challenge in private data analyses is that even if one result does not disclose sensitive information about any individual, a collection of these tasks could potentially lead to a breach. For instance, consider two queries: the number of unemployed males in Durham, North Carolina, and the number of males in Durham other than Bob who are unemployed. While Bob’s employment status is not disclosed by either query in isolation, it can be inferred by combining the answers


to both queries. Recent work has shown that many supposedly safe methods of releasing data can lead to disclosure of individual information by combining multiple invocations of these algorithms.9,10
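The Durham example can be made concrete in a few lines; the names and data are of course hypothetical.

```python
# Differencing attack: two individually "safe" aggregate queries combine
# to reveal one person's attribute. All data here are hypothetical.
males_in_durham = [
    {"name": "Alan", "unemployed": True},
    {"name": "Bob",  "unemployed": True},
    {"name": "Carl", "unemployed": False},
]

q1 = sum(p["unemployed"] for p in males_in_durham)
q2 = sum(p["unemployed"] for p in males_in_durham if p["name"] != "Bob")

# Neither count names Bob, yet their difference exposes his status.
print(q1 - q2)  # 1 => Bob is unemployed (0 would mean employed)
```

Any release mechanism that answers both queries exactly therefore leaks Bob's employment status, no matter how innocuous each answer looks on its own.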

In fact, a classic result shows that you cannot answer an adversarially chosen set of n(log n)^2 queries over a database of n bits, with error o(√n) per query, without the adversary being able to reconstruct the original database. This result poses a fundamental limit on private data analyses and motivates the need to think about private data analysis as a budget-constrained problem. Each query leads to some privacy loss while providing some utility in terms of data analysis. The goal is to achieve the maximum utility under a fixed privacy budget.9

Differential privacy is a methodology that lets us concretely reason about privacy-budgeted data analysis. An algorithm satisfies ε-differential privacy if, for any two datasets D1 and D2 that differ in one row, the ratio of the likelihood of the algorithm producing the same output starting from D1 and from D2 is bounded by at most e^ε. Thus, if each row in a database corresponds to an individual, then using a differentially private algorithm provably ensures that the output is not sensitive to an arbitrary change in any one individual’s input.10 Differential privacy is powerful because it can be composed—running two algorithms that satisfy differential privacy with parameters ε1 and ε2 results in (ε1 + ε2)-differential privacy, thus allowing us to apportion a total privacy budget of ε across many subtasks. Differential privacy can allow accurate analyses in certain cases. For instance, one of the LEHD data products boasts provable differential privacy protection in the released data (http://onthemap.ces.census.gov).
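For counting queries, the standard way to achieve ε-differential privacy is the Laplace mechanism, sketched below. The budget split is arbitrary, and the noise sampler relies on the fact that the difference of two i.i.d. exponential variables is Laplace distributed.

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) sample: difference of two i.i.d. exponentials
    with mean `scale`."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(true_count, epsilon):
    """epsilon-differentially private answer to a counting query
    (sensitivity 1): add Laplace(1/epsilon) noise."""
    return true_count + laplace_noise(1.0 / epsilon)

# Sequential composition: answering two queries with budgets 0.4 and 0.6
# consumes a total privacy budget of 0.4 + 0.6 = 1.0.
eps1, eps2 = 0.4, 0.6
unemployed_total = private_count(120, eps1)
unemployed_without_bob = private_count(119, eps2)
# The noisy difference no longer reliably reveals any one person's status.
print(round(unemployed_total - unemployed_without_bob, 2))
```

Note how this directly defeats the differencing attack described earlier: the difference of the two noisy answers carries noise of scale 1/ε1 + 1/ε2, swamping the single-person signal.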

There has been much theoretical examination of differential privacy, but how to apply this framework to practical individual data is an area of active research, including understanding optimal methods to apportion privacy budgets across sets of overlapping data analyses, minimizing the noise introduced by differentially private methods in sparse data, and customizing and relaxing differential privacy in applications involving correlations, sparse data, and time-varying data.

THE BROADER PROBLEM: PRIVACY, CONFIDENTIALITY, AND ETHICS

In the computer science literature, privacy refers broadly to collection, maintenance, disclosure, and control of, and

access to, information about individuals.11 It is helpful to note that in many other fields privacy refers more narrowly to safe data collection (data input), whereas confidential-ity refers to safe information disclosure (data output).3

Kenneth Prewitt, former director of the US Census Bureau, states that privacy is akin to “don’t ask” and confidentiality is akin to “don’t tell.” Some security technologies are applicable to both, and others are specific to only one purpose.

Accidental or purposeful misuse of social genome data has the potential to cause harm to individuals. In addition, privacy and confidentiality breaches can lead to legal consequences, especially in government and research settings. Thus, privacy and confidentiality protection is critical to the success of population informatics research. Protecting privacy and confidentiality in secondary data analysis is complex and requires a holistic approach involving technology, statistics, governance, and a shift toward a culture of information accountability through transparency rather than secrecy. Information accountability focuses on monitoring use of sensitive data to hold users of that data accountable for any misuse.12 For example, protection of financial credit history data is based mainly on information accountability: all parties know who used what information for what purposes, with strict laws to hold them all accountable.

Governance models also play an important role in maximizing protection. Helen Nissenbaum provides a practical legal framework for privacy protection of personal information referred to as contextual integrity—that is, privacy protection depends on the context and the expected norms of protection in a particular situation.13 From a technical standpoint, these privacy standards result in policy requirements on digital data about who has access to which data, for what purpose, and how the data should be maintained. The most relevant question for population informatics research is, “What are the expected norms of ethical conduct for doing research with person-level data in a given society?” Each country must start a discourse on the ethics of data analysis that draws on personal data.

Given proper oversight mechanisms, people might willingly donate data if it means more appropriate allocation of tax dollars and a greater impact for government programs—just as they willingly share blood samples for research that has the potential to save lives. The challenge lies in establishing the proper oversight mechanisms. As a society, we will have to collectively contemplate the expected norms of ethical personal data use for the greatest benefit. Then, we can apply various privacy, confidentiality, and security technologies to hold researchers accountable and ensure that all research using social genome data is conducted within legal and ethical boundaries.



JANUARY 2014 63

Big data holds tremendous, yet untapped, potential for informing evidence-based decision making at many levels. To unleash this potential, significant investments in infrastructure are worthwhile to give researchers the ability to develop the necessary tools for integrating, managing, and using social and health data with proper oversight. A series of regional social genome projects would provide unprecedented access to integrated, high-quality, robust datasets and offer the strategic focus and development space needed for population informatics to mature in a responsible, coordinated manner. To provide initial investments and ensure long-term maintenance, we envision regional social genome initiatives as regional-national-academic consortia in which each participant contributes data, funding, and resources and reaps downstream benefits. Social genome initiatives provide tremendous opportunity for both research and public programs, and investments in them will provide benefits for decades to come.

Acknowledgments

We thank Mary Whitton, Michael K. Reiter, Ronald Rindfuss, and Dean F. Duncan for their valuable comments in developing our ideas and article.

References

1. K. Bellare et al., “WOO: A Scalable and Multi-Tenant Platform for Continuous Knowledge Base Synthesis,” Proc. VLDB Endowment, vol. 6, no. 11, 2013, pp. 1114–1125.

2. H.-C. Kum and S. Ahalt, “Privacy by Design: Understanding Data Access Models for Secondary Data,” AMIA Summits on Translational Science Proc., vol. 2013, 2013, pp. 126–130.

3. S. Fienberg, “Confidentiality, Privacy and Disclosure Limitation,” Encyclopedia of Social Measurement, Academic Press, 2005, pp. 463–469.

4. H.-C. Kum et al., “Privacy Preserving Interactive Record Linkage,” to appear in J. Am. Medical Informatics Assoc., 2014, doi: 10.1136/amiajnl-2013-002165.

5. D. Vatsalan, P. Christen, and V.S. Verykios, “A Taxonomy of Privacy-Preserving Record Linkage Techniques,” Information Systems, vol. 38, no. 6, 2013, pp. 946–969.

6. C.J. Bradley et al., “Health Services Research and Data Linkages: Issues, Methods, and Directions for the Future,” Health Services Research, 16 Oct. 2010, pp. 1468–1488.

7. D. Holman et al., “A Decade of Data Linkage in Western Australia: Strategic Design, Applications and Benefits of the WA Data Linkage System,” Australian Health Rev., vol. 32, 2008, pp. 766–777.

8. P. Lahiri and M.D. Larsen, “Regression Analysis with Linked Data,” J. Am. Statistical Assoc., vol. 100, no. 469, 2005, pp. 222–230.

9. I. Dinur and K. Nissim, “Revealing Information while Preserving Privacy,” Proc. 22nd ACM SIGMOD-SIGACT-SIGART Symp. Principles of Database Systems (PODS 03), ACM, 2003, pp. 202–210.

10. C. Dwork, “Differential Privacy: A Survey of Results,” Proc. 5th Int’l Conf. Theory and Applications of Models of Computation (TAMC 08), M. Agrawal et al., eds., Springer, 2008, pp. 1–19.

11. K. Prewitt, “Why It Matters to Distinguish between Privacy and Confidentiality,” J. Privacy and Confidentiality, vol. 3, no. 2, 2011, article 3.

12. D.J. Weitzner et al., “Information Accountability,” Comm. ACM, vol. 51, no. 6, 2008, pp. 82–87.

13. H. Nissenbaum, “Privacy as Contextual Integrity,” Washington Law Rev., vol. 79, no. 1, 2004, pp. 119–158.

Hye-Chung Kum is an associate professor at the Texas A&M Health Science Center School of Public Health, Department of Health Policy and Management, with an adjunct appointment in the School of Medicine Baylor Scott & White, Department of Pediatrics. She is also an adjunct associate professor at the University of North Carolina at Chapel Hill’s Department of Computer Science, where she leads the Population Informatics Research Group. Her research interests include privacy-preserving entity resolution, secure data infrastructure for safe analysis of sensitive data, population informatics, data science, health services research, and health informatics. Kum received a PhD in computer science from the University of North Carolina at Chapel Hill. Contact her at [email protected].

Ashok Krishnamurthy is deputy director at the Renaissance Computing Institute (RENCI) and adjunct professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. His research interests are in computational science and engineering, and signal and image processing. Krishnamurthy received a PhD in electrical and computer engineering from the University of Florida. He is a member of IEEE and ACM. Contact him at [email protected].

Ashwin Machanavajjhala is an assistant professor in the Department of Computer Science at Duke University. His primary research interests include data privacy, systems for massive data analytics, and statistical methods for information extraction and entity resolution. Machanavajjhala received a PhD in computer science from Cornell University. He is a member of ACM. Contact him at [email protected].

Stanley C. Ahalt is director of RENCI and a professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. His research interests include signal, image, and video processing; high-performance scientific and industrial computing; and secure data integration. Ahalt received a PhD in electrical and computer engineering from Clemson University. He is a member of the IEEE Computer Society, the Coalition for Academic and Scientific Computing (CASC), the Research Data Alliance (RDA), and the National Consortium for Data Science (NCDS). Contact him at [email protected].

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.


64 COMPUTER Published by the IEEE Computer Society 0018-9162/14/$31.00 © 2014 IEEE

PERSPECTIVES

Augmented Reading: The Present and Future of Electronic Scientific Publications Paolo Montuschi and Alfredo Benso, Polytechnic University of Turin

As technological, economic, and social factors drive scientific publishing toward electronic formats, opportunities open beyond traditional reading and writing frameworks. Journal articles now, and in the future, can increasingly include a variety of supplemental multimedia and interactive materials for augmented reading that will impact both the nature and presentation of scientific research. The IEEE Computer Society is preparing for this evolution.

For centuries, the primary avenue for exchanging scientific knowledge has been through papers. To “publish a paper” is historically among the most prestigious validations of a researcher’s work—having that work scrutinized and evaluated through peer review and then presented to the scientific community more broadly.

Today, as the production of new research results and data steadily accelerates, reaching proportions unimagined only 50 years ago, and as increasingly rapid and widespread means of disseminating information develop, our idea of the “paper” must adapt correspondingly. First, we must find new forms for sharing this data that will enable fuller reader understanding. Second, we must develop reliable ways to filter this data so that readers can efficiently access only what is necessary and relevant to their needs and interests. Further, the ease of publishing for new portable reading technologies, such as e-readers and tablets, makes processing and printing paper material less appealing from a financial perspective—not to mention less eco-friendly.

The interplay of these drivers opens fascinating opportunities for the scientific community to create innovative ways of exchanging knowledge. Papers in electronic format are just a first, small step toward making “reading” a new and different experience. For this reason, in 2010, as volunteers on the IEEE Computer Society’s Digital Library Operations Committee, we coined the term “augmented reading,” by which we refer to a topic for research more than to a specific technology or solution. The goal of augmented reading is to define new paradigms of scientific content delivery using technology based on cognitive models, human and social factors, and ergonomics. In our view, augmented reading sets an ever-moving target: optimal modes of implementation will depend on available technologies and platforms, as well as on the ideas and data being presented; different disciplines may identify and develop different modes of delivering augmented reading content appropriate to specific areas of research. (Augmented reading materials for this article, in the form of seven videos, are available at www.computer.org/portal/web/computingnow/computer/multimedia.)

Our purpose here is to discuss in general terms how augmented reading concepts can change—and improve—the way scientific publications are planned, written, and delivered, as electronic publishing creates new models for more complete and engaging knowledge transmission. Although the concept applies across many disciplines and is a topic of considerable interest throughout the scientific publishing community, our analysis focuses mainly on computer engineering and the kinds of work of interest to the Computer Society.

MOTIVATIONS ARE FROM THE REAL WORLD

The idea of augmented reading grew from our observations that readers and researchers expect scientific papers to keep pace with the times and to facilitate full transmission of knowledge, not just its narrow communication.

Consider how information today is commonly delivered and received. A pop music star posts a clip of her new single on YouTube and in just a week accumulates millions of views; politicians, activists, and advertisers use social media to spread their messages to constituents as never before; and electronic versions of newspapers and magazines provide multimedia content and links that appeal to an increasingly broad base of readers. Our habits of gathering information have changed drastically in a very short time: most of today’s communication takes place through channels not even in existence only 10 years ago. We agree with David Alan Grier, who, as the Computer Society’s vice president for publications, said in his first podcast on the OnlinePlus publication model in January 2010,

It does not take much to observe that publications have changed radically in the last four to five years. … The old forms of publications, the ones that we grew up with and the ones that we have used all of our lives, are having to adapt to this new world.

In terms of facilitating the transmission of knowledge, we see two issues as particularly important: involving readers and providing them with tools to better understand—and possibly reproduce—research results.

With such abundant information of varying quality and depth available, both casual and expert readers can find it difficult to filter among the different alternatives and identify those that best meet their needs. This problem cannot be solved by just relying on the ranking results of an automated query tool. As scientific research more closely targets its specific audience, publications have to appeal to, involve, and make those readers care. Filmmaker Andrew Stanton put this well in a recent TED talk (www.ted.com/talks/andrew_stanton_the_clues_to_a_great_story.html):

Make me care please, emotionally, intellectually, aesthetically, just make me care. We all know what it’s like to not care. You’ve gone through hundreds of TV channels, just switching channel after channel, and then suddenly you actually stop on one. It’s already halfway over, but something’s caught you and you’re drawn in and you care. That’s not by chance, that’s by design.

Providing better understanding implies giving readers a wide experience of research results as well as their applications and assuring reproducibility,1,2 because both are necessary conditions for future advances. In many cases, this requires adding much more information to the original paper, including important details that can “virtually” put the reader as close as possible to the researcher at the time of making the discovery.

BROADENING “SENSE” APPEAL

Reading a printed manuscript involves only one of the human senses: sight. The brain “visualizes” and understands ideas as the eyes read about them. Figures can aid comprehension but, being static, cannot directly reproduce dynamic relationships, which can only be interpreted by the brain through reading. Tables provide evidence that research is accurately performed and substantiated by data, but these visuals still leave much information for the brain to perceive and interpret.

Most of our first-hand experiences and perceptions of surrounding reality are multi-sensorial. Therefore, readers’ brains might not always fully transfer what is described on the printed page into concepts that match the writer’s intended “reality.” In works such as novels and romances, this gap between the written content and a reader’s perception is stimulating and can be bridged by human fantasy, by an inner personalization of the story.

In scientific publications, however, where facts must be clearly demonstrated and methodologies accurate, convincing, and rigorous, this gap is not so easily bridged. Here, static text has limited expressive capabilities, and therefore when a writer omits (whether purposefully or not) important details, readers likely perceive an incomplete, perhaps even inaccurate, version of what the writer intends.

Scientific research has tried in the past to mitigate this problem by introducing strict formalisms and self-contained worlds that (ideally, at least) outlaw ambiguity. The wide use of mathematics in some scientific disciplines, for example, represents an attempt to transmit knowledge in a self-contained way; all the reader needs to understand from the scientific contribution—relevant details included—is embedded in the mathematics.

Unfortunately, disciplines such as medicine, biology, and the cognitive and pedagogical sciences, as well as all those areas where the empirical approach plays a relevant and necessary role, cannot rely solely on mathematics or formal methods. Very interesting in this regard is a recent TED talk, in which surgeon Steven Schwaitzberg outlines the two main difficulties he sees in teaching laparoscopic surgery to physicians around the world: language and distance (www.ted.com/talks/steven_schwaitzberg_a_universal_translator_for_surgeons.html).

So what more holistic approaches might writers use to fill this gap? Since human reality is multi-sensorial, representing it fully should involve more senses than sight alone. Hearing is obviously the best candidate for inclusion as part of the experience, followed by touch, smell, and taste (although given current technological limitations, the latter three pose significant challenges).

Imagine being fully immersed in a multi-sensorial “paper,” experiencing conditions very similar to what took place as an experiment was conducted and its results first discovered by the writing researcher. Or imagine reading the results of a paper presented as physician and statistician Hans Rosling does in his famous TED talk demonstrating statistics through the use of moving and interactive graphics (www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html). Animations, 3D models, virtual environments, simulators, augmented reality—all are relatively new concepts that could play an important role in creating an “immersive” experience for readers of research results in scientific publication.

We are already witnessing some initial attempts to provide such innovative reading experiences, even if these are still limited to entertainment products like Sony’s Wonderbook: Book of Spells, an augmented reality title for its PlayStation. The world of scientific publishing is still far from reshaping itself to take full advantage of prototypical consumer implementations such as this, but it is a useful exercise to observe what is currently available, and at the same time plan for future innovation. We present here our vision based on available opportunities, projecting what might be accomplished even from a “starting small” philosophy of improvement.

A FRAMEWORK FOR AUGMENTED READING

We can begin to posit a basic idea of augmented reading by observing three facts related to how information today is gathered and presented:

• The PDF version of a research paper does not take full advantage of the many possibilities offered by most consumer electronic reading platforms.
• Users now commonly look for video information in addition to textual documents—for example, consulting YouTube or other video versions of device and software manuals.
• Social media play an important role in people’s lives today. Any communication system, including those aimed at knowledge transmission, must offer users the possibility to connect with others and share experiences, comments, current information, and the like.

All of this leads us to define augmented reading as “a new way to design and deliver scientific content for innovative reading and learning experiences.” As noted earlier, we see augmented reading as a moving target; its actual implementation depends greatly on available technology and platforms, as well as on the needs of particular disciplines and sets of writers and readers.

In this context, we believe any framework for augmented reading should ideally meet the following minimal requirements. We hope it would be

• accepted, driven, and led by the appropriate scientific communities;
• viewed as a topic for scientific research in terms of where and how best to direct efforts and investments;
• based, regardless of discipline, on a common foundation where cognitive, social, and human-machine interface factors facilitate and enhance the experience and understanding of scientific research;
• simple to use, even by those with little technological expertise;
• designed to incorporate smoothly, pervasively, and continuously into the world of electronic scientific publications, keeping pace with technological advances, social tools and habits, new disciplines, and so forth;
• an inspiration for the production of scientifically sound products that bring real value and are readily accessible to a large majority of readers and authors; and
• a useful technology for increasing the combined reading-learning experience.

Additionally, augmented reading might provide a social and ethical opportunity to offer more features and assistance that allow partially impaired readers and users greater access to scientific knowledge. The sidebar “Augmented Reading: A (Slightly) More Formalized Approach” offers a somewhat more systematic view of the process.

The Computer Society’s initial movements toward implementing augmented reading over the past several years have led to some intermediate but still important contributions and results.

AUGMENTED READING AT IEEE AND THE COMPUTER SOCIETY

Thanks to its Digital Library (CSDL) and previous online platforms, the Computer Society has delivered scientific publications in electronic format for almost 20 years. From the beginning, electronic versions of all papers have matched exactly their corresponding print versions, both in editorial content and layout. Nevertheless, even as early as 2000, the Society had begun to explore the opportunities offered by new electronic tools, media, and publishing facilities. This led over the years to a revolution in the Society’s publishing model and in the way it promotes learning.3-7 The timeline in Figure 1 summarizes the most important milestones and achievements in this revolution.

Today, Computer magazine, along with other IEEE CS magazines and transactions, has its own multimedia page, as well as playlists on the IEEE Computer Society YouTube channel. All authors are encouraged to submit multimedia, as suggested on Computer’s multimedia page.

Besides these efforts, the path is still long, as incoming Society President Dejan Milojicic has highlighted:5

Augmented Reading: A (Slightly) More Formalized Approach

What might augmented reading offer in terms of improving understanding? Put another way, what impediments involved in the process of knowledge production and knowledge reception could elements of augmented reading minimize?

First, let us roughly formalize this process of a researcher producing knowledge based on observing and measuring reality through the transmittal of the knowledge produced to a reader (or user). As shown in Figure A, the process occurs in four major stages, with the passage of knowledge from one stage to the next in each case introducing errors and uncertainties.

We denote the errors between stages one and two as ER-MO, those between stages two and three as EMO-FDT, and those between stages three and four as EFDT-R. The way a reader ultimately receives and perceives the initial reality is a function of these three levels of error. It is reasonable to expect that reductions of the errors between each of the stages could improve the reader’s final perception.

Here are some considerations about each level of error:

• ER-MO defines the errors that occur during the perceptual passage from reality to its observation and measurement. Here we are basically under the influence of two classes of errors: ei and em—errors of interference and measurement, respectively. In fact, besides the errors em based on measurement introduced by the researcher, quantum mechanics tells us that any measure interferes with the reality being measured, resulting in a distortion, however small, modeled by ei. (This can also be seen as a sort of intrinsic error due to the passage from reality to one of its measurements.)
• EMO-FDT defines the errors that occur during the passage from observation and measurement of reality to formalization into a model, the model’s description, and finally its transmission. Here we have two classes of errors as well: the intrinsic error ek, resulting from the inter-stage passage, and the error efdt, resulting from the lack of accurate tools to formalize, describe, and transmit the model.
• EFDT-R defines the errors that occur during the passage from formalization, description, and transmission to reception. Here again we have two classes of errors: ej, which is the intrinsic error, and er, resulting from the reader’s (or user’s) lack of tools to properly receive the information, including the lack of technological platforms available to experience the knowledge being received.

The classes of errors that cannot be avoided (in other words, zeroed) but only mitigated are the intrinsic errors ei, ek, and ej. We define the group of non-intrinsic errors, em, efdt, and er, as the gap G between the researcher and the final user. There are many reasons for this gap, not all of which necessarily refer to a lack of good technological tools for some specific task; they can also result from a lack of specific knowledge on the part of the main actor in charge of carrying over the stage. For example, er could have a high negative influence overall because the reader or user does not have a good screen on which to enjoy a video clip, or because he does not have the background knowledge to understand what he is receiving, or even because he does not have sufficient expertise to manage the technological tools necessary to properly receive the knowledge.

Augmented reading targets the reduction of this gap G, in terms of efdt and er and, for the methodological aspects, of em as well. In order to do this, augmented reading has to consider the entire scenario—that is, improving the factors that could possibly increase the full experience and perception of knowledge from the earliest stages of its production. In this context, augmented reading can enable research itself, by driving the way it is structured and conceived from the initial measurement phases—that is, targeting a methodological decrease of em. Clearly, improving technology is not the only direction of pursuit for achieving these stated goals.
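In the sidebar’s own notation, the error decomposition described above can be summarized compactly:

```latex
\begin{align*}
E_{R\text{-}MO}   &= e_i + e_m
  && \text{(reality} \to \text{observation and measure)}\\
E_{MO\text{-}FDT} &= e_k + e_{fdt}
  && \text{(observation} \to \text{formalization, description, transmission)}\\
E_{FDT\text{-}R}  &= e_j + e_r
  && \text{(transmission} \to \text{reception)}\\
G &= e_m + e_{fdt} + e_r
  && \text{(non-intrinsic gap between researcher and user)}
\end{align*}
```

The intrinsic terms ei, ek, and ej can only be mitigated, while G is the portion augmented reading sets out to shrink.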

Figure A. From reality to user/reader perception. The passage of knowledge between the stages introduces three levels of error: Reality → Observation and measure (ER-MO = ei + em) → Formalization, description, and transmission (EMO-FDT = ek + efdt) → User/reader perception (EFDT-R = ej + er).


The future offers enormous opportunities to further modernize academic and professional publications. Our goal is to meet customer (readers, authors, members) expectations driven by their other online experiences, including the ability to provide comments, reviews, and cross-references. These efforts will require capturing new data from our customers and from diverse sources—not just at the point of sale.

In addition, IEEE has taken some important steps toward electronic publishing, particularly in terms of the open access model. In 2010, the IEEE Publication Services and Products Board (PSPB) and the IEEE Technical Activities Board (TAB) made five recommendations to the Board of Directors, which soon led to the implementation of three open access publishing options:

Figure 1. Milestones in Computer Society electronic publishing.

2000: The Computer Society launched IEEE Distributed Systems Online, the first successful IEEE CS online-only magazine and an important contribution to new publishing media. Sustainability issues led Distributed Systems Online to cease publication after nine years.

2007: Computer magazine was made available in PDF electronic format in addition to print.

2008: In May, the Computer Society launched Computing Now, an online portal offering free access to blogs, multimedia, news, and selected articles from magazines and journals in its Digital Library.

2010–2011: In 2010, the Computer Society announced the OnlinePlus publication model, a framework for electronic publication providing subscribers “with features and benefits that cannot be found in traditional print.” In 2011, IEEE Transactions on Visualization and Computer Graphics and IEEE Transactions on Dependable and Secure Computing were the first to adopt the OnlinePlus publication model, and IEEE Transactions on Pattern Analysis and Machine Intelligence offered two subscription options: dual print and online versions, or OnlinePlus. Print-on-demand for OnlinePlus titles was announced at the end of 2011.

2011: The Computer Society began making magazines and journals available through its Digital Library in the epub format, suitable for portable devices ranging from e-readers to smartphones.

2011: In July, IEEE Transactions on Computers started to select, on a monthly basis, one of its papers for publication in Computing Now together with an additional multimedia contribution. These multimedia contributions, along with those from a steadily growing number of other transactions, are being collected and made available in the Transactions Media Center.

2011: The Computer Society launched the Special Technical Communities as flexible structures allowing a new way for members to develop communities focusing on selected technical areas.

2011: The IEEE Computer Society’s Conference Publishing Services (CPS) division announced plans to offer conference organizers the option to publish their proceedings online.

2012: The Computer Society launched the iPad application, along with an Android application, for Computer magazine, as well as the Digital Library mobile application for iPad.

2012–2013: In 2012, IEEE Transactions on Mobile Computing and IEEE Transactions on Parallel and Distributed Systems adopted the OnlinePlus publication model, followed in 2013 by IEEE Transactions on Computers, IEEE Transactions on Software Engineering, and IEEE Transactions on Knowledge and Data Engineering. Other transactions are scheduled for migration in the near future.


JANUARY 2014 69

- Hybrid journals, which permit both traditional subscription-based content and open access, author-pays content; most IEEE transactions and magazines now offer this option.
- A multidisciplinary open access megajournal (IEEE Access), debuting in 2013, available online only and spanning all IEEE fields of scientific interest.
- Fully open access journals dedicated to specific subject areas and delivered online only; two such journals have already launched, and two more are approved and under development.

This move to provide publications in electronic format has raised some basic questions:

Why electronic publications? Do we really need electronic publications? What are readers’ and authors’ perceptions of this effort?

While these questions have already been answered in part, the world of opportunities electronic publishing opens—particularly from the perspective of augmented reading—is virtually limitless.

THE ELECTRONIC (R)EVOLUTION: OPPORTUNITIES AND PRIORITIES

A primary benefit of electronic publication for scientific research is that it minimizes the time from submission to publication. Quality scientific and technical research requires both peer review and fast, widespread dissemination, within an environment marked by competitiveness and the risk of rapid obsolescence. In some areas, the availability of intellectual property in digital form can be critical for the state of research overall—this is true in the life sciences, for example, where the most recent breakthroughs are a mix of laboratory experiments and data mining. Publishing in electronic format offers the optimal solution in all these regards.

But time issues are only one driving factor. Other benefits of electronic publishing include lower production costs, the portability and easy accessibility of electronic documents, more efficient management of resources, lower storage costs and readier retrieval for archival purposes, and the fact that electronic databases can offer ad hoc user personalization in ways unavailable with print.

Clearly, some of these are benefits more from the publisher's perspective—primarily in terms of lower costs—and others more from the author's and reader's—enhanced flexibility and potential for data delivery and gathering. The key to success is finding a middle ground where all three can meet.

So, to provide the greatest advantages to authors, readers, and the Computer Society alike, the Society over the past three years has invested considerable effort in defining and formalizing new publication models. This process involved a variety of volunteers, committees, and boards, including the Publications Board, the Digital Library Operations Committee, the Magazine and Transactions Operations Committees, the Electronic Products and Services Committee, and the Board of Governors.

We prefer to view this transformation not as another "digital revolution," but as an evolution of the publication semantic, characterized by gradual, targeted, and adaptive change aimed at improving and complementing what already exists, rather than wiping away past efforts and lessons learned. Therefore, starting with what is currently available, our top priorities had to focus on outreach and on reader requirements: tailoring information around the reader, as well as delivering it quickly, in minimal size and simple language, and with frequent updates.

In 2010, we sketched three phases for this evolution, the first two targeted at outreach and the third at improving the reading experience, as illustrated in Figure 2:

- Phase 1—mobility. This action targets the short-term goal of creating a mobile-enabled website allowing member access to Computer Society publications, thus providing easier, faster, and more flexible ways to connect with our publications' broad scientific content.
- Phase 2—applications. This action targets the goal of creating ad hoc applications for mobile devices to enable easy and seamless access to Computer Society intellectual property—a natural evolution of Phase 1, offering an extended feature set better customized to mobile devices.
- Phase 3—augmented reading. This action targets the goal of creating, according to our definition, "a new way to design and deliver scientific content for innovative reading and learning experiences."

The first two phases, in different stages of implementation, are currently offered to Computer Society members. Implementing the third phase poses interesting challenges and offers opportunities for further discussion.

STARTING SMALL: SUPPLEMENTAL MATERIALS

For the short term, and deploying currently available technologies, a "starting small" approach to implementing our augmented reading target could be planned around three steps:

- Adding sound to papers—for example, a recorded interview or short author introduction, additional bibliographic information, or a third-party comment complementing or extending the scientific contribution.


PERSPECTIVES

- Integrating into the paper different technologies—such as videos, simulation scripts, and embedded labs or simulators—to provide a more heterogeneous, complete mechanism for transmitting knowledge.
- Experimenting with further innovative solutions to keep pace as technologies and readers' social habits evolve.

Some publications are already enacting aspects of the first two steps as well as, to a lesser degree, the third. Just to mention a few:

- Computer and several other of the Computer Society's magazines and transactions have multimedia pages that include video clips, and Computer in addition has sound and video embedded in the PDFs of its enhanced digital edition.
- ACM's Transactions on Mathematical Software has a category of submissions called "Algorithms," where "a submission consists of all the code and test data necessary for the effective use and testing of the algorithm implementation by a large section of its intended audience";

Figure 2. Phases for future evolution of Computer Society electronic publications: Phase 1, mobility (browser and e-reading application); Phase 2, applications (Digital Library app via an app "store"); Phase 3, augmented reading (enhanced Digital Library app via an app "store").


- Elsevier is using tools "to enable authors to embed chunks of executable code and data into their papers, allowing their readers to inspect code and reproduce computational results."
- Nature will launch Scientific Data, "a new open-access, online-only publication for descriptions of scientifically valuable datasets," in spring 2014.

Materials like these that accompany the conventional PDF version of a paper are generally referred to as supplemental material. Use of such supplemental material has been available long enough now, and is sufficiently widespread, to make possible some observations and plans for its future.

Over the past 15 years, use of online repositories to store scientific material not originally included in the traditional article has grown steadily. This practice is perhaps most fully consolidated in the life sciences, for technical more than cultural reasons: published results there are expected to be supported by a considerable amount of additional, unprintable material, such as datasets, large tables, high-definition images, and so forth.

The assumption has always been that including more data with an article adds value and allows readers to more fully understand the science involved. Nevertheless, as journals saw contributions of supplemental material increase dramatically, editors, reviewers, publishers, and readers began to view the experience of handling all this supplemental material more negatively.

Following this trend, the National Information Standards Organization (NISO) and the National Federation of Advanced Information Services (NFAIS) instituted a working group to study the impact of supplemental material at different levels and from different points of view. The main recommendations of this working group have been published on the NISO/NFAIS Supplemental Journal Article Materials Project webpage (www.niso.org/workrooms/supplemental).

Supplemental material, although formally "outside the paper," can be classified according to its role with respect to the original journal contribution. The definition provided by the NFAIS suggests that supplemental material can be considered as "integral content" or "additional content" in relation to the paper. Table 1 summarizes the NFAIS definition as presented by its executive director.

By this definition, "integral content"—that is, something necessary to fully understand the authors' contribution—should be included in the original manuscript, apparently requiring that multimedia and other like features necessary to understanding be integrated into scientific papers (which, by extension, raises questions of how to do so technically). Once fully integrated, this "integral content" can no longer be considered supplemental material, but necessarily becomes an actual part of the published article in a non-textual form. In this case, it should be

- published and archived together with the main article;
- subject to a peer review process; and
- accessible and "searchable" through metadata, keywords, and the like.

From the publisher's point of view, these considerations have led, or must eventually lead, to a definition of best practices and policies for identifying the scope and handling of acceptable supplemental material, which requires answering, at minimum, the following questions:

- What supplemental material may, or may not, be included?
- What are the acceptable formats for supplemental material?

Table 1: Types of supplemental materials in scientific journals.

Text, figures, tables:
- Integral content (hosted or managed by the publisher): critical to understanding the work reported, but technical issues prevent inclusion in the article.
- Additional content (hosted or managed by the publisher): expansion of article, added detail and context; provides a layered approach for readers with different information needs.
- Other related content (hosted elsewhere): not applicable.

Multimedia; chemical, crystal, and protein structures; computer algorithms, executables; and so on:
- Integral content: critical to understanding the work reported, but technical issues prevent inclusion in the article.
- Additional content: some journals post either in addition to a repository posting or in place of the repository.
- Other related content: may be posted to a repository as well as the publisher's site.

Raw datasets:
- Integral content: not applicable.
- Additional content: some journals post either in addition to a repository posting or in place of the repository.
- Other related content: should be posted in a repository if not with the publisher, or may be posted in both places.

Source: B. Lawlor, "Recommended Practices for Journal Article Supplemental Material," presentation for the Council of Science Editors 2011 Annual Meeting (www.councilscienceeditors.org/files/presentations/2011/20_Lawlor.pdf).


- Who is in charge of peer review and validation of supplemental material?
- How mandatory can supplemental material be considered? To what extent can referees ask authors to provide additional data, and, by extension, to what extent can authors expect referees to review such material, no matter how large?
- How important is the technical quality of supplemental material as submitted? Who has responsibility for improving its quality (noisy audio, poor video, and the like) if necessary, and who bears the costs?
- What is the storage space (and cost) required or allowed to archive this supplemental material?

At the Computer Society, we have already started to respond to some of these questions. In particular, IEEE Transactions on Computers, as well as some other transactions, now asks referees to review supplemental material when it is an integral part of the submission. Furthermore, both Computer magazine and the Publications Board have defined best practices for multimedia supplemental material.

From the author's point of view, peer evaluation of supplemental material might entail unacceptable delays in the review process, and some material (such as datasets) might not be reliably reviewable. Reviewers might even delay publication by asking for modifications to supplemental material, possibly requiring the author to perform significant additional work.

Obviously, including this kind of supplemental material cuts both ways in terms of benefits and liabilities. Does one outweigh the other?

Some journals have already declared this an unacceptable obstacle in terms of the quality of their scientific offerings. The Journal of Neuroscience, for example, in 2010 published its "Announcement Regarding Supplemental Material,"8 prohibiting authors from submitting supplemental material along with their articles.

The bottom line is that one size certainly does not fit all; publications have their own specific characteristics, and therefore will develop different policies based on the publication type, the community served, the content represented, the intended audience, the expected social impact, and available technology, as well as a host of other relevant criteria. (We offer a few tentative guidelines of our own in the sidebar "Some Practical Suggestions for Authors, Editors, Publishers, and Readers.")

AUGMENTED READING: ENABLING RESEARCH

Another aspect of today's supplemental material—and tomorrow's potential augmented reading features—also deserves attention. Research areas that focus on extremely complex systems—the life sciences, quantum physics, and astronomy, to name just a few—build incrementally on their knowledge base a bit like a puzzle, with each researcher or publication adding a tiny piece to the overall picture. This collective process, carried out by many researchers usually independently of one another, adds a further level of complexity that is not related to the experimental system itself, but stems from the way scientific data are collected (that is, using many different experimental techniques), from the amount of collected data (which are often highly oversampled), and, most critically, from the way experiments and results are documented and communicated.

Some Practical Suggestions for Authors, Editors, Publishers, and Readers

For the most part, authors and readers have already begun to understand and experience the potential of electronic products. However, the augmented reading philosophy is more encompassing, and it will require time before its potential for designing and experiencing new scientific contributions can be fully comprehended and achieved. Here we offer a few tentative recommendations for the immediate future.

For authors, scientific publication becomes an opportunity to transmit knowledge, in a balanced and integrated way, using a variety of elements: text, video, audio, multimedia, tables, datasets—and possibilities still to be explored. This represents a very different approach from simply placing supplemental material "at the bottom" of a conventional PDF; instead, it becomes a fully integrated part of the text. For even better integration, authors could take into account elements other than scientific ones in presenting their research—cognitive and social aspects, for example.

Editors can prepare the ground to nurture this evolution by proposing changes to the submission model, going beyond the simple integration of supplemental material. Collaboration with both authors and readers in these efforts will be key. Their mission is to maintain a journal's quality and reputation by not overloading research with supplemental resources too quickly, but at the same time keeping in mind the final goal of a sustainable—and evolving—augmented reading philosophy.

Publishers could provide the facilities to support augmented reading without increasing costs for final users. This would require helping editors define physical sustainability, for example, in terms of the maximum archival requirements for a new scientific paper. Publishers should, whenever possible, include both editors and authors in the process of determining how new technologies could help the scientific editorial world. Making rapid decisions in addressing these new policies will become an important part of their mission and strategic planning.

Readers and final users in general must provide the motivational engine of this evolving process. Their feedback is crucial to help authors, editors, and publishers find optimal solutions for disseminating knowledge in this new electronic environment; their ideas will help determine what works and also provide inspiration for even more innovative features.

Rules differ from publisher to publisher, which con-stitutes from the point of view of the user—the reader or researcher—a critical lack of common transferrable standards. Because most scientific research is reported and documented in textual form and in variable for-mats (papers, meeting abstracts, patents, and so forth), a huge amount of potentially useful information is lost—in abstracts, captions, and notes, for example. Even that which is stored into databases as direct experimental results is scattered among myriad different and incompatible for-mats and access interfaces.

This is becoming such a critical issue in so many disciplines that a new research area has recently emerged responding to the problem of managing (and querying) "big data." It is important to understand that making information retrievable will make it possible not only to find necessary information but also to extract new knowledge from data already available. This means that published data may increasingly have the potential to drive research rather than simply be the outcome of it. Successful implementations of this sort of research are already occurring in fields such as systems biology and bioinformatics. It is not by chance that Wikimedia recently initiated a new project called Wikidata, "a free knowledge base that can be read and edited by humans and machines alike." The goal is to collect structured data to allow easy reuse of that data by Wikimedia projects and third parties and to enable computers to easily process and "understand" it.

To some extent, efforts to formalize and make available research results will increasingly become as important as the research itself. Three aspects play a relevant role:

- The data format. Without standardized data formats, results cannot be aggregated, compared, filtered, and manipulated. Even the most important repositories often have different methods for indexing the same data.
- The data access interface. It is not only necessary to make data available on the Web through a user interface; most often, big repositories need to be accessed by automatic tools, because the human interface is too limited to complete the data mining tasks required. Unfortunately, even if standards are defined and available, most databases have different access interfaces that make their real use in an automatic framework implementable only by computer experts, at a considerable expense of time. There is a huge difference between making data available and rendering it in formats that are actually usable.
- Semantic searches. Reliable tools to implement semantic data searches will become of critical importance to future data mining.
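The access-interface problem above can be sketched in a few lines. In this toy example—all class and field names are made up for illustration, not drawn from any real repository's API—two repositories return the same kind of record through incompatible interfaces, and a thin adapter gives automatic tools one uniform way to query both:

```python
# Hypothetical sketch: two made-up repositories expose equivalent records
# through incompatible interfaces; a small adapter normalizes access so
# that automated tools need only one calling convention.

class RepoA:
    def fetch(self, accession):                 # returns a dict
        return {"id": accession, "sequence": "ACGT"}

class RepoB:
    def get_record(self, key):                  # returns a tuple
        return (key, "TTGA")

def normalized_lookup(repo, record_id):
    """Adapter: hide each repository's interface behind one uniform call."""
    if isinstance(repo, RepoA):
        rec = repo.fetch(record_id)
        return {"id": rec["id"], "data": rec["sequence"]}
    if isinstance(repo, RepoB):
        key, seq = repo.get_record(record_id)
        return {"id": key, "data": seq}
    raise TypeError("unknown repository type")

# The same query now works against either backend.
for repo in (RepoA(), RepoB()):
    print(normalized_lookup(repo, "X1")["data"])
```

Every new repository format multiplies the adapter code required; shared standards would eliminate this layer entirely, which is the article's point.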

From this perspective, the use and standardization of supplemental material (in terms of indexing, metadata, keyword ontology, and so forth) are crucial for the implementation of next-generation semantic search engines, not only to enable them to find words in a paper but to find the material that, for example, matches or confirms a certain hypothesis.

One of the technical recommendations of the NISO/NFAIS working group focuses exactly on providing a framework for assigning descriptive and physical metadata to the supplemental materials associated with an article.
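To make the idea concrete, the following sketch shows what descriptive metadata for supplemental files might look like, with a toy keyword search over it. The field names ("role", "content_type", and so on) are illustrative placeholders only and do not reproduce the actual NISO/NFAIS recommended schema:

```python
# Hypothetical sketch: descriptive metadata records for supplemental
# files, plus a toy keyword search. Field names and DOIs are made up.

SUPPLEMENTS = [
    {
        "doi": "10.1109/EXAMPLE.2014.001",      # parent article (fictitious)
        "file": "dataset_runs.csv",
        "content_type": "raw dataset",
        "role": "additional",                   # vs. "integral"
        "keywords": ["energy", "benchmark", "mobile"],
        "peer_reviewed": False,
    },
    {
        "doi": "10.1109/EXAMPLE.2014.002",
        "file": "proof_appendix.pdf",
        "content_type": "text",
        "role": "integral",                     # reviewed with the paper
        "keywords": ["convergence", "proof"],
        "peer_reviewed": True,
    },
]

def find_supplements(keyword, integral_only=False):
    """Return records matching a keyword; optionally only integral content."""
    return [
        rec for rec in SUPPLEMENTS
        if keyword in rec["keywords"]
        and (not integral_only or rec["role"] == "integral")
    ]

print([r["file"] for r in find_supplements("proof", integral_only=True)])
```

Once such metadata is assigned consistently, a search engine can retrieve the supplement that supports a hypothesis—not merely the paper that mentions it.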

LOOKING TO THE FUTURE

It might sound trivial, but in this case the future is in our brains, and not just in our keyboards. We all know the importance of thinking and operating quickly; something we hypothesize today takes a long time before its bug-free implementation comes to life.

So our starting point today is the widespread availability of mobile devices, social networks, Internet connections both slow and fast, potential users with different expertise and needs, and huge amounts of information with varying levels of relevance and scientific soundness. In addition, we have available today interesting technologies developed to provide innovative reading experiences:

- Self-publishing technologies that allow an author to become his or her own publisher; examples include Apple iBooks Author, Amazon Kindle Direct Publishing, Simplicissimus Book Farm Stealth Platform, and others.
- Book-related technologies that allow interactivity, like TED Books and Push Pop Press's multimedia application, the publication platform for Al Gore's recent "book" Our Choice.
- Electronic publishing platforms and tools that add dynamic content, such as the Inbooki.com platform where, as the site describes, "the story can change depending on the weather, or has chapters that can be read only in the evening. …"

Some scientific and technical press editors and publishers have already started to consider issues like these and are testing possible solutions. One excellent example is the idea of executable papers, launched by Elsevier through a 2011 contest as part of an initiative to find innovative tools for the "article of the future," a 2009 project now in prototype form (http://articleofthefuture.com).

Today, the Computer Society is planning for many scenarios:


- the digital version of Computer magazine, which offers a "rich mix of extra multimedia content";
- the OnlinePlus model and its projected evolutions and extensions; and
- living abstracts videos, such as this one, to quickly explain the research and to invite reading of the attached paper.

Of course, links to the video are not available in this print version—which leads us to our thoughts about tomorrow.

We expect tomorrow will bring many modifications and improvements to the innovations we have already discussed and introduce others we cannot foresee today. A primary key to future success, both for publishers and the scientific communities, will be the ability to maintain a balance between what has worked in the past and the enormous potential for new opportunities that lie ahead. This will require thoughtful action in the short term and creative prediction, planning, and decision making for the long term. New solutions must constantly evolve, keeping at least some element of surprise. Innovations that spark a “Wow!” today can quickly grow outdated, eliciting only a shrug and “So what?” tomorrow.

In our view, the move toward augmented reading must be driven by authors' and readers' needs; new technologies are only tools allowing us to meet these needs. We do not claim that just because technology now and in the future offers interactive possibilities beyond those available with current PDFs, it will, in itself, advance or yield better communication of scientific knowledge. Instead, we trust that these new tools have great potential that depends on a number of factors, beginning with the quality of what is proposed to supplement the PDF.

Still, as both authors and readers, we look forward to the changes ahead and, in fact, have already started researching and writing papers in electronic format that fully integrate multimedia with the written text.9 We strongly believe that the augmented reading philosophy will be key to the continued success of electronic dissemination of scientific research, and envision an exciting future for authors, editors, publishers, and readers alike.

Acknowledgments

For their continued support and advice, we thank S. Reisman, D. Milojicic, D.A. Grier, J. Walz, T. Conte, A. Burgess, E. Butterfield, A. Zomaya, F. Lombardi, J.-L. Gaudiot, F. Ferrante, J. Rokne, P. Laplante, C. Metra, A. Stickley, G. Carter, S. Woods, E. Hardison, R. Baldwin, G. Pointdexter, B. Brannon, C. Walsh, and M. Gallaher. A special thanks goes to Ron Vetter for his advice and help while revising the preliminary drafts of this manuscript, and to the anonymous referees.

References
1. B. Jasney et al., "Introduction to Special Issue: Again, and Again, and Again …," Science, vol. 334, no. 6060, 2011, p. 1225.
2. N. Gautam, "Scientists' Elusive Goal: Reproducing Study Results," The Wall Street J., 2 Dec. 2011.
3. S. Reisman, "Using Learning Objects to Affect Educational Outcomes," Computer, vol. 42, no. 8, 2009, pp. 6-8.
4. S. Reisman, "Changing the Publishing Model," IT Professional, vol. 11, no. 5, 2009, pp. 60-62.
5. D. Milojicic and P. Laplante, "Special Technical Communities," Computer, vol. 44, no. 6, 2011, pp. 84-88.
6. D. Milojicic et al., "Innovation Mashups: Academic Rigor Meets Social Networking Buzz," Computer, vol. 45, no. 9, 2012, pp. 101-105.
7. S. Reisman, "Planning for an Inevitable Future," Computer, vol. 44, no. 1, 2011, pp. 6-8.
8. J. Maunsell, "Announcement Regarding Supplemental Material," J. Neuroscience, vol. 30, no. 32, 2010, pp. 10599-10600.
9. P. Montuschi et al., "Job Recruitment and Job-Seeking Processes: How Technology Can Help," to be published in IT Professional, 2014; doi:10.1109/MITP.2013.62.

Paolo Montuschi is a professor of computer engineering at the Polytechnic University of Turin. His research interests include computer arithmetic and architectures, computer graphics, electronic publications, and new frameworks for the dissemination of scientific knowledge. Montuschi received a PhD in computer engineering from the Polytechnic of Turin. He is a Fellow of IEEE and a Computer Society Golden Core member, and he serves as chair of the Computer Society's Magazine Operations Committee and is associate editor in chief of IEEE Transactions on Computers. Contact him at [email protected].

Alfredo Benso is an associate professor of computer engineering at the Polytechnic University of Turin. His research interests include systems biology and pattern recognition techniques for biological data. Benso received a PhD in information technologies from the Polytechnic University of Turin. He is a Computer Society Golden Core member and an IEEE senior member. Contact him at [email protected].

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.



0018-9162/14/$31.00 © 2014 IEEE Published by the IEEE Computer Society JANUARY 2014 75

GREEN IT

Cloud-Based Execution to Improve Mobile Application Energy Efficiency

Eli Tilevich and Young-Woo Kwon, Virginia Tech

Cloud offloading—a popular energy optimization technique—executes a mobile application’s energy-intensive functionality via a cloud-based server. To maximize efficiency, systems must determine the functionality to offload at runtime, which will require innovation in both automated program transformation and systematic runtime adaptation.

The growing complexity of mobile application functionality poses increasing concerns for device battery capacity (K. Pentikousis, "In Search of Energy-Efficient Mobile Networking," IEEE Comm. Magazine, vol. 48, no. 1, 2010, pp. 95-103). Recent research has focused on cloud offloading as a possible technique for improving mobile application energy efficiency. The concept is relatively simple: when a mobile application identifies some functionality as "hot" (that is, energy intensive), it transforms to a distributed application and offloads the required data to a cloud server, which executes the hot functionality and then transfers back the results, thereby saving battery power for the mobile device.

However, most offloading schemes fail to consider the energy required for network transfer, which can sometimes negate any energy savings gained from offloading. Because mobile network conditions vary, the decision whether offloading will save energy must occur dynamically at runtime and requires an immediate cost-benefit analysis to determine whether the energy saved by reducing local processing exceeds the overhead required for distributed processing. This analysis is far from trivial due to mobile hardware heterogeneity and execution environment volatility, and it must ultimately involve encapsulating the functionality and exposing it via clean programming abstractions. In other words, adaptive cloud offloading requires innovation in both program transformation and runtime adaptation.
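The cost-benefit test described above can be sketched as follows: offload only when the energy to ship the data over the currently available network and idle while the server computes is less than the energy of computing locally. This is an illustrative model only—all constants (power draws, bandwidths, the 10x server speedup) are made-up placeholders, not measured values from the authors' system:

```python
# Illustrative sketch of a runtime offloading cost-benefit analysis.
# All numeric constants are hypothetical placeholders.

def local_energy_j(cpu_seconds, cpu_power_w=1.2):
    """Energy (joules) to run the hot code on the device CPU."""
    return cpu_seconds * cpu_power_w

def offload_energy_j(payload_bytes, bandwidth_bps, radio_power_w,
                     idle_power_w, remote_seconds):
    """Energy to transfer the payload plus idle energy awaiting the result."""
    transfer_s = (payload_bytes * 8) / bandwidth_bps
    return transfer_s * radio_power_w + remote_seconds * idle_power_w

def should_offload(cpu_seconds, payload_bytes, network):
    """Decide at call time, using the network actually available right now."""
    profiles = {                # (bandwidth bps, radio W) -- placeholder values
        "wifi": (20e6, 0.7),
        "3g":   (1e6,  1.5),
    }
    bandwidth, radio_w = profiles[network]
    remote_seconds = cpu_seconds / 10       # assume a 10x faster server
    return offload_energy_j(payload_bytes, bandwidth, radio_w,
                            0.3, remote_seconds) < local_energy_j(cpu_seconds)

# The same computation and payload can be worth offloading over Wi-Fi
# but not over a slower, more power-hungry 3G link.
print(should_offload(5.0, 2_000_000, "wifi"))
print(should_offload(5.0, 2_000_000, "3g"))
```

The point of the sketch is that the decision flips with the network profile alone, which is why a static, compile-time offloading choice cannot be optimal.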

CURRENT LIMITATIONS AND OPPORTUNITIES

In a typical mobile application, network communication consumes up to half the total energy budget. Our studies of distributed programming abstractions and energy efficiency reveal that network types and conditions can significantly affect the energy required for remote data transfer (Y.-W. Kwon and E. Tilevich, "The Impact of Distributed Programming Abstractions on Application Energy Consumption," Information and Software Technology, vol. 55, no. 9, 2013, pp. 1602-1613). By considering these variables, mobile computing researchers can create adaptive cloud offloading mechanisms that maximize the energy saved by mobile applications running on a variety of devices over dissimilar networks.


However, most current offloading techniques are preset and so incapable of adapting when an application switches between mobile networks with different characteristics. Those that can adapt their offloading mechanisms at runtime have runtime systems custom-tailored for individual research projects and so lack clear programming interfaces; thus, they can't be configured, reused, or ported easily.

To maximize energy savings in heterogeneous and volatile mobile networks, cloud offloading schemes must support adaptive offloading strategies. Doing so without overburdening programmers requires runtime mechanisms that properly encapsulate the offloading logic, including energy profiling and network monitoring. Such runtime functionality should be exposed via intuitive programming abstractions that allow programmers to express the adaptive logic of offloading decisions, including all required parameters.

REDUCING ENERGY CONSUMPTION USING CLOUD OFFLOADING

Here, we present our cloud offloading research over the past several years, focused primarily on advancing various program analysis and transformation techniques and, more recently, on dynamic runtime support.

Cloud offloading in the presence of network disconnections

Our first major work in cloud offloading was a program transformation approach that preserves a mobile application's ability to execute locally, as it was originally written (Y.-W. Kwon and E. Tilevich, "Energy-Efficient and Fault-Tolerant Distributed Mobile Execution," Proc. 32nd Int'l Conf. Distributed Computing Systems [ICDCS 12], IEEE CS, 2012, pp. 586-595). This ability is crucial in the event that the mobile network connecting the device to the cloud becomes disconnected. Rather than split an execution into local and remote partitions, our approach efficiently replicates program state to switch between local and remote executions, which can both reduce client energy consumption and tolerate network outages. When the network is operational, energy-intensive functionality is offloaded to the cloud server by transferring only the program state necessary for remote execution. Efficient checkpointing synchronizes the program state between the local and remote executions. In cases when the network becomes disconnected during offloading, the remote execution redirects to the mobile device. Thus, while network outages inhibit optimal energy use, they do not make the application unusable.

Because transferring large data volumes across the network can consume more energy than is saved by offloading, our second contribution was a program analysis technique that reduces the amount of transferred state. This technique leverages static program analysis to determine whether the program state changed during an offloading operation. The analysis uses programmer annotations as input to specify which methods are expected to be energy intensive. Based on this input, the analysis identifies the state to be transferred for each offloading scenario. Finally, the application's binary bytecode is rewritten to run the resulting applications accordingly, either on the mobile device or in the cloud.

Adaptive multitarget cloud offloading

Building on our prior research, we next focused on increasing energy savings by enhancing the runtime system to make offloading decisions in accordance with the mobile device's hardware setup and the network's characteristics, and to make these decisions dynamically at runtime and adjust them continuously in response to fluctuations in the mobile execution environment (Y.-W. Kwon and E. Tilevich, "Reducing the Energy Consumption of Mobile Applications behind the Scenes," Proc. 29th IEEE Int'l Conf. Software Maintenance [ICSM 13], IEEE CS, 2013, pp. 170-179). The programmer annotates suspected energy hotspots, and, after validating programmer input, the mobile application transforms into a distributed application with local and remote parts determined at runtime, as required by the execution environment. An elaborate checkpointing mechanism makes postponing distribution until runtime possible.

The runtime system controls the adaptivity. It manages network connections between client and server, estimates the mobile device's energy consumption, identifies the offloading strategy to follow, synchronizes the transferred checkpointed state, and provides resilience in the case of network disconnections. In essence, the runtime system continuously monitors the energy that each offloading candidate program component consumes, at the level of its constituent subcomponents. Those subcomponents whose cloud-based execution would save the most energy in the current execution environment and network are offloaded.
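The offload-with-local-fallback behavior that such adaptive schemes require can be sketched in a few lines of Python; the function and exception names below are hypothetical, for illustration only, and do not come from our actual system:

```python
# Illustrative sketch of fault-tolerant offloading (not the authors' code):
# replicate the computation so it can run locally or remotely, and fall
# back to local execution if the network disconnects mid-offload.

class NetworkError(Exception):
    pass

def run_with_offloading(task, state, remote_execute, local_execute):
    """Try remote execution; on disconnection, redirect to the device."""
    try:
        # Transfer only the state the remote execution needs.
        return remote_execute(task, state)
    except NetworkError:
        # Network outage: the application stays usable, just less efficient.
        return local_execute(task, state)

# Toy usage: the "remote" side fails, so execution redirects locally.
def remote(task, state):
    raise NetworkError("link dropped during offload")

def local(task, state):
    return task(state)

result = run_with_offloading(lambda s: s * 2, 21, remote, local)
print(result)  # 42
```

The key design point is that both execution paths share one checkpointed state, so switching between them costs a state synchronization rather than a restart.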


ADAPTIVE RUNTIME FOR EFFICIENT AND SYSTEMATIC CLOUD OFFLOADING

Systems design that provides powerful runtime adaptivity to support advanced cloud offloading mechanisms will require innovation in a number of areas. Because distributed programming mechanisms define the patterns distributed applications use to transmit data across networks, they significantly impact the energy mobile applications consume. However, existing distributed runtime systems can't adapt execution patterns flexibly to reduce energy consumption when mobile applications switch between dissimilar networks. Maximizing energy savings requires tailoring runtime execution to account for specific applications' business logic; to support this kind of customization, runtime systems need programming abstractions that can express energy optimization parameters and triggering strategies.

Programming abstractions

Runtime programming abstractions can minimize energy consumption via advanced, dynamic optimizations. Enabling these requires innovation in expressiveness and runtime support. Such abstractions let programmers write several cloud offloading plans that can be used under different runtime conditions (accounting for network latency, bandwidth, and volatility, for example). At execution, the runtime system automatically selects an appropriate offloading plan and switches dynamically among plans in response to changes in the execution environment. Runtime systems should be realized as ready-made, reusable software components to reduce the maintenance burden on mobile application developers.

Runtime monitoring and energy profiling

Runtime adaptation requires careful monitoring of runtime changes in the execution environment without imposing undue performance overhead. In our current work, we are exploring how runtime systems can unobtrusively gather execution parameters, including network and hardware parameters such as delay, network connection type, and CPU frequency. Using these parameters makes it possible to build a powerful adaptive system that can predict how much energy a given cloud offloading operation will consume. The runtime system can correlate previously obtained device- and execution-specific values, using network delay, connectivity type, CPU frequency, and voltage values to compute future energy consumption.

Custom-tailored runtime adaptation

In addition to automated adaptations, mobile application programmers can implement custom optimization strategies that exploit the application's business logic. Integrating these custom-tailored adaptations with the runtime system can further improve cloud offloading efficiency. A typical custom-tailored optimization can comprise well-known energy optimizations for network communication, including data compression, reducing the offloaded data's size, and selecting the easiest-to-reach remote server. These energy optimizations should be selectable on per-application and per-user bases.

Runtime adaptation provides a highly promising avenue for improving cloud offloading effectiveness. Just as important, this improved efficiency does not need to come at the cost of systematic software development practices. We hope these insights can help influence the research agenda for optimizing future mobile application energy efficiency in the cloud.

Acknowledgment

The National Science Foundation supported this research through grant CCF-1116565.

Eli Tilevich is an associate professor in Virginia Tech's Department of Computer Science. Contact him at [email protected].

Young-Woo Kwon is a PhD candidate in Virginia Tech's Department of Computer Science. Contact him at [email protected].

Editor: Kirk Cameron, Department of Computer Science, Virginia Tech; [email protected]

Selected CS articles and columns are available for free at

http://ComputingNow.computer.org.


OUT OF BAND

Privacy Informatics: A Primer on Defensive Tactics for a Society under Siege

Hal Berghel, University of Nevada, Las Vegas

What the world needs now is a new field of study: privacy informatics. This emerging field will fill the information-awareness gap between a trusting citizenry and the emerging digital dystopia.

How did we get here? With the current state of privacy abuse and our wholesale sellout to the surveillance society, it's clear that our elected representatives have become the lapdogs for business interests that derive benefit from eavesdropping economics. We enjoy the collateral benefits of the technologies used in security cameras for home protection, GPS for navigation, RFID cards for everything from access control to vehicle telematics to cardiac pacemakers, OnStar for emergencies, the Web for ecommerce, and so on. Along the way, it never occurred to most of us that the technology that enables a call for help in an automobile accident could also be used to record personal meetings in a car, or that those recordings could be used to convict people of crimes. From a technology perspective, you can't have one without the other; it's a packaged deal.

We’re also an increasingly dis-tracted society. With television, radio, advertising, Web surfing, social networking, and texting, we have a potpourri of digital distrac-tions. As media critic Neil Postman put it, we’re “amusing ourselves to death,” and that has led us to a Huxleyan (versus Orwellian) dysto-pia, where talking heads and visual images distract us from issues of genuine importance. In Modernity and the Holocaust, sociologist Zygmunt Bauman said that “ratio-nal people will quietly, meekly go into gas chambers if only you allow them to believe that they’re bath-rooms.” In the same way, we digital denizens march willingly to a future where the price for privacy is digital death. We did this to ourselves, by behaving rationally and passively, because, as Bauman further noted, “the rationality of the ruled is always the weapon of the rulers.” Today, the “rulers” are the political and financial neoliberal elite who have

significant vested interests in their own information monopolies.

We bear this responsibility whenever we provide personal information to genealogy and social networking sites, credit card companies, e-commerce businesses, healthcare professionals, schools, religious organizations, and so on. Of course, a minimal amount of information is required to sustain social interaction and commerce, but as a society, we maxed out on that generations ago. Everyone who uses the cloud, social networks, and smartphones without the use of anonymization and encryption is part of the problem.

So, where do we go from here? I offer here the poor person's substitute. It won't fix your privacy problems, but it's better than nothing.

BETTER-THAN-NOTHING PRIVACY DEFENSES

Ten years ago, I launched Better-than-Nothing-Security-Practices (http://www.berghel.net/btnsp/btnsp.php) in a desperate attempt to satisfy some basic security needs for clients and audiences. Nowhere near the depth, breadth, and quality of SANS training—which is the single most important resource for security training in the world (http://sans.org)—my approach to security had the distinct advantage of being free. I'm comfortable speculating that it lived up to its name.

Ten years ago, I devoted attention to publicizing security threats from hackers and criminals. These days, I'm devoting my efforts to educating people about privacy threats from government and industry. Although the players' wardrobes and beverage choices have changed, the abuse of the electorate has remained constant.

This is the first installment of my Better-than-Nothing Privacy Practices series. In this episode, I'll focus on two common tools: browsers and cell phones—specifically, Mozilla Firefox v24.0 and Android v2.3.3, two tools that I rely on heavily. My goal here is to raise the bar a little to discourage those digital demons who might wish to violate our privacy.

FIREFOX AND THE ADD-ON WARS

From the version numbers, it’s clear that I don’t update my Android, but I do keep Firefox current. This is due to the very different levels of trust I have in the companies and products involved.

The initial configuration of Firefox is critical in preparing for add-on privacy enhancements that I’ll return to in a few paragraphs. Just to set the stage, I’ll offer a few observations for those who aren’t in the habit of tweaking their browser security and privacy settings. For more thorough analyses, readers are directed to the wealth of online resources.

We begin with the Mozilla privacy panel (Menu bar>Options>Privacy tab). I recommend, for your consideration, checking "Tell sites that I do not want to be tracked" (the so-called Do Not Track option). Under "Use custom settings for history," I recommend checking both "Always use private browsing mode" and "Accept cookies from sites." However, for "Accept third-party cookies," the preferred option is "Never." Set "When using the location bar, suggest" to "Nothing." Using the custom settings option in this way automatically clears the history when Firefox closes (a byproduct of private browsing mode). It's always wise to click on "Show Cookies" now and again to inspect for cookie crumbs.
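For readers who prefer to script these choices rather than click through the dialogs, roughly equivalent preferences can be placed in a Firefox user.js file. The pref names below reflect Firefox's about:config entries as I understand them for this era; treat this as an illustrative sketch and verify the names against your own version (the location bar setting is omitted here because its underlying pref has changed across releases).

```js
// Illustrative user.js sketch (verify pref names against your Firefox version).
user_pref("privacy.donottrackheader.enabled", true);   // Do Not Track header
user_pref("browser.privatebrowsing.autostart", true);  // always use private browsing
user_pref("network.cookie.cookieBehavior", 1);         // accept cookies, block third-party
```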

Actually, I occasionally force my browser to manually discard cookies and other cache items like scripts, such as <ga.js>, to minimize the risk of packet injection. A delightful little add-on, the Empty Cache Button, works well for this. It's amazing how hard it is to keep the net snoops' mitts off our cache.

Now let’s see what we’ve ac-complished. First, the Do Not Track option falls under the category of “gratuitous act of defiance for opti-mists” (see this column, September 2013). DNT is an accepted Inter-net Engineering Task Force (IETF) HTTP header field, but (and here’s the rub), it’s not a “core” header field and hence not required for IETF compliance. Simply put, servers and network appliances can ignore without penalty and remain in con-formance with standards.

So, why do it? Because some web servers are respectful of the user's privacy, and so in those cases, DNT works. Many of us continue to call for rigid enforcement of DNT, and someday this might pay off. (I'm not holding my breath, since corporate America is the de facto regulator of the Internet.) In any case, after a reboot of Firefox, these settings should produce no recorded history and minimal cookie crumbs.

The History and Location Bar recommendations limit the amount of "browser guano" left on the computer. Firefox originally included history and location bar options to limit the recovery of this information from public or shared computers. However, in the age of warrantless wiretaps, extrajudicial detention, and penetration of journalists' shield laws, it's wise to consider how we might prevent this information from getting through in the first place. Any minimal inconvenience is more than offset by the increased protection against computer activity mining by government, law enforcement, private surveillance merchants, and corporate information harvesters. Needless to say, this will undercut the use of the Awesome Bar—Firefox's self-adapting browser bar.

The rationale for this sort of privacy configuration is that it minimizes exposure of user behavior to prying eyes, legal and not. I emphasize "minimize" here because browser developers are less than transparent these days about where they hide this stuff. Earlier versions of Firefox, for example, allowed the user to specify the location of the browser cache. That made monitoring and cleanup straightforward. Now, Firefox buries sundry metadata and browser guano (history, bookmarks, cookies, configuration settings, passwords, autocomplete histories, and so on) in a special profile folder (on Windows computers, %APPDATA%\Mozilla\Firefox\Profiles), much of it in encoded or encrypted form, so there's no easy way to find and inspect it. Mozilla claims that having this profile isolated from the application is a feature, because the data integrity isn't dependent on the stability of Firefox. Baloney. This is just another developer's way of restricting user behavior for its own convenience and self-serving purposes. Making the eradication of browser guano a hassle for the user serves the interests of myopic software developers who believe that their vision of computer use trumps the privacy interests of the customer. For those interested in more detail, I've discussed the recovery of such residue under the rubric of BRAP (BRowser and APplications) Forensics elsewhere (www.berghel.net/col-edit/digital_village/jun-08/dv_6-08.pdf).

A decade has elapsed, and the world's digital concerns have shifted from mostly security to a balance between security and privacy. Edward Snowden's greatest contribution could be that he, more than any other individual, added fuel to the global debate on privacy.

Now, on to the security settings (Security tab). Check "Warn me when sites try to install add-ons," "Block reported attack sites," and "Block reported web forgeries." I recommend avoiding both password options. As a general guideline, browsers aren't optimal tools for password management. There are other configuration settings that enhance privacy and security to be sure, but these few changes are enough to move us forward to the real breakthrough in personal privacy and security for browsers: the add-ons. Whereas the 1990s were characterized by the browser wars (for more, see www.berghel.net/col-edit/digital_village/oct-98/dv_10-98.pdf), we've now entered the era of add-on skirmishes.

The NoScript add-on

To start, I'm going to recommend two add-ons unreservedly—both offered through the Electronic Frontier Foundation (EFF), which has long been a leading voice behind the protection of civil liberties in cyberspace. The first is NoScript (see Figure 1), which is a dream come true for privacy zealots—it's a customizable, real-time, interactive script blocker that's also free. What a deal.

NoScript is designed to work seamlessly with scripting environments that operate with the more popular secure-sandbox-model virtual machines like Java, JavaScript, and Flash. I've had no problem with Adobe Reader, Acrobat, Silverlight, or Windows Media Player either. NoScript uses several innovations to get around the problem of a blocked script rendering the webpage unreadable. NoScript uses script surrogates that function essentially like the script embedded in the page, preserving usability and breaking nothing critical, but still disabling any nastiness. Script surrogates deal with page scripts of many ilks, including distracting "pop-unders" (aka "on-click" popups). NoScript is also effective at blocking cross-site scripting attacks and isolating IFRAMEs to prevent clickjacking. It also offers HTTPS forcing as an option (which I don't use, as explained below). But even if NoScript breaks something, the configuration (NoScript Icon>Options) allows it to be tailored to suit the user's need. It can also be configured on the fly simply by selecting which of the scripts you want to run.

One of the best features of NoScript is its compatibility with other add-ons (and there are many good ones available). The latest version is 2.6.8.5 (see http://noscript.net). I should mention that silent running with NoScript comes with a penalty—users will have to go one extra step to temporarily enable scripts (individually or as a group) on script-hog sites, but that will give you the opportunity to reflect on whether that site is worthy of your interest after all. If you're interested in protecting your online privacy, this irritation is minor compared to the risk avoided.

Figure 1. Smooth running with NoScript can be completely transparent. In this case, all scripts have been disabled. The inset is the DoNotTrackMe report that runs concurrently.

The HTTPS Everywhere add-on

HTTPS Everywhere is a sister add-on from the EFF. This add-on does only one thing: it forces a Transport Layer Security/Secure Sockets Layer (TLS/SSL) HTTPS connection if one is available on the server. It does this by means of the HTTP Strict Transport Security protocol (HSTS; RFC 6797). Of course, HTTPS is always preferable to HTTP where privacy is concerned, but until HSTS, the user had no way of knowing whether it was available. In addition, as Moxie Marlinspike showed in 2009, basic HTTPS could be vulnerable to an "SSL-stripping" man-in-the-middle attack, in which a hacker converts a secure HTTPS connection into an insecure HTTP connection without the user's awareness. HTTPS Everywhere deals with both needs: defaulting to HTTPS when possible and preventing the Marlinspike attack. In addition, the EFF builds in the SSL Observatory, which monitors the use of HTTPS certificates on the Internet and provides warnings of possible attacks. Although NoScript also has HTTPS forcing built in, its list-oriented configuration is more primitive and not as convenient as HTTPS Everywhere's. HTTPS Everywhere v3.4.2.xpi is the current version for Firefox (https://www.eff.org/https-everywhere).
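The HSTS response header itself is easy to inspect. The following sketch (an illustration of the header format, not part of either add-on) parses the two directives a server's Strict-Transport-Security header may carry per RFC 6797:

```python
# Minimal parser for an HSTS response header (RFC 6797), for illustration.
# A conforming header looks like: "max-age=31536000; includeSubDomains"

def parse_hsts(header: str):
    """Return (max_age_seconds, include_subdomains) from an HSTS header."""
    max_age = None
    include_sub = False
    for directive in header.split(";"):
        directive = directive.strip()
        name, _, value = directive.partition("=")
        if name.lower() == "max-age":
            max_age = int(value.strip().strip('"'))
        elif name.lower() == "includesubdomains":
            include_sub = True
    return max_age, include_sub

print(parse_hsts("max-age=31536000; includeSubDomains"))  # (31536000, True)
```

A max-age of zero tells the browser to forget the host's HSTS policy, which is why the directive is the one required element of the header.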

There are far too many good browser add-ons to describe in one sitting, so stay tuned to this channel in Computer for updates. Remember that caveat emptor also applies with add-ons, as the code typically isn't validated by trusted third parties. For that reason, I recommend staying with add-ons written by organizations you trust.

RAISING THE BAR FOR TELCO SNOOPS

Let’s face it, the telcos have been illegally sharing our infor-mation with the government (and who knows who else) at least as far back as 1919, when Herbert Yardley formed his Black Chamber spy group after World War I. The big telcos of the day, Western Union Telegraph Company, Postal Telegraph, and

All-American Cable Company, were then, as the big telcos are now, eavesdropping on US citizens on behalf of the government. The big difference is that these days the telcos spy with impunity.

Modern telcos aren’t doing any-thing particularly unusual for their industry, and, likewise, the US gov-ernment’s eavesdropping on its citizens is also nothing new (Project Shamrock, Project Minaret, COIN-TELPRO). There have always been rooms like 641A somewhere. The recent twists are the Narus and

Verint fiber-optic intercept suites and the optical fiber they work with. The stress testing of the Bill of Rights remains the same.

The same applies to Wi-Fi—especially in metropolitan area networks. An alternative online magazine, The Stranger, recently exposed the Seattle Police Department's use of the infamous "white boxes" to intercept and store IP addresses, device types, applications running on the device, and location history data (www.thestranger.com/seattle/you-are-a-rogue-device/Content?oid=18143845). The "white box" project was funded by the US government (Department of Homeland Security), so it's unlikely that it's unique among metropolitan areas. The SPD apparently denied activating the white boxes until David Ham of Seattle's KIRO-7 News team asked why the wireless access point names were identifiable by smartphones (http://rt.com/usa/seattle-mesh-network-disabled-676). Another public deception over surveillance. Imagine that.

So, for you Seattle residents (and all you other white boxers out there), I have a few modest suggestions as well as some caveats. First, my remarks only apply to Android 2.3.3 on Casio 771/Verizon smartphones (the reader will have to extrapolate from there); second, the only effective countermeasure to undesired business and government eavesdropping and surveillance on cell phones is to "jailbreak" them to gain root privileges, and from root, block all of the intrusions from the carrier, Google, and the application developers. I should note that the law in this area is magnificent in its disorder and continuously in flux.

To illustrate, the firmware in your smartphone is covered by the Digital Millennium Copyright Act (DMCA), which is interpreted every few years by the Librarian of Congress. In his most recent 2012 ruling, he opined that jailbreaking your smartphone will remain legal until 2015, but that unlocking a smartphone is illegal if it's done after 31 December 2012 (http://arstechnica.com/tech-policy/2010/07/apple-loses-big-in-drm-ruling-jailbreaks-are-fair-use). Note that under his penultimate opinion, the opposite was the case. Without rooting your phone, the telco can do/undo anything that you undo/do if it chooses, and if you do root your phone, a telco may develop an attitude and threaten to discontinue warranty service. So, at this writing, I'll take a swerve around jailbreaking and unlocking issues, leaving them to you and your attorney.

The security and privacy threats presented by smartphones and cell phones are real and should be taken seriously (www.zdnet.com/millions-of-android-users-vulnerable-to-security-threats-say-feds-7000019845). With these few caveats in mind, let’s get into some privacy tactics.

Hopefully, we’ll all soon come to appreciate that the price for personal privacy is eternal vigilance!


I’ll organize these suggestions by menu item. First, the telcos and their federal three-letter-agency partners have no business knowing where you are without your permission. So, let’s shut off GPS services until we need them:

Settings>Location & Security

VZW location services (uncheck)
Standalone GPS services (uncheck)
Google location services (uncheck)

Bear in mind that GPS 911 tracking won't be available with these services disabled, so if you're in a fix, you won't be able to tell 911 operators to "come find you"—you'll either have to tell them where you are or turn the GPS services back on.

Moving on:

Settings>Privacy

Back up my data to Google servers (uncheck)

Backing up data to Google may be a really bad idea (see http://blogs.computerworld.com/android/22806/google-knows-nearly-every-wi-fi-password-world).

We continue:

Settings>Wireless & Networks>Mobile Networks

Data enabled (uncheck)
Global data roaming access—deny data roaming access (or "Allow access only for this trip")
Wi-Fi (uncheck/off until needed)
Bluetooth (uncheck/off until needed)

Settings>Accounts & Synch

Background data (uncheck/off unless needed)
Backup assistant (uncheck/off unless needed)

Disable synch for all accounts

Settings>Applications

Allow installation of non-market apps/unknown sources (uncheck/off)

Settings>Security

Encrypt both device memory and SD card with different, long complex passwords that are different from the boot password

And when installing an application, carefully read the entire list of permissions it requires. If the app seeks permissions for the camera, microphone, and so on, and you don't think that's reasonable, don't install it. Remember, the telcos operate under the caveat emptor rubric (and immunity!).

So there you have it. You've turned your smartphone into a paperweight. But it's a Constitutionally friendly, libertarian-and-privacy-pleasing paperweight that can make phone calls. That's not bad for a paperweight. And, of course, you can always turn these features back on and undo everything.

This primer barely scratches the surface, but it's … well, you know. Let me know what you think. I would be remiss if I failed to again emphasize the obvious: the use of smartphones and the Web is, in and of itself, an invitation to privacy abuse. The widely available smartphone that's built around an encryption-based privacy model is the BlackBerry—which is precisely why it's unpopular in privacy-averse nations. The same applies to the use of social networking services like Facebook, Google+, Twitter, and LinkedIn, to name but a few, not to mention storing data on a cloud service. As Nicholas Weaver puts it, the government has weaponized the Internet (www.wired.com/opinion/2013/11/this-is-how-the-internet-backbone-has-been-turned-into-a-weapon), and unfortunately, there's a host of private cybermercenaries like ManTech, the Gamma Group, and Stratfor also in the mix (see www.washingtonpost.com/blogs/the-switch/wp/2013/11/04/yes-there-actually-is-a-huge-difference-between-government-and-corporate-surveillance/?tid=up_next).

But then the same may be said of the use of lower-tech credit and debit cards. Bankers, credit card vendors, and law enforcement agencies are all in agreement on one point: cash is the enemy of Big Brother.

Limiting use of such technology platforms and services isn't a product of technophobia or neo-Ludditism, but rather a defensive reaction to the rise of this modern "digital heel" that is used to control and manipulate the populace. These technologies are the grist for Orwellian and Huxleyan mills.

Those who persist in using privacy-revealing technologies and operating browsers in unsafe modes will probably not derive much benefit from the suggestions given here. But for the rest, this is a start. Those of us in the computing disciplines have been aware of security/privacy versus usability tradeoffs throughout our professional lives; some are more aware than others. What is new to this millennium is that governments have taken the leadership position in privacy and security abuses, from Stuxnet to tapping Angela Merkel's cell phones.

Hal Berghel, Out of Band column editor, is a professor of computer science at the University of Nevada, Las Vegas, where he is the director of the Identity Theft and Financial Fraud Research and Operations Center (http://itffroc.org). Contact him at [email protected].


0018-9162/14/$31.00 © 2014 IEEE Published by the IEEE Computer Society JANUARY 2014 83

SECURITY

Japan's Changing Cybersecurity Landscape
Nir Kshetri, University of North Carolina at Greensboro

Japan’s cybersecurity efforts have been lacking compared to other advanced economies, but the country is now taking more aggressive steps to address this deficiency.

As late as 2012, Japan hadn't officially acknowledged cyberattacks as a national security threat (http://tinyurl.com/k4rgawn). A British official involved in cybersecurity recently went so far as to assert that Japan has "zero capability" and lacks "situational awareness" (http://tinyurl.com/kmhghy7). For instance, in 2011, hackers stole online credentials of Lower House Diet members and their secretaries, giving the perpetrators access to emails and documents possessed by the 480 lawmakers and other personnel (http://tinyurl.com/lr8rg6r). Reportedly, only 45 percent of lawmakers changed their passwords following the attacks.

Due primarily to this and other high-profile cyberattacks as well as internal and external pressures, Japan has revised its cybersecurity strategies. Policymakers realize that the country's data privacy–related regulations have acted as a barrier to cloud computing and big data adoption. There have also been discussions about changing the constitution to increase Japan's cyberdefense as well as traditional military defense capabilities.

These recent developments are likely to have far-reaching effects on the economic, political, and military institutions of the world's third-largest economy. To effectively protect Japan's digital assets and IT infrastructure, IT professionals and business executives need an informed understanding of key elements of these changes. An analysis of the country's cybersecurity dynamics highlights the major challenges Japan faces in strengthening cybersecurity and the differences between its approach and those of other major world economies.

GROWING CYBERSECURITY AWARENESS

Of Japan’s 11 political parties, only 3 included statements about cybersecurity in their manifestoes for the Upper House Diet election in 2010 (http://tinyurl.com/k4rgawn). However, the 2011 cyberattacks against the Diet as well as defense

contractor Mitsubishi Heavy Indus-tries were eye-opening events to both policymakers and business executives. According to Mitsubishi, the perpetrators gained access to 83 computers and servers at 11 locations including its Tokyo headquarters, many factories, and an R&D center (http://tinyurl.com/n5sorx9). Other defense contractors such as IHI and Kawasaki Heavy In-dustries reported similar intrusions.

The cyberattacks also aroused concerns about Japan's cyberdefense capabilities among its trading partners and military and political allies. For instance, US authorities fear that private or state-sponsored hackers could obtain secret data about American warships, military aircraft, and missiles built under license by Japanese companies.

In response to these concerns, the Japanese government is creating an extensive network of regulatory bodies and enforcement agencies that significantly expand the country's cybersecurity infrastructure beyond the Cabinet Secretariat's National Information Security Center, which was established in 2005 following a series of cyberattacks on the websites of numerous government ministries and agencies in 2000 (http://tinyurl.com/n8fh7eg).

A key aspect of this effort is the ongoing attempt by the Liberal Democratic Party (LDP), which took power in 2012, to alter Article 9 of the Japanese Constitution to renounce the self-imposed ban on collective self-defense and thereby sanction counterattacks against foreign enemies. The LDP government argues that the proposed change would not only enhance its physical security against potential antagonists such as China and North Korea, but would also improve cyberspace security, which is critical to the country's economic prosperity.

In addition to cyberwarfare, espionage, and the theft of trade secrets, Japanese authorities are concerned about organized crime syndicates, which in recent years have shifted to cybercrime as the worldwide financial crisis and enhanced policing efforts have sharply curtailed revenues from traditional criminal activities (http://tinyurl.com/bwgw2ke).

JAPAN’S CYBERSECURITY EFFORTS

Japan’s cybersecurity efforts fall into four broad categories: anti-cybercrime initiatives led by the National Police Agency (NPA); indus-try protection policies spearheaded by the Ministry of Economy, Trade, and Industry (METI) and the Ministry of Internal Affairs and Communica-tions (MIAC); and national security measures coordinated by the Min-istry of Defense (MoD). Academic institutions and the private sector are also working together to promote cybersecurity.

Anti-cybercrime initiatives

In 2004, the NPA installed a Cybercrime Division as well as a High-Tech Crime Technology Division in each prefectural Info-Communications Department. In March 2013, it announced the launch of a nationwide cybercrime task force consisting of 140 staff members. The NPA also plays a key role in public education about the importance of cybersecurity.

Industry protection policies

In November 2012, as part of its IT Integration Forum initiative established earlier that year, METI created the Personal Data Working Group. The group's report, released in May 2013, recommended using information-providing intermediary organizations to help "build a new relationship of trust between businesses and consumers for utilizing personal data." The working group also said that companies, instead of requiring consumers to disclose designated personal information to use any of their services, should provide different levels of services based on the type of data consumers want to disclose (http://tinyurl.com/mlp94t6).

Also in November 2012, MIAC convened the Research Society for Use and Circulation of Personal Data. The group's official report, released in June 2013, advocated transparency, user participation, and proper means of data collection and management of user information, among other things (http://tinyurl.com/k3dyhex).

National security measures

The Japanese government recognizes that ensuring cyberspace stability is critical to the mission readiness of the country's Self-Defense Forces (SDF). In April 2013, the MoD announced that by March 2014 it would set up a new Cyber Defense Unit (CDU) within the SDF with an operating budget of US$142 million (http://tinyurl.com/l5b5wbf). The CDU's key goals include protecting the SDF's information systems and contributing to the government's response to cyberthreats by advancing relevant knowledge and skills. A 100-person staff will be responsible for collecting data about malware and viruses and identifying ways to respond to cyberthreats.

Japan also actively cooperates with other nations to promote cyberdefense capabilities. For instance, it's working with the US to revise the two allies' Cold War–era treaty guidelines to enhance information security (http://tinyurl.com/o7qw6j6) and plans to conduct joint cybersecurity drills with Russia (http://tinyurl.com/kwtpesw).

Academia and the private sector

A report on long-term cybersecurity strategy released in mid-2013 by a government panel of experts emphasized the need to upgrade specialized education at universities and other institutions to strengthen human and technological capabilities in cybersecurity and to boost the number of IT security engineers in the country (http://tinyurl.com/pcm2cye).

In addition, Japanese universities are collaborating with companies to increase cybersecurity awareness. In May 2013, for example, Keio University hosted the Microsoft-sponsored Asia Forum on Cyber Security and Privacy (http://tinyurl.com/mzsqhc9).

The Japanese government recognizes that cybersecurity is critical to the country's stability and economic future.

Japanese businesses are also partnering with other corporations with cybersecurity expertise to provide cybersecurity training and solutions. In September 2012, for example, Sojitz inked a deal with Boeing "to help defend Japan's information technology infrastructure from sophisticated, evolving, and persistent cyberattacks" (http://tinyurl.com/l7hflf4).

INSTITUTIONAL AND SOCIOLOGICAL CHALLENGES

Despite these efforts and greater awareness of the importance of cybersecurity, Japan faces many obstacles to implementing robust measures.

One major challenge is the government's longstanding reluctance to invest in cybersecurity (http://tinyurl.com/l5b5wbf). For example, from 2006 to 2010, while many countries were steadily increasing R&D spending, Japan cut such spending by nearly 50 percent. In comparison, South Korea's cybersecurity investment is significantly larger; in July 2013, it announced that it was doubling its cybersecurity budget to 10 trillion won (US$8.76 billion) and hoped to train 5,000 IT security experts by 2017 (http://tinyurl.com/jwfmvdw).

Japan also suffers from insufficient and underqualified human resources. According to the Information-technology Promotion Agency, Japan faces a shortage of at least 80,000 IT experts and, among the country's 265,000 experts, 160,000 need additional education and training (http://tinyurl.com/l2d4bkd). The problem is exemplified by MoD's difficulty finding capable analysts for its planned 100-member CDU (http://tinyurl.com/kzfqzs7). One Japanese security specialist argues that the CDU's staff is likely to be recruited internally from among SDF personnel who lack sufficient computing skills as well as a "cyberwarrior mentality" and maintains that the CDU needs at least 2,000 to 3,000 dedicated cybersecurity specialists (http://tinyurl.com/l5b5wbf).

Sociological factors are equally important. A senior fellow at Tokyo's Center for International Public Policy Studies notes that, in common with other professionals, Japanese cybersecurity specialists seek lifetime employment. In highly mobile job markets such as in the US, however, workers frequently move among the public sector, private sector, and academia, which facilitates the institutional transfer of IT skills. Moreover, unlike US government agencies like the FBI and NSA, Japan is wary about hiring hackers (http://tinyurl.com/lnyy4tl).

COMPARISON WITH OTHER COUNTRIES

As Figure 1 shows, there are key similarities and differences between Japan's cybersecurity approach and those of the EU and US.

European Union

Like the EU, Japan has a detailed, comprehensive regulatory framework that applies across all sectors. In addition, it shares the EU's concerns about privacy and data protection and thus similarly limits cloud computing and big data services (http://tinyurl.com/kevjxa6). In this regard, recommendations by METI's Personal Data Working Group, MIAC's Research Society for Use and Circulation of Personal Data, and other interested bodies encourage the government to ease such restrictions so that society might benefit from these technological advances.

There are also key differences between Japan and the EU with respect to cybersecurity. While the EU mandates that companies obtain users' consent to the collection, processing, and transfer of personal data (http://tinyurl.com/la8nonn), Japan only requires enterprises to state the purpose of using such data; user consent isn't necessary. Likewise, Japanese businesses have no general obligation to delete personal data after use. Finally, unlike in the EU, companies offering online services in Japan aren't required to report cyberattacks.

Figure 1. Key similarities and differences between Japan's cybersecurity approach and those of the EU and US.

Similarities with the European Union:
Comprehensive regulatory framework that applies across all sectors
Concerned about privacy and data protection; limits imposed on cloud computing and big data

Similarities with the United States:
To some extent relies on private-sector self-regulation to ensure privacy and protect data
Faces common cyberthreats from foreign countries

Differences from the European Union:
Collection, processing, and transfer of personal data by enterprises doesn't require user consent
Businesses have no general obligation to delete personal data after use
Companies providing online services don't have to report cyberattacks

Differences from the United States:
Doesn't have a consumer-protection agency equivalent to the US Federal Trade Commission
Low job mobility of cybersecurity specialists across the public sector, private sector, and academia

United States

Japan, like the US, relies to some extent on private-sector self-regulation to ensure privacy and protect data. The Personal Information Protection Law, enacted in 2005, obligates companies handling personal data of 5,000 or more individuals to take "necessary and proper measures for the prevention of leakage, loss, or damage, and for other security control of the personal data" (http://tinyurl.com/lp4zrz3). Failure to comply with the law is punishable by a fine up to ¥300,000 (about US$3,000) or imprisonment for up to six months.

Editor: Jeffrey Voas, National Institute of Standards and Technology; [email protected]

Japan also faces common cyberthreats with the US, which encourages the two countries to work more closely together. The 2011 cyberattacks against Japan, which coincided with the 80th anniversary of the Manchurian Incident, allegedly originated from China (http://tinyurl.com/438fogl), and there is strong evidence that hackers either working directly for the Chinese government or with their sponsorship are targeting government agencies and private companies in both Japan and the US to steal sensitive data (http://tinyurl.com/kxzk73w).

On the other hand, Japan doesn't have a consumer-protection agency equivalent to the US Federal Trade Commission. METI provides various data protection and privacy guidelines, but, while most businesses comply with these guidelines, they're not legally binding.

In recent years, Japan has initiated significant measures to boost cybersecurity. Compared to other advanced economies, however, its efforts still fall short of what's needed to effectively deal with an increasingly sophisticated array of cyberthreats. Nevertheless, government officials and the public have become more aware of the importance of cybersecurity, and Japan is now taking more aggressive steps to address this deficiency.

Nir Kshetri is a professor at the University of North Carolina at Greensboro and a research fellow at the Research Institute for Economics and Business Administration at Kobe University, Japan. Contact him at [email protected].



SCIENCE FICTION PROTOTYPING

Utopia Rising
Brian David Johnson, Intel

Utopian science fiction prototypes might not actually be about the world that we want to live in, but rather the people we want to be.

“I started this speaker series to flush out the geeks here at USC,” Henry Jenkins said into the microphone with a wicked grin. “I figured if I brought out Cory Doctorow and Brian David Johnson, we would have a great conversation about the uses and abuses of science fiction as well as some fantastic geek bait.”

And that’s how the Three Geeks event got started in the basement auditorium of the University of Southern California’s Annenberg School of Communication. Jenkins, who has been rightly called the Marshall McLuhan of the 21st century, invited activist and science fiction author Cory Doctorow and me for an evening conversation about science fiction, technology, and culture. The academic (Jenkins), the activist (Doctorow), and the futurist (me) explored the sometimes geeky details together on stage.

“I figured if the three tenors could go on tour,” Jenkins said, explaining the title of the speaker series, “then we could do a three geeks version.”

QUESTIONS FROM THE AUDIENCE

I’ve spent much of 2013 traveling to schools and universities to talk about science fiction prototypes.

Typically, I stand in front of a class or a gathering of students and talk about the process and how different people all over the world have used it to explore the human, cultural, ethical, and business implications of science and technology. (By the way, I also talk about this column in Computer, telling my listeners about the subjects I cover as well as the responses and letters I get from you, dear reader. Those responses are the most valuable and enjoyable part of writing this column, so please keep them coming!)

The end of each talk always includes a question-and-answer session, which is the liveliest part of the event and something I look forward to. Along with the general questions about the future of technology, there’s always one person who wants to talk about the technological singularity and when the robot apocalypse is going to happen. When will the robots/computers rise up and become our overlords? (My answer, in short, is that it’s not going to happen—I make robots, and it doesn’t work like that. But it’s always entertaining to imagine, “What if …?”)

Another subject that also comes up quite a bit is the idea of dystopias and utopias. Science fiction prototypes embrace the idea that the future is not an accident. It isn’t some fixed point on the horizon: the future is made every day by the actions of people. And because of that, if we’re going to build the future, we really need to have a vision for what we want that future to be. It would also help to have an idea of the various futures we want to avoid.

These two questions are at the very heart of the science fiction prototyping process. We don’t shy away from dark visions of the future (“Secret Science Fiction,” Computer, May 2013, pp. 105–107; www.computer.org/csdl/mags/co/2013/05/mco2013050105.html). Quite the contrary, we need to explore the dark potential of the science, technology, and businesses we’re developing so that we can chart what we do and don’t want them to become.

Science fiction in particular lets us explore these futures, with utopia and dystopia acting as the yin and yang of the fictional world. Utopias capture the grand expanse of our dreams, the absolute best thing that could happen, and dystopias explore the dark landscape of our nightmares, the absolute worst.

TO READERS

I’d love to hear from you! What role does imagination play in your research and development? Was science fiction your inspiration to become an engineer? Does science fiction drive you today?

Send your science fiction prototypes to [email protected].

In the history of science fiction, the volume of stories that falls into the dystopic column far outweighs the small pittance comprising the utopias. I often get that question: Why are science fiction visions of the future so dark and negative? From 2001: A Space Odyssey to Blade Runner to Minority Report, it seems that all science fiction creators are obsessed with the negative, and the public loves it! But why aren’t we just as fascinated by visions of a bright and happy future?

The short answer is that a bright and happy future is really boring—it’s bad storytelling. The main architecture of a good story goes like this: a real PERSON in a real PLACE faces a big PROBLEM. Fiction (films, stories, comics, games, and even art) thrives on conflict. That’s what makes a good story. When bad things happen, it’s interesting to see how people react.

If you had a main character who wakes up and has an awesome life—great job, perfect family, and everything goes her way all day—it’s deeply uninteresting from a plot point of view and makes people hate that character. Nobody likes to hang out with people whose lives are awesomely awesome, nothing ever goes wrong, and everything is perfect—perfect is boring. Those people come off as annoying and clueless about the real world.

Think of your favorite science fiction story—I bet it involves bad things happening to good or at least likeable people. Dystopias let us spend time with our demons to imagine a world that might be different than the one we fear.

MY DYSTOPIC 20TH CENTURY

Science fiction author Brian Aldiss made a name for himself by writing challenging and engaging science fiction in the 1960s. He became famous for his award-winning collections Hothouse (1962 Hugo Award) and The Saliva Tree (1966 Nebula Award), but today he’s usually remembered for penning the short story “Supertoys Last All Summer Long” (1969), which became the movie A.I. Artificial Intelligence. (Note: the “Supertoys” stories are worth reading from a science fiction prototyping point of view. They describe a rich and realistic world in which complex people interact with futuristic and flawed technology.)

What many people don’t know about Aldiss is that in 1973 he wrote a comprehensive and opinionated history of the science fiction genre called Billion Year Spree: The True History of Science Fiction. Over the years, he has continued to update and revise the volume with coauthor David Wingrove, retitling it Trillion Year Spree: The History of Science Fiction in 1986. The authors advance an interesting idea about the fuel that powered most of the 20th century’s dystopian fiction:

Western society is still liberalizing itself, tortuous though the process is (and threatened all the while). We used to hang people for stealing bread; now we pay unemployment benefits. We used to allow children to be used as slave labour; now we are extending the school-leaving age. We used to treat as criminal people who were merely sick. We may have many hang-ups, but socially we are more enlightened than we were at the beginning of the century.

This moral progress comes as a result of scientific developments—a positive thing science does, often forgotten in a time when science’s failures claim our attention. Human dignity does not go with an empty stomach, and it is science which feeds more mouths than ever before. The biological and biochemical springs of human action are still being examined; we can only say that they seem to undermine an authoritarian view of government, and equally to make moral judgments of the old kind irrelevant. The double helix of heredity may prove to be the next politico-religious symbol after the swastika. Because this more understanding or science-based attitude has to fight its way to general acceptance—and has a painfully long way to go—we can expect to find it worked out in novel form, filtered through various aspects by various minds.

All these [dystopian] novels, whatever else they are, treat the predicament of the individual in societies that represent varying degrees of repression…the authors are searching for a definition that will stand in the terrifying light of twentieth-century knowledge.

According to Aldiss and Wingrove, all the best-known dystopias of the 20th century were a way for authors and the rest of us to play out our anxieties as the world moved from the rural to the urban, from superstition to science, and headed into a future that envisioned mass mechanization, government rule, and a very different role for the individual in society.

From Jack London’s The Iron Heel (1908, usually cited as the first modern dystopia) up through Aldous Huxley’s Brave New World (1932), George Orwell’s Nineteen Eighty-Four (1949), Ray Bradbury’s Fahrenheit 451 (1953), and Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), this tradition continues today with Suzanne Collins’s wildly popular The Hunger Games trilogy (2008-2010). But what do the dystopias of the 21st century look like? Are we still wrestling with the same demons, or do we need a new dystopia for our new century? (Send me your thoughts. … I’m still thinking about it.)

A UTOPIA FOR THE 21ST CENTURY

That night on stage at USC, the topic of utopias kept coming up, like a strange intellectual virus that seemed to infect everyone’s questions and thoughts. It’s always hard to talk about utopias—I’m an optimist, so saying that out loud just feels weird. I genuinely believe that the future will be great because people build the future, and, really, why would we build an awful one? Yet talking about pure utopias is difficult because they’re kind of boring and there aren’t a lot of good examples of them.

“We need utopias just like we need dystopias,” I said in front of the USC audience. “But it’s hard to stomach a world where everything goes right. It makes utopias hard to write and read.” Doctorow made a face that indicated he didn’t agree, and that always makes for a good conversation.

“You don’t agree?” I paused.

“No,” he said, “I don’t agree with your definition of a utopia. A utopia isn’t a story where everything goes right and people’s lives are awesome. A real utopia is in the real world where things go wrong and bad things happen, but what makes it a utopia is how people react to the world around them.”

“What do you mean?” I pushed as Jenkins listened.

“A utopia is created in how people react to the real world. It’s a place where when the world is going to hell and everyone expects people to be at their worst, people turn around and do their best,” Doctorow explained. “It’s a world where when a major hurricane hits an American city, people don’t start looting and killing each other, but actually go down the street and check on their neighbors. Dystopian fiction like Cormac McCarthy’s The Road tells us that we’re all just a massive power outage away from killing and eating our neighbors. I don’t agree that we’re all a half-step away from fulfilling our worst nightmares when the slightest adversity hits.”

That night, my entire view of utopias changed, broadened, and became much more meaningful. No longer were utopias a candy-colored landscape of happiness; now they were real tools to imagine how real people might show the better side of themselves. To watch the event, visit http://vimeo.com/81120151.

Utopian science fiction prototypes might not actually be about the world that we want to live in, but rather the people we want to be. We can envision our better selves living in the reality of the world that we’re going to inhabit through science fiction prototypes that look beyond the dystopic.

Brian David Johnson, Science Fiction Prototyping column editor, is Intel’s first futurist. Johnson is charged with developing an actionable vision for computing 10 to 15 years in the future. His latest book is Humanity in the Machine: What Comes After Greed? (York House, 2013; www.amazon.com/Humanity-Machine-Comes-After-Volume/dp/0985550864/ref=tmm_pap_title_0). Also check out Vintage Tomorrows: A Historian and a Futurist Journey through Steampunk and into the Future of Technology (Make, 2013; www.amazon.com/Vintage-Tomorrows-Historian-Steampunk-Technology/dp/1449337996). Contact him at [email protected] or follow him on Twitter @IntelFuturist.

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.

Take the CS Library wherever you go!

IEEE Computer Society magazines and Transactions are now available to subscribers in the portable ePub format.

Just download the articles from the IEEE Computer Society Digital Library, and you can read them on any device that supports ePub. For more information, including a list of compatible devices, visit

www.computer.org/epub


COMPUTER SOCIETY CONNECTION

90 COMPUTER Published by the IEEE Computer Society 0018-9162/14/$31.00 © 2014 IEEE

EIC APPLICANTS SOUGHT FOR CS PERIODICALS

The IEEE Computer Society seeks editor-in-chief applicants to serve two-year terms starting 1 January 2015. Prospective candidates should provide PDF files of their complete curriculum vitae, a brief plan for the publication’s future, and a letter of support from their institution or employer by 1 March 2014.

For more information on the search process and to submit application materials, please contact the appropriate person below:

Computer: Carrie Clark Walsh, [email protected]

: Brian Kirk, [email protected]

: Bonnie Wylie, [email protected]

: Brian Brannon, [email protected]

: Kimberly Sperka, [email protected]

Candidates for any Computer Society EIC position should understand industry, academic, and government aspects of the specific publication’s field and be able to attract respected experts to the editorial board. In addition, candidates must demonstrate the managerial skills necessary to process manuscripts through the editorial cycle in a timely fashion. Major responsibilities include

• actively soliciting high-quality manuscripts from potential authors and, with support from staff, helping these authors get their manuscripts published;
• identifying and appointing editorial board members, with the concurrence of the Pubs Board;
• selecting competent manuscript reviewers and managing timely reviews of manuscripts;
• directing editorial board members to seek special-issue proposals and manuscripts in specific areas;
• providing a clear, broad focus through promotion of personal vision and guidance where appropriate; and
• resolving conflicts or problems as necessary.

Applicants should possess recognized expertise in the computer science and engineering community, and must have clear employer support.

SPECIFIC CALL FOR EIC

The IEEE Computer Society seeks nominations and applications for the volunteer position of EIC for IEEE Transactions on Affective Computing (TAC). TAC is intended to be a cross-disciplinary and international archive journal aimed at disseminating results of research on the design of systems that can recognize, interpret, and simulate human emotions and related affective phenomena. The journal publishes original research on the principles and theories explaining why and how affective factors condition interaction between humans and technology, on how affective sensing and simulation techniques can inform our understanding of human affective processes, and on the design, implementation, and evaluation of systems that carefully consider affect among the factors that influence their usability.

The EIC will be responsible for the day-to-day volunteer leadership of TAC, including coordinating and overseeing the peer review process; recommending candidates for the editorial board; chairing the editorial board; developing editorial plans for the publication; and working in general with volunteers and staff to ensure and maintain the timely publication of an exceptionally high-quality transactions.

Applications and nominations should include a complete curriculum vitae, a brief plan for the future of TAC, and a letter of support from the candidate’s institution or employer. Additional information about the IEEE Computer Society, the co-sponsoring societies, and TAC is available at www.computer.org and www.computer.org/tac.

The search committee prefers electronic submissions in Microsoft Word or PDF. Please direct all questions and submit completed applications to Kimberly Sperka, [email protected].

The due date for nominations and applications is 7 February 2014.



CALL AND CALENDAR

CALLS FOR ARTICLES FOR COMPUTER

Computer seeks submissions for an October 2014 special issue on e-government interoperability.

Electronic government refers to the use of information and communication technology to provide and improve government services, transactions, and interactions with citizens, businesses, and other arms of government.

Interoperability is essential to broad success in e-government. For such critical governmental concerns as justice and healthcare, interoperability and its governance are of particular importance. Challenges emerging in this area focus on interoperability in cloud computing, open government, and smart city initiatives.

The guest editors solicit papers from all areas of e-government interoperability including policies, infrastructure, and software- and application-level aspects. They especially invite contributions reporting on real deployments, novel applications, new architectures, management, or semantics.

Final submissions are due 1 February 2014. For author guidelines and information on how to submit a manuscript electronically, visit www.computer.org/portal/web/computingnow/cocfp10.

CALLS FOR ARTICLES FOR OTHER IEEE CS PUBLICATIONS

IEEE Software plans a September/October 2014 special issue on new directions in programming languages.

Programming languages have evolved to satisfy a diverse range of requirements for many different groups of programmers.

The guest editors are seeking contributions describing how modern programming languages cope with the challenges posed by varied requirements from different programmer groups, combined with the rapid evolution of hardware platforms.

All submissions must take the form of case studies of language use, design, and/or implementation.

Articles are due 1 February 2014. Visit www.computer.org/portal/web/computingnow/swcfp5 to view the complete call for papers.

IT Professional plans a September/October 2014 special issue on the consumerization of IT.

In today’s enterprise, the consumerization of IT is being pushed by a younger, more mobile workforce consisting of active users of new technologies and applications.

Employees now expect to be able to use their personal devices—and applications they are familiar with—at work.

This special issue will review trends, risk factors, and approaches that businesses must consider to capitalize on demographic and technological shifts in the information environment and avoid the pitfalls brought about by the blurring line between consumer and business technologies.

Articles are due 1 February 2014. Visit www.computer.org/portal/web/computingnow/itcfp5 to view the complete call for papers.

Computing in Science & Engineering plans a November/December 2014 special issue on Scientific Software Days.

Scientific software has many distinct features when compared to more typical software projects. For example, the communities are smaller and the software is made to run on high-performance distributed clusters.

Scientific Software Days is a small meeting of consumers and producers of scientific software that addresses these broad issues as well as specific projects in the field.

This special issue highlights many topics presented at SSD over the years. Participants are invited to

Submission Instructions

The Call and Calendar section lists conferences, symposia, and workshops that the IEEE Computer Society sponsors or cooperates in presenting.

Visit www.computer.org/conferences for instructions on how to submit conference or call listings as well as a more complete listing of upcoming computing-related conferences.



PERCOM 2014

The 2014 IEEE International Conference on Pervasive Computing and Communications (PerCom 2014) is sponsored by the IEEE Computer Society, the Scientific Association for Infocommunications Hungary (HTE), and the University of Texas at Arlington’s Department of Computer Science and Engineering.

Pervasive computing and communications have evolved into active R&D areas due to advances in a broad spectrum of technologies and topics including wireless networking, mobile and distributed computing, sensor systems, RFID technology, and the mobile phone.

PerCom 2014 will provide a leading-edge, scholarly forum for researchers, engineers, and students to share their state-of-the-art work on pervasive computing and communications.

The conference will take place 24-28 March 2014 in Budapest, Hungary. Visit www.percom.org for complete conference information.


IEEE Computer Graphics and Applications plans a November/December 2014 special issue on The Next Big Thing.

The guest editors seek papers on topics that authors believe are the start of the next new wave in computer graphics. They are looking for coverage of risky ideas, and work that is significantly different from current mainstream topics but that has some evidence of potential.

They welcome papers—from commercial and academic sources, from researchers and practitioners—on topics such as insights or techniques from other disciplines that accelerate or improve a computer graphics method, and new applications of graphics to other disciplines.

Articles are due 7 March 2014. Visit www.computer.org/portal/web/computingnow/cgacfp6 to view the complete call for papers.

IEEE Software plans a November/December 2014 special issue on virtual teams.

Projects with team members located around the globe have become increasingly common in software, R&D, and business processes across all industry sectors.

Improving the effectiveness and efficiency of virtual teams is therefore an increasingly business-critical issue.

This special issue aims at collecting empirically validated solutions that help increase the efficiency and effectiveness of virtual teams or that increase the quality of their outcomes.

Articles are due 1 April 2014. Visit www.computer.org/portal/web/computingnow/swcfp6 to view the complete call for papers.

IEEE Software plans a January/February 2015 special issue on software engineering for Internet computing: Internetware and beyond.

The Internet, once just a network of networks, has become not only the platform of choice for delivering services to increasingly mobile users but also the connective tissue among people, information, and things.

The open, dynamic, and evolving environment of Internet computing continues to demand new software technologies.

This special issue seeks articles that explore state-of-the-art software-engineering research and industry practices for Internet computing.

Articles are due 1 June 2014. Visit www.computer.org/portal/web/computingnow/swcfp1 to view the complete call for papers.

IEEE MultiMedia plans a July-September 2015 special issue on social multimedia and storytelling: using social media to capture, mine, and re-create experiences, events, and places.

The pervasive use of media-capturing devices and the wide adoption of social-networking platforms have led to the proliferation of online content captured at various places and events. Such content holds great potential for deriving richer representations of the depicted places and events.

present their work on the practice of building scientific codes, the interaction with scientific communities around code, and the place of code in science.

Articles are due 4 March 2014. Visit www.computer.org/portal/web/computingnow/cscfp6 to view the complete call for papers.

IEEE Micro plans a September/October 2014 special issue on high-speed datacenter interconnects.

The demands on datacenter networks continue to expand. Today’s massive datacenters interconnect tens to hundreds of thousands of machines while attempting to deliver high bisection bandwidths and low latency under power and cost constraints.

Meeting these challenges is becoming increasingly difficult as the port speed of individual servers’ network interface controllers moves past 10 Gbits per second.

Designing practical high-speed interconnects to deliver on the promise of next-generation datacenters and cloud-based computing calls for new architectural advances.

This special issue seeks original papers on a range of topics related to such interconnects.

Articles are due 7 March 2014. Visit www.computer.org/portal/web/computingnow/micfp5 to view the complete call for papers.



14-17 April: RTAS 2014, IEEE Real-Time and Embedded Technology and Applications Symp., Berlin, Germany; www.cpsweek.org

14-17 April: ICCPS 2014, ACM/IEEE Int’l Conf. Cyber-Physical Systems, Berlin, Germany; www.cpsweek.org

MAY 2014

13-16 May: AINA-2014, 28th IEEE Int’l Conf. Advanced Information Networking and Applications, Victoria, BC, Canada; www.aina-conference.org/2014

14-16 May: SeGAH 2014, 3rd Int’l Conf. Serious Games and Applications for Health, Rio de Janeiro; www.ipca.pt/segah2014

26-29 May: CCGrid 2014, 14th IEEE/ACM Int’l Symp. Cluster, Cloud and Grid Computing, Chicago; http://datasys.cs.iit.edu/events/CCGrid2014


However, the uncontrolled nature of user-contributed content and the social media life cycle’s complexity raise significant research challenges related to the effective collection, mining, and indexing of social multimedia and to their combination, creative reuse, and presentation.

This special issue’s objective is to revisit how social multimedia is transforming the way multimedia content is captured, shared, and made available to others.

Articles are due 20 July 2014. Visit www.computer.org/portal/web/computingnow/mmcfp3 to view the complete call for papers.

FEBRUARY 2014

15-19 February: HPCA 2014, 20th IEEE Int’l Symp. High Performance Computer Architecture, Orlando, Florida; http://hpca20.ece.ufl.edu

MARCH 2014

4-7 March: PacificVis 2014, 7th IEEE Pacific Visualization Symp., Yokohama, Japan; www.pvis.org

24-26 March: WACV 2014, IEEE Winter Conf. Applications of Computer Vision, Steamboat Springs, Colorado; www.wacv14.org

24-28 March: PerCom 2014, IEEE Int’l Conf. Pervasive Computing and Communications, Budapest, Hungary; www.percom.org

APRIL 2014

6-8 April: SSIAI 2014, IEEE Southwest Symp. Image Analysis and Interpretation, San Diego; http://ssiai.org

7-11 April: WICSA 2014, 11th IEEE Conf. Software Architecture, Sydney, Australia; www.wicsa.net

Events in 2014

FEBRUARY 2014
15-19: HPCA 2014

MARCH 2014
4-7: PacificVis 2014
24-26: WACV 2014
24-28: PerCom 2014

APRIL 2014
6-8: SSIAI 2014
7-11: WICSA 2014
14-17: RTAS 2014
14-17: ICCPS 2014

MAY 2014
13-16: AINA-2014
14-16: SeGAH 2014
26-29: CCGrid 2014

Advertising Personnel

Marian Anderson, Sr. Advertising Coordinator
Email: [email protected]
Phone: +1 714 816 2139
Fax: +1 714 821 4010

Sandy Brown, Sr. Business Development Mgr.
Email: [email protected]
Phone: +1 714 816 2144
Fax: +1 714 821 4010

Advertising Sales Representatives (display)

Central, Northwest, Far East: Eric Kincaid
Email: [email protected]
Phone: +1 214 673 3742
Fax: +1 888 886 8599

Northeast, Midwest, Europe, Middle East: Ann & David Schissler
Email: [email protected], [email protected]
Phone: +1 508 394 4026
Fax: +1 508 394 1707

Southwest, California: Mike Hughes
Email: [email protected]
Phone: +1 805 529 6790

Southeast: Heather Buonadies
Email: [email protected]
Phone: +1 973 304 4123
Fax: +1 973 585 7071

Advertising Sales Representative (Classified Line & Jobs Board)

Heather Buonadies
Email: [email protected]
Phone: +1 973 304 4123
Fax: +1 973 585 7071



CAREER OPPORTUNITIES

DARTMOUTH COLLEGE, ASSISTANT PROFESSOR OF COMPUTER SCIENCE: MACHINE LEARNING. The Dartmouth College Department of Computer Science invites applications for a tenure-track faculty position at the level of assistant professor. We seek candidates who will be excellent researchers and teachers in the area of machine learning, although outstanding candidates in any area will be considered. We particularly seek candidates who will help lead, initiate, and participate in collaborative research projects both within Computer Science and involving other Dartmouth researchers, including those in other Arts & Sciences departments, Dartmouth’s Geisel School of Medicine, Thayer School of Engineering, and Tuck School of Business. The department is home to 17 tenured and tenure-track faculty members and two research faculty members. Research areas of the department encompass the areas of systems, security, vision, digital arts, algorithms, theory, robotics, and computational biology. The Computer Science department is in the School of Arts & Sciences, and it has strong Ph.D. and M.S. programs and outstanding undergraduate majors. The department is affiliated with Dartmouth’s M.D.-Ph.D. program and has strong collaborations with Dartmouth’s other schools. Dartmouth College, a member of the Ivy League, is located in Hanover, New Hampshire (on the Vermont border). Dartmouth has a beautiful, historic campus, located in a scenic area on the Connecticut River. Recreational opportunities abound in all four seasons. With an even distribution of male and female students and over one third of the undergraduate student population members of minority groups, Dartmouth is committed to diversity and encourages applications from women and minorities. To create an atmosphere supportive of research, Dartmouth offers new faculty members grants for research-related expenses, a quarter of sabbatical leave for each three academic years in residence, and flexible scheduling of teaching responsibilities. Applicants are invited to submit application materials via Interfolio at http://apply.interfolio.com/23502. Upload a CV, research statement, and teaching statement, and request at least four references to upload letters of recommendation, at least one of which should comment on teaching. Email [email protected] with any questions. Application review will begin November 1, 2013, and continue until the position is filled.

BOISE STATE UNIVERSITY. The Department of Computer Science at Boise State University invites applications for three tenure/tenure-track open-rank positions. Applicants should have a commitment to excellence in teaching and a desire to make significant contributions in research by collaborating with faculty and local industry to develop and sustain funded research programs. Senior applicants should have an established track record of research, teaching, and external funding. Preferences will be given to applicants working in the areas of Databases with an emphasis on Big Data, or Human-Computer Interaction with a particular emphasis on usability of user interfaces, or Visualization. An earned Ph.D. in Computer Science or a closely related field is required at the time of appointment. Boise State has made a significant investment in the growth of the Computer Science department, which is a critical part of the vibrant software and high-tech industry in the

JOIN THE INNOVATION.

Qatar Computing Research Institute seeks talented scientists and software engineers to join our team and conduct world-class applied research focused on tackling large-scale computing challenges.

We offer unique opportunities for a strong career spanning academic and applied research in the areas of Arabic language technologies including natural language processing, information retrieval and machine translation, distributed systems, data analytics, cyber security, social computing and computational science and engineering.

We also welcome applications for post-doctoral researcher positions.

As a national research institute and proud member of Qatar Foundation, our research program offers a collaborative, multidisciplinary team environment endowed with a comprehensive support infrastructure.

Successful candidates will be offered a highly competitive compensation package including an attractive tax-free salary and additional benefits such as furnished accommodation, excellent medical insurance, generous annual paid leave, and more.

Scientist applicants must hold (or will hold at the time of hiring) a PhD degree, and should have a compelling track record of accomplishments and publications, strong academic excellence, effective communication and collaboration skills.

Software engineer applicants must hold a degree in computer science, computer engineering or related field; MSc or PhD degree is a plus.

For full details about our vacancies and how to apply online please visit http://www.qcri.qa/join-us/

For queries, please email [email protected]

www.qcri.qa



Florida International University is a comprehensive university offering 340 majors in 188 degree programs in 23 colleges and schools, with innovative bachelor’s, master’s and doctoral programs across all disciplines including medicine, public health, law, journalism, hospitality, and architecture. FIU is Carnegie-designated as both a research university with high research activity and a community-engaged university. Located in the heart of the dynamic south Florida urban region, our multiple campuses serve over 50,000 students, placing FIU among the ten largest universities in the nation. Our annual research expenditures in excess of $100 million and our deep commitment to engagement have made FIU the go-to solutions center for issues ranging from local to global. FIU leads the nation in granting bachelor’s degrees, including in the STEM fields, to minority students and is first in awarding STEM master’s degrees to Hispanics. Our students, faculty, and staff reflect Miami’s diverse population, earning FIU the designation of Hispanic-Serving Institution. At FIU, we are proud to be ‘Worlds Ahead’! For more information about FIU, visit fiu.edu.

The School of Computing and Information Sciences at Florida International University seeks candidates for tenure-track and tenured faculty positions at all levels.

Open-Rank Tenure Track/Tenured Positions (Job ID# 506754)

We seek outstanding candidates in all areas of Computer Science; researchers in the areas of compilers and programming languages, computer architecture, databases, information retrieval and big data, natural language processing, and health informatics are particularly encouraged to apply. Candidates from minority groups are encouraged to apply. Preference will be given to candidates who will enhance or complement our existing research strengths.

Ideal candidates for junior positions should have a record of exceptional research in their early careers. Candidates for senior positions must have an active and proven record of excellence in funded research, publications, and professional service, as well as a demonstrated ability to develop and lead collaborative research projects. In addition to developing or expanding a high-quality research program, all successful applicants must be committed to excellence in teaching at both the graduate and undergraduate levels. An earned Ph.D. in Computer Science or related disciplines is required.

Florida International University (FIU) is the state university of Florida in Miami. It is ranked by the Carnegie Foundation as a comprehensive, doctoral research university with high research activity. The School of Computing and Information Sciences (SCIS) is a rapidly growing program of excellence at the University, with 36 faculty members and over 1,800 students, including 80 Ph.D. students. SCIS offers B.S., M.S., and Ph.D. degrees in Computer Science, an M.S. degree in Telecommunications and Networking, and B.S., B.A., and M.S. degrees in Information Technology. SCIS has received approximately $19.6M in the last four years in external research funding, has 14 research centers/clusters with first-class computing infrastructure and support, and enjoys broad and dynamic industry and international partnerships.

HOW TO APPLY:

Applications, including a letter of interest, contact information, curriculum vitae, academic transcript, and the names of at least three references, should be submitted directly to the FIU Careers Website at careers.fiu.edu; refer to Job ID# 506754. The application review process will begin on January 1st, 2014, and will continue until the position is filled. Further information can be obtained from the School website http://www.cis.fiu.edu, or by e-mail to [email protected].

FIU is a member of the State University System of Florida and is an Equal Opportunity, Equal Access Affirmative Action Employer.

Boise metropolitan area. New faculty lines, graduate student support, and a tutoring center have been added to the department. The department is committed to offering a high quality educational experience and to building its research capabilities. For more information, including details on how to apply, please visit us online at http://coen.boisestate.edu/cs/jobs. Boise State University is strongly committed to achieving excellence through cultural diversity. The University actively encourages applications and nominations of women, persons of color, and members of other underrepresented groups. EEO/AA Institution, Veterans preference may be applicable.

BMC SOFTWARE INC. has an opening for Associate Manager – IT in Houston, TX, to provide solution architecture for all MDM projects. Requires Bachelor’s + 7 yrs exp. in end-to-end enterprise project implementations. Apply online at www.bmc.com/careers and refer to Req# 17237.

Violin Memory Inc. in Santa Clara, CA seeks Software QA Engineers to debug software products through the use of systematic tests to develop, apply, and maintain quality standards for company products. Resume to: HR, 4555 Great America Parkway, Suite 150, Santa Clara, CA 95054. Indicate ref #6650.13. EOE.



STATE UNIVERSITY OF NEW YORK AT BINGHAMTON, DEPARTMENT OF COMPUTER SCIENCE. Applications are invited for four tenure-track Assistant Professor positions beginning Fall 2014 with specializations in: (a) cybersecurity (three positions) and (b) embedded systems programming/design with an emphasis on energy optimization (one position). The Department has established graduate and undergraduate programs, including 60 full-time PhD students. Junior faculty have a significantly reduced teaching load for at least the first three years. Please indicate your teaching and research areas of interest in a single sentence on your cover letter. Further details and application information are available at: http://www.binghamton.edu/cs. Applications will be reviewed until positions are filled. First consideration will be given to applications received by February 17, 2014. We are an EE/AA employer.

DARTMOUTH COLLEGE, ASSISTANT PROFESSOR OF COMPUTER SCIENCE: THEORY/ALGORITHMS. The Dartmouth College Department of Computer Science invites applications for a tenure-track faculty position at the level of assistant professor. We seek candidates who will be excellent researchers and teachers in the area of theoretical computer science, including algorithms, although outstanding candidates in any area will be considered. We particularly seek candidates who will help lead, initiate, and participate in collaborative research projects both within Computer Science and involving other Dartmouth researchers, including those in other Arts & Sciences departments, Dartmouth’s Geisel School of Medicine, Thayer School of Engineering, and Tuck School of Business. The department is home to 17 tenured and tenure-track faculty members and two research faculty members. Research areas of the department encompass the areas of systems, security, vision, digital arts, algorithms, theory, robotics, and computational biology. The Computer Science department is in the School of Arts & Sciences, and it has strong Ph.D. and M.S. programs and outstanding undergraduate majors. The department is affiliated with Dartmouth’s M.D.-Ph.D. program and has strong collaborations with Dartmouth’s other schools. Dartmouth College, a member of the Ivy League, is located in Hanover, New Hampshire (on the Vermont border). Dartmouth has a beautiful, historic campus, located in a scenic area on the Connecticut River. Recreational opportunities abound in all four seasons. With an even distribution of male and female students and over one third of the undergraduate student

CLASSIFIED LINE AD SUBMISSION DETAILS: Rates are $400.00 per column inch ($600 minimum). Eight lines per column inch and average five typeset words per line. Send copy at least one month prior to publication date to: Marian Anderson, Classified Advertising, Computer Magazine, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; (714) 821-8380; fax (714) 821-4010. Email: [email protected].

FACULTY POSITION

Electrical Engineering: Neuroscience and Data Science

NYU SHANGHAI

NYU Shanghai is currently inviting applications for one position, open at all levels (assistant, associate, and full professor), from outstanding candidates who have demonstrated ability in both research and teaching. We are interested in candidates with a Ph.D. in Electrical Engineering (or a related field) whose research interests lie in either of the following two areas: (1) neuroscience, including brain-machine interfaces and computational neuroscience; or (2) data science, including machine learning and deep learning. When discussing their teaching experience, candidates should identify courses they could teach both within and outside their specialty.

Candidates must have completed a Ph.D. or equivalent by the time of appointment. The search will remain open until the position is filled, but review of applications will begin November 15, 2013. The appointment could begin as soon as September 1, 2014, pending administrative and budgetary approval, or could be delayed until September 1, 2015.

NYU Shanghai is the first Sino-US higher education joint venture to grant a degree that is accredited in the US as well as in China. A research university with liberal arts and sciences at its core, it resides in one of the world’s great cities, which is also a vibrant intellectual community (http://shanghai.nyu.edu/). NYU Shanghai will recruit scholars who are committed to our global vision of transformative teaching and innovative research.

New York University has established itself as a Global Network University, with three degree-granting campuses - New York, Shanghai, and Abu Dhabi - complemented by 12 additional academic centers across five continents. Faculty and students circulate within the network in pursuit of common research interests and cross-cultural, interdisciplinary endeavors, both local and global.

The terms of employment at NYU Shanghai are comparable to those at U.S. institutions. Faculty may also spend time at NYU New York and other sites of the global network, engaging in both research and teaching opportunities.

Applicants should submit curriculum vitae, a statement of research and teaching interests, electronic copies of up to five recent relevant publications, and the names and addresses of three or more individuals willing to provide letters of reference by January 31, 2014. Please visit our website at http://shanghai.nyu.edu/about/open-positions-faculty for instructions and other information on how to apply. If you have any questions, please e-mail [email protected].

NYU Shanghai is an Equal Opportunity/Affirmative Action Employer.

JANUARY 2014 97

population members of minority groups, Dartmouth is committed to diversity and encourages applications from women and minorities. To create an atmosphere supportive of research, Dartmouth offers new faculty members grants for research-related expenses, a quarter of sabbatical leave for each three academic years in residence, and flexible scheduling of teaching responsibilities. Applicants are invited to submit application materials via Interfolio at http://apply.interfolio.com/23503. Upload a CV, research statement, and teaching statement, and request at least four references to upload letters of recommendation, at least one of which should comment on teaching. Email [email protected] with any questions. Application review will begin November 1, 2013, and continue until the position is filled.

TECHNICAL SALES ASSOCIATE sought by Screentek, Inc.; Houston, TX 77051. Responsible for managing co’s tech’l sales operations. Reqmts: Bach’s in Bus. Admin. or Bach’s in Comp Systems + 36 mos work exp as Strategic Outsourcing Mgr. Email CV to [email protected].

TRINITY COLLEGE DUBLIN
The University of Dublin

jobs.tcd.ie

Professor of Computer Systems
Professor of Intelligent Systems

The University of Dublin, Trinity College, School of Computer Science and Statistics invites applications for two full-time permanent Professor positions, in the Discipline of Computer Systems and in the Discipline of Intelligent Systems. These positions offer an exciting opportunity to provide innovative academic leadership in research and teaching, and to play a pivotal role in strengthening the academic activity of the School.

The positions are tenable from 1st September, 2014.

Professor of Computer Systems
The successful candidate will be an internationally recognised scholar with an established track record in research, teaching and supervision related to the activities within the Discipline. The Discipline is divided into five research groups: the Distributed Systems Group (DSG) and Future Cities Research Programme; the Networks and Telecommunications Research Group (NTRG); CTVR – The Telecommunications Research Centre; the Computer Architecture and Grid Research Group (CAG); and the Conformant Communications Group (CCG). The successful candidate will have the experience to lead and contribute to Computer Systems research that advances Trinity’s strategic research themes, in particular Smart and Sustainable Cities and Telecommunications. The successful candidate is expected to have a strong history of collaboration with industry and a proven ability to obtain strategic research funding.

Professor of Intelligent Systems
The successful candidate will be an internationally recognised scholar in at least two of the following three research areas: (i) knowledge and data engineering, (ii) content analytics, and (iii) user interaction, related to the Centre for Global Intelligent Content (CNGL). The successful candidate must also have experience in bringing together research across these and related areas. The successful candidate will provide a leading interdisciplinary role across the School and College, including TCD research themes (in particular Intelligent Content and Communications). An internationally recognised research profile, with a demonstrated ability to raise research funding and a proven capacity to collaborate with industry in domains such as (but not limited to) business, cultural entertainment, education, health, telecommunications and utilities, is essential. An excellent track record in teaching and supervision is required.

Preferred closing date for receipt of completed applications is noon on Friday, 31st January, 2014.

Please note: Applications will only be accepted by e-recruitment and further information can be obtained at: https://jobs.tcd.ie

Caradigm USA, LLC, a health-care information technology and services company, has various levels of multiple openings in Bellevue, WA; Andover, MA; Chevy Chase, MD; and Draper, UT for:

Software Engineers
Design and develop software technologies, features, and prototypes.

UX Designers
Develop and define user experience (UX) requirements for company's suite of products.

Program Managers
Drive release management and develop platform for medical/health components.

Support Engineers
Provide technical support to customers, partners, and internal employees.

Please mail resumes and reference job title and location to Caradigm USA, LLC, Attn: B. Thomas, 500 108th Avenue NE, Suite 300, Bellevue, WA 98004. EOE.

98 COMPUTER

CAREER OPPORTUNITIES

DARTMOUTH COLLEGE, ASSISTANT PROFESSOR OF COMPUTER SCIENCE: COMPUTER GRAPHICS/DIGITAL ARTS. The Dartmouth College Department of Computer Science invites applications for a tenure-track faculty position at the level of assistant professor. We seek candidates who will be excellent researchers and teachers in the areas of computer graphics and/or digital arts, although outstanding candidates in any area will be considered. We particularly seek candidates who will be integral members of the Digital Arts program and help lead, initiate, and participate in collaborative research projects both within Computer Science and involving other Dartmouth researchers, including those in other Arts & Sciences departments, Dartmouth’s Geisel School of Medicine, and Thayer School of Engineering. The department is home to 17 tenured and tenure-track faculty members and two research faculty members. Research areas of the department encompass systems, security, vision, digital arts, algorithms, theory, robotics, and computational biology. The Computer Science department is in the School of Arts & Sciences, and it has strong Ph.D. and M.S. programs and outstanding undergraduate majors. Digital Arts at Dartmouth is an interdisciplinary program housed in the Computer Science department, working with several other departments, including Studio Art, Theater, and Film and Media Studies. The department is affiliated with Dartmouth’s M.D.-Ph.D. program and has strong collaborations with Dartmouth’s other schools. Dartmouth College, a member of the Ivy League, is located in Hanover, New Hampshire (on the Vermont border). Dartmouth has a beautiful, historic campus, located in a scenic area on the Connecticut River. Recreational opportunities abound in all four seasons. With an even distribution of male and female students and over one third of the undergraduate student population members of minority groups, Dartmouth is committed to diversity and encourages applications from women and minorities. To create an atmosphere supportive of research, Dartmouth offers new faculty members grants for research-related expenses, a quarter of sabbatical leave for each three academic years in residence, and flexible scheduling of teaching responsibilities. Applicants are invited to submit application materials via Interfolio at http://apply.interfolio.com/23489. Upload a CV, research statement, and teaching statement, and request at least four references to upload letters of recommendation, at least one of which should comment on teaching. Email [email protected] with any questions. Application review will begin November 1, 2013, and continue until the position is filled.

QATAR UNIVERSITY, ASSOCIATE/FULL RESEARCH PROFESSOR IN CYBER SECURITY AND BIOINFORMATICS. Qatar University invites applications for research faculty positions at the level of associate or full professor to begin in September 2014. Candidates in the following fields will be considered: cyber security and bioinformatics. Candidates will cultivate and lead large-scale research projects at the KINDI Lab for Computing Research in the areas of cloud computing security, privacy, and cancer informatics. Qatar University offers a competitive benefits package including a 3-year renewable contract, tax-free salary, free furnished accommodation, and more. Apply by posting your application before February 28, 2014 on the QU online recruitment system at http://careers.qu.edu.qa under “College of Engineering”.

SOFTWARE ENGINEER. Analyze, Design, Develop, Test & Implement large applications using Oracle ApEx, HTML, AJAX, CSS, JavaScript, Toad 8.5/10, PL/SQL, SQL*Plus, C, C++, C#.NET, Java. Must be willing to travel & reloc. Reqs MS in comp sci, eng or rel. Mail resumes to Pro SAAMYA, Inc, 28 Kennedy Blvd., Suite # 200, East Brunswick, NJ 08816.

Applied Materials, Inc. is accepting resumes for the following position in Boise, Idaho:

PROCESS ENGINEER (IDAMI)

Develops new or modified process formulations, defines process or handling equipment requirements and specifications, reviews process techniques and methods applied in the fabrication of integrated circuits. Position may be assigned to work at unanticipated work sites.

Please mail resumes with reference number to Applied Materials, Inc., 3225 Oakmead Village Drive, M/S 1217, Santa Clara, CA 95054. No phone calls please. Must be legally authorized to work in the U.S. without sponsorship. EOE.

www.appliedmaterials.com

www.cisco.com

Cisco Systems, Inc. is accepting resumes for the following positions:

Austin, TX
Manager, Software Development (Ref#: AUS9): Lead a team in the design and development of the company’s hardware or software products.

Irvine, CA
Technical Solutions Architect (Ref#: IRV9): Technical sales professional responsible for providing architectural expertise on high-value opportunities on accounts in the Global Enterprise Theater.

Overland Park, KS
Network Consulting Engineer (Ref#: OVE1): Responsible for the support and delivery of Advanced Services to the company’s major accounts.

Research Triangle Park, NC
Team Lead (Customer Advocacy) (Ref#: RTP201): Conduct bi-weekly technical reviews of the company's automated network discovery and inventory tool supported by the team.

San Jose/Milpitas/Santa Clara, CA
Corporate Development Technology Engineer (Ref#: SJ156): Integrate and orchestrate in designing and implementing next-generation SAS services and reusable components for these services.

Database Administrator (Ref#: SJ43): Provides database design and management functions for business and/or engineering computer databases.

Compliance Engineer (Ref#: SJ136): Work closely with the Product Management team on feature and functional requirements.

Please mail resumes with reference number to Cisco Systems, Inc., Attn: J51W, 170 W. Tasman Drive, Mail Stop: SJC 5/1/4, San Jose, CA 95134. No phone calls please. Must be legally authorized to work in the U.S. without sponsorship. EOE.

PROGRAMMER ANALYST. Design, develop, test & implement Web and Client/Server Technologies using C#.NET, ASP.NET, XML, XSLT, SOAP, XSD, SQL Server, BizTalk Server, Windows 2000/NT/XP/VISTA/7, Covast EDI Accelerator, BAM, HAT, BRE. Must be willing to travel & reloc. Requires MS in comp sci, eng or rel. Mail resumes to Strategic Resources International, 777 Washington Rd, Ste 2, Parlin, NJ 08859.

PRINC. SVC. CONSULT. (Islandia, NY & unanticip sites in US) Provide IT consult. svcs & leadership in dsgn & dvlpmnt of solutions. Lead tech team in dvlpng & delivrng overall solution dsgn. REQ: Bach deg or for equiv in Comp Sci, Math, Engg (any), Bus or rel field + 5 yrs prog exp in job &/or rel occup. Must have exp w/: collabrtng w/ clients to deliver cloud/service vrtlizatn sol; dsgning, custmzing & implmntng CA ITKO Service Virtualization prod capabilities to address client’s app deliv. & quality lifecycle challenges; Java enterprise frmwrk, middleware platfrms, incl messaging tech, MQ or ESB; testing tools & methods in phases of app lifecycle fr unit to perf. testing for SOA-based multi-tiered apps; prgmmng w/ Java & shell scripting; instlling, confgring, admnstrng, & fine-tuning middleware s’ware prod on Windows & UNIX OS; implmntng sol in a func. &/or tech role in a prof svc org; mentring jr tech members: assc. conslts & conslts. Work fr home benefit avail anywhere in US. Freq travel req. Pls send resume to: Althea Wilson, CA Technologies, One CA Plaza, Islandia, NY 11749. Pls refer to Requisition #27261.

SERVICES ARCHITECT (Islandia, NY & loc throughout US). Implmnt IT workload automation prods. Troubleshoot tech implmntn. Analyze client’s infrastructure & assess & execute appropriate config. Reqs: Bach’s deg or for equiv in CS, CIS, Math, Engg (any fld) or rel + 5 yrs of prog exp in job off’d &/or a rel occup. Must have exp w/: IT Consulting, CMDB, Cohesion, Service Desk, ITPAM. Freq travel req. Work fr home benefit avail anywhere in US. Send resume to: Althea Wilson, CA Technologies, One CA Plaza, Islandia, NY 11749. Refer to Requisition #27661.

SR SYS’S ARCHITECTS. First Reserve Corporation LLC (Greenwich, CT): Provide tech leadership in design, implementation & maint. of new techs along w/ integration into existing infrastructure that supports bus. ops. Req’s: BS or equiv in CS, Eng’g, or rel. field & 8 yrs exp in plan, design, implement, integrate & maintain complex network infrastructures & sys’s at enterprise level; perform proj mgmt w/ utilization of internal resources & 3rd party vendors. Exp must incl support &/or admin Win Server & Win client operating sys’s, SQL Server, MS Office, SCCM or MOM, & backup solutions, incl softw & infrastructure; integrate SAN technologies w/ MS Win; perform bus. continuity & disaster recovery planning, incl resiliency & redundancy of both data & infrastructure; create data backup/retention & archive policies; utilize TCP/IP, DNS, & load balancing technologies, wired & wireless networking, & data & network security; perform troubleshooting, incl resolve app integration issues in Win environ; utilize formal IT help desk & tracking sys’s. Must possess certs in MS technologies, incl MCSE & MCP. Employer will accept BS or equiv in any field & 2 yrs IT exp to fulfill ed req’t & 8 yrs exp in above listed skills. Submit resume to HR, First Reserve Corporation, LLC, 1 Lafayette Place, Greenwich, CT, 06830. Pls indicate job code PP112113IP.

PROGRAMMER ANALYST. Design, develop, test & implement application s/w using C, C++, Oracle, Oracle SQL Developer, OBIEE and Oracle BI Applications, Data Warehousing Concepts, HTML, Java, MS Office, Windows/Mac OS and Microsoft IIS. Must be willing to travel & reloc. Reqs MS in comp sci, eng or rel. Mail resumes to Strategic Resources International, 777 Washington Rd, Ste 2, Parlin, NJ 08859.

Help build the next generation of systems behind Facebook's products.

Facebook, Inc. currently has the following openings in Menlo Park, CA (various levels/types):

Software Engineer (SWE) Help build the next generation of systems behind Facebook's products, create web and/or mobile applications that reach over one billion people, and build high volume servers to support our content.

Operations Manager (442) Responsible for production site issues, site reliability, & incident management for outages when on call for all production-facing site services.

Site Reliability Engineering Manager (1047) Direct, develop, & supervise a team of engineers who work to analyze and maintain service stability by documenting policies & best practices in an around-the-clock daily operation.

Security Engineer (968) Conduct targeted research into criminal actors and groups who are impacting Facebook, its subsidiaries and related customers.

Mail resume to: Facebook, Inc. Attn: JAA-GTI, 1 Hacker Way, Menlo Park, CA 94025. Must reference job title and job# shown above, when applying.

SENIOR PROGRAMMER/ANALYST sought by Advent Global Solutions, Inc. Job Duties: Develop applications, manage daily operations of department, assign and review the work of systems analysts and programmers, develop computer information resources, review and approve all systems charts and programs prior to their implementation, evaluate the organization’s technology use and needs and recommend improvements, meet with department heads to solicit cooperation and resolve problems. Experience in designing and developing applications using JAVA, J2EE, EJB, Struts, Spring, Hibernate, Servlets, JSP, XML, HTML, Javascript, Ajax, SQL. Perform requirement gathering, GAP analysis and writing functional specifications. Expertise in J2EE design specification, documentation, development, configuration, testing, troubleshooting; or .NET programmers with hands-on programming experience in C#.Net, ASP.Net, VB.Net, Sharepoint, Biztalk, experience in analysis, design, system development, unit testing, system testing, documentation, implementation, client interaction, capturing user requirements, reviewing design documents. Should have in-depth knowledge in databases: Oracle, SQL Server, Sybase, DB2 and MS Access. Education reqmts: Bachelor’s Deg plus 60 months of experience. Prevailing wage will be paid - $170,914.00. Work Loc: 12777 Jones Rd, #445, Houston, TX 77070. Send resumes to: Reshma Soni, (BS-60) Advent Global Solutions, Inc., 12777 Jones Rd, #445, Houston, TX 77070.

PROGRAMMER ANALYST. Must have knowledge in Business Intelligence development and architecture using OBIEE & major ETL tools, Oracle OLAP, Siebel Analytics, OWB, Build Reports & Dashboards, Cognos, Informatica, Hyperion Essbase, Peoplesoft, JD Edwards, Open Source BI tools Pentaho and Jasper on Unix and Windows platforms. Must be willing to travel & reloc. Reqs MS in comp sci, eng or rel. Mail resumes to Nartal Systems, Inc., 3 Joanna Ct, Suite E, East Brunswick, NJ 08816.

SENIOR PROGRAMMER/ANALYST sought by Advent Global Solutions, Inc. Job Duties: Manage backup, security and user help systems, direct daily operations of department, assign and review the work of systems analysts and programmers, develop computer information resources. Review and approve all systems charts and programs prior to their implementation. Evaluate the organization’s technology use and needs and recommend improvements, meet with department heads to solicit cooperation and resolve problems. Experience in WM, MM, APO, EWM with in-depth knowledge of configuring modules, guiding ABAP programmers, gap analysis and writing functional specifications; or detailed SAP Netweaver technology, extensive configuration knowledge and development of Netweaver portal, knowledge management and collaboration. Expertise in SAP Portal design specification, documentation, development, configuration, testing, troubleshooting, administration and performance tuning; or ABAP programmers with hands-on programming experience in SAP R3 with multiple SAP functional modules SD, MM, CS, FICO, PP, PS, PM and EH&S, SAP-BW/BI, experience in analysis, design, system development, unit testing, system testing, documentation, implementation, client interaction, capturing user requirements, reviewing design documents. Education reqmts: Master’s Deg plus 6 months of experience. Prevailing wage will be paid - $116,896.00. Work Loc: 12777 Jones Rd, #445, Houston, TX 77070. Send resumes to: Reshma Soni, (MS-6) Advent Global Solutions, Inc., 12777 Jones Rd, #445, Houston, TX 77070.

Apple has the following job opportunities in Cupertino, CA:

SW Engineer - Enterprise Identity & Security [Req. #3JR0101]

Perform SW eng'g services for Apple's Identity Management system.

Battery Validation Engineer [Req# 3MU0219]

Review, test & validate the design & performance of Rechargeable Smart Battery Sys.

RF Engineer [Req. #3SZ1307]

Design & test Radio Frequency (RF) system used in wireless communications. Employer-reimbursed travel req'd.

Mail resumes to 1 Infinite Loop M/S: 104-1GM, Attn: LJ, Cupertino, CA 95014. Principals only. Must include Req# to be considered. EOE.

HOTWIRE, INC. currently has openings for the following positions in our San Francisco, CA location (various level types): *Software Engineers (728.965): Design, implement and debug software for computers including algorithms and data structures for internet travel company. Create and execute manual and automated test cases based on functional and software design specifications. *Web Developers (728.1007): Work closely with Business teams to translate business requirements into technical functions. Design and implement front-end site pages and email campaigns of various scopes and technical complexity. *Software Engineers (728.1289): Create and execute test plans and test cases based upon functional and software design specifications. Perform ad-hoc and regression testing of system components. *Product Managers (728.1017PM): Analyze user requirements, procedures and problems to automate or improve existing systems. Translate business requests into technical requirements for the development teams. Send your resumes to: Hotwire / Expedia Recruiting, 333 108th Avenue NE, Bellevue, WA 98004. Must reference position and Job ID# listed above.

VSQUARE INFOTECH INC., a New Jersey-based IT firm, is seeking multiple candidates for the following positions: *Sr. Business Analysts (NJ): Conduct organizational studies and evaluations, design systems and procedures, conduct work implications and measurement studies, and prepare operations and procedures manuals to assist management in operating more efficiently and effectively. Analyze data gathered and develop solutions or alternative methods of proceeding. Design, evaluate, recommend, and approve changes of forms and reports. Gather and organize information on problems or procedures. Using Excel/Access event functions coding, Auditing, Trend Analysis, Financial Scheduling & Reporting. Master degree in Business Administration (any), Accounting with one year of experience in accounting or related occupation is reqd. Bachelor degree with 5 years of experience equivalent to Masters is acceptable. *Sr. Programmer Analysts (NJ): Develop and write computer programs to store, locate, and retrieve specific documents, data, and information. Analyze user needs and software requirements to determine feasibility of design within time and cost constraints. Design, develop and implement the next generation IP platforms using tools and software with back-end databases to provide an integrated management system. Convert project specifications and statements of problems and procedures to detailed logical flow charts for coding into computer language. Prepare functional specifications and technical documents. Master degree in Computer Science, Science (any), Engineering (any) is reqd. Bachelor degree with 5 years of experience equivalent to Masters is acceptable. *Sr. SAS Developers (NJ): Develop programs to create listings, graphs and reports. Validate and scrub the data using SAS Base procedures. Gather and convert requirements into technical design. Code, test and debug programs to ensure adherence to the project schedule. Extraction of data and creation of analysis datasets. Create customized reports and case report tabulations. Prepare all documents as required by the systems development life cycle process. Using SAS/Base, SAS/Macros, PROC SQL, Sybase, Teradata, Data Mining. Master degree in Science (any) with six months of experience in any related occupation is reqd. SAS certification is required. *Sr. Business Systems Analysts (NJ): Analyze, plan and develop business programs; manage resources in accordance with

Apple has the following job opportunities in Cupertino, CA:

Sr Product Quality Manager [Req #GE30808] Manage a team focused on new product introduction, outgoing quality, cosmetic excellence, & design for quality initiatives.

IS&T Service Apps Manager [Req #3DD2009] Manage a team of Eng'g Project Managers in the design & delivery of bus transaction functionality in support of Apple’s Service Division. Will have direct reports. 5-10% international travel req'd.

SW QA Engineer [Req #3LI0419] Execute WiFi Testing on iOS devices.

Networking Filesystem Engineer [Req #3AA2222] Design & implement Filesystem technologies.

SW QA Engineer [Req #3LI1819] Test apps w/ OS X & Windows based platforms.

SW Dvlpmt Engineer [Req #3LK1620] Design & develop SW for Apple Maps.

Product Design Engineer [Req #3DM0113] Design, experiment & test boards, including AC test boards for VNA measurements & electronic test boards that perform a specific, digital or analog, measurement or task.

ASIC Design Engineer [Req #3DP1619] Create stimulus for testing complex logic Circuits.

Mail resumes to 1 Infinite Loop M/S: 104-1GM, Attn: LJ, Cupertino, CA 95014. Principals only. Must include Req# to be considered. EOE.

Paragon Solutions, Inc. has the following job opportunity available in Cranford, NJ:

SR. DEVELOPER (Job Code DEV-NJ)

Create and design documents and diagrams. Create and modify Java code. Create and modify JSP Screens based on the design. Create Hibernate mappings. Maintain Webservice Interfaces. Create unit test cases. Conduct unit test and development to Dev Test. Write simple SQL queries and Stored Procedures. May travel to various unanticipated locations throughout the U.S.

Mail resume to: Paragon Solutions, Inc., Attn: Staffing, 25 Commerce Drive, Suite 100, Cranford, NJ 07016. Must reference job code DEV-NJ.

project schedule; Gather and synthesize business requirements and translate the business requirement document; Design methodology and programs to ensure that the project deliverables meet industry best practices and standards; Review and modify software programs to fulfill desirable accuracy and reliability of programs; Coordinate and link computer systems within an organization to increase compatibility so information can be shared. Using JAD Sessions, Item Capture Validation and Testing, Mercury QC, GL Transactions, Data Mapping, Data Flow diagrams. Master degree in Science (any), Information Systems with six months of experience in any related occupation is reqd. Bachelor degree with 5 years of experience equivalent to Masters is acceptable. Multiple positions available in each category. Offer standard employment benefits. Apply with 2 copies of resume by mail to VSquare Infotech Inc., 485 US Highway 1 S, Building C, Suite # 105, Iselin, NJ 08830.

EXPEDIA, INC. currently has openings for the following opportunities in our Bellevue, WA office (various levels/types): *Software Engineers (728.SWE): Design, implement, and debug software for computers including algorithms and data structures. *Database Developers (728.1264): Create, build, and maintain datamarts and other data structures maintained by the Data Capture Solutions team. *Database Developers (728.DBD): Coordinate changes to computer databases, test and implement the database applying knowledge of database management systems. *Site Conversion Managers (728.863): Responsible for providing data-driven decision support and management information to operational teams, management groups and executives to enable the optimization of Expedia sites and drive organizational excellence. *Senior Program Managers (728.1125): Responsible for microcomputer software product design features and coordinating development of software among functional groups through product release. *Technology Leads (728.393): Serve in a lead capacity, leading technical engineering individual contributors by directing week-to-week activities, tasks, and/or projects. *Senior Accountants (728.1320): Provide support and assist with coordination of financial accounting and reporting activities for operations. *Senior Release Engineers (728.123): Manage global release engineering team. Align release engineering team to design and implement integration and continuous delivery concepts and procedures. Send your resume to: Expedia Recruiting, 333 108th Avenue NE, Bellevue, WA 98004. Must reference position & Job ID# listed above.

SIEMENS PLM SOFTWARE INC. has the following openings: 1) Services Software Product Consultant (UGS145) for Plano, TX (or may work from home office anywhere in the U.S.) to provide consulting for solution definition & aligning processes to PLM. Requires 70% domestic travel to client sites in U.S. 2) Software Engineer – Adv. (UGS142) Plano, TX (or may work from any Siemens office or home office in the U.S.) to install & deploy Siemens PLM products. 3) Software Engineer – Adv. (UGS144) in Cypress, CA to design, develop & implement software programming for NX. Email resumes to [email protected] & refer to Job code of interest. EOE

www.cisco.com

Cisco Systems, Inc. is accepting resumes for the following positions:

Fort Lauderdale, FL

Strategic Product Sales Specialist (Ref#: FL10): Responsible for owning relationships and driving sales from a global perspective, encompassing all regional hubs of a customer’s organization across the world.

San Jose/Milpitas/Santa Clara, CA

Component Engineer (Ref# SJ171): Responsible for assessment and qualification of component technologies used in company products.

Please mail resumes with reference number to Cisco Systems, Inc., Attn: J51W, 170 W. Tasman Drive, Mail Stop: SJC 5/1/4, San Jose, CA 95134. No phone calls please. Must be legally authorized to work in the U.S. without sponsorship. EOE.


104 COMPUTER Published by the IEEE Computer Society 0018-9162/14/$31.00 © 2014 IEEE

THE ERRANT HASHTAG

Modernity seems to have done nothing to weaken our desire to be part of a group. If anything, it seems to have strengthened it.

It was a threat—simple, understated, unmistakable. Les had been trying to get the staff to adopt a new piece of software but faced nothing but quiet, passive resistance. No one had made the slightest effort to learn the new system. For his final effort, he told the group that there would be consequences. “Here’s the deal,” he said. “If you don’t use this app, you’re just,” and then he paused to drive the point home, “out of it.”

He looked at us and continued. “In less than a year, you won’t be able to communicate with anyone. Soon enough, you’ll be an old fart sitting in the corner, and no one will know who you are.” I left the meeting thinking that he had squandered an opportunity, but I quickly saw that I was wrong. Within a month, everyone had embraced the new system. No cajoling, no bribery could have made the change happen any faster. No one wanted to risk being excluded from the inner circle.

Two decades ago, this kind of fear clearly hung over the rush to connect homes and businesses to the Internet. Many were concerned that network connections would limit their privacy, reduce the security of their records, and damage the integrity of their machines. Yet few were willing to let those concerns dictate their actions as others began to put their systems online. They agreed that websites were free to leave cookies on their drives so that they wouldn’t be left “out of it.”

I must confess that I’m not immune to such pressure. Last week, I agreed to talk to a reporter about the Web of Things, a subject that wasn’t of much interest to me. However, this reporter was one of the “cool kids” and wrote for a news outlet that could still afford the luxury of a paper edition. Anyone who was anyone was quoted in it.

The reporter said he was looking for context and a good quote to focus his article. From the start, he warned me that he knew a fair bit about the Web of Things and that he needed no exaggerated claims or wild speculations about the future—just an honest discussion of the technology and its prospects.

I quickly summarized the recent IEEE literature on the Web of Things. It’s the natural evolution of the Internet of Things, applying the ideas of Web services, middleware, and similar technologies to networks of sensors. For example, it might enable data collection for a program that manages a building’s HVAC system, giving it a detailed picture of the climate inside and outside the building by querying sensors in offices, hallways, on the street, and even in neighboring buildings.

With this example, the reporter asked the question that plagues the Web of Things: Why would anyone agree to let a sensor in their office or home provide data to others?

The answer to this question isn’t straightforward, but we can see the broad outlines of the response. When applied to homes and people, the Web of Things is a technology of urbanization. It lets people live in close proximity, either through the close relationship of websites or through the close virtual communities on networks. When we live close to our neighbors, we need to share things with them—infrastructure, markets, information. Primarily, we see benefits in sharing information or face penalties in not sharing it. Sometimes, however, we share information because we’re afraid that if we don’t, we’ll be excluded from the community—we’ll be out of it.

David Alan Grier is an associate professor at George Washington University. Contact him at [email protected].

Just Out of It
David Alan Grier

Selected CS articles and columns are available for free at http://ComputingNow.computer.org.


Apple has the following job opportunities in Cupertino, CA:

Hardware Electrical Engineer. [Req# 98NVBZ] Work on sensor design & integ., schematic design, component select, circuit design, & tolerance analysis, w/ analog & mixed signal design.

ASIC Design Engineer. [Req # 999SPM] Resp for design of high perf app-specific integrated circuit (ASIC) semiconductor.

Software Development Engineer. [Req# 999TAJ] Research, design, devlp, implement and debug multimedia sftwre. Review sftwre source code to i.d. defects and perf. issues.

Test Engineer. [Req # 99NUYA] Work on comprehensive test suites for complex sftwre and hdwre in the video wkflow, especially on the embedded side.

Software Engineer, Applications. [Req# 9A3VZ5] Design & devlp large-scale sftwre sys. Design & devlp Customer Identity Mgmt security solutions to prevent unauth access to critical apps and data.

Software Engineer. [Req# 99NRQ6] Design and dev sftwre for mapping platforms. Design and implement complex, high availability server sftwre sys (public transit routing engine).

IS&T (Information Services & Technology) Technical Project Specialist. [Req# 99BN3RX] Analyze, dvlp and rev User Interface Des, Eng Req Spec, High lvl and Detail Tech Des Spec and Test Strategy.

Gas Gauge SQA Engineer. [Req# 997UZP] Dvlp, design, and exec. tests for validation of gas gauge hdwre & firmware at the sys level, dvlping test plans and other test docs as req’d.

Sr SAP AppleCare Logistics Functional Analyst. [Req# 99937J] Des, dev & impl SAP AppleCare logistics.

Software Engineer, Applications. [Req#999SEU] Design & devlp sftwre sys. Build front & back end of scalable web apps.

Technologist and Patent Agent. [Req# 9ASSGK] Eval new & emerging techs, patent portfolios, tech transfer & acquisition opport, & licensing opportunities from tech & legal risk mgmt perspective.

SW Develop Engineer 3. [Req #9BWU63] Dvlp. automated scripts to ensure high qual. of baseband protocols.

Engineering Project Manager. [Req #98WUBD] Review suppl. chain mgmt of mech. enclosure comp., ID, dev., negotiate, mng. mech. suppliers. Travel req'd 30%.

SW Eng Apps 3. [Req #9BWVZU] Des., dev., & lead functional, automation, & load perf. Test for global enterprise point of sale app.

Data Analyst 5. [Req #9BXT69] Analyze and eval. perf. of key bus. metrics: warranty return rate, repair turnaround time, & repeat repairs.

Mail resumes to 1 Infinite Loop M/S: 104-1GM, Attn: S.A., Cupertino, CA 95014. Principals only. Must include Req# to be considered. EOE.

JANUARY 2014


IEEE CLOUD COMPUTING

computer.org/cloudcomputing

IEEE Computer Society’s newest magazine tackles the emerging technology of cloud computing.

Subscribe today!

Coming March 2014
