
INFOLINE - Kongu Arts and Science College

VOLUME: 4 ISSUE: 3


INFOLINE

TECHNOLOGY NAVIGATOR

Executive Committee

Chief Patron : Thiru P.Sachithanandan

Patron : Dr. N.Raman M.B.A., M.Com., M.Phil., B.Ed., PGDCA.,Ph.D.,

Editor in Chief : S.Muruganantham M.Sc., M.Phil.,

Staff Advisor:

Ms.P.Kalarani M.Sc., M.C.A., M.Phil.,

Assistant Professor, Department of CT and IT.

Staff Editor:

Mr.S.Thangamani M.C.A., M.Phil.,

Assistant Professor, Department of CT and IT.

Student Editors:

Manivasagam.S III.B.Sc(IT)
Sachin.V III.B.Sc(IT)
Thirunavukkarasu.S III.B.Sc(CT)
Kiruthika T III.B.Sc(CT)
Prem Kumar P II.B.Sc(IT)
Ramya K II.B.Sc(IT)
Jaya Prakash A II.B.Sc(CT)
Kiruthika T II.B.Sc(CT)
Elango B I.B.Sc(IT)
Parthiban M I.B.Sc(IT)
Shanmugapriya S I.B.Sc(CT)
Sivaranjani S I.B.Sc(CT)

CONTENTS

Infoline 1
Executive Committee 2
Amazing 3-D Display Lets Video Chatters Interact With Remote Objects [Video] 4
Video Friday: Google's Project Tango, Visual Servoing, and Valkyrie at Work 5
Urban Computing Reveals the Hidden City 6
New Algorithms Reduce the Carbon Cost of Cloud Computing 7
Liquid Fix for the Cloud’s Heavy Energy Footprint 10
Seagate Crams 500 GB of Storage into Prototype Tablet 12
Single chip device to provide real-time 3-D images from inside the heart, blood vessels 14
Robotic construction crew needs no foreman 16
K-Glass: Extremely low-powered, high-performance head-mounted display embedding an augmented reality chip 18


Amazing 3-D Display Lets Video Chatters Interact With Remote Objects [Video]

The future of the Web, it seems, is not just sending data but transmitting actions. Telepresence robots and remote-control drones already let an internet user in one place control far-off gadgets in the physical world. Now another such device has emerged on the scene—a dynamic display that transmits 3-D shapes from the sender to the receiver.

The device is called inFORM, a “dynamic shape display” developed by researchers at MIT. Think of it as a long, wireless line of communication. On the receiving end of this line is a surface composed of a 30-by-30 grid of pins. Each pin has a tiny motor attached to its base, which can move it up and down independently of the 899 others.

On the other end of the line, a depth-sensing camera records physical objects or movements and sends that information to the motorized surface. Each of the pins acts as a three-dimensional pixel to recreate that information in a physical form. It essentially makes the 3-D vision pop up from the surface.
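The depth-camera-to-pin pipeline is easy to picture in code. The sketch below is illustrative only — the camera resolution, height range, and mapping rule are invented, since inFORM's actual firmware is not public — but it shows the core idea: downsample a depth frame onto a 30-by-30 grid and raise each pin in proportion to how near the recorded surface is.

```python
def depth_to_pin_heights(depth_frame, grid=30, max_height_mm=100.0):
    """Downsample a depth image (2-D list of distances in mm) onto a
    grid x grid pin surface: nearer objects raise pins higher."""
    rows, cols = len(depth_frame), len(depth_frame[0])
    near = min(min(r) for r in depth_frame)
    far = max(max(r) for r in depth_frame)
    span = (far - near) or 1  # avoid division by zero on a flat scene
    heights = [[0.0] * grid for _ in range(grid)]
    for i in range(grid):
        for j in range(grid):
            # average the depth pixels that fall onto this pin
            r0, r1 = i * rows // grid, (i + 1) * rows // grid
            c0, c1 = j * cols // grid, (j + 1) * cols // grid
            cells = [depth_frame[r][c]
                     for r in range(r0, r1) for c in range(c0, c1)]
            avg = sum(cells) / len(cells)
            # invert: the nearest surface becomes the tallest pin
            heights[i][j] = (far - avg) / span * max_height_mm
    return heights
```

Streaming these height grids to the receiver's 900 motors, frame after frame, is what makes a remote hand appear to push up out of the table.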

This may all sound kind of confusing and obscure, but think of it this way: it’s like a cross between Skype and one of those bizarre ’90s pin-art toys.

The list of potential applications inFORM’s developers foresee is nifty and far-reaching: from 3-D visualizations of CT scans, via interactive terrain models for urban planners, to long-distance design sessions between collaborating architects. But to make these applications practical, the resolution will need to be ramped up significantly. Future iterations of inFORM will have to include far more pins and far greater control.

It’s extremely impressive stuff, but it’s just one step on a long path to what MIT calls Radical Atoms. First conceptualized over a decade ago, Radical Atoms are what MIT believes will be the future of interactivity. The idea is that we presently interact with computers through graphical user interfaces (GUI), while inFORM and other projects like it offer up a tactile user interface (TUI).

K.SURENDAR

III – B.Sc (IT).


Video Friday: Google's Project Tango, Visual Servoing, and Valkyrie at Work

Google announced Project Tango. It's a phone. It also creates 3D maps of whatever you point it at. It looks amazing. There is a video.

Our current prototype is a 5” phone containing customized hardware and software designed to track the full 3D motion of the device, while simultaneously creating a map of the environment. These sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real time, combining that data into a single 3D model of the space around you.
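The fusion step Google describes — depth measurements arriving in the device's frame, placed into one shared model using the tracked pose — can be sketched in a few lines. This is a simplified illustration (translation plus yaw only; the real APIs expose full 6-DoF poses, and none of these names come from Tango's SDK):

```python
import math

def to_world(points, pose):
    """Transform sensor-frame points into the world frame.
    points: [(x, y, z), ...] in the device frame.
    pose: dict with 'x', 'y', 'yaw' (radians) of the device in the world."""
    c, s = math.cos(pose["yaw"]), math.sin(pose["yaw"])
    world = []
    for x, y, z in points:
        wx = pose["x"] + c * x - s * y  # rotate, then translate
        wy = pose["y"] + s * x + c * y
        world.append((wx, wy, z))
    return world

# As the device moves, successive scans land in one shared map:
world_map = []
world_map += to_world([(1.0, 0.0, 0.5)], {"x": 0.0, "y": 0.0, "yaw": 0.0})
world_map += to_world([(1.0, 0.0, 0.5)], {"x": 0.0, "y": 0.0, "yaw": math.pi / 2})
```

Doing this a quarter-million times a second, with drift-corrected poses, is the hard part the phone's custom hardware is for.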

It runs Android and includes development APIs to provide position, orientation, and depth data to standard Android applications written in Java, C/C++, as well as the Unity Game Engine. These early prototypes, algorithms, and APIs are still in active development, so these experimental devices are intended only for the adventurous and are not a final shipping product.

Obviously, there's a lot more that we want to know. Fortunately (we hope), there's a serious robotics angle here, as evidenced by the fact that nearly all of the non-Googlers in the video are celebrity roboticists, from places like HiDOF, OLogic, 3D Robotics, and the Open Source Robotics Foundation. And if these people know what's good for them, they'll agree to talk to us before we have to send out the crack IEEE Spectrum Roboticist Intimidation Squad.

Meanwhile, if you're way ahead of us and have already decided that you want one of these, you can apply at the website below for one of the first 200 dev kits, which Google intends for "projects in the areas of indoor navigation/mapping, single/multiplayer games that use physical space, and new algorithms for processing sensor data," although if you have a better idea than that, Google's open to it. All you have to do is convince them that you're an "incorporated entity or institution," and you have until March 14th to make that happen.

ABBAS MANDHRI . A . S

III – B.Sc (IT).


Urban Computing Reveals the Hidden City

In his essay “Walking in the City,” the French scholar Michel de Certeau talks about the “invisible identities of the visible.” He is talking specifically about the memories and personal narratives associated with a location. Until recently, this information was only accessible one-to-one—that is, by talking to people who had knowledge of a place.

But what if that data became one-to-many, or even many-to-many, and easily accessible via some sort of street-level interface that could be accessed manually, or wirelessly using a smartphone? This is essentially the idea behind urban computing, where the city itself becomes a kind of distributed computer. The pedestrian is the moving cursor; neighborhoods, buildings, and street objects become the interface; and the smartphone is used to “click” or “tap” that interface. In the same way that a computer, mouse, and interface are required to operate a Web browser to surf sites, the equivalent components of street computing create a reality browser that enables the city dweller to “surf” urban objects. On a broader level, the collection, storage, and distribution of the data related to a city and its objects is known as urban informatics (described by one technologist as “a city that talks back to you”).

Smartphone in hand, what can the modern-day flaneur expect to find in this newly digitized urban environment? First, thanks to the prevalence of GPS data, wayfinding is giving way (so to speak) to wayshowing, interfaces that provide specific directions from here to there, and to social navigation, getting around with the help of others (avoiding traffic, for example) and then checking in with your friends when you get there. Similarly, our urban gadabout might take advantage of use-someplace technologies such as augmented reality, where physical space is overlaid with virtual data. A good example is Streetmuseum, a Museum of London app that can overlay an archive photo of a street scene onto the same scene as shown through your smartphone’s camera. Beyond augmented reality is amplified reality, where extra data is built into an object from the get-go. For example, the embedding of radio-frequency identification or near-field communication technologies in street objects enables the creation of locative media (also called location-based media).

These situated technologies contain data about a specific location, which is then beamed to devices as they come within range, an exchange known as a situated interaction. An example is the sound garden, where designers assign sounds to public places, which users can then listen to using Wi-Fi–enabled devices.
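A situated interaction reduces to a simple proximity rule: a tagged street object carries a payload and a range, and a device receives the payload when it wanders inside. The toy model below is purely illustrative — the beacon names, coordinates, and ranges are invented, not drawn from any real deployment:

```python
import math

# Hypothetical location-bound media, keyed to street objects.
BEACONS = [
    {"name": "sound-garden-oak", "pos": (0.0, 0.0), "range_m": 30.0,
     "payload": "birdsong loop"},
    {"name": "archive-photo-corner", "pos": (120.0, 40.0), "range_m": 15.0,
     "payload": "1928 street scene"},
]

def nearby_media(device_pos, beacons=BEACONS):
    """Return payloads of every beacon whose range covers device_pos."""
    hits = []
    for b in beacons:
        dx = device_pos[0] - b["pos"][0]
        dy = device_pos[1] - b["pos"][1]
        if math.hypot(dx, dy) <= b["range_m"]:  # inside the beacon's circle
            hits.append(b["payload"])
    return hits
```

A real RFID or NFC tag inverts the geometry — the radio range itself is the circle — but the "stand here, receive this" exchange is the same.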

There is, sadly, the ever-present danger that advertisers and hucksters will take advantage of these technologies to turn the city into a giant billboard. But to the technologists and social scientists at the forefront of urban computing, the goal is enhanced civic engagement. To that end, where once the ideal of pervasive computing was to create seamless, unnoticeable technology, today’s urban computing designers want to build seamful interfaces, whose visibility encourages users to interact directly with systems. Curatorial media allow for urban data curation, the careful collection of stories—histories as well as facts and figures—using technologies called urban annotation systems. Since data are both curated and disseminated in such systems, this is known as read/write urbanism.

Is the urban computer a good thing? Well, it’s certainly an inevitable thing. Think about a regular PC: you can turn it off, or you can use it for fun or for productivity. The urban computer is no different. You can ignore it (turning a city off is problematic), or you can use it to become a more attentive, engaged, and concerned citizen. It’s a tool. Make it sing.

KANNAN. A

III – B.Sc (IT).

New Algorithms Reduce the Carbon Cost of Cloud Computing

The computing cloud may feel intangible to users, but it has a definite physical form and a corresponding carbon footprint. Facebook’s data centers, for example, were responsible for the emission of 298,000 metric tons of carbon dioxide in 2012, the equivalent of roughly 55,000 cars on the road. Computer scientists at Trinity College Dublin and IBM Research Dublin have shown that there are ways to reduce emissions from cloud computing, although their plan would likely cause some speed reductions and cost increases. By developing a group of algorithms, collectively called Stratus, the team was able to model a worldwide network of connected data centers and predict how best to use them to keep carbon emissions low while still getting the needed computing done and data delivered.

“The overall goal of the work was to see load coming from different parts of the globe and spread it out to different data centers to achieve objectives like minimizing carbon emissions or having the lowest electricity costs,” says Donal O’Mahony, a computer science professor at Trinity.

For the simulation, the scientists modeled a scenario inspired by Amazon’s Elastic Compute Cloud (EC2) data center setup that incorporated three key variables—carbon emissions, cost of electricity, and the time needed for computation and data transfer on a network. Amazon EC2 has data centers in Ireland and the U.S. states of Virginia and California, so the experimental model placed data centers there too, and it used queries from 34 sources in different parts of Europe, Canada, and the United States as tests.
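The kind of routing decision Stratus makes can be sketched as a weighted score over the three variables: rate each data center on carbon intensity, electricity price, and round-trip time, and send the request to the lowest score. Everything numeric below is invented for illustration — the paper's actual algorithms and real-time inputs are far more involved:

```python
# Hypothetical per-site statistics (not the paper's data).
CENTERS = {
    "ireland":    {"gco2_per_kwh": 300, "price": 0.12, "rtt_ms": 40},
    "virginia":   {"gco2_per_kwh": 450, "price": 0.09, "rtt_ms": 90},
    "california": {"gco2_per_kwh": 350, "price": 0.15, "rtt_ms": 120},
}

def route(centers, w_carbon=1.0, w_price=0.0, w_rtt=0.0):
    """Return the data center with the lowest weighted score."""
    def score(stats):
        return (w_carbon * stats["gco2_per_kwh"]
                + w_price * stats["price"] * 1000  # scale $/kWh into range
                + w_rtt * stats["rtt_ms"])
    return min(centers, key=lambda name: score(centers[name]))
```

Shifting the weights from carbon to price to latency reproduces, in miniature, the "anything in the middle" trade-off O’Mahony describes.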

Source: “Stratus: Load Balancing the Cloud for Carbon Emissions Control,” by Joseph Doyle et al., accepted for publication in IEEE Transactions on Cloud Computing.

Cloud Computing and Carbon Dioxide: Algorithms route requests from different sites [circles] to data centers [yellow squares] by balancing round-trip travel time and the data center’s carbon footprint.

The researchers then used the Stratus algorithms to optimize the workings of the network for any of the three variables. With the algorithms they were able to reduce the EC2 cloud’s emissions by 21 percent over a common commercial scheme for balancing computing loads. The key to the reduction, scientists found, was in routing requests to the Irish data center more than to those in California or Virginia. Ireland also tended to have faster-than-average service request times, so even when Stratus was tuned to reduce carbon, it shaved 38 milliseconds off the average time taken to request and receive a response from the data centers.

The researchers stress that the results have more value in representing trends than in predicting real-world numbers for quantities like carbon savings. Some of the key inputs were necessarily inexact. As an example, for some geographic locations, such as Ireland, it was easy to find real-time carbon intensity data or real-time electricity pricing data, but in other areas, including the United States, only seasonal or annual averages were available. “If we had the real-time data for California and Virginia, the simulations might look quite different,” says Joseph Doyle, a networks researcher at Trinity who worked with O’Mahony and IBM’s Robert Shorten on Stratus.

Christopher Stewart, who researches sustainable cloud computing at Ohio State University, says that although Stratus and other recent work have made significant progress toward modeling effective load balancing, data storage is another important factor to consider. “With data growing rapidly, storage capacity is a major concern now, too, and that may limit your flexibility in terms of being able to route requests from one data center to another.”

The researchers hope that the easier it is to achieve load balancing and optimization in cloud computing, the more it will be implemented by environmentally conscious companies, or those just looking to save money. “A company like Twitter might have lots of options in how it decides that all the Twitter traffic is going to get served around the world,” O’Mahony says. “If they decided that greenness was one of the things that was most important to them, they could structure their load balancing accordingly. Or if getting it done as cheaply as possible was important, they could structure it that way. Or they could do anything in the middle.”

J.ISAK RAJA KARUNYA PRAKASH

III – B.Sc (IT).


Liquid Fix for the Cloud’s Heavy Energy Footprint

Asicminer, a Hong Kong–based bitcoin mining operation, has taken an unorthodox step to gain an advantage over other computing systems running the algorithms that generate the virtual currency. To save money on energy, Asicminer puts its servers in liquid baths to cool them.

The result? Asicminer’s 500-kilowatt computing system uses 97 percent less energy on cooling than if it employed a conventional method. Its custom-made racks hold computers that are submerged in tanks filled with an engineered fluid produced by 3M that won’t damage the machines. The liquid takes up the system’s heat, and inexpensive cooling equipment extracts the heat, ultimately expelling it outside.
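A back-of-the-envelope calculation shows what that 97 percent figure is worth. The article does not state Asicminer's baseline, so the 40 percent cooling overhead below is an assumption (a commonly cited ballpark for air-cooled facilities), applied to the stated 500-kilowatt load:

```python
IT_LOAD_KW = 500             # Asicminer's stated compute load
AIR_COOLING_FRACTION = 0.40  # assumed overhead of conventional air cooling
IMMERSION_SAVINGS = 0.97     # the article's 97 percent reduction

air_cooling_kw = IT_LOAD_KW * AIR_COOLING_FRACTION          # 200 kW
immersion_cooling_kw = air_cooling_kw * (1 - IMMERSION_SAVINGS)  # ~6 kW
hours_per_year = 24 * 365

# Cooling energy avoided in a year of continuous operation, under
# these assumptions: roughly 1.7 million kWh.
saved_kwh = (air_cooling_kw - immersion_cooling_kw) * hours_per_year
```

Multiply that by a commercial electricity tariff and the appeal to a round-the-clock bitcoin operation is obvious.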

The bitcoin-mining facility is on the leading edge of a movement to use liquids to cool data centers. Operators of high-performance supercomputers have long understood that liquids trump air in the physics of heat removal. Because liquids are denser than gases, they are a more efficient medium to transport and remove unwanted heat.

Yet direct liquid cooling is a rarity in the corporate data centers that run bank transactions and the cloud facilities that serve data to smartphones. Data centers consume more than 1 percent of the world’s electricity and about 2 percent of the electricity in the United States. A third or more of that expenditure is for cooling. Given computing’s growing energy cost and environmental footprint, proponents say it’s just a matter of time before some form of liquid cooling wins out.

“Air cooling is such a goofy idea,” says Herb Zien, the CEO of LiquidCool Solutions, in Rochester, Minn., which makes immersion-cooling technology. “The problem is that there’s so much inertia and so much investment in the current system that it’s hard to turn back.”

Indeed, over the years many smart people have perfected the art of moving air around data centers for maximum efficiency. They have a number of techniques to choose from, such as setting up hot and cold aisles, using sensors to monitor conditions, and bringing in cold outdoor air for cooling. And the very idea of pumping fluids, especially water, into an expensive server rack requires a leap of faith that not all technology professionals are willing to take.


“Historically, the thinking has been that electronics and liquids don’t mix,” says Steven Hammond, the director of the Computational Science Center at the National Renewable Energy Laboratory (NREL), in Golden, Colo. “Everybody working in data centers is hydrophobic.” NREL flows water into its server racks to remove heat, eliminating the need for power-hungry air conditioners. In the colder months, pumps circulate the heated water to warm the laboratory building.

The average data center spends more than 30 percent of its energy bill just on cooling, making it a major cost to the Googles and Facebooks of the world. But liquid cooling, particularly immersion cooling or circulating water through server racks, has yet to make a big splash in the cloud. Microsoft, which operates more than a million servers worldwide, is sticking with air cooling because it’s proven and scalable, says Kushagra Vaid, general manager of cloud server engineering at Microsoft. “Cost of scaling is a big factor for Microsoft when considering new types of cooling methods,” Vaid says. “Our scale demands standardized and simplified techniques that are deployable across server environments and geographies.”

One maker of immersion cooling, Green Revolution Cooling, in Austin, Texas, claims that its system, in which servers are placed in a tank filled with mineral oil, is 60 percent cheaper than building and operating a new data center. But it does require a change in how data centers are installed and serviced. For example, server fans need to be removed, and technicians need to wear gloves when swapping out servers.

The strongest need for liquid cooling is in situations where a lot of compute power is packed into a small space, experts say. The Asicminer system in Hong Kong, for instance, is compact enough to reside in a high-rise building, taking up one-tenth of the space it would if it were air-cooled.

In the future, though, data-center operators may want to place their computing power closer to users. There’s also increasing pressure from environmental groups to lower energy use from cloud data centers. Still, whether liquid cooling will break beyond its niche status remains an open question. “There’s a point where the technology stops being used by early adopters and starts being used by the early majority, and there’s a chasm in between,” says Matt Solomon, the marketing director at Green Revolution Cooling. “We’re just waiting for the domino effect.”

V.L.JAYANTH

III – B.Sc (IT).


Seagate Crams 500 GB of Storage into Prototype Tablet

Flash memory is fantastic stuff. It's small, it's fast, and it's robust. It's also absurdly expensive if you want a lot of it, which is at odds with our evolving media-hungry mobile lifestyle. Google, Apple, and Amazon would like us to store everything in the cloud. But hard disk drive manufacturers have other ideas.

For a few years now, Seagate has offered wireless traditional hard drives to give mobile devices a storage boost, but at CES this year, they're showing off a prototype tablet that skips the peripheral completely. And somehow, it does so without many compromises.

Seagate doesn't have a name for this prototype tablet, and they don't intend to jump into the tablet game. It's more of a design concept, intended to illustrate the feasibility of stuffing an old-school magnetic platter hard drive into a slim tablet.

The hard drive in question is Seagate's impressively skinny "Ultra Mobile HDD," a five-millimeter-thick single-platter drive with 500 GB of storage, robust power management, and drop protection. It's cheap, too: Seagate won't tell us how much, exactly, except that it's "a fraction of the cost" of even just 64 GB of flash memory.

Of course, there are plenty of reasons we don't already have hard drives in tablets. The compromise that immediately leaps to mind when you add a spinning hard drive is, of course, battery life. Seagate's solution in this prototype was to hybridize the storage with the addition of 8 GB of flash memory. The vast majority of the time, the tablet is just running on flash, and the magnetic drive is powered off. If you want to play a movie, though, the drive will spin up, swap the movie onto the flash memory through a fast 6 Gb/s SATA interface, and then spin down again. The upshot of this is that you have 500 GB that you can access whenever you want, but you're not paying for it in battery life, because it's almost never running.
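The hybrid behavior just described — serve reads from a small flash tier, and spin the platter up only on a miss to stage data across — can be modeled in a few lines. This is a toy cache, not Seagate's actual policy, which they haven't published:

```python
class HybridDrive:
    """Toy model: small always-on flash cache over a big sleeping HDD."""

    def __init__(self, flash_capacity=4):
        self.flash = {}                  # filename -> data, the fast tier
        self.flash_capacity = flash_capacity
        self.hdd = {}                    # the big, usually-powered-down tier
        self.spinups = 0                 # each spin-up costs battery

    def write(self, name, data):
        self.hdd[name] = data            # bulk data lands on the platter

    def read(self, name):
        if name in self.flash:           # hit: the HDD stays asleep
            return self.flash[name]
        self.spinups += 1                # miss: spin up, stage, spin down
        data = self.hdd[name]
        if len(self.flash) >= self.flash_capacity:
            self.flash.pop(next(iter(self.flash)))  # evict oldest entry
        self.flash[name] = data
        return data
```

The battery win is that `spinups` grows with distinct working sets, not with total reads: watch a staged movie end to end and the platter never turns.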

With battery life rendered a non-issue, putting a drive like this into a tablet is almost entirely upside. You get a lot more storage, of course, and you also save a lot of money. According to Seagate, there's "no compromise" in battery life, robustness, or performance: you just get more storage for less money, and that's it. Hopefully, a manufacturer will take the plunge on this and give us a consumer model to play with at some point in the near future.

Also: Fast, Portable Storage

The other interesting thing that Seagate had on display is something that you can buy right now. It's called Backup Plus Fast, and it's a chubby 2.5" external USB 3.0 hard drive. It's chubby (the picture above shows it next to a regular-sized external HD) because there are actually two drives in there, set up in a striped (RAID 0) configuration. You get a staggering four terabytes of bus-powered storage that can maximize its USB 3.0 connection with transfer speeds of up to 220 MB/s, great for working with video or piles of pictures.

While the drive is currently only available in RAID 0, Seagate told us that they're looking at whether they'll put out a RAID 1 (mirrored) version at some point in the future. Personally, I'm super paranoid about irreplaceable media like pictures and videos, and I'd love to have a portable solution that offers protection against drive failure, even if it means sacrificing the capacity and speed.
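The RAID 0 versus RAID 1 trade-off mentioned above fits in a few lines. Striping alternates chunks of each write across both drives, so both platters work in parallel (hence the doubled capacity and the 220 MB/s figure) but either drive failing loses everything; mirroring writes the same data twice, halving capacity but surviving one failure. A minimal two-drive sketch:

```python
def raid0_write(data, chunk=4):
    """RAID 0: alternate fixed-size chunks across two drives."""
    drives = ([], [])
    for i in range(0, len(data), chunk):
        drives[(i // chunk) % 2].append(data[i:i + chunk])
    return drives

def raid1_write(data):
    """RAID 1: mirror the full data onto both drives."""
    return (data, data)
```

In RAID 0, reassembling the file means interleaving the two drives' chunk lists back together; in RAID 1, either copy alone is the file.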

The Seagate Backup Plus Fast is available now for a penny under $300.

M.BHARANI BABU

III – B.Sc (CT).


Single chip device to provide real-time 3-D images from inside the heart, blood vessels

Researchers have developed the technology for a catheter-based device that would provide forward-looking, real-time, three-dimensional imaging from inside the heart, coronary arteries and peripheral blood vessels. With its volumetric imaging, the new device could better guide surgeons working in the heart, and potentially allow more of patients' clogged arteries to be cleared without major surgery.

The device integrates ultrasound transducers with processing electronics on a single 1.4 millimeter silicon chip. On-chip processing of signals allows data from more than a hundred elements on the device to be transmitted using just 13 tiny cables, permitting it to easily travel through circuitous blood vessels. The forward-looking images produced by the device would provide significantly more information than existing cross-sectional ultrasound.

Researchers have developed and tested a prototype able to provide image data at 60 frames per second, and plan next to conduct animal studies that could lead to commercialization of the device.

"Our device will allow doctors to see the whole volume that is in front of them within a blood vessel," said F. Levent Degertekin, a professor in the George W. Woodruff School of Mechanical Engineering at the Georgia Institute of Technology. "This will give cardiologists the equivalent of a flashlight so they can see blockages ahead of them in occluded arteries. It has the potential for reducing the amount of surgery that must be done to clear these vessels."

Details of the research were published online in the February 2014 issue of the journal IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control. Research leading to the device development was supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB), part of the National Institutes of Health.

"If you're a doctor, you want to see what is going on inside the arteries and inside the heart, but most of the devices being used for this today provide only cross-sectional images," Degertekin explained. "If you have an artery that is totally blocked, for example, you need a system that tells you what's in front of you. You need to see the front, back and sidewalls altogether. That kind of information is basically not available at this time."

The single chip device combines capacitive micromachined ultrasonic transducer (CMUT) arrays with front-end CMOS electronics technology to provide three-dimensional intravascular ultrasound (IVUS) and intracardiac echography (ICE) images. The dual-ring array includes 56 ultrasound transmit elements and 48 receive elements. When assembled, the donut-shaped array is just 1.5 millimeters in diameter, with a 430-micron center hole to accommodate a guide wire.

Power-saving circuitry in the array shuts down sensors when they are not needed, allowing the device to operate with just 20 milliwatts of power, reducing the amount of heat generated inside the body. The ultrasound transducers operate at a frequency of 20 megahertz (MHz).

Imaging devices operating within blood vessels can provide higher resolution images than devices used from outside the body because they can operate at higher frequencies. But operating inside blood vessels requires devices that are small and flexible enough to travel through the circulatory system. They must also be able to operate in blood.

Doing that requires a large number of elements to transmit and receive the ultrasound information. Transmitting data from these elements to external processing equipment could require many cable connections, potentially limiting the device's ability to be threaded inside the body.

Degertekin and his collaborators addressed that challenge by miniaturizing the elements and carrying out some of the processing on the probe itself, allowing them to obtain what they believe are clinically useful images with only 13 cables.
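One way to see how over a hundred elements can share 13 cables is time-division multiplexing: on-chip electronics switches groups of elements onto the shared lines in successive time slots. This is a schematic illustration only — the chip's actual readout combines on-chip buffering and processing described in the paper, and the element count below assumes the 56 + 48 = 104 elements of the dual-ring array:

```python
def assign_slots(n_elements=104, n_cables=13):
    """Map each element to a (cable, time_slot) pair so that no two
    elements share a cable within the same slot."""
    slots = {}
    for e in range(n_elements):
        slots[e] = (e % n_cables, e // n_cables)  # round-robin over cables
    return slots
```

With 104 elements and 13 cables, eight time slots suffice, so every element gets read out once per frame over a cable bundle thin enough to snake through an artery.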

"You want the most compact and flexible catheter possible," Degertekin explained. "We could not do that without integrating the electronics and the imaging array on the same chip."

Based on their prototype, the researchers expect to conduct animal trials to demonstrate the device's potential applications. They ultimately expect to license the technology to an established medical diagnostic firm to conduct the clinical trials necessary to obtain FDA approval.

For the future, Degertekin hopes to develop a version of the device that could guide interventions in the heart under magnetic resonance imaging (MRI). Other plans include further reducing the size of the device to place it on a 400-micron diameter guide wire.

In addition to Degertekin, the research team included Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering; a collaborator in the Woodruff School of Mechanical Engineering; Gokce Gurun and Jaime Zahorian, recent graduates of Georgia Tech's School of Electrical and Computer Engineering; and Georgia Tech Ph.D. students Toby Xu and Sarp Satir.

T.HEMALATHA

III – B.Sc (CT).


Robotic construction crew needs no foreman

On the plains of Namibia, millions of tiny termites are building a mound of soil -- an 8-foot-tall "lung" for their underground nest. During a year of construction, many termites will live and die, wind and rain will erode the structure, and yet the colony's life-sustaining project will continue.

Inspired by the termites' resilience and collective intelligence, a team of computer scientists and engineers at the Harvard School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering at Harvard University has created an autonomous robotic construction crew. The system needs no supervisor, no eye in the sky, and no communication: just simple robots -- any number of robots -- that cooperate by modifying their environment.

Harvard's TERMES system demonstrates that

collective systems of robots can build complex,

three-dimensional structures without the need

for any central command or prescribed roles.

The results of the four-year project were

presented this week at the AAAS 2014 Annual

Meeting and published in the February 14 issue

of Science.

The TERMES robots can build towers, castles,

and pyramids out of foam bricks, autonomously

building themselves staircases to reach the

higher levels and adding bricks wherever they

are needed. In the future, similar robots could

lay sandbags in advance of a flood, or perform

simple construction tasks on Mars.

"The key inspiration we took from termites is

the idea that you can do something really

complicated as a group, without a supervisor,

and secondly that you can do it without

everybody discussing explicitly what's going on,

but just by modifying the environment," says

principal investigator Radhika Nagpal, Fred

Kavli Professor of Computer Science at Harvard

SEAS. She is also a core faculty member at the

Wyss Institute, where she co-leads the

Bioinspired Robotics platform.

Most human construction projects today are

performed by trained workers in a hierarchical

organization, explains lead author Justin Werfel,

a staff scientist in bioinspired robotics at the

Wyss Institute and a former SEAS postdoctoral

fellow.

"Normally, at the beginning, you have a

blueprint and a detailed plan of how to execute


it, and the foreman goes out and directs his crew,

supervising them as they do it," he says. "In

insect colonies, it's not as if the queen is giving

them all individual instructions. Each termite

doesn't know what the others are doing or what

the current overall state of the mound is."

Instead, termites rely on a concept known

as stigmergy, a kind of implicit communication:

they observe each other's changes to the

environment and act accordingly. That is what

Nagpal's team has designed the robots to do,

with impressive results. Supplementary videos

published with the Science paper show the

robots cooperating to build several kinds of

structures and even recovering from unexpected

changes to the structures during construction.

Each robot executes its building process in

parallel with others, but without knowing who

else is working at the same time. If one robot

breaks, or has to leave, it does not affect the

others. This also means that the same

instructions can be executed by five robots or

five hundred. The TERMES system is an

important proof of concept for scalable,

distributed artificial intelligence.
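The idea of coordination purely through environment modification can be sketched in a few lines. The toy below is an illustration under invented rules, not the actual TERMES algorithm: identical robots build a 1-D staircase, each following the same local rule, with the partially built structure itself as the only shared information. The result is the same whether five robots or five hundred take trips.

```python
# Toy stigmergy sketch (not the real TERMES rules): identical robots build
# a 1-D staircase with no communication -- the structure is the "message."
TARGET = [1, 2, 3, 4]             # desired final brick height at each site

def robot_trip(heights):
    """One robot enters from the ground, walks along the row, and drops
    its brick at the first site that is below target and still reachable
    (the new height must be at most one brick above where it stands)."""
    prev = 0                      # height the robot currently stands on
    for i, h in enumerate(heights):
        if h < TARGET[i] and h <= prev:
            heights[i] += 1       # modify the shared environment
            return True           # brick placed, robot leaves
        prev = h                  # climb onto this site and keep walking
    return False                  # structure is finished

def build(n_robots=5):
    heights = [0] * len(TARGET)
    # Robots keep taking trips until no one can place a brick.
    while any([robot_trip(heights) for _ in range(n_robots)]):
        pass
    return heights

print(build())                    # -> [1, 2, 3, 4], for any n_robots
```

Because no robot depends on any other, removing a robot (or adding five hundred) changes nothing but the build time, which is the scalability property the paragraph above describes.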

Nagpal's Self-Organizing Systems Research

Group specializes in distributed algorithms that

allow very large groups of robots to act as a

colony. Close connections between Harvard's

computer scientists, electrical engineers, and

biologists are key to her team's success. They

created a swarm of friendly Kilobots a few years

ago and are contributing artificial intelligence

expertise to the ongoing RoboBees project, in

collaboration with Harvard faculty members

Robert J. Wood and Gu-Yeon Wei.

"When many agents get together -- whether

they're termites, bees, or robots -- often some

interesting, higher-level behavior emerges that

you wouldn't predict from looking at the

components by themselves," says Werfel.

"Broadly speaking, we're interested in

connecting what happens at the low level, with

individual agent rules, to these emergent

outcomes."

Coauthor Kirstin Petersen, a graduate student at

Harvard SEAS with a fellowship from the Wyss

Institute, spearheaded the design and

construction of the TERMES robots and bricks.

These robots can perform all the necessary tasks

-- carrying blocks, climbing the structure,

attaching the blocks, and so on -- with only four

simple types of sensors and three actuators.

"We co-designed robots and bricks in an effort

to make the system as minimalist and reliable as

possible," Petersen says. "Not only does this

help to make the system more robust; it also

greatly simplifies the amount of computing

required of the onboard processor. The idea is

not just to reduce the number of small-scale

errors, but more so to detect and correct them

before they propagate into errors that can be

fatal to the entire system."

In contrast to the TERMES system, it is

currently more common for robotic systems to

depend on a central controller. These systems

typically rely on an "eye in the sky" that can see

the whole process or on all of the robots being


able to talk to each other frequently. These

approaches can improve group efficiency and

help the system recover from problems quickly,

but as the numbers of robots and the size of their

territory increase, these systems become harder

to operate. In dangerous or remote

environments, a central controller presents a

single failure point that could bring down the

whole system.

"It may be that in the end you want something in

between the centralized and the decentralized

system -- but we've proven the extreme end of

the scale: that it could be just like the termites,"

says Nagpal. "And from the termites' point of

view, it's working out great."

LOGA PRIYA M

I – B.Sc (IT).

K-Glass: Extremely low-powered,

high-performance head-mounted

display embedding an augmented

reality chip

Walking around the streets searching for a place

to eat will be no hassle when a head-mounted

display (HMD) becomes affordable and

ubiquitous. Researchers at the Korea Advanced

Institute of Science and Technology (KAIST)

developed K-Glass, a wearable, hands-free

HMD that enables users to find restaurants while

checking out their menus. If the user of K-Glass

walks up to a restaurant and looks at the name of

the restaurant, today's menu and a 3D image of

food pop up. The Glass can even show the

number of tables available inside the restaurant.

K-Glass makes this possible because of its built-

in augmented reality (AR) processor.

Unlike virtual reality which replaces the real

world with a computer-simulated environment,

AR incorporates digital data generated by the

computer into the reality of a user. With the

computer-made sensory inputs such as sound,

video, graphics or GPS data, the user's real and

physical world becomes live and interactive.

Augmentation takes place in real time and in semantic context with the surroundings: a menu, for example, is overlaid on a restaurant's signboard as the user passes by, rather than irrelevant information such as an airplane flight schedule.

Most commonly, location-based or computer-

vision services are used in order to generate AR

effects. Location-based services activate motion


sensors to identify the user's surroundings,

whereas computer-vision uses algorithms such

as facial, pattern, and optical character

recognition, or object and motion tracking to

distinguish images and objects. Many of the

current HMDs deliver augmented reality

experiences employing location-based services

by scanning the markers or bar-codes printed on

the back of objects. The AR system tracks the

codes or markers to identify objects and then

align them with virtual content. However, this approach cannot handle objects or spaces that carry no bar-codes, QR codes, or markers, particularly those in outdoor environments, and so fails to recognize them.
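The marker-based approach and its limitation can be pictured with a minimal sketch (the marker ids and overlay data below are invented for illustration): recognition reduces to looking up a detected marker id, so anything without a marker simply yields nothing.

```python
# Hypothetical marker-to-content registry for a marker-based AR system.
OVERLAYS = {
    "qr:restaurant-42": {"menu": ["soup", "noodles"], "tables_free": 3},
}

def overlay_for(marker_id):
    """Return the virtual content registered for a detected marker,
    or None when the object carries no known marker."""
    return OVERLAYS.get(marker_id)

print(overlay_for("qr:restaurant-42"))   # content found, can be overlaid
print(overlay_for("unmarked-building"))  # -> None: no marker, no AR effect
```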

To solve this problem, Hoi-Jun Yoo, Professor

of Electrical Engineering at KAIST and his team

developed, for the first time in the world, an AR

chip that works just like human vision. This

processor is based on the Visual Attention

Model (VAM) that duplicates the ability of

human brain to process visual data. Almost unconsciously and automatically, VAM picks out the most salient and relevant information about the environment in which human vision operates, discarding unnecessary data before it is processed. As a result, the processor can dramatically speed up the computation of complex AR algorithms.
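The filtering step can be illustrated with a deliberately simple saliency measure: score each pixel by its contrast against the frame mean and pass only the highest-scoring fraction on to the expensive AR stage. This is a toy stand-in for attention-style filtering; the real VAM hardware is far more sophisticated.

```python
# Toy attention filter: keep only the most "salient" pixels of a frame.
def salient_pixels(image, keep_fraction=0.1):
    """image: 2-D list of grayscale values. Returns (row, col) coordinates
    of the most salient pixels, ranked by absolute deviation from the mean."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    scored = [(abs(image[r][c] - mean), r, c)
              for r in range(len(image)) for c in range(len(image[0]))]
    scored.sort(reverse=True)                    # most salient first
    k = max(1, int(len(scored) * keep_fraction))
    return [(r, c) for _, r, c in scored[:k]]

# Example: a dark frame with one bright "object" -- attention lands on it,
# and the downstream AR stage only ever sees that region.
frame = [[10, 10, 10, 10],
         [10, 10, 200, 10],
         [10, 10, 10, 10]]
print(salient_pixels(frame))                     # -> [(1, 2)]
```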

The AR processor has a data processing network

similar to that of a human brain's central nervous

system. When the human brain perceives visual

data, different sets of neurons, all connected,

work concurrently on each fragment of a

decision-making process; one group's work is relayed to another group of neurons for the next

round of the process, which continues until a set

of decider neurons determines the character of

the data. Likewise, the artificial neural network

allows parallel data processing, alleviating data

congestion and reducing power consumption

significantly.

KAIST's AR processor, fabricated in a 65 nm (nanometer) process with an area of 32 mm², delivers a peak performance of 1.22 TOPS (tera-operations per second) running at 250 MHz and consumes 778 milliwatts on a 1.2 V supply. The ultra-low-power processor achieves an energy efficiency of 1.57 TOPS/W while processing 720p video at 30 frames per second in real time, a 76% improvement in power consumption over comparable devices. HMDs currently on the market, including the Project Glass, whose battery lasts only two hours, have so far shown poor performance. Professor Yoo said, "Our

processor can work for long hours without

sacrificing K-Glass's high performance, making it an ideal mobile gadget or wearable computer that users can wear for almost the whole day."
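The quoted figures are internally consistent, which a quick check confirms (the numbers are taken directly from the article):

```python
# Sanity check of the published K-Glass figures.
peak_tops = 1.22                 # tera-operations per second at 250 MHz
power_w = 0.778                  # 778 milliwatts, in watts

efficiency = peak_tops / power_w # tera-operations per joule
print(round(efficiency, 2))      # -> 1.57, the quoted 1.57 TOPS/W

# At 30 fps and 720p (1280 x 720 pixels), the peak rate corresponds to
# roughly 44,000 operations per pixel per frame:
ops_per_pixel = peak_tops * 1e12 / (30 * 1280 * 720)
```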

S.DIVAKAR

III – B.Sc (CT).