Mount Carmel College Digitally Yours Version 18.0
Department of Computer Science
FROM THE PRINCIPAL’S DESK
I congratulate the Department of Computer Science and MCA for their efforts in bringing out this newsletter on the theme “Artificial Intelligence – Marvel Creations, Dawn of Simulated Cognition”. “Alexa.” “Siri.” “Hey, Google.” These are the names that today’s generation calls out, perhaps more often than we call out those of friends and family members. Artificial intelligence (AI) isn’t part of a science-fiction future; it’s our reality. Over the past decade we’ve become accustomed to the idea that when we call a customer service line, we must often pass through a gauntlet of voice-activated technology before reaching a human. In some ways, it’s an evolution of the wisdom of the crowds; only instead of making you sift through the potential options, it simply surfaces what you are most likely to enjoy. I wish the department continued growth as it explores research directions in this emerging area and reaches greater heights.
Dr. Sr. ARPANA
MESSAGE FROM CO-ORDINATOR, DEPARTMENT OF MCA
Will machines surpass human levels of intelligence and ability? From Siri to self-driving cars, AI is progressing rapidly. AI is not just Marvel heroes with human-like characters; it is a superintelligence that can encompass anything and might even help us eradicate war, disease, and poverty (some experts have expressed concern that such a superintelligence could be extremely powerful, possibly beyond our control). But remember, there is still an underlying level we haven't yet touched: yes, we haven't yet replicated the human brain! I congratulate the editorial team of the Computer Science and MCA departments for choosing this theme for this 2K18 edition and wish them all the very best.
Ms. VIJAYALAKSHMI N
ARTIFICIAL INTELLIGENCE, THE COMING WAVE
Since the invention of computers, their capability to perform various tasks has grown exponentially. Humans have developed the power of computer systems in terms of their diverse working domains, their increasing speed, and their reducing size over time. A branch of computer science named “Artificial Intelligence” pursues making computers or machines as intelligent as human beings.
According to the father of Artificial Intelligence,
John McCarthy, it is “The science and
engineering of making intelligent machines,
especially intelligent computer programs”. It is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think. This is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems. While exploiting the power of computer systems, human curiosity led us to wonder, “Can a machine think and behave as humans do?” Thus, the development of Artificial Intelligence started with the intention of creating in machines the kind of intelligence we find and regard highly in humans. Some of its goals are: to create expert systems, which exhibit intelligent behaviour and can learn, demonstrate, explain, and advise their users; and to implement human intelligence in machines, creating systems that understand, think, learn, and behave like humans. Artificial Intelligence is now prominent in fields such as gaming, natural language processing, expert systems, vision systems, speech recognition, handwriting recognition, and intelligent robots. It could, however, also be a threat to humanity. It should be adopted at a moderate pace, since it is a very powerful tool that could come to dominate our day-to-day activities. Artificial Intelligence, therefore, could be either a boon or a bane to us.
Shriya Raj III MCA
Technology is the campfire around which we tell our stories
3D PRINTING
3D printing, or additive manufacturing, is a process of making three-dimensional solid objects from a digital file.
There are several ways to 3D print. All these
technologies are additive, differing mainly in the
way layers are built to create an object. Some
methods use melting or softening material to
extrude layers. Others cure a photo-reactive
resin with a UV laser (or another similar power
source) layer by layer.
Some of the processes are:
Vat photopolymerisation
Material Jetting
Binder Jetting
Material Extrusion
Powder Bed Fusion
Sheet Lamination
Directed Energy Deposition
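To make the additive, layer-by-layer idea concrete, here is a toy sketch in Python (purely illustrative, with made-up numbers; real slicing software works on triangle meshes extracted from the digital file):

```python
# Toy illustration of additive manufacturing: the printer builds a part
# as a stack of thin layers, each deposited on top of the previous one.
def slice_into_layers(part_height_mm: float, layer_height_mm: float = 0.2):
    """Return the z-height at which each successive layer is deposited."""
    layers = []
    z = 0.0
    while z < part_height_mm:
        layers.append(round(z, 2))
        z += layer_height_mm
    return layers

layers = slice_into_layers(20.0)  # a part 20 mm tall
print(f"{len(layers)} layers, printed bottom-up: {layers[:5]} ...")
```

Whichever of the processes above is used, the common principle is the same: the object exists first as this kind of ordered layer stack, and the machine realizes one layer at a time.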
Applications of 3D printing
3D printing has many applications in
manufacturing, medicine, architecture, and
custom art and design.
Applications in medical field
Medical applications of 3D printing have evolved considerably. Recently published
reviews describe the use of 3D printing to
produce bones, ears, exoskeletons, windpipes, a
jaw bone, eyeglasses, cell cultures, stem cells,
blood vessels, vascular networks, tissues, and
organs, as well as novel dosage forms and drug
delivery devices. The current medical uses of 3D
printing can be organized into several broad
categories. There’s little question that 3D
medical advances are going to make life better
for many people. With printers, people in
remote areas or on airplanes or cruise ships will
have access to faster medical care in
emergencies. In the past, it would have been necessary either to ship medical parts to the injured or sick person or to bring him or her to a major hospital; now, the essential part can simply be 3D printed.
In the food industry
Bringing the food industry to the digital age is
one of the essential and revolutionary
applications of 3D printing. Using robotic
layer-based food printing systems allows the
recipe of the food to be digitized and saved in
order to prepare very repeatable and
high-quality dishes without any margin for
operator error. Also, the shape and decoration
of the food can be individualized based on the
customer or the occasion.
A company called Choc Edge is currently
marketing "The world's first commercial 3D
chocolate printer", the Choc Creator. It uses a
nozzle to dispense molten chocolate into any
pattern and shape. While the $3,500 price might
be expensive for home use, it can prove to be
very successful for niche shops catering to specific customers or events.
Field of Shelter
Shelter is another basic human necessity, and an interesting application for 3D printing. Conventional building methods are hazardous, time-consuming, and expensive; 3D printing of buildings can enable the automated creation of a variety of buildings quickly and efficiently.
Field of Transportation
3D printers are creating parts for cars and
airplanes, transforming the aerospace and
automobile industries. It’s even possible to print
entire vehicles. Beyond this, there are
applications for using this technology for
building materials. NASA now uses 3D printed
rocket engines. In the near future, 3D printing is
likely to make transportation faster, cheaper,
and safer.
Field of Construction and Architecture
3D printers can not only print out models but
actual homes and other buildings. A company in
China, for example, has built a 3D-printed house
made to withstand powerful earthquakes. This is
another area where we are only starting to
scratch the surface of potential applications. As
this technology gets cheaper and more refined,
3D-printed homes could help address issues such
as homelessness and housing shortages.
Future of 3D printing
In the coming years, 3D printers will be at the heart of full-scale production capabilities in
several industries, from aerospace to
automotive to health care to fashion.
Manufacturing as we know it will never be the
same. 3D printing has countless possibilities in
many industries and areas of life. 3D-printed
items will increasingly be used to make items for
all purposes, from the frivolous to the practical
to the humanitarian. Only time will tell where
this technology will take us. 3D printing is likely
to radically change the way people purchase
many items. First, e-commerce made it possible
to order products from the comfort of your
home. 3D printing will create new possibilities,
where you choose the specifications and simply
print out what you want.
-ANNIE SIMRIN SIRISHA.DK
II BCA
The science of today is technology of tomorrow
CROSSWORD
- SRISHTI III MCA
WHEN CELLULAR NETWORKS MEET ARTIFICIAL INTELLIGENCE
Currently, fourth-generation (4G) cellular
networks are being globally deployed to provide
all-IP (Internet Protocol) broadband
connectivity. Recall that second-generation (2G) Global System for Mobile Communications (GSM) networks, which debuted in 1991, started out providing only digital voice telephony, while third-generation (3G) cellular networks, launched in 2001, initially provided mobile Internet access. Nowadays,
the landscape of the Information
Communication Technology (ICT) industry is
rapidly changing. Therefore, to enhance service
provisioning and satisfy the coming diversified
requirements, it is necessary to revolutionize the
cellular networks with cutting-edge
technologies.
5G cellular networks are assumed to be the key
enabler and infrastructure provider in the ICT
industry, by offering a variety of services with
diverse requirements. The standardization of 5G
cellular networks is being expedited, which also
implies more of the candidate technologies will
be adopted. Therefore, it is worthwhile to
provide insight into the candidate techniques as
a whole and examine the design philosophy
behind them. In this article, I try to highlight one
of the most fundamental features among the
revolutionary techniques in the 5G era, i.e.,
there emerges initial intelligence in nearly every
important aspect of cellular networks, including
radio resource management, mobility
management, service provisioning
management, and so on. However, faced with increasingly complicated configuration issues and blossoming new service requirements, 5G cellular networks will still fall short if they lack complete AI functionality.
Hence, we further introduce fundamental
concepts in AI and discuss the relationship
between AI and the candidate techniques in 5G
cellular networks. Specifically, we highlight the
opportunities and challenges to exploit AI to
achieve intelligent 5G networks, and
demonstrate the effectiveness of AI to manage
and orchestrate cellular network resources. We
envision that AI-empowered 5G cellular
networks will make the acclaimed ICT enabler a
reality.
- RINIYA BENNY
II BCA
Design and preserve the future through advanced methods of technology
FACTS ABOUT 5G
You may never have heard of 5G wireless connections; they barely even exist yet, and the ones that do aren't official 5G connections, because no one has yet agreed on what 5G even means.
But 5G is coming, and many wireless carriers and
technology companies are already investing in
the new technology. So here are five facts to
know about 5G.
5G wireless will be available by 2020, or even a
bit earlier. Verizon Communications (NYSE: VZ),
Alphabet's (NASDAQ: GOOG) (NASDAQ: GOOGL)
Google, and AT&T (NYSE: T) are already testing
5G technologies right now. Google is testing
solar-powered drones that can stay up in the sky
for as long as five years and beam down 5G
signals to users. AT&T and Verizon are taking a
more traditional approach and are currently
using 5G signals near their respective
headquarters. Verizon says it will roll out tests in
Boston, New York and San Francisco later this
year. But there aren't any set standards for 5G
yet. The international wireless standards body, 3GPP, is still determining the specifications, along with Ericsson, Samsung, Nokia, Cisco Systems, and Verizon. The next generation of radio transmission standards is likely to be set by 2018.
5G will be lightning fast. Verizon says that
its 5G network will likely be 200 times faster than
the 5Mbps speeds many of its users get on 4G
LTE. That means 5G speeds will hit 1 Gbps, which
is currently the fastest speed you can get from
Google Fiber. At that rate, you'll be able to
download an HD movie in seven seconds. Speeds
are expected to increase even higher than 1Gbps
as well, as 5G evolves.
5G will likely be the next major fight for wireless
carriers, and no one wants to be left out. The
major U.S. carriers are all closing the gap on their
4G LTE coverage and speeds, which means they'll
likely latch onto their 5G networks to
differentiate themselves. AT&T was dismissive
about any type of 5G talk just a few months ago,
but is now very open about its 5G plans. The
company's about-face shows just how much
carriers don't want to be seen as falling behind.
5G will cost more than 4G LTE connections, but
probably not much more. According to research
by the University of Bridgeport, carriers will likely
keep costs around the same as they are now, but
you'll get much faster speeds. That's because
carriers reduce the price of data by a little bit
each year. Huawei and Nokia believe 5G will cost
more than 4G LTE, but say that the carriers won't
be able to charge too much more than the
current rates.
Garima Singh
II B.Sc. CMS
5G NETWORKS
5G networks are the next generation of mobile internet connectivity, offering faster speeds and more reliable connections on smartphones and other devices than ever before. “5G” is also a marketing term covering several new mobile technologies.
The ITU IMT-2020 standard provides for speeds of up to 20 gigabits per second and has so far been demonstrated only with high-frequency millimetre waves of 15 gigahertz and higher. The ITU has
divided 5G network services into three
categories: enhanced Mobile Broadband (eMBB)
or handsets, Ultra-Reliable Low-Latency
Communications (URLLC), which includes
industrial applications and autonomous vehicles,
and Massive Machine Type Communications
(MMTC) or sensors.
With speeds of up to 100 gigabits per second, 5G
will be as much as 1,000 times faster than 4G, the
latest iteration of mobile data technology. Moreover, 5G will make communications so fast that they become almost real-time, putting mobile internet services on a par with office services.
5G will increase download speeds up to 10
gigabits per second. That means a full HD movie
can be downloaded in a matter of seconds. It will
also reduce latency significantly (giving people
faster load times). In short, it will give wireless
broadband the capacity it needs to power
thousands of connected devices that will reach
our homes and workplaces.
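As a back-of-the-envelope check of the download claim above, here is a small Python calculation (the 5 GB movie size is an illustrative assumption, not a figure from the article):

```python
# Rough download-time arithmetic for the link speeds quoted above.
movie_bits = 5 * 8 * 10**9  # a ~5 GB full HD movie, expressed in bits

links = {
    "4G LTE (5 Mbps)": 5e6,
    "5G (1 Gbps)": 1e9,
    "5G peak (10 Gbps)": 1e10,
}
for name, bits_per_second in links.items():
    print(f"{name}: {movie_bits / bits_per_second:,.0f} seconds")
```

At 10 Gbps the same movie that takes over two hours on a 5 Mbps 4G link downloads in about four seconds.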
5G technology is driven by eight specification requirements:
Up to 10 Gbps data rate (a 10 to 100x improvement over 4G and 4.5G networks)
1-millisecond latency
1,000x bandwidth per unit area
Up to 100x the number of connected devices per unit area (compared with 4G LTE)
99.999% availability
100% coverage
90% reduction in network energy usage
Up to 10-year battery life for low-power IoT devices
-Kritika. R Jain
II BCA
Any sufficiently advanced technology is indistinguishable from magic
INTRODUCTION TO DEEP LEARNING
Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Deep learning was introduced with the objective of moving machine learning closer to one of its original goals: Artificial Intelligence.
Machine learning is an application of AI that gives systems the ability to learn and improve automatically from experience without being explicitly programmed. Its primary aim is to allow computers to learn automatically.
The two main types of machine learning are supervised and unsupervised. In supervised machine learning, the data scientist acts as a guide, teaching the algorithm what conclusions it should come up with, while unsupervised machine learning is more closely aligned with what some call true artificial intelligence: the idea that a computer can learn to identify complex processes and patterns without a human providing guidance along the way.
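A minimal sketch of the two paradigms on the same data, using scikit-learn's toy Iris dataset (the library and models are our choice for illustration; the article names no specific tools):

```python
# Supervised vs. unsupervised learning on identical data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the labels y act as the "guide" described above.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised: no labels; the algorithm must find structure on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("first ten cluster assignments:", km.labels_[:10])
```

The only difference between the two calls is whether `y` is supplied: that single argument is what separates guided learning from pattern discovery.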
Artificial Neural Networks (ANN)
The inventor of the first neurocomputer, Dr.
Robert Hecht-Nielsen, defines a neural network
as −"...a computing system made up of a number
of simple, highly interconnected processing
elements, which process information by their
dynamic state response to external inputs.”
In computer science terms, the ANN is like an artificial human nervous system for receiving, processing, and transmitting information.
The idea of ANNs is based on the belief that the working of the human brain can be imitated, by making the right connections, using silicon and wires in place of living neurons and dendrites.
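A tiny NumPy sketch of Hecht-Nielsen's definition: a handful of simple, interconnected processing elements responding to an external input (the weights here are random, purely for illustration):

```python
# A minimal two-layer artificial neural network in plain NumPy.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # input -> 3 hidden "neurons"
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # hidden -> 1 output

x = np.array([0.5, -1.2])           # external input
hidden = sigmoid(W1 @ x + b1)       # each element sums its inputs and fires
output = sigmoid(W2 @ hidden + b2)  # the dynamic response propagates forward
print("network output:", output)
```

Training would adjust `W1` and `W2` so that the output matches known examples; the forward pass above is the "processing by dynamic state response" the definition describes.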
Why Deep Learning
Deep Learning has enabled many practical
applications of Machine Learning and by
extension the overall field of AI. Deep Learning
breaks down tasks in ways that makes all kinds
of machine assists seem possible, even likely.
Driverless cars, better preventive healthcare,
even better movie recommendations, are all
here today or on the horizon.
Deep Learning identifies defects that would otherwise be difficult to detect. When consistent images are hard to obtain due to ambient conditions, product reflection, or lens distortion, Deep Learning can account for these types of variations and learn meaningful features that make inspection robust.
It delivers best-in-class performance, significantly outperforming other solutions in multiple domains, including speech, language, vision, and playing games like Go, and not by a little, but by a significant amount.
It reduces the need for feature engineering, one of the most time-consuming parts of machine learning practice.
It is an architecture that can be adapted to new problems relatively easily (e.g., vision, time series, language) using techniques like convolutional neural networks, recurrent neural networks, and long short-term memory.
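As an illustration of one of the techniques named above, here is a minimal convolutional neural network in Keras (our choice of framework; any standard deep learning library would do):

```python
# A small convolutional network of the kind used for vision tasks.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),          # e.g., grayscale images
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn local features
    tf.keras.layers.MaxPooling2D(),                    # downsample feature maps
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10-class prediction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

The convolutional layer is what reduces feature engineering: instead of hand-designing image features, the network learns them from the raw pixels.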
- Tanya
II BCA
IT’S RIDDLE TIME….
1. What is an alien’s favourite place on the computer?
2. Why did the computer get glasses?
3. What does a computer do when it’s tired?
4. How do you know when a computer monitor is sad?
5. What do the cookie and the computer have in common?
Nihad Afza III MCA
Simplicity is about subtracting the obvious and adding the meaningful.
HOW AI IS DRIVING THE FUTURE OF AUTONOMOUS CARS
Over the past decade, the United States has seen
an explosion of autonomous vehicle technology
that has swept across the auto industry. This
wave of progress encompasses all aspects of computer technology and software engineering, driven by thought leaders from major automakers like Tesla, BMW, Ford, Audi, and even Google. While many have only started hearing about autonomous technology recently, self-driving car research has been going on for over 45 years. One of the earliest research publications on autonomous vehicle technology can be found in an IEEE Spectrum article from 1969. In it, lead engineers Robert E. Fenton and Karl W. Olson hypothesized that the future of automated vehicles would rely on “smart infrastructure” that would guide the cars on roadways. Instead, autonomous vehicles today rely on onboard technologies and state-of-the-art computers to observe and process their environment.
While the technology has made leaps and bounds, the ability of computers to understand their surroundings and make decisions based on relevant information has also improved. Artificial
Intelligence (AI) plays an integral role in the
progression of self-driving vehicles on public
roads.
AI: the brain of autonomous vehicles
Just like a human, a self-driving car needs sensors to understand the world around it and a brain that collects and processes information and chooses specific actions based on what was gathered. To this end, each autonomous vehicle is outfitted with advanced
tools to gather information, including long-range
radar, LIDAR, cameras, short/medium-range
radar, and ultrasound. Each of these
technologies is used in different capacities, and
each collects different information. However,
this information is useless unless it is processed
and some form of action is taken based on the
gathered information.
This is where Artificial Intelligence comes into play. It can be compared to the human brain, and its goal is to enable the self-driving car to conduct in-depth learning.
In a recent interview, Sameep Tandon, CEO and
co-founder of Drive.ai, explains “deep learning is
the best enabling technology for self-driving
cars.” He goes on to explain that “you hear a lot
about all these things on a car: the sensors, the
cameras, the radar, and LIDAR. What you need
are the brains to make an autonomous car work
safely and understand its environment.”
Artificial Intelligence has many applications for these vehicles; among the more immediate and obvious functions (a toy sense-and-decide sketch follows this list):
Directing the car to a gas station or recharging station when it is running low on fuel.
Adjusting the trip’s directions based on known traffic conditions to find the quickest route.
Incorporating speech recognition for advanced communication with passengers.
Eye tracking for improved driver monitoring.
Natural language interfaces and virtual assistance technologies.
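The sketch below illustrates the sense-process-act idea from this article (all names and thresholds are hypothetical; real vehicles fuse radar, LIDAR, camera, and ultrasound data with far more sophistication):

```python
# A greatly simplified "brain": fuse sensor readings into one action.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lidar_min_distance_m: float   # closest obstacle seen by LIDAR
    camera_sees_stop_sign: bool   # output of the vision system

def decide(frame: SensorFrame) -> str:
    """Choose a driving action from the gathered information."""
    if frame.camera_sees_stop_sign or frame.lidar_min_distance_m < 5.0:
        return "brake"
    return "maintain_speed"

print(decide(SensorFrame(lidar_min_distance_m=3.2, camera_sees_stop_sign=False)))
```

The point of the sketch is the division of labour: the sensors only gather data, and it is this decision layer, the "brain", that turns the data into an action.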
RINIYA BENNY
II BCA
Puzzle
Part of a family of machine learning methods.
Big and complex data set.
AI that conducts conversation via textual methods.
Makes enterprise software.
Software testing framework for web applications across browsers.
Data structures used in AI.
High-level programming language.
List of records secured using cryptography.
Field of study where machines act like humans.
Interpretation of meaningful patterns in data.
Set of rules followed in calculations by computers.
Faiza Rahaman
III BSc CMS
INDUSTRIAL VISIT
Philips Innovation Campus, Bengaluru
Date: 30th January 2017
IT CLUB ACTIVITY
CUL-WEEK
On 20th and 21st July 2017
Organised by
TALK ON ARTIFICIAL INTELLIGENCE
By Hemant Pawar, Service Delivery Manager at IBM Global Business Services
On 31st January 2018
III sem MCA - SUR-SANGRAM, SURANA COLLEGE INTERCOLLEGIATE FEST
Overall runners-up.
I place in digital collage & II place in digital marketing.
DRIVERLESS CAR
Driverless cars seem like something out of sci-fi movies, but in 2018 this is becoming reality: driverless cars are closer than ever to being an everyday part of the world.
What does Driverless Car mean?
A driverless car is a vehicle that can guide itself without human conduction. This kind of vehicle has become a concrete reality and may pave the way for future systems where computers take over the art of driving. It is also known as an autonomous car, robot car, or self-driving car, and is being developed by companies like BMW, Mercedes, Tesla, etc.
A driverless car uses an artificial intelligence system that senses its surroundings, processes the visual data to determine how to avoid collisions, operates car machinery like the steering and brakes, and also uses GPS to track the car’s current location and destination. To perceive their visual surroundings, most self-driving cars combine visual systems such as video cameras with deep learning software to interpret objects like street lights and stop signs; and while radar catches most obstacles instantly, it is not as good as lidar at spotting smaller obstacles.
Fully driverless tech is still at an advanced testing
stage, but partially automated technology has
been around for the last few years. Executive
saloons like the BMW 7 Series feature
automated parking, and can even be controlled
remotely.
Although autonomous cars function efficiently, they still face problems such as the unpredictable behaviour of humans, which represents a challenge for the technology. The Google Car is one of the most experienced autonomous vehicles, and its interaction with human drivers has exposed one of driverless cars’ main weaknesses: the first injury involving the Google Car wasn’t due to a fault in its system, but to human error.
For example, if I step in front of a driverless car, I have no idea how the car will behave; the car, however, is designed and programmed to predict how a human being is going to react. At this stage, the programming carries a risk of serious injury to humans or damage to property. Hence, autonomous cars are at risk when surrounded by human road users.
Until these problems are solved, fully autonomous cars will pose a dangerous risk to other road users. At the moment, driverless cars are only truly safe when tested and operated around other driverless cars in a controlled environment. But in the coming years, as the technology improves and accidents are avoided, we will all benefit from driverless cars.
Jyothi Khati
II B.Sc. CME
AI ALGORITHM CAN TEACH CARS SELF-DRIVING IN 20 MINUTES
A start-up founded by two researchers from the University of Cambridge has developed an artificial intelligence (AI) algorithm that can learn how to drive a car from scratch in 20 minutes. Instead of relying on external sensors and custom-built hardware, the start-up wants to create autonomous vehicles that are entirely software-based. The car was hooked up to a learning program that intelligently analysed the camera’s data in real time; then the car was given full autonomous control.
Every time it went off the road, a human driver corrected it. The algorithm kept tweaking itself accordingly, and within 20 minutes it had learned how to drive.
The AI algorithm powering the self-driving system doesn’t require any cloud connectivity or pre-loaded maps. It is a four-layer convolutional neural network that processes everything on a GPU.
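A hedged sketch of the correct-and-tweak loop described above (the start-up's actual algorithm is not public; this toy stands a single learned weight in for the four-layer network):

```python
# Toy online learning from human corrections: whenever the model's
# steering differs from the human driver's, nudge the model toward it.
import random

TRUE_STEER = 1.0   # stand-in for the human driver's steering policy

def model_steer(frame: float, weight: float) -> float:
    # Stand-in for the CNN: maps a camera frame to a steering command.
    return weight * frame

def train_with_corrections(frames, lr=0.1):
    weight = 0.0
    for frame in frames:
        action = model_steer(frame, weight)
        correction = TRUE_STEER * frame   # what the human driver actually did
        error = correction - action
        weight += lr * error * frame      # tweak the model accordingly
    return weight

frames = [random.uniform(-1.0, 1.0) for _ in range(1000)]
print("learned weight (should approach 1.0):", train_with_corrections(frames))
```

Each human correction supplies one training example, which is why the system can improve continuously during a single short drive rather than needing a pre-built map.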
"The missing piece of the self-driving puzzle is
intelligent algorithms, not more sensors, rules,
and maps," co-founder and CEO Amar Shah
said.
MEGHA BALIYAN
III PMC
The great growling engine of change – technology
BIG DATA
Training the many layers of virtual neurons in
the experiment took 16,000 computer
processors—the kind of computing
infrastructure that Google has developed for
its search engine and other services. At least 80
percent of the recent advances in AI can be
attributed to the availability of more computer
power, reckons Dileep George, cofounder of
the machine-learning start-up Vicarious.
There is more to it than the sheer size of
Google’s data centres, though. Deep learning
has also benefited from the company’s
method of splitting computing tasks among
many machines so they can be done much
more quickly. That is a technology Dean helped
develop earlier in his 14-year career at Google.
It vastly speeds up the training of deep-
learning neural networks as well, enabling
Google to run larger networks and feed a lot
more data to them.
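A minimal sketch of that split-and-combine idea (not Google's actual system; this is the generic map/reduce pattern, here counting words across two worker processes):

```python
# Split a job into shards, process them in parallel, then merge results.
from multiprocessing import Pool

def count_words(chunk: str) -> dict:
    """'Map' step: each worker counts words in its own shard."""
    counts = {}
    for word in chunk.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def merge(partials) -> dict:
    """'Reduce' step: combine the workers' partial counts."""
    total = {}
    for counts in partials:
        for word, n in counts.items():
            total[word] = total.get(word, 0) + n
    return total

if __name__ == "__main__":
    shards = ["deep learning at scale", "learning at google scale"]
    with Pool(2) as pool:
        partials = pool.map(count_words, shards)
    print(merge(partials))
```

The same principle, applied to gradients instead of word counts, is what lets many machines train one large neural network together.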
Already, deep learning has improved voice
search on smartphones. Until last year,
Google’s Android software used a method that
misunderstood many words. However, in
preparation for a new release of Android last
July, Dean and his team helped replace part of
the speech system with one based on deep
learning. Because the multiple layers of
neurons allow for more precise training on the
many variants of a sound, the system can
recognize scraps of sound more reliably,
especially in noisy environments such as
subway platforms. Since it is likelier to
understand what was actually uttered, the
result it returns is likelier to be accurate as
well. Almost overnight, the number of errors
fell by up to 25 percent—results so good that
many reviewers now deem Android’s voice
search smarter than Apple’s more famous Siri
voice assistant.
For all the advances, not everyone thinks deep
learning can move artificial intelligence toward
something rivalling human intelligence. Some
critics say deep learning and AI in general
ignore too much of the brain’s biology in
favour of brute-force computing.
One such critic is Jeff Hawkins, founder of Palm
Computing, whose latest venture, Numenta, is
developing a machine-learning system that is
biologically inspired but does not use deep
learning. Numenta’s system can help predict
energy consumption patterns and the
likelihood that a machine such as a windmill is
about to fail. Hawkins, author of On
Intelligence, a 2004 book on how the brain
works and how it might provide a guide to
building intelligent machines, says deep
learning fails to account for the concept of
time. Brains process streams of sensory data,
he says, and human learning depends on our
ability to recall sequences of patterns: when
you watch a video of a cat doing something
funny, it is the motion that matters, not a
series of still images like those Google used in
its experiment. “Google’s attitude is: lots of
data makes up for everything,” Hawkins says.
However, if it does not make up for everything,
the computing resources a company like
Google throws at these problems cannot be
dismissed. They are crucial, say deep-learning
advocates, because the brain itself is still so
much more complex than any of today’s neural
networks. “You need lots of computational
resources to make the ideas work at all,” says
Hinton.
RINIYA BENNY
II BCA
BITCOINS
Bitcoin is a new currency that was created in 2009 by an unknown person using the name Satoshi Nakamoto. Bitcoin is the first decentralized digital currency: bitcoins are digital coins you can send through the internet. Compared to other alternatives, bitcoins have a number of advantages. Bitcoins are transferred directly from person to person via the net without going through a bank or clearing house.
This means that the fees are much lower.
You can use them in every country.
Your account cannot be frozen.
There are no prerequisites or arbitrary limits.
How are they created?
Bitcoins are generated all over the internet by anybody running a free application called a Bitcoin miner. Mining requires a certain
amount of work for each block of coins. This amount is automatically adjusted by the network so that bitcoins are always created at a predictable and limited rate.
Your bitcoins are stored in your digital wallet, which might look familiar if you use online banking. When you transfer bitcoins, an electronic signature is added, and after a few minutes the transaction is verified by a miner and permanently and anonymously stored in the network on a ledger called the blockchain. After a certain number of transactions have been verified, the miner receives newly minted bitcoins for their work.
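A toy proof-of-work loop illustrating the "certain amount of work" above (heavily simplified: real Bitcoin mining double-hashes an 80-byte block header against a far harder target):

```python
# Find a nonce whose SHA-256 hash starts with a given number of zeros.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce          # proof that the work was done
        nonce += 1

nonce = mine("block of verified transactions")
print("winning nonce:", nonce)
# Raising `difficulty` is how the network keeps coin creation at a
# predictable, limited rate as miners get faster.
```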
The Bitcoin software is completely open source, and anybody can review the code. Bitcoin is changing finance the same way the web changed publishing: when everyone has access to a global market, great ideas flourish.
Some examples of how bitcoins are already used today:
You can purchase video games, gifts, books, and servers.
Several currency exchanges exist where you can trade your bitcoins for dollars, euros, and more.
Bitcoin is a great way for small businesses and freelancers to get noticed. It doesn’t cost anything to start accepting them, there are no chargebacks or fees, and you’ll get additional business from the bitcoin economy.
VINDHYA.H II BCA
5 THINGS YOU SHOULD KNOW ABOUT BITCOIN AND DIGITAL
CURRENCIES
The difference between virtual, digital,
and cryptocurrencies
Virtual currency was defined in 2012 by the
European Central Bank as "a type of
unregulated, digital money, which is issued
and usually controlled by its developers, and
used and accepted among the members of a
specific virtual community." Last year, the US Department of the Treasury said that digital currency operates like traditional currency but does not have all the same attributes; notably, it does not have legal tender status.
Digital currency, however, is a form of virtual
currency that is electronically created and
stored. Some types of digital currencies are
cryptocurrencies, but not all of them are.
So that leads us to the more specific definition
of a cryptocurrency, which is a subset of digital
currencies that uses cryptography for security
so that it is extremely difficult to counterfeit. A
defining feature of these is the fact that they are not issued by any central authority.
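To illustrate why cryptography makes these currencies so hard to counterfeit, here is a toy signing-and-verification sketch using the third-party `cryptography` package (illustrative only, not real Bitcoin code):

```python
# Only the private-key holder can sign a transaction; anyone can verify
# it with the public key, and forged transactions fail verification.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # Bitcoin's curve
public_key = private_key.public_key()

transaction = b"Alice pays Bob 0.5 BTC"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Raises cryptography.exceptions.InvalidSignature if anything was altered.
public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```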
The origin of Bitcoin
Bitcoin is a cryptocurrency, a number associated with a Bitcoin address. In 2008, a programmer (or group of programmers) under the pseudonym Satoshi Nakamoto published a paper describing digital currencies. Then, in 2009, Nakamoto launched the software that created the first Bitcoin network and cryptocurrency. Bitcoin was created to take power out of the hands of governments and central bankers, and put it back into the hands of the people.
The origin of Dogecoin
Dogecoin is a form of cryptocurrency that was
created in December 2013. It features Doge,
the Shiba Inu that has turned into a famous
internet meme. It was created by Billy Markus
from Portland, Oregon, who wanted to reach a
broader demographic than Bitcoin did. As of March, more than 65 billion Dogecoins have been mined, and this cryptocurrency is produced on a faster schedule than most.
Other types of digital currencies
There are other types of digital currencies,
though we don't hear much about them. The
next most popular is probably Litecoin, which
is accepted by some online retailers. It was
inspired by Bitcoin and is nearly identical, but
it was created to improve upon Bitcoin by
using open source design.
There are many other types of cryptocurrencies, such as Peercoin, Ripple, Mastercoin, and Namecoin. Cryptocurrencies draw some criticism because they are often replicas of other versions, with no real improvements.
Where you can use Bitcoin
There are many places you can use Bitcoin to
purchase products or services. There is no real
rhyme or reason to the list, which includes big
corporations and smaller, independent
retailers including bakeries and restaurants.
You can also use the currencies to buy flights, train tickets, and hotels on CheapAir; upgrades to your OkCupid profile; products on Overstock.com; and gift cards on eGifter.
There's a list on Spend Bitcoins that shows all
the places that accept the cryptocurrency.
RINIYA BENNY
II BCA
It’s not that we use technology, we live technology
ANSWERS
CROSSWORD
RIDDLES:
1. Space
2. To improve its web sight
3. It crashes
4. When it breaks down
5. They both have chips