CLEAR Dec 2012
Volume-1 Issue-2
CLEAR Magazine
(Computational Linguistics in
Engineering And Research)
M. Tech Computational Linguistics
Dept. of Computer Science and
Engineering
Govt. Engineering College,
Sreekrishnapuram, Palakkad
678633
Chief Editor
Dr. P. C. Reghu Raj
Professor and Head
Dept. of Computer Science and
Engineering
Govt. Engineering College,
Sreekrishnapuram, Palakkad
Editors
Manu Madhavan
Robert Jesuraj. K
Athira P M
Cover page and Layout
Mujeeb Rehman. O
Fuzzy Logic Applications in Natural Language Processing
…Our understanding of most physical processes is based largely on imprecise human reasoning. This imprecision (when compared to the precise quantities required by computers) is nonetheless a form of information that can be quite useful to humans…

Indic Language Computing: A Review
…But with almost three dozen major languages and hundreds of dialects, the task is more complex in India. The tools present in the global market cannot be replicated owing to the complexity of multiple languages that exist in the country…

Natural Language Processing and Human Computer Interaction
…With data mining, Wal-Mart was able to figure out that diapers and beer were bought together. This allowed them to position those two groceries closer together. We can see that a normal human would not be able to…

Google Driverless Car
…The Google car project team was working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver. With someone behind the wheel to take control…

GNU Octave
…a tool for numerical calculations and solving numerical problems…
Dear Readers!
Welcome back to the world of Computational Linguistics. This edition of
CLEAR brings to you some insight into current trends in Indian Language
Computing, Fuzzy logic applications etc. It is heartening to note that
better recognition of the importance of language processing using
computational means is visible among the computing community. Our
interaction with various academic and R&D organizations of repute in the
country definitely shows the emergence of new applications of CL, ASR, etc. in implementing better HCI modules. This has given us fresh energy to work harder. At the same time, it was a disappointment to see that the response to our call for a national conference on CL and IR did not attract the attention of the research community in this field.
gap between the demand and supply of ideas and people in CL/NLP. It is
this gap that CLEAR aims to reduce.
The CLEAR team wishes all the readers a Merry Christmas and a
prosperous year ahead!
Sincerely,
Reghu Raj
Fuzzy Logic Applications in Natural Language Processing

Author
Divya S
M. Tech Computational Linguistics, Govt. Engineering College, Sreekrishnapuram, Palakkad
Fuzzy logic is an approach to
computing based on degrees
of truth rather than the usual
true or false (1 or 0) Boolean
logic on which the modern
computer is based. Natural
language (like most other
activities in life) is not easily
translated into the absolute
terms of 0 and 1. Fuzzy logic
includes 0 and 1 as extreme
cases of truth but also
includes the various states of
truth in between. Fuzzy logic
deals mathematically with
imprecise information usually
employed by humans.
Fuzzy Logic has widespread
applications in the field of
natural language processing.
We discuss some applications
of fuzzy logic in NLP. Lotfi A
Zadeh's work on Computing
with Words is an important
application of fuzzy logic in
natural language processing.
Here we also discuss a fuzzy
logic based natural language
processing system for speech
recognition, and a fuzzy logic
based term weighting scheme
used for information extraction.
We also discuss how fuzzy logic
and fuzzy reasoning are used to
deal with uncertainty
information in Panini's Sanskrit
Grammar.
Fuzzy Logic
Our understanding of most
physical processes is based
largely on imprecise human
reasoning. This imprecision
(when compared to the precise
quantities required by
computers) is nonetheless a
form of information that can be
quite useful to humans. The
ability to embed such reasoning
in hitherto intractable and
complex problems is the
criterion by which the efficiency
of fuzzy logic is judged.
Undoubtedly this ability cannot solve problems that require precision. But many human problems do not require such precision: problems such as parking a car, backing up a trailer, navigating a car among others on a freeway, washing clothes, controlling traffic at intersections, judging beauty contests, and gaining a preliminary understanding of a complex system. For such problems, fuzzy logic takes the focus.
Fuzzy logic resembles human
decision making with an ability to
generate precise solutions from
certain or approximate information.
It fills an important gap in
engineering design methods left
vacant by purely mathematical
approaches (e.g. linear control
design), and purely logic-based
approaches (e.g. expert systems) in
system design.
Fuzzy Logic allows something to be
partially true and partially false. A
simple example follows: Is a man
who stands 170 centimeters (5'6")
considered to be tall? Traditionally
we must define a threshold over
which a man of a certain height is
considered a member of the tall set
and under which he is not. Fuzzy
Logic allows one to speak of a 170
cm man as both a member of the
tall set and the medium set, and
possibly even the short set. He may
be considered to a larger degree a
member of the medium set than he
is of the tall set. A man who stands
190 centimeters will be to a higher
degree a member of the tall set. If a problem suggests there is some consequence related to the height of a tall man, then the consequence can be applied or inferred in relation to his degree of membership in the tall set.
Basically, Fuzzy Logic (FL) is a multivalued logic that allows intermediate values to be defined between conventional evaluations like true/false, yes/no, high/low, etc. Notions like
rather tall or very fast can be
formulated mathematically and
processed by computers, in order to
apply a more human-like way of
thinking in the programming of
computers.
Fuzzy Logic can be used to generate solutions to problems based on "vague, ambiguous, qualitative, incomplete or imprecise information". The use of fuzzy
logic is an effective alternative in
natural language analysis compared to
statistical and other approaches. It is
commonly recognized that many
phenomena in natural language lend
themselves to descriptions by fuzzy
mathematics, including fuzzy sets, fuzzy
relations and fuzzy logic.
Fuzzy logic deals mathematically with
imprecise information usually employed
by humans. When considering the use
of fuzzy logic for a given problem, an
engineer or scientist should ponder the
need for exploiting the tolerance for
imprecision.
Fuzzy Set and Crisp Set
The universe of discourse is
the universe of all available
information on a given
problem. Once this universe is defined, it is possible to define certain events on this information space. Sets are
described as mathematical
abstractions of these events
and of the universe itself. A
classical set is defined by
crisp boundaries, i.e., there
is no uncertainty in the
prescription or location of the
boundaries of the set, as
shown in Figure 1(a), where the boundary of crisp set A is an unambiguous line. In Figure 1(a), point a is clearly a member of crisp set A; point b is unambiguously not a member of set A.
Figure 1: (a) Crisp set; (b) fuzzy set
A fuzzy set, on the other
hand, is prescribed by vague
or ambiguous properties;
the fuzzy set A is shown in Figure 1(b), where the shaded boundary represents the boundary region of A. In the central (unshaded) region of the fuzzy set, point a is clearly a full member of the set. Outside the boundary region, point b is clearly not a member of the fuzzy set. However, the membership of point c, which lies in the boundary region, is ambiguous. If complete membership in a set (such as point a in Figure 1(b)) is represented by the value 1, and non-membership (such as point b in Figure 1(b)) is represented by 0, then point c must have some intermediate value of membership (partial membership in fuzzy set A) on the interval [0,1].
Presumably the membership of point
c in A approaches a value of 1 as it
moves closer to the central
(unshaded) region of A, and the
membership of point c in A
approaches a value of 0 as it moves
closer to leaving the boundary region
of A. Fuzzy sets cover virtually all of
the definitions, precepts, and axioms
that define classical sets. Crisp sets
are a special form of fuzzy sets; they
are sets without ambiguity in their
membership (i.e., they are sets with
unambiguous boundaries).
In classical, or crisp, sets the
transition for an element in the
universe between membership and
non-membership in a given set is
abrupt and well defined. For an
element in a universe that contains
fuzzy sets, this transition can be
gradual. This transition among various
degrees of membership can be
thought of as conforming to the fact
that the boundaries of the fuzzy sets
are vague and ambiguous. Hence,
membership of an element from the
universe in this set is measured by a
function that attempts to describe
vagueness and ambiguity. If an element of the universe, say x, is a member of fuzzy set A, then this mapping is given by the membership function µA(x) ∈ [0, 1].
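To make the idea concrete, here is a minimal sketch in GNU Octave (the tool introduced later in this issue). The 160-190 cm ramp is an invented example of one possible membership function for "tall", not a standard definition:

% Membership in "tall" rises linearly from 0 at 160 cm to 1 at 190 cm.
mu_tall = @(h) min (max ((h - 160) / 30, 0), 1);
mu_tall (170)   % ans = 0.3333 -- partially tall, as in the example above
mu_tall (190)   % ans = 1      -- fully tall

Any function mapping the universe into [0,1] can serve as a membership function; its shape is a design choice.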
Fuzzy Logic and NLP
Computing with words is a
methodology in which the objects of
computation are words and
propositions drawn from a natural
language, e.g., small, large, far,
heavy, not very likely, Berkeley is
near San Francisco, etc. Computing
with words is inspired by the
remarkable human capability to
perform a wide variety of physical and
mental tasks without any
measurements and any computations.
Underlying this remarkable capability is the brain's crucial
ability to manipulate perceptions
of distance, size, weight, color,
speed, time, direction, force,
number, truth, likelihood and
other characteristics of physical
and mental objects. Manipulation
of perceptions plays a key role in
human recognition, decision and
execution processes. Computing
with words provides a foundation
for a computational theory of
perceptions, a theory which may
have an important bearing on
how humans make and machines
might make perception-based
rational decisions in an
environment of imprecision,
uncertainty and partial truth.
A basic difference between
perceptions and measurements
is that, in general, measurements
are crisp whereas perceptions are
fuzzy. To deal with perceptions it
is necessary to employ a logical
system that is fuzzy rather than
crisp. The computational theory of
perceptions, or CTP for short, is
based on the methodology of
computing with words (CW).
In CTP, words play the role of
labels of perceptions and, more
generally, perceptions are
expressed as propositions in a
natural language. CW-based
techniques are employed to
translate propositions expressed in
a natural language into what is
called the Generalized Constraint
Language (GCL). Fuzzy logic has been successfully applied to the description of word meanings as related to language-external phenomena [4]. Another case of
fuzzy application is natural
language-driven database search.
Here the semantics of words can be
expressed as fuzzy membership
functions for certain database
search keys [Medina, Vila]. A
language internal fuzzy treatment
is found in [Subasic], in which
affect types of certain words in
documents are dealt with as fuzzy
sets. Words representing emotions
are mapped to these fuzzy sets.
The difference between this case and the previous two is that this one deals with language-internal fuzzy phenomena.
Fuzzy Logic in Speech Recognition

Fuzzy Logic has many applications in Natural Language Processing. A fuzzy logic based NLP system can learn from
a linguistic corpus the fuzzy semantic
relations between the concepts
represented by words and use such
relations to process the word
sequences generated by speech
recognition systems [2]. Fuzzy logic
has also been successfully applied to
the description of word meanings as related to language-external phenomena. Also, fuzzy linguistic
descriptors have been used in control
systems, in which mappings can be
established between fuzzy linguistic
terms and physical quantities. Hot,
cold, for example, can serve as labels
for fuzzy sets to which temperature
readings can be mapped into
membership degrees. Fuzzy logic rules
for control systems can accept fuzzy
descriptors in both the premises and
the consequents to simulate human-
like inference.
The main goal of such a system is efficient processing of speech recognition output. Speech recognition systems are applied to restricted domains, which means the vocabulary size, word senses, and syntactic constructs are restricted. Here are some often-encountered phenomena in a domain-constrained speech system (an illustrative sketch follows the list):
- Out-of-vocabulary words. A user may speak words that are not contained in the system lexicon.
- Speech recognizer errors. The recognizer may match a word to a wrong word, insert or delete a word, etc.
- Flexible structures. The user may use expressions that the system's grammar does not cover.
- Disfluency. False starts, re-phrasing, repeated words, mis-pronounced words, half-pronounced words, filled pauses, etc. These can make the system confused about word semantic relations.
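The following is an illustrative GNU Octave sketch of how learned fuzzy semantic relations could be used to re-rank recognizer hypotheses. The vocabulary, relatedness values, and min-based scoring are invented for illustration; they are not taken from the cited system:

% Fuzzy semantic relatedness over a tiny 3-word vocabulary (made-up values).
rel = [1.0 0.8 0.1;
       0.8 1.0 0.2;
       0.1 0.2 1.0];
hyp1 = [1 2];   % e.g. "book flight"
hyp2 = [1 3];   % e.g. "book fight" -- an acoustic confusion
% Score each hypothesis by the weakest semantic link between adjacent words.
score1 = min (rel (sub2ind (size (rel), hyp1(1:end-1), hyp1(2:end))))  % 0.8
score2 = min (rel (sub2ind (size (rel), hyp2(1:end-1), hyp2(2:end))))  % 0.1

The semantically coherent hypothesis scores higher, so recognizer errors of the kinds listed above can be detected and penalized.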
Fuzzy Logic Based Term Weighting
Term weighting (TW) is one of the major challenges in IE and IR. The values of the weights must be related somehow to the importance of an index term in its corresponding set of knowledge; in this case, Topic, Section or Object. In the FL-based term weighting scheme, every index term has an associated weight. This weight has a value between 0 and 1 depending on the importance of the term at every hierarchy level; greater importance means a higher weight. An FL engine is used to determine the degree of certainty, or importance, of a document for a given query. The index term weights for every level act as inputs to the FL engine, and its output is the degree of certainty. If the degree of certainty is lower than a certain threshold, the content is rejected.
In this method, the whole set of
knowledge, which constitutes the
hierarchic level 0, is divided into
level 1 subsets. For each level 1
subset, index terms must have
certain weights, which are the
possible inputs to an FL engine. If
the degree of certainty
corresponding to a subset is lower
than a predefined value, named
threshold, the content of the corresponding subset is rejected.
For every subset that overcomes
the threshold of certainty, the
process is repeated. Now, the
inputs to the FL engine are the
level 2 weights for the
corresponding index terms and
the process is repeated. The final output corresponds to the elements of the last level, that is to say, the objects whose degree of certainty overcomes the definitive threshold. Figure 4.1 of [6] shows the process for a two-level hierarchic structure.
Implementations of this FL engine have been quite successful.
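The following is a minimal GNU Octave sketch of the level-by-level filtering just described. The weights and threshold are invented, and a simple fuzzy AND (min) stands in for the full FL engine of [6]:

% Each row holds the weights of the query's index terms in one level-1 subset.
W = [0.9 0.7;
     0.4 0.2;
     0.8 0.6];
certainty = min (W, [], 2);          % degree of certainty per subset
surviving = find (certainty >= 0.5)  % subsets 1 and 3 survive the threshold
% The same computation is then repeated with the level-2 weights of the
% surviving subsets, and so on down the hierarchy.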
Fuzzy Modeling for Panini's Sanskrit Grammar

Indian languages have a long history among the world's natural languages. Panini was the first to define a grammar for the Sanskrit language, with about 4,000 rules, in the fifth century BCE. These rules contain uncertainty information, and computer processing of Sanskrit is not possible without handling that uncertain information. Grammars are defined for either programming
Computer processing of natural languages and language translation is an application area in the computer field. Panini proposed a grammar with 4,000 rules for Sanskrit. These are categorized into different sets, one of which is the Syadvada set. The Syadvada set contains seven possibilities, given below.
1. May be, it is (Syadasti):
µSyadasti(x) ∈ [0, 1]

2. May be, it is not (Syad nasti):
µSyadnasti(x) = 1 - µSyadasti(x)

3. May be it is, and it is not, at different times (Syad asti-nasti):
µSyadasti(x) ^ (1 - µSyadasti(x)) ^ µdifferenttimes(x, t)

4. May be it is and it is not at the same time, and is indescribable (Syad asti-nasti-avaktavya):
µSyadasti(x) ^ (1 - µSyadasti(x)) ^ µavaktavya(x)

5. May be it is, and yet indescribable (Syad asti-avaktavya):
µSyadasti(x) ^ µavaktavya(x)

6. May be it is not, and also indescribable (Syad nasti-avaktavya):
(1 - µSyadasti(x)) ^ µavaktavya(x)

7. May be, it is indescribable (Syad avaktavya):
µavaktavya(x)

Here ^ denotes the fuzzy AND (minimum), µavaktavya(x) is the degree to which x is indescribable, and µdifferenttimes(x, t) is the degree to which the two assertions hold at different times.
This fuzzy representation of Sanskrit sentences can then be used for fuzzy reasoning. For instance, consider the two sentences:

May be, it is. (Syadasti)
May be it is, and it is not, at different times. (Syad asti-nasti)

Using a rule R1, the inference "it is, and it is not, at different times" is drawn with fuzziness µSyadasti(x) ^ µSyadasti-nasti(x).
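As a small numeric illustration (the values here are invented), the conjunctions above can be computed in GNU Octave, taking ^ as min:

mu_syadasti = 0.7;               % "may be, it is"
mu_syadnasti = 1 - mu_syadasti;  % "may be, it is not" = 0.3
min (mu_syadasti, mu_syadnasti)  % combined fuzziness: ans = 0.3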
Conclusion
Fuzzy logic deals mathematically with the imprecise information usually employed by humans. Fuzzy logic and fuzzy systems try to mimic human thinking and approximation; fuzzy logic is a multi-valued logic that extends Boolean logic.
Fuzzy Logic based NLP system can learn from a
linguistic corpus the fuzzy semantic relations
between the concepts represented by words and
use such relations to process the word
sequences generated by speech recognition
systems. An intelligent agent based on fuzzy
logic is used for information extraction. A new
term weighting scheme based on fuzzy logic is
introduced. When perceptions are described in
words, manipulation of perceptions is reduced to
computing with words (CW). FL is applied for
computation with words.
Panini proposed grammar with 4000 rules for
Sanskrit. Fuzzy logic and fuzzy reasoning are
discussed to deal with uncertainty
information in Panini's Sanskrit Grammar.
References:
1. Jiping Sun, Fakhri Karray, Otman Basir and Mohamed Kamel, "Fuzzy Logic-Based Natural Language Processing and Its Application to Speech Recognition," Department of Electrical and Computer Engineering, University of Waterloo.
2. Lotfi A. Zadeh, "From Computing with Numbers to Computing with Words: From Manipulation of Measurements to Manipulation of Perceptions," Int. J. Appl. Math. Comput. Sci., 2002, Vol. 12, No. 3, pp. 307-324.
3. Timothy J. Ross (2010), Fuzzy Logic with Engineering Applications, Third Edition, Wiley India Pvt. Ltd.
4. Zadeh, L. A., "Fuzzy sets," Inf. Control, Vol. 8, pp. 338-353.
5. P. Venkata Subba Reddy, "Fuzzy Modeling and Natural Language Processing for Panini's Sanskrit Grammar", Journal of Computer Science and Engineering, Volume 1, Issue 1, May 2010.
6. Ropero, J., et al., "A Fuzzy Logic intelligent agent for Information Extraction: Introducing a new Fuzzy Logic-based term weighting scheme," Expert Systems with Applications (2011), doi:10.1016/j.eswa.2011.10.009.
Indic Language Computing: A Review
Author
Manu Madhavan
M. Tech Computational Linguistics, Govt. Engineering College, Sreekrishnapuram, Palakkad

In this twenty-first century, where
Computation and Information technologies
have reached incomparable heights, Language Computing may not be a buzzword. It is a rapidly evolving research area, pushing fast-growing technologies to grow even faster. The people involved and the organizations invested in this area show the future and scope of this technology.
Even though India is a dominant IT service provider, language computing is still struggling here to find its marketplace. Why do Indian engineers fail to bring this technology to the common man? This article collates different views on Indic Language Computing, its challenges and applications.
Through the IT movement driven by current technological innovations, our country is promoting the maximum exploration of electronic media and the internet for reaching the people. But in many under-developed areas of the country, people know only their own mother tongue, so the 'exploring' visualized by the Government is not effective. The
solution is providing the technology
in their own language. People
throughout the world have been
using computers and Internet in
their own languages. Somehow,
Indian users are compelled to use
them in English. In Western countries, language computing is an active research area; many intelligent systems have been developed for English, even with speech capability. But with
almost three dozen major languages
and hundreds of dialects, the task is
more complex in India. The tools
present in the global market cannot
be replicated owing to the
complexity of multiple languages
that exist in the country. For translation into Indian languages, a one-to-one mapping of words to form a sentence is not workable.
The methodology to be followed
here is to first process the source
language, convert words according
to the target language, and then
process it all again with respect to
the target language for the
conversion to make sense. Despite these complexities, the current translation systems and other language-computing resources developed by different research institutes and volunteer NLP enthusiasts show a hopeful future.
Challenges:
Indian language computing has faced many challenges since its early days, and faces many even today. Let's go through some of them.
Dialects: Apart from the
typical nature of Indian
languages, cultures also
affect our language usage
and pronunciation. For
example, in northern parts
of India, Hindi is spoken in
varied forms across different
states and cities. Thus we
cannot have a generic tool,
especially for translation, and all tools
have to be developed for all of the
languages.
Corpus: One of the important resources for language computing is the corpus. Some languages are spoken by a large number of people, others by a small group. So, getting a good corpus is difficult. The criteria for
sample collection require the target
group to be computer savvy and
conversant in English as well as the
local language. This narrows down the
number of people who can be
contacted for giving samples of the local lingo.
Linguistic Features: Indian
languages are morphologically richer
than English. So, computing all the valid inflections and derivations in a language is challenging. A relief is that these languages are strictly structured by well-defined grammars, and the ambiguity is less compared to English. The presence of postfixes (suffixes) instead of prefixes and the existence of free word order make things more difficult.
Translating Jargon: Most computer jargon and technical phrases are not grammatically complete sentences; they are just computer commands. Also, words like document, folder, delimiters, and add-ons are not listed in any dictionary of Indian languages. While in some
languages it has been transliterated
and retained as it is, experts of some
other languages went on to create a
whole new set of words
corresponding to the IT terminology.
Script: Representing the Indian
scripts in digital format is difficult,
even with the development of
Unicode. The lack of standards in this
representation suppresses the use of
local languages in internet media.
ISCII, a representation similar to ASCII for English, is a standard developed for Indian languages, and the Government of India has now accepted Unicode standard characters for Indian languages. Transliteration for
Indian languages is considerably
successful today. Indic languages are
languishing due to lack of
standardization and available
technology.
Shakti Standard Format

Shakti Standard Format (SSF) is a highly readable
representation for storing
language analysis. It is
designed to be used as a
common format or common
representation on which all
modules of a system operate.
The representation is extensible, allowing different modules to add their analyses.
SSF also permits partial
analysis to be represented and
operated upon by different
modules. This leads to graceful
degradation in case some
modules fail to properly
analyze a difficult sentence.
(Developed by LTRC, IIIT-
Hyderabad)
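A simplified illustration of the kind of column-oriented analysis SSF stores is sketched below; the exact field conventions are defined in LTRC's SSF documentation, so treat this only as an indicative example:

<Sentence id="1">
1     ((        NP
1.1   children  NNS
      ))
2     laughed   VBD
</Sentence>

Each line carries an address, a token or chunk bracket, and the analysis produced so far; later modules can append their own attributes without disturbing earlier ones.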
Applications:
One prominent use is the
digitization or creation of e-
books of the mounds of rich
literature in different Indian
languages. This would enable greater and better digitization of libraries across the Indian cultural terrain. Physical
documents can be converted
into e-documents and these can
be further read out using text-
to-speech engines developed by
private companies and
institutions.
Another application of language computing comes into play with the concepts of cross-lingual search and the WordNet being developed by Pushpak Bhattacharyya, professor of computer science and engineering at IIT-Bombay and head of the Laboratory for Intelligent Internet Access at the institute.
Software Localization

TDIL defines software localization as the process of adapting a software product
to the linguistic, cultural and technical
requirements of a target market. This
process is labor-intensive and often
requires a significant amount of time from
the development teams. So in addition to
translation, the localization process may
also include adapting graphics to the target
markets, modifying content layout to fit the
translated text, converting to local
currencies, using proper formats for
dates, addresses, and phone numbers,
addressing local regulations and more. The
goal is to provide a product with the look
and feel of having been created for the
target market to eliminate or minimize
local sensitivities.
Speech is the area yet to be explored.
There are hardly any successful speech
processors. An efficient speech system in a local language, say for railway ticket booking, would help illiterate people. The
IBM voice web (When the user dials the
Voice Number of a website, he or she gets
to hear the content of the respective site
over the phone) is an interesting
application in this field. A language tutor
for Indian languages is also possible in the speech realm. Mobile applications based on NLP and speech systems have interesting scope in the Indian market.
Microsoft's BhashaIndia
Towards establishing direct contact and providing a common platform to the larger community of people, including students, linguists, and academicians, Microsoft launched the portal www.bhashaindia.com. This
portal aims at building a
community of developers and
linguistic academia who will
contribute towards the
development and use of Indian
languages for PC usage. The
portal is a one-point reference for all Indic-related activities.
Additionally this portal would
be of interest and use for
general PC users, educational
and training institutions, and
government agencies.
BhashaIndia, India's leading Indic computing community portal, has over 15,000 registered users and continues to grow by the day. It has
become a one stop center for
all resources related to Indian
language computing. Articles,
latest news, snippets of
interesting information and
resources like applications
related to Indic computing are
all available on this site. Today
BhashaIndia has become the
destination for anybody
interested in Indian language
computing.
Ref : www.bhashaindia.com
- Sreejith C
Research Initiatives:
Different centers of C-DAC—in
Bangalore, Kolkata, Mumbai,
Noida, Pune, and
Thiruvananthapuram—work on
language computing technologies.
Their activities include
development of smaller utilities
like desktops and Internet access
in Indian languages and core
research in areas of machine
translation, OCR, cross-lingual
access, search engines,
standardization, digital library,
and more. Other smaller groups
are also being seen as key players
in the field -- including the IIT-
Madras group that has been
working and incubating innovative
Indian-language solutions, the
NCST (National Centre for
Software Technology) in Mumbai,
and the IIIT (International
Institute of Information
Technology) in Hyderabad, which
has done impressive work on
machine-translation and related
areas. The work of NLP communities like Swathanthra Malayalam Computing (SMC), the International Forum for Information Technology in Tamil (INFIT), Wikimedia, etc. is commendable; these groups play a key role in the development of this area.
The SILPA project and Indian-language versions of Linux are some of the efforts from these volunteer groups. The open-source environment provides large scope for future development.
Need for Tomorrow:
The major problem in this field is the lack of central co-ordination. More people have to come forward to work in this area, and the Government has to take the necessary steps to teach language computing technology to engineering graduates. It is very clear that the survival of our languages in the cyber world is essential to make every citizen a global citizen.
CLIP
The Microsoft Captions
Language Interface Pack (CLIP)
is a simple language translation
solution that uses tooltip
captions to display results. Use
CLIP as a language aid, to see
translations in your own dialect,
update results in your own
native tongue or use it as a
learning tool.
CLIP is designed to enable and
support indigenous languages
and native dialects and is the
result of the close collaboration
between Microsoft and local
communities. Users will be able
to download multiple languages,
switching target translations
quickly and easily.
To use, simply move your
mouse around the screen and
halt briefly over any text you
want translated. Users can also
add their own translations and
copy and paste any results.
- Sreejith C
References:
1. http://magazine.itmagz.com/index.php/component/content/article/521.html
2. http://bhashaindia.com/Developers/Tutorial/Pages/IndianLanguageComputing.aspx
3. http://www.technologyreview.in/computing/37921/
Natural Language Processing and Human Computer Interaction
Author
Sreejith C
M. Tech Computational Linguistics, Govt. Engineering College, Sreekrishnapuram, Palakkad
Natural Language Processing, also
abbreviated as NLP, is a field of
computer science. The field focuses on
helping computers understand and
interpret human languages. Human
languages are also known as natural
languages, thus the term
NLP. Computers are programmed to
try and interpret an input sentence in a
natural language into a more formal
computerized representation. Many NLP problems apply to both the generation and the understanding of natural languages. A computer must have a model of a natural language in order to understand it, and patterns of the language must be programmed in order to produce a grammatically correct sentence in that particular natural language.
NLP is considered to have great
potential to provide services for
corporate companies and governmental
agencies. In present times, electronics are relied upon for many day-to-day tasks, and our society relies on them more with each passing day. This high demand for
sophisticated electronics shows a need
for technology such as NLP.
What makes this field
really interesting is
that not only do we
have the computer try
and understand a
human language; we
have a way to
investigate and learn
more about natural
languages in
general. To illustrate
this, we can look at
data mining, a field
that tries to describe
and predict
outcomes. With data
mining, Wal-Mart was
able to figure out that
diapers and beer were
bought together. This
allowed them to
position those two
groceries closer
together. We can see
that a normal human
would not be able to
figure out this relation
but with a computer,
it is very possible to
find out more
information about
natural languages
using NLP.
Over the past few years, our research group has comprised researchers from both the Human-Computer Interaction (HCI) and Natural Language Processing (NLP) communities, who have been exploring how the two communities can benefit each other. This paper intends to
present several views on this
topic, as well as some basic
concepts and examples of how
the two disciplines meet in
specific projects. This paper will
focus on the relationships that
can exist between HCI and NLP.
Some neat technologies have been
developed to explore the field of
NLP. For example, there are chat
bots that can have conversations with
a human or another bot. These
machines can learn more about how
humans talk to each other and
simulate a human. Other applications
include tools to help investigate
plagiarism. So, for example, if I
decided to simply copy and paste
content from a small set of websites,
programs can figure out that there is
a high relationship between the page I
created and the websites that were
listed as a reference. This compilation
of websites explores NLP by exploring
its history, its uses, and its side
effects, good and bad.
Human–Computer Interaction

Human–computer interaction (HCI) involves the study, planning, and
design of the interaction between
people (users) and computers. It is
often regarded as the intersection of
computer science, behavioral
sciences, design and several other
fields of study. Because human–
computer interaction studies a human
and a machine in conjunction, it
draws from supporting knowledge on
both the machine and the human side.
On the machine side, techniques in
computer graphics, operating
systems, programming languages,
and development environments are
relevant. On the human side,
communication theory, graphic and
industrial design disciplines,
linguistics, social sciences, cognitive
psychology, and human factors such
as computer user satisfaction are
relevant. Engineering and design
methods are also relevant. Due to the
multidisciplinary nature of HCI, people
with different backgrounds contribute
to its success. HCI is also sometimes
referred to as man–machine
interaction (MMI) or computer–human
interaction (CHI). A basic goal of HCI
is to improve the interactions between
users and computers by making
computers more usable and receptive
to the user's needs. Researchers in
HCI are interested in developing new
design methodologies, experimenting
with new hardware devices,
prototyping new software systems,
exploring new paradigms for
interaction, and developing models
and theories of interaction.
Relationship between
HCI and Natural
Language Processing
To answer the first hot question (are HCI and NLP complementary fields?), we need to clarify our understanding of the goals and methods of both disciplines.
Indeed, it seems to us that the
gap can only partially be
explained by epistemic
distinctions, but that it is
related to strong discipline
boundaries separating HCI
from AI.
As a matter of fact, HCI and NLP
attempt to reach a common goal:
simplifying user interaction with
information systems. Despite this,
historically they have followed two
antithetic design approaches. HCI
is, by definition, user-centered;
NLP has for long been based on a
prevailing system-centered view.
HCI concentrates on interfaces,
artificial modules able to translate
digital signals into analog
representations. The focus of attention has always been on users: interfaces adapt computers to the limits, capabilities and needs of humans. On the other hand, for
many years NLP has focused on
systems, attempting to reproduce
verbal communication at the
human-computer interface by
architectures processing
conversational inputs. In a perfect
NL system the traditional concept
of user-interface tends to
disappear: the language itself
constitutes the interface.
Yet, both HCI and NLG are concerned
with the effectiveness of
communication, and we can see
parallels between their various
concerns. HCI design practitioners are
concerned with such issues as
information grouping and
differentiation, consistency with the
ways users perform their tasks, and
clear specification of the purpose of
each interface element. This is
analogous to ensuring in NLG that a
chunk of text is coherent and achieves
one or more specific communicative
goals the user can recognize, and that
a sequence of such chunks (or moves
in a dialogue) is also coherent. Of
course, HCI and NLP should meet in
one obvious place: the natural
language interface. The main
paradigm in HCI design today is direct
manipulation. However, natural
language interfaces have several
advantages over direct manipulation:
they allow references to objects that
are not directly visible and to events
that have occurred in the past or will
occur in the future. In addition, with
the increasing number of small
displays (e.g., mobile phones) and
mobile devices, vocal interaction
between user and on-line services will
probably become more
prominent. This is an obvious
instance where NLG and HCI
experts should collaborate.
Speech interfaces are not the
only point of contact between
HCI and NLG, though. Another
type of interface where the two
disciplines meet is one in
which documents act as
interface. This is the case, for
example, for web pages, or
any form of hypertext. There,
interaction occurs within the
document/text. While issues
related to language and
dialogue are important here,
so are other interactional
issues. An example of these
issues is the trade-off between
the number of hypertext links
the user must traverse to
arrive at the appropriate
information and the amount of
text to be presented at each
point. Another example
concerns the positioning of
new windows and whether the
old window disappears or not.
A third example concerns the
way a hypertext anchor is
specified, and if and how
information about the target page
should be provided. These issues
relate to the interface proper, and
the interaction between the user
and the computer.
A Look into the Future: How NL Could Change HCI
One goal for artificial intelligence
work in natural language is to
enable communication between
people and computers without
resorting to memorization of
complex commands and
procedures. Automatic translation—
enabling scientists, business people
and just plain folks to interact easily
with people around the world—is
another goal. So, research will
continue to enable humans to
communicate more naturally with
their computers, with the ultimate
goal being to determine a system of
symbols, relations, and conceptual
information that can be used by
computer logic to implement
artificial language interpretation. NLP has continuing implications for translation, gaming, summarization, question answering, information retrieval, and robot creation. Information management and data querying would benefit hugely from NLP. NLP can help with extracting
and structuring text-based clinical
information, making clinical data
readily accessible in human
language form.
As NL will gain more importance in
HCI, interaction will be less and less
a matter of pushing buttons and
dragging slides, and more and more
a matter of specifying operations
and assessing their effects through
the use of language. Computers will no longer be a medium where performing tasks requires users to define and execute all the actions; computers will work at a higher level, able to split actions into tasks and execute them autonomously. The change can deeply affect the paradigm of interaction, from doing to having it done; consequently, the mental representation elicited by computers may drastically evolve.
Real Life Examples
1. Chatter bots or Artificial Conversational Entities
A type of computer program that
simulates a real conversation via
auditory or textual methods; most
simply scan for keywords within
input from human conversation and
create a reply using matching
keywords from an available
database. They "converse" by
recognizing cue words or phrases
from the human user, which allows
them to use pre-prepared or pre-
calculated responses which can
move the conversation on in an
apparently meaningful way
without requiring them to know
what they are talking about.
For example, if a human types, "I
am feeling very worried lately,"
the chatterbot may be
programmed to recognize the
phrase "I am" and respond by
replacing it with "Why are you"
plus a question mark at the end,
giving the answer, "Why are you
feeling very worried lately?"
A similar approach using
keywords would be for the
program to answer any comment
including (Name of celebrity)
with "I think they're great, don't
you?" Humans, especially those
unfamiliar with chatter bots,
sometimes find the resulting
conversations engaging. Critics aren't impressed.
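The substitution trick described above is easy to sketch. Here is a minimal illustration in GNU Octave (the tool covered later in this issue); it is purely hypothetical and not taken from any real chatterbot:

% Turn "I am X" into "Why are you X?", as in the example above.
utterance = "I am feeling very worried lately";
if (strncmpi (utterance, "I am", 4))
  reply = [regexprep(utterance, '^[Ii] am', 'Why are you') "?"];
else
  reply = "Tell me more.";  % fallback when no cue phrase matches
endif
disp (reply)  % prints: Why are you feeling very worried lately?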
2. Robot Nurse
Robot-Nurse, developed by
Samsung and Robot-Hosting.com
is a very practical application of
NLP. The machine uses face
recognition (via camera), as well
as voice recognition (via
microphone) and has flexible arms
and grasping tools for "hands," the
better to perform the more menial
tasks usually done by nursing
staff. Researchers at the University
of Auckland are creating the
knowledge base for the robot.
Using several global server clusters
as a brain, Robot-Nurse will tend
to patients when nurses sleep at
night. "She" can reason logically,
deliver prescriptions, and remind
patients of things like a daily
exercise routine, by acting as a
coach and encouraging them
verbally.
Another way Robot-Nurse bonds with her patients is by keeping company with those who have no visitors, telling them jokes or simply talking with them.
Robot-Nurse is too short to change
bedpans, but perhaps the later
versions will be able to free their
human counterparts from this
unpleasant chore.
3. The Isolde Project
The Isolde project is concerned
with the design and development
of a tool to support the production
of hypertext-based on-line help for
software systems, using language
technology (Paris et al., 1998). The project's emphasis was to try to
address some of the limitations of
current language technology that
prevent its use in realistic settings.
In particular, our concern was with
the knowledge acquisition issue:
how to obtain the knowledge
required for the generation of on-
line help.
4. Stair, the Stanford Robot
Stanford University is building a
robot that can navigate home and
office environments, pick up and
interact with objects and tools,
and intelligently converse with and
help people in these environments. Over the long term, Stair's
creators envision a single robot
that can perform tasks such as:
- Fetch or deliver items around the home or office
- Tidy up a room, including picking up and throwing away trash
- Prepare meals using a normal kitchen
- Use tools to assemble a bookshelf
Conclusion
In conclusion, from a
methodological point of view,
both HCI and NLP need to
upgrade their scientific apparatus
to cope with the design of social
artifacts. It is quite clear that the HCI and NLP communities should work together on a wide variety of problems. There are several areas where cross-fertilization can occur, and the combination of the two types of expertise could be beneficial. Hence there is still plenty to research and improve in this area; a promising future awaits this field.
References:
1. http://www.cngl.ie/drupal/sites/default/files/papers2/p4333-karamanis.pdf
2. Antonella De Angeli and Daniela Petrelli, "Bridging the gap between NLP and HCI: A new synergy in the name of the user", Cognitive Technology Laboratory, Department of Psychology, University of Trieste, Via S. Anastasio 12, I-34100 Trieste, Italy.
3. Cécile Paris and Nadine Ozkan, "Motivating the cross-fertilization between HCI and Natural Language Processing", CSIRO/MIS, Locked Bag 17, North Ryde NSW 1670, Australia.
4. https://sites.google.com/site/naturallanguageprocessingnlp/Home/real-life-examples
5. http://www.cnlp.org/cnlp.asp?m=5&sm=0
6. http://www.cs.utep.edu/novick/nlchi/papers/Paris.htm
7. "Do HCI and NLP Interact?", CHI 2009 Spotlight on Works in Progress, Session 2, April 4-9, 2009, Boston, MA, USA.
Inviting Article for
CLEAR March 2013
We are inviting thought-provoking articles,
interesting dialogues and healthy debates on
multifaceted aspects of Computational
Linguistics, for the forthcoming issue of CLEAR
(Computational Linguistics in Engineering And
Research) magazine, to be published in March 2013.
The topics of the articles would preferably be
related to the areas of Natural Language
Processing, Computational Linguistics and
Information Retrieval. The articles may be sent
to the Editor on or before 15th February, 2013
through the email [email protected].
-Editor
Google Driverless Car
Author
Robert Jesuraj K
M. Tech Computational Linguistics, Govt. Engineering College, Sreekrishnapuram, Palakkad

The Google driverless car is a project
by Google that involves developing
technology for driverless cars. The
project is currently being led by Google
engineer Sebastian Thrun, director of
the Stanford Artificial Intelligence
Laboratory and co-inventor of Google
Street View. Thrun's team at Stanford
created the robotic vehicle Stanley
which won the 2005 DARPA Grand
Challenge and its US$2 million prize
from the United States Department of
Defense. The team developing the
system consisted of 15 engineers
working for Google, including Chris
Urmson, Mike Montemerlo, and
Anthony Levandowski who had worked
on the DARPA Grand and Urban
Challenges.
The U.S. state of Nevada passed a law on June 29, 2011 permitting the operation of driverless cars in Nevada. Google had been
lobbying for driverless car laws. The
Nevada law went into effect on March
1, 2012, and the Nevada
Department of Motor Vehicles
issued the first license for a self-
driven car in May 2012. The license
was issued to a Toyota Prius
modified with Google's
experimental driverless technology.
While Google had no immediate
plans to commercially develop the
system, the company hopes to
develop a business which would
market the system and the data
behind it to automobile
manufacturers. An attorney for the California Department of Motor Vehicles raised concerns that "The technology is ahead of the law in many areas," citing state laws that "all presume to have a human being operating the vehicle". According to the New York Times, policy makers and regulators have argued that new laws will be required if driverless vehicles are to become a reality because "the technology is now advancing so quickly that it is in danger of outstripping existing law, some of which dates back to the era of horse-drawn carriages".

Google lobbied for two bills that made Nevada the first state where driverless vehicles can be legally operated on public roads. The first bill is an amendment to an electric vehicle bill that provides for the licensing and testing of autonomous vehicles. The second bill provides an exemption from the ban on distracted driving to permit occupants to send text messages while sitting behind the wheel.
The two bills came to a vote before the Nevada state legislature's session ended in June 2011. It has been speculated that Nevada was selected due to the Las Vegas Auto Show and the Consumer Electronics Show, and the high likelihood that Google would present the first commercially viable product at either or both of these events. Google executives, however, refused to state the precise reason they chose Nevada to be the maiden state for the driverless car.

Nevada passed a law in June 2011 concerning the operation of driverless cars in Nevada, which went into effect on March 1, 2012. A Toyota Prius modified with Google's experimental driverless technology was licensed by the Nevada Department of Motor Vehicles (DMV) in May 2012. This was the first license issued in the United States for a self-driven car. License plates issued in Nevada for autonomous cars will have a red background and feature an infinity symbol (∞) on the left side because, according to the DMV Director, "...using the infinity symbol was the best way to represent the 'car of the future'." Nevada's regulations require a person behind the wheel and one in the passenger's seat during tests. Google's autonomous system permits a human driver to take control by stepping on the brake or turning the wheel.

Google's driverless test cars have about $150,000 in equipment, including a $70,000 lidar (laser radar) system. The range finder mounted on the top is a Velodyne 64-beam laser.

The Google car project team was working in secret, but in plain view, on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver. With someone behind the wheel to take control if something goes awry and a technician in the passenger seat to monitor the navigation system, seven test cars have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional human control. One even drove itself down Lombard Street in San Francisco, one of the steepest and curviest streets in the nation. The only accident, engineers said, was when one Google car was rear-ended while stopped at a traffic light. Autonomous cars are years from mass production, but technologists who have long dreamed of them believe that they can transform society as profoundly as the Internet has.
Robot drivers react faster than
humans, have 360-degree perception
and do not get distracted, sleepy or
intoxicated, the engineers argue. They
speak in terms of lives saved and
injuries avoided — more than 37,000
people died in car accidents in the
United States in 2008. The engineers
say the technology could double the
capacity of roads by allowing cars to
drive more safely while closer together.
Because the robot cars would
eventually be less likely to crash, they
could be built lighter, reducing fuel
consumption. But of course, to be truly
safer, the cars must be far more
reliable than, say, today's personal
computers, which crash on occasion
and are frequently infected.
The Google research program using
artificial intelligence to revolutionize
the automobile is proof that the
company‘s ambitions reach beyond the
search engine business. The program is
also a departure from the mainstream
of innovation in Silicon Valley, which
has veered toward social networks and
Hollywood-style digital media.
During a half-hour drive beginning on
Google's campus 35 miles south of San
Francisco, a Prius equipped with a
variety of sensors and following a route
programmed into the GPS navigation
system nimbly accelerated in the entrance
lane and merged into fast-moving traffic
on Highway 101, the freeway through
Silicon Valley.
It drove at the speed limit, which it knew
because the limit for every road is included
in its database, and left the freeway
several exits later. The device atop the car
produced a detailed map of the
environment.
The car then drove in city traffic through
Mountain View, stopping for lights and
stop signs, as well as making
announcements like "approaching a crosswalk" (to warn the human at the wheel) or "turn ahead" in a pleasant
female voice. This same pleasant voice
would, engineers said, alert the driver if a
master control system detected anything
amiss with the various sensors.
The car can be programmed for different
driving personalities — from cautious, in
which it is more likely to yield to another
car, to aggressive,
where it is more
likely to go first.
Christopher Urmson,
a Carnegie Mellon
University robotics
scientist, was behind
the wheel but not
using it. To gain
control of the car he
has to do one of
three things: hit a
red button near his
right hand, touch the
brake or turn the
steering wheel. He
did so twice, once
when a bicyclist ran
a red light and again
when a car in front
stopped and began
to back into a
parking space. But
the car seemed likely
to have prevented an
accident itself.
"...using the infinity symbol was the best way
to represent the 'car of the future'."
CLEAR Dec 2012 20
When he returned to automated "cruise" mode, the car gave a little "whir" meant to evoke going into warp drive on "Star Trek," and Dr. Urmson was able to rest
his hands by his sides or gesticulate
when talking to a passenger in the back
seat. He said the cars did attract
attention, but people seem to think they
are just the next generation of the Street
View cars that Google uses to take
photographs and collect data for its
maps.
The project is the brainchild of Sebastian Thrun, the 43-year-old director of the Stanford Artificial Intelligence Laboratory, a Google engineer and the co-inventor of the Street View mapping service.
Besides the team of 15 engineers working on the current project, Google hired more than a dozen people, each with a spotless driving record, to sit in the driver's seat, paying $15 an hour or more. Google is using six Priuses and an Audi TT in the project.
The Google researchers said the company did not yet have a clear plan to create a business from the experiments. Dr. Thrun is known as a passionate promoter of the potential to use robotic vehicles to make highways safer and lower the nation's energy costs. It is a commitment shared by Larry Page, Google's co-founder, according to several people familiar with the project.
GNU Octave

Author
Razee Marikar
Subex Azure Limited, Bangalore
Octave is a tool for numerical calculations and solving
numerical problems. It also has graphing and
visualization capabilities. It can be used either interactively or by writing non-interactive programs. In this article, I give an overview of the
basic capabilities of Octave.
Installing and Running Octave
If you are on a Linux environment, check the package
manager of the OS. You should find octave as one of
the packages. Check the download page at
http://www.gnu.org/software/octave/ for obtaining
Octave for other operating systems or to build from
source. Now you can run it.
On Linux, open a command shell (on the GUI if you
want to use it to view graphs), and type 'octave'. On
Windows, depending on your installation method, you
may need to open your cygwin environment and run
octave or open it from start menu.
Simple Calculations
Let's get started with simple calculations. Suppose you
want to find out the result of a simple calculation like
(2+10i)x(3π+5i)³. On the octave prompt, you should
enter the command as follows, using syntax similar to
most other languages. But remember, there are some differences; for example, Octave can handle complex numbers:

(2 + 10i) * (3*pi + 5i) ^ 3

Octave interprets "i" as the imaginary unit, it understands constants like pi, and it interprets "^" as the power operator.
You can also store results to a variable. Here are
some examples:
Octave-3.2.4.exe:12> a=10
a = 10
Octave-3.2.4.exe:13> a*a
ans = 100
Octave-3.2.4.exe:14> b=(3 + 5i) +
(2.2 + 3.1i) * (10 + 2i)
b = 18.800 + 40.400i
Octave-3.2.4.exe:15> c = a*b
c = 188 + 404i
Octave-3.2.4.exe:17> c = a*b+42;
Octave-3.2.4.exe:18> c
c = 230 + 404i
One thing to note here is that if you enter a semicolon at the end of the command, the result of the operation won't be printed. This is useful when using Octave in non-interactive mode with a program stored in a file.
Matrix Calculations
Octave is very good at handling matrices. In this
article, I will quickly introduce how to work with matrices in Octave. First, enter and store matrices in variables:
Octave-3.2.4.exe:19> A = [1 2 3; 5
7 2; 7 8 0];
Octave-3.2.4.exe:20> B = [5 7 5; 1
0 1; -1 3 5];
Octave-3.2.4.exe:21> A
A =
1 2 3
5 7 2
7 8 0
Octave-3.2.4.exe:22> inv(A)
ans =
1.06667 -1.60000 1.13333
-0.93333 1.40000 -0.86667
0.60000 -0.40000 0.20000
Octave-3.2.4.exe:23> A + B
ans =
6 9 8
6 7 3
6 11 5
Octave-3.2.4.exe:24> A * B
ans =
4 16 22
30 41 42
43 49 43
Octave-3.2.4.exe:25> 2*A
ans =
2 4 6
10 14 4
14 16 0
Octave-3.2.4.exe:26> B/A
ans =
1.80000 -0.20000 0.60000
1.66667 -2.00000 1.33333
-0.86667 3.80000 -2.73333
Using the above examples, it should be evident how this
can be used to solve numeric equations.
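For instance (a sketch continuing the session above, not part of the original article), the backslash operator solves a linear system A*x = b directly:

Octave-3.2.4.exe:27> b = [6; 14; 15];
Octave-3.2.4.exe:28> A \ b
ans =

   1
   1
   1

Each row of A sums to the corresponding entry of b, so x = [1; 1; 1]. Using A \ b is generally faster and more accurate than computing inv(A)*b.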
References and further reading:
1. Official documentation: http://www.gnu.org/software/octave/doc/interpreter
2. Introduction to Octave by Dr. P.J.G. Long, based on the Tutorial Guide to Matlab written by Dr. Paul Smith: http://wwwmdp.eng.cam.ac.uk/web/CD/engapps/octave/octavetut.pdf
3. Machine Learning classes available online by
Stanford University (Prof. Andrew Ng)
LAST WORD

Hello World,
Let me share my experience from the valedictory function of the Amrita CLMT workshop. During a discussion on Indian Language Computing, one delegate from Andhra commented that Indians are reluctant to use their languages in the digital world. His observation has relevance in the light of the past, present and future scenario in ILC. The people interested in this area are few. Many technologists working in IT sectors have not even heard of this area. Even though India is among the top providers of IT solutions, the technology is away from most of its citizens.

Why does ILC fail to reach the common man's desktop? What makes language computing so difficult?

The answer is simple: "This is not rocket science. Solutions are possible." We need linguists interested in technology and technocrats interested in language. The Government has to take the necessary steps to include language technology for engineering graduates. Moreover, people should have enthusiasm for their language, not to divide themselves, but to join the global technology.

A few months ago, Sam Pitroda -- technical advisor to the Prime Minister of India -- said that "India needs a lot of language technologists in the near future". This shows the scope and growth of language technology. We are not bothered about people's attitudes. We have a bright future.

Thanks for the ' ' and ' ' you put for the last issue of CLEAR. This motivated Simple Groups to bring out the second issue.

Expecting your continued support!

Wish you all the best....

Manu Madhavan