
Speech based Human Machine Interaction system for Home Automation

Mayur Katore1 1Research scholar, Department of E & Tc.,

G.H. Raisoni Institute of Engineering & Technology, Pune, India

[email protected]

Mrs. M. R. Bachute2 2Asst. professor, Department of E & Tc,

G.H. Raisoni Institute of Engineering & Technology, Pune, India

[email protected]

Abstract—Human-machine interaction technology defines the way humans use machines, and it plays a crucial role in developing an ecosystem between user and machine. Speech-based human-machine interaction has the potential to be the future of human-machine interaction, and recent advances in technology have made it possible to operate computers and electronic devices through speech. While there is a great deal of research on developing better speech recognition systems, far less effort has been made to develop speech-based human-machine interaction systems for everyday use. This paper presents an approach that integrates an Android-enabled device, Bluetooth technology, the Matlab computing platform, and an advanced microcontroller to develop a speech-based human-machine interaction system for operating home appliances. The proposed system is an effective speech-based human-machine interaction system for real-world applications, as it is affordable, flexible, user-configurable, and user-friendly, and it is particularly helpful for elderly and disabled people.

Keywords—Android application; Bluetooth technology; Human-computer interaction (HCI) systems; Human-machine interaction (HMI) systems; Matlab; Microcontroller; Speech recognition; Speech commands

I. INTRODUCTION

The primary purpose of any machine or device is to provide functionality for the productivity, comfort, and convenience of the user. This purpose can be realized only with proper human–machine communication, so human-machine interaction (HMI) has attracted the attention of many researchers and engineers in the last few years [1]. Human-machine interaction is the method by which humans interact with machines and computing devices. HMI technology has travelled a long way from punch cards and clunky switches to touchscreens, and the journey continues with the development of new technologies and the design of new systems. HMI can be achieved with technologies based on touch, motion, optics (i.e., light), sound or speech, or bionics [2]. The aim of any HMI system is to enhance the interaction between users and machines by making machines more interactive and responsive to users' needs. HMI systems are designed to minimize the barrier between the nature of human needs and the machine's understanding of the user's task [1].

As speech is the most natural means of communication in human-to-human interaction, speech-based HMI systems have the potential to serve as the most efficient form of human-machine interaction [3]. This area relies on information acquired from audio signals; since the nature of audio signals varies less than that of visual signals, the information gathered from audio signals can be more trustworthy and helpful, and in some cases audio signals provide unique information [4]. Speech recognition, which makes the machine understand the human voice and its commands, is the prime factor in speech-based HMI systems [5, 6]. Speech recognition is the process of converting a speech signal into a sequence of words by means of an algorithm implemented as a computer program. It is a kind of pattern recognition system comprising three basic units: feature extraction, pattern matching, and a reference model library, as shown in Fig. 1.

Fig.1. Basic Speech Recognition Model
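To make the pattern-matching stage of Fig. 1 concrete, the following minimal Matlab sketch (illustrative only, not the paper's implementation; the function name and feature layout are assumptions) classifies an already-extracted feature vector against a stored reference model library by nearest-neighbour distance:

    function label = matchPattern(testFeatures, refLibrary, refLabels)
        % testFeatures : 1-by-D feature vector extracted from an utterance
        % refLibrary   : N-by-D matrix, one stored reference model per row
        % refLabels    : N-by-1 cell array of word labels
        % Euclidean distance from the test vector to every reference model
        dists = sqrt(sum((refLibrary - repmat(testFeatures, size(refLibrary, 1), 1)).^2, 2));
        [~, idx] = min(dists);      % the closest reference model wins
        label = refLabels{idx};
    end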

There have been considerable achievements in the field of speech recognition by computers, such as improvements in speech processing techniques, architectures and learning/modeling algorithms for speech-based HMI, models such as the Hidden Markov Model, and even dedicated voice recognition chips such as the HM2007. However, few efforts have been made to develop speech-based HMI systems that serve day-to-day uses.

This paper presents a speech-based HMI system for the home, with which the user can operate and interact with home appliances and machines using speech. In the proposed work, the user gives speech commands through an Android-enabled device, which conveys these commands to the programming unit over Bluetooth; the programming unit then commands the control unit to operate devices according to the user's needs. The proposed work is designed to provide a cost-effective solution.



The user can also configure the commands they wish to use. The proposed man-machine ecosystem is developed so that user and machine can interact as naturally as two people do, allowing the machine to provide human-like assistance to the user.

This paper is organized as follows. Section II reviews the literature and related work in this field. Section III explains the objective of the project and the methodology used to achieve it. Section IV provides an overview of the elements involved in the proposed system's development, and Section V presents the implementation and working of a prototype. Section VI presents the results, discusses the advantages and disadvantages, and compares the system with other available technologies. Finally, Section VII draws conclusions and outlines ideas for future development of the project.

II. RELATED WORK

James Cannan and Huosheng Hu [1] presented a survey of published papers in the area of human-machine interaction (HMI). They focused the survey on state-of-the-art technology that has, or will have, a major influence on the future direction of HMI. From their study they concluded that advanced HMI technologies will converge and combine functionality to bring intelligent machines and robots closer to humans. Researchers at Bell Labs pioneered the development of speech recognition technologies, and HMI systems based on sound, specifically human speech, came into existence around 1950. Speech-based human-computer interfaces (HCI), which are similar to HMI systems, are already used in smartphones and computers.

Fakhreddine Karray, Milad Alemzadeh, Jamil Abou Saleh, and Mo Nours [2] presented an overview of human-computer interaction that includes basic definitions and terminology, a survey of existing technologies and recent advances in the field, common architectures used in the design of HCI systems (including unimodal and multimodal configurations), and the applications of HCI. The development of speech-based HCI that is accurate enough for real-world applications, and is steadily improving, indicates that it is now possible to develop speech-based HMI systems deployable in the real world.

M. A. Anusuya and S. K. Katti [3] present a brief survey of automatic speech recognition and discuss the major themes and advances made over the past 60 years of research. They summarize and compare some of the well-known methods used in the various stages of a speech recognition system, and note that even after years of research and development, the accuracy of automatic speech recognition remains an important research challenge (e.g., variations of context, speakers, and environment).

Saptarshi Boruah and Subhash Basishtha [5] give a basic overview of speech recognition systems: the different variations, the basic building blocks, the algorithms behind the success of such systems, and their performance evaluation. In particular, they give a basic overview of an HMM-based speech recognition system and how it actually works.

Roger K. Moore [12] developed his work from the results of recent studies of the neurobiology of living systems. The research, titled "PRESENCE: A Human-Inspired Architecture for Speech-Based Human-Machine Interaction" (PRESENCE stands for PREdictive SENsorimotor Control and Emulation), presents an architecture for speech-based HMI in which the system understands the user's needs and intentions, and vice versa. This work is considered very important for further study and development of speech science for technology, and for the development of HMI models that include psychological factors and human spoken-language behavior.

Mohammad A. M. Abu Shariah, Raja N. Ainon, Roziati Zainuddin, and Othman O. Khalifa [15] developed an isolated-word automatic speech recognition (IWASR) system based on vector quantization (VQ), in which the Mel-frequency cepstral coefficients (MFCC) algorithm was used to extract features from speech signals, and vector quantization was applied to all feature vectors generated by the MFCC stage. Their experimental results show that the recognition rate improved as the database size increased; with a database of 81 feature vectors, the recognition rate exceeded 85%.
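The matching step of such a VQ-based recognizer can be sketched in Matlab as follows. This is a hedged illustration of the general technique in [15], not the authors' code; the codebooks are assumed to be precomputed from training MFCC vectors:

    function word = vqRecognize(mfccFrames, codebooks, words)
        % mfccFrames : T-by-D matrix of MFCC vectors from the test utterance
        % codebooks  : cell array; codebooks{k} is a K-by-D codebook for words{k}
        % words      : cell array of word labels
        distortion = zeros(numel(codebooks), 1);
        for k = 1:numel(codebooks)
            cb = codebooks{k};
            total = 0;
            for t = 1:size(mfccFrames, 1)
                % squared distance from frame t to every codeword
                d = sum((cb - repmat(mfccFrames(t, :), size(cb, 1), 1)).^2, 2);
                total = total + min(d);          % nearest-codeword distortion
            end
            distortion(k) = total / size(mfccFrames, 1);
        end
        [~, best] = min(distortion);             % lowest average distortion wins
        word = words{best};
    end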

Madhuri R. Dubey and S. A. Chhabria [16] highlight the concept of eye and speech fusion in human-computer interaction, since humans use a wide variety of senses to express commands for different activities. Their proposed system considers three main modules: eye recognition, speech recognition, and fusion; in addition, different fusion techniques are applied to multimodal inputs such as eye and speech for certain desktop applications. Eye recognition is processed using a Kalman filter and a 2D measurement method under a motion-based approach, and the discrete wavelet transform (DWT) is used for extracting speech features.

DSP processors whose sole function is to recognize voice, popularly called "voice recognition chips" (such as the HM2007), are now available; they provide high accuracy and can be interfaced with a suitable microcontroller, making it easy and convenient to develop speech-based HMI systems. Dhawan S. Thakur and Aditi Sharma [17] developed a wireless home automation project based on Zigbee communication technology that is controlled by voice commands; for voice command recognition, they used a voice recognition chip (HM2007) with a microcontroller. HMI systems of this kind are mostly speaker-dependent, i.e., they work effectively for a single speaker only. They also have other limitations: these voice recognition chips have very limited memory, so such HMI systems can recognize only a limited number of words, and the user has to speak in a particular manner to operate them.

Carl J. Debono and Kurt Abela [18] propose an architecture for home automation in which the user controls devices through a central field-programmable gate array (FPGA) controller to which the devices and sensors are interfaced.


Control commands are communicated to the FPGA from a mobile phone through its Bluetooth interface. The proposed architecture is very user-friendly, but the use of an FPGA makes the system expensive.

Bader et al. [19] present a wireless real-time home automation system based on an Arduino Uno microcontroller as the central controller, developed with a Matlab GUI platform. The proposed system has two operational modes: manual and self-automated. In self-automated mode, the controller monitors and controls different appliances in the home automatically in response to signals from the related sensors, while in manual mode the user can monitor and control the home appliances from anywhere in the world using a cellular phone over Wi-Fi. The use of Matlab for the GUI makes the system very flexible and reliable for the user as well as the developer.

III. PROBLEM DEFINITION AND METHODOLOGY

A. Aim

The objective of this project is to build a prototype of a basic speech-based HMI system that operates electronic devices interfaced with the system through speech, using the capabilities of computing devices and advanced technologies. The target of our work is to provide the user with a system to interact with machines and home appliances in the most natural way, using speech commands.

B. Methodology

First of all, the input speech signal must be transformed into an instruction that a computer or controller can understand: when the user utters "light-A turn on", "fan turn off", or "light-B intensity 2", the computer must understand that it is instructed to operate a particular device in a particular manner. After understanding the user's command, the computer operates the particular interfaced device as required. The task therefore involves two steps: first, making the computer or controller interpret and understand the user's voice command, and second, making the computer or controller operate the particular device in the way the command requires.
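As a minimal illustration of the first step, the following Matlab sketch (assumed command grammar; not the project's exact code) splits a recognized utterance into a device name and an action:

    cmd    = lower(strtrim('Light-A Turn On'));   % text received from the phone
    tokens = strsplit(cmd);                       % {'light-a', 'turn', 'on'}
    device = tokens{1};                           % which appliance to operate
    action = strjoin(tokens(2:end), ' ');         % what to do with it
    fprintf('device = %s, action = %s\n', device, action);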

The methodology we have used is shown in Fig. 2, in which the overall project is divided into three units: the mobile unit, the programming unit, and the control unit. Each unit is developed separately, considering the role it plays in the whole.

Fig.2. Methodology used

• The mobile unit is an Android smartphone.

• The programming unit is a PC running the Matlab computing platform.

• The control unit is an ATmega8 microcontroller board interfaced with the devices to be controlled.

• On the software front, the proposed system uses an Android app, the Matlab computing software, and the Keil platform.

• On the hardware front, the proposed work uses the ATmega8 microcontroller, a Bluetooth module, and a PL-2303 USB-to-serial controller.

IV. SYSTEM DESIGN

The block diagram of the proposed system is shown in Fig. 3. It consists of an Android smartphone, a Matlab-equipped computing system (PC or laptop) interfaced with a Bluetooth module, and the ATmega8 microcontroller, which is interfaced with the computing system as well as the home appliances.

Fig.3. Proposed system’s block diagram

A. Mobile Device

The system requires an Android smartphone with Bluetooth connectivity and internet access. The smartphone serves as an advanced microphone for speech commands. The mobile device recognizes speech via an Android app developed in Java. The Android application used in the proposed work accesses the Google voice service over the internet to recognize speech commands and converts the recognized speech command into text. The mobile device then passes this text as a string to its built-in Bluetooth module through an Application Programming Interface (API), from where it is forwarded over Bluetooth to the Matlab software in the programming unit.

B. Matlab Computing Platform

The proposed system requires a Matlab-equipped PC or laptop, as the software implementation is carried out on the Matlab platform. The main program consists of code for a graphical user interface (GUI).


The GUI is coded for a manual mode and an automated mode. The manual mode is useful for simulating the system and the devices, while in automated mode the system operates appliances according to speech commands.
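A minimal Matlab sketch of such a manual-mode GUI is given below (the layout and device names are assumptions for illustration; the actual GUI is shown in Fig. 5). Each toggle button would forward an instruction to the microcontroller; here the callback only prints the requested state.

    function manualModeGui()
        % One toggle button per appliance; callbacks are stubs that print
        % the requested state instead of writing to the serial port.
        f = figure('Name', 'Speech HMI - Manual Mode', 'NumberTitle', 'off');
        devices = {'Light A', 'Light B', 'Fan', 'TV', 'Socket'};
        for k = 1:numel(devices)
            uicontrol(f, 'Style', 'togglebutton', 'String', devices{k}, ...
                'Position', [20, 20 + 40*(k-1), 120, 30], ...
                'Callback', @(h, ~) fprintf('%s -> %d\n', devices{k}, get(h, 'Value')));
        end
    end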

The speech commands to be recognized by the system are coded separately in what is called a dictionary, which is user-configurable, i.e., the user can configure the commands to be used. When Matlab runs in automated mode and the PC receives a string transferred from the smartphone, Matlab reads the received string. If the string matches one of the strings configured in the dictionary, the Matlab program generates the corresponding instruction for the ATmega8 microcontroller and communicates it to the microcontroller through the prolific controller.
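A hedged sketch of this dictionary lookup follows; the command strings, the instruction codes, and the variable names receivedString and mcuPort (the serial port object for the prolific controller) are assumptions for illustration.

    % User-configurable dictionary: recognized strings map to one-byte
    % instructions for the ATmega8 (codes chosen arbitrarily here).
    dictionary = containers.Map( ...
        {'turn on light one', 'turn off light one', 'turn on tv', 'turn off tv'}, ...
        {uint8(1), uint8(2), uint8(3), uint8(4)});

    received = lower(strtrim(receivedString));    % string read from Bluetooth
    if isKey(dictionary, received)
        fwrite(mcuPort, dictionary(received));    % forward the instruction byte
    else
        disp('Command not in dictionary; no action taken.');
    end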

The main program also includes code for time monitoring, which tracks the programming unit's (PC's) clock. Certain time limits are programmed, such as morning at 7:00 am, evening at 7:00 pm, and midnight at 12:00 am. According to the PC's clock, at 7:00 am, 7:00 pm, or 12:00 am, Matlab runs in morning mode, night mode, or sleep mode respectively. In these modes, Matlab automatically turns certain sets of devices on or off according to the user's daily routine, and also announces this to the user via speech output through the PC's speakers.
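The time check itself reduces to reading the PC clock, roughly as in this sketch (the per-mode actions are assumptions based on the description above):

    c = clock;                 % [year month day hour minute seconds]
    h = c(4);                  % hour of day from the PC's clock
    if h == 7
        mode = 'morning';      % e.g. turn on the morning set of devices
    elseif h == 19
        mode = 'night';        % e.g. turn on the evening set of devices
    elseif h == 0
        mode = 'sleep';        % e.g. turn off non-essential devices
    else
        mode = 'normal';       % outside the programmed time limits
    end
    fprintf('Current mode: %s\n', mode);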

The use of the Matlab platform thus makes the overall system smart and interactive, and the program can be modified to make the system progressively more effective and intelligent in providing human-like assistance.

C. Bluetooth Communication Technology

Communication between the Android-enabled smartphone and the programming unit is established using an external Bluetooth module via a USB-to-serial prolific controller interface. An external Bluetooth module is used because it is more reliable and easier to configure than a PC's internal Bluetooth module. Bluetooth was selected over other communication techniques because it is available in most mobile phones, can be implemented at low cost, consumes little power, and provides a high level of security through its short range and its pairing functionality.
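On the Matlab side, the prolific interface simply appears as a COM port. A sketch of opening it with Matlab's legacy serial API follows; the port name and baud rate are assumed values, not taken from the paper.

    % Open the external Bluetooth module through the PL-2303 USB-to-serial
    % interface; 'COM4' and 9600 baud are assumptions for illustration.
    bt = serial('COM4', 'BaudRate', 9600, 'Terminator', 'LF');
    fopen(bt);
    receivedString = fscanf(bt);   % blocks until one command string arrives
    % ... dictionary matching as described in Section IV-B ...
    fclose(bt);
    delete(bt);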

D. ATmega8 Microcontroller

The ATmega8 microcontroller is interfaced with the programming unit and with the home appliances to be controlled; in this particular prototype, five home appliances are operated. The microcontroller is connected to the computing system via the prolific controller and is programmed according to the command received from Matlab and the device to be operated. The Keil software platform is used to program the ATmega8. The use of an advanced microcontroller such as the ATmega8 provides scope to modify the system at the hardware level.

V. IMPLEMENTATION AND WORKING

The proposed system is designed as a prototype of a user-interactive, speech-based HMI system that operates electronic devices in its surroundings. The proposed system uses speech commands such as "turn on light one", "turn on TV", and "turn off TV" to interact with devices, which is very similar to the way we normally communicate with each other. Fig. 4 shows the flow chart of the proposed system, which explains its implementation and working.

Fig.4. Flow chart of proposed system implementation

Initially, the Android app must be paired with the external Bluetooth module. Once pairing is successfully established, the Matlab program is executed in automated mode, and the user can give input speech commands through the smartphone.

When a speech command is recognized, it is transmitted to the programming unit as a text string over Bluetooth. The external Bluetooth module connected to the programming unit receives these input commands as strings, which the PC then reads serially through the USB-to-serial prolific controller.

If the Matlab-coded dictionary contains the given input string, Matlab instructs the ATmega8 to perform the particular operation on the particular device; otherwise, no action is taken. In automated mode, when a programmed time limit is reached, the program first checks for the user's speech commands, and only if there is no command from the user does it run in the mode programmed for that time limit. The user's speech is given higher priority than any other factor.
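The priority rule can be sketched as a simple polling loop. The structure is assumed from the description above; handleCommand and checkTimeLimits are hypothetical helper functions, and bt is the serial object opened in Section IV-C.

    % Automated-mode loop: pending speech commands are always served
    % before any time-based mode is applied.
    while running
        if bt.BytesAvailable > 0           % a speech command is waiting
            handleCommand(fscanf(bt));     % user input has highest priority
        else
            checkTimeLimits();             % apply morning/night/sleep modes
        end
        pause(0.1);                        % avoid busy-waiting
    end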


VI. RESULTS AND DISCUSSION

Fig. 5 shows the graphical user interface (GUI) coded in Matlab, while Fig. 6 shows the prototype's experimental setup in running condition.

Fig.5. Matlab coded GUI

Fig.6. Prototype’s experimental setup

The system works very effectively and efficiently provided there is a good internet connection and the user operates within Bluetooth range (about 10 meters). If the system does not operate as desired, the user can use manual mode to check whether all devices are properly interfaced or any device is faulty.

A. Advantages

• Dialogue-like speech commands make the system very user-friendly, and the use of a smartphone gives the user mobility.

• The use of Bluetooth technology for connectivity makes the system affordable and reduces hardware and software complexity.

• The use of an Android application that accesses the Google voice service for speech recognition provides high accuracy and recognizes speech within a fraction of a second.

• The use of the Android application, Matlab, and the ATmega8 microcontroller leaves plenty of scope for improvement.

TABLE I. RELEVANCE OF PROPOSED SYSTEM

Parameters            Proposed system   FPGA based    Code based   Microcontroller based
Hardware complexity   Moderate          High          Low          High
Code complexity       Moderate          High          Very high    Low
Flexibility           Available         n/a           Available    Very limited
Costing               Less              Very costly   Less         Moderate
Reliability           High              Moderate      Moderate     Low

Compared with other available solutions, such as FPGA-based HMI systems or purely microcontroller-based HMI systems, the proposed system is more cost-effective, more flexible, and more reliable, with less hardware and software complexity. The comparative study is shown in Table I.

B. Disadvantages

• At times the app is unable to interpret speech because of a poor internet connection, since uploading and downloading data takes time.

• The user has to pronounce the speech command loudly and clearly; the Android device misinterprets the command if it is not pronounced clearly.

• Because the connectivity range of Bluetooth is short, the user has to be within about 10 meters of the programming unit to give a command.

• Only one user can connect to the programming unit's external Bluetooth module at a time, so two or more users cannot give commands simultaneously.

VII. CONCLUSION

The control of home appliances through speech-based human-machine interaction has been achieved. The connection between the computing system and the ATmega8 microcontroller, and between the microcontroller and the interfaced devices, can use either a wired link or a wireless one such as Zigbee or infrared. The proposed system was implemented with a wired solution; however, the interface can easily be replaced by a wireless one. The use of the ATmega8 controller makes the system scalable, while the Matlab platform provides flexibility, reliability, and plenty of scope for future development. In the future, when the smart grid is widely deployed, technologies such as speech-based HMI will play an important role in smart homes. The proposed speech-based HMI system can work effectively and efficiently for real-world applications.

In the future, the system can also be interfaced with sensors so that it can perform actions on its own, and by modifying the program the proposed system can become more interactive and autonomous. By including concepts such as the Internet of Things (IoT), the proposed system could interact with the user even more intelligently.


ACKNOWLEDGMENT

The authors would like to thank Mr. N. B. Hulle (HOD), Mr. A. D. Bhoi (Vice Principal), and Dr. R. D. Kharadkar (Principal) of G.H. Raisoni Institute of Engineering & Technology, Wagholi, Pune for providing resources and timely support.

REFERENCES

[1] J. Cannan and H. Hu, "Human-Machine Interaction (HMI): A Survey," technical report.

[2] F. Karray, M. Alemzadeh, J. Abou Saleh, and M. Nours, "Human-Computer Interaction: Overview on State of the Art," International Journal on Smart Sensing and Intelligent Systems, vol. 1, no. 1, pp. 137-159, Mar. 2008.

[3] M. A. Anusuya and S. K. Katti, "Speech Recognition by Machine: A Review," International Journal of Computer Science and Information Security, vol. 6, no. 3, pp. 181-205, 2009.

[4] Y. Yu, "Research on Speech Recognition Technology and Its Application," in Proc. International Conference on Computer Science and Electronics Engineering, IEEE Computer Society, 2012, pp. 306-309.

[5] S. Boruah and S. Basishtha, "A Study on HMM based Speech Recognition System," in Proc. IEEE International Conference on Computational Intelligence and Computing Research, 2013.

[6] B.-H. Juang and S. Furui, "Automatic Recognition and Understanding of Spoken Language—A First Step Toward Natural Human-Machine Communication," Proceedings of the IEEE, vol. 88, no. 8, pp. 1142-1165, Aug. 2000.

[7] A. N. Chadha, J. H. Nirmal, and P. Kachare, "A Comparative Performance of Various Speech Analysis-Synthesis Techniques," International Journal of Signal Processing Systems, vol. 2, no. 1, June 2014.

[8] L. Saheer and M. Cernak, "Automatic Staging of Audio with Emotions," in Proc. Humaine Association Conference on Affective Computing and Intelligent Interaction, IEEE, 2013, pp. 705-706.

[9] G. Galatas and F. Makedon, "A Framework for Audiovisual Dialog-based Human Computer Interaction," Journal of Multimedia Theory and Applications, vol. 2, no. 1, pp. 45-49, 2014.

[10] R. K. Aggarwal and M. Dave, "Fitness Evaluation of Gaussian Mixtures in Hindi Speech Recognition System," in Proc. First International Conference on Integrated Intelligent Computing, IEEE Computer Society, 2010, pp. 177-183.

[11] G. Potamianos, J. Huang, E. Marcheret, V. Libal, R. Balchandran, M. Epstein, L. Seredi, M. Labsky, L. Ures, M. Black, and P. Lucey, "Far-Field Multimodal Speech Processing and Conversational Interaction in Smart Spaces," in Proc. HSCMA 2008, IEEE, 2008, pp. 119-123.

[12] R. K. Moore, "PRESENCE: A Human-Inspired Architecture for Speech-Based Human-Machine Interaction," IEEE Transactions on Computers, vol. 56, no. 9, pp. 1176-1187, Sept. 2007.

[13] W.-L. Wei, C.-H. Wu, and J.-C. Lin, "Exploiting Psychological Factors for Interaction Style Recognition in Spoken Conversation," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 3, pp. 659-671, Mar. 2014.

[14] B. Vlasenko and A. Wendemuth, "Heading Toward to the Natural Way of Human-Machine Interaction: The Nimitek Project," in Proc. ICME 2009, IEEE, 2009, pp. 950-953.

[15] M. A. M. Abu Shariah, R. N. Ainon, R. Zainuddin, and O. O. Khalifa, "Human Computer Interaction Using Isolated-Words Speech Recognition Technology," in Proc. International Conference on Intelligent and Advanced Systems, IEEE, 2007, pp. 1173-1178.

[16] M. R. Dubey and S. A. Chhabria, "Eye and Speech Fusion in Human Computer Interaction," International Journal of Advance Research in Computer Science and Management Studies, vol. 2, no. 3, pp. 371-377, Mar. 2014.

[17] D. S. Thakur and A. Sharma, "Voice Recognition for Wireless Home Automation System Based on Zigbee," Journal of Electronics and Communication Engineering, vol. 6, no. 1, pp. 65-75, May-June 2013.

[18] C. J. Debono and K. Abela, "Implementation of a Home Automation System through a Central FPGA Controller," IEEE Trans. on Automation and Control Systems, pp. 641-644, 2012.

[19] B. M. O. Al-thobaiti, I. I. M. Abosolaiman, M. H. M. Alzahrani, S. H. A. Almalki, and M. S. Soliman, "Design and Implementation of a Reliable Wireless Real-Time Home Automation System Based on Arduino Uno Single-Board Microcontroller," International Journal of Control, Automation and Systems, vol. 3, July 2014.