Cognitive Infocommunications:
CogInfoCom
Peter Baranyi
3DICC Laboratory, the Consortium of
Budapest University of Technology and Economics (BME) -
Department of Telecommunication and Media Informatics (TMIT)
Computer and Automation Research Institute (MTA SZTAKI) -
Hungarian Academy of Sciences
Email: [email protected]
Adam Csapo
3DICC Laboratory, the Consortium of
Budapest University of Technology and Economics (BME) -
Department of Telecommunication and Media Informatics (TMIT)
Computer and Automation Research Institute (MTA SZTAKI) -
Hungarian Academy of Sciences
Email: [email protected]
Abstract—In recent years, a considerable amount of research has been dedicated to the integration of artificial cognitive functionalities into informatics. With the immense growth in the volume of cognitive content handled by both artificial and natural cognitive systems, the scientific treatment of new and efficient communication forms between such cognitive systems is inevitable. In this paper, we provide the first definition of cognitive infocommunications, a multidisciplinary field which aims to expand the information space between communicating cognitive systems (artificial or otherwise). Following this definition, we specify the modes and types of communication which make up cognitive infocommunications. Through a number of examples, we describe what is expected from this new discipline in further detail.
I. INTRODUCTION
The idea that the information systems we use need to
be equipped with artificial cognitive functionalities has
culminated in the creation of cognitive informatics [1], [2].
With the strong support of cognitive science, results in cogni-
tive informatics are contributing to the creation of more and
more sophisticated artificially cognitive engineering systems.
Given this trend, it is rapidly becoming clear that the amount
of cognitive content handled by our engineering systems is
reaching a point where the communication forms necessary to
enable interaction with this content are becoming more and
more complex [3], [4], [5], [6], [7].
The inspiration to create engineering systems capable of
communicating with users in natural ways is not new. It is
one of the primary goals of affective computing to endow
information systems with emotion, and to enable them to
communicate these emotions in ways that resonate with the
human users [8], [9]. On the other hand, there is a whole host
of research fields which concentrate less on modeling (human)
psychological emotion and aim instead to allow users to have a more
tractable and pleasurable interaction with machines (e.g., hu-
man computer interaction, human robot interaction, interactive
systems engineering) [10], [11]. Further fields specialize in the
communication of hidden parameters to remote environments
(e.g., sensory substitution and sensorimotor extension in engi-
neering applications, iSpace research, multimodal interaction)
[12], [13], [14], [15], [16], [17], [18].
In recent years, several applications have appeared in the
technical literature which combine various aspects of the
previously mentioned fields, but also extend them in significant
ways (e.g., [19], [20], [21], [22], [23], [24], [25], [18], [26],
[27], [13], [28]). However, in these works, the fact that a new
research field is emerging is only implicitly mentioned. The
goal of this paper is to provide a concise but clear definition
of cognitive infocommunications, and to further demonstrate
through examples what is expected from results within this
research field.
II. DEFINITION
Cognitive infocommunications (CogInfoCom)
investigates the links between the research areas of
infocommunications, informatics and cognitive sciences,
as well as the various fields which have emerged as
a combination of these sciences. The primary goal of
CogInfoCom is to provide a complete view of how brain
processes can be merged with infocommunications devices
so that the cognitive capabilities of the human brain may not
only be efficiently extended through these devices, irrespective
of geographical distance, but may also be efficiently matched
with the capabilities of any artificially cognitive system. This
merging and extending of cognitive capabilities is targeted
CINTI 2010 • 11th IEEE International Symposium on Computational Intelligence and Informatics • 18–20 November, 2010 • Budapest, Hungary
978-1-4244-9280-0/10/$26.00 ©2010 IEEE
towards engineering applications in which any combination
of artificial and biological cognitive systems are required to
work together.
We define two important dimensions of cognitive infocom-
munications: the mode of communication, and the type of
communication.
The mode of communication refers to the actors at the two
endpoints of communication:
• Intra-cognitive communication: The mode of com-
munication is intra-cognitive when information transfer
occurs between two cognitive beings with equivalent cognitive
capabilities (e.g., between two humans).
• Inter-cognitive communication: The mode of communi-
cation is inter-cognitive when information transfer occurs
between two cognitive beings with different cognitive
capabilities (e.g., between a human and an artificially
cognitive system).
The type of communication refers to the type of informa-
tion that is conveyed between the two actors, and the way in
which this is done:
• Sensor-sharing communication: The type of commu-
nication is sensor-sharing when the sensory information
obtained or experienced by each of the actors is merely
transferred to the other end of the infocommunications
line, and therefore the same sensory modality is used on
both ends to perceive the information.
• Sensor-bridging communication: The type of commu-
nication is sensor-bridging when the sensory information
obtained or experienced by each of the actors is not
only transferred to the other end of the line, but also
reallocated and transformed to an appropriate sensory
modality on the receiver end. A sensor-bridging appli-
cation uses the plasticity of biological cognitive systems
to create an effective matching between the properties of
the remotely obtained information (e.g., the number of
its dimensions, its density and its communication speed,
etc.) and the properties of the receiving sensory modality
(e.g., the number of perceptible dimensions using the
modality, the resolution with which the modality can be
used to perceive each dimension, and the speed at which
the modality can process the information, etc.).
Sensor-bridging communication also includes scenarios
in which the two communicating actors are in a master-
slave relationship. In such cases, the master can be
a human user, and the slave can be a simple sensor
that performs remote sensing and/or preprocessing tasks
(therefore, the slave does not implement an autonomous
cognitive system at all). Such communication could serve
to allow the human operator to directly sense information
which is not normally directly perceptible through the
sensors of his/her cognitive system, and therefore, an
efficient mapping may be created between the highly
sophisticated organization of cognitive content in the
environment and the possibilities afforded by the human
nervous system for the sensing and representation of this
cognitive content.
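The property-matching step described above can be made concrete with a small sketch. The following Python fragment is our illustration only; the definition prescribes no data structures or thresholds, and every name here (`InfoProperties`, `ModalityProperties`, `select_modality`, the numeric fields) is hypothetical. It phrases the modality-selection part of sensor bridging as a compatibility check between the properties of the remotely obtained information and those of the candidate receiving modalities.

```python
# Illustrative sketch (not from the paper): sensor bridging's matching of
# information properties to receiving-modality properties, reduced to the
# two dimensions named in the text (dimensionality and speed).

from dataclasses import dataclass

@dataclass
class InfoProperties:
    dimensions: int        # number of dimensions of the remote information
    update_rate_hz: float  # speed at which the information changes

@dataclass
class ModalityProperties:
    name: str
    perceptible_dimensions: int  # dimensions perceivable through this modality
    max_rate_hz: float           # fastest rate the modality can follow

def compatible(info: InfoProperties, modality: ModalityProperties) -> bool:
    """A candidate modality is suitable only if it can represent every
    dimension of the information, at the speed the information changes."""
    return (modality.perceptible_dimensions >= info.dimensions
            and modality.max_rate_hz >= info.update_rate_hz)

def select_modality(info, modalities):
    """Return the first modality that can carry the information, or None
    if no perceptible reallocation exists."""
    for m in modalities:
        if compatible(info, m):
            return m
    return None
```

For example, a two-dimensional signal changing at 20 Hz would, under these toy numbers, be reallocated to audition rather than to a slower one-dimensional vibrotactile channel.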
Remarks
1) Many of the basic ideas behind CogInfoCom are not
new, in the sense that different aspects of their key
points have appeared, and are being actively researched,
in several existing areas of IT (e.g. affective computing,
human computer interaction, human robot interaction,
sensory substitution, sensorimotor extension, iSpace
research, interactive systems engineering, ergonomics,
etc.)
2) CogInfoCom should not be confused with computa-
tional neuroscience or computational cognitive model-
ing, which can mainly be considered as a very impor-
tant set of modeling tools for cognitive sciences (thus
indirectly for CogInfoCom), but have no intention of
directly serving engineering systems.
3) A sensor-sharing application of CogInfoCom is novel in
the sense that it extends traditional infocommunications
by conveying any kind of signal normally perceptible
through the actor’s senses to the other end of the commu-
nication line. The transferred information may describe
not only the actor involved in the communication, but
also the environment in which the actor is located.
The key determinant of sensor-sharing communication
is that the same sensory modality is used to perceive
the sensory information on both ends of the infocom-
munications line.
4) Sensor bridging can be taken to mean not only the way
in which the information is conveyed (i.e., by changing
sensory modality), but also the kind of information that
is conveyed. Whenever the transferred information type
is imperceptible to the receiving actor (e.g., because its
cognitive system is incompatible with the information
type) the communication of information will necessarily
occur through sensor bridging.
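The decision rule in Remarks 3 and 4 can be written out directly. The sketch below is our encoding, not the authors'; the function name and the string labels are hypothetical. It captures the key determinant: if the receiving actor can perceive the transferred information type through the same modality in which it was obtained, the type is sensor-sharing; otherwise the information must be reallocated, so the communication necessarily occurs through sensor bridging.

```python
# Hypothetical encoding of Remarks 3 and 4: the communication type follows
# from whether the source modality is among the receiver's perceptible ones.

def communication_type(source_modality: str,
                       perceptible_modalities: set) -> str:
    """Sensor sharing requires the same modality on both ends; any
    information type imperceptible to the receiver must be bridged."""
    if source_modality in perceptible_modalities:
        return "sensor-sharing"
    return "sensor-bridging"
```

For instance, sound conveyed to a human's auditory system stays sensor-sharing, while a signal with no corresponding human modality (such as another person's pulse) can only be communicated via sensor bridging.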
III. DISCUSSION
In this section, we examine the research areas treated within
CogInfoCom from two different points of view: the research
historical view and the cognitive informatics view.
A. Historical View
Traditionally, the research fields of informatics, media,
and communications were very different areas, treated by
researchers from significantly different backgrounds. As a
synthesis between pairs of these three disciplines, the fields of
infocommunications, media informatics and media communi-
cations emerged in the latter half of the 20th century (figure
1). The past evolution of these disciplines points towards their
convergence in the near future, given that modern network
services aim to provide a more holistic user experience, which
presupposes achievements from these different fields [29],
[30]. In place of these research areas, with the enormous
growth in scope and generality of cognitive sciences in the
Fig. 1. The three fields of media, informatics and communications originally created separate theories, but are gradually morphing into one field today. From a research historical point of view, CogInfoCom is situated in the region between cognitive informatics and cognitive communications.
past few decades, the new fields of cognitive media [31],
[32], cognitive informatics and cognitive communications [33],
[34] are gradually emerging. In a way analogous to the evo-
lution of infocommunications, media informatics and media
communications, we are seeing more and more examples of
research achievements which can be categorized as results
in cognitive infocommunications, cognitive media informatics
and cognitive media communications, even if – as of yet –
these fields are not always clearly defined [35], [36], [19],
[20], [24], [25], [37], [13], [38], [31].
The primary goal of CogInfoCom is to use information
theoretical methods to synthesize research results in some
of these areas, while aiming primarily to make use of these
synthesized results in the design of engineering systems. It
is novel in the sense that it views both the medium used
for communication and the media which is communicated as
entities which are interpreted by a cognitive system.
B. Cognitive Informatics View
Cognitive informatics (CI) is a research field which was
created in the early 21st century, and which pioneered the
adoption of research results in cognitive sciences within in-
formation technology [2], [1]. The main purpose of CI is
to investigate the internal information storing and processing
mechanisms in natural intelligent systems such as the human
brain. Much like CogInfoCom, CI also aims primarily to create
numerically tractable models which are well grounded from
an information theoretical point of view, and are amenable
to engineering systems. The key difference between CI and
CogInfoCom is that while the results of CI largely converge
towards and support the creation of artificial cognitive sys-
tems, the goal of CogInfoCom is to enable these systems to
communicate with each other and their users efficiently.
Thus, CogInfoCom builds on a large part of results in
CI, since it deals with the communication space between the
(a) Intra-cognitive infocommunications
(b) Inter-cognitive infocommunications
Fig. 2. Cognitive informatics view of CogInfoCom. The figure on the top shows a case of intra-cognitive infocommunication, and demonstrates that while traditional telecom deals with the distance-bridging transfer of raw data (not interpreted by any cognitive system), cognitive infocommunications deals with the endpoint-to-endpoint communication of information. The figure on the bottom shows a case of inter-cognitive infocommunication, when two cognitive systems with different cognitive capabilities are communicating with each other. In this case, autonomous cognitive systems as well as remote sensors (sensorimotor extensions, as described in the definition of the sensor-bridging communication type) require the use of a communication adapter, while biological cognitive systems use traditional telecommunications devices.
human cognitive system and other natural or artificial cognitive
systems. A conceptual view of how CogInfoCom builds on CI
and traditional telecommunications can be seen in figure 2.
IV. EXAMPLES
We provide basic examples of the four specific combinations
of individual modes and types of communication, and one
more complex example which uses a combination of commu-
nication modes and types.
A. Intra-cognitive sensor-sharing communication
An example of intra-cognitive sensor-sharing communica-
tion is when two humans communicate through Skype or
some other telecommunication system, and a large variety of
information types (e.g. metalinguistic information and back-
ground noises through sound, gesture-based metacommunica-
tion through video, etc.) are communicated to both ends of the
line. In more futuristic applications, information from other
sensory modalities (e.g. smells through the use of electronic
noses and scent generators, tastes using equipment not yet
available today) may also be communicated. Because the
communicating actors are both human, the communication
mode is intra-cognitive, and because the communicated in-
formation is shared using the same cognitive subsystems (i.e.,
the same sensory modalities) on both ends of the line, the
type of communication is sensor-sharing. The communication
of such information is significant not only because the users
can feel physically closer to each other, but also because the
sensory information obtained at each end of the line (i.e.,
information which describes not the actor, but the environment
of the actor) is shared with the other end (in such cases,
the communication is intra-cognitive, despite the fact that the
transferred information describes the environment of the actor,
because the environment is treated as a factor which has an
implicit, but direct effect on the actor).
B. Intra-cognitive sensor-bridging communication
An example of intra-cognitive sensor-bridging communi-
cation is when two humans communicate through Skype or
some other telecommunication system, and each actor’s pulse
is transferred to the other actor using a visual representation
comprised of a blinking red dot, or the breath rate of each actor
is transferred to the other actor using a visual representation
which consists of a discoloration of the screen. The frequency
of the discoloration could symbolize the rate of breathing, and
the extent of discoloration might symbolize the amount of air
inhaled each time. (Similar ideas are investigated in, e.g. [26],
[18]). Because the communicating actors are both human, the
communication mode is intra-cognitive. Because the sensory
modality used to perceive the information (visual system)
is different from the modality used to normally perceive
the information (it is questionable if such a modality even
exists, because we don’t usually feel the pulse or breath rate
of other people during normal conversation), we say that
the communication is sensor-bridging. The communication of
such parameters is significant in that they help further describe
the psychological state of the actors. Due to the fact that
such parameters are directly imperceptible even in face-to-face
communication, the only possibility is to convey them through
sensor bridging. In general, the transferred information is
considered cognitive because the psychological state of the
actors does not depend on this information in a definitive way,
but when interpreted by a cognitive system such as a human
actor, the information and its context together can help create a
deeper understanding of the psychological state of the remote
user.
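The pulse and breathing mappings in the example above admit a simple quantitative sketch. The functions below are our illustration, not the authors' implementation; the unit choices (one blink per heartbeat, a 3-litre normalizing lung volume) are assumptions made only to keep the example concrete.

```python
# Illustrative sketch of the intra-cognitive sensor-bridging example:
# pulse -> blinking red dot, breath -> screen discoloration.

def blink_period_s(heart_rate_bpm: float) -> float:
    """One blink per heartbeat: a 60 bpm pulse yields a 1 s blink period."""
    return 60.0 / heart_rate_bpm

def discoloration(breath_rate_bpm: float, inhaled_volume_l: float,
                  max_volume_l: float = 3.0):
    """The frequency of the discoloration symbolizes the rate of breathing,
    and its extent (0..1) symbolizes the amount of air inhaled each time."""
    extent = min(inhaled_volume_l / max_volume_l, 1.0)
    return breath_rate_bpm / 60.0, extent
```

A resting pulse of 60 bpm thus maps to a dot blinking once per second, and a 12-breaths-per-minute rhythm to a discoloration cycling at 0.2 Hz.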
C. Inter-cognitive sensor-sharing communication
An example of inter-cognitive sensor-sharing communica-
tion might include the transfer of the operating sound of
a robot actor, as well as a variety of background scents
(using electronic noses and scent generators) to a human actor
controlling the robot from a remote teleoperation room. The
operating sound of a robot actor can help the teleoperator gain
a good sense of the amount of load the robot is dealing with,
how much resistance it is encountering during its operation,
etc. Further, the ability to perceive smells from the robot’s
surroundings can augment the teleoperator’s perception of
possible hazards in the robot’s environment. A further example
of inter-cognitive sensor sharing would be the transfer of direct
force feedback through e.g. a joystick. The communication in
these examples is inter-cognitive because the robot’s cognitive
system is significantly different from the human teleopera-
tor’s cognitive system. Because the transferred information is
conveyed directly to the same sensory modality, the commu-
nication is also sensor-sharing. Similar to the case of intra-
cognitive sensor-sharing, the transfer of such information is
significant because it helps further describe the environment
in which the remote cognitive system is operating, which has
an implicit effect on the remote cognitive system.
D. Inter-cognitive sensor-bridging communication
As information systems, artificial cognitive systems
and the virtual manifestations of these systems (which are
gaining wide acceptance in today's engineering systems, e.g.
as in iSpace [13]) become more and more sophisticated,
their operation and the way in which they
organize complex information are, in many cases, essentially
inaccessible to the human perceptual system and
the information representation it uses. For this reason, inter-
cognitive sensor bridging is perhaps the most complex area of
CogInfoCom, because it relies on a sophisticated combination
of a number of fields from information engineering and
infocommunications to the various cognitive sciences.
A rudimentary example of inter-cognitive sensor-bridging
communication that is already in wide use today is the
collision-detection system available in many cars which plays
a frequency modulated signal, the frequency of which depends
on the distance between the car and the (otherwise not directly
visible) car behind it. In this case, auditory signals are used to
convey spatial (visual) information. A further example could
be the use of the vibrotactile system to provide force feedback
through axial vibrations (this is a commonly adopted approach
in various applications, from remote vehicle guidance to
telesurgery, e.g. [39], [40], [41], [42], [43]). Force feedback
through axial vibration is also very widespread in gaming,
because with practice, players can easily adapt to the
signals and come to interpret them as if they corresponded
to a real collision with an object or someone else’s body [44],
[45]. It is important to note, however, that the use of vibrations
is no longer limited to the transfer of information on collisions
or other simple events, but is also used to communicate more
complex information, such as warning signals to alert the
user’s attention to events whose occurrence is deduced from a
combination of events with a more complex structure (e.g.,
vibrations of smart phones to alert the user of suspicious
account activity, etc.). Such event-detection systems can be
powerful when combined with iSpace [23].
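The distance-to-frequency mapping of the parking-assistant example can be sketched in a few lines. The linear interpolation and all numeric constants below are our assumptions; real systems differ, and the paper specifies no particular mapping.

```python
# Hedged sketch of the collision-warning mapping: the repetition frequency
# of the auditory signal rises as the distance to the obstacle shrinks.

def beep_frequency_hz(distance_m: float,
                      min_d: float = 0.3, max_d: float = 1.5,
                      min_f: float = 1.0, max_f: float = 10.0) -> float:
    """Map distance in [min_d, max_d] linearly onto a beep repetition
    frequency in [max_f, min_f]; clamp outside that range."""
    if distance_m <= min_d:
        return max_f
    if distance_m >= max_d:
        return min_f
    t = (distance_m - min_d) / (max_d - min_d)  # 0 at closest, 1 at farthest
    return max_f + t * (min_f - max_f)
```

Under these toy constants, an obstacle at 0.9 m (the midpoint) yields a 5.5 Hz beep rate, conveying spatial information through audition exactly as in the sensor-bridging description above.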
Finally, more complex examples of sensor bridging in
Fig. 3. Scenario for the complex example, in which two remote telesurgeons are communicating with each other and the telesurgical devices they are using to operate on a patient.
inter-cognitive communication might include the use of
electrotactile arrays placed on the tongue to convey visual
information received from a camera placed on the forehead
(as in [16]), or the transfer of a robot actor’s tactile percepts
(as detected by e.g. a laser profilometer) using abstract sounds
on the other end of the communication line. In [46], relatively
short audio signals (i.e., 2-3 seconds long) are used to convey
abstract tactile dimensions such as the softness, roughness,
stickiness and temperature of surfaces. The importance of haptic
feedback in virtual environments cannot be overstated [47].
The type of information conveyed through sensor bridging,
the extent to which this information is abstract and the sensory
modality to which it is conveyed are all open to research. As re-
searchers obtain closer estimates to the number of dimensions
each sensory modality is sensitive to, and the resolution and
speed of each modality’s information processing capabilities,
research and development in sensory substitution will surely
provide tremendous improvements to today’s engineering sys-
tems.
E. Complex example
Let us consider a scenario where a telesurgeon in location A
is communicating with a telesurgical robot in remote location
B, and another telesurgeon in remote location C. At the same
time, let us imagine that the other telesurgeon (in location C)
is communicating with a different telesurgical robot, also in
remote location B (in much the same way as a surgical assis-
tant would perform a different task on the same patient), and
the first telesurgeon in location A (figure 3). In this case, both
teleoperators are involved in one channel of inter-cognitive and
one channel of intra-cognitive communication. Within these
two modes, examples of sensor sharing and sensor bridging
might occur at the same time. Each telesurgeon may see a
camera view of the robot they are controlling, feel the limits
of their motions through direct force feedback, and hear the
soft, but whining sound of the operating robot through direct
sound transfer. These are all examples of sensor-sharing inter-
cognitive communication. The transmitted values of the operated
patient’s blood pressure and heart rate are also examples of
sensor-sharing inter-cognitive communication (they are inter-
cognitive, because the transmission of information is effected
through the communication links with the robot, and they
are sensor-sharing, because they are presented in the same
graphical form in which blood pressure and heart rhythm
information are normally displayed). At the same time, infor-
mation from various sensors on the telesurgical robot might be
transmitted to a different sensory modality of the teleoperator
(e.g., information from moisture sensors using pressure applied
to the arm, etc.), which would serve to augment the telesur-
geon’s cognitive awareness of the remote environment, and
can be considered as sensor-bridging communication resulting
in an augmented form of telepresence. Through the intra-
cognitive mode of communication, the two teleoperators can
obtain information on each other’s psychological state and
environment. Here we can also imagine both sensor-sharing and
sensor-bridging types of communication, all of which can
directly or indirectly help raise each telesurgeon’s attention
to possible problems or abnormalities the other telesurgeon is
experiencing.
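The channel structure of this scenario can be written out as data. The encoding below is ours, not the authors'; the actor names and dictionary keys are hypothetical labels chosen to mirror Fig. 3, where each link carries one mode and may exhibit both communication types simultaneously.

```python
# Illustrative encoding of the complex telesurgery scenario: each channel
# records its endpoints, its mode, and the types that may occur over it.

channels = [
    {"from": "surgeon_A", "to": "robot_B1", "mode": "inter-cognitive",
     "types": ["sensor-sharing", "sensor-bridging"]},
    {"from": "surgeon_C", "to": "robot_B2", "mode": "inter-cognitive",
     "types": ["sensor-sharing", "sensor-bridging"]},
    {"from": "surgeon_A", "to": "surgeon_C", "mode": "intra-cognitive",
     "types": ["sensor-sharing", "sensor-bridging"]},
]

def channels_of(actor: str):
    """All channels in which the given actor participates."""
    return [c for c in channels if actor in (c["from"], c["to"])]
```

Querying this structure confirms the observation made above: each telesurgeon participates in exactly one inter-cognitive and one intra-cognitive channel.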
V. ACKNOWLEDGEMENT
The research was supported by the HUNOROB project
(HU0045), a grant from Iceland, Liechtenstein and Norway
through the EEA Financial Mechanism and the Hungarian Na-
tional Development Agency, as well as the ETOCOM project
(TAMOP-4.2.2-08/1/KMR-2008-0007), coordinated by BME
TMIT and MTA SZTAKI through the Hungarian National
Development Agency in the framework of the Social Renewal
Operative Programme supported by the EU and co-financed
by the European Social Fund. We would also like to thank
Prof. Gyula Sallai for his invaluable scientific advice.
REFERENCES
[1] Y. Wang and W. Kinsner, “Recent advances in cognitive informatics,” IEEE Transactions on Systems, Man and Cybernetics, vol. 36, no. 2, pp. 121–123, 2006.
[2] Y. Wang, “On cognitive informatics (keynote speech),” in 1st IEEE International Conference on Cognitive Informatics, 2002, pp. 34–42.
[3] F. Davide, M. Lunghi, G. Riva, and F. Vatalaro, “Communications through virtual technologies,” in Identity, Community and Technology in the Communication Age, IOS Press. Springer-Verlag, 2001, pp. 124–154.
[4] W. IJsselsteijn and G. Riva, “Being there: The experience of presence in mediated environments,” 2003.
[5] S. Benford, C. Greenhalgh, G. Reynard, C. Brown, and B. Koleva, “Understanding and constructing shared spaces with mixed reality boundaries,” 1998.
[6] B. Shneiderman, “Direct manipulation for comprehensible, predictable and controllable user interfaces,” in Proceedings of IUI97, 1997 International Conference on Intelligent User Interfaces. ACM Press, 1997, pp. 33–39.
[7] D. Kieras and P. G. Polson, “An approach to the formal analysis of user complexity,” International Journal of Man-Machine Studies, vol. 22, no. 4, pp. 365–394, 1985.
[8] R. Picard, Affective Computing. The MIT Press, 1997.
[9] J. Tao and T. Tan, “Affective computing: A review,” in Affective Computing and Intelligent Interaction, ser. Lecture Notes in Computer Science, J. Tao, T. Tan, and R. Picard, Eds. Springer Berlin / Heidelberg, 2005, vol. 3784, pp. 981–995.
[10] R. Baecker, J. Grudin, W. Buxton, and S. Greenberg, Readings in Human-Computer Interaction: Toward the Year 2000. Morgan Kaufmann, San Francisco, 1995.
[11] C. Bartneck and M. Okada, “Robotic user interfaces,” in Human and Computer Conference (HC’01), Aizu, Japan, 2001, pp. 130–140.
[12] M. Auvray and E. Myin, “Perception with compensatory devices: from sensory substitution to sensorimotor extension,” Cognitive Science, vol. 33, pp. 1036–1058, 2009.
[13] P. Korondi and H. Hashimoto, “Intelligent space, as an integrated intelligent system (keynote paper),” in International Conference on Electrical Drives and Power Electronics, High Tatras, Slovakia, 2003, pp. 24–31.
[14] P. Korondi, B. Solvang, and P. Baranyi, “Cognitive robotics and telemanipulation,” in 15th International Conference on Electrical Drives and Power Electronics, Dubrovnik, Croatia, 2009, pp. 1–8.
[15] P. Bach-y-Rita, “Tactile sensory substitution studies,” Annals of the New York Academy of Sciences, vol. 1013, pp. 83–91, 2004.
[16] P. Bach-y-Rita, K. Kaczmarek, M. Tyler, and J. Garcia-Lara, “Form perception with a 49-point electrotactile stimulus array on the tongue,” Journal of Rehabilitation Research and Development, vol. 35, pp. 427–430, 1998.
[17] P. Arno, A. Vanlierde, E. Streel, M.-C. Wanet-Defalque, S. Sanabria-Bohorquez, and C. Veraart, “Auditory substitution of vision: pattern recognition by the blind,” Applied Cognitive Psychology, vol. 15, no. 5, pp. 509–519, 2001.
[18] L. Mignonneau and C. Sommerer, “Designing emotional, metaphoric, natural and intuitive interfaces for interactive art, edutainment and mobile communications,” Computers & Graphics, vol. 29, no. 6, pp. 837–851, 2005.
[19] M. Niitsuma and H. Hashimoto, “Extraction of space-human activity association for design of intelligent environment,” in IEEE International Conference on Robotics and Automation, 2007, pp. 1814–1819.
[20] N. Campbell, “Conversational speech synthesis and the need for some laughter,” IEEE Transactions on Audio, Speech and Language Processing, vol. 14, no. 4, pp. 1171–1178, 2006.
[21] A. Luneski, R. Moore, and P. Bamidis, “Affective computing and collaborative networks: Towards emotion-aware interaction,” in Pervasive Collaborative Networks, L. Camarinha-Matos and W. Picard, Eds. Springer Boston, 2008, vol. 283, pp. 315–322.
[22] L. Boves, L. ten Bosch, and R. Moore, “Acorns – towards computational modeling of communication and recognition skills,” in 6th IEEE International Conference on Cognitive Informatics, 2007, pp. 349–356.
[23] P. Podrzaj and H. Hashimoto, “Intelligent space as a fire detection system,” in IEEE International Conference on Systems, Man and Cybernetics (SMC ’06), vol. 3, Oct. 2006, pp. 2240–2244.
[24] O. Lopez-Ortega and V. Lopez-Morales, “Cognitive communication in a multiagent system for distributed process planning,” International Journal of Computer Applications in Technology, vol. 26, no. 1/2, pp. 99–107, 2006.
[25] N. Suzuki and C. Bartneck, “Editorial: special issue on subtle expressivity for characters and robots,” International Journal of Human-Computer Studies, vol. 62, no. 2, pp. 159–160, 2005.
[26] C. Sommerer and L. Mignonneau, “Mobile feelings – wireless communication of heartbeat and breath for mobile art,” in 14th International Conference on Artificial Reality and Telexistence (ICAT ’04), Seoul, South Korea, 2004, pp. 346–349.
[27] Y. Wilks, R. Catizone, S. Worgan, A. Dingli, R. Moore, and W. Cheng, “A prototype for a conversational companion for reminiscing about images,” Computer Speech & Language, 2010, in press.
[28] K. Morioka, J.-H. Lee, and H. Hashimoto, “Human following mobile robot in a distributed intelligent sensor network,” IEEE Transactions on Industrial Electronics, vol. 51, no. 1, pp. 229–237, 2004.
[29] B. Preissl and J. Muller, Governance of Communication Networks: Connecting Societies and Markets with IT. Physica-Verlag HD, 1979.
[30] G. Sallai, “Converging information, communication and media technologies,” in Assessing Societal Implications of Converging Technological Development, G. Banse, Ed. Edition Sigma, Berlin, 2007, pp. 25–43.
[31] M. M. Recker, A. Ram, T. Shikano, G. Li, and J. Stasko, “Cognitive media types for multimedia information access,” Journal of Educational Multimedia and Hypermedia, vol. 4, no. 2–3, pp. 183–210, 1995.
[32] R. Kozma, “Learning with media,” Review of Educational Research, vol. 61, no. 2, pp. 179–212, 1991.
[33] J. Roschelle, “Designing for cognitive communication: epistemic fidelity or mediating collaborative inquiry?” in Computers, Communication and Mental Models. Taylor & Francis, 1996, pp. 15–27.
[34] D. Hewes, The Cognitive Bases of Interpersonal Communication. Routledge, 1995.
[35] P. Baranyi, B. Solvang, H. Hashimoto, and P. Korondi, “3D internet for cognitive info-communication,” in 10th International Symposium of Hungarian Researchers on Computational Intelligence and Informatics (CINTI ’09, Budapest), 2009, pp. 229–243.
[36] H. Jenkins, Convergence Culture: Where Old and New Media Collide. NYU Press, 2008.
[37] P. Thompson, G. Cybenko, and A. Giani, “Cognitive hacking,” in Economics of Information Security, ser. Advances in Information Security, S. Jajodia, L. Camp, and S. Lewis, Eds. Springer US, 2004, vol. 12, pp. 255–287.
[38] J. Cheesebro and D. Bertelsen, Analyzing Media: Communication Technologies as Symbolic and Cognitive Systems. The Guilford Press, 1998.
[39] S. Cho, H. Jin, J. Lee, and B. Yao, “Teleoperation of a mobile robot using a force-reflection joystick with sensing mechanism of rotating magnetic field,” IEEE/ASME Transactions on Mechatronics, vol. 15, no. 1, pp. 17–26, Feb. 2010.
[40] N. Gurari, K. Smith, M. Madhav, and A. Okamura, “Environment discrimination with vibration feedback to the foot, arm, and fingertip,” in IEEE International Conference on Rehabilitation Robotics (ICORR 2009), 2009, pp. 343–348.
[41] H. Xin, C. Burns, and J. Zelek, “Non-situated vibrotactile force feedback and laparoscopy performance,” in IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006), 2006, pp. 27–32.
[42] R. Schoonmaker and C. Cao, “Vibrotactile force feedback system for minimally invasive surgical procedures,” in IEEE International Conference on Systems, Man and Cybernetics (SMC ’06), vol. 3, 2006, pp. 2464–2469.
[43] A. Okamura, J. Dennerlein, and R. Howe, “Vibration feedback models for virtual environments,” in Proceedings of the 1998 IEEE International Conference on Robotics and Automation, vol. 1, May 1998, pp. 674–679.
[44] K. Minamizawa, S. Fukamachi, H. Kajimoto, N. Kawakami, and S. Tachi, “Wearable haptic display to present virtual mass sensation,” in SIGGRAPH ’07: ACM SIGGRAPH 2007 Sketches. New York, NY, USA: ACM, 2007, p. 43.
[45] S.-H. Choi, H.-D. Chang, and K.-S. Kim, “Development of force-feedback device for PC-game using vibration,” in ACE ’04: Proceedings of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology. ACM, 2004, pp. 325–330.
[46] A. Csapo and P. Baranyi, “An interaction-based model for auditory substitution of tactile percepts,” in IEEE International Conference on Intelligent Engineering Systems (INES), 2010, in press.
[47] G. Robles-De-La-Torre, “The importance of the sense of touch in virtual and real environments,” IEEE Multimedia, vol. 13, no. 3, pp. 24–30, Jul. 2006.