
Coding Scheme in Gesture Analysis

Liang Zhou

Dr. Manolya Kwa

What are Gestures and G-Analysis?

• A gesture is a distinctive motion that expresses a specific meaning derived from thought (another form of speech).

• Gesture analysis (G-Analysis) is the study of human behaviour whose purpose is to understand human actions and the thoughts behind them.

Why G-Analysis?

• Computer-based gesture analysis has become an important area of scientific study (e.g., sign language).

• It plays a key role in human-computer interaction (HCI) research.

• It has significant scientific value.

Why Code Gestures?

• Coding gestures is the process of translating attributes or properties of gestures (e.g., finger form) into a form that researchers can systematically analyse.

• Gesture coding is important to HCI; one of the most important concepts in HCI is direct manipulation.

Problem

In cognitive studies, the abstract concept of a coding scheme cannot be implemented directly in HCI.

Reasons:

1. Abstract codes

2. Ambiguous expressions

Cognitive Actions

Physical Actions:

L-Action, M-Action, D-Action

Perceptual Action

Functional Action

Conceptual Action: Goals etc.

Cognitive Process

Why Code Gestures in CA?

• 1. Speech and gesture are parts of the same system.

• 2. Gestures convey thought more clearly than sentences and words (better understanding).

• 3. Gestures express thought instantly, whereas speech expresses it systematically.

• 4. Gesture recognition tools are mature and widely used in gesture analysis.

Hypothesis

Challenge

1. How to prove it?

2. How to code the gestures?

3. How to improve the reliability of the coding scheme?

4. How to obtain sufficient data?

Method

• 4 participants, 2 tasks
• Task 1: cognitive process phase
• Task 2: gesture capture phase

1. Cognitive process: participants sketch a chair while the whole phase is recorded.

2. Gestures and motions are captured by two cameras: one for body motion, one for hand motion.

Experiment

Chair Object

Data Collection

1. Sketching recordings

2. Gesture images

[Figure 13: Graphic of Participant 1 (elements E1-E6 plotted against steps)]

[Figure 14: Graphic of Participant 2 (elements E1-E6 plotted against steps)]

Coding Scheme

• Motion::

• Attributes: M_Name, Hand_type, frequency (default), TimeStamp, Position, Direction, FingerForm

• M_Name: {}

• Hand_type: {LH, RH, TH}

• frequency: {Single, Repeated}

• TimeStamp: {ST, FT}

• Position: {Left, Lower-Left, Upper-Left, Center, Right, Lower-Right, Upper-Right, Lower}

• Direction: {(1,0), (1,1), (-1,0), …}

Example

MotionName | Hand Type | ST | FT | Position   | Direction | Rotation
Rise       | L         | 0s | 1s | Upper-Left | (1,1)     | 0
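A minimal sketch of how such a coded motion might be represented as a data record, assuming Python dataclasses; the class and field names are illustrative, not part of the original scheme.

```python
# A possible encoding of the Motion record above; class and field names
# are illustrative, not part of the original coding scheme.
from dataclasses import dataclass
from enum import Enum
from typing import Tuple

class Hand(Enum):
    LH = "LH"   # left hand
    RH = "RH"   # right hand
    TH = "TH"   # two hands

class Frequency(Enum):
    SINGLE = "Single"
    REPEATED = "Repeated"

@dataclass
class Motion:
    m_name: str                 # M_Name, e.g. "Rise"
    hand_type: Hand             # {LH, RH, TH}
    st: float                   # TimeStamp: start time, in seconds
    ft: float                   # TimeStamp: finish time, in seconds
    position: str               # e.g. "Upper-Left", "Center"
    direction: Tuple[int, int]  # e.g. (1, 1) or (-1, 0)
    frequency: Frequency = Frequency.SINGLE  # default, per the scheme
    finger_form: str = ""       # FingerForm is left open in the scheme

# The worked example from the slide: "Rise", left hand, 0s-1s,
# Upper-Left, direction (1, 1).
rise = Motion("Rise", Hand.LH, 0.0, 1.0, "Upper-Left", (1, 1))
```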

Coding Scheme

• Gesture ::

• Attributes: Name, TimeGap, PosSS, Objects

• Name: (GestureName)

• TimeGap: (time in seconds)

• PosSS: (distance)

• Objects: (parts of the chair)
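A companion sketch for the gesture record, under the same assumption of Python dataclasses; the example values are made up for illustration.

```python
# The Gesture record from the attribute list above; units for TimeGap
# (seconds) and PosSS (a distance) follow the slide, and the example
# values are illustrative, not project data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Gesture:
    name: str                                         # GestureName
    time_gap: float                                   # TimeGap, in seconds
    pos_ss: float                                     # PosSS, a distance
    objects: List[str] = field(default_factory=list)  # parts of the chair

g = Gesture(name="Resize", time_gap=0.5, pos_ss=12.0, objects=["seat"])
```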

Tools

• Anvil can compute statistics on an annotated video sequence; the annotation elements we define in the coding scheme represent the same behaviour and are grouped together. This lets us understand the links between gestures and objects. Based on Kawali's experimental results, we can determine which kinds of gestures map to which behaviours, and whether the gestures map to one or more goals.
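To make the grouping idea concrete, here is a generic sketch (not Anvil's own API) of tallying which coded gestures refer to which chair parts; the gesture and part names are placeholders.

```python
# Sketch only, not Anvil's API: count (gesture, object) pairs from an
# annotation export to see which gestures map to which chair parts.
from collections import Counter

# (GestureName, chair part) pairs as they might appear after annotation;
# these values are placeholders, not data from the study.
coded = [
    ("Resize", "seat"),
    ("Rise", "backrest"),
    ("Resize", "seat"),
    ("Rise", "leg"),
    ("Resize", "backrest"),
]

for (gesture, part), n in Counter(coded).most_common():
    print(f"{gesture:8s} -> {part:9s}: {n}")
```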

Current Works

• We have identified 6 common gestures.

• However, because of the lack of data, we can only capture and represent them with our coding scheme.

• We have already found that some cognitive-action data show a strong correlation with gestures.
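One way such a correlation could be checked is sketched below; the per-step counts are placeholders, not data from the study.

```python
# Check a correlation between cognitive actions and gestures per step;
# the counts below are placeholders. Requires Python 3.10+.
from statistics import correlation

# cognitive actions and coded gestures observed in each sketching step
cognitive_actions_per_step = [3, 1, 4, 2, 5, 2]
gestures_per_step = [2, 1, 3, 2, 4, 1]

r = correlation(cognitive_actions_per_step, gestures_per_step)
print(f"Pearson r = {r:.2f}")
```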

What we found

• One of the most frequently occurring gestures:

• "This part of the chair I think is too small or big."

• The hands focus on the size of the part being drawn; the degree of size and amount is expressed.

• P1, P3, P4, P5

Conclusion

• We provide a new methodology for understanding cognitive actions and show its potential for implementing HCI in the design field.

• Future work:

• 1. The coding scheme needs to cover additional areas, for example facial and head movements.

• 2. More tasks in the experiment will provide sufficient data to analyse and explore gestures.

Thanks

Questions?