

As reported by Schick (2006), "role play and direct quotation" are acquired early, and around age two "handling classifiers" are preferred over "entity classifiers" in spontaneous production. Within the Real-Space blending framework, handling classifier predicates, role play, and direct quotation can be considered similar to each other in terms of the "source material" (body space, etc.) and the cognitive abilities recruited: there is no partitioning (Dudis 2007) of the body to represent multiple visible elements. The torso, face, and arms are all considered to belong to the subject. In short, the same machinery is recruited to produce them. Viewing handling classifier predicates, constructed action, and constructed dialogue as belonging to similar types of depiction (see Codes A & B in the flowchart) may present us with clearer patterns illustrating the course of acquisition for depiction. This can be achieved by using Dudis' approach to analyzing what is required to produce these types of depiction. For example, instances of depiction involving both the depicted subject of conception (Langacker 2008; aka |subject|) and "entity classifiers" can be viewed as more complex than those with "handling classifiers," because partitioning is utilized to represent a distinct visible object mapped onto a body part (i.e. a hand, face, etc. no longer "belongs" to the depicted subject of conception). With this approach, we are presented with specific predictions that can be tested. For example, assuming that simpler constructions appear earlier than complex ones, we can predict that Code B would appear prior to Code D. This translates into clear hypotheses that can be tested by coding children's utterances for depiction.
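The order-of-emergence prediction can be made concrete with a small sketch. Assuming each child utterance has been assigned a depiction code and an age of production, the hypothesis "Code B appears before Code D" reduces to comparing first-appearance ages. The code and data below are illustrative only, not drawn from the actual corpus:

```python
# Illustrative sketch: does Code B emerge before Code D?
# Ages use the years;months notation used in this poster (e.g. "2;07").

def age_to_months(age):
    """Convert a 'years;months' string to a total month count."""
    years, months = age.split(";")
    return int(years) * 12 + int(months)

def first_appearance(utterances, code):
    """Earliest age (in months) at which a depiction code is attested."""
    ages = [age_to_months(a) for a, c in utterances if c == code]
    return min(ages) if ages else None

# Hypothetical coded utterances: (age, depiction code).
coded = [("1;07", "B"), ("1;10", "B"), ("2;07", "B"), ("2;09", "D")]

b_first = first_appearance(coded, "B")  # 19 months
d_first = first_appearance(coded, "D")  # 33 months
assert b_first < d_first  # consistent with the complexity-based prediction
```

With real transcripts, the same comparison would be run over all coded utterances per child, and a code never attested simply yields no first-appearance age.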

    Emergence of Depiction in Acquisition of American Sign Language

    Rationale

    Discussion

    Methodology

    Depiction in Children

    Acknowledgements

    Conclusions

    The three examples from JIL illustrate how early depiction appears (at least by 1;07) and her strategies as she begins to integrate depiction more tightly into her utterances (i.e. sequentially), and finally an adult-like utterance by the age of 2;09. This set of examples contains observations similar to those of Anderson & Reilly (1998), where children produce nonmanual signs with incorrect scope. Additionally, Schick's (2006) observation of a preference for handling classifier predicates at early ages was also confirmed within the dataset transcribed for this project.

    When viewed through the lens of the Real-Space framework, and when considering the specific cognitive abilities identified by Dudis, we can see a clearer course of development for depiction. The sequence of JIL's utterances suggests that she comes to produce increasingly complex forms of depiction, and her errors can be accounted for once we consider the cognitive abilities that children must recruit in order to produce certain types of depiction.

    Funding for this research is provided by the National Science Foundation, Grant Number SBE-0541953, to the Visual Language and Visual Learning, Science of Learning Center (VL2). Child videos provided by Diane Lillo-Martin (NIDCD DC-00183). Thanks to Dr. Dudis & Dr. Chen Pichler for their support.

    This poster draws from fifteen video-recorded sessions of naturalistic play with JIL, a deaf child of deaf parents whose primary language is ASL, from the ages of 1;07 to 1;10 and 2;07 to 2;09. Each session typically lasts around 45 minutes and was transcribed for instances of depiction. Below are three sets of screen captures illustrating the development of partitioning, which is one of several sequences observed in the emergence of depiction in child acquisition of ASL.

    References

    Anderson, D. & J. S. Reilly. 1998. PAH! The acquisition of adverbials in ASL. Sign Language & Linguistics 1(1/2), 117-142.
    Dudis, P. 2007. Types of depiction in ASL. Gallaudet University: Manuscript.
    Langacker, R. 2008. Cognitive Grammar: A basic introduction. Oxford University Press.
    Newport, E. & R. Meier. 1985. Acquisition of American Sign Language. In D. Slobin (Ed.), The crosslinguistic study of language acquisition (Vol. 1, pp. 881-938).
    Schick, B. 2006. Acquiring a visually motivated sign language: Evidence from diverse learners. In B. Schick et al. (Eds.), Advances in the sign language development of deaf children.
    Slobin, D. I., et al. 2003. A cognitive/functional perspective on the acquisition of "classifiers." In K. Emmorey (Ed.), Perspectives on classifier constructions in sign languages.

    Limitations

    The data here are derived from one child. Similar patterns of partitioning and integration of elements are expected across other children, but the milestones for each development may vary considerably. Additionally, this work has not yet been able to consider the relationship between the complexity of all depiction types and their milestones. For example, Code H is simpler than Code D, but based on one observed instance of Code H, it seems to appear later than Code D. This may be attributed to the fact that Code H requires refraining from depicting |subject|.

    [Figure: depiction identification flowchart, version 4.9. The original poster renders this as a branching yes/no diagram; only its labels are recoverable here, so they are grouped below rather than shown as paths.]

    Decision questions in the diagram: Note eye gaze and facial expression: is there evidence of a |subject|? Is an event being depicted without a |subject|? Is a place or thing being depicted (concretely)? Is it life-sized or compressed? Is the space 2D (more abstract) as opposed to 3D (more concrete)? Depiction of utterance? Is |subject| the only visible element? Do |subject| & other visible |elements| share the same scale? |subject| & |extension of subject|? |subject| & |psychological experience|? |subject| and temporal aspect? Are buoys being used? Are tokens being used?

    Codes (Types A-F include many sub-types):
    Code A: several types of what is known as constructed dialogue; inner speech.
    Code B: handling classifiers, constructed action, instrument classifiers, |signer as patient|; no body partitioning, save for what is needed to depict manner (adverbial NMS) or sensation arising from physical interaction (onomatopoeic-like items).
    Code C: at least one |hand as entity| is a body part of the |subject| (limb classifiers, etc.) or a |vehicle| the |subject| uses (vehicle/person classifiers, limited); the partitioned |hand as entity| (often) can be in contact with |subject|.
    Code D: |subject| & other visible |elements| sharing the same scale.
    Code E: |subject| & |extension of subject|.
    Code F: |subject| and temporal aspect: durational aspect, unrealized inceptive aspect; verbs that exhibit temporal compression; temporal aspect constructions.
    Code G: |subject|-less versions of Types B-F.
    Code H: "scale model" of events.
    Code I: setting-only version of event depictions, or depicts dimensions of objects.
    Code J: "scale model" of settings.
    Code K: branch label not fully recoverable; associated with the |subject| & |psychological experience| question.
    Code L: buoys in use; also fragment buoys and theme buoys (e.g. "age ranking" buoy, "4 college years" buoy).
    Code M: tokens in use; tokens are materializations of concepts that are (construed as) abstract (e.g. map plane, calendar plane).

    Other labels whose placement in the diagram is not recoverable: perceived motion; fictive motion; cl.BE-AT; depiction of thought/feeling is not visible to |others|; depiction of a seen entity is distal; a non-visible |entity| cannot be manipulated; e.g. |professional hockey league game|.

    NOTES
    1. The types listed here are isolated from actual usages, which often exhibit complex sequences and interrelationships.
    2. Many types of depiction have metaphorical versions. For example, the |subject| looking up towards an |authority figure| in a depiction of utterance (Type A) may be motivated in part by the metaphor AUTHORITY IS UP.

    LEGEND
    1. filled stick figure: |subject|
    2. arrow (within illustration): |time|
    3. box: |setting|
    4. dashed stick figure: |vantage point|
    5. unfilled stick figure: signer
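The top-level branches of the flowchart can be sketched as a decision procedure. This is a simplified, hypothetical rendering of only a few branches: the codes and questions come from the poster, but the exact yes/no routing is an assumption, since the diagram's layout is not fully recoverable here.

```python
def identify_depiction(has_subject, is_utterance, subject_only_visible):
    """Toy walk through a few top-level flowchart branches.

    Routing assumptions (not verified against the original diagram):
    a depiction of utterance maps to Code A; a |subject| that is the
    only visible element maps to Code B; other cases fall through to
    the remaining code groups.
    """
    if not has_subject:
        return "see Codes G-M (|subject|-less depictions)"
    if is_utterance:
        return "A"  # constructed dialogue, inner speech, etc.
    if subject_only_visible:
        return "B"  # constructed action, handling classifiers
    return "see Codes C-F (partitioned or aspect-marked depictions)"

# e.g. a handling-classifier utterance with only |subject| visible:
print(identify_depiction(True, False, True))  # prints "B"
```

A full implementation would need one branch per decision question in the diagram; the point of the sketch is only that the flowchart defines a deterministic coding procedure.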

    Abstract

    This poster presents a small sample of recent exploratory work on the feasibility of applying Dudis (2007) to the emergence of depiction in child acquisition of ASL. The phenomena subsumed by depiction are described, the benefits of viewing children's depicting utterances as related in terms of the cognitive components recruited to produce them are discussed, and a hypothesis from the Real-Space framework is tested.

    Future work

    New work focuses on the effects of interaction between depiction and several other components, namely discourse cohesion (cf. Slobin 2006), nonmanual errors (cf. Anderson & Reilly 1998), and overgeneralization of depiction. Preliminary findings suggest that as the child comes to integrate additional elements into her signing (e.g. nonmanuals, new types of depiction, discourse cohesion), errors appear. In other words, the protracted acquisition timeline for classifier constructions is the result of the gradual integration of multiple resources.

    Depiction in Adults

    Depiction Introduced

    Depiction is the iconic representation of events, settings, and objects, distinct from the iconicity exhibited by nouns and indicating verbs (Dudis 2007). Various classifier constructions, constructed action & dialogue, tokens, and combinations of the above are all types of depiction. The flowchart below illustrates the relationships between different types of depiction. (Note: not all types of depiction are within this flowchart.) For additional examples, see the section "Depiction in Adults" below. Depictive utterances are analyzed here with the Real-Space blending framework: concepts associated with the entity being depicted are blended with Real-Space elements. For example, Codes A-F refer to instances of depiction where at least a setting and a subject of conception (Langacker 2008) of an event are integrated with the Real-Space signer, a portion of physical space, and the current setting, resulting in |subject| and |setting|. These elements exist within a third mental space, the blend. Dudis demonstrates that it is possible to describe depiction with greater precision when additional Real-Space elements and cognitive abilities are considered. Partitioning is one cognitive ability with particular relevance here: partitioning refers to the assignment of representations onto the hand(s), arm(s), face, or lips that are distinct from |subject|.

    Code A: (sequence of reported speech)
    IX-2 TOLD-ME, "YOU DO-neg THAT"
    That person told me, "You shouldn't do that."

    Code B: (handling + |subject|)
    EFFORT-OPEN PAH-OPEN
    After a bit of effort, the jar lid came open.

    Code B: (depicting example of verb agreement)
    HAND-TO-from location A to location B
    That person gave it to that person.

    Code D: (constructed action + entity classifiers)
    CAR SWERVING + BODY SWAYING
    I was swerving the car everywhere.

    Code H: (event depicted with figure & ground)
    CAR-HIT-TREE
    The car hit the tree.

    At the age of 1;07, JIL depicts a dog. No lexical items are produced, only the depiction of |subject|, which in this case is a dog that barks. This is the first recorded session with JIL, and already at this age she is able to recruit the necessary components to depict |subject|, but she does not produce any signs in conjunction with depicting |subject|.

    One year later, JIL is observed producing a depiction of |subject| and a sign in a tightly ordered sequence. At 2;07, JIL depicts the pout of a |subject|, which dissolves, and JIL then produces the verb CRY. This utterance is similar to the disjointed productions of adverbial NMS and manual signs observed by Anderson & Reilly (1998). At this point JIL has not yet mastered the ability to partition manual articulation distinct from |subject|; instead she produces them sequentially.

    At the age of 2;09, JIL produces an adult-like production of Code D. JIL produces a depicting verb STARE, which is partitioned away from |subject|.

    Inquiries should be directed to [email protected]

    Dudis et al. (2008) introduce a flowchart to aid in the identification of depiction. This section presents a test of one hypothesis born of the Real-Space framework as described in the "Rationale" section: Code B is predicted to appear prior to Code D because the latter is more complex, as it invokes the partitioning of the hand(s), arm(s), face, or lips to represent an entity distinct from |subject|. The examples below suggest this hypothesis may be true.

    Clifton Langdon Department of Linguistics, Gallaudet University