2009 Summer Institute on Youth Mentoring (SIYM): Theory, Research, Practice, and Profession


Theory, Research, Practice, Profession:

Institute objectives and themes

Thomas Keller

Introduction

- Field of youth mentoring
- Interplay of theory, research, practice
- Ideas and aims of summer institute
- Participation and feedback
- Research principles and statistical analysis
- Conceptual frameworks for mentoring and mentoring relationships

Field of youth mentoring

Origins:
- Juvenile court probation (late 1800s)
- Big Brothers Big Sisters (1904)
- The Kindness of Strangers (Freedman, 1993)

Interest and growth:
- Positive youth development
- Potential benefits
- "Fervor without infrastructure"

Continued expansion:
- MENTOR (1990)
- PPV study (1995)
- OJJDP/government funding (1990s)
- Handbook of Youth Mentoring (2005)

Development of field

Attributes of a profession (Greenwood, 1957):
- Skill based on theoretical knowledge
- Period of formal training and education
- Occupational organization
- Code of ethics or conduct
- Some form of community-sanctioned license

Continuing challenges

Expanding and solidifying base of knowledge (practice knowledge and research)

Sharing knowledge and education (bi-directional communication)

MENTOR National Research & Policy Council (September 2006): "What do practitioners need to know, and what is the best way to disseminate needed information to them?"

Interrelations of theory, research, practice

[Diagram: triangle linking Theory, Research, and Practice]

Sequence:
1. Define need/problem
2. Identify associated factors
3. Develop explanatory model
4. Create intervention based on model
5. Pilot test intervention
6. Refine intervention
7. Efficacy trial
8. Effectiveness trials
9. Dissemination

Alternative sequence

[Diagram: triangle linking Theory, Practice, and Research in an alternative sequence]

Common language

[Diagram: Idea → Action → Observation cycle]

Everyday example

What should I do?
- You have a reason or rationale based on past experience and/or logic (explanation)
- You expect a certain result (prediction)

Did it work?
- Determine result
- Confirming evidence (reinforces your idea/approach)
- Disconfirming evidence (need to remember this and try something different next time)

Other examples:
- Learning and survival
- Scientist in the crib
- Internal working model
- Reflective practice

Personal vs. formal

Personal theory:
- Intuition
- Common sense

Personal evaluation:
- Observations and recollections
- Interpretations and meanings
- Result: convince yourself (belief)

Formal theory:
- Stated propositions
- Logical case

Formal evaluation:
- Systematic collection of data
- Systematic analysis of data
- Result: convince others (evidence)

Cautions

The main difference: making it explicit, documentation

The common risk:
- Want to hold on to our ideas/beliefs
- Avoid any evidence
- Pay attention to confirming evidence
- Disregard disconfirming evidence
- Twist and stretch theory to match disconfirming evidence

Research and practice

Internal validity—study has solid design that permits strong conclusions (quality)

External validity—study conclusions apply beyond specific sample (relevance)

Different roles

[Diagram: Theory, Practice, and Research playing different roles]

Modified model

[Diagram: parallel Idea → Action → Observation cycles, one for Practice and one for Research]

Real-life examples

[Diagram: Idea → Action → Observation cycle]

Example: School-based mentoring

Opportunities and potential (Herrera, 1999):
- Recruitment of volunteers (time, convenience)
- Reach children with greater needs/challenges (school referrals)
- Fewer staff resources (less screening, easier supervision)
- Focus on academic development

Rapid growth in BBBSA (Herrera, 2004):
- 1999 = 27,000; 2002 = 90,000 (a 233% increase)

Careful evaluation of effects (Herrera et al., 2007)

Example: Summer Institute

Goal: To improve services for youth who participate in formal youth mentoring programs

Premise: A sustained dialogue between experienced professionals and researchers stimulates research with relevance to the field and enhances its translation to practical application

Strategy: A direct relationship between researchers and professionals

Model: A series of highly interactive discussions that provide an in-depth view of the research and examine its implications for program policies and practices

Model elaborated

People:
- Professionals eager for cutting-edge research and exchange of information and ideas
- Researchers who want their work to be relevant and their findings translated into improvements in services for youth

Size:
- Small-group format to encourage active exchange

Time:
- Ample time to think critically and creatively about issues and explore opportunities for innovation
- Get away and go back to school

Dynamics:
- Cohort effect
- Mixing within and between roles

Mixture model

[Diagram: Advocates, Program Leaders, and Researchers surrounding YOU]

Professional development and career

Training tied to transitions (Caplan & Curry, 2001):
- Internship: transition from student to worker; transfer knowledge from class to real life
- Entry-level: transition to professional; learn skills and tasks of position
- Leadership development: transition to leader in organization; preparation for supervision and management
- Master practitioner: transition to leader in the field; special opportunities for experienced professionals

Leaders:
- Wisdom and insight to share at institute
- Wisdom and insight to share in communities

Transfer of learning:
- Hold positions of influence
- Training and supervision of staff
- Development of program models
- Implementation of service delivery changes

Task:
- Think about new program models, program policies and procedures, training materials for program staff, training materials for volunteer mentors and youth participants

Summer Institute aims
- Contribute to the development of policy and practice in the field of youth mentoring
- Convene leaders in the field for substantive discussion of practices, policies, and new directions
- Create new networks of peer relationships among professionals and researchers from different programs and backgrounds
- Promote professional identity and commitment of participants and researchers

Moving forward

[Diagram: Idea → Action → Observation cycle]

Preliminary observations

- Support of advocacy agencies providing training and technical assistance? Yes! (support with plans and announcements)
- Interest on part of researchers? Yes! (eager to attend)
- Interest on part of professionals? Yes! (competitive application process)
- Ability to reach target participants? Yes! (look around)

Study

- Encourage reflection
- Provide feedback
- Respond to questions at end of institute
- Respond to questions after 6-12 months
- Informed consent

Questions
- What types of opportunities for collaboration among colleagues (research-practice, practice-practice, research-research) do you see emerging from the institute: a) this week and b) continuing into the future?
- What types of initiatives or changes could you undertake upon returning to your program?
- How will you share the new information and ideas with others in your agency or community?

Assignment

Assignment for each presentation:
- Summarize (a few sentences)
- Follow-up questions (2-3)
- Implications for program (example)

SIYM Themes

Year 1: School-based Mentoring
- Reprise in year 3: symposium on Monday, guest speakers on Friday

Year 2: Diversity in Mentoring

Year 3: Use of Evidence for Practice
- MENTOR—Elements of Effective Practice, 3rd Ed.
- BBBSA program models (SBM, CBM)
- New meta-analysis
- Standards and accreditation discussions

Research-Practice Survey

WT Grant Distinguished Fellows: use of high-quality research in practice
- Marc Wheeler, VP of Programs, BBBS Alaska (SIYM alum), to PSU
- David DuBois, to BBBSA

Web-based survey:
- Opinion survey
- Needs assessment
- Descriptive data

Survey: sample

Responses: n = 455

Positions:
- ED/CEO (39%)
- VP of Program/Program Director (18%)
- Program Coordinator (25%)

Mostly with BBBS agencies (54%)
Average employment in field = 8.9 years

Survey: current assessment

Current level of evidence-based (EB) decision making in the field:
- A lot more needed (30%)
- Somewhat more needed (55%)
- About right (10%)
- Less (5%)

Personally comfortable using EB decision making:
- Very/somewhat uncomfortable (26%)
- Neutral (9%)
- Somewhat comfortable (33%)
- Very comfortable (32%)

Survey: type of data

Use of published research:
- Haven't (11%)
- Small steps (37%)
- Creating systems (16%)
- Have systems to routinely do this (37%)

Use of internal agency data:
- Haven't (7%)
- Small steps (31%)
- Creating systems (22%)
- Have systems to routinely do this (40%)

Survey: evidence used

Use of the following:

| Evidence source | Little/none | Somewhat | Substantial | Extensive |
| Published external research | 18% | 37% | 34% | 11% |
| Data on client/stakeholder preferences | 17% | 28% | 42% | 14% |
| Professional experience/expertise | 7% | 17% | 50% | 27% |
| Data on local trends/needs | 13% | 31% | 40% | 16% |
| Performance data on program operation/outcome | 7% | 21% | 40% | 33% |

Survey: reasons to use evidence

| Goal | Unimportant | Neutral | Somewhat important | Very important |
| Improve youth outcomes | 5% | 1% | 11% | 83% |
| Demonstrate impact to funders | 4% | 3% | 14% | 78% |
| Prevent negative outcomes for youth | 7% | 8% | 26% | 60% |
| Focus resources on effective areas | 4% | 4% | 33% | 59% |
| Design new programs for specific populations | 6% | 8% | 38% | 49% |
| Guide decisions of board and stakeholders | 6% | 6% | 41% | 47% |

Survey: Greatest needs (rated very important)

1. How to analyze data and report findings (59%)
2. Step-by-step guide for EB decision making (56%)
3. How to find/select measures/metrics (56%)
4. How to find, read, use existing research (54%)
5. Description of different types of evidence (42%)
6. How to collaborate with researchers/colleagues on EB decision making (38%)
7. Glossary of common research terms (37%)
8. Guidance on ethical issues with research (36%)
9. Description of scientific method and applications (29%)

Research principles and statistical analysis

Briefly….

Classic questions

What works for whom under what circumstances? Why?
- Does the program work better/differently for certain types of mentees (age, gender, race, stress, aptitude)?
- Does the program work better/differently in certain settings (community, school, etc.)?
- Does the program work better/differently with certain types of volunteers (age, gender, occupation, personality)?
- What are the essential processes that yield the results?

Research paradigms

Qualitative approach: complex data, smaller samples

Strengths:
- Reflects complexity of experience
- Captures contexts and processes
- Good for discovering what is happening
- Good for within-system view

Limitations:
- Subjective interpretations
- Translating results to others

Quantitative approach: defined data, larger samples

Strengths:
- Reflects clear definitions and theories
- Captures relations between variables
- Good for demonstrating what is happening
- Good for comparisons

Limitations:
- Less nuanced
- Sampling biases

Research Process

Introduction: framing the issue; motivation, rationale; theory/model; research questions
Method: design, sample, procedures, measures, analysis plan
Results: data, findings
Discussion: conclusions/insights, interpretations, limitations, next steps

Outcomes

Criteria for selecting "outcomes":
- Outcome can reasonably be expected to change during the period, given the intensity of the intervention
  - Proximal (intervening) vs. distal (ultimate)
  - Number of other uncontrolled factors
  - Portion of variance explained
- Outcome is measurable and assessment is sensitive enough to detect likely change
  - Clearly and narrowly defined
  - Reliable measure
  - Valid measure

Measures

[Figure: target-style scatter patterns illustrating validity and reliability of measures]

Sampling

- Defining population of interest
- Representative members
- Random sampling (vs. assignment): each individual has an equal chance of being selected
- Population parameter vs. sample statistic: inferences apply to the population

Sampling distribution:
- Unlikely to get a sample statistic exactly equal to the population parameter (sampling error)
- Imagine a hypothetical sampling distribution if you took multiple samples from the population and plotted all the sample statistics

Central Limit Theorem: If a population has a mean of μ and a standard deviation of σ, then the sampling distribution of the mean based on samples of size N will have a mean of μ and a standard deviation of σ/√N, and will approach a normal distribution as N becomes larger (regardless of the population distribution).
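A quick simulation can make the theorem concrete. This is a minimal sketch in Python with NumPy (not from the slides; the exponential population, sample size, and sample count are assumptions chosen for illustration): even though the population is skewed, the means of repeated samples center on μ with spread close to σ/√N.

```python
import numpy as np

rng = np.random.default_rng(42)

# A deliberately skewed (non-normal) population: exponential distribution.
scale = 2.0                   # for an exponential, mu = sigma = scale
mu, sigma = scale, scale
N = 50                        # size of each sample
num_samples = 10_000          # number of samples drawn from the population

# Draw many samples and record each sample's mean.
samples = rng.exponential(scale=scale, size=(num_samples, N))
sample_means = samples.mean(axis=1)

# CLT prediction: sample means center on mu with spread sigma / sqrt(N),
# and their distribution is close to normal despite the skewed population.
print(f"mean of sample means: {sample_means.mean():.3f} (predicted {mu:.3f})")
print(f"std of sample means:  {sample_means.std():.3f} (predicted {sigma / np.sqrt(N):.3f})")
```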

Evaluation Design Logic

We can determine the true effect of a program (or experience) if we compare what happens to an individual who is in the program versus what would have happened to that individual if he or she were not in the program (impossible in one lifetime)

Problem: we always lack the ideal counterfactual (the outcome in the "what if" situation)

Missing data solution: compare participants to non-participants who are as similar as possible in every way except for having or not having the intervention

Evaluation Design

How do we get a comparison group?
- Experimental design (optimal): the researcher controls exposure to the intervention through random assignment
- Quasi-experimental designs: the researcher tries to get a non-participant comparison group that is as equal/similar as possible

Random assignment means everyone has an equal chance of being in the program. Imagine one dimension is motivation to succeed: we would have an equal distribution of low, middle, and high motivation among participants and the "control group." With random assignment, this would be true on EVERY dimension (observed and unobserved).

Without random assignment, the groups may differ on important dimensions. For example, program participants may have higher motivation to succeed than non-participants—that's why they signed up.
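As a minimal illustration (Python with NumPy; the trait, pool size, and sign-up model are invented for this sketch, not from the slides), randomly assigning a pool of people balances even an unmeasured trait like motivation across groups, while self-selection does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved trait: motivation to succeed (higher = more motivated).
motivation = rng.normal(loc=0.0, scale=1.0, size=n)

# Random assignment: a coin flip decides program vs. control.
assigned = rng.random(n) < 0.5
print("random assignment:")
print(f"  program mean motivation: {motivation[assigned].mean():+.3f}")
print(f"  control mean motivation: {motivation[~assigned].mean():+.3f}")

# Self-selection: more motivated people are more likely to sign up.
p_signup = 1.0 / (1.0 + np.exp(-motivation))   # logistic in motivation
signed_up = rng.random(n) < p_signup
print("self-selection:")
print(f"  participant mean motivation:     {motivation[signed_up].mean():+.3f}")
print(f"  non-participant mean motivation: {motivation[~signed_up].mean():+.3f}")
```

Under random assignment the two group means come out nearly identical; under self-selection the participant group is noticeably more motivated.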

Experimental design

| Design | Group | Pre-test | Post-test |
| Post-test only (assume equivalence) | Program | — | Xp |
| Post-test only (assume equivalence) | Control | — | Xc |
| Pre-test/post-test | Program | X | Xp |
| Pre-test/post-test | Control | X | Xc |

Test of effect = mean(Xp) – mean(Xc)

[Figures: pre-test to post-test trend lines for program and control groups, contrasting an experimental comparison with a design that has no control group]

Comparing Group Means

1. State a null hypothesis (e.g., X1 – X2 = 0).
2. Create a sampling distribution for the difference between means.
3. Compare the observed difference between means to the null hypothesis.
4. If the difference is relatively small, it could be due to sampling error (p > .05, FAIL to reject the null).
5. If the difference is relatively large, it is unlikely to be due to sampling error (p < .05, REJECT the null), and we conclude an actual difference exists (see the sketch after this list).
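A minimal worked version of these steps (Python with SciPy; the simulated scores and the +0.4 effect size are assumptions for illustration) uses a two-sample t-test, which constructs the sampling distribution for the difference between means analytically:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated post-test scores: the program group has a true effect of +0.4.
program = rng.normal(loc=0.4, scale=1.0, size=100)   # Xp
control = rng.normal(loc=0.0, scale=1.0, size=100)   # Xc

# Observed test of effect: mean(Xp) - mean(Xc).
diff = program.mean() - control.mean()

# Two-sample t-test of the null hypothesis that the group means are equal.
t_stat, p_value = stats.ttest_ind(program, control)

print(f"difference in means: {diff:.3f}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("p < .05: REJECT the null; conclude an actual difference exists")
else:
    print("p > .05: FAIL to reject the null; difference may be sampling error")
```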

Errors

Conclusions are based on probabilities, so conclusions can be incorrect:
- Type I error: reject the null hypothesis when we shouldn't (a freaky sample)
- Type II error: fail to reject the null hypothesis when we should (don't detect an actual difference); reflects low statistical power, need a larger sample

A short simulation after this list shows how power grows with sample size.
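Here is a minimal power simulation (Python with SciPy; the true effect size, alpha level, and trial count are assumptions): with a modest true difference, small samples usually miss it (a Type II error), while larger samples detect it far more reliably.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect = 0.3     # assumed modest true difference between group means
alpha = 0.05          # conventional significance threshold
trials = 2_000        # simulated studies per sample size

def estimated_power(n: int) -> float:
    """Fraction of simulated studies that detect the true effect."""
    rejections = 0
    for _ in range(trials):
        program = rng.normal(true_effect, 1.0, size=n)
        control = rng.normal(0.0, 1.0, size=n)
        _, p = stats.ttest_ind(program, control)
        if p < alpha:
            rejections += 1
    return rejections / trials

for n in (20, 50, 100, 200):
    print(f"n = {n:3d} per group -> power ~ {estimated_power(n):.2f}")
```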

Observational studies

- No imposed difference between groups: naturalistic observation
- Can see how certain variables correspond:
  - Correlation
  - Regression
  - Multiple regression (can evaluate one factor while controlling for others; see the sketch after this list)
- Sampling distribution for each estimate (under the assumption of no association)
- Causal inference depends on several considerations (temporal order, ruling out other explanations, etc.)
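As an illustrative sketch (Python with statsmodels; the variable names and data are simulated, not the slides' dataset), multiple regression estimates the association between one predictor and an outcome while holding a confounding covariate constant:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200

# Simulated observational data: the outcome depends on mentoring hours and
# on a baseline score that also influences how much mentoring occurs.
baseline = rng.normal(0.0, 1.0, size=n)
hours = 0.5 * baseline + rng.normal(0.0, 1.0, size=n)      # confounded predictor
outcome = 0.4 * hours + 0.6 * baseline + rng.normal(0.0, 1.0, size=n)

# Multiple regression: evaluate 'hours' while controlling for 'baseline'.
X = sm.add_constant(np.column_stack([hours, baseline]))
model = sm.OLS(outcome, X).fit()
print(model.summary(xname=["const", "hours", "baseline"]))
```

The coefficient on hours comes out near the true 0.4; a simple regression of outcome on hours alone would overstate it because the confounding baseline is left uncontrolled.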

[Scatterplot: CTS_7 (y-axis) vs. PSI Total Stress, T7 (x-axis)]

Model Summary:
| Model | R | R Square | Adjusted R Square | Std. Error of the Estimate |
| 1 | .415 | .172 | .146 | .95353 |
Predictors: (Constant), bm_wsaf

Coefficients:
| Model 1 | B (unstandardized) | Std. Error | Beta (standardized) | t | Sig. |
| (Constant) | .083 | 1.455 | | .057 | .955 |
| bm_wsaf | .607 | .239 | .415 | 2.542 | .016 |
Dependent variable: fm_clow

Orienting frameworks

Central ideas….

Development

- Universal processes vs. individual differences
- Successive adaptations: past experience, current circumstances
- Continuity and change: change is possible at any time, but constrained by prior adaptation
- Diversity in process and outcome: equifinality, multifinality

Developmental adaptation

Source: Sroufe, L.A. (1997). Psychopathology as an outcome of development. Development & Psychopathology, 9, 251-268.

Mentoring relationships

What distinguishes relationships? (Laursen & Bukowski, 1997)
- Permanence: voluntary, kinship, committed
- Social power: resources, experience/knowledge, rank
- Gender: male-male, female-female, cross-gender

Relationship dimensions

| | Permanent (obligation) | Voluntary (mutual) |
| Unequal social power (vertical) | Parent | Mentor |
| Equal social power (horizontal) | Cousin | Friend |

Systemic model

[Diagram: systemic model linking Child, Mentor, Parent, and Worker within the Program/Agency]

Systemic model

Conceptual points:
- Wholeness and order: parts are interconnected and interdependent
- Hierarchical structure: composed of sub-systems with boundaries

Practical points:
- Intervention goes beyond the mentor-child relationship
- Caseworker, parent, and teacher contribute to the success or failure of the relationship
- Mentoring effects can be indirect, through multiple pathways of influence

Systemic model

Analytical uses (W = worker, M = mentor, C = child):
- Direct (M → C)
- Reciprocal (M ↔ C)
- Transitive (W → M, M → C)
- Parallel (W → M, W → C, M → C)
- Circular (C → W, W → M, M → C)

Developmental stages

Contemplation

Initiation

Growth & Maintenance

Decline & Dissolution

Redefinition

| Stage | Conceptual features | Program practices |
| Contemplation | Anticipating and preparing for relationship | Recruiting, screening, training |
| Initiation | Beginning relationship and becoming acquainted | Matching, making introductions |
| Growth and maintenance | Meeting regularly and establishing patterns of interaction | Supervising and supporting, ongoing training |
| Decline and dissolution | Addressing challenges to relationship or ending relationship | Supervising and supporting, facilitating closure |
| Redefinition | Negotiating terms of future contact or rejuvenating relationship | Facilitating closure, rematching |

Stage model

[Diagram: cycle of relationship stages]

Final thoughts

Human beings of all ages are happiest and able to deploy their talents to best advantage when they are confident that, standing behind them, there are one or more trusted persons who will come to their aid should difficulties arise. --John Bowlby
