5th Seminar: SITUATIONAL JUDGMENT TESTS

Announcement

5th Seminar
SITUATIONAL JUDGMENT TESTS

• Introduction
• Program
• Prof. Filip Lievens
• Prof. Fiona Patterson
• Mr. Stefan Meyer
• Prof. Julio Olea
• Poster abstracts
• Notes

The aim of the MAP Chair seminars is to promote and contribute to high-quality psychological measurement, especially in the field of organizations. For this fifth seminar, situational judgment tests have been chosen as the central theme.

26 June 2013
Faculty of Psychology
Universidad Autónoma de Madrid
PROGRAM

8:45
Collection of seminar materials.

9:15
Opening and welcome.
Ángela Loeches
Dean of the Faculty of Psychology, UAM.
José Miguel Mata
Director General of the Instituto de Ingeniería del Conocimiento.

9:30
Lecture.
Situational Judgment Tests: An introduction to theory, practice, and research.
Prof. Filip Lievens
Professor, Department of Personnel Management and Work and Organizational Psychology, Ghent University, Belgium.

10:40
Poster session.
Coffee.

11:40
Lecture.
Evaluations of situational judgment tests for high-stakes selection.
Prof. Fiona Patterson
Professor, Department of Psychology, University of Cambridge, and director of the Work Psychology Group.

12:20
Lecture.
Situational Judgment Tests in EPSO open competitions for the European Institutions from 2010 to present.
Mr. Stefan Meyer
Head of computer-based assessment at the European Personnel Selection Office (EPSO).

13:00
Lecture.
Measuring competencies with Cognitive Diagnosis Models: an application to a Situational Judgment Test.
Prof. Julio Olea
Full Professor of Psychology, Universidad Autónoma de Madrid, and co-director of the UAM-IIC Chair in Psychometric Models and Applications (MAP).

13:40
Discussion and close of the seminar.
Situational Judgment Tests: An introduction to theory, practice, and research.
— Prof. Filip Lievens
Department of Personnel Management and Work and Organizational Psychology, Ghent University, Belgium.

Situational Judgment Tests: An Introduction to Theory, Practice, & Research
Madrid, June 2013
Prof. Dr. Filip Lievens
Ghent University, Belgium
Department of Personnel Management and Work and Organizational Psychology
Dunantlaan 2, 9000 Ghent
[email protected]
http://users.ugent.be/~flievens/
Who am I?
• Professor: Ghent U. (Belgium)
• Visiting professor:
  • U. of Minnesota (USA)
  • Bowling Green State U. (USA)
  • U. of Zürich (Switzerland)
  • U. of Guelph (Canada)
  • Nanyang Technological U. (Singapore)
  • Singapore Management U. (Singapore)
  • U. of Valencia (Spain)
  • U. of Giessen (Germany)
  • U. of Stellenbosch (South Africa)

Who am I?
• Teaching at Ghent U. (Belgium)
  • HRM
• Research expertise
  • Situational judgment tests, high-stakes testing, assessment centers, web-based assessment, & employer branding
• Consultancy
  • Private, public, & military sector
  • Metaconsultant of international consultancy firms
Objectives & overview
• Theory
  • Basics of SJTs
  • History
  • Definition
• Practice
  • SJT development & SJT building blocks
• State-of-the-art of SJT research

Roles
• SJT developer
• SJT user
• SJT candidate
• Trainer / supervisor of SJT users
• SJT researcher
SJT Theory:
Basics of SJTs

SJT Basics
Written SJT item
Suppose you have to assess people on their level of customer service during selection. What do you do?
• Personality inventory: Agreeableness, Emotional Stability, & Conscientiousness (sign-based / dispositional approach)
• Interview
• Role-play (high-fidelity simulation) (sample-based / interactional approach)
• SJT (low-fidelity simulation) (sample-based / interactional approach)
SJT Basics
Written SJT item
You have an excellent long-term employee who is always visiting at length with customers. Although customer orientation is one of the key values in your company, other employees have complained that the employee is not doing her share of the administrative work. Pick the best response.
• Leave her alone since the customers seem to enjoy the chats.
• Tell her you need her to work more and talk less, and then give her assignments to be completed by specific times.
• Initiate formal discipline against her.
• Explain the difference between social chatter and service, and coach her on how to keep her conversations shorter.
• Discuss the issues with the employee only if her production is below standard.
SJT Basics
Written SJT item
You work as a waiter/waitress and a customer orders a dish that is not on the menu. You mention this politely but the customer gets very upset. What do you do?
• Suggest alternative main dishes and side orders that might be of interest
• Suggest alternative dishes that might be of interest to the customer
• Suggest only those alternative dishes that are very similar to the dish the customer wanted
• Tell the customer there is nothing you can do
SJT Basics
What Are SJTs?
• An applicant is presented with a situation & asked what he/she would do.
• SJT items are typically in a multiple-choice format: items have a stem and various item responses (response options).
SJT Basics
History of SJTs
• 1873: US Civil Service Examinations
  « A banking company asks protection for a certain device, as a trade-mark, which they propose to put upon their notes. What action would you take in the application? »
• 1905: Binet
  « When a person has offended you and comes to offer his apologies, what would you do? »
• 1941: Ansbacher
  « Your sports club is planning a trip to Berlin to attend the Germany-England football game, which will take place in 2 weeks. You have been entrusted with the preparations and entire management of the trip. What do you intend to do? »
SJT Basics
History of SJTs
• Early years
  • 1926: Judgment scale in the George Washington University Social Intelligence Test
  • Used in World War II
  • 1948: How Supervise? (Rosen, 1961)
  • 1960's: SJTs used at the U.S. Civil Service System
• Breakthrough years
  • Motowidlo reinvigorated interest: low-fidelity simulations.
  • Sternberg reinvigorated interest: tacit knowledge inventories.
SJT Basics
In 2013
• Popular in US & UK
  • Public & private sector
  • EPSO
• Emerging body of research
  • 2006: first SJT book (edited by Weekley & Ployhart)
• Increasing demand around the world
SJT Basics
Research on SJTs
[Chart: frequency of SJT publications in Web of Science per five-year period (<91, 91-95, 96-00, 01-05, 06-10), on a 0-60 scale.]
SJT Basics
What do SJTs measure?
• A measurement method that can be designed to measure a variety of constructs
• Procedural knowledge about costs & benefits of courses of action in a particular domain
• Going beyond cognitive ability
  • Mostly interpersonal & leadership competencies
SJT Basics
Theory of Prediction
[Diagram: both on the job and in selection, Person and Situation jointly produce Behavior, which produces Performance; on the person side, IQ, personality, experience, and socialization feed job-specific and general knowledge.]
SJT Basics
Comparisons with tests
• Parallels
  • Standardization
  • Automatic scoring
  • Performance test
  • Large group screening
  • Selecting-out
• Differences
  • Sample-based vs. sign-based
  • Contextualized vs. decontextualized
  • Multidimensional vs. unidimensional
  • Procedural knowledge vs. aptitude & declarative knowledge
  • Noncognitive competencies vs. cognitive competencies
SJT Basics
Comparisons with assessment centers
• Parallels
  • Behavioral consistency logic
  • Sample-based (vs. sign-based)
  • Psychological fidelity
  • Multidimensional: methods which measure multiple constructs
• Differences
  • Stimulus: standardized situation vs. life situation; low fidelity vs. high fidelity
  • Response: MC vs. open-ended; procedural knowledge vs. actual behavior
  • Scoring: built-in model vs. trained assessors; a priori vs. live observation/rating
  • Use: select out vs. select in; large vs. small groups
SJT Basics
Comparisons with behavior description interviews
• Parallels
  • Behavioral consistency logic
  • Sample-based (vs. sign-based)
  • Multidimensional: methods which measure multiple constructs
• Differences
  • Stimulus: standardized situation vs. situation reported by interviewee; past or future situation vs. past situation
  • Response: written response vs. oral response; MC vs. open-ended
  • Scoring: a priori vs. live observation & rating; built-in model vs. trained interviewers
  • Use: large vs. small groups
SJT Practice:
SJT Development

SJT Development
SJT vendors
• General
  • SHL / Previsor
  • A&DC
  • Kronos
  • Aon
  • …
• Specific
  • Work Skills First
  • Ergometrics
  • Van der Maesen/Koch
SJT Development
Stages in SJT Development
• Strategic SJT decisions
  • Determine purpose of SJT
  • Identify target population
• Content development of SJT
  • Generate critical incidents
  • Sort critical incidents
  • Turn selected critical incidents into item stems
  • Generate item responses
  • Edit item responses
• Instructions & scoring
  • Determine response instructions
  • Develop a scoring key
• Implementation
  • Continued use of SJTs
SJT Development
SJT Building Blocks
Purpose cuts across all of the following building blocks:
1. Item length
2. Item complexity
3. Item comprehensibility
4. Item contextualization
5. Item branching
6. Response & stimulus fidelity
7. Response instructions
8. Scoring key
9. Measurement level of scores
10. Constructs measured
SJT Development
SJT Purpose
• Selection & assessment
• Recruitment (realistic job preview)
• Training & evaluation (scenario-based)
• Promotion
SJT Development
1. Item Length
• Stems vary in the length of the situation presented.
  • Very short descriptions of the situation
    • E.g., your colleague is rude towards you.
  • Very detailed descriptions of the situation
    • E.g., Tacit Knowledge Inventory (Wagner & Sternberg, 1991)
SJT Development
Tacit Knowledge Inventory (Sternberg)
You are a company commander, and your battalion commander is the type of person who seems always to shoot the messenger: he does not like to be surprised by bad news, and he tends to take his anger out on the person who brought him the bad news. You want to build a positive, professional relationship with your battalion commander. What should you do?
___ Speak to your battalion commander about his behavior and share your perception of it.
___ Attempt to keep the battalion commander over-informed by telling him what is occurring in your unit on a regular basis (e.g., daily or every other day).
___ Speak to the sergeant major and see if she/he is willing to try to influence the battalion commander.
___ Keep the battalion commander informed only on important issues, but don't bring up issues you don't have to discuss with him.
___ When you bring a problem to your battalion commander, bring a solution at the same time.
___ Disregard the battalion commander's behavior: continue to bring him news as you normally would.
___ Tell your battalion commander all of the good news you can, but try to shield him from hearing the bad news.
___ Tell the battalion commander as little as possible; deal with problems on your own if at all possible.
SJT Development
2. Item Complexity
• Stems vary in the complexity of the situation presented.
  • Low complexity
    • One has difficulty with a new assignment and needs instructions.
  • High complexity
    • One has multiple supervisors who are not cooperating with each other, and who are providing conflicting instructions concerning which of your assignments has highest priority.
SJT Development
3. Item Comprehensibility
• Comprehensibility: it is more difficult to understand the meaning and import of some situations than other situations
  • cf. reading-level formulas, technical jargon
• Length, complexity, & comprehensibility of the situations are interrelated and probably drive the cognitive loading of items.
SJT Development
Example admission SJT item
Patient: So, this physiotherapy is really going to help me?
Physician: Absolutely, even though the first days it might still be painful.
Patient: Yes, I suppose it will take a while before it starts working.
Physician: That is why I am going to prescribe a painkiller. You should take 3 painkillers per day.
Patient: Do I really have to take them? I have already tried a few things. First, they didn't help me. And second, I'm actually opposed to taking any medication. I'd rather not take them. They are not good for my health.
What is the best way for you (as a physician) to react to this patient's refusal to take the prescribed medication?
a. Ask her if she knows something else to relieve the pain.
b. Give her the scientific evidence as to why painkillers will help.
c. Agree not to take them now but also stress the importance of the physiotherapy.
d. Tell her that, in her own interest, she will have to start changing her attitude.
SJT Development
Example advanced level SJT item
A 55 year old woman with ischaemic heart disease has smoked 20 cigarettes per day for 40 years. She requests nicotine replacement patches. She has had these previously but has been inconsistent in their use and has often continued to smoke while using the patches.
A. Emphasize the dangers of smoking but do not prescribe.
B. Enquire about the difficulties she has with stopping smoking and any previous problems with patches.
C. Insist on a period of abstinence before prescribing any further patches.
D. Prescribe another supply of patches and explain how they should be used.
E. Suggest that nicotine replacement therapy is not suitable for her but explore alternative therapies.
SJT Development
4. Item Contextualization
• Items for a job can be more specific.
  • Mention job-specific equipment, technical terms.
• Items for a family of jobs need to make sense for all the jobs to be covered by the test.
  • Entry-level vs. supervisory
  • Armed Forces, Coast Guard, Paramilitary Forces, & Security Forces
• Items for specific competencies needed across several jobs or job families to be covered by the SJT.
  • SJT taxonomy
Judgment at Work Survey for Customer Service
© Work Skills First, Inc.
• A customer is disrespectful and rude to you.
• A customer is very angry with you because the customer believes that the product received is not what the customer ordered.
  • Ask the customer to leave until he is calm.
  • Treat the customer as the customer treats you.
  • Try to solve the problem as quickly as possible to get the customer out of the store.
  • Offer to replace the customer's product with the correct one.
  • Ask a co-worker to deal with this problem.
• A customer is becoming impatient about waiting for service for a long time. Other customers are becoming equally anxious.
  • Apologize for the slow service on the part of your colleagues.
  • Offer the customers a discount.
  • Reassure the customer that the wait is necessary and address the needs they may have.
  • Try to find someone else to wait on the customer immediately.
  • Refer the customer to a competing organization that offers similar services.
  • Suggest that the customer consider leaving and coming back when you are not so busy.
Response scale: Extremely Ineffective - Ineffective - Average Effectiveness - Effective - Extremely Effective
SJT Development
5. Item Branching / Integration
• Unbranched/linear items
• Branched/nonlinear items
  • Present an overall situation followed by subordinate situations.
  • Subordinate stems are linked to the responses.
SJT Development
6. Stimulus & Response Fidelity
• Degree to which the format of the stimulus (item stem) and the response is consistent with how the situation is encountered in a work setting.
• Stimulus: written, oral, video, behavior
• Response: written, oral, video, behavior
[Diagram: stimulus-response combinations, e.g. written SJT, video-based SJT, situational interview, webcam SJT, assessment center.]
[Diagram: test formats ordered from high standardization / low fidelity to low standardization / high fidelity: written SJT, cartoon SJT, video-based SJT, multimedia SJT, webcam SJT, situational interview, role-play, group discussion.]
SJT Development
SJT generations
Written SJT:
You have an excellent long-term employee who is always visiting at length with customers. Although customer orientation is one of the key values in your company, other employees have complained that the employee is not doing her share of the administrative work. Pick the best response.
A. Leave her alone since the customers seem to enjoy the chats.
B. Tell her you need her to work more and talk less, and then give her assignments to be completed by specific times.
C. Initiate formal discipline against her.
D. Explain the difference between social chatter and service, and coach her on how to keep her conversations shorter.
E. Discuss the issues with the employee only if her production is below standard.
Later generations: Cartoon SJT, Video SJT, Webcam SJT, 3D animated SJT, Avatar-based SJT. (© Previsor)
SJT Development
7. Response Instructions
• Knowledge instructions ask respondents to display their knowledge of the effectiveness of behavioral responses
  • Best action: Pick the best answer.
  • Best action/worst action: Pick the best & worst answer.
  • Rate on effectiveness: Rate each response for effectiveness.
• Behavioral tendency instructions ask respondents to report how they typically respond
  • Most likely action: What would you most likely do?
  • Most likely action/least likely action: What would you most likely do? What would you least likely do?
  • Rate on the likelihood of performing the action: Rate each response on the likelihood that you would do the behavior.
SJT Development
7. Response instructions

                      Knowledge                     Behavioral tendency
Cognitive loading     More correlated with          Less correlated with
                      cognitive ability (.35)       cognitive ability (.19)
Personality loading   Less correlated with          More correlated with
                      A (.19), C (.24), ES (.12)    A (.37), C (.34), ES (.35)
Mean SJT scores       Higher                        Lower
Race differences      Larger                        Smaller
Fakability            Less                          More
Validity              ρ = .26                       ρ = .26
SJT Development
8. Scoring Key
• Rational (consensus among experts)
  • E.g., 60% agreement
• Empirical (administration to large groups)
  • E.g., 25% endorsement of the correct option
• Theoretical model
  • E.g., conflict management, leadership
• Hybrid
  • Combination of approaches
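The rational (expert-consensus) approach above can be sketched in a few lines of Python. This is an illustrative sketch, not any vendor's implementation; the function name and the 60% default threshold are assumptions taken from the slide's example:

```python
from collections import Counter

def rational_key(expert_picks, threshold=0.6):
    """Key an SJT item by expert consensus: return the response option
    chosen by the largest share of experts, provided that share meets
    the agreement threshold (e.g., 60%); otherwise return None."""
    option, count = Counter(expert_picks).most_common(1)[0]
    return option if count / len(expert_picks) >= threshold else None
```

With four of five experts picking option C, `rational_key(["C", "C", "C", "C", "D"])` returns `"C"`; with no consensus it returns `None`, and in practice such an item would be revised or dropped.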
SJT Development
8. Scoring Key
• Most use discrete points assigned to response options.
  • From a legal point of view, it makes some uneasy to try to defend the difference between very effective and effective.
    • Very effective = 1
    • Effective = 1
    • Ineffective = 0
    • Very ineffective = 0
• Others score responses as deviations from the mean effectiveness ratings (= the correct answer).
  • If the mean is 1.5, a respondent who provided a rating of 1 or 2 would both have a -.5 as a score on the item.
  • Zero is the highest possible score.
  • Prone to large coaching effects (Cullen et al., 2006).
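The two scoring schemes described above can be made concrete with a minimal Python sketch (function names are illustrative, not from the slides):

```python
def discrete_score(rating):
    """Discrete keying: collapse a 4-point effectiveness judgment into
    1/0 points, so 'very effective' and 'effective' earn equal credit."""
    return 1 if rating in ("very effective", "effective") else 0

def deviation_score(rating, expert_mean):
    """Deviation keying: score a numeric rating as its negative absolute
    distance from the expert mean; zero (an exact match) is the maximum."""
    return -abs(rating - expert_mean)
```

As in the slide's example, with an expert mean of 1.5, ratings of 1 and 2 both score -0.5.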
SJT Development
9. Measurement Level of Scores
• One dichotomous response per item
  • Most likely, pick the best
• Two dichotomous responses per item
  • Most/least likely, pick the best/worst
• Ranking the responses yields ordinal-level data.
• Rating the effectiveness yields interval-level data per item response.
SJT Development
10. Constructs measured
• Homogeneous
  • One construct
• Heterogeneous
  • Multiple constructs

Construct                Usage
Technical knowledge       2.94%
Interpersonal Skills     12.50%
Teamwork Skills           4.41%
Leadership               37.50%
Basic personality         9.56%
Heterogeneous            33.09%

Christian et al., 2010; N = 136 SJTs
SJT Research:
State-of-the-art
22—23
Research
SJT Quiz
Indicate whether each of the following statements is "true" or "false":
1. The predictive validity of situational judgment tests (SJTs) is approximately 0.3.
2. SJTs improve the prediction of performance beyond what can be achieved with cognitive ability and personality tests.
3. Video-based SJTs are better predictors of future performance than paper-and-pencil SJTs.
4. Men generally score higher than women on SJTs.
5. Only minor racial differences have been found on SJTs.
6. Racial differences on SJTs are smaller in video-based SJTs.
7. No differences have been found between computerized and traditional (paper-and-pencil) administration of SJTs.
8. When a factor analysis is carried out on SJT scores, an interpretable factor structure is usually found.
9. The test-retest reliability of SJTs is approximately 0.70.
10. It is possible to coach people to improve their performance on SJTs.
Research
Internal consistency
• Meta-analysis (Catano, Brochu & Lamerson, 2012)
  • Average alpha = .46 (N = 56)
  • Factor analysis: several factors & no clear structure
• Moderators (McDaniel et al., 2001)
  • Length of SJT
  • Type of scale
    • Rating scale (.73) > Two choices (.60) > One choice (.24)
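The test-length moderator is usually explained with the Spearman–Brown prophecy formula, which projects reliability when a test is lengthened by a factor k. A minimal sketch, using the meta-analytic average alpha of .46 cited above as input:

```python
# Spearman-Brown prophecy: projected reliability of a test lengthened
# by factor k, given current reliability r.

def spearman_brown(r, k):
    return (k * r) / (1 + (k - 1) * r)

# Doubling an SJT whose alpha is .46 projects a reliability of about .63.
print(round(spearman_brown(0.46, 2), 2))  # 0.63
```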
Research
Test-retest Reliability
• Adequate
  • Between .60 - .80
Research
Criterion-related Validity
• McDaniel, Morgeson, & Campion (2001)
  • ρ = .27 (corrected r = .34)
• McDaniel et al. (2007)
  • ρ = .20 (corrected r = .26)
• Validity for predicting relevant criteria (Christian et al., 2010)
• Equivalence of computer & paper-and-pencil SJTs (Ployhart et al., 2003)
• Moderators: McDaniel, Morgeson, & Campion (2001)
  • Job analysis (.38 vs. .29)
  • Less detailed questions (.35 vs. .33)
  • Video-based (Christian et al., 2010)
Research
Incremental Validity (McDaniel et al., 2007)
• Incremental validity over
  • Cognitive ability (3%-5%)
  • Personality (6%-7%)
  • Cognitive ability & personality (1%-2%)
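For the two-predictor case, incremental variance can be illustrated directly from zero-order correlations via the standard two-predictor multiple-correlation identity. The correlation values below are hypothetical, chosen only to land near the reported range:

```python
# Incremental R^2 of adding predictor 2 (e.g., an SJT) over predictor 1
# (e.g., cognitive ability), computed from zero-order correlations.
# Inputs are hypothetical illustration values, not meta-analytic estimates.

def incremental_r2(r_y1, r_y2, r_12):
    r2_full = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return r2_full - r_y1**2

print(round(incremental_r2(0.5, 0.3, 0.3), 3))  # 0.025, i.e. ~2.5% extra variance
```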
[Figure: corrected correlations with performance (scale −0.10 to 0.60) for cognitive tests versus a video-based interpersonal SJT. Note. Correlations corrected for range restriction and unreliability in criterion.]
24—25
Research
Gender Differences
• Females > males (d = .10)
• Explanation
  • Not related to cognitive loading & response instructions
  • Interpersonal nature of situations?
  • Cf. assessment center/work sample research
Research
Race differences
• d = .38 (N = 36,355; K = 39; Nguyen et al., 2005)
• Correlation of SJT with cognitive ability drives racial differences found.
• Moderators
  • Reading component
  • Presentation format (written vs. video)
  • Response instructions
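The d values reported in these slides are standardized mean differences (Cohen's d). A minimal sketch with hypothetical group data:

```python
# Cohen's d: standardized mean difference between two groups,
# using the pooled standard deviation. Data below are hypothetical.
import statistics

def cohens_d(group_a, group_b):
    """(mean_a - mean_b) / pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

print(cohens_d([3, 4, 5], [2, 3, 4]))  # 1.0
```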
Research
Faking, Coaching, & Retest effects
• Hooper, Cullen, & Sackett (2006)
  • Smaller than personality inventories
  • d: .08 - .89
  • Moderators
    • Cognitive loading of items
    • Transparency of items
    • Response instructions
    • Type of study design
• Retest effects (SJTs) < retest effects (cognitive ability tests)
  (Dunlop et al., 2011; Lievens, Buyse, & Sackett, 2005)
Research
Applicant Perceptions
• Meta-analysis (Hausknecht et al., 2005)
  • More favorable reactions towards job-related selection procedures
• Moderators
  • Video SJTs (Chan & Schmitt, 1997)
  • Non-linear SJTs (Kanning et al., 2006)
  • Multimedia SJTs (Richman-Hirsch et al., 2000)
Epilogue

Future & Advanced Topics
• Technologically-enhanced SJTs
• Construct-oriented SJTs
• Trait Activation Theory
• Cross-cultural issues
• Norming & item banking
• Alternate form development
26—27
Epilogue
A Famous Quote

"[SJTs] seem to represent psychometric alchemy (adverse impact is down, validity is up), they seem to assess practically important KSAOs, and assessees like them."
— Landy, 2007, p. 418
Thank you
More info on
http://users.ugent.be/~flievens
USEFUL REFERENCES
• Becker, T.E. (2005). Development and validation of a situational judgment test of employee integrity. International Journal of Selection and Assessment, 13, 225-232.
• Bergman, M.E., Drasgow, F., Donovan, M.A., & Henning, J.B. (2006). Scoring situational judgment tests: Once you get the data, your troubles begin. International Journal of Selection and Assessment, 14, 223-235.
• Bledow, R., & Frese, M. (2009). A situational judgment test of personal initiative and its relationship to performance. Personnel Psychology, 62, 229-258.
• Bruce, M.M., & Learner, D.B. (1958). A supervisory practices test. Personnel Psychology, 11, 207-216.
• Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82, 143-159.
• Chan, D., & Schmitt, N. (2002). Situational judgment and job performance. Human Performance, 15, 233-254.
• Christian, M.S., Edwards, B.D., & Bradley, J.C. (2010). Situational judgement tests: Constructs assessed and a meta-analysis of their criterion-related validities. Personnel Psychology, 63, 83-117.
• Clause, C.C., Mullins, M.E., Nee, M.T., Pulakos, E.D., & Schmitt, N. (1998). Parallel test form development: A procedure for alternative predictors and an example. Personnel Psychology, 51, 193-208.
• Clevenger, J., Pereira, G.M., Wiechmann, D., Schmitt, N., & Schmidt-Harvey, V.S. (2001). Incremental validity of situational judgment tests. Journal of Applied Psychology, 86, 410-417.
• Cullen, M.J., Sackett, P.R., & Lievens, F. (2006). Threats to the operational use of situational judgment tests in the college admission process. International Journal of Selection and Assessment, 14, 142-155.
• Dalessio, A.T. (1994). Predicting insurance agent turnover using a video-based situational judgment test. Journal of Business and Psychology, 9, 23-32.
• Funke, U., & Schuler, H. (1998). Validity of stimulus and response components in a video test of social competence. International Journal of Selection and Assessment, 6, 115-123.
• Hooper, A.C., Cullen, M.J., & Sackett, P.R. (2006). Operational threats to the use of SJTs: Faking, coaching, and retesting issues. In J.A. Weekley & R.E. Ployhart (Eds.), Situational judgment tests: Theory, measurement and application (pp. 205-232). Mahwah, NJ: Lawrence Erlbaum Associates.
• Hunter, D.R. (2003). Measuring general aviation pilot judgment using a situational judgment technique. International Journal of Aviation Psychology, 13, 373-386.
• Kanning, U.P., Grewe, K., Hollenberg, S., & Hadouch, M. (2006). From the subjects' point of view – Reactions to different types of situational judgment items. European Journal of Psychological Assessment, 22, 168-176.
• Konradt, U., Hertel, G., & Joder, K. (2003). Web-based assessment of call center agents: Development and validation of a computerized instrument. International Journal of Selection and Assessment, 11, 184-193.
• Lievens, F. (2006). International situational judgment tests. In J.A. Weekley & R.E. Ployhart (Eds.), Situational judgment tests: Theory, measurement and application (pp. 279-300). SIOP Frontier Series. Mahwah, NJ: Lawrence Erlbaum Associates.
• Lievens, F., Buyse, T., & Sackett, P.R. (2005a). The operational validity of a video-based situational judgment test for medical college admissions: Illustrating the importance of matching predictor and criterion construct domains. Journal of Applied Psychology, 90, 442-452.
• Lievens, F., Buyse, T., & Sackett, P.R. (2005b). Retest effects in operational selection settings: Development and test of a framework. Personnel Psychology, 58, 981-1007.
• Lievens, F., Peeters, H., & Schollaert, E. (2008). Situational judgment tests: A review of recent research. Personnel Review, 37, 426-441.
• Lievens, F., & Sackett, P.R. (2006). Video-based versus written situational judgment tests: A comparison in terms of predictive validity. Journal of Applied Psychology, 91, 1181-1188.
• Lievens, F., & Sackett, P.R. (2007). Situational judgment tests in high stakes settings: Issues and strategies with generating alternate forms. Journal of Applied Psychology, 92, 1043-1055.
• Lievens, F., Sackett, P.R., & Buyse, T. (2009). The effects of response instructions on situational judgment test performance and validity in a high-stakes context. Journal of Applied Psychology, 94, 1095-1101.
• McClough, A.C., & Rogelberg, S.G. (2003). Selection in teams: An exploration of the Teamwork Knowledge, Skills, and Ability test. International Journal of Selection and Assessment, 11, 56-66.
• McDaniel, M.A., Hartman, N.S., Whetzel, D.L., & Grubb, W.L. (2007). Situational judgment tests, response instructions, and validity: A meta-analysis. Personnel Psychology, 60, 63-91.
• McDaniel, M.A., Morgeson, F.P., Finnegan, E.B., Campion, M.A., & Braverman, E.P. (2001). Predicting job performance using situational judgment tests: A clarification of the literature. Journal of Applied Psychology, 86, 730-740.
• McDaniel, M.A., & Motowidlo, S. (2005). Situational Judgment Tests. Workshop presented at SIOP.
• McDaniel, M.A., & Nguyen, N.T. (2001). Situational judgment tests: A review of practice and constructs assessed. International Journal of Selection and Assessment, 9, 103-113.
• McDaniel, M.A., & Whetzel, D.L. (2005). Situational judgment test research: Informing the debate on practical intelligence theory. Intelligence, 33, 515-525.
• McHenry, J.J., & Schmitt, N. (1994). Multimedia testing. In M.G. Rumsey & C.B. Walker (Eds.), Personnel selection and classification (pp. 193-232). Hillsdale, NJ: Erlbaum.
• Morgeson, F.P., Reider, M.H., & Campion, M.A. (2005). Selecting individuals in team settings: The importance of social skills, personality characteristics, and teamwork knowledge. Personnel Psychology, 58, 583-611.
• Motowidlo, S., Dunnette, M.D., & Carter, G.W. (1990). An alternative selection procedure: The low-fidelity simulation. Journal of Applied Psychology, 75, 640-647.
• Motowidlo, S.J., & Beier, M.E. (2010). Differentiating specific job knowledge from implicit trait policies in procedural knowledge measured by a situational judgment test. Journal of Applied Psychology, 95, 321-333.
• Motowidlo, S.J., Crook, A.E., Kell, H.J., & Naemi, B. (2009). Measuring procedural knowledge more simply with a single-response situational judgment test. Journal of Business and Psychology, 24, 281-288.
• Motowidlo, S.J., Hooper, A.C., & Jackson, H.L. (2006). Implicit policies about relations between personality traits and behavioral effectiveness in situational judgment items. Journal of Applied Psychology, 91, 749-761.
• Nguyen, N.T., Biderman, M.D., & McDaniel, M.A. (2005). Effects of response instructions on faking a situational judgment test. International Journal of Selection and Assessment, 13, 250-260.
• Nguyen, N.T., McDaniel, M.A., & Whetzel, D.L. (2005, April). Subgroup differences in situational judgment test performance: A meta-analysis. Paper presented at the 20th Annual Conference of the Society for Industrial and Organizational Psychology, Los Angeles, CA.
• O'Connell, M.S., Hartman, N.S., McDaniel, M.A., Grubb, W.L., & Lawrence, A. (2007). Incremental validity of situational judgment tests for task and contextual job performance. International Journal of Selection and Assessment, 15, 19-29.
• Olson-Buchanan, J.B., & Drasgow, F. (2006). Multimedia situational judgment tests: The medium creates the message. In J.A. Weekley & R.E. Ployhart (Eds.), Situational Judgment Tests (pp. 253-278). San Francisco: Jossey-Bass.
• Olson-Buchanan, J.B., Drasgow, F., Moberg, P.J., Mead, A.D., Keenan, P.A., & Donovan, M.A. (1998). Interactive video assessment of conflict resolution skills. Personnel Psychology, 51, 1-24.
• Oswald, F.L., Friede, A.J., Schmitt, N., Kim, B.K., & Ramsay, L.J. (2005). Extending a practical method for developing alternate test forms using independent sets of items. Organizational Research Methods, 8, 149-164.
• Oswald, F.L., Schmitt, N., Kim, B.H., Ramsay, L.J., & Gillespie, M.A. (2004). Developing a biodata measure and situational judgment inventory as predictors of college student performance. Journal of Applied Psychology, 89, 187-208.
• Peeters, H., & Lievens, F. (2005). Situational judgment tests and their predictiveness of college students' success: The influence of faking. Educational and Psychological Measurement, 65, 70-89.
• Ployhart, R.E., & Ehrhart, M.G. (2003). Be careful what you ask for: Effects of response instructions on the construct validity and reliability of situational judgment tests. International Journal of Selection and Assessment, 11, 1-16.
• Ployhart, R.E., Weekley, J.A., Holtz, B.C., & Kemp, C.F. (2003). Web-based and paper-and-pencil testing of applicants in a proctored setting: Are personality, biodata, and situational judgment tests comparable? Personnel Psychology, 56, 733-752.
• Schmitt, N., & Chan, D. (2006). Situational judgment tests: Method or construct? In J.A. Weekley & R.E. Ployhart (Eds.), Situational judgment tests (pp. 135-156). Mahwah, NJ: Lawrence Erlbaum Associates.
• Schmitt, N., Keeney, J., Oswald, F.L., Pleskac, T., Quinn, A., Sinha, R., & Zorzie, M. (2009). Prediction of 4-year college student performance using cognitive and noncognitive predictors and the impact of demographic status on admitted students. Journal of Applied Psychology, 94, 1479-1497.
• Smiderle, D., Perry, B.A., & Cronshaw, S.F. (1994). Evaluation of video-based assessment in transit operator selection. Journal of Business and Psychology, 9, 3-22.
• Stevens, M.J., & Campion, M.A. (1999). Staffing work teams: Development and validation of a selection test for teamwork. Journal of Management, 25, 207-228.
• Such, M.J., & Schmidt, D.B. (2004, April). Examining the effectiveness of empirical keying: A cross-cultural perspective. Paper presented at the 19th Annual Conference of the Society for Industrial and Organizational Psychology, Chicago, IL.
• Wagner, R.K., & Sternberg, R.J. (1985). Practical intelligence in real world pursuits: The role of tacit knowledge. Journal of Personality and Social Psychology, 49, 436-458.
• Weekley, J.A., & Jones, C. (1997). Video-based situational testing. Personnel Psychology, 50, 25-49.
• Weekley, J.A., & Jones, C. (1999). Further studies of situational tests. Personnel Psychology, 52, 679-700.
• Weekley, J.A., & Ployhart, R.E. (2006). Situational judgment tests: Theory, measurement and application. San Francisco: Jossey-Bass.
• Whetzel, D.L., & McDaniel, M.A. (2009). Situational judgment tests: An overview of current research. Human Resource Management Review, 19, 188-202.
30—31
Evaluations of
situational judgments
tests for high stakes
selection.
— Prof. Fiona Patterson
Department of Psychology, University of Cambridge, United Kingdom, and Director of the Work Psychology Group.
Evaluations of situational judgement tests
for high stakes selection:
Research & practice in the healthcare
professions
Professor Fiona Patterson
Work Psychology Group & University of Cambridge
Madrid, June 2013
Overview
• Selecting doctors in the UK National Health Service
• Selection methods
– Low fidelity: Knowledge test, situational judgement test (SJT)
– High fidelity: Assessment Centre (AC)
• Predictive validity studies
• Low fidelity simulations are cost-effective to develop,
administer & score, but how do they compare to high fidelity
ACs?
• Current research & implications for policy & practice
UK Medical Training & Career Pathway
[Diagram: the pathway runs from undergraduate medical school training, through Foundation Training (FT1, FT2; 2 years) and Specialty Training (GP Specialty Training, 3 years, or Core and Higher Specialty Training; up to 8 years), to CCT and eligibility for the Consultant Grade, with Selection Gateways 1-3 at the transitions between stages.]
32—33
Selection in medicine through the ages…
• 1970: "Work for me, son – I knew your father."
• 1980: "Fill out the application form for HR and the job is yours, mate."
• 1990: "It isn't an interview – just an informal chat, sweetie. Just a formality."

Help!
• 26,000 applicants for 8,000 medical school places
• 8,000 medical students apply for their first post
• 10,000 speciality applicants
• 27,000+ interviews
• Weeks of offering, rejecting, cascading
• 1000s of Consultant hours
Key research questions
What non-cognitive attributes are important to be
an effective clinician?
What methods are available to test these in high
stakes selection?
Should we use personality testing?
Given the costs, beyond some basic assessment,
is a lottery the best option?
Why not use a lottery system?
GP Specialty selection (selection gateway 3)
N= 6,500 applicants per year for 3,000 training posts
34—35
Job analysis of the GP role
• GP selection system is based on
multi-source, multi-method job analysis
Patterson et al, BJGP, 2000; Patterson et al, BJGP, 2013
• Empathy & sensitivity
• Communication skills
• Problem-solving
• Professional integrity
• Coping with pressure
• Clinical expertise
British Journal of General Practice, May 2013
Previously…
[Diagram: previous process – longlisting and shortlisting from an electronic application form, followed by national and regional interview panels.]

GP Selection
[Diagram: current national process – electronic application (longlisting, foundation competency check by a national panel), national selection tests (Clinical Problem Solving test, CPS; Situational Judgement Test, SJT), a regional Assessment Centre (AC) using simulated consultations, then ranking and matching to region.]
Clinical Problem Solving (CPS) Knowledge Test
• Applying clinical knowledge in relevant contexts, e.g. diagnosis, management
• Item type single best answer/ multiple choice
• Operational test: 100 items, duration 90 minutes
Reduced Vision
A. Basilar migraine
B. Cerebral tumour
C. Cranial arteritis
D. Macular degeneration
E. Central retinal artery occlusion
F. Central retinal vein occlusion
G. Optic neuritis (demyelinating)
H. Retinal detachment
I. Tobacco optic neuropathy
For each patient below select the SINGLE most likely diagnosis from the list above. Each option
may be selected once, more than once or not at all.
1. A 75 year old man, who is a heavy smoker, with a blood pressure of 170/105, complains of
floaters in the left eye for many months and flashing lights in bright sunlight. He has now
noticed a "curtain" across his vision.
Situational Judgement Test (SJT)
• Professional dilemmas focusing on non-cognitive attributes in
complex interpersonal scenarios (empathy, integrity, coping with
pressure, teamworking)
• Item writing by subject matter experts, concordance panel to agree
scoring key
• Cognitively-oriented (‘What should you do?’)
• Item types: rank 4/5 options, choose best 2/3
• Operational tests comprise 50 items, in 90 minutes
Example SJT item
You are reviewing a routine drug chart for a patient with
rheumatoid arthritis during an overnight shift. You notice that
your consultant has inappropriately prescribed methotrexate
7.5mg daily instead of weekly.
Rank in order the following actions in response to this situation (1= Most
appropriate; 5= Least appropriate)
A Ask the nurses if the consultant has made any other drug errors
recently
B Correct the prescription to 7.5mg weekly
C Leave the prescription unchanged until the consultant ward round
the following morning
D Phone the consultant at home to ask about changing the prescription
E Inform the patient of the error
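Ranking items like this one are scored against the concordance panel's key. The rule below, which awards more points the closer each option sits to its keyed rank, is a hypothetical illustration, not the operational GP scoring key:

```python
# Hypothetical scoring sketch for a rank-ordering SJT item: candidates
# earn more points the closer each option's rank is to the keyed rank.

def score_ranking(candidate, key, max_points_per_option=4):
    """candidate/key: dicts mapping option letter -> rank (1 = most appropriate)."""
    return sum(
        max_points_per_option - abs(candidate[opt] - key[opt])
        for opt in key
    )

key = {"A": 5, "B": 2, "C": 4, "D": 1, "E": 3}  # hypothetical key
print(score_ranking(key, key))  # 20: a perfect match (5 options x 4 points)
```

A candidate who swaps two adjacent options loses one point per misplaced option, so near-miss rankings still earn most of the credit.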
36—37
Predictive Validity Results
GP Selection: Validation studies
Study 1.
Supervisor ratings after 1 year into training
Study 2.
End-of-training outcomes (licensure exam)
after 3 years
Study 3.
Structural (theoretical) model to evaluate the
incremental validity for each selection method
Study 1: Correlations between the selection methods & job performance after 1 year
N = 196

                                    Mean     SD      1.         2.         3.
Selection methods (Predictors)
1. Clinical problem-solving test    78.88    9.02
2. Situational judgement test      637.87   34.31    .50
3. Assessment centre                 3.32    0.39    .30        .43
Outcome variable
4. Supervisor ratings                4.63    0.73    .36 (.54)  .37 (.56)  .30 (.50)

Note. Correlations in parentheses were corrected for multivariate range restriction. All correlations are significant at p < .01.
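These studies applied a multivariate range-restriction correction, but the mechanics are easiest to see in the univariate Thorndike Case II formula. The restriction ratio u = 1.6 below is a hypothetical value, chosen so that correcting the observed CPS-supervisor correlation of .36 lands close to the reported corrected value:

```python
# Univariate Thorndike "Case II" correction for direct range restriction.
# u is the ratio of the applicant-pool SD to the restricted (selected) SD;
# u = 1.6 here is a hypothetical illustration value.
import math

def correct_for_range_restriction(r, u):
    return (r * u) / math.sqrt(1 - r**2 + (r**2) * (u**2))

print(round(correct_for_range_restriction(0.36, 1.6), 2))  # 0.53
```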
Study 2: Correlations between the selection methods & end-of-training licensure assessments
N = 2,292

                                    Mean     SD      1.         2.         3.         4.
Selection methods (Predictors)
1. Clinical problem-solving test    80.08    8.14
2. Situational judgement test      640.13   31.66    .40
3. Assessment centre                 3.34    0.36    .24        .32
Outcome variables
4. End-of-training applied
   knowledge test                    0.26    0.20    .73 (.85)  .43 (.69)  .24 (.41)
5. End-of-training clinical skills
   exam (simulated surgery)          0.90    0.80    .38 (.55)  .43 (.57)  .32 (.41)  .41

Note. Correlations in parentheses were corrected for multivariate range restriction. All correlations are significant at p < .01.
Study 3: Structural/theoretical model showing selection methods & their link to job performance
Lievens & Patterson (2011)
• The SJT fully mediates the effects of declarative knowledge on job performance; the AC partially mediates the effects of the SJT.
• Significant incremental validity offered by the AC.

Theoretical Model Underlying SJTs in GP Selection
[Diagram: general experience feeds implicit trait policies, and specific job experience feeds specific job knowledge; together these form the procedural knowledge captured by the SJT, which in turn predicts job/training performance.]
38—39
Current Research into Medical & Dental
School Admissions
UK Clinical Aptitude Test (UKCAT)
5 subtests
• Verbal, numerical, abstract reasoning
& decision analysis
• Non-cognitive analysis using an SJT targeting empathy, integrity & team
involvement
http://www.ukcat.ac.uk/
SJT Test Specification
Content
• Scenarios based in either a clinical setting or during
education/training for a medical/dental career
• Third party perspective to increase breadth of available scenarios
Response Format (rating using a 4 point scale)
• Rate the appropriateness of a series of options from 'very appropriate' to 'very inappropriate'
• Rate the importance of a series of options from ‘very important’ to ‘not important at all’
Example UKCAT SJT items
A consultation is taking place between a senior doctor and a patient; a medical
student is observing. The senior doctor tells the patient that he requires some
blood tests to rule out a terminal disease. The senior doctor is called away
urgently, leaving the medical student alone with the patient. The patient tells the
student that he is worried he is going to die and asks the student what the blood
tests will show.
How appropriate are each of the following responses by the medical student in
this situation?
Q1 Explain to the patient that he is unable to comment on what the tests will
show as he is a medical student
Q2 Acknowledge the patient’s concerns and ask whether he would like them to be raised with the senior doctor
Q3 Suggest to the patient that he poses these questions to the senior doctor
when he returns
Q4 Tell the patient that he should not worry and that it is unlikely that he will die
UKCAT SJT Evaluation
• Reliability of a 70 item test with similar quality items estimated
(α=.75 to .85)
• Candidate reactions show high face validity (significantly more
than the cognitive subsections of UKCAT)
• Content of SJT relevant for med/dental applicants = 70%
• Content of the SJT is fair to med/dental applicants = 63%
Group differences & content validity
• Gender: Females outperform males (0.2 SD)
• Ethnicity: White candidates performed better (0.3SD)
• Occupation & Employment Status: those in the higher occupational classes (i.e. Managerial/Professional Occupations) do not always score higher than those in lower classes; in some cases those from the lowest occupational groups received the highest mean score.
• The SJT correlates with the other subtests (approx. r = 0.28), indicating some shared variance between the tests. Since a large amount of variance remains unexplained, the SJT is assessing different constructs from the other tests.
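The shared-variance claim is just the squared correlation:

```python
# The quoted overlap: r = 0.28 between the SJT and the cognitive subtests
# corresponds to r**2, roughly 8% shared variance, leaving ~92% unexplained.
r = 0.28
shared_variance = r ** 2
print(round(shared_variance, 3))  # 0.078
```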
40—41
Implications for policy & practice
• Strong predictive validity translates into significant gains in utility
• Focus not on how much validity a selection method adds, but on 'valid for what'
• Closer attention to the criterion constructs targeted as high-fidelity
simulations show incremental validity over low-fidelity simulations for
predicting interpersonal job performance dimensions
• Bespoke job analysis is the cornerstone to effective selection
• Positive candidate reactions (Patterson et al, 2010)
• Political validity is an important concept in this setting (Patterson et al,
2012)
Thank you
[email protected]
Some useful links
http://www.workpsychologygroup.com/
http://www.gprecruitment.org.uk/
http://www.isfp.org.uk/
http://www.ukcat.ac.uk/about-the-test/behavioural-test/
http://www.agpt.com.au/
http://www.gmc-uk.org/about/research/14400.asp
Situational Judgment
Test in EPSO open
competitions for the
European Institutions
from 2010 to present.
— Mr. Stefan Meyer
EPSO (European Personnel
Selection Office).
Situational Judgment Tests
in EPSO open competitions
for the European Institutions
Stefan Meyer, European Personnel Selection Office
Madrid, 26 June 2013
The European Union: its institutions and bodies include the Commission,
the Council, the Parliament, the Court of Justice, the Court of Auditors,
the Economic & Social Committee, the Committee of the Regions, the
Ombudsman and the Data Protection Supervisor.
The EU Institutions: posts in 2012 (share of total in parentheses)
European Commission: 25,478 (62%)
European Parliament: 6,655 (16%)
Council of the European Union: 3,153 (8%)
Court of Justice: 1,952 (5%)
European External Action Service: 1,670 (4%)
European Court of Auditors: 887 (2%)
European Economic and Social Committee: 724
Committee of the Regions: 531
European Ombudsman: 66
European Data Protection Supervisor: 43
(The smaller bodies together account for the remaining 3%.)
Total: 41,159
Entry Streams
•  Administrators (AD)
e.g. policy officers, lawyers, auditors, scientific
officers, translators, interpreters, communication
& press officers …
Qualification: University degree (bachelor)
•  Assistants (AST)
e.g. secretaries, HR assistants, financial
assistants, conference technicians …
Qualification: Secondary education (at least) and
relevant professional experience
Where in the competition procedure?
Registration → CBT → Admission → Assessment centre → Reserve list
EPSO Development Programme
(2008 – 2011)
Redesign of the CBT pre-selection phase:
•  Keep verbal reasoning
•  Keep numerical reasoning
•  Skip EU knowledge
•  Introduce abstract reasoning tests
•  Introduce Situational Judgement Tests
•  Introduce professional skills testing,
including linguistic skills
•  New linguistic regime as of 2011: cognitive
tests in 23 languages
New EPSO admission phase (CBT):
•  Verbal & numerical reasoning: in 23 languages as of 2011!
•  Abstract reasoning
•  Situational Judgement Tests (in EN, FR, DE; where appropriate)
•  Professional / linguistic skills (where appropriate)
Situational Judgment Tests
EPSO Development Programme
•  EDP, action No. 8: the introduction of situational/behavioural testing
based on a well-founded competency framework
Why?
•  Good indicators of job performance
•  Widely used and perceived as relevant and fair
SJT competencies*
(from the EU competency framework)
Analysing & Problem Solving: identifies the critical facts in complex issues
and develops creative and practical solutions.
Delivering Quality & Results: takes personal responsibility and initiative
for delivering work to a high standard of quality within set procedures.
Prioritising & Organising: prioritises the most important tasks, works
flexibly and organises own workload efficiently.
Resilience: remains effective under a heavy workload, handles
organisational frustrations positively and adapts to a changing work
environment.
Working with Others: works co-operatively with others in teams and across
organisational boundaries and respects differences between people.
SJT: features
–  SJTs evaluate workplace-related behaviour
–  Target groups: AD5 and AST3 annual cycles of competitions
–  Administered at the CBT stage
–  Score report used as expert input at the AC stage
–  20-item test in 30 minutes, in either EN, FR or DE
=> NOT a speed test!
–  Tests developed in cooperation with an expert external contractor
–  First launch: AD cycle 2010 (non-eliminatory)
Multi-stage test-development process for SJT
1.  Creation of draft item models in EN
2.  Review by experienced SMEs (contractor) => amendments/modifications
(if applicable)
3.  Statistical trials => selection of item models
4.  Review by EPSO quality board with regard to
–  Organisational fit
–  Degree of ambiguity
–  Translation/localisation (and other issues)
5.  Translation into FR and DE
6.  CBT upload => ROLL-OUT
Which are the most and least effective options?

You work as part of a technical support team that produces work internally
for an organisation. You have noticed that work is often not performed
correctly or a step has been omitted from a procedure. You are aware that
some individuals are more at fault than others, as they do not make the
effort to produce high-quality results and they work in a disorganised way.

a.  Explain to your team why these procedures are important and what the
consequences are of not performing them correctly.
b.  Try to arrange for your team to observe another team in the
organisation who produce high-quality work.
c.  Check your own work and that of everyone else in the team to make sure
any errors are found.
d.  Suggest that the team tries many different ways to approach their work
to see if they can find a method where fewer mistakes are made.

(The slide's answer key marks one option ‘M’ for most effective and one ‘L’
for least effective.)
Candidates' sentiment towards SJTs [figure omitted]
SJT cumulative score distributions [figure omitted]
To sum up
•  No evidence of bias by language version
•  No evidence of bias by gender or age
•  Good psychometric properties for the tests
–  Reliabilities found to exceed 0.7 for the short forms
–  This has been confirmed for the longer 20-item tests
•  Score ranges showed that the tests discriminate between candidates in an
effective but fair way
… and candidates actually like SJTs!

Thank you for your attention!
Measuring competencies
with Cognitive Diagnosis
Models: an application to
a Situational Judgment Test.
— Prof. Julio Olea
Professor of Psychology at the
Universidad Autónoma de Madrid and
co-director of the UAM-IIC chair in
Psychometric Models and Applications (MAP).
Outline of the presentation
– Example: estimating mathematical competency.
– CDMs: models, application process, and likely contributions to the
measurement of competencies.
– Empirical study: objectives, procedure and results of applying a CDM to
an SJT.
Example: mathematical competency of a six-year-old
• Has the child acquired the competencies of adding and subtracting
natural numbers?
• Four possible latent states:
a) Cannot add, cannot subtract (0, 0)
b) Can add, cannot subtract (1, 0)
c) Cannot add, can subtract (0, 1)
d) Can add, can subtract (1, 1)
Measuring competencies with Cognitive Diagnosis Models:
an application to a Situational Judgment Test
Pablo E. García (IIC)
Julio Olea (UAM)
Jimmy de la Torre (Rutgers U.)
Test: one child's responses
Item 1: 3 + 4 = → answers 7 (correct)
Item 2: 7 − 2 = → answers 9 (incorrect)
Item 3: 7 + 6 − 3 = → answers 16 (incorrect)

Which latent state is most likely?
a) Cannot add, cannot subtract (0, 0)
b) Can add, cannot subtract (1, 0)
c) Cannot add, can subtract (0, 1)
d) Can add, can subtract (1, 1)
The simplest CDM: the DINA model
• Q-matrix: specifies which attributes each item requires.
• Assumes two parameters for each item:
• s: slipping probability, e.g. 0.1
• g: guessing probability, e.g. 0.2
• Assumes that all the required competencies are needed to answer an item
correctly.
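The DINA item-response rule just described can be sketched in a few lines (the s and g values are the example ones from the slides; the Q-row and attribute patterns are illustrative):

```python
def eta(alpha, q_row):
    """DINA latent response: 1 only if the examinee masters every
    attribute the item requires (non-compensatory)."""
    return int(all(a >= q for a, q in zip(alpha, q_row)))

def p_correct(alpha, q_row, s=0.1, g=0.2):
    """P(correct) under DINA: 1 - s if all required attributes are
    mastered, g (guessing) otherwise."""
    return (1 - s) if eta(alpha, q_row) else g

# An item requiring both addition and subtraction (Q-row = (1, 1)):
p_master = p_correct((1, 1), (1, 1))   # masters both attributes
p_partial = p_correct((1, 0), (1, 1))  # lacks one attribute -> guessing
```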
Probabilities of a correct response (s = 0.1, g = 0.2)

LATENT STATE (α) | Item 1 (1,0) | Item 2 (0,1) | Item 3 (1,1)
Cannot add, cannot subtract (0, 0) | 0.2 | 0.2 | 0.2
Can add, cannot subtract (1, 0) | 0.9 | 0.2 | 0.2
Cannot add, can subtract (0, 1) | 0.2 | 0.9 | 0.2
Can add, can subtract (1, 1) | 0.9 | 0.9 | 0.9

(The vector next to each item is its Q-matrix row, i.e. whether the item
requires addition and/or subtraction.)
Likelihood: probabilities for this child's response pattern

LATENT STATE (α) | Item 1 | Item 2 | Item 3 | Likelihood
Cannot add, cannot subtract (0, 0) | 0.2 | 0.8 | 0.8 | 0.128
Can add, cannot subtract (1, 0) | 0.9 | 0.8 | 0.8 | 0.576
Cannot add, can subtract (0, 1) | 0.2 | 0.1 | 0.8 | 0.016
Can add, can subtract (1, 1) | 0.9 | 0.1 | 0.1 | 0.009

The most likely state is (1, 0): the child can add but cannot subtract.
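The maximum-likelihood classification in this example can be sketched as follows, using the slides' s = 0.1 and g = 0.2 and the child's observed response pattern:

```python
from itertools import product

S, G = 0.1, 0.2  # slip and guess, as in the slides

# Q-matrix for the three arithmetic items: (addition, subtraction).
Q = [(1, 0), (0, 1), (1, 1)]
# Observed pattern: item 1 correct, items 2 and 3 wrong.
X = (1, 0, 0)

def p_item(alpha, q):
    """P(correct) for one item under DINA."""
    mastered = all(a >= r for a, r in zip(alpha, q))
    return 1 - S if mastered else G

def likelihood(alpha):
    """Probability of the observed pattern X given latent state alpha."""
    L = 1.0
    for x, q in zip(X, Q):
        p = p_item(alpha, q)
        L *= p if x == 1 else 1 - p
    return L

states = list(product((0, 1), repeat=2))
best = max(states, key=likelihood)  # (1, 0): adds but does not subtract
```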
Different CDMs
A VARIETY OF MODELS (Rupp, Templin & Henson, 2010):
• Non-compensatory: DINA, deterministic-input, noisy-and-gate (Junker &
Sijtsma, 2001).
• Compensatory: DINO, deterministic-input, noisy-or-gate (Templin &
Henson, 2006): s is the probability of failure when at least one attribute
is present.
• General models: G-DINA (de la Torre, 2011).

DISTINCTIVE FEATURES:
• Multidimensional models.
• Confirmatory in nature: fit indices can be obtained to decide which model
to apply.
• Novel relative to other psychometric frameworks (CTT, IRT): they provide
richer diagnostic information.
Applying a CDM: expert judges specify the Q-matrix; with K attributes there
are 2^K possible latent profiles (e.g. [0 1 0 1 1]).
SJTs for measuring competencies
• Recurrent validity problems (Christian, Edwards & Bradley, 2010;
Ployhart & MacKenzie, 2011).
– Content-validity evidence: what exactly do SJTs measure? One competency
or several? Normally a single score is extracted.
– Internal-structure evidence: how can we verify that the competencies are
really reflected in the items or situations?
– Evidence of relations with other variables: criterion-related validity
studies are inconsistent, though predictive-validity studies are promising
(Lievens & Sackett, 2012).
SJTs for measuring competencies (Lievens & Sackett, 2012)

PREDICTORS (admission exam) | Internship grades (7 years) | Job performance (9 years)
Cognitive-ability and scientific-knowledge tests | .13(*) | .12
Medical-content test | −.02 | −.17
Video-based SJT (interpersonal skills) | .22(*) | .23(*)
What can CDMs contribute to the measurement of competencies?
• Content evidence: the Q-matrix.
• Evidence on internal structure and response processes: model-data fit.
• Evidence of relations with other variables: ??? This remains to be
checked.
Empirical study: objectives
• To apply CDMs to the measurement of professional competencies through
an SJT.
• Main question: which competencies can we judge that examinees do or do
not possess?
• Additional questions:
– Which competencies are embedded in the situations?
– Are all of them relevant?
– Are all the competencies included in a situation required to give the
appropriate response?
– Do the categorical measures provided by the CDM relate to other
variables?
Participants and instruments
Participants: 485 employees of a Spanish bank (internal development
programme). The SJT and a Big Five personality test were administered.
Computerised 30-item SJT (from the IIC's eValue battery). Each item begins
with a critical incident, followed by a question and three options.

ITEM 4: During a meeting with your work team and some managers from other
areas, after you present the strengths of the project under development, one
of the managers replies: “I find that your project has some shortcomings.”
What would you answer?
a) I disagree, but tell me exactly what kind of shortcomings, and let us go
over every point you have any doubt about.
b) Probably we have not gone deep enough into the advantages. Let us review
each point of the project again.
c) That may be; it is something we are working on and expect to resolve
soon. Even so, it will be profitable all the same.
Defining the Q-matrix
Great Eight competency library (SHL Group; Kurz & Bartram, 2002; Bartram,
2005). Three levels:
1. 8 general factors.
2. 20 competencies.
3. 112 components.

Competencies used, with the Great Eight factors and their Big Five
correlates (Interacting & presenting: Extraversion; Adapting & coping:
Emotional stability):
Deciding & initiating action
Relating & networking
Persuading & influencing
Presenting & communicating information
Following instructions & procedures
Adapting & responding to change
Coping with pressure & setbacks
Defining the Q-matrix
• Four experts in organisational psychology specified the Q-matrix:
– They were given several indicators for each competency.
– For each item they judged: which competencies are needed to give the
response considered most appropriate?
– The task was carried out in three phases:
• Phase 1: individual task.
• Phase 2: possible reconsideration after seeing the Phase 1 results.
• Phase 3: discussion group to try to reach greater agreement.
– One competency (Relating & networking) was assigned to only 4 items.
Defining the Q-matrix
• Which Q-matrix?
– The fit of several candidate matrices was assessed statistically.
– Chen, de la Torre & Zhang (in press): the BIC (Bayesian Information
Criterion; Schwarz, 1978) allows the best matrix to be chosen, taking the
G-DINA model as true.

BIC = −2·LL + P·ln(N)

Agreement rule | BIC, Q-matrix (7) | BIC, Q-matrix (6)
Unanimity | 16,115.88 | 15,782.75
At least two | 16,169.48 | 15,749.79
At least one | 16,300.34 | 15,840.37

Q-matrix (6): one competency removed, four items removed.
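The BIC comparison above can be sketched in a few lines. The log-likelihoods below are invented for illustration (the slides report only the resulting BIC values); only the formula and the "lower is better" decision rule follow the text:

```python
from math import log

def bic(loglik, n_params, n_obs):
    """BIC = -2*LL + P*ln(N); lower values indicate a better trade-off
    between fit and parsimony."""
    return -2 * loglik + n_params * log(n_obs)

# Hypothetical fits for two candidate Q-matrices on N = 485 examinees.
bic_q7 = bic(loglik=-7900.0, n_params=52, n_obs=485)
bic_q6 = bic(loglik=-7860.0, n_params=48, n_obs=485)
preferred = "Q6" if bic_q6 < bic_q7 else "Q7"
```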
Defining the Q-matrix
Competencies: 1. Persuading & influencing; 2. Presenting & communicating
information; 3. Following instructions & procedures; 4. Adapting &
responding to change; 5. Coping with pressure & setbacks; 6. Deciding &
initiating actions.

[26 × 6 binary Q-matrix (items × competencies) omitted: the individual 0/1
entries could not be reliably recovered from the extracted layout.]
Selecting the CDM
• Relative fit:
– Once the Q-matrix was selected, the G-DINA model was compared with more
restrictive (and parsimonious) models: DINA (non-compensatory) and DINO
(compensatory).
– Since these are nested models, the likelihood ratio was used to assess
whether the loss of fit from using a simpler model was statistically
significant. It was.
• The G-DINA model (de la Torre, 2011) was therefore selected.
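The likelihood-ratio comparison of nested models works as follows. All the numbers here are hypothetical (the slides report only that the loss of fit was significant); the degrees of freedom equal the difference in free parameters between the two models:

```python
def likelihood_ratio(ll_restricted, ll_general):
    """LR statistic for nested models: -2*(LL_restricted - LL_general).
    Compared against a chi-square critical value with df equal to the
    difference in the number of free parameters."""
    return -2 * (ll_restricted - ll_general)

# Hypothetical log-likelihoods: DINA (restricted) nested in G-DINA.
lr = likelihood_ratio(ll_restricted=-7950.0, ll_general=-7860.0)
df = 30  # assumed parameter difference, for illustration only
CHI2_CRIT_30_05 = 43.77  # chi-square critical value, df = 30, alpha = .05
reject_restricted = lr > CHI2_CRIT_30_05  # significant loss of fit
```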
Estimating competency profiles
• Number of possible profiles: 2^K = 2^6 = 64.
• The most frequent:

% | Competency 1 | 2 | 3 | 4 | 5 | 6
9.46 | 1 | 1 | 0 | 0 | 1 | 1
8.25 | 1 | 0 | 0 | 0 | 1 | 0
8.08 | 0 | 1 | 0 | 1 | 0 | 0
7.93 | 1 | 1 | 1 | 0 | 1 | 1
7.05 | 1 | 1 | 1 | 1 | 1 | 1

1. Persuading & influencing 2. Presenting & communicating information
3. Following instructions & procedures 4. Adapting & responding to change
5. Coping with pressure & setbacks 6. Deciding & initiating actions
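Enumerating the 2^K latent profiles is straightforward; a sketch for the six retained competencies, with the modal profile taken from the table above:

```python
from itertools import product

K = 6  # number of competencies in the final Q-matrix
# All possible mastery/non-mastery patterns over K attributes.
profiles = list(product((0, 1), repeat=K))
n_profiles = len(profiles)  # 2**6 = 64

# Each examinee is assigned the profile with the highest posterior
# probability; (1, 1, 0, 0, 1, 1) was the modal profile (9.46%).
modal = (1, 1, 0, 0, 1, 1)
```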
Validity evidence: relations with other variables

Adapting & coping / Emotional stability (mean)
Has the competency: 69.26
Lacks the competency: 64.59
t = −3.37; one-tailed p = .0005

Interacting & presenting / Extraversion (mean)
Has the competency: 56.63
Lacks the competency: 53.58
t = −2.52; one-tailed p = .006
CONCLUSIONS: what can CDMs contribute to the measurement of competencies?
• Content evidence: they make it possible to specify the content included
in the items, and to keep only the genuinely valid competencies.
• Evidence on internal structure and response processes: the best-fitting
model provides information about the response processes.
• Evidence of relations with other variables: it would be worth studying
whether they yield better predictions of job performance.

Thank you very much
POSTER ABSTRACTS
Employability competencies:
designing a situational test.
— Laura Arnau Sabate1, Josefina Sala Roca1,
Mercè Jariot2, Teresa Eulalia Marzo3, Adrià Pagès2
ABSTRACT
Employability competencies are basic competencies needed to obtain and keep
a job, regardless of the type and sector of occupation. They are transversal
and transferable to other personal and professional situations or contexts.
Our group has developed a proposed set of employability competencies
(self-organisation, construction of the training-professional project,
decision-making and problem-solving, teamwork, communication, perseverance,
flexibility and responsibility), validated by a group of experts in a first
phase (Arnau et al., 2013) and subsequently validated with professionals
from the different economic sectors of the population (work pending
publication).
A situational test has been designed on the basis of five stories set in
different contexts of everyday life (with friends, at school, in the family,
more formal or informal settings). These stories pose situations to which
the subject must respond by saying how they would act. The situations
require a good level of competency to be handled adequately. A panel of
experts contributed to the construction of the stories and situations.
To build the response options, interviews were conducted with 115 young
people aged 12 to 18, which were transcribed and subjected to content
analysis. From the answers given, the expert panel selected five responses
that could indicate different levels of competency and would also ensure
sufficient variability.
A further panel of 11 different experts is now validating the assignment of
each situation as an indicator of the competencies, and in a second phase
these experts will be asked to assign a value to each response according to
the level of competency. Once built, the test will be validated to check its
psychometric properties and review its internal structure.
1 Departament de Pedagogia Sistemàtica i Social. Universitat Autònoma de Barcelona
2 Departament de Pedagogia Aplicada. Universitat Autònoma de Barcelona
3 Pere Tarrés. Universitat Ramón Llull
Analysis of the cognitive structure of the quantitative-skills
area of the EXHCOBA using the LLTM and LSDM models.
— Juan Carlos Pérez Morán1, Norma Larrazolo
Reyna and Eduardo Backhoff Escudero
Instituto de Investigación y Desarrollo Educativo
Universidad Autónoma de Baja California
ABSTRACT
This study combines aspects of cognitive psychology and psychometrics (Rupp
& Mislevy, 2006) to analyse the cognitive structure of the Quantitative
Skills (HC) area of the Examen de Habilidades y Conocimientos Básicos
(EXHCOBA; Backhoff, Tirado & Larrazolo, 2001), using the Linear Logistic
Test Model (LLTM) of Fischer (1973, 1997) and the Least Squares Distance
Method (LSDM) proposed by Dimitrov (2007). To this end, a group of experts
proposed fourteen operations for solving the HC items, drawing on concurrent
and retrospective verbal reports from a group of secondary-school students.
The cognitive structure specified by the experts was then analysed, and the
dimensionality and fit of the items to the Rasch and LLTM models were
evaluated. The results showed that the data fit the LLTM moderately well
with a fourteen-attribute Q-matrix, and that refining the item-solving
operations could better explain the items' sources of difficulty and
examinees' test performance. In addition, applying the LSDM showed that the
validation of the cognitive attributes required to solve the HC items of the
EXHCOBA needs to be improved.
Keywords: LLTM model, Rasch model, Q-matrix, EXHCOBA.
1 Presenter: Juan Carlos Pérez Morán
E-mail: [email protected]
Instituto de Investigación y Desarrollo Educativo, Universidad Autónoma de Baja California
Mobile: 044 (664) 3851253 · Tel.: (646) 175-07-33 ext. 64508 · Fax: (646) 174-20-60
Measuring the Initiative-Innovation competency with a test:
design, construction and initial psychometric analysis of a
computerised Situational Judgment Test.
— Sonia Rodríguez, Beatriz Lucía, David Aguado
Instituto de Ingeniería del Conocimiento
ABSTRACT
Initiative-Innovation is a transversal competency defined in most of the
competency models in widest use both in Spain and internationally (Bartram,
2002; Alles, 2002; Lombardo, 2005).
This work presents the design, construction and analysis of a situational
test aimed at assessing this competency. In the test, examinees must imagine
that they are part of a work team and interact with the other members by
e-mail. The e-mail messages they receive pose situations in which the
different behaviours associated with the target competency can be displayed.
Examinees can also send messages on their own initiative.
Administered to a sample of 261 professionals from a multinational financial
institution, the results show good discrimination but low internal
consistency (alpha = .52), in line with other situational tests.
Cheating in Internet-based testing: analysis of the
eCat verification test.
— David Aguado1, Julio Olea2, Vicente Ponsoda2
and Francisco Abad2
ABSTRACT
The Internet has recently changed companies' recruitment and selection
practices, and it is now common for both recruitment and initial assessment
to be carried out online, in what is known as Unproctored Internet Testing
(UIT). Despite its great advantages in terms of cost and flexibility, UIT is
a controversial practice because of problems related to cheating. To control
for it, the International Test Commission recommends administering an
assessment in a proctored setting and comparing scores in the UIT and
proctored conditions to identify cheaters. The aim of this work is to
analyse the results of implementing a specific verification procedure on
eCat, a computerised adaptive test of English proficiency, applied in a real
personnel-selection process.
A sample of 417 candidates for engineering positions at a Spanish
multinational in the energy sector was assessed. The initial results show
that the procedure is effective at detecting cheating: 44 people (10.55%)
were found to have used some kind of additional help in the UIT setting.
1 Instituto de Ingeniería del Conocimiento
2 Departamento de Psicología Social y Metodología. Universidad Autónoma de Madrid.
TEA's aptitude battery:
reliability and validity evidence.
— David Arribas Águila
R&D&i Department, TEA Ediciones
ABSTRACT
The BAT-7 is a new battery for estimating intelligence and assessing eight
cognitive aptitudes. It is organised into three levels of increasing
difficulty and is aimed at assessing school pupils, university students and
adults.
Method: the standardisation sample comprised 4,263 school pupils and 1,507
adults with different levels of education. A unidimensional three-parameter
logistic model was used, and fit was examined through standardised residuals
and the chi-square statistic. Reliability was analysed using the ordinal
alpha coefficient and information functions. Validity was studied with a
multigroup CFA model.
Results: the three-parameter IRT model showed adequate fit for all items.
Reliability values ranged from 0.79 to 0.91 for the tests and from 0.91 to
0.97 for the intelligence indices. The structural model based on CHC theory
fit the empirical data well.
Discussion: the BAT-7 provides an informative and highly reliable measure
for assessing cognitive aptitudes. Its administration also yields a quite
reasonable estimate of general ability (g), fluid intelligence (Gf) and
crystallised intelligence (Gc).
Correspondence:
[email protected]
c/ Fray Bernardino Sahagún, 24.
28036 Madrid
The role of the work psychologist in recruitment
and personnel selection.
— Gloria Castaño, Vicente Ponsoda, M. Ángeles Romeo,
Pedro Sanz, Ana Sanz, José Manuel Chamorro, C. Manuel
Leal, Yolanda Peinador, Jesús Martínez, Laura Alonso, M.
Ely Iglesias, Francisco Álvarez and Luis Picazo
Working Group on Good Practice in Recruitment and Selection, Colegio Oficial
de Psicólogos de Madrid
ABSTRACT
This study presents the results of a survey carried out by the Working Group
on Good Practice in Recruitment and Selection of the Colegio Oficial de
Psicólogos de Madrid. Its main objective is to clarify the role of
psychologists in recruitment and selection, both in Human Resources
departments and in consultancies providing services in this area.
For the study, 129 companies were selected, including the IBEX-35 firms,
other companies representative of the different sectors of activity, and the
most relevant consultancies. The response rate was 29.2%.
The results indicate that most heads of recruitment and selection are
psychologists (65.8%) and that psychology remains the preferred degree when
hiring professionals for this activity. Psychologists play a relevant role
in every phase of the R&S process: clarifying the demand (76.3%), drawing up
the requirements profile (84.2%), recruiting and attracting candidates
(89.5%) and pre-screening candidates (92.1%). In the assessment phase,
psychologists are mainly responsible for assessing intelligence, aptitudes,
personality and competencies (81.6%) and for writing reports (89.5%), and
they take part in the decision on the chosen candidate (76.3%). Their
involvement is somewhat lower in following up the new hire (57.9%), in
departures (28.9%) and in auditing the process (44.7%). As for the
techniques used, the majority (73.7%) use psychological tests. The poster
will present further information collected in the survey.
How should the different options of a
situational-test item be scored?
— Miguel Ángel Sorrel1, Francisco José Abad1,
Vicente Ponsoda1, Julio Olea1, Juan Francisco
Riesco2, Maria Luisa Vidania2
ABSTRACT
Whereas in cognitive-ability tests it is clear which option of an item is
correct, in situational judgment tests there is often no objectively correct
alternative, so several of the options may be plausible. This makes it
necessary to develop scoring procedures. In this study, two alternatives to
the original proposal for scoring responses to Becker's integrity test
(Becker, 2005) were generated. The test was translated into Spanish and
administered to a sample of 182 people. To examine the test's convergent and
discriminant validity, measures of the five basic personality traits
(NEO-FFI), the Marlowe-Crowne social desirability scale (SDS) and an
integrity questionnaire built for this purpose were also administered. The
results indicate that the two alternative scores lead to better psychometric
properties in terms of reliability and validity.
1 Universidad Autónoma de Madrid
2 Gabinete Psicopedagógico de la Academia de Policía Local de la Comunidad de Madrid.
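Becker's original key is not reproduced in the abstract, so as a generic illustration of the kind of scoring procedure such studies compare, here is one hedged sketch: a proximity score that credits responses by their distance from an expert-keyed option (a hypothetical scheme, not one of the poster's two alternatives):

```python
def proximity_score(response, key, n_options):
    """Score an SJT response by its closeness to the expert key.

    Options are coded by expert-judged rank (1 = best ... n = worst);
    the score decays linearly with the distance to the keyed rank.
    Hypothetical scheme for illustration, not Becker's (2005) scoring.
    """
    return 1 - abs(response - key) / (n_options - 1)

# Picking the keyed option scores 1; the farthest option scores 0.
full = proximity_score(response=2, key=2, n_options=5)
worst = proximity_score(response=5, key=1, n_options=5)
```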
NOTES
http://www.iic.uam.es/catedras/map