Supplementary Materials for
How to compete with robots by assessing job automation risks and
resilient alternatives
Antonio Paolillo et al.
Corresponding author: Dario Floreano, [email protected]; Rafael Lalive, [email protected]
Sci. Robot. 7, eabg5561 (2022)
DOI: 10.1126/scirobotics.abg5561
This PDF file includes:
Supplementary Text
Tables S1 to S6
Figs. S1 to S4
References (25–123)
TRL evaluation of the robotic abilities
We inspected the state-of-the-art of each robotic ability to assess the corresponding TRLs used in the ARI
computation. To this end, we reviewed the scientific literature in the fields of robotics and machine learning.
We primarily consulted review and survey papers, to have a general overview of the technology behind the
individual robotic abilities. If there were no survey papers, we used more specific papers to describe the
current development stage. Whenever possible, the websites of available commercial products or pertinent
applications are given as evidence of the technology readiness.
Value | Definition
1 | Basic principle observed and reported
2 | Technology concept formulated
3 | Experimental proof of concept
4 | Technology validated in lab
5 | Technology validated in relevant environment
6 | Technology demonstrated in relevant environment
7 | System prototype demonstration in operational environment
8 | System complete and qualified
9 | Actual system proven in operational environment

Table S1: Technology Readiness Level scale18 used to evaluate the maturity of robotic abilities.
Human ability | Matched robotic abilities | Matched TRL
Active learning | None (Unmatched) | 0/9
Active listening | None (Unmatched) | 0/9
Arm-hand steadiness | Position constrained parameterised motion | 9
Auditory attention | Object detection | 7
Category flexibility | Context-based recognition + Pre-defined reasoning | 6
Complex problem solving | Task reasoning | 4
Control precision | Position constrained parameterised motion | 9
Coordination | Direct physical interaction + Intuitive interaction | 4
Critical thinking | Task reasoning | 4
Deductive reasoning | Pre-defined reasoning | 9
Depth perception | None (Unmatched) | 0/9
Dynamic flexibility | None (Unmatched) | 0/9
Dynamic strength | None (Intrinsic) | 9
Equipment maintenance | Recognition of the need for parameter adaptation | 9
Equipment selection | Adaptation of individual components | 8
Explosive strength | None (Intrinsic) | 9
Extent flexibility | Position constrained parameterised motion | 9
Far vision | None (Unmatched) | 0/9
Finger dexterity | Complex object grasping | 8
Flexibility of closure | Context-based recognition + Parameterised motion | 6
Fluency of ideas | None (Unmatched) | 0/9
Glare sensitivity | None (Unmatched) | 0/9
Gross body coordination | Position constrained parameterised motion | 9
Gross body equilibrium | Position constrained parameterised motion | 9
Hearing sensitivity | Direct single and multi-parameter sensing + Object detection | 7
Inductive reasoning | Observation learning | 7
Information ordering | None (Unmatched) | 0/9
Installation | None (Unmatched) | 0/9
Instructing | None (Unmatched) | 0/9
Judgment and decision making | Dynamic autonomy + Task reasoning | 4
Learning strategies | Interaction acquisition | 4
Management of financial resources | None (Unmatched) | 0/9
Management of material resources | None (Unmatched) | 0/9
Management of personnel resources | None (Unmatched) | 0/9
Manual dexterity | Unknown object handling | 5
Mathematical reasoning | None (Intrinsic) | 9
Mathematics | None (Intrinsic) | 9
Memorization | None (Unmatched) | 0/9
Monitoring | Dynamic autonomy | 4
Multi-limb coordination | None (Unmatched) | 0/9
Near vision | None (Unmatched) | 0/9
Negotiation | None (Unmatched) | 0/9
Night vision | None (Unmatched) | 0/9
Number facility | None (Intrinsic) | 9
Operation and control | None (Unmatched) | 0/9
Operation monitoring | Direct single and multi-parameter sensing + Recognition of the need for task adaptation | 9
Operations analysis | None (Unmatched) | 0/9
Oral comprehension | Social interaction | 9
Oral expression | Social interaction | 9
Originality | None (Unmatched) | 0/9
Perceptual speed | Context-based recognition | 6
Peripheral vision | None (Unmatched) | 0/9
Persuasion | None (Unmatched) | 0/9
Problem sensitivity | Recognition of the need for parameter adaptation | 9
Programming | None (Unmatched) | 0/9
Quality control analysis | Dynamic autonomy + Recognition of the need for task adaptation | 4
Rate control | None (Unmatched) | 0/9
Reaction time | None (Unmatched) | 0/9
Reading comprehension | None (Unmatched) | 0/9
Repairing | None (Unmatched) | 0/9
Response orientation | Dynamic autonomy + Pre-defined closed-loop motion | 4
Science | None (Unmatched) | 0/9
Selective attention | None (Unmatched) | 0/9
Service orientation | None (Unmatched) | 0/9
Social perceptiveness | Intuitive interaction + Social interaction | 4
Sound localization | None (Unmatched) | 0/9
Spatial orientation | Feature-based location | 7
Speaking | None (Unmatched) | 0/9
Speech clarity | None (Unmatched) | 0/9
Speech recognition | None (Unmatched) | 0/9
Speed of closure | Context-based recognition | 6
Speed of limb movement | Parameterised motion | 9
Stamina | None (Intrinsic) | 9
Static strength | None (Unmatched) | 0/9
Systems analysis | Basic environment envisioning | 3
Systems evaluation | Multiple parameter adaptation | 9
Technology design | None (Unmatched) | 0/9
Time management | None (Unmatched) | 0/9
Time sharing | None (Unmatched) | 0/9
Troubleshooting | Dynamic autonomy + Individual parameter adaptation | 4
Trunk strength | None (Intrinsic) | 9
Visual color discrimination | None (Unmatched) | 0/9
Visualization | Flexible object interaction | 3
Wrist-finger speed | None (Unmatched) | 0/9
Writing | None (Unmatched) | 0/9
Written comprehension | None (Unmatched) | 0/9
Written expression | None (Unmatched) | 0/9
Table S2: List of the 87 human abilities, the corresponding matched robotic abilities and matched TRL.
We match a total of 36 human abilities; when a human ability matches a combination of two robotic
abilities, we take the lower of the two TRLs. When no match was possible, we assigned a matched TRL of
0 and 9 for the low-automation and high-automation scenario, respectively. A TRL of 0 means robots
cannot match humans, whereas 9 means robots outperform humans.
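The matching rule described in the caption can be sketched in a few lines of Python (an illustrative sketch of ours, not the study's code; the example TRL values are taken from the rows of Table S2):

```python
def matched_trl(robotic_trls, scenario="low"):
    """Matched TRL for one human ability.

    robotic_trls: TRLs of the matched robotic abilities
                  (empty list = no match possible).
    When two robotic abilities are combined, the lower TRL counts;
    unmatched abilities get TRL 0 in the low-automation scenario
    and TRL 9 in the high-automation scenario.
    """
    if not robotic_trls:                      # no match possible
        return 0 if scenario == "low" else 9
    return min(robotic_trls)                  # combination: take the lower TRL

# Examples with values from Table S2:
print(matched_trl([4, 9]))        # Judgment and decision making -> 4
print(matched_trl([], "low"))     # Originality, low-automation  -> 0
print(matched_trl([], "high"))    # Originality, high-automation -> 9
```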
In this section, we present the evaluation of each robotic ability considered in our study. The evaluation
was performed using the Technology Readiness Level (TRL) scale as defined by the European Union18
(see Table S1). In what follows, each robotic ability is listed with the corresponding MAR definition
(italic font) and a review of the state of the art. Table S2 gives an overview of the matching between
human and robotic abilities and the corresponding TRLs.
Direct physical human-robot interaction, i.e., the user controls the robot by physically interacting with
it. The robot reacts to the user interaction by feeding back physical information to the user via the contact
point. Physical human-robot interaction has reached a high level of maturity. It integrates diverse
technologies and research in the fields of control, sensing, mechanical design and many more25. Nowadays,
it is possible to design and build robots with the ability to perceive collisions and react to them26.
Technological advances in this field led to the building of impressive robot manipulators, such as LBR iiwa
manufactured by KUKA27 and Panda by Franka Emika28, that demonstrate this ability in mass-produced
robots. Thus, we assessed the TRL of this robotic ability to 9.
Complex object grasping, i.e., the robot is able to pick up an object belonging to a certain parameterised
type where the object can be articulated, or consists of multiple separate parts. Robotic grasping, essential
for the execution of everyday-life tasks, is a lively research activity spanning from mechanical design to
machine learning29. Solutions such as claws, suction cups or a combination of these30 are used in structured
industrial environments. More complex and general grasping is made possible by the employment of either
soft materials31,32 or fingers33,34. Some commercially available soft material solutions exist, such as the
Festo MultiChoiceGripper35 and the SoftRoboticsInc mGrip36. Among many available robotic hands37,
platforms worth mentioning are the Soft Hand38 by qb robotics and Hannes39,40 developed by IIT and
INAIL. To the best of our knowledge, none of these technologies on its own is a general solution, but the
impressive examples justify a TRL of 8.
Unknown object handling, i.e., the robot is able to determine the generic grasping properties of an
unknown object. It is able to use those properties to determine how to handle and place the object. How to
grasp an unknown object has been extensively studied in an analytical way41 as well as using data-driven
approaches42. A large effort has been devoted to finding solutions for industrial applications43; promising
solutions44,45 are paving the road towards a comprehensive and generic technology to pick and place
unknown objects. However, at the current stage, this ability is the focus of more scientific research than
commercial development46. For this ability we consider a TRL of 5.
Dynamic autonomy, i.e., the system is able to alter its decisions about actions (sub-tasks) within the time
frame of dynamic events that occur in the environment so that the execution of the task remains optimal to
some degree. To deal with a changing environment, robots need deliberation abilities47, e.g. in the form of
a supervisor or planner48, to make decisions online towards a desired objective. Recent advances in
reinforcement learning49,50 and its hierarchical variants51–53 show the possibility of learning how to
make decisions on a sequence of sub-tasks to achieve a pre-defined goal. Although these approaches look
promising, they have not been proven safe when deployed in the real world. As an exception, autonomous
driving has received considerable attention for the advancement of technology and the necessary legal and
ethical regulation54–56. This cannot be said for other applications; we thus consider a TRL equal to 4.
Pre-defined closed-loop motion, i.e., the robot carries out pre-defined moves in sequence where each
motion is controlled to ensure position and/or speed goals are satisfied within some error bound.
Parameterised motion, i.e., the robot can execute a path move that optimises for a parameter. Position
constrained parameterised motion, i.e., the robot can operate through a physically constrained region
while at the same time optimising a parameter or set of parameters that constrain the motions of the robot.
Robot motion represents a foundational robotic ability, and several control techniques exist57–59. The
complexity of motion and the presence of multiple constraints, from both the robot and the environment,
pushed roboticists to leverage optimization techniques60 to compute or plan robot trajectories61. Nowadays,
robot manipulators are able to perform very accurate motions27,28, humanoids can achieve multiple motion
tasks simultaneously62, and mobile robots are capable of autonomous motion in domestic environments63,64:
each of these robotic motion abilities is evaluated with a TRL of 9.
Direct single and multi-parameter sensing, i.e., a robot uses sensors that provide a single, or multiple
parameter output directly, for example a distance sensor, or a contact sensor. The robot utilizes these
outputs to directly alter behavior within an operating cycle. The vast majority of commercially-available
robots rely on several sensing modalities for altering their behavior65. For this reason, we set a TRL of 9.
Object detection, i.e., multiple persistent features can be grouped to build models of distinct objects
allowing objects to be differentiated from each other and from the environment. We matched this robotic
ability with the human abilities ‘Auditory attention’ and ‘Hearing sensitivity’. The object here is thus a
sound. Products66–69 making use of sound detection and natural language understanding are available on
the market (see also the robotic ability “Social interaction”) for applications such as home automation and
music recognition. However, sound source localization systems still present challenging problems when
applied to robotics70,71. Current commercial sound localization is scaled too large for a single robot (such
as ShotSpotter72), or limited to identifying a direction (such as the ReSpeaker Mic Array73). For this reason
we set a TRL of 7.
Context-based recognition, i.e., the system is able to use its knowledge of context or location to improve
its ability to recognise objects by reducing ambiguities through expectations based on location or context.
Context modelling for object recognition has received a lot of attention in the last few decades74. Inspired
by studies in the domain of cognitive science75, many approaches have been proposed to exploit context
information to recognize and categorize objects within a scene76. Despite these efforts, this ability still
heavily depends on environmental conditions77,78. For this reason we evaluate this ability to be at TRL 6.
Multiple object detection, i.e., the system is able to delineate multiple objects from the static environment
where they may be partially occluded with respect to the sense data gathered. Many multi-object trackers
have been proposed and compared79, including systems specific to the handling of occlusions80. Even
though industrial solutions to detect multiple objects using different sensors81,82 are available, to our
knowledge, the provided technologies strongly depend on the application, which makes a general solution
difficult: TRL=8.
Feature-based location, i.e., the system calculates its position within an environment based on the motion
of fixed features in the environment. Simultaneous localization and mapping (SLAM) is a well-known
technique used to estimate the location of a robot in an environment while building or updating a map. A
lot of work has been done in research and, despite a strong interest from companies83, a number of
challenges still have to be solved in the robotics domain84. For this reason, we assessed the TRL of this
robotic ability to 7.
Recognition of the need for parameter adaptation, i.e., the system recognizes the need for parameter
adaptation. Individual parameter adaptation, i.e., the system alters individual parameters in any part of
the system based on assessments of performance local to the module on which the parameter operates.
Several control techniques enable robotic behavior adaptation to the uncertainty introduced by real-world
applications85,86. Among others, proportional-integral-derivative (PID) control law is a simple, mature
technique used to control industrial processes or implement robot low-level control loops. An automatic
tuning of its parameters is possible and industrial products exist87. These abilities are given a TRL of 9.
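As a minimal illustration of the PID control law mentioned above, the textbook discrete form can be written as follows (a generic sketch of ours, not the implementation of any cited product; the gains and the first-order plant are arbitrary examples):

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                       # accumulate I term
        derivative = (error - self.prev_error) / self.dt       # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order plant x' = -x + u towards the setpoint 1.0.
pid, x, dt = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01), 0.0, 0.01
for _ in range(2000):                                          # simulate 20 seconds
    u = pid.step(1.0, x)
    x += (-x + u) * dt
print(round(x, 2))  # settles close to the setpoint
```

The integral term removes the steady-state error, which is why even this minimal loop converges to the setpoint rather than slightly below it.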
Multiple parameter adaptation, i.e., the system alters several parameters based on the aggregate
performance of a set of interconnected or closely coupled modules. Artificial neural networks are powerful
function approximators and their parameters can be adjusted to maximize their accuracy. The set of
techniques used to train artificial neural networks, today also referred to as deep learning88, has been
demonstrated to be versatile and applicable to several domains, including robotics (despite several
limitations)89 and other commercial applications, such as the Google thermostat68 or a recent application
from DeepMind AI90 to improve energy consumption: TRL=9.
Adaptation of individual components, i.e., the system selects one of several processing components based
on online feedback during operation. The use of different sensory components can be handled at the level
of control design. For instance, composite adaptive techniques use both predicted torque and tracking error
to drive the motion of a manipulator91. In a visual tracker, different kinds of features can be weighted
according to their level of noise using tools from statistics92. From an estimation problem perspective, the
Kalman filter and its variations93,94 provide a method to fuse different sensor information and weigh the
estimation according to the reliability of the measurement. The Kalman filter is widely used across different
domains95. We set a TRL of 8.
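The reliability-based weighting performed by the Kalman filter can be seen in the simplest scalar case, where two independent noisy measurements of the same quantity are fused (a generic sketch of ours, independent of the systems cited above):

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Scalar Kalman-style fusion of two measurements.

    The gain k weights the more reliable (lower-variance) measurement
    more heavily; the fused variance is smaller than either input.
    """
    k = var_a / (var_a + var_b)           # Kalman gain
    mean = mean_a + k * (mean_b - mean_a)
    var = (1 - k) * var_a
    return mean, var

# Precise sensor (variance 0.1) fused with a noisy one (variance 0.9):
mean, var = fuse(1.0, 0.1, 2.0, 0.9)
print(mean, round(var, 2))  # 1.1 0.09 -> estimate stays near the precise sensor
```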
Recognition of the need for task adaptation, i.e., the system recognises that the performance of a
particular task could be optimised according to some metric, but no adaptation is performed. Multiple
task adaptation, i.e., a set of tasks performed during the process cycle is adapted over time (by reordering
tasks or by adapting individual ones) to optimise a particular metric; adaptation is the result of
accumulated experience. Recent advances in the field of robotic learning allowed the implementation of
task adaptation schemes. For example, one approach is used to adapt the robot to a particular task and
environment96. Another example is an adaptive scheme which allows smooth task switching in the context
of human-robot interaction97. Learning from demonstration can be used to learn task prioritization for
bimanual manipulation applications98. It is worth mentioning applications of reinforcement learning, whose
ambitious objective is to implement a general framework able to adapt to the different application domains,
as shown in the context of playing games99,100. While recognizing the need for adaptation can be
considered a mature robotic ability (TRL=9), some work still needs to be done to achieve a general
paradigm for multiple task adaptation. Thus, for the latter ability we assessed a TRL of 4.
Flexible object interaction, i.e., the system is able to envision the effect its planned actions will have on
flexible objects that it has parameterised. Simulations can be used to envision the effect of robotic actions
and decide the best one to take101. Two practical examples worth mentioning are pancake preparation102
and laundry folding103. While laundry folding is being investigated by a private company, no products
are available at the moment and their prototype is limited in what it can fold104. Both these applications are
very specific and neither is commercially available yet, thus we could not assess a TRL higher than 3.
Basic environment envisioning, i.e., the system is able to observe events in the environment that relate to
the task and envision their impact on the actions of the robot. Recent advances in computer vision using
deep neural networks allow accurate object detection and tracking105. This information can be used to
forecast the object motion and use it to modify the current motion planning of the robot106. Despite a few
exceptions in autonomous driving and unmanned aerial vehicles, research in this field is still at an early
stage: TRL=3.
Interaction acquisition, i.e., the system is able to acquire knowledge about its environment and objects
within it through planned interactions with the environment and objects. The trial and error mechanism of
reinforcement learning enables the agent to learn how to interact with its environment107. The bottleneck of
this approach is that it requires long training sessions that therefore have to run in simulation. This process
introduces a discrepancy between the simulated environment and the real task, an issue known as the
reality gap. Therefore, extensive research has been devoted to efficiently transferring what has been learnt
in simulation to the real robot108,109. However, deep reinforcement learning has not yet reached a state in
which it can be safely deployed in products: TRL=4.
Observation learning, i.e., the system is able to acquire knowledge indirectly from observing other robots
or people carrying out tasks. Imitation learning was introduced to simplify the role of the robotic user in
implementing desired tasks110. Today, techniques developed under the name of programming by
demonstration111 and learning from demonstration112 are popular in robotics. Even if very impressive
showcases of this ability exist113, it is difficult to find something close to a final product (TRL=7).
Pre-defined reasoning, i.e., the robot is able to use basic pre-defined knowledge about structures and
objects in the environment to guide action and interaction. Many robots have a pre-defined knowledge of
the environment in which they will operate. Examples are industrial manipulators that can be aware of their
workspace114,115, and warehouse mobile robots or autonomous cars that can be aware of a map of their
operational environment116. Thus, for this basic ability, we assessed a TRL=9.
Task reasoning, i.e., the system is able to reason about the appropriate courses of action to achieve a task
where there are alternative actions that can be undertaken. Typically, the system will be able to identify the
course of action that matches the desired task parameters; these usually involve time to completion,
resource usage, or a desired performance level. Making decisions on the course of actions to accomplish a
task is treated by reinforcement learning, where the optimality of the task execution can be defined with an
appropriate reward function during training. Specifically in robotics, reinforcement learning49,50,117 has been
demonstrated to be an effective technique to carry out very complex tasks like playing games with human-level performance89,99 or performing complex maneuvers with a helicopter118. However, several issues such
as safety or the reality gap problem (see “Interaction acquisition”) limit its application outside academic
purposes: TRL=4.
Social interaction, i.e., the system is able to maintain dialogues that cover more than one type of social
interaction, or domain task. The robot is able to manage the interaction provided it remains within the
defined context of the task or mission. The technology of dialogue systems is quite advanced119, thanks to
the availability of a multitude of machine learning techniques able to handle large amounts of data. Market
products such as Siri by Apple67, Alexa by Amazon66 and Google Home68 are valuable examples of systems
able to hold a conversation with humans, applying speech recognition and machine learning techniques:
TRL set to 9.
Intuitive interaction, i.e., the robot is able to intuit the needs of a user with or without explicit command
or dialogue. The user may communicate with the robot without issuing explicit commands; the robot will
infer the implied command from the current context and historical information. Even if impressive products
showing effective verbal communication exist (see “Social interaction”), technology to ensure both fluent
verbal and non-verbal human-robot interaction does not seem to be ready120. Indeed, recent work spells out
the complexity of this ability and the need for more advanced solutions able to make the human-robot
interaction more natural121. Furthermore, the relatively low TRL of the simpler “Object Detection” ability
indicates that a system capable of detecting the subtle indicators required for non-verbal communication is
still far off. Thus, for this ability, we assessed a TRL=4.
Sensitivity analysis for ARI calculations
The proposed approach assumes that we have no information on the skills and abilities that were not
matched and assessed. In order to account for those skills and abilities too, we specified two scenarios, a
low-automation and a high-automation scenario, which we report in Table 1 of the Main Text (reproduced
below for convenience). Without posterior information, we adopt a uniform prior probability of one half
for each scenario to calculate the ARI122. Specifying a probability distribution over the high-automation
or the low-automation scenario appears challenging, as both scenarios are extreme cases, while the
real world likely lies somewhere in between.
Our sensitivity analysis assumes that TRL information is missing at random, a common assumption in
statistics on missing values123. When TRL data is missing at random, we can complete it by drawing TRL
information at random from the set of skills and abilities that have been assessed. Repeating this process
several times, we estimate a probability distribution of ARI values.
Specifically, to perform our sensitivity analysis, we refer to the set of abilities and skills that are “assessed”
as A, and to the remaining skills and abilities that are “not assessed” as N. The algorithm for the
sensitivity analysis is then as follows:
1. Randomly draw a TRL from A and attribute this TRL to a member of N. This step provides a TRL for
each member in N such that its TRL distribution mimics the TRL distribution in A.
2. Calculate ARI using the inferred TRL on N and the actual TRL on A to compute function f in equation
(1) of the Main Text.
3. Repeat steps 1-2 999 times (an odd number that simplifies estimation of the median and percentiles).
This step provides an estimate of the probability distribution of ARI values (e.g. the 999 calculated values
of ARI).
4. We summarize the probability distribution of the ARI values using the average of the 999 calculated
ARI values (Mean), and the 5th (p5) and 95th (p95) percentiles.
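The four steps above amount to a simple Monte Carlo imputation, which can be sketched as follows (illustrative Python of ours; toy_ari is a made-up stand-in for the function f of equation (1) of the Main Text, and the TRL values are examples):

```python
import random
import statistics

def sensitivity_ari(assessed_trls, n_not_assessed, ari, n_draws=999):
    """Missing-at-random sensitivity analysis for the ARI.

    assessed_trls: TRLs of the assessed set A.
    n_not_assessed: size of the non-assessed set N.
    ari: function mapping a full list of TRLs to an ARI value
         (stands in for f in equation (1) of the Main Text).
    """
    draws = []
    for _ in range(n_draws):
        # Step 1: impute each member of N with a TRL drawn at random from A.
        imputed = random.choices(assessed_trls, k=n_not_assessed)
        # Step 2: compute ARI from the actual (A) plus imputed (N) TRLs.
        draws.append(ari(assessed_trls + imputed))
    draws.sort()
    # Step 4: summarize the distribution with Mean, p5 and p95.
    return {"mean": statistics.mean(draws),
            "p5": draws[int(0.05 * n_draws)],
            "p95": draws[int(0.95 * n_draws)]}

# Toy example: ARI as mean TRL rescaled to [0, 1] (a made-up stand-in for f).
random.seed(0)
toy_ari = lambda trls: statistics.mean(trls) / 9
print(sensitivity_ari([9, 4, 7, 0, 6], 3, toy_ari))
```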
Comparing ARI estimates in the Main Text to ARI estimates in the sensitivity analysis (Table S3 below,
column Mean), we note that the sensitivity analysis produces ARI estimates that are higher than those given
in the Main Text, e.g. ARI is 0.49 for physicists in the sensitivity analysis, and 0.44 in the Main Text (mean
sensitivity vs mean Main Text). This suggests that the approach in the Main Text is more conservative, i.e.
generates lower ARI values. The ARI estimates from the sensitivity analysis tend to be about 10-15 percent
larger than the ARI estimates with the method described in the Main Text (Physicists: 0.44 in the Main
Text, and 0.49 in the sensitivity analysis; Electrical Engineering Technicians: 0.61 in the Main Text vs 0.74
in the sensitivity analysis). Nonetheless, ARI estimates from the sensitivity analysis are similar to those in
the Main Text in relative terms: the ranks of occupations according to their ARI values remain mostly
unchanged, and the ARI estimates in the sensitivity analysis are all within the bounds specified by the
low-automation and high-automation ARI scenarios documented in the Main Text.
Occupation | Main Text Rank | Main Text Mean | low-automation | high-automation | Sensitivity Rank | Sensitivity Mean | p5 | p95
Physicists | 1 | 0.44 | 0.20 | 0.67 | 1 | 0.49 | 0.43 | 0.55
Robotics Engineers | 122 | 0.55 | 0.31 | 0.80 | 78 | 0.64 | 0.59 | 0.69
Economists | 203 | 0.57 | 0.31 | 0.83 | 207 | 0.68 | 0.61 | 0.74
Electrical Engineering Technicians | 458 | 0.61 | 0.38 | 0.85 | 462 | 0.74 | 0.69 | 0.78
Slaughterers and Meat Packers | 967 | 0.78 | 0.57 | 0.99 | 967 | 0.95 | 0.92 | 0.98

Table S3. Sensitivity Analysis. ARI from the Main Text (adopting the low-automation/high-automation
scenario approach), compared against ARI as estimated by the sensitivity analysis (adopting the Missing
at Random strategy). Mean is the mean prediction, p5 is the 5th percentile, p95 is the 95th percentile.
Figure S1: Sensitivity analysis for ARI estimates. The figure plots ARI estimates of all occupations
computed with the sensitivity analysis (assuming data are missing at random) against ARI estimates
computed with the method described in the Main Text (average of low-automation and high-automation
scenarios). Each data point refers to one occupation.
Despite small differences in the ARI estimates computed with the two methods (Figure S1), we find a very
close agreement between the two approaches in terms of their correlation (simple correlation coefficient =
0.9913, Spearman rank-order correlation = 0.9915). This suggests that the two approaches capture similar
patterns in the underlying data.
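Both agreement measures can be reproduced with a few lines of pure Python (a generic sketch of ours; the five ARI values below are the Table S3 examples and serve only as illustration, not as the full occupation-level data):

```python
from statistics import mean

def pearson(x, y):
    """Simple (Pearson) correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation = Pearson correlation of the ranks
    (this simple sketch assumes no ties)."""
    def rank(v):
        s = sorted(v)
        return [s.index(e) for e in v]
    return pearson(rank(x), rank(y))

# Illustrative ARI estimates for five occupations under the two methods:
main_text   = [0.44, 0.55, 0.57, 0.61, 0.78]   # Table S3, Main Text means
sensitivity = [0.49, 0.64, 0.68, 0.74, 0.95]   # Table S3, sensitivity means
print(round(pearson(main_text, sensitivity), 3))   # -> 0.998
print(round(spearman(main_text, sensitivity), 3))  # -> 1.0, identical ranking
```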
Validity check
Our first external validation exercise concerns whether ARI predicts effects on the labor market. This
question is hard to address because the largest effects of automation are probably not yet fully realized,
and still to come. However, the existing literature has addressed in more detail the effect of computerization
on the labor market. Computerization refers to innovations in industrial production that already happened in
the last decades of the last century. A notable contribution to this literature19 argues that the effect of
computerization differs by the task content of jobs, i.e. by whether they are routine or not, and by whether
they are manual or cognitive. This study19 predicted that Routine jobs would suffer strongly from competition
by computers, whereas Non-Routine Cognitive jobs would benefit from better computing power, and
Non-Routine Manual jobs would be neutral to computerization. Autor and Dorn15 later showed that regions of
the U.S. that specialized in routine tasks experienced both labor market polarization (job growth in
low-skilled services) and wage polarization (strong earnings growth at the tails of the distribution).
In order to check whether our ARI values comply with the analysis described by those authors, we classify
the jobs of this study according to Autor and Dorn into Non-Routine Cognitive, Non-Routine Manual, and
Routine, and compute the average ARI of each job category. According to Autor and Dorn, Routine jobs
should face a higher risk of computerization. Consistently, our analysis reveals that Routine occupations
have a higher ARI than Non-Routine Cognitive occupations (Table S4). Interestingly, our analysis also
reveals that Non-Routine Manual occupations have the same high ARI as Routine occupations, whereas
Autor et al.19 predicted that they should neither suffer nor benefit from computerization. The high ARI of
Non-Routine Manual jobs, e.g. Janitorial services or Truck driving, derives from the fact that our study
considers automation by robots rather than by computers alone, as Autor et al.19 do.
Job classification | ARI | Percentile
Non-Routine Cognitive | 0.59 | 36
Non-Routine Manual | 0.67 | 75
Routine | 0.67 | 78

Table S4: Average ARI and average percentile of ARI for Routine, Non-Routine Cognitive and Non-Routine
Manual occupations.
Technological advancement is a gradual process. While some forms of automation captured by the proposed
ARI, such as artificial intelligence, took off only recently, in the 2010s, other forms, such as industrial
robots, started to grow substantially already in the 1990s. Therefore, we conjecture that the pressure exerted
by automation on occupations with high ARI is a phenomenon that will not only be visible in the future,
but could also be observed in the recent past. If that is the case, we expect to observe that both past
employment and wages in occupations with low ARI should have been growing faster than in occupations
with high ARI.
We test this conjecture using data from the Occupation Employment Statistics (OES) of the Bureau of
Labor Statistics. The OES program (already used in our work to assess our method with respect to the US economy)
provides data on the total number of people working in a given occupation, as well as the mean, median,
10th and 90th percentile of wages, in a comparable format since 2003. In particular, we used data for 2003,
2008, 2013, and 2018 to document the long-run trends in employment and wages. We then computed the
growth rate of employment as the difference between the log employment in 2008, 2013, 2018, and the
2003 log employment for each occupation, and divided this log difference by the number of years between
2008 (or 2013, 2018) and 2003, i.e., [log(E_it)-log(E_i,2003)]/(t-2003), where E_it is the number of
workers employed in occupation i at time t. We thus obtain the annual growth rate between 2008 (or 2013,
2018) and 2003 for each occupation. We then group occupations into terciles according to ARI levels, and
provide the average annual growth rate of employment between 2008, 2013, and 2018 vs 2003.
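The annualized growth-rate formula above can be sketched directly (illustrative Python of ours; the employment counts are hypothetical, not OES data):

```python
from math import log

def annual_growth(employment, base_year=2003):
    """Annualized log growth rate: [log(E_t) - log(E_2003)] / (t - 2003)."""
    e0 = employment[base_year]
    return {t: (log(e) - log(e0)) / (t - base_year)
            for t, e in employment.items() if t != base_year}

# Hypothetical employment counts for one occupation:
emp = {2003: 100_000, 2008: 110_000, 2013: 120_000, 2018: 125_000}
for year, g in annual_growth(emp).items():
    print(year, round(g, 4))  # e.g. 2008 -> 0.0191 (about 1.9% per year)
```

The same computation applies unchanged to wages by substituting the mean hourly wage W_it for the employment count E_it.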
Figure S2: Employment growth by ARI. Average annual growth rate of employment between 2008, 2013, or 2018
and 2003 (difference between log occupation employment in 2008, 2013, or 2018 and log employment in 2003,
divided by the number of years intervening between 2008, 2013, or 2018 and 2003). Capped spikes provide the 95%
confidence interval for the average annual growth rate.
Employment growth between 2003 and 2008, as well as between 2003 and 2013, and 2003 and 2018, is
positive for occupations with low ARI (Figure S2). For occupations with medium ARI, employment
expands between 2003 and 2008, but remains approximately at the same level in 2013 and 2018 with respect
to 2003. Employment substantially drops in occupations with high ARI, and the decline is especially strong
in 2013 and 2018 compared to 2003.
In summary, occupations with high ARI on average experienced employment decline, while occupations
with low ARI experienced strong employment growth. This pattern is consistent with the hypothesis of
automation exerting strong pressure on jobs that demand skills and abilities that can be replaced by
machines.
For each occupation, we also analysed the annual growth rate of the mean hourly wage. As noted
previously, we are interested in the evolution of wage levels in 2008, 2013, and 2018 relative to 2003. As
for the employment growth, we measure the annual growth rate of wages as the differences between log
wage in 2008, 2013, or 2018 and log wages in 2003, divided by the number of intervening years, i.e.,
[log(W_it)-log(W_i,2003)]/(t-2003), where W_it is the mean hourly wage in occupation i at
time t.
Figure S3: Wage growth by ARI. Average annual growth rate of the hourly wage in 2008, 2013, or 2018
compared to 2003. Capped spikes provide the 95% confidence interval for the average log difference.
Wage levels substantially increased over time, but the evolution of average wages is different for low ARI,
medium ARI, and high ARI occupations (Figure S3). Wages increased more for occupations with low ARI
compared to occupations with high ARI over the periods 2003 to 2008 and 2003 to 2013. In contrast, wage
growth became more similar across occupation groups in the period 2003 to 2018.
The study by Acemoglu and Restrepo4 indicated slower growth of hourly wages for workers in industries
with high robot penetration over the period 1990 to 2007, consistent with our results for the periods
2003 to 2008 and 2003 to 2013. However, our finding for the recent past, where wage dynamics do not seem to
be correlated with ARI levels, reveals an interesting phenomenon that deserves further research.
To check that the transitions recommended by our Resilience Index correspond to frequent job transitions
observed in the real world, which would confirm their feasibility, we performed the following analysis
based on recent data on occupational transitions published by Schubert, Stansbury and Taska24 (2020).
Occupational transitions were calculated from 16 million resumés of U.S. workers, obtained and parsed by
Burning Glass Technologies (a labor market analytics company) over the period 2002–2016. Specifically,
in the Burning Glass Technologies (BGT) data, approximately 217 thousand transitions are presented in a
transition matrix. Occupations are defined on the 6-digit Standard Occupational Classification (SOC code).
The published data provide information on the share of workers who switch from an occupation A to
another occupation B.
We augmented these data with the employment share of each occupation in 2003 from the OES data. We denote
the share of workers in occupation A that transitioned to occupation B, given an average retraining
effort (ARE) D, in the period between 2002 and 2016, as s(AB|D).
Then, we calculate the correlations using s(A)=N(A)/N, i.e., the number of workers employed in occupation
A, N(A), divided by the total number of workers in the economy, N; s(A) is the employment
share of occupation A from the OES dataset. We combine this information with the BGT transition shares and
get s'(AB|D) = s(AB|D)*s(A), which reflects the share of all workers transiting from A to B at a given
distance D. This weighted analysis, shown in Figure S4, indicates that the average number of workers
making an observed transition from occupation A to occupation B is 488, while the average number of
workers who would undertake a transition recommended by our method is 417. The similarity of the two
numbers (488 vs 417) indicates that the recommended transitions are feasible because they are close to
those observed in the past. Similarly, the correlation between observed and recommended transitions
weighted by the occupation size is positive (0.3964), with the main difference being that our method
emphasizes transitions with smaller average retraining effort than those actually made by people. This result reinforces
our belief that our method could be used to make transitions more efficient in terms of retraining effort.
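The weighting step can be sketched as follows (the occupation names, worker counts, and transition shares below are invented for illustration; the actual analysis combines the BGT and OES data):

```python
# Hypothetical OES-style worker counts and BGT-style transition shares.
N = {"A": 5000, "B": 3000, "C": 2000}   # workers per occupation
s_AB = {("A", "B"): 0.10,               # share of A's workers moving to B
        ("A", "C"): 0.05,
        ("B", "C"): 0.20}

total = sum(N.values())
s = {occ: n / total for occ, n in N.items()}  # employment share s(A) = N(A)/N

# s'(AB|D) = s(AB|D) * s(A): share of ALL workers transiting from A to B.
s_prime = {(a, b): share * s[a] for (a, b), share in s_AB.items()}

# Convert a weighted share back into a worker count, e.g. for A -> B.
workers_A_to_B = s_prime[("A", "B")] * total
```

Summing worker counts like `workers_A_to_B` within ARE bins gives the weighted transition profile compared against the recommended transitions in Figure S4.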
The share of persons that transition between occupations with small retraining efforts, as proxied by the
ARE values, is high and decreases for higher retraining efforts. Transitions between jobs that have high
ARE values are rare, presumably because of educational or ability barriers. The transitions recommended
by our Resilience Index (RI) demand little retraining effort and are feasible, as demonstrated by the actual job
transitions reported by these new data.
Figure S4: Actual and recommended transition. Share of observed occupational transitions (dashed
line) and of recommended transitions (solid line) ordered by average retraining distance and weighted
by the number of workers in the occupation of origin.
Sensitivity analysis for Career Moves Simulation
In the Main Text of the article we define RI = (ARI_B - ARI_A) / ARE. Note that we divide by ARE to the
power of one. In a more general framework, we could write the resilience index as a function of a parameter
θ and write RI_θ = (ARI_B - ARI_A) / (ARE)^θ. The parameter θ can be set to give more (θ > 1) or less
(θ < 1) weight to job distances and retraining efforts.
In addition to the data for θ = 1 that are shown in Table 2 of the Main Text, we performed two new
simulations with θ = 2 and θ = 1/2.
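A minimal sketch of the generalized index (the function name and the candidate-move numbers are hypothetical, not taken from the paper) shows how θ changes the ranking of candidate transitions:

```python
def resilience_index(ari_gain, are, theta=1.0):
    # RI_theta = (ARI improvement) / ARE**theta: larger theta makes
    # retraining effort weigh more heavily in the ranking of moves.
    return ari_gain / are ** theta

# Two hypothetical moves: a close one with a small ARI improvement,
# and a distant one with a larger improvement.
close = dict(ari_gain=0.04, are=0.1)
far = dict(ari_gain=0.12, are=0.4)

best_at_1 = max((close, far), key=lambda m: resilience_index(**m, theta=1.0))
best_at_half = max((close, far), key=lambda m: resilience_index(**m, theta=0.5))
```

With θ = 1 the close, low-effort move ranks first; with θ = 1/2 the distant move with the larger ARI improvement wins, mirroring the pattern across the three panels of Table S5.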
Occupation group | Average ARI (average percentile) | Average ARI of the best RI suggestion (average percentile) | Average ARI change (average percentile change) | Average human abilities retraining effort | Average human knowledge retraining effort

Main Text (RI_1):
High   | 0.694 (85.78) | 0.626 (54.52) | 0.069 (31.26) | 0.116 | 0.071
Medium | 0.614 (49.73) | 0.549 (20.60) | 0.065 (29.13) | 0.128 | 0.120
Low    | 0.563 (19.59) | 0.479 (4.07)  | 0.085 (15.52) | 0.196 | 0.171

High Retraining Effort (RI_2):
High   | 0.694 (86) | 0.659 (71) | 0.035 (14) | 0.068 | 0.043
Medium | 0.614 (50) | 0.588 (34) | 0.026 (16) | 0.043 | 0.079
Low    | 0.563 (20) | 0.539 (10) | 0.025 (10) | 0.056 | 0.101

Low Retraining Effort (RI_(1/2)):
High   | 0.690 (86) | 0.440 (4) | 0.250 (81) | 0.470 | 0.360
Medium | 0.610 (50) | 0.440 (1) | 0.180 (49) | 0.380 | 0.290
Low    | 0.560 (20) | 0.440 (1) | 0.130 (19) | 0.290 | 0.230
Table S5: Sensitivity analysis of career moves for different weighting of the average retraining effort (this
is an augmented version of Table 2 in the Main Text).
Giving more weight to retraining efforts leads to career transitions that are closer to the original occupation.
Consequently, career moves generate smaller improvements in automation risk when retraining effort is
high; for example, the improvement in risk is 0.061, or 25 percentiles, for High Risk workers when θ = 1
but it is only 0.035, or 14 percentiles, when θ = 2. The opposite happens for simulations where the
importance of the retraining effort is low (θ = 1/2).
In order to analyse how our ARI relates to actual transitions happening in the economy, we calculated the
number of workers who transition away from, and towards, each occupation, based on the Burning Glass
Technologies data on transition shares and OES data on the number of workers. We then grouped these
transitions by ARI terciles. Overall, as shown in Table S6, the flow from higher ARI to lower ARI
(light-gray cells) is larger than the flow from lower ARI to higher ARI (dark-gray cells), suggesting that workers
tend to transition to occupations with lower risk.
Number of workers (x100k)

Origin \ Destination     | Low ARI (1st tercile) | Medium ARI (2nd tercile) | High ARI (3rd tercile)
Low ARI Origin (1st tercile)    | 138.5605 | 49.62519 | 26.57739
Medium ARI Origin (2nd tercile) | 132.6353 | 87.43871 | 71.03501
High ARI Origin (3rd tercile)   | 169.7817 | 172.9307 | 219.3909
Table S6: Transitions grouped by ARI terciles. Aggregate number of workers estimated to change from an
occupation with an origin ARI tercile (rows) to an occupation with a destination ARI tercile (columns)
anywhere between 2002 and 2016. Estimates are based on combined BGT resume data for 2002 and 2016
and OES 2003 data. Light shading indicates worker flows from higher ARI to lower ARI occupations, while
dark shading identifies worker flows from lower ARI to higher ARI occupations.
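The aggregation behind Table S6 can be sketched as follows (the tercile labels and flow counts are made up; the published table is built from the BGT/OES estimates):

```python
from collections import defaultdict

# Hypothetical occupation-to-tercile map and worker flows between occupations.
tercile = {"A": "low", "B": "medium", "C": "high"}
flows = {("C", "A"): 120, ("C", "B"): 90, ("B", "A"): 60, ("A", "C"): 30}

# Aggregate occupation-level flows into a tercile-by-tercile matrix.
agg = defaultdict(float)
for (origin, dest), workers in flows.items():
    agg[(tercile[origin], tercile[dest])] += workers

# Compare downward flows (toward lower ARI) with upward flows.
rank = {"low": 0, "medium": 1, "high": 2}
down = sum(w for (o, d), w in agg.items() if rank[o] > rank[d])
up = sum(w for (o, d), w in agg.items() if rank[o] < rank[d])
```

In this toy example, as in the real data, the downward flow exceeds the upward flow, which is the pattern the shading in Table S6 highlights.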