ROBOTICS
ENGINEERING
Basic to Advanced Concepts of
Robotics Engineering
Prabhu TL
Nestfame Creations Pvt. Ltd.
[Robotics Engineering]
Copyright © [2021] Prabhu TL. All rights reserved.
Publisher - Nestfame Creations Pvt. Ltd.
Publisher Website - www.nestfamecreations.com
The contents of this book may not be reproduced, duplicated, or transmitted
without direct written permission from the Author.
Under no circumstances will any legal responsibility or blame be held against
the publisher for any reparation, damages, or monetary loss due to the
information herein, either directly or indirectly.
Author - Prabhu TL
Indexer - Akshai Kumar RY
Legal Notice:
This book is copyright protected. It is for personal use only. You may not
amend, distribute, sell, quote, or paraphrase any part of the content within this
book without the consent of the author.
Disclaimer Notice:
Please note that the information contained within this document is for
educational and entertainment purposes only. Every attempt has been made to
provide accurate, up-to-date, reliable, and complete information. No warranties
of any kind are expressed or implied. Please consult a licensed professional
before attempting any techniques outlined in this book.
By reading this document, the reader agrees that under no circumstances is
the author responsible for any losses, direct or indirect, which are incurred
as a result of the use of the information contained within this document,
including, but not limited to, errors, omissions, or inaccuracies.
PREFACE
Robotics is an area of engineering and science that encompasses electronics,
mechanical engineering, and computer science, among other disciplines. This
branch is concerned with the design, building, and use of robots, as well as
sensory feedback and data processing. In the coming years, these are some of
the technologies that will replace humans and human activities. Robots are
designed to be utilised for a variety of tasks; today they are even employed in
sensitive environments, for jobs such as bomb detection and deactivation.
Robots can take on any shape, although many of them have a human-like
appearance, and those that do are expected to move, speak, and think like
humans.
Robotics is the engineering discipline that deals with the conception, design,
operation, and manufacture of robots. Isaac Asimov, the science fiction
novelist, is credited with first naming robotics in a short story written in the
1940s. In that story, Asimov proposed three principles for guiding the
behaviour of such robots, which later became known as Asimov's Three Laws
of Robotics. The three laws are as follows:
A robot may never harm a human being.
A robot must obey human commands, except where doing so would break the
first law.
A robot must protect itself, as long as doing so does not break the other laws.
Characteristics
The following are some of the properties of robots:
Physical body: Robots have a physical body that they can move around in.
The body's structure holds them in place and their mechanical components
move them. Without a physical body, a robot would be nothing more than a
software program.
Brain: The brain of a robot is also called the on-board control unit. It receives
data as input and then sends commands as output. Without this control unit,
the robot would just be a remote-controlled machine.
Sensors: Sensors are used in robots to collect data from the outside world and
deliver it to the brain. In essence, these sensors contain circuits that produce a
voltage in response to a physical quantity.
Actuators: Actuators are the parts that move the robot and its members.
Motors, pumps, and compressors are examples of actuators. The brain tells
these actuators when and how to respond or move.
Program: Robots can only work or respond to instructions given to them in
the form of a program. These programs tell the brain when to perform certain
actions, such as moving or making sounds, and how to make decisions based
on sensor data.
Behaviour: A robot's behaviour is determined by the program created for it.
Once the robot starts moving, it is easy to identify what kind of program it is
running.
The Different Types of Robots
The following are some examples of robots:
Articulated: This robot's distinguishing feature is its rotational joints, which
range in number from two to ten or more. The rotary joint is attached to the
arm, and each joint is known as an axis, which allows for a variety of
movements.
Cartesian: Cartesian robots are also referred to as gantry robots. Their three
joints operate along the Cartesian coordinate axes, i.e. x, y, and z. Wrists are
fitted to these robots to give rotary mobility.
Cylindrical: Cylindrical robots contain at least one rotary and one prismatic
joint for connecting the links. Rotary joints rotate along an axis, while
prismatic joints provide linear motion.
Spherical: Spherical robots are sometimes known as polar robots. The arm is
connected to the base by a twisting joint and has two rotary joints and one
linear joint.
SCARA: These robots are most commonly used for assembly work. The arm
is cylindrical in shape and features two parallel joints that provide compliance
in a single plane.
Delta: These robots have a spider-like structure to them. They're made up of
joint parallelograms joined by a shared basis. In a dome-shaped work area,
the parallelogram moves. They're mostly used in the food and electronics
industries.
Robots' scope and limitations: Robots are advanced machines that are trained
to make decisions on their own and are utilised to perform advanced tasks.
When designing a robot, the most crucial considerations are what function the
robot will perform and what the robot's constraints are. Each robot has a
fundamental level of complexity, and each level has a scope that restricts the
functions that may be performed. The complexity of basic robots is
determined by the number of limbs, actuators, and sensors used, whereas the
complexity of sophisticated robots is determined by the number of
microprocessors and microcontrollers used. Each component added extends
the scope of a robot's functionality.
The following are some of the benefits of using robots:
● They have access to knowledge that humans do not.
● They are capable of completing jobs accurately, efficiently, and quickly.
● Because most robots are fully automated, they can execute a variety of
jobs without the need for human intervention.
● Robots are utilised in various factories to manufacture goods such as
planes and car parts.
● They can be employed for mining and dispatched to the depths of the
Earth.
The following are some of the downsides of using robots:
● They require a power supply to keep operating.
● Factory workers may lose their jobs if robots replace them.
● They require a lot of upkeep to keep them working all day, and
maintaining robots can be costly.
● They can store a lot of information, but they are not as efficient as our
brains.
● Robots, as we all know, work according to the program installed in
them; they cannot do anything other than run the program they have
been given.
● The most serious downside is that if a robot's program falls into the
wrong hands, it can inflict massive harm.
Applications: Different types of robots are capable of doing various jobs.
Many robots, for example, are designed for assembly work and are referred
to as Assembly Robots since they are not suitable for any other task.
Similarly, several suppliers provide seam welding robots with their welding
materials, and these robots are referred to as Welding Robots. Many robots,
on the other hand, are built for heavy-duty labour and are referred to as
Heavy Duty Robots.
The following are some examples of applications:
● Caterpillar has plans to build remote-controlled machinery and heavy
robots by 2021, according to the company.
● Herding is another task that a robot can perform.
● In manufacturing, robots are increasingly being employed in place of
people, with robots accounting for more than half of all work in the
auto sector.
● Many robots are built for military use.
● Robots have been employed to clean up toxic waste, industrial waste,
and similar hazards.
● Robots that work in agriculture.
● Robots for the home.
● Nanorobots, which are tiny robots.
● Swarm robots.
TABLE OF CONTENTS
FUNDAMENTALS
1. Robotics: Introduction
2. Robotics: Scope and Limitations of Robots
3. Classification of Robotic Systems
4. Current Uses of Robots
5. Components Of Robots
6. What are Industrial Robots?
7. Benefits Of Robots
8. Position and Orientation of the Objects in Robotic Automation
9. The Kinematics of Manipulators – Forward and Inverse
10. Kinematics of Manipulators: Velocity Analysis
11. How Does a Robot's Voice Recognition System Work?
12. Light Sensors in Robots
13. Vision System in Robots
14. Robots in Engineering and Manufacturing
15. Robotics: Construction of a Robot
16. Robotics: Structure of Industrial Robots or Manipulators: Types of Base Bodies – I
17. Robotics: Structure of Industrial Robots or Manipulators: Types of Base Bodies – II
18. Manipulation Robotic System: Manual Type Robots
19. The Required Features of a Multimeter for Robot Building
20. Measuring Resistance of Resistors
21. The Optional Features of Multimeters for Robot Building
22. Variable Resistors: Identifying Potentiometers
23. The LM393 Voltage Comparator Chip
24. How to Test LED Lamps
25. Basic LED Attributes
26. Articulated Robots – SCARA and PUMA
27. Base Bodies of Robots: Articulated Robot Base
28. Base Bodies of Robots: Spherical Base Robot – Control and Application
29. Manipulation Robotic System: Tele-control or Remotely Operated Robot
30. Spherical Base Robot: Construction and Workspace
31. Base Bodies of Robots: Cylindrical Base Robot
32. Introduction to Robotics Technology
33. Advantages of Robotics in Engineering
34. Medical Robotics
35. Dealing with Decommissioned Industrial Robots
36. PID Loop Tuning Methods for Robotics
37. Honda Asimo – How Long to Robots in the Home?
38. The Brains and Body of a Robot
39. The Future of Robotics
40. Manipulation Robotic Systems: Automatic Type Robot
41. Recommended Additional Features for Multimeters in Robot Building
42. Identifying and Purchasing Resistors
43. Self-Learning Control System Concepts Simplified
44. Applying Artificial Intelligence and Machine Learning in Robotics
45. A safe, wearable soft sensor
46. How robots are helping doctors save lives in the Canadian North
47. Robocar year in review for 2018
48. Robots with sticky feet can climb up, down, and all around
49. Growing bio-inspired shapes with a 300-robot swarm
50. Drones and satellite imaging to make forest protection pay
51. 3Q: Aleksander Madry on building trustworthy artificial intelligence
52. What are some of the basic sensors used in Robotics?
53. How to make your first Robot?
54. What is Mechatronics and Robotics?
55. Underwater Photography and Videography
56. Lithium, Lithium-ion, LiPo & LiFePO4 Battery Safety and Fire Handling
57. CAN and CCP/XCP for Error Detection and Calibration
58. LIDAR vs RADAR: A Detailed Comparison
59. Motor Control Systems – Bode Plot & Stability
60. Perception in Smoke, Dust or Fog
61. How to Mount an External Removable Hard Drive on a Robot
62. Tethers: Should Your Robot Have One?
63. Roboticists' EDC (Every Day Carry)
64. Automation
65. Types of robot
66. Required studies in robotics
67. Technologies Of A Robot
68. Transmission system (Mechanics)
69. Sensors
70. Electronics
71. Servo Motor Design
72. How to define a suitable servo motor speed
73. Controlling inertia
74. Main types of an industrial robot
75. Industrial Manipulators And Its Kinematics
76. Types of robotic chains
77. Defining work space area
78. Trajectory Definition
79. 3R planar manipulator
80. Position, Orientation, Frames
81. Transformation
82. Translation operators
83. Trajectory Planning In Robotics
84. Cubic polynomials
85. Trajectory Planning By Using Robot Studio
86. Moving robot joint space
87. Target teach method
88. Create program using virtual flex pendant part 1
89. Create program using virtual flex pendant part 2
90. Toto the Robot
91. Toto's Mapping Behaviours
92. Go, Team! Group Robotics
93. Types of Groups and Teams
94. Where To Next? The Future of Robotics
95. Manipulation and Dexterity
96. Actuators and Drive Systems
97. Dynamics of Single-Axis Drive Systems
98. PWM switching characteristics
99. Optical Shaft Encoders
100. Brushless DC Motors
101. Robot Mechanisms
102. Parallel Linkages
103. Planar Kinematics
104. Inverse Kinematics of Planar Mechanisms
105. Kinematics of Parallel Link Mechanisms
106. Differential Motion
107. Properties of the Jacobian
108. Inverse Kinematics of Differential Motion
109. Statics
110. Duality of Differential Kinematics and Statics
111. Dynamics
112. Physical Interpretation of the Dynamic Equations
113. Lagrangian Formulation of Robot Dynamics
114. Inertia Matrix
115. Force and Compliance Controls
116. Architecture of Hybrid Position/Force Control System
117. Compliance Control
118. Braitenberg Vehicles
119. Hexapod Walker
120. Speech Recognition
121. Speech Recognition Circuit
122. How the circuit works
123. Robotic Arm
AUTONOMOUS VEHICLE
1. Application of Sampling-Based Motion Planning Algorithms in Autonomous Vehicle Navigation
2. Robust Accelerating Control for Consistent Node Dynamics in a Platoon of CAVs
3. The Reliability of Autonomous Vehicles Under Collisions
4. ADAS-Assisted Driver Behaviour in Near Missing Car-Pedestrian Accidents
5. Flight Control Development and Test for an Unconventional VTOL UAV
6. Design of Large Diameter Mine Countermeasure Hybrid Power Unmanned Underwater Vehicle
KINEMATICS
1. Kinematics
2. Path Planning in the Local-Level Frame for Small Unmanned Aircraft Systems
3. How to Expand the Workspace of Parallel Robots
4. Expanding the rotational workspace
5. Kinematic and Biodynamic Model of the Long Jump Technique
6. Kinematic Model for Project Scheduling with Constrained Resources Under Uncertainties
7. A New Methodology for Kinematic Parameter Identification in Laser Trackers
8. Optimization of Single-Sided Lapping Kinematics Based on Statistical Analysis of Abrasive Particles Trajectories
9. A Random Multi-Trajectory Generation Method for Online Emergency Threat Management (Analysis and Application in Path Planning Algorithm)
MOTION TRACKING AND GESTURE RECOGNITION
1. Motion Tracking System in Surgical Training
2. Layered Path Planning with Human Motion Detection for Autonomous Robots
3. Audio-Visual Speaker Tracking
4. Motion Tracking and Potentially Dangerous Situations Recognition in Complex Environment
5. Human Action Recognition with RGB-D Sensors
6. Gesture Recognition by Using Depth Data: Comparison of Different Methodologies
7. Gait Recognition
NEW TRENDS IN 3D PRINTING
1. Advanced Design for Additive Manufacturing: 3D Slicing and 2D Path Planning
2. Color 3D Printing: Theory, Method, and Application
3. Colour Image Reproduction for 3D Printing Facial Prostheses
4. 3D-Printed Models Applied in Medical Research Studies
5. 3D Printing Cardiovascular Anatomy: A Single-Centre Experience
6. Regenerative Repair of Bone Defects with Osteoinductive Hydroxyapatite Fabricated to Match the Defect and Implanted with CAD, CAM, and Computer-Assisted Surgery Systems
7. Metal Powder Additive Manufacturing
8. Laser-Assisted 3D Printing of Functional Graded Structures from Polymer Covered Nanocomposites: A Self-Review
ROBOTICS OPERATING IN HAZARDOUS ENVIRONMENT
1. Robots for Humanitarian Demining
2. UAV for Landmine Detection Using SDR-Based GPR Technology
3. Towards Advanced Robotic Manipulations for Nuclear Decommissioning
4. Robot Protection in the Hazardous Environments
5. Safety Assessment Strategy for Collaborative Robot Installations
SEARCH AND RESCUE ROBOTICS
1. Introduction to the Use of Robotic Tools for Search and Rescue
2. User-Centered Design
3. Unmanned Aerial Systems
4. Unmanned Ground Robots for Rescue Tasks
5. Unmanned Maritime Systems for Search and Rescue
6. Tactical Communications for Cooperative SAR Robot Missions
7. ICARUS Training and Support System
8. Operational Validation of Search and Rescue Robots
9. Command and Control Systems for Search and Rescue Robots
Robotics: Introduction
Robots are not just machines; they are many steps ahead of a typical machine.
Like ordinary machines, robots can perform difficult jobs easily, but the
advance is that they can do so on their own. Once programmed, robots can
perform the required tasks repeatedly in exactly the same way.
Introduction
A robot can be defined as an electro-mechanical system that follows a set of
instructions to do certain tasks; the word robot itself literally means "slave."
In industries, robots are referred to as industrial robots, and in science fiction
films, they are referred to as humanoids. This and subsequent
articles will provide an overview of robotics.
Robotics and Automation
The first thing that comes to mind when we think of robotics is automation.
Except for initial programming and instruction set, robots are known to
accomplish activities automatically without much human participation. A
milk packaging machine was the first machine I saw as a child when we went
on a tour to a milk processing facility. It was the closest thing to a robot I'd
ever seen. A roll of packing material ran through the machine, and half a litre
of milk fell onto the roll each time, which was then sealed and cut by a
machine mechanism.
This machine can be a simple example of a very basic robot. It performs the
specified sequence of operations repeatedly with the same accuracy. It was
programmed and provided with the required material and then started.
Advancements in Robotics
The more advanced versions of robots seen nowadays can perform
operations adaptively, that is, changing the dimensions and other settings
according to the requirements. One such advanced example of an adaptive
robot is a stitching machine which can read the different dimensions of dress
size on the personal card of a person and then cut the desired dress material
and stitch it to the size fitting to the person.
From a broad view, robotics is the continuous endeavour of robotics
engineers to make machines capable of performing tasks as delicately as
humans can, as well as the complicated, tough, and repetitive tasks that
humans would prefer not to do. The advancements in the field of robotics are
made possible by the use of microprocessors and microcontrollers,
intelligently combined with servo motors, sensors, and actuators.
Robotics: Future Scope
Now that the scope of robotics has broadened, robots that can only work on
preprogrammed instructions, regardless of the environment in which they
operate, will quickly become obsolete. Robots that are now being created can
perceive their environment, act in accordance with what they see, and make
decisions on their own about how to respond. The day when robots can sense
and respond to emotions, let alone express them, is not far off.
Robotics: Scope and Limitations of Robots
Robotics experts can create robots that can perform a wide range of jobs,
from delicate and precise operations like fitting microscopic pieces of
watches to hazardous tasks like fueling nuclear reactor chambers. Although
robots are considered super-machines, they still have limitations.
Despite significant advances in the field of robotics and ongoing efforts to
make robots more intelligent so that they can match, if not exceed, human
capabilities, robots developed to this point are still nowhere near human
beings from a scientific and logical standpoint.
Scope and Limitations of Robots
In basic robotics we design machines to do specified tasks, and in the
advanced version robots are designed to be adaptive, that is, to respond
according to a changing environment, and even autonomous, that is, capable
of making decisions on their own. While designing a robot, the most
important thing to be taken into consideration is, obviously, the function to be
performed. Here comes into play the discussion about the scope of the robot
and robotics. Robots have basic levels of complexity, and each level has its
scope for performing the requisite function.
The level of complexity of a robot is defined by the members used in its
limbs, the number of limbs, and the number of actuators and sensors used; for
advanced robots it also depends on the type and number of microprocessors
and microcontrollers used. Each added component extends the scope of
functionality of the robot. Every added joint increases the degrees of freedom
in which the robot can work, and the quality of the microprocessors and
microcontrollers enhances the accuracy and effectiveness with which it can
work.
Example
To understand the scope for any robot of given complexity, we will discuss it
with a very simple example. Consider a robot comprising one member joined
to the base by a revolute joint. A servo motor connected to that joint is
controlled by a microcontroller programmed to move the member through a
certain range of rotation. This is the most basic robot I can think of.
Scope: The motion of this robotic arm is restricted to a circular path. Any task
that can be done by motion along a circular arc can be performed by this
robot. Say we want to draw arcs on sheets of paper, or cut them into circular
pieces: that can be achieved by fitting a pencil or a cutter to the end of this
robotic arm, as the sketch below illustrates.
Limitation: Any point on this robotic arm can only move along a circular
path. Any task involving motion other than circular motion cannot be
performed by such a robot.
The scope of robots can be extended, and their limitations reduced, by adding
to the complexity of the robots. One can imagine the possibilities of motion
that arise by simply adding one more limb to the existing one through a
revolute joint and a servo motor. This is a very basic example; in fact,
robotics is a very vibrant field with practically infinite scope and
ever-reducing limitations.
Classification of Robotic Systems
Robotics systems, which are used in practically all production processes, are
categorised according to their use. The key categories of this classification
will be discussed in this article.
Introduction
Robotic systems, a way of automating manufacturing processes and reducing
the manual labor attached to them, are used extensively in almost all fields
these days. However, there are still many auxiliary processes that require
manual labor.
Manual labor is now being reduced by the introduction of industrial robots
equipped with manipulators that have several degrees of freedom and a
dedicated, controlled actuator for each of them. A manipulator with a high
number of degrees of freedom is similar to a human hand in movement and
functionality. The control system of a manipulator can be adjusted according
to the application. Manipulators are generally used in industrial robots that
cater to the various needs of the application at the same time.
Classification of Robotic Systems
Robotic systems are classified into three main types on the basis of
application. They are:
○ Manipulation robotic system
○ Mobile robotic system
○ Data acquisition and control robotic system
Manipulation Robotic system
The Manipulation Robotic system is the most extensively used robotic
system that is found mainly in manufacturing industries. Manipulation
robotic system comes in several forms, depending mainly on its application.
Mobile robotic system
A Mobile robotic system is usually an automated platform that carries goods
from one place to another. The motion of the system can also be controlled
autonomously and might have a pre-programmed destination from where the
system might load or unload automatically.
Mobile robotic systems are also used mainly for industrial purposes, such as
carrying tools and spare parts to storage. Another application of mobile
robotic systems is on farms, where they can be used for pulling equipment to
plough the fields or for transporting agricultural products. Mobile robots are
also used by petroleum and gas production companies for offshore oil and
gas exploration, and by salvage companies searching for sunken ships. The
mobility of these robots can take the form of flying, swimming, or running on
land.
Data acquisition and Control Robotic System
The data acquisition and control robotic system is used for acquiring,
processing, and transmitting important data used for generating various
signals. Generally meant for activities that require little human participation,
a control robotic system generates signals that can be used for controlling
other robots. Data acquisition and control robotic systems are also used with
CAD systems in engineering and business processes. Many mobile robotic
systems, especially the unmanned craft used for exploration of the sea bed,
are equipped with a data acquisition and control robotic system for procuring
important information and sending it back to shore in the form of signals.
Current Uses of Robots
Though robots have limitations due to the way they are built, humans are able
to use them to their full potential depending on the application. Robotics has
come a long way, and new discoveries and improvements are always being
made. Here are a few examples of applications in which robots play a vital
part.
What can robots do?
Robotics is a highly advanced technology that strives to develop robots for
various applications. Let us have a look at robots already invented and being
used in industry.
1) Industrial robots: Industrial robots are electronically controlled, both
programmable and reprogrammable to carry out certain tasks with high
precision and accuracy. Robots have been extensively used in highly
advanced manufacturing facilities or high volume assembly lines for a long
time. They are efficient and produce high yields or output. The most common
use of robots is in the automobile industry where they are used for various
applications like welding, painting, etc. The robots can carry out tasks which
are very difficult or hazardous for human beings.
Because robots execute an exact copy of the same strategy every time, issues
can easily be identified and solutions implemented. With proper maintenance
procedures and schedules, machine wear-out or breakdowns can easily be
predicted, resolved, and controlled. Overall, manufacturing plants run by
industrial robots can run smoothly without much trouble and with less human
interaction.
2) Aerospace robots: Another application of robots is in aerospace, for outer
space exploration. Aerospace robots, or unmanned robotic spacecraft, play a
key role in outer space probes. Scientists can explore outer space without
putting themselves in the great danger they would face if they went
themselves. With controlled robots, the same results can be achieved safely,
and the risk of losing a human life in outer space is greatly reduced.
3) Healthcare delivery: A highly promising advancement in healthcare is the
use of robots in robotic surgery. Due to technological advancement, this is
possible even if the patient is located in a remote area; the possibility defies
distance. With the proper tools and set-up in place, proper healthcare can be
delivered to a patient even in remote areas without the corresponding risks
involved.
4) Robots resembling human beings and robotic pets: At home, human-like
robots and robotic pets have always been considered friends and companions.
They can play musical instruments, sing, dance, etc., only to please us
humans for our own comfort.
5) Military robots: Possibly the worst part of robotic application is military
use, as it may curtail human life instead of upholding it. You can conquer or
be conquered with a robot.
With man always in a conquest for power, we hope that robots will not be
used to justify an end result in anyone's favor. Our only hope is that man,
along with the advancement of technology, always promotes the well-being
of mankind.
Components Of Robots
Robots consist of a variety of high tech components which are designed for
accuracy, speed and convenience of operation. Some are for general purpose
usage, while some are custom made to handle specific parts. The main
components are the main body, end effectors, and grippers.
Main Body
An important component of a robot is the main body which holds the
actuators and manipulators that create the activity for each axis of movement.
The manipulator carries the end effectors, which grip the objects.
Mechanisms that provide feedback on position are included to identify and
correct any difference between the commanded position and the actual
position. The intelligence of the robot is in the control element, which directs
the manipulator along the ideal route. A power supply is essential to activate
the actuators.
End Effectors
An end effector is a mechanism located on the manipulator that picks up
objects, grasps them, and manages the movement of items in accordance with
the designed parameters. End effectors are designed specifically to handle the
desired parts and to move along the intended path. They can be used as nut
drivers, accessories for welding processes, or guns for painting purposes.
Types of end effectors include:
Permanent Magnet End Effector: It consists of a permanent magnet that
moves in an aluminium cylinder. When the actuator drives the magnet
towards the front end of the cylinder, it holds ferrous parts. As the magnet is
extracted from the cylinder, the magnetic field fades, and the parts are
released. This type can be used for only ferrous parts, and has the benefit of
managing parts with asymmetrical form as well as holding a number of parts
concurrently.
Electro Magnet End Effector: It is easy to operate, and multiple end
effectors can be positioned with the robot arm to perform multiple operations.
Even with minor disturbance in the location of parts, or alterations in
configuration and dimensions, these end effectors can function effectively.
These types can be used for parts with uneven exterior shape, such as coarse
ferrous castings or rounded components.
Vacuum End Effector: It consists of a cup-shaped component; when it comes
into contact with a smooth part, a vacuum is created in the cup which ensures
that the part remains attached. Controls are used to generate or remove the
vacuum. This type of end effector is used for delicate parts.
Grippers
Grippers are of the following types:
Two Finger: These Grippers are used for gripping components that have
uncomplicated form, and can be utilized for inner or outer gripping. The
outer gripper is similar to thumb and index finger, and can be utilized for
holding a small part or that which is located in a densely filled assembly of
components. A tailored form consists of two parallel fingers with disposable
inserts that are designed to handle special parts.
Three Finger: These types are similar to the thumb, index finger, and middle
finger of the human hand, and are used for holding parts that are round or that
need revolving.
What are Industrial Robots?
Robots are being extensively utilized in the automotive industry, and in the
near future, because of the quality and accuracy they produce, their
application will spread to many more industries.
Selecting a Robot
A large range of robots with different components, techniques, and means of
operation have already been designed and manufactured. These are selected
according to their utility and financial considerations. A futuristic robot, with
modern sensors and appropriate software, can perform tasks efficiently,
accurately, and quickly, but will be expensive. Thus, all the relevant factors
must be considered while selecting robots for industrial applications,
including the initial expenditure and the benefits to be achieved in using the
robot.
Use Of Robots In Industry
Robots are being beneficially utilized in a vast variety of industries, some of
which are:
Loading and Unloading: Robots are extensively being utilized for the
loading and unloading of machines and parts in industries, thus substituting
for human labor and other mechanical methods. Robots possess the benefit of
duplicating the designed tasks, performing accurately, and being compatible
with the nearby equipment. Future robots are expected to completely
substitute human beings in carrying out loading work, positioning tools for
machineries, and replacing tools when necessary.
Spot Welding: Robots perform spot welding very accurately, with recurring
operations, and are extensively being used in automotive industry. They can
extend to places which normally would be difficult to reach by manual
welding.
Arc Welding: The surroundings of arc welding are unsafe for the fitness of
human beings, and achievement of quality welds is difficult by manual
operations. Therefore, since smooth movements provide superior welds, the
use of robots for such operations is growing very rapidly. Utilization of
industrial robots for arc welding is economical, and the welds are of an
excellent quality.
Painting: It is a difficult and unhealthy operation, where hazardous fumes are
released that are extremely dangerous for human beings, and in which a
danger of explosion exists in the areas of operation. Furthermore, manual
painting results are not consistent due to unpredictable speed of movement of
the components involved in this process. Thus, robots are gradually replacing
manual labor for industrial painting applications.
Investment Casting: Investment casting requires duplication, accuracy, and
uniformity in production, all of which can be achieved with employment of
industrial robots.
Integration of Parts: The integration of parts in various sub systems of
production is an important application where robots can function more
efficiently and with extra speed, thus assisting in the increase of production
rate. Presently, robots are being used for tightening of bolts and nuts, placing
of components in circuit boards, and a variety of other similar tasks. Logic
devices are used for identification and rectification of defects or
inconsistencies.
Benefits Of Robots
Robots have many advantages, and production units which do not obtain their
services will be left behind and will not be able to compete in quality,
production, and cost.
Introduction
Robots have been developed by extended modern research, and are being
used in numerous industries for achieving advantages which would not be
possible with human beings alone. Some of the advantages of using robots
are mentioned below:
Operating In Unsafe Surroundings
There are numerous industries where the surroundings are unsafe for the
employment of human labor due to the existence of hazardous environments.
Robots can be used effectively in such environments where handling of
radioactive materials is involved, such as hospitals or nuclear establishments,
where direct exposure to human beings can be dangerous for their health.
Improvement In Quality
Robots perform operations with superior exactitude, ensure uniformity of
production due to which rejections are minimized, and reduce losses.
Measurements and movements of tools being utilized are more accurate.
Thus, the quality of the product manufactured is improved manifold
compared to the performance by human beings.
Increase In Production
Robots have the ability to work continuously without pause, unlike human
labor for which breaks and vacation are essential. Thus, production is
increased by the utilization of robots in industrial applications, and
consequently profits of the production unit are increased.
Execute Boring And Repetitive Work
In many production establishments work required to be executed is awfully
boring, being cyclic and repetitive, due to which it is difficult for the
operators to remain fully dedicated to their tasks and generate interest in their
work. When tasks are monotonous, workers tend to be careless, thereby
increasing the probability of accidents and malfunctions of machines.
Utilization of robots has eliminated problems associated with boredom in
production.
Duty During Adverse Hours
Most production units are required to function twenty-four hours a day,
day and night and on holidays, without any break, so as to ensure production
commensurate with the capacity of the machinery. Human laborers do not
feel very comfortable working such odd hours, whereas robots can be
beneficially utilized whenever necessary.
Safety And Health Of Workers
Since robots are capable of working in hazardous environments, more
dangerous operations are being handled by robots. Thus the safety and health
of workers is ensured, thereby reducing expenditures on health and
medicines. Robots are now engaged in hoisting and moving heavy objects,
and perform other unsafe tasks.
Position and Orientation of the Objects in Robotic
Automation
To study the mechanics of robotic manipulators comprehensively and then
apply it to the mechanical synthesis of manipulators, we will first look at the
basic topics of mechanics involved. Future articles will deal with each of
them in detail.
Industrial robots result in increased levels of automation in industrial
processes. Robot use started with repetitive, simple tasks, and robots are now
also being used in precision work. This transition happened because of the
sophisticated synthesis of robots and the inclusion of electronics and
computer control. An industrial robot is designed such that it can be
programmed for different applications.
The mechanics of the industrial robots and manipulators is not a new field of
engineering in itself. It is actually a combination of different branches of
engineering. The mechanics, static and dynamic analysis and design, comes
from mechanical engineering, and the analysis of motion and path planning
of the manipulators is done with the help of mathematical tools.
The Position and Orientation of Objects
For the design and analysis of the manipulators, the basic thing to keep track
of and to plan is the position as well as the orientation of objects. The objects
for which we are concerned are the components of the manipulators, like
their links, joints, and tools, and also the objects in the space around the
manipulator as well as all the objects with which the manipulator interacts.
For the effective design of the manipulator, the position and orientation of the
objects concerned need to be represented suitably such that it can be
mathematically processed to make the manipulator move in the desired
manner.
Attaching Frame to the Object
To define the position and orientation of the object and to keep track of its
movement a coordinate frame is rigidly attached to the object. The motion of
the object, that is, the change in position and orientation of the object is now
given by the frame attached to the object.
Reference Coordinate Frame
A Reference Coordinate Frame is defined with respect to which the position
and orientations of all the other frames attached to the objects are described.
Transformation of Frames
We can define any frame as the reference frame. For convenience we keep
changing the reference frames. So the position and orientation of the frames
attached to the objects need to be transformed from one reference frame to
the other. There are mathematical methods for the transformation of the
frames.
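As a minimal illustration of such a transformation (a sketch in Python with numpy, using a planar homogeneous transform; the frame parameters below are assumed values, not the book's own notation):

    import numpy as np

    def transform(theta_deg, tx, ty):
        """Homogeneous transform: rotation by theta about z, then translation (tx, ty)."""
        t = np.radians(theta_deg)
        return np.array([
            [np.cos(t), -np.sin(t), tx],
            [np.sin(t),  np.cos(t), ty],
            [0.0,        0.0,       1.0],
        ])

    # A point described in frame B, in homogeneous coordinates.
    p_B = np.array([1.0, 0.0, 1.0])

    # Frame B is rotated 90 degrees and translated (2, 1) relative to frame A.
    T_AB = transform(90.0, 2.0, 1.0)

    # The same point described in frame A.
    p_A = T_AB @ p_B
    print(p_A[:2])  # -> [2. 2.]

The same matrix multiplication, chained, moves a description through any sequence of reference frames.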
The following chapters deal with the methods for describing the position and
orientation of objects related to manipulators and with the techniques for
transforming position and orientation between different reference frames.
The Kinematics of Manipulators – Forward and
Inverse
In the kinematic design and analysis of manipulators, it is required to find the
position and orientation of the end-effector for given values of joint
parameters. Sometimes the position to be attained is given, and the possible
values of the joint parameters to attain it are to be found.
In general, kinematics is the study of motion without any mention of the
forces causing it. When any object moves, the parameters attached to it are its
position, velocity and acceleration. Velocity and acceleration are derivatives
of the position. Sometimes even higher derivatives of position are required to
be calculated for the study of jerkiness and other irregularities in the motion.
The structure of a manipulator is like a kinematic chain with the end-effector
at the free end. It has rigid links connected by joints such that there can be
relative motion. The joints can be rotary (revolute) or sliding (prismatic). At a
rotary joint the motion is angular and is measured in terms of joint angles. At
a prismatic joint the motion is translational and is specified in units of length.
The angles and lengths between links are measured by position sensors
attached to the joints.
Forward Kinematics of Manipulators
For a particular set of values of joint angles and distance between the links
the end-effector will be at a particular position in space. The forward
kinematic analysis is to find the position and orientation of the end-effector
of the manipulator with respect to the base frame for the given set of joint
parameters.
For describing the position and orientation of the links, we attach coordinate
frames to each of them and then the position and orientation of these frames
are used for specifying the links. This scheme is known as the "joint space
description." The other way of describing position and orientation is to
specify it directly with reference to the base frame. This representation is
called the Cartesian description.
Forward kinematics is like defining the position of manipulator links in a
base frame. This is done by sequential transformation of the reference frames
attached to the links.
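As an illustration, here is a hedged sketch of forward kinematics for a two-link planar arm (the link lengths are assumed values; this is not an example from the book):

    import numpy as np

    def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
        """Position and orientation of a two-link planar arm's end-effector
        in the base frame, for joint angles in radians."""
        x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
        y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
        phi = theta1 + theta2  # orientation of the end-effector
        return x, y, phi

    # Arm pointing straight up: tip at (0, 1.8) up to floating-point error,
    # orientation pi/2.
    print(forward_kinematics(np.pi / 2, 0.0))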
Inverse Kinematics of Manipulators
We are provided with the position and orientation of the end-effector of the
manipulator, and the exercise is to find the values of joint angles and
displacements with which the specified position and orientation of the
end-effector can be attained. There can be one or more such sets of values, or
even no set of parameters at all for which the specified position and
orientation of the end-effector can be attained.
The equations formulated for solving the inverse kinematic problem are
nonlinear, and it is very difficult to obtain closed-form solutions for them.
There may be multiple solutions to the problem, or no solution may exist at
all.
The solutions of the inverse kinematic problem for manipulators are helpful
to define the workspace of manipulators. If solutions exist for a point then it
is in the workspace of the manipulator, and if no solution exists then the point
is not in the workspace.
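For the same assumed two-link planar arm, a sketch of the inverse problem shows both the multiple-solution and the no-solution cases:

    import numpy as np

    def inverse_kinematics(x, y, l1=1.0, l2=0.8):
        """Joint angles placing the two-link arm's tip at (x, y).

        Returns the elbow-down and elbow-up solutions, or None when the
        point lies outside the workspace (no solution exists)."""
        d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)  # cos(theta2)
        if abs(d) > 1.0:
            return None  # outside the reachable workspace
        solutions = []
        for theta2 in (np.arccos(d), -np.arccos(d)):
            theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                                   l1 + l2 * np.cos(theta2))
            solutions.append((theta1, theta2))
        return solutions

    print(inverse_kinematics(0.0, 1.8))  # fully stretched: nearly repeated solutions
    print(inverse_kinematics(3.0, 0.0))  # unreachable point -> None

A None result is precisely the workspace test described above: no joint values can attain that position.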
Kinematics of Manipulators: Velocity Analysis
Finding the position and orientation of the links of a manipulator required for
attaining a certain position and orientation of the end-effector is not sufficient
by itself. We also want to know the corresponding variation of the velocity of
the links and joints as the end-effector moves.
Velocity Analysis
In addition to finding the end-effector position for given joint parameters,
and vice versa, in forward and inverse kinematics, the kinematics of
manipulators also includes the analysis of manipulators in motion. Not only
the final position of the links and joints needed to attain the desired position
of the end-effector, but also the velocity of the links and joints while attaining
that position, and its variation, is important for analysis.
We discussed joint space and Cartesian space in the previous article. To find
the final position of an end-effector for given joint parameters is like moving
from joint space to Cartesian space, and obtaining joint parameters for a
given final position of the end-effector is like expressing the Cartesian space
in joint space.
The Jacobian Matrix
For the velocity analysis of the motion of manipulators, a matrix is defined
which represents the mapping of velocities from joint space to Cartesian
space. Such a matrix is called the Jacobian of the manipulator. The Jacobian
matrix is characteristic of a particular configuration of the manipulator. As
the configuration of the manipulator changes while in motion, the mapping of
velocities changes accordingly.
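For the two-link planar arm used in the earlier sketches, the Jacobian can be written out explicitly (again with assumed link lengths; an illustration, not the book's derivation):

    import numpy as np

    def jacobian(theta1, theta2, l1=1.0, l2=0.8):
        """Jacobian mapping joint rates (dtheta1, dtheta2) to tip velocity (vx, vy)."""
        s1, c1 = np.sin(theta1), np.cos(theta1)
        s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
        return np.array([
            [-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12],
        ])

    # Tip velocity for given joint velocities at a given configuration.
    J = jacobian(0.3, 0.5)
    dtheta = np.array([0.1, -0.2])  # joint rates in rad/s
    print(J @ dtheta)               # (vx, vy) in the base frame

Recomputing J at each new configuration captures exactly the configuration dependence described above.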
Singularities
During the motion of the manipulator, the links acquire different
configurations, and with these changing configurations the Jacobian changes
accordingly. While moving from one configuration to another, the
manipulator sometimes reaches a point beyond which it cannot go. Such
points describe the boundary of the manipulator's workspace. There are also
points that cannot be attained by the manipulator due to structural constraints.
At all such points the Jacobian of the manipulator is not invertible and these
points are called Singularities of the Jacobian. Singularities of the
manipulator can tell what the limits of workspace of the manipulator are. The
points of singularities should be taken into consideration while designing and
programming a manipulator for any specific task. The knowledge and
understanding of singularities is very helpful for designers and users of
manipulators.
The Jacobian of a manipulator is also used for the mapping of forces and
torques. When the force and moment at the end-effector are given, and the set
of joint torques required to generate them is to be found, the Jacobian of the
manipulator is again very useful.
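Continuing the two-link sketch, a singular configuration shows up as a zero determinant, and the transpose of the Jacobian carries out the force-to-torque mapping just mentioned (standard results, illustrated with assumed values):

    import numpy as np
    # Reuses jacobian() from the previous sketch.

    # With both angles zero the arm is fully stretched: det(J) = 0, a singularity.
    J = jacobian(0.0, 0.0)
    print(np.isclose(np.linalg.det(J), 0.0))  # True -> J is not invertible here

    # Away from singularities, J transpose maps an end-effector force
    # (fx, fy) to the joint torques needed to balance it.
    J = jacobian(0.3, 0.5)
    force = np.array([5.0, 0.0])  # newtons applied at the tip
    print(J.T @ force)            # joint torques in N*m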
The previous two articles have presented a brief overview of the kinematics
of manipulators. The kinematics of manipulators contains position,
orientation, and velocity analysis of manipulators. Now we move on to
Dynamics, the study of forces causing motion.
How Does a Robot's Voice Recognition System Work?
Ever wondered how a robot is able to perform tasks after taking commands
from the user? It is the robot's voice recognition system that identifies the
words and performs the desired action. Want to know how? Read on.
Introduction
Voice recognition is the process by which a robot identifies what has been
ordered and performs an action based on the information received. Note that
not all robots have this functionality; however, the system can be integrated
at a later stage.
The voice recognition system works on the basis of the frequencies and
amplitudes of the spoken words. After dissecting the received words into
various frequencies and amplitudes, it generates a signal and sends it to the
main operating unit of the robot.
How Does the System Work?
When an order is given to a robot in the form of words, the voice recognition
system breaks each word down into the constituent frequencies of each letter.
The voice recognition system has a pre-programmed database that holds a
unique frequency for each letter. This database helps the robot identify the
word and perform the right action. However, a good amount of training needs
to be given by the user to enable the robot to initially form a table comprising
the major frequencies of words and letters.
The table, once formed, acts as a quick reference for the robot to identify
frequently used words. When a word is spoken, the robot identifies the sound,
determines the exact frequency, and looks it up in the table to perceive the
right word. If the robot is not able to find the right frequency, it finds the
frequency of the letter closest to the one needed and thus recognizes the
whole word.
Frequent Training Required
In order to increase accuracy, the robot should be trained repeatedly to
identify the right frequencies. The more training provided, the less variation
is tolerated. This means that in case of voice modulation or variation, the
system will not try to match the perceived signal to many frequencies but
will reject the frequency and perform no action. However, if very few
frequencies are matched for a particular variation, it may misinterpret a word
or choose a word similar in sound.
It is for this reason that most robots with a voice recognition system are
highly trained by their users. The more the robot is trained, the quicker the
voice recognition system identifies a word and sends a signal to the
controlling unit to perform the desired action.
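As a toy illustration of the table lookup and rejection behaviour described above (the command names, frequency values, and threshold below are all invented; a real system would first extract the frequencies from audio, for example with an FFT):

    import numpy as np

    # Hypothetical training table: dominant frequencies (Hz) per known command.
    command_table = {
        "stop":    np.array([310.0, 620.0]),
        "forward": np.array([450.0, 900.0]),
        "left":    np.array([280.0, 840.0]),
    }

    def recognize(features, max_distance=50.0):
        """Return the command whose stored frequencies are nearest the input,
        or None when nothing is close enough (the input is rejected)."""
        best, best_dist = None, float("inf")
        for word, stored in command_table.items():
            dist = np.linalg.norm(features - stored)
            if dist < best_dist:
                best, best_dist = word, dist
        return best if best_dist <= max_distance else None

    print(recognize(np.array([455.0, 890.0])))  # -> "forward"
    print(recognize(np.array([100.0, 200.0])))  # -> None (no close match)

Training more samples per word would tighten the stored frequencies and allow a narrower threshold, mirroring the effect of repeated training on a real system.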
Light Sensors in Robots
Have you ever wondered how a robot visualizes an object and then performs
an action related to it? What is it in a robot that allows it to see and identify
various objects? Don't know the answers? Know all of them in the article
inside.
Introduction
Sensors are small electronic devices used in robots for converting a physical
quantity such as temperature, pressure, light etc. into an electrical signal. In
robots, sensors are used for both internal and external functions, for
coordinating the internal functions and for interacting with the outside world.
The light sensor is one such sensor; it measures the amount of light falling on
a photocell. A photocell is a type of resistive sensor, which produces a signal
according to the change of resistance inside it. If the intensity of light falling
on the photocell is high, the resistance of the photocell will be low, and vice
versa when the intensity of light is low.
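A common way to read such a photocell, offered here as a hedged sketch rather than the book's circuit, is to place it in a voltage divider and measure the output voltage; the supply voltage and resistor values below are assumed:

    # Reading a photocell through a voltage divider (illustrative values).
    # The photocell's resistance drops as light intensity rises, so the
    # voltage across the fixed resistor rises with brightness.

    V_SUPPLY = 5.0      # volts across the divider
    R_FIXED = 10_000.0  # ohms, fixed resistor in series with the photocell

    def divider_voltage(r_photocell):
        """Voltage across the fixed resistor for a given photocell resistance."""
        return V_SUPPLY * R_FIXED / (R_FIXED + r_photocell)

    print(divider_voltage(1_000.0))    # bright light, low resistance -> ~4.55 V
    print(divider_voltage(100_000.0))  # darkness, high resistance    -> ~0.45 V

A robot's controller samples this voltage with an analog-to-digital converter to obtain the signal the brain works with.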
Light sensors are used to do a variety of functions such as measuring the light
intensity, measuring the difference in intensity between two photocells and
measuring any change in intensity. Moreover, different types of light sensors
are used for different functions. Let’s take a quick look at each of them.
Optical Sensors
An optical sensor consists of two main things - an emitter and a detector. An
emitter is generally made of a light emitting diode and a detector is made of a
photodiode or a phototransistor. Depending on the arrangement of the emitter
and detector, optical sensors are divided into two types.
Reflective sensors – In this type, the emitter and the detector are placed side
by side but are divided by a barrier. An object is detected when light from the
emitter bounces off the object and reaches the detector.
Break beam sensors – In this type, the emitter and detector are placed
opposite each other. An object is detected when the beam between the emitter
and detector is interrupted.
Light Reflective Sensors
Light reflective sensors detect an object based on the color and other
properties of its surface. However, these sensors are less reliable, because
darker objects are harder to measure than lighter ones. Moreover, if a lighter
and a darker object are at the same distance from the sensor, the lighter object
will appear closer than the darker one.
Infrared Sensors
Infrared sensors are also a type of light sensor, but they work in the infrared
region of the frequency spectrum. Infrared sensors also have an emitter and a
detector, in an arrangement similar to that of break beam sensors. Infrared
sensors are frequently used in robots because they are less affected by
interference and can identify objects easily.
Vision System in Robots
Introduction
The vision system enables a robot to see and identify various objects. The
system provides the robot with an ability to collect the light reflected from
the objects, process an image out of it and then perform the desired action.
The vision system helps with these functions by using various electronic
devices and hardware systems.
How Does the System Work?
A vision system in a robot identifies an object by forming an electronic image
using an array of pixels already stored in the memory of the robot's
controlling unit. Each pixel has a binary number allotted to it, and each of
these binary numbers represents a particular wavelength and intensity in the
light spectrum. An electronic image is formed in the controlling unit of the
robot by assembling the various binary numbers according to the amount of
light.
Types of Vision Systems
A robot's vision system is classified into three main types on the basis of the
color of the objects. They are:
○ Binary images, which consist of black and white
○ Gray-scale images
○ Colored images based on red, green, and blue
An electronic image is formed with the help of pixels classified into these
three categories. If an image cannot be placed in any of these categories, the
category nearest to the image is selected.
Parts of the Process
A vision system consists of a small camera, a monitoring system (a
computer), and the necessary hardware and software. The whole process of
identifying the image is classified into three main parts:
○ Image processing
○ Thresholding
○ Connectivity paths
Image processing is a process by which an image is formed for analysis and
use at a later stage. It uses various techniques such as image analysis and
histogram of images to identify, simplify, modify or enhance an image.
Thresholding is a process in which each pixel is classified into various
categories and then compared with the pixels stored in the database. Once
compared, the pixels are assigned to different levels to form an image.
The connectivity path is a process by which a particular pixel is connected
to a neighboring pixel if it falls in the same color and texture region.
It is through the combination of these three processes that a final electronic
image is formed and, after analysis, the required action is taken.
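A minimal sketch of the last two steps, thresholding and connectivity, on an invented gray-scale image (illustrative only; a real vision system works on camera frames and richer pixel comparisons):

    import numpy as np

    # Toy 5x5 gray-scale image (0-255), invented for illustration.
    image = np.array([
        [ 10,  12, 200, 210,  11],
        [  9, 205, 220, 215,  10],
        [  8,  11, 210,  12,   9],
        [  7,  10,  11,  10,   8],
        [200, 210,   9,  10,  11],
    ])

    # Thresholding: classify each pixel as object (1) or background (0).
    binary = (image > 128).astype(int)

    def label_regions(binary):
        """4-connectivity labelling: a pixel joins its neighbour's region
        when both are object pixels, as in the connectivity-path step."""
        labels = np.zeros_like(binary)
        current = 0
        for r in range(binary.shape[0]):
            for c in range(binary.shape[1]):
                if binary[r, c] and not labels[r, c]:
                    current += 1
                    stack = [(r, c)]
                    while stack:  # flood fill the whole region
                        i, j = stack.pop()
                        if (0 <= i < binary.shape[0] and 0 <= j < binary.shape[1]
                                and binary[i, j] and not labels[i, j]):
                            labels[i, j] = current
                            stack += [(i + 1, j), (i - 1, j),
                                      (i, j + 1), (i, j - 1)]
        return labels, current

    labels, count = label_regions(binary)
    print(count)   # -> 2 distinct bright regions in this toy image
    print(labels)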
Robots in Engineering and Manufacturing
From the inspiration of the miraculous robot origins of the 1960s to the
far-fetched imagination portrayed in movies, we see how robots have
digressed from their silver screen persona to create an impact in our
day-to-day lives in a practical sense.
The mere mention of the word robot conjures images of human-like
machines capable of intelligent interaction with the world around them. From
the robots of odd shapes and forms in George Lucas’s original "Star Wars" of
1977 to the lifelike robots shown more recently in the 2004 movie "I, Robot"
adapted from Isaac Asimov’s classic novel of 1950, these androids, equipped
with fictitious highly advanced artificial intelligence by their makers, were
able to mimic human intelligence and thrive and survive through all forms of
challenges and obstacles in their environment. Another incredible example of
robots that have been put to great film effect is found in the movie
Transformers, an adaptation from the animated series of the 1980s. Although
recent years have seen almost similar commercially-produced robots in the
form of AIBO and ASIMO introduced by Sony and Honda respectively, the
real practical use of robots in the engineering industry is a huge contrast to
what their persona from the silver screen would suggest.
In the real world, robots are utilised to perform repetitive tasks and job
functions which are in all likelihood too tedious and boring for man to perform,
such as in the manufacturing line of plastics and various hardware. These
practical robots often take the form of mechanical arms equipped with
electromagnetic plates, grippers, or suction cups working in tandem with
conveyor belts to aid in the assembly of equipment in the manufacturing
industries. Although stationary in nature, these robotic arms are endowed
with several degrees-of-freedom, giving them the flexibility to move in many
directions through multiple angles with utmost ease and agility.
Robots can also be made to execute difficult activities and processes
requiring high precision at an accuracy usually beyond human capabilities.
Through proper programming, a task can be repeatedly carried out with
minimal or virtually without error. With proximity and pressure sensors
abound, the accuracy which can be achieved by some of these robots can be
rather remarkable, and allow them to work with extremely fragile materials
with great care. Even enervating jobs requiring skill and expertise such as arc
welding and spray painting can seem like child’s play to a robot.
Robots are also best employed in situations which may prove to be too
dangerous for man to perform. In steel manufacturing mills, robots are used
in materials handling and transfer, to load iron ore and coke into blast
furnaces and move molten iron, a chore perhaps too life-threatening for man
himself to undertake. With practical use of robots such as these, accidents
and loss of life at the industrial workplace can be minimized. Also, with the
capability of withstanding stress far beyond what human beings can endure,
these tireless machines have proven reliability and a track record of getting
the job done right without the adversities and risks commonly associated with
human error and behaviour.
Robots have also assisted man in accomplishing job duties in environments
which are not conducive to human life. In space exploration, rover
spacecraft robots have been successfully deployed by NASA in the Mars
Exploration Rover Mission to survey and explore the surface of Mars and to
retrieve geological samples from the red planet. In marine engineering, robots
known as autonomous underwater vehicles (AUVs) serve to produce detailed
maps of the seabed for the oil and gas industry, as well as for research into
deep underwater chemical compound composition and the presence of
microorganisms without any risk to human life.
In HVAC applications, robots equipped with sensors and brushes are
commonly used to effectively clean air-conditioning ducts in the engineering
maintenance of commercial buildings. Camera sensors are sometimes
installed on these robots to allow their human operator a glimpse of duct
interiors that are mostly too narrow and tight for humans to work inside them.
Similar robots have also recently found their way into residential homes,
performing the same automated cleansing duties on floors and carpets.
[Image: Robotic arm]
In closing, robots have established their place as dependable servants in the
daily lives of human beings, with robotic technology playing a vital role in
the engineering industry. Robots have been reliable in getting their assigned
responsibilities accomplished correctly and accurately with great efficiency,
and keeping their human masters safe from harm in the process.
Robotics: Construction of a Robot
What makes a machine different from a robot? What improvisations convert
a dumb machine into a robot? Though the basic construction of a robot is
very much built on the structure of a machine, certain additions such as
actuators, sensors, and controllers make it a robot.
Robot: Advancement over Machines
The basic building blocks of a robot are very similar to those of machines. A
machine has moving parts or members connected to each other through
different types of joints, driven by a motor or some other driving mechanism,
and moving in synchronization with one another to execute the specified
operation. The basic structure of a robot is similar to that of a machine, but
there are some advancements or additions that make it a robot.
In robots, the simple driver motors are replaced by servo motors, which are
more accurate and fitted with transducer-based sensors to provide feedback.
There are more actuator motors in robots than in machines, which generally
have a single driving motor. There are also controllers governing the
operation of the different sensors and actuators: provided with an instruction
set, they issue commands to the actuators and receive feedback from the
sensors. These are some of the advancements of robots over machines.
Building Blocks of Robots
In addition to the frame of a machine, the components of a robot can be
broadly named as:
Actuators: Actuators move the different members of the robot by the
specified amount, at the specified point of time, whether in angle or linear
translation, as per the commands received from the controllers. Along with
motion, actuators also provide the requisite force to the members.
Actuators can be electrical, such as permanent magnet D.C. motors, stepper
motors, solenoids, and printed armature motors; hydraulic, such as linear
cylinders and hydraulic motors; or pneumatic.
Sensors:
What makes a robot more special and advanced than a machine? It is the
proper use of sensors, at the right places, in a robot. Sensors, as the name
suggests, can sense the surroundings and the different objects with which the
robot is interacting. Sensors provide feedback to the controlling system
about the motion of actuators and other members: how much they have
moved, and their speed and acceleration. Sensors can also provide
information about the pressures and forces acting upon the robot. Some
advanced sensors, such as optical sensors, thermal sensors, and vision
sensors, can create a more vivid image of the surroundings and provide more
detailed information to the controlling system.
Processors and Controllers: To give the starting command to the actuators,
receive and process the feedback information from the sensors, and then
provide the corrected signal to the actuators, a processing and controlling
system is required. Controllers are programmed as per the function of the
robot; certain software and programming tools are used for this purpose. The
feedback information received from the sensors is passed through signal
converters, where it is made usable, and then passed on to the processors,
which in turn disseminate the modified signals.
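To illustrate the loop that the controller closes between sensors and actuators, here is a minimal Python sketch of a proportional feedback controller driving a single joint toward a commanded angle. The joint model, the gain, and the target value are illustrative assumptions.

```python
# Minimal sketch of the sense -> control -> actuate loop described above.
# The joint model, gain, and target angle are illustrative assumptions.

TARGET_ANGLE = 90.0   # degrees, the commanded joint position
KP = 0.5              # proportional gain of the controller
angle = 0.0           # feedback value reported by the joint's sensor

for step in range(20):
    error = TARGET_ANGLE - angle    # controller compares command vs. feedback
    command = KP * error            # corrected signal sent to the actuator
    angle += command                # actuator moves the joint (idealized response)
    print(f"step {step:2d}: sensor reads {angle:6.2f} deg")
    if abs(error) < 0.1:            # close enough to the commanded position
        break
```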
Robotics: Structure of Industrial Robots or Manipulators: Types of Base Bodies – I
Robots used in industries mostly perform the tasks of picking and placing
different things. These functions are similar to those of a human arm: hold
something, then place and fit it at the required spot. Such robotic arms are
also called manipulators. They have different types of base bodies.
Structure of Industrial Robots or Manipulators
Robotic arms are used in assembly lines to pick the components of products
to be assembled and place and fit them in the right place. In a production
line for canned food, for instance, cans are picked by robotic arms, lids are
placed on them, and they are placed in container packets; in the assembly
line of cars, the frames of cars move along the conveying system one by one
while robotic arms or manipulators working around each frame fit different
components onto it, and finally completed cars come out of the assembly line.
The functions that can be performed by a manipulator, the reach of its end
effectors, the orientation in which it can work, and the overall working space
of the manipulator are determined by the structure of the manipulator's body.
The types of joints used to connect the different members of the manipulator,
along with the base body of the robot, determine the overall degrees of
freedom of the manipulator in its motion.
The body of the manipulator moves the end effectors to the target point,
where the object is grasped by the gripper fitted to the wrist. The wrist, fitted
at the end point of the manipulator, orients the object in the required
direction. The base body, wrist, and gripper constitute the basic structure of a
manipulator.
Types of Base Bodies of Manipulators
Based on the working space, axes of movement, and degrees of freedom of
the manipulator as a whole, the base bodies can be classified into different
types.
Linear: This type of manipulator arm, as the name suggests, can move along
only a single direction. The linear manipulator base body comprises one
prismatic joint, that is, a slider which can move only along one axis. Such
robots can only perform simple tasks like picking objects from one spot and
placing them at another. This is the most basic type of manipulator possible.
The range of this manipulator arm is limited by the length of the prismatic
joint. The motion of the linear base body can be defined by a single variable
and obtained with one linear actuator.
This is a very basic body type of the robots. In the next article I will list some
more Base Body types of Industrial Robots or Manipulators.
Robotics: Structure of Industrial Robots or Manipulators: Types of Base Bodies – II
Types of Base Bodies of Manipulators
As the number of members, and the degrees of freedom of the joints
connecting them, increases, the robotic arm can work in more complex
spaces and orientations.
Planar: The planar base body of a manipulator can be seen as one linear
manipulator attached to another, with the axes of the two manipulators
perpendicular to each other in the same plane. The end of the manipulator
can thus move along two mutually perpendicular axes and can cover every
point in the plane defined by the range of motion along the two axes. The
motion of the planar base body is defined by two variables, and the motion is
achieved by two linear actuators. Such a manipulator has two degrees of
freedom.
Examples of tasks planar manipulators can do: curve tracing or printing on
sheets, cutting metal sheet to a specified design, welding on a planar metal
piece, or drilling at different spots in a plane.
Cartesian: Such a robot body allows the manipulator to move along the three
Cartesian axes only, which are three mutually orthogonal axes. The
working space of such a robot is a cuboid whose dimensions are defined by
the range of motion of the manipulator along each axis. The Cartesian robot
base consists of three prismatic joints arranged perpendicular to each other. It
requires three variables to define its position, and it has an equal number of
actuators and the same number of degrees of freedom.
One example is the hoisting robot used in a workshop, shipping yard, or
construction site, which moves on rails while the hoisting hook is moved up
and down on cables.
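Because a Cartesian base is controlled by exactly three variables, one per prismatic joint, its positioning logic can be sketched in a few lines of Python. The travel limits below are illustrative assumptions.

```python
# Minimal sketch: the pose of a Cartesian base is fully described by three
# prismatic joint variables, one per orthogonal axis. Travel limits are
# illustrative assumptions.

LIMITS = {"x": (0.0, 2.0), "y": (0.0, 1.5), "z": (0.0, 1.0)}  # metres

def cartesian_position(x, y, z):
    """Return the end-point position, checking each prismatic joint's range."""
    for name, value in zip(("x", "y", "z"), (x, y, z)):
        lo, hi = LIMITS[name]
        if not lo <= value <= hi:
            raise ValueError(f"joint {name}={value} outside travel {lo}..{hi}")
    return (x, y, z)  # prismatic joints map directly to Cartesian coordinates

print(cartesian_position(1.2, 0.5, 0.8))
```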
The types of robot base bodies discussed so far comprise only prismatic
joints, and the motion along any of the axes is linear; there is no rotary
motion. Though the end effectors of a Cartesian manipulator can reach every
point in its defined space, it is still not used for every purpose: it is slow
when moving between different points in space and cannot effectively orient
its end effectors for certain jobs. Thus revolute and cylindrical joints are
included in the base body to obtain more efficient manipulator structures.
Manipulation Robotic System: Manual Type Robots
Introduction
A manual robot is a type of manipulation robotic system that requires
complete human intervention for its operation. The manual type of robotic
system requires a particular kind of human control, a system seldom found in
any other type of robotic systems. Manual manipulators comprise a range of
robotic systems, from basic to highly advanced, each having a specific
control system according to its application.
Manual type robots can be divided into two main types:
○ articulated
○ exoskeletons
Articulated robot
An articulated robot, as the name suggests, is a balanced manipulator
comprising various joints, each with a separate actuator. One of the joints,
fitted with a grip or tool, holds the load, within a particular range, and moves
according to the signals fed to the control system for each individual joint's
actuator. Proper synchronization of all these joints moves a load from one
point to another. Generally used for handling heavy materials, articulated
robots are easy to use but require good handling skills. The control signal
given to each actuator should be precise in comparison to the signals fed to
the actuators of the other joints. Thus, the functioning and controlling of an
articulated robot mainly consists of feeding the right signal to the right
actuator at the right time.
Exoskeleton robot
An exoskeleton manipulator consists of a peculiar type of robot mechanism
wherein each joint of the manipulator control system corresponds with that of
a human arm or leg. When a particular joint of the operator is moved, the
manipulator joint which corresponds to that particular human joint also
moves. The motion of the human joint generates a control signal that moves
the corresponding manipulator accordingly. The joints are connected to
drives that are again connected to tools that do the needed job. This type of
manipulator is generally used for doing arduous assignments such as lifting
heavy weights or moving across rugged terrain. Presently, exoskeleton
manipulator systems are also used extensively in the entertainment industry.
Example
An apt example of a manual manipulator is the high power mechanism used
for rock drilling and breaking down concrete structures. This mechanism is
also a multiple joint system which carries a heavy hydraulically operated
drilling tool used for breaking the rock or concrete by frequent impacts. The
drilling tool is held by a fully automatic manipulator that controls the
movement and position of the drill by analyzing and sensing the motion of
the truck on which the whole system is mounted. Also, according to the
quality and state of the rock, the manipulator automatically adjusts the
drilling tool’s speed and hydraulic system.
The Required Features of a Multimeter for Robot
Building
A tool that is necessary for all robot builders is the multimeter. However,
there is sometimes an uncertainty as to what features a multimeter must have
for building robots. Although most multimeters have the required features,
one should still make sure before beginning a robot building project.
Introduction
Among the many tools that a robot builder has by their side, the digital
multimeter is a crucial measurement tool that every robot builder is equipped
with. With a multimeter, you can:
○ determine what kind of part you're dealing with
○ determine the electrical denomination of a part
○ determine whether too much or too little power is being used
○ determine whether a part is damaged or not
○ read chips and sensors to help with the designing and debugging processes.
Although some digital multimeters are much more expensive than others,
they all have similar features. Basically, multimeter features can be divided
into three categories: required, recommended, and optional. In this article, I
will be discussing the required features for a multimeter. The following
features are those that a multimeter must have in order to be used for robot
building:
Digital
Although outdated, some multimeters have a needle display. These analog
multimeters are not always easy to read due to the swinging nature of the
needle. However, digital multimeters display values on an LCD (liquid-crystal display) such as the display found on calculators. Digital meters are
precise and easy to read. Therefore, digital multimeters must be used for
accurate readings.
DC Voltage
DC voltage is an indication as to how much force the electricity has, and is
measured in volts. For robotic purposes, only a range of 0.1 V to 30 V is
required. All multimeters meet this basic requirement.
DC Current
DC current is an indication as to how many electrons are passing a certain
point every second, and is measured in amps. For robotic purposes, only a
range of 0.0001 A to 2 A is required. All multimeters typically meet this
requirement.
Resistance
Resistance is an indication as to how much the flow of electricity is being
opposed. A range of 1 Ω to 2,000,000 Ω is all that is needed for robotic
purposes. Most multimeters meet this requirement.
Probes
Although most multimeters are sold with test probes, you must make sure
that you have a set of these cables in order to connect them to your robot
parts for testing.
Fuse Protection
In order to protect the multimeter from damage if an excess amount of
voltage or current passes through the test probes, all multimeters must have
fuse protection. To determine whether your multimeter has fuse protection,
look for UL (Underwriters Laboratory) or CE (Conformité Européenne)
markings. However, although a multimeter might have fuse protection,
precautions must still be taken since a multimeter is still vulnerable to
damage through its capacitor, transistor, and data ports.
Measuring Resistance of Resistors
For any beginner in robotics, or even experienced individuals, being able to
determine the resistance value of a resistor is important in building a robot
or, more generally, any electronic device.
Converting Colors to Numbers
Before we begin measuring the resistance of resistors, it would be very
helpful if we know how to distinguish between the different resistors.
Resistors have an assortment of colored bands that indicate their value. The
standard color-code chart shows all the possibilities.
Using this chart, a resistor with the colors yellow, violet, and brown would
be 47 × 10 = 470 Ω.
Moreover, if the resistor has four colored bands, the last color indicates the
tolerance. A second chart gives the tolerance for each color.
The tolerance of a resistor tells you the range of values expected from it.
For instance, a 5% tolerance on a value of 100 Ω indicates that it could
actually be anywhere between 95 and 105 Ω.
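Both the band-to-value rule and the tolerance window can be expressed in a short Python sketch. The digit and tolerance tables follow the standard resistor color code; the example bands are the ones used above.

```python
# Minimal sketch of the color-band rule described above. Digit and tolerance
# tables follow the standard resistor color code.

DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9}
TOLERANCE = {"gold": 5.0, "silver": 10.0}  # percent

def decode(band1, band2, multiplier, tolerance=None):
    """Return the nominal value in ohms and, if given, the tolerance window."""
    value = (10 * DIGITS[band1] + DIGITS[band2]) * 10 ** DIGITS[multiplier]
    if tolerance is None:
        return value, None
    margin = value * TOLERANCE[tolerance] / 100.0
    return value, (value - margin, value + margin)

# Yellow-violet-brown = 47 x 10 = 470 ohms; a gold band gives a 5% window.
print(decode("yellow", "violet", "brown", "gold"))
# -> (470, (446.5, 493.5))
```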
To measure the resistance of a resistor, one must have a multimeter handy.
The following steps should be followed in order to correctly measure the
resistance:
1. Switch the dial to the 200 Ω range, if it is not already there.
2. Connect the black lead to the COM terminal.
3. Connect the red lead to the terminal labeled Ω. This may vary for different models, so consult your specific instructions.
4. Turn the meter on.
5. Obtain a 100 Ω resistor. The color bands will be brown, black, brown, and gold, in that order.
6. Connect the tip of the black probe to one end of the wire of the resistor. The end you choose is arbitrary.
7. Connect the tip of the red probe to the other end of the resistor wire.
If you happen to have them, alligator-clips or hook adapters would be very
useful for meter test probe tips. Otherwise, it is difficult to touch each wire
end of a resistor with both probe tips simultaneously.
Interpreting the Resistance Displayed on the Meter
Ideally, the meter should be displaying 100 Ω. It is likely that the meter will
read a value slightly lower or higher than 100. However, if the meter displays
0L or an unusually large number, then you should ensure the dial is set to
ohms. Jiggle the test probe connections so that the connection is properly
established. You may also have read the color bands in the wrong order; the
correct order is specified in step 5. If the multimeter is in working order,
then the resistor itself may be the cause of the unusual readings. Resistors are
not immune to ageing, temperature extremes, shock, and other such
mistreatment. Still, it is unlikely that a new, 5% tolerance, 100 Ω resistor
would give readings outside the 95–105 Ω range. To be certain, test several
more resistors.
Experiencing Resistance Ranges
8. Obtain a 470 Ω resistor, of which the color bands are yellow, purple, brown, and gold.
9. With a manual-ranging multimeter, choose an ohm range less than 470 on the meter.
10. Connect the 470 Ω resistor the same way you did the 100 Ω resistor.
Choosing a low maximum range with the dial is not harmful to the
multimeter, as choosing an inappropriate voltage range would be.
Nonetheless, a manual-ranging meter will not be able to display the
measurement of the resistance; typically, 0L will be displayed instead. Due to
the need to frequently test different ranges on a manual-ranging meter, it is
tedious to determine the resistance of a box of resistors. This job is made
much easier with an auto-ranging meter.
The Optional Features of Multimeters for Robot
Building
Although basic multimeters are sufficient for beginners in the field of
robotics, those who are more advanced may consider purchasing a multimeter
that will allow them to test a larger variety of values in a more efficient
manner. These multimeters are distinguishable by their optional features.
Introduction
Those who are willing to spend the money on a top-end multimeter, with
plans for more advanced robot building, may choose to purchase a
multimeter with features that are not required for the average robot builder. In
two previous articles I discussed the required and recommended features of
multimeters for robot building. I will be concluding my discussion on
multimeter features with this article on the optional features of multimeters.
These optional features optimize the process of testing different parts for their
electrical denominations and make robot building a more involved and
enjoyable activity:
Inductance
Inductance gives an indication as to the extent of opposition to the change in
flow of electricity, and is measured in henries. This is a rare feature and is
only found in a few multimeters.
RS-232 Data Interface
The Recommended Standard 232 allows for communication of data between
a multimeter and a computer or other device. Such a data interface is crucial
when producing test logs. Despite their communicative abilities, multimeters
with a data interface do not replace oscilloscopes, since multimeters only
update data a few times every second. If you plan on purchasing a multimeter
with an RS-232 data interface, make sure you purchase the software that must
be installed in order for you to transfer data from the multimeter to your
computer.
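As a hedged illustration of producing a test log over RS-232, the sketch below uses the third-party pyserial package (installed with pip install pyserial). The port name, baud rate, and the meter's line format are assumptions that depend on your meter and its bundled software.

```python
# A hedged sketch of logging readings over RS-232 using the third-party
# "pyserial" package. The port name, baud rate, and the meter's line format
# are assumptions; consult your meter's manual and software.

import serial  # provided by pyserial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
    with open("test_log.txt", "a") as log:
        for _ in range(10):  # capture ten readings for the test log
            line = port.readline().decode("ascii", errors="replace").strip()
            if line:         # meters typically update a few times per second
                log.write(line + "\n")
                print("logged:", line)
```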
Scope
The scope mode takes measured data values and graphs them over a certain
time scale. When analyzing the quality of timing signals and data pulses, the
scope mode becomes crucial. The limited scope mode of a multimeter does
not compete with a more powerful oscilloscope, but it is still useful for the
average robot builder. Be careful when deciding whether or not
to purchase a multimeter with a scope, since the scope mode typically results
in a significant increase in price.
Temperature
The temperature mode gives an indication as to how hot or cold something is,
and is measured in degrees Celsius or degrees Fahrenheit. This feature is
helpful when determining whether a part is too hot, or whether a part is
functioning outside its optimal temperature range.
Sound
The sound mode gives an indication as to how loud or quiet something is, and
is measured in decibels. This feature can be useful when performing volume
tests on parts such as speakers and motors.
Bar Graph
When a part is being tested by a multimeter and there is a change in a
measurement value, the multimeter does not record every change. Rather, the
multimeter takes an average of the values over a small period of time and
displays this average a few times every second. All multimeters display a
numerical value, but some display the value as a bar graph as well.
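The averaging behaviour described above can be sketched in a few lines of Python; the sample readings and the window size are illustrative assumptions.

```python
# Minimal sketch of the display behaviour described above: raw samples are
# averaged over a short window, and only the averages are shown. The sample
# data and window size are illustrative assumptions.

samples = [4.98, 5.02, 5.01, 4.97, 5.03, 5.00, 4.99, 5.02]
WINDOW = 4  # number of samples averaged per display update

for i in range(0, len(samples), WINDOW):
    window = samples[i:i + WINDOW]
    shown = sum(window) / len(window)   # the value the meter would display
    print(f"display update: {shown:.3f} V")
```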
Relative
The relative mode calculates the difference between the last measurement and
the current measurement. This mode is useful when testing the differences in
electrical denominations of similar parts.
Variable Resistors: Identifying Potentiometers
Potentiometers allow one to change their resistance and are invaluable when
it comes to building circuit boards. Although there are many types of
potentiometers, the one often used in robotics is the trimpot because it can fit
into a breadboard.
Introduction
For those beginning to learn about robotics, particularly in the area of
building circuits, you may have come across the question of how to change
the brightness of an LED without having to keep switching parts. This is the
concept I will be discussing. Quite simply, the solution to this issue is a
potentiometer.
Potentiometers
Although you may not have heard of the word potentiometer, you may have
heard of variable resistors. In fact, potentiometers are variable resistors.
Specifically, they function to alter their resistance via a knob or dial. You
have probably used one before by adjusting the volume on your stereo or
using a light dimmer.
Potentiometers have a range of resistance, that is, they can be attuned from
zero ohms to whatever maximum resistance that is specific to it. For example,
a potentiometer of 300 Ω can be adjusted from 0 Ω to its maximum of 300 Ω.
When it comes to installing a potentiometer onto a solderless breadboard,
however, only a specific type of potentiometer can fit. (Many are shaped in a
way that they cannot fit into the breadboard.)
Trimpot
Trimpot is another term for trimmer potentiometer. They are useful in that
they are lightweight, small, and usually fit into solderless breadboards.
On the other hand, it is tedious to change their control dial. Instead of knobs
that can be turned with your fingers, trimpots require a screwdriver, or some
similar tool with a flat end. Moreover, trimpots are more fragile than larger
potentiometers. They can be damaged after their dial has been adjusted about
100 times, compared to larger potentiometers that can be adjusted thousands
of times without damage.
Turning the Dial
Most potentiometers require only a single turn of their dial to adjust between
their minimum and maximum value, that is, from 0 Ω to whatever the
maximum may be. This allows for a simple assessment of whether the dial is
at the minimum, half way, or at its maximum.
There also exist potentiometers that require multiple turns. For instance, a
multi-turn potentiometer may require up to 30 turns to cover the entire range
in ohms. The advantage of this potentiometer is that it allows for small
incremental changes. Also, if vibration causes a partial turn of the dial, the
effect may even go unnoticed, as opposed to a single-turn dial.
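The trade-off between single-turn and multi-turn dials can be sketched numerically. Assuming a linear taper and reusing the 300 Ω and 30-turn figures from above, the same partial turn moves a multi-turn trimpot through a far smaller resistance change.

```python
# Minimal sketch relating dial position to resistance for single-turn and
# multi-turn potentiometers. A linear taper is assumed; the 300-ohm maximum
# and 30-turn count reuse the illustrative figures from the text.

R_MAX = 300.0  # ohms

def resistance(turns_made, total_turns):
    """Resistance after turning the dial, assuming a linear taper."""
    fraction = min(max(turns_made / total_turns, 0.0), 1.0)
    return fraction * R_MAX

print(resistance(0.5, 1))    # half a turn on a single-turn dial -> 150.0 ohms
print(resistance(0.5, 30))   # the same half turn on a 30-turn pot -> 5.0 ohms
```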
A multi-turn trimpot is identified by a metal screw located to the side of the
trimpot. A single-turn trimpot usually has a plastic dial. Due to their greater
level of sophistication, multi-turn trimpots are more expensive.
Conclusion
At one point or another in your robotics career, you will need to conveniently
change resistance, for example to adjust the brightness of different LEDs, and
the way to do this is to use a potentiometer. There are many kinds of
potentiometers, but the ones often used in solderless breadboards are
trimpots. All potentiometers have a mechanism by which their resistance
value is changed, whether it is a finger knob or a dial requiring a screwdriver.
Moreover, the knob or dial can be adjusted across its range in ohms by a
single turn or multiple turns, and each choice poses advantages and
disadvantages.
The LM393 Voltage Comparator Chip
When building a light sensing robot, it is necessary to include a comparator
chip that compares the pair of sensors located on the breadboard. An LED is
lit up depending on the difference in voltage between the sensors.
Introduction
In a general sense, an analog voltage comparator chip is like a small
voltmeter with integrated switches. It measures voltages at two different
points and compares the difference in quantity of voltage. If the first point
has a higher voltage than the second point, the switch is turned on. However,
if the first point has a lower voltage than the second point, the switch is
turned off. Although there are different models of voltage comparator chips, I
will discuss a very common comparator, the LM393.
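Before examining the name, the switching behaviour itself can be sketched in a few lines of Python. The sensor voltages are illustrative assumptions, and a real LM393 output is an open-collector switch rather than a simple Boolean.

```python
# Minimal sketch of the comparator behaviour described above: compare two
# voltages and switch an output accordingly. The sensor values are
# illustrative assumptions; a real LM393 output is open-collector.

def compare(v_plus, v_minus):
    """Return True (switch on) when the first input is the higher voltage."""
    return v_plus > v_minus

left_sensor, right_sensor = 2.7, 1.9   # volts from a pair of light sensors
if compare(left_sensor, right_sensor):
    print("switch on: left input is at the higher voltage")
else:
    print("switch off")
```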
What does LM393 stand for?
LM393 stands for “Low Power, Low Offset Voltage, Single Supply, Dual,
Differential Comparators." I will define each part:
○ "Low Power" is an indication that the chip uses little electricity. This can be very useful for a robot that runs on low voltage batteries.
○ "Low Offset Voltage" is an indication that the chip can compare voltages of points that are very close together.
○ "Single Supply" is an indication that the chip uses the same power supply as the points being compared.
○ "Dual" is an indication that there are two comparators in the chip.
○ "Differential" is an indication that the chip is comparing the amount of voltage of each point to the other, and not comparing the voltage to a set value, such as below 4.0 V.
Examining the Datasheet
Each voltage comparator chip has a datasheet that includes important
information about features of the part and how it is an improvement over
previous models of that part. Engineers find the datasheet very useful, as it
indicates specific aspects of the comparator that were not present before.
Furthermore, the datasheet states average and maximum values for certain
aspects, including the amount of current the comparator uses, the
comparator’s optimal voltage range, and the comparator’s optimal
temperature range. The datasheet provided for the LM393 states that it has an
optimal voltage range of 2 V to 36 V. This makes the LM393 suitable with a
9 V battery, since this battery has a voltage range of approximately 5 V to 10
V.
Datasheets for the LM393 can be downloaded here:
http://www.datasheetcatalog.com/datasheets_pdf/L/M/3/9/LM393.shtml
Analyzing the Pinouts
If you inspect the LM393 comparator, you will notice metal wires that stick
out. These are called pins. Undoubtedly, the most significant information
about a comparator chip is how to connect the pins to the rest of the
components in a circuit. Since the LM393 comparator chip is too small for an
indication of the pins to be printed on it, the datasheet has an illustrated
image, a pinout, which shows the location and function of each pin. The
image to the right shows the pinout for the LM393 comparator.
Conclusion
When building your own robot on a solderless breadboard, one of the
components of the robot will be its brains. Part of this component can be a
comparator chip, depending on the type of robot you’re building. The
discussion on the LM393 comparator chip, which is frequently used by robot
builders, is directed to those who are interested in building a light sensing
robot, but is still useful for all who wish to incorporate a voltage comparator
chip in their robot.
How to Test LED Lamps
Anybody experimenting with robotics will need to know how to test an LED
light. For this, one requires a multimeter and the particular LED to be tested.
Obtaining the voltage consumption of the LED will aid one in assessing the
required battery life needed to sustain the circuit.
Testing an LED with a Multimeter
In a previous topic about LEDs, the distinct details of LEDs were discussed.
Now, let's see their practical use.
Although you can easily test an LED by connecting it to a circuit and seeing
if it will light up, you can also use a multimeter with a diode test function to
test an LED and discover a few more things about it too.
How to Test a Diode with a Multimeter
1. Connect the black lead to the COM terminal on the multimeter.
2. Connect the red lead to the Ω terminal, unless your particular model differs.
3. Turn the dial to the diode symbol on the multimeter. This allows for electric current to travel in one direction (the arrow) and not the other.
4. Turn the multimeter on. The display window should indicate either 0L or OPEN.
5. Choose a regular red LED.
6. Connect the black probe to the cathode end of the LED, which usually is the shorter end and/or cut flat at its bottom.
7. Connect the red probe to the anode end of the LED.
Interpreting the LED Test Results
If it occurs that the multimeter display doesn’t change from 0L or OPEN,
then it may be that you connected the probes in the wrong order, or that the
connections are not secure. Make sure the steps above are followed
accurately. Otherwise, it may indicate that the particular LED is damaged. If
the voltage in the display is below 400 mV, then it is possible that the
cathode and anode are touching, or the probes are touching. This is termed a
short circuit—when current passes directly from the cathode to the anode,
instead of passing through the LED.
If the steps are followed properly and the LED is undamaged, however, the
display should indicate a value of approximately 1600 mV.
When you are testing your LED, take notice of its brightness. If you are in a
lit room, shade the LED with your hands. A lower efficiency LED will glow
dimly, or may just gleam faintly, whereas a higher efficiency LED will glow
clearly.
LED Forward Voltage Drop
The value displayed on your multimeter is called the forward voltage drop.
This indicates the quantity of voltage used up by the LED, or dropped, when
current is traveling in the appropriate direction, forward.
This kind of data is extremely useful when it comes to building your own
robot or designing your circuit board. You will definitely need to keep track
of the total voltage used by your robot, whether it is from a LED or some
other component, in order to choose a battery strong enough to power it.
Therefore, it is equally important for you to purchase LEDs that your battery
can sustain. Usually, you should not purchase an LED with a forward voltage
exceeding 4 V, because most robotic circuits cannot function at such
voltages.
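As a hedged worked example of budgeting with the forward voltage drop, the sketch below sizes a series resistor by applying Ohm's law to the leftover voltage. The supply voltage, the roughly 1600 mV drop, and the target current are illustrative assumptions; check your LED's datasheet ratings.

```python
# A hedged sketch of using the measured forward voltage drop to pick a
# series resistor. The supply voltage, forward drop, and target current are
# illustrative assumptions; check the LED's datasheet.

V_SUPPLY = 9.0    # volts, e.g. a 9 V battery
V_FORWARD = 1.6   # volts, roughly the 1600 mV shown by the diode test
I_LED = 0.015     # amps (15 mA), a common indicator-LED current

# Ohm's law applied to the leftover voltage: R = (Vs - Vf) / I
resistor = (V_SUPPLY - V_FORWARD) / I_LED
print(f"series resistor: about {resistor:.0f} ohms")  # ~493, so use 510 or 560
```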
Basic LED Attributes
LEDs are used extensively in robots, or any electronic device for that matter.
The main reason for this is that LEDs come in a large variety of shapes, sizes,
and colors. This allows for many different functions, such as simple traffic
lights, to more complicated devices, such as digital clocks.
Light Emitting Diodes (LEDs) are very common components in any
electronic device. Their function is to transform electrical energy into visible
light. Moreover, they are inexpensive, durable, and are available in a number
of sizes and colors.
LED Sizes
One of the most common LED sizes is the T 1 ¾. The "T" indicates the bullet
shape, and the 1 ¾ indicates the diameter measured in 1/8ths of an inch. The T
1 ¾ is considered a standard, as it is the most inexpensive and comes in the
largest range of colors. Indeed, the smaller T 1 LEDs are more modern, as
they use up less space, and the surface mount LEDs are even smaller, but are
not used often for experimentation.
LED Shapes
Single LEDs exist in a considerable array of shapes and mounts. Although
the bullet shape is the most common, the side view and square types are also
available. The shape of the LED can behave as either a light straightening
lens or a light distributing lens. Multiple LEDs exist in a package. These are
very common in digital clocks. Specifically, the different packages receive
electric current at specific times to display the numerals.
LED Lens Clarities
Currently, there exist three common lens clarities: water-clear, white
diffused, and color diffused.
Water-clear LED lenses are composed of a transparent plastic. They are a
little brighter because the transparency of the lens does not allow for
absorption or dispersion of the light. This type of lens is a light straightening
lens, and is used in devices such as flashlights or traffic lights.
White diffused LED lenses are opaque, in that they are cloudy. These types
of lenses act to disperse the light, thus making it possible to view from all
sides. As opposed to the water-clear LED, the opaque lens absorbs the light
and spreads it over a greater viewing angle. Common uses for white diffused
lenses are for power and ready status.
Color diffused LED lenses are essentially the same as white diffused lenses,
except they are tinted in a specific color. Therefore, colored lenses also
spread the light consistently on all sides, but in a colored light.
LED Viewing Angles
The viewing angle of a specific LED indicates the extent to which the light
disperses out. However, viewing angles are only applicable with water-clear
lenses, since they focus the light head on.
LED Colors
LEDs are available in a variety of colors for commercial production. The
colors red, orange, yellow, and yellow-green are the most readily available,
whereas true green, blue, and white are not as much, due to their higher costs.
Fortunately, cheaper means of production will allow for the latter LEDs to
grow in use for experimenters in robotics.
Articulated Robots – SCARA and PUMA
Articulated Robots are no doubt the best robots for assembly and packaging
applications. SCARA and PUMA are the most widely used articulated robots
in industries for assembly and packaging and in academia for research and
analysis. Read more about these variants of articulated robots SCARA and
PUMA.
SCARA – Selective Compliant Assembly Robot Arm
SCARA is also called the Selective Compliant Articulated Robot Arm. This
robot was developed by a team at Yamanashi University in Japan headed by
Professor Hiroshi Makino. It is a simple articulated robot which can perform
assembly tasks precisely and fast. SCARA is most adept at pick and place
operations in any industrial assembly line, offering speed as well as precision.
SCARA is more or less like a human arm whose motion is restricted to
horizontal sweeping and vertical movement; it cannot rotate about any axis
other than the vertical.
Structure of SCARA
The base structure of SCARA is a chain of rigid members connected through
parallel revolute joints. The axes of these revolute joints are all vertical. Some
SCARAs have linear motion along the vertical axis at the base and can also
have vertical motion of the wrist attached to the end member. The parallel-axis
structure of SCARA makes it flexible, or compliant, in the XY direction
and rigid in the vertical or Z direction; SCARA's name derives from exactly
this selective compliance. Such a structure makes SCARA adept at the
pick-and-insert operations used mostly in the assembly of electronic circuits
and devices.
PUMA - Programmable Universal Machine for Assembly
PUMA is the most commonly used industrial robot in assembly, welding
operations, and university laboratories. PUMA was designed by Vic
Scheinman at MIT in the mid-seventies and was first manufactured by
Unimation. PUMA resembles the human arm more closely than SCARA.
PUMA has greater flexibility than SCARA, but with the increased compliance
comes reduced precision. Thus, PUMA is preferably used in assembly
applications which do not require high precision, such as welding and
stocking operations.
Structure of PUMA
PUMA is an articulated robot with a chain of members connected to each
other through revolute or rotary joints, as in SCARA, but the difference is
the orientation of the axes of the joints. In PUMA not all the joints are
parallel; the second joint from the base is orthogonal to the other joints. This
makes PUMA compliant in the XY as well as the Z direction. In all, PUMA
has six degrees of freedom. Each rotary joint is actuated by DC servomotors
and accompanying gear trains. The flexibility of the PUMA makes it capable
of being taught to perform various tasks.
Base Bodies of Robots: Articulated Robot Base
Articulated Robots have all the joints of single axis revolute or rotary type.
This chain of revolute joints provides greater freedom and dexterity in
movement of the articulated robotic arm. SCARA and PUMA are the most
popularly used articulated robots in assembly lines and packaging processes.
The types of robot bases discussed so far in the last two articles of this series
started with the simplest and most basic single-axis, single-degree-of-freedom
robot base, capable of movement in only one direction: the linear robot base,
which has only a prismatic joint. Moving to higher types of robot bases, the
number of joints first increases and then the types of joints start changing:
starting from one prismatic joint to three prismatic joints, and then, with each
subsequent type, one prismatic joint is replaced by a revolute or rotary joint.
Robot bases with prismatic joints require many components for the actuation
of the prismatic pairs. Cartesian, cylindrical or spherical, and articulated
robots all work in three-dimensional space and can reach any point in their
workspace. But the number of components required to obtain the desired
motion with prismatic joints is larger: hydraulic or pneumatic cylinders, rack
and pinion arrangements, solenoids, and many switches. The same motion in
robots with revolute joints can be obtained with increased ease and a reduced
number of components. The reduced number of components lowers the
probability of failure of the robot. Only the programming of the robot, and
the transformation from global coordinates to the actuator inputs, becomes
more complex.
Articulated Robot structure
The cylindrical robot base has one revolute joint and two prismatic joints.
The spherical robot base has two revolute joints and one prismatic joint.
Beyond these two types, the robot base is generalized as the articulated robot
base, with all joints being revolute. Any robot with revolute joints between all
its base members falls in the category of articulated robots. The number of
members in an articulated robot can be anything from two to ten. The axes of
the revolute joints can be parallel or orthogonal to each other, with some
pairs of joints parallel and others orthogonal.
Articulated robots have a base, called the waist, which is vertical to the
ground; the upper body of the robot is connected to the waist through a
revolute joint which rotates about the axis of the waist. Another link is
connected to the waist through a revolute joint perpendicular to the waist
joint. This joint between the waist and the link is called the shoulder of the
articulated robot, and the link may be called the arm. One more link is
connected to the arm through a revolute joint parallel to the shoulder joint;
this joint with the arm forms the elbow of the articulated robot. Finally, a
wrist and a gripper are attached to the last link. The structure of an articulated
robot with three revolute joints is very similar to the human arm.
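The waist-shoulder-elbow chain lends itself to a short forward-kinematics sketch: three joint angles determine where the wrist ends up. The link lengths and angle conventions below are illustrative assumptions.

```python
# Minimal forward-kinematics sketch for the waist-shoulder-elbow chain
# described above. Link lengths are illustrative assumptions; the shoulder
# angle is measured from the horizontal, the elbow angle relative to the arm.

from math import cos, sin, radians

L_WAIST, L_ARM, L_FOREARM = 0.4, 0.5, 0.4   # metres

def forward(waist_deg, shoulder_deg, elbow_deg):
    t1, t2, t3 = map(radians, (waist_deg, shoulder_deg, elbow_deg))
    r = L_ARM * cos(t2) + L_FOREARM * cos(t2 + t3)    # horizontal reach
    z = L_WAIST + L_ARM * sin(t2) + L_FOREARM * sin(t2 + t3)
    return (r * cos(t1), r * sin(t1), z)              # waist swings the plane

x, y, z = forward(30.0, 45.0, -30.0)
print(f"wrist at x={x:.3f}, y={y:.3f}, z={z:.3f} m")
```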
Base Bodies of Robots: Spherical Base Robot Control and Application
Spherical robots are more involved in construction and more dexterous in
operation, and so is their control. The control of spherical robots requires
three variables, as Cartesian and cylindrical robots do, but the coordinate
frames and their transformations are a bit more complex than in the other
types.
Spherical robots can perform tasks requiring movement in three-dimensional
space easily. They can reach points in space with ease, speed, and accuracy.
But to make such a three-dimensional robot work properly, reaching
specified points and doing its job there according to instructions, a substantial
amount of background work is required, involving calculations of coordinate
frames and control variables.
Work Frame of Spherical Robots
It is helpful to define coordinate frames at suitable points on the structure of a
robot to analyse and effect the movements of different parts and of the robot
as a whole. The primary frame, common to any type of robot, is the
coordinate frame fixed to the base of the robot. This frame is called the world
coordinate frame, or the global coordinate frame. The points in the
workspace are specified in this frame. In spherical robots, the second frame is
attached to the joint between the vertical member and the arm, and the third
frame is attached to the wrist of the robot.
Control Coordinates of Spherical Robots
The target points for the end effectors are specified, as per the task, in the
world coordinate frame. But to make the end effectors move to specific
points, the actuators attached to each joint have to be provided with the
values of their respective movements that produce the desired final effect.
This means that the control variables are the values provided to the actuators;
hence, the coordinates of the target points have to be converted into control
coordinates.
The global coordinates and the control coordinates are represented in matrix
form. Control coordinates are obtained from global coordinates through
various transformations of these matrices: for a particular transformation, a
matrix is multiplied by a transformation matrix to obtain the required result.
All these calculations are incorporated in a computer program and
implemented through microcontrollers and processors, so that the system
takes the global coordinates and supplies the actuators with the required
values of the control coordinates.
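For the spherical base specifically, the world-to-control conversion amounts to a change into spherical coordinates: one waist rotation, one arm elevation, and one telescopic extension. The sketch below assumes this axis convention and ignores joint limits.

```python
# A hedged sketch of the world-to-control transformation for a spherical
# base: a target (x, y, z) in the world frame becomes one waist rotation,
# one arm elevation, and one telescopic extension. The axis conventions are
# illustrative assumptions; joint limits are not modelled.

from math import atan2, sqrt, degrees

def to_control_coordinates(x, y, z):
    waist = atan2(y, x)                      # rotation about the vertical axis
    horizontal = sqrt(x * x + y * y)
    elevation = atan2(z, horizontal)         # tilt of the arm above horizontal
    extension = sqrt(x * x + y * y + z * z)  # telescopic joint travel
    return degrees(waist), degrees(elevation), extension

print(to_control_coordinates(1.0, 1.0, 0.5))
# ~ (45.0 deg waist, ~19.5 deg elevation, 1.5 m extension)
```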
Applications of Spherical Base Robots
The type of robot base to choose for a particular application is primarily
decided by the reach and workspace requirements of the task: we search for a
robot base with a similar reach and workspace. After this selection criterion,
other details such as the accuracy and repeatability of the robot are
considered. The common applications of spherical base robots are in material
handling operations on the production line, such as transferring or
pick-and-place of materials and stacking of objects.
Manipulation Robotic System: Tele-control or
Remotely Operated Robot
Tele-control robots, as the name suggests, are robotic systems that can be
controlled from a distance. The following article describes the different types
of remotely operated robots, their construction and design, as well as
applications.
Types of Remotely Operated Robots
Remotely operated robots are a type of manipulation robotic system that can
be controlled from a distant place. Remotely operated robots, also known as
tele-control robots, can be classified into five main subtypes on the basis of
their construction and characteristics.
The main types of remotely operated robots are as follows:
○ Command-controlled manipulator
○ Master-slave manipulator
○ Semi-automatic manipulator
○ Supervisory controlled robots
○ Interactive robots
All the above mentioned types have one or more controllable arms with a
series of joints, each having their own control system.
Command-controlled manipulator
In a command controlled manipulator, each of the joints is controlled by a
remotely located human operator who controls the manipulator from a
control panel. For example, in a type of research vessel used for the ocean
bed, an operator sitting inside operates the manipulators using a control panel
and by watching through a viewport.
Master Slave Manipulator
A master slave manipulator is also operated by a human from a safe distance,
but uses a master and slave mechanism, both having the same kind of
functionalities. The master and slave parts are attached using a joint that
transmits the motion from the master part to the slave part. This type of
system is generally used in areas where humans are exposed to radiation or
harmful chemicals or gases.
Semi-Automatic Manipulator
Semi-automatic manipulators are controlled using a joystick, which is located
at a remote place or attached to a control panel. Semi-automatic manipulators
are more sophisticated than the two manipulators above, as they allow a
higher degree of freedom and can perform any motion in the required
direction. The control system generates a signal on every movement of the
joystick. The signal is then converted into a control signal that is applied to
the respective actuator of the manipulator.
Supervisory Controlled Robots
Supervisory controlled robots and interactive robots are fully functional
robots. The supervisory controlled robot has a fully programmed task written
in its control system. Though a type of remotely operated robot, supervisory
controlled robots do not require any significant human involvement. A
human operator just watches the robot’s operation from a distance and
intervenes when a goal has to be assigned. Thus, the human controller only
has to assign a task to the robot, while the robot performs the task on its own.
Interactive robots
Interactive robots are a type that is one step superior to the supervisory
controlled robot. The main difference between supervisory controlled robots
and interactive robots is that the latter can determine its own actions through
its sensing and recognizing abilities and perform the required action on its
own.
Though all these types of robots are distinguished on the basis of their
characteristics, two or more types can also be combined to form a new type
of robot for a particular application. This type of robot is known as a hybrid
robot.
Spherical Base Robot: Construction and Workspace
The spherical base of a robot involves just one joint replacement relative to
the cylindrical base: the slider joint between the robot arm and the vertical
member is replaced by a revolute joint. This change of joint makes the
spherical base robot more dexterous than the cylindrical robot.
Spherical Base Robots
Spherical bases make robots capable of working in a spherical space. Though
the workspace cannot be more than three-dimensional, with an increasing
number of revolute joints the arm movements of the robot can become more
sophisticated. Spherical base robots, as the name says, work in a space
defined by the intersection of spherical volumes. With a wide range of
possible complex movements, robots with spherical bases find application in
many industrial processes.
Construction of the Base Structure
The spherical type of manipulator has a base member that can rotate about
the vertical axis. A member is connected to the base member through a
revolute joint, and this member can extend and contract like a telescope. This
arrangement of the base body makes the manipulator arm work in a space
defined as the intersection of spherical spaces.
The spherical base has the same number of joints, three, as the other
three-dimensional robot bases. Two joints are revolute and the remaining one
is prismatic, so that the arm of the robot can extend and retract. The end
effectors of the robot are mounted on this telescopic arm. The two revolute
joint movements can be actuated by direct coupling with servo motors, and
the telescopic arm movement can be actuated by a rack and pinion
arrangement. The spherical base has three degrees of freedom and three
variables to be controlled to operate it.
Reach and Workspace of Spherical Base Robot
The reach of a robot is the limit to which its end effectors can approach the
work points. For spherical robots, the reach of the end effectors is determined
by the limits of motion of the three joints of the base. For this type of robot
base, the reach of the end effectors is a sphere whose radius depends on the
maximum extension of the telescopic arm.
The workspace of a spherical base robot is the volume of intersection of two
concentric spheres. The dimensions of the outer sphere are set by the
maximum limits of the joint movements, and the radius of the inner sphere is
determined by the minimum limits of the joint movements, which are in turn
governed by design constraints. The range of rotation of the revolute joints at
the base and between the base member and the arm determines the sector of
the sphere that can be covered, and the range of movement of the telescopic
arm determines the range of radii of the spherical volume of intersection.
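The two-concentric-spheres workspace translates directly into a reachability test: a target is reachable only if its distance from the base falls between the minimum and maximum arm extension. The radii below are illustrative assumptions.

```python
# Minimal sketch of the workspace test described above: a point is reachable
# only between the inner and outer concentric spheres set by the telescopic
# arm's travel. The radii are illustrative assumptions; the sector limits of
# the revolute joints are not modelled.

from math import sqrt

R_MIN, R_MAX = 0.6, 1.5   # metres, minimum and maximum arm extension

def in_workspace(x, y, z):
    radius = sqrt(x * x + y * y + z * z)   # distance from the base origin
    return R_MIN <= radius <= R_MAX

print(in_workspace(1.0, 0.8, 0.3))   # True: inside the spherical shell
print(in_workspace(0.2, 0.1, 0.1))   # False: inside the inner sphere
```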
Base Bodies of Robots: Cylindrical Base Robot
Robots mounted on base bodies with revolute, cylindrical, or spherical joints
along with prismatic joints have better functionality and move with ease
between points in space. In continuation of the earlier articles in the series on
robot base bodies, more involved robotic base bodies are discussed here.
Cylindrical Base Robot
The body of this type is such that the robotic arm can move up and down
along a vertical member. The arm can rotate about that vertical axis, and the
arm can also extend or contract. This construction makes the manipulator
able to work in a cylindrical space whose dimensions are defined by the
extent of the arm (radius) and the movement along the vertical member
(height). The cylindrical manipulator base body has one revolute joint at the
fixed frame, one cylindrical joint about the axis of rotation, and one prismatic
joint in the arm of the manipulator.
The position of the end is defined by the extension of the arm, the height of
the arm, and the rotation about the main body axis. These are the three
variables to be controlled to position the end effectors of a cylindrical base
robot. In other words, this type of structure forms a cylindrical coordinate
system and can be controlled the same way.
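Since the structure forms a cylindrical coordinate system, converting a Cartesian target into the three control variables takes one line per variable. The sketch below assumes the base origin is at the foot of the vertical member and models no joint limits.

```python
# Minimal sketch of treating the base as a cylindrical coordinate system: a
# Cartesian target (x, y, z) maps to rotation, arm extension, and height.
# The origin is assumed at the foot of the vertical member; no joint limits
# are modelled.

from math import atan2, sqrt, degrees

def cartesian_to_cylindrical(x, y, z):
    rotation = degrees(atan2(y, x))  # rotation about the vertical member
    extension = sqrt(x * x + y * y)  # radial travel of the arm
    height = z                       # travel along the vertical member
    return rotation, extension, height

print(cartesian_to_cylindrical(0.6, 0.6, 1.2))
# ~ (45.0 deg rotation, ~0.849 m extension, 1.2 m height)
```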
Workspace of Cylindrical Base Robot
The reach of the end of the cylindrical robot is a cylinder whose dimensions
are determined by the motion limits of the different parts of the robot. But the
motion limits of any joint lie on both sides, a maximum as well as a
minimum.
Thus the workspace, the volume comprising the points where the end point
of the robotic arm can be positioned, is not a complete cylinder but the
intersection of two concentric cylinders. The dimensions of the inner cylinder
are determined by the minimum limits of the motion of the robot's parts.
Replacing a Cartesian Robot with a Cylindrical Robot
Both the Cartesian and cylindrical base robots, being able to reach points in
three-dimensional space, can be interchanged provided they share a common
minimum workspace for the application. Each robot base has its own suitable
applications: for some, Cartesian may be preferred, and for others, cylindrical
base robots may be suitable. Even so, the two types can be interchanged, with
some advantages and disadvantages.
As an advantage, cylindrical base robots can move between the required
points faster than the Cartesian robot, especially in the case when the two
points are at the same radius; out of the three motions, two are then along the
same axes. As a disadvantage, one can consider the effort required to
transform instructions from the Cartesian coordinate system to the cylindrical
coordinate system.
Introduction to Robotics Technology
Robotics technology consists of the devices, components, designs, and
programming that have gone into development of robots as we know them
today. A large sector is industrial robotics, with many of the industrial robots
being essentially a robotic arm. Read on for images and details about robots.
What is Robotics Technology?
Robotics technology has developed considerably since the author, Isaac
Asimov, coined the term robotics in the early 1940's in one of his science
fiction stories. Robotics is defined as: The science or study of the technology
associated with the design, fabrication, theory, and application of robots, in
the 2009 update of the Fourth Edition of The American Heritage Dictionary
of the English Language. This definition brings up the question, 'What is a
robot?' There are indeed a number of definitions in use for 'robot.' A usable
one that is attributed to the Robotics Institute of America (RIA) is: A robot is
a reprogrammable multi-functional manipulator designed to move materials,
parts, tools, or specialized devices through variable programmed motions for
the performance of a variety of tasks.
What do Robots Do?
In order to learn about robotics technology, it is helpful to learn a bit about
robots and their capabilities. When the idea of robots was first developing,
they were envisioned as humanlike in appearance and in behavior.
The greatest number of robots in use, however, are industrial robots, which
do not look at all like humans. The images in this section show a couple of
industrial robots, one doing material handling and the other doing welding.
Many industrial robots, like the two shown here, look somewhat like an arm,
and also go by the name 'robotic arm.'
A large percentage of the robots in the world are industrial robots used in a
wide variety of industries. Robots can do jobs that would be boring for
humans and jobs that are dangerous or dirty. Robotics technology has
developed to the point that robots can lift heavy objects, do precise assembly
line work, pick something up and place it precisely where it needs to be,
guide a machining operation, defuse bombs, or inspect sewers, just as a few
examples.
The first industrial robot, Unimate, was developed by George Devol, and was
used for die casting handling and spot welding by General Motors. This was
perhaps a predictor of things to come, because the automobile industry today
is the largest user of industrial robots and robotic arms.
In addition to industrial robotics, another large sector is robot toys and
robotics in games. Robots in this sector are more likely to have an appearance
that is more like humans, and to have motion capabilities and the capability
to do human types of activities.
The Components of Robots
One way of generalizing the nature of robotics technology is to categorize the
typical components of robots. The components of a robot would typically
include a power source, a means of sensing, actuators, a means of
manipulation, an overall structure, and perhaps a means of locomotion.
Robotics sensors are available to measure a wide range of
parameters, such as light, temperature, sound, or acceleration. Actuators
make robots or parts of robots move. The most commonly used actuator for
robots is the electric motor. Batteries are a commonly used power source. A
couple of ways that manipulation can be accomplished are with vacuum
grippers or with mechanical grippers. Mechanical grippers are the most
common means of manipulation. The first robots used as industrial robots
were stationary and so didn't need any means of locomotion. Now robotics
technology has advanced so that some robots require a means of locomotion
to do the tasks for which they are designed. The simplest means of
locomotion is four wheels, although some robots move by a number of
different methods, including walking, snaking, or hopping.
Summary
Robotics technology goes back at most 70 years, to the time when Isaac
Asimov first used the term robotics in his writing. The use of industrial
robots, such as robotic arms, has grown tremendously, so that now industrial
robots carry out a wide variety of tasks that are too boring, too dirty, or too
dangerous for humans to do.
Advantages of Robotics in Engineering
Some advantages of robotics, like improved quality and quantity of
production, are due to the mechanical nature and computerized control in
industrial robotics technology. Other advantages of robotics are due to
freedom from human characteristics like boredom and the ability to do
dangerous tasks.
Background
The advantages of robotics have become more apparent as industrial robotics
technology has grown and developed in the 50+ years since the first
industrial robot, Unimate, was put into use in the 1950s. About 90% of the
robots in use today are in the industrial robotics sector in factories. As of
2004, about 140,000 industrial robots were in use in the U.S., as reported by
the Robotic Industries Association (RIA). Robots are now also used in
warehouses, laboratories, research and exploration sites, energy plants,
hospitals, and outer space.
The advantages of robotics can be classified into four major categories: 1)
quality/accuracy/precision; 2) efficiency/speed/production rate; 3) ability to
work in environments that are unsafe or inhospitable for humans; 4) freedom
from human limitations such as boredom and the need to eat and sleep.
Advantages of Robotics #1: Quality/Accuracy/Precision
Many industrial robots are in the form of a robotic arm; Unimate, the first
industrial robot, had this appearance, as do contemporary industrial robotic
arms. Due to its mechanical nature and computerized control, a
robotic arm can carry out a repetitive task with great precision and accuracy,
thus providing improved, consistent product quality. This would apply to
quite a variety of production line tasks, like welding, assembling a product,
spray painting, or cutting and finishing.
Advantages of Robotics #2: Efficiency/Speed/Production Rate
The same features of industrial robotics technology mentioned above, the
mechanical nature of the equipment and the computerized control, make
industrial robotics technology more efficient and speedy, leading to higher
production rates than with human labor. Another aspect of efficiency is that
robots can be mounted on the ceiling and have no problem with working
upside down. This can lead to a savings in floor space.
Advantages of Robotics #3: Ability to Work in Environments that are
Inhospitable to Humans
This is an interesting set of advantages of robotics. There are a number of
tasks that are too dangerous, too exposed to toxins, or just plain too dirty for
humans to conveniently do them. These are ideal robotics tasks. This includes
tasks as simple as spray painting, because there is no need to worry about the
robot inhaling the paint fumes! It also includes such daunting tasks as
defusing bombs and such dirty tasks as cleaning sewers.
Advantages of Robotics #4: Freedom from Human Limitations like
Boredom
This set of advantages of robotics is due to the fact that human characteristics
like boredom from doing a repetitive task don't interfere with the functioning
of a robot. There is some overlap with the first two categories of advantages
of robotics, because the lack of interference from boredom leads to greater
accuracy, quality, and rate of production. There is more to this set of
advantages of robotics, however. Since a robot doesn't need to rest or eat, and
never gets sick, a robotic arm can work 24/7, with only limited occasional
downtime for scheduled maintenance.
Limitations of Robotics
An article about the advantages of robotics wouldn't be complete without
some discussion of the limitations of robotics. In spite of the very useful set
of advantages of robotics discussed above, there are some tasks for which
human beings are better suited than robots. For example:
○
Robots are not suited for creativity or innovation
○
Robots are not capable of independent thinking
○
Robots are not good at learning from their mistakes
○
Robots are not as suitable for making complicated decisions
○
Robots cannot adapt quickly to changes in their
surroundings
Human beings are needed for these types of tasks, so there is hope that we
will not become superfluous in a world dominated by robots at some point in
the future, as projected by some science fiction authors!
Summary
The advantages of robotics in an industrial setting, where most of them are
used, are notable, including increased product quality and quantity, ability to
work in inhospitable environments, and freedom from natural human needs
like food and rest. There are, however, some limitations of robotics, in areas
such as creativity, innovation, independent thinking, and making complicated
decisions, that lead to a projection that humans will remain in charge of
robots rather than vice versa.
Medical Robotics
Medical robotics is an interesting discipline directly related to human
health. The use of medical robots is becoming popular due to their numerous
advantages in the medical field.
Introduction
Medical robotics is a stimulating and modern field in medical science that
involves numerous operations and extensive use of telepresence. Telepresence
refers to the technologies that permit an individual to feel as if they were
present at another location without actually being there. Robots are utilized
in the discipline of medicine to execute operations that are normally
performed manually by human beings.
These operations can be highly specialized, facilitating the diagnosis and
treatment of patients. Though medical robotics may still be in its infancy,
the use of medical robots for numerous operations may increase the quality of
medical treatment. Utilization of telepresence in medical operations has
eliminated the barriers of distance, making professional expertise readily
available. The use of robotics and telepresence in the medical field
minimizes human error and brings specialized knowledge to inaccessible
regions without the need for physical travel.
History Of Medical Robotics
Medical robotics was introduced in the science of medicine during the early
1980s, first in the field of urology. Robotic arms were introduced and used
for medical operations. Early medical robots had inferior-quality imaging
capabilities. During this period, the National Aeronautics and Space
Administration also started exploring utilization of robotics for telemedicine.
Telemedicine comprises the use of robotics by physicians for the observation
and treatment of patients without being actually in the physical presence of
the patient. As telemedicine improved, it started to be used on battlefields.
During the close of the last century, medical robotics was developed for use
in surgery and numerous other disciplines. Continued advancement in
medical robotics is still in progress, and improved techniques are being
developed.
Features Of Medical Robotics
Medical robots are managed by physicians through computerized consoles. The
consoles may be near the patients or at an external site. Each console
controls one or more robotic arms that the physician uses to perform
operations on patients. The shape and dimensions of these arms depend upon
the type of surgery being performed. Medical data, including X-rays and other
diagnostic examinations, are fed into the robot before the start of surgery.
This information enables the robot to traverse the human body correctly.
The purpose of utilizing medical robotics is the provision of enhanced
diagnostic capabilities, increased patient comfort, and less hazardous and
more meticulous interventions. Robots are being used for multiple operations,
including joint replacement, kidney surgery, and open-heart surgery. The
patient images are visible to the physician, who can control the robot
accordingly by computer without needing to be present in the patient's room.
Robots have enabled physicians to perform operations on patients located at
long distances. This produces a friendlier working environment in which
physicians experience less fatigue. (Some surgeries may last for long
durations, causing extensive fatigue to the physicians.)
The use of robotics in the medical field makes many medical procedures
much more smooth and comfortable.
Dealing with Decommissioned Industrial Robots
After a robot has outlived its normal utility, its disposal becomes a challenge
for the enterprise using it. Resale, sending to a scrap yard, using it for landfill, and recycling are some of the options available for decommissioned
robots.
What are Robots?
By definition, robots are aids created to make work easier, faster, more
accurate, and safer to do. They are mechanically driven and have some
artificial intelligence that can be programmed to perform different
commands. Robots have been developed for many reasons, but have largely
found their major use in the manufacturing sector. Some of the work that can
be performed by industrial robots is the lifting of heavy weights, painting,
drilling, welding, and handling chemicals and hazardous materials. These
robots are mainly fixed and have limited movement.
Maintenance departments and facilities that handle hazardous material also
use robots to do work. Exploration missions to the moon and more recently to
Mars used robotic equipment to survey and collect data for research centers
on earth. Exploration robots are built to withstand extreme conditions and are
mobile; they need to be mobile to conduct geological surveys and collect data
and samples. Anti-terrorism agencies and the military use robots to neutralize
dangerous things like bombs or mines. Personal robots are rare since they
need to be programmed to do many different tasks.
Decommissioning of Robots
When industrial robots stop working or are replaced with newer versions, it is
called "decommissioning." The problem of disposal, of course, starts after the
robot has been decommissioned. Resale, sending to a scrap yard, using it for
land-fill, and recycling are some of the options available for decommissioned
robots. The fate that a decommissioned robot meets depends on the nature of
work for which it was developed, the materials used for making it, and the
law of the land.
Recycling
If no potential customer for the used equipment comes forward, retired
industrial robots should ideally be sent to recycling plants for the proper
disposal of the different materials used to build them. Most robots are built
using plastic and metals, which should not pose any dangers, but within these
robots are many electronic sensors, motion detectors, batteries, motors, and
other parts that may contain harmful materials. This does not apply to all
robots, but particularly does for robots that are constructed as a single
operational unit.
Robotic attachments like used pick and place robots, painting robots, and
precision welding, positioning, and manufacturing robots, as well as all
robots that receive commands from a central command, do not pose a high
pollution risk as they are composed mostly of mechanical parts. Nevertheless,
all used robots, if possible, should be sent to robot recycling units who
specialize in dismantling them and extracting all the reusable parts and
materials.
Storing
Robots that have been used in certain hazardous operations such as inside
nuclear power plants may never get proper disposal due to the nature of the
work they have to do. When they are retired or break down, replacement is
done and the old robot is moved to a storage area within the secured
perimeter.
Exploration robots are also never built for recycling, since their fate is
not predictable. Robots on the Moon and Mars are examples of these kinds of
irretrievable robots.
Back on Earth the problem is growing, with the risk that harmful elements
contained in the electronics and sensors that control a robot's mobility and
precision will contaminate water reservoirs and groundwater. Environmentally
conscious people have created innovative ways to contribute to reducing this
pollution.
Retired Robots and Art
Artists have begun using retired robots in designs to decorate offices,
businesses, homes, and parks. Some of these innovative artists create
sculptures depicting scenarios that modern-day people face on a daily basis.
Some interior decorators use retired robots, stripped of all harmful
components, to decorate the homes and rooms of robot enthusiasts. Robotic
arms once used for welding can serve as lamp holders or coat racks.
The Problem of E-Waste
With the world population increasing daily and demand for products and
resources rising, there is more urgency than ever to act responsibly toward
e-waste. With few recycling plants distributed around the world and more
e-waste being produced every day, humanity has to think twice before we find
ourselves buried in the waste itself.
With proper management and financial support, e-waste like used robots can
be recycled and the extracted parts reused in other models of robots. This
requires a lot of time and manpower, since the robots must be dismantled
piece by piece to do the job correctly.
PID Loop Tuning Methods for Robotics
PID loop tuning is a programming method that allows a user to vary any
combination of three variables to produce the desired effects within a closed
system. Here we will discuss common forms of PID tuning as well as
situations in which no PID is needed.
The PID controller, as the name implies, has three main coefficients that
affect the system response; varying these coefficients gives different
results. A PID algorithm compares the system output against a calibrated or
reference value, a standardized target, and based on the difference the
system output is changed to achieve the desired result. The best thing about
a PID controller is that it is tunable and scalable. You do not need to vary
all three coefficients: you can change just one, change two in combination,
or change all of them, depending upon your requirements. This is what we call
loop tuning of a PID controller.
PID and Robotics
Robotics means automated control of machines. PID controllers go hand in
hand with robotics, as they bring accuracy into the system. However, not
every robotic device requires a PID controller and tuned parameters. Certain
conditions decide whether you need PID loop tuning or not.
A PID controller is most straightforward to apply when your system is linear.
For non-linear devices, the characteristic properties and parameter values
change along the response curve, which means that at every point of the curve
you would need different parameter gains. If the response is linear, you can
change the parameter gains as a whole.
You do not need a PID controller under the following conditions:
○
If your reference output and system output match, you get
what you want, there is no need for loop tuning of PID.
○
If the system response of your system does not show any
error and there is no need to improve upon its characteristics,
you do not need PID.
○
If all the input parameters are working fine and the system
is not showing any dangerous output characteristics.
Different Methods for Loop Tuning in Robotics
The parameter gains of a PID loop can be manipulated using different tuning
methods. Here are some of the most popular loop tuning methods, tested by
time and trusted by engineers all over the world.
The Ziegler–Nichols method is arguably the most popular and reliable method
for tuning a PID loop. This method includes setting the D and I gains to
zero, then raising the proportional gain (Kp) until the output oscillates
steadily; the values recorded are the ultimate gain (Ku) and the oscillation
period (Pu). The P, I, and D gains are then set from the ultimate gain and
oscillation period so that the desired output is achieved. This method can be
used for both open- and closed-loop systems.
The basic formulas used in the Ziegler–Nichols method for tuning a PI
controller are: Kc = 0.45·Ku and Ti = Pu/1.2.
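As a worked illustration, the classic closed-loop Ziegler–Nichols table can be
applied once Ku and Pu have been measured. This sketch is not from the
original text; it simply encodes the standard textbook rules.

```python
# Classic Ziegler-Nichols closed-loop tuning rules (standard textbook table),
# applied to a measured ultimate gain Ku and oscillation period Pu.
def ziegler_nichols(ku, pu, kind="pid"):
    if kind == "p":
        return {"kp": 0.5 * ku}
    if kind == "pi":
        kp, ti = 0.45 * ku, pu / 1.2
        return {"kp": kp, "ki": kp / ti}
    # PID rule: Kp = 0.6*Ku, Ti = Pu/2, Td = Pu/8
    kp, ti, td = 0.6 * ku, pu / 2.0, pu / 8.0
    return {"kp": kp, "ki": kp / ti, "kd": kp * td}

print(ziegler_nichols(ku=4.0, pu=2.0))  # {'kp': 2.4, 'ki': 2.4, 'kd': 0.6}
```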
The Cohen–Coon method is suitable only for open-loop systems, and it can only
be used for time-delay models of the first-order class. It corrects the
steady-state response given by the Ziegler–Nichols method. Cohen–Coon is an
offline method, whereas Ziegler–Nichols is an online method. For every tuning
cycle a steady state must be achieved, because in offline methods the step
change in the input is applied only from a steady state.
Automatic tuning based on relay feedback is another method, an alternative to
the conventional continuous cycling technique. It is also known as the Auto
Tune Variation (ATV) method. It is performed on closed-loop systems and is
more efficient for processes with long time constants than conventional
trial-and-error or step methods.
Advances in technology have made it possible to perform loop tuning and
visualize the changes taking place on screen. LabVIEW is one such tool that
helps in monitoring and optimizing control systems using graphical
flowcharts. Another tool is the Robust Control Toolbox from MATLAB. This tool
minimizes overshoot and keeps a check on steady-state errors. It also offers
approximation algorithms for model order reduction.
Honda Asimo - How Long to Robots in the Home?
You must have seen them in movies. You must have read about them in novels.
They were once fictitious, but now they are real. Who are we speaking about?
The humanoid robots, of course! Yes, they are ready to work for you as your
servants! Read on to learn more about the Honda Asimo and other robots.
What is ASIMO?
Being an assistant to people was the idea in mind when ASIMO (Advanced
Step in Innovative MObility) was being created. ASIMO, which is 4’ 3" or
rather 130 centimeters tall, is a perfect helper in and around a house and can
also help a person who is confined to bed or a wheelchair. The size of this
robot makes it easier to communicate directly with a person who is sitting up
in bed or in a chair.
Technically speaking, ASIMO is an extremely stylish piece of equipment. As a
marketing tool, ASIMO has created a sensation, generating a volley of news
reports with each promotional stunt. Honda had in mind selling units as
domestic servants when it brought out the robot in 2002.
A robot having the capability to walk up and down stairs, clasp objects, and
work together with humans looks like an ideal fit for elderly care. Six years
later, Asimo still costs more than $100,000 to rent. The humanoid has not yet
made anyone's bed and is just the most recent in a chain of letdowns for
would-be household robots. So, given what Asimo can and cannot do, the
question is: when will this robot get a job?
Isaac Asimov's "Three Laws of Robotics" states that:
1.
A robot may not injure a human being or, through inaction,
allow a human being to come to harm.
2.
A robot must obey orders given it by human beings except
where such orders would conflict with the First Law.
3.
A robot must protect its own existence as long as such
protection does not conflict with the First or Second Law.
A key feature of Asimov’s Three Laws is that they are unchangeable. He
anticipated a robot brain of such complexity that it could only be specified
by humans at the mathematical level. Consequently, the Three Laws are not in
the textual form demonstrated above, but were programmed in mathematical
terms straight into the center of the robot brain. This programming could not
alter in any important way during the course of the robot’s life. Asimov had
actually assumed that a robot would be programmed with all things it
required to operate before its activation.
Thus the name ASIMO can be easily associated with Asimov, the iconic
science-fiction writer who visualized intelligent humanoid robots in his tales
and was the first to establish the three laws of robotics, controlling human-machine interactions.
Asimo serving drinks
RIBA, the robot nurse
"RIBA" stands for Robot for Interactive Body Assistance and is perhaps the
first robot that can pick up or lay down a real human weighing up to
61kg/134lbs from or to a bed or wheelchair. Oddly, this is a robot that
requires a human assistant. RIBA uses its very strong human-like arms and
novel tactile guidance methods, applying high-accuracy tactile sensors, to do
many of the lifting jobs that a nurse does. Before you buy RIBA, just
consider the number of times patients in your hospitals and care institutions
are lifted and moved every day; it could quickly establish its worth. It
would also relieve the many caregivers who end up fighting bad backs,
injuries, and exhaustion. Apart from this, patients will no longer suffer
from poorly executed moves.
Are we safe from robots thinking for themselves?
Experts say that robots which can think for themselves could one day be
available to mind our children and the aged and to patrol our streets.
Scientists state that a new generation of robots working without human
guidance is in the making, and that within about five years robots could be
seen working in care homes, monitoring prisons, and helping police trace
criminals.
Alan Winfield, professor of electronic engineering at the University of the
West of England in Bristol, said “it would not be long before technological
advances made it possible for robots to be introduced in the home, as well as
prisons and the police service."
Further, while speaking at a debate on robot ethics at the London Science
Media Centre, he said, "It is highly likely that in a number of years robots
will be employed both for child-minding and care for the elderly. But the
danger is that we will sleepwalk into a situation where we accept a large
number of autonomous robots in our lives without being sure of the
consequences. The outcome could be that when given a choice the robot
could make the wrong decision and someone gets hurt. They can go wrong
just like a motor car can. We should be aware of the future that we are letting
ourselves in for. We need to look at their safety and reliability." The
predictions of the professor could be seen in the hit Hollywood sci-fi film I, Robot, in which a slave robot with a mind of its own causes pandemonium.
The major advances in robotics in recent years have been as instruments of war. The
U.S. military is building up battlefield robots that will have the capacity to
make a decision as to when lethal force has to be used. At the Georgia
Institute of Technology in Atlanta, a battlefield robot has been trained to use
radar data and intelligence feeds and to make decisions based on a set of
moral rules.
Drawbacks
Even though at some point in the future robots may fit into the picture we
have drawn, nothing can be said for certain as of now. Although robots are
not alive, numerous delicate gestures have been added to Asimo's poses to
conquer the next barrier - the emotional one. Robots are made to look safe so
that we don't imagine a cold, calculating, heartless being moving around in
our homes chopping vegetables, holding boiling water, or sitting in
rest/recharge mode next to us on the sofa.
So if the robot does all of our household work, should we as family members
only sit and watch TV or sleep? Robots like Asimo generate a fantasy that is
not essentially human, but something else: roughly a cartoon-like,
exaggerated caricature of human mannerisms.
If robots help us at home and do industrial jobs more effectively and
efficiently, then what about us? Do we become useless? If robots replace
human jobs, fewer positions will remain for human applicants, raising the
unemployment rate. This is indeed debatable!
Conclusion
Now the time has come to decide whether Asimo and future generations of
robots will blend comfortably into homes once they are more easily available
to the general populace, or whether there will be a revolt against machines
working autonomously without the direct physical guidance of human hands. One
thing is for sure: just as computers were used in ways that revealed
humanity's true character, robots will tell us even more about ourselves, in
ways we cannot yet imagine. Asimo is equipped to bond with your family. What
can he do? It is left to the purchaser to decide, but the rule of caveat
emptor has to be followed in the process.
So when can you buy a robot for the home? Right now! You can get an
Asimo for only $1 million.
The Brains and Body of a Robot
Robots are an essential part of technology today. However, most people
overlook the similarities robots have with humans, in terms of a control
center and a structure. Two key components of most robots are the brains, in
the form of microcontroller chips, and the body.
Introduction
From an anatomical point of view, most robots generally have a brain, a
body, a power source, sensors, and action and feedback mechanisms.
Specifically, I would like to discuss the brains and body of a robot.
Brains
Robot brains come in a wide variety of forms. In fact, some robots are built
without brains and are controlled by people through remote control. Robots
can also be built with a brain that is spread out in different parts of the robot.
For instance, basic chips can be used to operate individual parts of the robot,
such as an arm or leg, with these individual parts working independently of
the other parts. Furthermore, robots can be built with brains that are located
far away from their bodies, such as in a computer.
All things considered, the number one choice for robot brains is the
microcontroller chip. Microcontrollers are similar to the microprocessor
chips found in computers, except that a microcontroller is somewhat like a
tiny computer in itself, with a small amount of memory and storage space
built directly into the chip. Whereas the chips found in computers dedicate
their channels to high-speed memory connections, microcontroller chips have a
much larger variety of input and output ports. These ports can connect to
buttons, sensors, and other devices.
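As a small illustration of that input/output role, here is a hypothetical
MicroPython sketch; the board, pin numbers, and wiring are assumptions, not
details from the text.

```python
# Hypothetical MicroPython sketch: one input port polls a pushbutton and one
# output port drives an LED, the kind of I/O a robot microcontroller handles.
from machine import Pin

button = Pin(14, Pin.IN, Pin.PULL_UP)  # assumed pin wired to a pushbutton
led = Pin(2, Pin.OUT)                  # assumed pin wired to an LED

while True:
    # With a pull-up, the pin reads 0 while pressed; light the LED then.
    led.value(0 if button.value() else 1)
```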
Although it may not seem evident, we are surrounded by microcontrollers.
These useful chips are found in vehicles, household appliances such as
dishwashers, VCRs, TVs, radios, and other home and work appliances.
Fortunately, the high demand for these microcontroller chips has made these
chips inexpensive and abundant.
Body
Although the body of a robot may seem unimportant in comparison to the
other parts of a robot, many who attempt to build a robot fail to incorporate a
framework into their design. Many who build homemade robots end up with
robots that are too fragile, with exposed circuit boards and
wires sticking out. This usually leads to a robot that either collapses or moves
unevenly. A proper frame for a robot not only ensures that the robot stays in
one piece, but also protects the robot from injury. An excellent example of
the use of a reliable framework is in a production line robot. These robots
have special designs that enable them to efficiently operate in environments
that most humans would find potentially dangerous. Production line robot
design involves a certain number of axes that determines the versatility of the
robot. However, it is the kinematics of the assembly line robot that
determines the arrangement of the robot. Some parts of the robot are bolted
together while others are welded together, allowing for both rigid and
dynamic parts in the same robot. It is the kinematics of the framework of a
production line robot that defines its purpose, whether for welding car parts
together or lifting heavy loads.
Another underestimated aspect of a robot is the visual appeal. Those who
desire to take up robot building as a hobby should learn that showmanship is
critical in how others will view your robot. Despite the fact that a robot may
be technically impressive, the finishing touches to the appearance of the
robot are what draw attention. The M&M robot is an example of an attractive and
innovative robot body design.
The Future of Robotics
It is anticipated by engineers and scientists that in the near future robots will
be seen generally at numerous establishments, including production units,
farming, hospitals, maintenance, construction, and in homes. Robots will be
able to substitute for individuals in most factories where extra precision is
necessary and production rate is important, work that is difficult for human
labor to perform correctly.
General Usage Of Robots In the Future
International experts on robotics are of the view that by the year 2020,
robots will be capable of observing and performing tasks, talking, and will
possess aptitude and intellect. The association of human beings with robots
will be ordinary and usual. In the near future, robots will not be complex
machines, but equipment or machinery utilized in everyday life, including
washing, assisting in moving disabled or injured people, working in
factories, etc.
Robotic Surgery
Doctors visualize that in the near future advanced robots will be utilized to
assist in carrying out long distance medical treatment including surgery,
diagnosis, and other medical treatment. This will enable the treatment to be
carried out in a shorter time, and it may not be necessary for the patients to
travel long distances, which presently may even involve travel from one
continent to another. Robots may also assist in carrying out minor medical
treatment; instead of prescribing a pill for a certain ailment, a small robot
may be introduced into the bloodstream to sense the cause of the ailment and
subsequently deliver appropriate medicine to the affected part of the body.
Improvement In Human Brain
Robotic implants will be introduced into parts of human beings, such as
insertions in the brain, which will enhance memory and improve thinking. Nano
robots may even be injected into the blood to clean and scrub blood vessels.
With the assistance of robotic brains, the human mind may be able to perform
100 trillion operations per second.
Robots In Biomimetics
The next focus for modern robots will be biomimetics, an area concentrating
on the manufacture of machines that take guidance from nature as inspiration
for their appearance and behavior. Broad research is presently being carried
out in this field.
Manipulation Robotic Systems: Automatic Type
Robot
Manipulation Robotic Systems, an integral part of many industrial
manipulators, are mainly divided on the basis of the type of control system
they have. Automatic robots, a type of manipulation robotic system, are
considered to be one of the earliest robotic systems.
Automatic Type Robots are an integral part of several industrial robotics
systems, and are supposed to be the earliest type of robotic system present in
the market today. Automatic robots are divided into four main categories,
mainly based on their characteristics and application.
Manipulation robotic systems can be classified into three main types:
○
Autonomous controlled
○
Remotely controlled
○
Manually controlled
Autonomous robotic systems are mainly used as industrial robots, whereas
remotely controlled robots are used in areas or environments that are
inaccessible or harmful to humans. Manually controlled systems are used for
handling goods or for transportation purposes.
Classification of Autonomous Robotic Systems
Out of the three types of manipulation robotic systems, the autonomous
system can be further classified into
○
Programmable
○
Non Programmable
○
Adaptive
○
Intelligent
Programmable and Non Programmable Automatic Robots
Out of these, non-programmable robots are of the most basic type. In fact a
non-programmable robot is not even considered a robot, but a manipulator
devoid of any reprogrammable controlling device. One example of such
robots is the mechanical arm used in industries. Non-programmable robots
are generally attached to programmable equipment used in manufacturing
industries for mass production.
A programmable robot, as the name suggests, is a first-generation robot whose
joint motions can be reprogrammed according to the kind of application. The
function and application of the robot can be changed just by reprogramming
it; however, once programmed, it performs a specific function in a fixed
sequence and fixed pattern. Many industrial robots are of the programmable
type, repeating the same motion whether or not a part is present in their
grip. The main drawback of this type of robot is that, once programmed, it
can only handle an object of a specific type and shape, placed in a
particular position. As this type of robot cannot change its motion when
required, it is always a bit difficult to use in a changing application.
Adaptive Robots
Adaptive robots are also industrial robots, but of a kind more sophisticated
than programmable robots. Unlike programmable robots, adaptive robots can
adapt to a certain extent and, after evaluating a particular situation, perform
the action required. In order to enable them to perform these tasks, adaptive
robots are equipped with sensors and a control system. The sensors sense the
change in environmental conditions, and the control system, by assessing the
signals from sensors, provides the required motion. Adaptive robots are
generally used in situations wherein it is difficult to program a robot to
perform actions in a particular pattern due to obstacles or other moving parts.
Adaptive robots are used in functions such as welding, spray painting, etc.
Intelligent Robots
Intelligent robots, as the name suggests, are the most intelligent of all types,
with several sensors and microprocessors for storing, analyzing, and
processing the data. Intelligent robots can perform any kind of work because
of their ability to analyze any situation and provide the necessary movement
according to that. For this, the system is provided with several manipulators,
having their own controllers.
Recommended Additional Features for Multimeters
in Robot Building
To make robot building even more efficient, don't purchase just any
multimeter; purchase one with extra features that are not typically included
in basic multimeters. Although the price will be higher, it is well worth the
purchase.
Previously, I wrote about the required features of multimeters for robot
building. In this article, I will cover other nice-to-have recommended features
that are worth paying extra for. However, if your budget is limited, you can
stick with a more basic multimeter. The recommended features simply make the
task of robot building easier and more efficient.
Capacitance
Capacitance is an indication of how much electric charge can be stored, and
is measured in farads. For robotic purposes, it is recommended to have a
capacitance range of 20 pF (0.00000000002 F) to 0.01 F. Although most
multimeters do have a capacitance feature, most do not have as wide a range.
Diode
The diode mode is an indication as to how much electrical pressure is needed
to turn a semiconductor on, and is measured in volts. For robotic purposes,
the diode mode is critical since it can aid us in identifying different types of
diodes, and identifying and testing transistors.
Continuity
Continuity is an indication as to whether or not there is an electrical
connection between two different points, and is measured in ohms. The
continuity feature comes in two forms--audible and inaudible. Audible
continuity means that a beep is sounded when an electrical connection exists
between two points. Inaudible continuity means that you will have to use the
resistance setting, in which you must look at the display instead of listening
for a beep.
Frequency
Frequency is an indication as to how many times something is happening
every second, and is measured in hertz. For robotic purposes, it is
recommended to have a frequency range of 0.1 Hz to 50,000,000 Hz.
Although some multimeters do have a frequency feature, most do not have as
wide a range.
Duty Cycle
Duty cycle is an indication as to how frequently a measurement is high as
compared to how frequently a measurement is low, and is measured in
percentages. For robotics, the duty cycle measurement is crucial for pulse-width modulation. This feature is usually accessed in the frequency mode.
Most multimeters do not have the duty cycle feature.
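In other words, duty cycle is just the high time divided by the period,
expressed as a percentage. A tiny sketch with assumed numbers:

```python
# Duty cycle = fraction of each period the signal is high, as a percentage.
def duty_cycle_percent(high_time, period):
    return 100.0 * high_time / period

# A pulse 1.5 ms high out of a 20 ms frame (a typical hobby-servo signal):
print(duty_cycle_percent(1.5e-3, 20e-3))  # 7.5 (%)
```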
Transistor
The transistor mode gives an indication as to the amount of amplification of a
transistor by measuring the hFE. By inserting a transistor into the socket
holes, you can determine its bipolar nature, whether npn or pnp. If your
multimeter does not have a transistor mode, you can use the diode feature to
determine the bipolar nature of a transistor.
Identifying and Purchasing Resistors
Robots use many resistors on their circuit boards. Resistors are so crucial
that electronic circuits would not operate without them. Therefore, it
is important to know how to identify resistors if one wants to build or analyze
an electronic part.
The water supply that is connected to your home comes through a rather large
pipe. But the pipe leading to the shower is much smaller. The reason for this
is to take up less space and deliver less water to the location. Similarly,
resistors limit the flow of electricity in order to reduce waste and efficiently
deliver the required amount of electricity to each part.
Obtaining a Resistor Variety Pack
Resistors are extremely valuable, yet inexpensive, so you’ll want to acquire a
variety of values. A good place to start is with a ½-watt, 5% tolerance,
carbon-film variety pack. Such packs are sufficient for starting out; the
Jameco #107879 pack comes with a cabinet as well, which is always nice.
Understanding Size and Tolerance
It is recommended to begin with the ½-watt through-hole resistors since their
size makes it easier to decipher their color-coded bands. It is also fine to use
the smaller ¼-watt through-hole resistors. Surface-mount resistors, however,
are too small to experiment with, unless you’re experienced, so it’s best not
to use them if you’re a beginner.
A 5% tolerance indicates that a 100 Ω resistor could be as low as 95 Ω or as
high as 105 Ω, which is accurate enough for homemade robots. One can purchase
1% tolerance resistors, which are more expensive, but the robot will not
notice the difference.
Cutting it Out
Resistors often appear connected together with tape. This is done because
they are manufactured in long reels to be fed into robotic part-placement
machines. A distributor can purchase a reel and cut different lengths for
custom orders.
Peeling off the tape is an option, but residue does remain on the ends of the
resistors. This residue can prevent an effective metal-to-metal connection
when it comes to prototyping, and can stick to holes and sockets. A better
option is to use a wire cutter tool to snip the ends off the resistors from the
reel tape.
Use a wire cutter rather than scissors for this, as scissor blades will
become dull and may deform.
Resistance and Ohms
Ohms is the unit applied to resistance and is abbreviated with the symbol Ω.
Therefore, 100 ohms is exactly the same as 100 Ω. If you remember that
resistance is like the small pipe leading to the shower head, a larger Ω
value indicates greater resistance, that is, a smaller pipe.
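Putting the color bands and the tolerance range together, here is a hedged
sketch of four-band decoding; the helper names are my own, and the tables
hold the standard EIA digit and tolerance values.

```python
# Decode a standard 4-band resistor color code (digits, multiplier, tolerance).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9}
TOLERANCE = {"gold": 0.05, "silver": 0.10}  # the 5% and 10% bands

def decode_resistor(band1, band2, multiplier, tolerance_band):
    ohms = (10 * DIGITS[band1] + DIGITS[band2]) * 10 ** DIGITS[multiplier]
    tol = TOLERANCE[tolerance_band]
    return ohms, ohms * (1 - tol), ohms * (1 + tol)

# Brown-black-brown-gold is 100 ohms at 5%: prints (100, 95.0, 105.0)
print(decode_resistor("brown", "black", "brown", "gold"))
```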
Self-Learning Control System Concepts Simplified
Control system design and operation have improved with the advance of
technology, but the basic premises of their functioning and optimization are
necessary information for today's engineers. One can think of them as
complicated feedback systems, with better brains than they had before.
What is a Control System?
In general, a control system is a part of a larger system that manages the
behavior of the larger system itself. The control system of a computer is the
set of programs in the operating system that interacts with the CPU and
peripherals in order to run the computer. In a CNC machine, the control
system likely consists of a controller board (logic board) and a number of
hydraulic actuators operated by control valve switches.
With the advancement of technology, control systems have become more
sophisticated and intelligent. In the old days, mechanical switches were used
for controlling a system, and then came relays, then programmable logic
controllers (PLCs), microcontrollers, and now, microprocessors.
All the control systems can be broadly classified under two headers:
sequential control systems and feedback control systems.
What is a Self-learning Control System?
The self-learning or adaptive control system is a kind of advanced,
intelligent, feedback-based control system. Its typical parts are:
○
Actuators: This part of the system actually performs the
output function. For example, the pneumatic cylinder of an
automatic door-closing system is its actuator. Hydraulic and
pneumatic cylinders and electric and hydraulic motors are used
as actuators in most systems.
○
System Performance Parameter Sensors: Performance of
the actuators is sensed using these sensors to get the feedback
signals about how much deviation exists in actuator
performance. Mechanical sensors, laser-based sensors, and
proximity sensors are a few of the huge variety of sensors used in industry.
○
Environment Parameter Sensors:
These sensors are used for monitoring the change in the
operating environments. The signals from these sensors help in
refining the control parameters otherwise designed for the ideal
operating environment.
○
Data Acquisition System: The signals sent by the System
Performance Parameter Sensors as well as the Environment
Parameter Sensors are collected and converted to useful data
or knowledge by the data acquisition system.
○
Knowledge Base: Useful data or knowledge is stored
systematically here. As the system matures, the size of the knowledge base
increases and so does the efficiency of the self-learning control system.
○
Decision Making System: This is the brain. It sends
optimized signals to the actuators based on the knowledge,
experience, and present situation.
Applications of Adaptive Self-learning Control Systems
○
Active suspension systems: These intelligent automobile
suspension systems use separate actuators to support the
suspensions for the individual wheels. When a wheel rolls over
a bump in the road, the control system senses it and makes a
decision to release some pressure from the actuator connected
to that wheel, which in turn allows the suspension of that
wheel to rise without disturbing the rest of the car. When one
wheel of the car finds a depression or pothole in the road, the
system increases the hydraulic pressure of that actuator so
that the actuator pushes the suspension of that wheel downward,
and thus the rest of the car doesn't get destabilized.
○
Auto-pilot or cruise control systems: The autopilot is an
intelligent control system used mainly to fly aircraft without
the need for human intervention. The system continuously
monitors the system parameters (like engine speed, vibrations,
engine temperature, and airspeed) as well as the environment
parameters (like altitude, humidity, and wind speed) and
continuously makes optimal flying decisions. Similar systems
are used for spacecraft and ships as well.
○
Adaptive mobile robots: The adaptive control system of
intelligent mobile robots helps in acquiring and applying
knowledge from the surrounding environment. It learns about
things like new obstacles, and the motion of the robot keeps
improving.
Conclusion
The self-learning adaptive control system is a kind of advanced intelligent
feedback-based control system. It has complex feedback and environment
sensors, a decision making system, a knowledge base, and work-producing
actuators. The system makes intelligent operating decisions based on the
current operating conditions and past experiences. This kind of control
system is most suitable for applications where complete prior information of
the time-varying control parameters is not possible. For example, the weight
of an airplane continuously decreases as the fuel is depleted, thus increasing
the range the airplane can fly on the remaining fuel. While designing this
kind of system, it must be kept in mind that these applications cannot afford a
dysfunctional control system, so system reliability is paramount.
Applying Artificial Intelligence and Machine
Learning in Robotics
Artificial intelligence (AI) is set to disrupt practically every industry imaginable, and industrial robotics
is no different. The powerful combination of robotics and AI or machine learning is opening the door to
entirely new automation possibilities.
Currently, artificial intelligence and machine learning are being applied in limited ways and enhancing
the capabilities of industrial robotic systems. We have yet to reach the full potential of robotics and
machine learning, but current applications are promising.
4 Tenets of Artificial Intelligence and Machine Learning in
Robotics
There are four areas of robotic processes that AI and machine learning are impacting to make current
applications more efficient and profitable. The scope of AI in robotics includes:
1.
Vision – AI is helping robots detect items they’ve never seen before
and recognize objects with far greater detail.
2.
Grasping – robots are also grasping items they’ve never seen before
with AI and machine learning helping them determine the best position
and orientation to grasp an object.
3.
Motion Control – machine learning helps robots with dynamic
interaction and obstacle avoidance to maintain productivity.
4.
Data – AI and machine learning both help robots understand physical
and logistical data patterns to be proactive and act accordingly.
AI and machine learning are still in their infancy in regards to robotic applications, but they’re already
having an important impact.
Two Types of Industrial Robot Applications Using Artificial Intelligence and
Machine Learning
Supply chain and logistics applications are seeing some of the first implementations of AI and machine
learning in robotics.
In one example, a robotic arm is responsible for handling frozen cases of food that are covered in frost.
The frost causes the shape of the objects to change – the robot is not just presented different parts
occasionally, it’s being continuously presented with differently shaped parts. AI helps the robot detect
and grasp these objects despite the variations in shape.
Another prime example of machine learning involves picking and placing over 90,000 different part
types in a warehouse. This volume of part types wouldn’t be profitable to automate without machine
learning, but now engineers can regularly feed robots images of new parts and the robot can then
successfully grasp these part types.
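As a loose illustration of the pattern behind that picking example, even a
nearest-neighbour matcher over image feature vectors captures the idea: store
labelled examples of new part types, then match each incoming part to the
closest stored example. This sketch is my own simplification; real systems
use learned visual features rather than random vectors.

```python
import numpy as np

# Toy nearest-neighbour "part recognizer" (illustrative only).
rng = np.random.default_rng(0)
catalog = {f"part_{i}": rng.normal(size=64) for i in range(5)}  # fake features

def recognize(features):
    # Return the catalog part whose stored feature vector is closest.
    return min(catalog, key=lambda name: np.linalg.norm(catalog[name] - features))

# A slightly perturbed view of part_3 should still match part_3.
query = catalog["part_3"] + 0.1 * rng.normal(size=64)
print(recognize(query))  # part_3
```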
AI and machine learning will have a transformative impact on industrial
robots, even though these technologies are still in their infancy.
A safe, wearable soft sensor
This biocompatible sensor is made from a non-toxic, highly conductive liquid
solution that could be used in diagnostics, therapeutics, human-computer
interfaces, and virtual reality.
By Leah Burrow
Children born prematurely often develop neuromotor and cognitive
developmental disabilities. The best way to reduce the impacts of those
disabilities is to catch them early through a series of cognitive and motor
tests. But accurately measuring and recording the motor functions of small
children is tricky. As any parent will tell you, toddlers tend to dislike wearing
bulky devices on their hands and have a predilection for ingesting things they
shouldn’t.
Harvard University researchers have developed a soft, non-toxic wearable
sensor that unobtrusively attaches to the hand and measures the force of a
grasp and the motion of the hand and fingers.
The research was published in Advanced Functional Materials and is a
collaboration between the Wyss Institute for Biologically Inspired
Engineering, The Harvard John A. Paulson School of Engineering and
Applied Sciences (SEAS), Beth Israel Deaconess Medical Center, and
Boston Children’s Hospital.
One novel element of the sensor is a non-toxic, highly conductive liquid
solution.
“We have developed a new type of conductive liquid that is no more
dangerous than a small drop of salt water,” said Siyi Xu, a graduate student
at SEAS and first author of the paper. “It is four times more conductive than
previous biocompatible solutions, leading to cleaner, less noisy data.”
The sensing solution is made from potassium iodide, which is a common
dietary supplement, and glycerol, which is a common food additive. After a
short mixing period, the glycerol breaks the crystal structure of potassium
iodide and forms potassium cations (K+) and iodide ions (I-), making the
liquid conductive. Because glycerol has a lower evaporation rate than water,
and the potassium iodide is highly soluble, the liquid is both stable across a
range of temperatures and humidity levels, and highly conductive.
“Previous biocompatible soft sensors have been made using sodium
chloride-glycerol solutions but these solutions have low conductivities,
which makes the sensor data very noisy, and it also takes about 10 hours to
prepare,” said Xu. “We’ve shortened that down to about 20 minutes and get
very clean data.”
Safe, soft sensors on the top and tip of the index finger detect the movements,
strain and force of the finger while performing different activities, such as
flexing and extending the finger and picking up weights and boxes.
The design of the sensors also takes the needs of children into account.
Rather than a bulky glove, the silicone rubber sensor sits on top of the
finger and on the finger pad.
“We often see that children who are born early or who have been diagnosed
with early developmental disorders have highly sensitive skin,” said Eugene
Goldfield, Ph.D., coauthor of the study, and Associate Faculty Member of
the Wyss Institute at Harvard University and an Associate Professor in the
Program in Behavioral Sciences at Boston Children’s Hospital and Harvard
Medical School. “By sticking to the top of the finger, this device gives
accurate information while getting around the sensitivity of the child’s
hand.”
Goldfield is the Principal investigator of the Flexible Electronics for
Toddlers project at the Wyss Institute, which designs modular robotic
systems for toddlers born prematurely and at risk for cerebral palsy.
Goldfield and his colleagues currently study motor function using the
Motion Capture Lab at SEAS and the Wyss Institute. While motion capture can
tell a lot about movement, it cannot measure force, which is critical to
diagnosing neuromotor and cognitive developmental disabilities.
“Early diagnosis is the name of the game when it comes to treating these
developmental disabilities and this wearable sensor can give us a lot of
advantages not currently available,” said Goldfield.
This paper only tested the device on adult hands. Next, the researchers plan
to scale down the device and test it on the hands of children.
“The ability to quantify complex human motions gives us an unprecedented
diagnostic tool,” says senior author Robert Wood, Ph.D., Founding Core
Faculty Member of the Wyss Institute and the Charles River Professor of
Engineering and Applied Sciences at SEAS. “The focus on the development
of motor skills in toddlers presents unique challenges for how to integrate
many sensors into a small, lightweight, and unobtrusive wearable device.
These new sensors solve these challenges – and if we can create wearable
sensors for such a demanding task, we believe that this will also open up
applications in diagnostics, therapeutics, human-computer interfaces, and
virtual reality.”
How robots are helping doctors save lives in
the Canadian North
It is the middle of the winter and a six-month-old child is brought with acute
respiratory distress to a nursing station in a remote community in the
Canadian North.
The nurse realizes that the child is seriously ill and contacts a pediatric
intensivist located in a tertiary care centre 900 kilometres away. The
intensivist uses her tablet to activate a remote presence robot installed in the
nursing station and asks the robot to go to the assessment room.
The robot autonomously navigates the nursing station corridors and arrives
at the assessment room two minutes later. With the help of the robot’s
powerful cameras, the doctor “sees” the child and talks to the nurse and the
parents to obtain the medical history. She uses the robot’s stethoscope to
listen to the child’s chest, measures the child’s oxygen blood saturation with
a pulse oximeter and performs an electrocardiogram.
With the robot’s telestrator (an electronic device which enables the user to
write and draw freehand over a video image) she helps the nurse to start an
intravenous line and commences therapy to treat the child’s life-threatening
condition.
This is not science fiction. This remote presence technology is currently in
use in Saskatchewan, Canada — to provide care to acutely ill children living
in remote Northern communities.
Treating acutely ill children
Advances in telecommunication, robotics, medical sensor technology and
artificial intelligence (AI) have opened the door for solutions to the
challenge of delivering remote, real-time health care to underserviced rural
and remote populations.
A team uses a remote presence robot to see a patient in the emergency room.
(University of Saskatchewan)
In Saskatchewan, we have established a remote medicine program that
focuses on the care of the most vulnerable populations — such as acutely ill
children, pregnant women and the elderly.
We have demonstrated that with this technology about 70 per cent of acutely
ill children can be successfully treated in their own communities. In similar
communities without this technology, all acutely ill children need to be
transported to a tertiary care centre.
We have also shown that this technology prevents delays in diagnosis and
treatment and results in substantial savings to the health-care system.
Prenatal ultrasounds for Indigenous women
Remote communities often lack access to diagnostic ultrasonography
services. This gap disproportionately affects Indigenous pregnant women in
the Canadian North and results in increases in maternal and newborn
morbidity and mortality.
We are pioneering the use of an innovative tele-robotic ultrasound system
that allows an expert sonographer to perform a diagnostic ultrasound study,
in real time, in a distant location.
Research shows that robotic ultrasonography is comparable to standard
sonography and is accepted by most patients.
Emergency room trauma assessment
Portable remote presence devices that use available cellular networks could
also be used in emergency situations, such as trauma assessment at the scene
of an accident or transport of a victim to hospital.
For example, emergency physicians or trauma surgeons could perform
real-time ultrasonography of the abdomen, thorax and heart in critically injured
patients, identify life-threatening injuries and start life-saving treatment.
Wearable remote presence devices such as Google Glass technology are the
next step in remote presence health care for underserviced populations.
For example, a local nurse and a specialist in a tertiary care centre thousands
of kilometres away could together assess an acutely ill patient in an
emergency room in a remote community through the nurse’s eyes.
A nurse examines a patient with Google Glass.
Although remote presence technology may be applied initially to emergency
situations in remote locations, its major impact may be in the delivery of
primary health care. We can imagine the use of mobile remote presence
devices by health professionals in a wide range of scenarios — from home-care visits to follow-up mental health sessions — in which access to medical
expertise in real time would be just a computer click away.
A paradigm shift in health-care delivery
The current model of centralized health care, where the patient has to go to a
hospital or a clinic to receive urgent or elective medical care, is inefficient
and costly. Patients have to wait many hours in emergency rooms. Hospitals
run at overcapacity. Delays in diagnosis and treatment cause poor outcomes
or even death.
Underserviced rural and remote communities and the most vulnerable
populations such as children and the elderly are the most affected by this
centralized model.
Remote presence technologies have the potential to shift this — so that we
can deliver medical care to a patient anywhere. In this decentralized model,
patients requiring urgent or elective medical care will be seen, diagnosed
and treated in their own communities or homes and patients requiring
hospitalization will be triaged without delay.
This technology could have important applications in low-resource settings.
Cellular network signals around the globe and rapidly increasing bandwidth
will provide the telecommunication platform for a wide range of mobile
applications.
Low-cost, dedicated remote-presence devices will increase access to
medical expertise for anybody living in a geographical area with a cellphone
signal. This access will be especially beneficial to people in developing
countries where medical expertise is insufficient or not available.
The future of medical care is not in building more or bigger hospitals but in
harnessing the power of technology to monitor and reach patients wherever
they are — to preserve life, ensure wellness and speed up diagnosis and
treatment.
Robocar year in review for 2018
In spite of what most are writing, it was a year of much progress.
A number of other summaries of 2018 in robocars have called it a bad year,
the year it all went south, even the year the public realized that robocars will
never come.
In fact, 2018 was the year the field reached a new level of maturity, as its
warts began to show, and we saw the first missteps (minor and major) and
the long anticipated popping of some of the hype.
As predicted by Gartner’s famous “hype cycle,” any highly hyped
technology goes through a “trough of disillusionment” after the initial
surge. I see several reasons why the trough is happening now:
· The public is starting to understand some realities which have not been very well conveyed to them, though they were known by the major teams:
o This is a very hard task.
o It is geographic in nature, due to the need for mapping and local driving rules, and so it begins in limited areas and grows from there.
o The amount of QA needed to get to unmanned operation is immense, and if you have money, there is no reason to remove safety drivers until you’re quite sure.
o The so-called “level 5” isn’t on any serious roadmap, and may never happen.
· Cars showed up at the peak of Gartner’s very own chart a couple of years ago, so it’s just about time in their rulebook.
· It’s very typical in software for dates to slip. People predict when they hope they can make a target, which is really the beginning of the time window in their estimate.
· Some people, such as Elon Musk, have made irresponsibly optimistic predictions, leading the public to imagine their drive-everywhere robocar is coming next year.
· If you follow Gartner’s literal curve, they had robocars at the top in 2015, so the trough doesn’t really even need a reason. But I think there will be multiple peaks and troughs.
In reality, the plan at Waymo and many other companies has always been to
build a car that serves a limited set of streets — a small set of easy streets —
at the start, and then starts growing the network of streets, and eventually
towns with streets, as time goes by.
There is going to be a big surge once the technology reaches a level where
the remaining problems are no longer technological but logistical. That is to
say, when the barrier to expanding to new streets and cities is the detailed
work of mapping those streets, learning the local rules and working with
local governments. That’s when the “land rush” happens. The limiting factor
there is time and talent more than money.
But none of that happens until the cars are ready for deployment, and until
they are, they will be tested as prototypes with safety drivers in them. Even
the first prototype services, like Waymo’s and Zoox’s and others, will have
safety drivers in them.
The Uber fatality — the top story of the year
No question the big story this year was the death of Elaine Herzberg as the
result of a compound series of errors and bad practices at Uber. The story is
notable for many reasons, including of course how it happened, but also in
the public’s reaction. For a long time, I’ve been assured by many skeptics
that the first death would mean the end of the robocar dream. The public
actually thinks the first deaths were in Teslas (they weren’t), and Tesla stock
went up after they took place. The Uber fatality was real, and did teach us
that teams are capable of more negligence than I had thought. While it did
scale up public distrust, and Uber did shut down their program for at least a
year, the overall effect still seems modest. (The larger effect will be much
greater intolerance for the next fatality, the one that would have been the
first.)
Here are some of my many posts on Uber this year:
· It certainly looks bad for Uber
· How does a robocar see a pedestrian and how might Uber have gone wrong?
· Comparing the Uber and Tesla fatalities with a table
· Uber reported to have made an error tuning perception system
· NTSB Report implies serious fault for Uber in fatality
· Did Uber really botch the switch to one driver, and a timeline of the failure
· Almost everything that went wrong in the Uber fatality is both terrible and expected
· Don’t watch TV while safety driving
Story #2 — Waymo’s non-launch
Waymo remains the clear leader in the field, so the next top story has to be
about them, but sadly it’s the story of their first miss — promising to launch
in 2018 and feeling forced to do a “launch” that was really just a
formalization of existing activity. I believe that Uber is partly to blame here,
in that it did use up a lot of the public’s tolerance for errors, especially in the
Phoenix area.
· Waymo soft launches in Phoenix, but…
The better story for Waymo, however, was their first totally unmanned
operations earlier in the year. This also disappointed people because these
unmanned operations were on a much more limited scale than people
originally imagined, but it’s still a major milestone. It means Waymo’s team
convinced the lawyers and board that the systems were good enough to take
this risk, even if only in a limited area.
· Waymo goes totally unmanned, arbitration and other news
· Uber and Waymo settle lawsuit in a giant victory for Uber. The Waymo-Uber battle, triggered by Anthony Levandowski, was concluded this year.
· Anthony Levandowski, Jiajun Zhu and Dave Ferguson, all formerly of Google Car, make big announcements. Anthony even came back, and other alumni of Waymo/Google Car had their milestones. Nuro’s launch of a truly unmanned road robot is also significant.
· Aurora’s manifesto is worth a read. Aurora, led by former Waymo leader Chris Urmson, laid out their excellent philosophy well.
· Recent Waymo announcements are slightly underwhelming
· Waymo’s left turns frustrate other drivers
Flying Cars
This was also the year that “flying cars,” also known as e-VTOL aircraft,
“took off.” It’s now clear the engineering problems are close to solved,
though many social and logistic problems remain. These vehicles are at the
stage robocars were 10 years ago, and the excitement is building. Sebastian
Thrun, the modern “father of self-driving cars” and the man who first got
me excited about them, has switched his efforts to flying. I’ll be writing
more on this in the coming year.
· Tons of new ideas in aviation. Will regulation stop them?
· Robocars, Flying Cars and Hyperloops, oh my! The not so fictional future of the city
· The Flying car — and Flying Ambulance — is closer than we thought
· Kitty Hawk “Cora” comes out of stealth
· Flying cars, robocars and more will rewrite the rural landscape, for good and ill
Other notable news
In chronological order, not order of importance
· Lidars and other sensors:
o Tons of LIDARs at CES 2018. Sensors were a big story this year.
o New “Shared Mobility Principles” have too much 2018 thinking.
o FMCW LIDAR is coming, what does it mean?
o Nice summary of LIDAR technologies — is it a “crutch?”
· Robocars enter phase two as different strategies abound. What approaches are the teams taking?
· The GM/Cruise robocar interior is refreshingly spartan. What robocar interiors will look like.
· All about sensors: Advanced radar and more for the future of perception
· Local Motors hopes to win with 3D printed robocars. A different, small volume approach.
· Cruise getting a ticket is a very positive story. Cruise got too close to a pedestrian, or did it?
· Tesla model X fatality in Silicon Valley had Autopilot turned on. We learn more about what Tesla autopilot isn’t.
· Tesla makes a custom neural network chip, is that wise? A first taste of robocar processor battles.
· Google gets a bunch of news — and that is news. Also, contract manufacturing.
· Tim Kentley-Klay suddenly ousted from Zoox. See also my analysis of Zoox, written just before this.
· New NHTSA regulations, Waymo miles and big deals
My essays on the issues
The main focus of this site is my essays on the issues and future of
robocars. Here are the ones from this year I think you will find most
valuable.
· Designing a metric to measure robocar safety — what does insurance teach? I examine the hard problem of measuring the safety level of a robocar.
· Making tunnels for robocars would be vastly cheaper than subways for trains like SF’s new Central Subway. Once we have robocars, we should rethink even what we put in private right-of-way like subway tunnels.
· The paradox of robocar accidents. A fundamental change of thinking for understanding trends in accidents.
· NHTSA/SAE’s “levels” of robocars may be contributing to highway deaths
· Driving without a map is another example of being cheap rather than being safe. A mistake I think a few of the major teams are making. This is not the time to be cheap, this is the time to be safe as quickly as you can.
· Safety Drivers for Robocars — the issues and rationale. In light of Uber’s mistakes, how should we think of the role of the safety driver?
· No, ads won’t pay for your robotaxi ride — but your employer might, and that has big consequences. Debunking one of the sillier ideas out there, but realizing the basic concept may mean employers pay for commutes, which affects a lot of the economics of travel.
· The road trip robocar and tourist robocar. What a robocar meant for tourists and road trips might be like.
· Is BRT the best answer for bewildered city planners? City planners should future-proof their urban plans by using BRT instead of LRT or Subways.
· Is the Zoox plan for a custom car the path to robotaxi domination? I talk about Zoox’s radical plan and wonder if it’s smart.
· Sharing the ride: Less sharing is better for transit, more sharing better for cars. Breaking up the biggest myth of transit planning, that bigger is better.
· The death of parking services has already begun. Robocars change parking greatly. Uber has already started that change.
· Calculating all the externalities of driving. What we need to do to really plan transportation well.
· The Spot Market in Parking. A plan for how parking will get sold in the future.
· Cars will go to the chargers, you don’t need to bring the chargers to the cars. Solving the charging infrastructure problem for electric cars.
· What does airline competition tell us about robotaxi competition?
· Sleeper cars. The first of two essays on cars designed for long-haul overnight trips.
· Sleeper cars and the unexpected efficiency of solo transport
· The dance between pedestrians and robocars. How should cars and pedestrians interact?
· The future timeline of robocars — 2020s land rush, 2030s maturity. The answer to that ever-popular question of “When will the robocars arrive?”
· Will people with robocars hire them out as taxis when they are not using them? Breaking apart the economics of sharing your private robocar.
· What processors will be important in robocars? Is the battle for the chip market important?
· What happens to human driven cars in the robocar world? No, we don’t kick the humans off the road for a long time.
· Waymo is first, but is Cruise second, and how can you tell? Why it’s hard to rank the teams.
· Study claims robocars increase pollution; they could not be more wrong
· The utilitarian math overwhelmingly says we should be aggressive in robocar development. How do we do that? A look into the moral calculus of taking risks in order to save lives.
Robots with sticky feet can climb up, down,
and all around
Jet engines can have up to 25,000 individual parts, making regular
maintenance a tedious task that can take over a month per engine. Many
components are located deep inside the engine and cannot be inspected
without taking the machine apart, adding time and costs to maintenance.
This problem is not only confined to jet engines, either; many complicated,
expensive machines like construction equipment, generators, and scientific
instruments require large investments of time and money to inspect and
maintain.
HAMR-E uses electroadhesive pads on its feet and a special gait pattern to
climb on vertical, inverted, and curved surfaces, like the inside of this jet
engine.
Researchers at Harvard University’s Wyss Institute for Biologically Inspired
Engineering and John A. Paulson School of Engineering and Applied
Sciences (SEAS) have created a micro-robot whose electroadhesive foot
pads, origami ankle joints, and specially engineered walking gait allow it to
climb on vertical and upside-down conductive surfaces, like the inside walls
of a commercial jet engine. The work is reported in Science Robotics.
“Now that these robots can explore in three dimensions instead of just
moving back and forth on a flat surface, there’s a whole new world that they
can move around in and engage with,” said first author Sébastien de Rivaz,
a former Research Fellow at the Wyss Institute and SEAS who now works
at Apple. “They could one day enable non-invasive inspection of hard-to-reach
areas of large machines, saving companies time and money and
making those machines safer.”
The new robot, called HAMR-E (Harvard Ambulatory Micro-Robot with
Electroadhesion), was developed in response to a challenge issued to the
Harvard Microrobotics Lab by Rolls-Royce, which asked if it would be
possible to design and build an army of micro-robots capable of climbing
inside parts of its jet engines that are inaccessible to human workers.
Existing climbing robots can tackle vertical surfaces, but experience
problems when trying to climb upside-down, as they require a large amount
of adhesive force to prevent them from falling.
The team based HAMR-E on one of its existing micro-robots, HAMR,
whose four legs enable it to walk on flat surfaces and swim through water.
While the basic design of HAMR-E is similar to HAMR, the scientists had
to solve a series of challenges to get HAMR-E to successfully stick to and
traverse the vertical, inverted, and curved surfaces that it would encounter in
a jet engine.
First, they needed to create adhesive foot pads that would keep the robot
attached to the surface even when upside-down, but also release to allow the
robot to “walk” by lifting and placing its feet. The pads consist of a
polyimide-insulated copper electrode, which enables the generation of
electrostatic forces between the pads and the underlying conductive surface.
The foot pads can be easily released and re-engaged by switching the
electric field on and off, which operates at a voltage similar to that required
to move the robot’s legs, thus requiring very little additional power. The
electroadhesive foot pads can generate shear forces of 5.56 grams and
normal forces of 6.20 grams – more than enough to keep the 1.48-gram
robot from sliding down or falling off its climbing surface. In addition to
providing high adhesive forces, the pads were designed to be able to flex,
thus allowing the robot to climb on curved or uneven surfaces.
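Those pad forces imply a comfortable safety margin over the robot’s own weight, which is easy to verify. A quick back-of-envelope check in Python (the arithmetic below is ours, not from the paper):

G = 9.81  # gravitational acceleration, m/s^2

def grams_to_newtons(grams):
    # The article quotes forces in gram-equivalents; convert for clarity.
    return grams / 1000.0 * G

robot_weight = grams_to_newtons(1.48)     # force needed to hold the robot
shear_capacity = grams_to_newtons(5.56)   # resists sliding on a vertical wall
normal_capacity = grams_to_newtons(6.20)  # resists falling when inverted

print(f"vertical (shear) margin:  {shear_capacity / robot_weight:.1f}x")   # ~3.8x
print(f"inverted (normal) margin: {normal_capacity / robot_weight:.1f}x")  # ~4.2x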
HAMR-E is small but mighty, and could one day carry instruments and
cameras to inspect small spaces.
The scientists also created new ankle joints for HAMR-E that can rotate in
three dimensions to compensate for rotations of its legs as it walks, allowing
it to maintain its orientation on its climbing surface. The joints were
manufactured out of layered fiberglass and polyimide, and folded into an
origami-like structure that allows the ankles of all the legs to rotate freely,
and to passively align with the terrain as HAMR-E climbs.
Finally, the researchers created a special walking pattern for HAMR-E, as it
needs to have three foot pads touching a vertical or inverted surface at all
times to prevent it from falling or sliding off. One foot releases from the
surface, swings forward, and reattaches while the remaining three feet stay
attached to the surface. At the same time, a small amount of torque is
applied by the foot diagonally across from the lifted foot to keep the robot
from moving away from the climbing surface during the leg-swinging
phase. This process is repeated for the three other legs to create a full
walking cycle, and is synchronized with the pattern of electric field
switching on each foot.
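The gait logic can be condensed to a few lines. The Python sketch below is a simplified reconstruction from the description above, not the authors’ controller; the foot indexing and names are assumptions.

# Feet indexed clockwise around the body; diagonal pairs are (0, 2) and (1, 3).
DIAGONAL = {0: 2, 1: 3, 2: 0, 3: 1}

def gait_cycle():
    """Yield (adhesion_on, swing_foot, torque_foot) for one full walking cycle."""
    for swing_foot in range(4):
        # Exactly one pad releases; the other three stay electroadhered.
        adhesion_on = [foot != swing_foot for foot in range(4)]
        # The diagonally opposite foot applies the stabilizing torque.
        yield adhesion_on, swing_foot, DIAGONAL[swing_foot]

for adhesion, swing, torque in gait_cycle():
    print(f"swing foot {swing}: adhesion={adhesion}, torque from foot {torque}")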
When HAMR-E was tested on vertical and inverted surfaces, it was able to
achieve more than one hundred steps in a row without detaching. It walked
at speeds comparable to other small climbing robots on inverted surfaces
and slightly slower than other climbing robots on vertical surfaces, but was
significantly faster than other robots on horizontal surfaces, making it a
good candidate for exploring environments that have a variety of surfaces in
different arrangements in space. It is also able to perform 180-degree turns
on horizontal surfaces.
HAMR-E also successfully maneuvered around a curved, inverted section of
a jet engine while staying attached, and its passive ankle joints and adhesive
foot pads were able to accommodate the rough and uneven features of the
engine surface simply by increasing the electroadhesion voltage.
This iteration of HAMR-E is the first and most convincing step
towards showing that this approach to a centimeter-scale
climbing robot is possible, and that such robots could in the
future be used to explore any sort of infrastructure, including
buildings, pipes, engines, generators, and more
ROBERT WOOD
The team is continuing to refine HAMR-E, and plans to incorporate sensors
into its legs that can detect and compensate for detached foot pads, which
will help prevent it from falling off of vertical or inverted surfaces.
HAMR-E’s payload capacity is also greater than its own weight, opening the
possibility of carrying a power supply and other electronics and sensors to
inspect various environments. The team is also exploring options for using
HAMR-E on non-conductive surfaces.
“This iteration of HAMR-E is the first and most convincing step towards
showing that this approach to a centimeter-scale climbing robot is possible,
and that such robots could in the future be used to explore any sort of
infrastructure, including buildings, pipes, engines, generators, and more,”
said corresponding author Robert Wood, Ph.D., who is a Founding Core
Faculty member of the Wyss Institute as well as the Charles River Professor
of Engineering and Applied Sciences at SEAS.
Growing bio-inspired shapes with a 300-robot
swarm
Artistic photo taken by Jerry H. Wright showing a hand-made shape
generated following an emergent Turing pattern (displayed using the LEDs).
The trajectory of one of the moving robots can be seen through long
exposure. Jerry also used a filter to see the infrared communication between
the robots (white light below the robots reflected on the table). Reprinted
with permission from AAAS.
Our work published today in Science Robotics describes how we grow fully
self-organised shapes using a swarm of 300 coin-sized robots. The work was
led by James Sharpe at EMBL and the Centre for Genomic Regulation
(CRG) in Barcelona – together with my team at the Bristol Robotics
Laboratory and University of Bristol.
Self-organised shapes
Nature is capable of producing impressive functional shapes throughout
embryonic development. Broadly, there are two ways to form these shapes.
1) Top-down control. Cells have access to information about their position
through some coordinate system, for example generated through molecular
gradients. Cells use this information to decide their fate, which ultimately
creates the shapes. There are beautiful examples of this strategy being used
for robot swarms; see the work by Rubenstein et al. (video).
2) Local self-organisation. Cells generate reaction-diffusion systems, such
as those described by Alan Turing, resulting in simple periodic patterns.
Cells can use these patterns to decide their fate and the resulting shape.
We use the second strategy; here’s how it works.
Patterning
We start from a swarm of 300 closely packed robots in a disc – each running
the same code. Each robot stores two morphogens u and v, which you can
think of as virtual chemical signals. Morphogen u activates itself and the
other morphogen v, whereas v inhibits itself and the other morphogen u –
this is a ‘reaction’ network. ‘Diffusion’ of u and v happens through
communication from robot to robot. Symmetry breaking caused by the
‘reaction-diffusion’ system results in spots emerging on the swarm (or
stripes if we change the parameters!). Areas with high levels of morphogens
are shown in green – that’s what we call a “Turing spot”.
Image adapted from Slavkov, I., Zapata D. C. et al., Science Robotics (2018).
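To make the mechanism concrete, here is a minimal one-dimensional Python sketch of activator-inhibitor dynamics of this general kind. The equations, parameters and thresholds are illustrative, not the ones used in the paper; the essential ingredients are that u promotes both morphogens, v suppresses both, and v spreads between neighbours much faster than u.

import random

N, STEPS, DT = 60, 4000, 0.01    # ring of 60 'robots', explicit Euler steps
Du, Dv = 1.0, 40.0               # the inhibitor must diffuse much faster
a, b, c, d = 2.0, 2.5, 3.0, 3.5  # illustrative reaction rates

def clamp(x, lo=0.0, hi=5.0):
    return max(lo, min(hi, x))   # morphogen levels stay bounded

u = [random.uniform(0.9, 1.1) for _ in range(N)]  # small random noise
v = [random.uniform(0.9, 1.1) for _ in range(N)]

for _ in range(STEPS):
    nu, nv = u[:], v[:]
    for i in range(N):
        # 'Diffusion' = exchange with the two ring neighbours (robot messages).
        lap_u = u[(i - 1) % N] - 2 * u[i] + u[(i + 1) % N]
        lap_v = v[(i - 1) % N] - 2 * v[i] + v[(i + 1) % N]
        # 'Reaction': u activates both morphogens, v inhibits both.
        nu[i] = clamp(u[i] + DT * (Du * lap_u + a * u[i] - b * v[i]))
        nv[i] = clamp(v[i] + DT * (Dv * lap_v + c * u[i] - d * v[i]))
    u, v = nu, nv

# Robots with above-average activator would light their LED green: Turing spots.
mean_u = sum(u) / N
print("".join("#" if ui > mean_u else "." for ui in u))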
Tissue movement
In biology, cells may die or multiply depending on their patterning. As we
can’t do either of those things with robots, we simply move robots from
areas where they are no longer needed to areas of growth. The general idea
is that robots that are on the edge of the swarm, and are not in a Turing spot,
move along the edge of the swarm until they are near the spot. This causes
protrusions to grow at the location of the Turing spots.
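That movement rule relies only on information a robot can sense locally. A rough Python sketch of the decision logic follows; the helper names and flags are hypothetical, not the published controller.

from dataclasses import dataclass

@dataclass
class LocalView:
    # Everything here is estimated from neighbour messages alone.
    on_edge: bool    # fewer neighbours than an interior robot would have
    in_spot: bool    # own activator level is high (LED green)
    near_spot: bool  # a nearby neighbour reports a high activator level

def act(view: LocalView) -> str:
    """One robot's action, decided from purely local information."""
    if not view.on_edge or view.in_spot:
        return "stay"             # interior and spot robots anchor the shape
    if view.near_spot:
        return "stop_and_attach"  # reached a growth area: extend the protrusion
    return "follow_edge"          # keep circulating along the swarm boundary

# Example: an edge robot that has just reached the rim of a Turing spot.
print(act(LocalView(on_edge=True, in_spot=False, near_spot=True)))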
Following these simple rules, we are able to grow shapes in a repeatable
manner, although all the shapes are slightly different. If you watch the
video, you’ll see that these shapes look quite organic. We did over 20
experiments with large robot swarms, each one taking about 3 hours.
Because the rules are so simple, and only rely on local information, we get
adaptability and robustness for free.
Adaptability
First, as the shape grows, the Turing spots move, showing that the
patterning adapts to the shape of the swarm, and that the shape further
adapts to the patterning. Second, we can easily change the starting
configuration of the swarm (smaller number of robots, or a ‘rectangular’
starting conditions) and the shape still forms.
Image adapted from Slavkov, I., Zapata D. C. et al., Science Robotics (2018).
Robustness
Chopping off a protrusion causes the robots to regrow it, or to reallocate
robots to other protrusions in the swarm. Splitting the swarm causes it to
self-heal.
Image adapted from Slavkov, I., Zapata D. C. et al., Science Robotics (2018).
Potential for real world applications
While inspiration was taken from nature to grow the swarm shapes, the goal
is ultimately to make large robot swarms for real-world applications.
Imagine hundreds or thousands of tiny biodegradable robots growing shapes
to explore a disaster environment after an earthquake or fire, or sculpting
themselves into a dynamic 3D structure such as a temporary bridge that
could automatically adjust its size and shape to fit any building or terrain.
There is still a long way to go however, before we see such swarms outside
the laboratory.
Drones and satellite imaging to make forest
protection pay
Better tracking of forest data will make the climate change reporting process
easier for countries that want compensation for protecting their carbon stock.
Image credit – lubasi, licensed under CC BY-SA 2.0
Every year 7 million hectares of forest are cut down, chipping away at the
485 gigatonnes of carbon dioxide (CO2) stored in trees around the world, but
low-cost drones and new satellite imaging could soon protect these carbon
stocks and help developing countries get paid for protecting their trees.
‘If you can measure the biomass you can measure the carbon and get a
number which has value for a country,’ said Pedro Freire da Silva, a satellite
and flight system expert at Deimos Engenharia, a Portuguese technology
company.
International financial institutions, such as the World Bank and the
European Investment Bank, provide developing countries with economic
support to keep forests’ carbon stocks intact through the UN REDD+
programme.
The more carbon a developing country can show it keeps in its forests, the
more money the government could get, which would give them a greater
incentive to protect these lands. But, according to Silva, these countries
often lack the tools to determine the exact amount of carbon stored in their
forests and that means they could be missing out on funding.
‘If you have a 10% error in your carbon stock (estimation), that can have a
financial value,’ he said, adding that it also takes governments a lot of time
and energy to collect the relevant data about their forests.
To address these challenges, a project called COREGAL developed
automated low-cost drones that map biomass. They put a special sensor on
drones that fly over forests and analyse Global Positioning System (GPS)
and Galileo satellite signals as they bounce back through a tree canopy,
which then reveals the biomass density of an area and, in turn, the carbon
stocks.
‘The more leaves you have, the more power (from GPS and Galileo) is lost,’
said Silva who coordinated the project. This means when the drone picks up
weaker satellite navigation readings there is more biomass below.
‘If you combine this data with satellite data we get a more accurate map of
biomass than either would (alone),’ he added.
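The underlying idea (the canopy absorbs satellite signal power, so a weaker reading implies more biomass below) can be sketched with a simple attenuation model. The Python below is purely illustrative; the linear coefficient and the readings are hypothetical, not COREGAL’s calibration.

K = 0.08  # hypothetical attenuation coefficient, dB per (tonne/hectare)

def biomass_from_attenuation(open_sky_cn0_db, under_canopy_cn0_db):
    """Estimate biomass density (t/ha) from the drop in carrier-to-noise ratio."""
    loss_db = open_sky_cn0_db - under_canopy_cn0_db
    return max(loss_db, 0.0) / K

# A weaker reading under the canopy implies more biomass below the drone:
print(biomass_from_attenuation(45.0, 37.0))  # 8 dB loss -> 100 t/ha (toy numbers)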
Sentinels
The project trialled their drone prototype in Portugal, with Brazil in mind as
the target end user as it is on the frontline of global deforestation. According
to Brazilian government data, an area about five times the size of London
was destroyed between August 2017 and July this year.
COREGAL’s drones could end up enabling countries such as Brazil to
access more money from climate funds, in turn creating a stronger incentive for
governments to protect their forests. Silva also believes the drones could act
as a deterrent against illegal logging.
‘If people causing deforestation know that there are (drone) flight
campaigns or people going to the field to monitor forests it can demotivate
them,’ he said. ‘It is like a sentinel system.’
In the meantime, governments in other developing countries still need the
tools to help them fight deforestation. According to Dr Thomas Häusler, a
forest and climate expert at GAF, a German earth observation company, the
many drivers of deforestation make it very difficult to sustainably manage
forests.
‘(Deforestation) differs between regions and even in regions you have
different drivers,’ said Dr Häusler. ‘In some countries they (governments)
give concessions for timber logging and companies are (then) going to huge
(untouched forest) areas to selectively log the highest value trees.’
Logging like this is occurring in Brazil, central Africa and Southeast Asia.
When it happens, Dr Häusler says this can cause huge collateral damage
because loggers leave behind roads that local populations use to access
previously untouched forests which they further convert for agriculture or
harvest wood for energy.
Demand for timber and agricultural produce from developed countries can
also drive deforestation in developing countries because their governments
see the forest as a source of economic development and then allow
expansion.
With such social, political and economic dependency, it can be difficult, and
expensive, for governments to implement preventative measures. According
to Dr Häusler, to protect untouched forests these governments should be
compensated for fighting deforestation.
‘To be compensated you need strong (forest) management and observation
(tools),’ said Dr Häusler, who is also the coordinator of EOMonDis, a
project developing an Earth-observation-based forest monitoring system that
aims to support governments.
Domino effect
The EOMonDis team combines high-resolution data from the European Sentinel satellites,
available every five days through Copernicus, the EU’s Earth observation
system, along with data from the North American Landsat-8 satellite.
Automated processing using special algorithms generates detailed maps on
the current and past land use and forest situation to identify the carbon-rich
forest areas. The project also has access to satellite data going as far back as
the 1970s which can be used to determine how much area has been affected
by deforestation.
As with COREGAL, these maps and the information they contain put a
value on the most carbon-rich forest areas, meaning countries can
access more money from international financial institutions. The project is
almost finished, and the team soon hopes to have a commercially viable system
ready for use.
‘The main focus is the climate change reporting process for countries who
want compensation in fighting climate change,’ said Dr Häusler. ‘We can
support this process by showing the current land-use situation and show the
low and high carbon stocks.’
Another potential user of this system is the international food industry that
sells products containing commodities linked to deforestation such as palm
oil, cocoa, meat and dairy. In response to their contribution, and social
pressure, some of these big companies have committed to zero-deforestation
in their supply chain.
‘When someone (a company) is declaring land as zero deforestation, or that
palm plantations fit into zero deforestation, they have to prove it,’ said Dr
Häusler. ’And a significant result (from the project) is we can now prove
that.’
3Q: Aleksander Madry on building
trustworthy artificial intelligence
Aleksander Madry is a leader in the emerging field of building guarantees
into artificial intelligence, which has nearly become a branch of machine
learning in its own right.
Photo courtesy of CSAIL
Machine learning algorithms now underlie much of the software we use,
helping to personalize our news feeds and finish our thoughts before we’re
done typing. But as artificial intelligence becomes further embedded in daily
life, expectations have risen. Before autonomous systems fully gain our
confidence, we need to know they are reliable in most situations and can
withstand outside interference; in engineering terms, that they are robust.
We also need to understand the reasoning behind their decisions; that they
are interpretable.
Aleksander Madry, an associate professor of computer science at MIT and a
lead faculty member of the Computer Science and Artificial Intelligence Lab
(CSAIL)’s Trustworthy AI initiative, compares AI to a sharp knife, a useful
but potentially hazardous tool that society must learn to wield properly.
Madry recently spoke at MIT’s Symposium on Robust, Interpretable AI, an
event co-sponsored by the MIT Quest for Intelligence and CSAIL, and held
Nov. 20 in Singleton Auditorium. The symposium was designed to showcase
new MIT work in the area of building guarantees into AI, which has almost
become a branch of machine learning in its own right. Six faculty members
spoke about their research, 40 students presented posters, and Madry
opened the symposium with a talk aptly titled “Robustness and
Interpretability.” We spoke with Madry, a leader in this emerging field,
about some of the key ideas raised during the event.
Q: AI owes much of its recent progress to deep learning, a branch of
machine learning that has significantly improved the ability of algorithms to
pick out patterns in text, images and sounds, giving us automated assistants
like Siri and Alexa, among other things. But deep learning systems remain
vulnerable in surprising ways: stumbling when they encounter slightly
unfamiliar examples in the real world, or when a malicious attacker feeds them
subtly altered images. How are you and others trying to make AI more
robust?
A: Until recently, AI researchers focused simply on getting machine-learning algorithms to accomplish basic tasks. Achieving even average-case
performance was a major challenge. Now that performance has improved,
attention has shifted to the next hurdle: improving the worst-case
performance. Most of my research is focused on meeting this challenge.
Specifically, I work on developing next-generation machine-learning
systems that will be reliable and secure enough for mission-critical
applications like self-driving cars and software that filters malicious content.
We’re currently building tools to train object-recognition systems to identify
what’s happening in a scene or picture, even if the images fed to the model
have been manipulated. We are also studying the limits of systems that offer
security and reliability guarantees. How much reliability and security can
we build into machine-learning models, and what other features might we
need to sacrifice to get there?
My colleague Luca Daniel, who also spoke, is working on an important
aspect of this problem: developing a way to measure the resilience of a deep
learning system in key situations. Decisions made by deep learning systems
have major consequences, and thus it’s essential that end-users be able to
measure the reliability of each of the model’s outputs. Another way to make
a system more robust is during the training process. In her talk, “Robustness
in GANs and in Black-box Optimization,” Stefanie Jegelka showed how the
learner in a generative adversarial network, or GAN, can be made to
withstand manipulations to its input, leading to much better performance.
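The “subtly altered images” mentioned in the question are adversarial examples. As a minimal illustration of why they work (a toy sketch, not any speaker’s method), here is the fast-gradient-sign idea applied to a linear classifier; the weights and inputs are invented for the example.

def predict(w, b, x):
    # Linear score: > 0 means the positive class.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Shift every feature by eps in the direction that hurts the score most."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [2.0, -1.5, 1.0], 0.0
x = [0.1, 0.1, 0.1]                   # clean input: score 0.15, positive class
x_adv = fgsm_perturb(w, x, eps=0.05)  # each feature moved by only 0.05

print(predict(w, b, x))      # 0.15   -> classified positive
print(predict(w, b, x_adv))  # -0.075 -> flipped by a tiny perturbation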
Q: The neural networks that power deep learning seem to learn almost
effortlessly: Feed them enough data and they can outperform humans at
many tasks. And yet, we’ve also seen how easily they can fail, with at least
three widely publicized cases of self-driving cars crashing and killing
someone. AI applications in health care are not yet under the same level of
scrutiny but the stakes are just as high. David Sontag focused his talk on the
often life-or-death consequences when an AI system lacks robustness. What
are some of the red flags when training an AI on patient medical records and
other observational data?
A: This goes back to the nature of guarantees and the underlying
assumptions that we build into our models. We often assume that our
training datasets are representative of the real-world data we test our models
on — an assumption that tends to be too optimistic. Sontag gave two
examples of flawed assumptions baked into the training process that could
lead an AI to give the wrong diagnosis or recommend a harmful treatment.
The first focused on a massive database of patient X-rays released last year
by the National Institutes of Health. The dataset was expected to bring big
improvements to the automated diagnosis of lung disease until a skeptical
radiologist took a closer look and found widespread errors in the scans’
diagnostic labels. An AI trained on chest scans with a lot of incorrect labels
is going to have a hard time generating accurate diagnoses.
A second problem Sontag cited is the failure to correct for gaps and
irregularities in the data due to system glitches or changes in how hospitals
and health care providers report patient data. For example, a major disaster
could limit the amount of data available for emergency room patients. If a
machine-learning model failed to take that shift into account its predictions
would not be very reliable.
Q: You’ve covered some of the techniques for making AI more reliable and
secure. What about interpretability? What makes neural networks so hard to
interpret, and how are engineers developing ways to peer beneath the hood?
A: Understanding neural-network predictions is notoriously difficult. Each
prediction arises from a web of decisions made by hundreds to thousands of
individual nodes. We are trying to develop new methods to make this
process more transparent. In the field of computer vision one of the pioneers
is Antonio Torralba, director of The Quest. In his talk, he demonstrated a
new tool developed in his lab that highlights the features that a neural
network is focusing on as it interprets a scene. The tool lets you identify the
nodes in the network responsible for recognizing, say, a door, from a set of
windows or a stand of trees. Visualizing the object-recognition process
allows software developers to get a more fine-grained understanding of how
the network learns.
Another way to achieve interpretability is to precisely define the properties
that make the model understandable, and then train the model to find that
type of solution. Tommi Jaakkola showed in his talk, “Interpretability and
Functional Transparency,” that models can be trained to be linear or have
other desired qualities locally while maintaining the network’s overall
flexibility. Explanations are needed at different levels of resolution much as
they are in interpreting physical phenomena. Of course, there’s a cost to
building guarantees into machine-learning systems — this is a theme that
carried through all the talks. But those guarantees are necessary and not
insurmountable. The beauty of human intelligence is that while we can’t
perform most tasks perfectly, as a machine might, we have the ability and
flexibility to learn in a remarkable range of environments.
What are some of the basic sensors used in
Robotics?
Sensors are everywhere, and it would be impossible to imagine modern life
without them. We commonly use several sensors in various electronic
devices and machines. But what exactly is a “Sensor” and what are its
applications? Let’s start with understanding a few things.
What is a Sensor?
A sensor is a device that detects and responds to some type of input from the
physical environment. The specific input could be light, heat, motion,
moisture, pressure, or any one of a great number of other environmental
phenomena.
The output is generally a signal that is converted to human-readable display
at the sensor location or transmitted electronically over a network for reading
or further processing.
Why use Sensors?
Sensor technology can store the data in memory, from where it can be
retrieved later for processing, analysis, and presentation.
Alternatively, sensor technology can display graphs of data in “real time”: the
graph is constructed as the data is being collected, and is then updated on
the screen as the data is processed.
Sensors in Robotics
The use of sensors has taken robots to the next level of creativity.
Most importantly, sensors have greatly increased the performance of robots,
and they allow robots to perform several functions the way a
human being would.
Some commonly used sensors along with their principle and
applications are explained as follows:
#1 Light Sensor
A light sensor is used to detect light and create a voltage difference.
Photoresistors are widely used light sensors whose resistance varies with
light intensity: more light leads to less resistance, and less light
leads to more resistance.
The Automatic Solar Tracker project is developed using this sensor.
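In practice a photoresistor is usually read through a voltage divider into a microcontroller’s ADC. The Python sketch below shows just the conversion math, under assumed, common hobbyist values (5 V supply, 10 kΩ fixed resistor, 10-bit ADC):

VCC = 5.0         # supply voltage (V)
R_FIXED = 10_000  # fixed divider resistor (ohms), assumed value
ADC_MAX = 1023    # 10-bit ADC full scale

def ldr_resistance(adc_counts):
    """Invert the divider: LDR on top, fixed resistor to ground."""
    v_out = adc_counts / ADC_MAX * VCC      # voltage across the fixed resistor
    return R_FIXED * (VCC - v_out) / v_out  # ohms; lower means brighter

print(ldr_resistance(512))  # ~10k ohms at mid-scale
print(ldr_resistance(900))  # ~1.4k ohms: bright light, resistance has dropped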
#2 Proximity Sensor
This sensor can detect the presence of a nearby object within a given
distance, without any physical contact. The working principle of a proximity
sensor is very simple: a transmitter emits electromagnetic radiation or
creates an electrostatic field, and a receiver picks up and analyzes the
return signal for interruptions.
Infrared (IR) Transceivers
An IR LED transmits a beam of IR light; if the beam finds an obstacle, the light is
simply reflected back and captured by an IR receiver. Some IR
transceivers can also be used for distance measurement.
Ultrasonic Sensor
These sensors emit high-frequency sound waves and listen for the echo
that returns when the waves hit an object. Ultrasonic sensors can also be used for
distance measurement.
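The distance math behind an ultrasonic ranger is simple time-of-flight, halved because the ping travels out and back. A minimal Python sketch, assuming sound travels at roughly 343 m/s in air:

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def distance_m(echo_seconds):
    """Convert a measured round-trip echo time into a one-way distance."""
    return SPEED_OF_SOUND * echo_seconds / 2.0

# A 2.33 ms round trip corresponds to an object roughly 0.4 m away:
print(distance_m(0.00233))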
Photoresistor
As discussed earlier, a photoresistor is a light sensor, but it can still be used as
a proximity sensor. When an object comes in close proximity to the sensor,
the amount of light changes which in turn changes the resistance of the
photoresistor. This change can be detected and then processed.
Projects like Sensor Guided Robotics, Maze Solver Robot and 7 Robots
(Combo Course) are developed using IR Sensors.
#3 Accelerometer
An accelerometer is an electromechanical device used to measure
acceleration forces. Such forces may be static, like the continuous force of
gravity or, as is the case with many mobile devices, dynamic to sense
movement or vibrations. Smartphones and other mobile technology identify
their orientation through the use of an accelerometer.
The Gesture Based Robot project is developed using the accelerometer sensor.
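A static accelerometer reading can be converted into tilt angles, which is how a phone or a gesture-controlled robot senses its orientation. A minimal Python sketch; the axis convention is an assumption, since it varies between parts:

import math

def tilt_deg(ax, ay, az):
    """Pitch and roll (degrees) from a gravity-only reading in g units."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_deg(0.0, 0.0, 1.0))    # flat on a table -> (0.0, 0.0)
print(tilt_deg(-0.5, 0.0, 0.87))  # nose pitched up by about 30 degrees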
#4 Temperature Sensor
Temperature is the most common of all physical measurements. We have
temperature measurement-and-control units, called thermostats, on our home
heating systems, refrigerators, air conditioners, and ovens.
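Thermostats like these commonly use an NTC thermistor, whose resistance is converted to temperature with the B-parameter equation. A minimal Python sketch using typical, assumed datasheet values (10 kΩ at 25 °C, B = 3950 K):

import math

R0, T0, B = 10_000.0, 298.15, 3950.0  # 10k ohms at 25 C (298.15 K), assumed B

def thermistor_temp_c(resistance_ohms):
    """B-parameter equation: 1/T = 1/T0 + ln(R/R0)/B, returned in Celsius."""
    inv_t = 1.0 / T0 + math.log(resistance_ohms / R0) / B
    return 1.0 / inv_t - 273.15

print(thermistor_temp_c(10_000))  # 25.0 C at the nominal resistance
print(thermistor_temp_c(5_000))   # ~41.5 C: resistance falls as it warms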
How to make your first Robot?
Your first robot is one which you never forget. Watching your creation do
something as simple as scoot around the floor is exhilarating, particularly if
you've built it from scratch. The best part of building a Robot is that it offers
something that is increasingly rare in the world of electronics: the opportunity
to create a moving, working machine with your bare hands.
Getting started with robotics can be hard, especially if one doesn’t know
where and how to start. The best way to start is to build a simple model with
less expensive parts available locally. Initially, you need to begin by building
a robot which moves forward and backward, left and right through
controlling it wirelessly. However after you get the basics right, you can
actually add and modify things. Move onto a little more complex robot where
you guide it by using a camera or make a manually controlled robot which is
controlled using a wireless remote.
Top 5 Tips to follow while building a Robot:
#1: Learn about Electronics
This is an essential part of building a robot. You don’t need to have an
engineering degree but you need to know some of the basics. Before you
build a robot you need to understand what a resistor, transistor,
thermistor, diode, and capacitor are. Referring to the right book provides invaluable
help. You can also go through some magazines.
#2: Start off small
Stay small! Resist the urge to let your mind run wild with possibilities, like a
cooking robot that will dust and vacuum at the same time. You need to start
off small. Robots can get very expensive, so learn to build your own
parts rather than buying a pre-made robot. You can make most of the parts
from household things initially, but as the features increase the robot
becomes expensive.
#3: Plan Robot for an application
Once done with the initial robot, start planning a robot that can
actually do something. A common mistake among students is that they
don’t plan their robot ahead of time. When you have a definite goal in
mind, you are much more motivated to finish it. A great way to do this is
to participate in a contest. Many clubs conduct competitions
annually. Participation will also help in exploring new ideas and technology
in robotics.
#4: Work regularly on your Robot
Make yourself work on your robots regularly, especially if you want to
participate in a contest; coming back to a project after weeks of ignoring it
is tough. Take time to think about the project and plan. It will help, even
if it’s just for a few minutes before bed. Also, keep a regular journal of what
you’ve done. Documenting your work is important.
#5: Ask questions and share your experiences
You can get a lot of information on the internet: subscribe to every email list,
forum, and newsgroup that you can find and just ask questions. You’ll learn
more that way than from any book or website. Questions are never stupid, so don’t be
shy. If you have figured out something, then write an article or email. Let
others know how to do things in the right way.
What is Mechatronics and Robotics?
Mechatronics is the combination, or junction, of Electrical, Mechanical, and
Computer Science/Engineering. Mechatronics is the field closest to robotics, with
one main difference: in mechatronics systems the inputs are provided,
whereas robotics systems acquire their inputs on their own.
A Mechanical Engineer is mostly responsible for the mechanical body parts,
an Electrical Engineer for the electrical aspects, and a Computer
Engineer for the programming. A Mechatronics Engineer must be reasonably
qualified in every one of the areas we discussed above.
What is the functional unit of Robotics?
Body Structure- Most robots consist of a metal-based physical body
structure that provides protection to the internal mechanisms within the
robot. The body shape or structure is based on the robot’s intended use or
function.
Muscle Systems- The muscle systems of robots can be made up of a number
of different mechanisms. The most popular muscle mechanisms are gears,
pneumatic actuators, artificial muscle, and thrusters.
Sensory Systems- A robot’s sensory system can be very simple or very
complex, depending on the function of the robot. Simple sensory systems
make use of devices such as simple motion detectors, while complex sensory
systems use optical cameras that enable the robot to not only see the
environment around it but interact with the environment based on its
surroundings.
Power Source- The power source used in most robots comes from an electric
power supply. The two most common forms of electric power supplies for
robots are grounded electrical power outlets for stationary robots or internal
battery units for mobile robots.
Brain System- The “brain” system of robots is closely related to, and works
hand in hand with, the sensory system. Brain systems of robots can also
be either very simple or very complex, depending on the function of the
robot. A simple “brain” system would send a signal to the robot’s muscle
system telling it to stop if a sensor was activated. A
complex “brain” system would allow a robot to identify objects within the
environment around it, based on the information gathered by the sensory
systems. The “brain” system would then send signals to the muscle systems
based on that information, enabling the robot to interact with the objects
surrounding it. If we go beyond this, artificial intelligence comes into play.
Some Examples- Robotics is a very broad term; we must go through different
applications to understand it better.
Medical Robots – which mimic the movement of the surgeon.
Military Robots- Unmanned drones used for surveillance in various places
around the world; robots used to defuse bombs and inspect suspicious packages;
sentry guns that track targets with the help of sensors. The military is
investing heavily in robotics.
Automation and Manufacturing – Robots that assemble, drill, paint and build
things such as cars, dishwashers and computers with extreme precision. These
robots do the same movement over and over while keeping a precision
that is not possible for a human.
There are household robots that can automatically vacuum your floor,
and agricultural robots that are used for harvesting.
Underwater Photography and Videography
I had somebody ask me questions this week about underwater photography
and videography with robots (well, now it is a few weeks ago…). I am not an
expert at underwater robotics; however, as a SCUBA diver I have some
experience that is applicable to robotics.
Underwater Considerations
Some challenges exist with underwater photography and
videography that are less of an issue above the water. Some of them
include:
· Water reflects some of the light that hits the surface, and absorbs the light that travels through it. This causes certain colors to not be visible at certain depths. If you need to see those colors you often need to bring strong lights to restore the visibility of those wavelengths that were absorbed. Reds tend to disappear first; blues are the primary color seen as camera depth increases. A trick that people often try is to use filters on the camera lens to make certain colors more visible.
If you are using lights then you can get the true color of the target. Sometimes when you are taking images you will see one color with your eye, and then when the strobe flashes a “different color” gets captured. In general you want to get close to the target to minimize the light absorbed by the water.
Visible colors at given depths underwater.
· For shallow water work you can often adjust the white balance to partially compensate for the missing colors. White balance goes a long way for video and compressed images (such as .webp). Onboard white balance adjustments are less important for photographs stored in a raw image format, since you can deal with it in post processing. Having a white or grey card in the camera field of view (possibly permanently mounted on the robot) is useful for setting the white balance and can make a big difference; a minimal sketch of this correction follows after this list of considerations. The white balance should be readjusted every so often as depth changes, particularly if you are using natural lighting (i.e. the sun).
Cold temperate water tends to look green (such as a freshwater quarry), I think from plankton, algae, etc. Tropical waters (such as in the Caribbean) tend to look blue near the shore and darker blue as you get further away from land (I think based on how light reflects off the bottom of the water). Using artificial light sources (such as strobes) can minimize those colors in your imagery.
· Auto focus generally works fine underwater. However, if you are in the
dark you might need to keep a focus light turned on to help the autofocus
work, and then a separate strobe flash for taking the image. Some systems
turn the focus light off when the images are being taken. This is generally
not needed for video as the lights are continuously turned on.
· Objects underwater appear closer and larger than they really are. A rule of thumb is that objects will appear 25% larger and/or closer.
· Suspended particles in the water (algae, dirt, etc.) scatter light, which can make visibility poor. This can obscure details in the camera image or make things look blurry (like the camera is out of focus). A rule of thumb is that your target should be at less than 1/4 of your total visibility distance from the camera.
The measure of the visibility is called turbidity. You can get turbidity
sensors that might let you do something smart (I need to think about this
more).
To minimize the backscatter from turbidity there is not a “one size fits all”
solution. The key to minimizing backscatter is to control how light strikes
the particles. For example if you are using two lights (angled at the left and
right of the target), the edge of each cone of light should meet at the target.
This way the water between the camera and the target is not illuminated.
For wide-angle lenses you often want the light to be behind the camera (out
of its plane) and to the sides at 45° angles to the target. With macro lenses
you usually want the lights close to the lens.
If you have a wide-angle lens you probably will use a domed port to
protect the camera from water and get the full field of view of the camera.
The dome however can cause distortion in the corners. Here is an
interesting article on flat vs dome ports.
Another tip is to increase the exposure time (such as 1/50th of a second) to
allow more natural light in, and use less strobe light to reduce the effect
from backscatter.
· Being underwater usually means you need to seal the camera from
water, salts, (and maybe sharks). Make sure the enclosure and seals can
withstand the pressure from the depth the robot will be at. Also remember
to clean (and lubricate) the O rings in the housing.
Pro Tip: Here are some common reasons for O ring seals leaking:
1. Old or damaged O rings. Remember O rings don’t last forever and need
to be changed.
2. Using the wrong O ring
3. Hair, lint, or dirt getting on the O ring
4. Using no lubricant on the O ring
5. Using too much lubricant on the O rings. (Remember on most systems
the lubricant is for small imperfections in the O ring and to help slide the
O rings in and out of position.)
· On land it is often easy to hold a steady position. Underwater it is
harder to hold the camera stable with minimal motion. If the camera is
moving a faster shutter speed might be needed to avoid motion blur. This
also means that less light is entering the camera, which is the downside of
having the faster shutter speed.
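The two rules of thumb above (apparent size/distance and the 1/4 visibility
rule) are easy to capture in code; a minimal sketch, with the caveat that these
are rough diver heuristics rather than calibrated optics:

def apparent_distance(actual_distance_m):
    # Objects underwater appear roughly 25% closer/larger than they are
    return actual_distance_m * 0.75

def max_target_distance(visibility_m):
    # Keep the target within about 1/4 of the total visibility
    return visibility_m / 4.0

print(apparent_distance(4.0))    # a 4 m target appears to be ~3 m away
print(max_target_distance(8.0))  # with 8 m visibility, stay within ~2 m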
When (not if) your camera floods
When your enclosure floods while underwater (or a water sensor alert is
triggered):
1. Shut the camera power off as soon as you can.
2. Check if water is actually in the camera. Sometimes humidity can
trigger moisture sensors. If it is humidity, you can add desiccant packets in
the camera housing.
3. If there is water, try to take the camera apart as much as you
reasonably can and let it dry. After drying you can try to turn the camera
on and hope that it works. If it works then you are lucky, however
remember there can be residual corrosion that causes the camera to fail in
the future. Water damage can happen instantaneously or over time.
4. Verify that the enclosure/seals are good before sending the camera
back into the water. It is often good to do a leak test in a sink or pool
before going into larger bodies of water.
5. The above items are a standard response to a flooded camera. You
should read the owner’s manual of your camera and follow those
instructions.
Lithium, Lithium-ion, LiPo & LiFePO4 Battery
Safety and Fire Handling
Lithium battery safety is an important issue as there are more and more
reports of fires and explosions. Fires have been reported in everything from
cell phones to airplanes to robots.
I am not a fire expert. This post is based on conversations with experts and
some basic research. Contact your local fire department for advice specific to
your situation. I had very little success contacting my local fire department
about this, hopefully you will have more luck.
Preventing Problems
1. Use a proper charger for your battery type and voltage. This will help
prevent overcharging. In many cases lithium-ion batteries catch fire when the
chargers keep dumping charge into the batteries after the maximum voltage
has been reached.
2. Use a battery management system (BMS) when building battery packs
with multiple cells. A BMS will monitor the voltage of each cell and halt
charging when any cell reaches the maximum voltage. Cheap BMS’s will
stop all charging when any cell reaches that maximum voltage. Fancier/better
BMS’s can individually charge each cell to help keep the battery pack
balanced. A balanced pack is good since each cell will be a similar voltage
for optimal battery pack performance. The fancy BMS’s can also often detect
if a single cell is reading wrong. There have been cases of a BMS working
properly but a single cell going bad which confuses the BMS, and yields a
fire/explosion.
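As an illustration of what a BMS is doing during charging, here is a minimal
sketch in Python; the cell voltage inputs and set_charge_current() are
hypothetical hardware hooks, and the 4.2V limit is a typical lithium-ion
value, so check your cell's datasheet:

CELL_MAX_V = 4.2           # typical lithium-ion limit; check the datasheet
BALANCE_TOLERANCE_V = 0.05

def charge_step(cell_voltages, set_charge_current):
    # One monitoring pass of a simple (cheap) BMS: halt ALL charging
    # when any single cell reaches the maximum voltage.
    if max(cell_voltages) >= CELL_MAX_V:
        set_charge_current(0.0)    # stop charging the whole pack
        return "halted"
    if max(cell_voltages) - min(cell_voltages) > BALANCE_TOLERANCE_V:
        return "unbalanced"        # a fancier BMS would balance the cells
    return "charging"

print(charge_step([4.10, 4.05, 4.21], lambda amps: None))  # prints "halted"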
3. Only charge batteries in designated areas. A designated area should be
non-combustible. For example cement, sand, cinder block and metal boxes are not
uncommon to use for charging areas. For smaller cells you can purchase fire
containment bags designed to put the charging battery in.
In addition the area where you charge the batteries should have good
ventilation.
I have heard that on the Boeing Dreamliner, part of the solution for their
batteries catching fire on planes, was to make sure that the metal enclosure
that the batteries were in could withstand the heat of a battery fire. And also
to make sure that in the event of a fire the fumes would vent outside the
aircraft and not into the cabin.
Dreamliner battery pack before and after fire. [SOURCE]
4. Avoid short circuiting the batteries. This can cause a thermal runaway which
will also cause a fire/explosion. When I say avoid short circuiting the battery
you are probably thinking of just touching the positive and negative leads
together. While that is an example you need to think of other methods as
well. For example puncturing a cell (such as with a drill bit or a screw driver)
or compressing the cells, can cause a short-circuit with a resulting thermal
runaway.
5. Avoid connecting battery packs in parallel (with no BMS). If the voltage
on each battery pack is different (ie. unbalanced) then one of the batteries can
push a lot of charge to the other batteries. This can also yield a fire.
6. Don't leave batteries unattended when charging. This will let people be
available in case of a problem. However, as videos of battery fires show, you
might want to keep a distance from the battery in case there is a catastrophic
event with flames shooting out from the battery pack.
I have seen several designs for self-contained charging stations for
extinguishing fires. Most of them involve suspending sandbags over the
charging station. If the plastic bags holding the sand melt, all of the sand
will fall on the batteries and smother the fire. See youtube for some examples.
7. Store batteries within the specs of the battery. Usually that means room
temperature and out of direct sunlight (to avoid overheating). Also charging
lithium batteries in extreme cold can cause internal battery damage that will
cause a fire during subsequent charges.
8. Training of personnel for handling batteries, charging batteries, and what
to do in the event of a fire. Having people trained in what to do can be
important so that they stay safe. For example, without training people might
not realize how bad the fumes are. Also make sure people know where the
fire pull stations are and where the extinguishers are.
9. Don’t attempt to charge lithium cells that have been over-discharged, as
over-discharging the cell can result in damage to the cell that can cause future
charge cycles to catch fire. Over-discharging can happen if the cell is
discharged without proper BMS electronics or if the battery is left to sit for
extended periods of time (years) and self-discharges.
10. Visually inspect the battery for defects before charging and using. If any
bulging, punctures, burn marks, etc.. are found, do not use the batteries.
Dispose of the batteries in a safe manner. If you are being very good, you can
caliper the batteries from time to time to make sure there is no bulging.
Handling Fires
1. There are 2 primary issues with a lithium fire. The fire itself and the gases
released. This means that even if you think you can safely extinguish the fire,
you need to keep in mind the fumes and possibly clear away from the fire.
2a. Lithium batteries which are usually in the form of small non-rechargeable
batteries (such as in a watch) in theory require a class D fire extinguisher.
However most people do not have one available. As such, for the most part
you need to just let it burn itself out (it is good that the batteries are usually
small). You can use a standard class ABC fire extinguisher to prevent the
spread of the fire. Avoid using water on the lithium battery itself since the
lithium and water can react violently.
2b. Lithium-ion batteries (including LiFePO4, etc…) that are used on many
robots, are often larger and rechargeable. For these batteries there is not a lot
of actual lithium metal in the battery, so you can use water, class ABC fire
extinguisher, or CO2 extinguisher. CO2 extinguishers are often nice since
they do not leave a bad residue on your electronics. You do not use a class D
extinguisher with these batteries. You will most likely need a lot of water or
extinguishers.
With both of these types of fires, there is a good chance that you will not be
able to extinguish it. There is also a good chance that by the time you get an
extinguisher the battery part of the fire will be out, and you will be dealing
with surrounding combustibles that are on fire. If you can safely be in the
area your primary goal is to allow the battery to burn in a controlled and safe
manner. If possible try to get the battery outside and on a surface that is not
combustible. As a reminder lithium-ion fires are very hot and flames can
shoot out from various places unexpectedly; you need to be careful and only
do what you can do safely. If you have a battery with multiple cells it is not
uncommon for each cell to catch fire separately. So you might see the flames
die down, then shortly after another cell catches fire, and then another; as the
cells cascade and catch fire.
A quick reminder about how to use a fire extinguisher. Remember first you
Pull the pin, then you Aim at the base of the fire, then you Squeeze the
handle, followed by Sweeping back and forth at the base of the fire.
[SOURCE]
3. In many cases the batteries are in an enclosure where if you spray the
robots with an extinguisher you will not even reach the batteries. In this case
your priority is your safety (from fire and fumes), followed by preventing the
fire from spreading. To prevent the fire from spreading you need to make
sure all combustible material is away from the robot. If possible get the
battery pack outside.
In firefighting school a common question is: Who is the most important
person? To which the response is, me!
4. If charging the battery, try to unplug the battery charger from the wall. Again
only if you can do this safely. Electricity should be disconnected before
dousing the fire with water.
CAN and CCP/XCP for Error Detection and
Calibration
Kvaser reached out to me about writing a sponsored post to put on this site. I
have used many Kvaser CAN interfaces in the past. Since they work well and
are reliable I agreed. I also don't know anything about CCP/XCP; so this
should be interesting.
This is my first time accepting a sponsored post (and is the only post on this
site that I did not write). If you like or do not like this let me know in the
comments below.
The Controller Area Network (CAN) is an incredibly robust system. It is
because of this that the protocol can be used for data collection and
calibration across a variety of industries. CAN contains five different error
checks throughout the communication protocol to ensure data are transmitted
correctly. If an error is detected, CAN will retransmit the data until they are
received without error. Partially because of the robustness of the system,
CAN has become the standard within the automotive industry and a
compelling choice for autonomous systems in robotics.
Error Detection With CAN
One of the greatest advantages of using CAN for device communication is
the protocol’s stellar acuity at error detection. During transmission, messages
are tested for errors by each CAN controller connected to the common bus
line. Each message in a CAN bus is given a priority, with error messages
ranking as the highest priority. If an error condition occurs, the CAN frame is
interrupted to force retransmission as soon as possible.
The CAN protocol has five methods of detecting errors, including:
· Bit Monitoring: Each transmitter monitors signal levels, signaling a
Bit Error if the actual bit level reads differently than the one transmitted.
This ensures that all global errors are detected.
· Bit Stuffing: A 6th bit of the opposite level is added to the outgoing
bit stream after 5 consecutive bits of the same level have been transmitted
(it is then removed by the receivers). If more than 5 bits of the same level
are transmitted, a Stuff Error will occur. (See the sketch after this list.)
· Frame Check: Parts of a message will have a fixed format. If the
controller identifies an invalid value in one of these standardized fields, a
Form Error will be signaled.
· Acknowledgement Check: Each node on the bus will send a
dominant level to the acknowledge (ACK) slot when a message is
received. If this is undetected by a transmitter, an Acknowledgement Error
will occur.
· Cyclic Redundancy Check: Each message has a 15-bit cyclic
redundancy check (CRC). If this does not match at a node, a CRC Error
will be signaled. This is the primary way local errors found among
different receivers are detected. In other words, all CAN frames are
broadcast in the CAN protocol, and the receivers of the information are
selected by the software reading the received CAN frame from the CAN
controller. This also ensures that all devices have exactly the same
information available at exactly the same time so that processing is done
with all the same information, allowing for a consistent control system.
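To make the bit stuffing rule concrete, here is a small sketch of the
transmit-side stuffing in Python (my own illustration of the rule, not code
from any CAN controller):

def stuff_bits(bits):
    # After 5 consecutive bits of the same level, insert one bit of the
    # opposite level. The stuff bit itself starts the next run.
    out, last, run = [], None, 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == last else 1
        last = b
        if run == 5:
            out.append(1 - b)      # the stuff bit (opposite level)
            last, run = 1 - b, 1
    return out

print(stuff_bits([1, 1, 1, 1, 1, 1]))  # prints [1, 1, 1, 1, 1, 0, 1]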
CCP/XCP for Calibration
CCP (CAN Calibration Protocol) is the protocol used primarily within the
automotive industry for the calibration of electronic control units (ECUs).
CCP/XCP is a basic specification that makes it possible to read and write to
the internal memories of an ECU. However, this protocol can be used in other
industries, like robotics, for microcontroller development and similar
functions. CCP is used to support the following functions:
· Read and write to ECU memory
· Cyclical data acquisition and concurrent calibration from an ECU
· Handling of multiple nodes on the CAN bus
The most recent version of CCP is the XCP standard, which improves upon
several areas of calibration, including:
· Ease of Use: The XCP protocol has a more clearly defined standard,
leaving less up to user interpretation and needing fewer proprietary
variations.
· Improved Efficiency: More download and programming commands
are available, which improves data transfers between devices and the
ECU. XCP also features auto detection of slaves, a benefit exclusive to
this calibration protocol.
Basic CCP/XCP Commands and Error Handling
Before commands can be sent, a logical connection between master/slave
must be established. This connection will continue until a disconnect
command or new slave node appears. The master controls all communication
between the two. Because commands used in CCP/XCP are generic and not
node-specific, each node has a unique station address. Every reply from slave
to master will contain either data or an error code.
In order to establish a connection with an ECU it is always necessary to use
the commands CONNECT at start and DISCONNECT to stop the
communication link. For configuration of the CCP/XCP link there is a
command EXCHANGE_ID to set up the session.
Common commands for CCP/XCP. [SOURCE]
CCP/XCP has changed over time so in many cases it is necessary for the tool
to handle different versions of the CCP/XCP communication. To make this
possible there is a command GET_CCP_VERSION to learn how to
use CCP/XCP with this particular ECU.
In some cases it is necessary to have some protection, and in that case there
is a seed/key mechanism where the commands GET_SEED and UNLOCK are used.
After the configuration all exchange is a write and read of the memory. There
are two different means to do that, one for larger memory sections where
SET_MTA defines the memory address (virtual or real) and the reading and
writing is done by UPLOAD and DNLOAD. This function is used when
programming the ECU or to read and write larger sections of the memory.
To write parameters there is a second solution using the commands
SET_DAQ_PTR and WRITE_DAQ. For the administration of this there are
two commands GET_DAQ_SIZE and START_STOP. By this function it is
possible to read or write a number of parameters in a data structure that fits
into a data package (8 bytes in CAN). This function is used to set and get
parameters without any knowledge about the actual location in a memory
map. This is used to read back some internal parameters and adjust
controllable parameters to calibrate the ECU to the best behavior.
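Pulling the commands above together, a typical session might look like the
following sketch; send() is a hypothetical helper that transmits one
CCP/XCP command over the CAN interface and returns the slave's reply, and
the addresses and sizes are made-up, so the details will differ per tool and
ECU:

def compute_key(seed):
    # The seed-to-key algorithm is ECU-specific; stubbed here
    return seed

def calibration_session(send):
    send("CONNECT", station_address=0x01)
    version = send("GET_CCP_VERSION")     # adapt to the ECU's protocol version
    send("EXCHANGE_ID")                   # configure the session
    seed = send("GET_SEED")               # optional seed/key protection
    send("UNLOCK", key=compute_key(seed))
    send("SET_MTA", address=0x10000000)   # point at a memory section
    data = send("UPLOAD", size=64)        # read a larger memory block
    send("DNLOAD", data=b"\x01\x02")      # write parameters back
    send("DISCONNECT")
    return version, data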
CCP/XCP is not a complete standard for calibrating an ECU, but it is
essential for making it possible to exchange information with an ECU. Which
information to exchange and how that information is used in the ECU, is not
defined in the CCP/XCP specification.
The CAN Protocol in Action
In 2008, Nexi, an MDS (Mobile, Dexterous, Social) robot, was named by
Time magazine as one of the 50 best inventions for that year. This robot was
considered revolutionary because of its ability to use verbal and non-verbal
communication. CAN was chosen as the communications protocol for Nexi
partly because of the system’s quick and reliable error detection methods. A
dual channel CAN card was used for position feedback for motor control, and
CANLib software was integrated into the existing API to interface with the
CAN bus. In addition to humanoid robots of the future, CAN is also expected
to continue as the staple for the auto industry—including use in autonomous
vehicles.
PolySync (formerly Harbrick) has created a vehicle kit that will allow
developers to design an autonomous vehicle. The revolutionary operating
system simplifies the backend infrastructure, giving developers the freedom
to focus on the code necessary for their unique vehicle. In this system, CAN
continues as the chosen communications protocol because of the system’s
proven reliability. Co-Founder and CEO Josh Hartung explains: "CAN's
real-time behavior, high message integrity and simplicity, among its many
attributes, assures its place within the communication infrastructure of
tomorrow’s vehicles, particularly for device/motion control, timing and data
critical applications.”
CAN has long been the chosen protocol within the automotive industry, and,
as autonomous vehicles emerge, it looks as though this will continue to be
true. Because adoption of autonomous vehicles relies heavily on the
guaranteed safety of users, the reliability of CAN makes the protocol an
obvious choice.
LIDAR vs RADAR: A Detailed Comparison
Hi all
I was recently asked about the differences between RADAR and LIDAR. I
gave the generic answer about LIDAR having higher resolution and accuracy
than RADAR. And RADAR having a longer range and performing better in
dust and smokey conditions.
When asked why RADAR is less accurate and lower resolution than LIDAR,
I sort of mumbled through a response about the wavelength; however I did
not have a good answer, so this post will be my better response.
To start with here is a post about how LIDAR works that you might want to
check out (don’t forget to come back after you read that post).
LIDAR
LIDAR which is short for Light Detection and Ranging uses a laser that is
emitted and then received back in the sensor. In most of the LIDAR sensors
used for mapping (and self driving vehicles) the time between the emission
and reception is computed to determine the time of flight (ToF). Knowing the
speed of light and (1/2 of the) time for the wave to return (since the signal
traveled out and back) we can compute how far away the object was that
caused the light to bounce back. That value is the range information that is
reported by the sensor. LIDARs generally use light in the near-infrared,
visible (but not really visible), and UV spectrums.
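The time-of-flight arithmetic is simple enough to show directly; a minimal
sketch:

SPEED_OF_LIGHT = 299792458.0  # m/s

def tof_range(round_trip_seconds):
    # Half the round trip time, times the speed of light
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(tof_range(66.7e-9))  # a ~66.7 ns round trip is ~10 m of range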
There are some sensors that use triangulation to compute the position
(instead of ToF). These are usually high accuracy, high resolution sensors.
These sensors are great for verifying components on an assembly line or
inspecting thermal tile damage on the space shuttle. However that is not the
focus of this post.
LIDAR data. The top shows the reflectivity data. The bottom shows the range
data with brighter points being farther away. [Source]
The laser beam can also be focused to have a small spot size that does not
expand much. This small spot size can help give a high resolution. If you
have a spinning mirror (which is often the case) then you can shoot the laser
every degree or so (based on the accuracy of the pointing mechanism) for
improved resolution. It is not uncommon to find LIDARs operating at 0.25
degrees of angular resolution.
RADAR
RADAR which is short for Radio Detection and Ranging uses radio waves to
compute velocity, and/or range to an object. Radio waves have less
absorption (so less attenuation) than the light waves when contacting objects,
so they can work over a longer distance. As you can see in the image below
the RF waves have a larger wavelength than the LIDAR waves. The
downside is that if an object is much smaller than the RF wave being used,
the object might not reflect back enough energy to be detected. For that
reason many RADARs in use for obstacle detection will be "high frequency"
so that the wavelength is shorter (hence why we often use mm-wave in
robotics) and can detect smaller objects. However since LIDARs have a
significantly smaller wavelength, they will still usually have a finer
resolution.
Electromagnetic spectrum showing radio waves all the way on the left for
RADAR and near-infrared/visible/ultra-violet waves towards the right for
LIDAR usage.
Source: Inductiveload, NASA [GFDL (http://www.gnu.org/copyleft/fdl.html)
via Wikimedia Commons
Most of the RADARs that I have seen will have a narrow field of view (10's
of degrees) and then just return a single value (or they might have up to a few
dozen channels, see multi-mode RADAR below) for the range of the detected
object. There are tricks that some systems can do using multiple channels to
also get the angle for the range measurement. The angle will not be as high
resolution as most LIDARs. There are also some RADARs on the market
that scan to get multiple measurements. The 2 approaches are a spinning
antenna (such as what you see at airports or on ships) or electronically
"spinning", which is a device using multiple internal antennas with no moving
parts. More advanced (newer) RADARs can do things like track multiple
objects. In many cases they will not actually return the points that outline the
objects (like a LIDAR), but will return a range, bearing, and velocity (range
rate) to an estimated centroid of the detected item. If multiple objects are near
each other, the sensor might confuse them as being one large object and
return one centroid range [Here is the source and a good reference to read].
Using the Doppler frequency shift the velocity of an object can also be easily
determined with relatively little computational effort. If the RADAR sensor
and the detected object are both moving then you get a relative velocity
between the two objects.
[Source]
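As a rough sketch of that computation (the 77GHz carrier is just an example
value, common in automotive RADAR):

SPEED_OF_LIGHT = 3.0e8  # m/s

def doppler_velocity(doppler_shift_hz, carrier_hz=77e9):
    # Relative radial velocity from the Doppler shift of the return
    return doppler_shift_hz * SPEED_OF_LIGHT / (2.0 * carrier_hz)

print(doppler_velocity(1000.0))  # a 1 kHz shift at 77 GHz is ~1.95 m/s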
With RADAR there are commonly two operating modes:
1. Time of Flight – This operates similar to the LIDAR sensor above,
however it uses radio wave pulses for the Time of Flight calculations. Since
the sensor is pulsed it knows when the pulse was sent, so computing the
range can be easier than with the continuous wave sensors (described below).
The resolution of the sensor can be adjusted by changing the pulse width and
the length of time you listen for a response (a ping back). These sensors often
have fixed antennas leading to small operating fields of view (compared to
LIDAR).
There are some systems that will combine multiple ToF radio waves into one
package with different pulse widths. These will allow for various ranges to be
detected with higher accuracy. These are sometimes called multi-mode
RADARs.
2. Continuous Wave – This approach frequency modulates (FMCW) a wave
and then compares the frequency of the reflected signal to the transmitted
signal to determine a frequency shift. That frequency shift can be used to
determine the range to the object that reflected it. The larger the shift the
farther the object is from the sensor (within some bounds). Computing the
frequency shift and the corresponding range is computationally easier than
ToF, plus the electronics to do so are easier and cheaper. This makes
continuous frequency modulated systems popular. Also since separate
transmit and receive antennas are often used this approach can continuously
be transmitting and receiving at the same time; unlike the pulsed ToF
approach which needs to transmit then wait for a response. This feature and
the simple electronics can make FMCW RADAR very fast at detecting
objects.
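A minimal sketch of the FMCW range computation, using the standard
linear-chirp relation (the bandwidth and chirp time are made-up example
values):

SPEED_OF_LIGHT = 3.0e8  # m/s

def fmcw_range(beat_hz, bandwidth_hz, chirp_seconds):
    # Range from the beat (shift) frequency of a linear FMCW chirp
    slope = bandwidth_hz / chirp_seconds   # chirp slope in Hz/s
    return SPEED_OF_LIGHT * beat_hz / (2.0 * slope)

# A 100 kHz beat with a 150 MHz sweep over 1 ms works out to 100 m
print(fmcw_range(beat_hz=100e3, bandwidth_hz=150e6, chirp_seconds=1e-3))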
There is another version of the Continuous Wave RADAR where the wave is
not modulated. These systems are cheap and good for quickly detecting speed
using the Doppler effect, however they cannot determine range. They are
often used by police to detect vehicle speed where the range is not important
[Wikipedia].
SONAR
Unrelated, but while we look at the spectrums above I should note that
SONAR, or Sound Navigation and Ranging, can work in both of the same
modes as RADAR. The wavelength used is even larger than RADAR's; it
would be farther to the left in the spectrum image shown before, off the
chart (sound is a pressure wave rather than an electromagnetic wave).
I should point out that there are cool imaging SONAR sensors. The general
idea is that you can make the sensing wave be vertical so that the horizontal
resolution is very fine (<1 degree) and the vertical resolution is larger (10+
degrees). You can then put many of these beams near each other in a sensor
package. There are similar packages that do this with small wavelength
RADAR.
Cost
$$$
LIDAR Sensors tend to cost more than RADAR sensors for several reasons:
1. LIDAR using ToF needs high speed electronics that cost more
2. LIDAR sensors need CCD receivers, optics, motors, and lasers to
generate and receive the waves used. RADAR only needs some stationary
antennas.
3. LIDAR sensors have spinning parts for scanning. That requires motors and
encoders. RADAR only needs some stationary antennas. (I know this is sort
of similar to the line above).
Computationally Speaking
RADAR sensors tend to generate a lot less data as they just return a single
point, or a few dozen points. When the sensor is multi-channel it is often just
returning a range/velocity to a few centroid(ish) objects. The LIDAR sensors
are sending lots of data about each individual laser point of range data. It is
then up to the user to make that data useful. With the RADAR you have a
simple number, but that number might not be the best. With LIDAR it is up
to the roboticist to generate algorithms to detect the various objects
and discern what is being viewed by the sensor.
Motor Control Systems – Bode Plot &
Stability
What is a control system? A control system alters the future state of its
system to a more desirable outcome. We often work with feedback control
systems (also called closed-loop control), where the result of the command is
fed back into the control system. In particular we are looking at the error
between the command and the measured response. If the output state is not fed
back into the control system it is referred to as an open loop system.
I recently got a question on the Robots for Roboticists forum about tuning
motors and understanding bode plots. Many motor control software packages
come with tuning tools that can do things such as generating bode plots. This
post is mostly a copy of my response to that question.
The frequency response of a system can be hard to visualize by just watching
the motor's motion. You find it by taking the time domain signal (ie. the
motor's motion over time) and converting it into the frequency domain using
a Fourier transform. Once you convert the signal into the frequency domain
we can use Bode plots. Bode plots help us visualize the transfer function of a
control system response to verify its stability.
Random Note:
Assuming you know the transfer function, in an open loop system you can just
use root finding (ie. finding the values that make the equation equal to 0) to
check stability (by making sure that all roots have negative real parts). For
feedback based closed loop systems we can modify the above and solve with a
computer (since math is hard), or use the Bode plot to help understand the
control system better. (I should also point out you can use Routh-Hurwitz to
avoid the complex math, this will need to be another post…)
In general the Bode plots are showing the phase and gain (magnitude) from
your input control signal until it reaches the output (command) in the
frequency domain.
Parts of the Bode Plot
Gain is the shift in value between the input signal and the resultant command.
If you had a system with no gain and no losses, then you would have a straight
horizontal line with 0dB of gain. You will usually see the gain drop off
(decrease) on the right side of the magnitude plot.
The larger the frequency range before the gain starts dropping off, the more
conditions the system will be stable in. Random bumps and curves in the main
curve are often signs of instability. If the gain increases anywhere towards
infinity (ie. very large values) that is often a sign of instability, and you need
to rethink your controller settings! If you have spikes in gain see the filtering
section below.
Quick reminder: Getting gain in dB from the original time domain signal
amplitude:
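This is the standard amplitude-ratio relation:
gain(dB) = 20 · log10(A_out / A_in)
So an output at twice the input amplitude is about +6dB, and at half the input
amplitude about -6dB.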
Bandwidth is the area from the top of the curve until the magnitude degrades
by -3dB. So in the image above you can see the curve dropping at around
100Hz. So the bandwidth of your system is around 100Hz. This tool actually
shows the exact bandwidth value of 118.9Hz in the box at the lower right of
the image. Control inputs past 118.9Hz will be sluggish or unresponsive.
Phase describes the time shift between the input signal and the output
command. A phase of 360° corresponds to a single cycle. So if you have a
1KHz (ie 1000Hz) command signal, each 360 degrees in the chart would
represent 1/1000 of a second (converting frequency to time) or 1 millisecond.
You can
see in the phase diagram that once you exceed the bandwidth it will take
longer for the desired input signal to be transferred to the output signal.
Filtering Specific Frequencies
Various filters (low-pass, notch, etc..) are often applied to remove a
resonance at a specific frequency. For example if you spin a motor and you
see at a specific frequency things start to shake violently, you can add a filter
so the gain will be reduced at that frequency. In general you should tune a
motor with no filters, and only add the filters if needed.
Verifying Control Parameters
You can verify that the selected gains are good by looking at the output
waveform when a step command is applied (in the time domain); this is
sometimes referred to as oscilloscope mode. If there is a lot of initial ringing
(constant changing) or overshoot in the beginning of motion, your gains are
probably too high. If the initial command is slowly reaching the desired output
you might need to increase your gains.
Plots showing common tuning problems followed by a good response curve.
The blue lines are the command, and the red lines are the actual motor
output.
In the image above starting from the left we can see:
1. Overshooting of the output motion, with a little signal ringing as it settles
to the command. A little overshoot is often fine, but we do try to minimize
overshoot while still having a responsive system.
2. Slow response of the output. We might want a faster response so the
system will be less sluggish. As you make the system less sluggish you often
increase the overshooting and potential for instability.
3. Highly unstable. Starts with a bunch of oscillation, followed by a large
spike in command output. We want to avoid this!
4. Finally this right-most plot looks good. The motor output is directly on top
of the commanded motion, with a very slight overshoot.
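To see these behaviors without hardware, here is a minimal simulation
sketch: an assumed second-order motor model (with made-up inertia and
damping values) under proportional control, not any particular vendor's
tuning tool:

def step_response(kp, inertia=0.01, damping=1.0, dt=1e-4, t_end=1.0):
    # Unit step command into u = kp * error driving a second-order plant
    pos, vel, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        effort = kp * (1.0 - pos)          # proportional control
        accel = (effort - damping * vel) / inertia
        vel += accel * dt
        pos += vel * dt
        peak = max(peak, pos)
    return pos, peak

for kp in (10.0, 100.0, 1000.0):
    final, peak = step_response(kp)
    print(f"kp={kp}: final={final:.3f}, overshoot={(peak - 1.0) * 100:.1f}%")

Raising kp makes the response faster but increasingly underdamped, which is
exactly the overshoot/ringing trade-off shown in the plots above.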
Perception in Smoke, Dust or Fog
I recently had the fortune to attend a talk titled Multi-Modal Data for Perception in Smoke-Filled
Underground Mines by Joe Bartels. It was an interesting talk about pushing sensor technology into
difficult environments that can have smoke, dust or fog. The following are notes that I took during the
talk, as always I add a little of my own content to the post. I want to thank Joe for giving the talk and
sharing the images that are used below.
LIDAR
Standard LIDAR often fails in smoke, however newer multi-echo-return LIDARs do better.
Standard LIDAR operation showing initial pulse and the returned pulse
LIDAR operation in smoke showing initial pulse and the multiple returned pulses
Point cloud showing LIDAR initial returns and final returns. Note that the initial returns are just a
cloud in the center around where the robot drove from the smoke. The final return represents the real
structure of the walls, the resolution and quality of the walls is less than if there was no smoke.
(Generated with the Hokuyo UTM-30LX-EW)
Sonar & Radar
Sonar and Radar work better at penetrating the smoke, but lack the resolution of LIDAR. This can be a
problem for creating detailed maps.
Sonar also suffers from a lot of multipath.
Cameras
This work was focused in mines where external lighting is required for visual cameras, as opposed to
LIDAR/Radar/Sonar which do not need illumination to operate.
Lighting
Typical fog lights are angled downwards at the road to minimize light scattering and minimize having
the light reflecting back into the camera. Also the lights should be placed low so that they are far away
from the camera sensor.
Road as viewed with different car lighting methods.
The downward light helps illuminate a path on the ground, but is not high
enough for seeing the environment and building a map (such as for SLAM or
data purposes).
Scattered light is not on the epipolar lines, which interferes with the principles of epipolar geometry
and stereo reconstruction.
You can add a line filter to the camera which allows only a single line of light at a time into the camera
sensor. This prevents the scattered light from entering the camera sensor. The downside to this is the
reduced data returned from the camera.
Thermal Cameras
· Thermal cameras often work well and see through smoke. They are
based on emissivity (energy emitted from an object) as opposed to the
light reflecting from the object (such as with a typical camera).
· An emissivity of 0 would be a mirror, while an emissivity of 1 would be a
true blackbody.
· A problem is that the differential response is small in non-man-made
materials.
· There are 2 types of sensors: short wave & long wave. The short wave
sensors are more expensive and have better resolution, but less visibility
through smoke.
· Thermal imagery lacks the sharp features needed for many mapping
applications.
· You need to be careful since lights and people can appear to be similar
temperatures.
· Wet surfaces can be confusing to interpret.
Same image as shown from a visual camera and a thermal camera
Calibrating Thermal Cameras
People often use a standard calibration target, however this is far from ideal.
The next step that people often try is to artificially heat/cool a target, however it is hard to make the
target a constant temperature and the temperature changes are short-lived. When you do this the black
squares will heat up quickly, and when viewed on a thermal camera the checkerboard colors get inverted.
A better way to calibrate thermal cameras is to make a target of dissimilar materials (multi-modal) that
have different thermal properties.
Standard calibration target viewed from visual and thermal cameras
Standard calibration target that is heated viewed from visual and thermal cameras
Calibration target made from a white surface with black circles attached of a different material
“Smoke” Generation
Different types of smoke/fire produce different perception challenges.
Smoke Candles – White smoke that hangs around. It goes to the ceiling and then slowly drops down.
Wood Fire – Lighter color smoke that hangs near the top of the room. Carbon particles are suspended
in the air.
Diesel Fire – Dark smoke that sticks to the ceiling and is very persistent. Carbon particles are
suspended in the air.
Dust – Harder than smoke since the particles are bigger, however the particle suspension time in the air
is shorter. Radar tends to be a good choice in dusty environments.
Fog – Fog is based on small water droplets suspended in the air. Fog tends to stay lower in the space
and not hug the ceiling. Particularly cold fog stays near the ground, as it heats up it rises before
dissipating. This can be simulated by putting dry ice into hot water.
How to Mount an External Removable Hard
Drive on a Robot
We often want to add an external hard drive to our robots in order to have a dedicated drive for data
collection. Having the dedicated drive lets the primary drive remain uncluttered, have less wear-and-tear, split logging between drives for performance, and potentially be a larger spinning disk (as opposed
to an expensive SSD). It is also useful when the main hard drive is read only.
(For mechanical mounting skip to the end of this post)
So how should you mount an external hard drive to your robot? There are many tutorials and
resources online, however this is how I do it and why (assuming Linux).
In the /etc/fstab file I add the following line:
/dev/sdb1  /media/data  ext4  async,auto,user,exec,suid,rw,nobootwait,noatime  0  0
The /dev/sdb1 needs to be changed based on your hard disk configuration, however typically if you
have one primary drive and one external drive, the external will be sdb1. While you can name a drive
based on its unique ID, I usually just do it based on how the drive appears in /dev. This lets us plug any
(one) drive into the computer and let it be used for data collection.
You need to create a location that the hard disk should be mounted to (for you to access it from). So
before the above change you should create that mount directory. To make the directory above you can
type sudo mkdir /media/data. I generally use Ubuntu, however in other distributions you might want to
use sudo mkdir /mnt/data if /media does not exist.
Next is the file system. Generally in a Linux only environment ext4 is used. If you are in a mixed mode
where you need to connect to Windows or Mac machines you might want to use vfat.
Now for the options. The options I often use are async,auto,user,exec,suid,rw,nobootwait,noatime
async – Lets data be written to the drive asynchronously. In many cases you might want this to be set
to synchronous with sync
auto – Mounts drive on boot and/or with mount -a
user – Allows any user to mount this drive
exec – Lets you execute binaries from the drive (might not be needed if you are only writing data)
suid – Allows the user ID to be set when executing a binary (might not be needed if you are only
writing data)
rw – Read & Write access to the drive when mounted
nobootwait – Does not interrupt the boot sequence if the drive is not connected.
noatime – Disables writing file access times to the drive every time you read a file
Depending on the distro I also sometimes add nofail to work with nobootwait.
With the method of mounting the drive above the secondary drive can be auto mounted to the same
mount point at boot. If the drive is not present at boot, the boot sequence will not fail. After the
computer is up the drive will automatically mount or you might need to type sudo mount -a to mount
the drive.
I spent the top part of this post talking about mounting from the operating
system perspective, however some people might get to this page for physical
mounting.
There are many ways to mount a hard drive, these range from velcro, to
external enclosures, to removable bays.
Here is one way to mount a hard disk for removable data drives that I like:
ICY DOCK ToughArmor MB991SK-B Full Metal 2.5″ SATA 6Gb/s
SSD/HDD Hot Swap Mobile Rack for 3.5″ Device Bay (Fits 7, 9.5, 12.5,
15mm height drive) From Newegg.com
Sled that slides and locks into the hard drive bay. The hard drive goes inside
of this sled. From newegg.com
This provides a secure mount that will not let the drive fall out, while also
protecting the drive. It also allows for swapping drives as disks get full and
we want to log more data. We often fill up a drive and take it offsite to
transfer to a server while filling up another drive. This allows us to easily
rotate drives.
Tethers: Should Your Robot Have One?
Some people swear that all tethers are bad. Some people recommend attaching a tether to robots in
order to provide power (to save the mass and volume of the batteries), for reliable fast communications,
transfer of pressure or fluid, to track a robot's position, or as a safety harness (in case the robot needs to
be dragged out). However before designing a tether there are several things that you need to consider.
Here are some things to think about:
1. What should be in the tether? For example can you put power on the
robot and then only run communications to the robot?
2. Tethers can easily snag on obstacles as the robot drives. This can
cause your robot to get stuck and not be able to proceed forward or get
back out.
3. The tether needs to be managed so that you do not get snagged (as
discussed above) and so that you do not drive over it, which can damage
the tether.
4. Tethers can get bulky. The longer the tether you need, the more mass
and volume you will need to carry and the more heat you will produce.
5. Tethers introduce drag, which the robot will need to overcome while
driving.
6. Abrasion of the tether with use.
7. Cables, wires, and tubes in the tether can break, which might
incapacitate the robot. If the robot is going into a critical environment you
might not be able to retrieve the robot should it fail.
The image at the top of this post shows how a tether can tighten around corners as the robot drives. As
the tether gets pulled into the corners the force on the tether increases, which increases the chance of the
robot getting stuck or breaking, and makes mobility more difficult.
Tension on tether base (neglecting the coefficient of friction from contact points) can be computed as
shown in the image below.
Force on a tethered robot. This shows why it is much harder to pull a robot after it has gone around a
bend.
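When the friction at the contact points is included, a standard way to model this is the capstan
equation, which shows why tension grows so quickly with each bend (μ is the coefficient of friction and
θ is the total bend angle in radians):
T_robot = T_base · e^(μ·θ)
With μ = 0.3, a single 90° bend multiplies the required pulling force by about e^(0.3·π/2) ≈ 1.6, and two
such bends by about 2.6.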
For the reasons above tethers are often deployed from the robot and not from base equipment. In
underwater applications you can usually feed the tether from the base without the problems seen on
land. When choosing where to deploy the tether from you need to consider:
· Can your robot support the volume/mass of the tether?
· Where are the snag points?
· The volume of the tether, which is: Volume = cross-sectional area of the cable × length of the cable
· Heat generated by the tether from I²R losses.
This paper from Dante is a must read for anyone developing a tethered robot. Dante is a robot that
flipped while trying to climb out from a volcano partially due to bad tether angles on difficult terrain.
While people often point to Dante as an example of why tethers are bad, there are many more examples
of tethers being used successfully. In the nuclear accident of Chernobyl many robotic (tele-operated)
vehicles were used in the recovery and cleanup. The robots were used for many
things such as inspection, pushing radioactive material out of the way, turning over radioactive dirt and
moving containers. Most of those robots were tethered (about 55 of the 60 vehicles). Many of those
robots ultimately died due to tether malfunctions, but that was after operating for extended times and
doing good work. With new technology, better cable design, lower powered devices, and better
proprioception of the tether, we should be able to do a better job building tethers.
TReX is another paper that you need to check out before designing a tethered robot.
Roboticists EDC (Every Day Carry)
Recently I have been doing a lot of field work and have seen a lot of people looking for random things
or asking people if they had those items. In many of those cases I had the items they needed. This is not
because I am great or anything, this is from building up my bag over the years. I keep all of these items
in my laptop bag where it becomes my every day carry (EDC) for robotics. So here is the list of what I
carry.
· Laptop bag
· Laptop (I like to get a real serial port, but they are getting harder and
harder to find. If not I get a laptop card with a native serial port.)
· Laptop charging cable (I use a dock at my desk so I can leave the
cable in my bag)
· Serial cable with DB-9 connectors (w/ male & female gender
changers and null modem)
· Ethernet cable w/ coupler
· Pens
· Marker (black Sharpie)
· Zip ties
· Tweaker (small screwdriver with Phillips on one side and flat head
on the other side)
· Multi tip screwdriver
· Leatherman – I always carry this on my belt
· Medical kit (band-aid, alcohol wipe, gauze, tourniquet). Should I
add a CPR mask?
· Multimeter (minimum voltage and continuity)
· Head light (hands free light is great)
· Electrical tape or duct tape (you can wrap tape around a business
card to save space)
· USB memory drive
· VGA to X display adapter (what does your computer use??)
· Cell phone
· Cell phone charger
· $20 (or so) tucked away in a laptop bag pocket
· A few granola bars (or similar) for when you get stuck somewhere
· Sunscreen (travel packets are nice)
· Hand warmers (air activated pads), climate dependent
Things I used to carry and still should:
· Emergency mylar blanket (for covering robot in rain or medical)
· Cloth tape measure
· 8.5 x 11 camera calibration and focus target. It was attached to thin wood.
· Rope (paracord)
· Rain gear
While I was making this list my wife said I should add the following items that I never really gave a
second thought to:
· Feminine hygiene products
· Pain killers (Tylenol, Motrin, etc..)
Automation
Hard automation:
This kind of automation cannot handle product design variations; it is for
mass production, for example conventional machinery, packaging, sewing and
manufacturing small parts. Adjustability is possible, but it can only handle
specific tasks with no possibility of changing its own task. These machines
can be seen in our homes (washing machines, dishwashers, etc).
Programmable Automation:
This form of automation began with the arrival of the computer. People
began programming machines to do a variety of tasks. It is flexible because
of the computer control, and can handle variations in batch production and
product design.
Autonomous (Independent):
Endowed with a decision-making capability through the use of sensors. A
robot belongs to this kind of automation and it is a combination of
microprocessor and conventional automation systems which can provide a
very powerful system. Its high-level machinery capabilities are combined
with fault recognition and correction abilities provided by highly evolved
computer systems. This means it can carry out work traditionally carried out
by humans. Examples of existing autonomous systems are animals and
human beings.
Animals, when they see food, move toward it using their sense of smell, or
they escape when they react to danger due to their sense of fear (sensors).
Human beings are the highest level of autonomous systems because they
think, and they can change plans at any moment due to their high intelligence.
Robots cannot reach the same high level as humans because they are
programmed to do certain tasks according to certain factors which are
completely programmed by human beings; they have no possibility to
change plans like humans or plan new things unless the programmer programs
them to change the plan. Because of the high development of machines,
sensors, actuators, digital electronics and microprocessor technology it became
possible to create a robot which is autonomous (Teijo Lahtinen, Lecture at
Lahti University of Applied Sciences 2009).
Robot applications in our lives
Welding
Welding is considered a dangerous task for a human because of toxic gas
emissions.
Welding robot examples in car factory
The welding job is quite difficult for a person who is required to weld two
pipes from different sides and angles and to sit in a difficult position for a
long time. It can be hard on one's physique and can cause health problems for
the worker. The difficulty for a human is seeing all the sides of the welded
devices: when he needs to weld around a pipe he can only see one side of
the pipe.
Painting has similar problems to welding due to the use of toxic chemical
products. Below is an example picture of a factory robot painting a car as
it moves slowly along a conveyer.
Painting robot examples in car factory
Assembly operation: When we assemble a chip we need to be very precise
because of the very fine wires, which require very precise and accurate work
that a human cannot handle but, on the other hand, is easy for a robot.
Consistent quality at high standards can be achieved by a robot. A robot can
easily be re-programmed many times to reach the highest possible quality,
which a human cannot often achieve.
Safety is especially important when a robot handles chemicals, biochemicals,
toxic and nuclear products. They can be handled very safely and smoothly,
saving humans from carrying out high risk, stress inducing work.
Robots can carefully handle fragile and tiny parts, such as glass, small
chips and wires.
Inspection and maintenance tasks in dangerous areas: for example
handling explosives, exploring the deep sea, space and other planets. One
example is the shipwrecked Titanic. A robot was used to explore the ship's
contents as it lay so deep under the ocean that it was beyond human reach.
Space missions: to gather samples from other planets and to analyse them
from remote distances.
Types of robot
1. Industrial robots: painting and welding robots
Advantages of a painting robot:
Robot painting is even and uniform, with high quality and precision. Robots
can reach very difficult places due to their high degree of flexibility, which
can be difficult for humans but can be achieved easily by robots. A human
needs to carry a heavy painting gun and wear a mask for protection against
toxic chemicals. A robot's repetition rate is high as it does not suffer from
fatigue. The safety levels which can be achieved by using a robot are high,
saving humans from inhaling toxic chemical fumes.
2. Medical robots for surgery
One example of a medical robot
Advantages of a medical robot:
The patient recovers faster. The operation is more precise with fewer mistakes.
The robot can work through small incisions in the body and carry out major
operations with minimal damage to the patient; therefore recovery time is
decreased. The equipment is more hygienic and safe.
3. Mobile robots with legs or wheels for chemical power plants, undersea
or remote areas, and minefields. The advantage of a legged robot is that it can
step over obstacles which can be dangerous, like a bomb, or even
protect objects from being destroyed by the robot moving over them.
Leg robot picture
Example of mobile robot
4. Robotic aircraft and boats without a pilot, which are guided from a
station on the ground and are used by the army or in rescue missions
Example of a robot aircraft
5. Robotic toys for entertainment
6. Robots for cleaning at home and in industry
Vacuum cleaner robot
Required studies in robotics
Robotics is a multidisciplinary area which draws on almost all of the
engineering studies. These include mechanical engineering, electronics,
sensors, actuators, computer science and artificial intelligence.
Extrapolating from nature
As an example, humans and animals have arms and fingers to manipulate
objects, legs for locomotion, muscles as actuators, eyes for vision, a nose
for smelling, ears for hearing, a tongue for tasting, skin for feeling and
nerves for communication between the brain and the actuators.
Comparing robots to humans
Manipulation is equal to Arms and fingers driven by motors and other forms
of actuation. Vision is equal to camera. Hearing is equal to microphone.
Feeling is equal to tactile sensors. Communication is equal to wires, fiber
optics and radio. Brain is equal to computers and microprocessors. Smell and
taste are still under development (Matti Pitkälä, Lecture on Lahti University
of Applied sciences 2011).
Programming a robot by teaching method
This is the same technique we use to teach children to write the alphabet by
holding the child's hand and going through the writing process step by step.
When we are teaching the robot to do a certain job we control the movement
of the robot hand or end effector while at the same time we record the motion
of each individual joint. Then we play back the recording and the robot begins
to move independently as taught. The quality of the recording determines the
quality of the work carried out, so this work is carried out by a skilled worker.
When the work arrives on a conveyer to the robot, the robot replays the stored
recording and performs the required task. Another way to teach a robot to
undertake certain tasks is by use of a program that creates a virtual world.
Then we simulate the work to be carried out and store the robot's joint motion
parameters in the memory. The robot is then capable of replaying the recording.
Typical programming of an industrial robot
An industrial robot can be programmed to move an object from position 1 to
position 5, moving its joints vertically or horizontally to pick up and place
the object, through the following steps:
Define points from P1 to P5:
1. Safely move above work piece (defined as P1)
2. 10 cm above work piece (defined as P2)
3. At position to take work piece from conveyer (defined as P3)
4. 10 cm above conveyer with low speed (defined as P4)
5. At position to leave work piece (defined as P5)
Define program:
1. Move to P1
2. Move to P2
3. Move to P3
4. Close gripper
5. Move to P2
6. Move to P4
7. Move to P5
8. Open gripper
9. Move to P4
10. Move to P1 and finish
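As a sketch of what this program could look like in code (move_to,
close_gripper and open_gripper are hypothetical robot API calls, and P1..P5
stand for the taught points above):

def pick_and_place(robot, P1, P2, P3, P4, P5):
    # Approach and grasp the work piece
    robot.move_to(P1)      # safe position above the work piece
    robot.move_to(P2)      # 10 cm above the work piece
    robot.move_to(P3)      # at the pick position on the conveyer
    robot.close_gripper()
    # Carry the work piece to the place position
    robot.move_to(P2)
    robot.move_to(P4)      # 10 cm above the conveyer, low speed
    robot.move_to(P5)      # at the place position
    robot.open_gripper()
    # Retreat to the safe position
    robot.move_to(P4)
    robot.move_to(P1)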
Accuracy and repeatability of addressable points
Repeatability describes how precisely the robot returns to a stored joint-space
position when we play back a recording taught through the teaching method.
Accuracy is connected to repeatability. "The precision with which a
computed point can be attained is called the accuracy of the manipulator."
For example, a robot that always returns to within 0.1 mm of the same
(slightly wrong) point has good repeatability but poor accuracy.
Example of good and bad accuracy
Technologies Of A Robot
Introduction
In this chapter I will introduce robot subsystems and some of the parts that
are used in a robot's structure. This section will give a brief introduction to
actuators, sensors, motor drives, electronics, power supplies, algorithms and
software, mechanical parts, and the methods of combining these parts.
Sub systems
Actuators and transmission systems: these are solenoids, motor drives, and
pneumatic and hydraulic systems which allow the robot to move. The
mechanical parts are the motors (which usually rotate) and a mechanism to
transfer motion to all the necessary parts of a robot to create the motion that
is required. Usually robots require a power supply; the kind of supply
depends on what a robot is required to do. If it is a mobile robot then you
need to decide the size of the battery as well as its efficiency, since the
power supply will be on board the robot, but if it is not a mobile robot then
electricity can be fed through a supply cable. The power storage system is a
battery or some other electronic device. Sensors are of two types, internal
and external; there are many sensors in a robot and they are considered the
senses of the robot. Microcontrollers and processors are the brain that
controls the whole system. Algorithms and software come in two levels,
high level and low level, and the programmer needs to create software and
algorithms to run the robot in the desired way.
Actuators:
Actuators are essentially the prime movers providing linear force and motion.
Conventional: Pneumatics, hydraulics.
Pneumatic valve system and pneumatic cylinder
Pneumatic and hydraulic design consideration:
With this kind of system there is an input and an output port on the cylinder.
Through these ports we pump air (for a pneumatic system) or clean filtered
oil (for a hydraulic system) to make the piston move out and in, providing
linear force and motion. In a robot system you need to know how far the
piston should extend or retract. In a pneumatic system we cannot control
how far the piston goes out or in unless you put a ring on the piston rod, but
in a hydraulic system we can control the extension of the piston by
controlling the oil flow through flow control valves. A pneumatic system is
used when we do not need a large force to push, while hydraulics is used
when a system demands a large force, especially with big machines. The
problem with a hydraulic system is leakage, which on the other hand is not a
big problem in a pneumatic system since it uses air.
Permanent magnet motors and stepper motors drive the joints of a robot, creating rotational motion.
Servo motor
Design consideration for servo motor:
When we design a robot, we take into consideration the torque, the speed, and the gearbox size, which should not be too heavy for the motor drive’s capacity. We should pay attention to the weight of the motor drives and gearboxes because the base motor has to carry all the other motors and gearboxes, which requires quite a large torque and a stronger motor at the base. The selection should be harmonious, and the motor should match the load. When the motor rotates through a certain angle it should send feedback to the controller, and it receives commands from the controller when it needs to stop rotating; this happens through an encoder, which reads the angle of rotation. Nowadays these controllers are mounted on the back of the motor drive. The controller manipulates voltage and current to control the motor drive’s speed.
Linear motor actuators
Linear motor actuators are used in positioning applications where high speed and accuracy are required. Their main job is to produce a linear force along their length, whether up and down or left and right. The idea is much the same as a hydraulic or pneumatic cylinder; the only difference is that they use electricity rather than oil or air to generate the force.
Linear motor drive actuator
Power supplies (PWM amplifiers): a device for increasing or decreasing the electrical voltage and current. To increase the velocity of the motor drive you need to increase the voltage and current through the power supply amplifier. It is very important to make sure the motor does not heat up because of high voltage or current.
Power supply circuit
Transmission system (Mechanics)
1. Gears: the lighter the gear, the better the motion, with less torque loss and higher speed. Some common types are spur, helical, bevel, worm, and rack and pinion, among many others.
Gear picture
2. Chains:
3. Timing belts: have teeth that mesh with a toothed pulley, which drives the belt around it to transfer motion. They are used nowadays in walking robots.
Timing belt connected to a pulley
4. Metal belts, cables and pulleys
5. Linkages:
Robot example of linkages between a servo motor and pulleys
6. Ball screws: are very important for creating slow linear motion backward and forward. A nut runs on the screw, and the screw’s rotation moves the nut along it; the screw’s pitch determines how fast the nut travels.
Sensors
Simple switch sensors are used to turn the whole cycle, or some part of it, on and off.
Simple switch
Simple circuit with a simple switch
A force sensor measures and controls the force applied. These are mostly used in robot end effectors to measure how strong the grip should be so that it does not crush work pieces. There are different models for different applications, for example variable force control, load and compression sensing, pumps, contact sensing, weighing, and household appliances.
Force sensor
Gyroscopes: a gyroscope is a device for measuring and maintaining orientation, based on the principle of angular momentum. In essence, a mechanical gyroscope is a spinning wheel or disc whose axle is free to take any orientation. This orientation does not remain fixed, but it changes in response to an external torque much less, and in a different direction, than it would without the large angular momentum associated with the disc’s high rate of spin and moment of inertia.
A potentiometer has the same task as an encoder but uses a different method for measuring the angle of rotation: it converts an analogue voltage value from 0–10 V into a digital signal, which gives the number of degrees of rotation of the motor drive. In the picture, a potentiometer is mounted on the gear motor, enabling the DC motor controller to measure the position of the axle.
Servo motor with Potentiometer
A digital rotary encoder measures the rotation of a shaft using lines that define the degrees of rotation, giving the position of the shaft. In other words, it does the same work as a potentiometer but uses a different method for measuring the angle of rotation.
Wheel encoder disc
Wheel encoder
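As an illustration of how an encoder’s lines become an angle, here is a minimal sketch in Python, assuming a quadrature encoder whose counts are read in 4x mode (both parameters are illustrative):

def counts_to_degrees(counts, lines=360, quadrature=4):
    # Each line produces `quadrature` counts, so one revolution
    # corresponds to lines * quadrature counts.
    counts_per_rev = lines * quadrature
    return (counts % counts_per_rev) * 360.0 / counts_per_rev

print(counts_to_degrees(720))  # 180.0 degrees for a 360-line encoder in 4x mode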
A tachometer is essentially a generator. Depending on the velocity of the shaft, it outputs a certain voltage, and this amount gives us feedback about the motor’s state. It is used for control and adjustment. Sometimes tachometer information is obtained from an encoder instead.
Tachometer
Digital cameras are used to locate objects in the robot’s environment; they are the equivalent of the human vision system.
Vision system in robot
Proximity sensors: a proximity sensor is able to detect or recognize the presence of nearby objects without any physical contact with them. There are different types of these sensors, mechanical or optical (for instance infrared). A proximity sensor often emits an electromagnetic field or a beam of electromagnetic radiation (infrared, for instance) and looks for changes in the field or in the return signal. The object being sensed is often referred to as the proximity sensor’s target. Different proximity sensor targets demand different sensors: for example, a capacitive or photoelectric sensor might be suitable for a plastic target, while an inductive proximity sensor requires a metal target.
Proximity sensor from Omron
Electronics
A-to-D and D-to-A converters: these converters change an analogue signal into a digital one, for example mapping 0–12 V onto a single 8-bit value, or vice versa (Robert 2006, 46).
D-to-A converter
Basic circuit for converters
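A minimal sketch of the A-to-D direction in Python, assuming a 0–12 V range quantized to 8 bits (the D-to-A direction is simply the inverse mapping):

def adc_8bit(volts, v_ref=12.0):
    # Quantize an analogue voltage to an 8-bit code (0..255).
    code = int(round(volts / v_ref * 255))
    return max(0, min(255, code))

def dac_8bit(code, v_ref=12.0):
    # Convert an 8-bit code back to a voltage.
    return code / 255 * v_ref

print(adc_8bit(6.0), round(dac_8bit(128), 2))  # 128 and about 6.02 V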
Microcontrollers are very small computer devices used for robot control; a microcontroller contains a processor core, memory, and programmable input/output peripherals.
Microcontroller
A programmable logic controller (PLC) has inputs and outputs that are used to create communication between sensors and actuators: the inputs are the sensors and the outputs are the actuators. Programmable timers are included inside the PLC.
Power electronics are used for running a motor drive and controlling the motor speed, by converting the electrical voltage and current to a level that produces the desired speed in the motor drive.
Power electronics
Algorithms and software
These are the step-by-step procedures and the logic that run the robot: the whole task is planned at the beginning, and the motors and actuators are then controlled using feedback signals obtained from the sensors. The programmer needs to plan the trajectory of each individual actuator’s motion as well as the trajectories of the end effectors, so that in the end we get harmonious motion at a suitable speed, based on the logic of the system and the task requirements.
Servo Motor Design
Introduction
The servo motor is the main prime mover of a robot. This section will cover the servo motor types that concern robots most, servo motor behaviour with respect to torque, speed, current and voltage, how to control the speed, the types of application, and how to choose the right servo motor with a suitable gearbox.
Servo motor main types
DC servo motors are compact and light. There are two main types: permanent magnet (PM) motors and motors with permanent rare earth magnets.
Servo motor
The principle is similar whether we talk about a DC or an AC motor. A conventional motor has stator magnets, a wound rotor with a commutator, and brushes. The drawback of these models is the brushes, which cause electrical sparks that create noise and electrical disturbance for other surrounding electrical devices. Then came the brushless servo motor, which is faster, reaching up to 50,000 rpm. In these models the magnets are in the rotor and the coils are in or around the stator; electronic circuits switch the magnetic fields, and the rotor motion is sensed by a Hall effect sensor. These models have become the most widely used because they give more reliable operation, but they are slightly more costly.
Performance characteristics of a motor drive, based on Image 4.2:
According to Image 4.2 there is a stall torque point and a no-load speed point, and there is a specific voltage that drives the motor between no-load speed and stall torque. We notice that if we load the motor heavily, the speed drops to zero.
Behaviour of a servo motor with different speed and torque
We notice from Image 4.3 that there is a no-load current, and that Kt is the motor torque constant.
Load torque and current
Power control of the motor: this is how the system behaves during
operation.
Load torques and power output diagram
Load torque and efficiency
How to select a motor for a given task:
We need to check in the manufacturer’s catalogue whether the motor can supply the required torque and speed, whether the electronic amplifier is able to carry the required current, and whether we have enough voltage to drive the load; we also need to make sure the motor does not overheat during operation. It is easy to predict how a motor will behave beforehand, because motor manufacturers provide several formulas and curves that help us choose a suitable motor drive.
Application types in servo motor
A. Application – continuous duty operation
When we drive a certain load at a particular or variable speed over a period of time, we need to take into consideration the load torque and speed, and whether the electronic circuit is able to supply the required current and voltage.
B. Application – intermittent operation (intermittent motion):
this has variable speed over variable periods of time; the following drawing describes the motion.
Angular velocity with respect to time
We notice from the curve in the Image above that we have several different periods of time. From 0 to A, lasting t1, is acceleration. From A to B, lasting t2, is uniform speed with zero acceleration. From B to C, lasting t3, is deceleration. From C to D, lasting t4, is the dwell, where both acceleration and velocity are zero.
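This trapezoidal profile is easy to express in code. A minimal sketch in Python, with illustrative times and peak speed (not values from the text):

def velocity(t, w_max=10.0, t1=0.5, t2=2.0, t3=0.5):
    # Angular velocity (rad/s) at time t for one 0-A-B-C-D cycle.
    if t < t1:                      # 0 to A: constant acceleration
        return w_max * t / t1
    if t < t1 + t2:                 # A to B: uniform speed
        return w_max
    if t < t1 + t2 + t3:            # B to C: constant deceleration
        return w_max * (t1 + t2 + t3 - t) / t3
    return 0.0                      # C to D: dwell

print(velocity(0.25), velocity(1.5), velocity(2.9), velocity(3.5))  # 5.0 10.0 ~2.0 0.0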
How to define a suitable servo motor speed
We first need to calculate the speed of the load, the reduction ratio provided by the gearbox, and the horsepower or kW capacity of the motor drive.
Servo motor gearbox
Every motor drive has a certain load, and the motor speed is quite high, for example 3000 rpm or more. We need to reduce the speed by choosing a suitable size for the gearbox, since the gearbox determines the speed of the carried load. If the speed does not stay at the same level but varies over variable periods of time, we need to figure out how to solve this problem.
Choosing a suitable gearbox
Reduction: in most cases we face a reduction, but there are a few cases of increase. We need to know the maximum allowable speed (rpm) of the motor drive from the manual provided by the motor drive manufacturer.
For example, the maximum allowable speed for a motor is 3000 rpm and the transmission ratio is 0.1.
How do we calculate the maximum speed of the load?
(Maximum speed of load) × 2 = 3000 × 0.1
Maximum speed of load = 150 rpm
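The same calculation as a minimal Python sketch, keeping the factor of 2 used in the formula above:

motor_max_rpm = 3000
transmission_ratio = 0.1      # output turns per input turn
extra_factor = 2              # the additional factor of 2 from the formula above

load_max_rpm = motor_max_rpm * transmission_ratio / extra_factor
print(load_max_rpm)           # 150.0 rpm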
Conclusion:
If we know the maximum speed of the load, we can base our choice of motor drive and gearbox size on it, or vice versa.
Speed and load torque diagram
Notice: the higher the speed, the more the available torque drops; and the higher the voltage, the higher the speed.
A Base servo motor example in a robot
In the picture below, an arm operation mechanism for an industrial robot includes a support, a first arm, a second arm, a link base, a parallel link and a conversion mechanism. The first arm has a base end pivotally connected to the support for rotation relative to the support. The second arm has a base end pivotally connected to the tip end of the first arm for rotation relative to the first arm. The link base is pivotally connected to the first arm for rotation relative to the first arm. The parallel link keeps the posture of the link base constant as the first arm rotates. The conversion mechanism converts the rotation of the link base relative to the first arm into rotation of the second arm relative to the link base.
This example shows some servo motor linkages through gears and cables
Controlling inertia
We have to find two inertias
Angular velocity with respect to time
Load torque with respect to time
In the Image above, the torque from 0 to A equals the sum of the torques from A to B and from B to C.
Angular velocity over several periods of time
From 0 to A, during time t1, according to the Image above.
Diagram for selecting suitable motor drive
Resolution
The resolution of a stepper motor
Assume that we mount a stepper motor on a screw with a nut on it. When we run the motor drive forward and backward, the nut moves with the stepper motor, and there is a minimum distance below which the nut cannot move. This limit is an example of motor drive resolution.
Servo motor with ball screw
The servo motor drive gets feedback from an encoder or a potentiometer.
Resolution depends on the number of lines inside the encoder: the higher the resolution you want, the more lines the encoder needs and the more expensive it is. For example, an encoder with 360 lines has a resolution of one degree, but it cannot resolve anything below one degree.
Optical incremental rotary encoder
A potentiometer uses a different method: an analogue signal, which is converted to digital form electronically. Example: assume the potentiometer signal is 10 V and is digitized to 8 bits. Then 2^8 = 256 steps, and 360° / 256 ≈ 1.4° of resolution per step.
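The same arithmetic as a minimal Python sketch:

bits = 8
steps = 2 ** bits                # 256 distinguishable levels
resolution_deg = 360.0 / steps   # angular resolution per step
volts_per_step = 10.0 / steps    # analogue step size of the 0-10 V signal
print(steps, round(resolution_deg, 2), round(volts_per_step, 3))  # 256 1.41 0.039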
Main types of an industrial robot
There are two main types of industrial robot: the first is the industrial manipulator, and the second is the automated guided vehicle (AGV).
Industrial robot
For example, think of your hand when you use it to pick up a pencil: there is a rolling motion at the wrist, but you do not use this motion while writing, so you can eliminate it because the axis is symmetrical. These six motor motions are called the six axes, and they are driven independently.
Main robot motions
Robot types according to their motion
Rectangular coordinate motion (Cartesian):
there are three different motions, X, Y and Z; in other words, this robot can move up and down, left and right, and backward and forward, but it has no rotations. In this kind of model it is easy to control the motion just by giving the coordinates; the robot then moves according to the (x, y, z) values.
Cylindrical coordinate robot: it has rotational movement at the base and Cartesian motion in the upper part.
Spherical coordinate robot: is a robot with two rotary joints and one
prismatic joint.
Articulated arm robot: it looks like a human arm, with a rotating base like a shoulder, an elbow, and a wrist, which gives more motion at angles that are not possible for a Cartesian robot. This model is more complicated to control, because you need to calculate angles, velocities and accelerations to get the desired motion, which requires solving plenty of equations.
Gantry robot: a linear motion robot, also known in industry as a linear robot.
SCARA robot: created in Japan in 1979 for assembly tasks, because it moves in two planes. It is simple to use in assembly operations: when you need to hold a screw vertically, rotate it and push it down, you do not need a very big, sophisticated robot, so the SCARA robot is the best choice for such operations, or for pushing down objects such as gearbox parts.
Scara robot
All these models are used by engineers, and every model has positive and negative sides. Depending on the job requirements, we try to choose the right model to suit them.
Scara robot vs articulated robot:
There are even more features one could compare, but these are the main features that differentiate an articulated robot from a SCARA robot.
End effectors
An end effector is the robot’s hand: it grabs an object and moves it from one place to another. The end effector usually has three rotational motions driven by three different motors, equivalent to the human wrist when lifting objects. End effectors come in different models with different task options.
Robot end effector
The end effector has three motions: a rotating motion, an up-and-down motion at an angle, and a grasping motion for holding objects.
Robot end effector
Notice: since motor drives are heavy, it is better to put all the motors at the base and transmit their motion through linkages, cables and pulleys, so the arm does not need to carry the heavy load.
Industrial Manipulators And Its Kinematics
Introduction
In this section I will try to give an idea of the types of links and joints and their combination into serial chains, and to focus on explaining the term degree of freedom in open and closed chains and how to calculate it. Besides that, I give several drawn examples of different types of links and chains to make the idea easy and clear to understand. At the end of this section I define the workspace of a robot and the types of workspace we have, and I explain the 2R and 3R manipulator workspaces along with direct and inverse kinematics.
First we need to define the following:
A serial chain is a combination of links and joints in space. Notice: we need to understand the term degree of freedom and know how to determine how many degrees of freedom a robot has.
Links and joints
Joints:
Two different types of joints:
1. Revolute joints (R): this joint is powered by a servo motor.
Revolute joint
For example, a robot with three revolute joints is called (RRR) or (3R), which means three degrees of freedom; such a robot is called a planar manipulator.
Notice: we count the R joints starting from the base and moving toward the end effector.
Example of robot with 3 revolute joints
2. Prismatic joints (P): powered by a cylindrical piston, as in a pneumatic or hydraulic system.
Example of prismatic joints
An example with one prismatic and two revolute joints is called (RPR), with three degrees of freedom; this model can be called redundant.
Planar RPR
Example of a robot with five degrees of freedom
Example of a robot with 2RP2R
In this example we count from the base: the first revolute joint is R, then the second revolute joint is R. After that comes one prismatic joint, so we have 2RP so far; we then end with two revolute joints, so the total is 2RP2R.
Degree of freedom
First I need to explain the term degree of freedom (DOF). When I fix a joint and prevent any movement, I can say that this joint has zero degrees of freedom; but when I mount a joint with a motor drive, it loses two degrees of freedom and has just one degree of freedom, because it moves in one plane.
Notice: in space there are six degrees of freedom.
Spherical joints have three degrees of freedom and move in three planes.
A Hooke joint has two degrees of freedom and moves in two planes.
Hooke joint
Types of robotic chains
Degree of freedom in opened chains
Example of four degrees of freedom in open chain
In open chains it is easy to calculate the number of degrees of freedom for a robot: just count the rotational axes and prismatic axes. In the example above we have four revolute joints, which means we have four degrees of freedom.
Degree of freedom in closed chains
How do we calculate the degrees of freedom of a closed chain?
We need to determine the number of links, revolute joints and prismatic joints.
Example 1
Closed chain
In the example above we calculate the degrees of freedom this way:
3(5 − 1) − 2(5) − 0 = 2 DOF
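The formula used here is the planar Gruebler/Kutzbach criterion. A minimal sketch in Python:

def planar_dof(links, full_joints, half_joints=0):
    # DOF = 3*(links - 1) - 2*(one-DOF joints) - (two-DOF joints)
    return 3 * (links - 1) - 2 * full_joints - half_joints

print(planar_dof(links=5, full_joints=5))  # 2, matching Example 1 above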
Example 2
Closed chain by a prismatic joint
Example 3
Closed chain
Stewart platform
How do we calculate the Stewart platform’s degrees of freedom?
Defining work space area
There are two types of workspace: parallel workspace and perpendicular workspace. It is very important to determine the workspace and to know its planes. In picture 6.14 the robot has five degrees of freedom and, as we can see, several rotations with two workspaces. The base and wrist rotate in parallel workspaces; the waist, shoulder and elbow also rotate in workspaces parallel to each other; and the base and wrist rotate in workspaces perpendicular to those of the waist, shoulder and elbow.
Spatial manipulator: a manipulator that has more planes to move through, with more axes perpendicular and parallel to each other, like the industrial robot in the Image.
Example with spatial robot
Notice: this example is called a 5RP manipulator; all these axes are moved by a series of cables and pulleys connected to the drive motor.
The manipulator’s task is to position an object, and we must define how many orientations are possible for a specific position; this is a question for mechanical engineers to answer. By adding more degrees of freedom you can add more orientations and larger ranges of orientation, but the control problem gets bigger. Notice that the number of possible orientations (directions) depends on the position of the object.
Example of positioning an object
In the following picture we have just one orientation
one orientation example
Work space:
we notice from picture 6.17 that if B rotates, the workspace is as we see in the picture; but if we assume that L1 = L2, then C1 will touch the base A, and we notice that we have a bigger workspace (Craig 2005, 102-103).
Example to show a work space
Direct kinematics 2R
Inverse kinematics 2R
The pictures show a 3-axis, or 3R, manipulator robot, with its inverse kinematics and its direct kinematics (Craig 2005, 103-104). Notice the orientation angle.
Direct kinematics 3R
Inverse kinematics 3R
How to define the inverse kinematics of a 2R manipulator
When the base position, the end-effector position and the link lengths are given, we find the solutions by drawing two circles whose centres are the position of the base A and the position of the end effector. The two crossing points then represent the direct and inverse kinematics solutions.
How to define the inverse kinematics of a 3R manipulator
When the end-effector position and the link lengths are given, we find the solution by drawing two circles whose centres are the position of the base A and the position of C. Notice that the position of C stays the same. (Craig 2005, 102-105.)
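The circle-intersection construction above can also be written algebraically with the law of cosines. Here is a minimal Python sketch for the planar 2R case; the two solutions correspond to the two crossing points of the circles (elbow-down and elbow-up):

import math

def ik_2r(x, y, l1, l2):
    # Return the two (theta1, theta2) solutions in radians, if the target is reachable.
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # cos(theta2) by law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    solutions = []
    for sign in (+1.0, -1.0):                          # elbow-down / elbow-up
        t2 = sign * math.acos(c2)
        t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                           l1 + l2 * math.cos(t2))
        solutions.append((t1, t2))
    return solutions

print(ik_2r(1.0, 1.0, 1.0, 1.0))  # two symmetric solutions for this target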
Trajectory Definition
This example shows how to move the box from position p1 to the final position p4 during times t1 to t4.
Moving object from position 1 to position 4
Solution: first we need to calculate the joint variables at t1, t2, t3 and t4; then we need to calculate the positions of the links. This kind of problem can be called inverse kinematics. We should be careful that no accident happens during the motion. The joint variables at each moment determine the position and orientation of all links.
Forward position problem
Given the fixed parameters of the mechanism, the values of the joint variables determine the positions and orientations of all links.
Inverse position problem
Given the fixed links of the mechanism and the position and orientation of the end effector, we determine the values of the joint variables.
Simple example with planar 2R
2R Manipulator
Sketching the position
Example 1:
This is a numerical example for the planar 2R manipulator:
The solution drawn in SolidWorks
3R planar manipulator
3R Manipulator
Sketching the position for 3R manipulator
Example 2:
For the planar 3R manipulator. We can see the result of our calculation in the Image on the following page.
Sketching the position in SolidWorks
Prismatic joints calculation
3P manipulator
We assume, according to the following drawing, that we have three prismatic joints which move in 3D space, so we just need to find the values of S1, S2 and S3.
Position, Orientation, Frames
Introduction
In this section I will try to summarize how to define a position coordinate in space with respect to the origin frame, and how to calculate the transformation when a frame rotates with respect to the base frame.
There are two frames, A and B.
Frame B rotates with respect to frame A.
1. Find the rotation of B in A.
2. Find the coordinates. The coordinates of p are (0 3 1); the coordinates of pB are (0 1 1).
Frame B rotates with respect to frame A
Now we need to find the translation. We add an extra row (0 0 0 1) to make the matrix homogeneous; then we take the product of this matrix with the coordinates of pB (0 1 1), and we get the coordinates of pA (Tapani Kuusi, lecture at Lahti University of Applied Sciences, 2010).
Example 2:
There is a rotation from P to P2 around the X axis.
Rotation around X axis
Example
In the following example frame B rotates with respect to frame A.
We have 3 different rotations.
Rotation axes
Important notice: how do we make the calculation for B rotating around A?
Transformation
We use matrices to transform vectors.
Example: in the following picture frame B rotates around frame A about the z axis by 30 degrees. Here the Z axis points out of the page, and B_P = (0 2 0).
Mapping involving general frames
We need to know how to obtain A_P when its parameters are given in frame B and A_P_BORG is given (Craig 2005, 27-28). We use the following formula.
We notice:
1. Below the rotation matrix we add one row of zeros (0 0 0).
2. Below each vector’s coordinates we add the number 1.
3. The resulting matrix is called a homogeneous transform (4×4).
Example:
Frame B has rotated relative to frame A about the Z axis by 30 degrees and has also been translated.
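As a concrete sketch of this example, the following Python code builds the 4×4 homogeneous transform for a 30-degree rotation about Z plus a translation, and maps B_P = (0, 2, 0) into frame A. The origin A_P_BORG = (1, 0, 0) is an illustrative assumption, not a value from the text:

import math

def homogeneous_z(theta, origin):
    # Rotation about Z combined with a translation, as a 4x4 homogeneous transform.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0, origin[0]],
            [s,  c, 0.0, origin[1]],
            [0.0, 0.0, 1.0, origin[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    ph = [p[0], p[1], p[2], 1.0]   # append a 1 to use homogeneous coordinates
    return [sum(T[i][j] * ph[j] for j in range(4)) for i in range(3)]

T = homogeneous_z(math.radians(30), (1.0, 0.0, 0.0))
print(apply(T, (0.0, 2.0, 0.0)))   # about (0.0, 1.73, 0.0) in frame A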
Translation operators
An operator performs the same rotation and translation, but the interpretation is different.
Example of operator:
Compound transformation
Trajectory Planning In Robotics
Introduction
In this section I will try to cover how to make a path along a trajectory, the basis of creating a path, what data is required for creating a path, and how the system can choose the speed, time and acceleration once we define the basic required data for trajectory planning.
Required data for trajectory planning
When we think about a trajectory, we mainly focus on moving an object from position A to position B.
In trajectory planning we first try to define the following data:
· Initial point
· Final point
· Via points: intermediate points between the initial and final points.
Point-to-point planning produces a continuous path motion, as in welding for example. How do we plan point to point? First we define the task specification.
Mapping:
· World coordinates
· Joint space
Constraints
In order to make a smooth motion, we need to put constraints on the path: the robot should move from the initial point to the final point through the via points, avoiding the intermediate obstacles, within a specified duration of time.
Subject to constraints
Cubic polynomials
The Image below shows the position, velocity and acceleration functions for this motion, sampled at 40 Hz. Note that the velocity profile of any cubic function is a parabola and that the acceleration profile is linear.
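A minimal sketch of such a cubic segment in Python, assuming it starts and ends at rest (the start, goal and duration are illustrative):

def cubic_coeffs(theta0, thetaf, tf):
    # theta(t) = a0 + a1*t + a2*t**2 + a3*t**3 with zero velocity at both ends.
    d = thetaf - theta0
    return theta0, 0.0, 3.0 * d / tf**2, -2.0 * d / tf**3

a0, a1, a2, a3 = cubic_coeffs(0.0, 90.0, 3.0)   # 0 to 90 degrees in 3 s
dt = 1.0 / 40.0                                 # 40 Hz sampling, as in the text
for i in range(0, 121, 40):                     # print a few of the samples
    t = i * dt
    pos = a0 + a1*t + a2*t**2 + a3*t**3
    vel = a1 + 2*a2*t + 3*a3*t**2               # parabolic velocity profile
    acc = 2*a2 + 6*a3*t                         # linear acceleration profile
    print(round(t, 2), round(pos, 1), round(vel, 1), round(acc, 1))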
The user specifies n + 2 points: the initial position, the final position, and the via points, with position and velocity at each. The user has to give Cartesian data, a large amount of data which must be kinematically consistent.
Another constraint: the user interface must be simple. The user should specify the following to create a trajectory:
· Initial position
· Final position
· Via points
Notice: the user specifies positions; velocities are chosen by the system.
Example:
Three points are specified: i, j, k.
This picture shows the speed with respect to time.
Two cubic curves (segments) meet at j, where we require continuity of velocity and acceleration.
Trajectory Planning By Using Robot Studio
Introduction
In this section I will give a brief introduction to how we can create a path in RobotStudio, either through simulation or using the virtual FlexPendant. Since these are two different ways, I will explain each way separately, and I will also introduce how to choose geometry and tools from the ABB library and how to set their positions in the robot environment.
Creating new station and saving it
First we need to choose a robot type by clicking ABB Library (see picture 1).
Picture 1 (robot studio screenshot)
How to open a new station and choose a robot from the ABB library
Select the Import Library icon, then Equipment; then go down and select any tool by clicking it. For example, I will choose an end effector tool from the library (see picture 2).
Picture 2 (robot studio screenshot)
How to select an end effector tool and other geometry from the ABB library
Then in the tree on the left you will see the tool name, but it is not yet connected to the robot’s end effector in the right-hand window. Drag the tool name (PKI_500_di_M2001) and drop it onto your robot (IRB1600ID_4_5_150_03); a new window will open asking whether you want to keep the current position of the tool. Choose No, and the tool will be connected to the end effector of the robot (see picture 3).
Picture 3 (robot studio screenshot)
Example to explain how to connect end effector to the robot
Now you need to select a geometry such as a table: click Import Library, then Equipment, follow the arrow and choose a table by clicking the object; it will then appear in red in the robot environment (see picture 4).
Picture 4 (robot studio screenshot)
How to select a table or other geometry from the ABB library
We can change the position of the table by choosing the Move command from the Freehand group on the Home tab; x and y cross-lines will appear, and we can move the table by dragging the arrowheads right or left and up and down. However, this way is not precise and I do not recommend using it.
To position your geometry precisely, look at the top of the program: you will see the highlighted Part Tools tab with a Modify button under it. Click Modify so a new toolbar opens, find the Set Position button, choose whatever coordinates you need for your table position, and then click Apply to activate the changes (see picture 5).
Picture 5 (robot studio screenshot)
How to change the coordinates of the table position through the Set Position icon
Now you need to save the station: click in the top left corner and select Save As; a new window will open where you can name your station and choose where to save it.
Picture 6 (robot studio screenshot)
How to save a new station
Picture 7 (robot studio screenshot)
Giving the station a name and choosing a location on the computer to save it
Moving the robot in joint space
First we right-click the robot in the tree on the left side of the Layout section. We then have several options, among them Jump Home, which returns the robot to its default position; but you should also focus on using Mechanism Joint Jog and Mechanism Linear Jog, besides the Freehand option.
Picture 8 (robot studio screenshot)
Right-click the IRB robot to get the options we need
When you click Mechanism Joint Jog, a new window opens which looks like picture 9. We have six joints with six coordinates to manipulate, and we can choose each angle by moving the slider left and right in the tree section. The first joint moves the robot left and right, the second joint moves it up and down, the third joint moves up and down as well, and the last three joints move the end effector up and down as well as left and right.
When you choose Mechanism Linear Jog, a new window appears like the one in picture 10. In the same way as in the previous step, moving the sliders left and right changes the coordinate of each joint until we get the desired position. The first joint moves forward and backward, the second moves right and left with linear motion, the third moves up and down with linear motion, and the last three joints move like the first three, but in steps.
Picture 10 (robot studio screenshot)
This picture shows how to change the linear coordinate of each joint
Target teach method
There are two different ways: the first uses the Teach Target icon, and the second uses the virtual FlexPendant. With Teach Target you can create a path more easily than with the virtual FlexPendant, but I will give a short introduction to both ways, and the user can choose the way that suits them for creating the path. First we right-click the robot in the Home layout, then Jump Home as mentioned in the previous tutorial, then click Mechanism Joint Jog and move the robot’s joints (see picture 11).
Picture 11 (robot studio screenshot)
Robot movement options
Move the robot joints to the first desired position, then go up to the Target icon in the Home tab and choose Create Target; a new window will open like the one in picture 12.
Picture 12 (robot studio screenshot)
How to create a path by choosing Target
We go back to the first step and repeat the same process: right-click the robot in the Home layout, then click Mechanism Joint Jog or Mechanism Linear Jog as mentioned in the previous tutorial.
Picture 13 (robot studio screenshot)
Moving linear joint
Repeat step 3 by clicking Create Target to add a new point, then go down and choose Create; the move you just made is then saved by the system. We repeat this again and again until the path points are completely created. When we open the IRB tree on the left, in the Paths & Targets section, we will see Target 10, Target 20, Target 30, ... Target n. Now we create an empty path and drag the targets into the new path (see picture 15).
Picture 14 (robot studio screenshot)
Creating empty path
Picture 15 (robot studio screenshot)
Dragging Targets to the path
At the top we will see Path Tools with a Modify button under it. Select Modify and a new list opens; find the Move Along Path icon, and the robot will debug for a few seconds and then move according to the trajectory that was planned.
Picture 16 (robot studio screenshot)
Robot simulation
Create a program using the virtual FlexPendant – part 1
First click the Offline button, then click Virtual FlexPendant, then ABB in the corner of the new window.
We click Save Program As, save the new program, and give it a name as well.
Give the file name or select a folder on your hard disk, then press OK.
Click Load Program.
We click File and choose New Routine.
Notice: if New Routine cannot be selected and is greyed out, check whether the motor is on or off, and also click the Enable button to activate it; it should be green.
Then we need to save the new routine under a new name.
It will then look like this: select any routine and click Show Routine.
From the Add Instruction icon we get the entire command menu on the right side, but if we need more commands we need to click Next.
Create a program using the virtual FlexPendant – part 2
We first move one joint of the robot, then stop and save the movement by clicking MoveJ (move joint). Then we select just the star (*) and double-click it; a new window opens like the one in picture 3, and we give the movement a name, for example pist1.
Now we need to change the corner-zone radius of the motion by clicking z50 and choosing, for example, z40 from the zone list.
Now we can change the tool name by clicking New, so a new window opens to rename your tool, or select a tool from the tool list.
We repeat the same step many times using the same procedure.
Now we need to give the commands to pick up and put down the object: we first click the command, then 1/0.
We rename it here.
Notice: when you select 1 you choose to pick up the object, and when you select 0 you choose to release it.
Notice: we can also use the commands Set (1) and Reset (0) on the tool to pick up and release the object. Set and Reset are simpler to use: for example, Set picks up the object and Reset releases it.
The Edit command can be used to copy and paste, among other command instructions, from the following icon.
Finally, the Debug command is used for simulating and running the program.
Toto the Robot
There once was a robot named Toto (see Toto’s photo in Image 16.2) that
used just such a representation to learn the map of its office environment at
MIT, in order to hang out with students and help someone get her PhD in
robotics. Let’s see how Toto learned its environment and got around to do its
task. To be honest, Toto did not in fact water any plants, as water and mobile
robots rarely mix well.
Toto the robot.
Toto’s Navigation
Toto was a behaviour-based system; you can see its control diagram in Image
16.3. It was equipped with a simple ring of twelve Polaroid sonars (if you
can’t remember why twelve, go back to Chapter 14), and a low-resolution
compass (4 bits). Toto lived around 1990, in the "old days," well before laser
scanners became commonplace. As in any good BBC, Toto’s controller
consisted of a collection of behaviours. The lowest levels of the system took
care of moving Toto around safely, without collisions. Next, Toto had a
behaviour that kept it near walls and other objects, based on the distance perceived with its sonar sensors.
Toto’s behaviour-based controller
Toto’s sonar-based navigation regions
Toto used a set of sonar sensing regions for navigation (shown in the Image above); its low-level safe navigation was in fact a reactive system. In fact, Toto’s navigation controller used some of the programs we have already seen.
Following boundaries was a good idea for Toto. Remember why? Sonars are most likely to be accurate at short and intermediate distances from objects.
pays to get accurate sensor data when building maps and going places, so
Toto did it by staying near objects whenever possible. It turns out that rats
and mice also like to stick near objects, perhaps for other reasons (safety
from people?), but nobody would have confused Toto with a rat, at least
based on its looks. We’ll see later that Toto’s brain, in particular its use of
distributed representation, was also rat-like, but more on that in a bit.
To summarize, Toto’s navigation layer manifested the following behaviours:
1. The lowest-level behaviour kept it moving safely around its environment
without colliding with static obstacles or moving people.
2. The next level behaviour kept it near walls and other objects in the
environment.
Toto’s Landmark Detection
The next thing Toto did was to pay attention to what it was sensing and how
it was moving. This is generally a good thing to do. It allowed Toto to notice
when it was moving straight in the same direction for a while, or when it was
meandering around a lot. Moving straight indicated that it might be next to a
wall or in a corridor, and so Toto was able to detect walls and corridors.
Meandering around indicated that it was in a cluttered area, and so Toto
would detect that as well. So while Toto moved about, one of its behaviours
was paying attention to what its sonars “saw,” and in that way detecting and
recognizing landmarks: If Toto kept moving along in a consistent compass
direction (in a near-straight line) and kept close to a wall on one side (say the
left), it would recognize and detect a left-wall (LW). If the wall was on the other side, it detected a right-wall (RW), and if it saw walls on both sides, it
recognized a corridor (C). Finally, if it detected nothing straight for a while
(no walls on either side), causing it to meander about, it declared a messy
area (MA). In Toto’s mind, left walls, right walls, corridors, and messy areas
were special, because it could notice them easily from its sensors and its own
behaviour. Therefore, they formed nice landmarks for its map.
Toto’s landmark detection layer also noted the compass direction for each of
the landmarks it detected. Its 4-bit compass (that’s not meant as an insult, it is
an accurate description) indicated north as 0, east as 4, south as 8, west as 12,
and so on, as shown in the Image. In addition to the compass direction of the
landmark, Toto also noted the landmark’s approximate length.
To summarize, Toto’s landmark detection layer manifested the following
behaviour:
1. It kept track of the consistency in the sonar and compass readings, and,
noticing those, recognized one of the landmark types (left-wall, right wall,
corridor, messy-area).
2. For each landmark type, it also noted the compass direction and length of
the landmark.
Toto’s Mapping Behaviours
In order to store those landmarks, Toto constructed a map, as shown in the Image. Here we come upon the really interesting fact about Toto: its
distributed map representation. Each of the landmarks that Toto discovered
was stored in its own behaviour. When Toto’s landmark detection layer
discovered a corridor (C) going north (0) for 6.5 feet, it created a behaviour
that looked something like this:
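The original pseudo-code is not reproduced in this copy, so here is a hedged reconstruction in Python of what such a landmark behaviour might have looked like, based on the description that follows (the names and the length tolerance are illustrative):

MY_TYPE = "C"          # corridor
MY_DIRECTION = 0       # north on the compass scale (0, 4, 8, 12, ...)
MY_LENGTH = 6.5        # approximate length in feet

def matches(landmark_type, direction, length):
    # Active when the detected landmark matches this behaviour's landmark.
    # Matching direction modulo 8 also accepts the opposite heading
    # (C0 matches C8, C4 matches C12), as explained below.
    return (landmark_type == MY_TYPE
            and direction % 8 == MY_DIRECTION % 8
            and abs(length - MY_LENGTH) < 1.0)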
What you see above is pseudo-code, not really Toto’s code but something
close to it and easier to read. The code above simply says that the behaviour
represents a corridor (C) going north (0), so that whenever the input into that
behaviour matches that description (i.e., when the input is C and 0, and the approximate length and location are right), the landmark matches and the robot is in fact in that particular corridor.
An example of an indoor environment Toto navigated in, and a map it constructed of that environment.
Toto’s landmarks were clever: they worked regardless of which way Toto
was going down a corridor or along a wall. A C0 shown above would also
match C8 (because that corridor goes north-south, so Toto might be heading
down the same corridor in the opposite direction), and C4 would also match
C12 (because that corridor goes east-west). The same rule applied to walls as
well: a left wall going north (LW0) matched a right wall going south (RW8),
and so on.
Whenever the landmark-detecting layer detected a landmark, its description
(type and compass direction) was sent to all map behaviours at the same time
(in parallel). If any behaviour in the map matched the input, as described
above, then that behaviour would become active, meaning Toto knew where
it was in its map; this is called localization and is a very important part of the
navigation problem.
Ok, so far we have Toto moving around safely, detecting landmarks, and
reporting those to the behaviour map. Whenever one matches, Toto
recognizes that it is in a place where it has been before.
What happens if no behaviour in the map matches the landmark Toto is
detecting?
That means Toto has discovered a new place/landmark, which needs to be
added to the map. Suppose Toto is seeing a corridor (C) going east (4) that is
approximately 5 feet long. Here is how it adds that new landmark/behaviour to its map: it takes a brand new, empty behaviour “shell” and assigns it the newly found landmark information, so it becomes:
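Continuing the hedged Python reconstruction from above, the freshly assigned shell would simply carry the new landmark’s description:

MY_TYPE = "C"          # corridor
MY_DIRECTION = 4       # east
MY_LENGTH = 5.0        # approximate length in feet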
It then connects the previous behaviour (suppose it is the C0 we just saw
earlier) to the new one by putting a communication link between them. This
means that when Toto is going north on C0, the next landmark it will see is
C4.
To take advantage of knowing where it is (being localized), Toto’s map did
another clever thing: the behaviour that was active sent messages to its
neighbour in the direction in which Toto was traveling (for example, in the
map shown in Image 16.5, C0 sent messages to C4 if Toto was going north,
but to C12 if Toto was going south), to warn it that it should be next to be
active. If the neighbour was next to be recognized, then Toto was even more
confident about the accuracy of its map and its localization. On the other
hand, if the expected neighbour was not recognized next, that usually meant
that Toto had branched off on a new path it had not tried before, and would
soon discover new landmarks to be added to the map.
Besides sending messages to its neighbours to tell them who is coming next,
Toto’s map behaviours sent inhibitory messages to one another. Specifically,
when a behaviour matches the incoming landmark, it sends messages to all of
its neighbours to inhibit them, so that only one map behaviour can be active
at one time, since Toto can really be only in one place at one time. That’s
another part of Toto’s localization. (If this seems like a lot of machinery to
assure Toto where it is, you’ll see in Chapter 19 how hard it is for any robot
to know where it is.)
To summarize, Toto’s mapping layer performed the following behaviours:
1. Matched the received landmark against all landmark behaviours in the map
2. Activated the map behaviour that matched the current landmark
3. Inhibited the previous landmark
4. Reported if no landmark matched, and created a new landmark behaviour
in response, storing the new landmark and connecting it to the previous
landmark.
Path Planning in Toto’s Behaviour Map
As we have seen, while Toto roamed around its environment, it built up a
landmark map of its world. This is an illustration of how you can build a
distributed map with a behaviour-based system. Now let’s see how such a
map is used to find/plan paths.
Let’s suppose that Toto’s watering can was empty, and it needed to go to the
corridor with the water supply, so this landmark became Toto’s goal. If the
goal landmark and Toto’s current position were the same behaviour in the
map, then Toto was already at the goal and did not need to go anywhere.
That’s too lucky to have been the case most of the time. Instead, Toto had to
plan a path between its current location and the goal. Here is how it did that.
The behaviour that corresponded to the goal sent messages saying, “Come
this way!” to all of its neighbours. The neighbours, in turn, sent the messages
on to their own neighbours (but not back the way the message came; that
would be a waste and create some problems). Each map behaviour, when
passing a message along, also added to it its landmark length, so that as the
path went
path went through the network, its length grew based on the number and lengths of the map behaviours it passed through.
Toto’s way of path planning
Rather quickly, these messages all reached the currently active behaviour
which represented wherever Toto was at the time. When they arrived, the
messages contained the total length of the path taken to get there, in terms of
physical landmark length. The current landmark paid attention only to the
message with the minimum summed landmark length, since that indicated the
direction toward the shortest path to the goal.
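Toto did this with distributed message passing, but the net effect is a shortest-path search over the landmark graph, weighted by landmark lengths. Here is a minimal centralized sketch in Python over a small illustrative graph (the landmark names and lengths are made up):

import heapq

# graph[node] = list of (neighbour, length of that neighbour's landmark in feet)
graph = {
    "C0":  [("C4", 5.0), ("LW0", 3.0)],
    "C4":  [("C0", 6.5), ("MA1", 4.0)],
    "LW0": [("C0", 6.5)],
    "MA1": [("C4", 5.0)],
}

def shortest_path_length(goal, current):
    # Total landmark length of the shortest path from the goal to the currently
    # active landmark, mirroring the summed lengths carried by the messages.
    dist = {goal: 0.0}
    queue = [(0.0, goal)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == current:
            return d
        for nbr, length in graph[node]:
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")

print(shortest_path_length("C0", "MA1"))  # 9.0 feet, via C4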
Besides going to a particular landmark, such as a specific corridor, Toto
could also find the nearest landmark with a particular property. For example,
suppose that Toto needed to find the nearest right wall. To make this happen,
all right wall landmarks in the map would start sending messages. Toto
followed the shortest path, and reached the nearest right wall in the map.
Go, Team! Group Robotics
What makes a group of robots a team? If you can control one robot, what’s
different about getting a group of them to do something together? Why is it a
hard thing to do well?
Controlling a group of robots is an interesting problem that presents a whole
new set of challenges compared with controlling one robot. These include:
1. Inherently dynamic environment
2. Complex local and global interaction
3. Increased uncertainty
4. The need for coordination
5. The need for communication.
Having an environment full of robots creates a very fast-changing, dynamic
world. As we already know, the more and the faster the environment changes
around the robot, the harder control can be. Multiple robots inherently create
a complex, dynamic environment because each moves around and affects the
environment and other robots around it. All this naturally leads to increased
complexity of the environment, and also more uncertainty for everyone
involved. A single robot has to contend with uncertainty in its sensors,
effectors, and any knowledge it has. A robot that is part of a group or team
also has to deal with uncertainty about the other robot’s state (Who is that
guy? Where is that guy?), actions (What is he doing?), intentions (What is he
going to do?), communication (What did he say? Did he actually say that?
Who said that?), and plans (What does he plan to do? Does he have a plan?
Will that interfere with my plan?). In a group situation, a robot has a whole
new type of interaction to deal with, one that involves other robots. Just like
people, groups of robots must coordinate their behavior effectively in order to
get a job done as a team. There are different kinds of "jobs" that teams can
do, and different kinds of teams. Coordination has to do with arranging things in some kind of order (ord and ordin mean “order” in Latin). Notice that the definition does not specify how that
order is achieved. Cooperation, on the other hand, refers to joint action with a
mutual benefit. As you will see, some robot teams are coordinated without
really being cooperative, while others must cooperate in order to coordinate
team performance. Quite a large area of active robotics research is concerned
with the challenges of coordinating teams of robots. It is variously called
group robotics, social robotics, team robotics, swarm robotics, and multi-robot systems. Let’s see what kinds of things it studies.
Benefits of Teamwork
To begin with, let’s consider why we would want to use more than one robot
at a time. Here are some potential benefits of using multiple robots:
· It takes two (or more): Some tasks, by their very nature, simply
cannot be done by a single robot. Perhaps the most popular example is
pushing a box (as in Image), a prototypical task that has to do with
transporting a large, heavy, awkward, or fragile object that cannot be
effectively handled by a single robot. Cooperative transportation is
useful in construction and habitat building, clearing barriers after a
disaster, moving wounded people, and many other real-world
applications that require teamwork.
· Better, faster, cheaper: Some tasks do not require multiple robots,
but can be performed better by a team than by an individual robot.
Foraging is the most popular example. Foraging is the process of
finding and collecting objects (or information) from some specified
area. Foraging has been studied a great deal, because it is a prototype
for a variety of real-world applications of group robotics, such as
locating and disabling land mines, collectively distributing seeds or
harvesting crops in agriculture, laying cables in construction, covering
an area with sensors in surveillance, and so on.
A robot team collectively pushing a box; one robot (behind the box) is the watcher, which steers the team toward the goal, while the two others are the pushers. In this case, it takes three to tango, or at least to push this particular box.
Foraging can be performed by a single robot, but if the area to be foraged is large compared with the size of the robot, which is usually the case, the job
can be performed faster by a robot team. However, determining the right
number of robots for such a task is a tricky proposition, since too many will
get in each other’s way, and too few will not be as efficient as a larger team
might be. Determining the right team size is just one of the challenges of
group robotics. We’ll talk about others later in this chapter.
Being everywhere at once: Some tasks require that a large area be monitored so that robots can respond to an emergency event if and wherever it occurs. Sensor-actuator networks are a popular example: groups of mobile sensors in the environment that can communicate with each other, usually through wireless radio, and can move about. By having a large number of sensors in a building, an intruder or an emergency can be detected in any area, and nearby robots can be sent to fix the problem. Such networks are also used for habitat monitoring – for example, for tracking animals and fish in the wild in order to conserve their environment, for detecting and measuring oil spills in the ocean, and for tracking the growth of algae or some contaminant in the water supply. The more such robots there are, the larger the area that can be monitored.
Robot dogs playing soccer.
Having nine lives: Having a team of robots can result in increased robustness in performing a task. Robustness in robotics refers to the ability to resist failure. In a team of robots, if one or a small number of robots fails, the others can make up for it, so that the overall team is more robust than any individual alone would be. This type of robustness results from redundancy, the repetition of capabilities within the team. Of course, not all teams are redundant: if every member of a team is different, so that none can make up for the failures of any other, then the team is not redundant, and accordingly it is not very robust, since the whole team depends on each and every member, and the failure of any one member can break down the entire team. This type of team organization is usually avoided everywhere, from sports to armies.
Challenges of Teamwork
The above-listed advantages of robot teams have their price. Here are some
disadvantages that create the challenges of multi-robot control:
· Get out of my way! Interference among robots on a team is the
main challenge of group robotics. The more robots there are on a team,
the more chance there is that they will interfere with each other. Since
the laws of physics just won’t allow two or more robots to be in the
same place at the same time, there is always the potential for physical
interference on a team. Besides this basic type of interference which
stems from the robot’s physical embodiment, there is also another kind:
goal interference, which has to do with robots’ goals conflicting. One
robot can undo the work of another, intentionally or otherwise, if their
goals are conflicting. It turns out that understanding, predicting, and
managing interference is one of the great challenges of working with
more than one robot at a time.
· It’s my turn to talk! Many multi-robot tasks require the members of
the team to communicate in some way. The most popular means of
communication is through wireless radio. A whole field of engineering
deals with communications and all the challenges that come with it. As
in human communication, there are interference issues here as well. If
the robots share a communication channel, they may need to take turns
sending messages and receiving them, to avoid "talking over each
other", which is the same as collisions in communication-space (rather
than in physical space, as in physical interference). Simply put, the
more robots there are, the more communication may be useful but also
challenging.
· What’s going on? We have already seen that uncertainty is an
inescapable part of a robot’s life. The more robots are involved in a
task, the more uncertainty there can be. Similarly, the less each
individual robot team member knows, the harder it is to get the whole
team to agree, in case such agreement is needed (such as in moving to
the next part of the task, or recognizing when the task is over; not all
tasks require this, as you will see shortly). Some clever multi-robot
controllers use multiple members of a robot team to reduce uncertainty,
but this does not come for free: it requires clever algorithms and
communication among robots.
· Two for the price of one? Even with discounts for bulk, more robots
are always costlier than fewer, and more robots have more hardware
failures and maintenance requirements to worry about as well. Cost is a
practical issue that influences real-life decisions in robotics as it does
everywhere else.
The challenges of group robotics are what makes programming a team of
robots a new and different problem from programming one robot. Before we
look into how to program them, let’s see what kinds of robot teams there are.
Types of Groups and Teams
How would you program a bunch of robots to play soccer?
A wheeled robot and a six-legged robot cooperatively pushing a box.
Would you program each robot to act as if it were alone – to chase the ball
and try to take it, run for the goal and shoot, and run back and protect its own
goal or goalie? What would happen if you did? (What would happen if that
were how people played soccer?) It would be a complete mess! All robots
would run around and into each other, trying to play both offense and
defense, and getting into each other’s way. Soccer, like most other group
activities, requires teamwork and some type of
division of labor or role assignment, giving each member of the team a job so
each knows what to do to help the team as a whole without getting in the way
of the others.
Do all group tasks require division of labor? What about foraging?
Could you just have a group of identical robots all doing foraging and
staying out of each other’s way?
Definitely. This shows us that there are different types of tasks and different
types of collective multi-robot systems. First, teams can be most easily
divided into two types, based on the individuals that constitute them:
Homogeneous teams are those with identical, and therefore interchangeable,
members. Members may be identical in their form (such as all having four
wheels and a ring of sonars) and/or in their function (such as all being able to
find and pick up objects and send messages to other team members).
Homogeneous teams can be coordinated with simple mechanisms, and in
some cases require no active, intentional cooperation to achieve effective
group behaviour.
A highly effective foraging group of merely coexisting robots.
Heterogeneous teams are those with different, non-interchangeable members.
Members may differ in form (some have four wheels, while others have two
wheels and a caster) and/or in function (some play offense by chasing the ball
while some play defense by getting in the way of opponents). Heterogeneous
teams typically require active cooperation in order to produce coordinated
behaviour.
Images show examples of real-world robot teams pushing boxes. Both teams
are heterogeneous, but one is heterogeneous only in form (different bodies,
same function), while the other is heterogeneous in both form and function.
Can you tell which is which? The first, a three-member team, features
members with different roles: two pushers and a watcher, so it is
heterogeneous in both form and function. The second, a two-member team,
features two members with different form (wheels vs. legs) but identical
function (pushing the box).
The next way of classifying robot teams is based on the type of coordination
strategy they use:
Merely coexisting: In this approach, multiple robots work on the same task
in a shared environment, but do not communicate or, in some cases, even
recognize each other. They merely treat each other as obstacles. Such systems
require no algorithms for coordination or communication, but as the number
of robots grows and the environment shrinks, interference increases, quickly
removing any benefit of having a group. Foraging, construction, and other
tasks can be achieved with this approach as long as the team size is carefully designed to suit the task and environment so that interference is minimized.
The Nerd Herd, one of the first multi-robot teams, flocking around the lab.
Let’s take a closer look, using the foraging example: We have a group of
foraging robots whose job is to look over a field for scattered objects, pick
them up, and bring them back to a particular deposit location. While they do
that task, they must avoid collisions with obstacles, including other robots.
You can see how the more robots are doing the task, the more potential
interference there is between them, as long as they are in a bounded, limited
amount of space (which is reasonable to expect, since foraging in the vastness
of outer space is not currently a major robotics application). In this approach,
the robots do not help each other or even recognize each other. Therefore, to
make this work, there is a carefully designed balance among the task
parameters (including the size of the space and the number of objects being
collected), the number of robots, their physical size relative to the
environment, their sensor range, and their uncertainty properties. As you can
see, this takes some work. Image shows a group of robots that very
effectively collected objects into a single pile through merely coexisting.
We’ll see later how they did it.
Loosely coupled: In this approach, the robots recognize each other as
members of a group and may even use simple coordination, such as moving
away from each other to make space and minimize interference. However,
they do not depend on each other for completing the task, so members of the
group can be removed without influencing the behaviour of the others. Such
teams are robust, but difficult to coordinate to do precise tasks. Foraging,
herding, distributed mapping, and related group tasks are well suited for this
approach.
Back to our foraging example. Now, instead of treating each other as
obstacles, the robots can actually react to each other in more interesting ways.
For example, a robot that has found no objects can follow another robot that
is carrying an object, in hopes that it will lead it toward more objects.
Alternatively, a robot can avoid other robots under the assumption that it
should go where others haven’t been yet, and so find as yet undiscovered
objects. The robots can also flow or form lanes as they are heading to deposit
the objects, in order to minimize interference.
Tightly Coupled: In this approach, the robots cooperate on a precise task,
usually by using communication, turn-taking, and other means of tight
coordination. They depend on each other, which gives the system improved
group performance, but also less robustness through redundancy. Removing
team members makes the team performance suffer. Playing soccer, moving in
formation, and transporting objects are done well through the use of tightly
coupled teams.
Returning to foraging yet again: A tightly coupled foraging team would likely
use a global plan in which each member of the team goes to a specific area of
the environment to explore. This is convenient when a map of the area is
available. If no map is given, then the robots may use collaborative SLAM to
build one as they coordinate their exploration and object collection.
Where To Next? The Future of Robotics
Self-Reconfigurable Robotics
In theory at least, a swarm of tiny robots can get together to create any shape.
In reality, robots are constructed with rigid parts and do not shape-shift
easily. But an area of robotics research called reconfigurable robotics has
been designing robots that have modules or components that can come
together in various ways, creating not only different shapes but differently
shaped robot bodies that can move around. Such robots have been
demonstrated to shapeshift from being like six-legged insects to being like
slithering snakes. We say “like” because six-legged insect robots and
snakelike robots have been built before, and were much more agile and
elegant than these reconfigurable robots. However, they could not shapeshift.
The mechanics of physical reconfiguration are complex, involving wires and
connectors and physics, among other issues to consider and overcome.
Individual modules require a certain amount of intelligence to know their
state and role at various times and in various configurations. In some cases,
the modules are all the same (homogeneous), while in others they are not (heterogeneous), which makes the coordination problem in some ways related to the multi-robot control we learned about in Chapter 20.
A reconfigurable robot striking (just one) pose
Reconfigurable
robots that can autonomously decide when and how to change their shapes
are called self-reconfigurable. Such systems are under active research for
navigation in hard-to-reach places, and beyond.
Humanoid Robotics
Some bodies change shape, and others are so complicated that one shape is
plenty hard to manage. Humanoid robots fit into that category. As we learned
in Chapter 4, parts of the human body have numerous degrees of freedom
(DOF), and control of high-DOF systems is very difficult, especially in the
light of unavoidable uncertainty found in all robotics. Humanoid control
brings together nearly all challenges we have discussed so far: all aspects of
navigation and all aspects of manipulation, along with balance and, as you
will see shortly, the intricacies of human-robot interaction.
There is a reason why it takes people so long to learn to walk and talk and be
productive, in spite of our large brains; it is very, very hard. Human beings
take numerous days, months, and years to acquire skills and knowledge, while robots typically do not have that luxury.
Sarcos humanoid robot
Through the development of
sophisticated humanoid robots with complex biomimetic sensors and
actuators, the field of robotics is finally having the opportunity to study
humanoid control, and is gaining even more respect for the biological
counterpart in the process.
Social Robotics And Human-Robot Interaction
Robotics is moving away from the factory and laboratory, and ever closer to
human everyday environments and uses, demanding the development of
robots capable of naturally interacting socially with people. The rather new
field of human-robot interaction (HRI) is faced with a slew of challenges
which include perceiving and understanding human behaviour in real time
(Who is that talking to me? What is she saying? Is she happy or sad or giddy
or mad? Is she getting closer or moving away?), responding in real-time
(What should I say? What should I do?), and doing so in a socially
appropriate and natural way that engages the human participant.
Humans are naturally social creatures, and robots that interact with us will
need to be appropriately social as well if we are to accept them into our lives.
When you think about social interaction, you probably immediately think
about two types of signals: facial expressions and language. Based on what
you know about perception in robotics, you may be wondering how robots
will ever become social, given how hard it is for a moving robot to find,
much less read, a moving face in real time (recall Chapter 9), or how hard it
is for a moving robot to hear and understand speech from a moving source in
an environment with ambient noise.
Fortunately, faces and language are not the only ways of interacting socially.
Perception for human-robot interaction must be broad enough to include
human information processing, which consists of a variety of complex
signals. Consider speech, for example: it contains important nuances in its
rate, volume level, pitch, and other indicators of personality and feeling. All
that is extremely important and useful even before the robot (or person) starts
to understand what words are being said, namely before language processing.
Another important social indicator is the social use of space, called
proxemics: how close one stands, along with body posture and movement
(amount, size, direction, etc.), are rich with information about the social
interaction. In some cases, the robot may have sensors that provide
physiological responses, such as heart rate, blood pressure, body temperature,
and galvanic skin response (the conductivity of the skin), which also give
very useful cues about the interaction.
But while measuring the changes in heart rate and body temperature is very
useful, the robot still needs to be aware of the person’s face in order to “look”
at it. Social gaze, the ability to make eye contact, is critical for normal social
interpersonal interaction, so even if the robot can’t understand the facial
expression, it needs at least to point its head or camera to the face. Therefore,
vision research is doing a great deal of work toward creating robots (and
other machines) capable of finding and understanding faces as efficiently and
correctly as possible. And then there is language. The field of natural
language processing (NLP) is one of the early components of artificial intelligence (AI); it split off from AI early on and has been making great progress.
Unfortunately for robotics, the vast majority of success in NLP has been in
written language, which is why search engines and data mining are very
powerful already, but robots and computers still can’t understand what you
are saying unless you say very few words they already know, very slowly and
without an accent. Speech processing is a challenging field independent from
NLP; the two sometimes work well together, and sometimes not, much the
same as has been the case for robotics and AI. Great strides have been made
in human speech processing, but largely for speech that can be recorded next
to the mouth of the speaker. This works well for telephone systems, which
can now detect when you are getting frustrated by endless menus ("If you’d
like to learn about our special useless discount, please press 9" and the all-time favourite "Please listen to all the options, as our menu has changed"), by
analysing the qualities of emotion in speech we mentioned above. However,
it would be better if robot users did not have to wear a microphone to be
understood by a robot.
The above just scrapes the surface of the many interesting problems in HRI.
This brand-new field that brings together experts with backgrounds not only
in robotics but also in psychology, cognitive science, communications, social
science, and neuroscience, among others, will be great to watch as it
develops.
Manipulation and Dexterity
Contemporary industrial needs drive the applications of robots to ever more
advanced tasks. Robots are required to perform highly skilled jobs with
minimum human assistance or intervention. To extend the applications and
abilities of robots, it becomes important to develop a sound understanding of
the tasks themselves.
In order to devise appropriate arm mechanisms and to develop effective
control algorithms, we need to precisely understand how a given task should
be accomplished and what sort of motions the robot should be able to
achieve. To perform an assembly operation, for example, we need to know
how to guide the assembly part to the desired location, mate it with another
part, and secure it in an appropriate way. In a grinding operation, the robot
must properly position the grinding wheel while accommodating the contact
force. We need to analyze the grinding process itself in order to generate
appropriate force and motion commands.
Remote-center compliance hand
A detailed understanding of the underlying principles and "know-how"
involved in the task must be developed in order to use industrial robots
effectively, while there is no such need for making control strategies explicit
when the assembly and grinding operations are performed by a human
worker. Human beings perform sophisticated manipulation tasks without
being aware of the control principles involved. We have trained ourselves to
be capable of skilled jobs, but in general we do not know what the acquired
skills are exactly about. A sound and explicit understanding of manipulation
operations, however, is essential for the long-term progress of robotics. This
scientific aspect of manipulation has never been studied systematically before
and represents an emerging and important part of robotics research.
Locomotion and Navigation
Robotics has found a number of important application areas in broad fields
beyond manufacturing automation. These range from space and under-water
exploration, hazardous waste disposal, and environment monitoring to
robotic surgery, rehabilitation, home robotics, and entertainment. Many of
these applications entail some locomotive functionality so that the robot can
freely move around in an unstructured environment. Most industrial robots sit
on a manufacturing floor and perform tasks in a structured environment. In
contrast, those robots for non-manufacturing applications must be able to
move around on their own.
Locomotion and navigation are increasingly important, as robots find
challenging applications in the field. This has opened up new research and
development areas in robotics. Novel mechanisms are needed to allow robots
to move through crowded areas, rough terrain, narrow channels, and even
staircases. Various types of legged robots have been studied, since, unlike
standard wheels, legs can negotiate uneven floors and rough terrain.
Among others, biped robots have been studied most extensively, resulting in
the development of humanoids, as shown in Image 1-9. Combining leg
mechanisms with wheels has accomplished superior performance in both
flexibility and efficiency. The Mars Rover prototype shown below has a
rocker-bogie mechanism combined with advanced wheel drives in order to
adapt itself to diverse terrain conditions.
Navigation is another critical functionality needed for mobile robots, particularly in unstructured environments. Such robots are equipped with range sensors and a vision system and are capable of interpreting the data to locate themselves. Often the robot has a map of the environment and uses it for estimating its location. Furthermore, based on the real-time data obtained in the field, the robot is capable of updating and augmenting the map, which is incomplete and uncertain in an unstructured environment. As depicted in Image 1-10, location estimation and map building are executed simultaneously in an advanced navigation system. Such Simultaneous Localization and Mapping (SLAM) is exactly what we humans do in our daily lives, and it is an important functionality of intelligent robots.
The goal of robotics is thus two-fold: to extend our understanding about
manipulation, locomotion, and other robotic behaviors and to develop
engineering methodologies to actually perform desired tasks. The goal of this
book is to provide entry-level readers and experienced engineers with
fundamentals of understanding robotic tasks and intelligent behaviors as well
as with enabling technologies needed for building and controlling robotic
systems.
JPL’s planetary exploration robot: an early version of the Mars Rover
Actuators and Drive Systems
Actuators are one of the key components contained in a robotic system. A
robot has many degrees of freedom, each of which is a servoed joint
generating desired motion. We begin with basic actuator characteristics and
drive amplifiers to understand the behavior of servoed joints.
Most of today’s robotic systems are powered by electric servomotors.
Therefore, we focus on electromechanical actuators.
DC Motors
The Image illustrates the construction of a DC servomotor, consisting of a stator, a rotor,
and a commutation mechanism. The stator consists of permanent magnets,
creating a magnetic field in the air gap between the rotor and the stator. The
rotor has several windings arranged symmetrically around the motor shaft.
An electric current applied to the motor is delivered to individual windings
through the brush-commutation mechanism, as shown in the Image. As the
rotor rotates, the polarity of the current flowing to the individual windings is
altered. This allows the rotor to rotate continually.
Construction of DC motor
The torque generated at the rotor is proportional to the current applied to the windings:

$\tau_m = K_t\, i \qquad (1)$

where the proportionality constant $K_t$ is called the torque constant, one of the key parameters describing the characteristics of a DC motor. The torque constant is determined by the strength of the magnetic field, the number of turns of the windings, the effective area of the air gap, the radius of the rotor, and other parameters associated with material properties.
In an attempt to derive other characteristics of a DC motor, let us first consider an idealized energy transducer having no power loss in converting electric power into mechanical power. Let E be the voltage applied to the idealized transducer. The electric power is then given by $E \cdot i$, which must be equivalent to the mechanical power:

$E \cdot i = \tau_m\, \omega_m \qquad (2)$

where $\omega_m$ is the angular velocity of the motor rotor. Substituting eq.(1) into eq.(2) and dividing both sides by i yields the second fundamental relationship of a DC motor:

$E = K_t\, \omega_m \qquad (3)$
The above expression dictates that the voltage across the idealized power
transducer is proportional to the angular velocity and that the proportionality
constant is the same as the torque constant given by eq.(1). This voltage E is
called the back emf (electro-motive force) generated at the air gap, and the
proportionality constant is often called the back emf constant.
Note that, based on eq.(1), the unit of the torque constant is Nm/A in the metric system, whereas that of the back emf constant, based on eq.(3), is V/rad/s.
Show that the two units, Nm/A and V/rad/s, are identical.
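As a quick check (a dimensional-analysis sketch added here, not part of the original text):

$\frac{\mathrm{N\,m}}{\mathrm{A}} = \frac{\mathrm{kg\,m^2\,s^{-2}}}{\mathrm{A}}, \qquad \frac{\mathrm{V}}{\mathrm{rad/s}} = \mathrm{V\,s} = \frac{\mathrm{W\,s}}{\mathrm{A}} = \frac{\mathrm{kg\,m^2\,s^{-2}}}{\mathrm{A}}$

since the radian is dimensionless and $\mathrm{V} = \mathrm{W/A}$. Both constants share the same SI dimensions, so a single number characterizes both the torque generation and the back emf of a given motor.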
The actual DC motor is not a loss-less transducer, having resistance at the rotor windings and the commutation mechanism. Furthermore, the windings may exhibit some inductance, which stores energy. The Image shows the schematic of the electric circuit, including the winding resistance R and inductance L. The voltage balance of the armature circuit is

$u = R\,i + L\,\frac{di}{dt} + E \qquad (4)$

where u is the voltage applied to the armature of the motor.
Electric circuit of armature
In steady state, neglecting the inductance, we can obtain the actual relationship among the applied voltage u, the rotor angular velocity $\omega_m$, and the motor torque $\tau_m$:

$\tau_m = \frac{K_t}{R}\,u - \frac{K_t^2}{R}\,\omega_m$

This is called the torque-speed characteristic. Note that the motor torque increases in proportion to the applied voltage, but the net torque reduces as the angular velocity increases. The Image illustrates the torque-speed characteristics. The negative slope of the straight lines, $-K_t^2/R$, implies that the voltage-controlled DC motor has an inherent damping in its mechanical behavior.
The power dissipated in the DC motor is given by

$P_{dis} = R\,i^2 = \frac{R}{K_t^2}\,\tau_m^2$

Taking the square root of both sides yields

$\frac{\tau_m}{\sqrt{P_{dis}}} = \frac{K_t}{\sqrt{R}} = K_m$

where the parameter $K_m$ is called the motor constant. The motor constant represents how effectively electric power is converted to torque. The larger the motor constant, the larger the output torque generated with less power dissipation. A DC motor with more powerful magnets, thicker winding wires, and a larger rotor diameter has a larger motor constant. A motor with a larger motor constant, however, has larger damping, as the negative slope of the torque-speed characteristic becomes steeper, as illustrated.
Torque-speed characteristics and output power
Taking into account the internal power dissipation, the net output power of the DC motor is given by

$P_{out} = \tau_m\,\omega_m = \frac{K_t}{R}\left(u - K_t\,\omega_m\right)\omega_m$

This net output power is a parabolic function of the angular velocity, as
illustrated in Image 2.1.3. It should be noted that the net output power
becomes maximum in the middle point of the velocity axis, i.e. 50 % of the
maximum angular velocity for a given armature voltage u. This implies that
the motor is operated most effectively at 50 % of the maximum speed. As the
speed departs from this middle point, the net output power decreases, and it
vanishes at zero speed as well as at the maximum speed. Therefore, it is important to select the motor and gearing combination so that maximum power transfer is achieved.
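To make these relationships concrete, here is a minimal numerical sketch (added for illustration; the motor parameters are made up, not taken from any particular datasheet). It evaluates the torque-speed line and the output-power parabola, confirming that peak power occurs at 50% of the no-load speed:

# Torque-speed and output-power sketch for a voltage-driven DC motor.
Kt = 0.05   # torque constant [Nm/A] (hypothetical value)
R = 1.2     # armature resistance [ohm] (hypothetical value)
u = 12.0    # applied armature voltage [V]

def torque(omega):
    # Steady-state torque-speed characteristic.
    return (Kt / R) * u - (Kt ** 2 / R) * omega

def power(omega):
    # Net output power P = torque * angular velocity (parabolic).
    return torque(omega) * omega

omega_noload = u / Kt            # speed at which the torque vanishes
omega_peak = 0.5 * omega_noload  # peak output power at half of it

print("stall torque :", torque(0.0), "Nm")
print("no-load speed:", omega_noload, "rad/s")
print("peak power   :", power(omega_peak), "W at", omega_peak, "rad/s")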
Dynamics of Single-Axis Drive Systems
DC motors and other types of actuators are used to drive individual axes of a
robotic system. The Image shows a schematic diagram of a single-axis drive system consisting of a DC motor, a gear head, and arm links. An electric motor, such as a DC
motor, produces a relatively small torque and rotates at a high speed, whereas
a robotic joint axis in general rotates slowly, and needs a high torque to bear
the load. In other words, the impedance of the actuator,

$Z_m = \frac{\tau_m}{\omega_m},$

is much smaller than that of the load.
Joint axis drive system
Although a robotic system has multiple axes driven by multiple actuators
having dynamic interactions, we consider behavior of an independent single
axis in this section, assuming that all the other axes are fixed.
To fill the gap we need a gear reducer, as shown in Image 2.2.1. Let r > 1 be the gear reduction ratio (if $d_1$ and $d_2$ are the diameters of the two gears, the gear reduction ratio is $r = d_2 / d_1$). The torque and angular velocity are changed to:

$\tau_{load} = r\,\tau_m, \qquad \omega_{load} = \frac{\omega_m}{r}$

where $\tau_{load}$ and $\omega_{load}$ are the torque and angular velocity at the joint axis, as shown in the Image. Note that the gear reducer of gear ratio r makes the impedance seen at the joint axis $r^2$ times larger than that of the motor axis $Z_m$:

$Z_{load} = \frac{\tau_{load}}{\omega_{load}} = r^2\,Z_m$
In the following, $I_m$ denotes the inertia of the motor rotor and $\dot\omega_m$ the time rate of change of the angular velocity, i.e. the angular acceleration. Let I be the inertia of the arm link about the joint axis, and b the damping coefficient of the bearings supporting the joint axis. Considering the free body diagram of the arm link and joint axis yields

$\left(I + r^2 I_m\right)\dot\omega_{load} + \left(b + r^2\,\frac{K_t^2}{R}\right)\omega_{load} = r\,\frac{K_t}{R}\,u \qquad (7)$

Note that the effective inertia of the motor rotor is $r^2$ times larger than its original value when reflected to the joint axis. Likewise, the motor's inherent damping constant, $K_t^2/R$, becomes $r^2$ times larger when reflected to the joint axis. The gear ratio of a robotic system is typically 20 ~ 100, which means that the effective inertia and damping become 400 ~ 10,000 times larger than those of the motor itself.
For fast dynamic response, the inertia of the motor rotor must be small. This
is a crucial requirement as the gear ratio gets larger, as it is in robotics
applications. There are two ways of reducing the rotor inertia in motor
design. One is to reduce the diameter and make the rotor longer. The other is
to make the motor rotor very thin, like a pancake.
Long and slender
Pancake
Most robots use long and slender motors, as in Image (a), and some heavy-duty robots use the pancake type motor; the Image also shows a pancake motor by Mavilor Motors, Inc.
Assuming that the angular velocity of a joint axis is approximately zero,
obtain the optimal gear ratio r in eq.(7) that maximizes the acceleration of the
joint axis.
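A sketch of the solution, added here for completeness and assuming the joint-axis dynamics reconstructed in eq.(7) with $\omega_{load} \approx 0$ so that the damping terms drop out: the acceleration produced by a given motor torque $\tau_m$ is

$\dot\omega_{load} = \frac{r\,\tau_m}{I + r^2 I_m}$

Setting $d\dot\omega_{load}/dr = 0$ gives $I - r^2 I_m = 0$, i.e. the optimal gear ratio

$r^* = \sqrt{\frac{I}{I_m}}$

at which the rotor inertia reflected to the joint axis exactly equals the load inertia; this is the classic inertia-matching condition.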
Power Electronics
Performance of servomotors used for robotics applications highly depends on
electric power amplifiers and control electronics, broadly termed power
electronics. Power electronics has shown rapid progress in the last two
decades, as semiconductors became faster, more powerful, and more
efficient. In this section we will briefly summarize power electronics relevant
to robotic system development.
Pulse width modulation (PWM)
In many robotics applications, actuators must be controlled precisely so that
desired motions of arms and legs may be attained. This requires a power
amplifier to drive a desired level of voltage (or current indirectly) to the
motor armature, as discussed in the previous section. Use of a linear amplifier
(like an operational amplifier), however, is power-inefficient and impractical,
since it entails a large amount of power loss. Consider a simple circuit
consisting of a single transistor for controlling the armature voltage. Let V be
the supply voltage connected to one end of the motor armature. The other end
of the armature is connected to the collector of the transistor. As the base
voltage varies the emitter-collector voltage varies, and thereby the voltage
drop across the motor armature, denoted u in the Image, varies accordingly.
Let i be the collector current flowing through the transistor. Then the power loss dissipated at the transistor is given by

$P_{loss} = (V - u)\,i = \frac{(V - u)\,u}{R}$

where R is the armature resistance. The Image plots the internal power loss at the transistor against the armature voltage. The power loss becomes largest in
the middle, where half the supply voltage V/2 acts on the armature. This large
heat loss is not only wasteful but also harmful, burning the transistor in the
worst case scenario. Therefore, this type of linear power amplifier is seldom
used except for driving very small motors.
Analogue power amplifier for driving the armature voltage
Power loss at the transistor vs The armature voltage.
An alternative is to control the voltage via ON-OFF switching. Pulse Width
Modulation, or PWM for short, is the most commonly used method for
varying the average voltage to the motor. From the power-loss plot above, it is clear that the heat loss is zero
when the armature voltage is either 0 or V. This means that the transistor is
completely shutting down the current (OFF) or completely admitting the
current (ON). For all armature voltages other than these complete ON-OFF
states, some fraction of power is dissipated in the transistor. Pulse Width
Modulation (PWM ) is a technique to control an effective armature voltage
by using the ON-OFF switching alone. It varies the ratio of time length of the
complete ON state to the complete OFF state. Image 2.3.3 illustrates PWM
signals. A single cycle of ON and OFF states is called the PWM period,
whereas the percentage of the ON state in a single period is called duty rate.
The first PWM signal has a 60% duty rate, and the second one 25%. If the supply voltage is V = 10 volts, the average voltages are 6 volts and 2.5 volts, respectively.
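As a small illustrative sketch (not part of the original text), the average armature voltage follows directly from the duty rate:

# Average voltage delivered by PWM: u_avg = duty * V.
def pwm_average_voltage(supply_v, duty):
    # duty is the ON fraction of one PWM period, between 0.0 and 1.0.
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty rate must be between 0 and 1")
    return duty * supply_v

# Reproduces the text's example with a 10 V supply:
print(pwm_average_voltage(10.0, 0.60))  # 6.0 V
print(pwm_average_voltage(10.0, 0.25))  # 2.5 V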
The PWM period is set to be much shorter than the time constant associated
with the mechanical motion. The PWM frequency, that is, the reciprocal of the
PWM period, is usually 2 ~ 20 kHz, whereas the bandwidth of a motion
control system is at most 100 Hz. Therefore, the discrete switching does not
influence the mechanical motion in most cases.
Pulse width modulation
As modeled in eq.(2.1.4), the actual rotor windings have some inductance L. If the electric time constant ($t_e = L/R$) is much larger than the PWM period, the actual current flowing to the motor armature is a smooth curve, as illustrated.
In other words, the inductance works as a low-pass filter, filtering out the
sharp ON-OFF profile of the input voltage. In contrast, if the electric time
constant is too small, compared to the PWM period, the current profile
becomes zigzag, following the rectangular voltage profile, as shown in Image
2.3.4-(b). As a result, unwanted high frequency vibrations are generated at
the motor rotor. This happens for some types of pancake motors with low
inductance and low rotor inertia.
Current to the motor is smoothed due to inductance
PWM switching characteristics
As the PWM frequency increases, the current driven to the motor becomes
smoother, and the nonlinearity due to discrete switching disappears.
Furthermore, high PWM frequencies cause no audible noise of switching.
The noise disappears as the switching frequency becomes higher than the
human audible range, say 15 kHz. Therefore, a higher PWM frequency is in
general desirable. However, it causes a few adverse effects. As the PWM
frequency increases:
• The heat loss increases and the transistor may overheat,
• Harmful large voltage spikes and noise are generated, and
• Radio frequency interference and electromagnetic interference become prominent.
The first adverse effect is the most critical one, which limits the capacity of a
PWM amplifier. Although no power loss occurs at the switching transistor
when it is completely ON or OFF, a significant amount of loss is caused
during transition. As the transistor state is switched from OFF to ON or vice
versa, the transistor goes through intermediate states, which entail heat loss.
Since it takes some finite time for a semiconductor to make a transition, every
time it is switched, a certain amount of power is dissipated. As the PWM
frequency increases, more power loss and, more importantly, more heat
generation occur. Image 2.3.5 illustrates the turn-on and turn-off transitions
of a switching transistor. When turned on, the collector current Ic increases
and the voltage Vce decreases. The product of these two values provides the
switching power loss as shown by broken lines in the Image. Note that turn-off takes a longer time, hence it causes more heat loss.
Transient responses of transistor current and voltage and associated
power loss during turn-on and turn-off state transitions
From the Image, it is clear that a switching transistor having fast turn-on and turn-off
characteristics is desirable, since it causes less power loss and heat
generation. Power MOSFETs (Metal-Oxide-Semiconductor Field-Effect
Transistors) have very fast switching characteristics, enabling 15 ~ 100 kHz
of switching frequencies. For relatively small motors, MOSFETs are widely
used in industry due to their fast switching characteristics. For larger motors,
IGBTs (Insulated Gate Bipolar Transistor) are the rational choice because of
their larger capacity and relatively fast response.
As the switching speed increases, the heat loss becomes smaller. However,
fast switching causes other problems. Consider eq.(2.1.4) again, the dynamic equation of the armature:

$u = R\,i + L\,\frac{di}{dt} + K_t\,\omega_m$

High speed switching means that the time derivative of the current i is large. This generates a large inductance-induced kickback voltage $L\,di/dt$ that often damages switching semiconductors. As illustrated, a large spike is induced when the semiconductor switches off. To get rid of this problem a free-wheeling diode (FWD) is inserted across the motor armature, as shown in Image 2.3.6-(b). As the voltage across the armature exceeds a threshold level, the FWD kicks in to bypass the current so that the voltage may be clamped.
Voltage spike induced by inductance (a), free-wheeling diode (b), and the
clamped spike with FWD (c)
High speed PWM switching also generates Electromagnetic Interference
(EMI), particularly when the wires between the PWM amplifier and the
motor get longer. Furthermore, high speed PWM switching may incur Radio-Frequency Interference (RFI). Since the PWM waveforms are square,
significant RFI can be generated. Whenever PWM switching edges are faster
than 10 µs, RFI is induced to some extent. An effective method for reducing
EMI and RFI is to put the PWM amplifier inside the motor body. This motor
architecture, called Integrated Motor or Smart Motor, allows confining EMI
and RFI within the motor body by minimizing the wire length between the
motor armature and the power transistors.
The H-bridge and bipolar PWM amplifiers
In most robotics applications, bi-directional control of motor speed is
necessary. This requires a PWM amplifier to be bipolar, allowing for both
forward and backward rotations. The architecture described in the previous
section needs to be extended to meet this bipolar requirement. The H-Bridge
architecture is commonly used for bipolar PWM amplifiers. As shown in
Image 2.3.7, the H-Bridge architecture resembles the letter H in the
arrangement of switching transistors around the motor armature. Switching
transistors, A and B are pulled up to the supply voltage V, whereas transistors
C and D are connected to ground. Combinations of these four switching
transistors provide a variety of operations. In Image (i), gates A and D are
ON, and B and C are OFF. This gate combination delivers a current to the
armature in the forward direction. When the gate states are reversed, as
shown in Image (ii), the direction of current is reversed. Furthermore, the
motor coasts off when all the gates are turned OFF, since the armature is
totally isolated or disconnected as shown in Image (iii). On the other hand, the armature windings are short-circuited when both gates C and D are turned ON and A and B are turned OFF. See Image (iv). This shorted circuit provides a “braking” effect when the motor rotor is rotating.
Forward motion
Reverse motion
The motor armature coasts off
The motor windings are short-circuited, causing a braking effect.
It should be noted that there is a fundamental danger in the H-bridge circuit.
A direct short circuit can occur if the top and bottom switches connected to
the same armature terminal are turned on at the same time. A catastrophic
failure results when one of the switching transistors on the same vertical line
in Image 2.3.7 fails to turn off before the other turns on. Most of the H-bridge
power stages commercially available have several protection mechanisms to
prevent the direct short circuit.
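To summarize the four operating modes, here is a small sketch (added for illustration; the gate naming and leg pairing follow the A/B high-side, C/D low-side description above, with A-C and B-D assumed to share a leg). It encodes the gate combinations and rejects shoot-through states:

# H-bridge gate states as (A, B, C, D); A/B high-side, C/D low-side.
MODES = {
    "forward": (True,  False, False, True),   # A and D conduct
    "reverse": (False, True,  True,  False),  # B and C conduct
    "coast":   (False, False, False, False),  # armature disconnected
    "brake":   (False, False, True,  True),   # windings shorted via C, D
}

def set_bridge(mode):
    a, b, c, d = MODES[mode]
    # Shoot-through guard: the high-side and low-side switches of the
    # same leg must never conduct at the same time.
    if (a and c) or (b and d):
        raise ValueError("shoot-through: direct short across the supply")
    return a, b, c, d

print(set_bridge("forward"))  # (True, False, False, True)

Real power stages implement this guard in hardware, typically adding a short dead time between turning one switch off and the other on.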
Robot Controls and PWM Amplifiers of the 2.12 Laboratory
DC motors and PWM amplifiers, the two most important components
involved in a robot power train, have been described. Now we are ready to
introduce the specific drive system and controls to be used for building robots
for the design project.
This term we will use controllers and drives produced by Innovation First,
Inc. The system consists of bipolar PWM amplifiers, a PIC-based on-board
robot controller with a wireless modem, and a stationary controller hooked up to a
laptop computer. Potentiometers are used for measuring the angular
displacement of joint axes. They are connected to A/D converter ports of the
on-board controller for position feedback control. Additional sensors can be
hooked up to the on-board controllers. A C-language based development
environment is available for the system.
Bipolar PWM amplifier with a built-in cooling fan, IFI, Inc.
On-board and stationary controllers, IFI, Inc.
Control system operation diagram, IFI, Inc.
Optical Shaft Encoders
The servomechanism described in the previous section is based on analogue
feedback technology, using a potentiometer and a tachometer generator.
These analogue feedbacks, although simple, are no longer used in industrial
robots and other industrial applications, due to limited reliability and
performance. A potentiometer, for example, is poor in reliability, resolution,
accuracy, and signal to noise ratio. The output tap of the variable resistance
slides on a track of resistive material, making a mechanical contact all the
time. This slide contact causes not only electric noise but also wear of the
contacting surfaces. The resolution and S/N ratio of the sensor are also
limited by the mechanical contact. Furthermore, linearity depends on the
uniformity of the resistive material coated on the substrate, and that is a
limiting factor of a potentiometer’s accuracy. Today’s industrial standard is
optical shaft encoders, having no sliding contact. This will be discussed next.
Basic principle
An optical encoder consists of a rotating disk with grids, light sources,
photodetectors, and electronic circuits. As shown in Image 2.5.1, a pattern of
alternating opaque and translucent grids is printed on the rotating disk. A light source and a photodetector are placed on opposite sides of the rotating disk. When an opaque grid passes between them, the light beam is blocked; when a translucent grid passes, the beam is transmitted through the disk and detected by the photodetector. The disk is coupled to the motor shaft or robot joint being measured. As it rotates, an alternating ON-OFF signal is obtained with
the photodetector. The number of grids passing through the optical elements
represents the distance traveled.
Basic construction of optical shaft encoder
This optical shaft encoder has no mechanical component making a slide
contact, and has no component wear. An optical circuit is not disturbed by
electric noise, and the photodetector output is a digital signal, which is more
stable than an analogue signal. These make an optical shaft encoder reliable
and robust; it is a suitable choice as a feedback sensor for servomotors.
Position measurement
One problem with the above optical encoder design is that the direction of
rotation cannot be distinguished from the single photodetector output. The
photodetector output is the same for both clockwise and counter-clockwise
rotations. There is no indication as to which way the disk is rotating.
Counting the pulse number merely gives the total distance the shaft has
rotated back and forth. To measure the angular “position”, the direction of
rotation must be distinguished. One way of obtaining the directional
information is to add another pair of light source/photodetector and a second
track of opaque/translucent grids with 90 degrees of phase difference from
the first track. Image 2.5.2 illustrates a double track pattern and resultant
output signals for clockwise and counter-clockwise rotations. Note that track
A leads track B by 90 degrees for clockwise rotation and that track B leads
track A for counter-clockwise rotation. By detecting the phase angle the
direction of rotation can be distinguished, and this can be done easily with an
up-down counter.
By simply feeding both A phase and B phase encoder signals to an up-down
counter, the direction of rotation is first detected, and the number of rising
edges and falling edges of both signals is counted in such a way that the
counter adds the incoming edge number for clockwise rotation and subtract
the edge numbers for counter-clockwise rotation. The up-down counter
indicates the cumulative number of edges, that is, the angular “position” of
the motor. The output of the up-down counter is an n-bit binary signal ready to
be sent to a digital controller without A/D conversion.
Double-track encoder for detection of the direction of rotation
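As an illustrative sketch (not from the original text), the decoding logic can be simulated in software. For simplicity this counts only channel A's edges (2x decoding); counting the edges of both channels, as described above, doubles the resolution again:

# Minimal quadrature (A/B phase) up-down counter simulation.
class QuadratureCounter:
    def __init__(self):
        self.count = 0   # cumulative signed edge count = position
        self.prev_a = 0

    def update(self, a, b):
        if a != self.prev_a:          # an edge occurred on channel A
            if (a == 1) == (b == 0):  # rising A with B low, or falling
                self.count += 1       # A with B high: one direction
            else:
                self.count -= 1       # otherwise: the other direction
        self.prev_a = a
        return self.count

# Channel A leading B by 90 degrees (one rotation sense):
decoder = QuadratureCounter()
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1)] * 2:
    decoder.update(a, b)
print(decoder.count)  # positive; the reversed sequence gives negative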
It should be noted that this type of encoder requires initialization of the
counter prior to actual measurement. Usually a robot is brought to a home
position and the up-down counters are set to the initial state corresponding to
the home position. This type of encoder is referred to as an incremental
encoder, since A-phase and B-phase signals provide relative displacements
from an initial point. Whenever the power supply is shut down, the
initialization must be performed for incremental encoders.
Up-down counter for an incremental encoder
An absolute encoder provides an n-bit absolute position as well as the
direction of rotation without use of an up-down counter and initialization. As
shown in Image 2.5.4, the rotating disk has n-tracks of opaque-translucent
grid patterns and n pairs of light sources and photodetectors. The n-tracks of
grid patterns differ in the number of grids; the innermost track has only $1 = 2^0$ pair of opaque and translucent slits, the second track has $2 = 2^1$ pairs, and the i-th track has $2^{i-1}$ pairs. The n outputs from the photodetectors directly
indicate the n-bit absolute position of the rotating disk. In general, absolute
encoders are more complex and expensive than incremental encoders. In case
of power failure, incremental encoders need a laborious initialization
procedure for recovery. For quick recovery as well as for safety, absolute
encoders are often needed in industrial applications.
Absolute encoder
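As a small sketch (added for illustration), the n photodetector outputs can be read as an n-bit binary number and scaled to a shaft angle. Note that the innermost track, with its single slit pair, toggles once per revolution and thus acts as the most significant bit:

# Convert n absolute-encoder track readings into a shaft angle.
def absolute_angle(bits, degrees_per_rev=360.0):
    # bits[0] is the innermost track (most significant bit); the
    # outermost track toggles fastest (least significant bit).
    count = 0
    for bit in bits:
        count = (count << 1) | (1 if bit else 0)
    return count * degrees_per_rev / (2 ** len(bits))

# A 4-track encoder reading 0b0101 = 5 out of 16 positions:
print(absolute_angle([0, 1, 0, 1]))  # 112.5 degrees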
Velocity estimate
Velocity feedback is needed for improving accuracy of speed control as well
as for compensating for system dynamics. A salient feature of optical
encoders is that velocity information can be obtained along with position
measurement. Without use of a dedicated tachometer generator, velocity
measurement can be attained by simply processing pulse sequences generated
by an optical encoder.
The Image shows a pulse sequence coming from an optical encoder. Each pulse indicates a rising edge or a falling edge of the phase A and B signals. Therefore, the density of
this pulse train, i.e. the pulse frequency, is approximately proportional to the
angular velocity of the rotating shaft. The pulse density can be measured by
counting the number of incoming pulses in every fixed period, say T=10 ms,
as shown in the Image. This can be done with another up-down counter that
counts A phase and B phase pulses. Counting continues only for the fixed
sampling period T, and the result is sent to a controller at the end of every
sampling period. Then the counter is cleared to re-start counting for the next
period.
As the sampling period gets shorter, the velocity measurement is updated
more frequently, and the delay of velocity feedback gets shorter. However, if
the sampling period is too short, discretization error becomes prominent. The
problem is more critical when the angular velocity is very small. Not many
pulses are generated, and just a few pulses can be counted for a very short
period. As the sampling period gets longer, the discretization error becomes
smaller, but the time delay may cause instability of the control system.
Velocity estimate based on pulse frequency measurement
An effective method for resolving these conflicting requirements is to use a
dual mode velocity measurement. Instead of counting the number of pulses,
the interval of adjacent pulses is measured at low speed. The reciprocal of the
pulse interval gives the angular velocity. As shown in Image 2.5.6, the time
interval can be measured by counting clock pulses. The resolution of this
pulse interval measurement is much higher than that of encoder pulse
counting in a lower speed range. In contrast, the resolution gets worse at high
speed, since the adjacent pulse interval becomes small. Therefore, these two
methods complement each other. The dual mode velocity measurement uses
both counters and switches them depending on the speed.
Dual mode velocity measurement
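Here is a minimal sketch of the two modes (illustrative only; the encoder line count, clock rate, and switching threshold are made-up values):

# Dual-mode encoder velocity estimate (illustrative sketch).
import math

EDGES_PER_REV = 4 * 500  # 4x decoding of a hypothetical 500-line encoder
RAD_PER_EDGE = 2 * math.pi / EDGES_PER_REV

def velocity_from_count(edge_count, period_s):
    # High-speed mode: edges counted over a fixed sampling period T.
    return edge_count * RAD_PER_EDGE / period_s

def velocity_from_interval(clock_ticks, clock_hz):
    # Low-speed mode: clock ticks timed between two adjacent edges.
    return RAD_PER_EDGE * clock_hz / clock_ticks

def estimate(edge_count, period_s, clock_ticks, clock_hz, threshold=10):
    # Switch modes depending on how many edges arrived this period.
    if edge_count >= threshold:
        return velocity_from_count(edge_count, period_s)
    return velocity_from_interval(clock_ticks, clock_hz)

print(estimate(200, 0.010, 0, 1e6))   # fast shaft: pulse counting
print(estimate(2, 0.010, 5000, 1e6))  # slow shaft: interval timing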
Brushless DC Motors
The DC motor described in the previous section is the simplest, yet efficient
motor among various actuators applied to robotic systems. Traditional DC
motors, however, are limited in reliability and robustness due to wear of the
brush and commutation mechanism. In industrial applications where a high
level of reliability and robustness is required, DC motors have been replaced
by brushless motors and other types of motors having no mechanical
commutator. Since brushless motors, or AC synchronous motors, are
increasingly used in robotic systems and other automation systems, this
section briefly describes their principles and drive methods.
Construction of brushless DC motor and conventional DC motor
In the brushless motor, the commutation of currents is performed with an
electronic switching system. Image 2.6.1 shows the construction of a
brushless motor, compared with a traditional DC motor. In the brushless
motor, the rotor and the stator are swapped. Unlike the traditional DC motor,
the stator of the brushless motor consists of windings, whereas the rotor
comprises permanent magnets. The commutation is accomplished by
measuring the rotor position using a position sensor. Depending on the rotor
position, currents are delivered to the corresponding windings through
electronic switching circuits. The principle of torque generation remains the
same, and the torque-speed characteristics and other properties are mostly
preserved. Therefore, the brushless motor is highly efficient with added
reliability.
A drawback of this brushless motor design is that the torque may change
discontinuously when switches are turned on and off as the rotor position
changes. In the traditional DC motor this torque ripple is reduced by simply
increasing the commutator segments and dividing the windings to many
segments. For the brushless motor, however, it is expensive to increase the
number of electronic switching circuits. Instead, in the brushless motor the
currents flowing into the individual windings are varied continuously so that the torque ripple is minimized. A common construction of the windings is a three-phase winding.
Let $I_A$, $I_B$, and $I_C$ be the individual currents flowing into the three windings shown in the Image. These three currents are varied such that:

$I_A = I_0 \sin\theta, \qquad I_B = I_0 \sin\!\left(\theta + \frac{2\pi}{3}\right), \qquad I_C = I_0 \sin\!\left(\theta + \frac{4\pi}{3}\right)$

where $I_0$ is the scalar magnitude of the desired current, and θ is the rotor position. The torque generated is the summation of the three torques generated at the three windings. Taking into account the angle between the magnetic field and the force generated at each air gap, we obtain

$\tau_m = K_t \left[\sin^2\theta + \sin^2\!\left(\theta + \tfrac{2\pi}{3}\right) + \sin^2\!\left(\theta + \tfrac{4\pi}{3}\right)\right] I_0 = \frac{3}{2}\,K_t\,I_0$
The above expression indicates a linear relationship between the output
torque and the scalar magnitude of the three currents. The torque-current
characteristics of a brushless motor are thus essentially the same as those of the traditional
DC motor.
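A one-line check of the constant-torque claim (added as a sketch): using $\sin^2 x = \tfrac{1}{2}(1 - \cos 2x)$,

$\sum_{k=0}^{2} \sin^2\!\left(\theta + \frac{2\pi k}{3}\right) = \frac{3}{2} - \frac{1}{2}\sum_{k=0}^{2} \cos\!\left(2\theta + \frac{4\pi k}{3}\right) = \frac{3}{2}$

since three cosines spaced $120^\circ$ apart (modulo $2\pi$) sum to zero for every θ. The ripple terms cancel exactly, so the torque is independent of the rotor position.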
Brushless DC motor and drive amplifier
Robot Mechanisms
A robot is a machine capable of physical motion for interacting with the
environment. Physical interactions include manipulation, locomotion, and
any other tasks changing the state of the environment or the state of the robot
relative to the environment. A robot has some form of mechanisms for
performing a class of tasks. A rich variety of robot mechanisms has been
developed in the last few decades. In this chapter, we will first overview
various types of mechanisms used for generating robotic motion and
introduce some taxonomy of mechanical structures before going into a more
detailed analysis in the subsequent chapters.
Joint Primitives and Serial Linkages
A robot mechanism is a multi-body system with the multiple bodies
connected together. We begin by treating each body as rigid, ignoring
elasticity and any deformations caused by large load conditions. Each rigid
body involved in a robot mechanism is called a link, and a combination of
links is referred to as a linkage. In describing a linkage it is fundamental to
represent how a pair of links is connected to each other. There are two types
of primitive connections between a pair of links. The first is a prismatic joint
where the pair of links makes a translational displacement along a fixed axis.
In other words, one link slides on the other along a straight line. Therefore, it
is also called a sliding joint. The second type of primitive joint is a revolute
joint where a pair of links rotates about a fixed axis. This type of joint is often
referred to as a hinge, articulated, or rotational joint.
Primitive joint types: (a) a prismatic joint and (b) a revolute joint
Combining these two types of primitive joints, we can create many useful
mechanisms for robot manipulation and locomotion. These two types of
primitive joints are simple to build and are well grounded in engineering
design. Most of the robots that have been built are combinations of only these
two types. Let us look at some examples.
Robot mechanisms analogous to coordinate systems
One of the fundamental functional requirements for a robotic system is to
locate its end effecter, e.g. a hand, a leg, or any other part of the body
performing a task, in three-dimensional space. If the kinematic structure of
such a robot mechanism is analogous to a coordinate system, it may satisfy this positioning requirement. The Images show three types of robot arm structures corresponding to the Cartesian coordinate system, the cylindrical coordinate system, and the spherical coordinate system, respectively. The Cartesian coordinate robot has three prismatic joints, corresponding to the three axes denoted x, y, and z. The cylindrical robot consists of one revolute joint and two prismatic joints, with θ, r, and z representing the coordinates of the end-effecter. Likewise, the spherical robot has two revolute joints, denoted θ and φ, and one prismatic joint, denoted r.
Cartesian coordinate robot
Photo removed for copyright reasons. Robot by Cincinnati Milacron.
Cylindrical coordinate robot
Photo removed for copyright reasons. GMF Robotics model M-100.
Spherical coordinate robot.
Photo removed for copyright reasons.
There are many other ways of locating an end-effecter in three-dimensional
space. The Images show three other kinematic structures that allow the robot to locate its end-effecter in three-dimensional space. Although these mechanisms have no
analogy with common coordinate systems, they are capable of locating the
end-effecter in space, and have salient features desirable for specific tasks.
The first one is the so-called SCARA robot (Selective Compliance Assembly Robot Arm), consisting of two revolute joints
and one prismatic joint. This robot structure is particularly desirable for
assembly automation in manufacturing systems, having a wide workspace in
the horizontal direction and an independent vertical axis appropriate for
insertion of parts.
SCARA type robot.
Photo removed for copyright reasons. Robot by Adept.
The second type, called an articulated robot or an elbow robot, consists of all
three revolute joints, like a human arm. This type of robot has a great degree
of flexibility and versatility, being the most standard structure of robot
manipulators. The third kinematic structure, also consisting of three revolute
joints, has a unique mass balancing structure. The counter balance at the
elbow eliminates gravity load for all three joints, thus reducing torque requirements for the actuators. This structure has been used for direct-drive robots having no gear reducer.
Articulated robot
Photo removed for copyright reasons. SCORBOT model ER 1X.
Note that all the above robot structures are made of serial connections of
primitive joints. This class of kinematic structures, termed a serial linkage,
constitutes the fundamental makeup of robot mechanisms. They have no
kinematic constraint in each joint motion, i.e. each joint displacement is a
generalized coordinate. This facilitates the analysis and control of the robot
mechanism. There are, however, different classes of mechanisms used for
robot structures. Although more complex, they do provide some useful
properties. We will look at these other mechanisms in the subsequent
sections.
Parallel Linkages
Primitive joints can be arranged in parallel as well as in series. Image 3.2.1
illustrates such a parallel link mechanism. It is a five-bar-linkage consisting
of five links, including the base link, connected by five joints. This can be
viewed as two serial linkage arms connected at a particular point, point A in
the Image. It is important to note that there is a closed kinematic chain
formed by the five links and, thereby, the two serial link arms must conform
to a certain geometric constraint. It is clear from the Image that the end-effecter position is determined if two of the five joint angles are given. For example, if the angles $\theta_1$ and $\theta_3$ of joints 1 and 3 are determined, then all the link positions are determined, as is the end-effecter’s. Driving joints 1 and 3 with
two actuators, we can move the end-effecter within the vertical plane. It
should be noted that, if more than two joints were actively driven by
independent actuators, a conflict among three actuators would occur due to
the closed-loop kinematic chain. Three of the five joints should be passive
joints, which are free to rotate. Only two joints should be active joints, driven
by independent actuators.
Five-bar-link parallel link robot
This type of parallel linkage, having a closed-loop kinematic chain, has
significant features. First, placing both actuators at the base link makes the
robot arm lighter, compared to the serial link arm with the second motor
fixed to the tip of link 1. Second, a larger end-effecter load can be borne with the two serial linkage arms sharing the load. Image 3.2.2 shows a heavy-duty robot having a parallel link mechanism.
The Stewart mechanism consists of a moving platform, a fixed base, and six powered cylinders connecting the moving platform to the base frame. The position and orientation of the moving platform are determined by the six independent actuators. The load acting on the moving platform is borne by the six "arms". Therefore, the load capacity is generally large, and the dynamic response is fast for this type of robot mechanism. Note, however, that this mechanism has spherical joints, a different type of joint from the primitive joints we considered initially.
Stewart mechanism parallel-link robot
Planar Kinematics
Kinematics is Geometry of Motion. It is one of the most fundamental
disciplines in robotics, providing tools for describing the structure and
behavior of robot mechanisms. In this chapter, we will discuss how the
motion of a robot mechanism is described, how it responds to actuator
movements, and how the individual actuators should be coordinated to obtain
desired motion at the robot end-effecter. These are questions central to the
design and control of robot mechanisms.
To begin with, we will restrict ourselves to a class of robot mechanisms that
work within a plane, i.e. Planar Kinematics. Planar kinematics is much more
tractable mathematically, compared to general three-dimensional kinematics.
Nonetheless, most of the robot mechanisms of practical importance can be
treated as planar mechanisms, or can be reduced to planar problems. General
three-dimensional kinematics, on the other hand, needs special mathematical
tools, which will be discussed in later chapters.
Planar Kinematics of Serial Link Mechanisms
Consider the three degree-of-freedom planar robot arm shown in the Image. The
arm consists of one fixed link and three movable links that move within the
plane. All the links are connected by revolute joints whose joint axes are all
perpendicular to the plane of the links. There is no closed-loop kinematic
chain; hence, it is a serial link mechanism.
Three dof planar robot with three revolute joints
To describe this robot arm, a few geometric parameters are needed. First, the length of each link is defined to be the distance between adjacent joint axes. Let points O, A, and B be the locations of the three joint axes, respectively, and point E be a point fixed to the end-effecter. Then the link lengths are $\ell_1 = \overline{OA}$, $\ell_2 = \overline{AB}$, $\ell_3 = \overline{BE}$. Let us assume that Actuator 1 driving link 1 is fixed to the base link (link 0), generating angle $\theta_1$, while Actuator 2 driving link 2 is fixed to the tip of Link 1, creating angle $\theta_2$ between the two links, and Actuator 3 driving Link 3 is fixed to the tip of Link 2, creating angle $\theta_3$, as shown in the Image. Since this robot arm performs tasks by moving its end-effecter at point E, we are concerned with the location of the end-effecter. To describe its location, we use a coordinate system, O-xy, fixed to the base link with the origin at the first joint, and describe the end-effecter position with coordinates $x_e$ and $y_e$. We can relate the end-effecter coordinates to the joint angles determined by the three actuators by using the link lengths and joint angles defined above:

$x_e = \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1 + \theta_2) + \ell_3 \cos(\theta_1 + \theta_2 + \theta_3)$
$y_e = \ell_1 \sin\theta_1 + \ell_2 \sin(\theta_1 + \theta_2) + \ell_3 \sin(\theta_1 + \theta_2 + \theta_3)$
This three dof robot arm can locate its end-effecter at a desired orientation as
well as at a desired position. The orientation of the end-effecter can be
described as the angle of the centerline of the end-effecter measured from the
positive x coordinate axis. This end-effecter orientation $\phi_e$ is related to
the actuator displacements as

$$\phi_e = \theta_1 + \theta_2 + \theta_3$$
The above three equations describe the position and orientation of the robot
end-effecter viewed from the fixed coordinate system in relation to the
actuator displacements. In general, a set of algebraic equations relating the
position and orientation of a robot end-effecter, or any significant part of the
robot, to actuator or active joint displacements, is called Kinematic
Equations, or more specifically, Forward Kinematic Equations in the robotics
literature.
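As a concrete illustration, the following minimal Python sketch evaluates these forward kinematic equations for the three-dof planar arm. The link lengths and joint angles used here are illustrative sample values, not taken from the text.

```python
import numpy as np

def forward_kinematics(theta, lengths):
    """Forward kinematics of the 3-dof planar arm with revolute joints.

    theta   : (theta1, theta2, theta3) joint angles in radians
    lengths : (l1, l2, l3) link lengths
    Returns the end-effecter position (xe, ye) and orientation phi_e.
    """
    t1, t2, t3 = theta
    l1, l2, l3 = lengths
    xe = l1*np.cos(t1) + l2*np.cos(t1 + t2) + l3*np.cos(t1 + t2 + t3)
    ye = l1*np.sin(t1) + l2*np.sin(t1 + t2) + l3*np.sin(t1 + t2 + t3)
    phi_e = t1 + t2 + t3          # end-effecter orientation
    return xe, ye, phi_e

# Sample values (assumed for illustration only)
print(forward_kinematics((0.3, 0.5, -0.2), (1.0, 0.8, 0.3)))
```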
Consider a planar robot arm with two revolute joints and one prismatic joint. Using the
geometric parameters and joint displacements, obtain the kinematic equations
relating the end-effecter position and orientation to the joint displacements.
Three dof robot with two revolute joints and one prismatic joint
Now that the above Example and Exercise problems have illustrated
kinematic equations, let us obtain a formal expression for kinematic
equations. As mentioned in the previous chapter, two types of joints,
prismatic and revolute joints, constitute robot mechanisms in most cases. The
displacement of the i-th joint is described by distance di if it is a prismatic
joint, and by angle θi for a revolute joint. For formal expression, let us use a
generic notation: qi. Namely, joint displacement qi represents either distance
di or angle θi depending on the type of joint.
We collectively represent all the joint displacements involved in a robot
mechanism with a column vector $q = (q_1, q_2, \dots, q_n)^T$, where n is the
number of joints. Kinematic equations relate these joint displacements to the
position and orientation of the end-effecter. Let us collectively denote the
end-effecter position and orientation by vector p. For planar mechanisms, the
end-effecter location is described by three variables:

$$p = (x_e, y_e, \phi_e)^T$$

Using these notations, we represent kinematic equations as a vector function
relating p to q:

$$p = f(q)$$
For a serial link mechanism, all the joints are usually active joints driven by
individual actuators. Except for some special cases, these actuators uniquely
determine the end-effecter position and orientation as well as the
configuration of the entire robot mechanism. If there is a link whose location
is not fully determined by the actuator displacements, such a robot
mechanism is said to be under-actuated. Unless a robot mechanism is under-actuated, the collection of the joint displacements, i.e. the vector q, uniquely
determines the entire robot configuration. For a serial link mechanism, these
joints are independent, having no geometric constraint other than their stroke
limits. Therefore, these joint displacements are generalized coordinates that
locate the robot mechanism uniquely and completely. Formally, the number
of generalized coordinates is called degrees of freedom. Vector q is called
joint coordinates, when they form a complete and independent set of
generalized coordinates.
Inverse Kinematics of Planar Mechanisms
The vector kinematic equation derived in the previous section provides the
functional relationship between the joint displacements and the resultant end-effecter position and orientation. By substituting values of joint
displacements into the right-hand side of the kinematic equation, one can
immediately find the corresponding end-effecter position and orientation. The
problem of finding the end-effecter position and orientation for a given set of
joint displacements is referred to as the direct kinematics problem. This is
simply to evaluate the right-hand side of the kinematic equation for known
joint displacements. In this section, we discuss the problem of moving the
end-effecter of a manipulator arm to a specified position and orientation. We
need to find the joint displacements that lead the end-effecter to the specified
position and orientation. This is the inverse of the previous problem and is
thus referred to as the inverse kinematics problem. The kinematic equation
must be solved for joint displacements, given the end-effecter position and
orientation. Once the kinematic equation is solved, the desired end-effecter
motion can be achieved by moving each joint to the determined value.
The direct kinematics problem is straightforward: a unique end-effecter
location is obtained for a given set of joint displacements. The inverse
kinematics, on the other hand, is more complex in the sense that multiple
solutions may exist for the same end-effecter location. Also, solutions may
not exist at all for some end-effecter locations, depending on the arm
structure. Furthermore, since the kinematic
equation is comprised of nonlinear simultaneous equations with many
trigonometric functions, it is not always possible to derive a closed-form
solution, which is the explicit inverse function of the kinematic equation.
When the kinematic equation cannot be solved analytically, numerical
methods are used in order to derive the desired joint displacements.
Consider the three dof planar arm shown in Image 4.1.1 again. To solve its
inverse kinematics problem, the kinematic structure is redrawn in Image
4.2.1. The problem is to find the three joint angles $\theta_1, \theta_2, \theta_3$
that lead the end-effecter to a desired position and orientation,
$x_e, y_e, \phi_e$. We take a two-step approach. First, we find the position of
the wrist, point B, from $x_e, y_e, \phi_e$. Then we find $\theta_1, \theta_2$
from the wrist position. Angle $\theta_3$ can then be determined from the
end-effecter orientation $\phi_e$ together with $\theta_1$ and $\theta_2$.
Skeleton structure of the robot arm
Let $x_w$ and $y_w$ be the coordinates of the wrist. As shown in Image 4.2.1,
point B is at distance $\ell_3$ from the given end-effecter position E. Moving
from E in the direction opposite to the end-effecter orientation $\phi_e$, the
wrist coordinates are given by

$$x_w = x_e - \ell_3 \cos\phi_e, \qquad y_w = y_e - \ell_3 \sin\phi_e$$

Note that the right hand sides of the above equations are functions of
$x_e, y_e, \phi_e$ alone. From these wrist coordinates, we can determine the
angle $\alpha$.
This procedure yields a set of joint angles that locates the end-effecter at the
desired position and orientation. It is interesting to note that there is another
way of reaching the same end-effecter position and orientation, i.e. another
solution to the inverse kinematics problem. There are two configurations of the
arm leading to the same end-effecter location: the elbow-down configuration
and the elbow-up configuration. The former corresponds to the solution
obtained above. The latter, having the elbow position at point A', is symmetric
to the former configuration with respect to line OB, as shown in the Image.
Therefore, the two solutions are related by this reflection about line OB.
Inverse kinematics problems often possess multiple solutions, like the above
example, since they are nonlinear. Specifying end-effecter position and
orientation does not uniquely determine the whole configuration of the
system. This implies that vector p, the collective position and orientation of
the end-effecter, cannot be used as generalized coordinates.
The existence of multiple solutions, however, provides the robot with an
extra degree of flexibility. Consider a robot working in a crowded
environment. If multiple configurations exist for the same end-effecter
location, the robot can take a configuration having no interference with the
environment. Due to physical limitations, however, the solutions to the
inverse kinematics problem do not necessarily provide feasible
configurations. We must check whether each solution satisfies the constraint
of movable range, i.e. stroke limit of each joint.
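To make the two-solution structure concrete, here is a minimal Python sketch of the planar two-link inverse kinematics that underlies the wrist-positioning step, returning both the elbow-down and elbow-up solutions. The link lengths and the target point are illustrative assumptions; for the three-dof arm, $\theta_3$ then follows from $\phi_e = \theta_1 + \theta_2 + \theta_3$.

```python
import numpy as np

def two_link_ik(xw, yw, l1, l2):
    """Both elbow branches of two-link inverse kinematics for point (xw, yw)."""
    c2 = (xw**2 + yw**2 - l1**2 - l2**2) / (2.0 * l1 * l2)  # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target outside the workspace")
    solutions = []
    for s2 in (np.sqrt(1 - c2**2), -np.sqrt(1 - c2**2)):    # elbow down / up
        t2 = np.arctan2(s2, c2)
        t1 = np.arctan2(yw, xw) - np.arctan2(l2*s2, l1 + l2*c2)
        solutions.append((t1, t2))
    return solutions

# Both solutions reach the same wrist point (assumed values)
for t1, t2 in two_link_ik(1.2, 0.5, 1.0, 0.8):
    print(t1, t2)
```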
Multiple solutions to the inverse kinematics problem
Kinematics of Parallel Link Mechanisms
Consider the five-bar-link planar robot arm.
Note that Joint 2 is a passive joint. Hence, angle $\theta_2$ is a dependent
variable. Using $\theta_2$, however, we can obtain the coordinates of point A
through the first chain of links; the same coordinates can also be obtained
through the second chain. Equating these two sets of equations yields two
constraint equations. Note that there are four variables and two constraint
equations. Therefore, two of the variables, such as $\theta_1$ and $\theta_3$,
are independent. It should also be noted that multiple solutions exist for these
constraint equations.
Five-bar-link mechanism
Although the forward kinematic equations are difficult to write out explicitly,
the inverse kinematic equations can be obtained for this parallel link
mechanism. The problem is to find $\theta_1$ and $\theta_3$ that lead the
endpoint to a desired position $(x_e, y_e)$. We will take the following
procedure, sketched in code below:
Step 1 Given $x_e, y_e$, find $\theta_1, \theta_2$ by solving the two-link
inverse kinematics problem.
Step 2 Given $\theta_1, \theta_2$, obtain $x_A, y_A$. This is a forward
kinematics problem.
Step 3 Given $x_A, y_A$, find $\theta_3, \theta_4$ by solving another two-link
inverse kinematics problem.
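The sketch below strings the three steps together in Python. The geometry is an illustrative assumption only: the first chain is taken to run from O through links 1 and 2 to E, with the junction point A on link 2 at a distance `a` from joint 2, and the second chain from a base point `o2` through links 3 and 4 to A. These names and values are hypothetical, not from the text.

```python
import numpy as np

def two_link_ik(x, y, l1, l2, elbow=1.0):
    # One elbow branch of planar two-link inverse kinematics.
    c2 = (x*x + y*y - l1*l1 - l2*l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("point unreachable")
    s2 = elbow * np.sqrt(1.0 - c2*c2)
    t2 = np.arctan2(s2, c2)
    t1 = np.arctan2(y, x) - np.arctan2(l2*s2, l1 + l2*c2)
    return t1, t2

def five_bar_ik(xe, ye):
    l1, l2, a, l3, l4 = 1.0, 1.0, 0.6, 1.0, 1.0   # assumed link dimensions
    o2 = np.array([0.5, 0.0])                     # assumed second base point
    t1, t2 = two_link_ik(xe, ye, l1, l2)          # Step 1: IK for endpoint E
    xa = l1*np.cos(t1) + a*np.cos(t1 + t2)        # Step 2: FK to locate point A
    ya = l1*np.sin(t1) + a*np.sin(t1 + t2)
    t3, t4 = two_link_ik(xa - o2[0], ya - o2[1], l3, l4)  # Step 3: IK for A
    return t1, t2, t3, t4

print(five_bar_ik(1.2, 0.9))
```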
Obtain the joint angles of the dog’s legs, given the body position and
orientation.
A doggy robot with two legs on the ground
Redundant mechanisms
A manipulator arm must have at least six degrees of freedom in order to
locate its end effecter at an arbitrary point with an arbitrary orientation in
space. Manipulator arms with fewer than six degrees of freedom are not able to
perform such arbitrary positioning. On the other hand, if a manipulator arm
has more than six degrees of freedom, there exists an infinite number of
solutions to the kinematic equation. Consider for example the human arm,
which has seven degrees of freedom, excluding the joints at the fingers. Even
if the hand is fixed on a table, one can change the elbow position
continuously without changing the hand location. This implies that there exists
an infinite set of joint displacements that lead the hand to the same location.
Manipulator arms with more than six degrees of freedom are referred to as
redundant manipulators. We will discuss redundant manipulators in detail in
the following chapter.
Differential Motion
In the previous chapter, the position and orientation of the manipulator end-effecter were evaluated in relation to joint displacements. The joint
displacements corresponding to a given end-effecter location were obtained
by solving the kinematic equation for the manipulator. This preliminary
analysis permitted the robotic system to place the end-effecter at a specified
location in space. In this chapter, we are concerned not only with the final
location of the end-effecter, but also with the velocity at which the end-effecter moves. In order to move the end-effecter in a specified direction at a
specified speed, it is necessary to coordinate the motion of the individual
joints. The focus of this chapter is the development of fundamental methods
for achieving such coordinated motion in multiple-joint robotic systems. As
discussed in the previous chapter, the end-effecter position and orientation
are directly related to the joint displacements. Hence, in order to coordinate
joint motions, we derive the differential relationship between the joint
displacements and the end-effecter location, and then solve for the individual
joint motions.
Differential Relationship
We begin by considering a two degree-of-freedom planar robot arm. The
kinematic equations relating the end-effecter coordinates $x_e$ and $y_e$ to
the joint displacements $\theta_1$ and $\theta_2$ are given by

$$x_e = \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1 + \theta_2)$$
$$y_e = \ell_1 \sin\theta_1 + \ell_2 \sin(\theta_1 + \theta_2)$$
Two dof planar robot with two revolute joints
We are concerned with "small movements" of the individual joints at the
current position, and we want to know the resultant motion of the end-effecter.
This can be obtained by taking the total derivatives of the above kinematic
equations:

$$dx_e = \frac{\partial x_e}{\partial \theta_1}\,d\theta_1 + \frac{\partial x_e}{\partial \theta_2}\,d\theta_2, \qquad dy_e = \frac{\partial y_e}{\partial \theta_1}\,d\theta_1 + \frac{\partial y_e}{\partial \theta_2}\,d\theta_2$$

where $x_e$ and $y_e$ are functions of both $\theta_1$ and $\theta_2$, hence
two partial derivatives are involved in each total derivative. In vector form the
above equations reduce to

$$dp = J\,dq, \qquad dp = \begin{pmatrix} dx_e \\ dy_e \end{pmatrix}, \quad dq = \begin{pmatrix} d\theta_1 \\ d\theta_2 \end{pmatrix}$$
The matrix J comprises the partial derivatives of the functions
$x_e(\theta_1, \theta_2)$ and $y_e(\theta_1, \theta_2)$ with respect to the
joint displacements $\theta_1$ and $\theta_2$. The
matrix J, called the Jacobian Matrix, represents the differential relationship
between the joint displacements and the resulting end-effecter motion. Note
that most robot mechanisms have a multitude of active joints, hence a matrix
is needed for describing the mapping of the vectorial joint motion to the
vectorial end-effecter motion.
For the two-dof robot arm above, the components of the Jacobian matrix are
computed as

$$J = \begin{pmatrix} -\ell_1 \sin\theta_1 - \ell_2 \sin(\theta_1+\theta_2) & -\ell_2 \sin(\theta_1+\theta_2) \\ \ell_1 \cos\theta_1 + \ell_2 \cos(\theta_1+\theta_2) & \ell_2 \cos(\theta_1+\theta_2) \end{pmatrix}$$
By definition, the Jacobian collectively represents the sensitivities of
individual end-effecter coordinates to individual joint displacements. This
sensitivity information is needed in order to coordinate the multi dof joint
displacements for generating a desired motion at the end-effecter.
Thus the Jacobian determines the velocity relationship between the joints and
the end-effecter.
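A short Python sketch of this velocity relationship is given below; the link lengths, posture, and joint velocities are illustrative assumptions.

```python
import numpy as np

def jacobian(t1, t2, l1=1.0, l2=0.8):
    """Jacobian of the two-dof planar arm (link lengths are sample values)."""
    return np.array([
        [-l1*np.sin(t1) - l2*np.sin(t1 + t2), -l2*np.sin(t1 + t2)],
        [ l1*np.cos(t1) + l2*np.cos(t1 + t2),  l2*np.cos(t1 + t2)],
    ])

# Map a joint velocity to the resulting end-effecter velocity: v = J qdot
qdot = np.array([0.2, -0.1])          # joint velocities (rad/s)
v = jacobian(0.4, 0.9) @ qdot
print(v)
```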
Properties of the Jacobian
The Jacobian plays an important role in the analysis, design, and control of
robotic systems. It will be used repeatedly in the following chapters. It is
worth examining basic properties of the Jacobian, which will be used
throughout this book.
We begin by dividing the 2-by-2 Jacobian into two column vectors,
$J = (J_1, J_2)$. The differential relationship can then be written as

$$dp = J_1\,d\theta_1 + J_2\,d\theta_2$$
The first term on the right-hand side accounts for the end-effecter velocity
induced by the first joint only, while the second term represents the velocity
resulting from the second joint motion only. The resultant end-effecter
velocity is given by the vectorial sum of the two. Each column vector of the
Jacobian matrix represents the end-effecter velocity generated by the
corresponding joint moving at a unit velocity when all other joints are
immobilized.
The Image illustrates the column vectors $J_1$ and $J_2$ of the 2 dof robot arm
in the two-dimensional space. Vector $J_2$, given by the second column of
eq.(5.1.8), points in the direction perpendicular to link 2. Note, however, that
vector $J_1$ is not perpendicular to link 1 but is perpendicular to line OE, the
line from joint 1 to the endpoint E. This is because $J_1$ represents the
endpoint velocity induced by joint 1 when joint 2 is immobilized. In other
words, links 1 and 2 are rigidly connected, becoming a single rigid body of
length OE, and $J_1$ is the tip velocity of the link OE.
Geometric interpretation of the column vectors of the Jacobian
In general, each column vector of the Jacobian represents the end-effecter
velocity and angular velocity generated by the individual joint velocity while
all other joints are immobilized. Let $\dot{p}$ be the end-effecter velocity and
angular velocity, or the end-effecter velocity for short, and $J_i$ be the i-th
column of the Jacobian. The end-effecter velocity is given by a linear
combination of the Jacobian column vectors weighted by the individual joint
velocities:

$$\dot{p} = \sum_{i=1}^{n} J_i\,\dot{q}_i$$

where n is the number of active joints. The geometric interpretation of the
column vectors is that $J_i$ is the end-effecter velocity and angular velocity
when all the joints other than joint i are immobilized and only the i-th joint is
moving at a unit velocity.
Consider the two-dof articulated robot again. This time we use “absolute”
joint angles measured from the positive x-axis, as shown in Image 5.2.2. Note
that angle $\theta_2$ is measured from the fixed frame, i.e. the x-axis, rather than a
relative frame, e.g. link 1. Obtain the 2-by-2 Jacobian and illustrate the two
column vectors on the xy plane. Discuss the result in comparison with the
previous case.
Absolute joint angles measured from the x-axis.
Note that the elements of the Jacobian are functions of the joint displacements,
and thereby vary with the arm configuration. As expressed in eq.(5.1.8), the
partial derivatives $\partial x_e/\partial\theta_i$ and
$\partial y_e/\partial\theta_i$ are functions of $\theta_1$ and $\theta_2$.
Therefore, the column vectors $J_1$ and $J_2$ vary depending on the arm
posture. Remember that the end-effecter velocity is given by the linear
combination of the Jacobian column vectors $J_1$ and $J_2$. Therefore, the
resultant end-effecter velocity varies depending on the direction and magnitude
of the Jacobian column vectors $J_1$ and $J_2$ spanning the two-dimensional
space. If the two vectors point in different directions, the whole
two-dimensional space is covered by their linear combinations. That is, the
end-effecter can be moved in an arbitrary direction with an arbitrary velocity.
If, on the other hand, the two Jacobian column vectors are aligned, the
end-effecter cannot be moved in an arbitrary direction. This may happen for
particular arm postures where the two links are fully contracted or extended.
These arm configurations are referred to as singular configurations.
Accordingly, the
Jacobian matrix becomes singular at these positions. Using the determinant of
a matrix, this condition is expressed as
det J = 0
Note that at a singular configuration both column vectors point in the same
direction, and thereby the determinant becomes zero. For the two-dof arm
above, $\det J = \ell_1 \ell_2 \sin\theta_2$, which vanishes at $\theta_2 = 0$
and $\theta_2 = \pi$.
Singular configurations of the two-dof articulated robot
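This condition is easy to verify numerically. The small sketch below (sample link lengths assumed) sweeps $\theta_2$ and evaluates det J, showing it vanish at the fully extended and fully contracted postures.

```python
import numpy as np

l1, l2 = 1.0, 0.8   # sample link lengths (assumed)

def jac(t1, t2):
    return np.array([
        [-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
        [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)],
    ])

for t2 in np.linspace(0.0, np.pi, 7):
    d = np.linalg.det(jac(0.3, t2))     # det J is independent of theta1
    print(f"theta2 = {t2:5.2f}  det J = {d:+.4f}")
```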
Inverse Kinematics of Differential Motion
Now that we know the basic properties of the Jacobian, we are ready to
formulate the inverse kinematics problem for obtaining the joint velocities
that allow the end-effecter to move at a given desired velocity. For the two-dof
articulated robot, the problem is to find the joint velocities
$\dot{q} = (\dot\theta_1, \dot\theta_2)^T$ for the given end-effecter velocity
$v_e$. If the arm configuration is not singular, this can be obtained by taking
the inverse of the Jacobian matrix:

$$\dot{q} = J^{-1} v_e$$
Note that the solution is unique. Unlike the inverse kinematics problem
discussed in the previous chapter, the differential kinematics problem has a
unique solution as long as the Jacobian is nonsingular.
The above solution determines how the end-effecter velocity $v_e$ must be
decomposed, or resolved, into individual joint velocities. If the controls of the
individual joints regulate the joint velocities so that they track the
resolved joint velocities $\dot{q}$, the resultant end-effecter velocity will be
the desired $v_e$. This control scheme is called Resolved Motion Rate Control,
attributed to Daniel Whitney (1969). Since the elements of the Jacobian
matrix are functions of joint displacements, the inverse Jacobian varies
depending on the arm configuration. This means that although the desired
end-effecter velocity is constant, the joint velocities are not. Coordination is
thus needed among the joint velocity control systems in order to generate a
desired motion at the end-effecter.
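A minimal sketch of this scheme, under assumed unit link lengths and an assumed start posture, is shown below: the desired endpoint velocity is resolved into joint velocities at every step by inverting the Jacobian, and the joint angles are integrated with a simple Euler step.

```python
import numpy as np

l1 = l2 = 1.0   # unit link lengths (as in the example that follows)

def fk(q):
    t1, t2 = q
    return np.array([l1*np.cos(t1) + l2*np.cos(t1+t2),
                     l1*np.sin(t1) + l2*np.sin(t1+t2)])

def jac(q):
    t1, t2 = q
    return np.array([
        [-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
        [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)],
    ])

q = np.array([0.5, 1.0])         # assumed non-singular start posture
v_des = np.array([0.1, 0.0])     # constant desired endpoint velocity
dt = 0.01
for _ in range(100):
    qdot = np.linalg.solve(jac(q), v_des)   # resolve v_des to joint velocities
    q = q + qdot * dt
print(fk(q))   # endpoint has moved about 0.1 in x over 1 second
```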
Consider the two dof articulated robot arm again. We want to move the
endpoint of the robot at a constant speed along a path starting at point A on
the x-axis, (+2, 0), going around the origin through points B (+ε, 0) and C (0,
+ε), and reaching the final point D (0, +2) on the y-axis. For simplicity, each arm
link is of unit length. Obtain the profiles of the individual joint velocities as
the end-effecter tracks the path at the constant speed.
Trajectory tracking near the singular points
The resolved joint velocities $\dot\theta_1$ and $\dot\theta_2$ are computed
along the specified trajectory. Note that the joint velocities are extremely
large near the initial and final points, and are unbounded at points A and D.
These are the arm's singular configurations, where $\theta_2 = 0$. As the
end-effecter gets close to the origin, the velocity of the first joint becomes
very large in order to quickly turn the arm around from point B to C. At these
configurations, the second joint is almost –180 degrees, meaning that the arm
is near a singularity. This result agrees with the singularity condition based on
the determinant of the Jacobian: the numerators of the resolved joint velocities
are divided by $\sin\theta_2$, the determinant of the Jacobian for unit link
lengths. Therefore, the joint velocities blow up as the arm configuration
approaches a singular configuration.
Joint velocity profiles for tracking the trajectory
Furthermore, the arm's behavior near the singular points can be analyzed by
substituting $\theta_2 = 0, \pi$ into the Jacobian obtained above. For
$\ell_1 = \ell_2 = 1$ and $\theta_2 = 0$, the Jacobian column vectors reduce to
vectors pointing in the same direction:

$$J_1 = 2\begin{pmatrix} -\sin\theta_1 \\ \cos\theta_1 \end{pmatrix}, \qquad J_2 = \begin{pmatrix} -\sin\theta_1 \\ \cos\theta_1 \end{pmatrix}$$

As illustrated (singular configuration A), both joints 1 and 2 generate endpoint
velocities along the same direction. Note that no endpoint velocity can be
generated in the direction perpendicular to the aligned arm links. For
$\theta_2 = \pi$, the first column vector vanishes, $J_1 = 0$. The first joint
cannot generate any endpoint velocity, since the arm is fully contracted. See
singular configuration B.
At a singular configuration, there is at least one direction in which the robot
cannot generate a non-zero velocity at the end-effecter. This agrees with the
previous discussion; the Jacobian is degenerate at a singular configuration,
and the linear combination of the Jacobian column vectors cannot span the
whole space.
A three-dof spatial robot arm is shown in the Image below. The robot has
three revolute joints that allow the endpoint to move in the three-dimensional
space. However, this robot mechanism has singular points inside the
workspace. Analyze the singularity, following the procedure below:
Step 1 Obtain each column vector of the Jacobian matrix by considering the
endpoint velocity created by each of the joints while immobilizing the other
joints.
Step 2 Construct the Jacobian by concatenating the column vectors and set
the determinant of the Jacobian to zero for singularity: det J = 0.
Step 3 Find the joint angles that make det J = 0.
Step 4 Show the arm posture that is singular. Show where in the workspace it
becomes singular. For each singular configuration, also show in which
direction the endpoint cannot have a nonzero velocity.
Schematic of a three dof articulated robot
Singularity and Redundancy
We have seen in this chapter that singular configurations exist for many robot
mechanisms. Sometimes, such singular configurations exist in the middle of
the workspace, seriously degrading mobility and dexterity of the robot. At a
singular point, the robot cannot move in certain directions with a non-zero
velocity. To overcome this difficulty, several methods can be considered.
One is to plan a trajectory of the robot motion such that it will not go into
singular configurations. Another method is to include additional degrees of
freedom, so that even when some degrees of freedom are lost at a certain
configuration, the robot can still maintain the necessary degrees of freedom.
Such a robot is referred to as a redundant robot. In this section we will
discuss singularity and redundancy and obtain general properties of
differential motion for general n degree of freedom robots.
As studied in Section 5.3, a unique solution exists to the differential
kinematic equation, (5.3.1), if the arm configuration is non-singular.
However, when a planar (spatial) robot arm has more than three (six) degrees
of freedom, we can find an infinite number of solutions that provide the same
motion at the end-effecter. Consider for instance the human arm, which has
seven degrees of freedom excluding the joints at the fingers. When the hand
is placed on a desk and fixed in its position and orientation, the elbow
position can still vary continuously without moving the hand. This implies
that a certain ratio of joint velocities exists that does not cause any velocity at
the hand. This specific ratio of joint velocities does not contribute to the
resultant endpoint velocity. Even if these joint velocities are superimposed
onto other joint velocities, the resultant end-effecter velocity is the same.
Consequently, we can find different solutions of the instantaneous kinematic
equation for the same end-effecter velocity. In the following, we investigate
the fundamental properties of the differential kinematics when additional
degrees of freedom are involved.
To formulate a differential kinematic equation for a general n degree-of-freedom
robot mechanism, we begin by modifying the definition of the vector $dx_e$
representing the end-effecter motion. In eq. (5.1.6), $dx_e$ was defined as a
two-dimensional vector that represents the infinitesimal translation of the
end-effecter. This must be extended to a general m-dimensional vector. For
planar motion, m may be 3, and for spatial motion, m
may be six. However, the number of variables that we need to specify for
performing a task is not necessarily three or six. In arc welding, for example,
only five independent variables of torch motion need be controlled. Since the
welding torch is usually symmetric about its centerline, we can locate the
torch with an arbitrary orientation about the centerline. Thus, five degrees of
freedom are sufficient to perform arc welding. In general, we describe the
differential end-effecter motion by m independent variables dp that must be
specified to perform a given task.
Then the differential kinematic equation for a general n degree-of-freedom
robot is given by

$$dp = J\,dq$$

where the dimension of the Jacobian J is m by n, i.e.
$J \in \mathbb{R}^{m \times n}$. When n is larger than m and J is of full rank,
there are (n-m) arbitrary variables in the general solution of eq.(2). The robot
is then said to have (n-m) redundant degrees of freedom for the given task.
Associated with the above differential equation, the velocity relationship can
be written as

$$\dot{p} = J\,\dot{q}$$

where $\dot{p}$ and $\dot{q}$ are the velocities of the end-effecter and the
joints, respectively.
Equation (3) can be regarded as a linear mapping from n-dimensional vector
space Vn to m-dimensional space Vm. To characterize the solution to eq.(3),
let us interpret the equation using the linear mapping diagram shown in
Image 5.4.1. The subspace R(J) in the Image is the range space of the linear
mapping. The range space represents all the possible end-effecter velocities
that can be generated by the n joints at the present arm configuration. If J is
of full row rank, the range space covers the entire vector space
Vm. Otherwise, there exists at least one direction in which the end-effecter
cannot be moved with non-zero velocity. The subspace N(J) of Image 5.4.1 is
the null space of the linear mapping. Any element in this subspace is mapped
into the zero vector in Vm. Therefore, any joint velocity vector $\dot{q}$ that belongs
to the null space does not produce any velocity at the end-effecter. Recall the
human arm discussed before. The elbow can move without moving the hand.
Joint velocities for this motion lie in the null space, since no end-effecter motion is induced. If the Jacobian is of full rank, the dimension of
the null space, dim N(J), is the same as the number of redundant degrees of
freedom (n-m). When the Jacobian matrix is degenerate, i.e. not of full rank,
the dimension of the range space, dim R(J), decreases, and at the same time
the dimension of the null space increases by the same amount. The sum of the
two is always equal to n:
dim R(J) + dim N(J) = n
Let $\dot{q}^*$ be a particular solution of eq.(3) and $\dot{q}_0$ be a vector
in the null space. Then any vector of the form
$\dot{q} = \dot{q}^* + k\dot{q}_0$ is also a solution of eq.(3), where k is an
arbitrary scalar quantity. Namely,

$$J(\dot{q}^* + k\dot{q}_0) = J\dot{q}^* + kJ\dot{q}_0 = \dot{p}$$
Since the second term can be chosen arbitrarily within the null space, an
infinite number of solutions exist for the linear equation, unless the
dimension of the null space is 0. The null space accounts for the arbitrariness
of the solutions. The general solution to the linear equation involves the same
number of arbitrary parameters as the dimension of the null space.
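This solution structure can be illustrated numerically. The sketch below uses a hypothetical 2×3 Jacobian for a redundant planar arm (the matrix entries are made-up values): the Moore-Penrose pseudoinverse gives a particular solution, and a null-space component can be added without changing the end-effecter velocity.

```python
import numpy as np

# Redundant case: m = 2 task coordinates, n = 3 joints.
J = np.array([[-1.2, -0.7, -0.3],     # hypothetical Jacobian at some posture
              [ 0.9,  0.5,  0.4]])
v = np.array([0.1, -0.05])            # desired end-effecter velocity

q_star = np.linalg.pinv(J) @ v        # particular (minimum-norm) solution

# Null-space projector: P qdot0 produces no end-effecter motion.
P = np.eye(3) - np.linalg.pinv(J) @ J
q0 = P @ np.array([1.0, 0.0, 0.0])    # an arbitrary self-motion direction

for k in (0.0, 0.5, 1.0):             # all give the same endpoint velocity
    print(J @ (q_star + k * q0))      # -> [0.1, -0.05] each time
```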
Linear mapping diagram
Statics
Robots physically interact with the environment through mechanical contacts.
Mating work pieces in a robotic assembly line, manipulating an object with a
multi-fingered hand, and negotiating a rough terrain through leg locomotion
are just a few examples of mechanical interactions. All of these tasks entail
control of the contacts and interference between the robot and the
environment. Force and moment acting between the robot end-effecter and
the environment must be accommodated for in order to control the
interactions. In this chapter we will analyze the force and moment that act on
the robot when it is at rest.
A robot generates a force and a moment at its end-effecter by controlling
individual actuators. To generate a desired force and moment, the torques of
the multiple actuators must be coordinated. As seen in the previous chapter,
the sensitivities of the individual actuators upon the end-effecter motion, i.e.
the Jacobian matrix, are essential in relating the actuator (joint) torques to the
force and moment at the end-effecter. We will obtain a fundamental theorem
for force and moment acting on a multi degree-of-freedom robot, which we
will find is analogous to the differential kinematics discussed previously.
Free Body Diagram
We begin by considering the free body diagram of an individual link
involved in an open kinematic chain. Image 6.1.1 shows the forces and
moments acting on link i, which is connected to link i-1 and link i+1 by joints
i and i+1, respectively. Let Oi be a point fixed to link i located on the joint
axis i+1 and Oi-1 be a point fixed to link i-1 on the joint axis i. Through the
connections with the adjacent links, link i receives forces and moments from
both sides of the link. Let fi-1,i be a three-dimensional vector representing the
linear force acting from link i-1 to link i. Likewise let fi,i+1 be the force from
link i to link i+1. The force applied to link i from link i+1 is then given by –
fi,i+1. The gravity force acting at the mass centroid Ci is denoted mig, where
mi is the mass of link i and g is the 3x1 vector representing the acceleration
of gravity. The balance of linear forces is then given by

$$f_{i-1,i} - f_{i,i+1} + m_i g = 0$$
Note that all the vectors are defined with respect to the base coordinate
system O-xyz.
Next we derive the balance of moments. The moment applied to link i by link
i-1 is denoted Ni-1,i, and therefore the moment applied to link i by link i+1 is
–Ni,i+1. Furthermore, the linear forces fi-1,i and –fi,i+1 also cause moments
about the centroid Ci. The balance of moments with respect to the centroid Ci
is thus given by

$$N_{i-1,i} - N_{i,i+1} - (r_{i-1,i} + r_{i,C_i}) \times f_{i-1,i} + r_{i,C_i} \times f_{i,i+1} = 0$$
where ri-1,i is the 3x1 position vector from point Oi-1 to point Oi with
reference to the base coordinate frame, and ri,Ci represents the position
vector from point Oi to the centroid Ci.
Free body diagram of the i-th link
The force $f_{i-1,i}$ and moment $N_{i-1,i}$ are called the coupling force and
moment between the adjacent links i and i-1. For i = 1, the coupling force and
moment are $f_{0,1}$ and $N_{0,1}$. These are interpreted as the reaction force
and moment applied to the base link to which the arm mechanism is fixed. See
Image 6.1.2-(a). When i = n, on the other hand, the above coupling force and
moment become $f_{n,n+1}$ and $N_{n,n+1}$. As the end-effecter, i.e. link n,
contacts the environment, the reaction force acts on the end-effecter. See
Image 6.1.2-(b). For convenience, we regard the environment as an additional
link, numbered n+1, and represent the reaction force and moment by
$-f_{n,n+1}$ and $-N_{n,n+1}$, respectively.
Force and moment that the base link exerts on link 1 (a), and the ones
that the environment exerts on the end-effecter, the final link (b)
The above equations can be derived for all the link members except for the
base link, i.e. i=1,2, …, n. This allows us to form 2n simultaneous equations
of 3x1 vectors. The number of coupling forces and moments involved is
2(n+1). Therefore two of the coupling forces and moments must be specified;
otherwise the equations cannot be solved. The final coupling force and moment,
$f_{n,n+1}$ and $N_{n,n+1}$, are the force and moment that the end-effecter
applies to the environment. It is this pair of force and moment that the robot
needs to accommodate in order to perform a given task. Thus, we specify this
pair of coupling force and moment, and solve the simultaneous equations. For
convenience we combine the force $f_{n,n+1}$ and the moment $N_{n,n+1}$ to
define the following six-dimensional vector:

$$F = \begin{pmatrix} f_{n,n+1} \\ N_{n,n+1} \end{pmatrix}$$

We call the vector F the endpoint force and moment vector, or the endpoint
force for short.
Energy Method and Equivalent Joint Torques
In this section we will obtain the functional relationship between the joint
torques and the endpoint force, which will be needed for accommodating
interactions between the end-effecter and the environment. Such a functional
relationship may be obtained by solving the simultaneous equations derived
from the free body diagram. However, we will use a different methodology,
which will give an explicit formula relating the joint torques to the endpoint
force without solving the simultaneous equations. The methodology we will
use is the energy method, sometimes referred to as the indirect method. Since
the simultaneous equations based on the balance of forces and moments are
complex and difficult to solve, we will find that the energy method is the
ideal choice when dealing with complex robotic systems.
In the energy method, we describe a system with respect to energy and work.
Therefore, terms associated with forces and moments that do not produce,
store, or dissipate energy are eliminated in its basic formula. In the free body
diagram shown in Image 6.1.1, many components of the forces and moments
are so called “constraint forces and moments” merely joining adjacent links
together. Therefore, constraint forces and moments do not participate in
energy formulation. This significantly reduces the number of terms and, more
importantly, will provide an explicit formula relating the joint torques to the
endpoint force.
To apply the energy method, two preliminary formulations must be
performed. One is to separate the net forces or moments that generate energy
from the constraint forces and moments irrelevant to energy. Second, we
need to find independent displacement variables that are geometrically
admissible satisfying kinematic relations among the links.
Consider the actuator torques and the coupling forces and moments acting at
the joints. The coupling force $f_{i-1,i}$ and moment $N_{i-1,i}$ are the
resultant force and moment acting at the individual joint, comprising the
constraint force and moment as well as the torque generated by the actuator.
Let $b_{i-1}$ be the 3x1 unit vector pointing in the direction of joint axis i,
as shown in the Image. If the i-th joint is a revolute joint, the actuator
generates joint torque $\tau_i$ about the joint axis. Therefore, the joint
torque generated by the actuator is the component of the coupling moment
along the direction of the joint axis:

$$\tau_i = b_{i-1}^T N_{i-1,i}$$
For a prismatic joint, such as the (i+1)-st joint illustrated in Image 6.2.1,
the actuator generates a linear force in the direction of the joint axis.
Therefore, it is the component of the linear coupling force projected onto the
joint axis. Note that, although we use the same notation as that of a revolute
joint, the scalar quantity $\tau_i$ has the unit of a linear force for a
prismatic joint. To unify the notation we use $\tau_i$ for both types of
joints, and call it a joint torque regardless of the type of joint.
Joint torques as components of coupling force and moment
We combine all the joint torques from joint 1 through joint n to define the
n x 1 joint torque vector:

$$\tau = (\tau_1, \tau_2, \dots, \tau_n)^T$$
The joint torque vector collectively represents all the actuators’ torque inputs
to the linkage system. Note that all the other components of the coupling
force and moment are borne by the mechanical structure of the joint.
Therefore, the constraint forces and moments irrelevant to energy formula
have been separated from the net energy inputs to the linkage system.
In the free body diagram, the individual links are disjointed, leaving
constraint forces and moments at both sides of the link. The freed links are
allowed to move in any direction. In the energy formulation, we describe the
link motion using independent variables alone. Remember that in a serial link
open kinematic chain the joint coordinates $q = (q_1, \dots, q_n)^T$ are a complete and
independent set of generalized coordinates that uniquely locate the linkage
system with independent variables. Therefore, these variables conform to the
geometric and kinematic constraints. We use these joint coordinates in the
energy-based formulation.
The explicit relationship between the n joint torques and the endpoint force F
is given by the following theorem:
Theorem
Consider an n degree-of-freedom, serial link robot having no friction at the
joints. The joint torques $\tau$ that are required for bearing an arbitrary
endpoint force F are given by

$$\tau = J^T F$$

where J is the 6 x n Jacobian matrix relating infinitesimal joint displacements
dq to infinitesimal end-effecter displacements dp:

$$dp = J\,dq$$
Note that the joint torques in the above expression do not account for gravity
and friction. They are the net torques that balance the endpoint force and
moment. We call $\tau$ of eq.(3) the equivalent joint torques associated with
the endpoint force F.
Proof
We prove the theorem by using the Principle of Virtual Work. Consider virtual
displacements $\delta q_1, \dots, \delta q_n$ at the individual joints and
$\delta p$ at the end-effecter, as shown in Image 6.2.2. Virtual displacements
are arbitrary infinitesimal displacements of a mechanical system that conform
to the geometric
displacements of a mechanical system that conform to the geometric
constraints of the system. Virtual displacements are different from actual
displacements, in that they must only satisfy geometric constraints and do not
have to meet other laws of motion. To distinguish the virtual displacements
from the actual displacements, we use the Greek letter δ rather than the
roman d.
Virtual displacements of the end effecter and individual joints
We assume that the joint torques $\tau_1, \dots, \tau_n$ and the endpoint force
and moment, -F, act on the serial linkage system, while the joints and the
end-effecter are moved in directions that are geometrically admissible. Then,
the virtual work done by the forces and moments is given by

$$\delta Work = \tau^T \delta q - F^T \delta p$$
According to the principle of virtual work, the linkage system is in
equilibrium if, and only if, the virtual work δWork vanishes for arbitrary
virtual displacements that conform to geometric constraints. Note that the
virtual displacements δq and δp are not independent, but are related by the
Jacobian matrix given in eq.(5). The kinematic structure of the robot
mechanism dictates that the virtual displacement δp is completely dependent
upon the virtual displacement of the joints, δq. Substituting eq.(5) into eq.(6)
yields

$$\delta Work = \tau^T \delta q - F^T J\,\delta q = (\tau - J^T F)^T \delta q$$
Note that the vector of the virtual displacements δq consists of all
independent variables, since the joint coordinates of an open kinematic chain
are generalized coordinates that are complete and independent. Therefore, for
the above virtual work to vanish for arbitrary virtual displacements we must
have

$$\tau = J^T F$$

Thus the theorem has been proven.
The above theorem has broad applications in robot mechanics, design, and
control. We will use it repeatedly in the following chapters.
Consider a two-dof articulated robot having the same link dimensions as the
previous examples. The robot is interacting with the environment surface in a
horizontal plane. Obtain the equivalent joint torques
$\tau = (\tau_1, \tau_2)^T$ needed for pushing the surface with a given
endpoint force. Assume no friction. The Jacobian matrix relating the
end-effecter coordinates $x_e$ and $y_e$ to the joint displacements $\theta_1$
and $\theta_2$ has been obtained in the previous chapter. The equivalent joint
torques are then obtained by simply multiplying the endpoint force by the
transpose of the Jacobian: $\tau = J^T F$.
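A minimal Python sketch of this computation follows; the link lengths, posture, and pushing force are illustrative assumptions.

```python
import numpy as np

l1, l2 = 1.0, 0.8   # sample link lengths (assumed)

def jac(t1, t2):
    return np.array([
        [-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
        [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)],
    ])

# Equivalent joint torques for an endpoint force F: tau = J^T F
F = np.array([5.0, 0.0])           # push 5 N along +x (assumed value)
tau = jac(0.4, 0.9).T @ F
print(tau)                         # net torques at joints 1 and 2
```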
Two-dof articulated robot pushing the environment surface
Duality of Differential Kinematics and Statics
We have found that the equivalent joint torques are related to the endpoint
force by the Jacobian matrix, which is the same matrix that relates the
infinitesimal joint displacements to the end-effecter displacement. Thus, the
static force relationship is closely related to the differential kinematics. In this
section we discuss the physical meaning of this observation.
To interpret the similarity between differential kinematics and statics, we can
use the linear mapping diagram of Image 5.4.1. Recall that the differential
kinematic equation can be regarded as a linear mapping when the Jacobian
matrix is fixed at a given robot configuration. reproduces and completes it
with a similar diagram associated with the static analysis. As before, the
range space R(J) represents the set of all the possible end-effecter velocities
generated by joint motions. When the Jacobian matrix is degenerate, or the
robot configuration is singular, the range space does not span the whole
vector space. Namely, there exists a direction in which the end-effecter
cannot move with a non-zero velocity. See the subspace S2 in the Image. The
null space N(J), on the other hand, represents the set of joint velocities that do
not produce any velocity at the end-effecter. If the null space contains a
nonzero element, the differential kinematic equation has an infinite number
of solutions that cause the same end-effecter velocity.
The lower half of the diagram is the linear mapping associated with the static
force relationship given by eq.(6.2.4). Unlike differential kinematics, the mapping
of static forces is given by the transpose of the Jacobian, generating a
mapping from the m-dimensional vector space Vm, associated with the
Cartesian coordinates of the end-effecter, to the n-dimensional vector space
Vn , associated with the joint coordinates. Therefore, the joint torques τ are
always determined uniquely for any arbitrary endpoint force F. However, for
given joint torques, a balancing endpoint force does not always exist. As in
the case of the differential kinematics, let us define the null space $N(J^T)$
and the range space $R(J^T)$ of the static force mapping. The null space
$N(J^T)$ represents the set of all endpoint forces that do not require any
torques at the joints to bear the corresponding load. In this case the endpoint
force is borne entirely by the structure of the linkage mechanism, i.e.
constraint forces. The range space $R(J^T)$, on the other hand, represents the
set of all the possible joint torques that can balance the endpoint forces.
Duality of differential kinematics and statics
The ranges and null spaces of J and $J^T$ are closely related. According to the
rules of linear algebra, the null space N(J) is the orthogonal complement of
the range space $R(J^T)$. Namely, if a non-zero n-vector x is in N(J), it cannot
also belong to $R(J^T)$, and vice-versa. If we denote by S1 the orthogonal
complement of N(J), then the range space $R(J^T)$ is identical to S1, as shown
in the Image. Also, space S3, i.e. the orthogonal complement of $R(J^T)$, is
identical to N(J). What this implies is that, in the directions in which joint
velocities do not cause any end-effecter velocity, the joint torques cannot be
balanced by any endpoint force. In order to maintain a stationary
configuration, the joint torques in this space must be zero.
There is a similar correspondence in the end-effecter coordinate space Vm.
The range space R(J) is the orthogonal complement of the null space $N(J^T)$.
Hence, the subspace S2 in the Image is identical to $N(J^T)$, and the subspace
S4 is identical to R(J). Therefore, no joint torques are required to balance the
endpoint force when the external force acts in a direction in which the
end-effecter cannot be moved by joint movements. Also, when the external
endpoint force is applied in a direction along which the end-effecter can
move, the external force must be borne entirely by the joint torques. When
the Jacobian matrix is degenerate, or the arm is in a singular configuration,
the null space $N(J^T)$ has a non-zero dimension, and the external force can be
borne in part by the mechanical structure. Thus, differential kinematics and
statics are closely related. This relationship is referred to as the duality of
differential kinematics and statics.
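The duality can be checked numerically. The sketch below, a non-authoritative example under assumed unit link lengths, uses the singular value decomposition to compute $N(J)$ and $N(J^T)$ for the two-dof arm at the fully extended (singular) posture, where both null spaces become one-dimensional.

```python
import numpy as np

l1 = l2 = 1.0   # assumed unit link lengths

def jac(t1, t2):
    return np.array([
        [-l1*np.sin(t1) - l2*np.sin(t1+t2), -l2*np.sin(t1+t2)],
        [ l1*np.cos(t1) + l2*np.cos(t1+t2),  l2*np.cos(t1+t2)],
    ])

def null_space(A, tol=1e-10):
    # Rows of Vt whose singular values are (near) zero span N(A).
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

J = jac(0.3, 0.0)   # theta2 = 0: arm fully extended, singular

print("N(J):  ", null_space(J).ravel())    # joint velocities with no endpoint motion
print("N(J^T):", null_space(J.T).ravel())  # endpoint forces needing no joint torques
```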
Closed-Loop Kinematic Chains
The relationship between joint torques and the endpoint force obtained can be
extended to a class of parallel-link mechanisms with closed kinematic-chains.
It can also be extended to multi-fingered hands, leg locomotion, and other
robot mechanisms having closed kinematic chains. In this section we discuss
classes of closed kinematic chains based on the principle of virtual work.
Five-bar-link robot exerting endpoint force
We begin by revisiting the five-bar-link planar robot shown in Image 6.4.1.
This robot has two degrees of freedom, comprising two active joints, Joints 1
and 3, and three passive joints, Joints 2, 4, and 5. Therefore the virtual work
associated with the endpoint force and joint torques is given by

$$\delta Work = \sum_{i=1}^{5} \tau_i\,\delta\theta_i - F^T \delta p$$

We assume no friction at the joints. Therefore, the three passive joints cannot
bear any torque load about their joint axes. Substituting
$\tau_2 = \tau_4 = \tau_5 = 0$ into the above yields

$$\delta Work = \tau_1\,\delta\theta_1 + \tau_3\,\delta\theta_3 - F^T \delta p$$
For any given configuration of the robot, the virtual displacements of the
end-effecter are uniquely determined by the virtual displacements of Joints 1
and 3. In fact, the former is related to the latter via the Jacobian matrix:

$$\delta p = J \begin{pmatrix} \delta\theta_1 \\ \delta\theta_3 \end{pmatrix}$$

Using this Jacobian, the virtual work vanishes for arbitrary admissible
displacements only if the active joint torques satisfy
$(\tau_1, \tau_3)^T = J^T F$, just as for the serial link mechanism.
Note that the joint coordinates associated with the active joints are not
necessarily generalized coordinates that uniquely locate the system. For
example, the arm configuration of the five-bar-link robot is not uniquely
determined with joint angles $\theta_1$ and $\theta_3$ alone. There are two configurations
for given θ1 and θ3 . The corollary requires the differential relation to be
defined uniquely in the vicinity of the given configuration.
Over-Actuated Systems
If an n degree-of-freedom robot system has more than n active joints, or fewer
than n active joints, the above corollary does not apply. These are called
over-actuated and under-actuated systems, respectively. Over-actuated
systems are of importance in many manipulation and locomotion
applications. In the following, we will consider the static relationship among
joint torques and endpoint forces for a class of over-actuated systems.
Consider a two-fingered hand manipulating an object within a plane. Note that both
fingers are connected at the fingertips holding the object. While holding the
object, the system has three degrees of freedom. Since each finger has two
active joints, the total number of active joints is four. Therefore, the system is
over-actuated.
Using the notation shown in the Image, the virtual work is given by
Note that only three virtual displacements of the four joint angles are
independent. There exists a differential relationship between one of the joints,
say $\theta_4$, and the other three due to the kinematic constraint. Let us write it as
This gives a particular combination of joint torques that do not influence the
force balance with the external endpoint load F. The joint torques having this
proportion generate the internal force applied to the object, as illustrated in
the Image. This internal force is a grasp force that is needed for performing a
task.
Two-fingered hand manipulating a grasped object
Define geometric parameters needed and obtain the two Jacobian matrices
associated with the two-fingered hand holding an object. Furthermore, obtain
the grasp force using the Jacobian matrices and the joint torques.
Dynamics
In this chapter, we analyze the dynamic behavior of robot mechanisms. The
dynamic behavior is described in terms of the time rate of change of the robot
configuration in relation to the joint torques exerted by the actuators. This
relationship can be expressed by a set of differential equations, called
equations of motion, that govern the dynamic response of the robot linkage to
input joint torques. In the next chapter, we will design a control system on the
basis of these equations of motion.
Two methods can be used in order to obtain the equations of motion: the
Newton-Euler formulation, and the Lagrangian formulation. The Newton-Euler formulation is derived by the direct interpretation of Newton's Second
Law of Motion, which describes dynamic systems in terms of force and
momentum. The equations incorporate all the forces and moments acting on
the individual robot links, including the coupling forces and moments
between the links. The equations obtained from the Newton-Euler method
include the constraint forces acting between adjacent links. Thus, additional
arithmetic operations are required to eliminate these terms and obtain explicit
relations between the joint torques and the resultant motion in terms of joint
displacements. In the Lagrangian formulation, on the other hand, the system's
dynamic behavior is described in terms of work and energy using generalized
coordinates. This approach is the extension of the indirect method discussed
in the previous chapter to dynamics. Therefore, all the workless forces and
constraint forces are automatically eliminated in this method. The resultant
equations are generally compact and provide a closed-form expression in
terms of joint torques and joint displacements. Furthermore, the derivation is
simpler and more systematic than in the Newton-Euler method.
The robot’s equations of motion are basically a description of the relationship
between the input joint torques and the output motion, i.e. the motion of the
robot linkage. As in kinematics and in statics, we need to solve the inverse
problem of finding the necessary input torques to obtain a desired output
motion. This inverse dynamics problem is discussed in the last section of this
chapter. Efficient algorithms have been developed that allow the dynamic
computations to be carried out on-line in real time.
Newton-Euler Formulation of Equations of Motion
Basic Dynamic Equations
In this section we derive the equations of motion for an individual link based
on the direct method, i.e. Newton-Euler Formulation. The motion of a rigid
body can be decomposed into the translational motion with respect to an
arbitrary point fixed to the rigid body, and the rotational motion of the rigid
body about that point. The dynamic equations of a rigid body can also be
represented by two equations: one describes the translational motion of the
centroid (or center of mass), while the other describes the rotational motion
about the centroid. The former is Newton's equation of motion for a mass
particle, and the latter is called Euler's equation of motion.
We begin by considering the free body diagram of an individual link, showing
all the forces and moments acting on link i. The diagram is the same as the
one used for the static analysis, except for the inertial force and moment that
arise from the dynamic motion of the link. Let $v_{ci}$ be the linear velocity
of the centroid of link i with reference to the base coordinate frame O-xyz,
which is an inertial reference frame. The inertial force is then given by
$-m_i \dot{v}_{ci}$, where $m_i$ is the mass of the link and $\dot{v}_{ci}$ is
the time derivative of $v_{ci}$. Based on D'Alembert's principle, the equation
of motion is then obtained by adding the inertial force to the static balance of
forces in eq.(6.1.1) so that

$$f_{i-1,i} - f_{i,i+1} + m_i g - m_i \dot{v}_{ci} = 0$$

where, as in Chapter 6, $f_{i-1,i}$ and $-f_{i,i+1}$ are the coupling forces
applied to link i by links i-1 and i+1, respectively, and g is the acceleration
of gravity.
Free body diagram of link i in motion
Rotational motions are described by Euler's equations. In the same way as for
translational motions, adding “inertial torques” to the static balance of
moments yields the dynamic equations. We begin by describing the mass
properties of a single rigid body with respect to rotations about the centroid.
The mass properties are represented by an inertia tensor, or an inertia matrix,
which is a 3 x 3 symmetric matrix defined by

$$I = \int_V \rho \begin{pmatrix} y^2+z^2 & -xy & -xz \\ -xy & z^2+x^2 & -yz \\ -xz & -yz & x^2+y^2 \end{pmatrix} dV$$

where ρ is the mass density, x, y, z are coordinates measured from the centroid
of the rigid body, and the integral is taken over the entire volume V of the
rigid body.
Note that the inertia matrix varies with the orientation of the rigid body.
Although the inherent mass property of the rigid body does not change when
viewed from a frame fixed to the body, its matrix representation when viewed
from a fixed frame, i.e. inertial reference frame, changes as the body rotates.
The inertial torque acting on link i is given by the time rate of change of the
angular momentum of the link at that instant. Let $\omega_i$ be the angular
velocity vector and $I_i$ be the centroidal inertia tensor of link i; then the
angular momentum is given by $I_i \omega_i$. Since the inertia tensor varies as
the orientation of the link changes, the time derivative of the angular
momentum includes not only the angular acceleration term $I_i \dot{\omega}_i$,
but also a term resulting from changes in the inertia tensor viewed from a
fixed frame. This latter term is known as the gyroscopic torque and is given by
$\omega_i \times I_i \omega_i$. Adding these terms to the original balance of
moments yields

$$N_{i-1,i} - N_{i,i+1} - (r_{i-1,i} + r_{i,C_i}) \times f_{i-1,i} + r_{i,C_i} \times f_{i,i+1} - I_i \dot{\omega}_i - \omega_i \times I_i \omega_i = 0$$

using the notations of Image 7.1.1. Equations (2) and (3) govern the dynamic
behavior of an individual link. The complete set of equations for the whole
robot is obtained by evaluating both equations for all the links, i = 1, ..., n.
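The rotational part of this balance is easy to evaluate numerically. The following sketch computes the moment required to produce a given angular motion of a single rigid body, including the gyroscopic term; the inertia values and motion are illustrative assumptions.

```python
import numpy as np

# Euler's equation about the centroid: N = I*domega + omega x (I*omega)
I = np.diag([0.02, 0.03, 0.05])        # centroidal inertia tensor (kg m^2), assumed
omega = np.array([0.5, -0.2, 1.0])     # angular velocity (rad/s), assumed
domega = np.array([0.1, 0.0, -0.3])    # angular acceleration (rad/s^2), assumed

N = I @ domega + np.cross(omega, I @ omega)   # required moment about centroid
print(N)   # the cross-product term is the gyroscopic torque
```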
Closed-Form Dynamic Equations
The Newton-Euler equations we have derived are not in an appropriate form
for use in dynamic analysis and control design. They do not explicitly
describe the input-output relationship, unlike the relationships we obtained in
the kinematic and static analyses. In this section, we modify the Newton-Euler
equations so that explicit input-output relations can be obtained. The
Newton-Euler equations involve the coupling forces and moments. As shown in
eqs.(6.2.1) and (6.2.2), the joint torque τi, which is the input to the robot
linkage, is included in the coupling force or moment. However, τi is not
explicitly involved in the Newton-Euler equations. Furthermore, the coupling
force and moment also include workless constraint forces, which act
internally so that individual link motions conform to the geometric
constraints imposed by the mechanical structure. To derive explicit input-output dynamic relations, we need to separate the input joint torques from the
constraint forces and moments. The Newton-Euler equations are described in
terms of centroid velocities and accelerations of individual arm links.
Individual link motions, however, are not independent, but are coupled
through the linkage. They must satisfy certain kinematic relationships to
conform to the geometric constraints. Thus, individual centroid position
variables are not appropriate for output variables since they are not
independent.
The appropriate form of the dynamic equations therefore consists of
equations described in terms of all independent position variables and input
forces, i.e., joint torques, that are explicitly involved in the dynamic
equations. Dynamic equations in such an explicit input-output form are
referred to as closed-form dynamic equations. As discussed in the previous
chapter, joint displacements q are a complete and independent set of
generalized coordinates that locate the whole robot mechanism, and joint
torques τ are a set of independent inputs that are separated from constraint
forces and moments. Hence, dynamic equations in terms of joint
displacements q and joint torques τ are closed-form dynamic equations.
Consider the two dof planar manipulator that we discussed in the previous
chapter. Let us obtain the Newton-Euler equations of motion for the two
individual links, and then derive closed-form dynamic equations in terms of
joint displacements $\theta_1$ and $\theta_2$, and joint torques $\tau_1$ and
$\tau_2$. Since the link mechanism is planar, we represent the velocity of the
centroid of each link by a 2-dimensional vector $v_i$ and the angular velocity
by a scalar $\omega_i$. We assume that the centroid of link i is located on the
center line passing through adjacent joints at a distance $\ell_{ci}$ from
joint i, as shown in the Image. The axis of rotation does not vary for the
planar linkage. The inertia tensor in this case is reduced to a scalar moment
of inertia denoted by $I_i$.
From eqs. (1) and (3), the Newton-Euler equations for link 1 are given by
Note that all vectors are 2 x 1, so that the moment $N_{i-1,i}$ and the other
vector products are scalar quantities. Similarly, for link 2
Mass properties of two dof planar robot
To obtain closed-form dynamic equations, we first eliminate the constraint
forces and separate them from the joint torques, so as to explicitly involve the
joint torques in the dynamic equations. For the planar manipulator, the joint
torques τ1 and τ2 are equal to the coupling moments:
Substituting eq.(6) into eq.(5) and eliminating f1,2 we obtain
Similarly, eliminating f0,1 yields,
Next, we rewrite the centroid velocities $v_{ci}$, the angular velocities
$\omega_i$, and the position vectors $r_{i,i+1}$ using the joint displacements
$\theta_1$ and $\theta_2$, which are independent variables. Note that
$\omega_2$ is the angular velocity relative to the base coordinate frame, while
$\theta_2$ is measured relative to link 1. Then, we have

$$\omega_1 = \dot\theta_1, \qquad \omega_2 = \dot\theta_1 + \dot\theta_2$$
The linear velocities can be written as
Substituting eqs. (9) and (10) along with their time derivatives into eqs. (7)
and (8), we obtain the closed-form dynamic equations in terms of $\theta_1$
and $\theta_2$, eqs. (11-a, b), with coefficients given by eqs. (12-a) through
(12-f). The scalar g represents the acceleration of gravity along the negative
y-axis. More generally, the closed-form dynamic equations of an
n-degree-of-freedom robot can be given in the form

$$\tau_i = \sum_{j=1}^{n} H_{ij}\,\ddot{q}_j + \sum_{j=1}^{n}\sum_{k=1}^{n} h_{ijk}\,\dot{q}_j\dot{q}_k + G_i, \qquad i = 1, \dots, n$$

where the coefficients $H_{ij}$, $h_{ijk}$, and $G_i$ are functions of the
joint displacements q. When external forces act on the robot system, the
left-hand side must be modified accordingly.
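For concreteness, below is a minimal Python sketch of the standard closed-form equations for the two-link planar arm, computing the joint torques for a given motion (the inverse dynamics). The coefficient expressions follow the usual textbook form; all numerical parameters are illustrative assumptions.

```python
import numpy as np

m1, m2 = 1.0, 0.8        # link masses (kg), assumed
l1 = 1.0                 # length of link 1 (m), assumed
lc1, lc2 = 0.5, 0.4      # centroid distances from joints 1 and 2 (m), assumed
I1, I2 = 0.08, 0.05      # centroidal moments of inertia (kg m^2), assumed
g = 9.81

def inverse_dynamics(t, td, tdd):
    """Joint torques for joint angles t, velocities td, accelerations tdd."""
    t1, t2 = t
    H11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*np.cos(t2)) + I2
    H22 = m2*lc2**2 + I2
    H12 = m2*(lc2**2 + l1*lc2*np.cos(t2)) + I2
    h   = m2*l1*lc2*np.sin(t2)               # centrifugal/Coriolis coefficient
    G1  = m1*lc1*g*np.cos(t1) + m2*g*(lc2*np.cos(t1+t2) + l1*np.cos(t1))
    G2  = m2*g*lc2*np.cos(t1+t2)
    tau1 = H11*tdd[0] + H12*tdd[1] - h*td[1]**2 - 2*h*td[0]*td[1] + G1
    tau2 = H22*tdd[1] + H12*tdd[0] + h*td[0]**2 + G2
    return tau1, tau2

print(inverse_dynamics((0.3, 0.6), (0.5, -0.2), (0.1, 0.4)))
```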
Physical Interpretation of the Dynamic Equations
In this section, we interpret the physical meaning of each term involved in the
closed form dynamic equations for the two-dof planar robot.
The last term in each of eqs. (11-a, b), $G_i$, accounts for the effect of gravity. Indeed, the terms $G_1$ and $G_2$, given by (12-e, f), represent the moments created by the masses $m_1$ and $m_2$ about their individual joint axes. The moments depend upon the arm configuration. When the arm is fully extended along the x-axis, the gravity moments are maximum.
Next, we investigate the first terms in the dynamic equations. When the second joint is immobilized, i.e. $\dot\theta_2 = 0$ and $\ddot\theta_2 = 0$, the first dynamic equation reduces to $\tau_1 = H_{11}\ddot\theta_1$, where the gravity term is neglected. From this expression it follows that the coefficient $H_{11}$ accounts for the moment of inertia seen by the first joint when the second joint is immobilized. The coefficient $H_{11}$ given by eq. (12-a) is interpreted as the total moment of inertia of both links reflected to the first joint axis. The first two terms in eq. (12-a), $m_1 l_{c1}^2 + I_1$, represent the moment of inertia of link 1 with respect to joint 1, while the other terms are the contribution from link 2. The inertia of the second link depends upon the distance L between the centroid of link 2 and the first joint axis, as illustrated in Image 7.1.3. The distance L is a function of the joint angle $\theta_2$ and is given by
Using the parallel axis theorem of moment of inertia (Goldstein, 1981), the inertia of link 2 with respect to joint 1 is $m_2 L^2 + I_2$, which is consistent with the last two terms in eq. (12-a). Note that the inertia varies with the arm configuration. The inertia is maximum when the arm is fully extended ($\theta_2 = 0$), and minimum when the arm is completely contracted ($\theta_2 = \pi$).
Varying inertia depending on the arm configuration
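A short sketch of this configuration dependence, assuming the distance relation $L^2 = l_1^2 + l_{c2}^2 + 2 l_1 l_{c2}\cos\theta_2$ implied by the geometry (parameter names illustrative):

import numpy as np

def link2_inertia_about_joint1(th2, m2, l1, lc2, I2):
    # Parallel axis theorem: inertia of link 2 reflected to joint 1,
    # where L is the distance from joint 1 to the centroid C2.
    L_sq = l1**2 + lc2**2 + 2*l1*lc2*np.cos(th2)
    return m2*L_sq + I2

print(link2_inertia_about_joint1(0.0,   1.0, 0.3, 0.15, 0.01))  # extended: maximum
print(link2_inertia_about_joint1(np.pi, 1.0, 0.3, 0.15, 0.01))  # contracted: minimum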
Let us now examine the second terms on the right-hand side of eq. (11). Consider the instant when only the second joint accelerates, i.e. $\dot\theta_1 = \dot\theta_2 = 0$, $\ddot\theta_1 = 0$, and $\ddot\theta_2 \neq 0$; then the first equation reduces to $\tau_1 = H_{12}\ddot\theta_2$, where the gravity term is again neglected. From this expression it follows that the second term accounts for the effect of the second link motion upon the first joint. When the second link is accelerated, the reaction force and torque induced by the second link act upon the first link. This is clear in the original Newton-Euler equations (4), where the coupling force $-f_{1,2}$ and moment $N_{1,2}$ from link 2 are involved in the dynamic equation for link 1. The coupling force and moment cause a torque $\tau_{int}$ about the first joint axis given by
where $N_{1,2}$ and $f_{1,2}$ are evaluated using eq. (5) for the above instant. This agrees with the second term in eq. (11-a). Thus, the second term accounts for the interaction between the two joints.
The third terms in eq. (11) are proportional to the square of the joint velocities. We consider the instant when the first joint rotates at a constant velocity while the second joint is immobilized, i.e. $\dot\theta_1 \neq 0$, $\dot\theta_2 = 0$, $\ddot\theta_1 = \ddot\theta_2 = 0$. In this case, a centrifugal force acts upon the second link. Let $f_{cent}$ be the centrifugal force. Its magnitude is given by
where L is the distance between the centroid $C_2$ and the first joint O. The centrifugal force acts in the direction of the position vector $r_{O,C2}$. This centrifugal force causes a moment $\tau_{cent}$ about the second joint. Using eq. (16), the moment $\tau_{cent}$ is computed as
This agrees with the third term, $h\dot\theta_1^2$, in eq. (11-b). Thus we conclude that the third term is caused by the centrifugal effect on the second joint due to the motion of the first joint. Similarly, rotating the second joint at a constant velocity causes a torque of $-h\dot\theta_2^2$ due to the centrifugal effect upon the first joint.
Centrifugal (a) and Coriolis (b) effects
Finally, we discuss the fourth term of eq. (11-a), which is proportional to the product of the joint velocities. Consider the instant when the two joints rotate at velocities $\dot\theta_1$ and $\dot\theta_2$ at the same time. Let $O_b$-$x_b y_b$ be the coordinate frame attached to the tip of link 1. Note that the frame $O_b$-$x_b y_b$ is parallel to the base coordinate frame at the instant shown. However, the frame rotates at the angular velocity $\dot\theta_1$ together with link 1. The mass centroid of link 2 moves at a velocity $v_b$ relative to link 1, i.e. relative to the moving coordinate frame $O_b$-$x_b y_b$. When a mass particle m moves at a velocity $v_b$ relative to a moving coordinate frame rotating at an angular velocity ω, the mass particle experiences the so-called Coriolis force, $f_{Cor} = -2m\,\omega \times v_b$. Let $f_{Cor}$ be the force acting on link 2 due to the Coriolis effect. The Coriolis force is given by
This Coriolis force causes a moment $\tau_{Cor}$ about the first joint, which is given by
The right-hand side of the above equation agrees with the fourth term in eq.
(11-a). Since the Coriolis force given by eq. (18) acts in parallel with link 2,
the force does not create a moment about the second joint in this particular
case.
Thus, the dynamic equations of a robot arm are characterized by a
configuration dependent inertia, gravity torques, and interaction torques
caused by the accelerations of the other joints and the existence of centrifugal
and Coriolis effects.
Lagrangian Formulation of Robot Dynamics
Lagrangian Dynamics
In the Newton-Euler formulation, the equations of motion are derived from
Newton's Second Law, which relates force and momentum, as well as torque
and angular momentum. The resulting equations involve constraint forces,
which must be eliminated to obtain closed form dynamic equations. In the
Newton-Euler formulation, the equations are not expressed in terms of
independent variables, and do not include input joint torques explicitly.
Substantial arithmetic operations are needed to derive the closed-form dynamic equations; this is a complex procedure that requires physical intuition, as discussed in the previous section.
An alternative to the Newton-Euler formulation of manipulator dynamics is
the Lagrangian formulation, which describes the behavior of a dynamic
system in terms of work and energy stored in the system rather than of forces
and moments of the individual members involved. The constraint forces
involved in the system are automatically eliminated in the formulation of
Lagrangian dynamic equations. The closed-form dynamic equations can be
derived systematically in any coordinate system.
Let $q_1, \dots, q_n$ be generalized coordinates that completely locate a dynamic system, and let T and U be the total kinetic energy and potential energy stored in the dynamic system. We define the Lagrangian L by
$L = T - U \qquad (1)$
Note that the potential energy is a function of the generalized coordinates $q_i$, and that the kinetic energy is a function of the generalized velocities $\dot q_i$ as well as the generalized coordinates $q_i$. Using the Lagrangian, the equations of motion of the dynamic system are given by
$\frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = Q_i, \qquad i = 1, \dots, n \qquad (2)$
where Qi is the generalized force corresponding to the generalized coordinate
qi. The generalized forces acting on the system can be identified by considering the virtual work done by non-conservative forces.
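To see how mechanically the Lagrangian procedure runs, here is a small symbolic sketch using sympy. It is applied to a one-link pendulum rather than the full arm, purely to keep the output readable; all symbol names are illustrative:

import sympy as sp

t = sp.symbols('t')
m, l, g, tau = sp.symbols('m l g tau')
q = sp.Function('q')(t)              # single generalized coordinate

# Kinetic and potential energy of a point-mass pendulum
T = sp.Rational(1, 2)*m*(l*q.diff(t))**2
U = -m*g*l*sp.cos(q)
L = T - U

# Lagrange's equation: d/dt(dL/d(q_dot)) - dL/dq = Q, with Q = tau
eq = sp.diff(sp.diff(L, q.diff(t)), t) - sp.diff(L, q) - tau
print(sp.simplify(eq))               # m*l**2*q'' + m*g*l*sin(q) - tau

The same mechanical recipe, applied to the two-dof arm with its kinetic energy written in joint coordinates, reproduces eqs. (7.1.11-a, b); no accelerations ever have to be derived by hand.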
Planar Robot Dynamics
Before discussing general robot dynamics in three-dimensional space, we
consider the 2 dof planar robot, for which we have derived the equations of
motion based on Newton-Euler Formulation. Image 7.2.1 shows the same
robot mechanism with a few new variables needed for the Lagrangian
Formulation.
Two dof robot
The total kinetic energy stored in the two links moving at linear velocity $v_{ci}$ and angular velocity $\omega_i$ at the centroids, as shown in the figure, is given by
where $v_{ci}$ represents the magnitude of the velocity vector. Note that the linear
velocities and the angular velocities are not independent variables but are
functions of joint angles and joint angular velocities, i.e. the generalized
coordinates and the generalized velocities that locate the dynamic state of the
system uniquely. We need to rewrite the above kinetic energy so that it is expressed with respect to the joint angles $\theta_1, \theta_2$ and joint velocities $\dot\theta_1, \dot\theta_2$. The angular velocities are given by
The linear velocity of the first link is simply
However, the centroidal linear velocity of the second link vc2 needs more
computation. Treating the centroid C2 as an endpoint and applying the
formula for computing the endpoint velocity yield the centroidal velocity. Let
be the 2x2 Jacobian matrix relating the centroidal velocity vector to joint
velocities. Then,
Substituting eqs. (4)-(6) into eq. (3) yields
Substituting the above two equations into eq.(2) yields the same result as eq.
(7.1.11-a). The equation of motion for the second joint can be obtained in the
same manner, which is identical to eq.(7.1.11-b). Thus, the same equations of
motion have been obtained based on Lagrangian Formulation. Note that the
Lagrangian Formulation is simpler and more systematic than the Newton-Euler Formulation. To formulate kinetic energy, velocities must be obtained,
but accelerations are not needed. Remember that the acceleration
computation was complex in the Newton-Euler Formulation, as discussed in
the previous section. This acceleration computation is automatically dealt
with in the computation of Lagrange’s equations of motion. The difference
between the two methods is more significant when the degrees of freedom
increase, since many workless constraint forces and moments are present and
the acceleration computation becomes more complex in Newton-Euler
Formulation.
Inertia Matrix
In this section we will extend Lagrange’s equations of motion obtained for
the two d.o.f. planar robot to the ones for a general n d.o.f. robot. Central to
Lagrangian formulation is the derivation of the total kinetic energy stored in
all of the rigid bodies involved in a robotic system. Examining kinetic energy
will provide useful physical insights into robot dynamics. Such physical insights
based on Lagrangian formulation will supplement the ones we have obtained
based on Newton-Euler formulation.
As seen in eq.(3) for the planar robot, the kinetic energy stored in an
individual arm link consists of two terms: one is the kinetic energy attributed to
the translational motion of mass mi and the other is due to rotation about the
centroid. For a general three-dimensional rigid body, this can be written as
where $\omega_i$ and $I_i$ are, respectively, the 3x1 angular velocity vector and the 3x3 inertia matrix of the i-th link viewed from the base coordinate frame, i.e. the
inertial reference. The total kinetic energy stored in the whole robot linkage is
then given by
since energy is additive.
The expression for kinetic energy is written in terms of the velocity and
angular velocity of each link member, which are not independent variables,
as mentioned in the previous section. Let us now rewrite the above equations
in terms of an independent and complete set of generalized coordinates,
namely the joint coordinates $q = [q_1, \dots, q_n]^T$. For the planar robot example, we
used the Jacobian matrix relating the centroid velocity to joint velocities for
rewriting the expression. We can use the same method for rewriting the
centroidal velocity and angular velocity for three-dimensional multi-body
systems.
where $J^L_i$ and $J^A_i$ are, respectively, the 3 x n Jacobian matrices relating the
centroid linear velocity and the angular velocity of the i-th link to joint
velocities. Note that the linear and angular velocities of the i-th link are
dependent only on the first i joint velocities, and hence the last n-i columns of
these Jacobian matrices are zero vectors. Substituting eq.(13) into eqs.(11)
and (12) yields
The matrix H incorporates all the mass properties of the whole robot
mechanism, as reflected to the joint axes, and is referred to as the Multi-Body
Inertia Matrix. Note the difference between the multi-body inertia matrix and
the 3 x 3 inertia matrices of the individual links. The former is an aggregate
inertia matrix including the latter as components. The multi-body inertia
matrix, however, has properties similar to those of individual inertia matrices.
As shown in eq. (15), the multi-body inertia matrix is a symmetric matrix, as
is the individual inertia matrix defined by eq. (7.1.2). The quadratic form associated with the multi-body inertia matrix represents kinetic energy, as does that of the individual inertia matrix. Kinetic energy is always strictly positive
unless the system is at rest. The multi-body inertia matrix of eq. (15) is
positive definite, as are the individual inertia matrices. Note, however, that
the multi-body inertia matrix involves Jacobian matrices, which vary with
linkage configuration. Therefore, the multi-body inertia matrix is
configuration-dependent and represents the instantaneous composite mass
properties of the whole linkage at the current linkage configuration. To
manifest the configuration-dependent nature of the multi-body inertia matrix,
we write it as H(q), a function of joint coordinates q.
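A minimal sketch of this assembly, assuming each link's centroid Jacobians have already been evaluated at the current configuration q:

import numpy as np

def multibody_inertia(masses, inertias, JL, JA):
    # H(q) = sum_i ( m_i JL_i^T JL_i + JA_i^T I_i JA_i )
    # masses:   scalars m_i
    # inertias: 3x3 centroidal inertia matrices I_i in the base frame
    # JL, JA:   3xn linear / angular centroid Jacobians of each link
    n = JL[0].shape[1]
    H = np.zeros((n, n))
    for m_i, I_i, JL_i, JA_i in zip(masses, inertias, JL, JA):
        H += m_i * JL_i.T @ JL_i + JA_i.T @ I_i @ JA_i
    return H   # symmetric, positive definite, configuration-dependent

Since the Jacobians change with q, calling this function at two different configurations returns two different matrices, which is exactly the configuration dependence discussed above.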
Using the components of the multi-body inertia matrix $H = \{H_{ij}\}$, we can write the total kinetic energy in scalar quadratic form:
$T = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} H_{ij}\,\dot q_i\,\dot q_j$
Most of the terms involved in Lagrange’s equations of motion can be
obtained directly by differentiating the above kinetic energy. From the first
term in eq.(2),
It is clear that the interactive inertial torque $H_{ij}\ddot q_j$ caused by the j-th joint acceleration upon the i-th joint has the same coefficient as the torque $H_{ji}\ddot q_i$ caused by joint i upon joint j, since $H_{ij} = H_{ji}$. This property is called Maxwell's Reciprocity Relation.
The second term of eq.(17) is non-zero in general, since the multi-body
inertia matrix is configuration-dependent, being a function of joint
coordinates. Applying the chain rule,
The second term in eq.(2), Lagrange’s equation of motion, also yields the
partial derivatives of Hij.
Substituting eq.(19) into the second term of eq.(17) and combining the
resultant term with eq.(20), let us write these nonlinear terms as
where the coefficient $C_{ijk}$ is given by
$C_{ijk} = \frac{1}{2}\left(\frac{\partial H_{ij}}{\partial q_k} + \frac{\partial H_{ik}}{\partial q_j} - \frac{\partial H_{jk}}{\partial q_i}\right)$
This coefficient $C_{ijk}$ is called Christoffel's Three-Index Symbol. Note that
eq.(21) is nonlinear, having products of joint velocities. Eq.(21) can be
divided into the terms proportional to squared joint velocities, i.e. j = k, and the ones for j ≠ k: the former represents centrifugal torques and the latter
Coriolis torques.
These centrifugal and Coriolis terms are present only when the multi-body
inertia matrix is configuration dependent. In other words, the centrifugal and
Coriolis torques are interpreted as nonlinear effects due to the configuration-dependent nature of the multi-body inertia matrix in the Lagrangian formulation.
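These symbols can be evaluated numerically from any configuration-dependent inertia matrix; the sketch below uses central finite differences over H(q) and is a stand-in, under the formula stated above, for a symbolic derivation:

import numpy as np

def christoffel(H_of_q, q, eps=1e-6):
    # C[i,j,k] = 0.5*(dH[i,j]/dq_k + dH[i,k]/dq_j - dH[j,k]/dq_i)
    # H_of_q: callable returning the n x n inertia matrix at q (ndarray)
    n = len(q)
    dH = np.zeros((n, n, n))                 # dH[i, j, k] = dH_ij/dq_k
    for k in range(n):
        dq = np.zeros(n); dq[k] = eps
        dH[:, :, k] = (H_of_q(q + dq) - H_of_q(q - dq)) / (2*eps)
    C = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j, k] = 0.5*(dH[i, j, k] + dH[i, k, j] - dH[j, k, i])
    return C

For a constant H the derivatives vanish and so do all C[i,j,k], matching the statement that centrifugal and Coriolis terms exist only when the inertia matrix is configuration dependent.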
Generalized Forces
Forces acting on a system of rigid bodies can be represented as conservative
forces and non-conservative forces. The former is given by partial derivatives
of potential energy U in Lagrange’s equations of motion. If gravity is the
only conservative force, the total potential energy stored in n links is given by
where $r_{ci}$ is the position vector of the centroid $C_i$ that is dependent on joint
coordinates. Substituting this potential energy into Lagrange’s equations of
motion yields the following gravity torque seen by the i-th joint:
where $J^L_{j,i}$ is the i-th column vector (3 x 1) of the Jacobian matrix $J^L_j$ relating the linear centroid velocity of the j-th link to joint velocities.
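A sketch of this gravity-torque computation, assuming a gravity vector pointing along the negative z-axis and the per-link centroid Jacobians introduced above:

import numpy as np

def gravity_torques(masses, JL, g_vec=np.array([0.0, 0.0, -9.81])):
    # G_i = dU/dq_i with U = -sum_j m_j * g_vec . r_cj, so
    # G = -sum_j m_j * g_vec^T * JL_j   (JL_j: 3xn centroid Jacobian)
    n = JL[0].shape[1]
    G = np.zeros(n)
    for m_j, JL_j in zip(masses, JL):
        G -= m_j * (g_vec @ JL_j)
    return G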
Non-conservative forces acting on the robot mechanism are represented by
generalized forces Qi in Lagrangian formulation. Let δWork be virtual work
done by all the non-conservative forces acting on the system. Generalized
forces $Q_i$ associated with generalized coordinates $q_i$, e.g. joint coordinates, are defined by
$\delta Work = \sum_{i=1}^{n} Q_i\,\delta q_i \qquad (26)$
If the virtual work is given by the inner product of joint torques and virtual joint displacements, $\delta Work = \tau_1\,\delta q_1 + \cdots + \tau_n\,\delta q_n$, the joint torque itself is the generalized force corresponding to the joint coordinate. However, generalized forces are often different from joint torques. Care must be taken to find the correct generalized forces. Let us work out the following example.
Consider the same 2 d.o.f. planar robot as Example 7.1. Instead of using joint
angles θ1 and θ2 as generalized coordinates, let us use the absolute angles, φ1
and φ2 , measured from the positive x-axis. See the Image below. Changing
generalized coordinates entails changes to generalized forces. Let us find the
generalized forces for the new coordinates.
Absolute joint angles φ1 and φ2 and disjointed links
As shown in the Image, joint torque $\tau_2$ acts on the second link, whose virtual displacement is $\delta\phi_2$, while joint torque $\tau_1$ and the reaction torque $-\tau_2$ act on the first link for virtual displacement $\delta\phi_1$. Therefore, the virtual work is
$\delta Work = (\tau_1 - \tau_2)\,\delta\phi_1 + \tau_2\,\delta\phi_2$
Comparing this equation with eq. (26), where the generalized coordinates are $q_1 = \phi_1$ and $q_2 = \phi_2$, we can conclude that the generalized forces are:
$Q_1 = \tau_1 - \tau_2, \qquad Q_2 = \tau_2$
Note that, since generalized coordinates q can uniquely locate the system, the
position vector r must be written as a function of q alone.
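The same conclusion can be reached by mapping the joint torques through the transpose of the coordinate-change Jacobian; a small numerical check with illustrative values:

import numpy as np

# Joint angles in terms of absolute angles: th1 = phi1, th2 = phi2 - phi1
dtheta_dphi = np.array([[1.0, 0.0],
                        [-1.0, 1.0]])

def generalized_forces(tau):
    # Q = (d theta / d phi)^T tau maps joint torques to the
    # generalized forces for the absolute-angle coordinates.
    return dtheta_dphi.T @ tau

print(generalized_forces(np.array([2.0, 0.5])))   # [1.5, 0.5] = [tau1 - tau2, tau2]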
Force and Compliance Controls
A class of simple tasks may need only trajectory control, where the robot end-effecter is moved merely along a prescribed time trajectory. However, a
number of complex tasks, including assembly of parts, manipulation of tools,
and walking on a terrain, entail the control of physical interactions and
mechanical contacts with the environment. Achieving a task goal often
requires the robot to comply with the environment, react to the force acting
on the end-effecter, or adapt its motion to uncertainties of the environment.
Strategies are needed for performing those tasks.
Force and compliance controls are fundamental task strategies for performing
a class of tasks entailing the accommodation of mechanical interactions in the
face of environmental uncertainties. In this chapter we will first present
hybrid position/force control: a basic principle of strategic task planning for
dealing with geometric constraints imposed by the task environment. An
alternative approach to accommodating interactions will also be presented
based on compliance or stiffness control. Characteristics of task compliances
and force feedback laws will be analyzed and applied to various tasks.
Hybrid Position/Force Control
Principle
To begin with let us consider a daily task. Image 9.1.1 illustrates a robot
drawing a line with a pencil on a sheet of paper. Although we humans can
perform this type of task without considering any detail of hand control, the
robot needs specific control commands and an effective control strategy. To
draw a letter, “A”, for example, we first conceive a trajectory of the pencil
tip, and command the hand to follow the conceived trajectory. At the same
time we accommodate the pressure with which the pencil is contacting the
sheet of paper. Let o-xyz be a coordinate system with the z-axis
perpendicular to the sheet of paper. Along the x and y axes, we provide
positional commands to the hand control system. Along the z-axis, on the
other hand, we specify a force to apply. In other words, controlled variables
are different between the horizontal and vertical directions. The controlled
variable of the former is x and y coordinates, i.e. a position, while the latter
controlled variable is a force in the z direction. Namely, two types of control
loops are combined in the hand control system, as illustrated.
Robot drawing a line with a pencil on a sheet of paper
Position and force control loops
The above example is one of the simplest tasks illustrating the need for
integrating different control loops in such a way that the control mode is
consistent with the geometric constraint imposed on the robot system. As the
geometric constraint becomes more complex and the task objective is more
involved, an intuitive method may not suffice. In the following we will obtain
a general principle that will help us find proper control modes consistent with
both geometric constraints and task objectives. Let us consider the following
six-dimensional task to derive a basic principle behind our heuristics and
empiricism.
Shown below is a task to pull up a peg from a hole. We assume that the peg
can move in the vertical direction without friction when sliding in the hole.
We also assume that the task process is quasi-static in that any inertial force
is negligibly small. A coordinate system O-xyz, referred to as C-frame, is
attached to the task space, as shown in the Image. The problem is to find a
proper control mode for each of the axes: three translational and three
rotational axes.
Pulling up a peg from a hole
The key question is how to assign a control mode, position control or force
control, to each of the axes in the C-frame in such a way that the control
action may not conflict with the geometric constraints and physics. M. Mason
addressed this issue in his seminal work on hybrid position/force control. He
called conditions dictated by physics Natural Constraints, and conditions
determined by task goals and objectives Artificial Constraints. Table 9.1.1
summarizes these conditions.
It is clear that the peg cannot be moved in the x and y directions due to the geometric constraint. Therefore, the velocities in these directions must be zero: $v_x = 0$, $v_y = 0$. Likewise, the peg cannot be rotated about the x and y axes. Therefore, the angular velocities are zero: $\omega_x = 0$, $\omega_y = 0$. These conditions constitute the natural constraints in the kinematic domain. The remaining directions are the linear and angular z axes. Velocities along these two directions can be assigned arbitrarily and may be controlled with position control mode. The reference inputs to these position control loops must be determined such that the task objectives may be satisfied. Since the task is to pull up the peg, an upward linear velocity must be given: $v_z = V > 0$. The orientation of the peg about the z-axis, on the other hand, does not have to be changed. Therefore, the angular velocity remains zero: $\omega_z = 0$. These reference inputs constitute the artificial constraints in the kinematic domain.
Natural and artificial constraints of the peg-in-the-hole problem
In the statics domain, forces and torques are specified in such a way that the
quasi-static condition is satisfied. This means that the peg motion must not be
accelerated with any unbalanced force, i.e. non-zero inertial force. Since we
have assumed that the process is frictionless, no resistive force acts on the
peg in the direction that is not constrained by geometry. Therefore, the linear
force in the z direction must be zero: $f_z = 0$. The rotation about the z axis, too, is not constrained. Therefore, the torque about the z axis must be zero: $\tau_z = 0$. These conditions are dictated by physics, and are called the natural
constraints in the statics domain. The remaining directions are geometrically
constrained. In these directions, forces and torques can be assigned
arbitrarily, and may be controlled with force control mode. The reference
inputs to these control loops must be determined so as to meet the task
objectives. In this task, it is not required to push the peg against the wall of the hole, nor to twist it. Therefore, the reference inputs are set to zero: $f_x = 0$, $f_y = 0$, $\tau_x = 0$, $\tau_y = 0$. These constitute the artificial constraints in the statics
domain.
• Each C-frame axis must have only one control mode, either position or force,
• The process is quasi-static and frictionless, and
• The robot motion must conform to geometric constraints.
In general, the axes of a C-frame are not necessarily aligned with the directions of the separate control modes. Nevertheless, the orthogonality properties hold in
general. We prove this next.
Let $V^6$ be a six-dimensional vector space, and let $V_a$ be an admissible motion space, that is, the entire collection of admissible motions conforming to the geometric constraints involved in a given task. Let $V_c$ be a constraint space that is the orthogonal complement to the admissible motion space:
$V_c = V_a^{\perp}$
Let $F \in V^6$ be a six-dimensional endpoint force acting on the end-effecter, and let $\delta p$ be an infinitesimal displacement of the end-effecter. The work done on the end-effecter is given by
$\delta Work = F^T\,\delta p$
Decomposing each vector into the one in the admissible motion space and the one in the constraint space,
$F = F_a + F_c, \qquad \delta p = \delta p_a + \delta p_c$
and substituting them into eq. (2) yields
$\delta Work = F_a^T\,\delta p_a + F_c^T\,\delta p_c = F_a^T\,\delta p_a$
since the cross terms vanish by orthogonality and $\delta p_c = 0$ under the constraints. For a quasi-static, frictionless process this work must be zero for any admissible $\delta p_a$. This implies that any force and moment in the admissible motion space must be zero, i.e. the natural static constraints:
$F_a = 0$
The reader will appreciate Mason’s Principle when considering the following
exercise problem, in which the partition between admissible motion space
and constraint space cannot be described by a simple division of C-frame
axes. Rather the admissible motion space lies along an axis where a
translational axis and a rotational axis are coupled.
Architecture of Hybrid Position/Force Control System
Based on Mason’s Principle, a hybrid position/force control system can be
constructed in such a way that the robot control system may not have a
conflict with the natural constraints of the task process, while performing the
task towards the task goal. Image 9.1.5 shows the block diagram of a hybrid
position/force control system. The upper half of the diagram is position
control loops, where artificial kinematic constraints are provided as reference
inputs to the system and are compared with the actual position of the end-effecter. The lower half of the diagram contains the force control loops, where artificial
static constraints are provided as reference inputs to the feedback loops and
are compared with the actual force and moment at the end-effecter. Note that
feedback signals are described in an appropriate C-frame attached to the end-effecter.
If the feedback signals are noise-less and the C-frame is perfectly aligned
with the actual task process, the position signal of the feedback loop must lie
in the admissible motion space, and the force being fed back must lie in the
constraint space. However, the feedback signals are in general corrupted with
sensor noise and the C-frame may be misaligned. Therefore, the position
signal may contain some component in the constraint space, and some
fraction of the force signal may be in the admissible motion space. These
components contradict the natural constraints, and therefore
should not be fed back to the individual position and force controls. To filter
out the contradicting components, the feedback errors are projected to their
own subspaces, i.e. the positional error ep mapped to the admissible motion
space Va and the force feedback error ef mapped to the constraint space Vc.
In the block diagram these filters are shown by projection matrices, Pa and Pc
:
When the C-frame axes are aligned with the directions of the individual
position and force control loops, the projection matrices are diagonal,
consisting of only 1s and 0s in the diagonal components. In the case of the peg-in-the-hole problem, they are:
In the case of Example 9.2, where the C-frame axes are not the directions of the
individual position and force control loops, the projection matrices are not
diagonal.
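Assuming the six-dimensional C-frame vector is ordered as [vx, vy, vz, wx, wy, wz], the peg-in-the-hole projections can be sketched as follows:

import numpy as np

Pa = np.diag([0, 0, 1, 0, 0, 1])   # admissible motion space: vz and wz
Pc = np.diag([1, 1, 0, 1, 1, 0])   # constraint space: the complement

e_p = np.array([0.01, -0.02, 0.30, 0.0, 0.01, 0.05])   # noisy position error
e_f = np.array([1.5, -0.8, 0.2, 0.1, 0.0, 0.02])       # noisy force error

print(Pa @ e_p)   # keeps only the z-translation and z-rotation components
print(Pc @ e_f)   # keeps only the geometrically constrained components

Note that Pa + Pc is the identity, so every feedback component is routed to exactly one of the two control loops.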
Block diagram of hybrid position/force control system
These feedback errors, $e_p$ and $e_f$, are in the C-frame; hence they must be converted to the joint space in order to generate control commands to the actuators. Assuming that the positional error vector is small and that the robot is not at a singular configuration, the position feedback error in joint coordinates is given by
$e_p^{q} = J^{-1}\,e_p$
where J is the Jacobian relating the end-effecter velocities in the C-frame to joint velocities. The force feedback error in the joint coordinates, on the other hand, is obtained based on the duality principle:
$e_f^{q} = J^{T}\,e_f$
These two error signals in the joint coordinates are combined after going
through dynamic compensation in the individual joint controls.
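A compact sketch of these two mappings, assuming a square, nonsingular Jacobian:

import numpy as np

def to_joint_space(J, e_p, e_f):
    # Position error maps through the inverse Jacobian (small errors,
    # robot away from singularities); force error maps through the
    # Jacobian transpose, by the duality of velocity and force.
    eq_p = np.linalg.solve(J, e_p)   # J^-1 @ e_p
    eq_f = J.T @ e_f                 # J^T  @ e_f
    return eq_p, eq_f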
Compliance Control
Task strategy
Use of both position and force information is a unique feature in the control
of robots physically interacting with the environment. In hybrid
position/force control, separation was made explicitly between position and
force control loops through projections of feedback signals onto admissible
motion space and constraint space. An alternative to this space separation
architecture is to control a relationship between position and force in the task
space. Compliance Control is a basic control law relating the displacement of
the end-effecter to the force and moment acting on it. Rather than totally
separating the task space into subspaces of either position or force control,
compliance control reacts to the endpoint force such that a given functional
relationship, typically a linear map, is held between the force and the
displacement. Namely, a functional relationship to generate is given by
where C is an m x m Compliance Matrix, and ∆pand F are endpoint
displacement and force represented in an m-dimensional, task coordinate
system. Note that the inverse to the compliance matrix is a stiffness matrix:
if the inverse exists.
The components of the compliance matrix, or the stiffness matrix, are design
parameters to be determined so as to meet task objectives and constraints.
Opening a door, for example, can be performed with the compliance
illustrated in Image 9.2.1. The trajectory of the doorknob is geometrically
constrained to the circle of radius R centered at the door hinge. The robot
hand motion must comply with the constrained doorknob trajectory, although
the trajectory is not exactly known. The robot must not break the doorknob,
although the conceived trajectory is different from the actual trajectory. This
task requirement can be met by assigning a small stiffness, i.e. a high
compliance, to the radial direction perpendicular to the trajectory. As
illustrated in the Image,
such a small spring constant generates only a small restoring force in
response to the discrepancy between the actual doorknob trajectory and the
reference trajectory of the robot hand. Along the direction tangent to the
doorknob trajectory, on the other hand, a large stiffness, or a small
compliance, is assigned. This is to force the doorknob to move along the
trajectory despite friction and other resistive forces. The stiffness matrix is
therefore given by
with reference to the task coordinate system O-xy. Using this stiffness with
which the doorknob is held, the robot can open the door smoothly and
dexterously, although the exact trajectory of the doorknob is not known.
Door opening with compliance control
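A small numerical sketch of this door-opening compliance, with illustrative spring constants (the actual values would be tuned to the task):

import numpy as np

# Task frame O-xy at the doorknob: x radial (toward the hinge), y tangential.
K = np.diag([10.0, 2000.0])      # soft radially, stiff tangentially (N/m)
C = np.linalg.inv(K)             # the corresponding compliance matrix

dp = np.array([0.02, 0.001])     # 2 cm radial error, 1 mm tangential error
print(K @ dp)                    # restoring force [0.2, 2.0] N

The 2-cm radial discrepancy produces only a 0.2-N restoring force, so the doorknob is not fought, while the stiff tangential axis drives the knob along its trajectory.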
Compliance control synthesis
Now that a desired compliance is given, let us consider the method of
generating the desired compliance. There are multiple ways of synthesizing a
compliance control system. The simplest method is to accommodate the
proportional gains of joint feedback controls so that desired restoring forces
are generated in proportion to discrepancies between the actual and reference
joint angles. As shown in Image 9.2.2, a feedback control error is generated
when a disturbance force or torque acts on the joint. At steady state a static balance is achieved, as an actuator torque $\tau_i = k_i e_i$, proportional to the control error $e_i$, cancels out the disturbance torque. The proportionality constant is determined
by the position feedback gain ki, when friction is neglected. Therefore, a
desired stiffness or compliance can be obtained by tuning the position
feedback gain.
Compliance synthesis is trivial for single-joint control systems. For general n-degree-of-freedom robots, however, multiple feedback loops must be
coordinated. We now consider how to generate a desired m x m compliance
or stiffness matrix specified at the endpoint by tuning joint feedback gains.
Single joint position feedback control system
Therefore, the obtained joint feedback gain provides the desired endpoint
stiffness given by eq.(8), or the equivalent compliance.
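For the common case in which the endpoint behavior is realized through joint proportional gains, the standard small-deflection mapping $K_q = J^T K J$ is typically used; the sketch below assumes this mapping as a stand-in for the book's eq. (8):

import numpy as np

def joint_stiffness(J, K_endpoint):
    # Joint-space gain matrix that realizes the desired endpoint
    # stiffness for small deflections: tau = Kq @ dq, Kq = J^T K J.
    return J.T @ K_endpoint @ J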
Two link robots
Braitenberg Vehicles
In 1984 Valentino Braitenberg published a book titled Vehicles: Experiments in Synthetic Psychology. In his book Valentino describes a
number of wondrous vehicles that exhibit interesting behaviors based on the
use of a few electronic neurons.
Similar in concept to Walter’s seminal neural work with his robot tortoises,
Valentino’s vehicle behavior is more straightforward, making it somewhat
easier to follow both theoretically and logically. This also makes it easier to
implement his ideas into real designs for robots.
In this chapter we will build a few Braitenberg type vehicles. At the heart of
Braitenberg vehicles is his description of a basic vehicle, which is a sensor
connected to a motor. Braitenberg goes on to explain the relationship between the sensor and motor. The relationship is essentially the connection between the sensor and motor, and this connection ought to be considered as a neuron. With the connection configured as a neuron, instead of a vehicle we can describe the structure diagram as a small neural network.
At the front end of the network we find a sensor, followed by the neuron and
finally the output motor. The sensor detects the intensity of light and outputs
a proportional signal to the motor. High intensity light produces high rpm’s
(revolutions per minute) from the motor. Low intensity light produces low rpm's.
Consider the sensor portion as modular and interchangeable. Other sensors
can be plugged in and incorporated to detect any number of environmental
variables, for example, heat, pressure, sound, vibration, magnetic fields
(compass), electrical fields, radioactivity, and gases (toxic or otherwise). In
addition, the motor, like the sensor, represents a singular example of an
output module. Other output modules could include a second neuron (or
neural layer), electric circuit, on/off switch, light source, etc.
Basic neuron setup, sensor input, neuron, and motor output.
The neuron’s input is the output of the sensor, and the neuron’s output
activates a motor in relationship to its input. The input/output “relationship”
of the neuron can be made to be one of many different mathematical
functions. The relationship may also be called connection strength or
connection function when you are reading the neural network literature. The
relationship is one of the most important variables we can modify when
programming our robot.
Neural I/O Relationships
When the neuron is stimulated, it generates an output. As stated, there are a
number of mathematical functions that can exist inside the neuron. These
functions act upon the neuron’s input (sensor output) and pass through the
results to the neuron’s output. Let’s examine a few of them
· Positive proportional. As input from the sensor increases, activation (rpm's) of the motor increases in proportion.
· Negative proportional. As input from the sensor increases, activation (rpm's) of the motor decreases in proportion.
· Digital. When input from the sensor exceeds a predetermined (programmed) threshold (which may be positive or negative), the motor is activated.
· Gaussian. As input from the sensor increases, output passes through a gaussian function for motor activation.
Essentially the neuron may incorporate any mathematical function. It would
perform this function on the sensory input to generate an appropriate output. I
have provided an example of only a few of the more common functions
available.
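The four relationships can be sketched as simple transfer functions; the 0-255 sensor and motor ranges below are illustrative assumptions:

import math

def positive_proportional(x):
    return x                                  # more light -> faster motor

def negative_proportional(x):
    return 255 - x                            # more light -> slower motor

def digital(x, threshold=100):
    return 255 if x > threshold else 0        # full on past the threshold

def gaussian(x, center=128.0, width=40.0):
    return 255.0 * math.exp(-((x - center) / width)**2)   # peak at center

print(digital(150), negative_proportional(200))   # 255 55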
Vehicles
Using the basic neural setup, we can construct a few simple vehicles that
exhibit interesting behaviors. Image 9.6 illustrates two vehicles labeled A and
B. Both vehicles use the positive proportional neural setup with a light
intensity sensor.
Graph of positive proportional transfer function. As sensor output
increases, motor output increases.
Graph of negative proportional transfer function. As sensor output
increases, motor output decreases.
Graph of digital transfer function. As sensor output increases, output
remains unchanged until threshold is reached, then output switches full
on.
Vehicle A, if both sensors are evenly illuminated by a light source, will speed up and, if possible, run into the light source. However, if the light source is off to one side, the motor on the side of the light source will run a little faster than the motor on the other side. This will cause the vehicle to veer away from the light source.
Vehicle B, if both sensors are evenly illuminated by a light source, will speed
up and, if possible, run into the light source (same as vehicle A). If the light
source is off to one side, vehicle B will turn toward the light source.
Graph of gaussian function. As sensor output increases, output follows a
gaussian curve.
Wiring of two Braitenberg vehicles labeled A and B.
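A one-step sketch of the two wiring schemes; the sensor readings and the crossed/uncrossed flag are illustrative:

def drive(left_sensor, right_sensor, crossed=False):
    # crossed=False: vehicle A (same-side wiring) -> veers away from light
    # crossed=True:  vehicle B (cross wiring)     -> turns toward light
    if crossed:
        return right_sensor, left_sensor    # (left_motor, right_motor)
    return left_sensor, right_sensor

# Light off to the left, so the left sensor reads higher:
print(drive(200, 50, crossed=False))  # (200, 50): left wheel faster, turns right, away
print(drive(200, 50, crossed=True))   # (50, 200): right wheel faster, turns left, toward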
Building Vehicles
It’s time to put the theory to the test and see if it works. Let’s assemble the
materials needed to build a vehicle. The photovore’s basic operating
procedure is like Walter’s robot. It tracks and follows a light source. The base
of the vehicle is a sheet of aluminum 8 in long by 4 in wide by 1/8 in thick.
We will use two gearbox motors for propulsion and steering and one
multidirectional front wheel.
We will try a new construction method with this robot. Instead of securing
the gearbox motors with machine screws and nuts, we will use 3M’s
industrial-brand double-sided tape. This double-sided tape, once cured, is as
strong as pop rivets. I tried to separate a sample provided by 3M. It consisted
of two flat pieces of metal secured with the tape. Even when I used pliers, it
was impossible. 3M states that the tape requires 24 h to reach full strength.
You may not achieve the full-strength capability of the tape unless you follow
the 3M procedure.
Function of A and B Braitenberg vehicles
The gearbox motor is a 918D type (see Fig. 9.8). The gearbox motor at the
top of the picture has an orange cowl that is covering the gears. Notice the
flat mounting bracket that is perfect for securing to the vehicle base. The
double-sided tape is cut lengthwise to fit the base of the bracket of the gearbox
motor. The exposed side of the tape is immediately secured to the gearbox
motor bracket. Then the motor is positioned on the bottom of the vehicle
base, the protective covering of the tape is removed, and the gearbox motor is
firmly placed onto the bottom of the vehicle base. The second gearbox motor
is secured to the other side in a similar manner.
Back wheels
The shaft diameter of the gearbox motor is a little too small to make a good
friction fit to the rubber wheel. To beef up the diameter, cut a small 1- to 1.5-in length of the 3-mm tubing; see Parts List. Place the tubing over the gearbox
motor shaft, and collapse the tubing onto the shaft, using pliers. There is a
small cutaway on the gearbox motor shaft (see Fig. 9.10). If you can collapse
the tubing into this cutaway, you will create a strong fit between the shaft and
the tubing that will not pull off easily.
The tubing adds to the diameter of the shaft and will make a good friction fit
with the rubber wheels. Simply push the center holes of the wheels onto the
tubing/shaft, and you are finished.
A 918D 100:1 Gearbox motor.
3M double sided tape is used to secure gearbox motor to base of vehicle.
Gearbox motor showing cutaway on output shaft
Hexapod Walker
Legged walkers are a class of robots that imitate the locomotion of animals
and insects, using legs. Legged robots have the potential to traverse rough terrain that is impassable by standard wheeled vehicles. It is with this in mind that roboticists are developing walker robots.
Imitation of Life
Legged walkers may imitate the locomotion style of insects, crabs, and
sometimes humans. Biped walkers are still a little rare, requiring balance and
a good deal more engineering science than multilegged robots. A bipedal
robot walker is discussed in detail in Chap. 13. In this chapter we will build a
six legged walker robot.
Six Legs—Tripod Gait
Using a six legged model, we can demonstrate the famous tripod gait used by
the majority of legged creatures. In the following drawings a dark circle
means the foot is firmly planted on the ground and is supporting the weight
of the creature (or robot). A light circle means the foot is not supporting any
weight and is movable.
Our walker starts at rest, with all six feet on the ground. From the resting position our walker decides to move forward. To step forward, it lifts three of its legs (white circles), leaving its entire weight distributed on the remaining three legs (dark circles). Notice that the feet supporting the weight (dark circles) are in the shape of a tripod. A tripod is a very stable weight-supporting position; our walker is unlikely to fall over. The three feet that are not supporting any weight may be lifted (white circles) and moved without disturbing the stability of the walker. These feet move forward.
Sample biological tripod gait.
The next drawing illustrates where the three lifted legs move. At this point, the walker's weight shifts from the stationary feet to the moved feet (see Fig. 10.1D). Notice that
the creature’s weight is still supported by a tripod position of feet. Now the
other set of legs moves forward and the cycle repeats. This is called a tripod
gait, because a tripod positioning of legs always supports the weight of the
walker.
Three-Servomotor Walker Robot
The robot we will build is shown in Fig. 10.2. This walker robot is a
compromise in design but allows us to build a six legged walker using just
three servomotors. The three servomotor hexapod walker demonstrates a true
tripod gait. It is not identical to the biological gait we just looked at, but close
enough.
This legged hexapod uses three inexpensive HS322 (42 oz-in torque)
servomotors for motion and one PIC 16F84 microcontroller for brains. The
microcontroller stores the program for walking, controls the three
servomotors, and reads the two sensor switches in front. The walking
program contains subroutines for walking forward and backward, turning
right, and turning left. The two switch sensors positioned in the front of the
walker inform the microcontroller of any obstacles in the walker’s path.
Based on the feedback from these switch sensors, the walker will turn or
reverse to avoid obstacles placed in its path.
Function
The tripod gait I programmed into this robot isn’t the only workable gait.
There are other perfectly usable gaits you can develop on your own. Consider
this walking program a working start point. To modify the program, it’s
important to understand both the program and robot leg functions. First let’s
look at the robot.
Hexapod robot.
At the rear of the walker are two servomotors. One is identified as L for the
left side, the other as R for the right side. Each servomotor controls both the
front and back legs on its side. The back leg is attached directly to the horn of
the servomotor. It is capable of swinging the leg forward and backward. The
back leg connects to the front leg through a linkage. The linkage makes the
front leg follow the action of the back leg as it swings forward and back. The
third servomotor controls the two center legs of the walker. This servomotor
rotates the center legs 20° to 30° clockwise (CW) or counterclockwise
(CCW), tilting the robot to one side or the other (left or right). With this
information we can examine how this legged robot will walk.
Moving Forward
We start in the rest position as before; each circle represents a foot, and the dark circles show the weight-bearing feet. Notice that in the rest position, the center legs do not support any weight. These center legs are made to be 1/8 in shorter than the front and back legs.
In position A the center legs are rotated CW by about 25° from center
position. This causes the robot to tilt to the right. The weight distribution is
now on the front and back right legs and the center left leg. This is the
standard tripod position as described earlier. Since there is no weight on the
front and back left legs, they are free to move forward as shown in the B
position
Forward gait for hexapod robot
In the C position the center legs are rotated CCW by about 25° from center
position. This causes the robot to tilt to the left. The weight distribution is
now on the front and back left legs and the center right leg. Since there is no
weight on the front and back right legs, they are free to move forward, as
shown in the D position.
In position E the center legs are rotated back to their center position. The
robot is not in a tilted position so its weight is distributed on the front and
back legs. In the F position, the front and back legs are moved backward
simultaneously, causing the robot to move forward. The walking cycle can
then repeat.
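The A-F sequence can be summarized as a servo-command sketch; the set_servo interface and the angle values are hypothetical stand-ins for the real PIC 16F84 program:

import time

def set_servo(name, angle):
    # Stand-in for real hardware I/O: 'L' and 'R' swing the left and
    # right leg pairs; 'C' tilts the center legs (+ tilts right).
    print(f"servo {name} -> {angle} deg")

def step_forward(pause=0.3):
    set_servo('C', 25);  time.sleep(pause)   # A: tilt right, unload left legs
    set_servo('L', 20);  time.sleep(pause)   # B: swing left legs forward
    set_servo('C', -25); time.sleep(pause)   # C: tilt left, unload right legs
    set_servo('R', 20);  time.sleep(pause)   # D: swing right legs forward
    set_servo('C', 0);   time.sleep(pause)   # E: level, all feet loaded
    set_servo('L', 0);   set_servo('R', 0)   # F: sweep legs back, body moves forward

step_forward()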
Moving Backward
We start in the rest position (see Fig. 10.4), as before. In position A the center
legs are rotated CW by about 25° from center position. The robot tilts to the
right. The weight distribution is now on the front and back right legs and the
center left leg. Since there is no weight on the front and back left legs, they
are free to move backward, as shown in the B position
Backward gait for hexapod robot.
In the C position the center legs are rotated CCW by about 25° from center
position. The robot tilts to the left. Since there is no weight on the front and
back right legs, they are free to move backward, as shown in the D position.
In position E the center legs are rotated back to their center position. The
robot is not in a tilted position, so its weight is distributed on the front and
back legs. In the F position, the front and back legs are moved forward
simultaneously, causing the robot to move backward. The walking cycle can
then repeat.
Turning Left
The leg motion sequence to turn left is shown in Fig. 10.5. In position A the
center legs are rotated CW by about 25° from center position. The robot tilts
to the right. The weight distribution is now on the front and back right legs
and the center left leg. Since there is no weight on the front and back left legs,
they are free to move forward.
In the B position, the center legs are rotated CCW by about 25° from center
position. The robot tilts to the left. Since there is no weight on the front and
back right legs, they are free to move backward, as shown in the C position.
Turning left gait for hexapod robot.
In position D, the center legs are rotated back to their center position. The
robot is not in a tilted position, so its weight is distributed on the front and
back legs. In position E, the left legs move backward while the right legs move forward simultaneously, causing the robot to turn left. It typically
takes three turning cycles to turn the robot 90°.
Turning Right
Turning right follows the same sequence as turning left, with the leg positions
reversed.
Speech Recognition
In the near future, speech will be the method for controlling appliances, toys,
tools, computers, and robotics. There is a huge commercial market waiting
for this technology to mature.
Our speech recognition circuit is a standalone trainable speech recognition
circuit that may be interfaced to control just about anything electrical. The
interface circuit we will build in the second part of this chapter will allow this
speech recognition circuit to control a variety of electrical devices such as
appliances, test instruments, VCRs, TVs, and of course robots. The circuit is
trained (programmed) to recognize words you want it to recognize. The unit
can be trained in any language and even nonlanguages such as grunts,
birdcalls, and whistles.
To be able to control and operate an appliance (computer, VCR, TV, security
system, etc.) or robot by speaking to it makes it easier to work with that
device, while increasing the efficiency and effectiveness. At the most basic
level, speech commands allow the user to perform parallel tasks (i.e., hands
and eyes are busy elsewhere) while continuing to work with the computer,
appliance, instrument, or robot.
The heart of the circuit is the HM2007 speech recognition integrated circuit.
The chip provides the options of recognizing either 40 words each with a
length of 0.96 s or 20 words each with a length of 1.92 s. This speech
recognition circuit has a jumper setting (jumper WD on main board) that
allows the user to choose either the 0.96-s word length (40-word vocabulary) or the 1.92-s word length (20-word vocabulary).
For memory the circuit uses an 8K x 8 static RAM. There is a backup
memory battery for the SRAM on the main board. This battery keeps the
trained words safely stored in the SRAM when the main power is turned off.
The button battery lasts approximately 2 years. Without the battery backup
you would have to retrain the circuit every time the circuit was switched off.
Speech recognition circuit assembled.
HM2007 integrated circuit.
The chip has two operational modes: manual mode and CPU mode. The CPU
mode is implemented when it is necessary for the chip to work as a speech
recognition coprocessor under a host computer. This is an attractive approach
to speech recognition for computers because the job of listening to sound and
recognition of command words doesn’t occupy any of the main computer’s
CPU time. In one type of programming scenario, when the HM2007
recognizes a command, it can signal an interrupt to the host CPU and then
relay the command it recognized. The HM2007 chip can be cascaded to
provide a larger word recognition library.
The SR06 circuit we are building operates in the standalone manual mode. As
a standalone circuit, the speech recognition circuit doesn’t require a host
computer and may be integrated into other devices to add speech control.
Applications
Applications of command and control of appliances and equipment include
these:
· Telephone assistance systems
· Data entry
· Speech controlled toys
· Speech and voice recognition security systems
· Robotics
Software Approach
Most speech recognition systems available today are software programs that run on personal computers. The software requires a compatible
sound card be installed in the computer. Once activated, this software runs
continuously in the background of the computer’s operating system
(Windows, OS/2, etc.) and any other application program.
While this speech software is impressive, it is not economically viable for
manufacturers to add personal computer systems to control a washing
machine or VCR. The speech recognition software steals processing power
from the operating system and adds to the computer’s processing tasks.
Typically there is a noticeable slowdown in the operation and function of the
computer when voice recognition is enabled.
Learning to Listen
We take our ability to listen for granted. For instance, we are capable of
listening to one person speak among several at a party. We subconsciously
filter out the extraneous conversations and sound. This filtering ability is
beyond the capabilities of today’s speech recognition systems.
Speech recognition is not speech understanding. Understanding the meaning
of words is a higher intellectual function. The fact that a computer can
respond to a vocal command does not mean it understands the command
spoken. Voice recognition systems will one day have the ability to
distinguish linguistic nuances and the meaning of words, to “Do what I mean,
not what I say!”
Speaker Dependent and Speaker Independent Recognition
Speech recognition is classified into two categories,
speaker dependent and speaker independent. Speaker-dependent systems are
trained by the individual who will be using the system. These systems are
capable of achieving a high command count and better than 95 percent
accuracy for word recognition. The drawback to this approach is that the
system only responds accurately to the individual who trained the system.
This is the most common approach employed in software for personal
computers.
Speaker-independent systems are trained to respond to a word regardless of
who speaks. Therefore, the system must respond to a large variety of speech
patterns, inflections, and enunciation of the target word. The command word
count is usually lower than that of the speaker-dependent system; however,
high accuracy can still be maintained within processing limits. Industrial
applications more often require speaker-independent voice systems, such as
the AT&T system used in the telephone systems.
Recognition Style
Speech recognition systems have another constraint concerning the style of
speech they can recognize. There are three styles of speech: isolated,
connected, and continuous.
Isolated speech recognition systems can only handle words that are spoken
separately. This is the most common speech recognition system available
today. The user must pause between each word or command spoken. The
speech recognition circuit is set up to identify isolated words of 0.96s length.
A connected speech recognition system is a halfway point between isolated
word and continuous speech recognition. It allows users to speak multiple
words. The HM2007 can be set up to identify words or phrases 1.92 s in
length. This reduces the word recognition vocabulary number to 20.
Continuous speech is the natural conversational speech we are used to in
everyday life. It is extremely difficult for a recognizer to sift through the text
as the words tend to merge together. For instance, “Hi, how are you doing?”
sounds like “Hi, howyadoin.” Continuous speech recognition systems are on
the market and are under continual development.
Speech Recognition Circuit
The speech recognition circuit is available as a kit from Images SI Inc. You
can purchase the main components, HM2007, SRAM, and printed circuit
boards separately if you like and build from scratch. The kit takes a modular
approach and uses three separate printed circuit (PC) boards. The three PC
boards are the main circuit board containing the speech recognition circuit,
digital display board, and keypad. The keypad and digital display are
removable from the main circuit board. They are needed to communicate
with and program the main speech recognition circuit. After the programming
is accomplished, the digital display and keyboard can be removed, and the
main circuit embedded into another circuit to add speech control.
Circuit construction
The schematic is shown in the Image. You can hardwire this circuit to a breadboard if you like. I would recommend purchasing the three PC boards that are available for this project; see Parts List. When you use the PC boards, the components are
mounted on the top silkscreen side of the board. Begin construction by
soldering the IC sockets onto the PC boards. Next mount and solder all the
resistors. Now mount and solder the 3.57-MHz crystal and the red LED. The long
lead of the LED is positive. Next solder the capacitors and 7805 voltage
regulators. Solder the seven-position headers on the keypad to the main circuit board. Next solder the 10-position headers on the display board and
main circuit board.
Three modular circuit boards.
Keypad
The keypad is made up of 12 normally open (N.O.) pushbutton switches.
To train
To train the circuit, first attach the keypad and digital display to the main
circuit board. Next select your word length. Place a jumper on the two-pin WD header on the main circuit board to select a 20-word vocabulary, each with a 2-s word length. Leave the jumper off to select a 40-word vocabulary, each with a 1-s word length. Plug in the headset microphone. When power is
applied, the HM2007 checks the static RAM, outputs “00” on the digital
display, and lights the red LED (READY). The circuit is in the ready mode.
In the ready mode the circuit is listening for a verbal command or waiting to
be trained.
Schematic of speech recognition circuit
Keypad wiring
Modular components put together for training.
To train the circuit, begin by pressing the word number you want to train on
the keypad. In this exercise I am assuming you chose the 20-word
vocabulary. In this mode the circuit can be trained to recognize up to 20
words. Use any numbers between 1 and 20. For example, press the number 1
to train word number 1. When you press the number(s) on the keypad, the red
LED will turn off. The number pressed on the keypad is shown on the digital
display. Next press the # key for train. When the # key is pressed, it signals
the chip to listen for a training word, and the red LED turns back on. Now
speak the word you want the circuit to recognize into the headphone
microphone clearly. The LED should blink off momentarily; this is a signal
that the word has been accepted.
Continue training new words in the circuit, using the procedure outlined
above. Press the 2 key, then the # key to train the second word, and so on.
The circuit will accept up to either 20 or 40 words, depending on the lengths
of the words. You do not have to enter 20 words into memory to use the
circuit. If you want, you can use as few word spaces as you require. The
procedure for training 40 words is identical, except that you can choose word
numbers between 1 and 40.
Testing Recognition
The circuit is continually listening. Repeat a trained word into the
microphone. The number of the word should be displayed on the digital
display. For instance, if the word directory was trained as word number 5,
then saying the word directory into the microphone will cause the number 5
to be displayed.
Error codes
The chip provides the following error codes: 55 (word too long), 66 (word too short), and 77 (no match).
Clearing the trained word memory
To erase all the words in the SRAM memory (training), press 99 on the
keypad and then press the * key. The display will scroll through the numbers
1 through 20 (or 1 through 40 if in the 0.96 s word-length mode) quickly, clearing
out the memory.
To erase a single word space, press the number of the word you want to clear
and then press the * key.
Independent Recognition System
In addition to speech commands, this circuit allows you to experiment with
other facets of speech recognition technology. For instance, you can
experiment with speaker independent systems. This system is inherently
speaker dependent, meaning that the voice that trained the circuit also uses it.
To experiment with speaker independent recognition (multiuser), try the
following technique. Configure the WD jumper on the main circuit board for the 40-word vocabulary with a 0.96 s word length. Now we will use four word
spaces for each command word. We will arrange the words so that the
command words will be recognized by just decoding the least significant
digit (number) on the digital display.
This is accomplished by allocating the word spaces 01, 11, 21, and 31 to the
first target or command word. When the circuit is in recognition mode, we
only decode the least significant digit number, in this case X1 (where X is
any number from 0 to 3) to recognize the target word.
We do this for the remaining word spaces. For instance, the second target
word will use word spaces 02, 12, 22, and 32. We continue in this manner
until all the words are programmed.
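In code, the decoding step is just a modulo-10 operation on the displayed word number. The following minimal Python sketch illustrates the idea; the error codes are the ones noted earlier, and the display value is assumed to have been read from the circuit by other means:

ERROR_CODES = {55, 66, 77}        # word too long, word too short, no match

def decode_command(display_value):
    """Map a recognized word number to a speaker-independent command."""
    if display_value in ERROR_CODES:
        return None               # unrecognized utterance: no command
    return display_value % 10     # word spaces 01, 11, 21, 31 all yield 1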
If possible, use a different person to speak the word for each of the four
word spaces allocated to it. This will enable the system to recognize different voices, inflections, and enunciations of the
target word. The more system resources that are allocated for independent
recognition, the more robust the circuit will become.
There are certain caveats to be aware of. First you are trading off word
vocabulary number for speaker independence. The effective vocabulary drops
from 40 words to 10 words.
The speech interface control circuit shown later may be used in this speaker
independent experimental capacity.
Voice Security System
The HM2007 wasn’t designed for use in a voice security system. But this
doesn’t prevent you from experimenting with it for that purpose. You may
want to use three or four keywords that must be spoken and recognized in
sequence in order to activate a circuit that opens a lock or allows entry.
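One way to frame such an experiment is as a small state machine that only unlocks after the keywords are recognized in order. The Python sketch below is a minimal illustration; the pass sequence and the recognize_word() helper are hypothetical stand-ins for your trained word numbers and for reading the recognizer's output:

PASS_SEQUENCE = [3, 7, 12]        # hypothetical trained keyword numbers

def listen_for_entry(recognize_word):
    """Return True once the keywords are heard in the required order."""
    progress = 0
    while progress < len(PASS_SEQUENCE):
        word = recognize_word()   # blocks until a word is detected
        # advance on the expected keyword; any other word resets the sequence
        progress = progress + 1 if word == PASS_SEQUENCE[progress] else 0
    return True                   # sequence matched: activate the lock circuit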
How the circuit works
Before we can get into the nuts and bolts of how the interface circuit
functions, we must look at the binary information output by the speech
recognition circuit. The output of the speech recognition circuit consists of
two 4-bit binary-coded decimal (BCD) numbers. This binary (BCD)
information is shown on the speech circuit’s two-digit digital display.
Whenever a word is detected, the circuit uses the digital display to output the
word number it has recognized, or else it outputs its unrecognized/error code.
If the word detected is not recognized, the circuit will display one of the
error codes noted above: 55, 66 or 77.
Our interface design incorporates a PIC microcontroller. The preprogrammed
microcontroller’s (16F84) first job is to determine whether a word has been spoken.
To do this, we use an LM339 comparator. A reference voltage for the
comparator is generated using a voltage divider made up of resistors R4 and R5.
Speech recognition interface (active high outputs) SRI03.
The reference voltage is placed on pin 5 of the
comparator. Pin 4 of the comparator is connected to the LED lead on the
speech recognition circuit. Whenever a word is detected, the LED blinks off
momentarily. The output of the comparator (pin 2) is connected to pin 10
(RB4) of the 16F84 microcontroller. The output of the comparator (pin 2) is
usually high (+5 V). When a word is detected, the output (pin 2) drops to
ground momentarily. The microcontroller monitors this line to determine
when a word has been detected.
Once a word has been detected, it is necessary for the interface to read the
BCD output from the speech recognition circuit. By using the high and low
digit BCD nibbles, it’s possible to distinguish trained target words. To do so,
the interface must distinguish the error codes 55, 66, and 77 from trained
words numbered 5, 6, and 7. To accomplish this, the interface circuit uses
four NAND gates of the 4011 integrated circuit. The NAND gates are
connected to the high digit nibble. If the high digit BCD nibble has the
equivalent word numbers of 5, 6, or 7, the output from the four NAND gates
is low. The output from the four NAND gates is connected to pin 11 (RB5) of
the 16F84. The 16F84 reads this pin to determine if the high-digit nibble is a
5, 6, or 7 (0 V or ground). If these numbers are not displayed, the output of
the NAND gates is high (+5 V).
So far our circuit can tell when a word has been detected and if the resulting
word is an error code. If the output of the speech recognition circuit is an
error code, nothing else happens; the microcontroller loops back to the
beginning of the program, waiting for another word detection. On the other
hand, if a word is detected and it is not an error code, the microcontroller
passes the low-digit number through to the 74HCT154 (4- to 16-line decoder) IC.
The 74HCT154 decoder reads the binary number passed to it and brings the
corresponding pin equivalent to that number low.
PIC 16F84 microcontroller program
The PIC 16F84 used in both interface circuits runs a short PicBasic
program that implements the detection and decoding logic described above.
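In outline, the firmware simply polls for a detection, filters out error codes and forwards the low digit. The Python sketch below is a rough rendering of that logic only, not the PicBasic listing itself; the three I/O helpers are hypothetical stand-ins for PIC port reads and writes:

def interface_loop(read_pin, read_low_nibble, write_decoder):
    """Sketch of the 16F84 logic: wait for a word, ignore error codes."""
    while True:
        if read_pin("RB4") == 0:          # LED blinked off: word detected
            if read_pin("RB5") == 0:      # NAND gates low: 55/66/77 error
                continue                  # ignore it and keep listening
            word = read_low_nibble()      # low BCD digit of the word number
            write_decoder(word)           # 74HCT154 pulls that output low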
Active high output
The outputs from the 74HCT154 each pass through a 4049 inverting buffer to
supply a +5-Vdc active-high output signal.
SPDT relay output
The front end of the circuit is identical. The changes are seen in the back end
of the circuit. The active low output signals from the 74HCT154 each
connect to one of the 10 PNP transistors, each of which controls a
corresponding relay. Each relay has a normally open (N.O.) switch and
normally closed (N.C.) switch. The relay switches are rated at 125 V ac at 0.5
A or 24 V dc at 1 A. The relay itself consumes approximately 30 mA of
current when it is turned on.
Robotic Arm
Servomotor Building Blocks for Robotics
The servomotor brackets discussed in this chapter will allow you to create
various servomotor robots and projects.
Servomotors are ideal for powering robots. They are readily available in
many sizes, are inexpensive, provide powerful torque for their size and
weight, and are positional. The output shafts on most hobby servomotors are
guaranteed positional between 0° and 90°. Most servomotors’ output shaft
range extends past 90°, coming close to 180°.
The servomotor bracket components are shown in Fig. 12.1. Each of the
aluminum U brackets that make up the assembly has multiple holes for
connecting a standard Hitec servomotor horn, as well as bottom and top
holes for connecting U brackets and assemblies to one another.
The servomotor horns used on these servomotor brackets are included with
all the compatible Hitec servomotors, such as the HS-322, HS-425, HS-475, and
HS-5645. These brackets may also be used with similar-size Futaba
servomotors, but you may have to purchase the horns separately.
Each servomotor bracket assembly consists of the following components: two
aluminum U brackets, labeled A and B; one binding head post screw; four 6-32 plastic machine screws with nuts; and four sheet metal screws for
mounting a servomotor horn. When assembled with a compatible servomotor
(see Fig. 12.2), the bracket becomes a modular component that may be
attached to other brackets and components. The bracket allows the top and
bottom components to swivel along the axis of the servomotor’s shaft.
By connecting multiple servomotors using the brackets, you can create a
variety of robotic designs. In this chapter we will use the brackets to create a
five-servomotor robotic arm. In Chap. 13 we use these same brackets to
create a bipedal walker robot.
The bottom and top have multiple holes for attaching other brackets or
servomotor horns
Servomotor bracket kit.
Front and side views of servomotor bracket
Basic Servomotor Bracket Assembly
To assemble a servomotor bracket, begin by placing the binding post through
the back hole on part A. Next place the servomotor into the A bracket, and
attach it using 6-32 × 3/8-in-long machine screws and nuts. Notice that the
servomotor’s horn has been removed from the servomotor. To secure the
screws at the bottom two positions of the servomotor, place the screw
through the hole from the inside of the bracket. It helps if you have a small
screwdriver to hold the screw in place. Then the plastic nuts are chased down
on the screws from the outside of the bracket.
The servomotor horn is attached to the side holes on the B bracket.
Servomotor bracket travel
Hitec servomotor horn.
bracket with servomotor horn attached.
To place the servomotor secured in bracket A into its mating B bracket,
slip the end of the binding post through the hole in the mating part. Next
slip the servomotor’s spindle into the horn.
Finished assembly.
Side view of placing servomotor in A bracket.
A bracket with servomotor attached with plastic screws and nuts.
Hitec servomotor horn.
B bracket with servomotor horn attached.
AUTONOMOUS VEHICLE
Application of Sampling-Based Motion
Planning Algorithms in Autonomous
Vehicle Navigation.
As information technology and artificial intelligence develop rapidly, it is
becoming possible to use computers to assist daily driving, even to make the
driving process entirely autonomous. Due to the recent research advances in
robotics and intelligent transportation systems, autonomous vehicles have
attracted considerable attention in the past decades.
An important milestone has been reached in the field of autonomous driving
on the urban road, which can be seen in DARPA Urban Challenge (DUC). As
an important achievement in the progress of autonomous vehicles, the
vehicle navigation systems proposed in the DUC have been widely accepted as
guidance for the design of autonomous vehicle systems. The majority of the
available architectures for autonomous vehicles focus on autonomous driving
on structured roads and in unstructured parking environments, so their
performance in more complicated driving scenarios is not clear. Despite
interesting solutions in the literature, it remains a basic challenge to design an
efficient motion planner for navigation in dynamic and cluttered
environments, where a high number of obstacles, extensive effects of failure
and safety considerations make the problem even harder.
One of the challenging research areas in autonomous vehicles is the
development of an intelligent motion planner that is able to guide the vehicle
in dynamically changing environments. Motion planning is an essential and
fundamental part of an autonomous vehicle; it solves the
problem of finding a set of control values or feasible motion states for the
vehicle to manoeuvre among obstacles from a given initial configuration
towards a final one, taking into account the vehicle’s kinematic and dynamic
systems. Furthermore, the traffic regulations and the available information on
the geometric structures of roads are also included in the motion planning for
autonomous driving.
Motion planning can be considered one of the basic challenges within the
robotics sciences; it has been studied by many researchers over the past
few decades, resulting in different algorithms with various specifications.
Motion planning for an autonomous vehicle is a procedure to find a path from
an initial position to a final state, while avoiding any collision with obstacles.
Even in the simplest form of the motion planning problem, called the piano
mover’s problem, the running time of a complete algorithm grows exponentially
with the degrees of freedom, which indicates that the path-planning problem is
NP-hard. It has also been shown that a complete motion planner needs memory
exponential in the degrees of freedom, which proves that the problem is
PSPACE-complete.
When the environment is not known for the robot, the path planning is called
online, local or sensor-based. During the past few decades, online path
planning has been a challenging but attractive field for many robotic
researchers. Online planning in unknown environments has two main aspects: first, the
motion decisions are based on local information, and second, sensing is
contemporaneous to the decision-making process. In this class of path
planning, the robot acquires the information via its sensors that could be
touch or vision sensors.
The field of sensor-based path planning has been studied by many researchers
resulting in various algorithms with different characteristics. One of the
earliest and simplest online path-planning algorithms is the Bug algorithm
that acquires information from touch sensors and guides the robot to the goal
through the boundaries of obstacles. An extension of the classic Bug
algorithm is Tangent Bug, which incorporates vision sensor information with
the algorithm. A performance comparison between different Bug-like
algorithms can be found in the literature. Despite the interesting performance of
conventional motion planning methods in robotics, their efficiency in the
navigation of autonomous vehicles is considerably low due to the complex and
unstable dynamics of such systems.
Another well-known class of path-planning algorithms is sampling-based
path planning. In this class of algorithms, the only source of information is a
collision detector, and hence, there is no need to characterize all possible
collision situations. Sampling-based methods employ several strategies for
creating samples in free space and for connecting them with collision-free
paths in order to provide a solution to the path-planning problem. Three of the
more popular sampling-based approaches include probabilistic roadmap
(PRM), randomized potential fields (RPFs) and rapidly exploring random
trees (RRTs). The PRM approach finds collision-free samples in the environment
and adds them to a roadmap graph. Afterwards, by using a cost function, the best
samples are chosen in the graph and a simple local path planner is used to
connect them. The RPF approach builds a graph by connecting the local
minima of the potential function defined in the environment. Then, the
planner searches this graph for paths. RRTs are specially proposed to deal
with non-holonomic constraints and high degrees of freedom. This approach
builds a tree by randomly choosing a node in the free space and finding the
nearest node in the tree. Next, the planner expands this nearest node in the
direction of the random node. Several extensions of RRT for solving sensor-based
path-planning problems have been proposed. In one, a new data structure
called the sensor-based random tree (SRT) is proposed, which represents a
roadmap of the visited area. Similar to the RRT, the SRT is a tree of already
visited nodes, each with a local safe region. The tree is incrementally expanded
towards a random direction. Another extension directs the robot in an
RRT-derived direction while avoiding collision. In yet another, two heuristics
are added to the RRT to handle unknown environments: the first checks
whether the goal is reachable from the current position of the robot, and the
second tries to keep the robot from getting close to the obstacles.
Due to the interesting
advantages and properties of sampling-based motion planning algorithms,
they seem to be efficient and powerful tools for autonomous vehicle
navigation. In this chapter, a novel motion planning architecture is proposed
for autonomous vehicle navigation, which employs recent advances in
sampling-based motion planning. The resulting approach is capable of
planning safe and efficient motions in different situations such as following
the course of the road, overtaking the front vehicle and following the front
vehicles while overtaking is not possible. The proposed architecture employs
the optimal strategy proposed in RRT* planner to avoid meandering paths.
The proposed planner also utilizes a unique low-dispersion strategy to reduce
the running time of the planner which is essential in autonomous vehicle
navigation. The rest of the chapter is organized as follows. In Section 2, the
category of sampling-based motion planning algorithms is summarized. The
existing applications of sampling-based planners in autonomous vehicle
navigation are reviewed in Section 3. Then, the proposed architecture is
introduced in Section 4, supported by the simulation studies in Section 5.
Finally, discussion and conclusions are provided in Section 6.
Sampling-based motion planning
Naturally, offline path-planning algorithms need a complete map of the
configuration space which usually increases the cost and runtime of the
search in most problems. Sampling-based motion planning was proposed to
solve navigation queries without trying to construct the map of the entire
configuration space. Instead, they only depend on a procedure that decides
whether a given configuration of the robot is in collision with obstacles or
not. The available sampling-based path-planning algorithms can be
categorized into two main classes, namely, roadmap-based or multi-query
algorithms and tree-based or single-query algorithms. Roadmap-based
algorithms are normally applicable in multi-query scenarios as they form a
graph that later will be used to solve different navigation problems. The
graph is the main structure used for saving the environmental information,
where the nodes represent different configurations in the robot’s
configuration space. An edge exists between any two configurations if the
corresponding geometrical properties satisfy a set of conditions, such as
vicinity, and the autonomous agent can navigate from one point to another
without any collision. A typical algorithm consists of two main stages:
learning or observation stage and search stage. In the first stage, the graph
structure is constructed. Collision-free positions or samples are chosen
randomly and considered as the vertices of the graph. In the search stage,
each vertex is tested for connection to its closest neighbour configurations,
and each successful connection becomes an edge in the roadmap. To
answer a given planning task, the initial and final configurations are added to
the existing graph and a normal search technique is considered for finding the
shortest path. The most important factor in the efficiency of the algorithm is
how effectively the roadmap can learn the free-space connectivity of the
configuration space. In addition, the main bottleneck in the performance of
such algorithms is the construction of the graph, because the search stage
usually performs satisfactorily in terms of speed and runtime. The original form
of PRM planner consists of the following steps. First, a random configuration
is selected from the configuration space and tested to see whether it lies in the free
configuration space. If it does, it will be included in the
roadmap structure as a vertex or a node. Second, a search technique takes
place to find the closest neighbours in the graph to the new node. Finally, a
local planner attempts to connect the closest neighbour and the new
configurations based on criteria posed by the planning query. This connection
will be added to the graph as a new edge if it is collision free. In several
situations, quickly finding a solution for a particular problem is the main
objective. In these situations, single-query algorithms can be implemented. In
these algorithms, the base of the data structure is normally a tree. The basic
idea is to find an initial configuration (the starting configuration) that is the
root of the tree and all incrementally generated configurations will be
connected to other configurations already included in the tree. The initial
tree-based algorithm is the rapidly exploring random tree (RRT) planner.
Starting from the start configuration, the tree continuously grows in the
free configuration space until it gets close enough to the final configuration.
An essential feature of this planner is that configurations with larger Voronoi
regions (i.e. the part of the free configuration space that is closer to that node
than to other members of the tree) have higher chances of being selected in the
process of tree expansion. Therefore, the tree normally grows towards as-yet
unexplored parts of the environment, spreading rapidly through the free
configuration space. The initial version of the RRT consists of the following
steps. First, a random sample is generated in the free search environment.
Then, the planner searches the tree for a particular configuration, which is the
nearest node to this newly generated sample. Next, a new node is created by
moving a predefined distance from the nearest node in the direction of the random
sample, using a local planner or an interpolation technique that suits the robotic system.
And, finally, if the new node is a valid point that falls in free configuration
space, and if the local path between it and the nearest node is collision free,
then, the new node is added to the tree as a new node and an edge is created
between the tree and the new node. This process is repeated until the goal
configuration can be connected to the tree or a maximum number of iterations
are reached.
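These steps translate almost line for line into code. The minimal 2D Python sketch below is one possible rendering; the collision checker is_free, the bounds and the numeric parameters are placeholders the reader would supply:

import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5, max_iter=5000):
    """Minimal 2D RRT: grow a tree from start until it nears goal."""
    nodes, parent = [start], {start: None}
    for _ in range(max_iter):
        rnd = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        near = min(nodes, key=lambda p: math.dist(p, rnd))   # nearest node
        ang = math.atan2(rnd[1] - near[1], rnd[0] - near[0])
        new = (near[0] + step * math.cos(ang),
               near[1] + step * math.sin(ang))               # steer by 'step'
        if is_free(near, new):            # local path is collision free
            nodes.append(new)
            parent[new] = near
            if math.dist(new, goal) <= goal_tol:
                path = [new]              # walk back up to the root
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
    return None                           # no path within the iteration budget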
One of the key issues with sampling-based planners is that they usually produce
undesirably long paths that stray within the free configuration
space. The basic RRT planner has in fact been proven not to find the
optimal solution. Thus, optimal variants of the original sampling-based
algorithms, i.e. PRM* and RRT*, have been proposed and proven to
possess asymptotic optimality, in which the chance of generating the
optimum path approaches unity as the cardinality of the generated structure
approaches infinity. The RRT* planner also has the unique ability
to find the optimal path from the start position to any
configuration within the free space. Image 1 shows the performance of the
PRM* and RRT* algorithms in a 2D indoor environment.
Image 1.
Performances of (a) PRM* and (b) RRT* algorithms in a simple 2D
environment. Start and goal configurations are shown by yellow and green
squares, respectively.
Related work
Over the past two decades, sampling-based motion planning algorithms have
attracted the attention of researchers in the field of autonomous vehicles,
resulting in various approaches with different advantages and drawbacks. In
this section, some of the well-known applications of sampling-based
methods in autonomous vehicle navigation are summarized.
In one of the first reports of such applications, a probabilistic planning
process has been proposed for dynamic systems moving among
static and dynamic obstacles. This planner accounts for the dynamic limitations
of the vehicle’s movement while providing a clean decoupling
between low-level control and motion planning. The
planning method maintains the convergence characteristics of its kinematic
counterparts. The safety of the system is also considered, in the form of finite
running times, by checking the behaviour of the planner when the available
embedded computational resources are moderate and the motion generation
process must take place in real time. This method is capable of dealing with
vehicles whose dynamics are described by ordinary differential equations or
even by other, more complex hybrid representations.
A detailed analysis of the motion planning subsystem for the MIT DARPA
Urban Challenge vehicle based on the rapidly exploring random tree (RRT)
algorithm has been provided. The purpose was to present the numerous
extensions made to the standard RRT algorithm that enables the online use of
RRT on robotic vehicles with complex, unstable dynamics and significant
drift, while preserving safety in the face of uncertainty and limited sensing.
A real-time motion planning algorithm has been introduced based on the
rapidly exploring random tree (RRT) approach for autonomous vehicle
operating in an urban environment. For several motivations, such as the need
to generate dynamically feasible plans in real time, safety requirements and
the constraints dictated by the uncertain operating (urban) environment,
several extensions to the standard RRT planner have been considered.
An RRT-based path planner has been
implemented for the autonomous vehicle parking problem, which treats all
situations in a unified manner. As the RRT method sometimes generates
complicated paths, a smoothing sub-process has also been implemented
to smooth the generated paths.
The use of rapidly exploring random trees (RRT) for the planning of multiple
vehicles in traffic scenarios has been proposed. According to this method, the
controller for each car uses RRT to produce a navigation plan. Spline curves
are used for smoothing the route resulting from the RRT, which includes non-holonomic constraints. Priority is taken into consideration as a coordination
method where a vehicle with higher priority attempts to avoid all lower
priority vehicles. This algorithm attempts to find the maximum possible
velocity at which the vehicle can travel and the corresponding path.
Time-efficient manoeuvres for an off-road vehicle taking tight turns with
high speed on a loose surface have been studied using the RRT* planner. The
experimental results have shown that the aggressive skidding manoeuvre,
normally known as the trail-braking manoeuvre, naturally arises from the
RRT* planner as the minimum-time path. Along the way, the RRT* planner
has been extended to deal with complicated dynamical systems, such as
systems that are explained by nonlinear differential equations and include
high-dimensional state spaces, which may be of independent interest. The
RRT* has also been exploited as an anytime computation framework for
nonlinear optimization problems.
An efficient motion planning method for on-road driving of autonomous
vehicles has been reported based on the rapidly exploring random tree (RRT)
algorithm. To address the issue, and taking into account the realistic context
of on-road autonomous driving, the authors proposed a fast RRT planner that
utilizes a rule-template set according to the traffic scenes, together with an aggressive
extension strategy for searching the tree. Both modifications result in a faster
and more accurate RRT planner with respect to the final state, compared with the
original RRT algorithm. Meanwhile, a model-based post-process estimation
approach has been taken into account, whereby the resulting paths are further
smoothed and a feasible control sequence for the vehicle is prepared.
Furthermore, in the cases with moving obstacles, a combined method of the
time-efficient RRT algorithm and the configuration-time space has been used
to improve the quality of the resulting path and the re-planning.
A human-RRT (rapidly exploring random tree) collaborative algorithm has
been presented for path planning in urban environments. The well-known
RRT algorithm has been modified for efficient planning in cluttered yet
structured urban environments. To engage the expert human knowledge in
dynamic re-planning of autonomous vehicle, a graphical user interface has
been developed that enables interaction with the automated RRT planner in
real time. The interface can be used to invoke standard planning attributes
such as way areas, space constraints and waypoints. In addition, the human
can draw desired trajectories using the touch interface for the RRT planner to
follow. Based on new information and evidence collected by the human, state-dependent
risks or penalties used to grow paths through an objective function can
also be specified using the interface.
An online motion planner has been proposed based on the drivers’ visual
behaviour-guided rapidly exploring random tree (RRT) approach, suited
to on-road driving of autonomous vehicles. The main contribution of this
work is the implementation of a guidance system for drivers’ visual search
behaviour in the form of the RRT navigation algorithm. RRT usually
performs poorly in planning situations such as on-road driving of
autonomous vehicles because of its erratic trajectories, wasteful sampling
and low-speed exploration strategy. In order to overcome these drawbacks,
the authors proposed an extension of the RRT planner that utilizes a
supervised sampling strategy based on the drivers’ on-road visual search
behaviour and a continuous-curvature smoothing technique based on B-splines.
A sampling-based planning technique for planning manoeuvring paths for
semi-autonomous vehicle has been reported where the autonomous driving
system may be taking over the driver operation. They have used rapidly
exploring random tree star (RRT*) and have proposed a two-stage sampling
strategy and a particular cost function to adjust RRT* to semi-autonomous
driving, where, besides the standard goals for autonomous driving such as
collision avoidance and lane maintenance, the deviations from the estimated
path planned by the driver are accounted for. They have also proposed an
algorithm to remove the redundant waypoints of the path returned by RRT*;
by applying a smoothing technique, their algorithm returns a G2-continuous
path that is suitable for semi-autonomous vehicles.
Despite the great effort that has been put into employing sampling-based methods
for autonomous vehicle navigation, there are still problems with the
performance of the current methods. For instance, there is no proper method
for overtaking, which is one of the most common behaviours in driving.
Overtaking is a complicated and critical task that sometimes cannot be avoided,
and the motion planner should be able to plan overtaking manoeuvres while
considering safety. Furthermore, the running time of the planner is usually too
high, which makes the methods less practical. Finally, the quality of the
generated solutions, i.e. paths, is a critical issue. The quality of the paths
resulting from sampling-based methods is normally low, and finding the optimal
path is usually a challenging task in randomized algorithms.
Proposed approach
In this work, a new sampling-based motion planner is introduced for
autonomous vehicle navigation. The proposed method is based on the
optimality concept of the RRT* algorithm and the low-dispersion concept of
the LD-RRT algorithm. Furthermore, this planner utilizes a novel procedure
that divides the sampling domain based on the updates received from the
vehicle’s sensory system.
ALGORITHM 1.
RRT* Planner
The first component of the proposed planner is the RRT* algorithm that can
be described in Algorithm 1.
Image 2.
The implementation of the boundary sampling in a simple 2D environment
with three obstacles. The new sample (orange circle) will be generated on the
boundary of the union of the forbidden regions after the first rejection.
In the RRT* algorithm, the important improvements can be seen in Algorithm 1,
where an edge is added to the tree only if it connects the latest
generated point through a set of path segments with minimal cost. Then, if
other nodes exist in the vicinity of the current node with better costs, those
nodes take over the role of parent for the current node. These
improvements give the RRT* algorithm the unique capability of
finding the optimal solution. Image 3 shows the optimality behaviour of the
RRT* algorithm in an empty environment.
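The two refinements can be sketched as a pair of helper routines, again in Python; here cost and parent are dictionaries keyed by node, neighbours is the set of tree nodes near the new sample, and is_free is the same placeholder collision checker as before:

import math

def choose_parent(new, neighbours, cost, parent, is_free):
    """Attach 'new' to the neighbour giving the lowest total path cost."""
    options = [q for q in neighbours if is_free(q, new)]
    if not options:
        return False
    best = min(options, key=lambda q: cost[q] + math.dist(q, new))
    parent[new], cost[new] = best, cost[best] + math.dist(best, new)
    return True

def rewire(new, neighbours, cost, parent, is_free):
    """Re-parent a neighbour through 'new' whenever that lowers its cost."""
    for q in neighbours:
        detour = cost[new] + math.dist(new, q)
        if detour < cost[q] and is_free(new, q):
            parent[q], cost[q] = new, detour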
The second component of the proposed method is the low-dispersion
property of the LD-RRT planner. This component utilizes the Poisson-disk
sampling strategy to reduce the number of samples required to capture the
connectivity of the sampling domain. The proposed method performs
similarly to the PRM*/RRT* algorithms, with a basic difference in the sampling
technique. Unlike the original planners, which allow any collision-free
sample to be included in the tree, the proposed method contains an extra
checking procedure that makes sure the samples possess the Poisson-disk
distribution property.
There are various efficient techniques for creating fast Poisson-disk
structures. In this research, the boundary sampling technique is selected for
generating the Poisson-disk samples for its simplicity and implementation
facility. In this method, the existing neighbourhood of the samples reduces to
a set of spheres located at the samples’ positions with radius rs. The first
result of such an arrangement is that the existing neighbourhood can be
represented by a set of spherical ranges on whose boundaries a point can be
placed. Image 2 shows the sampling phase, where the
Poisson disks and the forbidden sampling areas are illustrated by grey and
green circles, respectively. After the first rejection, the boundary of the union
of the forbidden areas is selected and a random sample is generated on it,
as shown by the orange circle. The sampling radius is defined as follows:
Image 3.
The optimal behaviour of RRT* algorithm in a planar empty environment.
ALGORITHM 2.
The proposed single-query algorithm
rs(n) = √(τ · μ(Qfree)/n)
where τ is a scaling coefficient ranging within (0, 1] and μ(Qfree) is the volume
of the free space. The main idea behind this radius is that the volume of the
space should be approximately equal to n·rs² in order to fill an obstacle-free
square space with n disks of radius rs.
Compelling the samples to follow the Poisson-disk distribution usually
decreases the cardinality of the resulting graph. In other words, it is almost
impossible to find n samples at a distance of √(τ · μ(Qfree)/n) from one another
with a purely random sample generator. Upon the generation of the Poisson-disk
samples, the algorithm follows the normal steps of the original PRM*/RRT*.
The pseudo code of the proposed algorithms can be seen in Algorithm 2.
As can be seen in Algorithm 2, the Poisson-disk sampling takes place at line
4 by forcing the generated samples to satisfy the sampling-radius rule. The
rest of both algorithms is the same as the originals. The relation between the
neighbourhood and sampling radii is an important indicator of the
performance of the proposed algorithm. It is essential for the sampling radius
to be smaller than the connection radius. Otherwise, it will not be possible to
connect any two samples and the resulting structure will be a set of separate
configurations. Image 4 shows the relation between these two radii along
with the ratio of rs(n) over r*(n).
Image 4.
Different values for neighbourhood and sampling radii and the corresponding
ratio for μ(Qfree) = 1 and τ = 1. For n ≥ 4, the neighbourhood radius is always
greater than the sampling radius.
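For intuition, the Poisson-disk condition can be enforced by simple rejection (dart throwing), as in the Python sketch below; the planner described here uses the more efficient boundary sampling technique, and an obstacle-free rectangular domain is assumed for brevity:

import math
import random

def poisson_disk_samples(n, x_range, y_range, tau=1.0, max_tries=100000):
    """Keep only random samples that are at least rs(n) apart."""
    mu_free = (x_range[1] - x_range[0]) * (y_range[1] - y_range[0])
    rs = math.sqrt(tau * mu_free / n)     # sampling radius rs(n)
    samples = []
    for _ in range(max_tries):
        p = (random.uniform(*x_range), random.uniform(*y_range))
        if all(math.dist(p, q) >= rs for q in samples):
            samples.append(p)             # far enough from all kept samples
            if len(samples) == n:
                break
    return samples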
According to the definitions of the neighbourhood and sampling radii, the
cardinality of the graph/tree should be such that the sampling radius remains
smaller than the neighbourhood radius.
This requirement ensures that there will be at least one eligible sample within
the neighbourhood region of the current sample. Considering the acceptable
range of τ, i.e. (0, 1], the sampling radius in a 2D configuration space is
smaller than the neighbourhood radius if and only if the number of samples
exceeds four.
Another important property of the proposed planner arises in the RRT*
algorithm, where the steering factor of the original RRT*, i.e. the 'step', is
automatically replaced by the sampling radius. As stated before, the
samples are created randomly on the perimeter of the current Poisson
disks. As a result, the largest distance between any two samples exactly
equals the sampling radius rs(n). The cost of the final optimum solution can
be calculated as the total Euclidean distance between consecutive nodes of
the optimum path:
c* = Σi=1..n*−1 ∥ω*i+1 − ω*i∥
where n* is the number of nodes in the optimal path returned by the
algorithm. Considering the fact that ∥ω*i+1 − ω*i∥ < rs(n), it is possible
to find an upper bound for the cost of the optimal solution: c* < (n* − 1) · rs(n).
This upper bound merely depends on the size of the final graph/tree structure.
On the other hand, reducing the total number of samples n reduces the
number of samples in any solution. Therefore, it can be concluded that using
a Poisson-disk distribution as the sampling domain improves the cost of
the final solution while maintaining the asymptotic optimality property of the
proposed algorithm. Image 5 illustrates the graph construction phase for the
PRM* planner.
Image 5.
The connection strategy in the proposed algorithm.
The next component of the proposed method is a novel procedure that divides
the sampling domain into different sampling segments based on the
information received online from the sensory system of the vehicle. The
proposed approach divides the sampling domain into two regions, namely the
tabu region and the valid region. The sampling procedure takes place only in the
valid region; the tabu region is excluded from sampling.
Image 6 shows different possibilities of tabu and valid segments in a
sampling domain.
Image 6.
The performance of the segmentation procedure. The sampling domain is
divided into different valid or tabu segments.
Three different scenarios have been considered in the simulation studies. In
the first one, there is no vehicle in front of the autonomous car and the
sampling domain is only the current lane. In the second situation, there is a
vehicle in front but overtaking is possible, and the valid sampling region is
expanded to include proper space for the overtaking manoeuvre. Finally, when
overtaking is not possible, the valid sampling domain is restricted to the
available space between the two vehicles.
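The segmentation rule itself is compact. In the Python sketch below, regions are axis-aligned rectangles (x0, x1, y0, y1) oriented along the road, and all of the arguments are hypothetical abstractions of the planner's sensory inputs:

def valid_sampling_region(own_lane, adjacent_lane,
                          front_vehicle_present, overtaking_possible,
                          gap_to_front=0.0, safe_gap=0.0):
    """Return the rectangles in which the planner may draw samples."""
    if not front_vehicle_present:
        return [own_lane]                 # scenario 1: current lane only
    if overtaking_possible:
        return [own_lane, adjacent_lane]  # scenario 2: widen for overtaking
    x0, x1, y0, y1 = own_lane             # scenario 3: follow the leader,
    return [(x0, x0 + gap_to_front - safe_gap, y0, y1)]   # stop short of it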
Simulation studies
According to the properties of the proposed algorithm, three different
situations have been designed for simulation studies. Image 7 depicts an
instance of the solutions in these scenarios. The proposed method is able to
plan difficult motions for the vehicle, such as following or overtaking. As can
be seen in Image 7, when there are no other vehicles, the planner restricts the
sampling domain to the current lane, and other parts of the road are not
included in the sampling procedure. When there is another car in front but
overtaking is not an option, the generated rapidly exploring random tree just
follows the front vehicle at a proper distance. Finally, when overtaking is
possible, the sampling domain is expanded to include suitable space.
Image 7.
Simulation results in different scenarios. The initial and final positions of the
vehicle are shown by yellow and green, respectively.
The average results of the performance of the proposed planner as well as the
performances of RRT and RRT* planners are shown in Table 1.
As can be seen, the proposed planner provides optimal solutions with a
markedly smaller set of samples. The runtime of the planner is also lower
than that of the other planners. Image 8 shows the variation of the performance measures
for 1000 iterations of each planner.
Image 8.
Variation of the results over 1000 iterations for the proposed algorithm
(green), RRT (red) and RRT* (blue) in terms of the number of nodes,
optimality (%) and runtime (s).
Conclusion
In this chapter, a novel sampling-based navigation architecture has been
introduced that is capable of driving autonomous vehicles in crowded
environments. The proposed planner utilizes the optimal behaviour of the
RRT* algorithm combined with the low runtime requirements of low-dispersion sampling-based motion planning. Furthermore, a novel
segmentation procedure is introduced which differentiates between valid and
tabu segments of the sampling domain in different situations.
Simulation results show the robust performance of this planner in different
scenarios such as following the front car and overtaking. This method also
outperforms the classical RRT and RRT* planners in terms of runtime and
the size of the generated tree, while maintaining the same optimality rate as
RRT*.
Robust Accelerating Control for Consistent
Node Dynamics in a Platoon of CAVs.
The platoon driving of connected and automated vehicles (CAVs) has
considerable potential to benefit road traffic, including increased highway
capacity, lower fuel/energy consumption and fewer accidents. The R&D of
CAVs has been accelerated with increasing usage of wireless communication
in road transportation, such as dedicated short range communications
(DSRC). Pioneering studies on how to control a platoon of CAVs date
back to the 1990s; as pointed out by Hedrick et al., the control topics of a
platoon can be divided into two tasks: (1) to implement control of platoon
formation, stabilization and dissolution; and (2) to carry out controls for
throttle/brake actuators of each vehicle. These naturally lead to a hierarchical
control structure, including an upper level controller and a lower level
controller. The upper one is to maintain safe and string-stable operation, whereas
the lower one is to track the desired acceleration by determining
throttle/brake commands.
The upper level control of a platoon of CAVs has been investigated
extensively. An earlier work done by Shladover introduced many known
control topics, among which the most famous is the concept of string
stability. String stability ensures that range errors decrease as they propagate
downstream. Stankovic et al. proposed a decentralized overlapping
control law by using the inclusion principle, which decomposes the original
system into multiple ones by an appropriate input/state expansion. Up to now,
many other upper level control topics have already been explored, including
the influence of spacing policies, information flow topologies, time delay and
data loss of wireless communications, etc.
The lower level controller determines the commands for throttle and/or brake
actuators. The lower level controller, together with vehicle itself, actually
plays the role of node dynamics for upper level control. Many research
efforts have addressed acceleration control in the past decades, but
few emphasize the requirements of platoon-level automation. Most
platoon control relies on one critical assumption: that the node dynamics are
homogeneous and approximately linear. Then, the node dynamics can be
described by simple models, e.g. a double integrator or a third-order model.
This requires the behaviour of acceleration control to be rather accurate and
consistent, which is difficult to achieve. One reason is the salient non-linearities
in powertrain dynamics, both conventional and hybridized, for which any
linearization leads to errors; the other is that uncertainties such as
parametric variations and external disturbances significantly affect the
consistency of the control behaviour.
One of the major issues in acceleration control is how to deal with non-linearities
and uncertainties. The most common way to handle non-linearities is to
linearize the powertrain dynamics, using exact linearization, Taylor
linearization or inverse model compensation. Fritz and Schiehlen used the
exact linearization technique to normalize node dynamics for synthesis of
cruising control. After linearization, a pole placement controller was
employed to control the exactly linearized states. The Taylor expansion
approach has been used by Hunt et al. to approximate the powertrain
dynamics at equilibrium points. The gain-scheduling technique was then used
to overcome the discrepancy caused by linearization. The inverse model
compensation is widely used in engineering practice. This method is
implemented by neglecting the powertrain dynamics. For the uncertainties,
the majority rely on robust control techniques, including sliding mode
control (SMC), H∞ control, adaptive control, fuzzy control, etc. Considering
parametric variations, an adaptive SMC was designed by Swaroop et al. by
adding an on-line estimator for vehicle parameters, such as mass,
aerodynamic drag coefficient and rolling resistance. Higashimata and Adachi,
and Yamamura and Seto, designed Model Matching Controller (MMC)-based
controllers for headway control. This design used an H∞ controller as
feedback and a forward compensator for a faster response. Xu and Ioannou
approximated vehicle dynamics to be a first-order transfer function at
equilibrium points, and then the Lyapunov approach was used to design an
adaptive throttle controller for tracking control of vehicle speed. Keneth et al.
(2008) designed an adaptive proportional-integral (PI) controller for robust
tracking control in the presence of parametric variations. The adaptive law is
designed by using the gradient algorithm. The aforementioned robust
controllers are useful for resisting small errors and disturbances in vehicle
longitudinal dynamics, but might not always be effective against large
uncertainties. Moreover, adaptive mechanisms can only compensate for
slowly varying uncertainties and have difficulty responding to fast-varying
disturbances, e.g. instantaneous wind.
Node dynamic model for control
This chapter proposes a robust acceleration control method for consistent
node dynamics in a platoon of CAVs. This design is able to offer more
consistent and approximately linear node dynamics for upper level control of
platoons even under large uncertainties, including vehicle parametric
variation, varying road slope and strong environmental wind. The controlled
node in the platoon is a passenger car with a 1.6 L gasoline engine, a 4-speed
automatic transmission, two driving and two driven wheels, as well as a
hydraulic braking system. Image 1 presents the powertrain dynamics. Its
inputs are the throttle angle αthr and the braking pressure Pbrk. Its outputs are
the longitudinal acceleration a, vehicle velocity v, as well as other measurable
variables in the powertrain. When driving, the engine torque is amplified by
the automatic transmission and final gear, and then acts on the two front
driving wheels. When braking, the braking torque acts on all four wheels to
dissipate the kinetic energy of the vehicle body.
Image 1.
Vehicle longitudinal dynamics.
VEHICLE LONGITUDINAL DYNAMICS
For the sake of controller design, it is further assumed that (1) the dynamics
of intake manifold and chamber combustion are neglected, and overall engine
dynamics are lumped into a first-order inertial transfer function; (2) the
vehicle runs on dry asphalt roads with high road-tyre friction, so tyre slip
is neglected; (3) the vehicle body is considered to be rigid and
symmetric, without vertical motion, yaw motion and pitching motion; (4) the
hydraulic braking system is simplified to a first-order inertial transfer
function without time delay. Then, the mathematical model of vehicle
longitudinal dynamics is
where ωe is the engine speed, Tes is the static engine torque, τe is the time
constant of engine dynamics, Te is the actual engine torque, MAP(·,·) is a non-linear
tabular function representing the engine torque characteristics, Tp is the
pump torque of the torque converter (TC), Je is the inertia of the flywheel, Tt is the
turbine torque of TC, CTc is the TC capacity coefficient, KTC is the torque
ratio of TC, ig is the gear ratio of transmission, io is the ratio of final gear, ηT
is the mechanical efficiency of driveline, rw is the rolling radius of wheels, M
is the vehicle mass, Td is the driving torque on the wheels, Tb is the braking
torque on the wheels, v is the vehicle speed, Fi is the longitudinal component of vehicle
gravity, Fa is the aerodynamic drag, Ff is the rolling resistance, Kb is the total
braking gain of the four wheels, τb is the time constant of the braking system, CA is
the coefficient of aerodynamic drag, g is the gravitational acceleration, f is the
coefficient of rolling resistance, ϕ is the road slope and vwind is the speed of
environmental wind. The nominal values of vehicle parameters are shown in
Table 1.
INVERSE VEHICLE MODEL
One major challenge of acceleration control is the salient non-linearities,
including engine static non-linearity, torque converter coupling,
discontinuous gear ratio, quadratic aerodynamic drag and the throttle/brake
switching. These non-linearities can be compensated by an inverse vehicle
model. The inverse models of engine and brake are described by Eqs. (2) and
(3), respectively [22, 31]. The design of the inverse model assumes that (i)
engine dynamics, torque converter coupling, etc. are neglected; (ii) the vehicle
runs on a dry, flat road with no load transfer; and (iii) the inverse model uses the
nominal parameters in Table 1.
where ades is the input to the inverse model, which is the command of the
acceleration controller; Tedes, αthrdes, Fbdes and Pbrkdes are the corresponding
intermediate variables or actuator commands. Note that throttle and braking
controls cannot be applied simultaneously. A switching logic with a
hysteresis layer is required to determine which one is used. The switching
line for separation is not simply zero, i.e. ades = 0, because engine
braking and aerodynamic drag are used first, followed by hydraulic
braking if necessary. Therefore, the switching line actually equals the
deceleration when coasting, as shown in Image 2. The hysteresis layer
avoids frequent switching between throttle and brake controls.
Image 2.
Switching between throttle and brake controls.
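The hysteresis logic amounts to a few lines of code. In the Python sketch below, a_coast stands for the coasting deceleration at the current speed, which in practice would come from a lookup table, and the band width h is an assumed tuning constant:

def select_actuator(a_des, a_coast, mode, h=0.1):
    """Switch between throttle and brake around the coasting deceleration."""
    if mode == "throttle" and a_des < a_coast - h:
        return "brake"                    # demand below what coasting gives
    if mode == "brake" and a_des > a_coast + h:
        return "throttle"
    return mode                           # hold mode inside the hysteresis band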
MMS-based acceleration control
The Multi Model Switching (MMS) control is an efficient way to control
plants with large model uncertainties and linearization errors, especially
sudden changes in plant dynamics [27–30]. The overall range of plant
dynamics is covered by a set of models instead of a single one, and then a
scheduling logic switches the most appropriate controller into the control
loop. The speed of adaptation and transient performance can be significantly
improved by the instantaneous switching among candidate controllers [29,
30]. Another benefit of MMS control is its potential to confine the input-output
behaviour to a required small range. Image 3 shows the MMS control
structure for vehicle acceleration tracking, where ades and a are the desired
and actual longitudinal acceleration respectively, αthrdes and Pbrkdes are throttle
angle and braking pressure respectively, which are the control inputs of a
vehicle. It consists of the vehicle itself (V), the inverse model (I), a
supervisor (S) and a controller set (C). The inverse model I is used to
compensate for the non-linearities of the powertrain; I and V together constitute
the plant for MMS control. The combination I + V tends to have large
uncertainty, but this is divided into smaller pieces under the MMS structure. Such a
configuration is able to maintain a more accurate and consistent input–output
behaviour even under a large model mismatch.
Image 3.
MMS control of vehicle acceleration.
MODEL SET TO SEPARATE LARGE UNCERTAINTY
Image 4.
Frequency responses of four linear models. (a) Model P1(s), (b) Model P2(s),
(c) Model P3(s) and (d) Model P4(s).
The model set P = {Pi(s), i = 1, ⋯, N} is defined to have an identical structure
with a multiplicative uncertainty:
Pi(s) = Gi(s)[1 + Δi W(s)]
where Gi(s) are the nominal models listed in Table 2, W(s) is the weighting
function for the uncertainty and Δi is the model uncertainty, satisfying
∥Δi∥δ∞ ≤ 1, where ∥·∥δ∞ is the norm induced by the L2δ norm of signals,
∥x(t)∥2δ = ( ∫0^t e^(−δ(t−τ)) ∥x(τ)∥² dτ )^(1/2)
in which δ > 0 is a forgetting factor and x(t) is a vector of signals.
TABLE 2. Nominal models of node dynamics.
No.    Gi(s)
1      3.333/(8.15s + 3.333)
2      3.333/(4.5s + 3.333)
3      3.333/(0.75s + 3.333)
4      3.333/(0.22s + 3.333)
SYNTHESIS OF THE MMS CONTROLLER
The main idea of this chapter is to use multiple uncertain models to cover
overall plant dynamics, and so the large uncertainty is divided into smaller
ones. Because the range of dynamics covered by each model is reduced using
multiple models, this MMS control can greatly improve both robust stability
and tracking performance of vehicle acceleration control. This MMS
controller includes a scheduling logic S, a set of estimators E and a set of
controllers C. The module E is designed from the model set P to estimate the
signals a and z. Note that z is the disturbance signal arising from the model
uncertainty, and it cannot be measured directly. The module S represents the
scheduling logic. Its task is to calculate and compare the switching index
Ji (i = 1, ⋯, N) of each model, which gives a measure of each model’s
uncertainty relative to the current vehicle dynamics. S chooses the most
appropriate model (the one with the smallest measure), whose index is denoted σ.
The module C contains multiple robust controllers, also designed from P.
The controller whose index equals σ is switched into the loop to control the
acceleration. The signal aref is the desired acceleration.
The scheduling logic is critical to the MMS controller, because it evaluates
errors between current vehicle dynamics and each model in P, and determines
which controller should be chosen. The controller index σ is determined by
σ(t) = arg mini Ji(t).
Intuitively, Ji(t) is designed to measure the uncertainty of model Pi, and so the
estimator set E = {Ei, i = 1, ⋯, N} is designed to measure it indirectly,
where Λ(s) is the common characteristic polynomial of E, and âi and ẑi
are the estimates of a and z using model Pi. The stability of the estimators can
be ensured by properly selecting Λ(s). Subtracting Eq. (4) from Eq. (8) yields
the estimation error of a, from which the switching index Ji(t) is designed.
Since the system gain from ẑσ(t) to eσ(t) is bounded, ẑσ(t) and
eσ(t) can be treated as the input and output of an equivalent uncertainty.
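As a toy illustration of the scheduling idea, each model can carry an exponentially forgotten squared estimation error, with the controller of the current minimizer switched into the loop. The index form and the constants below are assumptions of this sketch, not the exact expressions of the chapter:

import math

def update_switching_index(J, errors, delta=0.5, dt=0.01):
    """One step of J[i] <- exp(-delta*dt)*J[i] + dt*e_i^2; return argmin."""
    decay = math.exp(-delta * dt)         # forgetting factor per time step
    for i, e in enumerate(errors):
        J[i] = decay * J[i] + dt * e * e  # penalize recent estimation error
    return min(range(len(J)), key=J.__getitem__)    # sigma: best model index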
Considering Eq. (8), E is rewritten in state-space form,
where AE, BE1, BE2, CE11, CE12, CE13, CE14, CE21, CE22, CE23 and CE24 are
matrices with proper dimensions. By selecting the weighting function Wp(s) =
(0.1s + 1.15)/s, the required tracking performance becomes
where aref is the reference acceleration, q = Wp(s)·ea and ea = aref − a, which is
expected to converge to zero. Substituting a = âσ − eσ and Eq. (12) into Eq. (11),
we have
where β < 1 is a positive constant, Aδi = Ai + 0.5δI and the symbol “*” represents
the symmetric part. The controller set C then follows, with the matrices in
Eq. (15) calculated from this synthesis.
Simulation results and analyses
To validate the improvements of the MMS controller for acceleration tracking,
two other controllers are designed for comparison, i.e. a sliding mode controller
(SMC) and a single H∞ controller.
DESIGN OF SMC AND H∞ CONTROLLERS
It is known that SMC is highly robust to uncertainties. The controller is designed
based on the nominal model GM(s) = 0.33/(s + 0.33). The sliding surface s is
selected as a function of the acceleration tracking error with coefficient λ > 0.
The reaching law is designed to be ṡ = −ks − η·sgn(s), where k > 0 and η > 0;
the sliding mode control law then follows from the nominal model and this
reaching law.
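For the nominal model written in state form as ȧ = −b·a + b·u with b = 0.33, taking s equal to the tracking error gives the one-line control law sketched below in Python; the gains and the use of the reference derivative are assumptions of this sketch:

def smc_step(a, a_ref, a_ref_dot, b=0.33, k=2.0, eta=0.5):
    """Sliding mode step enforcing s_dot = -k*s - eta*sgn(s) for s = a_ref - a."""
    s = a_ref - a                          # sliding variable (tracking error)
    sgn = (s > 0) - (s < 0)                # sign of s
    return (a_ref_dot + b * a + k * s + eta * sgn) / b   # acceleration command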
H∞ control is another widely used and effective approach for dealing with model
uncertainties. Here, a model matching control structure is applied to balance
robustness against speed of response, using an uncertain model of the vehicle
dynamics for the design of the H∞ controller.
The reference acceleration response dynamics is GM(s) = 1/(s + 1). The
feed-forward controller is designed by the model matching technique, and
the feedback controller by the H∞ control method.
The H∞ controller is numerically solved with the Matlab command mixsyn(),
with the weighting function Wp(s) = (0.1s + 1.15)/s.
SIMULATIONS AND ANALYSES
A naturalistic acceleration profile from real traffic flow is used as the reference
acceleration. This profile comes from driver experiment data, lasts around
50 min in total and is shown in Image 5. The maximum and minimum desired
accelerations are about 1.1 m/s² and −1.8 m/s², respectively, and the vehicle
speed varies in the range of 0–33 m/s. This condition covers a wide range of
vehicle dynamics.
Image 5.
Reference acceleration and speed. (a) Acceleration and (b) Vehicle speed.
Two groups of simulations are conducted: (a) nominal condition; and (b)
uncertain condition. Under the nominal condition, all vehicle parameters are
shown in Table 1 and there is no road slope and wind. Under the uncertain
condition, the disturbed parameters are vehicle mass, road slope and wind.
The maximum value of vehicle mass is used, i.e. M = 1600 kg. The
disturbance of road slope is a sinusoidal signal:
where φmax = 5 deg, and Tslope = 50 sec. The disturbance of wind is a periodic
triangular signal:
where vwmax = 10 m/s, and Twind = 40 sec.
The simulation results are shown in Images 6 and 7; for clarity,
only the responses from 0 to 500 s are plotted as a demonstration. From
Image 6(a) and (b), under the nominal condition, all three controllers
track the reference acceleration accurately. With the uncertainties, it can be
seen from Image 7(a) and (b) that the tracking capability of the SMC and H∞
controllers degrades obviously, while the MMS can still ensure an acceptable
tracking error. Although switching of the controller occurs under both nominal
and uncertain conditions (Image 6(e) and Image 7(e)), the control input of the
MMS behaves continuously and there is no obvious sudden change (Image 6(c)
and Image 7(c)). Another point that must be explained is the spike of
acceleration shown in Image 6(b) and Image 7(b). This is mainly caused by
the impact of the powertrain when the gear shifts. A more appropriately
designed transmission model could improve the gear-shifting quality.
Image 6.
Results under normal condition. (a) Acceleration tracking error, (b)
acceleration, (c) throttle control, (d) gear position and (e) switching signal.
Image 7.
Results under disturbed condition. (a) Acceleration tracking error, (b)
acceleration, (c) throttle control, (d) gear position and (e) switching signal.
A deeper simulation is conducted to analyse the influence of the uncertainty
level on the acceleration tracking ability and the gear shifting behaviour. In
this case, the level of uncertainty is increased step by step and the
relationship between the uncertainty level and the model uncertainties is described
by:
where ε represents the level of uncertainty, with maximum mass 1600 kg,
maximum road slope 10 deg, and maximum wind speed 20 m/s (when ε =
10). ε = 0 implies that there is no model uncertainty. The root mean square
error (RMSE) of acceleration is used to measure the tracking capability.
Image 8 presents the RMSE of the acceleration error and the number of gear
shifts per minute (denoted as Ngear/min). In the nominal condition, the RMSE
of the acceleration error of the three robust controllers is almost the same. As the
uncertainty level increases, the tracking capability of the SMC and H∞ controllers
quickly drops, whereas the MMS still holds acceptable accuracy. Image 8(b)
addresses the concern that the MMS might largely increase the
number of gear shifts because of its switching structure.
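The RMSE used here is the standard root-mean-square error; as a one-function sketch:

import numpy as np

def acceleration_rmse(a_ref, a_meas):
    # Root mean square error between reference and measured acceleration traces
    a_ref, a_meas = np.asarray(a_ref), np.asarray(a_meas)
    return float(np.sqrt(np.mean((a_ref - a_meas) ** 2)))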
Image 8.
Performance under different uncertainty levels.
Conclusions
This chapter proposes a robust acceleration control method for consistent
node dynamics in a platoon of connected and automated vehicles (CAVs). The
design, based on a multiple model switching (MMS) control structure,
is able to offer more consistent and approximately linear node dynamics for
upper-level control design even under large uncertainties, including vehicle
parameter variation, varying road slope and strong environmental wind. The
following remarks conclude the chapter:
1. Homogeneous and linear node dynamics are important for
platoon control. This requires the acceleration tracking performance
to be accurate and consistent, which poses critical challenges
because of the linearization error of the powertrain dynamics
and the large model uncertainties in and around vehicles. The proposed
MMS control structure divides the large uncertainties of the vehicle
longitudinal dynamics into small ones. Accordingly, multiple robust
controllers are designed from the multiple model set, and a
scheduling logic is presented to automatically select the most
appropriate candidate controller into the loop according to the errors
between the current vehicle dynamics and the models.
2. The designed switching index properly measures the model error
of the vehicle longitudinal dynamics so that the right acceleration
controller is selected into the closed loop. The robust stability and
performance of this acceleration tracking control system can be
ensured. Both the simulation and experiment results demonstrate
that this switching control system performs better than controllers
designed by either the H∞ or the sliding mode control approach
under large uncertainties.
The Reliability of Autonomous Vehicles
Under Collisions.
Over the decades, vehicle safety standards have been raised by the
demands of automobile users and organizations such as the Federal Motor
Vehicle Safety Standards and Regulations (FMVSS) [1], the National Highway
Traffic Safety Administration (NHTSA) [2], the European New Car Assessment
Program (Euro-NCAP) [3], and the Society of Automotive Engineers (SAE) [4].
Technologically advanced vehicles, which need more resources, offer
diverse facilities for the occupants. With autonomous systems,
driver intervention is progressively narrowing. Despite the gains in
luxury and reliability, automation sometimes works against the occupants: during
computer-governed trips individuals want to reach their destinations faster, and in
unexpected failure situations automatic systems can cause fatal results.
However, the most common and important topic for vehicles is crashworthiness. To ensure the reliability of the vehicle and the safety of the
occupants, the vehicle structure should be constructed to absorb the
impact effects and shock gradually, rather than create a sudden deceleration, which
also causes injuries and fatalities. To decrease failure risks, the entire vehicle
should be investigated, starting from the basic structure of the
vehicle from the perspectives of statics and strength of materials. Then,
the natural frequencies and the dynamic loading of the system should be
incorporated step by step. For dynamic loading, pre-captured dynamic road data also help in developing the vehicle structure.
During the development period, previously defined features such as the concept
design and the shape of the vehicle should be preserved, so that
alterations affect the design only in minor ways.
Before determining the impact response and the behavior of the vehicle
during a collision, the causes of accidents should be investigated. A serious
share of accidents is caused by the carelessness, negligence, and
recklessness of drivers. The first measure for decreasing the risk of collision is
preventive driving technique. This technique is part of expert driver
education, although it requires time, patience, and talent. Yet without
attention and sufficient reflexes, even this preventive method ends in
frustration. In addition, considering all scenarios, environmental
and road conditions and the faults of the other vehicles, including their drivers,
affect the vehicle as additional factors. Hence, UAV technology becomes the
redeemer of the system, solving the problems completely or decreasing the
errors to a level sufficient for a more reliable drive.
The most direct way to determine the causes is an experimental study,
although it is neither cost-effective nor quick. Among test
methods such as positioning, loading, and calibrating, crash tests are the
most significant. Besides real-time collision tests, the most widely known
experimental approach in crashworthiness research is the sled test [2,
3, 5, 6]. Sled tests are better controlled compared with the
other methods. Even so, the financial burden is beyond the means of most
companies and facilities.
To decrease the expense of crash tests, many companies resolve their
crashworthiness difficulties with computer-aided simulations, carrying out only the
mandatory collision tests. Computerized methods have proven
immensely reliable over roughly the last three decades. To
handle real-time simulations of engineering problems, many
computer-aided analysis packages are available on the market. Some of them
target one exact engineering class of problems, such as structural analysis,
impact analysis or thermal analysis, while the remaining part
of the market offers multi-discipline engineering solution
packages built on the finite element method (FEM); a broader application
range, however, sometimes dilutes the expert approach, and the general
FEM solution methodology [7–9] is not focused on these significant issues.
From our perspective, impact simulation codes
represent the prominent side of vehicle collisions. For the
impact procedures, the explicit dynamics approach based on FEM gets
through these formidable problems satisfactorily [10]. In Image 1, the FEM model
(a) and a crash simulation (b) of a wheelchair and a seating system in a sled
scenario are given, using human mass adaptation to understand the
crashworthiness [11].
Image 1.
(a) Finite element model of a wheelchair crash simulation via sled test, (b)
Finite element simulation result of the model for a defined specific time.
One of the most popular and extensively used software packages for impact problems is
LSDYNA, a product of the Livermore Software Technology Corporation
(LSTC) [12]. This code is used by several software companies as a
fundamental explicit dynamics solver. With the help of validated FEM
results, reliable procedures can be determined to figure out real vehicle
crash scenarios in an inexpensive way, even for UAV technology. Throughout
such improvements, reverse engineering is applied from beginning
to end to mend faults time after time.
Autonomous vehicles: the perspective of collisions
Recently, the importance of unmanned autonomous vehicles for everyday,
industrial and military applications has been rapidly increasing with the
demand of the market. The technology is still too complex for mass production
today, especially considering the automobile industry's need to manufacture in
very large numbers. But when the issue is saving a human
being, everything must be reconsidered, particularly regarding military developments
against terrorism or war. The defense industry focuses on aerial vehicles
to monitor required zones and decrease the threats posed by enemies or
terrorists. An unmanned aerial vehicle (UAV), shown in Image 2 in Ref. [13],
widely known as a drone or an unmanned aircraft system (UAS), and also
referred to by several other names, is an aircraft without a human pilot.
An aircraft without a pilot spares a human life on board; yet unless it is in the right
hands, this sophisticated technology can be perilous to innocents.
Moreover, the more professional a UAS is, the more trained and talented
personnel it requires, particularly for formidable and arduous duties. This can
cause unexpected mistakes during tasks, provoking impacts and
collisions.
Image 2.
Medium Altitude Long Endurance (MALE) UAV of Turkish Aerospace
Industries, Inc.
Image 3.
Google self-driving car.
Technological improvements in numerous dissimilar segments, such as media
and entertainment, are affecting UAV technology too. Recent drones are
easily operated, simple and cheap, and are widely
used by professionals and amateurs alike in various applications. This
ease of use introduces some hazardous possibilities.
Although simple platforms like drones cost far less compared
with complex vehicles, the consequences of their failures can be analogous.
Focusing on the automobile industry, some leading companies like Google
have been paying attention to autonomous cars for a while; see Image 3
[14]. After many years of research and development, the appearance of such
products on the roads is quite an impressive story.
Regrettably, even for the finest technology companies, some undesired
incidents bring damaging results. Google's self-driving car
collided with a public transportation bus in California, USA
(Image 4 in [15]) on February 14, 2016. In spite of highly developed
vehicle technology, collisions remain unavoidable in some cases.
Image 4.
Collided self-driving car of Google.
Whatever is done or planned in developing such autonomous vehicles, they
sometimes encounter inevitable problems that end in unplanned
accidents. Under these circumstances, vehicle
safety and crashworthiness should be treated from a comprehensive
viewpoint.
Causes and prevention methods of collisions
Incidents arising involuntarily from humans, vehicles, and environmental
effects may cause accidents at random. Sometimes the causes are known
and sufficient prevention methods can be created for the safety of the
vehicles, the occupants, or both. Even if the vehicle itself is autonomous,
the opposing or surrounding vehicles may be driven by humans. Thus,
the human presence must always be considered, even when the robustness
of the vehicles is acceptable. Where the causes of collisions are well
understood, faulty mechanical, electronic, or software-based components
can be corrected. However, human-originated mistakes, which differ from
one person to another, are much harder to fix with straightforward
solution approaches. The psychological and physiological state, behavioral
potential, capability, and carefulness of humans alter the outcome of
events. Sometimes the causes are known but cannot be remedied, as with
natural disasters. Otherwise, if the causes of collisions cannot be specified
or estimated, new methodologies must be evaluated to diminish the odds of
failure for both the humans and the vehicles.
A vehicle that is not autonomous can also be investigated for the
safety of pedestrians, occupants, regular vehicles, and autonomous
vehicles. For the human driver, preventive driving technique or expert skill
comes to mind first. The validity of this idea rests on natural aptitude
alongside preventive driving education. Although careful driving is very
important, the reflexes of the human body are a prominent innate factor
throughout the trip. The time gained in milliseconds when the body reacts
to an unexpected incident can sometimes save lives. Reflexes
can be improved with practice, which we may regard as a
learned or educated reflex.
Otherwise, if the reflex response cannot cope with sudden handling
situations, preventive driving education helps avert collisions during or
before accidents. This education primarily covers calm driving, in
contrast with aggressive and fast driving, and refraining from
instantaneous steering movements and sudden
accelerations and decelerations, particularly in tough weather and
environmental conditions. To ensure a safe trip, an adequate space between
the vehicles is necessary to gain enough time in the course of a potential
collision. In addition, scanning all around the vehicle with the
eyes and with computerized systems for every single threat, like a radar
system perceiving risky situations from any source, is a key factor for a
reliable trip.
Afterwards, the driver faces the most critical phase: decision. The worst
scenario is uncertainty, the opposite situation, in the sense that “the
worst decision is better than hesitant behaviour.” Consider the options
for the decision:
● Deceleration,
● Pull over,
● Sudden stop,
● Acceleration,
● Return,
● U-turn,
● Slide,
● Skid,
● Drift,
● Changing lanes,
● Managing two or more of the above, simultaneously or consecutively.
If there is no chance of avoiding the accident, the driver should try to remain
in the safe zone or select the least hazardous path by changing speed and
maneuvering the vehicle, to save lives and minimize the collision effects.
Doing nothing and freezing is not an option; it is just an ambivalent condition
that ends in an undesirable result.
Considering the stress of the situation during collisions, these types of
behaviors and maneuvers are very difficult for a human to apply and need
some expertise; it is not for everyone. Thus, if a computerized autonomous
system is capable of handling all of these situations at the same time, except for some
uncommon incidents, it will be much more robust and reliable for everyone,
even granting that the human mind and body can sometimes produce
exceptional, almost inexplicable solutions.
Vehicle safety standards
When we talk about unmanned autonomous vehicles, the term comprises
ground, aerospace, and even naval vehicles. Clarifying the safety
standards for all of these would be redundantly detailed. Ordinarily,
when crashworthiness and accidents are under discussion, the best-known
portrayal covers automobiles, followed by aircraft and
the other vehicle types.
In determining the vehicle safety standards for automobiles, some major
organizations occupy a decisive position in the industry and the
market. The most important role in framing the ideology of
crash testing lies with the Federal Motor Vehicle Safety Standards
and Regulations (FMVSS) and the National Highway Traffic Safety
Administration (NHTSA) in the USA, one of the leading countries for
vehicle safety. On the NHTSA website [16] of the US Department of
Transportation for the Federal Motor Vehicle Safety Standards and
Regulations, safety assurance is defined under four major topics:
1. Crash avoidance: Standards between 101 and 135 (Part 571),
2. Crashworthiness: Standards between 201 and 224 (Part 571),
3. Post-crash standards: Standards 301, 302, 303, 304 and 500 (Part 571),
4. Other regulations: Standards between 531 and 595.
From our point of view, it is especially important to point out the contents of the
crashworthiness standards:
● Occupant Protection in Interior Impact,
● Head Restraints,
● Impact Protection for the Driver from the Steering Control System,
● Steering Control Rearward Displacement,
● Glazing Materials,
● Door Locks and Door Retention Components,
● Seating Systems,
● Occupant Crash Protection,
● Seat Belt Assemblies,
● Seat Belt Assembly Anchorages,
● Windshield Mounting,
● Child Restraint Systems,
● Side Impact Protection,
● Roof Crush Resistance,
● Bus Emergency Exits and Window Retention and Release,
● Motorcycle Helmets,
● Windshield Zone Intrusion,
● School Bus Rollover Protection,
● School Bus Body Joint Strength,
● School Bus Passenger Seating and Crash Protection,
● Rear Impact Guards,
● Rear Impact Protection.
From another perspective on crash testing, the crash network website [17]
presents the headings in different groups: interior testing, dashboard,
frontal impact, side impact, steering wheel, seats, seat belts, rear
impact, rollover, head rests, and roof crush. Considering the developments in
the vehicle industry, including automobiles, aircraft and especially
UAVs, many countries are trying to develop new methodologies against
collisions, comprising hundreds of standards. The major idea of crashworthiness
is to protect the occupants, or all beings, and to eliminate or diminish
the damage caused by collisions.
Crash worthiness and crash tests
Occupant safety and vehicle reliability are the fundamental topics to be
developed against collisions for crashworthiness. Technology has evolved
in many fields, especially in materials science, including composite
materials and highly collapsible beams to absorb impact energy. Even so,
if a safe drive cannot be provided, the outcome can be injury or death for
the driver, the passengers or people outside, while the vehicle suffers
physical damage: self-destruction in a collision with a wall, barrier or
structure; an accident with another vehicle via frontal, rear or side
impact; or perhaps a rollover caused by some other effect. These
categories, which differ from one incident to another, define the
accidents of vehicles even for autonomous systems.
Whatever happens, engineers should design a structure able to
absorb some of the energy by collapsing or deforming to an adequate
level to protect the human beings. If the structure is too rigid,
amplification occurs and the accelerations are transferred
to the occupants, making the situation worse. Therefore, for both
safety and reliability, an optimum stiffness is the better way to protect lives
against accidents.
Image 5.
Crash test modes.
To determine the crash response, a broad range of tests has been carried out
for many years by validated official agencies for crashworthiness. In
collisions under combined loading conditions, the application of
crash energy management depends on many factors, starting with
contact mechanics and collapse modes, which can be axial and/or
bending and cause many other reactions. According to Huang [5], the
crash tests were chosen after reviewing previous work around the
world, accident statistics, and experience with the Ford Tempo air bag fleet
based on real-world crash investigations. The tests consisted of car-to-car
tests versus barrier and fixed pole tests. Crash data were accumulated from
the 22 vehicles; the chosen tests represent a vast range of accidents at or
near the expected threshold of air bag deployment. Twenty-two vehicles in
16 tests, as shown in Image 5, were used to gather the crash pulse data. For
every test, the selected impact speed is a key factor in checking whether
the air bag sensor will activate during the incident. Under the mentioned
circumstances, the types of impacts can be defined as [5]:
1. A perpendicular (90°) barrier,
2. A low- and high-speed rigid pole,
3. A front-to-front overlap,
4. An oblique front-to-front (three types),
5. A vehicle front to a moving deformable barrier (MDB), perpendicular,
6. A vehicle front-to-side,
7. A front-to-rear,
8. A bumper over-ride (or truck under-ride).
The tests can be classified into three categories: component tests, sled tests, and
full-scale barrier impacts. Because of the complexity and the many distinct test
variables in full-scale tests, a reduction of test repeatability is to be
expected. The impact test depicted in Image 6 represents
a full-scale vehicle-to-vehicle impact test by Toyota Motor Company during
the development of a new safety system for their automobiles [18]. Component
tests define the dynamic and/or quasi-static response of an isolated component
and are used to detect its crush mode and energy-absorbing ability. Component
tests also give designers the capability to develop prototype substructures and
mathematical models. Moreover, full-scale tests of the vehicle rear structure are
conducted with either a deformable barrier or a bullet car to establish the integrity of
the fuel tank. To evaluate roof strength according to FMVSS 216, a quasi-static
load is applied on the “greenhouse,” verifying that the roof
deformation stays below a particular level for the applied load [6].
Image 6.
Vehicle-to-vehicle impact test of Toyota Motor Company.
For the sled test, engineers use a representation of a passenger compartment
(a buck) with all or some of the interior parts, such as the seat, front panel, steering
system, seat belts, and air bags. Anthropomorphic Test Devices (ATDs), or
“dummies,” or cadaver subjects are seated in the buck and used to
simulate an occupant subjected to dynamic loads similar to a vehicle
deceleration-time pulse, in order to determine the response in a frontal or side
impact. The sled test is significant for appraising the restraint system and is
recorded by high-speed cameras and photography to capture
the dummy kinematics. It should also be noted that the performance of the
ATD is determined at a higher impact speed of 35 mph in the NCAP test.
Throughout the NCAP test, the restraint system is composed of a three-point
lap/shoulder belt system plus an additional restraint air bag. Looking at the
details of the impact tests, numerous sensors, such as
accelerometers and load cells, located in the dummy and on the restraints,
record the forces and moments, helping to specify the impact reactions.
Event data recorders play a very important role in understanding the behavior of
the system during accidents. However, when the recorded data are
analyzed, digital filtering and noise (unwanted data) reduction affect the
results, so they should be handled cautiously.
On highways and freeways, barriers keep vehicles from leaving the road
and protect against crossings. In the barrier impact test, a guided vehicle
is channelled into a barrier with a
predetermined initial velocity and angle. The purpose of the test is to
verify the structural integrity of the vehicle and adherence to mandatory
government regulations. The fully equipped vehicle collides with the barrier at 0°,
+30°, and −30°, respectively, from an initial velocity of 30 mph.
The frontal offset impact test is another method, with 40–50% overlap;
the impact target can be rigid or deformable.
For the full-scale side impact test, a deformable barrier moves towards the
left or right side of the vehicle with an initial speed at a certain angle
(FMVSS 214). For this test in particular, side impact dummies (“SID” for the
US and “EUROSID-1” for Europe), which have different shapes and
properties, are used in the driver and outboard rear seat positions.
As far as the human body is concerned, ATDs are widely used and capable
enough to demonstrate human responses in a fundamental
way, including in particular the spine, head, brain, organs, knee injuries,
and whiplash.
Explicit dynamics approach in FEM for the impact scenarios
The finite element method has been used by scientists and engineers
for decades to advance systems in different engineering applications and for
validation, while reducing experimental investment and test expenditure.
This widely known method has developed through the contributions of
researchers all around the world. In formulating an FEM problem, the
problem domain is broken into a collection of 1D (one-dimensional), 2D and 3D
shapes to form the discretization. In the simplest form, {F} = [K]{D},
where F, K, and D are the nodal force vector, the global stiffness matrix and the
nodal displacement vector, respectively. The
displacements in this equation are the unknowns to be found, which
lead us to the strains and stresses, etc. To solve the equations,
the direct stiffness method, variational techniques, or weighted
residual approaches are generally used. This fundamental technique is now
also capable of handling recently popular issues such as fracture mechanics
with the extended finite element method, cohesive modeling, contact
algorithms, smoothed-particle hydrodynamics (SPH) and the Arbitrary
Lagrangian-Eulerian (ALE) method. Over the last few decades, the
need for nonlinear mechanics and engineering plasticity, for delicate
and light designs that must remain robust in tough conditions, has driven
the durability tests, including impacts and crashes. The explicit dynamics
approach in FEM emerged from this idea to untangle the difficulties of impacts,
drop tests, explosions, and collision scenarios.
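The relation {F} = [K]{D} is easy to illustrate on a toy problem. The sketch below assembles the global stiffness matrix of a hypothetical 1D bar discretized into two-node axial elements, applies a fixed-end boundary condition, and solves for the nodal displacements; all property values are illustrative assumptions:

import numpy as np

# Hypothetical 1D bar: 3 two-node axial elements, each with stiffness k_e = E*A/L
E, A, L = 210e9, 1e-4, 0.1           # modulus [Pa], cross-section [m^2], element length [m]
n_elem = 3
k_e = E * A / L
n_nodes = n_elem + 1

K = np.zeros((n_nodes, n_nodes))     # global stiffness matrix [K]
for e in range(n_elem):              # direct stiffness assembly, element by element
    K[e:e + 2, e:e + 2] += k_e * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.zeros(n_nodes)
F[-1] = 1000.0                       # 1 kN axial load {F} at the free end

D = np.zeros(n_nodes)                # nodal displacements {D}; node 0 is fixed
D[1:] = np.linalg.solve(K[1:, 1:], F[1:])
print(D)                             # tip displacement ~ 1.43e-5 m for these values

From the displacements, element strains and stresses then follow as (D[e+1] − D[e])/L and E times that strain, as the text indicates.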
Examining vehicle technology from the perspective of crashworthiness and safety
during collisions, from crash tests to explicit dynamics FEM simulations,
is a trustworthy way to follow. A look at the literature reveals
valuable studies on vehicle crashworthiness and safety [5, 6, 19],
while several books [6, 10, 19] explain the details of
explicit dynamics, one of the most challenging nonlinear problems in
structural mechanics.
The implicit FE solution technique delivers accurate results as long as the
stress levels in the beam do not exceed the yield stress, thus remaining
in the linear (elastic) region of the material curve. Because the implicit solver
usually obtains a result in a single step according to the slope of the
material curve, represented by Young's modulus, an inaccuracy arises in the
plastic region of the material, where the slope changes with the plastic
characteristics. This means that the implicit solver is very
powerful for linear (static) finite element analysis (FEA); there it provides
an unconditionally stable solution methodology.
Unfortunately, the implicit solution approach may have difficulties with
highly nonlinear finite element models. Additionally, inverting
extremely large stiffness matrices with the implicit solver might consume too
much time and memory. Although this method is not the logical choice for highly
nonlinear problems, it can still solve the equations with an
iteration cycle that applies the load in small increments called
“timesteps” [19].
During an accident, the vehicle body faces impact loads that propagate
localized plastic hinges and buckling, so large deformations and rotations
with contact and stacking between many parts occur. One of the
most important ideas in explicit dynamics is the wave effects,
primarily involved in the deformation stage and associated with the higher stress
levels. When the stress level exceeds the yield strength and/or the critical
buckling load, the localized structural deformations develop within
a few wave transits in the frame of the vehicle. Inertia effects
follow this step and dominate the ensuing transient reaction. Compared
with ballistic impact, a vehicle crash can be considered a less dynamic
incident. Closed-form analytical solutions for the impact scenarios in
structural mechanics are formidably difficult for everyone, so
the numerical methods arise as the practical
option to decipher the unknowns.
In the explicit dynamics approach, a set of nonlinear partial differential
equations of motion in the space-time domain is solved numerically, related
to the stress-strain properties of the material and the initial and boundary
conditions. To obtain a set of second-order differential equations in time, the
equations in space are discretized by formulating
the problem in a weak variational form and assuming an acceptable
displacement field. Subsequently, discretization in the time domain
yields the system of equations to be solved. If the selected integration parameters
leave the equations coupled, the technique is defined as implicit, and the
solution is unconditionally stable. If, instead, the solution is to be
explicit, the equations are decoupled by the selection of the
integration parameters, and the scheme is conditionally stable [6].
As Du Bois et al. [6] described, explicit dynamics solves a set of hyperbolic
wave equations in the region of effectiveness of the wave front, and
hence does not require coupling of large numbers of
equations. The unconditionally stable implicit solvers, by contrast, require assembly
of a global stiffness matrix and provide a solution for all coupled equations of
motion, with a timestep about two to three orders of magnitude larger than
the explicit timestep. For impact and vehicle crash simulations in particular,
involving contact, several material models and a
combination of non-conventional elements, the explicit solvers prove more
potent and computationally more productive than the implicit solvers.
“Explicit” means that equilibrium is written at a moment in time at which the
displacements of all spatial points are already known. With the
help of this equilibrium, we can determine the accelerations; moreover, the
use of the central differencing technique establishes the displacements at
the next timestep, and the process is repeated. Finally, only the inversion of the
mass matrix, M, is needed for this procedure.
It should be stated that if the lumped-mass approximation is utilized, the
mass matrix is diagonal, so no matrix
inversion is required at all. Solving the uncoupled equation system results
in a very fast algorithm. For minimum memory demand, the
assembly of a global stiffness matrix is avoided by using an element-by-element
approach for the calculation of the internal forces. The explicit
integration is second-order accurate when applied attentively.
Using the related nodal displacements and/or velocities, the stresses
are calculated separately for each element, so that the effect of the loads on
one part of the element is carried to the opposite part within each
timestep. Accordingly, the stress wave propagation through
the element is demonstrated clearly. The deficiencies of the explicit solution
technique are its conditional stability and its weak treatment of static
problems. Conditional stability of the explicit integration
algorithm means that the integration timestep must be smaller than or equal to an
upper bound known as the Courant condition, Δt ≤ lc/c: the timestep must
not exceed the smallest of all element timesteps, identified by dividing the
element characteristic length by the acoustic wave speed of the material from
which the element is made, as Du Bois et al. [6] mention. They also note that
the numerical timestep of the analysis must be smaller than, or equal to, the
time required for the physical stress wave to cross the element. Put simply,
c for mild steel elements is about 5000 m/s; with a characteristic length of
5 mm, typical of automotive practice, the analysis timestep comes to about
1 μs. Consequently, incidents of short duration, with strong nonlinear
behavior and loading velocities demanding small timesteps for more precise
results, are appropriate for the explicit dynamics solution technique.
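The worked numbers for the Courant condition can be checked directly; the steel density and modulus below are standard handbook values:

import math

E, rho = 210e9, 7850.0     # mild steel: Young's modulus [Pa], density [kg/m^3]
c = math.sqrt(E / rho)     # acoustic wave speed, roughly 5200 m/s
l_c = 0.005                # element characteristic length: 5 mm

dt_max = l_c / c           # Courant condition: dt <= l_c / c
print(f"c = {c:.0f} m/s, dt <= {dt_max:.2e} s")   # about 1e-6 s, i.e. 1 microsecond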
The time integration is defined by integrating the equations of motion:
or in a different representation,
The basic problem is to determine the displacement x_{n+1} at time t_{n+1}. We can
also write [K]{x} = {F_int}. The explicit direct time integration
can be written in conceptual form as:
The equation shows that the solution depends on the nodal displacements,
velocities, and accelerations at state n, quantities which are known.
Therefore, the equation can be solved directly. For the implicit case:
This equation represents a solution that depends on the nodal velocities and
accelerations at state n + 1, quantities which are unknown. This implies that
an iterative procedure is needed to compute the equilibrium at time t_{n+1}.
If we define the time increments:
Then the central difference method is used for the velocity:
and
When Δt_{n+1/2} = t_{n+1} − t_n is used, we get:
With the help of the acceleration, the velocity can be stated as:
With Δt_n = t_{n+1/2} − t_{n−1/2}, the velocity is given below:
From Eq. (2), the acceleration a_{n+1} can be found as:
Remember that the damping force is taken at ẋ_{n+1/2}. The new velocity
is calculated from the old velocity and the acceleration by Eq. (13). From
this result, the displacement is updated by Eq. (10). Consequently, the new
internal and external forces allow the determination of the new acceleration
[Eq. (14)].
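The numbered equations referenced in this passage were lost in reproduction; under the standard central-difference formulation the surrounding text describes, they presumably take the following form (a reconstruction keeping the text's equation numbers, not the book's exact typography):

x_{n+1} = x_n + \Delta t_{n+1/2}\,\dot{x}_{n+1/2}    (10)
\dot{x}_{n+1/2} = \dot{x}_{n-1/2} + \Delta t_n\,\ddot{x}_n    (13)
\ddot{x}_{n+1} = M^{-1}\left(F^{\mathrm{ext}}_{n+1} - F^{\mathrm{int}}_{n+1} - C\,\dot{x}_{n+1/2}\right)    (14)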
Considering LSDYNA, the widely known commercial explicit solver, the
procedure for the solution steps is as follows. At t = 0, the solution loop
(cycle) begins. The displacement is updated by Eq. (10), and the velocity is
likewise updated by Eq. (13). Then, the internal forces (strain rate, strain,
stress, and force magnitude) are computed in a loop over the elements.
Subsequently, the external forces are calculated, leading to the computation of the
accelerations by Eq. (14), which finalizes the initial loop. Subsequent
loops repeat the process again and again, until the
termination criteria are satisfied.
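A minimal single-degree-of-freedom sketch of this loop (a lumped mass on a linear spring with damping; all parameter values are illustrative) mirrors the update order just described:

# Explicit central-difference loop for m*x'' = f_ext - f_int(x) - c_damp*v
m, c_damp, k = 1.0, 0.2, 1.0e4      # mass [kg], damping [N s/m], stiffness [N/m]
f_ext = 10.0                         # constant external force [N]
f_int = lambda x: k * x              # internal force; an element loop in a real FEM code

dt, t, t_end = 1.0e-4, 0.0, 0.5      # dt well below the stability limit 2/sqrt(k/m)
x, v = 0.0, 0.0
a = (f_ext - f_int(x) - c_damp * v) / m   # initial acceleration from the equation of motion

while t < t_end:
    v += dt * a                      # velocity update, cf. Eq. (13)
    x += dt * v                      # displacement update, cf. Eq. (10)
    a = (f_ext - f_int(x) - c_damp * v) / m   # new acceleration, cf. Eq. (14)
    t += dt
print(x)                             # oscillates about the static value f_ext/k = 1e-3 m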
For a robust design, the stress wave propagation, timestep adjustment, critical
element length and, finally, the mesh size and quality are factors that must be
considered carefully to improve the simulation performance.
Given the reliability of the explicit dynamics solution methodology, we
have indicated the differences between the implicit and explicit solvers for
approximating vehicle crash simulations and the safety of the occupants
and the vehicles. Arguably, for collisions of regular or
autonomous vehicles, the accident outcomes will be similar and should be treated
with equal care.
CRASH SIMULATIONS OF THE VEHICLES
The studies of crashworthiness and impact scenarios explained in
the previous section are based on the explicit dynamics solution technique
and result in satisfactory approximations. Some of the commercial codes
for explicit dynamics simulations are embedded inside multidiscipline
FEA programs such as ABAQUS, ANSYS, and the MSC packages, while
others, such as LSDYNA and PAMCRASH, are standalone codes for exactly
these types of applications. As mentioned before, these packages are auxiliary
engineering applications for solving problems with numerical
methods. The most important task is the prediction of failures of the
system or structure of the vehicle and of the problems the occupants may
face. When the predictions of the simulations are validated, it can be said that
the FEM code has done its job properly.
To comprehend the validation of simulations against real crash tests,
numerous topics should be investigated before, during or after the
simulations. These steps are simply named pre-processing (before),
processing or simulation (during), and post-processing (after).
The pre-processing stage deals with the creation of the finite element model,
starting from the computer-aided design (CAD) of the structure or the
system, material description or modelling, element selection, definition of the
boundary conditions (loads, displacements, support
properties, etc.) and initial values such as velocities, adjustment of the contact
algorithms, and definition of the total time and timestep size. For further
refinement, a mass scaling factor and/or hourglass controls can be adjusted.
Given this information, the geometry should be meshed with shapes as smooth as
possible. During mesh generation, element sizes are very prominent and
affect the simulation time through the number of elements. A coarse
mesh gives a weak approximation and a short processing
time, whereas a finer mesh improves the
convergence rate while prolonging the processing time. Thus,
there should be an optimum mesh size for the common structure, with the
significant zones meshed with finer elements to increase the stability
and obtain better results.
Once the simulation has been processed, the finite element model is ready for
post-processing. The most considerable issue for impact is to absorb or
dissipate the shock energy by different methods, such as geometrical change of the
structure, material selection, and application. The second problem is the large
deformation of the structural components. Additionally, changing the
parameters of the simulation model affects the results widely, depending
on the experience of the analyst. Because collisions involve highly plastic
problems and nonlinearity, the analyst needs deep
knowledge of nonlinear finite element analysis for continuum
mechanics [20–22]. It should be stated that the analyst may be faced with
geometrically nonlinear problems or material nonlinearities, which are
important to distinguish.
Image 7.
The vehicle crash test and simulation.
Regarding the details of post-processing, we observe and
evaluate the simulation outputs step by step. While investigating the
steps of the impact scenario, the kinetic energy change, the absorbed energy,
deformations and displacements, strain and stress distributions, and some
other major issues should be appraised attentively, and the results should be
validated against real crash tests. Besides, the effects of the different
components, the deformation characteristics, and the changes of the parameters
must be observed, and the behavioral outcomes of the incident should be
cross-checked with the real situation. The literature is replete with
interesting research on vehicle crashworthiness and
occupant safety, including Anthropomorphic Test Dummy (ATD) tests and
simulations [23–27]. The LSDYNA conferences in particular are known for
presentations of explicit dynamics simulations, and most of the papers
are reachable from the “DynaLook” website [28]. For a better
representation of an explicit dynamics simulation, Image 7 demonstrates a
full vehicle crash scenario [29].
Conclusion
Crashworthiness studies as a whole are remarkable, from elementary
methods to the profound NCAP tests; although the
costs of the experiments are enormous, obtaining the contribution of the
test results remains demanding.
For a robust vehicle, engineers should design a structure able to
absorb some of the energy by collapsing or deforming to an adequate
level to protect the human beings. If the structure is too rigid,
amplification occurs and the accelerations are transferred
to the occupants, making the situation worse. Therefore, for both
safety and reliability, an optimum stiffness is the better way to protect lives
against accidents.
Nevertheless, to diminish the costs of such studies, FEM and explicit
dynamics methods, which proceed under the supervision of
computers and with codes verified by long-term research and application,
are competent to handle the complicated algorithms and yield satisfactory
results. Incidents of short duration, with strong nonlinear
behavior and loading velocities demanding small timesteps for more
precise results, are appropriate for the explicit dynamics solution technique.
Taking into account the results of experiments, analytic solutions and the
computerized methods mentioned, FEM codes are robust
approximations for understanding the behavior of the vehicle and the occupants
under statics, dynamics, and collisions. Consequently, the finite
element method and explicit dynamics solutions have been shown to validate
the experiments and real-life experience in the range between 80 and 100%.
It should be clearly stated that autonomous vehicles decrease the risk
of collisions compared with human-driven vehicles.
ADAS-Assisted Driver Behaviour in Near-Miss
Car-Pedestrian Accidents.
It is commonly acknowledged that in the near future most road
vehicles will travel, on almost the totality of the road network, in an
automated way (autonomous driving). The reason for such a forecast is easily
understood: the influence of the driver on safety, energy efficiency and traffic
fluidity is very high [1]. In fact, about 93% of road accidents originate from
some kind of driving error, such as recognition errors, decision errors and
performance errors [1]. From this point of view, automated driving can
bring dramatic improvements by eliminating the influence of human factors,
thus contributing to reducing serious and fatal road accidents, fuel waste and
traffic congestion. The total elimination of road accidents is not
foreseeable at the moment, but their reduction to very small numbers is
reasonably attainable. In the vision of many researchers, a roadmap to fully
automated vehicles can be defined; for instance, organisations like the Society
of Automotive Engineers (SAE) have defined steps of progressive automation
(SAE Levels, Image 1), ranging from Level 0 (no automation) to
Level 5 (full automation) [2].
Image 1.
SAE levels of progressive automation as defined in SAE International
Standard J3016.
Levels 0–2 (partial automation) require that the human driver be responsible
for monitoring the driving environment, whereas in Levels 3–5 such a task is
performed by the automated driving system. Until full automation (Level 5) is
reached, the driver must be involved in car driving, that is to say, the driver
must be kept “in the loop.” In fact, in any intermediate level of automation,
several driving modes will include the possibility or the necessity that control
of the vehicle is shifted from the automated system to the driver, or that the
driver wants to take control back. As explained in Refs. [3, 4], such an
operation must be carefully designed. This issue will be particularly important
in the case of Level 3 (conditional automation), in which the driver, due to the
increasing number of automated driving modes, will often be called to take
control back. The driving task can be decomposed into three main activities:
recognition, judgement and operation. In all such activities errors are possible,
likely to lead to risks or even accidents. At SAE Level 0 (no automation), all
of these tasks are performed by the driver, which can be defined as
“conventional driving.” When Advanced Driver Assistance Systems (ADAS)
are implemented, the driver can be assisted, or even substituted, by some
automated or autonomous device or function, up to the level of full automation
(SAE Level 5) in which recognition, judgement and operation are performed by
the system, which takes full control. In intermediate levels of assistance (for
instance, SAE Levels 1 and 2), only recognition and/or judgement are assisted,
so the responsibility for operation remains with the driver. Far from being
infallible (at least at the present state of the art), such devices can be of great
help in decreasing the probability of errors and, consequently, of accidents.
PRESENT ADVANCED DRIVER ASSISTANCE SYSTEMS
As regards the present state of the art of driving assistance devices, their
functions can be divided into three main categories:
● Presentation of information about the road environment and traffic conditions; the driver is helped during the recognition activity. Devices such as road sign recognition (RSR), blind spot monitoring (BSM), and so on, fall under this category.
● Warning; the driver is helped as regards judgement. A timely and appropriate warning signal is issued when a possibly critical situation is detected (as in forward collision warning (FCW)).
● Control (to help the driver in operation tasks); the system takes control over the vehicle.
Presently, two types of control can be considered:
● Control for normal driving conditions (for instance, lane keep assist (LKA), adaptive cruise control (ACC), etc.), mainly aimed at improving travelling comfort by reducing the workload on the driver.
● Control for pre-crash conditions (such as autonomous emergency braking (AEB), often in combination with FCW), in which the driver may be overridden due to the lack of an effective response to some critical situation, aiming at avoiding or mitigating an imminent collision. In such cases, the system acts on the brakes but leaves the choice of steering to the driver.
Even if control is taken by the system, the driving responsibility remains with
the driver: in the first case because control is handed back to the driver when
conditions become critical; in the second case because the driver is overridden
only in pre-crash conditions (and only as regards braking), so that the accident
consequences are mitigated.
For all of these assistance functions, it can easily be understood that a
convenient Human-Machine Interface (HMI) between the generic
device and the driver must be designed and implemented. The HMI can be
considered the channel through which information is conveyed to a
vehicle's occupant; HMI design is one of the main issues that must be
properly allowed for [5], addressing, for instance, the definition of the correct
stimulus (type: visual, acoustic, haptic, etc.; sequence; timing; priority; etc.).
In addition, since different communication efficiency is to be expected as
a function of age, experience, education, etc., the interface must be properly
tailored and some adaptation is certainly needed. Standardisation is to be
expected, as well as the definition of human models
capable of helping to interpret the situation correctly and act accordingly.
During the progression towards full automation (especially when high levels
of automation, such as SAE Levels 2, 3 and 4, are implemented), several
issues should be addressed in order to obtain a fast and successful path to
SAE Level 5; three main topics can be identified as follows:
● Definition of suitable strategies for shifting control from the driver to the system and vice versa. The design of proper HMI systems will be of fundamental importance, also aiming at carrying out such operations in a seamless manner.
● Definition of procedures aimed at obtaining the functional assessment of the instrumental part of the automated system.
● Obtaining a wide user acceptance rate in order to accelerate the market penetration of automated systems.
The issues presented in the first two points are currently addressed by several
standard practices and regulations (as, for instance, in [6, 7]).
Presently, most vehicles can be categorised as belonging to Level 0 of
automation, but all the major car manufacturers (as well as tier-one suppliers)
offer, in their sales catalogues, devices that can be defined as Driver
Assistance Systems (typical of SAE Level 1), and some of them show features
that can be defined as partial automation.
In the Italian market, for instance, in the official sales catalogues of the end of
2015 as published by car magazines (basically Quattroruote, Italy), several
automated assistance systems can be found, mainly belonging to the
following categories:
● Adaptive cruise control (ACC),
● Blind spot monitoring systems (BSM),
● Lane departure warning systems (LDWS),
● Road sign recognition (RSR),
● Autonomous emergency braking/forward collision warning (AEB/FCW).
It can easily be recognised that such functions will certainly be part of a
hypothetical future fully automated system: even if the methods that will be
used to obtain such functions can hardly be imagined today, the functions
themselves are necessary and the interaction with the driver must be allowed
for. As can be seen in Table 1, the above-mentioned devices are offered by a
good number of manufacturers on several models, both as standard equipment
and as a paid option; often they are included in a package together with other
safety or comfort devices.
Table 2 shows the average price and standard deviation (SD) for the same
devices as Table 1, for the models offering such devices as a paid option; as
can be seen, when a device is included in a package, its price can be much
higher. Price is a factor that can influence user acceptance and delay the
diffusion of such safety devices.
USER ACCEPTANCE
Though many researchers are very optimistic about the large-scale
implementation of full automation in the near future [8], many factors can
slow down the process. A survey conducted by the IEEE among its members
[8] revealed that, in the vision of many experts in the field, six main obstacles
to the rapid diffusion of autonomous vehicles can be identified: technology,
cost, infrastructure, policy, liability and user acceptance. According to this
source, the first three points should represent a minor problem; technology is
rapidly improving as regards both efficiency and reliability, whereas cost is a
problem that must be shared among private and public stakeholders, also
taking into consideration the potential benefit of accident reduction, as well
as medical and social costs. The implementation of proper infrastructures is
of the greatest importance (it is difficult to imagine an effective
implementation of driving automation without, e.g., V2V and V2I
communications), so that in relatively short terms it can be predicted that the
largest diffusion of such systems will take place first in advanced
geographical areas, such as North America, Europe and parts of Asia. The
last three points, instead, will play a decisive role; policymakers can boost or
slow down the process, since many matters require political decisions, and
proper legislation will most probably be necessary, for instance as regards the
realisation of the needed infrastructures and the settlement of issues related to
legal liability. This last point can be particularly important: who will be
responsible when an accident happens, as it certainly will? It can be imagined
that the involvement of car manufacturers and their suppliers will be greater,
in a context that will also involve insurance companies, governments and
customers [9–11]. User acceptance will play a fundamental role; in reference
[12], for instance, a worldwide survey was carried out in order to understand
how autonomous vehicles will be accepted, comparing all levels of
automation (from conventional driving to full automation). In this study, the
major concerns of future customers were indicated, including legal issues,
cost (22% declared themselves unwilling to pay any additional sum for an
automated vehicle), security (especially regarding software being subject to
hacking), privacy concerns (vehicles can be constantly monitored) and
driving pleasantness. Geographical differences were also pointed out.
In reference [13], the intention of French drivers to use a partially or fully
automated vehicle was investigated. In reference [14], the possible effect of
motion sickness on user acceptance is investigated, and the necessity of
considering such an issue during the design and development phase is
emphasised. Thus, if a fast and successful introduction into the global market
is desired, such systems must be implemented in a way that aims at high
performance and high user acceptance, and these steps require the most
complete understanding of driving behaviour: in other words, a driver model
(or better, driver models) must be set up.
THE ROLE OF SIMULATION
In the initial phase of the development of ADAS, it is common practice to
carry out testing in a controlled environment, namely by staged driving
sessions or using driving simulators. Since their introduction, driving
simulators have been widely used to study safety and human-factor-related
issues. Since the first appearance of advanced driving simulators, they have
been extensively used to investigate safety issues [15, 16] and also as an
important tool in the design and assessment of advanced driving assistance
systems [17, 18].
The use of simulators presents numerous advantages:
● Situations that would normally be dangerous can be faced without any risk for the driver or for the researchers.
● A well-designed testing scenario allows very good repeatability of the driving environment and control of all variables (traffic, course, road and weather conditions, etc.).
● The situations the driver goes through can be adapted to the driver's behaviour itself.
● All testing parameters can be easily recorded and stored for subsequent elaboration.
● Experimentation can be sped up.
On the other hand, the driving scenario must be carefully designed in order to
obtain sufficient representativeness of the results, and often a validation
activity must be carried out, for instance by staging tests in
controlled environments or by monitoring real-life driving. Moreover, not all
drivers are able to drive comfortably in a driving simulator. Although
some of the testing activities regarding ADAS development can be carried
out using static simulators, an advanced immersive driving
simulator provides all the needed functionalities together with
sufficiently realistic testing conditions.
In the present chapter, a simulator-based experimental study is presented, aimed at
understanding drivers' behaviour when a sudden hazardous situation with
pedestrians is presented; for this aim, 69 young drivers were exposed to
different virtual driving scenarios. The experimentation, far from being definitive,
will nevertheless provide useful information for setting up a driver model as well
as for determining HMI requirements.
Definition of a reference accident scenario
Among road users, pedestrians represent one of the most vulnerable
categories, and the percentage of accidents involving pedestrians is relatively
high. According to
WHO [19], in 2013, 22% of about 1.25 million worldwide road traffic deaths
were pedestrians. In the USA, during 2012, 4743 pedestrians were killed
(total casualties 33,561) and 76,000 were injured (total 2,362,000) [20]. It can
be seen that the percentage of deaths with respect to injuries among
pedestrians (6.2%) is much higher than the general one (1.4%), thus
confirming the high level of danger. Pedestrian safety is expected to be
greatly improved by the adoption of assistance systems such as pedestrian
detection systems (PDS) and V2I communications; for this reason, it was
chosen to study drivers' behaviour in a situation with a high risk of being
involved in an accident with a pedestrian.
Accident reconstruction is a powerful tool to explain the causes of an
accident and identify the main contributing factors. Thus, 26 accidents
involving pedestrians that actually happened in Austria were analysed using the
CEDATU (Central Database for In-depth Accident Study) database [21, 22],
by also using multi-body simulations (PC-Crash, DSD, 2015).
Following the indications collected during the preceding phase, a reference
accident scenario was defined having, among others, the following features:
low-density urban environment; late evening (heavy darkness) with scarce
artificial lighting; good weather and road conditions; no intersection;
pedestrian crossing outside a crosswalk, walking (not running) from left to
right; moreover, a car coming from the opposite direction obstructs the view.
Set-up of the simulation scenario
Sixty-nine young drivers were employed for the driving simulator testing,
enrolled on a voluntary basis mainly among the students of the School of
Engineering of the University of Florence. Each subject drove the test
scenario once. Sixty-one tests were considered valid, whereas the remaining
were discarded because of excessive simulator sickness or anomalous
behaviour of the drivers or of the simulator software; the main characteristics
of the drivers of valid tests (18 female, 43 male) are shown in Table 3. Age
varied between 19 and 36 years, and driving experience was also very varied,
ranging from a few hundred kilometres per year to 50,000. Nineteen tests were actually used
to tune the simulation scenario (set-up tests) and, in particular, to synchronise
the behaviour of the other road users at the emergency situation. The
remaining 42 drivers were used for the actual experimentation.
For the experimentation, the driving simulator available at LaSIS (Laboratory
for Road Safety and Accident Reconstruction, University of Florence, Italy)
was used. It consists of a full-scale dynamic simulator (Image 2) provided by
AutoSIM (Norway, model AS 1200); the complete vehicle cabin and the 200
degrees wide cylindrical screen allow an immersive driving experience.
Image 2.
View of the simulator in use at LaSIS.
Following the indications obtained from the statistical analysis and the
thorough reconstruction of typical road accidents, a generic scenario was
defined, adapting one of the terrains available in the simulator database. In
particular, a sparsely populated environment was chosen, in which the
emergency situation described above was inserted (Image 3). The driver
reaches the point after having driven for about 5 minutes, encountering some
vehicular traffic and pedestrians. Since the terrain contains several
intersections and roundabouts, direction arrows were projected on the screen
to make sure that the drivers reached the point of interest within a given time.
The entire test was driven in night-time conditions.
Image 3.
Map of the emergency situation area.
Before driving the test scenario, the subjects faced a training scenario in
which, for about 10 minutes, they could get acquainted with the vehicle
cabin and try the most common manoeuvres; the subjects began the drive
in daylight and ended in dark conditions in order to become gradually
accustomed to night driving. In some of the tests, the presence of an ADAS
was simulated, consisting of a pedestrian detection system. Those drivers
who were going to drive in the scenario with the ADAS also experienced it
in the training scenario. Before and after the test, each subject was given a
questionnaire in order to collect personal and driving information and record
their impressions. Before the test, the subjects were informed about the aim
of the test and were instructed on the basic functions of the simulator and
cabin; they were invited to drive in a normal way, respecting the normal
traffic rules. No hints were given about the emergency situation they were
going to meet. The emergency situation was designed as follows (Image 3):
when the interactive vehicle is at a given distance from the emergency area, a
vehicle starts travelling in the opposite direction, obstructing the view. When
the interactive vehicle is at a distance that corresponds to a time to collision
of about 4.5 s, a pedestrian starts walking from the opposite side of the road,
heading along an oblique direction, at a constant speed of 1.67 m/s; in the
first part of his path, the pedestrian is hidden by the distraction vehicle
(Image 4a).
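As a minimal sketch of the trigger logic just described (an assumption about how such a scenario can be scripted, not the simulator's actual code), the pedestrian can be released once the approaching vehicle's time to collision drops to the 4.5 s threshold stated above:

```python
# Minimal sketch of the scenario trigger described above; the function and
# variable names are illustrative assumptions, not the simulator's API.
TRIGGER_TTC_S = 4.5          # pedestrian release threshold, from the text
PEDESTRIAN_SPEED_MPS = 1.67  # constant walking speed, from the text

def should_release_pedestrian(distance_to_conflict_m: float,
                              vehicle_speed_mps: float) -> bool:
    """True once the approaching vehicle's ttc drops to about 4.5 s."""
    if vehicle_speed_mps <= 0:
        return False
    return distance_to_conflict_m / vehicle_speed_mps <= TRIGGER_TTC_S
```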
Image 4.
Scene of the emergency situation as seen by the subject driver: in scenarios
A, B and C (a) and in scenario D (b).
Four different scenarios were set up, one without ADAS and three in which
the drivers were helped by a simulated PDS assistance system, characterised
by different alert modes:
● Scenario A: no ADAS was simulated (12 drivers);
● Scenario B: a sound alarm (consisting of beeps with increasing frequency)
is issued 0.5 s after the pedestrian starts (10 drivers);
● Scenario C: as scenario B, except that the sound is emitted after 1.5 s (10
drivers);
● Scenario D: as scenario C (10 drivers), with the addition of a visual
warning, consisting of a self-lighted rectangular frame around the pedestrian
(Image 4b). The aim was to provide additional information to the driver
about the position of the hazardous situation, simulating the use of a head-up
display.
Data analysis
To obtain the results, data from three sources were analysed:
● Disk recording of several vehicle parameters, at a rate of 20 samples/s,
including position, speed, acceleration, pedal use, gear, steering wheel
position, vehicle position in the lane, etc.
● Visual recording of the entire simulation, as performed by the simulator
software.
● Information filled in the questionnaires by the subjects.
From such data, besides a qualitative description of the driver’s behaviour,
the following values were calculated:
● Time, speed and position when a given event occurred (reactions, actions,
etc.).
● Time to collision (ttc) following a given event; time to collision is defined
as the time needed to reach a given position (for instance an obstacle) if the
speed is kept constant; this value indicates the closeness of a dangerous
situation, accounting for both speed and distance; the ttc of the vehicle
relative to the pedestrian was calculated taking into account the pedestrian's
actual position, since the pedestrian walks.
● Time between braking onset and maximum braking activation; it indicates
how fast the maximum braking action is reached.
● Actual degree of emergency (ADE), shown in Eq. (1), combining speed
(V), time to collision (ttc) and reaction time (tR); it represents the constant
deceleration that should be applied to stop just in front of the obstacle and is
expressed as follows [15]:

ADE = V / (2 · (ttc − tR))    (1)

In the present study, when ttc refers to the moment of braking onset, tR is set
equal to zero.
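As a minimal sketch (not the study's code) of how these two quantities can be computed from logged data, assuming the illustrative function and variable names below:

```python
# Hedged sketch: ttc and ADE from logged speed and distance data.

def time_to_collision(distance_m: float, speed_mps: float) -> float:
    """Time needed to reach a given position if speed stays constant."""
    if speed_mps <= 0:
        return float("inf")
    return distance_m / speed_mps

def actual_degree_of_emergency(speed_mps: float, ttc_s: float,
                               t_r: float = 0.0) -> float:
    """Eq. (1): ADE = V / (2 * (ttc - tR)), the constant deceleration
    (m/s^2) needed to stop just in front of the obstacle."""
    usable_time = ttc_s - t_r
    if usable_time <= 0:
        return float("inf")  # braking alone can no longer avoid the impact
    return speed_mps / (2.0 * usable_time)

# Example: 30 km/h (8.33 m/s) with ttc = 1.0 s at braking onset (tR = 0)
print(actual_degree_of_emergency(8.33, 1.0))   # about 4.2 m/s^2
```

The formula follows from stopping in the distance V·(ttc − tR) under constant deceleration: a = V²/(2·V·(ttc − tR)) = V/(2·(ttc − tR)).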
Results
All of the subjects reacted in some way when approaching the pedestrian, but
only 29 drivers out of 42 carried out a successful manoeuvre to avoid the
collision. Usually, the drivers reacted by releasing the accelerator pedal and
pressing the brake pedal; only two out of the 29 successful drivers, after a
short use of the brakes, steered and avoided the obstacle (tests 31 and 34).
None of the drivers chose to steer without braking at all. Thirteen drivers out
of 42 hit the pedestrian (see Table 4): 50% in scenario A (without ADAS) and
23% in the scenarios with ADAS. Of these, only 4 out of 13 tried to steer
while braking.
Tests without ADAS had a higher speed at collision than those with ADAS,
on average 33.1 km/h (SD = 11.9 km/h) versus 22.4 km/h (SD = 9.4 km/h),
with some statistical significance (P-value = 0.06 according to the t-test
[23]). As regards the functions of the simulated ADAS device, 4 out of 30
subjects declared they did not hear the acoustic alarm, whereas 10 out of 10
saw the visual one.
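A hedged sketch of this kind of two-sample comparison (Welch's t-test, which does not assume equal variances) is shown below; the samples are illustrative placeholders, not the study's raw data:

```python
# Hypothetical example of the two-sample t-test used to compare collision
# speeds with and without ADAS; the values below are NOT the study's data.
from scipy import stats

speeds_no_adas_kmh = [45.0, 28.5, 21.0, 38.0, 33.0, 33.1]     # hypothetical
speeds_adas_kmh = [22.0, 12.5, 30.0, 18.0, 25.5, 26.4, 22.4]  # hypothetical

t_stat, p_value = stats.ttest_ind(speeds_no_adas_kmh, speeds_adas_kmh,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, P-value = {p_value:.3f}")
```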
TIME OF ACTION AND REACTION
Time of action was determined by analysing the signal regarding pedals
(brake and accelerator) and the steering wheel. Every effort was made to
identify the first action caused by the perception of a hazard. Approaching
the obstacle, an action on the brake was always detected; in the scenarios
with ADAS, the action on the accelerator was sometimes detected before the
alarm was issued, mainly because the driver was prudently reducing speed
while approaching an intersection with some traffic; in such cases the first
action was considered to be braking. This happened 11 times out of 30.
Since the moment when the driver saw the pedestrian cannot be determined,
it was not possible to obtain a reaction time relative to that event. In the
scenarios with ADAS, reaction time was therefore defined as the time interval
between the alarm and the first subsequent action (on accelerator or brake);
its mean value was 0.88 s.
An indication of the degree of emergency perceived by the driver is the time
difference between the first action (releasing the accelerator) and the second
action (pressing the brake pedal), tA2 − tA1; the comparison between the
different scenarios, as well as for the scenarios with ADAS overall, is shown
in Image 5. The t-test showed no significant difference among the scenarios
with ADAS (P-value > 0.5), whereas the latter are significantly different from
scenario A (no ADAS, P-value < 0.05).
Image 5.
Mean values of tA2 − tA1 ± SD.
TIME TO COLLISION
Time to collision [15] is the time after which a collision will occur if the
speed does not change; in this study, it was evaluated at the moment of the
first action (release of the accelerator) and of the second action (braking). In
Images 6 and 7, the mean values of both parameters are shown together with
their dispersion. Scenario A yielded a significantly lower value for ttcA1
(1.46 s vs. 2.52 s, P-value < 0.0001) as well as for ttcA2 (1.34 s vs. 1.92 s,
P-value < 0.01) in comparison to the scenarios with ADAS.
Image 6.
Mean values of ttcA1 ± SD.
Image 7.
Mean values of ttcA2 ± SD.
Among the scenarios with ADAS, as expected, scenario B showed
significantly higher values than scenario C as regards ttcA1 (2.87 vs. 2.32 s,
P-value = 0.06) and above all as regards ttcA2 (2.13 vs. 1.46 s, P-value =
0.03). No significant difference was found between scenarios C and D.
BRAKING
Braking was, for 27 drivers out of 29, the first active action attempted by
successful drivers (as said, only two avoided the obstacle by steering after
briefly braking), since releasing the accelerator pedal, in itself, has little
effect on speed reduction. Braking is one of the actions that showed
differences throughout the experimentation. In Image 8, for instance, some
typical modes of action on the brake pedal are shown; they are related to the
emergency braking while approaching the pedestrian during four tests, two
successful and two failed.
Image 8.
Typical emergency brake pedal activation; tests 28 and 41 were successful,
and tests 26 and 71 failed.
The main difference lies in the time that the driver takes to reach maximum
pedal activation. In order to characterise this difference, the parameter
tmax − tA2 is introduced as the time difference between the beginning of brake
activation and the instant when the maximum action is reached. This value
could not be calculated for all the tests, since in some failed tests the collision
happened before the braking action reached a stabilised level. As shown in
Image 9, where tmax − tA2 is plotted as a function of time to collision, a clear
trend is visible, indicating that when ttc decreases, the action on the brake
tends to be faster.
Image 9.
Trend of tmax − tA2 as a function of time to collision.
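As a rough illustration of how tmax − tA2 can be extracted from a recorded pedal signal, the following sketch assumes a pedal activation normalised to [0, 1] and the 20 samples/s logging rate mentioned above; the onset threshold is an illustrative assumption:

```python
# Hedged sketch: estimate tmax - tA2 from a logged brake-pedal signal.
import numpy as np

def time_to_max_braking(brake, rate_hz: float = 20.0,
                        onset_threshold: float = 0.05) -> float:
    """Return tmax - tA2: time from braking onset to maximum activation."""
    brake = np.asarray(brake, dtype=float)
    active = np.flatnonzero(brake > onset_threshold)
    if active.size == 0:
        return float("nan")                 # the driver never braked
    i_onset = active[0]                     # braking onset (tA2)
    i_max = i_onset + int(np.argmax(brake[i_onset:]))  # max activation (tmax)
    return (i_max - i_onset) / rate_hz
```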
As shown in Image 10, tmax − tA2 is also influenced by the presence of ADAS;
scenarios with ADAS yielded significantly higher values than scenarios
without ADAS (on average 2.03 s vs. 1.09 s, P-value < 0.001). As a
consequence, part of the advantage afforded by ADAS devices (seen above,
for instance, in terms of time to collision) is wasted because of a slower
action on the brake pedal; trials that ended with a collision (indicated by a
cross) are in some cases characterised by a relatively high tmax − tA2,
indicating that a different braking approach could sometimes have helped to
avoid the collision.
Image 10.
Cumulated distribution for tmax − tA2 in tests with and without ADAS; the
cross indicates a failed test.
ACTUAL DEGREE OF EMERGENCY
The parameter ADE introduced above indicates the degree of emergency
(meant as the urgency to react) of a given hazardous situation, but also
whether a manoeuvre based on braking alone can be successful, since the
deceleration a vehicle can achieve is limited by the available friction.
In Image 11, the cumulated distributions of actual degree of emergency
corresponding to the action on the brake are shown, for failed and successful
trials. A statistically significant difference was identified between the two
samples (on average 8.78 m/s² for the failed tests vs. 3.89 m/s² for the
successful tests, P-value < 0.001). It is evident that it is impossible to stop
before the collision when the ADE values are near or greater than the
maximum possible deceleration. In fact, the maximum value of ADEA2 that
allowed a successful manoeuvre acting on the brakes alone was 5.89 m/s²;
the cases with higher ADE (tests 31 and 34, highlighted in Image 11) were
successful only because a steering manoeuvre was performed.
Image 11.
Cumulated distribution for ADEA2 in failed and successful tests. The cross
indicates trials in which the driver avoided the obstacle by steering instead of
braking (tests 31 and 34).
Conclusion
The effect of the presence of the ADAS was relevant: for instance, it halved
the percentage of collisions. Similarly, some of the other parameters that
were examined, such as ttcA1, ttcA2, actual degree of emergency and speed at
collision, showed clear advantages of using such a device. Parameters such as
tA2 − tA1 and tmax − tA2, instead, showed that the presence of the ADAS could
not prevent a slower execution of the required actions, perhaps caused by the
anticipated perception of danger, so that sometimes the driver seemed unable
to fully exploit the advantages allowed by the ADAS. In such cases, further
automation such as autonomous braking or emergency brake assist (which
helps apply and maintain the correct pressure on the brakes and is already
used by several manufacturers) will certainly help.
As regards the comparison between the different ADAS modes (scenarios B,
C and D), the conclusions are less straightforward. Scenarios B and C have
the same alert mode (a beep with increasing frequency), but in the latter it
starts one second later. Consequently, in scenario C, time to collision is
significantly lower, as is tmax − tA2, but no significant difference was
identified for the other parameters, though they were always better in
scenario B. The advantage of an early alert seems, as expected, evident, but
the risk of increasing the frequency of false positives in the attempt to
anticipate the alarm must be carefully evaluated. As regards scenario D, in
which a luminous rectangle framing the pedestrian was added to the same
configuration as scenario C, no significant difference was noted, even though
all the drivers declared they had seen it while not everyone remembered
having heard the acoustic signal. Further experimentation and deeper
understanding are certainly necessary.
Flight Control Development and Test for an
Unconventional VTOL UAV.
Ducted-fan aerial vehicles (DFAVs) have attracted increasing interest from
the academic and industrial communities worldwide due to their compact
layout and safe operation in tasks such as surveillance, data acquisition, and
even physical interaction within unstructured environments.
Among the different possible configurations, the vertical take-off and landing
(VTOL) ducted-fan aircraft is well-suited for a variety of missions in
environments cluttered with obstacles or close to infrastructures and humans.
This fact is motivated mainly by the ducted-fan configuration in which the
propeller is protected by an annular fuselage. In addition, a prominent
advantage of the ducted-fan system is its better overall efficiency at low speeds [1].
In this respect, also inspired by previous works considering test-fly
methods in control [2, 3], the aircraft considered here is a tandem ducted-fan
vehicle configuration characterized by a very simple mechanical structure,
composed only of two tandem contra-rotating propellers inside the vehicle's
fuselage, a number of control vanes which deflect the propellers' air flow in
order to obtain full controllability of the attitude dynamics (see also [1, 4])
and a set of auxiliary "direct force control" devices in the form of small
lateral electric ducted fans (EDFs).
Drawing inspiration from the potential of the well-designed VTOL aircraft,
the focus of this chapter is on the systematic modeling, flight control
development, and implementation methods of the aerial vehicle named
BIT-TDF at Beijing Institute of Technology (BIT). A number of
contributions focus on the problems of feedback control design for this class
of systems. In [5], a dynamic inversion controller is proposed to govern the
nonlinear dynamics of a ducted-fan aircraft. In [6], a framework of nonlinear
robust control based on a path-following strategy is applied to a ducted-fan
miniature UAV. A structured two-loop feedback controller combined with a
static anti-windup compensator is proposed in [3] for a ducted-fan aircraft.
However, few research laboratories are carrying out advanced theoretical and
experimental work on such systems; among them, to mention a few, the
European Community under the 7th Framework Programme through the
collaborative project AIRobots [7], the HoverEye project of autonomous
ducted-fan aircraft [8], and the Martin Jetpack [9].
To actually show its potential in a real application scenario, the overall
system design of a fully functional UAV has to be validated experimentally
using a real setup, especially the proposed control techniques. The objective
is to consider a robust flight control design for our small-scale ducted-fan
aircraft. The proposed methods have been tested in simulation, in
experiments, or both, where the implementation has been carried out using
the ducted-fan UAV known as BIT-TDF.
Throughout the overall development of the UAV, deriving a high-fidelity
nonlinear model has been a challenging issue due to the vehicle's inherent
instability and the large number of complex aerodynamic parameters. After
the hardware construction of the ducted-fan prototype, we first obtain a
comprehensive flight dynamics model based on an elaborated approach
which integrates first-principles and system identification techniques. The
frequency-domain identification tool CIFER [10], which was developed by
the Army/NASA Rotorcraft Division and is one of today's standard tools for
identifying models of different aircraft configurations, has been used here to
capture the key dynamics. With the identified model in hand, we then design
a flight control system with a two-loop architecture, in which an inner loop
stabilizes and decouples the UAV dynamics and an outer loop provides the
desired velocity tracking performance. Specifically, we have combined (1)
the H∞ technique; (2) a nonsmooth optimization algorithm; and (3)
custom-defined gain scheduling to design a nonlinear flight control law and
have successfully realized the automatic flight test.
Description and dynamics of the BIT-TDF system
The ducted-fan UAV of the Department of Mechanical Engineering of BIT is
the BIT-TDF prototype (Image 1a) developed by our team of researchers at
Vehicle Research Centre (VRC). The front and aft propellers are enclosed
within a protective carbon fiber duct to ensure safe operation of the vehicle in
an indoor environment or close to buildings. The prototype uses a very
simple, lightweight monocoque structure, producing an efficient and very
durable design. The aerodynamic efficiency, in terms of additional thrust at
the same power consumption, is improved by increasing the lip radius and
decreasing the blade tip clearance. It uses eight 28-cm propellers for the
tandem ducted fans and standard RC motors and speed controllers. It is
equipped with the COM Express Type 10 Mini Carrier Board CCG010,
which is chosen as the onboard flight control computer running RT-Linux.
The embedded computer system has a main processor running at 1.3 GHz,
which conveniently integrates all the functions necessary for data exchange
with the INS/GPS-based navigation system, servo-controller and wireless
modem, and a data acquisition board for the RPM sensor. An 8-channel
servo-controller (UAV100) is used to generate the PWM signals necessary to
drive the ducted-fan motors and the actuator servos. Custom-built real-time
control software developed by the BIT VRC research group is used for
real-time control of the BIT-TDF.
Image 1.
(a) The BIT-TDF ducted-fan UAV and (b) its schematic representation.
For ease of understanding, the block diagram of the entire UAV system is
illustrated in Image 2. From left to right, it is composed of three main parts:
Image 2.
The system block diagram.
● The first part represents the actuators along with the mixing function that
converts conventional pilot inceptor inputs into a combination of exit vane
deflections and motor speeds. The input to this part is u = [δcol δlon δlat δped]T,
a vector of pulse-width modulation (PWM) signals. The outputs are the
vector Ω = [Ω1 Ω2 Ω3 Ω4]T, representing the motor speeds of the front-aft
ducted-fan propellers and the left-right EDFs, in RPM, and the vector
δ = [δ1 δ2 δ3 δ4]T, representing the four exit vane deflections, in degrees.
● The second part is the force and moment generation mechanism block that
relates the generated inputs to the applied lift, torques, and the aerodynamics
acting on the system. This block corresponds to all the body components of
the aircraft, that is, the tandem ducted-fan system, the left-right EDF thrust
generation, the vane deflection system, and the fuselage drag.
● The third part is the rigid-body dynamics that relates the applied forces and
moments to the attitude, velocity, and position of the BIT-TDF.
The subsequent sections present a comprehensive nonlinear flight model of
the ducted-fan aircraft.
EQUATIONS OF MOTION FOR BIT-TDF
The following flight-dynamics equations of motion describe a general
physical structure based on the Newton-Euler formalism; the reader is
referred to any classical flight-mechanics reference for a more complete
development.
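The equations themselves appear only as an image in the original chapter; the following is a generic reconstruction of the standard Newton-Euler rigid-body form with a diagonal inertia matrix (as implied by the use of Jxx, Jyy and Jzz), where the right-hand sides collect gravity and the external aerodynamic forces and moments Faero and Maero, and it is not necessarily the chapter's exact notation:

```latex
\begin{aligned}
m(\dot{u} + qw - rv) &= F_{x}, & J_{xx}\,\dot{p} &= (J_{yy}-J_{zz})\,qr + M_{x},\\
m(\dot{v} + ru - pw) &= F_{y}, & J_{yy}\,\dot{q} &= (J_{zz}-J_{xx})\,rp + M_{y},\\
m(\dot{w} + pv - qu) &= F_{z}, & J_{zz}\,\dot{r} &= (J_{xx}-J_{yy})\,pq + M_{z}.
\end{aligned}
```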
where u, v, and w are the velocities in the body frame, p, q, and r are the
angular velocities in the body frame, m is the system mass, and Jxx, Jyy and Jzz
are the moments of inertia along body-frame main axes. ϕ, θ, and ψ are the
roll, pitch, and yaw Euler angles. Faero and Maero are the vector of external
aerodynamic force and moments, respectively, which are given by
where the index i = 1 denotes the front ducted fan and i = 2 the aft ducted
fan, Dm is the momentum drag, H the hub force, (·)fus refers to the fuselage,
ΣFv to the exit vanes, Tf,a to the thrust of the front and aft ducted fans,
Tedf(l) and Tedf(r) to the thrust of the left and right EDFs, PM to the duct
pitching moment, Rm to the rolling moment of a propeller, Q to the propeller
anti-torque moment, and de, hr, dzv and dxv are all distance parameters related
to the torques. They are introduced in detail as follows.
DUCTED-FAN SYSTEM FORCES AND MOMENTS
The aerodynamic forces and moments are derived using a combination of
ducted-fan inflow model and blade element theory. In ref. [1], a
comprehensive and creative inflow model for computing the ducted-fan
thrust T and induced velocity vi is proposed. The procedure is based on some
modifications of the classic momentum-theory inflow of an open propeller.
For easier reference, we list the inflow model here, which involves a revised
derivation of the conservation-of-energy equation.
An illustration of the inflow model is shown in Image 3, with the velocity
vectors upstream of the propeller, at the propeller disk, and far downstream
of the propeller. The affected angles of attack, αR and α∞, are modeled as
functions of the flow turning efficiency factors kχR and kχ∞, which are given
by
Image 3.
Inflow model illustration.
The thrust of the ducted-fan system is a combination of the propeller thrust
TR and the duct thrust TD,
where kaug is the thrust augmentation factor.
The mass flow rate of air through the duct is given by
where ρ is the local air density and AD is the duct area at the plane of the
propeller.
By considering only the vertical component of conservation of momentum
from free-stream flow to the far-wake flow, the expression for thrust is given
by
Similarly, considering only the horizontal component, the expression for
momentum drag is found to be
It is assumed that energy enters the ducted-fan system through the propeller
thrust and the momentum drag. Therefore, a revised equation of increment in
kinetic energy is formulated here that accounts for the contribution of the
propeller thrust.
Substituting Eq. (6) into (7), an expression is derived for the velocity v∞.
Another expression for the velocity v∞ is found by substituting Eqs. (5) and
(7) into (9). The final result after jointly solving for the velocity v∞ is the
newly formulated inflow equation.
Based on the work of Gary Fay in the Mesicopter project [10, 11], the
aerodynamic forces and moments of a rotating propeller are resolved using
classic blade element theory. For the convenience of the reader, we recall
some symbols: σ, solidity ratio; clα, lift slope; λ, inflow ratio; μ, advance
ratio; R, propeller radius; C̄d, averaged drag coefficient; θ0, pitch of
incidence; θtw, twist pitch. The thrust force TR is the resultant of the vertical
forces acting on all the blade elements.
Hub force H is the resultant of horizontal forces acting on all the blade
elements.
The propeller anti-torque moment Q is caused by the horizontal forces acting
on the propeller that are multiplied by the moment arm and integrated over
the propeller.
Rolling moment Rm of a propeller exists in forward flight when the advancing
blade is producing more lift than the retreating one.
Once the thrust created by the ducted-fan system is determined from (11),
the quasi-steady induced velocity is found by iterating the inflow Eq. (10),
and vice versa. A simple bisection algorithm is implemented here to obtain
consistent values of thrust and induced velocity; see the sketch below.
Consequently, all the forces and moments of the ducted-fan system are
solvable.
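A hedged sketch of this coupled solution is given below: the blade-element relation (Eq. (11)) gives thrust as a function of induced velocity, and the inflow equation (Eq. (10)) must hold simultaneously; `thrust_bet()` and `inflow_residual()` are placeholders for the chapter's equations, which are not reproduced numerically here.

```python
# Hedged sketch: bisection on induced velocity to solve the coupled
# thrust/inflow equations; assumes the inflow residual changes sign
# over the search interval [v_lo, v_hi].

def solve_thrust_and_inflow(thrust_bet, inflow_residual,
                            v_lo=0.0, v_hi=50.0, tol=1e-6, max_iter=100):
    """Return (v_i, T) consistent with both Eq. (10) and Eq. (11)."""
    v_mid = 0.5 * (v_lo + v_hi)
    for _ in range(max_iter):
        v_mid = 0.5 * (v_lo + v_hi)
        residual = inflow_residual(v_mid, thrust_bet(v_mid))
        if abs(residual) < tol:
            break
        if residual > 0.0:      # sign convention is an assumption
            v_hi = v_mid
        else:
            v_lo = v_mid
    return v_mid, thrust_bet(v_mid)
```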
CONTROL VANES FORCES AND MOMENTS
The exit control vanes are modeled as all-moving control surfaces that
operate in the wake generated by the propellers, and provide yaw attitude
control and thrust augmentation. To model those forces and moments, we
refer to our mixing schematic shown in Image 4 (also shown in the “mixing”
block of Image 2). First, we will consider all the forces generated by each
vane, and then, we will consider the resultant contributions that affect the
yaw dynamics. Let us denote each vane by i = 1, …, 4. The resulting force Fv
and the induced moment Γv are given by
Image 4.
Exit vanes used to govern yaw attitude.
where Fvy and Fvz are the aerodynamic forces projected into the body-fixed
frame, α0 is the flow deflection angle and the induced moment Γv can be
decomposed into the components dzv Σ Fvy and Σ Fvy dxv in (2).
FUSELAGE FORCES
There are always drag forces caused by the fuselage, and they have to be
modeled. The fuselage drag model is a key element in the correlation with
flight-test data. A function is integrated into the simulation model that
calculates the fuselage drag forces along the three body-frame axes, where
Sfx, Sfy, and Sfz are the effective drag areas in the xb, yb, and zb directions.
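The chapter's exact drag equation appears only as an image; the sketch below assumes the common quadratic form F_i = −0.5·ρ·S_fi·V_i·|V_i| per body axis, which is consistent with the description but is an assumption, not the authors' confirmed equation:

```python
# Hedged sketch of a quadratic fuselage drag model (assumed form).

def fuselage_drag(rho, drag_areas, v_body):
    """drag_areas = (Sfx, Sfy, Sfz) in m^2; v_body = (u, v, w) in m/s."""
    return tuple(-0.5 * rho * S * v * abs(v)
                 for S, v in zip(drag_areas, v_body))

# Example: hover with a 2 m/s lateral gust component (placeholder areas)
print(fuselage_drag(1.225, (0.05, 0.08, 0.10), (0.0, 2.0, 0.0)))
```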
LATERAL EDF THRUST
The two small lateral electric ducted fans are responsible for controlling the
roll attitude dynamics by regulating the differential thrust on the EDFs.
Specifically, a positive control input results in increased thrust on the left
EDF and decreased thrust on the right one. It should be mentioned that the
previous BIT-TDF prototype in [12, 13] was a vane-control mixing version,
meaning that roll and yaw control were both achieved by deflecting the exit
vanes in a mixed way. That compact VTOL vehicle was tested extensively
by our research group. Although it achieved basic stabilized flight, it
exhibited some serious operational limitations, the most notable being its
poor stability and controllability in windy conditions. Therefore, a set of
auxiliary "direct force control" devices with small EDFs is mounted on the
current prototype (see Image 1) to improve the system's maneuverability.
Controllability analysis and recent flight tests demonstrate that the overall
performance of the newly constructed configuration has been significantly
improved.
The generated thrust Tedf of the EDF is related to the PWM input upwm by a
first-order linear transfer function:
where Kedf is a positive gain and τe is the time constant of the EDF.
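A minimal discrete-time sketch of this first-order response, Tedf(s) = Kedf/(τe·s + 1)·upwm(s), is shown below; the numeric values of Kedf and τe are placeholders standing in for the identified constants:

```python
# Hedged sketch: one forward-Euler step of tau_e * dT/dt = K_edf*u_pwm - T.

def edf_thrust_step(T_prev: float, u_pwm: float, dt: float,
                    K_edf: float = 1.0, tau_e: float = 0.1) -> float:
    """Advance the EDF thrust by one time step dt (placeholder constants)."""
    return T_prev + dt * (K_edf * u_pwm - T_prev) / tau_e
```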
DUCT PITCHING MOMENT
Perhaps the most challenging issue of the ducted-fan system compared with
conventional configurations is the strong nose-up pitching moment produced
by the duct in edgewise flight or in the presence of cross-wind. The duct
pitching moment is caused by the asymmetric lift of the duct due to the
unequal flow experienced by its front and aft sections. This moment makes
the vehicle so unstable that even an experienced pilot would not be able to
hold it steady in flight without a stability and control augmentation system
(SCAS). A meaningful observation in [1] is that the pitching moment may be
a function of airspeed, angle of attack, advance ratio, and inflow velocity.
Following the experimental data in [8], the pitching moment model is
implemented solely as a parabolic function of the relative airspeed for
simplicity, which can be written as
where V0 is the relative airspeed and εc and εx are constant coefficients to be
determined.
ACTUATOR DYNAMICS
Four linear servo-actuators are used to control the vane deflections, while the
front and aft electronic speed controllers (ESCs) are used to control the speed
of motor-propellers. The lateral EDF dynamics has been addressed in Section
2.1.4. The dynamics of all actuators can be identified in ground tests with an
input of the actuator command and an output of the actuator response or the
control surface deflection. However, in our case, an artificial control mixing
function was used and we would like to explicitly account for this block.
After examination of the respective frequency responses of the four control
input channels, we see that a first-order system should adequately capture the
dynamics. The corresponding differential equations are as follows:
where the states mf and mr are the speeds of the front and aft
motor-propellers, αv is the vane deflection command that is converted into a
combination of four exit vane deflections, the last state βe denotes the thrust
of the lateral EDFs, and the corresponding time constants are τm, τv, and τe.
The effective linkage gains of the four input channels (Zcol, Mlon, Nped, and
Llat) depend entirely on the defined usable range of the physical inputs that
correspond to the normalized inputs in the range [−1, 1]. The equations for
the effective linkage gains are as follows:
IDENTIFICATION OF MODEL PARAMETERS
From the above analysis, a full 15th-order nonlinear dynamic model can be
obtained, and a linear model of the BIT-TDF at a prescribed flight condition
can also be extracted. We have implemented an integrated identification
procedure to identify all the physical parameters. Using global optimization
methods, the extracted linear models can be utilized to tune the unknown
physical parameters of the nonlinear model to match the frequency responses
from flight tests, which delivers a very accurate nonlinear model suitable for
flight simulations and linear models adequate for control design.
The focus of this section is on the identification results obtained by the
particular CIFER tool. The frequency-domain system identification method
allows for rapid updating of the vehicle response model as changes are made
to the unconventional UAV design by repeating flight tests. First, a
description of the experimental setup is given. The parameterized model with
the associated stability derivatives is also provided. Then, the final results of
the identification procedure follow. Finally, the accuracy of the extracted
model is validated in the time domain.
EXPERIMENTAL PLATFORM
The experimental platform, as depicted in Image 5, includes an onboard
flight control computer, an INS/GPS module, a sonar sensor, four high-speed
brushless DC ESCs, a servo-controller, a pair of free-wave wireless data
transceivers, and a ground station. The pair of transceivers establish
communication between the onboard system and the ground station. The
onboard system collects sensor data, processes and fuses them, feeds them to
the control law, and sends the servo-driving signals to realize the desired
control mode. The C code is programmed, compiled, and then downloaded to
the onboard system and the ground station to perform their specific tasks.
Image 5.
Block diagram of the experimental platform.
PARAMETERIZED STATE-SPACE MODEL
The model structure of the BIT-TDF consists of 12 states. These states
include eight rigid-body states and four explicit states for the actuator
dynamics. The final structure is obtained by first systematically eliminating
the derivatives that have high insensitivity and/or a high Cramér-Rao bound
and then refining the model following a process similar to that in [14]. Image
6 shows the system matrix A and the input matrix B of the minimum
parameterized model structure.
Image 6.
System and input matrix for the state-space model.
The above-parameterized model provides a physically meaningful description
of the system dynamics. All stability derivatives included in this model are
related to kinematic and aerodynamic effects of the ducted fans, exit vanes
and the fuselage. The challenge is the determination of their numerical
values.
IDENTIFICATION SETUP
As shown in Image 7, the data-collection experiment is performed in closed
loop with an external SCAS engaged. The reason for the closed-loop
experiment is that the BIT-TDF is lightly damped and inherently unstable.
With the ducted-fan vehicle in hover, we inject a computerized
frequency-sweep excitation signal into one of the four control inputs. The
resulting control signals along with their corresponding output responses are
fed into the CIFER software, which processes the time-domain experimental
data to produce high-quality multi-input multi-output (MIMO)
frequency-response pairs. For our BIT-TDF vehicle, the selected frequency
responses and their frequency ranges are listed in Table 1.
Image 7.
Schematic diagram of data collection in the closed-loop setting.
The extraction of the parametric model is an iterative procedure, which
continues until the difference between the actual and simulated frequency
responses is minimized, as shown in Image 8. The identification results
illustrate that the system can be well characterized as a linear process in the
frequency range of interest. Note that the parameters in the model structure
have been assigned to the following four categories:
Image 8.
Frequency responses for the flight data (solid line) and frequency responses
predicted by the state-space model (dashed line).
● Fixed parameters known a priori;
● Constrained parameters with constraint equations to enforce commonality
and kinematic interrelationships;
● Isolated parameters for which a satisfactory identification is difficult to
achieve;
● Free parameters to be determined in the course of the identification.
Some physical parameters, such as gravity and actuator-related parameters
(determined from input-output ground tests), are known a priori and should
be fixed. Some stability derivatives in the system matrix have constrained
relationships due to the symmetrical configuration, such as Zmf = Zmr and
Mmf = −Mmr. There are also some parameters that are difficult to identify due
to poor low-frequency excitation; they can be obtained from the static trim
estimation. The remaining free parameters for our vehicle are listed in Table
2, from which it is clear that all the parameters identified using CIFER are
highly accurate.
TIME-DOMAIN VALIDATION
Time-domain validation is a straightforward way of evaluating the predictive
capability of the identified model for test inputs, such as steps or doublets,
that are dissimilar to the ones used in the identification process. Four
individual flight records are collected, each corresponding to one of the test
inputs. The recorded inputs are used as inputs to the identified model, and the
ducted-fan vehicle's responses predicted by the model are compared to the
responses recorded during the flight tests. The comparison results are
depicted in Image 9, which indicates an excellent fit between the values
predicted by the identified model and the flight data.
Image 9.
Time-domain validation.
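A hedged sketch of this validation step is shown below: the identified linear state-space model is driven with the recorded flight-test inputs and its prediction is compared with the measured outputs. A, B, C, D and the logged arrays are placeholders for the identified matrices and the flight records.

```python
# Hedged sketch: time-domain validation of an identified state-space model.
import numpy as np
from scipy import signal

def validate_model(A, B, C, D, t_log, u_log, y_log):
    """Simulate the identified model on logged inputs and score the fit."""
    sys = signal.StateSpace(A, B, C, D)
    _, y_pred, _ = signal.lsim(sys, U=u_log, T=t_log)   # predicted outputs
    rms_error = np.sqrt(np.mean((y_pred - y_log) ** 2, axis=0))
    return y_pred, rms_error
```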
Robust flight control system design
During the BIT-TDF project, we explored several control approaches from
theoretical development to final experiments. After the evaluation of all the
control approaches tested in this project, it became clear that the way to
follow is the systematic application of structured robust control based on a
nonsmooth optimization algorithm. In fact, the newly developed nonsmooth
optimization algorithm in [15] is well suited for robust control synthesis of
the cascaded structure of the ducted-fan dynamics. Moreover, the
fixed-structure controller is easy to implement and fine-tune within a
standard flight control infrastructure based on PID or other low-order terms.
In addition, the robust framework is able to elegantly reject strong
disturbances and is easily extended to robust gain-scheduling control. After a
phase of extensive simulation and experimentation, structured robust control
was adopted as the single approach for the flight control system design.
FEEDBACK CONTROL CONFIGURATION
The key task of the control system is to regulate the flight velocity of the
vehicle to a desired value and keep it there irrespective of exogenous wind
disturbances. There are, however, countless possible control structures that
could achieve this task. From the perspective of engineering implementation,
decentralized collections of simple control elements such as PIDs, static
gains and structured or limited-complexity controllers are preferred.
Moreover, it is known that most practical problems can be controlled by
low-order controllers with good performance. In this context, we chose the
very simple control architecture shown in Image 10, which consists of two
feedback loops. The inner loop (static output feedback), acting on the fully
actuated attitude dynamics, provides stability augmentation and channel
decoupling. The outer loop (four PI controllers), cascaded with the inner
loop, regulates the inner loop by providing the proper attitude variables as
virtual inputs so that the desired velocity references are asymptotically
approached.
Image 10.
Feedback control configuration.
GAIN SCHEDULING COMPENSATION
The control task is complicated by the fact that the plant dynamics are
nonlinear throughout the flight envelope, so the flight controllers must be
adjusted as a function of flight speed, which is also called the scheduling
variable. For the BIT-TDF, we selected equilibrium conditions for five flight
conditions, corresponding to forward flight speeds u = 0, 2.5, 5, 7.5, and
10 m/s, and linearized the flight dynamics around each equilibrium.
Motivated by [16], we combine the linear controller design and the
scheduling process by parameterizing the controller gains as low-order
polynomials in forward flight speed and directly tuning the polynomial
coefficients over the desired envelope. This approach results in a compact
scheduling formula and guarantees smooth gain variations. More
specifically, we use quadratic polynomials for the four PI gains, where the
three coefficients Kj0, Kj1 and Kj2 are tunable parameters and V is the
forward flight speed, and simple first-order polynomials for the static output
feedback matrix.
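A hedged sketch of these scheduling formulae follows; the coefficient values in the example are placeholders, since in the actual design they are tuned by the nonsmooth optimisation:

```python
# Hedged sketch: polynomial gain schedules in forward flight speed V.

def scheduled_pi_gain(V: float, K0: float, K1: float, K2: float) -> float:
    """Quadratic schedule: K_j(V) = K_j0 + K_j1*V + K_j2*V^2."""
    return K0 + K1 * V + K2 * V * V

def scheduled_sof_entry(V: float, S0: float, S1: float) -> float:
    """First-order schedule for one static output feedback entry."""
    return S0 + S1 * V

# Example: evaluate one PI gain at the five design points (0-10 m/s);
# the coefficients here are purely illustrative.
gains = [scheduled_pi_gain(V, 0.8, 0.05, -0.002)
         for V in (0.0, 2.5, 5.0, 7.5, 10.0)]
```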
DESIGN OF THE STRUCTURED CONTROLLER
Image 11.
LFT of an augmented system and a structured controller.
Using simple block-diagram manipulations, we can transform the initial
control architecture of Image 10 into a design-oriented form, as illustrated in
Image 11, known as the standard lower linear fractional transformation
(LFT). Two exogenous inputs, wind disturbances and reference signals, are
gathered in W, and the exogenous performance-related outputs, the weighted
error signals and actuator effort signals, are gathered in Z. δ denotes the
gain-scheduling variable, which in our design is the forward flight speed. P is
the augmented linear parameter-varying plant composed of the original plant
dynamics and the weighting filters used to express the control objectives. K
is the controller to be designed, scheduled as a function of δ. The structured
controller K is just a reformulation of the architecture described in Image 10:
the outer-loop PIs and the inner-loop static output feedback form,
respectively, the 4-by-4 diagonal PI block and the constant matrix SOF of the
compound controller K.
The H∞ control problem is formulated as the following program
It should be emphasized that when a smaller and more practical controller
space is chosen, as with the compound controller K, problem (23) becomes
much harder to solve, and this is exactly the reason for adopting nonsmooth
optimization techniques.
For good tracking accuracy, the weights associated with each output should
include integral action in the low-frequency range, while a finite gain at high
frequencies is useful for reducing overshoot. Therefore, high-gain low-pass
filters are selected. To attenuate noise, increase robustness to unmodeled
dynamics, and prevent high-frequency control activity that would saturate the
actuators' physical limits, high-pass filters Wu are selected.
We also consider the multivariable stability margins, also called disk
margins, discussed in [17], which can be viewed as robustness against
simultaneous gain and phase variations at both the plant inputs and outputs.
The target gain and phase margin values are converted into a normalized
scalar function to be minimized, where x(s) = (1 + L)(1 − L)⁻¹, D is a
diagonal scaling matrix to be computed during the iteration process, L is the
corresponding open-loop response to be shaped, and the parameter α is the
minimum distance between the open-loop response and the critical point.
Here, we impose a minimum distance of 0.6, which guarantees margins of
nearly 5 dB and 40°. Note that the weighting function blocks Mu and My (see
Image 11) are used precisely to express the stability margin requirement.
The resulting control problem (23) is concretely rewritten as the following
nonsmooth program, where x is the decision variable in the space formed by
collecting all tunable entries of the structured controller, m denotes a given
operating point of the target flight envelope, and the functions fm(x) and
gm(x) are of the following form, where S is the closed-loop sensitivity
function from setpoints to tracking errors.
Image 12.
Multivariable stability margins.
The structured robust controller synthesis is carried out in the framework of
the nonsmooth H∞ optimization algorithm using the MATLAB hinfstruct
solver. After some trial-and-error iterations and design repetitions, the
smallest values of the L2 gain for the hard constraint gm(x) and the soft
constraint fm(x) were found to be 0.9994 and 0.935, which indicates that the
resulting design satisfies all the design requirements. A visual inspection of
the multivariable stability margins can be made in Image 12. The yellow
region denotes margins that violate the requirements. The blue plot
represents the margin objective optimized in the calculation process, which is
an upper bound on the actual margin. Note that the five different curves
represent the five discrete operating points. The actual stability margins,
shown as red dotted lines, indicate that the stability margin requirements are
satisfied at all frequencies in the designed flight envelope. The frequency
responses related to the hard constraints are shown in Image 13, which
indicates that disturbance rejection and tracking errors at low frequencies can
be expected to perform well and that fast actuator movement is effectively
suppressed inside the closed-loop bandwidth.
Image 13.
Singular value plots for the closed-loop transfer functions (left for S and right
for KS).
Simulation and flight test results
The simulation and experimental tests have been carried out in the ARC
flight arena at Beijing Institute of Technology. The employed hardware
architecture for the experimental validation is briefly introduced in Section
2.1.1. A variety of simulation tests were conducted before the actual flight
experimentation. As a representative example, shown in Image 14, we
present the gust alleviation of the closed-loop system in hovering flight. A
strong wind gust was intentionally injected into the x-, z-, and y-axes of the
body frame, with peak amplitudes of 9, 3, and 7 m/s, respectively. The
system response clearly demonstrates that the wind gust effect is effectively
attenuated.
Image 14.
Simulation results of wind gust alleviation effect at hovering.
Image 15.
Flight test-velocities.
Image 16.
Flight test-angular rates.
Image 17.
Flight test-Euler angles.
Image 18.
Flight test-control inputs.
The experiment proposed in this part consists of an accelerated flight
maneuver after a short hovering flight. The desired maneuver is a horizontal
forward and inverted flight with a trapezoidal velocity profile in the
longitudinal direction of the inertial frame. Throughout the maneuver, the
desired heading rate remains constant at rd = 0. The resulting responses
versus the reference signals are illustrated in Images 15–18. The test results
show that the predefined flight maneuver is well achieved, with small
fluctuations of the actual responses around the reference signals. Such
fluctuations are within an acceptable range and are mainly caused by the
limited accuracy of the output feedback measurement signals. A certain
degree of deviation in the control input signals is due to both model
uncertainties and gust disturbances during the flight test.
Conclusion
Summarizing, following the systematic design and implementation of the
BIT-TDF, including unconventional configuration design, hardware and
software development, frequency-domain identification and navigation and
control algorithms, it is expected that the flight control development
described in this chapter will find successful application in support of
unmanned aerial vehicle (UAV) development. The main concern is to
propose approaches that are effective and easy to implement and run onboard
UAVs. The proposed flight control law has been implemented and tested on
the BIT-TDF ducted-fan vehicle, and current work aims to propose and
implement more advanced and practical techniques. Considering the
requirements of various practical applications, extensive contributions could
be achieved by extending the BIT-TDF research towards physical interaction
with the environment, in terms of desired or even unpredictable contacts.
Design of Large Diameter Mine Countermeasure Hybrid Power Unmanned
Underwater Vehicle.
The naval mine is one of the most cost-effective weapons in the naval arsenal
and has many synergetic effects in maritime warfare. Mines are relatively
simple and cheap and can be laid from any type of sea or air platform. The
combat effectiveness of naval mines ranges from defending important and
high-value targets at sea, ports and offshore structures to denying hostile
forces access to the coastal zone [1]. Mines can quickly stop or seriously
impair surface and submarine forces and amphibious or seaborne attacks.
Their flexibility and cost-effectiveness make mines attractive to the less
powerful belligerent in asymmetric and maritime warfare.
Mines can be used as offensive or defensive weapons and as tools of
psychological warfare in rivers, lakes and oceans. As offensive weapons,
mines are placed in enemy waters and across important shipping routes to
destroy or damage both civilian and military vessels [2]. Defensive
minefields protect tactical stretches of coast from enemy ships and
submarines, or keep them away from sensitive and high-value targets. The
threat of mines is increasing due to recent technological developments, such
as autonomous systems and computer systems with artificial intelligence
capabilities. Many solutions to MCM problems have been developed, despite
the difficulties of detection, identification and classification. Unmanned
systems have also cleared the way to sweeping and neutralizing mines safely,
without the involvement of human beings.
Mine countermeasures (MCM) are tactical measures to clear or neutralize the
mine threat. Tactical MCM operations can proceed with both passive and
active measures. Passive MCM relies on minimizing the ship's physical
signature, such as the emitted acoustic, magnetic and electric signals that
mines can sense. Active MCM operations are minesweeping, neutralization
and mine hunting, which aim to sweep or destroy mines. Influence
minesweeping uses acoustic, magnetic and pressure signals to detonate the
targeted mines [1]. Mechanical sweeping uses towed minesweeping tools to
cut the cables of moored mines. After the mines have floated to the surface,
they are detonated by gunfire or explosives.
Mine hunting is difficult in most littoral regions near enemy territory. Access
to these tactically important areas from the sea requires minesweeping or
neutralization operations. Keeping humans out of dangerous minefields
makes various unmanned autonomous MCM systems an attractive option
[3]. Many of the advanced technologies operated in manned mine warfare
could be carried over into the development of unmanned and autonomous
MCM vehicle systems. Unmanned systems integrated with emerging
technologies are the minesweepers and hunters of future MCM operations. A
focused technology effort is needed to incorporate unmanned systems into
the mine countermeasure ship and other related MCM fleet forces. It is time
to press ahead with establishing fleet requirements for unmanned MCM
systems, leading to programming decisions that allow mine hunting and
minesweeping missions to be performed without anyone onboard,
eliminating the risk to personnel.
The physical and operational capacity of small-displacement UUVs greatly
limits what UUVs can provide as multimission assets with effective
autonomy in a real combat situation. New platform designs that are truly
viable organic intelligent assets should be based on large diameter UUVs [4].
Realizing the full potential of the UUV system as a truly autonomous
undersea vehicle (AUV) in MCM warfare will require a sufficient energy
system for very long combat endurance, intelligent mission management
capability and mine disposal weapons.
Since the technology to deactivate or eliminate mines is already available, an
effort to build a larger and heavier UUV system should be discussed, in order
to produce an unmanned system that integrates complete MCM UUV
capabilities. With a larger diameter UUV system, however, there are still
problems regarding the vehicle's operating time and speed at sea and the
vehicle and mission management systems with appropriate hardware
configurations. Larger displacement UUVs must be integrated into a new
platform design so that they can be viable organic assets. Realizing the full
potential of the UUV as a truly autonomous undersea vehicle (AUV) in
warfare will begin with the transition to a large-displacement vehicle.
We investigate the strategy and threat of mine warfare and recommend
optimal concepts of operation for mine countermeasure operations in future
maritime warfare. In this chapter, we present the CONOPS development of a
future large diameter MCM UUV in Section 2, the specifications of the
MCM UUV system configuration in Section 3, a discussion of system
effectiveness in Section 4, and the conclusion in Section 5.
Concepts of operation for the mine countermeasure UUV
IMPORTANCE OF INTELLIGENCE, SURVEILLANCE AND
RECONNAISSANCE (ISR)
The intelligence, surveillance and reconnaissance (ISR) aspects of mine
warfare are of particular importance, since that is where mine warfare really
starts, regardless of the specific purposes for which the collected information
is used. As with most complex military operations, mine warfare operations
are inherently joint operations between theater forces. The tactical commands
from theater headquarters actually operate the mine warfare forces in theater.
The intelligence agencies provide vital ISR information [5], and Marine
Corps units must work closely with the navy in any amphibious operation, as
well as interface with army mine warfare and other operations ashore. Navy
and Coast Guard units must work together closely in inshore mine warfare
operations to ensure good communication.
Battlefield reconnaissance data are essential for both safe far- and near-shore
operations. Mine warfare is the strategic, operational and tactical use of sea
mines, and the countermeasures to defeat them [6]. Undersea physical
characteristics, ocean climate variations and the environment are considered
before an operation is initiated. Ocean environmental characteristics are very
important for determining where mines should be placed, how many are
needed and how to deliver them to position. Oceanographic parameters have
the greatest impact on mine warfare operations and require the most
enhanced predictive modeling capabilities across a wide range of oceanic
processes. These will greatly enhance the war fighter's ability to turn vast
amounts of oceanographic information into knowledge used to control
battlefield operations [7]. The major physical oceanographic data for mine
warfare consist of the current patterns, salinity, temperature, bottom status
and water clarity of the area [8].
Bathymetry forms the boundary condition for all fluid motions and can
extend beyond the local region. Ocean currents are long-time-scale fluid
flows arising from a wide variety of processes, such as direct wind forcing
and tides. The complexity of the forces driving current flow requires
combined meteorological and oceanographic analysis to determine how these
forces interact with each other, to understand the time scale of the variability
driving the current flow and to understand how this may affect the mission
[5].
Mine laying and mine countermeasure operations are parts of the main
subsystem of mine warfare operations [6]. Better priority weight should be
given to the technical exploitation of threat mines, mine warfare indications
and warning (I&W) tasking. There are several points of consideration in mine
warfare at the combat zone or landing assault area. Thus, currently, both
near-term and long-term mine reconnaissance systems (NMRS and LMRS)
are developed to expand organic and dedicated MCM capabilities. The MCM
command center should disseminate this information at all command levels and
provide rules of engagement (ROE) to counter hostile miners as well as relevant
environmental databases, such as the mine warfare environmental decision
aids library (MEDAL) and the contingency planning tool [2].
COMMUNICATION AND CONTROL (C2) IN MCM OPERATIONS
The naval ISR group collects and processes all potentially relevant data and
information in order to develop optimized mine detection procedures and
clearance capabilities organic to carrier (CV) and surface battle groups.
With this ISR information and these operational procedures, naval forces can
identify, avoid or neutralize mines within a tactically acceptable time limit
and with minimum operational risk [9]. On-scene MCM capabilities, introduced
as organic capabilities into all carrier battle groups (CVBGs) and surface
battle groups, would be realized with the completion of MCM communication and
control network systems.
Major MIW capabilities include intelligence collection and surveillance,
notification of imminent mining, interdiction, postinterdiction intelligence
evaluation and dissemination and passive MCM threat awareness and
signature control. With the wide dissemination and availability of battle
space information, the MCM control center has to communicate frequently as
the real-time situation develops.
Communication systems related to the mission operations are data
communication links where tactical maneuvering information on MCM
missions is exchanged. Each division of the communication links is evaluated
for data quality, security and interoperability. As organized supporting
systems, command, control, communications, computers and intelligence (C4I)
systems are introduced into the naval forces. Effective C4I systems must
allow MCM functions to be performed from a variety of platforms in a highly
dynamic environment. Database systems for mine
identifications will include such features as high-resolution environmental
mapping data and explicit simulations. These can be used for detailed
operational planning, training and rehearsal [8].
In the future MCM operations, the decision-making software, and combat
information display systems will be driven by data from extensive multisource collections. The C4I architecture will be networked to ensure the
existence of communication paths between platform and detachment despite
uncertain point-to-point links. Shared awareness of combat scene
information, such as analysis efforts, off-board forecasts and historical
environmental databases, must be robust, real time and cooperative. With all
these doctrinal evolutions, up-to-date technologies and hardware (H/W) with
appropriate operating software (S/W) should be provided for smart and
safe MCM operations in the future [2]. Fundamental issues for evaluating
communication systems for MCM are bandwidth, range, covertness and
required infrastructure [3]. The combined C4I system architecture will be
central for the coordinated and multiplatform operations.
MINE HUNTING, SWEEPING AND NEUTRALIZATION
[10]
In MCM operations, a large amount of data is sensed and produced to detect,
identify and classify mines and to start sweeping and neutralization
operations.
Active countermeasures are ways to clear a path through a minefield or
remove it completely. This is one of the most important tasks of any MCM
fleets and task forces. Minesweeping is either a contact sweep, a wire
dragged through the water by one or two ships to cut the mooring wire of
floating mines, or a distance sweep that mimics a ship to detonate the mines.
Mine hunting differs in concept and operation from minesweeping, although
some mine hunters can perform both operations. Mines are hunted using sonar,
side scan sonar or synthetic aperture sonar, then inspected, classified and
destroyed either by remote-controlled unmanned vehicle systems or by divers.
Mine hunting began as soon as mines were first used, but it was only after
the Second World War that it became truly effective [11].
A more drastic MCM method is mine breaking which is simply to load a
cargo ship with cargo that makes her less vulnerable to sinking and drive her
through the minefield, letting the ships to be protected follow the same path.
An updated form of mine breaking is application of small unmanned systems
that simulate the acoustic and magnetic signatures of larger ships and are
built to neutralize mines. In the future, more autonomous unmanned systems
will be involved in every step of minesweeping and neutralization operations,
for the safety of human beings and the effectiveness of MCM operations. Applications
of RF and laser optic technology are considered as potential alternative
methods of efficient underwater communication systems [3].
DATA PROCESSING AND DECISION MAKING AI LOGIC
SYSTEM
All data processing in intelligent MCM operations is carried out through a
systematic data fusion approach. AI machine learning and perception, feature
extraction from real-time imagery, intelligent navigation and mine-clearing
operation sequences are integrated with the mission management processes.
The main control center will support and coordinate automatic classification
of mines and other ordnance, as well as intelligent obstacle avoidance
navigation using data from the navigation sensors. The image and pattern
identification-processing techniques, which have been adapted from video
image and sensor signal, focus on classification of mines by the recovery of
explicit feature-based descriptions of man-made objects [12]. Silhouette
descriptions and classification are used to support the dynamic view
positioning and automatic object classification.
The algorithmic partitioning in the front-end digital signal processing
(DSP) is dedicated to image acquisition, dynamic range equalization, image
segmentation and region-of-interest identification [13]. The parallel
processing engine, supporting applications of statistical evaluation of
linearity or regular curvature, provides gradient extraction, edge thinning
and threshold algorithms [14]. All the data from the traditional navy
database, the mine warfare
environmental decision aid library (MEDAL) systems on MIW, MCM and
tactical oceanography could also be accessed by the main processing unit of
the MCM UUV system and fed into the identification and classification
processors [15]. A typical MCM data-processing flow of MCM UUV
systems is shown in Image 1.
Image 1.
Mine warfare environmental data flow.
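As a rough illustration of the front-end DSP chain just described (dynamic
range equalization, segmentation/thresholding and gradient extraction), the
following Python sketch processes a sonar intensity frame held in a NumPy
array. The functions, the threshold and the random test frame are
illustrative assumptions, not the actual processing pipeline of the MCM UUV.

import numpy as np

def equalize(img):
    """Dynamic range equalization: stretch intensities to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def gradient_magnitude(img):
    """Gradient extraction via finite differences (edge strength map)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def regions_of_interest(img, thresh=0.6):
    """Threshold the equalized image to flag bright, mine-like returns."""
    return equalize(img) > thresh

# Example on a random 'sonar' frame; a real frame would come from the sensor.
frame = np.random.default_rng(1).random((64, 64))
mask = regions_of_interest(frame)
edges = gradient_magnitude(equalize(frame))
print(f"{mask.sum()} ROI pixels, max edge strength {edges.max():.2f}")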
Reacquisition and relocalization of predesignated mines or minefields
require a huge amount of signals, communications and data packages from
various sensor systems. The data sets for reconstructing three-dimensional
(3D) and two-dimensional (2D) models are very large and very difficult to
transfer through current acoustic carriers in underwater environments [5].
Additional information comes from the distance and directional angle of
illumination of the light source, which assists in the reconstruction of the
3D or 2D model of the mines.
Identification and reconfirmation of mines and mine-like object (MLO)
classifications are critical factors for mine disposal operations.
Communication carriers with efficient and diverse bandwidth characteristics
are critically needed at the main control center of MCM operations to gain
access to high-quality mine detection sensor data from remote areas, due to
the lack of computational capabilities of the existing sensor data-processing
systems [16, 17].
MISSION MANAGEMENT SYSTEM
If the duration of MCM operations at sea is extended to more than 50 days,
it is necessary to maintain the clandestine nature of the MCM operation in
the enemy littoral zone; in these cases, therefore, mission management is
critical to autonomous MCM operations. With the introduction of system
autonomy over mission goals, which is a relatively new area of research, this
system will sustain clandestine operations and the power system requirements
for functionality [18]. Coordinated MCM mission management systems optimize
available sensors and systems, regardless of the host platform, to ensure
that the most effective asset is used when and where it is most needed.
Fundamental to the MCM operational concept is to locate minefields, identify
mine-free areas accurately and clear mines efficiently as soon as possible
[19]. This area focuses primarily on unmanned autonomous vehicle
intelligence, since these vehicles often have the greatest redundancy and
because they have the most intricate machine-readable, autonomous mission
plans. Models
of the vehicle, including its planning/control systems, and of the operating
environment can be linked together to form an assessment tool [16]. This tool
helps to verify interface compatibility and basic functionalities, as well as to
facilitate exploration of control options and evaluation of potential benefits.
The mission control data required to define the current mission objective,
the vehicle's dynamic characteristics and the environmental data are
collected from external sensors and provided by the user, as they are
specific to the effective MCM operation. The autonomous mission-planning
algorithm translates the mission requirements into a mission plan, a set of
constraints and execution sequences. An integrated mission planner and
power management algorithm would be combined in this intelligent system with
the motion and power control subsystems [10].
The MCM mission management configuration [17] of the MCM UUV consists of
degrees of perception, intelligent control and management that define a
sense-plan-act or sense-react behavior doctrine. Functional limitations of
vehicle sensor systems imposed by the combat environment require alternative
courses of vehicle control, in which defined mission goals are factored into
the control system appropriately. A common approach to the mission
implementation process is to develop functions which are grouped and
sequenced to perform overall tasks through an artificial intelligence (AI)
expert system architecture [12].
This architecture is based on a traditional perception-action loop; it
performs low-level sensor signal processing and feature extraction to assist
mine classification, the MCM mission planner and the vehicle control system. The MCM UUV mission
management system focuses on mine classification and dynamic
repositioning, which optimizes the aspect relative to the designated mine
target and mine clearing. Obstacle avoidance navigation requires relatively
sparse sensor data in the dynamic construction of world models for
environment [12]. In the main control block, the perception/classification
processor is combined with dynamic position sensors on a highly
maneuverable platform. The MCM expert system has the capability to perceive
the object of interest from multiple perspectives, which then increases the
probability of classification for the mine targets. A key concept in data
fusions of expert system includes the employment of heuristic knowledge,
which incorporates constraint knowledge associated with target
characteristics, environment and attitude of vehicle.
This provides the basis for interaction with the object of interest, and for
dynamic perceptions that provide active sensor management and vehicle path
planning in navigation and guidance [16]. A second unique aspect of the AI expert
system architecture is the implementation of sensing, evaluation and action
encapsulated as subordinate tasks under the integrated mission control
system. It optimizes machine-generated direction of task-organized approach
and initiates signal for the course of action [6]. The MCM AI expert system
architecture and its relationships between the executors, and the data/signal
processing are shown in Image 2.
Image 2.
Operations of MCM mission management expert system.
VEHICLE MANAGEMENT SYSTEM
The capability of precise navigation and operational data collection is
critical to ensure safe navigation of the vehicle and the achievement of
system objectives. To resolve the vehicle position at the submeter level, a
compact low-power solid state inertial measurement unit (IMU) has been
incorporated [20]. This unit measures the change of 3D velocity, angular
velocity and temperature as well as performs corrections of thermally induced
drift. The IMU is augmented by a compact Doppler Sonar System using the
Kalman filter or other processing techniques. A precise navigational compass
and clinometers provide heading information and navigation frame correction
data for noise [21].
Simultaneous localization and mapping (SLAM) techniques [22] are utilized
as a navigational tool and are adapted for reconfirmation of designated mine
localizations with autonomous navigation for obstacle avoidance to a safe
margin. All the information is filtered and processed in a data processing
and distribution unit and distributed for navigation, SLAM processing and
mine neutralization procedures. With updated environmental 3D map and
obstacle information, the MCM UUV navigation system can be guided and
controlled within guided optimal paths to the targets with a degree of
accuracy [22].
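As a minimal illustration of the Kalman-filter fusion of IMU and Doppler
sonar data mentioned above, the following Python sketch filters a
one-dimensional velocity estimate: the IMU acceleration drives the
prediction step and sparse Doppler pings drive the correction step. The
noise levels and the 1-D simplification are assumptions for illustration
only, not the vehicle's actual navigation filter.

import numpy as np

def kalman_velocity(accels, dvl_meas, dt=0.1, q=0.05, r=0.02):
    """Returns filtered velocity estimates for each time step."""
    v, p = 0.0, 1.0                      # state estimate and its variance
    out = []
    for a, z in zip(accels, dvl_meas):
        v, p = v + a * dt, p + q         # predict: integrate IMU acceleration
        if z is not None:                # correct when a Doppler ping arrives
            k = p / (p + r)              # Kalman gain
            v, p = v + k * (z - v), (1 - k) * p
        out.append(v)
    return out

accels = [0.1] * 50                      # constant forward acceleration, m/s^2
dvl = [0.01 * i if i % 10 == 0 else None for i in range(50)]  # sparse pings
print(f"final velocity estimate: {kalman_velocity(accels, dvl)[-1]:.3f} m/s")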
OPERATIONAL ENDURANCE AND AVAILABLE POWER
SYSTEM ALTERNATIVES
System endurance in unmanned underwater vehicle (UUV) operations
consists of available total energy sources, vehicle system objectives,
maneuvering path plan and design parameter characteristics. In general, the
more the endurance is desired for UUV operations, the more the coordinated
efforts are required during the vehicle design stage regarding the structural
design, energy systems and power management. UUV endurance with respect
to UUV system activities is characterized by range of missions, operation
time and kinds of energy systems and is controlled by the capacity of onboard energy, hull structure of the vehicle, optimal propulsion and energy
management system [23]. Until now, these properties have been investigated
separately, with system endurance treated as an individual component. In this
study, we integrate all these components and make overall suggestions for
system endurance with respect to hull shape, power systems, propulsion efficiencies
and respective activities of missions.
ENERGY SOURCES FOR MCM UUVS
Power system considerations dominate the design of UUVs, due to the fact
that usually the energy source is the most limiting factor for mission
operations of autonomous vehicles. The energy system of UUVs has been a
major issue due to its impact on the ultimate performance and extension of
UUV missions [24]. There are strong desires to minimize the size, cost and
energy consumption rate for all aspects of UUV operations. In the operation
of unmanned vehicles, missions with high speed and longer endurance, such
as mine countermeasure (MCM), antisubmarine warfare (ASW), and
intelligence, surveillance, and reconnaissance (ISR), need more powerful and
sophisticated energy systems, such as fuel cells and hybrid systems in
addition to battery power [23].
Since the power required to propel an underwater vehicle is roughly
proportional to its wetted surface area and to the cube of its forward
velocity, while the stored energy capacity is proportional to its volume, the
mission duration or range achievable at a given velocity varies directly with
vehicle size. Information on UUV energy sources, gained via analysis of
batteries, fuel cells, internal combustion engines and other available energy
sources, is found in Jane's Underwater Technology Information. Important UUV
power system performance metrics consist of energy and power, specific energy
and power, usable state of charge, voltage response under load, calendar life
and charge acceptance. Specifically, power and energy density and physical
volume are critical to UUV system design. Image 3 [24] shows the specific power and
energy properties of major UUV energy sources.
Image 3.
Energy source characteristics, Ragone Chart [24].
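The scaling argument above (propulsion power growing with wetted area and
the cube of speed, stored energy growing with volume) can be turned into a
rough endurance estimate. The following Python sketch does so under stated
assumptions; the drag coefficient, wetted area, hotel load and pack size are
placeholders, not data for any actual vehicle.

RHO = 1025.0          # seawater density, kg/m^3

def propulsion_power(v, area, cd=0.005, eta=0.5):
    """Shaft power (W) at speed v (m/s); cd is a skin-friction-style
    coefficient referenced to the wetted area (assumed value)."""
    drag = 0.5 * RHO * cd * area * v**2      # drag force, N
    return drag * v / eta                    # divide by propulsive efficiency

def endurance_hours(energy_kwh, v, area, hotel_w=200.0):
    """Hours of operation from a pack of energy_kwh at constant speed v."""
    total_w = propulsion_power(v, area) + hotel_w
    return energy_kwh * 1000.0 / total_w

# Example: a large-diameter UUV with ~12 m^2 wetted area and a 40 kWh pack.
for v in (1.0, 1.5, 2.0, 3.0):
    h = endurance_hours(40.0, v, area=12.0)
    print(f"{v:.1f} m/s -> {h:.0f} h endurance, range {v * 3.6 * h:.0f} km")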
Based on the current technological development of battery systems, a
high-performance battery is the most favorable choice for autonomous vehicles
in terms of performance, availability and cost-effectiveness. With battery
applications, to facilitate the addition of battery packs to the vehicle, the
hull shape should be redesigned to be longer or wider. Such hull reshaping
changes the overall vehicle drag coefficient and the share of energy
available for the UUV's propulsion. As UUV energy systems are characterized
by specific energy or power density per unit volume or weight, adding energy
to the system increases the vehicle length [23]. When battery packs are
added, the midsection must be lengthened (L) at constant diameter (D) in
order to house the additional battery packs. This changes the aspect ratio
and increases the vehicle drag coefficient. Increased vehicle drag requires
more propulsion power, which consumes a portion of the added energy. The
sensitivity to added battery packs is accounted for by changing the axial
drag coefficient while other conditions, including hotel loads, remain
unchanged.
Based on the Ragone Chart characteristics [23], the preferred long-term
approach to using hydrogen is the fuel cell. Fuel cells use a process that is
essentially the reverse of electrolysis to combine hydrogen and oxygen in the
presence of a catalyst to generate electricity. Fuel cells are much more
efficient than ICEs, often topping 70% [24, 25]. The main problem with fuel
cells is cost; the other primary issue is durability. Both of these renewable
fuels have lower heating values (Btu/gallon) than their gasoline and diesel
counterparts, resulting in higher fuel consumption when measured on a volume
basis. Diesel engines offer better fuel savings than gasoline engines,
batteries and fuel cells in terms of specific energy containment and give
good mileage in fuel consumption (gallons/mile) and load-specific fuel
consumption (gallons per ton-mile). Depending on the engine and operating
conditions, a diesel engine can provide up to 25% lower fuel consumption than
a gasoline engine.
Considering more than 50 days of field operation of the MCM UUV and the
current state of battery system technology, a combination of a diesel
internal combustion engine (ICE) and effective battery systems could be
realistic for a larger diameter, long-endurance MCM UUV system. The internal
combustion system gives relatively high specific power and is proven,
convenient technology, whereas battery systems give operational convenience.
We tried to integrate a small diesel engine connected to the battery system,
and to modify the hull of the UUV for snorkeling operation [25], for both
recharging and propulsion of the MCM UUV. This
diesel battery hybrid power system is designed to be controlled by vehicle
management computers and the main AI expert mission management system.
In this power option, we consider appropriate snorkeling systems and
structural accommodations.
CONOPS OF HYBRID MINE COUNTERMEASURE UUV
PROSPECTS FOR THE CONCEPT OF OPERATIONS
The main concept of MCM operations using UUV systems is to reduce incoming
threats to the naval fleet through the employment of a robust, highly
autonomous vehicle unit which is capable of engaging in operations and
executing neutralization procedures. Neutralization procedures include
moving the mine out of its original place, precise delivery of a charged
device to a desired location or the firing of projectiles to blast
previously localized volume, drifting, floating and bottom mines in deep and
shallow water zones [15, 16].
The main body of the new MCM UUV has fully trained AI expert systems with
MCM databases to implement mission movement, in addition to vehicle and
contingency operational management. The MCM UUV system carries expendable
small UUV sensor/neutralizers with a formed charge and is able to fire a
projectile to reach and explode designated mines. Before it begins actual
disposal activities, the control module of the UUV unit requests
confirmation of the mine identification from the mother ship via acoustic
and RF communication links [8].
In the envisioned concept of operations, a UUV unit uses its high-capacity
communication links to get prior mine target information from the MCM
operation center at the mother ship, which is located at a standoff distance
of more than 50 miles [6] on the high seas, as shown in Image 4. The vehicle then
initiates an adaptive engagement plan autonomously along its trajectory with
the available information from the navigation sensors. While compensating
for winds, waves and currents along the disposal range, it will try to navigate
accurately to the designated mine target. All the way to the designated mine
position, the launched disposal device maintains the targets in the field of the
imaging sonar. After neutralizing the designated mine, the UUV unit
performs a neutralization damage assessment, and reports that the mission
has been accomplished [18].
Image 4.
Concept of MCM UUV system operations.
System design of a large diameter MCM UUV unit
The current status of unmanned system technology and the participation of
unmanned vehicle fleets in naval operations can transform the concepts of
future navy MCM tactics, weapons and man-machine interoperability in the
field. In the near future, MCM vessels with C4I capability and an MIW
database will act as main control centers for a variety of unmanned vehicles
in enemy territory, conducting MCM operations without the presence of men in
the minefield [9].
LARGE DIAMETER UUV [16]
The conventional smaller diameter unmanned vehicle system has major
operational difficulties in its sensor systems and vehicle endurance, which
limit search, launch and recovery operations. If we can design a larger
unmanned vehicle, we will have more payload and energy storage for longer
endurance. Such vehicles can be a force multiplier that increases the
operational capability of submarines and surface ships [7].
The options for hull shape and the various subsystem configurations of the
vehicle define the set of vehicle design options, which are evaluated against
the design requirements. Some important design options are the fineness ratio
and block coefficient, which dictate the basic packaging spaces and the
sizing of all other vehicle subsystems [12].
MAIN CONTROL CENTER CONFIGURATIONS
The autonomy of the system and the spectrum of operations are fundamental
characteristics of MCM operations. The main platform has mission management
blocks that automatically perform various MCM operational procedures, such as
contact with mines and obstacles from multiple sensor data sources,
management of the neutralization process, environmental data and bottom
mapping [17]. For mission accomplishment, organic MCM platforms and
operations with various sensors must follow the guidance of the system
management command, which derives from rigorous analysis, experimentation,
modeling and simulation on board [18]. The mission of an MCM fleet operation
is divided into five segments: launch, transit, execution of mission, return
transit and recovery, each defined by key mission and environmental
parameters such as range, speed, ocean current and salinity, and by various
MCM mission-related execution orders [12].
Advanced technologies applied to the MCM main mission management system
include situationally reactive mission control suites, smart sensor systems
of systems and dynamic intelligent mine classification processors with MCM
databases. The vehicle management system controls precise intelligent
undersea navigation, intelligent sensor systems and obstacle avoidance
measures. Application of up-to-date AI technologies to the vehicle's
functionality and the mission effectiveness of the MCM UUV system is
implemented with expert system blocks, AI pattern classification and
efficient power management systems in our work [12]. The MCM UUV unit
provides autonomy of vehicle systems through incorporation of a
platform-independent autonomous controller that supports a high degree of
autonomy, highly precise low-power navigation and machine vision in support
of automatic classification [26]. A mission management function block
diagram of the MCM UUV is given in Image 5.
Image 5.
Function diagram of AI mission management system.
A capability of directing all aspects of the multifaceted MIW campaign plan
is needed to bring the various MCM capabilities together, providing unity of
effort in defeating the mine threat.
NEUTRALIZATION WEAPON
There are several state-of-the-art weapon systems to dispose of or detonate
mines effectively and economically, such as laser guns, projectile guns and
small charge delivery devices. Furthermore, confidence in job completion
requires the capability of accurate battle damage assessment (BDA). An
underwater projectile travels in a formed cavity; water, due to its density,
has a profound impact upon the terminal velocity of the implant at the
target. A suitable weapon technology applied to the MCM UUV is a lightweight
composite 30 mm launcher that would implant a round filled with either high
explosives (HE) for an explosive hard kill or reactive material for a
soft-kill burn [3].
Similar technology was developed to counter roadside improvised explosive
devices using .50 caliber weapons. A 30 mm implant would be usefully larger
and could integrate a compliant fusing device; utilizing a detonator enables
digital fusing and affords either timed or controlled detonation, including
detonation by an acoustically transmitted command. A 30 mm launcher provides
sufficient terminal velocity to penetrate 5/10 inch cold-rolled steel from a
range of 30 feet [13, 26].
The currently achieved standoff range of 30 feet, at which the UUV should
shoot to detonate the mine, is not sufficient to ensure the safety of MCM
UUVs. Shooting from longer ranges requires significant basic research and
development, both in material strengths and in achieving precise sonar
fire-control accuracies, before truly safe standoff ranges are achievable.
ENERGY AND POWER MANAGING SECTION
Considering the operational combat field endurance limit of more than 50
days for the MCM UUV and the current status of battery systems technology,
the combination of a diesel internal combustion engine (ICE) and effective
battery systems could become a reality. The high specific power generation of
the internal combustion system gives effective operation of the vehicle and
can provide a stable recharge power source of the battery system. Integration
of a small diesel engine connected to the battery systems, and modification of
the UUV hull structure for the snorkeling operation could give better
alternatives for both recharging and propulsion of the MCM UUV at the same
time [25].
A diesel submarine is a very good example of hybrid power supplying and
sharing systems. The two or more sets of diesel engines in most diesel
submarines can run propellers or can run generators that recharge a huge
battery bank, or work in combination mode; one engine driving a propeller
and others driving a generator. When a submarine needs to run its diesel
engines, it must surface or cruise just below the surface of the water using
a snorkel; once the batteries are fully charged, the submarine can dive for
underwater operations [8]. These diesel battery hybrid power systems are
controlled by vehicle management computers and a main AI expert mission
management system. Combined power generation, and the control system
structure are given in Image 6.
Image 6.
Power management system of MCM UUV.
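A minimal sketch of the snorkel/charge/dive cycle described above is given
below in Python. The state names, thresholds and battery parameters are
invented for illustration and do not represent the authors' actual power
management logic.

from enum import Enum, auto

class Mode(Enum):
    DIVE = auto()       # submerged, running on battery power
    SNORKEL = auto()    # near surface, diesel running, battery charging

class HybridPowerManager:
    def __init__(self, capacity_kwh=40.0, low=0.25, high=0.95):
        self.capacity = capacity_kwh
        self.soc = 1.0                    # state of charge, 0..1
        self.low, self.high = low, high
        self.mode = Mode.DIVE

    def step(self, hours, load_kw=0.5, charge_kw=6.0):
        """Advance the cycle by `hours`, draining or charging the battery."""
        if self.mode is Mode.DIVE:
            self.soc -= load_kw * hours / self.capacity
            if self.soc <= self.low:      # battery low: come up to snorkel
                self.mode = Mode.SNORKEL
        else:
            net = charge_kw - load_kw     # diesel charges while carrying load
            self.soc = min(1.0, self.soc + net * hours / self.capacity)
            if self.soc >= self.high:     # charged: dive again, stay covert
                self.mode = Mode.DIVE
        return self.mode, self.soc

pm = HybridPowerManager()
for _ in range(48):                       # simulate 48 one-hour steps
    mode, soc = pm.step(1.0)
print(mode, f"SOC={soc:.2f}")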
System measure of effectiveness
SYSTEM EVALUATION MODEL
Recently, UUV systems have emerged as a viable technology for conducting
underwater search and neutralization operations in support of naval MCM
missions. In the final phase of the system design process, either conceptual
or actual, justification studies for the proposed design should be carried
out with functional and cost-effectiveness evaluations. In this section,
analytical frameworks for evaluating the proposed MCM disposal UUV unit are
developed based in part on the current US naval underwater ship design
procedure [27, 28].
The evaluation models provide a means to relate the effectiveness metrics to
the system-level performance parameters. These individual capabilities can
be stated in terms of vehicle subcomponents, such as sensors, data storage,
the processing unit, communication systems, navigation instrumentation and
disposal payload items. The evaluation framework is based primarily on an
approach that combines several well-known systems engineering practices and
decision-making methods in a framework suitable for naval ship design [29].
The general approach of the measure of effectiveness (MOE) investigation is
to make the high-level model as generic as possible and to increase detail
and resolution with each progression into the lower-level models [27]. This is
accomplished by developing separate model subcomponents and linking
them together to form the overall system model.
SYSTEM EFFECTIVENESS MODEL
For the entire MCM evaluation framework, the specific operational
requirements can be defined as follows: MCM operations with mine
reconnaissance, classification, localization and mine neutralization;
autonomous operations with minimal reliance on support platforms; and safe
recovery of the vehicle system unit. The effectiveness model has been
established by considering the operational requirements for MCM autonomous
vehicle systems and comparing those requirements to the existing MOE to
determine where changes are needed, as in Image 7 [27].
Image 7.
Structure of system MOE evaluations.
Thus, for the MCM UUV disposal system unit, the mission time, mission
accomplishment, autonomy, communication and covertness form the highest
level of the proposed MOE hierarchy [29]. As shown in Image 7, the MOE
evaluation for the proposed design has the following components: system
endurance, neutralization success, system autonomy, communication links
and clandestine operations.
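To make the hierarchy concrete, the following Python sketch rolls the five
top-level MOE components listed above up into a single score as a weighted
sum. The component scores and weights are invented placeholders; the study
does not publish numeric weights.

moe_components = {
    "system_endurance":       0.80,   # normalized scores in [0, 1]
    "neutralization_success": 0.70,
    "system_autonomy":        0.65,
    "communication_links":    0.75,
    "clandestine_operations": 0.90,
}

weights = {                            # assumed relative priorities, sum to 1
    "system_endurance":       0.20,
    "neutralization_success": 0.35,
    "system_autonomy":        0.20,
    "communication_links":    0.10,
    "clandestine_operations": 0.15,
}

overall_moe = sum(weights[k] * moe_components[k] for k in moe_components)
print(f"Overall system MOE: {overall_moe:.3f}")   # -> 0.745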
Effective area coverage rate is the better measure for mine search and
actual neutralization activity, while operational time range is better for
intelligence, surveillance and reconnaissance (ISR) operations involving
long-term mine detection over wide areas. The effective area coverage rate
can be defined as the ratio of the total search area to the total amount of
time required to complete the MCM missions, from launch to recovery of the
UUV system. The duration of the designed mission time is fundamentally based
on UUV system hardware capability related to the energy source, speed,
operational load and hotel power consumption [30].
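The following Python sketch computes the effective area coverage rate
exactly as defined above, dividing the searched area by the total mission
time from launch to recovery; the sortie numbers are illustrative
assumptions.

def coverage_rate_km2_per_h(search_area_km2, t_launch_h, t_transit_h,
                            t_search_h, t_return_h, t_recovery_h):
    """Ratio of area searched to *total* mission time, not just search time."""
    total_h = t_launch_h + t_transit_h + t_search_h + t_return_h + t_recovery_h
    return search_area_km2 / total_h

# Example sortie: 18 km^2 swept; launch/recovery and transit add 8 hours.
rate = coverage_rate_km2_per_h(18.0, 1.0, 3.0, 20.0, 3.0, 1.0)
print(f"Effective coverage rate: {rate:.2f} km^2/h")   # -> 0.64 km^2/h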
Mine neutralization or sweeping success is the main objective of MCM
operations, and this MOE represents the estimated probability of
search/classification of mines, as well as mission accomplishment for mine
clearing. Mine reconnaissance and clearance are the two basic MCM missions,
and the major objectives of mine reconnaissance are accurate search,
localization and containment of the designated mines among the contacts.
Search level refers to the cumulative probability of mine detection and
relocalization, classification and identification within specified MCM
operational areas. For the mine hunting and neutralization phase, the MOE
will be scored from the minesweeping levels and the search level, confirming
relocalization accuracy. The measures of mine neutralization success are
defined by the performance of the individual disposal weapon system and by
successful identification of mines, which is expressed by probability
measures based on the history of the mission maneuvering trajectory, the
performance of the identification sensor systems and the conditions of the
complex underwater environment [28].
The autonomy measure represents mission management and vehicle operations
related to the independence of the system from human oversight for the
mission tasks. The area of mission management consists of execution/service
commands, communication links among MCM operations and logistics support
relating to launch and recovery of the small UUV unit. The mission
management requirement is specified in terms of discrete host responsibility
alternatives, such as the performance of system platforms, remote command
and control (C2) and the integration of mission activity by subdivisions via
operation executions [28].
The measure of dynamic vehicle operations is also based on the degree of
intelligence of vehicle maneuvering, obstacle avoidance and optimal path
planning. The degree of autonomy of vehicle operations is determined by the
level of guidance/navigation/control (GNC) of the vehicle and the obstacle
avoidance/optimal path planning required during the MCM mission operations.
The nature of this kind of MOE characteristic is well defined in the
Department of Defense (DoD) levels of autonomy for autonomous vehicle
criteria, by the number and capability of processing units and the database
capacity for decision making within specific missions [29, 30].
Summary
In this study, we investigate the current MCM systems and evaluate the
technologies to be improved for future mine neutralization operations. The
configuration of the hybrid MCM UUV system offers an effective future system
design that relies on a battery power system in conjunction with a diesel
engine and an integrated AI expert autonomous system.
We try to make a larger and heavier UUV system with typical MCM missions and
mission management algorithms with appropriate hardware configurations. The
proposed larger displacement UUVs will be integrated into new platform
designs in order for these systems to become viable organic assets.
Realization of the full potential of the UUV as a truly autonomous undersea
vehicle (AUV) in warfare is supported by the proposed UUV system structure
and the functional specification of the expert-system-controlled MCM UUV
subsystems [30]. A hybrid power system is introduced, effectiveness
discussions are presented and a progressive transition guideline is given
for a future larger displacement vehicle.
KINEMATICS
Kinematics
Kinematic Performance Measures and Optimization of Parallel
Kinematics Manipulators: A Brief Review
Serial kinematics mechanisms (SKMs) have been widely used for different
applications. Although SKMs have many advantages, they also have drawbacks
such as low stiffness, accumulating pose error, low agility, low
payload-to-weight ratio and complicated inverse kinematics. Hence, to
overcome these drawbacks, parallel kinematics mechanisms (PKMs) are used,
particularly for more demanding tasks such as high-speed and high-precision
applications. In spite of their many advantages, PKMs in general also have
some drawbacks, such as a smaller workspace, complicated forward kinematics
and singularity issues. To alleviate these drawbacks, optimization with
various techniques is commonly conducted to improve the weaknesses while
maintaining the advantages. In terms of the number of objectives being
optimized, the optimization can be either single-objective or
multi-objective. In most cases, more than one objective must be optimized.
Furthermore, some objectives quite frequently conflict with each other. For
example, most PKMs usually require not only a larger workspace but also a
stiffer structure with lower mass. In fact, enlarging the workspace usually
requires longer links, which results in a reduction of stiffness and an
increase of mass. In multi-objective optimization, different objectives
might be picked based on their priority, which depends on the application.
Therefore, in this chapter, a comprehensive review of a number of
performance indices relevant for different applications is presented. This
is followed by a review of the optimization techniques used to design
different systems to satisfy a certain objective or multiple objectives.
This is extremely important given the nonlinearity of parallel link
manipulator systems and the conflicting nature of the different performance
indices, which could be counterintuitive to optimize by trial and error;
hence, mathematical schemes are the solution.
Performance measures
There are many measures or indices to indicate the performance of a robot.
Patel and Sobh [1] classified them as either local or global, kinematic or
dynamic, and intrinsic or extrinsic. The local measures depend on the robot
configuration, whereas the global measures evaluate the robot performance
across the whole workspace. The global performance index (GPI) can be
obtained by integrating the local performance index P over the workspace W,
as given by Eq. (1):

GPI = \frac{\int_W P \, dW}{\int_W dW}   (1)

If the local performance index cannot be expressed analytically, discrete
integration, as given by Eq. (2), can be used:

GPI \approx \frac{1}{n} \sum_{i=1}^{n} P_i   (2)

In the latter case, the accuracy of the integration depends on the number of
points n used for the evaluation. If the inclusion of a large number of
points is too computationally expensive, fewer representative sampling
points can be used.
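A minimal Python sketch of the discrete evaluation in Eq. (2) is given
below; the local index chosen here (the inverse condition number of a
Jacobian) and the toy Jacobian function are assumptions used only to make
the example runnable.

import numpy as np

def local_index(jacobian):
    """Inverse 2-norm condition number: 0 near singularity, 1 at isotropy."""
    s = np.linalg.svd(jacobian, compute_uv=False)
    return s.min() / s.max()

def global_index(jacobian_at, samples):
    """Discrete GPI of Eq. (2): mean of the local index over sampled poses."""
    return np.mean([local_index(jacobian_at(q)) for q in samples])

def jacobian_at(q):
    """Toy 2-DOF Jacobian; replace with the real robot model."""
    return np.array([[np.cos(q[0]), 0.3], [0.1, np.sin(q[1]) + 1.2]])

rng = np.random.default_rng(0)
samples = rng.uniform(-np.pi, np.pi, size=(1000, 2))
print(f"GPI over {len(samples)} samples: {global_index(jacobian_at, samples):.3f}")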
The kinematic measures indicate the kinematic behavior of the robot,
whereas the dynamic measures are related to dynamic properties of the robot.
The intrinsic measures indicate the inherent characteristics of the robot
regardless of its task, whereas the extrinsic measures are related to the robot
task. The widely used performance measures include workspace, closeness to
or avoidance of singularity, dexterity, manipulability, stiffness, accuracy,
repeatability and reliability. Some of them are discussed below. The
performance measures should be considered during the design phase of the
robot. Optimal design usually considers one or several performance measures
as the objective function(s) to be optimized.
WORKSPACE
The workspace is the set of points (locations) which can be reached by the
end effector. It is driven by the mobility of the robot which includes the
number and types of its degrees of freedom (DOF), and constrained by the
length of links, range of the joints, and interference between the components.
The workspace evaluation usually includes its size (area/volume) and shape.
The shape can be expressed by, for example, aspect ratio of the regular
workspace. In general, larger and better-shaped workspace is required.
Another way to characterize the workspace is by using the workspace volume
index [2] or the footprint ratio [3], which is defined as the ratio between
the workspace volume and the volume of the machine.
The first step toward achieving a better workspace, before optimizing the
geometrical parameters, is selecting a better topology. For example, many
mechanism designs include sliders (gliders) in order to get a larger
workspace, such as the Hexaglide, Linapod, Pentaglide, sliding H4 and
Triaglide.
The robot workspace is commonly classified into several types as follows [2]:
● Constant orientation workspace (translation workspace), which defines reachable points with a constant orientation of the moving platform.
● Orientation workspace, which defines reachable orientations while the tool center point is fixed.
● Reachable workspace (maximal workspace), which defines reachable points with at least one orientation of the moving platform.
● Inclusive workspace, which is the reachable workspace in a given orientation range.
● Dexterous workspace, which defines reachable points with any orientation of the moving platform.
● Total orientation workspace, which is the dexterous workspace in a given orientation range.
● Useful workspace (sometimes also called used workspace), which defines the part of the workspace to be used for a specified application. It is usually a regular workspace such as a rectangle, circle, cuboid, sphere or cylinder.
The useful workspace is usually of the highest interest, as it indicates the
part of the workspace which can really be utilized for the application. Baek
et al. [4] presented a method to find the maximally inscribed rectangle in
the workspace of serial kinematics manipulators (SKMs) and parallel
kinematics manipulators (PKMs).
A complete representation of the workspace requires a six-dimensional space.
However, graphical representation is only possible up to three-dimensional
space. For this reason, it is common practice to represent the position
workspace separately from the orientation workspace. The workspace of a
planar mechanism can be plotted in a two-dimensional plot, whereas that of a
spherical or spatial mechanism can be plotted in a three-dimensional plot.
The workspace plot can be presented in either a Cartesian or polar
coordinate system in the case of a 2D plot, and in either a Cartesian,
cylindrical or spherical coordinate system in the case of a 3D plot.
Plotting the graphical representation of the workspace is easier for SKMs
but is not always easy for PKMs. In the latter case, the graphical
illustration of the workspace can only be applied to PKMs with no more than
3 DOF. For PKMs with more than 3 DOF, n − 3 DOF should be fixed in order to
be able to graphically illustrate the workspace. Depending on which DOF are
fixed, the workspace will be different.
In general, there are three main ways to determine and plot the workspace:
the geometrical approach, the discretization numerical approach and the
non-discretization numerical approach. Early works on the workspace
determination of PKMs were conducted with the geometrical approach. Bajpai
and Roth investigated the workspace of PKMs and the influence of leg lengths
on the workspace. Three years later, Gosselin [6] presented his work on the
constant-orientation workspace determination of a 6-DOF PKM, followed by
Merlet, who presented the orientation workspace determination, and Kim et
al., who proposed an algorithm to determine the reachable and dexterous
workspace. As planar PKMs require different treatments, Gosselin and Jean,
followed by Merlet et al., presented the workspace determination of planar
PKMs. All of the aforementioned works use the geometrical approach.
In the geometrical approach, the true boundaries of a PKM workspace are
obtained from the intersection of the boundaries of every open-loop chain
which composes the PKM. This approach is fast and also accurate. To make it
easier and much faster, CAD software can also be utilized, as in the work
proposed by Arrouk et al. One of the main drawbacks of this approach is its
lack of general applicability, since different robot topologies might need
different techniques to apply this approach. In other words, this approach
usually has to be tailored to the considered robot. Another drawback of this
approach is the difficulty of including all the constraints.
In the discretization numerical approach, a discretized bounding space which
covers all possible points in the workspace is created, and each point is
subsequently tested, by utilizing the inverse kinematics along with the
constraints, to determine whether it belongs to the workspace or not. This
approach is sometimes called binary representation, since it assigns binary
numbers during the evaluation: "1" is assigned if a point is reachable and
it is therefore plotted, and "0" is assigned if it is unreachable and
therefore not plotted. The main advantage of this approach is its
applicability to all types of PKMs, as well as its intuitiveness. Moreover,
this approach can include all the constraints. However, the accuracy of this
approach depends on the size of the discretization steps. Also, small voids
inside the workspace cannot be detected unless the discretization steps are
small enough to capture the voids. A method similar to the discretization
approach is the Monte Carlo method, in which a large number of discrete
active joint points within the joint range are input to the forward
kinematics and the resulting end effector position points are plotted.
Further treatment of the Monte Carlo results, in order to determine the
workspace boundaries or compute the workspace volume, can be conducted by
putting the workspace points into the discretized bounding space used in the
discretization approach.
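The following Python sketch implements the binary discretization test
described above for a toy planar two-legged parallel mechanism: a grid point
is accepted if the inverse kinematics of every leg is feasible, i.e., its
distance from each base anchor lies within that leg's reach annulus. The
anchor positions and leg-length limits are illustrative assumptions.

import numpy as np

BASE = [np.array([-0.2, 0.0]), np.array([0.2, 0.0])]   # fixed anchors
L_MIN, L_MAX = 0.15, 0.55                              # leg length limits, m

def reachable(p):
    """Inverse-kinematics feasibility test: every leg within its limits."""
    return all(L_MIN <= np.linalg.norm(p - b) <= L_MAX for b in BASE)

# Discretized bounding box; 1 = reachable ("plotted"), 0 = unreachable.
xs = np.linspace(-0.8, 0.8, 161)
ys = np.linspace(-0.8, 0.8, 161)
grid = np.array([[reachable(np.array([x, y])) for x in xs] for y in ys])

cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
print(f"Estimated workspace area: {grid.sum() * cell:.3f} m^2")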
Some recent works using the discretization numerical approach include Bonev,
who proposed a new approach to determine the three-dimensional orientation
workspace of 6-DOF PKMs by using the discretization approach. Castelli et
al. presented an algorithm based on the discretization approach to determine
the workspace, the workspace boundaries, the workspace volume and the
workspace shape index of SKMs and PKMs. Dash et al. presented a
discretization method to determine the reachable workspace and detect
available voids of a PKM.
Beyond the aforementioned two approaches, several works proposed
non-discretization numerical methods to determine the workspace of PKMs.
Some of these methods are as follows:
● The Jacobian rank deficiency method, which is only practical for the determination of the constant orientation workspace.
● The numerical continuation method, which can avoid singularity points but is only practical for the determination of the constant orientation workspace.
● The constrained optimization method, which is modified from the numerical continuation method.
● The boundary search method, which is based on constrained non-linear programming.
● The principle that the velocity vector of the moving platform cannot have a component along the normal of the boundary; however, this method does not work for mechanisms with prismatic joints, and it is difficult to include the mechanical limits and the interference between links.
● The interval analysis method, which can deal with almost any constraints and any number of DOF.
Recently, Bohigas et al. presented a branch-and-prune technique which can
determine all the workspace boundary points of general lower-DOF (3-DOF or
lower) SKMs and PKMs. This technique overcomes the limitation of the
numerical continuation method. Furthermore, Gao and Zhang proposed the
simplified boundary searching (SBS) method, which integrates the geometrical
approach, the discretization method and the inverse kinematics model of a
parallel mechanism. Saputra et al. proposed a swarm optimization approach to
determine the workspace of a PKM.
JACOBIAN MATRIX
The Jacobian matrix maps the relation between the velocities at the task
space (moving platform) and the velocities of the active joints.
Furthermore, it also maps the relation between the active joint load and the
task wrench. It is discussed here because it is related to many kinematic
performance measures. Given that the velocity kinematics of the robot is
expressed by

A \dot{x} = B \dot{q},   (3)

where \dot{x} is the end effector twist and \dot{q} is the actuator twist,
then A is called the forward Jacobian matrix, B is the inverse Jacobian
matrix, and the total Jacobian matrix J, with \dot{x} = J \dot{q}, is given by

J = A^{-1} B.
As long as the Jacobian matrix is unit-consistent (homogeneous), it can be
directly used in the formulation of Jacobian-based performance measures such
as the Jacobian condition number and manipulability. A problem appears when
the Jacobian matrix is unit-inconsistent (non-homogeneous) and accordingly
does not have an appropriate physical meaning. To address this problem,
there are several ways found in the literature to normalize an inconsistent
(non-homogeneous) Jacobian matrix, including the following:
● Using the natural length or characteristic length
● Using a scaling matrix
● Using a weighting factor
● Using the power transition concept
● The point-based method
● The general and systematic method
● The homogeneous extended Jacobian matrix
SINGULARITY
The simplest classification of singularity in PKMs is given by Gosselin and
Angeles. By considering only the behavior of the active joints and the
end-effector, they classified the singularities in PKMs into types which are
mathematically determined by the singularity of the two Jacobian matrices in
the kinematics of the robot given in Eq. (3). The three types of singularity
are the following:
● Type 1 singularity (also called direct kinematic singularity, forward kinematic singularity, serial singularity or sub-mobility) occurs when the forward Jacobian matrix A is singular. When this kind of singularity occurs, it is not possible to generate some velocities of the end-effector. In other words, very small changes in the joint space do not affect the end-effector pose. In these configurations, the mechanism loses one or more degrees of freedom.
● Type 2 singularity (also called inverse kinematic singularity, parallel singularity or over-mobility) occurs when the inverse Jacobian matrix B is singular. It corresponds to the appearance of uncontrollable mobility of the end-effector, because it is possible to move it while the joints are locked. At the corresponding configurations, the mechanism gains one or more uncontrollable DOF. In other words, the end-effector can move without the joints moving. Equivalently, the stiffness of the PKM is locally lost.
● Type 3 singularity (also called combined singularity) occurs when both the forward Jacobian matrix A and the inverse Jacobian matrix B are singular. When this kind of singularity occurs, the end-effector can move when the joints are locked, and at the same time the end-effector pose does not change due to very small changes in the joints.
Furthermore, singular configurations can be obtained by observing the
Jacobian matrices or by a geometrical approach.
A more general discussion of singularity was delivered by Zlatanov et al.,
who included the passive joints in the singularity evaluation. They
classified the singularities into redundant input (RI) singularity, which
corresponds to serial singularity; redundant output (RO) singularity, which
includes parallel singularity as well as the so-called constraint
singularity; and the so-called actuator singularity, which represents
non-zero passive joint velocities while the actuators are locked and the
end-effector has zero velocity. Moreover, higher order singularity has also
been discussed in some works, but it does not give much practical benefit.
JACOBIAN CONDITION NUMBER
From its mathematical expression, the value of the Jacobian condition number
(or simply the condition number) ranges from 1 to infinity, where infinity
indicates singularity and 1 indicates isotropy of the Jacobian matrix.
Alternatively, it can also be expressed by its inverse value, called inverse
Jacobian condition number, the value of which ranges from 0 to unity where
0 indicates singularity and unity indicates isotropy of the Jacobian matrix.
When the Jacobian matrix is close to singularity, it is called ill-conditioned.
On the other hand, the Jacobian matrix is called well-conditioned if it is far
from singularity. Furthermore, when the Jacobian matrix is isotropic, it
means that the velocity and force amplification is identical in all directions.
The commonly used norms to define the Jacobian condition number are as
follows:
● The 2-norm, which is given by the ratio of the maximum and minimum singular values of the Jacobian matrix.
● The Frobenius norm, which is very advantageous because it is an analytical function of the robot parameters and hence raises no serious concern if its gradient is evaluated, and because it avoids the computation of the singular values.
● The weighted Frobenius norm, which can be tailored to a specific context by adjusting its weights, in addition to all of the mentioned advantages of the Frobenius norm.
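As a small numerical illustration, the Python sketch below computes the
2-norm condition number and the local condition index (its inverse) from the
singular values of a Jacobian; the sample matrix is an arbitrary stand-in
for a real robot Jacobian.

import numpy as np

def condition_number(J):
    """2-norm condition number: sigma_max / sigma_min (infinite at singularity)."""
    s = np.linalg.svd(J, compute_uv=False)
    return np.inf if np.isclose(s.min(), 0.0) else s.max() / s.min()

def lci(J):
    """Local condition index in [0, 1]: 0 = singular, 1 = isotropic."""
    k = condition_number(J)
    return 0.0 if np.isinf(k) else 1.0 / k

J = np.array([[1.0, 0.2],
              [0.1, 0.8]])
print(f"kappa(J) = {condition_number(J):.3f}, LCI = {lci(J):.3f}")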
The Jacobian condition number is a measure of kinematic dexterity (often
simply called dexterity). It indicates closeness to singularity, kinematic
uniformity (isotropy), dexterity and accuracy. The kinematic dexterity is
defined as the capability of the robot to move the end-effector in all
directions with ease. In fact, the kinematic isotropy of the robot
represents its dexterity, as more isotropy indicates that the robot can move
with the same ease in all directions. However, this is still not complete
information about the dexterity, as it only indicates how equal the ease of
motion is in different directions, not how easy the motion is: motion in all
directions may require either small effort or large effort. Manipulability,
which is reviewed below, gives more complete information about the kinematic
dexterity.
Another interpretation of the Jacobian condition number is how large an
error in the task space will result from a small error in the joint space.
The more ill-conditioned the Jacobian matrix, the larger the task-space
error resulting from a small joint-space error. Based on this fact, the
Jacobian condition number indicates the accuracy of the manipulator.
The Jacobian condition number is a local property; it depends on the robot
configuration (posture). To evaluate it globally, the global condition
number (or global condition index, GCI) is used. The GCI is obtained by
integrating the local condition index (LCI) over the workspace. Since the
Jacobian condition number is the common indicator of dexterity, the GCI is
also commonly called the global dexterity index (GDI). A map showing the
values of the LCI over the workspace is commonly referred to as a dexterity
map.
Since the condition number is based on the Jacobian matrix, it faces a unit
inconsistency issue when translation and rotation are mixed. In this case,
the Jacobian matrix should be homogenized.
MANIPULABILITY
The manipulability measure was first introduced by Yoshikawa as a local
measure (local manipulability index, LMI), which means that it is dependent
on the robot configuration (posture) since it is based on the Jacobian
matrix. It can be evaluated globally by using the global manipulability
measure (GMI), which is the local manipulability measure integrated over the
workspace. Another measure is the uniformity of manipulability, which
represents how uniform the manipulability is across the workspace. Similar
to the Jacobian matrix, the manipulability faces a unit inconsistency issue
when translation and rotation are mixed. As with the Jacobian condition
number, the Jacobian matrix should be homogenized in such a case.
The manipulability is a measure of the input-output efficiency (the ratio of
output performance to input effort). In other words, it represents the
quality of velocity and force transmission (amplification). The
manipulability provides more information about the velocity and force
amplification than the Jacobian condition number: the latter only tells how
isotropic the velocity and force amplification is but not its magnitude,
whereas the former gives the magnitude in addition to the isotropy of the
velocity and force amplification.
Two kinds of manipulability are well known: twist (velocity) manipulability
and wrench (force) manipulability. The former is commonly represented by the
velocity manipulability ellipse/ellipsoid, whereas the latter is represented
by the force manipulability ellipse/ellipsoid. In the velocity
manipulability ellipsoid, the velocity minimum, velocity maximum and
velocity isotropy are represented by the lengths of the ellipsoid axes,
whereas the manipulability is represented by the ellipsoid volume. The major
axis of the manipulability ellipsoid indicates the direction along which the
mechanism can move with the least effort, while the minor axis indicates the
direction along which the mechanism is stiffest, i.e., the direction along
which the mechanism's actuators can resist forces with minimum effort. The
force manipulability uses the duality relation between velocity and force
(between differential kinematics and statics). Besides the manipulability
ellipse/ellipsoid, the manipulability can also be represented by a
manipulability polytope.
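The following Python sketch computes Yoshikawa's manipulability measure and
the manipulability ellipsoid axes for a planar 2R arm Jacobian; the arm and
its pose are assumptions chosen only to give a concrete, runnable example.

import numpy as np

def manipulability(J):
    """Yoshikawa measure sqrt(det(J J^T)); proportional to the ellipsoid
    volume and zero at a singularity."""
    return np.sqrt(np.linalg.det(J @ J.T))

def ellipsoid_axes(J):
    """Axis directions (columns of U) and semi-axis lengths (singular values)."""
    U, s, _ = np.linalg.svd(J)
    return U, s

# Toy Jacobian of a planar 2R arm with unit links at q = (30 deg, 45 deg).
q1, q2 = np.radians([30.0, 45.0])
J = np.array([[-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
              [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)]])

U, s = ellipsoid_axes(J)
print(f"w = {manipulability(J):.3f}, semi-axes = {s.round(3)}")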
Tanev and Stoyanov introduced the use of a normalized manipulability index,
which is bounded between zero and unity. Doty et al. proposed the weighted
twist manipulability and the weighted wrench manipulability. Furthermore, a
new manipulability measure for parallel robots was introduced by Hong and
Kim.
STIFFNESS
Besides the workspace, the stiffness or rigidity of a robot structure plays
a very important role, as it affects the accuracy and repeatability of the
robot. Stiffness is defined as the ability of the robot structure to resist
deformation due to a wrench. A stiffness matrix relates the deformation
vector to the wrench vector. A related concept is compliance (flexibility):
if a structure has high stiffness, it has low compliance. A compliance
matrix is simply the inverse of the stiffness matrix, and vice versa.
Stiffness includes static stiffness and dynamic stiffness. For machine
tools, high stiffness enables machining with high speed and feed while
providing good precision, accuracy and surface finish.
The stiffness of a mechanism depends on the topology, geometry and material
of the mechanism. The overall stiffness is composed of the stiffness of the
fixed base, the moving platform, the joints and the links. The stiffness of
the joints includes that of the active joints (actuators) and the passive
joints. Many works have discussed the influence of the passive joints on the
robot stiffness. The stiffness of the links is usually defined in the axial
direction (axial stiffness), the transversal direction (bending stiffness)
or both. To simplify the stiffness model, a rigid body assumption is
frequently applied to one or several components of the robot. For example,
the joints can be considered elastic while the links are assumed to be
rigid, or vice versa. A more realistic model usually considers both the
joints and the links as elastic. In hybrid machine tools, many works have
proposed the use of a parallel mechanism for the spindle platform and a
serial mechanism for the worktable, as the most flexible part of the machine
is usually the spindle platform. Furthermore, some works have suggested the
use of passive legs to increase the stiffness, such as in the Tricept and
Georg V.
Stiffness is a local property: it depends on the robot configuration (posture). To evaluate it globally, global stiffness measures are used. Furthermore, stiffness varies with the direction in which it is evaluated as well as with the direction of the wrench. Therefore, stiffness can be identified in different directions, either translational directions (translational stiffness) or rotational directions (rotational stiffness).
In the literature, compliance, the inverse of stiffness, is sometimes used instead of stiffness. Several different expressions of stiffness have been used, including the engineering stiffness, the generalized stiffness matrix, and the Cartesian stiffness matrix. The engineering stiffness is a one-dimensional stiffness expression obtained by evaluating the displacement in the same direction as the applied force. The generalized stiffness matrix, according to Quennouelle and Gosselin, includes three contributions: the stiffness of the unconstrained joints, the stiffness due to dependent coordinates and internal wrenches, and the stiffness due to external wrenches. In other words, the generalized stiffness matrix is the sum of these three stiffness components.
The Cartesian stiffness matrix is the most widely used expression of stiffness. Ruggiu showed that the Cartesian stiffness matrix of a mechanism is the Hessian of its potential energy expression. The Cartesian stiffness matrix is symmetric and positive definite or positive semi-definite. However, some researchers concluded that the Cartesian stiffness matrix of an elastic structure coupling two rigid bodies is asymmetric in general and becomes symmetric only if the connection is not subjected to any preloading. Different expressions of the Cartesian stiffness matrix are given by Klimchik and by Quennouelle and Gosselin. The latter authors proposed a Cartesian stiffness matrix that can take into account non-zero external loads, non-constant Jacobian matrices, stiff passive joints, and additional compliances. Furthermore, the Cartesian stiffness matrix can be directly related to the generalized stiffness matrix through the Jacobian matrix.
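As a minimal sketch of this Jacobian-based mapping, assume the simplest virtual-joint setting: rigid links, elastic actuated joints, zero external load, a constant Jacobian, and the inverse-kinematics convention q_dot = J x_dot common for parallel mechanisms. The function and the numbers below are illustrative assumptions, not a formula taken from the works cited above.

```python
import numpy as np

def cartesian_stiffness_parallel(J, k_act):
    """Map actuated-joint stiffnesses to a Cartesian stiffness matrix.

    Under the assumptions stated above (rigid links, elastic actuated
    joints, no external load, constant J with q_dot = J x_dot):
        K_C = J^T diag(k_act) J
    """
    return J.T @ np.diag(k_act) @ J

# Illustrative two-limb planar example (hypothetical numbers)
J = np.array([[1.0,  0.4],
              [0.9, -0.5]])      # one limb constraint per row
k_act = [2.0e6, 2.0e6]           # actuator stiffnesses in N/m
K = cartesian_stiffness_parallel(J, k_act)
print(K)                         # symmetric positive (semi-)definite
```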
In robots with both translational and rotational DOFs, the Cartesian stiffness matrix is unit inconsistent. Consequently, the evaluation of derived stiffness indices such as the stiffness condition number becomes meaningless. To deal with this problem, several approaches have been proposed, including the following:
● Homogenizing the Jacobian matrix (e.g., using a characteristic length) and subsequently using the homogenized Jacobian matrix to calculate the stiffness matrix.
● Eigenscrew decomposition of the stiffness or compliance matrix.
● Principal-axis decomposition through a congruence transformation, making use of the eigenvectors of the translational block of the stiffness matrix.
● Decomposition of the dynamic inertia matrix by transforming variables into dimensionless parameters, which can be applied to the stiffness matrix.
● Decoupling of the stiffness matrix into translational parts and rotational parts.
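As a minimal illustration of the first approach in the list above, the sketch below scales a mixed-unit Jacobian with a characteristic length before computing a condition number. The convention chosen here (dividing the translational rows) and all numbers are assumptions for illustration; the optimal choice of characteristic length is itself a topic discussed in the literature.

```python
import numpy as np

def homogenize_jacobian(J, char_length, trans_rows):
    """Homogenize a mixed-unit Jacobian by dividing its translational
    rows by a characteristic length, so that all entries become
    dimensionally consistent before evaluating condition numbers or
    the stiffness matrix (one common convention among several)."""
    Jh = np.array(J, dtype=float)
    Jh[trans_rows, :] /= char_length
    return Jh

# Hypothetical 3x3 Jacobian of a planar robot with DOFs (x, y, theta)
J = np.array([[0.30, 0.10, 0.00],
              [0.05, 0.25, 0.00],
              [1.00, 1.00, 1.00]])
Jh = homogenize_jacobian(J, char_length=0.2, trans_rows=[0, 1])
print(np.linalg.cond(J), np.linalg.cond(Jh))
```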
Furthermore, to model the robot stiffness beyond the continuous model, which works only for simple systems, the following three models are widely used in the literature:
● Jacobian-matrix-based model, also called the lumped-parameter model or virtual joint method (VJM) model. A one-dimensional VJM was introduced by Gosselin, followed by Anatol et al., who introduced a multi-dimensional VJM. This model is widely used and preferred in robotics since it is analytical, so the same expression works for all configurations of the robot, and it requires a lower computational cost. Its accuracy is lower but still acceptable, which makes it well suited for initial estimates of the robot stiffness as well as for design optimization.
● Finite element model (FEM). In contrast to the lumped-parameter model, this model discretizes the mechanism into many elements and can therefore also be called a distributed model, implying more closeness to the realistic, continuous model. It is widely used in structural mechanics due to its high accuracy. However, it requires a high computational cost and needs a new mesh at every configuration of the robot, which makes it impractical for configuration-dependent analysis. Due to its high accuracy, this model is usually used to verify less accurate models such as the VJM model.
● Matrix structural analysis (MSA) model. This model is actually a special case of the FEM model because it uses one-dimensional finite elements such as beam elements instead of two- or three-dimensional elements such as brick elements. As a result, the computational cost decreases. This model gives a trade-off between accuracy and computational cost.
Modifications and improvements on the aforementioned methods have subsequently been proposed, such as the following:
● Online FEM utilizing MSA with generalized springs.
● VJM combined with an FEM-based identification technique: high accuracy with low computational cost.
● Virtual spring approach, in which the spring compliance is evaluated based on the FEM concept: high accuracy with low computational cost.
Stiffness can also be evaluated experimentally, i.e., from measurement. In this case, the stiffness is obtained from the relation between the measured wrench and the measured displacement. Another way to evaluate the stiffness is by estimation or identification of the stiffness model; a least-squares algorithm or other estimation algorithms can be applied to the measurement data.
As a performance measure, the robot stiffness is represented in the following
different ways in the literature:
● Graphical representations, including stiffness maps, by which the stiffness distribution can be plotted, and other graphical representations such as iso-stiffness curves or surfaces.
● Trace of the stiffness matrix.
● Weighted trace of the stiffness matrix.
● (Minimum, average, or maximum) eigenvalues (and eigenvectors) of the stiffness matrix; for example, the evaluation of the minimum and maximum eigenvalues across the workspace by Li and Xu.
● Mean value of the eigenvalues.
● Determinant of the stiffness matrix, which is the product of its eigenvalues and indicates the area/volume of a stiffness ellipse/ellipsoid; it also indicates how far the configuration is from singularity.
● Norm of the stiffness matrix, which can be its Euclidean norm, Frobenius norm, or Chebyshev norm.
● Center of stiffness or, equivalently, center of compliance.
● Global compliance index, given by the mean value and deviation of the generalized compliance matrix.
● Virtual work stiffness index, which avoids the problem caused by the different units of translation and orientation.
● Collinear stiffness value (CSV).
STIFFNESS CONDITION NUMBER
The stiffness condition number is a local measure: it depends on the robot configuration. As with the Jacobian condition number, the stiffness condition number ranges from 1 to infinity; alternatively, its inverse, which ranges from 0 to 1, can be used. Since the stiffness condition number represents the isotropy or uniformity of the stiffness at a point in the workspace, stiffness ellipses/ellipsoids are commonly used as its graphical representation. As with the Jacobian condition number, different norms can be used to evaluate the stiffness condition number; the commonly used ones are the 2-norm, the Frobenius norm, and the weighted Frobenius norm. The considerations in selecting among them were explained earlier in the discussion of the Jacobian condition number.
The global stiffness condition number is commonly expressed by the global stiffness index (GSI), usually defined as the inverse of the condition number of the stiffness matrix integrated over the reachable workspace and divided by the workspace volume. It depicts the uniformity of the stiffness within the whole workspace.
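A simple way to approximate such a global index numerically is Monte Carlo averaging over workspace samples, since the integral divided by the workspace volume reduces to a mean over uniformly sampled poses. The sketch below is illustrative only; stiffness_at and sample_workspace are hypothetical user-supplied callables, not functions from the literature surveyed here.

```python
import numpy as np

def global_stiffness_index(stiffness_at, sample_workspace, n=10_000):
    """Monte Carlo estimate of the GSI: the inverse stiffness condition
    number averaged over uniformly sampled reachable poses.

    stiffness_at(x)    -> Cartesian stiffness matrix at pose x
    sample_workspace() -> one uniformly sampled reachable pose
    """
    vals = []
    for _ in range(n):
        K = stiffness_at(sample_workspace())
        vals.append(1.0 / np.linalg.cond(K))
    return float(np.mean(vals))
```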
Design optimization
In terms of the number of objectives being optimized, optimization can be
either single-objective (also called single-criteria) or multi-objective (also
called multi-criteria). The simplest way of design optimization is trial and error, in which we pick several values of the design parameters based on intuition, knowledge, or experience, and compare the corresponding objective values. However, this approach is non-systematic and does not cover all possible values of the design parameters, and therefore may not give optimum solutions. In the literature, performance atlases and optimization algorithms are commonly used in the design optimization of mechanisms. A performance atlas graphically presents the relation between the design parameters (link lengths) and a performance measure. Several performance atlases, such as the atlas of workspace, the atlas of GSI, and the atlas of LSI, have been used in the literature. For single-objective optimization, the use of a performance atlas is easy and straightforward. However, multi-objective optimization requires the inspection of several atlases, which can be inconvenient, particularly when some objectives conflict with each other.
Beyond performance atlases, various algorithms have been utilized for the optimization of PKMs and HKMs. Based on their search principles, these techniques fall into two main categories: gradient-based optimization techniques and population-based optimization techniques. The first category comprises local search algorithms; they are deterministic and can be linear or nonlinear depending on the problem. The second category is stochastic and does not need gradient information. One of the most popular population-based techniques is the genetic algorithm, an evolutionary optimization technique based on the idea of natural selection or survival of the fittest. The genetic algorithm can be implemented for both single-objective and multi-objective optimization. For the latter, several techniques have been developed, such as VEGA, NPGA, NPGA-II, NSGA, NSGA-II, PAES, PESA, PESA-II, SPEA, SPEA-II, SPEA-II+, and many others. Beyond the genetic algorithm, several global optimization algorithms have also been proposed in the literature, such as controlled random search (CRS), differential evolution (DE), particle swarm optimization (PSO), quantum particle swarm optimization (QPSO), and artificial neural networks (ANN). The following sections give more details on both types of optimization.
SINGLE-OBJECTIVE OPTIMIZATION
Although single-objective optimization is straightforward, different algorithms may be used to search for the optimal solution. For a parallel mechanism, if only a single objective is to be optimized, it is usually the workspace, since the main drawback of parallel mechanisms is their limited workspace.
Beyond the widely used gradient-based optimization algorithms, various algorithms have been proposed for the single-objective optimization of PKMs. Hosseini et al. used a genetic algorithm to optimize the dexterous workspace of a Tricept PKM. Kordjazi et al. combined a genetic algorithm with fuzzy logic to optimize the Jacobian isotropy of a PKM and showed that the result is better than using a genetic algorithm alone. Arana proposed a methodology to enlarge the workspace of parallel manipulators by means of non-singular transitions. Gao and Zhang proposed the use of particle swarm optimization (PSO) to optimize the workspace of a 3-DOF spatial parallel mechanism.
MULTI-OBJECTIVE OPTIMIZATION
In most cases, more than one objective must be optimized, and quite frequently some objectives conflict with each other. For example, most PKMs require not only a larger workspace but also a stiffer structure with lower mass; in fact, enlarging the workspace usually requires longer links, which reduces the stiffness and increases the mass. Multi-objective optimization can be bi-objective or many-objective, the former being simpler than the latter. Including more than two objectives generally requires more computational cost and makes visual representation more difficult. If more than three objectives are involved, graphical plots can only be drawn for three varying design parameters while the rest are fixed. An alternative approach to reduce the number of objectives is to impose a performance-index threshold as an optimization constraint; however, this approach is only suitable if the need is merely to satisfy the threshold.
In multi-objective optimization, different objectives (criteria) may be selected based on their priority, which depends on the application. The two main methods commonly used for multi-objective optimization, sketched in the example below, are:
● The scalarization method, commonly conducted by combining the weighted multiple objective functions into one composite objective function. While transforming the problem into a single objective function brings simplicity, determining appropriate weights is difficult, even for those familiar with the problem. This approach can be conducted through gradient-based optimization methods as well as single-objective evolutionary methods (such as a single-objective genetic algorithm).
● The Pareto method, which yields non-dominated solutions. This method can be conducted through multi-objective evolutionary methods (such as a multi-objective genetic algorithm).
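The following minimal sketch contrasts the two methods on hypothetical objective vectors (minimization is assumed throughout); the design scores and weights are illustrative, not taken from the literature surveyed here.

```python
import numpy as np

def weighted_sum(objectives, weights):
    """Scalarization: collapse several minimization objectives into one
    composite objective with designer-chosen weights."""
    return float(np.dot(objectives, weights))

def dominates(f_a, f_b):
    """Pareto dominance for minimization: a dominates b if it is no
    worse in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def pareto_front(F):
    """Return the non-dominated rows of an (n_designs, n_objectives) array."""
    return [f for f in F if not any(dominates(g, f) for g in F)]

# Hypothetical designs scored on (negative workspace volume, compliance)
F = np.array([[-2.0, 0.8], [-1.5, 0.3], [-2.2, 1.1], [-1.0, 0.9]])
print(weighted_sum(F[0], weights=[0.7, 0.3]))
print(pareto_front(F))   # the last design is dominated and drops out
```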
Various objectives, depending on application needs, and various algorithms have been proposed for the multi-objective optimization of PKMs. Hodgins optimized the workspace, stiffness, and dexterity of a modified Delta robot using the weighted-sum method. Kelaiaia et al. optimized the kinematic dexterity as well as the dynamic dexterity using genetic algorithms. Wu optimized the GCI and GDI of a 3RRR spherical parallel mechanism, which can be used as an orienting device, using a genetic algorithm. Bounab optimized the dexterous regular workspace and the stiffness of a Delta mechanism using a genetic algorithm. Shijun optimized the GDI, the GSI, and the ratio of the workspace to the work volume using a genetic algorithm. Gao et al. optimized the stiffness and dexterity using a genetic algorithm and an artificial neural network. Abbasnejad et al. implemented particle swarm optimization (PSO) and quantum particle swarm optimization (QPSO) to optimize the workspace and the dexterity as a weighted-sum objective, and showed that QPSO converges faster than PSO. Furthermore, Gao and Zhang introduced a comprehensive index to integrate four different objectives.
Recommendations
It appears that mixed DOFs result in inconsistency of the indices, while many PKMs must have mixed DOFs to perform the required tasks. For this reason, the authors suggest that any new index introduced in the future should overcome this issue in a more natural way, so that the physical insight behind the index is as sound and intuitive as possible. Furthermore, the authors have not found any published work discussing the optimization of a large number of performance measures. While a good number of attempts have handled up to three or four objective functions, it would be practically useful, yet challenging, to handle a larger number of objective functions. Given that determining appropriate weights in the scalarization approach is not easy even for two to four objective functions, it will certainly be more difficult for a larger number of objectives, while the Pareto approach may no longer be practical at that scale. Therefore, the authors suggest the introduction of a new approach or technique to reliably optimize a larger number of objective functions.
Conclusion
This chapter provided a comprehensive overview of the literature on a good number of kinematic performance indices that designers would be interested in using during the design of parallel kinematic mechanisms. Kinematic performance indices such as the workspace, the Jacobian condition number, the manipulability, the stiffness magnitude, and the stiffness condition number are of prime interest to designers optimizing parallel kinematic mechanism designs. However, many of these indices conflict; for example, an increase in workspace size may lead to a decrease in stiffness and/or to reduced speed. Therefore, optimization techniques are the means to accommodate such conflicting requirements, by assigning appropriate weights to the different objectives to reflect the designer's interests and priorities. Nevertheless, devising the proper objective function requires extensive experience and insight into the problem. The combination of objectives and the choice of weights strongly affect the resulting design parameters and should therefore be chosen carefully. In summary, this chapter attempted to provide a comprehensive overview of what a parallel kinematic mechanism designer needs to optimize their design and mechanism performance.
Path Planning in the Local-Level Frame for
Small Unmanned Aircraft Systems
The rapid growth of the unmanned aircraft systems (UASs) industry
motivates the increasing demand to integrate UAS into the U.S. national
airspace system (NAS). Most of the efforts have focused on integrating
medium or larger UAS into the controlled airspace. However, small UASs
weighing less than 55 pounds are particularly attractive, and their use is likely
to grow more quickly in civil and commercial operations because of their
versatility and relatively low initial cost and operating expense.
Currently, UASs face limitations on their access to the NAS because they do not have the ability to sense and avoid collisions with other air traffic. Therefore, the Federal Aviation Administration (FAA) has mandated that UASs be capable of an equivalent level of safety to the see-and-avoid (SAA) requirement for manned aircraft. This sense-and-avoid (SAA) mandate is analogous to a pilot's ability to visually scan the surrounding airspace for possible intruding aircraft and take action to avoid a potential collision.
Typically, a complete functional sense-and-avoid system comprises sensors and associated trackers, collision detection, and collision avoidance algorithms. In this chapter, our main focus is on collision avoidance and path planning. Collision avoidance is an essential part of path planning that involves the computation of a collision-free path from a start point to a goal point while optimizing an objective function or performance metric. A robust collision avoidance logic considers the kinematic constraints of the host vehicle, the dynamics of the intruder's motion, and the uncertainty in the state estimate of the intruder. The subject of path planning is very broad, and collision avoidance in particular has been the focus of a significant body of research, especially in the field of robotics and autonomous systems. Kuchar and Yang provided a detailed survey of conflict detection and resolution approaches. Albaker and Rahim conducted a thorough survey of collision avoidance methods for UAS. The most common collision avoidance methods are geometric-based guidance methods, potential field methods, sampling-based methods, cell decomposition techniques, and graph-search algorithms.
Geometric approaches to collision avoidance are straightforward and
intuitive. They lend themselves to fast analytical solutions based on the
kinematics of the aircraft and the geometry of the encounter scenario. The
approach utilizes the geometric relationship between the encountering aircraft
along with intuitive reasoning. Generally, a geometric approach assumes a straight-line projection to determine whether the intruder will penetrate a virtual zone surrounding the ownship. Collision avoidance can then be achieved by changing the velocity vector, assuming a constant-velocity model. Typically, geometric approaches do not account for uncertainty in intruder flight plans or noisy sensor information.
The potential field method is another widely used approach for collision
avoidance in robotics. A typical potential field works by exerting virtual
forces on the aircraft, usually an attractive force from the goal and repelling
forces from obstacles or nearby air traffic. Generally, the approach is very
simple to describe and easy to implement. However, the potential field
method has some fundamental issues. One of these issues is that it is a greedy
strategy that is subject to local minima, although heuristic methods to escape local minima have been proposed in the literature. Another problem is that typical potential field approaches do not account for obstacle dynamics or uncertainty in observation or control. In the context of airborne path
planning and collision avoidance, Bortoff presents a method for modeling a
UAS path using a series of point masses connected by springs and dampers.
This algorithm generates a stealthy path through a set of enemy radar sites of
known locations. McLain and Beard present a trajectory planning strategy
suitable for coordinated timing for multiple UAS. The paths to the target are
modeled using a physical analogy of a chain. Similarly, Argyle et al. present
a path planner based on a simulated chain of unit masses placed in a force
field. This planner tries to find paths that go through maxima of an
underlying bounded differentiable reward function.
Sampling-based methods like probabilistic road maps (PRM) and rapidly exploring random trees (RRTs) have shown considerable success for path planning and obstacle avoidance, especially for ground robots. They often require significant computation time for replanning paths, making them unsuitable for reactive avoidance. However, recent extensions to the basic RRT algorithm, such as chance-constrained RRT* and closed-loop RRT, show promising results for uncertain environments and nontrivial dynamics. Cell
decomposition is another widely used path planning approach that partitions
the free area of the configuration space into cells, which are then connected to
generate a graph. Generally, cell decomposition techniques are considered to
be global path planners that require a priori knowledge of the environment. A
feasible path is found from the start node to the goal node by searching the
connectivity graph using search algorithms like A* or Dijkstra’s algorithm.
The proposed approach in this work will consider encounter scenarios such as
the one depicted in Image 1, where the ownship encounters one or more
intruders. The primary focus of this work is to develop a collision avoidance
framework for unmanned aircraft. The design, however, will be specifically
tailored for small UAS. We assume that there exists a sensor and tracking system that provides state estimates of the intruder's track.
Image 1.
The geometry of an encounter scenario.
Local-level path planning
A collision event occurs when two or more aircraft come within the minimum allowed distance of each other. The current manned aviation regulations do not provide an explicit value for the minimum allowed distance; however, it is generally understood that the minimum allowed or safe distance is at least 500 ft to 0.5 nautical miles (nmi). For example, the near midair collision (NMAC) is defined as a proximity of less than 500 ft between two or more aircraft. Since potential UAS and intruder aircraft cover a wide range of vehicle sizes, designs, airframes, weights, etc., a virtual fixed-volume boundary around the aircraft is chosen as a substitute for the actual dimensions of the intruder.
As shown in Image 2, this volume is chosen as a hockey puck of radius ds and height hs, commonly a horizontal radius of 500 ft and a total vertical extent of 200 ft. Accordingly, a collision event is defined as an incident in which two aircraft pass within less than 500 ft horizontally and 100 ft vertically of each other.
Image 2.
A typical collision volume or protection zone is a virtual fixed volume
boundary around the aircraft.
In this work, we develop the path planning logic using a body-centered relative coordinate system. In this system, the ownship is fixed at the center of the coordinate frame, and the intruder is located at a relative position pr and moves with a relative velocity vr with respect to the ownship.
We call this body-centered coordinate frame the local-level frame because the environment is mapped to the unrolled, unpitched local coordinates, where the ownship is stationary at the center. As depicted in Image 3, the origin of the local-level frame is the current position of the ownship. In this configuration, the x-axis points out the nose of the unpitched airframe, the y-axis points out the right wing of the unrolled airframe, and the z-axis points down, forming a right-handed coordinate system. In the following discussion, we assume that the collision volume is centered at the current location of the intruder. A collision occurs when the origin of the local-level frame penetrates the collision volume around the intruder.
Image 3.
Local-level reference frame.
The detection region is divided into concentric circles that represent maneuver points at increasing range from the ownship, as shown in Image 4, where the radius of the outermost circle can be thought of as the sensor detection range. Let the region of space covered by the sensor be called the workspace. This workspace is discretized using a cylindrical grid, and the ownship is commanded to move along the edges of the grid. The result is a directed weighted graph, where the edges represent potential maneuvers and the associated weights represent the maneuver cost and collision risk. The graph is described by the tuple G(N, E, C), where N is a finite nonempty set of nodes, E is a collection of ordered pairs of distinct nodes from N, each pair being called a directed edge or link, and C is the cost associated with traversing each edge.
Image 4.
Discretized local-level reference workspace. The three concentric circles represent three maneuver points.
The path is then constructed from a sequence of nonrepeated nodes n1, n2, ⋯, nN such that each consecutive pair (ni, ni+1) is an edge in G. Let the detection range dr be the radius of the outermost circle and r the radius of the innermost circle, so that dr = mr. As shown in Image 6, let Ll, l = 1, 2, ⋯, m, denote the l-th level curve of the concentric circles. Assume that the level curves are equally partitioned by a number of points or nodes such that any node on the l-th level curve Ll connects to a predefined number of nodes k in the next level, that is, in the forward direction along the heading axis as depicted in Image 4. The nodes of the graph can be thought of as predicted locations of the ownship over a look-ahead time window. Additionally, we assume that only nodes along the forward direction of the heading axis, that is, x = 0, connect to nodes in the vertical plane. This assumption allows the aircraft to be commanded to climb or descend by connecting to nodes in the vertical plane, as shown in Image 4. Let the first level curve of the innermost circle be discretized into |L1| = k + 2 nodes, including the nodes in the vertical plane. Then, using the notation |A| to denote the cardinality of a discrete set A, the number of nodes in the l-th level curve follows accordingly, and the total number of nodes is |N| = ∑_{l=1}^{m} |Ll|. For example, assume that the start node is located at the origin of the reference map and that k = 3, that is, the ownship is allowed to fly straight or maneuver right or left. Image 5 shows an example of such a discretized local-level map: with k = 3 and m = 3, the total number of nodes in the graph, including the start and destination nodes, is |N| = 39.
Image 5.
Example of discretized local-level map. (a) Top view: location and index of
nodes and (b) side view: location and index of nodes.
Assuming that the ownship travels between the nodes with constant velocity and climb rate, the location ni,l of the i-th node on the l-th level curve in the horizontal plane of the graph is given by the geometry shown in Image 6.
Image 6.
Node locations in the local-level reference frame.
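Since the node-placement equation itself is not reproduced here, the following sketch only illustrates the idea of placing maneuver nodes on concentric level curves in the local-level frame; the angular spread and the per-level node counts are assumptions for illustration, not the chapter's formula.

```python
import numpy as np

def level_curve_nodes(l, r, n_nodes, spread_deg=90.0):
    """Candidate maneuver nodes on the l-th level curve (radius l*r),
    spread symmetrically about the heading axis.

    Illustrative layout only: the chapter defines the exact node
    locations by an equation not reproduced here."""
    angles = np.deg2rad(np.linspace(-spread_deg / 2, spread_deg / 2, n_nodes))
    # Local-level frame: x out the nose, y out the right wing.
    return np.column_stack((l * r * np.cos(angles),   # x (forward)
                            l * r * np.sin(angles)))  # y (right)

# Three level curves (m = 3) with growing node counts (assumed)
for l, n in zip((1, 2, 3), (3, 5, 7)):
    print(f"L{l}:", level_curve_nodes(l, r=200.0, n_nodes=n).round(1))
```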
The main priority of the ownship when it is under distress is to maneuver to avoid predicted collisions. This is an important consideration when assigning a cost to each edge in the resulting graph. The cost associated with traveling along an edge is a function of the edge length and the collision risk. The cost associated with the length of the edge ei,i+1 connecting the consecutive pair of nodes (ni, ni+1) is simply the Euclidean distance between the nodes ni and ni+1.
The collision cost for traveling along an edge is determined by whether, at any future time instant, the position of the ownship along that edge lies inside the collision volume around the predicted location of an intruder. An exact collision cost computation would involve integrating the collision risk along each edge over the look-ahead time window τ ∈ [t, t + mT].
A simpler approach calculates the collision risk cost at several locations along each edge, taking into account the projected locations of the intruder over the time horizon τ. Assuming a constant-velocity model, a linear extrapolation of the current position and velocity of each detected intruder is computed at evenly spaced time instants over the look-ahead time window. The look-ahead time interval is divided into several discrete time instants, and at each of them, all candidate locations of the ownship along each edge are checked to determine whether they are, or will be, colliding with the propagated locations of the intruders. For the simulation results presented in this chapter, the collision risk cost is calculated at three points along each edge in G. If vo is the speed of the ownship, then the distance along an edge is given by voT, where T = r/vo, and the three points are spaced in time by Ts = T/3. Let the relative horizontal and vertical positions of the intruder with respect to the ownship at the current time t be pr(t) and prz(t), respectively. The collision volume is then the set of relative states for which the horizontal range is below ds and the vertical separation is below hs/2 simultaneously.
The predicted locations of each detected intruder over the time horizon T are computed at the three discrete time samples ℓTs, where ℓ = {1, 2, 3}. In Eq. (12), ∞, or the maximum allowable cost, is assigned to any edge that leads to a collision, essentially eliminating that edge and any path passing through it. The total collision risk associated with the i-th edge is then the sum over all M detected intruders.
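A minimal sketch of this edge-cost test follows, assuming the hockey-puck collision volume and the constant-velocity propagation described above; the edge geometry, the speeds, and the large finite number standing in for the infinite cost are illustrative choices, not values from the source.

```python
import numpy as np

DS, HS = 152.4, 61.0   # collision-volume radius and height (m), as in the text

def in_collision_volume(p_own, p_intr, ds=DS, hs=HS):
    """True if the ownship point lies inside the hockey-puck volume
    centered on the intruder: horizontal range < ds and |dz| < hs/2."""
    d_xy = np.linalg.norm(p_own[:2] - p_intr[:2])
    return d_xy < ds and abs(p_own[2] - p_intr[2]) < hs / 2.0

def edge_collision_cost(edge_points, intr_pos, intr_vel, times, big=1e9):
    """Collision cost of one edge: compare the three ownship sample
    points along the edge with the intruder propagated under a
    constant-velocity model at the matching time instants.
    Returns `big` (a stand-in for the infinite cost) on any hit, else 0."""
    for p_own, t in zip(edge_points, times):
        if in_collision_volume(p_own, intr_pos + t * intr_vel):
            return big
    return 0.0

# Hypothetical edge sampled at t + Ts, t + 2Ts, t + 3Ts
Ts = 3.0
pts = np.array([[100.0, 0, 0], [200.0, 0, 0], [300.0, 0, 0]])
cost = edge_collision_cost(pts, np.array([800.0, 0, 0]),
                           np.array([-41.0, 0, 0]), [Ts, 2 * Ts, 3 * Ts])
print(cost)   # the head-on intruder makes the last sample collide
```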
A visual illustration of the collision risk computation is shown in Image 7. The propagated collision volume of a detected intruder and the candidate locations of the ownship over the first time interval [t + Ts, t + 3Ts], in both the horizontal and vertical planes, are depicted in Image 7a and b. Clearly, there is no intersection between the candidate points the ownship may occupy and the propagated locations of the collision volume over this interval; according to Eq. (13), the cost assigned to these edges is zero. Next, all candidate locations of the ownship along each edge over the second time interval [t + 4Ts, t + 6Ts] are investigated. As shown in Image 7c, edges e2,7, e2,8, and e2,9 intersect the predicted intruder location at times t + 4Ts and t + 5Ts, respectively. Similarly, edges e3,15 and e3,16 in the horizontal plane intersect the predicted intruder location at time t + 4Ts, as shown in Image 7d. Accordingly, the maximum allowable cost is assigned to these edges, which eliminates them and the paths passing through them. None of the candidate locations of the ownship over the time interval [t + 7Ts, t + 9Ts] intersect the predicted locations of the intruder, as shown in Image 7e and f: by the time the ownship reaches these edges, the detected intruder will be leaving the map, and consequently a cost of zero is assigned to the edges belonging to the third level curve L3.
Image 7.
Example illustrating the steps to compute the collision risk. In this example,
we have k= 3 and m= 3. (a) Top view: predicted locations of intruder (less
transparent circles), and candidate locations of ownship; (b) side view:
predicted locations of intruder (less transparent rectangles), and candidate
locations of ownship; (c) predicted locations of intruder and candidate
locations of ownship over time window (t + 4Ts, t + 6Ts); (d) time window (t
+ 4Ts, t + 6Ts); (e) time window (t + 7Ts, t + 9Ts); (f) time window (t + 7Ts, t
+ 9Ts).
To provide an increased level of robustness, an additional threat cost is added to penalize edges close to the propagated locations of the intruder even if they are not within the collision volume. At each discrete time instant, we compute the distances from the candidate locations of the ownship to all the propagated locations of the intruders at that instant. The collision threat cost along each edge is then given by the sum of the reciprocals of the associated distances d1, d2, and d3 to each intruder, and the total collision risk cost associated with the i-th edge with regard to all intruders follows by summation. For example, the edges e1,2, e1,3, e1,4, e1,5, and e1,6 shown in Image 7a do not intersect the propagated collision volume locations over the first time interval, yet they are penalized based on their distances to the predicted locations of the intruder according to Eq. (15). Note that edge e1,2 has the greatest cost, as it is the closest to the intruder among the candidate edges.
Another objective of a path planning algorithm is to minimize the deviation from the original path, that is, the path the ownship was following before it detected a collision. Generally, the path is defined as an ordered sequence of waypoints W = w1, w2, ⋯, wf, where wi = (wn,i, we,i, wd,i)T ∈ R3 is the north-east-down location of the i-th waypoint in a globally known NED reference frame. The transformation from the global frame to the local-level frame is a rotation by the heading angle ψo of the ownship together with a translation to the ownship position. Let ws be the waypoint location of the ownship at the current time instant t, and let wf ∈ W be the next waypoint the ownship is required to follow. Assuming a straight-line segment between the waypoints ws and wf, any point on this segment can be described as L(ϱ) = (1 − ϱ)ws + ϱwf, where ϱ ∈ [0, 1], and the minimum distance from an arbitrary node ni in G to the segment can be computed accordingly. The cost that penalizes the deviation of an edge in G from the nominal path is then a function of this distance.
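A minimal sketch of this point-to-segment deviation measure follows, using the clamped projection onto L(ϱ); the waypoints and the node are illustrative numbers, and penalizing an edge by the distance of its end node is one possible design choice, not necessarily the chapter's exact cost.

```python
import numpy as np

def distance_to_path_segment(n, ws, wf):
    """Minimum distance from a node n to the straight-line segment
    L(rho) = (1 - rho) * ws + rho * wf, with rho clamped to [0, 1] so
    that the closest point stays on the segment."""
    d = wf - ws
    rho = np.clip(np.dot(n - ws, d) / np.dot(d, d), 0.0, 1.0)
    return float(np.linalg.norm(n - ((1.0 - rho) * ws + rho * wf)))

# Illustrative nominal segment and candidate node (NED coordinates)
ws = np.array([0.0, 0.0, -200.0])
wf = np.array([1500.0, 0.0, -200.0])
node = np.array([300.0, 120.0, -210.0])
print(distance_to_path_segment(node, ws, wf))   # deviation of this node
```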
If small UASs are to be integrated seamlessly alongside manned aircraft, they may be required to follow right-of-way rules. Therefore, an additional cost can be added to penalize edges that violate right-of-way rules. This cost can also be used to favor edges in the horizontal plane over those in the vertical plane. Since the positive direction of the y-axis in the local-level frame is the right-wing direction, it is convenient to define right and left maneuvers as the positive and negative directions along the right-wing axis, respectively. Let e→i ≜ ni+1 − ni be the direction vector associated with the edge ei,i+1 in G, where ni ≜ (xi, yi, zi)T ∈ R3 is the location of the i-th node in the local-level reference frame, and let e→i = (eix, eiy, eiz)T. We define E ≜ (eix, L, R, eiz)T ∈ R4, where eix and eiz are the x and z components of e→i, and the y component of e→i is decomposed into two components, left L and right R.
If we define the maneuvering design matrix to be J = diag([0, cL, cR, cz]), then the maneuvering cost associated with each edge is given in terms of E and J.
The costs cL and cR allow the designer to place more or less cost on left or right edges. Similarly, cz allows the designer to penalize vertical maneuvers. Multiple values of these cost parameters may be stored in a look-up table, and the collision avoidance algorithm chooses the appropriate value based on the geometry of the encounter.
The overall cost for traveling along an edge is the weighted sum of all of these costs, where k1, k2, and k3 are positive design parameters that allow the designer to weight collision risk, deviation from the path, or maneuvering preferences depending on the encounter scenario. Once a cost is assigned to each edge in G, a graph-search method can be used to find the least-cost path from a predefined start point to the destination point. In this work, we use Dijkstra's algorithm.
Dijkstra's algorithm solves the shortest-path problem in a directed graph in polynomial time, provided no negative weights are assigned to the edges. The main idea of Dijkstra's algorithm is to process the nodes in order of increasing cost to reach them. It starts by assigning initial values to the distances from the start node to every other node in the graph. It then operates in steps: at each step, the least cost from one node to another is determined and saved, so that all nodes reachable from the start node are eventually labeled with their cost from the start node. The algorithm stops either when the node set is empty or when every node has been examined exactly once. A naive implementation of Dijkstra's algorithm runs in total time O(|N|^2); however, with a suitable data structure, the overall time complexity can be reduced to O(|E| + |N| log |N|).
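For concreteness, a minimal heap-based implementation of Dijkstra's algorithm on a dictionary-encoded directed graph is sketched below; the toy graph is illustrative and is not the maneuver graph G described above.

```python
import heapq

def dijkstra(graph, start, goal):
    """Least-cost path in a directed graph with nonnegative edge costs.

    graph: dict node -> list of (neighbor, cost) pairs.
    A binary heap gives near the O(|E| + |N| log |N|) behavior
    mentioned above (up to the usual heap-based log factor)."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the node sequence from goal back to start.
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[goal]

g = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.5), ("t", 5.0)],
     "b": [("t", 1.0)], "t": []}
print(dijkstra(g, "s", "t"))   # (['s', 'a', 'b', 't'], 3.5)
```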
The local-level path planning algorithm generates an ordered sequence of waypoints Wc = wc1, wc2, ⋯, wci. These waypoints are then transformed from the relative reference frame to the global coordinate frame and added to the original waypoint path W. When the ownship is avoiding a potential collision, the avoidance waypoints overwrite some or all of the original waypoints. Next, a path manager is required to follow the waypoint path, together with a smoother to make the generated path flyable by the ownship. One possible approach to following a waypoint path is to transition when the ownship enters a ball around the waypoint wi; a better strategy is to use a half-plane switching criterion, which is not sensitive to tracking error. Flyable, smoothed transitions between the waypoints can be achieved by implementing fillet maneuvers or using Dubins paths.
Simulation results
To demonstrate the performance of the proposed path planning algorithm, we simulate an encounter scenario similar to the encounter geometry shown in Image 8. The aircraft dynamics are simulated using a simplified model that captures the flight characteristics of an autopilot-controlled UAS. The kinematic guidance model that we consider assumes that the autopilot controls the airspeed va, the altitude h, and the heading angle ψ. Under zero-wind conditions, the corresponding equations of motion follow the standard kinematic guidance form, where pn and pe are the north and east positions of the aircraft. The inputs are the commanded altitude hc, the commanded airspeed vac, and the commanded roll angle φc. The parameters bv, bφ, bh, and bḣ are positive constants that depend on the implementation of the autopilot and the state estimation scheme. For further analysis of kinematic and dynamic guidance models for UAS, we refer the interested reader to the literature. In the following
simulation, the ownship starts at (0,0,−200)T in the NED coordinate system,
with an initial heading of 0 deg. measured from north, and follows a straight-line path at a constant speed of 22 m/s to reach the next waypoint located at
(1500,0,−200)T. The encounter geometry includes three intruders flying at
different altitudes: the first is approaching head-on, the second is converging
from the right, and the third is overtaking from the left. We chose the intruders' speeds to match the known cruise speeds of the ScanEagle UAS, the Cessna Skyhawk 172R, and the Raven RQ-11B UAS: 41, 65, and 22 m/s, respectively. In addition, the intruders are assumed to fly at constant speed for the entire simulation period. As shown in Image 8, the initial locations of the intruders in the NED coordinate system are (−25, 1000, −225)T, (500, 1000, −180)T, and (25, −500, −200)T, respectively, with initial headings of 180, −90, and 0°, respectively.
Image 8.
Encounter geometry for the ownship and three intruders at t = 0.1 s. (a)
Overhead view of initial locations of aircraft; (b) 3D view of initial locations
of aircraft; (c) overhead view of reference frame; (d) side view of relative
reference frame.
In the following simulation, our choice of the collision volume is a cylinder
of radius ds = 152.4 m (500 ft) and height hs = 61 m (200 ft) centered on each
of the intruders. A collision incident occurs when the horizontal relative
range and altitude to the ownship are simultaneously below horizontal and
vertical minimum safe distances ds and hs/2. We assume that there exists a
sensor and tracking system that provides the states of the detected intruders.
However, not every aircraft that is observed by the sensing system presents a
collision threat. Therefore, we implemented a geometric-based collision
detection algorithm to determine whether an approaching intruder aircraft is
on a collision course. The collision detection approach is beyond the scope of this work, and we refer the interested reader to the literature.
At the beginning of the simulation, the predicted relative range and altitude at the closest point of approach (CPA) are as shown in Table 1. Imminent collisions are expected with the first and second intruders, as their relative range and altitude with respect to the ownship are below the defined horizontal and vertical safe distances. The time remaining to the closest point of approach, tCPA, with respect to the first and second intruders is 15.77 and 16.56 s, respectively. The scenario requires the ownship to plan and execute an avoidance maneuver well before tCPA. This example demonstrates the need for an efficient and computationally fast avoidance planning algorithm. Table 2 shows the total time required to run the avoidance algorithm and the maximum and average times required to execute one cycle. The results show that the proposed algorithm is computationally fast, with an average and maximum time to execute one cycle of the code of 20.6 ms and 0.1326 s, respectively, and a total time of 0.3703 s to resolve the collision conflict.
Intruder   ∥pr(tCPA)∥ (m)   |prz(tCPA)| (m)   tCPA (s)
1          24.90            25                15.77
2          141.33           20                16.56
3          437.14           4361.07           1982.25
TABLE 1.
Relative range and altitude, and the time remaining to the closest point of approach.
Total run time (s)   Max. run time, one cycle (s)   Average run time, one cycle (s)
0.3703               0.1326                         0.0206
TABLE 2.
Collision avoidance algorithm run time.
Image 9 shows the avoidance path planned by the ownship. The results show that the avoidance path safely maneuvers the ownship without any collisions with the intruders. In addition, the ownship must plan an avoidance maneuver that does not lead to a collision with intruders that were not initially on a collision course, as is the case with the third intruder. Initially, the third intruder and the ownship fly on nearly parallel courses. The relative range and altitude at CPA with respect to the third intruder are 437.14 and 4361.07 m, respectively, and the time remaining to the CPA is 1982.25 s; clearly, the two aircraft are not on a collision course. However, the third intruder then descends and changes its heading toward the ownship. The path planner accounts for the predicted locations of the detected intruder over the look-ahead time window, allowing the ownship to maintain a safe distance from the third intruder. This example demonstrates that the proposed path planner can handle unanticipated maneuvering intruders. Once the collisions are resolved, the path planner returns the ownship to the next waypoint of its initial path.
Image 9.
Avoidance path followed by the ownship and path tracks of the intruders at t = 75 s. (a) Overhead view of avoidance path and (b) 3D view of avoidance path.
The relative range between the ownship and the intruders is shown in Image 10. The results show that no collisions occurred and that the ownship successfully planned an avoidance maneuver. The avoidance planner ensures that when the relative horizontal range is less than ds, the relative altitude is greater than hs/2. For example, as shown in Image 10b, the relative range to the first intruder over the time interval [16.2, 18] s is below ds; however, over the same interval, the relative altitude is above hs/2.
Image 10.
Relative horizontal range and altitude between the ownship and intruders. (a) Horizontal range and relative altitude to intruders and (b) a close-up view of Image 10a.
Another important aspect of evaluating the proposed algorithm is its ability to limit the length of the avoidance path while avoiding the intruders. This matters because it reduces the deviation from the original path and ultimately the flight time, which is of critical importance for small UAS with limited power resources. Table 3 shows that the length of the avoidance path is acceptable compared to the initial path length.
Scenario number   Initial path length (m)   Avoidance path length (m)
1                 1500                      1955
TABLE 3.
Length of the avoidance path.
Conclusions
In this chapter, we have presented a path planning approach suitable for small UAS. We have developed a collision avoidance logic using an ownship-centered coordinate system. The technique builds a maneuver graph in the local-level frame and uses Dijkstra's algorithm to find the least-cost path.
A key feature of the proposed approach is that the future motion of the
ownship is constrained to follow nodes on the map that are spaced by a
constant time. Since the path is represented using waypoints that are at fixed
time instants, it is easy to determine roughly where the ownship will be at any
given time. This timing information is used when assigning cost to edges to
better plan paths and prevent collisions.
An advantage of this approach is that collision avoidance is inherently a local phenomenon and can be represented more naturally in local coordinates than in global coordinates. In addition, the algorithm accounts for multiple intruders and unanticipated maneuvering in various encounter scenarios. The proposed algorithm runs in near real time in Matlab. Considering the small runtime shown in the simulation results, we expect that implementing these algorithms in a compiled language, such as C or C++, will show that real-time execution is feasible on hardware. This makes the proposed approach a tractable solution, in particular for small UAS.
An important step toward a deployable UAS is to test and evaluate the performance of the closed loop of sensor, tracker, collision detection, path planning, and collision avoidance. Practically, the deployment of any UAS requires a lengthy and comprehensive development process followed by a rigorous certification process and further analysis, including higher-fidelity models of the encounter airspace, a representative number of simulations, and hardware-in-the-loop simulation. Unlike existing manned-aviation collision detection and avoidance systems, an encounter model cannot be constructed solely from observed data, as UASs are not yet integrated into the airspace system and good data do not exist. An interesting research problem would be to design encounter models similar to those developed to support the evaluation and certification of the manned-aviation traffic alert and collision avoidance system (TCAS).
How to Expand the Workspace of Parallel
Robots
The parallel robot has excellent characteristics such as high speed, high precision, and high rigidity. However, mechanical collisions between limbs and the singular configurations that exist within the workspace restrict its usable workspace. In this chapter, methods for expanding the translational workspace of the parallel robot are discussed first. The parallel robot has multiple solutions of the inverse and forward displacement analyses; by changing its configuration from one solution to another, it can expand its translational workspace. However, a conventional nonredundant parallel robot encounters singularity during the mode change. Singularity-free mode changes of the parallel robot by redundant actuation are introduced. Next, methods for expanding the rotational workspace of the parallel robot are shown. In order to achieve large rotations, mechanical devices based on gears, pulleys, and helical joints have been embedded in the moving part. A novel differential screw-nut mechanism for expanding the rotational workspace of the parallel robot is introduced.
Expanding the translational workspace
MODE CHANGE OF THE PARALLEL ROBOT
Schematic models of a planar two-dof (degree-of-freedom) serial robot and a planar two-dof parallel robot are illustrated in Images 1 and 2, respectively. R and P represent the rotational pair and the prismatic (sliding) pair, respectively, and an underline indicates that an actuator is located at the pair: underlined R and P denote active rotational and prismatic pairs, while plain R and P denote passive ones. The serial robot in Image 1 is represented as an RR mechanism, and the parallel robot in Image 2 as a 2PRR mechanism; in 2PRR, the "2" means that two PRR chains are connected to the output link (end effector or hand) in parallel. To control the position of the hand, the forward displacement analysis gives the position and orientation of the hand from the displacements of the active pairs (joints), and the inverse displacement analysis gives the displacements of the active joints from the position and orientation of the hand. As shown in Images 1 and 2(a), different solutions of the inverse displacement analysis exist, i.e., different joint positions for the same position and orientation of the hand. In robotics, a solution of the inverse displacement analysis is called a working mode, and changing the joint positions between different working modes, during which the robot changes its configuration, is called a working mode change. In the case of the serial robot, there exists only one solution of the forward displacement analysis: when the displacements of the active joints are given, the position and orientation of the robot are fixed. However, in the case of the parallel robot, different solutions of the forward displacement analysis exist, as shown in Image 2(b). A solution of the forward displacement analysis is called an assembly mode. The parallel robot can change the position and orientation of the hand while keeping the same displacements of the actuated joints. Note that the parallel robot in Image 2 is designed to be free of mechanical interferences such as limb collisions. The parallel robot can expand the workspace of the hand by the assembly mode change, as shown in Image 2(b).
Image 1.
Working mode change of RR serial robot.
Image 2.
Assembly and working mode change of 2PRR parallel robot.
KINEMATICS OF THE PARALLEL ROBOT
As mentioned before, the parallel robot can expand its workspace by the
mode change. However, parallel robot encounters the singular configuration
between the mode changes. In this section, kinematics and singular
configuration of a 2PRR parallel robot are shown in Image 3. Coordinate
frames Σ0 and Σi (i = 1, 2) are defined for the fixed coordinate and position of
each actuator, respectively. Vector bi is defined for the position of the origin
Σi with respect to the coordinate Σ0. In the case of the 2PRR parallel robot in
Image 3, bi = 0 (i = 1, 2) because each coordinate Σi is coincident with Σ0.
Unit direction vector and displacement of the linear actuator are defined as ui
and qi, respectively. Unit direction vector and length of the rod are defined as
wi and li, respectively. Position of the hand (output link) is given as
Image 3.
Kinematic model of 2PRR parallel robot.
Equation (1) is the loop closure equation of the parallel robot. Next, the
forward displacement analysis and inverse displacement analysis of the
parallel robot are going to be derived.
Eq. (1) is rearranged as

li wi = x − bi − qi ui.   (2)

The unit direction vector wi is eliminated by raising both sides of Eq. (2) to the second power as

li^2 = (x − bi − qi ui)T (x − bi − qi ui).   (3)

As shown in Image 3, ui and bi are given as

ui = [1, 0]T, bi = [0, 0]T.   (4)
Substituting Eq. (4) into Eq. (3), one obtains a quadratic equation in the displacement of the actuator:

qi^2 − 2x qi + (x^2 + y^2 − li^2) = 0.   (5)

The solutions of the inverse displacement analysis are given by the roots of Eq. (5):

qi = x ± √(li^2 − y^2).   (6)

Eq. (6) shows that there are two solutions of the inverse displacement analysis; that is, the parallel robot has two working modes. Image 2(a)(i) and (iii) show the configurations of the parallel robot in the two working modes.
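A minimal sketch of Eq. (6) follows; the hand pose and rod length are illustrative numbers.

```python
import math

def inverse_displacement(x, y, l):
    """Both working-mode solutions q = x +/- sqrt(l^2 - y^2) of Eq. (6)
    for one PRR limb; returns an empty list when the hand is out of
    reach (y > l) and a duplicated root when y = l (first-kind
    singularity)."""
    disc = l * l - y * y
    if disc < 0.0:
        return []                          # no solution
    s = math.sqrt(disc)
    return [x + s, x - s]

print(inverse_displacement(x=0.5, y=0.3, l=0.5))   # two working modes
print(inverse_displacement(x=0.5, y=0.5, l=0.5))   # duplicated root
```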
In the forward displacement analysis, the positions x and y of the hand are solved from the simultaneous equations obtained by writing Eq. (1) for each arm:

(x − qi)^2 + y^2 = li^2,  i = 1, 2.   (7)

Each equation of (7) represents a circle whose center is (qi, 0) and whose radius is li. The solution of Eq. (7) is given by the crossing points of the two circles. There exist two crossing points of the circles; namely, the parallel robot has two assembly modes. Image 2(b)(i) and (iii) show the configuration of the parallel robot in each assembly mode.
When y = li, the value under the square root in Eq. (6) becomes zero and the inverse displacement analysis has a duplicated solution. When y > li, the value under the square root in Eq. (6) becomes negative and the inverse displacement analysis has no solution. When the distance between the centers of the two circles of Eq. (7) equals (l1 + l2), the forward displacement analysis has a duplicated solution, and when this distance becomes larger than (l1 + l2), the forward displacement analysis has no solution.
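A minimal sketch of the forward displacement analysis as a circle intersection (Eq. (7)) follows; the actuator displacements and rod lengths are illustrative numbers.

```python
import math

def forward_displacement(q1, q2, l1, l2):
    """Both assembly-mode solutions of the 2PRR robot: intersection of
    the circles (x - qi)^2 + y^2 = li^2 of Eq. (7). Returns [] when the
    circles do not intersect and a duplicated point when tangent."""
    if q1 == q2:
        raise ValueError("coincident actuator positions")
    # Subtracting the two circle equations gives a linear equation in x.
    x = (l1**2 - l2**2 - q1**2 + q2**2) / (2.0 * (q2 - q1))
    disc = l1**2 - (x - q1)**2
    if disc < 0.0:
        return []
    y = math.sqrt(disc)
    return [(x, y), (x, -y)]               # the two assembly modes

print(forward_displacement(q1=0.1, q2=0.9, l1=0.5, l2=0.5))
```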
SINGULARITY ANALYSIS OF THE PARALLEL ROBOT
THE JACOBIAN OF THE PARALLEL ROBOT
Differentiating both sides of Eq. (1) with respect to time, one obtains

ẋ = q̇i ui + li ẇi,   (8)

since bi, ui, and li in Eq. (1) are constant and their time derivatives vanish. Because wi is a unit direction vector,

wiT wi = 1,   (9)

and differentiating both sides of Eq. (9) with respect to time gives

wiT ẇi = 0.   (10)

Multiplying both sides of Eq. (8) by wiT and taking Eq. (10) into account, one obtains

wiT ẋ = (wiT ui) q̇i.   (11)

Defining the displacements of all actuators in vector form as q = [q1 q2]T, Eq. (11) is expressed in matrix form as

Jx ẋ = Jq q̇,   (12)

where Jx = [w1 w2]T and Jq = diag(w1T u1, w2T u2) are the Jacobians of the parallel robot.
SECOND KIND OF SINGULARITY
When the Jacobian matrix Jx becomes singular, namely when the determinant of the Jacobian becomes zero, det Jx = 0, the parallel robot is in the second kind of singularity, or direct kinematics singularity. The 2PRR parallel robot is in the second kind of singularity when the directions of the two rods w1 and w2 become identical. This corresponds to the two rods of the parallel robot lying on one line, as shown in Image 2(b)(ii). As shown in Image 2(b), the parallel robot encounters the second kind of singularity during the assembly mode change.
FIRST KIND OF SINGULARITY
When the Jacobian matrix Jq is singular, i.e., det Jq = 0, the parallel robot is in the first kind of singularity, or inverse kinematics singularity. The 2PRR parallel robot is in the first kind of singularity when the direction of at least one rod is perpendicular to the direction of its actuator, as shown in Image 2(a)(ii). As shown in Image 2(a), the parallel robot encounters the first kind of singularity during the working mode change.
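A minimal sketch of both singularity tests follows, building Jx and Jq from the rod and actuator directions as in Eq. (12); the direction vectors are illustrative numbers.

```python
import numpy as np

def jacobians_2prr(w, u):
    """Build Jx (rows wiT) and Jq = diag(wiT ui) of Eq. (12) from the
    unit rod directions wi and unit actuator directions ui."""
    w, u = np.asarray(w, float), np.asarray(u, float)
    Jx = w                                    # one rod direction per row
    Jq = np.diag([w[i] @ u[i] for i in range(len(w))])
    return Jx, Jq

u = [[1.0, 0.0], [1.0, 0.0]]                  # both actuators along x

# Second kind: the two rods aligned -> Jx singular
Jx, Jq = jacobians_2prr([[0.6, 0.8], [0.6, 0.8]], u)
print(np.linalg.det(Jx), np.linalg.det(Jq))   # det Jx = 0, det Jq != 0

# First kind: one rod perpendicular to its actuator -> Jq singular
Jx, Jq = jacobians_2prr([[0.0, 1.0], [0.6, 0.8]], u)
print(np.linalg.det(Jx), np.linalg.det(Jq))   # det Jx != 0, det Jq = 0
```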
THE THIRD KIND OF SINGULARITY
The third kind of singularity occurs when both Jq and Jx are simultaneously singular. It requires certain conditions on the linkage parameters. In the case of the 2PRR parallel robot, the third kind of singularity occurs when the lengths of the rods are identical, as shown in Image 4. The third kind of singularity is also referred to as the combined singularity.
Image 4.
The third kind of singularity of the 2PRR parallel robot.
PASSING THE SINGULAR CONFIGURATION DURING
THE MODE CHANGES
PASSING THE SECOND KIND OF SINGULARITY DURING THE
ASSEMBLY MODE CHANGE
USING THE INERTIA
As shown in Image 5(a), if the parallel robot has some speed just before the singular configuration, the robot keeps moving according to the law of inertia and thus passes the second kind of singularity. Note that if the parallel robot stops at the singular configuration, it is impossible to pass the singular configuration by inertia.
Image 5.
Assembly mode change of nonredundant 2PRR parallel robot.
USING THE GRAVITY
As shown in Image 5(b), when the gravity force acts in the downward direction, the parallel robot can pass the singular configuration even if it stops at the singular configuration. However, the robot cannot move upward against gravity. When the robot moves on a horizontal plane where the gravity force does not act on the robot, the parallel robot cannot pass the singular configuration.
USING THE REDUNDANCY
A robot has actuation redundancy when it is driven by a number of actuators greater than its degrees of freedom. The actuation redundancy may increase the cost of the robot and the complexity of control. However, actuation redundancy is one of the most effective methods for avoiding the singularity during the mode change.
Image 6 represents the 3PRR parallel robot, a planar two-dof parallel robot redundantly actuated by three actuators. The Jacobian of the parallel robot is given as
Image 6.
Assembly mode change of redundant 3PRR parallel robot.
where Jx is a 3×2 matrix whose full rank equals two. For convenience, the singularity analysis is applied to the 2×3 transposed Jacobian matrix JxT. When at least one 2×2 minor of JxT is nonsingular, the rank of Jx equals two; namely, Jx has full rank. At this time, the robot still works as a two-dof parallel robot. For example, when the first rod and the second rod are collinear as shown in Image 6(iii), the 2×2 minors of JxT become
As shown in Eq. (18), one minor is singular and the other two minors are nonsingular; namely, Jx still has full rank. Now, the parallel robot loses the redundancy but keeps the nonsingularity.
The parallel robot can pass the singular configuration of Image 6 (iii). In the
same way, the parallel robot can pass the singular configuration of Image 6
(ii). Thus, the parallel robot achieves the assembly mode change from Image
6 (i) to (iv).
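The full-rank test through 2×2 minors can be sketched numerically; the direction vectors below are illustrative, not taken from the chapter:

import numpy as np

def full_rank_by_minors(JxT, tol=1e-9):
    """JxT is the 2x3 transposed Jacobian; rank 2 iff some 2x2 minor is nonzero."""
    cols = [(0, 1), (0, 2), (1, 2)]
    return any(abs(np.linalg.det(JxT[:, [i, j]])) > tol for i, j in cols)

# Rods 1 and 2 collinear as in Image 6(iii): two identical rows in Jx,
# so one minor vanishes, but the other two keep the rank equal to two.
w = np.array([1.0, 0.0])
Jx = np.vstack([w, w, [0.0, 1.0]])
print(full_rank_by_minors(Jx.T))      # True: nonsingular, redundancy lost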
PASSING THE FIRST KIND OF SINGULARITY DURING THE
WORKING MODE CHANGE
The parallel robot encounters the first kind of singularity during the working mode change. In the velocity control of a robot, the velocity of the actuator is controlled to trace the desired velocity of the actuator q̇r, which is given from the desired velocity of the end-point ẋr from Eq. (12) as q̇r = Jq⁻¹Jxẋr. However, Jq⁻¹ cannot be calculated when Jq is singular at the first kind of singularity. In this case, the desired velocity of the actuator is given directly instead of indirectly from Eq. (12).
RESEARCH ON MODE CHANGE OF THE PARALLEL ROBOT
Research on expanding the workspace by the mode change of the parallel robot has been reported for a nonredundant two-dof planar robot, a nonredundant three-dof spatial translational robot, a redundantly driven three-dof translational and rotational planar robot, and a redundantly driven three-dof spatial translational robot. Redundant actuation is one of the most effective methods for avoiding the singularity. However, the redundancy is not always the answer to avoiding the singularity. Additional ingenuities, for example, path planning for the mode change or an asymmetrical design of the robot, are required.
Expanding the rotational workspace
CONVENTIONAL METHODS
In this section, methods for expanding the rotational workspace of the parallel robot are introduced. Image 7(a) represents the Stewart Platform [9], a six-dof parallel robot with three-dof translations and three-dof rotations. The moving plate is driven by six limbs with embedded linear actuators. Each limb is connected to the base plate by a passive universal joint (U) and to the moving plate by a passive spherical joint (S). The Stewart Platform is categorized as a 6UPS parallel mechanism. The Stewart Platform generates high power with hydraulic linear actuators. Flight simulators and driving simulators carrying a heavy aircraft cockpit are typical applications of the Stewart Platform. However, the robot has the drawback of small rotation around the z axis because of the mechanical interference between the limbs. Image 7(b) represents a six-dof cable-driven parallel robot. This robot also has the same drawback of small rotation around the z axis because of the cable interference. The small rotation of parallel robots puts restrictions on their applications.
Image 7.
Conventional six-dof parallel robot.
In order to cope with the problem, Clavel invented a novel parallel robot
DELTA [11] as shown in Image 8. The DELTA generates three-dof
translation and one-dof rotation around the z-axis.
Image 8.
The DELTA robot.
Three control arms are rotated by three motors located on the base plate. Each
arm is connected to the moving part by a parallelogram rod with spherical
joints. The gripper on the moving part is connected to the motor on the base
plate with a telescopic arm via universal joints (Image 9).
Image 9.
Mechanical gimmicks enlarge the rotational workspace of parallel robot.
Research on expanding the rotational workspace of DELTA-like parallel robots has been reported by a research group at the University of Montpellier. The basic idea is to convert the translational motion to rotational motion by mechanical gimmicks such as a pulley, a rack and pinion, or a screw. A crank-embedded moving part has been proposed by McGill University for a two-limb parallel robot and by Fraunhofer IPA for a cable-driven parallel robot.
DIFFERENTIAL SCREW DRIVE SYSTEM
Recently, differential screw drive systems composed of two screw-nut pairs of different leads have been applied to robot systems. There are four driving methods of the differential drive systems:
(a) Rotations of the two nuts are converted to the rotation and translation of the screw pair, which is coaxially arranged with one end interconnected with the other.
(b) Translations of the two nuts are converted to the rotation and translation of the screw pair, which is coaxially arranged with one end interconnected with the other.
(c) Rotations of the two screws are converted to the rotation and translation of the nut pair, which is coaxially arranged with one end interconnected with the other.
(d) Translations of the two screws are converted to the rotation and translation of the nut pair, which is coaxially arranged with one end interconnected with the other.
In this section, the kinematics of the differential drive system (b), as shown in Image 10, is discussed for the enlargement of the rotational workspace of the parallel robot.
Image 10.
Differential screw drive system (translations of the two nuts).
Let li (i = 1, 2) be the lead of the ith screw-nut pair. ni (i = 1, 2), z, and γ represent the position of the ith nut and the position and angle of the screw pair, respectively. The relation of these parameters is given as
Equation (20) is given in matrix form as
Equation (21) gives the inverse kinematics of the differential drive system. The forward kinematics of the system is derived by inverting Eq. (21) as
If the two leads of the screw-nut pairs are identical as
then the matrix in Eq. (22) becomes singular. It is necessary for the differential drive system that the leads of the two screw nuts be different.
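The equation images for Eqs. (20) to (22) are missing here; a plausible reconstruction, assuming each nut position ni combines the screw translation z and rotation γ through its lead li (the sign convention is an assumption), is:

n_i = z + \frac{l_i}{2\pi}\gamma \quad (i = 1, 2)

\begin{bmatrix} n_1 \\ n_2 \end{bmatrix} = \begin{bmatrix} 1 & l_1/2\pi \\ 1 & l_2/2\pi \end{bmatrix} \begin{bmatrix} z \\ \gamma \end{bmatrix}, \qquad \begin{bmatrix} z \\ \gamma \end{bmatrix} = \frac{2\pi}{l_2 - l_1} \begin{bmatrix} l_2/2\pi & -l_1/2\pi \\ -1 & 1 \end{bmatrix} \begin{bmatrix} n_1 \\ n_2 \end{bmatrix}

The inverse matrix exists only when l1 ≠ l2, which is exactly the singularity condition stated above.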
Image 11 represents a 4-dof parallel robot in which the differential drive system (a) is embedded in the moving part. H represents a passive helical joint composed of a screw and nut pair. In Image 11, the two H pairs have the same lead but different helix hands (right and left). In Image 9(c), one H pair and one R pair are embedded in the moving part. This is a special case of the differential drive system (a) in which one of the H pairs has zero lead.
Image 11.
Parallel robot with the differential drive system embedded moving part.
Conclusion and future works
After the commercialization of the DELTA robot, many parallel robots can be found in machine factories and other industries. Recently, one can get a 3D printer kit based on the linear DELTA mechanism for under 500 USD. It may be said that first-generation parallel robots such as the DELTA and the Stewart Platform are reaching the mature stage. In this chapter, expanding the workspace of the parallel robot was introduced. The translational workspace was expanded by a singularity-free mode change using the actuation redundancy. The differential drive mechanism converts translational motion to rotational motion, which expands the rotational workspace of the parallel robot. These parallel robots are expected to be the next-generation robots.
At the end of this chapter, an example of the next-generation parallel robot is introduced. Image 12(a) represents a novel two-limb six-dof parallel robot. As shown in Image 12(b), a differential screw mechanism is embedded in the output link for enlarging the rotational workspace of the gripper. This parallel robot extends its degrees of freedom from four (three translations and one rotation, as shown in Image 11) to six (three translations and three rotations), as shown in Image 13. We are working on the kinematic analysis and detailed mechanical design of this parallel robot.
Image 12.
Two-limb six-dof parallel robot with the differential drive embedded output
link.
Image 13.
Motion of the two-limb six-dof parallel robot.
Kinematic and Biodynamic Model of the
Long Jump Technique
The long jump consists of four interconnected phases: approach, take-off,
flight and landing. According to some existent studies [2, 6, 9, 11], the
approach and take-off are the most important factors that affect the result.
The fundamental problem of long jump, from the biomechanical point of
view, is the transformation of horizontal velocity to a resultant of the vertical
and horizontal velocities in the take-off phase. It is very important that the
athlete realises the greatest possible vertical velocity with the smallest
possible loss of horizontal velocity. The jumping distance is defined,
according to the theoretical model after Ballreich and Bruggemann, [1], by
the take-off distance L1, flight distance L2 and landing distance L3. The CM
flight parabola is defined by the following parameters: relative height of CM
at take-off, resultant of the vertical and horizontal velocities at take-off, take-off angle and air resistance [1, 5, 6, 7]. Most of the biomechanical studies of long jump technique so far have dealt with the kinematic characteristics, using high-frequency film or video cameras. However, there are very few studies on top long jumpers with emphasis on the dynamic characteristics of take-off, which is the most important generator of the long jump result. The
main purpose of our study was therefore a complex analysis of both the
kinematic model of long jump and the dynamic model of take-off.
Methods and procedures
KINEMATIC MODEL
The analysis was performed on two jumps of one of the world’s best long
jumpers G.C. (GC—body height 178.5 cm, body mass 69 kg, age 24 years,
personal best 8.40 m). The length of the first analysed jump was 7.93 m and
the second 8.25 m.
Kinematic parameters were obtained with a 3-D kinematic system ARIEL
(Ariel Dynamics Inc., USA) with four synchronised cameras (Sony—
DVCAM DSR 300 PK), shooting at 100 Hz. The first two cameras, set at a 90° angle to the filmed object, were used to analyse the last two strides of the approach and the take-off; the other two covered the flight and landing (Image 1). In the kinematic analysis, a 15-segment model was digitised. The model
segments represent parts of the body, connected with point joints. The masses
and centres of gravity of the segments and the common body centre of
gravity (CM) were computed according to the anthropometric model of
Dempster. All kinematic parameters were filtered with a seventh-order Butterworth filter.
Image 1.
Measurement system for kinematic and dynamic parameters of the take-off in
long jump.
DYNAMIC MODEL
The dynamic parameters of take-off were registered with a force-platform
Kistler 9287, area 900 × 600 mm, covered with a tartan surface and installed
before the take-off board on the approach track of long jump. Forces were
measured in three directions: X—horizontal, Y—vertical and Z—lateral
(Image 2).
Image 2.
Take-off phase 3D-surface contour plot.
The jump length (L) was measured from the front impression of the foot of
the take-off leg on the force-platform to the point of contact of the feet in the
sand of the landing pit. The programme package Matlab (Mathworks Inc.,
USA) was used to analyse the measured forces. The registration frequency on
the force-platform was 2000 Hz.
OPTIMISING ANGLE OF PROJECTION AT TAKE-OFF
ACTION
We ask ourselves how to determine the best angle of projection at take-off for long jumpers. In our case, we model the jumper as a mass point and observe the trajectory for the case where there is no air friction. We already know that energy losses have to be taken into consideration, because the jumper always loses some energy in the take-off phase when transforming the horizontal velocity of the approach into a combination of horizontal and vertical velocity.
Let us first look at the example when the height of the centre of mass at take-off and the height of the centre of mass when the jumper first touches the ground are the same. Let us denote the velocity before take-off with v0 and the velocity after take-off with v. The angle of projection shall be denoted with ϕ and the jumper’s mass with m. All mechanical energy before take-off is E = ½·m·v0²; this is the jumper’s kinetic energy. δ denotes the fraction of energy that is lost at the time of take-off as heat and sound [16], and a part of the energy before take-off transforms into vertical kinetic energy; we denote this part as Δ, so that ½·m·vy² = Δ·½·m·v0² with vy = v·sin ϕ. We can therefore write ½·m·v² = (1 − δ)·½·m·v0², which means v² = (1 − δ)·v0². From this we get v = v0·√(1 − δ).
The length of a jump of a mass point, which in our case is the centre of mass, is

L2(ϕ) = v²·sin(2ϕ)/g,

where g denotes the gravitational acceleration. To find the optimal value of ϕ, we need to find the extreme of the function L2(ϕ). We do so by finding the zeros of the first derivative, L2′(ϕ) = 2·v²·cos(2ϕ)/g. We must also take a look at the second derivative to establish whether the angle that we compute is a maximum or a minimum. The zeros of the first derivative coincide with the zeros of cos(2ϕ). From this equation we get the optimal angle of projection ϕ = 45°. With this we have established that the optimal angle is independent of the values v0 and δ. The fact L2″(45°) < 0 also holds true, which brings us to the conclusion that ϕ = 45° in fact is the optimal angle for the longest jump, that is, the maximum of the function L2(ϕ).
Of course, the height of the centre of mass of the jumper at take-off is bigger than the height of the centre of mass when the jumper first touches the ground, because he/she pulls the legs closer to the body when landing (Image 2—kinematic and biodynamic model of the long jump technique). In this case, the length of the jump is defined as L(ϕ) = xT, where the landing point T = (xT, −h) is defined as the intersection of the curves y(x), the flight parabola of the CM, and y = −h, where h denotes the height difference of the centre of mass before and after the jump. Computing this intersection gives us

xT = (v·cos ϕ/g)·(v·sin ϕ + √(v²·sin²ϕ + 2·g·h)),

which means the jump length now depends on h as well. Taking into consideration that v = v0·√(1 − δ), we can, in the same way as we did before, compute the maximum value of the function L(ϕ) by finding the zeros of the function L′(ϕ). With a lot of algebra and using Wolfram Alpha, we can show that the optimal angle of projection satisfies tan ϕ* = v/√(v² + 2·g·h). In McFarland’s research [10], δ is estimated to be 0.1.
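The derivation can be checked numerically. Below is a minimal sketch using the point-mass model without air resistance, δ = 0.1 as in [10], and an assumed CM height difference h = 0.4 m; the value of h is illustrative only:

import math

G = 9.81  # gravitational acceleration (m/s^2)

def jump_length(v, phi, h):
    """Flight distance of a point mass launched at speed v and angle phi,
    landing h metres below the launch height of the centre of mass."""
    vx, vy = v * math.cos(phi), v * math.sin(phi)
    t = (vy + math.sqrt(vy * vy + 2 * G * h)) / G  # time until CM drops by h
    return vx * t

def optimal_angle(v, h):
    """Closed-form optimum tan(phi*) = v / sqrt(v^2 + 2*g*h)."""
    return math.atan(v / math.sqrt(v * v + 2 * G * h))

v0, delta, h = 9.46, 0.1, 0.4          # v0 of athlete GC; delta from [10]
v = v0 * math.sqrt(1 - delta)          # take-off speed after energy loss
phi = optimal_angle(v, h)
print(math.degrees(phi), jump_length(v, phi, h))

The point-mass optimum comes out near 44 deg, well above the roughly 21 to 24 deg realised by elite jumpers; the simple model ignores the biomechanical cost of generating vertical velocity, which is why measured take-off angles are much smaller.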
Main body
KINEMATIC ANALYSIS
Take-off is one of the key generators of a successful long jump. The take-off
model is defined by kinematic (Table 1 and Image 3) and dynamic
parameters. These are interrelated, in accordance with biomechanical laws. In
this phase, the jumper must optimally transform the horizontal velocity
developed in the approach into the take-off speed.
Image 3.
Kinematic model of take-off (G.C., distance 8.25 m).
The take-off speed is the resultant of horizontal velocity at TO and vertical
velocity at TO and is one of the most important predictors of an effective
jump length. Hay et al., Nixdorf and Bruggemann, found that the correlation
between the take-off speed and end result is between 0.74 and 0.83.
Horizontal velocity at TD for the athlete GC is 9.46 m s⁻¹. In take-off (TD-TO), a reduction of the horizontal velocity of 1.36 m s⁻¹ occurs, representing 14.3%. Its decrease is connected with the increase of vertical velocity at TO, amounting to 3.90 m s⁻¹ (Image 3). We can conclude that the loss in horizontal velocity is proportional to the increase in vertical velocity. The vertical velocity at touchdown is directed downwards and has a negative sign (VYTD = −0.26 m s⁻¹).
The realisation of vertical velocity is directly connected with the magnitude
of the projection of CM on the surface at touchdown (touchdown distance =
0.63 m). One of the key biomechanical problems of take-off is how to ensure
the greatest possible vertical velocity, while keeping the decrease in
horizontal velocity to a minimum. The ratio of the take-off velocity
components VXTO:VYTO for the athlete GC is 2.08:1. The results of some
similar studies show us that this athlete has an optimal vertical velocity at
take-off and a horizontal velocity (VXTO) that is a little too low to allow
even longer jumps. The consequence of a relatively high vertical velocity at
take-off is also the magnitude of the angle of projection at take-off PATO =
24.10 deg. This magnitude of the angle of projection at take-off later defines
the CM parabola of the jumper.
The realisation of a high vertical velocity can be connected with an efficient
elevation of CM at take-off. The difference between the lowest position of
CM at touchdown and at take-off is 28 cm. This is augmented also by a favourable ratio between the lengths of the last two strides (2.35 m:2.05 m) and the lowering of CM by 11 cm in the last-but-one stride. The average
lowering of CM of the finalists of the 1997 World Championship in Athens
was 8 cm. This lowering of CM in the last stride but one increases the
vertical acceleration distance in the take-off phase.
A very important factor contributing to the take-off efficiency is the angular velocity of the swing leg (AGTO), 790 deg s⁻¹ for the athlete GC. This
velocity is not constant: it is highest in the middle part of the swing amplitude and lowest at the end of the swing, when the movement of
the swing leg is blocked. The consequence of this is the transfer of the force
of the mass inertia of this segment (the leg represents, according to the
anthropometric model of Dempster, 16.1% of the entire body mass) to the
common body centre of gravity of the jumper. Hay et al. state that the
contribution of the swing leg to the entire take-off impulse is between 17 and
20%.
Combining this information with the measured data, we have all the needed data, so it is interesting to see how the initial velocity v0 before the take-off affects the optimal angle of projection computed above. The values for the initial velocity were chosen from the interval in which most elite long jumpers’ initial velocities lie (Image 4).
Image 4.
Effect of initial velocity on optimal angle of projection.
Similarly, we can investigate the relation between coefficient of efficiency
and the length of the jump (Image 5).
Image 5.
Effect of efficiency coefficient on length of the jump.
The next graph (Image 6) is in fact another view of the correlation between the initial velocity before take-off and the optimal angle of projection, with the difference that this graph shows how the length of the jump depends on the initial angle of projection for different initial velocities. The optimal angle can be seen as the value of the angle at which the maximum is achieved.
Image 6.
Relation between length of a jump and initial angle of projection.
DYNAMIC ANALYSIS OF TAKE-OFF ACTION
The dynamic parameters of take-off (Table 2 and Image 7) were measured
directly with a force-platform. In this way, we measured the forces developed
by the jumper at take-off in a realistic situation, which is a rarity in studies of
this kind. The period of the contact phase was 127 ms. Here, the ratio
between the period of compression (84 ms) and the period of lift (43 ms) is
important—this was 66%:34% for our athlete, which is a good indicator of an efficient take-off from the point of view of the jump dynamics.
Image 7.
Force diagram of the take-off phase.
The compression phase by definition lasts from the instant the foot of the
take-off leg is placed on the ground till the moment of maximal amortisation
in the knee of the take-off leg (TD-MKF). The lift phase then follows till take-off (MKF-TO). In the compression phase, our athlete developed a
maximal force of 4581 N in the horizontal direction and 5132 N in the
vertical direction. The muscular regime is eccentric in the compression phase
period and concentric in the lift phase period. The long jump take-off is a
typical example of a two-phase eccentric-concentric muscular contraction,
whose efficiency depends mainly on two physiological factors. The first is the switching time—the utilisation of the elastic energy potential saved in the
muscles, connecting tissue and tendons. This elastic potential is available
only for a certain period of time. This time is defined by the life-time of the
cross-bridges between the actin and the myosin in the sarcomeres, lasting
from 15 to 100 ms. If the concentric contraction does not follow the eccentric
one quickly enough, there is no transfer of the elastic energy from the first to
the second phase. Studies have shown that elastic energy is the main force
generator in the eccentric (compression) phase and chemical energy of the
muscles in the concentric (lift) phase. An efficient integration of elastic and
chemical energy in the muscles and tendons can cause a 40% greater resultant
force.
The second factor of an economic eccentric-concentric contraction is the
ability of the muscles to resist rapid extension—stiffness. Stiffness, as a
neural mechanism, depends mostly on the pre-activation of the muscles and
the action of reflexes: miotatic and Golgi tendon reflex. In light of the
biomechanical characteristics, the short-range elastic stiffness is a
characteristic for long jump take-off, where it is a matter of an immediate
mechanic response of the activated muscle to eccentric contraction. In our
experiment, we found the eccentric contraction time to be 84 ms and the
athlete develops in this phase a maximal vertical ground reaction force of
5132 N. This force is almost 7.5 times the body weight of the athlete. An
important kinematic criterion of the muscles’ efficiency while resisting
stretching (lowering of CM) is the angle of flexion in the take-off leg’s knee
—MKF = 148 deg. This angle must be sufficiently large. A marked lowering
of CM in the MKF phase results in prolonging the compression phase period
and through this an inefficient integration of the eccentric and concentric
action of the muscles.
The results in the study of Lees et al. show that force impulse in the
compression phase is the primary indicator of vertical velocity in the lift
phase. A study by Ballreich and Bruggemann namely showed a much higher
correlation between vertical velocity of take-off with effective length of the
jump (r = 0.89), than with horizontal velocity of take-off (r = 0.21). The force
impulse in the compression phase for the athlete GC was −101.9 N s.
NEUROMUSCULAR MECHANISMS OF EXPLOSIVE POWER
In many sport disciplines, power is a major biomotor ability used in the
prediction of results. To some extent, other biomotor abilities are also
associated with power. It is of little wonder that many kinesiological studies
delve into the subject of power, investigating its structure, the training
methodology, the application of new methods and diagnostic procedures. In
modern kinesiological science, power is undoubtedly one of the most
meticulously researched biomotor abilities, yet many questions in this field
remain unanswered. The classification of power is based on different criteria.
According to the criterion of action, authors distinguish among peak power,
explosive power and endurance. Another criterion is that of neuromuscular
activity, where power manifests itself in the form of isometric contraction as
well as concentric, eccentric or eccentric-concentric contraction. Isometric
contraction occurs in conditions where muscular force equals an external
force, which is why there is no movement between two points of the muscle.
Eccentric contraction occurs in conditions where the external loading is
greater than the force of the activated muscles. In real motor situations, an
eccentric-concentric muscle contraction is the most common type and
manifests itself in take-off power. Take-off power is a special type of
explosive power in eccentric-concentric conditions and is most often seen in
cyclic, acyclic and combined movement structures. Its main characteristic is
the utilisation of elastic energy in the eccentric-concentric cycle of a muscle
contraction. The contribution of the elastic properties of the muscle-tendon
complex depends on the velocity of the transition. The transition must be as
fast as possible and should not exceed an interval of 260 ms. This type of
muscle contraction uses less chemical energy to perform mechanical work
compared to a concentric contraction alone, thus speeding up the movement.
If a concentric phase of contraction follows an eccentric phase quickly
enough, the elastic elements release the accumulated energy in kinetic and
mechanical work at the beginning of the concentric phase, thereby increasing
the muscular force.
The movement structures that occur in specific sport situations are associated
with different inputs of eccentric and concentric muscle contractions. Due to
the inter-segmentary transmission of energy and the optimisation of the take-off action, the thigh muscles’ activity is important in vertical jumps in
conditions of a concentric or eccentric-concentric muscle contraction. With
vertical jumps, the muscles engage in the take-off action following the
proximal-distal principle of muscle activation. In the first phase of the jump
when the vertical velocity of the body’s centre of gravity increases, the
extensors of the trunk and hip are the most active muscles. The key role in
this phase of the take-off action is played by m. gluteus maximus, which is
able to develop great force due to the relatively low angular velocity of the
hips. The thigh muscles generate the peak activation at the beginning of the
hip extension [4, 6]. Electrical activity in a muscle results from a change in
the electrical potential of the muscle membrane. It is measured by means of
electrical potential that can be detected on the surface of the skin. For this
purpose, surface electrodes are fastened above the muscle whose EMG signal
is to be measured. Silver-silver chloride (Ag-AgCl) bipolar surface electrodes
are generally used. The skin at the recording site where the electrodes are
applied must be carefully prepared, cleansed and treated. Take-off power in
conditions where the active muscles first extend (eccentric contraction) and
then shorten (concentric contraction) is measured by means of a
countermovement vertical jump and a drop jump.
KINEMATIC ANALYSIS OF FLIGHT PHASE
On the basis of the kinematic parameters (Table 3 and Image 8), we see
different contributions of the individual component distances to the final
result. For the athlete GC, the flight distance (L2) has the greatest absolute
and relative share in the total jump length—88%, then landing distance (L3)
—7.7% and the smallest take-off distance (L1)—3.5%. The flight distance is
defined as the horizontal distance between the point CM at take-off and the
same point at the moment of contact with the sand.
Parameter | Unit | Result
Distance of jump (L) | m | 8.25
Official distance (OD) | m | 8.10
Flight distance (L2) | m | 7.33
Max. height of CM in flight (HMF) | m | 1.88
TABLE 3.
Kinematic parameters of the flight phase.
Image 8.
Kinematic model of flight and landing (GC, distance 8.25 m).
The small partial share of the take-off distance of the athlete GC is the consequence of the take-off angle, which amounted to 71.9 deg, and of the ratio between the vertical (VYTO) and horizontal (VXTO) components of velocity at take-off. The relatively large share of the landing distance (L3) can be
ascribed to a very economic technique of landing. In the flight phase, the
hang-style long jump is characteristic for the athlete GC, which he utilises
very effectively as preparation for landing. A high parabola of CM flight can
be seen for this athlete, which is the consequence of a high take-off angle
(PATO = 24.1 deg). The highest point of the parabola is 65 cm above the
height of CM at take-off. The characteristics of the flight phase are of course
also influenced by other factors, especially air resistance, which was not the
object of study in this research.
KINEMATIC ANALYSIS OF LANDING PHASE
An economic technique of landing is defined by: landing distance (L3),
height of CM at the moment of contact of the feet with the sand (HLA) and
landing flight distance (L4)—Table 4 and Image 8.
Parameter | Unit | Result
Landing distance (L3) | m | 0.63
Height of CM at landing (HLA) | m | 0.92
Landing flight distance (L4) | m | 1.28
Fall-back distance (LFB) | m | 0.15
TABLE 4.
Kinematic parameters of landing.
One of the most important variables in landing is the horizontal landing
distance, defined by the projection of point CM and the initial contact with
the sand minus the distance lost by the athlete by falling backward or
otherwise breaking the sand at a point closer to the board than the initial point
of contact (LFB). The index of the technique’s economy is IR = L3 − LFB. Hay and Nohara [7] found on a representative sample of the 12 best American long jumpers that the average fall-back distance is 11 cm; for our athlete, this distance is 15 cm. This leads us to conclude that his technique is economic
and efficient. Mendoza [11] also finds that values of landing distance (L3)
between 57 and 65 cm point to a very economic landing. The athlete can
contribute an important part to the competitive result with good landing
technique, with this phase being dependent on specific motor abilities,
especially power of the hip flexors and abdominal muscles; morphologic
characteristics, coordinative abilities of the athlete and also the quality of the
landing area.
Conclusion
The results of kinematic and dynamic analysis showed that the efficiency of
long jump is defined mostly by the following take-off parameters: horizontal
velocity at TO, vertical velocity at TO, speed at TO, angle of projection at
TO, maximal force X—horizontal and Y—vertical axis, force impulse in
compression and lift phase. An important factor of a rational technique of
long jump is also the landing, which is defined by the landing distance and
fall-back distance. This study was conducted with only one elite long jumper
(third place at the Universiade in Fukuoka, third place at the Seville World
Athletics Championship) and offers an insight into the dynamic and
kinematic parameters required for top results in this athletic discipline.
Biomechanical diagnostics highlights the objective parameters of the
technique on which the training process must be based. The study results hold
both theoretical and practical value in applied biomechanics and sport
practice, mainly in terms of selecting the optimal training methods.
Kinematic Model for Project Scheduling with Constrained Resources Under Uncertainties
Project scheduling problems have been well studied in the literature since the
1950s. According to Ref., project scheduling is an important process in
project management. The project scheduling literature largely concentrates on
the generation of a precedence-feasible and resource-feasible schedule that
optimizes scheduling objective(s) (most often the project duration) and that
serves as a baseline schedule for executing the project. Baseline schedules
serve as a basis on which to allocate resources to different activities in order
to optimize some measure of performance and for planning external activities
such as material procurement, preventive maintenance and delivery of orders
to external or internal customers.
However, project scheduling is a difficult process due to scarce resources as
well as precedence relations between activities. Admissible schedules must
obey constraints such as precedence relations (a task cannot start unless some
other tasks have been completed) and resource restrictions (labour and
machines are examples of scarce resources with limited capacities). Thus, the
resource-constrained project scheduling problem (RCPSP) consists of project
activities subject to many kinds of uncertainties that must be scheduled in
accordance with precedence and resource (renewable, non-renewable and
doubly constrained) availabilities such that the total duration or makespan of
a project is minimized.
Most of the variants and extensions of the RCPSP may be summarized and
classified within multiple modes, generalized time lags and objective
functions, resulting in highly complex optimization problems. The RCPSP
has become a well-known standard problem in the context of project
scheduling, and numerous researchers have developed both exact and
heuristic scheduling procedures. Due to the complex nature of the problem, only a small number of exact algorithms have been presented in the literature, while many heuristic solution algorithms have been proposed.
Therefore, this chapter presents the development and implementation of the kinematic model named the Coupled Estimate Technique for project scheduling with constrained resources under uncertainties. In the Coupled
Estimate Technique, the modelled duration depends on the planned duration
and on the resource variability (aleatory uncertainty), and the modelled
resource depends on the planned resource and on the duration variability
(aleatory uncertainty).
TAILORING OF THE ROBOTIC MANIPULATOR KINEMATIC CONCEPTS TO THE COUPLED ESTIMATE TECHNIQUE
The development of the Coupled Estimate Technique was based on robotic
manipulator kinematic concepts. Kinematics is the science of motion that
treats motion without regard to the forces/moments that cause the motion.
Robotic manipulator or industrial robot consists of several rigid links
connected in series (open kinematic chain) by revolute or prismatic joints;
one end of the chain is attached to a supporting base, while the other end is
free and equipped with a tool (end-effector) to manipulate objects or perform
assembly tasks; and the end-effector could be a gripper, a welding torch, an
electromagnet or another device.
Robotic manipulator kinematics deals with the analytical description of the
spatial displacement of the end-effector with respect to a fixed reference
coordinate system, in particular the relations between the joint variables and
the position and orientation of the end-effector. There are two (direct and
inverse) fundamental problems in robotic manipulator kinematics. Given a
set of joint variables, the direct kinematic problem is performed to find the
position and orientation of the end-effector relative to the fixed reference
coordinate system. Given the position and orientation of the end-effector
relative to the fixed reference coordinate system, the inverse kinematic
problem is performed to find all possible sets of joint variables that could be
used to attain this given position and orientation. Image 1 illustrates a robotic
manipulator or industrial robot.
Image 1.
Robotic manipulator or industrial robot.
In an analogous way, the Coupled Estimate Technique deals with the analytical study of the geometry of the project activities of a precedence diagram with respect to a fixed reference two-dimensional coordinate system (0, T, R), where the abscissa is the duration axis (T) and the ordinate is the resource axis (R). Thus, the precedence diagram may be considered as a kinematic chain (open, closed and/or hybrid), where one end of the chain is attached to the fixed reference two-dimensional coordinate system and represents the begin-project activity, while the other end is free and represents the end-project activity; the rigid links represent the precedences between the project activities, and the joints are represented by the project activities. Image 2 presents a precedence diagram analogous to a robotic manipulator in an open kinematic chain.
Image 2.
Precedence diagram in open kinematic chain.
THE MAIN EFFECTS OF THE UNCERTAINTIES IN
PROJECT SCHEDULING
The majority of research efforts related to RCPSP assume complete
information about the scheduling problem to be solved and a static and
deterministic environment within which the precomputed baseline schedule is
executed. However, in the real world, project activities are subject to
considerable uncertainties, stemming from various sources, which are
gradually resolved during project execution.
The presence of uncertainties in project management is recognized by
practitioners and researchers as an important cause underlying the high
project failure rate. Project-related uncertainties can lead to numerous
schedule disruptions. These uncertainties may stem from the following
sources: activities may take more or less time than originally estimated;
resources may become unavailable; material may arrive behind schedule;
ready times and due dates may have to be changed; new activities may have
to be incorporated; activities may have to be dropped due to changes in the
project scope; and weather conditions may cause severe delays.
This chapter makes a distinction between uncertainty and risk, assuming that
there are degrees of knowledge about the estimations of project duration and
resources; the uncertainty may be quantifiable but without probability of
occurrence, while the risk is quantifiable and with probability of occurrence.
Uncertainty may be aleatory due to the intrinsic variability of projects or
epistemic due to the lack of relevant knowledge related to carrying out the
project. Thus, quantifying uncertainty is relevant to project scheduling
because it sheds light on the knowledge gaps and ambiguities that affect the
ability to understand the consequences of uncertainties in project objectives.
COMMON APPROACHES FOR RCPSP UNDER UNCERTAINTIES
From a modelling viewpoint, there are four approaches for dealing with
uncertainty quantification in a scheduling environment where the evolution
structure of the precedence network is deterministic: reactive scheduling,
stochastic scheduling, scheduling under fuzziness and proactive (robust)
scheduling.
REACTIVE SCHEDULING APPROACH
The reactive (predictive) scheduling models do not try to cope with
uncertainty quantification when creating the baseline schedule. Basically,
efforts are largely concentrated on repairing, revising or re-optimizing the
baseline schedule when an unexpected event occurs.
The reactive effort may be based on very simple techniques aimed at a quick
schedule restoration; a typical example is the well-known right-shift rule.
This rule will move forward in time all the activities that are affected by the
delays; often, the precedence diagram must be tailored in order to rearrange
the changed activities. The reactive effort may also be based on a full rescheduling of the precedence diagram.
STOCHASTIC SCHEDULING APPROACH
Most of the literature on the RCPSP assumes that activity durations are
deterministic; however, activity durations are often uncertain. These
uncertainties may be due to different sources, such as estimation errors, late
delivery of resources, unforeseen weather conditions, unpredictable incidents
(machine or worker) and others.
The stochastic RCPSP acknowledges that activity durations are not
deterministic, i.e. the activity durations are modelled as stochastic variables.
Therefore, the stochastic RCPSP aims to schedule project activities with
uncertainty quantification of durations in order to minimize the expected
project duration subject to precedence and constrained resources. The
stochastic project scheduling models view the project scheduling problem as
a multistage decision process; the complete schedule (containing all
activities) is constructed gradually as time progresses by means of a
scheduling policy, exploiting the available information about the uncertainty
of activity durations.
FUZZY SCHEDULING APPROACH
Generally, the uncertainty of activity duration in project scheduling was
handled by stochastic approaches using a probabilistic-based method. This
kind of uncertainty in project duration is associated with randomness.
However, for projects that have never been carried out previously, it is infeasible to determine the probability distribution of activity duration.
Therefore, the fuzzy project scheduling approach is used when the probability
distributions for the activity durations are unknown due to a lack of historical
data and, thus, the activity durations have to be estimated by human experts.
In the situations involving the imprecision instead of uncertainty, the project
scheduling literature recommends the use of fuzzy numbers for modelling
activity durations. Instead of probability distributions, these quantities make
use of membership functions, based on possibility theory.
PROACTIVE (ROBUST) SCHEDULING APPROACH
Traditionally, the robust schedule may absorb some level of unexpected
events (machine breakdowns, staffing problems, unexpected arrival of new
orders, early or late arrival of raw material and uncertainties in the duration of
processing times). Thus, in order to minimize the impacts of uncertainties and
the need of new scheduling or rescheduling, proactive scheduling approach
aims at the generation of a robust baseline schedule that incorporates a certain
degree of anticipation of potential variability or of potential disruptions
according to the objective values (makespan).
The proactive (robust) project scheduling model has prospered widely in the
field of machine scheduling. Redundancy-based techniques related to
durations and resources have already found their way to the field of project
scheduling. The critical chain project management (CCPM) and Success
Driven Project Management (SDPM) methods are becoming increasingly
popular among project management practitioners.
MAIN DIFFERENCES BETWEEN THE COUPLED ESTIMATE TECHNIQUE AND THE COMMON APPROACHES FOR RCPSP UNDER UNCERTAINTIES
This chapter presents a kinematic model named the Coupled Estimate Technique for project scheduling with constrained resources under
uncertainties. This technique considers precedence, duration, resources and
uncertainties related to project activities in order to analytically model the
outcomes of project-related events or conditions (uncertainty) with the
potential of favourable but mainly adverse consequences on project
objectives (duration and resources). This approach can be used to quantify
uncertainties; thus, it can help to solve project scheduling problems related to
the following limitations and disadvantages identified in the literature review:
● The literature on project scheduling under risk and uncertainty was clearly conceived in a machine scheduling environment, whereas this work presents a project scheduling model from the viewpoint of project management.
● Projects are often subject to considerable uncertainty during their execution, but most of the research on project scheduling deals with only one source of uncertainty, most often the duration of activities. With the kinematic model detailed herein, the project activities have a set of attributes represented by the duration and resources as well as the uncertainties related to duration and resources.
● Traditional project scheduling is represented mainly by an activity-on-node network. However, this representation is insufficient and inadequate when the activities of a project have a set of attributes. In the kinematic model, project scheduling is represented by IDEF0 (Integrated Computer-Aided Manufacturing Definition for Functional Modelling).
● Project scheduling methods are focused mainly on the basic RCPSP model. In the kinematic model presented here, project scheduling is considered as a kinematic chain (open (serial), closed (parallel) and/or hybrid) formed by a set of rigid links (precedences of activities) that are connected by joints (project activities), with one fixed extremity (the activity that represents the beginning of the project) and one free extremity (the activity that represents the end of the project), which are represented by a homogeneous transformation matrix.
● The literature on project scheduling states that the generation of proactive (robust) multi-resource baseline schedules in combination with efficient and effective reactive schedule repair mechanisms constitutes a viable area of future research. The kinematic model presented in this chapter provides this combination through the direct and inverse kinematic models. It provides evidence of the influences stemming from uncertainties in project activities, enabling the balancing of durations and resources between project activities and between projects.
Kinematic fundamentals for the robotic manipulators and for
the model proposed by this chapter
This topic aims to present the main fundamentals related to the kinematic
problems for the robotic manipulators or industrial robots, as well as to
present the main fundamentals related to the kinematic model proposed by
this chapter for project scheduling with constrained resources under
uncertainties.
KINEMATIC PROBLEMS FOR ROBOTIC MANIPULATORS
In order to describe and represent the spatial geometry of the links of a robotic manipulator with respect to a fixed reference coordinate system, Denavit and Hartenberg (D-H) proposed a systematic approach utilizing matrix algebra. This systematic approach reduces the direct kinematic problem to finding an equivalent 4×4 homogeneous transformation matrix that describes the spatial relationship between two adjacent rigid links with respect to a fixed reference coordinate system.
In the inverse kinematic problem, given the position and orientation of the
end-effector, we would wish to find the corresponding joint variables of the
robotic manipulator. The inverse kinematic problem is not as simple as the direct kinematic problem, because the kinematic equations are nonlinear and their solution is not easy (or even possible) in a closed form. The inverse kinematic problem may be solved by various methods, such as the inverse transform of Paul, the screw algebra of Kohli and Soni, the dual matrices of Denavit, the dual quaternion of Yang and Freudenstein, the iterative method of Uicker, the geometric approaches of Lee and Ziegler and others. Image 3 represents a synthesis of the direct and inverse kinematic problems.
Image 3.
Synthesis to the direct and inverse kinematic problem for robotic
manipulators.
KINEMATIC MODELS FOR COUPLED ESTIMATE TECHNIQUE
Basically, the project consists of a set of activities that must be performed in
order to complete the project. The kinematic model technique for project
scheduling with constrained resources under uncertainties named as Coupled
Estimate Technique deals with the movements in the scheduled activities
without consideration of the causes of the movements. Each project activity
has a set of attributes represented by activity estimates (duration, resource
and precedence), activity uncertainties (duration, resources and critical factor)
and activity variables (duration and resource).
The outcomes of activities depend on their attributes, and the analytical kinematic model provides a metaschedule for the project. To model a project activity with the kinematic model, it is assumed that the activity estimates are the constants, the kinematic equations are the orientation, and the modelled outcomes depend on the uncertainties and variables. Thus, the kinematic model may be of direct or inverse type, according to Image 4.
Image 4.
Kinematic model for Coupled Estimate Technique.
Therefore, given the activity uncertainties determined by the project team, the direct kinematic model of the CET may be used to model the activity variables. Moreover, given the activity variables determined by the project team, using the inverse kinematic model of the CET, the activity uncertainties may be modelled. It is important to emphasize that the estimates of the activity attributes must be part of the organizational policy used by the team during the project planning. Image 5 illustrates the kinematic model of the CET with the IDEF0 language.
Image 5.
(a) Direct kinematic model and (b) inverse kinematic model.
Coupled Estimate Technique concepts
The Coupled Estimate Technique (CET) is a project scheduling artefact to model the project activity with constrained resources under uncertainties.
This section presents the main aspects of the CET that provide the
mathematical formulation involving the activity estimates (duration, resource
and precedence), activity uncertainties (duration, resources and critical factor)
and activity variables (duration and resource).
Let each project activity be represented in the two-dimensional Cartesian
coordinate system (0, T, R) where the abscissa is the duration axis (T) and the
ordinate is the resource axis (R). The value set of the duration and resource for a project activity lies between the estimated ordered pair (estimated
duration (te) and estimated resource (re)) and the modelled ordered pair
(modelled duration (tm) and modelled resource (rm)).
Estimated ordered pair is represented in the estimated coordinate system (0,
Te, Re), and the modelled ordered pair is represented in the modelled
coordinate system (0, Tm, Rm). The rotation alpha expresses the uncertainty
in the estimation of the activity duration, and the rotation beta expresses the
uncertainty in the estimation of activity resource. Image 6 shows the
graphical representation of one project activity.
Image 6.
Graphical representation of one project activity.
The mathematical formulation of the CET used to model the project activities
is obtained by algebraic operations with homogeneous transformation
matrices (4 × 4). They map a project activity from estimated coordinate
system (estimated ordered pair) to modelled coordinate system (modelled
ordered pair) using the homogeneous coordinate notation. The homogeneous
transformation matrix (H) consists of four submatrices, according to Eq. (1):
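The equation image for Eq. (1) is missing here; a plausible form, inferred from the submatrix descriptions below and consistent with the D-H convention, is:

H = \begin{bmatrix} R_{3\times 3} & P_{3\times 1} \\ f_{1\times 3} & s_{1\times 1} \end{bmatrix}

where R is the rotation submatrix, P the position submatrix, f the perspective transformation submatrix, and s the global scaling factor (set to 1 in this work).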
The rotation submatrix (R3 × 3) transforms one project activity expressed in
the estimated coordinate system to the modelled coordinate system. For
project scheduling, the rotation submatrices are used to produce the effect of
the uncertainty in the estimation of the activity duration by rotation alpha at
the duration axis and the effect of the uncertainty in the estimation of activity
resource by rotation beta at the resource axis. The basic rotation matrices, rotation alpha at the duration axis and rotation beta at the resource axis, are shown in Eq. (2):
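The equation image for Eq. (2) is also missing; assuming the duration axis plays the role of the x-axis and the resource axis the role of the y-axis, the standard basic rotation matrices would be:

R_{T,\alpha} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}, \qquad R_{R,\beta} = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix}

These forms reproduce the step-by-step results described below, including the negative cote component noted after Eq. (5).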
As the duration and resource of the project activity vary between estimated
and modelled values, the position submatrix (P3 × 1) represents the
coordinates of the estimated ordered pair. Therefore, the first element of the
position submatrix is the duration translation or the estimated duration (te),
the second element of the position submatrix is the resource translation or the
estimated resource (re), and as the project activity is represented in a
geometric plane, the third element is null. The position submatrix (P3 × 1) is
shown in Eq. (3):
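Consistent with the description above, Eq. (3) presumably reads:

P_{3\times 1} = \begin{bmatrix} t_e \\ r_e \\ 0 \end{bmatrix}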
The perspective transformation submatrix (1 × 3) is useful for computer
vision and the calibration of camera models. For the mathematical
formulation of the CET, the elements of this submatrix are set to zero to
indicate null perspective transformation. The fourth diagonal element is the
global scaling factor (1 × 1), which is set to 1 in this work. Values different
from 1 are commonly used in computer graphics as a universal scale factor
taking on any positive values. The project activity is mathematically
represented through homogeneous transformation matrices (4 × 4) which are
arranged in four steps to obtain the continuous duration and cost functions as
described in Table 1.
The first step is performed through the product between the homogeneous matrices (HT, α) and (Hte; re), where (HT, α) means a rotation alpha on the duration axis and (Hte; re) means a translation in the duration and resource axes. The algebraic operation of Eq. (4) corresponds to the effect of the uncertainty in the estimation of the activity duration in the estimated resource of the project activity:
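A sketch of the missing Eq. (4): with the rotation matrix R_{T,α} assumed above, the position submatrix of the product H_{T,α} H_{te,re} evaluates to

P_{3\times 1} = \begin{bmatrix} t_e \\ r_e\cos\alpha \\ r_e\sin\alpha \end{bmatrix}

which matches the element-by-element reading given next.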
The first element of the position submatrix (P3 × 1) in Eq. (4) represents the
activity duration; the second represents the estimated resource varying
according to the uncertainty in the estimation of the activity duration. As the
position submatrix (P3 × 1) is a spatial geometric representation (abscissa,
ordinate and cote), the third element may be disregarded because the project
activity is a plane geometric representation (abscissa and ordinate).
The second step listed in Table 1 performs the product between the homogeneous matrices (HR, β) and (Hte; re). The former is the homogeneous matrix composed of a rotation beta on the resource axis, and (Hte; re) means a translation in the duration and resource axes. The algebraic operation of Eq. (5) corresponds to the effect of the uncertainty in the estimation of the activity resource in the estimated duration of the project activity:
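A sketch of the missing Eq. (5) under the same assumptions: the position submatrix of H_{R,β} H_{te,re} evaluates to

P_{3\times 1} = \begin{bmatrix} t_e\cos\beta \\ r_e \\ -t_e\sin\beta \end{bmatrix}

the negative third element being the projection onto the negative semi-axis of the cote coordinate mentioned below.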
The first element of the submatrix (P3 × 1) in Eq. (5) represents the activity
duration varying according to the uncertainty in the estimation of the activity
resource; the second, the activity resource. The third element of the submatrix
(P3 × 1) in Eq. (5) may be discarded because the project activity is mapped at
the geometric plane. The negative signal arises because the projection of the
duration translation is at the negative semi axis of the cote coordinate.
The third step prescribed in Table 1 performs the sum between Eqs. (4) and
(5). The summation shown in Eq. (6) represents the Coupled Estimate
Technique (CET) overall homogeneous matrix that provides the joint effect
of the uncertainties (duration and resource) in the estimations (duration and
resource) of the project activity:
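A sketch of the missing Eq. (6): summing the position submatrices of Eqs. (4) and (5) gives

P_{3\times 1} = \begin{bmatrix} t_e(1 + \cos\beta) \\ r_e(1 + \cos\alpha) \\ r_e\sin\alpha - t_e\sin\beta \end{bmatrix}

whose first two elements, scaled by the critical factor θ, are exactly the modelled duration and resource of Table 2.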
The first element of the position submatrix (P3 × 1) in Eq. (6) represents the
modelled duration in function of the estimated duration and uncertainty in the
estimation of the activity resource. The second element represents the
modelled resource in function of the estimated resource and uncertainty in the
estimation of the activity duration. As the project activity is mapped at the
geometric plane, the third element may be disregarded. However, it might be
used to represent other project goals, e.g. the quantification of the activity
performance or quality.
And finally, the fourth step in Table 1 gives some orientations about the range of the uncertainties (duration and resource) and the critical factor of the project activity:
● The uncertainties must range between 0° (highest degree of uncertainty) and 89° (lowest degree of uncertainty). When alpha and beta equal 0°, the modelled duration equals double the estimated duration (2te), and the modelled resource equals double the estimated resource (2re).
● When alpha and beta equal 90°, there are no uncertainties or certainties. Therefore, the modelled duration equals the estimated duration (te), and the modelled resource equals the estimated resource (re).
● The certainties must range between 91° (lowest degree of certainty) and 180° (highest degree of certainty). When alpha and beta equal 180°, the modelled duration and resource are null; this means a dummy activity, i.e. a project activity without duration and resource.
● The critical factor (theta) of the project activity must be unitary for cases where the modelled value is less than or equal to double the estimated value, and the critical factor (theta) must be greater than one for cases where the modelled value is greater than double the estimated value.
Table 2 shows the mathematical formulations for one project activity of the Coupled Estimate Technique.
Modelled variable | Equation
Duration | tm = θ·te·(1 + cos β)
Resource | rm = θ·re·(1 + cos α)
TABLE 2.
Mathematical formulation of the CET.
Implementation of the direct kinematic model with CET
To demonstrate how the direct kinematic model is implemented, this section
presents a didactic example with scheduling of duration in days, one type of
resource (financial in dollar) and the critical factor unitary for all activities
present in the critical path, according to the four sequential tasks. The main
objective is to obtain the modelled schedule or project duration modelled
(pdm) and modelled budget or project resource modelled (prm) with the
effects of uncertainties of the estimates in the planned schedule or project
duration estimated (pde) and planned budget or project resource estimated
(pre) of the project. Assuming a critical path with the activities in the project
precedence diagram which have the attributes:
● A0: Project beginning
● A1: Duration estimation equal to 30 days with uncertainty equal to 45°; resource estimation equal to $100 with uncertainty equal to 45°
● A2: Duration estimation equal to 20 days with uncertainty equal to 45°; resource estimation equal to $50 with uncertainty equal to 45°
● A3: End of project
As task 1 for implementation of the direct kinematic model with CET, the
planned schedule and budget without the uncertainties must be determined;
thus, the planned schedule or project duration estimated (pde) is equal to 50
days, and the planned budget or project resource estimated (pre) is equal to
$150.
In task 2, the project precedence diagram is modelled with IDEF0 from the
attributes of the project activities. Image 7 presents the IDEF0 with:
● Activities estimates (te1, re1; te2, re2)
● Activity uncertainties (α1, β1, θ1; α2, β2, θ2)
● Activity variables to be modelled (tm1, rm1; tm2, rm2)
● Project duration estimated (pde) and project resource estimated (pre)
● Project duration modelled (pdm) and project resource modelled (prm) to be modelled
Image 7.
Project precedence diagram with IDEF0 for direct kinematic model with
CET.
During the third task, the modelled variables are calculated according to the
kinematic equations of the CET presented in Table 2 and Image 4. From the
estimates and uncertainties of the activities, the modelled variables of project
activities are presented in Table 3.
Activity | Estimates | Uncertainties | Modelled variables
A1 | te1 = 30 days; re1 = $100 | α1 = 45°; β1 = 45° | tm1 = 51 days; rm1 = $170
A2 | te2 = 20 days; re2 = $50 | α2 = 45°; β2 = 45° | tm2 = 34 days; rm2 = $85
TABLE 3.
Modelled variables of the project activities.
And finally, the fourth task describes that the project duration modelled
(pdm) is equal to the sum of the modelled durations, and project resource
modelled (prm) is equal to the sum of the modelled resources. Table 4
presents the project scheduling modelled according to the direct kinematic
model with CET.
Project | Estimates | Modelled | Effects
Duration | pde = 50 days | pdm = 85 days | 35 days beyond planned
Resource | pre = $150 | prm = $255 | $105 beyond planned
TABLE 4.
Project scheduling modelled according to the direct kinematic model with
CET.
Analyzing the information in Table 4, due to the uncertainties in the
estimation of the project duration, the schedule should be increased by 35
days, which represents 70% above the estimated project duration. In the same
manner, due to the uncertainties in the estimation of the project resource, the
budget should be increased by $105, which represents 70% above the
estimated project resource.
Implementation of the inverse kinematic model with CET
To demonstrate how the inverse kinematic model is implemented, this section
presents a didactic example with scheduling of duration in days, one type of
resource (financial in dollar) and the critical factor unitary for all activities
present in the critical path, according to the four sequential tasks. The main
objective is to obtain the modelled uncertainties in the estimation of the
project durations and resources from estimated and performed values of the
project activities. Assuming a critical path with the activities in the project
precedence diagram which have the attributes:
● A0: Project beginning
● A1: Duration estimation equal to 30 days and duration performed equal to 51 days; resource estimation equal to $100 and resource performed equal to $170
● A2: Duration estimation equal to 20 days and duration performed equal to 34 days; resource estimation equal to $50 and resource performed equal to $85
● A3: End of project
As task 1 for implementation of the inverse kinematic model with CET, the
planned schedule and budget as well as the performed schedule and budget
with the effects of the uncertainties must be determined; thus, the planned
schedule or project duration estimated (pde) is equal to 50 days, the planned
budget or project resource estimated (pre) is equal to $150, the performed
schedule or project duration performed (pdp) is equal to 85 days and the
performed budget or project resource performed (prp) is equal to $255.
In task 2, the project precedence diagram is modelled with IDEF0 from the
attributes of the project activities. Image 8 presents the IDEF0 with:
● Activity estimates (te1, re1; te2, re2)
● Activity uncertainties (α1, β1, θ1; α2, β2, θ2) to be modelled
● Activity variables, which for the inverse kinematic model are the performed duration and resource (tp1, rp1; tp2, rp2)
● Project duration estimated (pde) and project resource estimated (pre)
● Project duration performed (pdp) and project resource performed (prp)
Image 8.
Project precedence diagram with IDEF0 for inverse kinematic model with
CET.
For the third task, the kinematic equations of the CET presented in Table 2
must be adapted, with the modelled variables replaced by the performed
variables according to Image 5. From the estimated and performed values,
the modelled uncertainties of the project activities are presented in Table 5.
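The inverse model simply solves the Table 2 equations for the angles, given the performed values. A minimal sketch, ours rather than the chapter's:

import math

def cet_uncertainties(te, re, tp, rp, theta=1.0):
    # Invert tp = theta*te*(1 + cos(beta)) and rp = theta*re*(1 + cos(alpha));
    # valid while each performed value is at most double its estimate.
    beta = math.degrees(math.acos(tp / (theta * te) - 1.0))
    alpha = math.degrees(math.acos(rp / (theta * re) - 1.0))
    return alpha, beta

alpha1, beta1 = cet_uncertainties(30, 100, 51, 170)  # both ~45.6 deg

For activity A1 (30 days performed as 51 days, $100 performed as $170), both angles come out at roughly 45°, matching the uncertainties assumed in the direct example.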
In task 4, the uncertainty of the project must be analyzed. There are several
ways to determine the project uncertainty; frequently, this choice is made by
the project team taking into account the nature and challenges of the project.
The following are some suggestions:
● The project uncertainty may be determined by the average of the uncertainties of the activities; for the case studied in this section, the project uncertainties for duration and resource are both equal to 45°.
● The project uncertainty may be determined by the greatest or lowest uncertainty among the activities.
● The project uncertainty may be determined by a weighted sum of the uncertainties of the activities.
Comments and conclusions
This chapter presented an interesting example of cross-fertilization
between different areas, where the manipulator kinematic concepts (direct
and inverse) were tailored and implemented in order to provide an innovative
solution for project scheduling with constrained resources under
uncertainties. Basically, the homogeneous transformation matrices (4 × 4)
were used to model the schedule (time) and budget (resource) of the projects,
taking into account the uncertainties in the estimates.
From the perspective of project management, the direct kinematic model may
be applied to project scheduling in order to model the schedule and budget
when the duration and resource estimations are not completely known; that
is, there are degrees of uncertainty related to duration and resource
estimation. In turn, the inverse kinematic model may be applied to
implemented projects in order to model or quantify the degrees of
uncertainty related to duration and resource estimation. These modelled
uncertainties of the implemented project may be used during the project
scheduling of similar projects.
The modelled outcomes provide information that can enhance the schedule
and budget processes of project management, helping to achieve a realistic
project schedule. They also provide information about project
risk management processes, helping to identify, analyze and respond to
uncertainties that are not definitely known but pose the potential for
favourable or, more likely, adverse consequences on project objectives
(duration and resources).
However, the model represents a conjecture of a phenomenon and, therefore,
an approximation of the behaviour of that phenomenon. Thus, the modelled
outcomes of the kinematic model for project scheduling with constrained
resources under uncertainties must be critically analyzed by a project team.
A New Methodology for Kinematic
Parameter Identification in Laser Trackers
The development of more accurate large-scale measurement systems is a
critical need in the verification of large parts and facilities in sectors with
high-quality requirements as in naval, aeronautic, or spatial industries. In
these industries, dimensional accuracy of large parts needs long-range
accurate measuring devices to ensure not only the parts right dimensions but
also the precise positioning of every part in large assemblies.
Their applications are very wide such as large-volume measurements,
inspection, calibration of an industrial robot, reverse engineering, analysis of
deformations, machine tool volumetric verification, and so on.
The laser tracker (LT) offers significant advantages such as portability,
flexibility, precision, and high-speed data acquisition. However, the
mechanical assembly is an important source of errors, such as offsets or
eccentricities, which generate errors in measurements. A disadvantage of
these measurement systems is that the user cannot know whether the LT is
measuring correctly. Existing standards provide tests to evaluate the
performance of the LT. However, these tests require specialized equipment
at a high cost and are very time-consuming.
The calibration procedure identifies the geometric parameters to improve
system accuracy. However, there are few studies about the calibration of LTs.
The basis of the calibration procedure is to determine the parameters of the
geometric error model by measuring a set of reflectors located at fixed
locations from different positions of the LT. One of the advantages of this
method is that specialized equipment is not required, so that any user of a
measurement system could perform the LT calibration. Furthermore, the
time required to calibrate the LT is considerably reduced compared with the
time required to carry out the functional tests recommended by ASME B89.4.19.
This chapter aims to present a method for performing a quick and easy
calibration by measuring a mesh of reflectors to improve the accuracy of the
LT. We have developed a kinematic error model and a generator of synthetic
points to evaluate the procedure performance. Later, an experimental trial to
identify the geometric parameters and a sensitivity analysis to determine the
most appropriate instrument calibration measurement positions have been
performed.
What is a laser tracker?
An LT is a long-range metrological device with an accuracy of some tens of
micrometers. The LT is composed of an interferometer mounted on a
two-degrees-of-freedom rotary gimbal. The laser beam reflects off a spherically
mounted retroreflector (SMR), which returns the beam to the interferometer.
Any displacement of the SMR is detected by position-sensitive devices,
which adjust the LT to follow the SMR position. The distance
measured by the interferometer, (d), along with the readings of the encoders
placed in each one of the two rotatory axes, (θ, ϕ), gives the spherical
coordinates of the SMR centre referring to the LT reference system as shown
in Image 1. Knowing the spherical coordinates, the corresponding Cartesian
coordinates can also be calculated based on Eqs. (1)–(3).
Image 1.
Laser tracker working principle.
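Eqs. (1)–(3) are the usual spherical-to-Cartesian conversion. The sketch below assumes θ is the azimuth about the standing axis and ϕ the zenith angle; the chapter's exact sign and origin conventions may differ.

import math

def spherical_to_cartesian(d, theta, phi):
    # d: interferometer distance; theta, phi: encoder angles in radians
    x = d * math.sin(phi) * math.cos(theta)
    y = d * math.sin(phi) * math.sin(theta)
    z = d * math.cos(phi)
    return x, y, z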
Laser tracker kinematic model
The LT can be considered as an open kinematic chain with three joints: two
rotary joints in the gimbal and a prismatic joint corresponding to the laser
beam. The kinematic model is a mathematical expression that determines the
position of the final joint of the kinematic chain (the SMR centre) with
reference to the LT frame. We have used the Denavit-Hartenberg (D-H)
formulation to develop the kinematic model. This method defines the
coordinate transformation matrices between each two consecutive reference
systems j and j − 1 as the product of the rotation and translation matrices
from j − 1 joint to j joint. The DH model needs four characteristic parameters
(distances dj, aj, and angles θj, αj) for these matrices. Image 2 shows the
relationships between the consecutive reference systems. Knowing the
kinematic parameters corresponding to every joint, the homogeneous
transformation matrix from reference system j to j − 1 is the result of the
product of the four rotation and translation matrices shown in Eq. (4).
Image 2.
Denavit-Hartenberg model.
where Tm,n and Rm,n are the homogeneous matrices corresponding to a
translation (T) or rotation (R) of magnitude n along or about axis m.
Following the DH model, the LT kinematic model has been determined as
shown in Image 3. The position of the reflector referring to the LT reference
system is defined by Eq. (5).
Image 3.
Laser tracker kinematic model.
This chapter is based on the LT API Tracker 3. Table 1 shows the kinematic
parameters corresponding to this LT model.
i | αi (°) | ai (mm) | di (mm) | θi (°)
1 | −90 | 0 | 0 | θ − 90
2 | 90 | 0 | 0 | ϕ − 90
3 | 0 | 0 | d | −90
TABLE 1.
Laser tracker kinematic parameters.
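As an illustration of Eqs. (4) and (5), the forward model can be coded directly from Table 1. The elementary-matrix ordering Rz(θ)·Tz(d)·Tx(a)·Rx(α) is the standard D-H convention and is assumed here; the sketch is ours.

import numpy as np

def dh_matrix(alpha, a, d, theta):
    # Homogeneous transform from frame j-1 to j (standard D-H, angles in rad)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def lt_forward(theta, phi, d):
    # SMR position in the LT frame, built from the Table 1 parameters
    T01 = dh_matrix(np.radians(-90), 0.0, 0.0, theta - np.radians(90))
    T12 = dh_matrix(np.radians(90),  0.0, 0.0, phi - np.radians(90))
    T23 = dh_matrix(0.0,             0.0, d,   np.radians(-90))
    return (T01 @ T12 @ T23)[:3, 3]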
Laser tracker error model
Relative positions between two consecutive joints are defined by the DH
model. These are the nominal positions, but they are conditioned by the LT
errors. This means that the reference frames are not at their expected positions
as shown in Image 4. An error model that fits the geometry of the joints is
necessary to modify the kinematic model.
Image 4.
Error model.
In this chapter, the error model shown in Image 5 has been used. This model
is based on six degrees of freedom error for each joint. Mathematically, the
error is stated as a transformation matrix between the nominal joint frame and
the real frame. The matrices are different for rotary joints (Eq. (6)) and linear
joints (Eq. (7)). For each error matrix, a new set of six error parameters is
defined (δx, δy, δz, εx, εy, εz). The calibration procedure will calculate the
optimum parameter values to minimize the LT error.
Image 5.
Error parameters.
Including the error matrices in Eq. (5), Eq. (8) is obtained, which describes
the kinematic model together with the error model.
By checking the behavior of the error model, it has been proven that errors
depend on the joint position (the rotation angle in rotary joints and distance of
the interferometer). This means that error parameters must have a formulation
depending on the joint position. For the error parameters corresponding to the
linear error matrix, we have used a polynomial function (see Eq. (9)) and for
the error parameters corresponding to the rotary error matrices, a Fourier
shape function is more convenient because of its periodic behavior (see Eq.
(10)).
For φ = δx, δy, δz, εx, εy, εz.
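A sketch of the two functional forms: a plain power series for Eq. (9) and a truncated Fourier series for Eq. (10). The series orders and coefficients are exactly what the calibration identifies; names and exact forms are our assumptions.

import numpy as np

def linear_error_param(d, coeffs):
    # Eq. (9): phi(d) = c0 + c1*d + c2*d**2 + ... (coeffs lowest order first)
    return np.polyval(coeffs[::-1], d)

def rotary_error_param(angle, a0, a, b):
    # Eq. (10): truncated Fourier series, periodic in the joint angle (rad)
    n = np.arange(1, len(a) + 1)
    return a0 + np.sum(a * np.cos(n * angle) + b * np.sin(n * angle))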
Model validation
The kinematic error model must be validated. The validation has been
performed first with synthetic values and then with real values.
SYNTHETIC DATA VALIDATION
A parametric generator of meshes of reflectors with known errors has been
programmed. This algorithm generates a mesh of synthetic reflector
coordinates with nominal position values. Then a set of error parameters is
introduced, and the theoretical measurements of an LT, affected by the error
parameters introduced, are calculated.
These measurements are introduced in the calibration procedure to calculate
the error parameters with the optimization of the function described in Eq.
(11) and correct the measurements.
Finally, the initial measurements and the corrected ones are compared with
the nominal reflector positions according to Eq. (12) to calculate the
calibration accuracy improvement achieved.
The error parameter set is shown in Table 2.
Three meshes of synthetic reflectors have been generated:
Flat YZ plane mesh 15 × 14 = 210 reflectors (Image 6).
Image 6.
Errors in a plane mesh.
X = 5,000 mm constant.
Y = −10,000 ÷ 10,000, Δ 1,420 mm.
Z = 1,500 ÷ 5,000, Δ 500 mm.
Cubic XYZ mesh 6 × 6 × 6 = 216 reflectors (Image 7).
Image 7.
Errors in a cubic mesh.
X = −10,000 ÷ 10,000, Δ 4,000 mm.
Y = −10,000 ÷ 10,000, Δ 4,000 mm.
Z = −10,000 ÷ 10,000, Δ 4,000 mm.
Spherical HVR mesh 12 × 8 × 5 = 480 reflectors (Image 8).
Image 8.
Errors in a spherical mesh.
H = 0° ÷ 360°, Δ 33°.
V = 77° ÷ −60°, Δ 20°.
R = 1,000 ÷ 15,000, Δ 3,500 mm.
The calibration procedure calculates the parameter errors and corrects the
generated measurements. As the error parameters are calculated through a
mathematical optimization, the result does not give exactly the nominal
parameters but provides a set of parameters that minimizes the LT error. In
fact, the calibration reduces the LT error by more than 98%. For example,
Table 3 shows the calculated error parameters corresponding to the spherical
mesh.
REAL DATA VALIDATION
The validation with synthetic data shows that the programmed algorithms are
working properly but, as the errors have been generated with the proposed
error model, a good calibration result was expected. It is necessary to check
the calibration behavior with real data. To do this, an experiment has been
performed. A set of 17 reflectors has been placed over the table of a
coordinate measuring machine (CMM). Then the positions of these reflectors
have been measured with the CMM and with the LT from five different
positions as shown in Image 9.
Image 9.
Real data validation experiment.
The estimation of the error parameters has been performed on the basis that
the distances between every pair of reflectors must be the same regardless of
the LT position from which they have been measured. Eq. (13) compares the
distances measured by the CMM and the LT, where dmik is the distance
measured between the i-th pair of reflectors from the k-th LT position and
diCMM is the same distance measured by the CMM. Cn,r is the number of
possible pairs of reflectors; in this case, i = C2,17 = 136 pairs of reflectors.
Image 10 and Table 4 show the initial and residual errors of the reflector
mesh.
of the reflectors mesh.
Image 10.
CMM reflectors mesh calibration.
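The Eq. (13) comparison reduces to pairwise distances, which are invariant to the LT pose. A sketch under the assumption that each LT position yields an ordered array of the 17 reflector centres; names are illustrative.

from itertools import combinations
import numpy as np

def distance_residuals(lt_meshes, cmm_points):
    # cmm_points: (17, 3) array; lt_meshes: one (17, 3) array per LT position
    pairs = list(combinations(range(len(cmm_points)), 2))  # C(17, 2) = 136 pairs
    d_cmm = np.array([np.linalg.norm(cmm_points[i] - cmm_points[j])
                      for i, j in pairs])
    residuals = []
    for pts in lt_meshes:
        d_lt = np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])
        residuals.append(d_lt - d_cmm)  # d_mik - d_iCMM for every pair i
    return np.concatenate(residuals)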
In the real calibration that is the object of this work, we measure a mesh of
reflectors outside the metrological laboratory, where there are no nominal
data to calculate the error parameters. To simulate the calibration procedure
behavior and its requirements under real conditions, we have used the CMM
measurements with a new optimization function. This function is equivalent
to Eq. (11) but, instead of comparing CMM and LT distances, we compare
distances from every pair of LT positions as shown in Eq. (14).
With this optimization criterion, we found that the calibration result increased
the LT error. That is due to the fact that the mathematical optimization
matches the distances, but to a value different from the nominal one. This
means that it is necessary to introduce a calibrated distance gauge in the
reflector mesh to determine its behavior. After several simulations, a gauge
of four reflectors gave the best results, and the objective function is as shown
in Eq. (15).
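How such an objective might be posed for a nonlinear least-squares solver is sketched below. Here correct_mesh is a hypothetical stand-in for applying the error model of Eqs. (6)–(10) with candidate parameters, and the data layout is our assumption, not taken from the chapter.

from itertools import combinations
import numpy as np
from scipy.optimize import least_squares

def pair_distances(pts):
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

def objective(params, meshes, gauge_pairs, gauge_lengths):
    corrected = [correct_mesh(m, params) for m in meshes]  # hypothetical helper
    res = []
    # distances must agree between every pair of LT positions (Eq. (14))
    for a in range(len(corrected)):
        for b in range(a + 1, len(corrected)):
            res.append(pair_distances(corrected[a]) - pair_distances(corrected[b]))
    # gauge distances must match their calibrated lengths (Eq. (15) term)
    for (i, j), length in zip(gauge_pairs, gauge_lengths):
        for mesh in corrected:
            res.append(np.atleast_1d(np.linalg.norm(mesh[i] - mesh[j]) - length))
    return np.concatenate(res)

# result = least_squares(objective, x0, args=(meshes, gauge_pairs, gauge_lengths))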
To evaluate the calibration results, we have established two different criteria:
(1) a distances criterion that evaluates the differences of distances between
every pair of reflectors measured from every pair of positions of LT
according to Eq. (16) and (2) a coordinates criterion that evaluates the
position error for every reflector measured from every LT position (see Eq.
(17)). This second criterion first requires transforming all measurements to
the same reference system.
The results of the calibration following both criteria are shown in Tables 5
and 6.
Sensitivity analysis
In order to determine the SMR positions most appropriate for performing the
calibration, a sensitivity analysis has been performed. In this analysis, the
influence of every error parameter on the global LT measuring error has been
analyzed. Using the synthetic data generator, as many synthetic measurement
meshes as error parameters considered in the error model have been
generated; thus, 18 meshes are necessary. All of them have been generated
with the same nominal coordinates as the spherical mesh generated in the
synthetic data validation (Section 5.1), and every mesh is affected by a
single error parameter with a value of 1 µm for linear error parameters or
1 µrad for angular error parameters.
Image 11 shows the error produced by every error parameter through the
spherical mesh.
Image 11.
Sensitivity analysis.
As a result, it is deduced that all distance error parameters (δ) produce
constant errors. Errors due to parameters εx1, εy1 and εz1 depend on θ, ϕ and d;
errors due to parameters εx2 and εy2 depend on ϕ and d; and errors due to
parameters εz2, εx3 and εy3 depend only on d. Finally, parameter εz3 produces
no errors. Taking a closer look at the parameters that produce variable errors,
we can see in Image 12 that the maximum and minimum error values
correspond to maximum, minimum and zero values of ϕ, and also occur on θ
every π/4.
Image 12.
Sensitivity analysis of variable errors.
Experimental setup
The sensitivity analysis proves that extreme and zero values of the tilt angle
are the best. It also proves that the pan angle range must be at least 90°.
According to these requirements, 24 reflectors have been placed in a corner
of the laboratory: eight of them spread on the floor (minimum tilt angle), a
second set of eight reflectors on the wall at the LT height (zero tilt angle) and
the last eight reflectors in the upper part of the wall (maximum tilt angle).
The LT has been placed at five different positions, always covering a pan
angle of 90°, as shown in Images 13 and 14.
Image 13.
Calibration experimental setup.
Image 14.
View of the experimental calibration process.
Calibration results
The calibration has been performed following the model in Eq. (15) and
evaluated according to Eqs. (16) and (17) in the same way as it has been done
with the CMM measurements. A gauge of four reflectors has also been
included in the mesh of reflectors, and this gauge has been measured on the
CMM to know the real distances among its reflectors.
Images 15 and 16 show the calibration results. Table 7 presents the
numerical values of the calibration as a function of the evaluation method.
Image 15.
Calibration results distances criteria.
Image 16.
Calibration results coordinates criteria.
Calibration verification
Calibration results show an LT accuracy improvement according to both
criteria used but, as we do not have the SMRs' real positions, it is not possible
to ensure beyond doubt that the calibration procedure increases the LT
accuracy.
A new verification is therefore necessary to assess the calibration procedure
behavior. In Section 5, a set of SMRs has been measured with the LT from
five different positions. These SMRs were placed on a CMM table and
measured also with the CMM. These accurate measurements can be used as
nominal data to check whether the calibration has improved the LT accuracy
or not. The error parameters obtained in the calibration procedure have been
applied to the measurements made on the CMM table. The corrected values
can then be compared to the CMM nominal measurements; the comparison
can be seen in Images 17 and 18 and its values in Table 8.
Image 17.
Calibration verification distances criteria.
Image 18.
Calibration verification coordinates criteria.
Contribution of the SMR incidence angle in the measurement
uncertainty
In some of the measurements made, the position of the SMR could not be
reached by the LT beam. The SMR maximum viewing angle is within ±30°,
and they were placed facing a theoretical point in the middle of the LT
selected positions. However, as the SMR positions and orientations are fixed
along all the measurements, and they are manually placed, there is the
possibility that some of them could not be visible from all the LT positions
because the incidence angle was out of the SMR viewing range.
The incidence angle of the laser beam on the SMR has an important influence
on the measurement accuracy, and an experiment has been performed to
measure the contribution of the SMR incidence angle to the measurement
uncertainty.
An SMR with its magnetic holder has been placed on a rotary worktable. The
SMR has been centred so that its centre lies on the worktable rotation axis.
The SMR was centred using the rotary worktable pencil, and its centring
accuracy has been measured to be in the range of ±0.1 μm.
The SMR has been initially located with its incidence angles equal to 0°,
facing an interferometer laser beam. The interferometer measurement has
then been reset to make this position the zero-length measurement.
The SMR has then been rotated about its horizontal and vertical angles within
its available incidence range of ±30° in 7.5° steps, as shown in Image 19.
Image 19.
SMR incidence angle error measurement.
Data measured by the interferometer are shown in Table 9. An important
dependence on the angle variation can be seen, showing the influence of the
vertex position error, that is, the distance between the optical centre of the
CCR and the SMR sphere.
Conclusions
A new LT kinematic calibration has been presented and verified by
comparing calibration results with nominal data measured with a CMM. The
novelty of the method is that a final calibration of the LT can be made by the
LT user on site, just before measuring with the LT under real working
conditions. This can greatly help the LT measurement process by assuring a
correct calibration at the moment of measuring. The only devices needed for
the calibration are a calibrated gauge and a set of reflectors to be located at
the measuring place.
The kinematic error model has been defined. This model has also been
validated with synthetic and nominal data. The study of the influence of
every error parameter in the global error of the LT has shown the best
configuration for the experimental setup.
The calibration procedure has been performed with a previously calibrated
LT, and the calibration has been able to improve the factory calibration of the
LT.
The influence of the laser incidence angle on the measurement uncertainty
shows an important contribution to the measurement errors.
The kinematic calibration model developed offers important advantages
compared to conventional methods. Existing standards require strict
temperature conditions, and a large number of measurements are needed to
perform the calibration. The proposed method can be used in two ways.
First, the distance error calculated for every pair of reflectors measured from
different LT locations gives a dimensional value of the LT accuracy, which
will help the user to know whether a calibration of the device is necessary or
not. Second, if the calibration is necessary, it can be performed by the
final user between the programmed calibrations without the need of a
metrological laboratory. It can also be used to develop new calibration
standards or to complete the existing ones.
Future work
Two constructive LT models can be found from different manufacturers. The
proposed method is valid for LTs having the laser source in the rotating
head. The other model is characterized by having the laser source in the
fixed base of the LT. This means that it needs a rotating mirror attached to
the standing axis to reflect the laser beam from the source to the SMR. The
calibration procedure followed in the present work can also be applied to this
second LT constructive model by adapting the kinematic model to the LT
geometry and the laser beam path.
Along with the development of this kinematic model, further tests are
advisable to study the behavior of the calibration method under different
conditions such as measurement range, temperature, and number and
distribution of reflectors.
Optimization of Single-Sided Lapping
Kinematics Based on Statistical Analysis of
Abrasive Particles Trajectories
Nowadays, products have to be manufactured with both high quality and
efficiency. Finishing technologies that make it possible to achieve high-quality
surfaces with low roughness and very high accuracy of shape and dimensions
are becoming increasingly important in the modern world. These results can
be obtained in the lapping process with the use of relatively simple means of
production. It is one of the oldest machining processes, and a number of
precision manufacturing applications still use the lapping process as a critical
technology to achieve flatness tolerance and surface quality specifications.
This technology is used for machining metals and their alloys, glasses,
natural materials such as marble, granite and basalt, materials used in
semiconductor technology, as well as carbon, graphite, diamond and
ceramics, which have found use in many engineering applications. Typical
parts processed by lapping are: pneumatic and hydraulic parts (valve plates,
slipper plates, seals, cylinder parts, and castings), pump parts (seal faces,
rotating valves, and body castings), transmission equipment (spacers, gears,
shims, and clutch plates), inspection equipment (test blocks, micrometer
anvils, optical flats, and surface plates), cutting tools (tool tips and slitter
blades), and automotive and aerospace parts (lock plates, gyro components,
and seals). All abrasive machining operations are complicated and random
processes with a large number of influencing parameters and outcomes. As a
result of the numerous variables affecting process quality, the main outcomes
of lapping are stock removal, roughness and flatness. In order to ensure the
highest quality and accuracy of worked surfaces, research should focus on
improving the lapping process by studying significant process variables.
Generally, all lapping processes can be subdivided according to the active
surface of the lapping tool. The lapping process where the tool axis and the
workpiece surface are parallel to each other is known as peripheral lapping,
and side lapping applies when the tool axis and the workpiece surface are
perpendicular to each other. More specifically, the most used lapping
processes can be classified according to the generated surfaces, the process
kinematics and the tool profiles. The most well-known systems
are single-sided and double-sided lapping machines. Double-side surface
lapping is considered as the most accurate method in terms of parallelism and
uniformity of size as two parallel even surfaces are simultaneously lapped
during this process. These kinds of machine tools have a planetary system. In
case of flat lapping, the standard systems with conditioning rings are mostly
used.
Due to the wear, the active surface of the lapping plate has some shape errors
of convexity or concavity. This has a major impact on the shape accuracy of
the workpieces. If the lapping plate is out of flat, it should be re-conditioned.
A kinematic method for correcting the tool shape errors can be applied.
Workpieces' shape and size, as well as the lapping kinematics, determine the
contact between workpieces and the tool. It was observed that the trajectory
distribution of abrasive particles on the lapping plate varies when the
kinematic conditions are changed, e.g., by placing the workpieces at different
radii, setting different rotational velocities or by introducing additional
movements of the conditioning ring. In this chapter, the unconventional
kinematic systems are described. The simulations carried out have shown
that the speed ratio k1 and the period ratio k2, which represent the
relationships among the three basic motions of unconventional lapping
systems, are the major factors affecting trajectory distribution. The
optimization of these parameters was conducted, with the uniformity of tool
wear assumed as the main optimization criterion.
Main factors of lapping process
The lapping system consists of several elements: lapping plate, abrasives,
lapped workpieces, kinematics and machine. They influence the lapping
process which determines the product quality, tool wear and efficiency of the
process. The input factors of the lapping process fall into two categories:
controllable and non-controllable (noise) factors. The first category includes
machining parameters (working pressures and speeds), abrasive type,
characteristics of the work equipment, tool, machine, duration of machining,
etc. The uncontrolled input variables include, inter alia, environmental
temperature, slurry distribution, vibrations occurring in the system, internal
stress, etc. An overview of the main factors of the lapping process is
presented in Image 1.
Image 1.
Main factors of lapping process.
One of the most important input parameters influencing surface formation
and material removal in the lapping process is the lapping plate: its
geometry, hardness, etc. Generally, it can be established that workpieces
are machined to a mirror image of the tool with respect to flatness. During
the lapping process, abrasives play an important role in material removal
based on their size, shape, and hardness. It has been shown that the material
removal rate increases with increasing abrasive size up to a critical size, at
which it begins to stabilize and even decrease. Moreover, multiple studies
have also shown that surface roughness improves with decreasing grain size.
The properties of the abrasive vehicle, which is the medium that transports
the abrasive across the workpiece, are important to how effective the
abrasives are in material removal. In addition, the flatness, geometry and
arrangement of workpieces also affect the lapping results. The material
hardness of workpieces has a crucial influence on the removal mechanism in
abrasive processes, as well as on lapping efficiency. Surface formation of
fragile workpieces, e.g., ceramics, occurs as a result of crack propagation or
by plastic deformation when the depth of cut is smaller than the critical value.
There has been a great amount of research conducted on the kinematics of
lapping. The lapping pressure and lapping speed can be seen as the main
variables in the lapping process. They are closely interdependent with the
kinematic system and machine parameters.
Abrasive wear in single-sided lapping process
One of the most important mechanisms of the lapping process is abrasive
wear. This process describes the separation of material from surfaces in
relative motion, caused by protrusions and particles between the two surfaces
or fixed in one of them. In the lapping process, abrasive grains are guided
across the surface to be lapped and backed up by a lapping plate or
conditioning rings. It
is crucial to minimize friction and wear of the abrasive and to maximize
abrasive wear of the workpiece.
Abrasive wear is commonly classified into two types: two- and three-body
abrasion. In two-body abrasion, particles are rigidly attached to the second
body. When abrasive particles are loose and free to move, then we deal with
three-body abrasion. Therefore, in two-body abrasion, the abrasive grains
can cut deeply into the workpiece due to the sliding motions, while in
three-body abrasion the particles spend part of the time cutting and part of the
time rolling.
Also in the lapping process, some abrasives move into active positions while
other grains leave the working gap. Only a certain fraction of all particles is
able to enter the working gap of height H, which corresponds to the lapping
pressure F. Image 2 presents abrasive wear in the single-sided lapping
process and the different types of lapping grains present in the working gap
between tool and workpieces. As can be seen in Image 2, abrasives play a
crucial role in material removal based on size, shape, and hardness. For
example, it can be noticed that grain A is too small and grain B is too big.
This means that only some of the abrasives are active; they roll like grain C
or slide like grain D. An example of a passive grain is grain E, which does
not adhere to the working partner.
Image 2.
Single-sided lapping process: (1) lapping plate, (2) separator, (3) workpiece,
(4) felt pad, (5) weight system, (6) conditioning ring.
Moreover, the regular movement of grains into and out of the working gap
causes changes in the initial structure of the lapping plate. First, the most
heavily loaded grains disintegrate into smaller ones, and their particles are
able to take part in the lapping process depending on their size and shape.
Hence, the number of bigger grains is reduced while the number of smaller
grains increases, and the structure of the lapping tool changes. The changing
trajectories and velocities of abrasives cause some grains to crash into each
other or into grain fragments. It can be established that after such crashes,
grains accumulate and can disintegrate at any time. The link between a
workpiece and the lapping plate is formed by active grains, which through
their impact and edges transfer the normal forces into the surface of the
working partner. Acting forces are proportional to the material volume. To
reach a stationary working gap height, the sum of all normal forces
transferred by active grains must be equal to the pressing force F [3].
Kinematic analysis of standard single-side lapping system
SINGLE-SIDED LAPPING MACHINE OVERVIEW
Nowadays, there are many manufacturers that offer surface planarization
technology with lapping kinematics. The most well-known are Peter Wolters,
Lapmaster, Stähli, Engis, Hamai, Kemet, Mitsunaga and LamPlan. After a
careful analysis of numerous studies and the offers of many lapping machine
producers, it has emerged that most lapping machines have a standard
kinematic system. Single-sided lapping machines differ in tool size, diameter
of the lapping plate, size and number of conditioning rings, and type of
pressure system (weight or pneumatic system). More complex machines are
equipped with a forced-drive option for the conditioning rings. Such a system
maintains a constant speed of the workpieces.
A conventional single-sided lapping machine is shown in Image 3. The key
component of the machine is the tool, i.e., the annular-shaped lapping plate
(1), to which the workpieces (3) are applied. One machine usually has
three to four conditioning rings (5); however, laboratory lapping machines
with one ring are also popular. The lapping plate (1) rotates with angular
velocity ωt and drives the conditioning rings, where separators (4) are placed
allowing additional movement of the workpieces (3). Due to the frictional
force, the conditioning rings (5) rotate on the working surface of the lapping
plate (1) with angular velocity ωs. This force depends on the radial position,
the velocity of the conditioning rings, and the friction conditions. The radial
position (R) of the conditioning rings can be controlled with roller forks (2).
During the lapping process, a certain load is applied through a felt pad (6) by
a weight disk (7) or a pneumatic system. In this way, the parts are pressed
against a film of abrasive slurry that is continuously supplied to the rotating
lapping plate.
Image 3.
Single-sided lapping machine: (1) lapping plate, (2) roller forks, (3)
conditioning ring, (4) separator, (5) workpieces, (6) pad, (7) weight.
KINEMATICAL FUNDAMENTALS
Due to the fact that the kinematics of lapping is affected by a number of
factors related to the interaction and properties of the workpiece–abrasive
slurry–lapping plate system, in the following model considerations it is
assumed that the angular velocities of the conditioning rings, separator and
workpieces are identical. Moreover, the conditioning rings' role is to even
out the lapping plate (Image 4).
Image 4.
Kinematic diagram of single-sided lapping machine: (1) lapping plate, (2)
separator, (3) workpiece, (4) conditioning ring.
The input parameters for analysis are: angular velocity of lapping plate ωt and
of conditioning rings ωs, inner Rdw and outer Rdz diameter of lapping plate,
radial position of conditioning ring Rp.
In order to model the lapping plate, the position of any point P belonging to a
workpiece must be determined. This is possible by means of a radius vector
in two coordinate systems: absolute and relative, the latter being related to
the rotating tool. The position of any point P(r, ϕp) belonging to a workpiece
in the single-sided lapping system is determined in the x″y″ coordinate
system, which is related to the conditioning ring, as:
The coordinates of point P in x′y′ coordinate system are:
The relative velocity v of point P is defined as the derivative of the position
with respect to time:
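The position and velocity equations themselves did not survive extraction here, but under the assumptions above (ring centre fixed at radial position Rp, ring spinning at ωs, plate rotating at ωt) the path of a point in the plate frame can be traced numerically. A sketch with an assumed sign convention:

import numpy as np

def point_trajectory(r, phi_p, Rp, w_t, w_s, t):
    # Point P at radius r and phase phi_p on the conditioning ring whose
    # centre sits at (Rp, 0) in the fixed frame; t is an array of times.
    X = Rp + r * np.cos(w_s * t + phi_p)   # fixed (absolute) frame
    Y = r * np.sin(w_s * t + phi_p)
    # express in the plate frame by rotating back through the plate angle w_t*t
    x = np.cos(w_t * t) * X + np.sin(w_t * t) * Y
    y = -np.sin(w_t * t) * X + np.cos(w_t * t) * Y
    return x, y

With this convention, k1 = ωs/ωt = 0 yields a concentric circle and k1 = 1 an eccentric circle in the plate frame, consistent with the path types discussed below.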
SINGLE ABRASIVE TRAJECTORIES ANALYSIS
The equations for the position of any point belonging to a workpiece in the
standard single-sided lapping system were implemented in a MATLAB
program. This program allows the analysis of the single-sided lapping system
and can be used to trace the cycloid paths, which can be treated as areas
where the lapping plate is worn by a grain placed at a specific location of a
conditioning ring or workpiece. In order to analyze single abrasive
trajectories, an additional parameter must be defined, namely the rotational
speed ratio of the conditioning ring to the lapping plate:
The analysis of multiple simulations leads to the conclusion that the k1
parameter determines the type of cycloid path. Results are shown in Image 5.
It can be observed that epicycloid trajectories are marked out when the value
of k1 is less than 0. When the value of k1 is close to 0, stretched epicycloids
and then interlaced epicycloids are obtained. Pericycloids appear when the
value of k1 is between 0 and 1. At k1 greater than 1, hypocycloids are
obtained; initially they are stretched hypocycloids, and then they transform
into interlaced hypocycloids. Attention should be paid to the following
trajectories: cardioid (k1 = −1), concentric circle (k1 = 0), eccentric circle
(k1 = 1) and concentric ellipse (k1 = 2). Moreover, the cycloid curvature
radius RP(t) is a periodic function of time t and its cycle equals:
Image 5.
Path types depending on the rotational speed ratio k1.
Unconventional lapping kinematics systems
Analysis of numerous studies and the offers of many lapping machine
producers leads to the conclusion that most lapping machines have the
standard kinematic system. Unconventional lapping systems, where the
conditioning ring performs an additional movement such as a radial, secant,
swinging or off-center one, are presented in Image 6. Simulations have
shown that changing the kinematic system in the single-sided lapping process
causes a wide variation in the density of single abrasive trajectories.
Image 6.
Examples of approach to kinematics system lapping plate—conditioning
ring: (a) radial, (b) secant, (c) swinging, and (d) off-set.
The simulations showed that the most desirable system is one that allows
smooth control of the position of the conditioning ring on the lapping plate.
It was pointed out that some systems are not very different from the standard
kinematic system: the generated trajectories were almost identical. This is
due to the fact that with such systems, the deflection of the conditioning ring
along the radius was much smaller than in the case, for example, of the
radial lapping system. In order to correct the flatness of the lapping plate, the
ring must be shifted toward or away from the center of the tool.
The detailed kinematic diagram of the single-sided lapping system with the
additional movement of the conditioning ring along a chord is presented in
Image 7. It can be observed that in this concept there is only one conditioning
ring, which in addition to its rotary motion performs a reciprocating motion
between points A and B. The position of the conditioning ring is not constant
as in the conventional system; the parameter RPx(t) changes its value in time.
The distance from the center of the lapping plate to the chord equals RPy.
Image 7.
Diagram of single-sided lapping machine with reciprocating movement of
conditioning ring.
The principle of the kinematics calculation is identical to that of the standard
single-sided lapping system. Due to the additional motion of the conditioning
ring, only Eq. (3) must be changed:
Considering the reciprocating motion of the conditioning ring, RPx is a
function of time and can be expressed as:
There are many ways in positioning control applications to move from one
point to another. The most common are trapezoidal and S-curve motions. In
a trapezoidal move profile, velocity changes linearly and acceleration makes
abrupt changes [14]; these changes are quantified by the "jerk", the time
derivative of acceleration. An S-curve motion profile limits the jerk, which
helps to smoothen the motion and reduce the possibility of exciting a
vibration or oscillation [15]. A single stroke of reciprocating motion with an
S-curve profile is shown in Image 8. The conditioning ring moves between
two points, where TR is the reciprocating period, TA is the acceleration time
of the uniformly accelerated motion and TD is the deceleration time of the
uniformly decelerated motion. The conditioning ring has a maximum
velocity vRmax at the central range of the reciprocating stroke and uniformly
accelerated and decelerated motions at the two ends of the stroke.
Image 8.
Single stroke of reciprocating motion with S-curve profile.
For simplicity of the velocity expressions, another non-dimensional
parameter was introduced, defined by Eq. (12). Parameter k2 is the ratio of
the reciprocating period to the lapping plate rotary motion period:
where d is the reciprocating stroke, TR and TT are the periods of the
reciprocating motion of the conditioning ring and rotary motion of the
lapping plate.
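For simulation purposes, the stroke RPx(t) can be approximated with a symmetric trapezoidal velocity profile (TD = TA), a simplified stand-in for the jerk-limited S-curve of Image 8; the sketch is ours.

import numpy as np

def stroke_position(t, stroke, TR, TA):
    # Single reciprocating cycle: forward stroke over TR, then mirrored return.
    v_max = stroke / (TR - TA)          # trapezoid area equals the stroke length
    tau = np.mod(t, 2.0 * TR)
    s = np.where(tau < TR, tau, 2.0 * TR - tau)   # mirror the return leg
    return np.piecewise(
        s,
        [s < TA, (s >= TA) & (s < TR - TA), s >= TR - TA],
        [lambda u: 0.5 * v_max / TA * u ** 2,                   # accelerate
         lambda u: v_max * (u - TA / 2.0),                      # constant velocity
         lambda u: stroke - 0.5 * v_max / TA * (TR - u) ** 2])  # decelerate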
Kinematic optimization
Kinematic analysis showed that the basic kinematic parameters significantly
affect the trajectory and velocity distributions. To improve the quality and
flatness of the machined surfaces, the selection of optimal parameter values
was carried out. It was assumed that the main optimization criterion is the
uniformity of tool wear. In this section, the material removal rate and the
lapping plate wear model are described in detail. Based on the lapping plate
wear, its uniformity is calculated and the maximum value is sought.
MATERIAL REMOVAL RATE
The volume of material removed by the lapping process at a local position
per unit of time is an important subject of study in lapping. There are many
models that describe the material removal rate (MRR) during lapping,
grinding or polishing. Several researchers have worked on experimental and
analytical models. They show that the lapping process can be improved by
optimizing the machining efficiencies and by considering the influence of
process parameters on the surface lap height. Considerations about MRR are
important because they provide the means to maintain the reliability and
lifetime of the produced workpieces.
One of the most common wear models in abrasives process is the tribological
model developed by Preston, who related relative velocity and pressure to
material removal rate (MRR) which is known as Preston’s equation:
where H is the height of removed material at any point on the lapping plate,
k is a constant related to the process parameters, p is the applied pressure,
and v is the relative velocity of the workpiece. The parameter k varies with
any modification to the material removal process, such as abrasive and slurry
type, feed rate, and other process parameters.
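In incremental form Preston's equation reads dH = k · p · v · dt, so the local removal can be accumulated step by step; a trivial sketch (names are ours):

def preston_increment(k, p, v, dt):
    # Height of material removed at one point during a time step dt
    return k * p * v * dt

# total removal at a point: H = sum(preston_increment(k, p, v_i, dt) for v_i in speeds)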
However, some experiments have shown that in the case of relatively high or
low velocities and pressures, the linear relationship in Preston's equation
does not hold true. Therefore, many modifications to Preston's equation have
been proposed, and there are many formulations in the literature that are
based on it. Using Preston's equation, Runnels and Eyman proved that the
MRR in the chemical-mechanical polishing process is related to the normal
and shear stresses. Tseng and Wang modified Preston's equation to express
the MRR as a function of the normal stress of the wafer, the relative velocity
of the polishing and the elastic deformation of the abrasive grains. Nanz
provided a new MRR equation that considers the bending of the pad and the
flow of slurry.
LAPPING PLATE WEAR MODEL
The intensity of lapping plate wear can be taken as the contact intensity of
the tool with the workpieces through the lapping abrasive grains. There are
different methods for calculating the contact intensity. This section adopts
the method of calculating the particle density of interpolated trajectories.
Therefore, in order to simulate the trajectories and count their distribution, a
MATLAB program was designed.
An example of trajectories density determination steps is shown in Image 9.
Initially, location of random particles is generated within the conditioning
ring. Then, the trajectories of the particles are calculated with a use of
kinematic equations. A set of points, which are equally spaced from each
other, are generated with interpolation function. The lapping plate surface is
divided into small squares with the same area. Finally, a statistics function is
used to count the total number of points within each square of the lapping
plate surface. The contact intensity can also be developed for a profile of the
tool, which makes it possible to determine whether the wear causes concavity
or convexity. The area of the lapping plate must be divided into equal rings.
The measure of points in an appropriate area is determined by the equation:
where n is the number of points in area Ai and r is the ring width.
Image 9.
Trajectories density determination: (a) random generated particles, (b, c)
trajectories generating, (d) trajectories interpolation, (e) density of
trajectories, (f) profile density of trajectories.
The standard deviation SD of all the values of Di is given by:
where D̄ represents the average of all the values and N is the total number of
the divided rings or squares.
PARAMETERS OPTIMIZATION
The trajectory distribution of particles in the single-sided lapping process is
significantly affected by the rotational speed ratio of the conditioning ring to
the lapping plate, k1 (Eq. (9)), in the standard single-sided lapping system
and, in addition, by the period ratio of the reciprocating motion of the
conditioning ring to the rotary motion of the lapping plate, k2 (Eq. (13)), in
the system with additional movement of the conditioning ring. To obtain
better uniformity based on trajectory simulations, and consequently even
lapping plate wear, it is important to optimize both parameters. The
uniformity of tool wear was assumed as the main optimization criterion. In
order to describe the evenness of the lapping wear, the trajectory distribution
uniformity is defined as:
where SD is the standard deviation of the trajectory density and D̄ is the
average value of the trajectory distribution.
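The counting and statistics steps can be sketched with a 2-D histogram. The uniformity expression used below, U = 1 − SD/D̄, is an assumed form, since the chapter's equation did not survive extraction; names are ours.

import numpy as np

def wear_uniformity(x, y, half_extent, bins=100):
    # Bin interpolated trajectory points into equal squares of the plate area
    density, _, _ = np.histogram2d(
        x, y, bins=bins,
        range=[[-half_extent, half_extent], [-half_extent, half_extent]])
    d = density[density > 0]   # keep only squares the trajectories actually reach
    return 1.0 - d.std() / d.mean()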
It can be expected that during the lapping process there are more than 1
million active particles in the slurry and on the lapping plate. However,
because of the calculation time, an appropriate particle number which
reflects the same regularity as the real number has to be determined. Image
10 shows the influence of the number of abrasive grains on the uniformity.
Results are presented for the standard single-sided lapping system with k1 =
0.45 and RP = 125 mm. It can be observed that for 1000 randomly distributed
particles, the uniformity is stable and constant.
Image 10.
Uniformity of trajectories distribution produced by different number of
random particles.
The trajectory density uniformity for the standard single-sided lapping
system and for the single-sided lapping system with reciprocating motion of
the conditioning ring is presented in Images 11 and 12. Trajectories were
generated during 60 s on a lapping plate with an internal diameter of 88 mm
and an outer diameter of 350 mm. The maximum uniformity was obtained in
the conventional system when k1 = 0.6–0.9, and in the non-standard lapping
system when k1 = 0.65–0.75 and k2 = 1–2. For the single-sided lapping
system with reciprocating motion of the conditioning ring, the uniformity
was about 10% higher.
Image 11.
Trajectory density uniformity versus k1 for standard single-sided lapping
system, 1000 random particles, radial position of conditioning R = 125 mm,
simulation time t = 60 s.
Image 12.
Trajectory density uniformity versus k1 and k2 for single-sided lapping system
with reciprocating motion of conditioning ring, 1000 random particles,
simulation time t = 60 s.
Automated lapping system
The above considerations regarding the complex kinematics of the lapping
process were the starting point for developing an automated lapping system.
Nowadays, machine tools are more efficient than in the past. Lapping
machine manufacturers also improve their machines and supply their basic
constructions with additional components. As a result of automation, some
of the supporting operations can be eliminated. Lapping machines are
supplied with feeding tables and loading and unloading systems for
conditioning rings, which form mini-production lines. However, it emerged
that a system where the ring is guided by a manipulator during machining is
novel. Creating a universal mechanism that moves the conditioning ring
along an arbitrary path is complicated and in some cases impossible. Thanks
to a robot that moves an effector from point to point, it is possible to change
the ring trajectory at any moment and under different conditions. Owing to
this solution, it is possible to apply any lapping kinematics and use the robot
for supporting operations.
One of the many challenges in designing automated manufacturing systems
is the selection of a suitable robot. This problem has become more
demanding due to the increasing specifications and complexity of robots. It
was decided that the robot should perform material handling tasks as well as
the lapping process. In the initial selection, articulated robots with four to
five degrees of freedom, powered by an electrical drive, were chosen for
further evaluation. Furthermore, a continuous-path or point-to-point control
system is required.
Image 13 presents the idea of how the single-sided lapping machine and the
robot work together. The robot, situated next to the lapping machine, first
handles sorted workpieces from the table to the separator located in the
conditioning ring. Then the robot grips the conditioning ring and shifts it
onto the lapping plate, which starts to rotate. The machining is executed by
the robot: it shifts the ring with the workpieces in such a way as to keep the
flatness of the plate along the radius. After the lapping process, the robot
shifts the conditioning ring onto a collecting table and the workpieces fall
into the box with finished parts. Finally, the flatness of the lapping plate is
inspected and corrected in case an error occurs.
Image 13.
Idea of robotic single-sided lapping system.
Conclusions
The aim of this chapter was to present the influence of selected factors on the
geometrical results of the single-sided lapping process. One of the most
crucial factors is the kinematics of the executive system. Since the flatness of
the active surface of the lapping plate has an essential influence on the shape
accuracy of lapped surfaces, the key is to maintain the flatness of the tool. It
can be established that lapping plate flatness deviation can be caused by its
uneven wear.
A desired distribution of the contact density, which determines the wear and
allows the flatness error of the lapping plate to be corrected, can be obtained
by choosing appropriate kinematic parameters. Therefore, the kinematic
model of the single-sided system was analyzed in detail and modelled in a
MATLAB program. Based on the simulations, it was observed that the
trajectory distribution of abrasive particles on the lapping plate varies when
the kinematic conditions are changed, e.g., by placing the workpieces at
different radii, setting different rotational velocities or introducing additional
movements of the conditioning ring. Hence, the influence of additional
guiding movements of the conditioning ring has been verified. The major
factors affecting trajectory distribution are the speed ratio k1 and the period
ratio k2, which represent the relationships among the three basic motions of
unconventional lapping systems. The optimization of these parameters was
aimed at improving the quality and flatness of the machined surfaces. The
main optimization criterion was the uniformity of tool wear. The maximum
uniformity was obtained in the conventional system when k1 = 0.6–0.9, and
in the non-standard lapping system when k1 = 0.65–0.75 and k2 = 1–2. For
the single-sided lapping system with reciprocating motion of the
conditioning ring, the uniformity was about 10% higher than in the
conventional system.
NOMENCLATURE
ωt: angular velocity of the lapping plate
ωs: angular velocity of the conditioning rings
Rdw: inner diameter of lapping plate
Rdz: outer diameter of lapping plate
Rp: radial position of conditioning ring
rP, rS: radial position of examined points
t: machining time
H: wear removal rate
k: Preston's coefficient
p: force per unit area
v: lapping relative velocity
Di: trajectories density
d: distance of reciprocating motion
vR: linear velocity of reciprocating motion
TR: reciprocating period
TA: acceleration time
TD: deceleration time
JA: acceleration jerk
JD: deceleration jerk
k1: rotational speed ratio of the conditioning ring to the lapping plate
k2: period ratio of the reciprocating motion of the conditioning ring to rotary motion of the lapping plate
SD: standard deviation
U: uniformity of lapping plate wear
A Random Multi-Trajectory Generation
Method for Online Emergency Threat
Management (Analysis and Application in
Path Planning Algorithm)
Robot path planning has witnessed great achievements in recent years with the various applications of robots. Problems such as path planning, motion planning, and online moving-obstacle management have been widely studied toward the goal of achieving autonomy. Unmanned Aerial Vehicles (UAVs), an easily accessible robot platform, have been increasingly applied in research and commercial areas in recent years. UAV autonomy denotes the ability to handle obstacle (also called no-fly-zone) avoidance and online trajectory planning from a starting position to a destination while satisfying the kinematic constraints.
For robot path planning, emergency threat management (ETM) is one of the hardest challenges to solve: a sudden threat may burst into view, or dynamic obstacles may be detected online, especially while the UAV is following its desired path. Under such conditions, the UAV should consider the following attributes:
1. Time efficiency: The most important requirement for an ETM algorithm is time efficiency. In general ETM, the configuration is periodically updated, as in the heuristic algorithm A*, which is computationally intensive if the map is represented at high resolution. In order to guarantee safety, ETM requires real-time performance.
2. Kinematic feasibility: Kinematic feasibility denotes that the output of the planner meets the kinematic constraints of the robot as well as of the environment. The constraints include: (a) Path smoothness: the planner is required to output a kinematically smooth path, sometimes a kinodynamically feasible one as well. The path should thus meet state-of-the-art tracking constraints and enable low tracking error for the UAV; (b) Minimum cost of switching: the strategy for handling a threat, especially an ET, is to find the cost-minimum path by generating a new path, or multiple paths, besides the initial one. The cost for choosing the best path should take the dynamic constraints, energy consumption and time performance into consideration.
3. Specific requirements: UAVs have already been applied in many areas, such as inspection, photography, and monitoring. They have to meet specific requirements according to the environment and system constraints, for example, pose-optimized illumination for crack and spalling detection in tunnel inspection, or stable tracking with obstacle avoidance in UAV photography, which should maintain stable capturing even during flight.
Development with open robot platforms and field implementations has demonstrated the promising performance of Sampling-Based (SB) methods. SB algorithms (SBAs) have advantages for planning in high-dimensional spaces and are able to deal with multiple classes of path- or motion-planning problems in both static and dynamic environments. Rapidly-exploring Random Trees (RRTs) are single-query methods which exhibit the Voronoi-biased property and generate paths of only one homotopy class at a time. Although originally proposed to solve multiple-degrees-of-freedom (DOF) planning problems in known static environments, SBAs show great performance in dealing with many kinds of path- or motion-planning problems in complex environments for unmanned ground robots and aerial robots.
In this paper, we introduce two biased-sampling methods, obstacle-biased and Homologous-Classes (HC) biased, to perform path planning. The obstacle-biased path method has been discussed previously with a UAV demonstration. The HC-biased approach aims at solving the ET problem by generating alternative paths for online dynamic switching. HC introduces an online dynamic reconfiguration approach to ensure uniqueness between paths, trying to generate more paths with different obstacle references. Thus, it can perform alternative switching online when confronted with an ET. The obstacle-biased planning method is called Guiding Attraction based Random Tree (GART), and the HC-biased one is called Multi-GART (MGART). We consider the environment to be known a priori, and the UAVs to have the ability to recognize the clearance region. Experiments and comparative simulations are presented to provide an effective evaluation of the proposed methods.
Preliminary materials
HOMOLOGY AND HOMOTOPY
For a path planner, the purpose is to find a feasible path pf (cost-minimum or complete) from the initial position to the goal position in the workspace W ∈ R^n, where n denotes the dimension of the space in which the robots are located. A general cost function can be represented as:
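A plausible form of this cost function, assuming a weighted sum of the three terms defined below (the weights w1, w2, w3 are illustrative), is:

$$c(p_f) = w_1\,c_{energy} + w_2\,c_{time} + w_3\,c_{threat}$$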
Here, cenergy, ctime and cthreat denote the energy cost, time consumption and threat cost, respectively. These costs are not fixed: the energy cost can be the path length, and the time consumption can change according to the velocity limitation. For a cost-constrained path planner, the goal is to find the asymptotically optimal solution rather than merely a complete one. More than one path can then be found during the process, and the paths can be homotopic or belong to different homotopy classes (i.e., be homologous).
This is illustrated in Image 1. Given a continuous map H : I × I → Γ with H : h(s, t) = ht(s), Γ denotes the topological space and I = [0, 1] is the unit interval. The obstacle regions are labeled Ro1; xinitial = h(0, t) denotes the start point, xgoal = h(1, t) denotes the goal position, and xinter denotes an intermediate node for obstacle avoidance. For the continuous deformation, given h(s, 0) = π0 and h(s, 1) = π1, the path can be continuously mapped from π0 to π1 with t ∈ [0, 1]. Any paths deformed in between are homotopic to π0 and π1 if and only if they stay in the closed loop π0 ∐ −π1, where the closed loop cannot collide with any obstacle region.
Definition 1—Homotopic Paths: The equivalence class of all paths under the continuous mapping H : h(s, t) = ht(s) which lie in the closed loop formed by h(s, 0) ∐ −h(s, 1). Any path in the set can be continuously deformed into any other without colliding with any obstacle in the space. All paths in the set share the same fixed end points.
Image 1.
Homotopic and homologous classes and paths.
We can conclude that π2 and π3 belong to the same homotopy class. However, path π4, which shares the same start and end nodes, cannot be continuously deformed into π3 due to the isolation of the obstacle; that is, (π3 ∪ −π4) ∩ Ro3 ≠ ∅. In such a case, we say that π3 and π4 are homologous, and they belong to different homotopy classes.
Definition 2—Homologous Paths: Paths that follow the same continuous mapping H : h(s, t) = ht(s) but cannot form a closed loop by h(s, 0) ∐ −h(s, 1). Homologous paths belong to different homotopy classes.
PROBLEM STATEMENT
Path planning follows a common trial-and-error procedure under empirical constraints to achieve completeness. The path-planning problem is not only one of exploration optimization; it also requires modeling the environment with the best descriptor, as discussed in the literature. The problem of path planning can be represented as:
The path h(s) (homologous) should stay in the obstacle-free region Rfree, that is, h(s) ∈ Rfree. Usually, the path is piecewise continuous, and it can also be smoothed to be first-order continuous and thus kinematically continuous. Besides the exploration needed to achieve completeness (Eq. (1)), the obstacle modeling method is also important and affects the planning results.
To solve this problem, this paper proposes a multi-path online switching approach; that is, the path planner finds alternative homologous paths, and an online fast-switching strategy is designed on top of them. The multiple-path planner aims at finding as many paths as possible. Halter denotes the set of all alternative paths hi(x(t), u(t)), where x(t) denotes the state and u(t) denotes the control. However, the mission planner cannot use all the planned paths for online switching; it should find the reasonable paths without redundancy. We propose the following rule: Hreason denotes the path set in which any two paths are not homotopic to each other, where H≠ denotes non-homotopy. We thus keep paths that remain distinguishable from one another through the different obstacle sequences surrounding them.
Rapidly exploring random tree path planner
In this section, we describe the underlying research on rapidly-exploring random trees (RRTs), upon which we propose a novel approach to facilitate active exploration in cluttered environments. SBAs are incremental search algorithms which perform random sampling with collision checking for extension; they were first proposed to solve high-dimensional planning problems. They have the merit of considering the control and kinematic constraints together, and can incrementally construct a tree from the start node (initial state) to the goal node (goal state) through continuous sampling and expansion.
As shown in Image 2, the whole tree graph built by exploration is represented as GT; the black solid dots denote valid states within one-step accessibility under kinematic constraints, and the black solid lines connect each parent state with its child state for extension. At every step, a new sample gsample is generated randomly. It should be noted that the initial random sample does not imply a fixed connection; that is, the random sample can serve merely as a direction for extension. The random sample gsample then tries to connect to the nearest state in the tree under the minimum Euclidean metric, where gi is an element of the set of all valid states GT.
Image 2.
RRTs propagate by applying the minimal cost criterion to connect the newly sampled guard to the previous tree.
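As a minimal reconstruction consistent with the surrounding definitions, this nearest-neighbor criterion can be written as:

$$g_{near} = \arg\min_{g_i \in G_T} \lVert g_i - g_{sample} \rVert_2$$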
RRT CONNECTION WITH KINEMATIC CONSTRAINTS
For the RRT planner, consider a system with state (ẋ, ẏ, θ̇) and a general form of the system model. It can extend by simple random sampling of the control inputs [ux, uy, uθ]. The random sample has to follow the kinematic constraints. Given the robot system, the differential constraints can be represented as a set of implicit equations g(x, ẋ) = 0, which can be further represented as:
Here, x denotes the state, and u ∈ U denotes a valid control in the set of allowable controls. Given the parent state gparent(t), the time step follows a Δt limit, and the control inputs vary as u = {u(t′) | t ≤ t′ ≤ t + Δt}. To compute x(t + Δt), we can follow a typical procedure as in [12]. It should be noted that the planner should extend toward the newly sampled gsample. The planner first computes the region reachable from the current state x(t):
where ϵ is the maximum first-order factor of the control input. RRT then picks a new state along the direction from the parent to the new sample, that is, gnew ∈ [x(t) + f(x(t), u(t) − Δt ∙ ϵ), x(t) + f(x(t), u(t) + Δt ∙ ϵ)] and gnew = gparent + δ(gsample − gparent) with δ ∈ [0, 1].
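As a rough illustration, the following Python sketch shows one such extension step for a simple planar state (x, y, θ); the workspace bounds, the step cap and the helper names are illustrative assumptions, not the authors' implementation.

import math
import random

def rrt_extend(tree, bounds=(100.0, 65.0), step=1.0, eps=0.2):
    """One RRT extension step: sample, connect to nearest node, move toward sample.

    tree: list of (x, y, theta) states already in the tree.
    step: cap on the extension distance (the delta * (g_sample - g_parent) rule).
    eps: bound on the heading change, standing in for the first-order
         control limit epsilon from the text.
    """
    # Uniform Monte-Carlo sample in the workspace.
    g_sample = (random.uniform(0, bounds[0]), random.uniform(0, bounds[1]))

    # Nearest state in the tree under the Euclidean metric.
    g_parent = min(tree, key=lambda g: math.hypot(g[0] - g_sample[0],
                                                  g[1] - g_sample[1]))

    # Move a fraction delta in [0, 1] toward the sample, capped by `step`.
    dx, dy = g_sample[0] - g_parent[0], g_sample[1] - g_parent[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return None
    delta = min(1.0, step / dist)

    # Clamp the heading change so the new state stays in the reachable cone.
    heading = math.atan2(dy, dx)
    turn = max(-eps, min(eps, heading - g_parent[2]))
    g_new = (g_parent[0] + delta * dx,
             g_parent[1] + delta * dy,
             g_parent[2] + turn)
    tree.append(g_new)
    return g_new

Starting from tree = [(0.0, 0.0, 0.0)], repeated calls grow the tree toward uniformly sampled directions, which is exactly the behavior the Voronoi-bias analysis in the next section builds on.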
VORONOI BIASED INCREMENTAL SEARCH
Before discussing the Voronoi-biased property of SBAs, let us first introduce some basic notation. Given a set of points S = {si | i = 1, 2, …, n} in an n-dimensional space Χ, for any two distinct points sp and sq in S, the dominant region (Voronoi region) of sp over sq is defined as the region in which any point is closer to sp than to sq, that is: where χ is the dominant region corresponding to sp, and ||·||L denotes the Lebesgue measure. In the normal case, any point si has its own dominant region.
Normally, the random sampling of RRT follows a Monte-Carlo method to perform uniform sampling in an n-dimensional space under the Lebesgue measure. Looking back at the beginning of Section 3, the newly sampled node tries to connect to the nearest node under the Euclidean metric. We can analyze the problem from another perspective: given gparent and gs, both connected to the same origin go, a new sample gsample is generated randomly following a Monte-Carlo process. In order to explore and reach the goal, gsample tries to connect to the tree following the metric defined in Eq. (5). This means that gparent and gs can be connected for expansion under the minimum-distance principle, and gsample is then assigned to the dominant region of the closer point (its Voronoi region). Under this principle, gparent and gs acquire new resources for extension, with the ability to keep distinct regions and extend their branches.
A typical Voronoi-biased exploration using sampling can be seen in Image 3, where each branch keeps distinct from the others to form a star-network-like structure; heuristic informed RRT behaves in the same way. Here, unlike the dominant region of a single point, an RRT branch can also be treated as a distinct individual with its own Voronoi region for acquiring extension resources.
Image 3.
Results of incremental exploration of RRT and hRRT [16] after 200 and 1000
iterations, respectively.
Obstacle and homology biased path planner
In this section, we propose approaches to solve two main problems, handling cluttered environments and online ET processing, using an obstacle-biased method and a homology-biased method. Collision detection during incremental exploration is time consuming, and it follows a routine procedure to guarantee safety. It should be noted that the step validation of each newly sampled state provides directional information about the obstacle distribution.
OBSTACLE BIASED PATH PLANNER UNDER KINEMATIC CONSTRAINTS
SBAs mostly deploy the general idea of generating random samples for incremental exploration, and a sample located in an obstacle region is discarded, since keeping it is time consuming and brings no benefit to exploration performance. We first deployed a simple idea which was proved to have much higher time performance than RRT and RRT*. This paper introduces an obstacle-biased algorithm that uses obstacle information to help generate more samples for connection. As shown in Image 4, the newly sampled states xs1 and xs2 try to connect to the nearest state in the tree. However, xs1 leads toward the obstacle region, and xs2 is located inside the obstacle region. To use the obstacle information, this paper proposes an active exploring method based on inner propulsion and outer attraction.
Image 4.
Obstacle-biased SBA uses the obstacle location as input, with inner propulsion and outer attraction, to generate more samples for exploration. xs1 and xs2 are new samples; the black region denotes the obstacle region.
For outer attraction, the new sample xs1 performs a collision check and finds the nearest obstacle nodes oxa and oxb. We define that the further an obstacle is from the sample, the more attraction it supplies; that is, the attraction is proportional to the distance between obstacle and sample under the L2-norm. The sample is then re-allocated by adding an obstacle-biased attraction, where k is a constant that adjusts the shifting percentage of the attraction vector. An inner sample in collision with an obstacle is regarded as providing guiding information for the algorithm. The method finds two more states gx1 and gx2 within the kinematically reachable region (discussed with Eq. (8)): the first two safe states, in two directions, that lie outside the obstacle region within the kinematically reachable region (the light blue fan-shaped region). The two newly generated samples gx1 and gx2 then follow the principle of Eq. (11) to be redistributed to their final positions and connected to the tree.
By using the two proposed approaches, we can generate more useful samples for extension; in particular, the samples generated around the edges of the obstacles enable more active exploration in cluttered environments. Besides, the outer attraction redistributes samples toward the narrow corridors between obstacles, which increases the probability of finding a safe path through such obstacle-crowded regions.
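As a sketch of the outer-attraction rule (the exact Eq. (11) is not reproduced in the text, so the distance-proportional form below is an assumption):

def reallocate_sample(sample, obstacle_pts, k=0.3):
    """Shift a new sample by obstacle-biased outer attraction.

    Each nearby obstacle point (e.g. oxa, oxb) attracts the sample in
    proportion to its L2 distance: the vector from sample to obstacle
    already scales with distance, so farther obstacles pull harder and
    the net shift moves the sample toward the corridor between obstacles.
    """
    ax = sum(ox - sample[0] for ox, oy in obstacle_pts)
    ay = sum(oy - sample[1] for ox, oy in obstacle_pts)
    # k adjusts the shifting percentage of the attraction vector.
    return (sample[0] + k * ax, sample[1] + k * ay)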
HOMOLOGY BIASED
We assume any path ht(s) generated using an SBA consists of a set of nodes ht(s) = {ht | ht(s′), s′ ∈ [0, 1]}, as illustrated in Image 5(a), where the exploring tree consists of the red nodes. Each red node is regarded as distinct from the other nodes in the tree, with its own dominant (Voronoi) region. Thus, a path ht(s), which consists of a set of states from the initial state to the goal, can be isolated from other paths by a distinct region V(bT) combining the Voronoi regions of all its states.
Image 5.
The extending-forbidden algorithm (EFA) tries to find all the states along the goal-reached branch at each goal-reaching check step, such as branch B2. Then, EFA sets the flags of those states to inactive, switching the extending probability to the nearby branches.
The region-dominance property differentiates the paths from one another: an SB tree with multiple paths (a path here may not connect to the goal, but the paths keep distinct from each other from the initial position) can be described by a set of branches BT = {bT(1), bT(2), …, bT(n)}. Each branch consists of a list of states which connect with each other to form the tree structure. In the tree, each end state relies on its parent state for extension as well as for trajectory tracking.
EXTENDING-FORBIDDEN ALGORITHM
The path planner performs exploration following the Monte-Carlo approach. Given a configuration space C in a topological space (the rectangular region illustrated in Image 5), we denote the Lebesgue measure of the configuration space as L∗|C|. We can then obtain the Lebesgue measure of each branch bT(i) of the tree using the same metric. It has been proved in prior work that the dispersion of n random nodes sampled uniformly and independently in V(bT(i)) is: where ψ(bT(i)) denotes the number of samples m, 1 ≤ m ≤ n, that lie in the sub-branch bT(i), n is the number of all samples, and d is the dimension of the configuration space. D denotes the difference between the ratio of sampling probabilities and the ratio of space Lebesgue measures, which follows from the fact that the Monte-Carlo method performs uniform sampling. This means the sampling probability approaches the ratio of Lebesgue measures; that is, the exploration probability can be represented as:
However, the probability of exploring the configuration space does not by itself bias extension toward the goal. Looking again at Image 5(a), branch B2 dominates the near-goal region, and the other branches are not able to extend toward the goal, since samples falling in the near-goal region will not connect to them. To solve this problem, this paper proposes an Extending-Forbidden Algorithm (EFA), which shifts the extension resource to other branches by forbidding the goal-reached path.
Definition 3—Goal-biased Probability: Given a configuration space C, the exploring tree T, and all its main branches BT with their corresponding sub-branches. The goal-biased event denotes that a branch can explore toward the goal. If the goal region is represented as Gr, and Voronoi(Gr, BT(i)) is the region that belongs to both the goal region and the Voronoi region of branch BT(i), then the nominal goal-biased probability of branch BT(i) toward Gr is given accordingly, and the real goal-biased probability is the normalized value over all branches.
Definition 4—Long Sub-branch (LSB) and Short Sub-branch (SSB): Given a tree T and all its Voronoi-distinct main branches BT, define a length threshold δB. For each end vertex in each main branch, we calculate the length lsb from the end to the goal-reached path (to whichever state first reached the goal). If lsb > δB, we call the sub-branch a Long Sub-branch (LSB); if lsb ≤ δB, we call it a Short Sub-branch (SSB).
It should be noted that the threshold in Definition 4 is empirical; it is decided based on the configuration space and the kinematic constraints. In Image 6, we set it to 15 meters, which gives SSB1, SSB2 and SSB3 as SSBs and LSB1 as an LSB. The reason for this definition is that we cannot shift all the extending resource to the neighbor branches, and we hypothesize that an SSB has a lower probability of finding a new path even when given the resource for extension. For example, the main Voronoi-dominant branches BT keep distinct from each other, and each main branch has probability PbT(i) of exploring the configuration space. After the tree stops extending at a certain iteration, we obtain the results illustrated in Image 5(a). Since the goal region lies in the Voronoi region of branch B2, the goal-biased probability is PG(B(2)) = 1. Once branch B2 reaches the goal region, represented by the dotted black rectangle, EFA searches for the SSBs and LSBs based on Definition 4, labels their states and executes forbidding. We then obtain the resource-shifted Voronoi graph illustrated in Image 5(b), where branches B1 and B3 obtain the Voronoi region that previously belonged to branch B2. The two branches also obtain the rectangular goal region; that is, their goal-biased probabilities become greater than zero, PG(B(1)) > 0, PG(B(3)) > 0.
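A minimal sketch of the extending-forbidden step follows; the node structure and field names are hypothetical, chosen only to mirror Definitions 3 and 4.

def extending_forbidden(nodes, goal_path_ids, delta_b=15.0):
    """Extending-Forbidden Algorithm (EFA), sketched.

    nodes: dict node_id -> {'active': bool, 'dist_to_goal_path': float}
    goal_path_ids: ids of the states along the goal-reached branch (e.g. B2).
    delta_b: length threshold separating SSBs from LSBs (Definition 4).
    """
    for node_id, node in nodes.items():
        if node_id in goal_path_ids:
            # Forbid the goal-reached path itself: its Voronoi region
            # (and hence the goal region) is released to neighbor branches.
            node['active'] = False
        elif node['dist_to_goal_path'] <= delta_b:
            # Short sub-branch (SSB): assumed unlikely to yield a new
            # homotopy class, so it is frozen as well.
            node['active'] = False
        # Long sub-branches (LSBs) and the other branches stay active and
        # inherit the goal-biased extension resource.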
Image 6.
An intuitive example of SSBs and LSBs in an exploring tree, generated using the threshold principle defined in Definition 4.
Since EFA shifts the goal-biased extension resource to other branches, while not all paths can obtain such resource, the following facts hold:
1. The increase of goal-biased probability ensures the generation of a feasible path toward the goal, but not all branches with goal-biased probability can reach the goal at the same time. Only one can reach the goal because of the Voronoi dominance property; thus a general SBA cannot find multiple paths.
2. The efficiency of generating multiple paths mainly depends on the environment adaptability of the random exploring algorithm. The merit of random exploring algorithms in generating multiple branches enables the generation of multiple paths.
REASONABLE ALTERNATIVE REFERENCE CHOSEN
The proposed MGART is able to perform extending-forbidden toward multiple paths, as the random exploring property guarantees completeness and diversity. However, the quality of the explored paths cannot be guaranteed; in particular, a large number of homotopic paths are generated. This paper proposes an approach to generating the reasonable paths, and we analyze it under the hypothesis that the environment is highly cluttered, so that it is not practical to set a threshold for the path planner to choose the best homological paths.
Definition 5—Reasonable Alternative Paths: Consider two homotopic paths h(π1) and h(π2) in a configuration space C, with surrounding obstacle information ℶ(h(π1)) and ℶ(h(π2)) along each path. A reasonable alternative path exists if and only if the surrounding information of the two paths is not the same, such that ℶ(h(π1)) ≠ ℶ(h(π2)).
Given the sensing range of a robot as Υ and the obstacle set O = {o1, o2, …, on}, a path h(π1) consists of a set of discrete states X(π1) = {x(π1)1, x(π1)2, …, x(π1)n}. In this paper, we assume that any obstacle oi can be described by a circle or ellipse centered at oic; we can then build a Delaunay Triangulation connection (DTc) [19] using the obstacle centers, the initial state and the goal state. The DTc generates a network-like structure in which each pair of states has at most one connection. As illustrated in Image 7(a), the green edges are the valid connections, with labels to distinguish them from each other. For any path, if the path intersects an edge, the edge information is added to the information factor; for example, the solid red path intersects edge L7, so L7 ∈ ℶ(h(π1)). The edge-labeling method guarantees uniqueness with respect to homology, while we note that homotopic paths can also be used to perform emergency threat management. As represented in Image 7(b), the solid red path hS and the dotted red path hD are homotopic to each other. Given the sensing range Υ, we have the sensing envelopes, the dotted purple lines hL1, hR1 for hD and the solid black lines hL2, hR2 for hS, indicating the maximum detection range for an emergency threat. Then we have {o1} ⊂ ℶ(hD) and {o1, o2, o3, o4} ⊂ ℶ(hS); thus hS and hD are both regarded as reasonable alternative paths for online threat management.
Image 7.
An illustration of the surrounding information used to find reasonable alternative paths for online emergency threat management. The information parameter consists of edge information and obstacle surrounding information within the sensing envelope.
The informative approach discussed thus far can help to label each path in a configuration space, as in the results listed in Table 1 for the paths in Image 7(a). Then, according to Definition 5, we can find the label of each path. Among any several paths that share the same label, we choose the shortest path as the candidate for online fast switching.
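A compact sketch of this labeling and deduplication, using SciPy's Delaunay triangulation over the obstacle centers plus the start and goal (the segment-crossing test and the data layout are illustrative assumptions):

import numpy as np
from scipy.spatial import Delaunay

def _crosses(p1, p2, q1, q2):
    # Standard orientation test: do segments p1-p2 and q1-q2 intersect?
    def orient(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    return (orient(p1, p2, q1) != orient(p1, p2, q2)
            and orient(q1, q2, p1) != orient(q1, q2, p2))

def label_path(path, pts, tri):
    # The label of a path is the set of DTc edges it intersects.
    edges = {tuple(sorted((s[i], s[j])))
             for s in tri.simplices for i, j in ((0, 1), (1, 2), (0, 2))}
    return frozenset(e for e in edges
                     if any(_crosses(pts[e[0]], pts[e[1]], a, b)
                            for a, b in zip(path[:-1], path[1:])))

def reasonable_paths(paths, obstacle_centers, start, goal):
    # Keep only the shortest path per label (Definition 5 deduplication).
    pts = np.vstack([obstacle_centers, [start, goal]])
    tri = Delaunay(pts)
    best = {}
    for p in paths:
        lab = label_path(p, pts, tri)
        length = sum(float(np.hypot(b[0]-a[0], b[1]-a[1]))
                     for a, b in zip(p[:-1], p[1:]))
        if lab not in best or length < best[lab][0]:
            best[lab] = (length, p)
    return [p for _, p in best.values()]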
EMERGENCY THREAT MANAGEMENT
The reasonable alternative path set HRAP provides a network with cluttered-environment adaptivity. The concept of visibility has been discussed in earlier work, where cycle information is used to enable fast deformation for motion planning. Visibility is defined as: where x(·) denotes a state of a path, Llink is the connection of two states, Cfree is the free space and Cobs is the obstacle region. A visual illustration is provided in Image 8(a), where visibility can exist only in the obstacle-free region.
Image 8.
For online switching, the UAV should follow the visible-node selection algorithm (a) to explore the possible switching routes, then it switches to the cost-minimum path for ETM (b).
It is noted that visibility in this paper means a possible connection for switching from one path to another for emergency threat management. For switching with visibility, given all the reasonable alternative paths HRAP, the algorithm explores visibility states among HRAP at each UAV state xUAV. The algorithm then outputs the visible guards (states) xRAP as illustrated in Image 8(b). To avoid the pop-up threat (or dynamic threat), the UAV must select one entry guard from the visible guard set to reconnect to another pre-planned path to the goal. To evaluate the best connection, that is, the entry point and the entry connection, this paper applies a heuristic using a simple cost-based metric, where CFE is the forward energy cost, comprising the distance and turning cost from the UAV position xUAV to the entry state xRAP(j) and along the path from the entry state to the goal HRAP(i)(xRAP(j)). The turning cost is the integral of the heading-angle difference at each state, which reflects the smoothness of the planned path. CTC is the threat cost, the integral of the inverse distance between each state and the obstacles. Using this approach, the algorithm finds five visible states xRAP = {xRAP(1), …, xRAP(5)} as illustrated in Image 8(b). It then finds the best entry state using the minimum-cost principle (Eq. (17)).
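The selection itself can be sketched as follows, assuming Eq. (17) sums a forward-energy term (distance plus turning) and a threat term (inverse obstacle distance); the helper names and state layout are illustrative:

import math

def choose_entry(x_uav, visible_guards, obstacles):
    """Pick the entry guard minimizing C_FE + C_TC, in the spirit of Eq. (17).

    x_uav: current UAV state (x, y, heading).
    visible_guards: list of (guard_state, tail_path_to_goal),
        where every state is (x, y, heading).
    obstacles: list of obstacle centers (x, y).
    """
    def forward_energy(states):
        dist = sum(math.hypot(b[0]-a[0], b[1]-a[1])
                   for a, b in zip(states[:-1], states[1:]))
        # Heading-change sum stands in for the turning (smoothness) cost.
        turn = sum(abs(b[2]-a[2]) for a, b in zip(states[:-1], states[1:]))
        return dist + turn

    def threat_cost(states):
        # Discrete integral of inverse distance to the obstacles.
        return sum(1.0 / max(1e-6, math.hypot(s[0]-o[0], s[1]-o[1]))
                   for s in states for o in obstacles)

    best, best_cost = None, float('inf')
    for guard, tail in visible_guards:
        route = [x_uav, guard] + list(tail)
        cost = forward_energy(route) + threat_cost(route)
        if cost < best_cost:
            best, best_cost = guard, cost
    return best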
We further consider the situation in which there are no visible states at the current location. The paper proposes a long-term memory approach to handle this problem: the traveled path is stored in memory, such as the orange edges and states illustrated in Image 8(b). Meanwhile, the method stores the visible states along the traveled path. The UAV then has to fly back to find a cost-minimum path toward the goal if it confronts a pop-up threat and has no visible states.
Experiment and discussion
In this section, we highlight the performance of the obstacle-biased and homology-biased path planners with the ability of emergency threat management (avoiding pop-up and dynamic threats online). We discuss the following points: (1) how the threshold of EFA affects the performance of MGART; (2) the time performance and reliability of the reasonable-alternative-chosen algorithm; (3) the online emergency threat management performance. The algorithm is implemented using MatLab 2016b on a laptop computer with a 2.6 GHz Intel Core i5 processor.
COMPARATIVE SIMULATION OF MULTIPLE PATH EXPLORATION
We design three different scenarios, a non-obstacle scenario, a rounded-obstacle-crowded scenario, and an irregular-polygons-crowded scenario, to perform comparative simulations. All the scenarios are 2D with a 100 × 65 m² space, and the obstacles are randomly generated.
For scenario 1, a non-obstacle environment, we set the variable threshold to the representative set {3, 5, 7, 8, 11, 13, 15, 18, 20, 25, 28}. Since the EFA threshold length affects the goal-biased probability, which directly decides the area of the Voronoi region newly obtained by the neighbor branches, we designed a set of comparative experiments to study the effect of the EF length on RAPs. Intuitive results on the relationship between the planned paths and the EF length after 10,000 iterations are provided in Image 9(a)-(d). MGART can find 37 paths after 10,000 iterations if the EF length is set to 3 step-lengths, and the number decreases to 22 if the EF length is set to 28. The reason is that the longer the EF length, the further the neighbor branches can obtain the goal-biased resource. The neighbor branches thus need more steps to explore toward the goal; that is, fewer paths are achieved, but with better homology performance. As can be seen, the paths in Image 9(d) have better homology performance than those in Image 9(a). The same EF-length variation experiment was also deployed in scenario 2, with results shown in Image 9(e)-(h). The number of paths behaves as in scene 1, decreasing as the EF length increases. However, since increasing the EF length enables more branches to explore toward the goal while increasing the number of homologous paths (see Image 9(e)-(h)), the number of RAPs increases with the EF length. The statistical relation between the RAPs and the EF length is illustrated in Image 10, which further supports this conclusion.
Image 9.
Illustration of alternative paths generated by MGART varying with representative backward EF length. (a)-(d) show the results in the non-obstacle scenario, (e)-(h) show the results in the obstacle-crowded scenario.
Image 10.
Relation between EF backward length and APs and RAPs in two scenarios. The solid diamond line denotes the relation for scene 1, and the triangle lines denote the results for scene 2.
The EFA can be applied to any SBA by shifting the goal-biased resource to achieve multiple RAPs for online switching. This paper compares the performance of MGART and MRRT* in three scenarios with 10,000 iterations. The efficiency of generating a path, the RAPs, the average time for finding a path, and the average time for finding any RAP are compared in Table 2. GART has better performance in both path exploration and RAP generation: MGART can find at least 3 times as many paths toward the goal as MRRT*. Because GART introduces environmental information to speed up the exploring process, the results show that MGART is more efficient in finding RAPs, being almost 100% faster than MRRT*. In time performance, MGART also outperforms MRRT* by at least a factor of 3.
Besides comparing the time performance of finding online switching paths (that is, RAPs), we also pay attention to the quality of the generated paths. The average lengths and the standard deviations of the lengths of all paths in each scene are illustrated in Image 11. The average lengths of the paths generated by MGART and MRRT* are illustrated in Image 11(a); we can see that MGART has stronger convergence performance than MRRT*. The standard deviation of the lengths is shown in Image 11(b); the results demonstrate that MGART is more likely to find paths with smaller fluctuation as well as smaller cost.
Image 11.
Comparison of (a) average length and (b) standard deviation of the APs generated by MGART and MRRT* in three scenarios.
PERFORMANCE OF REASONABLE ALTERNATIVE PATH CHOSEN
We also tested the path-labeling algorithm, that is, the surrounding-information pursuit using the DTc and the sensing envelope, which is used to obtain the reasonable alternative paths under Definition 5. It should be noted that, under this definition, no two paths have the same information parameter, which enables fast switching when facing a pop-up threat. As the path-labeling method guarantees unique labeling of all paths, only paths that stretch in a parallel way within the same sensing envelope share the same labels.
The results of simulations after 10,000 iterations in scenarios 2 and 3 are provided in Table 3. For each single path, the time needed for labeling mainly depends on the area, dimension and complexity of the configuration space. For our tests with a 100 × 60 area, the average time for acquiring the labeling information is 0.078 s (see Table 3). The average time needed for RAP pursuit in our cases is 0.139 s.
            Time for labeling (s)   Time for RAPs (s)
Scenario 2  0.0721                  0.146
Scenario 3  0.0842                  0.132
TABLE 3.
Time performance of the proposed method in two scenarios.
EXPERIMENTS OF EMERGENCY THREAT MANAGEMENT
MGART can be used for 3D and 2D pop-up threat management, and 3D environments can be easily segmented by DT. We evaluate the performance of our method in both 2D and 3D environments, and we also compare the time performance. For 2D environments, we implement three tests with different numbers of dynamic threats. The RAP-chosen algorithm works when the robot realizes that its path will collide with the pop-up threat, that is, when the robot at position xUAV detects the moving threat (see Image 12(a)). The simulation setting is given in Table 4, where the robot speed is 10 m/s and the moving threat can be detected within a 10 m detection range. Thus, the robot has less than 1 s to re-plan a path and execute it to avoid the obstacle. The RAP-chosen algorithm first evaluates all the neighbor RAPs (the green lines) around the robot, and chooses the cost-minimal, collision-free path based on the principle of Eq. (17) (the dotted green path in Image 12(b)). Note that Image 12(b)-(d) show the results of using MGART to avoid one, two and three moving threats, respectively. The black parts along the navigation path denote the positions where a threat was detected by the robot. We also executed a test in a 3D environment (see Image 13) with pop-up and moving threats. The online switching is intended to be used by aerial robots in 3D; thus Dubins curves are used when switching from the current position to a safe path.
Image 12.
Tests of online switching to avoid dynamic threats using MGART in 2D scenarios. (a) The robot detects a moving threat at position xUAV, then evaluates all its visible neighbor RAPs (the green lines) to choose the switching path. (b) Complete navigation avoiding one moving threat; the red path is the navigation path. (c) Test avoiding two moving threats; the black and red circles are threats. (d) Test avoiding three moving threats.
Image 13.
The experiment of using MGART for a 3D emergency threat management case, where a pop-up threat and a moving threat exist in the environment.
For all the experiments, we study the time efficiency of each switch made to escape from the current dangerous situation. For avoiding one moving threat (see Image 12(b)), the time needed to switch to another RAP is 0.0507 s, and the whole navigation duration is 13.14 s at 10 m/s. For the case of avoiding two dynamic threats (see Image 12(c)), the whole navigation time is 13.32 s, and the time spent avoiding the second threat is 0.0912 s. In scene 3, we designed a long-duration threat (see Image 12(d)). The two cyan threats disable the blue path, so the robot has to switch more times while tracking the dark path. The average switching time is no more than 0.15 s, which could be decreased when implemented on the robot's platform in C++, and the whole navigation time is 13.5 s.
Conclusion
The main contribution of this paper is an online ETM planner for situations in which pop-up threats and moving obstacles appear while the pre-planned path is being tracked. We propose a new multiple-path planning approach called MGART, improved from GART by introducing an 'Extending Forbidden' algorithm that shifts the goal-biased probability to the neighbor branches around the goal-reached branch. The algorithm is shown to inherit the merits of GART and its ability to explore cluttered environments, and it guarantees asymptotic optimality and completeness. It is also shown that the algorithm can generate multiple paths without using varying cost principles, relying only on the EFA threshold, thus enabling selection for online dynamic switching.
In the future, we would like to research online visual positioning and environment perception, which are not discussed in this paper. We would like to enable cognitive sensing and autonomy for robots.
MOTION TRACKING AND GESTURE RECOGNITION
Motion Tracking System in Surgical Training
THE CHALLENGES IN HEALTHCARE AND SURGICAL TRAINING
The nature of surgical training has been consistently evolving over the past decade, along with continuous changes in healthcare systems worldwide. The modern healthcare system has been pressurized by current laws that restrict the number of working hours. The legal working hours per week can be as low as 48 hours in European countries and 80 hours in North America. These working mandates are deemed necessary to guard against human errors that may be related to stress and fatigue in a high-pressured working environment. In addition, there is an increasing popularity of reporting medico-legal cases in the media. High-profile medical reports such as the Kennedy Report (UK) and the Institute of Medicine (US) report 'To Err is Human' have highlighted surgical errors and turned the spotlight on the adequacy of surgical training and, by extension, the quality of surgical trainees. These changes in the healthcare system may continue to have a negative impact upon the surgical training of many aspiring surgeons.
Historically, surgical training has been based on the apprenticeship model for many years. Trainee surgeons are taught how to perform procedures by senior surgeons through on-the-job training. The training is therefore opportunistic, and trainees are expected to demonstrate their skills in the operating theatre under the supervision of their consultants. This was coined by William Halsted, who exemplified the training approach as 'see one, do one, teach one'. The traditional teaching method relies largely on the variable cases that trainees encounter during their daily work routine. Typically, junior doctors learn from their senior, more experienced colleagues and their consultants. The skill level of consultants is perceived as the proficiency level and, therefore, the desired precision in surgical training. Trainees are expected to reach the proficiency level that would allow them to perform surgical procedures in the real operating theatre. However, it is a challenge to assess surgical skills and obtain an objective proficiency level.
This training model is less favourable in the current climate of the healthcare system. Due to the working-hour restrictions, trainees have limited opportunities to gain competencies, and the training period is therefore prolonged. As a direct consequence of these challenges, interest in laboratories with formal curricula, specifically designed to teach surgical skills, has increased dramatically. Attention has shifted towards training in a simulation lab using inanimate bench models, animals (cadaveric or live), hybrid or virtual reality (VR) simulators. In the United Kingdom, the use of live animals is not permitted under the current law, unlike in Europe, the United States and other countries.
Therefore, simulation technology has gained popularity among surgical training institutions worldwide. With the advancement of laparoscopic and minimally invasive surgery (MIS) and the steep learning curve in this specialty, a burst of simulators became available on the market over a decade ago. Some examples of validated virtual reality (VR) simulators available in laparoscopy are MIST VR, LapSim, LapMentor and Xitact LS500. Trainees are able to practice their skills in hand-eye co-ordination, intracorporeal suturing and procedures such as laparoscopic cholecystectomy and appendicectomy using these simulators. In general, the laparoscopic instruments used are fitted with sensors that allow cameras to track their movement. The simulator then displays a two-dimensional graphic of an operative field, such as the internal organs, on a computer screen. From this, the simulators are able to track and quantify the movement, which is converted into meaningful metrics such as path length, smoothness and economy of movement. These metrics provide an objective, automated measurement of technical skill proficiency instantaneously.
The advancement of simulation technology has allowed training bodies such as the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) to develop training models for both laparoscopic and endoscopic skills. In recent years, surgical training institutions have been incorporating simulators into surgical skills assessment and trainee selection processes. We found that only 56% of the studies in the literature employed simulator-generated objective metrics in laparoscopic skills assessment, either exclusively or combined with other assessment tools; this contrasts with other industries, such as aviation, which are more advanced in simulation technology.
MIS and laparoscopic surgery form only one branch of the surgical specialties. In general, simulator development has tended to target minimally invasive surgery (MIS). However, open surgery remains the foundation of all surgical specialties. Surgical trainees are expected to master technical skills in open surgery before they are allowed to progress to more complex surgical procedures such as MIS and microsurgery. Examples of technical skills in open surgery include hand knot tying, suturing, repair of nerve or tendon, and open hernia repair. The surgeon's ability to tie knots securely is of paramount importance, as loosening of surgical knots, during or after tying, can compromise the outcome of a surgical procedure. Despite its importance, the training of open surgical techniques largely depends on inanimate bench models. Trainees practice surgical skills on bench models such as skin pads and the saphenofemoral junction model from Limbs and Things™ (Bristol, United Kingdom) and the laparotomy model from Simulab Corporation (Seattle, WA). This contrasts with MIS or laparoscopic surgery simulators. Typically, in order to assess competency in this skill, a trainee performs a specific procedure, such as excision of a sebaceous cyst, using an inanimate model, and an observer with extensive experience in the field, such as a consultant or a senior registrar, watches the trainee and assesses their skills using observer-dependent assessment tools. This can be done either face-to-face or by video recording.
The classic observational assessment tool for open surgical skills is the objective structured assessment of technical skills (OSATS) (Images 1 and 2). It was developed by Professor Reznick and his research team in Canada. It is based on observation and the completion of two checklists. The first checklist consists of the important steps in a specific procedure, and trainees are assessed on whether they have taken all of these steps. The second checklist is the global rating scale (GRS), which examines the global performance of the trainees using a five-point Likert scale. It assesses the fluidity and efficiency of movement during the completion of a surgical task.
Image 1.
A sample of task‐specific checklist from OSATS.
Image 2.
The global rating scale (GRS) of operative performance.
The observational assessment tool requires the recruitment of expert surgeons to assess trainees, which proves labour-intensive and time-consuming. One could argue that there may be human bias or favouritism when scoring trainees using this type of assessment. Data in several studies suggest that unblinded raters give higher scores than blinded raters (as would be expected if knowledge of a learner subconsciously influences a rater's behaviour). Therefore, surgical training is moving away from observer-dependent assessment tools towards more objective and quantifiable analysis of technical skills. This would allow assessment of the trainees' skill level and measurement of the precision they have reached according to their training year.
OPEN SURGICAL SKILLS
Open surgical skills are fundamental in surgery. The skills involve hand dexterity using surgical instruments. Thomas Morstede stated more than 500 years ago that surgeons should 'be dextrous, have steady untrembling hands, and clear sight'. A good surgeon is perceived as having greater economy and precision of hand and instrument movement.
Open surgical skills vary from simple techniques, such as hand knot tying and suturing, to more complex procedures, such as tendon or nerve repair, laparotomy and vessel anastomosis. All surgical trainees are required to master the basic open surgical skills, particularly suturing and hand knot tying. A good technique ensures that the wound edges are approximated neatly, without gaping if the sutures are loose or skin necrosis if the sutures are too tight. Trainees hone their skills by practising on bench models, as shown in Images 3 and 4.
Image 3.
A standard surgical knot tying task performed using the knot tying training
jig from Limbs and Things™ (Bristol, UK).
Image 4.
A trainee performing a simple interrupted suturing task using skin pads with
simulated wound edges.
The movement of the hands and fingers has to be precise and economical to ensure that the procedure runs smoothly with minimal complications. However, the assessment of dexterity, smoothness and economy of hand movement using surgical instruments has been subjective; several attempts have been made to quantify dexterity, but many of these are unsatisfactory. Many have associated dexterity with the time taken to complete a surgical task; this is a crude assessment and a poor measurement of technical skill. Although operative speed is a desirable surgical quality that lowers the time spent under anaesthesia, it fails to assess the quality of surgical performance.
The objective assessment of open surgical skills has been slow to evolve, unlike MIS and laparoscopic surgical training. We require an assessment tool that can quantify hand motion and provide an objective scale of performance when completing a surgical task. Therefore, we explored the potential use of motion analysis in assessing open surgical skills, as a non-observational assessment tool that is automated and objective.
Materials and methods
PARTICIPANTS DEMOGRAPHICS
All medical students in pre-clinical years (Years 1–3) from the Royal College of Surgeons in Ireland (RCSI), basic surgical trainees (Years 1 and 2) and consultant surgeons were invited to participate in our study. This allowed us to divide the participants into three subject groups: novices, trainees and experts. It was made clear that participation was voluntary. Ethical approval was granted by the Research Ethics Committee of RCSI.
Image 5 shows the demographics of the participants in this study.
Image 5.
Participants demographics.
BASIC SURGICAL SKILLS ASSESSMENT
Inanimate bench models were used in this study. The bench models were from Limbs and Things™ (Bristol, UK), including the knot tying trainer jig, skin pads, skin pad jig and cyst pads. All participants were required to perform two fundamental tasks, described below.
Task 1: One‐handed knot tying skill
The participants were required to perform surgical knots using the one‐
handed technique. They were given a standardised length of 2/0 Mersilk
(Ethicon) suture tie. The surgical knots were performed on the knot tying
trainer jig.
Task 2: Simple interrupted suturing technique
The participants were required to perform simple interrupted sutures on a simulated wound. This task was performed on skin pads (Limbs and Things, Bristol, UK). A 3/0 Mersilk suture (Ethicon) and surgical instruments were provided.
The participants' performances were assessed using an observational tool (GRS) and a non-observational tool (the motion tracking device). The data allowed us to analyse the validity of the motion tracking device as an assessment tool in comparison with the well-established GRS scoring system. During the experiment, videos were recorded in an anonymous fashion; each video was labelled by a random code generator so that the assessor could not identify the level of experience of each participant. The participants also had a sensor attached to their right index finger to track hand motion, which will be discussed in detail in the next section.
For the observational assessment, two assessors were selected to assess each video using the GRS. The assessors were expert surgeons with more than 10 years of consultant experience who are involved in teaching and educating surgical trainees in Ireland. The experiment process is outlined in Image 6.
Image 6.
Flow diagram of the experiment process.
Motion analysis in surgical skill assessment
THE ROLE OF MOTION ANALYSIS
Surgical specialties have initiated a trend towards more objective and quantifiable measures of technical skill proficiency. In minimally invasive surgery (laparoscopy and endoscopy), simulators have been developed with the ability to quantify the associated skills with specific metrics, including total path length, movement efficiency and smoothness. Motion smoothness in handling surgical tools is an essential skill that surgical residents must acquire before independently operating on patients.
The use of motion analysis was pioneered in gait analysis. It is used to track the movement of body parts. These methods usually make use of markers located on body articulations to garner movement information from a particular limb. Its application is evident in various areas, including sports such as golf, training apprentices in spray painting, and diagnostic simulators such as ultrasound simulation. Undoubtedly, among the most promising technological tools in medical training are simulators for the acquisition of clinical skills using motion sensors. The surgical arena has used this technology to try to quantify surgical performance. Motion analysis allows assessment of surgical dexterity using parameters extracted from the movement of the hands or laparoscopic instruments. Motion analysis provides parameters that measure the precision of hand motion when performing surgical skills. Hence, surgical competencies, particularly in surgical trainees, can be ascertained using these parameters.
Lord Ara Darzi and his researchers pioneered the use of an electromagnetic motion tracking device in surgery, called the Imperial College Surgical Assessment Device (ICSAD). This is the combination of a commercially available electromagnetic tracking system (Isotrak, Polhemus Inc., Colchester, VT) and a bespoke computer software program. This motion analysis device uses an alternating-current electromagnetic system with a passive receiver attached to the dorsum of the hand over the mid-shaft of the third metacarpal. It measures the time taken, the number of movements and the path length. All of these metrics have been shown to change with experience in laparoscopic surgery and in open surgery (bowel anastomosis and vein patch insertion).
We used a commercially available motion tracking device called the Patriot™ from Polhemus Inc., Colchester, VT. This device utilises electromagnetic technology and tracks 6-degrees-of-freedom (6DOF) measurements of the sensor's movement. In our study, we attached the sensor to the participants' right index finger. Image 7 shows the airplane image that indicates the sensor; it moves with the position and orientation of the right index finger. The retrieved position and orientation are displayed as numbers in six columns (upper part of the screenshot), from left to right: positions in the X-, Y- and Z-axes and orientation in yaw, pitch and roll. The Patriot™ collects these raw data, which are in turn converted into a set of meaningful metrics using our bespoke software.
Image 7.
A screenshot of the PiMgr software.
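A minimal sketch of how such 6DOF samples could be reduced to the metrics used here (time, total path length, average distance from a point of interest, and deviation from the Z-axis); the sample format, sampling rate and the Z-axis interpretation (radial XY distance from the POI) are illustrative assumptions, not the authors' bespoke software:

import math

def motion_metrics(samples, poi, rate_hz=60.0):
    """Reduce 6DOF samples to simple motion-analysis metrics.

    samples: list of (x, y, z, yaw, pitch, roll) sensor readings.
    poi: (x, y, z) point of interest, e.g. the centre of the bench model.
    rate_hz: assumed sampling rate of the tracker.
    """
    time_taken = len(samples) / rate_hz
    # Total path length: sum of Euclidean steps between consecutive positions.
    total_path = sum(
        math.dist(a[:3], b[:3]) for a, b in zip(samples[:-1], samples[1:])
    )
    avg_poi_dist = sum(math.dist(s[:3], poi) for s in samples) / len(samples)
    # Deviation from the virtual Z-axis through the POI: radial distance
    # in the XY-plane from the POI for each sample.
    avg_z_dev = sum(
        math.hypot(s[0] - poi[0], s[1] - poi[1]) for s in samples
    ) / len(samples)
    return {
        'time_s': time_taken,
        'total_path_length': total_path,
        'avg_distance_from_poi': avg_poi_dist,
        'avg_deviation_from_z_axis': avg_z_dev,
    }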
CONSTRUCT VALIDITY OF MOTION ANALYSIS IN SURGICAL SKILLS ASSESSMENT
Every evaluative tool needs to provide reliable information on what it measures or examines, so that the conclusions drawn from the tool are dependable. A validated assessment device should be able to differentiate levels of surgical skill according to the level of competency; this is classified as construct validity. One inference of construct validity is the extent to which a test discriminates between various levels of expertise. Mason et al. reviewed the published evidence relating motion analysis to the assessment of surgical performance. This systematic review reported construct validity for ICSAD and for other motion analysis devices, such as the ProMIS augmented reality simulator and the Hiroshima University Endoscopic Surgical Assessment Device (HUESAD), in assessing laparoscopic skills.
Our research further assessed the use of a novel electromagnetic tracking system in basic surgical skill tasks by using our own in-house computer software with a finger sensor. Images 8 and 9 show the standard set-up for the knot tying task and the suturing task, respectively, with the Patriot™ motion tracking device. Our in-house software was designed to generate the classic metrics, namely time and total path length (TPL). In addition, new metrics were developed: average deviation distance from the X-, Y- and Z-axes and average distance from the centre of the bench model. The centre of the bench model is labelled as a point of interest (POI), as we believe that hand motion is most efficient when the hands are at a certain distance from the centre of the workstation. Subjectively, when performing a certain task in open surgery, such as tying surgical knots or suturing, a novice makes unnecessary hand movements, including moving the hands further away from the field of surgery. This is thought to be inefficient in view of the economy of hand movement.
Image 8.
Knot tying model with the Patriot™ motion tracking device.
Image 9.
Simple interrupted suturing model and instruments with the Patriot™ motion
tracking device.
Our results demonstrated construct validity for both fundamental skills, the one-handed knot tying task (Image 10) and the simple interrupted suturing skill (Image 11), for the metrics of time, total path length, point of interest and deviation from the Z-axis.
Image 10.
Distribution of the time taken (a) and total path length (b), average distance
from the POI (c) and Z‐axis (d) between the three subject groups in
completing one‐handed knot tying skill.
Image 11.
Distribution of the time taken (a), total path length (b), average distance from
the POI (c) and Z‐axis (d) between the three subject groups in completing
simple suturing task.
The box-and-whisker plots show a significant difference between experts, trainees and novices (p < 0.001), analysed using the Kruskal-Wallis statistical test. The horizontal lines within the boxes are medians; the boxes and whiskers represent the interquartile range and the range, respectively. The dots represent outliers.
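For reference, a minimal sketch of such a three-group comparison using SciPy; the metric values below are placeholders, not the study data:

from scipy.stats import kruskal

# Placeholder total-path-length samples (metres) for the three groups.
novices = [14.2, 15.8, 13.9, 16.4, 15.1]
trainees = [10.3, 9.8, 11.2, 10.9, 9.5]
experts = [7.1, 6.8, 7.9, 6.5, 7.3]

# Kruskal-Wallis H-test: non-parametric comparison of 3+ independent groups.
h_stat, p_value = kruskal(novices, trainees, experts)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> groups differ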
The novel parameters were able to differentiate subjects according to their level of experience, along with the validated metrics reported in the literature. This implies that a surgical novice moved his or her hand further away from the virtual Z-axis and the mid-point of the workstation than experts or surgical trainees, as seen subjectively in the video recordings. It is therefore postulated that this pattern of movement is less efficient. The lack of significant change in the X- and Y-axes may reflect the standard suture tie length used in this experiment, which limits the movement of the hand along these axes.
CONCURRENT VALIDITY OF MOTION ANALYSIS IN
SURGICAL SKILLS ASSESSMENT
A further validation of the motion analysis device was required to prove that
it is a robust assessment tool. It is important that the metrics from motion
analysis have a good correlation with the gold standard assessment tool,
which is the global rating scale (GRS), as mentioned previously. This would
prove the concurrent validity of this novel device.
Datta et al. revealed a strong correlation between the number of
hand movements analysed using the ICSAD and the GRS in suturing a vein
patch on an inanimate model (Spearman coefficient of −0.587, p < 0.01). In
another study by Ezra et al., concurrent validity was demonstrated between
these two assessment tools in microsurgery suturing task. The metrics used in
this study were path length, hand movements and time.
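The correlation analysis used in these concurrent-validity studies can be sketched as follows; the paired values below are hypothetical and serve only to show the computation:

from scipy.stats import spearmanr

# Hypothetical paired observations: a motion metric and the total GRS score.
path_length_mm = [3100, 2800, 4500, 5200, 2400, 3900]
grs_total      = [28,   30,   19,   15,   33,   22]

rho, p = spearmanr(path_length_mm, grs_total)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
# A strong negative rho (shorter path length with higher GRS score) would
# support concurrent validity, as in the studies cited above.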
In our chapter, for the one‐handed knot tying skill, our results demonstrated a
significant correlation between all the metrics generated by the Patriot™
motion tracking device and the items of the GRS scoring tool. The only
pairing that failed to demonstrate a significant relationship was deviation
from the x-axis against 'respect for tissue'. For the simple interrupted suturing
skill, we found a significant correlation between time, total path length and
deviation from the z‐axis and the total GRS score.
However, the metrics from the Patriot™ motion tracking system failed to show a
convincing correlation with the scale assessing tissue handling. This
may be explained by the fact that the ‘respect for tissue’ component on the
GRS is a very subjective parameter. This is reflected in the poor inter‐rater
reliability of the GRS scoring system. Apart from this, the metrics correlated
well with the GRS items, especially items involving motion and flow of
operation. We can safely suggest that the Patriot™ provides a more objective
score than the observer-dependent scale.
APPLICATION OF MOTION ANALYSIS IN SURGICAL
TRAINING
The use of motion tracking and analysis in assessing surgical skills has been
described mainly in laparoscopic skills. There is a lack of literature that
describes the integration of such technology in surgical training curricula
across the globe, despite a myriad of validation studies.
Surgical trainees learn fundamental basic skills at an early stage. Open
basic skills remain the principal skills across all surgical specialties.
Therefore, aspiring surgeons are expected to be proficient in these skills
before they can proceed to perform simple procedures, such as excision of a
skin or subcutaneous lesion, or more complex procedures, such as repair of a
tendon or nerve. Trainees require direct guidance and an abundance of
practice in order to become proficient in these skills; as the saying goes,
'practice makes perfect'. Having an expert or supervisor observe them
consistently during practice sessions is not feasible when clinical work takes
priority. Therefore, a motion analysis system is needed to provide an
automated means for trainees to practise and record their
performance in their own time.
Proficiency‐based training has been described as learning environments in
which the trainee progresses from less to more technically demanding skills
and tasks only after achieving predefined criteria. One widely available
simulation‐based assessment and certification program is the fundamentals of
laparoscopic surgery (FLS) developed by the Society of American
Gastrointestinal and Endoscopic Surgeons (SAGES) and now administered
by SAGES and the American College of Surgeons. The FLS program
incorporates tasks from the McGill Inanimate System for Training and
Evaluation of Laparoscopic Skills (MISTELS) program, including
laparoscopic suturing, and uses well‐described, low‐fidelity inanimate
models [39]. The proficiency scores were determined by a group of experts in
the skills, and the trainees or users are required to reach these predetermined
scores before they can proceed to the next level or task. These proficiency
scores act as an aim for the trainees to achieve and subsequently motivate
them to keep practising until a high standard of surgical skill is
accomplished. To this end, an automated objective measurement is
highly desirable, as it does not require any expert surgeons or observers to
monitor and assess the performance.
We applied the concept of proficiency‐based training by using the validated
metrics from the Patriot™ motion tracking system. We determined the
proficiency goals or desired precision for each of these metrics in knot tying
and suturing skills. This was achieved by gathering the experts’ scores from
the motion analysis and deriving the proficiency target from the distribution of these scores.
The performance of surgical trainees in Years 1 and 2 of the surgical training
programme was assessed using the Patriot™ device. Their scores were then
analysed against these predetermined proficiency goals. Our intention was to
develop an objective automated tool that can be integrated into the national training
curricula as part of the training module. This will help the trainees to practise
and eventually achieve the desired precision or performance in the most
fundamental skills in surgery.
Image 12 shows a sample of trainees' performance in suturing which was
mapped out against the proficiency target. The dashed line represents the
proficiency level of 143.6 mm. The diamond shape points below the dashed
line are the trainees who have shorter path length and considered as proficient
in their skill (n = 13, 52%). The round points above the dashed line are
the trainees with longer path lengths who did not reach the proficiency level
(n = 12, 48%). The performance graph is very useful when a group of surgical
trainees is assessing their suturing technique, as they can compare their
reached precision with the desired precision.
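The classification of trainees against the proficiency level can be expressed in a few lines. In the sketch below only the 143.6 mm threshold comes from the study; the trainee values are hypothetical:

import numpy as np

proficiency_level_mm = 143.6                                 # dashed line in Image 12
trainee_tpl = np.array([120.4, 150.2, 139.9, 160.8, 131.5])  # hypothetical scores

proficient = trainee_tpl <= proficiency_level_mm
for tpl, ok in zip(trainee_tpl, proficient):
    print(f"TPL = {tpl:6.1f} mm -> {'proficient' if ok else 'below target'}")
print(f"{proficient.mean():.0%} of these trainees reached proficiency")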
Image 12.
The total path length (TPL) of all the trainees in the study who performed
simple interrupted suturing skill.
The main advantage of a motion analysis system in surgical training is that it
produces an automated objective scoring system and does not require a group
of observers to assess performance in any particular surgical
skill. In our study, the Patriot™ motion analysis system has shown a
promising potential in a learner‐oriented proficiency curriculum. By
providing an objective and numerical rating, trainees could benchmark and
aim to improve their score through enhancement of surgical skill. For surgical
educators, this assessment tool is useful for identifying, at an early stage of
their training years, any surgical trainees who underperform relative to the
proficiency standard. A remedial session can be offered to these surgical
trainees, and their training module can be customized so that they are able to
reach the required proficiency. The motion analysis system can be used
continuously by the trainees during practice sessions and also in any
departmental assessment setting.
The learning curve in surgical skills is steep. The trainees are required to
improve their skills, or reached precision, over time and to progress in their
surgical training. They are expected to practice the skills, preferably in the
simulation lab until they achieve the desired precision. The training
programmes are designed to teach the trainees skills that are appropriate to
their levels. The early part of the learning curve is associated with a higher
complication rate. The improvement in their reached precision in the
simulation lab will allow them to perform procedures on patients in a real
operating room with confidence. They will also progress to more complex
procedures such as tendon repair, vessel anastomosis and bowel resection.
Motion analysis provides an objective measurement of the skills that can be
used to map out the learning curve. Using time alone as a metric is not
reliable for judging progress along the learning curve: it measures how fast
someone completes a task, not how efficiently the task was performed.
Motion analysis is therefore regarded as an adjuvant tool for assessing
surgical technical skills owing to its unique properties: it is
observer-independent, automated and feasible.
LIMITATION OF MOTION ANALYSIS AND FUTURE
RESEARCH
The main limitation of motion analysis is its inability to detect surgical
errors. Hand‐tracking data appear to confirm that skilled individuals
demonstrate a shorter path length, make fewer movements and take less time
to perform an operation, but with the caveat that this improved performance
is not accompanied by an increase in error. In minimally invasive surgical
training such as laparoscopic skills, the technology in VR simulators such as
LapSim and LapMentor is more advanced than open surgical skills training.
These simulators are programmed to identify any surgical errors as well as
analysing the movement of the instruments.
Therefore, this vital limitation of motion analysis may be overcome by
incorporating an assessment of the end‐product following a completion of
surgical task. For instance, the quality of the surgical knots can be assessed
by a force gauge device in order to ensure that the knots do not slip under
certain tension. It is important that the surgical knots are secure as knot
slippage in a real operating setting can cause catastrophic bleeding which
leads to morbidity towards patients.
This shortcoming highlights that surgical competency is multimodal and
that there is no single solution for surgical assessment. We propose that surgical
educators should incorporate motion analysis and assessment of the end‐
product quality when assessing surgical techniques. Further research should
be focused on creating an all‐in‐one package in assessing surgical
competency that would be objective, automated and most importantly
independent from any observers.
Another limitation of motion analysis concerns its use in the real operating
setting. All the studies in the literature showed the use of motion analysis
system in a simulation lab. The fundamental assumption of simulation‐based
training is that the skills acquired in simulated settings are directly
transferable to the operative setting. The current motion tracking devices that
are readily available use electromagnetic fields to track sensors on the hands.
These devices are sensitive to surrounding metal objects such as electronic
machines, metal bars or large electrical cables in the walls that can cause
erratic reading. These metal objects are certainly present in all real operating
theatres in the hospitals. In addition, the sensors on the devices are attached
via cables, which potentially could interfere with the sterility of the operating
field. Due to these limitations, it is not feasible to utilize these devices in
assessing surgical skills in a real operating theatre. Therefore, a new
system that is wireless and not susceptible to surrounding
metal objects is much desired.
Summary
Open surgical skill training requires an assessment tool that is independent,
automated and objective. The validity of motion analysis in assessing
fundamental surgical skills has been demonstrated with positive results. It
has shown its potential use in proficiency-based training as a step
away from the traditional method of surgical training. The future of
simulation-based surgical training in open surgical skills appears promising,
and it will help shape the pathway towards creating top-quality surgeons in
the current climate of the healthcare system.
Layered Path Planning with Human Motion
Detection for Autonomous Robots
Autonomous and intelligent navigation in a dynamic and unstructured
environment is a critical capability for mobile robots and autonomous
systems. It integrates lots of technologies from sensing, environmental
modeling, object tracking, planning, decision making, control, and so on, to
deal with the challenges from a dynamic and uncertain environment, so that
robots are capable of planning paths to avoid moving obstacles and human
beings in a real-world environment.
Many researchers have proposed various methods of addressing path-planning
problems, which have been applied successfully in various domains.
However, most of those methods aim at finding a path-planning solution
in a two-dimensional (2D) environment or an oversimplified three-dimensional
(3D) environment. As more and more mobile robots and
autonomous systems are placed in buildings to provide services for human
beings, an emerging and interesting problem is how to plan paths for robots
to navigate effectively across floors in a multistorey building.
Consider a multistorey building with multiple elevators or escalators on the
same floor. If we ask a robot to deliver a box from the first floor to the fifth
floor in the building, there will be multiple paths for the robot to navigate via
the elevators or the escalators. For example, the robot can take the elevator to
go to the fifth floor directly and then go to the destination. Or if the fifth floor
is very crowded with people, it can use the elevator on the first floor to go to
the second floor, and then go to another elevator at a different location on the
second floor to reach the destination on the fifth floor. Then, it becomes a
practical and important problem to choose which elevators the robot should
take, based on the dynamic environment and human context information.
Additionally, the final state on one floor is the initial state of the next floor,
toward which the robot is navigating. While the cost function on each floor
can be minimized locally based on some criteria, how to minimize the global
cost is also an interesting question that we need to answer. Since there will be
people walking in a building, the environment is changing constantly, and
thus the cost of moving from one location to another location varies based on
timing, business schedule, and other factors. The scenario described above
can be extended to other industrial domains, such as transportation in rail
yards (multiple 2D environments), health-care service robotics (hybrid 2D
environments), and aerial service robotics (full 3D path planning).
The motivation of this chapter is to propose a solution to address the two
major problems mentioned above. First, we present a method of building a
global graph to describe the environment, which takes human motion in the
environment into consideration. Human motion can be detected and its 2D
spatial distribution can be estimated by the state-of-the-art radio tomographic
imaging (RTI) technology. Then, we use a hidden Markov model (HMM) to
represent a long-term context model. In the application stage, when humans
are detected, we use the long-term context model to predict the short-term
activities of humans in the environment. Then, we build Reactive Key Cost
Maps (RKCMs) for all the floors using the predicted human activities.
Second, we present a hierarchy planning framework for robots to find a path
to minimize the global cost. This method considers the whole map as a graph,
and the adjacent subgraphs for corresponding floors are connected using
elevator or stairs, which are also associated with costs. By planning on the
higher layer of the global graph, we can optimize the global cost.
The rest of the chapter is organized as follows: Section 2 summarizes
previous work on indoor human detection and motion planning, Section 3
explains our methodology in detail, Section 4 uses some experimental results
to validate our proposed methodology, and Section 5 proposes some future
work and summarizes this chapter.
Motivation
In a Veteran Affairs (VA) hospital, thousands of surgical tools are
transported between the operating rooms and the sterilization facilities every
day. Currently, the logistics of the perioperative process is labor intensive,
with medical instruments being processed manually by people. This manual
process is inefficient and could lead to improper sterilization of instruments.
A systematic approach can dramatically improve surgical instrument
identification and count, sterilization, and patient safety.
A fully automated robotic system involves multiple mobile and static robots
for both manipulation and transportation. The overall robotic system is shown
in Image 1. A key task throughout the sterilization process is to move robots
in the hospital from one location to another location while avoiding hitting
any obstacles including assets and people. It is a typical indoor robot
navigation task. However, due to the dynamic human activities in a hospital,
we need to address two challenges: one is to detect and track human motion
and activities in the hospital, and the other is to plan the motion trajectories
for robots to navigate through multiple floors.
Image 1.
Overall robotic system.
Related work
Robot navigation is one of the most important topics in robotics; many
sensors and techniques have been studied in the past few decades [1].
Odometers and inertial sensors such as accelerometers and gyroscopes have
been used for dead reckoning, that is, relative position estimation. For
absolute position measurements, various systems have been developed using
sensors such as magnetic compasses, active beacons, and global positioning
system (GPS) chips. Combining relative and absolute position measurements,
GPS-aided inertial navigation systems have achieved satisfying performance
for many outdoor robotic applications. However, accurate and robust
localization is still an open problem in the research community for an indoor
environment and GPS-denied situation.
As wireless devices and networks become more pervasive, radio-based
indoor localization and tracking of people becomes a practical and cost-effective solution. Extensive research has been performed to investigate
different wireless devices and protocols such as ultra-wide band (UWB),
Bluetooth, Wi-Fi, and so on to locate people carrying radio devices at indoor
or GPS-denied environments. A few recent studies even measure and model
the effect of the human body on the antenna gain pattern of a radio, and use
the model and the effect to jointly track both the orientation and position of
the person wearing a radio device such as an radio-frequency identification
(RFID) badge. However, all these methods require people to participate in the
motion capture and tracking system by carrying devices with them all the
time.
With respect to motion planning, there are some existing methods that use the
historical human activity data to assist robotic motion planning. A well-
known example is the planning engine in Google Maps, which relies on
crowd-sourced user data. However, we target robot motion
planning in indoor environments, where we cannot collect human activity
data from Google Maps or GPS. We also cannot expect everyone in a
building to carry a device for us to collect human-associated data. A
natural and noncooperative method is to obtain such data by actively
detecting and tracking human activities in the environment without mounting
any sensors on human bodies, and that is the basic innovation point and
contribution of our method proposed in this book chapter. The technology we
used in this book chapter successfully helps us build a model to describe
human activities in the environment.
For robots to interact with human beings, human motion detection and
tracking is a critical problem to solve. Recently, a novel sensing and
localization technique called radio tomographic imaging (RTI) was
developed to use received signal strength (RSS) measurements between pairs
of nodes in a wireless network to detect and track people. Various methods
and systems have been developed to improve the system performance at
different situations. For example, a multichannel RTI system was developed
by exploring frequency diversity to improve the localization accuracy. A
variance-based RTI (VRTI) was developed to locate and track moving people
even through nonmetal walls. Further, to locate stationary and moving people
even at non–line-of-sight (NLOS) environments, kernel distance-based RTI
(KRTI) was developed that uses the histogram distance between a short-term
and a long-term RSS histogram. The state-of-the-art RTI technology has been
used to locate and track people in buildings, but we are not aware of any
research effort in using RTI to assist human-robot interaction and robot path
planning.
RTI technology could help us describe human activities in the environment,
especially in buildings. The next step is to use appropriate methods to
represent the information obtained. When building a model of describing
human activities, some researchers focused on building a mathematical field
model, for example, Gaussian Mixture Model (GMM). Given a position, the
model returns a density of humans in the environment. Some researchers use
nodes to represent humans. One well-accepted and popular method is hidden
Markov model (HMM). Both discrete and continuous HMMs have been
proposed to describe states and transitions. In our system, we choose to use
discrete HMM to simplify the model and reduce the computing time when
the robot is moving. A large body of literature exists on the use of
HMMs in robotics research. In our method, we used HMMs, but we describe
human activities using costs, not nodes. Our contribution is to
successfully integrate the human detection results into an HMM and reduce
the number of nodes in the HMMs for robotic path planning.
Based on the representation model that we chose, a motion-planning
algorithm is used to enable robots to find a path from the current location to
the target position by searching the reachable space and finding a path to
satisfy task constraints. The path-planning process can be done in a
configuration space or a task space. Current path-planning methods are
categorized into three types: (1) roadmap, which samples the configuration of
a task space and finds a shortest or an optimal path to connect the sampled
points; (2) potential field, which generates a path in the configuration or a
task space by moving a particle attracted by the goal and repelled by
impedance areas; and (3) strategy searching, which searches the policy or
strategy database to find a solution that describes the path.
Methodology
The proposed system includes several subsystems such as robot navigation,
human activity and motion detection, long-term context model, and so on.
The overall system architecture is shown in Image 2. We explain each
subsystem in detail in this section.
Image 2.
System architecture.
First, we need to detect human activities in the environment. The detection
result is used for building a long-term context model in the learning stage and
predicting the current human activities in the environment in the application
stage. Most human activities can be simplified to human motions and
locations if we do not need to consider what humans are doing. In the human
detection part, we choose to use the KRTI technology to record the locations
of humans in the environment. The locations are recorded together with
timing information to construct a hidden Markov model, which is stored as
our long-term context model, after the model is learnt and built. In the
application stage, whenever a human is detected in the environment, robots
use the context model to predict the human’s activity in the future a few
minutes/hours depending on the use cases. The descriptions of current and
predicted humans’ activities are combined to generate impedance areas. The
impedance areas will be used to generate Reactive Key Cost Maps describing
the cost between the key locations including elevators and the stairs. All the
RKCMs are connected by elevators and stairs, which robots can use to move
from one floor to another. The connected graph is the global graph describing
the current paths and connections in the multifloor environment.
When a target that does not belong to the elevators or the stairs is defined in
the cost map, we add the target location to the graph and use a Gaussian
model to compute the cost of moving from key locations to the target
location. The robot then tries to search a shortest path from its current
location to the target location in the global graph. Please note that the path
maps are changing all the time, because the information on the heat map
changes continuously based on the current and predicted human activities,
which are detected by our indoor localization sensors.
HUMAN MOTION AND ACTIVITY DETECTION
This part is the first step of the whole system. All the following analysis,
planning, and decision making are based on the detection results coming
from this step. The input of this step is the locations detected by the sensors,
and the output of this step is different in the learning stage and the application
stage. In our system, we propose to use kernel distance-based radio
tomographic imaging (KRTI) to detect human beings and track their
positions in the environment. First, we give a simple introduction to the
KRTI system.
HUMAN MOTION DETECTION AND TRACKING
Assume we have a wireless network with L links. For each link, received
signal strength (RSS) is measured at the receiver from the transmitter, and we
use ql to denote the histogram of the RSS measurements recorded during a
calibration period, and use pl to denote the RSS histogram in a short time
window during the online period. Then, we can calculate the kernel distance
between two histograms q_l and p_l for each link as
d_l = \sqrt{(p_l - q_l)^T K (p_l - q_l)},
where K is a kernel matrix from a kernel function such as a Gaussian kernel.
Let d = [d_0, \cdots, d_{L-1}]^T denote a kernel distance vector for all L links of a
wireless network, and let x = [x_0, \cdots, x_{M-1}]^T denote an image vector
representing the human motion and presence in the network area. Then, the
previous RTI work has shown the efficacy of a linear model W relating the
RSS measurements to the image vector x:
d = Wx + n,
where n is a vector representing the measurement noise and model error.
Finally, a KRTI image \hat{x} can be estimated from the kernel distance vector d
using the generalized least-squares solution:
\hat{x} = (W^T C_n^{-1} W + C_x^{-1})^{-1} W^T C_n^{-1} d,
where C_x is the covariance matrix of x, and C_n is the covariance matrix of n.
More details of the KRTI formulation can be found in Ref.
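A minimal numerical sketch of the two formulas above is given below. The regularized generalized least-squares form is our reading of the RTI literature, not code from the original system:

import numpy as np

def kernel_distance(p, q, K):
    """Kernel distance between the online histogram p and the calibration
    histogram q of one link, with kernel matrix K."""
    diff = p - q
    return float(np.sqrt(diff @ K @ diff))

def krti_image(d, W, Cx, Cn):
    """Estimate the image vector x from the kernel-distance vector d under
    the linear model d = W x + n (generalized least squares with prior Cx)."""
    Cn_inv = np.linalg.inv(Cn)
    gain = np.linalg.inv(W.T @ Cn_inv @ W + np.linalg.inv(Cx)) @ W.T @ Cn_inv
    return gain @ d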
MODELING STAGE OF HUMAN ACTIVITY DETECTION
In the learning stage, the goal is to build a heat map describing long-term
human activities. The process is to place the sensors in the environment for a
long, meaningful period and record the human locations with temporal
information. In our experience, the duration of this process depends on the
environment and the requirements of the application. Normally, we
leave the sensors in the environment for one whole week to collect weekly
data, because we find that daily human activities vary widely while weekly
human activities show trackable patterns.
Image 3 displays an example of detected human activities in a small
environment.
Image 3.
Human motion detection.
To simplify the modeling process, we use Gaussian Mixture Model (GMM)
to model the locations. The ‘hot’ locations are described as Gaussian models
whose centers are the peak points of activities that happen every day. It is
easy to understand that those peak points are some public spaces in the
environment. Mathematically, the location of each Gaussian model is
described as
G_k(j) = \mathcal{N}\big((x, y), \sigma^2\big),
where k represents the kth floor, j is the index number of the Gaussian model
on the kth floor, (x, y) is the location of the center of the Gaussian model
w.r.t. the local coordinates of the kth floor, and σ is the variance.
Then, based on the temporal information, a hidden Markov model can be
built. The HMM describes the relationship between the locations, in
particular, where a human who is within an area described by a Gaussian
model will move to next.
Assume that on the kth floor we have N Gaussian models, monitored from
the starting time t_1, and that the current state q_t is S_i at time t. The
probability of the transition from q_t = S_i to q_{t+1} = S_j at time t + 1 is
a_{ij} = P(q_{t+1} = S_j \mid q_t = S_i), \quad 1 \le i, j \le N.
The state transition matrix is defined as A = \{a_{ij}\}. Then, the observation
matrix, defined as B = \{b_i(k)\}, is given by
b_i(k) = P(o_t = v_k \mid q_t = G_k(i)),
which means that the measured value is v_k at time t while the current state is
G_k(i). The initial state distribution is defined as
\pi_i = P(q_1 = S_i), \quad 1 \le i \le N.
The complete hidden Markov model is then defined as
\lambda = (A, B, \pi).
Then, this model describes the transition between two locations based on the
observations. As mentioned in Ref., the Bayesian method is used to
determine the number of states in an HMM by minimizing the criterion
-\log L_f + \frac{n_p}{2} \log T,
where L_f is the likelihood of the model given the observed demonstration, n_p
is the number of independent parameters in the model, and T is the
number of observations. The model with the smallest value of this criterion
will be chosen.
An example of a HMM built for our system is shown in Image 4.
Image 4.
Example of HMM of human activities.
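In practice, the transition matrix A can be estimated by counting transitions in the recorded state sequence. The sketch below assumes the weekly location data have already been quantized to Gaussian-model indices:

import numpy as np

def estimate_transition_matrix(states, n_states):
    """Estimate A = {a_ij} from a sequence of discrete state indices by
    counting observed transitions and normalizing each row."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0          # avoid dividing by zero for unseen states
    return counts / rows             # a_ij = P(q_{t+1} = S_j | q_t = S_i)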
APPLICATION STAGE OF HUMAN ACTIVITY DETECTION
Given an observation O_k, the nearest Gaussian model is computed:
i = \operatorname{arg\,min}_j \lVert O_k - \mu_j \rVert,
which is used to define the current state G_k(i), where \mu_j is the centre of
the jth Gaussian model. The most probable areas to which the human will
move are the states with the largest transition probabilities a_{ij} from state
i; then the area covered by G_k(l) and G_k(m), the two most probable next
states, will be masked as a high-impedance area in the map in the next step.
One important point is that there are many people moving in a business
building, so from the detection side there will be many areas masked as
high-impedance areas.
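A sketch of this application-stage step follows: map an observed location to the nearest Gaussian model and pick the most probable next areas from the transition matrix. The top-two choice reflects our reading of the masked areas G_k(l) and G_k(m):

import numpy as np

def predict_impedance_areas(obs, centers, A, top_k=2):
    """Return the current state index and the top_k most probable next
    states; the areas of those states are masked as high impedance."""
    centers = np.asarray(centers, dtype=float)
    obs = np.asarray(obs, dtype=float)
    i = int(np.argmin(np.linalg.norm(centers - obs, axis=1)))  # current state
    next_states = np.argsort(A[i])[::-1][:top_k]               # e.g. l and m
    return i, [int(s) for s in next_states]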
REACTIVE KEY COST MAP
After we have the high-impedance areas obtained from the application stage
of human activity detection, the next step is to use a map or a graph, which is
associated with costs between two key locations, to describe the current
environmental information. In our method, Gaussian models are used to
describe human activities at hot locations in the environment. However, we
do not need to use all of them in the global shortest path searching. What we
care about is the cost between key locations, not all the locations. The key
locations in our definition are (1) elevators and stairs, which connect two
floors and (2) target locations, which may not be the hot locations we
detected in the above sections but the robot needs to move to.
We segment the free space of a static map into grids. Then, we overlay the
high-impedance areas to the static grid-based map as shown in Image 5.
Image 5.
Map overlay.
The cost of moving from one grid to an adjacent free grid is always 1, and the
cost of moving from one grid to an adjacent impedance grid is defined using
an equation combining two impedance terms:
impedance_current, defined by the number of people currently detected, and
impedance_history, defined by the number of people detected at time t in the
historical data.
Using c(i, j), we can build a Reactive Key Cost Map for each floor. As we
mentioned earlier, we do not care about the path between every pair of
locations, but it is necessary to find the cost of paths between key locations.
However, most of the time, a target location in a navigation task is a public
space such as a conference room or an office, which is not close to the
key locations. So before we build our cost graph, we need to add one
additional key location to the map. Then, we connect the closest neighbors to
the target node. The cost is computed using the Gaussian impedance area:
c(x) = \exp\big(-\tfrac{1}{2}(x - x_i)^T \Sigma^{-1} (x - x_i)\big),
where x is the target location, x_i is the center of the Gaussian impedance area,
and Σ is the variance matrix. Image 6 displays the cost map generated from
Image 5.
Image 6.
Cost map with three key points.
Then, we apply a shortest path-searching algorithm to find the shortest paths
between all the key locations on each floor. In our system, we used the A*
algorithm, since the map is known and the heuristic and current costs are all
known. Specifically, A* selects the path that minimizes
f(n) = g(n) + h(n),
where n is the last node on the path, g(n) is the cost of the path from the start
node to n, and h(n) is a heuristic that estimates the cost of the cheapest path
from n to the goal.
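A compact grid-based A* sketch consistent with the cost definition above is given below. The Manhattan heuristic and four-connected moves are assumptions; since every free cell costs at least 1, the heuristic never overestimates:

import heapq

def a_star(cell_cost, start, goal):
    """A* over a grid of per-cell entry costs (>= 1); minimizes
    f(n) = g(n) + h(n) with a Manhattan-distance heuristic."""
    rows, cols = len(cell_cost), len(cell_cost[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    best_g = {start: 0.0}
    frontier = [(h(start), 0.0, start)]
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                        # cost of the cheapest path found
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cell_cost[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return float("inf")                     # goal unreachable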
After the searching, a Reactive Key Cost Map is built and all the paths are
described with cost, which is shown in Image 7.
Image 7.
Example of Reactive Key Cost map.
The next step is to connect all the maps of floors together and the connection
points are elevators and stairs. This is a simple matching and connection
process.
PATH SEARCHING
After the maps of the whole building are built, path searching and planning can
all be done in the global map.
Since we largely reduce the complexity of the map by building a Reactive
Key Cost Map, the computing task on the path-searching part is not very
difficult. We use Dijkstra’s algorithm to find a shortest path in a constructed
directed map. Dijkstra’s algorithm is a greedy searching algorithm to find a
shortest path in a directed graph by repeatedly updating the distances between
the starting node and other nodes until the shortest path is determined. Let the
node at which we are starting be called the initial node. Let the distance of
node Y be the distance from the initial node to Y. Dijkstra’s algorithm will
assign some initial distance values and will try to improve them step by step
as follows:
1.
Assign to every node a tentative distance value: set it to zero for
our initial node and to infinity for all other nodes.
2.
Mark all nodes unvisited. Set the initial node as current. Create
a set of the unvisited nodes called the unvisited set consisting of all
the nodes except the initial node.
3.
For the current node, consider all its unvisited neighbors and
calculate their tentative distances. For example, if the current node
A is marked with a distance of six, and the edge connecting it with a
neighbor B has length 2, then the distance to B (through A) will be
8, the summation of the above two numbers. If this distance is less
than the previously recorded tentative distance of B, then overwrite
that distance. Even though a neighbor has been examined, it is not
marked as “visited” at this time, and it remains in the unvisited set.
4.
When we are done considering all the neighbors of the current
node, mark the current node as visited and remove it from the
unvisited set. A visited node will never be checked again.
5.
If the destination node has been marked visited (when planning
a route between two specific nodes) or if the smallest tentative
distance among the nodes in the unvisited set is infinity (when
planning a complete traversal), then stop. The algorithm has
finished.
6.
Select the unvisited node that is marked with the smallest
tentative distance, and set it as the new “current node,” then go back
to step 3.
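The six steps above correspond to the classic priority-queue implementation; a minimal Python sketch over an adjacency-list graph follows:

import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm on a directed graph given as
    {node: [(neighbour, edge_cost), ...]}; returns shortest distances."""
    dist = {source: 0.0}
    visited = set()
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue                 # a visited node is never checked again
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd         # overwrite the tentative distance
                heapq.heappush(heap, (nd, v))
    return dist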
Using Dijkstra's algorithm, a shortest path can be generated in a map from
the "Starting" node to a destination node. In our testing, we found some
issues when applying Dijkstra's algorithm to 3D path searching. We therefore
simplified our case by constraining the global map to be representable as a 2D
graph. This chapter does not focus on proposing a novel planning algorithm, so
improving the motion-planning algorithm is not our concern here.
Experiments and results
To evaluate our system, we performed two types of evaluation. The first is to
make sure that a path can be generated. We tested this part in a static
environment, assuming that the human motion and activities had been
detected and remained the same for a certain amount of time. The second is
to test the reactive planning: assuming that humans are moving in the
environment, we generate a path plan reactively based on the current
environmental situation. This part validates that the system responds
quickly enough to deal with uncertain and unexpected human activities in
the environment. First, we describe how we performed experiments to evaluate
our human motion-tracking system.
HUMAN DETECTION RESULTS AND DATASET OF RTI
LOCALIZATION
We use the TelosB motes as our wireless nodes to form a wireless network as
our testbed. We deploy nodes at static locations around an area of interest, as
shown in Image 8. All nodes are programmed with the TinyOS program Spin so
that the nodes take turns measuring the received signal strength from all
the other nodes. A base station is connected to a laptop to collect pairwise
RSS measurements from all nodes of the wireless network. Once we collect
RSS from the network, we feed the RSS vector to our KRTI formulation as
described in Section 4.1.1.
Image 8.
Experimental layout with reconstructed RTI image (anchor nodes shown as
bullets, true person position shown as a cross, RTI position estimate shown as
a triangle, and robot’s orientation and position are shown as arrow and square
for illustration purpose).
We describe our experiment in a controlled environment to evaluate our
motion-tracking system. As shown in Image 8, 34 TelosB nodes were
deployed outside the living room of a residential house. Before people started
walking in the area, we collected pairwise RSS measurements between anchor
nodes as our calibration measurements. During the online period, people
walked around a marked path a few times in the living room, so that we knew
the ground truth of the locations of the people to compare with the estimate
from RTI. The reconstructed KRTI image is shown in Image 8 with the black
triangle indicating the location of the pixel with the maximum pixel value,
which is the location estimate of a person. Note that KRTI can detect human
presence by choosing a threshold of the pixel value based on the calibration
measurements. KRTI is also capable of tracking multiple people, as shown in
Image 3. More details of the experiments and dataset can be found in Refs.
SIMULATION RESULTS
We tested our proposed algorithm in simulation. After detecting the human
activities, the robot builds the heat map and the 2D RKCM of each floor.
Image 9 displays three 2D graphs of three floors, which are labeled by
shadowed circles. They are connected by stairs or elevators. The cost of
moving from one floor to another floor using the elevators or stairs is
manually defined as three in our graphs.
Image 9.
Reactive Key Cost map.
The distance matrix used for Dijkstra’s searching algorithm is shown in Eq.
(16). Each row represents the distance from one node to other nodes. The
elements represent the distance values. Given a task of the start point i1 on
map 1 and the goal state k3 on map 3, the robot finds a path from i1 to k3 as
shown in Image 10. The cost of the path is 25, which is the global minimum
cost.
Image 10.
Experimental results.
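Reusing the dijkstra sketch above, the multi-floor search reduces to a search over one global graph. The toy graph below is hypothetical; only the inter-floor cost of 3 and the total cost of 25 mirror the experiment:

# Hypothetical key-location graph spanning three floors; elevator/stair
# edges carry the manually defined cost of 3.
graph = {
    "i1":    [("e1_f1", 4)],        # floor 1: start to elevator 1 (cost assumed)
    "e1_f1": [("e1_f2", 3)],        # elevator ride, floor 1 -> floor 2
    "e1_f2": [("e2_f2", 6)],        # floor 2: elevator 1 to elevator 2
    "e2_f2": [("e2_f3", 3)],        # elevator ride, floor 2 -> floor 3
    "e2_f3": [("k3", 9)],           # floor 3: elevator 2 to the goal k3
}
print(dijkstra(graph, "i1")["k3"])  # -> 25.0, the global minimum cost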
Compared with the time spent on movement and detection, the time for
finding a path based on the RKCM can almost be ignored. There are many
papers describing the algorithmic complexity of Dijkstra's algorithm and other
algorithms, to which interested readers can refer.
We still use traditional online sense-plan-act algorithms when the robot is
moving. However, the approach presented here largely reduces the computing
workload on the robot side. The robot knows in advance what is happening in
the environment based on the HMM model built from historical data, then it
uses the HMM to find a path which is optimal based on the historical
information, under the assumption that the current situation is similar to the
historical situation during the same time every day or every week.
DISCUSSION
DISCUSSION ON THE RESULTS
From the above experimental results, we can clearly see that our system
builds the global graph using the monitored human activities and generates a
shortest path for robots to move to the target locations.
In most indoor robotic navigation challenges, especially in crowded
environments, researchers tend to equip robots with various sensors to make
sure that they can detect everything in the environment. Then, based on the
comprehensive detection and recognition process, a smart decision-making
mechanism can help robots plan motions, switch strategies, and move
freely in the environment. This method places a large workload on the
online-computing component. The motivation of this book chapter is to find a
method to reduce such workload based on historical data of human motion
and activity.
Based on the collected human activity data, we use a simple HMM to
describe the cost of movement in the environment. Then, robots can plan their
motion in advance to avoid moving into a very crowded area. We have seen
many cases in which robots run into a crowd of humans and have great
difficulty moving out of that area, which raises concerns about safety, cost,
and efficiency. Our method can avoid such situations based on the modeling
results, as seen in the last section.
FUTURE WORK
We did find some situations in which the system does not work very well. When
the robot moves too fast and the humans nearby are walking, the planning is
not fast enough to reflect the current changes in the environment, and thus
collisions happen. We have not done much testing on this part, and we plan to
test further to make the system more robust in such situations.
Additionally, we cannot expect a static HMM to provide satisfying
information for robots to use. Every day, new patterns of human activity
may appear in the environment, and the model needs to be updated
accordingly. Thus, it is desirable to perform data collection and modeling
while robots are moving, which gives robots a lifelong learning
capability. This capability could keep the information robots use up to date
and make the planning more efficient and useful.
Moving one robot in a building is challenging, but motion planning for a
group of robots is even more complex [35, 36]. Sharing the latest human
activity models among robots is key to realizing coordinated and
collaborative robotic motion planning. The critical technical problem is to
design a method of fusing models into one for all the robots to use.
Conclusion
This book chapter proposes a hybrid method of planning paths in a global
graph composed of subgraphs. We take the human activity into consideration
to build a cost graph. This method significantly reduces the computing
workload because it avoids planning in a global graph with lots of grids and
possibilities. We also carry out experiments in simulation to validate our
proposed method. The methods proposed in this chapter provide a solution that
enables autonomous systems/robots to navigate effectively and robustly in
multistorey buildings occupied by humans.
Audio-Visual Speaker Tracking
Speaker tracking aims at localizing the moving speakers in a scene by
analysing the data sequences captured by sensors or arrays of sensors. It
gained relevance in the past decades due to its widespread applications such
as automatic camera steering in video conferencing, individual speaker
discriminating in multi‐speaker environments, acoustic beamforming, audio‐
visual speech recognition, video indexing and retrieval, human‐computer
interaction, and surveillance and monitoring in security applications. There
are numerous challenges that make speaker tracking a difficult task,
including, but not limited to, the estimation of the variable number of
speakers and their states, and dealing with various conditions such as
occlusions, limited view of cameras, illumination change and room
reverberations.
Using multi‐modal information is one way to address these challenges since
more comprehensive observations for the state of the speakers can be
collected in multi‐modal tracking as compared to the single‐modal case, and
the collection of the multi‐modal information can be achieved by sensors
such as audio, video, thermal vision, laser‐range finders and radio‐frequency
identification (RFID). Among these sensors, audio and video sensors are
commonly used in speaker tracking compared to others because of their
easier installation, lower cost and more mature data-processing tools.
Earlier methods in speaker tracking employ either visual‐only or audio‐only
data, and each modality offers some advantages but is also limited by some
weaknesses. Tracking with only video offers robust and accurate
performance when the camera field of view covers the speakers. However, it
degrades when occlusion between speakers happens, when the speakers
go out of the camera field of view, or when changes in illumination or target
appearance occur. Although audio tracking is not restricted by these
limitations, it is prone to non-negligible tracking errors because of the
intermittency of audio data. In addition, audio data may be corrupted by
background noise and room reverberations. Nevertheless, the combination of
audio and video data may improve the tracking performance when one of the
modalities is missing or neither provides accurate measurements, as audio
and visual modalities are often complementary to each other which can be
exploited to further enhance their respective strengths and mitigate their
weaknesses in tracking.
Previous techniques were focused on tracking a single person in a static and
controlled environment. However, theoretical and algorithmic advances
together with the increasing capability in computer processing have led to the
emergence of more sophisticated techniques for tracking multiple speakers in
dynamic and less controlled (or natural) environments. In addition, the type
of sensors used to collect the measurements has advanced from single- to
multi-modal.
In the literature, there are many approaches to speaker tracking using multi-
modal information, which can be categorized into two classes: one is
deterministic and data-driven, while the other is stochastic and model-driven.
Deterministic approaches are considered as an optimization problem by
minimizing a cost function, which needs to be defined appropriately. A
representative method in this category is the mean‐shift method, which
defines the cost function in terms of colour similarity measured by
Bhattacharyya distance. The stochastic and model‐driven approaches use a
state‐space approach based on the Bayesian framework as it is suitable for
processing of multi‐modal information. Representative methods are the
Kalman filter (KF), extended KF (EKF) and particle filter (PF). The PF
approach is more robust for non-linear and non-Gaussian models than the
KF and EKF approaches, since it approaches the Bayesian optimal estimate
with a sufficiently large number of particles.
One challenge in applying the PF to the tracking problem is to
choose an optimal number of particles. An insufficient number may introduce
particle impoverishment, while a larger number (than required) will lead to
extra computational cost. Therefore, choosing the optimal number of particles
is one of the issues that affect the performance of the tracker. To address this
issue and to find the optimal number of particles for the PF to use, adaptive
particle filtering (A‐PF) approaches have been proposed in Refs. Fox
proposed KLD sampling, which aims to bound the error introduced by the
sample-based representations of the PF using the Kullback-Leibler
divergence between maximum likelihood estimates (MLEs) of the states and
the underlying distribution to optimize the number of particles. The KLD‐
sampling criterion is improved in Ref. for the estimation of the number of
particles, leading to an approach for adaptive propagation of the samples.
Subsequent work introduces the innovation error to estimate the number of
particles by employing a twofold metric. The particles are removed by the
first metric in case their distance to a neighbouring particle is smaller than a
predefined threshold. The second metric is used to set the threshold on the
innovation error in order to control the birth of the particles. These two
thresholds need to be set before the algorithm is run. A new approach is
proposed in Refs., which estimates noise variance besides the number of
particles in an adaptive manner. Different from other existing adaptive
approaches, adaptive noise variance is employed in this method for the
estimation of the optimal number of particles based on tracking error and the
area occupied by the particles in the image.
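Fox's KLD-sampling bound mentioned above has a closed form. The sketch below is our rendering of that bound; the default epsilon and delta are chosen arbitrarily for illustration:

from scipy.stats import norm

def kld_particle_bound(k, epsilon=0.05, delta=0.01):
    """Number of particles needed so that the KL divergence between the
    sample-based estimate and the true posterior stays below epsilon with
    probability 1 - delta, given k occupied histogram bins (Fox's bound)."""
    if k < 2:
        return 1
    z = norm.ppf(1.0 - delta)            # upper quantile of the standard normal
    a = 2.0 / (9.0 * (k - 1))
    return int((k - 1) / (2.0 * epsilon) * (1.0 - a + (a ** 0.5) * z) ** 3) + 1

print(kld_particle_bound(k=50))          # particles required for 50 bins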
One assumption in the traditional PF used in multi‐speaker tracking is that
the number of speakers is known and invariant during the tracking. In
practice, the presence of the speakers may change in a random manner,
resulting in time‐varying number of speakers. To deal with the unknown and
variable number of speakers, the theory of random finite sets (RFSs) has been
introduced, which allows multi‐speaker filtering by propagation of the multi‐
speaker posterior. However, the computational complexity of RFS grows
exponentially as the number of speakers increases, since the complexity order
of the RFS is O(M^Ξ), where M is the number of measurements and Ξ is the
number of speakers. The PHD filtering approach was proposed to overcome
this problem as the first-order approximation of the RFS; its complexity
scales linearly with the number of speakers, since the complexity order of the
PHD is O(MΞ). This framework has been found to be promising for multi-
speaker tracking. However, the PHD recursion involves multiple integrals
that need to have closed‐form solutions for implementation. So far, two
analytic solutions have been proposed: Gaussian mixture PHD (GM‐PHD)
filter and sequential Monte Carlo PHD (SMC‐PHD) filter. Applications of
GM‐PHD filter are limited by linear Gaussian systems, which lead us to
consider SMC‐PHD filter to handle non‐linear/non‐Gaussian problems in
audio‐visual tracking.
Apart from the stochastic methodologies mentioned above, the mean-shift is
a deterministic and data‐driven method, which focuses on target localization
using a representation of the target. The mean-shift converges quickly to the
peak of the function with a small computational load. Moreover,
as a non-parametric method, the solution of the mean shift is independent
of the features used to represent the targets. On the other hand, the
performance of the mean-shift is degraded by occlusion or clutter, as it
searches for the densest (most similar) region starting from the initial position in
the region of interest. In this sense, mean-shift trackers may easily fail in
tracking small and fast-moving targets, as the region of interest may not
cover the targets, which results in a track being lost after a complete
occlusion. Also, it is formulated for single-target tracking, so it cannot handle
a variable number of targets. Therefore, several methods have been proposed
that integrate both deterministic and stochastic approaches to benefit from their
respective strengths; these will be discussed in Section 4.
Tracking modalities
VISUAL CUES
Visual tracking is a challenging task in real‐life scenarios, as the performance
of a tracker is affected by the illumination conditions, occlusion by
background objects and fast and complicated movements of the target. To
address these problems, several visual features, that is, colour, texture,
contour and motion, are employed in existing tracking systems.
Using colour feature is a very intuitive approach and commonly applied in
target tracking as the information provided by colour helps to distinguish
between targets and other objects. Several approaches can be found in the
literature which employ colour information to track the target. In Ref., a
colour mixture model based on a Gaussian distribution is used for tracking
and segmentation, while in Ref., an adaptive mixture model is developed.
Target detection and tracking can be easily maintained using colour
information if the colour of the target is distinct from those of the background
or other objects.
Another approach to tracking is contour-based, where shape matching or
contour-evolution techniques are used to track the target contour. Active
models such as snakes, geodesic-active contours, B-splines or meshes can be
employed to represent the contours. Occlusion of the target by other objects
is a common problem in tracking. It can be addressed by
detecting and tracking the contour of the upper body rather than the
whole body, which also aids the detection of a new person, as upper bodies
are often distinguishable from back and front views for
different people.
Texture is another cue defined as a measure for surface intensity variation.
Properties like smoothness and regularity can be quantified by the texture.
The texture feature is used with Gabor wavelets in Ref. Gabor filters can
be employed as orientation and scale‐tunable edge and line detectors, and the
statistics of these micro‐features are mostly used to characterize the
underlying texture information in a given region. For improved detection and
recognition, local patterns of image have gained attention recently. Local
patterns are used in several application areas such as image classification and
face detection since they offer promising results. In Ref., the local binary
patterns (LBPs) method is used to create a type of texture descriptor based on
a grey‐scale‐invariant texture measure. Such a measure is tolerant to
illumination changes.
Another cue used in tracking, particularly in indoor environments, is motion
which is an explicit cue of human presence. One way to extract this cue is to
apply foreground detection algorithms. A simple method for foreground
detection is to compute the difference of two consecutive frames, which gives
the moving part of the image. Although it has been used in multi-modal
tracking systems, it fails when the person remains stationary, since the person
is considered part of the background after some time.
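The frame-differencing motion cue described here is only a few lines with OpenCV; the threshold value below is an arbitrary assumption:

import cv2

def motion_mask(prev_gray, curr_gray, thresh=25):
    """Binary foreground mask from two consecutive grey-scale frames by
    frame differencing; stationary people vanish from the mask over time."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask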
The scale‐invariant feature transform (SIFT) proposed in Ref. has found wide
use in tracking applications. SIFT uses local features to transform the image
data into scale‐invariant coordinates. Distinctive invariant features are
extracted from images to provide matching between several views of an
object. The SIFT feature is invariant to scaling, translation, clutter, rotation,
occlusion and lighting which makes it robust to changes in three‐dimensional
(3D) viewpoint and illumination, and the presence of noise. Even a single
feature has high matching rate in a large database because the SIFT features
are generally distinctive. On the other hand, non‐rigid targets in noisy
environments degrade the SIFT matching rate and recognition performance.
So far, several visual cues were introduced, and among them colour cues
have been used more commonly in tracking applications due to their easy
implementation and low complexity. Colour information can be used in the
calculation of the histogram of possible targets at the initialization step as
reference images which can be used in detection and tracking of the target.
Two colour histogram models, RGB and HSV, are common in the
literature; HSV is preferable, since it is observed to be more robust
to illumination variation.
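A reference HSV histogram of a target region, as used for detection and tracking above, can be computed with OpenCV as follows. The bin counts and the Bhattacharyya comparison are common choices rather than a prescription from this chapter:

import cv2

def hsv_histogram(bgr_patch, bins=(16, 16)):
    """Normalized 2D hue-saturation histogram of a target image patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

def histogram_distance(h1, h2):
    """Bhattacharyya distance between two histograms (0 = identical)."""
    return cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA)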
AUDIO CUES
A variety of audio information can be used in audio tracking,
such as sound source localization (SSL), time-delay estimation (TDE) and
the direction of arrival (DOA) angle.
The audio source localization methods can be divided into three categories,
namely steered beamforming, super‐resolution spectral estimation and time‐
delay estimation. Beamformer‐based source localization offers comparatively
low resolution and needs a search over a highly non-linear surface. It is also
computationally expensive, which may limit real-time applications.
Super-resolution spectral estimation methods are not well suited to locating
a moving speaker, since they assume that the speaker location is
fixed for a number of frames, whereas the location of a moving speaker
may change considerably over time. In addition, these methods are not robust
to modelling errors caused by room reverberation and mostly have high
computational cost. The time‐delay of arrival (TDOA)‐based location
estimators use the relative time delay between the wave‐front arrivals at
microphone positions in order to estimate the location of the speaker. As
compared with the other two methods, the TDOA-based approach has
two advantages: its computational efficiency and its direct connection
to the speaker location.
The problem of DOA estimation is similar to that of the TDOA estimation.
To estimate the DOA, the TDOA needs to be determined between the sensor
elements of the microphone array. Estimation of source locations mainly
depends on the quality of the DOA measurements. In the literature, several
DOA estimation techniques such as the MUSIC algorithm and the coherent
signal subspace (CSS) have been proposed. The main differences between
them are the way of dealing with reverberation, background noise and
movement of the sources. Three factors influence the quality of the DOA
estimation: first, the spectral content of the speech segment from which the
DOAs are derived; second, the reverberation level of the room, which causes
outliers in the measurements because of reflections from objects and walls;
and third, the positions of the microphone array relative to the speakers,
together with the number of simultaneous sources in the field.
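For a single microphone pair under a far‐field assumption, the DOA follows
directly from the TDOA and the microphone spacing; a small sketch, where the
spacing d and the speed of sound c are assumed values:

import numpy as np

def doa_from_tdoa(tau, d=0.1, c=343.0):
    # Far-field DOA (radians) for two mics spaced d metres apart.
    # Clipping guards against noisy delays exceeding the physical limit d / c.
    return np.arcsin(np.clip(c * tau / d, -1.0, 1.0))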
Audio‐visual speaker tracking
Speaker tracking is a fundamental part of multimedia applications, playing a
critical role in determining speaker trajectories and analysing speaker
behaviour. Speaker tracking can be accomplished with the use of audio‐only,
visual‐only or audio‐visual information.
Approaches for speaker tracking based on audio‐only information have been
presented in the literature. An audio‐based fusion scheme was proposed in Ref. to detect
multiple speakers where the locations from multiple microphone arrays are
estimated and fused to determine the state of the same speaker. Separate KFs
are employed for all the individual microphone arrays for the location
estimation. To deal with motion of the speaker and measurement uncertainty,
the probabilistic data association technique is used with an interacting model.
One issue in Ref. is that it cannot deal with the tracking problem for a time‐
varying number of speakers. Ma et al. proposed an approach based on
random finite sets (RFSs) to track an unknown and time‐varying number of
speakers. The RFS theory and a sequential Monte Carlo (SMC) implementation
are used to develop the Bayesian RFS filter, which tracks the time‐varying
number of speakers and their states.
The random finite set theory can deal with a time‐varying number of
speakers; however, the maximum number of speakers that can be handled is
limited as its computational complexity increases exponentially with the
number of speakers. To address this, a cardinalized PHD (CPHD) filter, built
on the first‐order probability hypothesis density (PHD) approximation of the
RFS, is proposed in Ref. to reduce the computational cost caused by the
number of speakers. The positions of
the speakers are estimated using TDOA measurements from microphone
pairs by asynchronous sensor fusion with the CPHD filter.
A time‐frequency method and the PHD filter are used in Ref. to localize and
track simultaneous speakers. The locations of multiple speakers are estimated
with the time‐frequency method, which uses an array of three microphones;
the PHD filter is then applied to the localization results as post‐processing
to handle missed detections and clutter.
Speaker tracking with multi‐modal information has also gained attention, and
many approaches have been proposed in the past decade using audio‐visual
information, exploiting the complementary characteristics of each modality.
The differences among these existing works arise from the overall objective
such as tracking either single or multiple speakers and the specific
detection/tracking framework.
Audio‐visual measurements are fused by graphical models in Ref. to track a
moving speaker in a cluttered and noisy environment. Audio and video
observations are used jointly by computing their mutual dependencies. The
model parameters are learnt using the expectation‐maximization algorithm
from a sequence of audio‐visual data.
A hierarchical Kalman filter structure was proposed in Refs. to track people
in a three‐dimensional space using multiple microphones and cameras. Two
independent local Kalman filters are employed for audio and video streams,
and then the outputs of these two local filters are combined under one global
Kalman filter.
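The exact structure is specific to the cited works, but the flavour of the
global combination step can be sketched as inverse‐covariance (information)
fusion of the two local estimates, assuming both local filters report states
x_a, x_v and covariances P_a, P_v in a common state space:

import numpy as np

def fuse_local_estimates(x_a, P_a, x_v, P_v):
    # Combine the audio-KF and video-KF outputs by weighting each
    # estimate with its inverse covariance (more certain = more weight).
    Pa_inv, Pv_inv = np.linalg.inv(P_a), np.linalg.inv(P_v)
    P = np.linalg.inv(Pa_inv + Pv_inv)       # fused covariance
    x = P @ (Pa_inv @ x_a + Pv_inv @ x_v)    # fused state
    return x, P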
In other works, particle filters are used to estimate the predictions from
audio‐ and video‐based measurements, and audio‐visual information fusion is
performed at the feature level; in other words, the independent particle
coordinates from the features of both modalities are fused for speaker
tracking. These works have focused on the single‐speaker case, which cannot
directly address the tracking problem for multiple speakers.
Two multi‐modal systems are introduced in Ref. for the tracking of multiple
persons. A joint probabilistic data association filter is employed to detect
speech and determine active speaker positions. Two systems are used for the
visual features: the first applies a particle filter using foreground, colour,
upper‐body detection and person‐region cues from multiple camera images, and
the second is a blob tracker using only a wide‐angle overhead view.
Then, acoustic and visual tracks are integrated using a finite state machine.
In contrast, a particle filtering framework is proposed in Ref. which
incorporates the audio and visual detections through a joint observation
model. It has the capability to track multiple people jointly with their
speaking activity based on a mixed‐state dynamic graphical model defined on a
multi‐person state space. Another particle filter based multi‐modal fusion
approach is proposed in Ref. where a single speaker can be identified in the
presence of multiple visual observations; a Gaussian mixture model was
adopted to fuse the multiple observations and modalities. By contrast, a
particle filtering framework is not used in Ref.; instead, a hidden Markov
model based iterative decoding scheme is used to fuse audio and visual cues
for the localization and tracking of persons.
In Refs., the Bayesian framework is used to handle the tracking problem for a
varying number of speakers. The particle filter is used in Ref., and
observation likelihoods based on both audio and video measurements are
formulated to use in the estimation of the weights of the particles, and then
the number of people is calculated using the weights of these particles. The
RFS theory based on multi‐Bernoulli approximations is employed in Ref. to
integrate audio and visual cues with sequential Monte Carlo implementation.
The nature of the random finite set formulation allows their framework to
deal with the tracking problem for a varying number of targets. Sequential
Monte Carlo implementation (or particle filter) of the PHD filter is used in
Ref., where audio and visual modalities are fused within the steps of the
particle filter rather than through a separate data fusion algorithm. Their
work substantially differs
from existing works in AV multi‐speaker tracking with respect to the
capabilities for dealing with multiple speakers, simultaneous speakers, and
unknown and time‐varying number of speakers.
Tracking algorithms
In this section, a brief review of tracking algorithms is presented which
covers the following topics: Bayesian statistical methods, visual and audio‐
visual algorithms and non-linear filtering approaches.
Recall from Section 1 that tracking methods are either stochastic and model‐
driven or deterministic and data‐driven.
The stochastic approaches are based on the Bayesian framework which uses a
state‐space approach. Representative methods in this category are the Kalman
filter (KF), extended Kalman filter (EKF) and particle filter (PF). The PF
approach is more robust as compared to the KF and EKF approaches as it can
approach the Bayesian optimal estimate with a sufficiently large number of
particles. It has been widely applied to speaker tracking problems. The PF is
used to fuse object shapes and audio information in Refs. In Ref.,
independent audio and video observation models are fused for simultaneous
tracking and detection of multiple speakers. However, one challenge in the PF
is to choose an appropriate number of particles: an insufficient number may
lead to particle impoverishment (i.e. loss of diversity among the particles),
while a larger number than required induces additional computational cost.
Therefore, the performance of the tracker depends on tuning the number of
particles towards an optimal value.
The PHD filter is another stochastic method based on the finite‐set statistics
(FISST) theory, which propagates the first‐order moment of a dynamic point
process. The PHD filter has been used in many application areas since its
proposal, and some applications to speaker tracking are reported in Refs. It
has an advantage over other Bayesian approaches, such as the Kalman and
particle filters, in that the number of targets does not need to be known in
advance since it is
estimated in each iteration. The issue in the PHD filter is that it is prone to
estimation error in the number of speakers in the case of low signal‐to‐noise
ratio. The reason is that the PHD filter restricts the propagation of multi‐
target posterior to the first‐order distribution moment, resulting in loss of
information about higher‐order cardinality. To address this issue, the
cardinality distribution is also propagated along with the PHD in the
cardinalized PHD (CPHD) filter, which improves the estimation of the number
and states of the speakers. However, the additional cardinality distribution
requires extra computational load, which makes the CPHD filter
computationally more expensive
than the PHD filter. Moreover, the spawning of new targets is not modelled
explicitly in the CPHD filter.
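For reference, the standard PHD recursion underlying this discussion can be
written as follows, where D is the intensity (first‐order moment), p_S and
p_D are the survival and detection probabilities, \gamma_k the birth
intensity, g_k the measurement likelihood and \kappa_k the clutter intensity:

D_{k|k-1}(x) = \gamma_k(x) + \int p_S(\zeta)\, f_{k|k-1}(x \mid \zeta)\, D_{k-1}(\zeta)\, d\zeta

D_k(x) = \bigl[1 - p_D(x)\bigr] D_{k|k-1}(x) + \sum_{z \in Z_k} \frac{p_D(x)\, g_k(z \mid x)\, D_{k|k-1}(x)}{\kappa_k(z) + \int p_D(\zeta)\, g_k(z \mid \zeta)\, D_{k|k-1}(\zeta)\, d\zeta}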
As a deterministic and data‐driven method, the mean-shift uses a
representation of the target for localization, based on minimizing an
appropriate cost function. In that sense, a similarity function is defined in
Ref. to reduce the state estimation problem to a search in the region of
interest. To obtain fast localization, a gradient optimization method is
performed. The mean-shift works under the assumption that the representation
of the target is sufficiently distinct from the background, which may not
always hold.
Although the mean-shift is an efficient and robust approach, in occlusion and
rapid motion scenarios, it may fail when the target is out of the region of
interest, in other words, the search area.
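A minimal OpenCV sketch of this idea tracks a window by mean-shift on the hue
back‐projection of a reference histogram; the video file name and the initial
window are placeholders.

import cv2

cap = cv2.VideoCapture("walk.mp4")          # hypothetical video
ok, frame = cap.read()
track_window = (300, 200, 80, 160)          # assumed initial x, y, w, h
x, y, w, h = track_window
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
# Stop after 10 iterations or a shift of less than 1 pixel.
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back, track_window, criteria)

If the target leaves this window, for example after occlusion or rapid
motion, the iterations converge on background clutter, which is exactly the
failure mode described above.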
Many approaches have been proposed in the literature to address these
problems in mean‐shift tracking, which can be categorized into two groups.
One group improves the mean‐shift tracking by, for example, introducing
adaptive estimation of the search area, iteration number and bin number. In
the other group, the mean‐shift algorithm is combined with other methods
such as particle filter. The stochastic and deterministic approaches are
integrated under the same framework in many studies. Particle filtering
(stochastic) is integrated with a variational approach (deterministic) in
Ref., where the ‘switching search’ algorithm is run for all the particles. In this
algorithm, the momentum of the particles is compared with a pre‐determined
threshold value, and if it is smaller than the threshold, the deterministic
search is run; otherwise, the particles are propagated in terms of a stochastic
motion model.
The particle filtering and mean-shift are combined in Ref. under the name of
mean-shift embedded particle filter (MSEPF). It is inspired by Sullivan and
Rittscher, but the mean-shift is used as a variational method. It aims to
integrate the advantages of the particle filtering and mean‐shift methods. The
MSEPF has a capability to track the target with a small number of particles as
the mean‐shift search concentrates on the particles around local modes
(maxima) of the observation. To deal with possible changes in illumination, a
skin colour model is used and updated at every frame. As an
observation model, colour and motion cues are employed. To use a multi‐cue
observation model, the mean‐shift analysis is modified and applied to all the
particles. Resampling (selective resampling) is, then, applied when the
effective sample size is too small. The mean‐shift and particle filtering
methods are used independently in Ref. The estimated positions of the target
obtained by the two methods are compared using the Bhattacharyya distance at
every iteration, and the better value is chosen as the estimated position of
the target; this prevents the algorithm from being trapped in a local maximum
and thus helps it find the true maximum beyond the local one.
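The comparison step can be sketched as follows, where p and q stand for the
two normalized colour histograms produced by the mean-shift and particle
filter candidates; the function is the generic definition, not code from the
cited work.

import numpy as np

def bhattacharyya_distance(p, q):
    # Distance between two normalized histograms; 0 means identical.
    bc = np.sum(np.sqrt(p * q))       # Bhattacharyya coefficient
    return np.sqrt(max(1.0 - bc, 0.0))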
A hybrid particle and mean‐shift tracker is proposed in Ref., which works in
a similar manner to that in Ref. Alternatively, the approach in Ref. uses the
original formulation of the mean-shift and performs the mean‐shift process on
all the particles to reach the local maxima. Moreover, an adaptive motion
model is used to deal with manoeuvring targets, which move at high speed. The kernel
particle filter is proposed in Ref. where small perturbations are added to the
states of the particles after the mean‐shift iteration to prevent the gradient
ascent from stopping too early in the density. The kernel radius is
calculated adaptively at every iteration, and this method is applied to
multiple‐target tracking using multiple hypotheses, which are then evaluated
and assigned to possible targets. An adaptive mean‐shift tracker with
auxiliary particles is
proposed in Ref. As long as the conditions are met, such as the target
remaining in the region of interest, and there are no serious distractions, the
mean-shift is used as the main tracker. When sudden motions or distractions
are detected by the motion estimator, auxiliary particles are introduced to
support the mean‐shift tracker. As the mean shift may diverge from the target
and converge on the background, background/foreground feature selection is
applied to minimize the tracking error. Even though this study is inspired by
Sullivan and Rittscher, where the main tracker is a particle filter, in Ref., the
main tracker is the mean-shift. In addition, the switched trackers are used to
handle sudden movements, occlusion and distractions. Moreover, to maintain
tracking even when the target appearance is affected by changes in
illumination or viewpoint, the target model is updated online.
In the literature, several frameworks have been proposed to combine the
mean-shift and particle filters. However, an explicitly designed framework is
still required for a variable number of targets, since both the mean-shift
and the particle filter were derived for tracking only a single target.
To address this issue, the PHD filter is found as a promising solution as it is
originally designed for multi‐target tracking. However, the PHD filter does
not have closed‐form solutions as the recursion of the PHD filter includes
multi‐dimensional integrals. To obtain a tractable approximation of the PHD
filter, a particle filter, or sequential Monte Carlo (SMC), implementation is
introduced, which leads to SMC‐PHD filtering. In Ref., the mean-shift is
integrated with standard SMC‐PHD filtering, aiming at improving
computational efficiency and estimation accuracy of the tracker for a variable
number of targets.
Besides the tracking methods explained so far, speaker tracking with multi‐
modal usage introduces a problem which is known as data association. Each
measurement coming from multi‐modality needs to be associated with an
appropriate target. Data association methods are divided into two classes.
The first is unique‐neighbour data association, in which each existing track
is associated with exactly one of the measurements; a representative method
in this class is multiple hypothesis tracking (MHT). The second is
all‐neighbours data association, which uses all the measurements to update
the entire track estimate, for example, the joint probabilistic data
association (JPDA). In MHT, the association between a target state and the
measurements is maintained by multiple hypotheses. However, the required
number of hypotheses increases exponentially over time. In JPDA, separate
Gaussian distributions for each target are used to approximate the posterior
target distribution which results in an extra computational cost. Data
association algorithms in target‐tracking applications with Bayesian methods
and the PHD filter can be found in the literature. However, classical data
association algorithms are computationally expensive, which motivates fusing
the multi‐modal measurements inside the tracking framework itself. As in
Refs., audio and visual modalities are fused within the steps of the visual
particle filter.
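For flavour, a much cheaper relative of MHT and JPDA is global‐nearest‐
neighbour association, sketched below with the Hungarian algorithm; the track
and detection positions and the gating threshold are illustrative values.

import numpy as np
from scipy.optimize import linear_sum_assignment

tracks = np.array([[0.0, 0.0], [5.0, 5.0]])                   # predicted tracks
detections = np.array([[0.2, -0.1], [4.8, 5.3], [9.0, 1.0]])  # measurements

# Cost matrix of pairwise Euclidean distances.
cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
for t, d in zip(rows, cols):
    if cost[t, d] < 1.0:            # simple gating threshold
        print("track", t, "<- detection", d)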
Among the methods explained above, the PF, RFS, PHD filter and mean-shift
are the main methods discussed throughout this chapter and the main
concepts of the methods are presented below.
PARTICLE FILTERING
The PF became a widely used tool in tracking after being proposed by Isard et
al., due to its ability to handle non‐linear and non‐Gaussian problems. The
main idea of the PF is to represent a posterior density by a set of random
particles with associated weights, and then compute estimates based on these
samples and weights. The principle of the particle filter is illustrated in
Image 1. Ten particles are initialized with equal weights in the first step. In
the second step, the particles are weighted based on the given measurements;
as a result, some particles receive small weights while others receive larger
weights, represented in the figure by the size of the particles. The state
distribution is
represented by these weighted particles. Then, a resampling step is performed
which selects the particles with large weights to generate a set of new
particles with equal weights in the third step. In step four, these new particles
are distributed again to predict the next state. This loop continues from steps
two through four until all the observations are exhausted.
Image 1.
Steps of the particle filter. The first step is particle initialization with equal
weights. The particles are weighted in the second step. After a resampling
step is performed in the third step, the particles are distributed to predict the
next state in the fourth step. This Image is adapted from Ref.
Although there are various extensions of the PF in the literature, the basic
concept is the same and based on the idea of representing the posterior
distribution by a set of particles.
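The loop described above can be condensed into a short NumPy sketch of a
bootstrap particle filter for a one‐dimensional state; the motion model, the
likelihood and the measurement values are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # number of particles
particles = rng.normal(0.0, 1.0, N)       # step 1: initialization
weights = np.full(N, 1.0 / N)             # equal initial weights

for z in [0.4, 0.9, 1.3, 1.6]:            # illustrative measurements
    # Step 2: weight particles by the measurement likelihood.
    weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # Step 3: resample when the effective sample size is too small
    # (the selective resampling mentioned earlier).
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    # Step 4: propagate particles through the motion model.
    particles += rng.normal(0.3, 0.2, N)
    print("estimate:", np.sum(weights * particles))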
RANDOM FINITE SET AND PHD FILTERING
The generic PF is designed for single‐target tracking. Multi‐target tracking is
more complicated than single‐target tracking as it is necessary to jointly
estimate the number of targets and the states of the targets. One multi‐target
tracking scenario is illustrated in Image 2a, where five targets exist in the
state space (bottom plane) at the previous time, with eight measurements in
the observation space (upper plane). In this scenario, the number of
measurements is larger than the number of targets due to clutter or noise.
When the targets are propagated to the current time, the number of targets
becomes three, as two targets no longer exist.
Image 2.
An illustration of the RFS theory in a multi‐target tracking application. One
possible multi‐target tracking scenario is given in (a), and (b) represents the
RFS approach to multi‐target tracking. The Images are adapted from Ref.
In that sense, the variable number of targets and noisy measurements need to
be handled for reliable tracking in the multi‐target case. The RFS approach is an
elegant solution to address this issue. The basic idea behind the RFS
approach is to treat the collection of targets as a set‐valued state called the
multi‐target state and the collection of measurements as a set‐valued
observation, called multi‐observation. So, the problem of estimating multiple
targets in the presence of clutter and uncertainty is handled by modelling
these set‐valued entities as random finite sets. The point here is to generalize
the tracking problem from a single target to multiple targets.