Escuela Internacional CACIC 2008
Guillermo Ricardo Simari

Announcement
Argumentation in Artificial Intelligence
Carlos Iván Chesñevar – Guillermo Ricardo Simari

Argumentation in Artificial Intelligence:
Theoretical Foundations and Technological Applications
Where we come from…
Carlos Chesñevar & Guillermo Simari

Since 1995, the Department of Computer Science and Engineering has been operating at the UNS, and within it the Laboratory of Research and Development in Artificial Intelligence (LIDIA).

CONICET and Laboratory of R&D in A.I.
Department of Computer Science and Engineering
Universidad Nacional del Sur
Bahía Blanca, Argentina
http://lidia.cs.uns.edu.ar
By way of introduction…
• Argumentation can be viewed, in the abstract, as the exchange of reasons for and against a certain conclusion.
• In recent years, argumentation has gained increasing importance in Artificial Intelligence (AI) as a vehicle for facilitating rational interaction.
• In a multi-agent context, argumentation provides tools for designing, modeling, and implementing forms of interaction between agents.
• In a single-agent context, argumentation can be used to model the agent's reasoning, taking into account preferences, the dynamics of the environment, etc.
• This course aims to give an introduction to argumentation in the context of intelligent agents.

What does this course aim at?
• To provide an introduction to the fundamental theoretical concepts associated with argumentation in Artificial Intelligence.
• To introduce some of the logical systems used for argumentation, particularly the so-called Defeasible Logic Programming (DeLP) and its extensions.
• To show how argumentation can be applied in the multi-agent context to model decision making, negotiation, etc.
• To illustrate argumentation-based applications and intelligent systems developed recently.
• Final goal: that whoever takes the course can access the scientific literature on argumentation in AI, apply it in their own research area, and/or go deeper into the topics they found interesting.

Argumentation in Artificial Intelligence, 2008
About the material
• In this course we will work with slides that summarize the main contents of the course.
• The slides are complemented by various journal articles and papers from recent conferences.
• The slides will be in Spanish or English.

Outline – Part 1
• Introduction. Motivations for this course. Goals.
• Default Reasoning: problems, alternatives. Argumentation as a way of formalizing default reasoning.
• Systems for Defeasible Argumentation. Elements: argument, counterargument, defeat, dialectical analysis. Unique-status vs. multiple-status approaches.
• Conclusions

Escuela Internacional CACIC 2008
Chilecito, Argentina – October 6–10, 2008
Main references – Part 1
• H. Prakken, G. Vreeswijk. Logical Systems for Defeasible Argumentation. In D. Gabbay (Ed.), Handbook of Philosophical Logic, 2nd Edition, 2002.
• C. Chesñevar, A. Maguitman, R. Loui. Logical Models of Argument. ACM Computing Surveys, Dec. 2000.
• Trevor J.M. Bench-Capon, Paul E. Dunne. Argumentation in Artificial Intelligence. Artificial Intelligence Journal, Volume 171, Issues 10–15, July–October 2007, Pages 619–641.

Outline
• (Very brief) Introduction to Multiagent Systems
• What is argumentation? Fundamentals
• Conclusions

Overview
Ubiquity, Interconnection, Intelligence
• Five ongoing trends have marked the history of computing: ubiquity; interconnection; intelligence; delegation; and human-orientation.
• As processing capability spreads, sophistication (and intelligence of a sort) becomes ubiquitous. What could benefit from having a processor embedded in it…?
• The Internet is powerful… Some researchers are putting forward theoretical models that portray computing as primarily a process of interaction.
• The complexity of the tasks that we are capable of automating and delegating to computers has grown steadily.

Credits: some of these slides are based on Michael Wooldridge's lecture notes for his book "An Introduction to MAS" (Wiley & Sons, 2002)
Delegation, Human-Orientation
• Computers are doing more for us – without our intervention. Next on the agenda: fly-by-wire cars, intelligent braking systems…
• Programmers conceptualize and implement software in terms of higher-level – more human-oriented – abstractions.
• The movement away from machine-oriented views of programming toward concepts and metaphors that more closely reflect the way we ourselves understand the world.

Programming progression…
Programming has progressed through:
• machine code;
• assembly language;
• machine-independent programming languages;
• sub-routines;
• procedures & functions;
• abstract data types;
• objects;
to agents.
Where does it bring us?
• Delegation and Intelligence imply the need to build computer systems that can act effectively on our behalf.
• This implies:
  • The ability of computer systems to act independently.
  • The ability of computer systems to act in a way that represents our best interests while interacting with other humans or systems.

Interconnection and Distribution
• Interconnection and Distribution have become core motifs in Computer Science.
• But Interconnection and Distribution, coupled with the need for systems to represent our best interests, imply systems that can cooperate and reach agreements (or even compete) with other systems that have different interests (much as we do with other people).
So Computer Science expands…
• All of these trends have led to the emergence of a new field in Computer Science: Multiagent Systems.
• These issues were not studied in Computer Science until recently.

Multiagent Systems: a Definition
• An agent is a computer system that is capable of independent action on behalf of its user or owner (figuring out what needs to be done to satisfy design objectives, rather than constantly being told).
• A multiagent system is one that consists of a number of agents, which interact with one another.
• In the most general case, agents will be acting on behalf of users with different goals and motivations.
• To successfully interact, they will require the ability to cooperate, coordinate, and negotiate with each other, much as people do.

Agents as Technologies
In Multiagent Systems, we address questions such as:
• How can cooperation emerge in societies of self-interested agents?
• What kinds of languages can agents use to communicate?
• How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement?
• How can autonomous agents coordinate their activities so as to cooperatively achieve goals?

From: Agent Technology: A Roadmap for Agent-based Computing. M. Luck et al., 2005, AgentLink.
Multiagent Systems
• An agent needs the ability to perform internal reasoning:
  • Reasoning about beliefs, desires, …
  • Handling inconsistencies
  • Making decisions
  • Generating, revising, and selecting goals
  • ...

Multi-agent systems
• Agents need to:
  • exchange information and explanations
  • resolve conflicts of opinions
  • resolve conflicts of interests
  • make joint decisions
  → they need to engage in dialogues
• Argumentation provides a powerful metaphor for solving these problems!
Why study argumentation in MAS?
• For the internal reasoning of single agents:
  • Reasoning about beliefs, goals, ...
  • Making decisions
  • Generating, revising, and selecting goals
• For interaction between multiple agents:
  • Exchanging information and explanations
  • Resolving conflicts of opinions
  • Resolving conflicts of interests
  • Making joint decisions

The role of argumentation
• Argumentation plays a key role in achieving the goals of the above dialogue types.
• Argument = a reason for some conclusion (belief, action, goal, etc.)
• Argumentation = reasoning about arguments → deciding on a conclusion
• Dialectical argumentation = multi-party argumentation through dialogue
The role of argumentation
Argumentation plays a key role in reaching agreements:
• Additional information can be exchanged.
• The opinion of the agent is explicitly explained (e.g., arguments in favor of opinions or offers, arguments in favor of a rejection or an acceptance).
• Agents can modify/revise their beliefs / preferences / goals.
• To influence the behavior of an agent (threats, rewards).

Types of Dialogues
• Information seeking. Initial situation: personal ignorance. Main goal: spreading knowledge. Participants' aims: gain, pass on, show, or hide knowledge. Subtypes: expert consultation, interview, interrogation.
• Persuasion. Initial situation: conflicting beliefs. Main goal: resolution of the conflict. Participants' aims: persuade the other(s) by verbal means. Subtypes: dispute.
• Inquiry. Initial situation: general ignorance. Main goal: growth of knowledge & agreement. Participants' aims: find a proof or destroy one. Subtypes: scientific research, investigation.
Types of Dialogues (cont.)
• Deliberation. Initial situation: need for action. Main goal: reach a decision. Participants' aims: influence the outcome. Subtypes: board meeting, war planning.
• Negotiation. Initial situation: conflict of interests & need for cooperation. Main goal: making a deal. Participants' aims: get the best for oneself. Subtypes: bargaining, union negotiation, land dispute.

Typology by Walton & Krabbe, 1995

A negotiation dialogue
Buyer: Can't you give me this Peugeot 407 a bit cheaper?
Seller: Sorry, that's the best I can do. Why don't you go for a Peugeot 307 instead?
Buyer: I have a big family and I need a big car. (B1)
Seller: Modern 307's are becoming very spacious and would easily fit in a big family. (S1)
Buyer: I didn't know that, let's also look at 307's then.
Generic Agent
[Figure: a generic agent. Sensors (touch, smell, taste, hearing, sight) receive perceptions from the environment; a "What to do next?" module (language, emotions, cognitive behavior, thought, memory) chooses the actions; effectors (muscles) execute the chosen actions.]
From: Artificial Intelligence: A Modern Approach, 2nd Ed., S. Russell & P. Norvig, 2003.

Do-it-yourself agent
[Figure: a BDI-style architecture. Sensors feed beliefs; an interpreter combines beliefs, desires, plans, and intentions and drives the actuators. The interpreter can be an argumentation-based reasoning engine. Argumentation!]
Simple Reactive Agent
[Figure: sensors yield observations about the current state of the world; condition–action rules decide what to do; effectors act on the environment. The decision module can be an argumentation-based reasoning engine.]

Reactive Agent with Inner State
[Figure: as above, plus an inner state updated with a model of how the world evolves.]

Agent with Explicit Goals
[Figure: additionally maintains explicit goals and a model of the consequences of doing action A, in order to decide what to do now.]

Utility-based agent
[Figure: additionally uses a utility function ("How good is the state I would achieve?") over the predicted consequences. Again, the decision module can be an argumentation-based reasoning engine.]
Outline
• (Very brief) Introduction to Multiagent Systems
• What is argumentation? Fundamentals

"Argumentation is a verbal and social activity of reason aimed at increasing (or decreasing) the acceptability of a controversial standpoint for the listener or reader, by putting forward a constellation of propositions intended to justify (or refute) the standpoint before a rational judge." (van Eemeren et al., 1996)
Elements of Argument
• Premises:
  • If Iraq has weapons of mass destruction (WMDs), then it poses a threat to world peace
  • Iraq has WMDs
• Conclusion:
  • Iraq poses a threat to world peace

Argumentation: two flavors
• Logic-oriented view: follows a more "traditional" style in symbolic AI. Has enabled the development of automated argumentation systems (e.g. DeLP).
• Linguistic-oriented view: borrows many concepts from legal reasoning, rhetoric, etc. Gained importance in the context of the World Wide Web (Semantic Web tools).
• Cross-fertilization between the two views.

Credits for some of the next slides: Iyad Rahwan (COMMA 2006 and AAAI 2007 presentations) and Henry Prakken (EAAAS 2007).
Types of Argument [Walton 2006]
• Deductive argument
  • If the premises are true, the conclusion is true
  • E.g. mathematical proof
• Inductive argument
  • Generalisation from observation
  • True to some degree of confidence
• Presumptive argument
  • Conclusion is plausible, given the premises
  • Argument is defeasible, as it may be overridden

Toulmin's Schema (1958)
  D. So, Q, C
  Since W
  Unless R
  On account of B
where D = Data, Q = Qualifier, C = Claim, W = Warrant, B = Backing, R = Rebuttal.

Example:
• Claim: Iraq poses a threat to world peace
• Data: Iraq has WMDs
• Warrant: Countries with WMDs threaten peace
• Backing: 80% of countries with WMDs since 1920 disrupted world peace
• Rebuttal: Unless these countries are democracies
• Qualifier: Presumably, certainly, possibly, etc.
Walton's Argument Schemes
• Argument schemes classify typical forms of premises and conclusions.
• E.g. the scheme for Argument from Expert Opinion:
  • Premise: Source E is an expert in the subject domain S
  • Premise: E asserts that proposition A in domain S is true
  • Conclusion: A may plausibly be taken to be true
• E.g. an actual Argument from Expert Opinion:
  • Premise: The CIA says that Iraq has WMDs
  • Premise: The CIA are experts on WMDs
  • Conclusion: Iraq has WMDs
• In natural language, some premises may be implicit.

Critical Questions
Critical questions allow one to undermine an argument scheme. E.g., for the scheme for Argument from Expert Opinion:
• Expertise: How credible is expert E?
• Backup Evidence: Is A supported by evidence?
• Field: Is E an expert in the field that A is in?
• Opinion: Does E's testimony imply A?
• Trustworthiness: Is E reliable?
• Consistency: Is A consistent with other experts' testimony?
Rebutting via Critical Questions: Presumptions vs. Exceptions
• Undermine a presumption, e.g.
  • Backup Evidence: Is A supported by evidence?
  • "You presume that the CIA have evidence for Iraq's WMDs, but this is not true."
• Show that an exception holds, e.g.
  • Consistency: Is A consistent with other experts' testimony?
  • "Actually, many other experts disagree that Iraq has WMDs."

Using Argument Schemes
• There are many other schemes, e.g. argument from analogy, argument from negative consequence, …
• They play an important role in argumentative analysis:
  1. Analyse text, speech, etc.
  2. Identify the argument scheme used
  3. Make implicit premises explicit
  4. Use critical questions for evaluation
Argument schemes: general form
• The same as logical inference rules:
    Premise 1, … , Premise n
    Therefore (presumably), Conclusion
• Critical questions provide "pointers" to ways of making an argument scheme inapplicable (e.g. undermining presumptions, showing that an exception holds, etc.)

Combining Arguments
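The "presumptive inference rule plus critical questions" reading of argument schemes can be sketched in code. The following is an illustrative sketch of mine, not part of the course material; all names (`ArgumentScheme`, `expert_opinion`, the template strings) are hypothetical.

```python
# Illustrative sketch (not from the course): an argument scheme as a
# presumptive inference rule plus the critical questions that can defeat it.
from dataclasses import dataclass, field

@dataclass
class ArgumentScheme:
    name: str
    premises: list                                   # premise templates
    conclusion: str                                  # conclusion template
    critical_questions: list = field(default_factory=list)

    def instantiate(self, **bindings):
        """Fill in the templates to obtain a concrete argument."""
        return ([p.format(**bindings) for p in self.premises],
                self.conclusion.format(**bindings))

# Walton's scheme for Argument from Expert Opinion
expert_opinion = ArgumentScheme(
    name="Argument from Expert Opinion",
    premises=["Source {E} is an expert in domain {S}",
              "{E} asserts that {A} (in domain {S}) is true"],
    conclusion="{A} may plausibly be taken to be true",
    critical_questions=["Expertise: how credible is {E}?",
                        "Field: is {E} an expert in the field {A} is in?",
                        "Backup evidence: is {A} supported by evidence?"])

premises, conclusion = expert_opinion.instantiate(
    E="the CIA", S="WMDs", A="Iraq has WMDs")
print(conclusion)  # Iraq has WMDs may plausibly be taken to be true
```

Answering one of the critical questions negatively would make the instantiated scheme inapplicable, as described above.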
Argument Schema: "Normative syllogism"
• P, and "if P then as a rule Q", is a reason to believe that Q.
• Critical question: are there exceptions?
  • How does a lawyer argue for exceptions to a rule?
    • Say legislation makes an exception
    • Say it is motivated by the rule's purpose
    • Find an overruling principle
    • Argue that rule application has bad consequences

Argument Schema: Statistical syllogism (John Pollock)
• P, and "if P then usually Q", is a reason to accept that Q.
  • E.g.: birds usually fly
• Critical question: is there a subproperty defeater?
  • A conflicting generalisation about an exceptional class
  • E.g.: penguins don't fly
Argument Schema: Witness testimony
  Witness W says P
  Therefore (presumably), P
• Critical questions:
  • Is W sincere? (veracity)
  • Did W really see P? (objectivity)
  • Did P occur? (observational sensitivity)

Argument Schema: Temporal persistence
  P is true at T1, and T2 > T1
  Therefore (presumably), P is still true at T2
• Critical questions:
  • Was P known to be false between T1 and T2?
  • Is the gap between T1 and T2 too long?
Argument Schema: Arguments from consequences
  Action A brings about good consequences
  Therefore (presumably), A should be done
• Critical questions:
  • Does A also have bad consequences?
  • Are there other ways to bring about the good consequences?
( We will come back to this later … )

Types of Dialogue (Walton and Krabbe, 1995)
• Persuasion: dialogue goal = resolution of conflict; initial situation = conflict of opinion
• Negotiation: dialogue goal = making a deal; initial situation = conflict of interest
• Deliberation: dialogue goal = reaching a decision; initial situation = need for action
• Information seeking: dialogue goal = exchange of information; initial situation = personal ignorance
• Inquiry: dialogue goal = growth of knowledge; initial situation = general ignorance
The main problem: representing and reasoning with commonsense knowledge
Suppose we want to represent knowledge of the real world:
1. Birds fly.
2. Penguins do not fly.
3. Penguins are birds.
4. Tweety is a bird.
5. Tweety is a penguin.

K = { Bird(x) → Fly(x),
      Penguin(x) → ¬Fly(x),
      Penguin(x) → Bird(x),
      Bird(tweety),
      Penguin(tweety) }

K ⊢ Fly(tweety)
K ⊢ ¬Fly(tweety)

First-order logic cannot deal with inconsistent information! This is a central problem in Artificial Intelligence… how to deal with defaults?

Defeasible reasoning in AI
• Reasoning is generally defeasible
  • Assumptions, exceptions, uncertainty, ...
• AI formalises such reasoning with non-monotonic logics
  • Default logic, etc.
  • New premises can invalidate old conclusions
• Argumentation logics formalise defeasible reasoning as the construction and comparison of arguments
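The inconsistency K ⊢ Fly(tweety) and K ⊢ ¬Fly(tweety) can be reproduced mechanically. The sketch below is my own illustration (not from the course): naive forward chaining over the ground instance of K for tweety, with "~" standing for classical negation.

```python
# Illustrative sketch: naive (monotonic) forward chaining over the Tweety
# knowledge base, instantiated for the single constant "tweety".
RULES = [  # (premise, conclusion) pairs, already ground
    ("Bird(tweety)", "Fly(tweety)"),       # Birds fly.
    ("Penguin(tweety)", "~Fly(tweety)"),   # Penguins do not fly.
    ("Penguin(tweety)", "Bird(tweety)"),   # Penguins are birds.
]
FACTS = {"Bird(tweety)", "Penguin(tweety)"}

def closure(facts, rules):
    """Monotonic forward chaining: apply rules until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

theorems = closure(FACTS, RULES)
# Both a literal and its negation are derivable: the theory is inconsistent.
print("Fly(tweety)" in theorems and "~Fly(tweety)" in theorems)  # True
```

Classical inference happily derives both literals; nothing in the logic itself blocks the contradiction, which is exactly the problem the approaches below address.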
Some issues in default reasoning
Typical problems in (non-monotonic) default reasoning:
1) Representation of defaults: e.g. "Birds usually fly"
2) Inconsistency handling: identify relevant subsets of consistent information.
3) Identifying preferred models

Many approaches have been developed:
• Default logic (Reiter, 1980)
• Preferred subtheories (Brewka, 1989)
• Circumscription (McCarthy, 1987)
• Others…

How to represent and reason with defaults?
Solutions:
a) Abnormality conditions: an additional predicate "ab" is included.
   ∀x Bird(x) ∧ ¬ab1(x) → Canfly(x)
   E.g.: ∀x Bird(x) ∧ ¬BrokenWing(x) → Canfly(x)
   Problem: how many different types of abnormalities? → Qualification problem
How to represent and reason with defaults?
Solutions:

b) Default logic (Reiter, 1980)
   Bird(x) : Canfly(x) / Canfly(x)
   "If it is provable that X is a bird, and it is not provable that X cannot fly, then we may infer that X can fly."
   D = { Bird(x) : Canfly(x) / Canfly(x) }
   W = { Bird(tweety), ∀x Penguin(x) → ¬Canfly(x) }
   Then D ∪ W ⊢ Canfly(tweety)
   But D ∪ W ∪ { Penguin(tweety) } ⊬ Canfly(tweety)

c) Preferred Subtheories (Brewka, 1989)
   Defaults can also be formalized as formulas in first-order logic (ordinary implications). Defeasibility is captured in terms of consistency-handling strategies: some subtheories are preferred over others.
   (1) Bird → Canfly
   (2) Penguin → ¬Canfly
   (3) Bird
   (4) Penguin
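How the single default Bird(x) : Canfly(x) / Canfly(x) behaves with and without Penguin(tweety) can be sketched as follows. This is an illustrative toy of mine (not from the course): the default is hard-coded for the ground instance x = tweety, and provability in W is approximated by forward chaining over ground rules.

```python
# Illustrative sketch of Reiter's default Bird(x) : Canfly(x) / Canfly(x)
# on the ground instance x = tweety.
def consequences(facts, rules):
    """Deductive closure of the classical part W (naive forward chaining)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def apply_default(facts, rules):
    """Conclude Canfly(tweety) if Bird(tweety) is provable and
    ~Canfly(tweety) is NOT provable (the justification is consistent)."""
    closed = consequences(facts, rules)
    if "Bird(tweety)" in closed and "~Canfly(tweety)" not in closed:
        closed.add("Canfly(tweety)")
    return closed

RULES = [({"Penguin(tweety)"}, "~Canfly(tweety)")]  # Penguin(x) -> ~Canfly(x)

print("Canfly(tweety)" in apply_default({"Bird(tweety)"}, RULES))                      # True
print("Canfly(tweety)" in apply_default({"Bird(tweety)", "Penguin(tweety)"}, RULES))   # False
```

Adding Penguin(tweety) makes ~Canfly(tweety) provable, which blocks the default's justification: the non-monotonic behavior of the slide's D ∪ W example.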
   Two conflicting subtheories: {(1),(3)} and {(2),(4)}.

d) Defeasible Rules
   Defaults can also be formalized in terms of special rules, called "defeasible rules", used to represent tentative knowledge:
   ∀x Bird(x) → Canfly(x)   (If X is a bird, then X can fly)
   Bird(X) ⇒ Canfly(X)   (Birds usually can fly)

Systems for defeasible argumentation: generalities
Argumentation systems (AS) are "yet another way" to formalize common-sense reasoning. Non-monotonicity arises from the fact that new premises may give rise to stronger counterarguments, which in turn will defeat the original argument.

Views on default reasoning from an argumentation perspective:
1) Normality condition view: an argument = a standard proof from a set of premises + normality statements. A counterargument is an attack on such a normality statement.
2) Inconsistency handling view: an argument = a standard proof from a consistent subset of the premises. A counterargument is an attack on a premise of an argument.
3) Semantic view: constructing 'invalid' arguments (wrt the semantics) is allowed in the proof theory. A counterargument is an attack on the use of an inference rule which deviates from a preferred model.
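The consistency-handling idea behind preferred subtheories can be illustrated with a small sketch of mine (not from the course): the two subtheories {(1),(3)} and {(2),(4)} each derive a conclusion, but the four formulas together are inconsistent.

```python
# Illustrative sketch: Brewka-style consistency handling over the four
# ground formulas (1)-(4). Consistency is checked by forward chaining and
# looking for a literal together with its negation ("~").
FORMULAS = {
    1: ({"bird"}, "canfly"),      # (1) Bird -> Canfly
    2: ({"penguin"}, "~canfly"),  # (2) Penguin -> ~Canfly
    3: (set(), "bird"),           # (3) Bird
    4: (set(), "penguin"),        # (4) Penguin
}

def closure(rules):
    """Deductive closure of ground rules (facts have empty premise sets)."""
    derived, changed = set(), True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def consistent(ids):
    facts = closure([FORMULAS[i] for i in ids])
    return not any(("~" + a) in facts for a in facts)

# The two conflicting subtheories support opposite conclusions, and the
# whole theory {(1),(2),(3),(4)} is inconsistent:
print(sorted(closure([FORMULAS[1], FORMULAS[3]])))  # ['bird', 'canfly']
print(sorted(closure([FORMULAS[2], FORMULAS[4]])))  # ['penguin', '~canfly']
print(consistent({1, 2, 3, 4}))                     # False
```

A preference ordering over the formulas would then decide which of the conflicting subtheories prevails.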
Systems for Defeasible Argumentation
According to Prakken & Vreeswijk (2002), there are five common elements in systems for defeasible argumentation:
• Definition of the Underlying Logical Language
• Definition of Argument
• Definition of Conflict among Arguments
• Definition of Defeat among Arguments
• Definition of the Status of Arguments

Argumentation process (Amgoud)
• Constructing arguments
• Defining the interactions between arguments
• Evaluating the strengths of arguments
• Defining the status of arguments
• Drawing conclusions using a consequence relation (the inference problem)
• Comparing decisions using a given principle (the decision-making problem)
The underlying logic: arguments & logical consequence
• Argumentation systems are constructed starting from a logical language and an associated notion of logical consequence for that language.
• The logical consequence relation helps to define what will be considered an argument.
• This consequence relation is monotonic, i.e., new information cannot invalidate arguments as such, but rather gives rise to counterarguments.
• Arguments are seen as proofs in the chosen logic.

Language for Argumentation
The object language is basically a logical language:
• Constants (tweety, opus, etc.) and variables (X, Y, Z).
• Predicate symbols (flies, bird, etc.).
• Connectives (∧, →, ¬, not).
• A special symbol for defeasible rules, e.g. ⇒.
• Negation is important!
The concept of argument
Arguments are presented under different forms:
• An inference tree grounded in premises.
• A deduction sequence.
• A pair (Premises, Conclusion), leaving unspecified the particular proof, in the underlying logic, that leads from the Premises to the Conclusion.
• A completely unspecified structure, such as in Dung's abstract framework for argumentation (1995).

Argument as a 'proof'
K = { bird(tweety), penguin(tweety), bird(X) → flies(X), penguin(X) → bird(X), penguin(X) → ¬flies(X) }
Using modus ponens, from K we can infer both flies(tweety) and ¬flies(tweety): two conflicting "arguments" derived from K, one supporting flies(tweety) from bird(tweety) (itself derived from penguin(tweety)), the other supporting ¬flies(tweety) from penguin(tweety).
Logic does not "care" about the contents of proofs; logic is static. Argumentation does care, as it is a dynamic process…
The concept of argument (cont.)
That is why it is important to distinguish tentative knowledge from strict knowledge, e.g.:
Kstrict = { bird(tweety), penguin(tweety), penguin(X) → bird(X) }
Kdefeasible = { bird(X) ⇒ flies(X), penguin(X) ⇒ ¬flies(X) }
Two conflicting "arguments" are derived from Kstrict ∪ Kdefeasible: one for flies(tweety), one for ¬flies(tweety).

Different kinds of arguments…
Types of arguments (Kraus et al. 98, Amgoud & Prade 05):
• Explanations (involve only beliefs)
  • Tweety flies because it is a bird
• Threats (involve beliefs + goals)
  • You should do α, otherwise I will do β
  • You should not do α, otherwise I will do β
• Rewards (involve beliefs + goals)
  • If you do α, I will do β
  • If you don't do α, I will do β
• …
Conflict, Attack, Counterargument
The notion of conflict (counterargument or attack) between arguments is typically discussed by discriminating three cases:
• Rebutting attacks: arguments with contradictory conclusions.
• Assumption attacks: attacking non-provability assumptions.
• Undercutting attacks: an argument that undermines some intermediate step (inference rule) of another argument.
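The first two kinds of attack can be illustrated on arguments represented as (support, conclusion) pairs. This is a hypothetical sketch of mine, not from the course; undercutting would additionally require access to the inference steps, so it is omitted here.

```python
# Illustrative sketch: rebutting and assumption attacks on arguments given
# as (support, conclusion) pairs. "~" is classical negation and the marker
# "not_provable:p" stands for a non-provability assumption about p.
def neg(lit):
    """Complement of a literal: p <-> ~p."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def rebuts(arg1, arg2):
    """Rebutting attack: contradictory conclusions (symmetric)."""
    return neg(arg1[1]) == arg2[1]

def assumption_attacks(arg1, arg2):
    """Assumption attack: arg1 proves something arg2 assumed unprovable."""
    return ("not_provable:" + arg1[1]) in arg2[0]

A = ({"bird(tweety)", "not_provable:penguin(tweety)"}, "flies(tweety)")
B = ({"penguin(tweety)"}, "~flies(tweety)")
C = ({"penguin(tweety)"}, "penguin(tweety)")

print(rebuts(A, B), rebuts(B, A))  # True True  (rebutting is symmetric)
print(assumption_attacks(C, A))    # True      (C undercuts A's assumption)
```

Note that the assumption attack is not symmetric: A does not attack C.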
Rebutting and assumption attacks
• Rebutting is symmetric, e.g.: 'Tweety flies because it is a bird' versus 'Tweety doesn't fly because it is a penguin' (tweety flies vs. ¬tweety flies).
• Assumption attack: 'Tweety flies because it is a bird and it is not provable that Tweety is a penguin' versus 'Tweety is a penguin' (not(penguin tweety) vs. penguin tweety).

Undercutting attack
• An argument challenges the connection between the premises and the conclusion: an argument for ¬⎡p, q, r / h⎤ attacks the inference step from p, q, r to h.
• E.g.: 'Tweety flies because all the birds I've seen fly' versus 'I've seen Opus; it is a bird and it doesn't fly'.

Direct vs. Indirect Attack
These types of attack can be direct or indirect: a direct attack (¬p against p) targets an argument's conclusion; an indirect attack targets a subargument s on which the conclusion p depends.
Defeat: Comparing Arguments
• The notion of conflict does not embody any form of comparison; this is another element of argumentation.
• Defeat has the form of a binary relation between arguments, standing for:
  • 'attacking and not weaker' (defeat)
  • 'attacking and stronger' (strict defeat)
• Terminology varies: 'defeat' (Simari, 1989; Prakken & Sartor, 1997), 'attack' (Dung, 1995; Bondarenko et al., 1997), and 'interference' (Loui, 1998).
• Argumentation systems vary in their grounds for the evaluation of arguments. One common criterion is the specificity principle, which prefers arguments based on the most specific defaults.

E.g., ⟨A, flies(opus)⟩ with support { bird(opus) } versus ⟨B, ¬flies(opus)⟩ with support { bird(opus), broken_wing(opus) }: since ⟨A, flies(opus)⟩ ≤ ⟨B, ¬flies(opus)⟩, B strictly defeats A.
Argumentación en Inteligencia Artificial
Carlos Iván Chesñevar – Guillermo Ricardo Simari
Defeat among arguments via specificity

If something moves when touched, it is usually not dead.
If something looks dead, it is usually dead.
If a spider is dead, usually it is not dangerous.
If something is a spider, it is usually dangerous.
Black widow (bw) is a spider.
Black widow (bw) looks dead.
Black widow (bw) moves when touched.

?- dangerous(bw).
Yes.

• Specificity criterion: prefers arguments (a) with more information or (b) more direct.

Argument A for dangerous(bw), built from spider(bw).

Argument B for ~dangerous(bw), built from spider(bw) and dead(bw) (via looks_dead(bw)): B defeats A, since it is strictly more specific than A (it is more informed); thus A < B.

Argument C for ~dead(bw), built from moves_when_touched(bw): C defeats B, since it attacks argument B at an inner subargument, being as specific as the subargument being attacked.

Discussion: "Computing Generalized Specificity" (F. Stolzenburg, A. García, C. Chesñevar, G. Simari). Journal of Applied Non-Classical Logics, Vol. 13, No. 1, pp. 87–113, 2003.
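The defeat relations in this example can be replayed in code. The fragment below is a hypothetical Python encoding (the argument labels and the `warranted` predicate are illustrative; this is not DeLP itself), using a recursive reading of warrant that is valid on acyclic defeat graphs like this one:

```python
# Defeat relations from the example: B defeats A (specificity),
# C defeats B (attack on B's inner subargument for dead(bw)).
conclusion = {'A': 'dangerous(bw)', 'B': '~dangerous(bw)', 'C': '~dead(bw)'}
defeaters = {'A': {'B'}, 'B': {'C'}, 'C': set()}

def warranted(arg):
    """An argument is warranted when none of its defeaters is warranted
    (terminates only for acyclic defeat relations, as here)."""
    return all(not warranted(d) for d in defeaters[arg])

print({conclusion[a]: warranted(a) for a in 'ABC'})
# {'dangerous(bw)': True, '~dangerous(bw)': False, '~dead(bw)': True}
```

C is undefeated, so B is defeated and A is reinstated: the query dangerous(bw) is answered Yes, as on the slide.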
Defeat: Comparing Arguments

• However, it has been argued that specificity is not a general principle of commonsense reasoning, but rather a standard that might (or might not) be used.
• Some researchers even claim that general, domain-independent principles of defeat do not exist, or are very weak.
• Some even argue that the evaluation criteria are part of the domain theory, and should also be debatable.

What do you think?

Defeat: Comparing Arguments

• In Simari & Loui's framework, specificity is used as a default, but it is 'modular': any other preference relation defined among arguments could be used.
• In Dung's framework, defeat is an abstract notion, left undefined.
• In Bondarenko's framework, defeat is limited to attack between arguments (there is no preference at all!)
• Other comparison criteria are possible…
Defeat: Comparing Arguments

• Defeat is basically a binary relation on a set of arguments.
• But it just tells us something about two arguments, not about a dispute (which may involve many arguments).
• A common situation is reinstatement, as in the example below, where an argument C reinstates an argument A by defeating argument B:

A ← B† ← C

(C defeats B, and B defeats A; since B is defeated, A is reinstated.)

Systems for Defeasible Argumentation

Definition of Underlying Logical Language
Definition of Argument
Definition of Conflict among Arguments
Definition of Defeat among Arguments
Definition of Status of Arguments
Status of Arguments

• The last element in our ontology comes into play: the definition of Status of Arguments.
• This notion is the actual output of most argumentation systems, and arguments are divided into (at least) two classes:
  - Arguments with which a dispute can be 'won'
  - Arguments with which a dispute can be 'lost'
  - Arguments that leave the dispute 'undecided'
• Usual terminology: 'justified' or 'warranted' vs. 'defeated' or 'overruled' vs. 'defensible', etc.

Status of Arguments

• Status of arguments can be computed either in 'declarative' or 'procedural' form.
• The declarative form usually requires fixed-point definitions, and establishes certain sets of arguments as acceptable (in the context of a set of premises and an evaluation criterion), but without defining a procedure for testing whether a given argument is a member of this set.
• The 'procedural form' amounts to defining such a procedure for acceptability.
Status of Arguments

Declarative form of argumentation — Argumentation-theoretic semantics
Procedural form of argumentation — Proof theory

Argument-based Semantics

• Which conditions on sets of arguments should be satisfied?
• We will assume as background:
  - A set Args of arguments
  - A binary relation of 'defeat' defined over it.

Def. 1: Arguments are either justified or not justified.
1. An argument is justified if all arguments defeating it (if any) are not justified.
2. An argument is not justified if it is defeated by an argument that is justified.
Argument-based Semantics

Example: Consider three arguments A, B and C, where C defeats B and B defeats A. By Def. 1, arguments A and C are justified; argument B is not.

Example: Even cycle
A = "Nixon was a pacifist because he was a quaker."
B = "Nixon wasn't a pacifist because he was a republican."
A and B defeat each other; there are two status assignments that satisfy Def. 1.
Escuela Internacional CACIC 2008
Chilecito, Argentina – 06 al 10 de octubre de 2008
90
Argumentación en Inteligencia Artificial
Carlos Iván Chesñevar – Guillermo Ricardo Simari
Argument-based Semantics

In the literature, two approaches to the solution of this problem can be found.

• First approach: changing Def. 1 in such a way that there is always precisely one possible way to assign a status to arguments. Undecided conflicts get the status 'not justified'. This allows a unique-status assignment (u.s.a.).
• Second approach: allowing multiple assignments, defining an argument as 'genuinely' justified iff it is justified in all possible assignments. This allows a multiple-status assignment (m.s.a.).

The Unique-Status-Assignment Approach

This idea can be presented in two different ways:
• Using a fixed-point operator
• Given a recursive definition of justified argument
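The first approach can be sketched as a fixed-point computation. The Python below is an illustrative implementation (function and variable names are assumptions) of Def. 1 read bottom-up: arguments are added to the justified set only when forced, so undecided conflicts never enter it and end up 'not justified':

```python
def justified_arguments(args, defeaters):
    """Iterate Def. 1 upward from the empty set. 'defeaters' maps each
    argument to the set of arguments defeating it."""
    justified, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in justified or a in defeated:
                continue
            if all(d in defeated for d in defeaters.get(a, ())):
                justified.add(a)   # all defeaters (if any) are defeated
                changed = True
            elif any(d in justified for d in defeaters.get(a, ())):
                defeated.add(a)    # defeated by a justified argument
                changed = True
    return justified  # undecided arguments stay outside: 'not justified'

# Reinstatement chain: C defeats B, B defeats A
print(sorted(justified_arguments({'A', 'B', 'C'}, {'A': {'B'}, 'B': {'C'}})))
# ['A', 'C']

# Even cycle (Nixon): A and B defeat each other -> neither is justified
print(sorted(justified_arguments({'A', 'B'}, {'A': {'B'}, 'B': {'A'}})))
# []
```

On the even cycle the conflict is undecided, so the unique-status reading leaves both arguments not justified, exactly as the first approach prescribes.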
Problems with Unique-Status Assignment

There are some problems when evaluating unique-status assignments.

Example: Floating Arguments / Floating Conclusions

A′ = Juan is mendocino because he lives in Mendoza.
B′ = Juan is porteño because of his accent.
A = Juan is Argentine because he is mendocino.
B = Juan is Argentine because he is porteño.

[Figure: A and B defeat each other; each of them defeats C; C defeats D. Both A and B support the same conclusion p.]

But since C is defeated by A and B, we could argue that C "floats" on A and B, and therefore it should be overruled. And then D is justified.

The unique-status approach is inherently unable to capture floating arguments and conclusions.

Using Multiple-Status Assignment

• A second way to deal with competing arguments of equal strength is to let them induce two alternative status assignments.
• Evaluating the outcomes of the alternative status assignments lets us determine when an argument is justified.

Def. (Status assignment): Given a set S of arguments ordered by a binary defeat relation, a status assignment sa(S) is a function which maps every argument in S into {in, out}, such that:
i. A is in iff all arguments defeating it (if any) are out.
ii. A is out if it is defeated by an argument that is in.
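For small argument sets, the definition above can be tested by brute force. The sketch below (illustrative names) enumerates every {in, out} labelling satisfying conditions i and ii, applied to the floating example:

```python
from itertools import product

def status_assignments(args, defeaters):
    """Enumerate all labellings of 'args' with in/out that satisfy:
    i.  in  -> all defeaters are out (and: all defeaters out -> in)
    ii. out -> some defeater is in."""
    args = sorted(args)
    result = []
    for labels in product(('in', 'out'), repeat=len(args)):
        sa = dict(zip(args, labels))
        ok_in = all(sa[a] != 'in' or
                    all(sa[d] == 'out' for d in defeaters.get(a, ()))
                    for a in args)
        ok_out = all(sa[a] != 'out' or
                     any(sa[d] == 'in' for d in defeaters.get(a, ()))
                     for a in args)
        if ok_in and ok_out:
            result.append(sa)
    return result

# Floating example: A and B defeat each other, both defeat C, C defeats D
defeaters = {'A': {'B'}, 'B': {'A'}, 'C': {'A', 'B'}, 'D': {'C'}}
for sa in status_assignments('ABCD', defeaters):
    print(sa)
# {'A': 'in', 'B': 'out', 'C': 'out', 'D': 'in'}
# {'A': 'out', 'B': 'in', 'C': 'out', 'D': 'in'}
```

There are exactly two status assignments, and D is 'in' in both of them: under the multiple-status reading, D is justified, capturing the floating argument that the unique-status approach misses.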
Example

A and B defeat each other (an even cycle): there are two status assignments, one with A 'in' and B 'out', and another with B 'in' and A 'out'.

Classifying Arguments

Def.: Given a set S of arguments ordered by a binary defeat relation, an argument A is
– justified iff it is 'in' in all sa(S)
– overruled iff it is 'out' in all sa(S)
– defensible iff it is 'out' in some sa(S) and 'in' in others.

Def. (Justification): Given a set S of arguments ordered by a binary defeat relation, an argument is justified iff it is 'in' in all possible status assignments to S.

Comparing the two approaches

• Are the two approaches equivalent? The answer is no.
• Some researchers say that the difference between the two approaches can be compared with the 'skeptical' vs. 'credulous' attitude towards drawing defeasible conclusions...
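The three-way classification (justified / overruled / defensible) can likewise be computed by enumerating status assignments. A minimal Python sketch with illustrative names; it presupposes that at least one status assignment exists:

```python
from itertools import product

def status_assignments(args, defeaters):
    """All in/out labellings satisfying the status-assignment conditions."""
    args = sorted(args)
    result = []
    for labels in product(('in', 'out'), repeat=len(args)):
        sa = dict(zip(args, labels))
        ok_in = all(sa[a] != 'in' or
                    all(sa[d] == 'out' for d in defeaters.get(a, ()))
                    for a in args)
        ok_out = all(sa[a] != 'out' or
                     any(sa[d] == 'in' for d in defeaters.get(a, ()))
                     for a in args)
        if ok_in and ok_out:
            result.append(sa)
    return result

def classify(args, defeaters):
    """Classify each argument by its label across all status assignments
    (meaningful only when at least one assignment exists)."""
    sas = status_assignments(args, defeaters)
    status = {}
    for a in sorted(args):
        labels = {sa[a] for sa in sas}
        status[a] = ('justified' if labels == {'in'} else
                     'overruled' if labels == {'out'} else
                     'defensible')
    return status

# Even cycle: both arguments are defensible
print(classify('AB', {'A': {'B'}, 'B': {'A'}}))
# {'A': 'defensible', 'B': 'defensible'}

# Odd cycle of defeat: there are no status assignments at all
print(status_assignments('ABC', {'A': {'B'}, 'B': {'C'}, 'C': {'A'}}))
# []
```

The empty result for the odd cycle illustrates the difficulty discussed next: some defeat relations admit no status assignment whatsoever.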
• m.s.a. is more convenient for identifying sets of arguments that are compatible with each other.
• u.s.a. considers arguments on an individual basis.

Problems with Multiple-Status Assignment

Example: a cycle of an odd number of arguments (say A defeats B, B defeats C, and C defeats A).
• What are the status assignments?
• There are no status assignments!

END OF PART 1