DOI: 10.1142/S0218213010000273
International Journal on Artificial Intelligence Tools, Vol. 19, No. 4 (2010) 439–464
© World Scientific Publishing Company
STRATEGIES TO DEFEND A PROTAGONIST OF AN EVENT
SARA BOUTOUHAMI and DANIEL KAYSER
LIPN – UMR 7030 du C.N.R.S. – Institut Galilee – Univ. Paris-Nord,
93430 Villetaneuse, France
boutouhami, [email protected]
We aim at controlling the biases that exist in every description, in order to give the best possible im-
age of one of the protagonists of an event. Starting from a supposedly complete set of propositions
accounting for an event, we develop various argumentative strategies (insinuation, justification, ref-
erence to customary norms) to imply the facts that cannot be simply omitted but have the “wrong”
orientation w.r.t. the protagonist we defend. By analyzing these different strategies, a contribution of
this work is to provide a number of relevant parameters to take into account in developing and eva-
luating systems aiming at understanding natural language (NL) argumentations. The source of inspi-
ration for this work is a corpus of 160 texts where each text describes a (different) car accident. Its
result, for a given accident, is a set of first-order literals representing the essential facts of a descrip-
tion intended to defend one of the protagonists. An implementation in Answer Set Programming is
underway. A couple of examples showing how to extract, from the same starting point, a defense for
the two opposite sides are provided. Experimental validation of this work is in progress, and its first
results are reported.
Keywords: Argumentation; strategies; inference rules; natural language; nonmonotonic reasoning;
semi-normal defaults.
1. Introduction
1.1. Motivation
No text provides an exhaustive description of an event, and if it could, that text would be
excessively long and tedious. So every description whatsoever implies a selection be-
tween what is said and what is left for the reader to infer. This selection cannot be truly
neutral: the choice of lexical items puts stress, voluntarily or not, on this or that fea-
ture; opting for the active or passive voice depends on whether we want to focus on who did
what, or rather to leave the agent in the shadow, and so on. All these choices cannot be
fully unprejudiced.
This paper aims at controlling the biases that necessarily exist in all descriptions, in
order to fulfill a goal: to give the best possible image of one of the protagonists of the
event. This goal can itself be considered as part of a more ambitious project: on the prac-
tical side, to provide better help to writers, by looking not only at their spelling or at their
style but more deeply at the linking of their arguments; on the theoretical side, simulating
the reasoning that we perform more or less consciously when we select what to write and
how to write it is a way to pursue the grand ambition of Artificial Intelligence. In fact,
finding good arguments, balancing them, and ordering them so as to impress the reader is
far from obvious, and requires a kind of reasoning which, as far as we know, has been little studied
in AI (see however Ref. 13).
1.2. Brief state of the art
Our goal is to produce descriptions which, while remaining truthful, are expected to
trigger, in the reader’s mind, inferences ultimately leading him/her to conclude what is
favorable for one agent.
Argumentation is definitely a field of study in AI,a but in a framework that differs
significantly from ours: our objective is basically connected with linguistics, and more
specifically pertains to rhetoric, which is a part of pragmatics. It requires determining,
from their content, which propositions can serve as arguments.
By contrast, authors in AI generally assume prior knowledge of a relation
of attack between arguments,7,18
and the arguments are supposed to exist independently of
their use in the argumentation. The focus is to defend a thesis in a goal-oriented dialogue,
the goal being to counteract every argument liable to attack that thesis. These works use
mainly logical tools and remain separate from linguistic concerns. Despite their theoreti-
cal and practical interest, these works do not accurately reflect the spontaneous nature of
argumentation used in our daily life.
Argumentation, as we want to investigate it, is located at a crossroads of a number of
disciplines: not only linguistics but also communication sciences, logic, sociology, psy-
chology. The problem is that, except for formal logic, the models developed in those
disciplines are not of a kind that can be implemented on computers.
According to Toulmin,26
formal logic is too abstract and is an inappropriate represen-
tation of how human beings actually argue. Formal logic is concerned with the notion of
internal correctness and validity and it assumes that concepts do not change with time.
Like Toulmin, Perelman observes that formal logic does not take into account value
judgments in everyday arguments, and he considers arguments based on value judgments
as non-rational. As a consequence, both Toulmin and Perelman consider formal logic as
not suitable for the practical purpose of finding arguments when we are neither concerned
with absolute truth nor with internal validity. According to Perelman, argumentation
proceeds informally rather than in keeping with logical forms; the object of his theory of
argumentation is the study of the discursive techniques inducing or increasing the mind’s
adherence to the theses presented for its assent.20
Toulmin rather recommends a transfor-
mation of Logic, from the science of mathematical proofs, towards a practical guideline
for commonsense reasoning. Instead of taking the syllogism as the archetype of the
means to reach a conclusion, he defends a richer structure composed of a qualified claim
a The interest and the use of argumentation in the sub-disciplines of AI are growing; for details see Ref. 3
[http://www.cmna.info/]. For an overview on current research in the interdisciplinary area lying between
Artificial Intelligence and the theory of argumentation, see Refs. 21 and 22.
supported by both data and warrant, the latter being itself equipped with a backing and a
possible rebuttal.
This model can be used post hoc to explain, generally in more than one way, how a
given text is an argumentation for a given thesis. But it is much more difficult to see how
we could take advantage of it to build a text from the thesis we want to argue for. In a
similar vein, Jean-Blaise Grize has developed an alternative to formal logic in the analy-
sis of argumentation: a so-called natural logic that extends the realm of reasoning that
can be covered, but remains far from providing help to build the reasoning itself. Natural
logic studies the operations that enable the construction of meaning in an argumentation
process, the way the mind reasons being analyzed through studies of language.12 The
concern of natural logic is to capture phenomena of thought and not phenomena of
language, unlike Anscombre and Ducrot. As a matter of fact, the theory of argumentation of these
authors2 studies the argumentative entailments from a linguistic point of view. The main
idea is that language does not purport to represent the world, but to argue about it.
Argumentation is defined as the study of the linking of utterances leading to a conclusion.
Anscombre and Ducrot endow every utterance with an argumentative aspect and an ar-
gumentative orientation. The analysis of connectors occupies an important place in this
theory, since the connectors define the nature of links between utterances.
The description we wish eventually to generate should be in Natural Language (NL)
but as NL generation is by itself a very difficult task, we provisionally limit our ambition
to the generation of an ordered list of propositions, whose predicates and arguments are
NL words. Examples of interesting works on NL generation taking into account
the pragmatic aspects of the situation are Refs. 13 and 16. The former provides a list of
parameters to characterize the type of output that fits best the purpose of a writer in a
large number of situations; the latter focuses, as we do, on more specific issues: her study
concerns procedural texts, ours, car-crash descriptions, but the spirit is similar in both
cases.
The paper is organized as follows: Section 2 describes more precisely our task and its
motivations. Section 3 shows the general architecture of the system in which the argu-
mentative strategies are embedded. Section 4 presents the argumentative techniques
themselves. Section 5 is devoted to the representation language and its implementation.
Section 6 gives detailed examples. Section 7 describes a psychological experiment built
in order to validate our work.
2. Description of the Task
Our purpose is to build a system that generates a description of a situation of car crash
that is “optimally” argued, which means that it tends to minimize the share of responsibil-
ity of the defended protagonist.
To achieve our goal, we need a starting point: we assume that we have at our disposal
a comprehensive account of a real event. Starting from it, we want to build what we call a
“biased description” that:
(i) Contains enough information for the reader to reconstruct the outline of the event,
(ii) Does not contain any falsehood,
(iii) Gives the best possible image of the protagonist that we decide to defend.
Technically, we provide manually as input to our system:
• A list L of propositions describing the event in full detail,
• A subset of L, called the minimal list ML, considered as a sufficient basis for a reader
to understand roughly what happened.
From that data, we use the flexibility of NL to produce a description supposed to
guide the reader’s inferences in the most favorable way for one of the protagonists. Our
task makes use of two components: the reasoning and the language component. The main
idea is to use common sense knowledge in order to differentiate between what is in favor
of the desired conclusion and what is not, and especially to determine the best way to
present what is unfavorable. The power of the reasoning rules is partly due to the way
they exploit the flexibility and the variety of natural language expressions. We try in this
work to articulate, in a single architecture, argumentative techniques related both to rea-
soning and to NL issues.
2.1. The domain of application
Our domain of application concerns road accidents; we have selected this domain for the
following reasons:
• The omnipresence of the argumentative aspect in these texts. Drivers involved in an
accident write short reports describing its circumstances, and send them to their
insurance company. They naturally try to present things in the most advantageous
perspective for them.
• Most people know enough about car crashes; they do not need special knowledge, as
would be the case for, say medicine, to extract implicit information, to understand a
text describing an accident, and thus to answer questions such as “what caused the accident?”
or “who is responsible for the accident?”. This makes it easier to design validation
scenarios.
• There exist a large number of short texts (at most 5 lines) describing the circumstances
of an accident: every insurance company receives daily car-crash reports.b
• The choice of this domain is also motivated by its limited lexical and semantic field;
since the kinds of accidents are nevertheless varied enough, it provides us with a wide
diversity of cases.
• A corpus of such reports has already been studied from different points of views (see
for example Refs. 8, 10, 14 and 17).
b The corpus is composed of 166 reports sent to their insurance company by drivers involved in a car accident.
We are grateful to the MAIF insurance company for having given us access to these texts.
• The corpus serves us as a source of inspiration for various situations of accidents and
as a benchmark for the purpose of validation.
• We used it in a preliminary study5 to analyze the strategies that drivers adopt to give a
good opinion of their conduct. Some drivers are really good arguers and provide what
seems to us an optimal version; others are less gifted, and their efforts to hide their
blunders are counterproductive. In any case, this study helped us to determine which
parameters to play on in order to produce an effect on the reader.
3. General Architecture
As shown in Fig. 1, several steps are required in the process of generating biased descrip-
tions.
Fig. 1. General architecture.
As said above, the system receives as input a “complete” list L of propositions de-
scribing an accident, inferred from a genuine report of our corpus, and a minimal list ML
intended as the shortest list from which a reader can get an idea of what happened.
A fact belongs to the minimal list if and only if
(i) it cannot be inferred from the rest of the reported facts,
(ii) the absence of this fact makes the sequence of the accident incomprehensible.
We are not interested in automating the construction of the minimal list (which might
be a very difficult task), we simply build it manually and use it as an input.
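Although we build the minimal list by hand, condition (i) is mechanically checkable. The following sketch (not the authors' code; rule and fact names are illustrative) tests whether a fact can be re-derived from the remaining facts by forward chaining over domain rules, in which case it need not be in ML:

```python
# Sketch of condition (i) for minimal-list membership: a fact is a
# candidate for omission from ML if it can be re-derived from the
# remaining facts. Rule and fact names below are illustrative.

def closure(facts, rules):
    """Forward chaining: each rule is a (premises, conclusion) pair."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def inferable_from_rest(fact, all_facts, rules):
    """True iff `fact` violates condition (i), i.e. the other facts imply it."""
    return fact in closure(set(all_facts) - {fact}, rules)

# Illustrative rule: leaving a private ground implies having no priority.
rules = [({"leaves_private_ground(A)"}, "no_priority(A)")]
facts = {"leaves_private_ground(A)", "no_priority(A)", "collision(A,B)"}

print(inferable_from_rest("no_priority(A)", facts, rules))   # True
print(inferable_from_rest("collision(A,B)", facts, rules))   # False
```

A fact for which this test returns True fails condition (i) and can safely be left out of ML, since the reader can reconstruct it.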
The output of the system is a biased description of the accident, given under the form
of an ordered list of literals, from which hopefully a NL generator will be in position to
write a textual description. The system consists of three modules (see Section 5.4 for
details of the implementation):
• The first module labels the input facts; this label determines the “path” they follow in
the subsequent chain of treatment.
• The second module is dedicated to the treatment of facts that do not support the
desired conclusion, but must somehow be present in the final description.
• The last module improves the quality of the output: elimination of redundant facts,
global balance of the arguments, and lexical refinements.
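The data flow between the three modules can be sketched as follows (a minimal sketch with placeholder module bodies; only the chaining follows the description above):

```python
# Hypothetical composition of the three modules; the stubs stand in for
# the rule-based modules described in the text.

def generate_biased_description(L, ML, module1, module2, module3):
    labeled = module1(L, ML)     # Module 1: list of labeled facts
    sayable = module2(labeled)   # Module 2: "list of what can be said"
    return module3(sayable)      # Module 3: improved, ordered description

# Illustrative stub modules:
demo = generate_biased_description(
    L=["f1", "f2"], ML=["f1"],
    module1=lambda L, ML: [(f, "B") for f in L],
    module2=lambda labeled: labeled,
    module3=lambda sayable: sayable,
)
print(demo)  # [('f1', 'B'), ('f2', 'B')]
```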
3.1. Argumentative label
The argumentative status is a label associated with a fact; it determines how the fact
will be presented in order to contribute positively to the final purpose of the argumentation. The
argumentative status is assigned for each element of the list L with respect to the desired
conclusion, which is in our study “to give the best possible image of the protagonist that
we decide to defend”. The argumentative status evolves throughout the “path” that a fact
follows in the chain of treatment.
Figure 2 shows the argumentative status used and its evolution.
Fig. 2. The argumentative labels.
(i) Facts favorable to the desired conclusion: This is the status of facts which are likely
to guide the reader's reasoning in a favorable way for the author (the protagonist
we defend). Facts are “good”, either because they are intrinsically considered as
positive (“to keep a safe distance from the vehicle ahead” is a good fact), or because
in a given context, they contribute in avoiding an accident (“to move back” is not
particularly good in general, but if it is done in order to facilitate the maneuver of a
truck, it will be considered as a good fact).
A fact is labeled “good” in two cases:
• It is a good fact, intrinsically or contextually, performed by the protagonist we
defend, which reflects respect for and compliance with standards of conduct.
• It is a bad fact (see below) performed by the adversary; by contrast, this fact may
stress the good behavior of the agent that we defend.
Facts labeled as good appear by default in the final description, because we expect
them to have a positive influence, but they can be omitted if they have no effect or if
their overabundance risks provoking the opposite effect (see Section 4.4 and the
validation of this hypothesis in Section 7).
(ii) Facts unfavorable for the desired conclusion: This is the status of facts that are
unfavorable for the protagonist that we defend. Facts are “bad”, either when they are
considered negatively (“not paying attention to the traffic lights” is intrinsically bad),
or because in a given context, they make the accident more likely, for example the
fact of driving, even “slowly”, while it would be better to stop.
Symmetrically to facts labeled “good”, facts are labeled “bad” in two cases: it is a
bad fact performed by the protagonist we defend, or it is a good fact performed by
the adversary.
The presence of these facts in the final description may be detrimental to the
purpose of argumentation. Two cases are considered, depending on the minimal
list ML.
(a) Bad facts to mention in the final description: If a fact does not support the
desired conclusion but belongs to ML, then it must appear, explicitly or
implicitly, in the final list. Three techniques are considered to minimize its
argumentative impact: insinuation, justification or appealing to what we call
“customary norms”. The first technique consists in evoking only implicitly the
undesired fact while the two others mention it explicitly but present it in an
appropriate context that tends to minimize its negative effect. We consider in
turn these three techniques in Section 4.
(b) Bad facts to omit: Silence is an argumentative strategy, and sometimes it
is the best one. Thus, if a fact does not support the desired conclusion and
does not belong to the minimal list, then it is better to say nothing about it,
unless the previous treatments (insinuation, justification or customary norms)
judge that its use in the final description would in fact be beneficial to the
argumentation.
The attribution of these argumentative statuses is done at the level of the first mo-
dule, by using a set of rules that we call “General rules”. As shown in Fig. 3, the output
of this module is a list of labeled facts, which will constitute the input of the second
module.
General rules are based both on the highway code and on the norms of common-sense
reasoning (some rules are displayed in Section 6). They operate in two steps:
(i) The first step consists in attributing, for each fact of the factual list, one of the two
argumentative statuses “good” or “bad”; the result is an intermediate list.
(ii) The second step consists in updating the intermediate list, on the basis of the
information provided by the minimal list. So in addition to the argumentative
statuses “good” and “bad”, we will also have “silence” and “insinuate”.
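The two steps above can be sketched as follows (a sketch with illustrative names, not the authors' General rules; the labels B, M, S, I follow Fig. 2):

```python
# Step 1 assigns "B" (good) or "M" (bad) w.r.t. the defended protagonist;
# step 2 refines "M" into "S" (silence) or "I" (insinuate) using ML.

def step1(L, is_good, defended, agent_of):
    """A fact is "B" iff it is a good act by the defended protagonist
    or a bad act by the adversary; otherwise it is "M"."""
    return {f: "B" if is_good(f) == (agent_of(f) == defended) else "M"
            for f in L}

def step2(intermediate, ML, has_insinuation_rule):
    """Omit a bad fact ("S") unless it is in ML; insinuate ("I") if possible."""
    out = {}
    for f, tag in intermediate.items():
        if tag == "M":
            if f not in ML:
                tag = "S"
            elif has_insinuation_rule(f):
                tag = "I"
        out[f] = tag
    return out

facts = ["kept_safe_distance(A)", "ignored_lights(A)", "signaled_turn(B)"]
inter = step1(
    facts,
    is_good=lambda f: f in {"kept_safe_distance(A)", "signaled_turn(B)"},
    defended="A",
    agent_of=lambda f: f[-2],   # the agent letter just before the ")"
)
labels = step2(inter, ML={"ignored_lights(A)"},
               has_insinuation_rule=lambda f: f == "ignored_lights(A)")
print(labels)
```

Note that a fact in ML with no applicable insinuation rule keeps the label "M" and is handed to the justification or customary-norm rules of the second module.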
Fig. 3. The labeling module.
4. The Argumentative Strategies
According to dictionaries, to argue consists in providing arguments for a thesis or against
it. A straightforward interpretation of this definition would be that a good argumentation
presents only facts that are favorable for the thesis defended and says nothing about the
rest. But unfortunately, some facts are essential for understanding the descriptions gener-
ated, even if they are “bad to say”. The difficulty is to present the evidence, while res-
pecting the main purpose of the argument. We have proposed three techniques to address
these facts; these techniques compose the core of the second module (see Fig. 4). The
three techniques are applied in the following order of priority: insinuation, justification,
and the use of customary norms. The priority is ensured by using semi-normal defaults
(see Section 5.3).
Fig. 4. Module treating bad facts.
We have given the strategy of insinuation the highest priority, because it is always
preferable to let the reader infer the implied fact, rather than to tell it explicitly: the risk
being to antagonize the reader by expressing overtly unfavorable facts. We do not have a
rule of insinuation at our disposal for every scenario in the corpus. In the absence of
such rules in a given case, we use the second strategy: the justification. This technique is
widely used in practice; we often try to combine the facts with others to make them more
acceptable. The third technique uses the customary norms rules.
4.1. Insinuation
As we have already said, we are sometimes forced to tell some facts that are unfavorable
for us, and this may seriously harm the purpose of our argumentation. This is the case of
bad facts belonging to the minimal list. The strategy consists in getting these facts across
not by uttering them outright, but in a flexible and attenuated manner.
Basically, the insinuation of a fact f1 consists in replacing f1 by another fact f2 present
in the factual description L, so that f1 can be inferred from f2, and f2 has a less negative
impact on the reader than f1. The relation “f1 can be inferred from f2” may be either the
result of an inference rule based on the domain knowledge, or due to the flexibility of
NL. NL pragmatics has studied presuppositions and implicatures, and it is often a good
argumentative strategy to use a proposition f2 that presupposes f1, rather than uttering f1
itself.15, 6
Practically this can be done by exploiting:
• A strict implication “f2 → f1”: The implication may be interpreted in different ways: it
can express duties derived from the regulations, or it can express consequences or
causes due to specific positions on the road. As appropriate, we may use an abductive
or deductive reasoning to activate these rules.
For example, if factually A has no priority over B because A is leaving his home and
joins the traffic on the road, rather than “A had no priority”, it is better to write “A was
leaving home”: although everyone knows that one has no priority when coming from a
private ground, it sounds much less negative.
• A default like “if f2 then generally f1”: We do not have all the information about the accident
and our reasoning rules cannot be limited to logical implications, because we need to
handle exceptions that are inherent to situations encountered in everyday life. We
therefore use defaults, which are tools allowing us to reason in the presence of exceptions
and/or when certain information is not available. The possibility of inference in the
absence of certain information characterizes human common-sense reasoning. We
exploit this flexibility to trigger the inference of implied facts, whose validity is only
assumed and is not made explicit.
For example, if the adversary B had signaled that he was turning, expressing this fact
would show B’s respect of the regulations; saying only that B intended to turn is
equivalent (how should A know what B intends, if B does not signal it?) but puts no
emphasis on his respect of the law. We can achieve this trick if we have at our disposal
a default such as “if a driver intends to turn, generally he signals it”.
• Equivalences “f2 ↔ f1”: Even in case of logical equivalence between two propositions,
a difference may exist between them, in terms of their ability to trigger other
inferences. For example, if C is a possible consequence of A, despite the equivalence
between A and B, in some cases C is more easily accessible from A than from B. We
use this logical equivalence in addition to the linguistic negation, which plays a
significant role in the argumentative process and may have different effects. In some
situations it is preferable to use A instead of ¬¬A, and vice versa.
For example, it is better to say “I could not avoid it” instead of saying “I hit it”. Even
if the information content conveyed by these two sentences is roughly the same, the
impact is rather different with regard to the objective of the argumentation.
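The replacement at the heart of insinuation can be sketched as follows (rule base and fact names are illustrative, not the authors' rules; each rule (f2, f1) reads "f2 lets the reader infer f1"):

```python
# Sketch of the insinuation step: the damaging fact f1 is replaced by a
# milder fact f2 of the factual list L from which f1 can be inferred.

INSINUATION_RULES = [
    # strict implication: coming from a private ground => no priority
    ("leaves_home(A)", "no_priority(A)"),
    # default: a driver who intends to turn generally signals it
    ("intends_to_turn(B)", "signals_turn(B)"),
]

def insinuate(f1, L, rules=INSINUATION_RULES):
    """Return a substitute f2 in L from which f1 can be inferred, or None."""
    for f2, conclusion in rules:
        if conclusion == f1 and f2 in L:
            return f2
    return None

L = {"leaves_home(A)", "no_priority(A)", "collision(A,B)"}
print(insinuate("no_priority(A)", L))   # leaves_home(A)
print(insinuate("collision(A,B)", L))   # None
```

When `insinuate` returns None, no insinuation rule applies and the fact falls through to the justification strategy.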
4.2. Justification
Justification is the action that attempts to legitimize something (an action or failure to
perform an action) by giving an explanation or a valid motive. The justification of a fact
f1 using another fact f2 is applied when we find no way to insinuate f1, even though this
fact is unfavorable for the desired conclusion and belongs to the minimal list ML. The justifica-
tion consists in choosing among the factual list L other fact(s) f2 which, added to f1,
give(s) a better impression than f1 mentioned alone. f1 and f2 must be related and the rela-
tionship between them may be causal, consequential or explanatory.
Causal knowledge is often used during the argumentative process and can be ex-
ploited in the justification task. We can distinguish between argumentative processes that
ascribe a causal relationship,1 and those that exploit causal relationships, like arguing
from the cause or from the consequences. In our case, we can try to justify an accident
by referring to one of its causes, which exempts the person we defend from his responsi-
bilities. For example, in a passage of text B69 from our corpus, “My stop wasn’t quite
rigorous (road wet)”, the author suggests that he is not responsible for the rain, which
lengthens the braking distances, so he presents it as a mitigating circumstance, if not as
the cause of his “not quite rigorous” stop ... not to mention that the road conditions were
perfectly known to him and that he failed to take them into account.
These relationships between f1 and f2, can also express intentions or beliefs. This in-
formation is very helpful for the final step of the process, as the type of the relation be-
tween facts determines the choice of the syntactic elements to present them in the best
way to obtain the desired effect on the reader.
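The pairing step can be sketched as follows (the relation base is illustrative, not the authors' data; the relation type is kept because it guides the later syntactic realization):

```python
# Sketch of the justification step: a bad fact f1 that could not be
# insinuated is paired with a related fact f2 of the factual list L.

RELATIONS = [
    # cf. text B69: the wet road is presented as the cause of the bad stop
    ("wet_road", "imperfect_stop", "cause"),
]

def justify(f1, L, relations=RELATIONS):
    """Return (f2, relation) so that mentioning f2 with f1 softens f1, or None."""
    for f2, effect, relation in relations:
        if effect == f1 and f2 in L:
            return (f2, relation)
    return None

L = {"wet_road", "imperfect_stop"}
print(justify("imperfect_stop", L))   # ('wet_road', 'cause')
print(justify("wet_road", L))         # None
```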
4.3. Reference to customary norms
To argue, we often appeal to “customary norms”. A customary norm (also called an in-
formal rule) is a general practice that well-behaved drivers accept as regulating their
conduct like a law. These norms are not part of the driving regulations, but are widely
recognized as legitimate. Our corpus study shows that this legitimacy is often invoked as
an excuse to justify the violation of a true regulation.c The idea is to try to justify one’s faulty behavior f1
by the fact that the adversary did not respect a customary norm f2. Contrary to the justifi-
cations discussed in the previous section, there is no real connection between f1 and f2,
as the adversary’s non-compliance with a customary norm does not create any right. More-
c Socio-psychological studies4 show how the behavior of drivers at intersections depends on parameters such as
age, sex, occupation, and is determined by informal rules when they drive. For more details about driver
behaviors see [http://www.ictct.org].
over, a few texts of our corpus exhibit a strategy consisting in insinuating a customary
norm which is not even part of the set of driving behaviors generally accepted, just to
argue that the adversary did not respect the alleged norm, thereby making the author’s
behavior appear legitimate!
An example of customary norm is found in a passage of text B53 from our corpus: At
the traffic lights, I was stopped behind a car. The light passing green, I moved off, but the
driver of the vehicle ahead of me did not start. So I hit the back of the car.
The adversary’s non-compliance with the law-like norm “when the light turns green,
start the vehicle” is used as an argument to justify the violation of a strong norm: “one
must ensure that one’s distance from the vehicle ahead is appropriate before moving off”.
In our implementation, customary norms get a special label N, and we do not plan to
implement the strategy consisting in forging pretended norms just for the sake of an ar-
gumentative need. As said above, the type of relation between facts provides guidelines
for the step of generation. The use of a customary norm is typically expressed by the syn-
tactic phenomenon of concession, which consists in coupling two opposite elements in
order to highlight one of them. A large number of connectors can be used to link the facts
involved in a customary-norm argument (e.g. but, while, ...).
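The concessive realization can be sketched as follows (sentence strings and the helper are illustrative; the connector choice follows the discussion above):

```python
# Sketch of the concessive coupling used with customary norms: the
# adversary's breach of the norm (f2) is coupled with the author's
# faulty fact (f1) through a concessive connector.

def concession(f1, f2, connector="but"):
    """Couple two opposite elements, putting the emphasis on f2."""
    return f"{f2}, {connector} {f1}"

# Modeled loosely on text B53:
s = concession("I had already moved off",
               "the vehicle ahead did not start when the light turned green")
print(s)
```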
4.4. Improvements
The result of the first two modules of our system is a list of facts that can appear in the
biased description. The third module, shown in Fig. 5, aims at improving the output
through a number of operations.
The first one is a filtering operation, which selects what information to keep in order
to avoid both an excess and a lack of information which may not be favorable for our
argumentation. Filtering rules have been implemented to make these choices. Once the
resulting list is established, the next step is to introduce connectors between the remain-
ing facts to improve the coherence of the presentation.
Fig. 5. Module of improvement.
We take advantage of the argumentative guidance provided by these connectors. The
final order in which elements of the description are placed is one of the important factors
for the construction of a biased description. We found, by analyzing the texts of our cor-
pus, that authors use different scheduling strategies, depending on whether they con-
sider themselves at fault or not (their aim being to keep their mistakes as much as
possible in the background). Often the mere juxtaposition in chronological order is
sufficient to create one or more relations between two propositions: addition, opposition
and sometimes cause/consequence. One strategy that seems useful when the author is at
fault consists in beginning by expressing the intention behind the actions undertaken:
this may immediately create a positive impression on the reader. Then comes the time to present the
description of the mistake; finally, the conclusion describes the accident as having mi-
nimal consequences and implies that the damages would have been much more severe if
other decisions had been taken. Other factors contribute to the optimal order; the rules
used in the previous modules may impose some special sequences: premises-conclusions
for example in the case of rules of justification. Once we have the plan (structure or
scheme), it only remains to relate the facts in the order chosen (we recall that our short-
term objective is not the generation of texts, but to give enough directions to a text gene-
rator for it to output a text).
This last step is coupled with the task of lexical refinement: it is well known that any
argumentation requires a careful choice of words, since words can evoke concepts that
highlight the viewpoint of the author. For example, depending on whom we wish to
defend, we may select one member of a set of verbs expressing the same reality, e.g. a
collision between two cars: one verb can orient the reader towards the idea that the
collision was after all not really hard (e.g. I touched the other car), while another has the
opposite orientation (he smashed into my car).
5. Representation Language
5.1. Reification
We use a reified first-order language. According to the reification technique, a predicate,
say P(x; y; z), expressing the fact that a property P applies to three arguments x, y and z,
will be written true(P; x; y; z): the name P of the predicate becomes an argument of the
new predicate true.
This technique allows quantifying over predicate names. However, a limitation of
reification is that it imposes a fixed arity on the predicate true. To cope with this problem,
we introduce a binary function combine, which constructs an object representing the
combination of its two arguments. A property with four arguments, Q(x; y; z; t), is thus
written: true(combine(Q; x); y; z; t).
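To make the encoding concrete, here is a minimal Python sketch (ours, not the authors' implementation) of reification with combine; the function and variable names are illustrative only:

```python
# Hypothetical sketch of reification: a predicate name becomes an
# argument of a generic "true" structure, and combine() packs extra
# arguments so that "true" keeps a fixed arity of four.

def combine(p, x):
    """Pack a predicate name with one argument into a single object."""
    return ("combine", p, x)

def true(p, x, y, z):
    """Reified fact: the predicate name p is itself an argument."""
    return ("true", p, x, y, z)

# P(x, y, z) is written true(P, x, y, z):
fact3 = true("P", "x", "y", "z")

# Q(x, y, z, t) exceeds the fixed arity, so the first two arguments
# are packed together: true(combine(Q, x), y, z, t).
fact4 = true(combine("Q", "x"), "y", "z", "t")
```

Nesting combine further would accommodate predicates of any arity while keeping a single fixed-arity true predicate.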
Another drawback of the technique of reification is that it requires the redefinition of
ad hoc axioms on reified propositions expressing the negation, the conjunction and the
disjunction of properties: these axioms are built into classical first-order logic but should
in principle be added here. Fortunately, in practice, we do not need to redefine all of
them. It turns out that the only axiom we really need in reasoning is the one concerning
the negation of a property. For this reason, we introduce a function not and we have
the axiom:
∀P true(X; not(P); A; T) ↔ ¬true(X; P; A; T)
In our system, the four arguments of the predicate true(X; P; A; T) are respectively:
• X: the argumentative label of the fact represented by the other arguments. X can take
the following values:
• E, the fact is effectively true. This is the initial label of all facts given as input.
• B, the fact is favorable for the desired conclusion.
• M, the fact is unfavorable for the desired conclusion.
• S, it is preferable to stay silent about the fact.
• Mi, the fact belongs to the minimal list ML.
• I, it is preferable to insinuate the fact.
• J, the fact should be justified.
• N, a customary norm should be appealed to in order to present the fact.
• D, the fact appears in the final description.
• IJ, a value used to ensure priority in the execution of insinuation rules.
• JN, a value used to ensure priority in the execution of justification rules.
• P is a simple property (name of a predicate) or a complex one (result of the function
combine); it can designate an action, an event or an effect.
• A is the agent concerned by the property P.
• T is a temporal parameter.
The temporal aspect is crucial to our reasoning, as we need a representation reflecting
the order in which the events actually happened. A linear and discrete representation of
time is sufficient for our needs. Therefore T is an integer.
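The four-argument true predicate can be sketched as a simple data structure (a hypothetical Python rendering, not part of the authors' system):

```python
# Hypothetical sketch (not the authors' code): a reified fact
# true(X, P, A, T) as a named tuple, where X is the argumentative
# label, P the property, A the agent, and T an integer time point.
from collections import namedtuple

Fact = namedtuple("Fact", ["label", "prop", "agent", "time"])

# The eleven argumentative labels listed above:
LABELS = {"E", "B", "M", "S", "Mi", "I", "J", "N", "D", "IJ", "JN"}

def make_fact(label, prop, agent, time):
    # Time is linear and discrete, hence an integer.
    assert label in LABELS and isinstance(time, int)
    return Fact(label, prop, agent, time)

# All input facts start with label E ("effectively true"):
f = make_fact("E", "suitable_lane", "person", 1)
```
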
5.2. Modalities
An argumentation draws its strength from the modalities that modify the impact
of the bare propositions. The subjective nature of some of them plays a central role in
the argumentative process. Reification allows representing modalities as first-order
predicates. Indeed, to attach a modality Mod to a proposition P(x; y), we just write
Mod(X; P; x; y), where X is, as above, the argumentative label expressing the status (good
or bad to say, or to insinuate) of the modality. So true is one of the modalities; the others
are:
• Duty(X; P; A; T): at time T, agent A has the duty of making property P true.
• Ability(X; P; A; T): at time T, agent A has the ability of making property P true.
• Intent(X; P; A; T): at time T, agent A has the intention of making property P true.
• Belief(X; P; A; T): at time T, agent A believes that property P is true. For example,
Belief(B; combine(avoid; person); adv; 2) means that we have judged it good to say that,
at time 2, the other protagonist (adv) believed that he would avoid the protagonist we
defend (person).
Two further predicates are useful:
• Qualification, which has the same arguments as true: Qualification(X; combine(P; Q);
A; T) expresses a qualification Q of the property P which is "argumentatively" labeled X
for an agent A at a time T.
• Connection, which links two propositions and labels their relationship. We write
Connection(X; P1; A1; T1; P2; A2; T2; Type_rel), where P1 and P2 are properties, A1
and A2 are agents, T1 and T2 are time states, and Type_rel is an argument expressing the
type of relationship between the two propositions P1 and P2: goal, intent, cause, ...
5.3. Nonmonotonicity
Clearly, an argumentative strategy can be judged appropriate only because of the absence
of some facts that, if present, could have been taken as a basis for a better way of
presenting a case. Argumentation thus belongs to the family of nonmonotonic reasoning.6
Among the large number of formalisms designed to handle non-monotonicity, we have
selected Reiter's semi-normal defaults.23,24 The main reasons are that a default "theory"
can have multiple "extensions", which is adequate for NL, where a given text can accept
several readings; and that the lack of semi-monotonicity corresponds to the possibility of
ranking the default knowledge: "strong" defaults can override "weak" ones.
The price to pay for the possibility of having priorities among defaults is the loss of
the guarantee that every theory has at least one extension. But it is well known in the
literature (see e.g. Ref. 9) that only "pathological" semi-normal theories lack extensions.
Two kinds of inference rules are considered:
• the strict ones, represented as material implications of the form A → B;
• the defeasible ones, represented by Reiter's normal and semi-normal defaults:
• normal defaults, of the form (A : B) / B, abbreviated by writing A : B;
• semi-normal ones, of the form (A : B ∧ C) / B, abbreviated by writing A : B[C].
As we have said, we have opted for semi-normal defaults, because they allow us to
ensure priority between rules. For example, consider the following set of defaults D:

D1 = A : B    D2 = C : ¬B

The default theory ∆ = ⟨D, {A, C}⟩ has two extensions, the deductive closures of
E1 = {A, C, B} and E2 = {A, C, ¬B}. Both contain A and C; one contains B (which is the
consequent of D1 and blocks D2) and the other contains ¬B (which is the consequent of
D2 and blocks D1).
If we want to impose an order of priority between the two defaults, such as: if D1 is
applicable, D2 should not be applied, we must change the structure of the default D2:

D1 = A : B    D2 = C : ¬B[¬A]
By adding ¬A to the justification of D2, we are sure that whenever D1 is applied,
meaning that A is true, D2 is blocked because its justification (¬B ∧ ¬A) does not hold.
On the other hand, if we have no information about A, D2 can be applied provided the
other parts of its condition are verified.
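The blocking mechanism can be illustrated with a small Python sketch (hypothetical, with the strings "A", "B", "C" standing for the literals of the example above; it merely tests applicability conditions, it is not a default-logic prover):

```python
# Hypothetical sketch of priority between defaults via semi-normal
# justifications. "not_B" is a string stand-in for the literal ¬B.

def d1_fires(known):
    # D1 = A : B (normal): prerequisite A, justification B,
    # i.e. B must be consistent with what is known.
    return "A" in known and "not_B" not in known

def d2_fires(known):
    # D2 = C : ¬B[¬A] (semi-normal): prerequisite C, justification
    # ¬B AND ¬A, so D2 is blocked as soon as A (or B) is known.
    return "C" in known and "B" not in known and "A" not in known

# With both A and C known, D1 fires and D2 is blocked by ¬A:
known = {"A", "C"}
assert d1_fires(known) and not d2_fires(known)

# With only C known (no information about A), D2 can be applied:
assert d2_fires({"C"})
```
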
5.4. Implementation
To implement our system, we opted for the "Answer Set Programming" (ASP)
paradigm.11 ASP uses negation as failure to deal with defaults, thereby achieving a form
of nonmonotonic reasoning. It also expresses exceptions and restrictions, and represents
incomplete knowledge. Several tools have been developed for ASP. We use in our
application the tools Lparse(d) and Smodels,25 into which we have translated our
inference rules.(e) Smodels computes the so-called "stable models", i.e. the sets of
literals belonging to an extension of the corresponding default theory. To give an idea of
how to translate default logic into Smodels, we consider the following simple cases,
where A, B, C are reified first-order literals.
• A material implication A → B is translated into the rule B :- A.
• A normal default A : B is translated into B :- A, not -B.
• A semi-normal default A : B[C] is translated into B :- A, not -B, not -C.
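This translation scheme is mechanical enough to sketch in a few lines of Python (a hypothetical helper of ours, not the authors' toolchain; "not" is negation as failure, "-" classical negation):

```python
# Hypothetical translator from the three rule shapes above into
# Smodels-style rule text.

def implication(a, b):
    # A -> B  ==>  B :- A.
    return f"{b} :- {a}."

def normal_default(a, b):
    # A : B  ==>  B :- A, not -B.
    return f"{b} :- {a}, not -{b}."

def semi_normal_default(a, b, c):
    # A : B[C]  ==>  B :- A, not -B, not -C.
    return f"{b} :- {a}, not -{b}, not -{c}."

assert implication("a", "b") == "b :- a."
assert normal_default("a", "b") == "b :- a, not -b."
assert semi_normal_default("a", "b", "c") == "b :- a, not -b, not -c."
```
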
The number of inference rules currently in use is about 230: 132 of them are general
rules, 19 insinuation rules, 56 justification rules, 5 customary-norm rules and 17 writing
rules. We have made preliminary tests of our system on some factual descriptions, and
we find our first results rather encouraging (see Section 6 below).
5.5. Semantic classes
In order to improve the generality of our rules, we have introduced the notion of
semantic classes into the language. Elements of a given class have a semantic feature in
common. We define basic semantic classes as sets of concepts that share the same core
and differ only by non-essential features, for example the classes Shock_Action and
Roll_Action. Basic classes are grouped into meta-classes; elements of a given meta-class
share a semantic feature which is more abstract than the one common to the elements of
basic classes. For example, the meta-class Is_Moving contains the basic classes
Shock_Action, Roll_Action, Turn_Action and others. Using basic classes or meta-classes
depends on the generality or specificity of the rule.
(d) SMODELS and LPARSE are available on the Web at http://www.tcs.hut.fi/Software/smodels/
(e) There is an easy translation between the fragment of default logic we used in our system and Answer Set
Programming; for more details see Ref. 19.
We present here some classes that appear in the rules used to treat the example
developed in Section 6.
• P1 is a class of properties that are favorable for their agent. A property belongs to the
class P1 if it corresponds to an action, an event or a state which conforms to the highway
code, for example "driving on the right" (in right-driving countries).
• P2 is a class of properties that are unfavorable for their agent, for example "entering
the wrong way".
• P is a class including the two previous classes, in addition to some properties that can
be judged as good or bad only with respect to the context. For example, overtaking a car
depends on the conditions under which this action takes place.
• Agent is a class whose elements designate the agents involved in the accident (by
convention, the agent that we defend is 'person' and his adversary 'adv').
• Obstacle is a class including all elements that can represent an obstacle for an agent,
for example person, adv, tree, dog, ...
• Inconsistent is a timeless predicate with two parameters of type property.
Inconsistent(P, P') means that the two properties P and P' are incompatible with each
other (they cannot be simultaneously true), for example inconsistent(stop, move). We
have another predicate consistent expressing the fact that two properties are compatible,
for example consistent(move, combine(signal, turn)).
• Shock_Action is a class that includes all actions designating a shock, for example:
collide with, hit, knock, jostle, run into, touch ... Each of these verbs expresses a shock,
but some verbs connote this action with more violence than others, so they are preferred
candidates when we want to express a shock caused by the adversary.
• Turn_Action is a class that includes all actions which express the nuances of the
turning action, for example: turn, deflect, change direction, swing ...
• Roll_Action is a class that includes all actions that express the fact that a vehicle is
moving, for example: run, move, drive, roll, pilot, steer ...
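A hypothetical Python sketch of this class hierarchy (class contents taken from the examples in the text; the authors' actual encoding is in ASP, and Is_Moving contains further basic classes not listed here):

```python
# Hypothetical sketch: basic classes as sets of actions, meta-classes
# as sets of basic-class names.

BASIC_CLASSES = {
    "Shock_Action": {"collide_with", "hit", "knock", "jostle",
                     "run_into", "touch"},
    "Turn_Action": {"turn", "deflect", "change_direction", "swing"},
    "Roll_Action": {"run", "move", "drive", "roll", "pilot", "steer"},
}

META_CLASSES = {
    "Is_Moving": {"Shock_Action", "Turn_Action", "Roll_Action"},
}

def in_meta_class(action, meta):
    """An action belongs to a meta-class if it belongs to one of the
    basic classes grouped under it."""
    return any(action in BASIC_CLASSES[b] for b in META_CLASSES[meta])

assert in_meta_class("hit", "Is_Moving")
assert not in_meta_class("stop", "Is_Moving")
```

A rule then tests membership in a basic class when it is specific, and in a meta-class when it is general.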
6. A Couple of Examples
Inspired by genuine reports of our corpus, we consider two descriptions of car accidents.
6.1. Example 1
The first report presents the case of a driver leaving her residence to join the traffic in the
street. She does not check whether someone is coming. Actually, a car is coming fast,
and when she realizes it, she has no other choice than to turn the wheels quickly; this
results in her hitting a boundary stone of the sidewalk.
We have at our disposal a parser that translates reports written in French into facts of
our representation language.14 However, as the original reports are already
argumentatively loaded, and the input of our system has to be neutral, the above story has
been manually converted into a set of 31 propositions. Each of them gets the label E
(effectively true facts). To avoid duplication, this set will be presented after the
presentation of the labeling process.
The first step consists in attaching a B or an Mf label to the facts that have an
argumentative impact; this is done by using general rules, for instance:

P1(P) ∧ true(E; P; person; T) → true(B; P; person; T) (R1)
P2(P) ∧ true(E; P; person; T) → true(M; P; person; T) (R2)

[if the fact that person has a property P included in the class P1 (good properties) is
present in the list L (label E), this fact gets the label B; symmetrically, if P is a bad
property, i.e. included in the class P2, the fact gets the label M].
true(B; P; person; T) : true(M; P; adv; T) (R3)
true(M; P; person; T) : true(B; P; adv; T) (R4)
[by default, whatever is good (respectively bad) for the person is bad (good) for her
adversary (adv)].
duty(E; P; person; T) ∧ true(E; P'; person; T+1) ∧ inconsistent(P; P')
→ duty(M; P; person; T) ∧ true(M; P'; person; T+1) (R5)
[if the person had the duty to do something P and it turns out that at the next time, a
property P’ inconsistent with that duty holds, then it is bad to mention both the duty and
the fact that P’ holds].
But the labeling can be more contextual: e.g. not putting on one's indicator is good if one
does not intend to turn; otherwise it is bad, hence the following rule:
true(E; not(combine(signal; turn action)); person; T)
∧ true(E; turn action; person; T + 1)
→ true(M; not(combine(signal; turn action)); person; T) (R6)
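As a rough illustration of the basic labeling step, here is a hypothetical Python sketch of rules R1/R2 (the contents of P1 and P2 are illustrative; the real system is in ASP and also labels the adversary's facts via R3/R4):

```python
# Hypothetical sketch of rules R1/R2: facts about 'person' labeled E
# get relabeled B or M depending on whether their property belongs
# to the "good" class P1 or the "bad" class P2.

P1 = {"suitable_lane", "leave_home"}        # favorable properties
P2 = {"not_check_lane", "turn_wheel"}       # unfavorable properties

def label(fact):
    x, prop, agent, t = fact
    if x == "E" and agent == "person":
        if prop in P1:
            return ("B", prop, agent, t)
        if prop in P2:
            return ("M", prop, agent, t)
    return fact  # adv's facts are handled by other rules (R3/R4)

assert label(("E", "suitable_lane", "person", 1))[0] == "B"
assert label(("E", "not_check_lane", "person", 2))[0] == "M"
```
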
By means of rules of this kind, the 31 propositions get a label, as follows:
(1) true(M; suitable_lane; adv; 1)
(2) true(B; is_moving; adv; 1)
(3) true(B; suitable_lane; person; 1)
(4) true(B; is_moving; person; 1)
(5) true(B; combine(leave; home); person; 1)
(6) true(M; not(combine(see; adv)); person; 1)
(7) true(M; combine(has_priority; person); adv; 1)
(8) duty(M; stop; person; 1)
(9) intention(B; combine(join; street); person; 2)
(10) true(M; suitable_lane; adv; 2)
f B and M are the initials of the French phrases "Bon / Mauvais à dire" (good / bad to say).
(11) true(B; is_moving; adv; 2)
(12) qualification(B; combine(is_moving; fast); adv; 2)
(13) true(B; suitable_lane; person; 2)
(14) true(M; is_moving; person; 2)
(15) true(M; not(combine(check; lane)); person; 2)
(16) duty(M; combine(suitable_distance; stone); person; 2)
(17) duty(M; stop; person; 2)
(18) intention(B; combine(avoid; adv); person; 3)
(19) true(M; turn_wheel; person; 3)
(20) true(M; suitable_lane; adv; 3)
(21) true(B; is_moving; adv; 3)
(22) true(B; combine(avoid; adv); person; 3)
(23) true(M; not(suitable_lane); person; 3)
(24) true(M; not(combine(suitable_distance; stone)); person; 3)
(25) true(M; not(combine(see; stone)); person; 3)
(26) true(M; not(stop); person; 3)
(27) duty(M; not(combine(hit; stone)); person; 3)
(28) true(M; suitable_lane; adv; 4)
(29) true(B; is_moving; adv; 4)
(30) true(M; combine(hit; stone); person; 4)
(31) true(B; combine(is_along; stone); sidewalk; 4)
The next step consists in selecting, among the "bad" propositions, what can be left
unsaid and what must appear in one form or another. The minimal list ML below
amounts to stating that the person left her residence without checking that the track was
clear, that someone was actually coming, and that the person swerved and hit a stone.
(5) true(Mi; combine(leave; home) ; person; 1)
(11) true(Mi; is_moving; adv; 2)
(15) true(Mi; not(combine(check; lane)); person; 2)
(19) true(Mi; turn_wheel; person; 3)
(30) true(Mi; combine(hit; stone); person; 4)
The basic rules are:
true(M; P; A; T) ∧ true(Mi; P; A; T) → true(I; P; A; T) (R7)
[if a fact is labeled bad (M) but belongs to the minimal list (Mi), then it gets the label
I, i.e. the fact is a candidate for being insinuated.]
The other "bad" propositions can generally be left unsaid. This is captured by the
default:
true(M; P; A; T) : true(S; P; A; T) [ ¬true(Mi; P; A; T); ¬true(D; P; A; T)] (R8)
[if a fact is labeled bad, its default status is silence (S), but the default is blocked
either if the fact is a member of the minimal list or if by some technique (e.g. insinuation),
it gets the label D].
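Taken together, rules R7 and R8 amount to a simple partition of the bad facts, which can be sketched as follows (hypothetical Python, not the ASP encoding; fact names are illustrative):

```python
# Hypothetical sketch of R7/R8: a bad fact (label M) that belongs to
# the minimal list becomes a candidate for insinuation (I); the other
# bad facts default to silence (S).

def select(bad_facts, minimal_list):
    return {f: ("I" if f in minimal_list else "S") for f in bad_facts}

bad = {"not_check_lane", "turn_wheel", "hit_stone", "no_priority"}
minimal = {"not_check_lane", "turn_wheel", "hit_stone"}

labels = select(bad, minimal)
assert labels["no_priority"] == "S"   # can be left unsaid
assert labels["hit_stone"] == "I"     # must appear, so insinuate
```

In the real system the default S can further be blocked when another technique (e.g. insinuation) gives the fact the label D.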
Therefore, among the 18 facts labeled M, only the facts no. 15, 19 and 30 receive the
label I; if we hid them, the whole story would become incomprehensible. But how should
we best present them? Consider first the fact no. 15: the person did not check that the
lane was free. A consequence is that she did not see her adversary adv in due time. But
there could be many other reasons for her not seeing adv, and it is by far better to leave
the premise implicit and to express only the consequence, leaving the reader to guess for
what reason she was unable to see adv. The rule is:
true(I; not(combine(check; lane)); person; T) :
true(S; not(combine(check; lane)); person; T) ∧ ¬ability(D; combine(see; adv);
person; T) ∧ true(IJ; not(combine(check; lane)); person; T)
[true(B; combine(see; adv); person; T)] (R9)
[if you need to insinuate that you did not check, by default stay silent about it and
claim that you were unable to see your adversary adv; the default is blocked if it can be
proven that it is good to mention the fact that person has actually seen adv. We use here
the converse of: "if the person had checked the lane, she could have seen adv in time"].
As said above, the strategy of insinuation has priority over justification. This priority
is implemented through the generation, at every execution of an insinuation rule, of a
new literal with the argumentative label IJ. This label is checked by the justification
rules: when present, it means that the fact has already been insinuated and no longer
needs justification. The same principle is used with the rules concerning customary
norms, this time by using the label JN.
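The IJ bookkeeping can be sketched as follows (a hypothetical Python rendering of the mechanism, not the ASP rules; the same pattern would apply to JN for customary norms):

```python
# Hypothetical sketch of the IJ priority marker: each fired
# insinuation rule records an IJ marker on the fact; justification
# rules skip any fact that carries it.

def insinuate(fact, store):
    """Record that an insinuation rule fired for this fact."""
    store.setdefault(fact, set()).add("IJ")

def needs_justification(fact, store):
    """A justification rule applies only if the fact was not
    already insinuated."""
    return "IJ" not in store.get(fact, set())

store = {}
insinuate("not_check_lane", store)
assert not needs_justification("not_check_lane", store)  # skipped
assert needs_justification("turn_wheel", store)          # still open
```
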
For the fact no. 19, we have no insinuation rule, so we look for a justification; the idea
being that when an action labeled I was performed to avoid a serious inconvenience, it
can be justified by the intention to avoid that inconvenience.
turn_action(At) ∧ obstacle(O) ∧ true(I; At; person; T) ∧
intention(B; combine(avoid; O); person; T) : true(S; At; person; T) ∧
intent(S; combine(avoid; O); person; T) ∧ connection(B; At; person;
combine(avoid; O); person; T; goal)[¬ true(IJ; At; person; T)] (R10)
Finally, for the fact no. 30: if a fact labeled I is an undesirable effect of an action
performed for good reasons, we present it as an unavoidable consequence of a good
choice. By using the principle of equivalence, we opt for a different presentation, which
makes things look better:
turn_action(At) ∧ shock_action(Sa) ∧ obstacle(O) ∧ connection(B; At; person;
combine(avoid; O); person; T; goal) ∧ true(B; combine(avoid; adv);
person; T): true(S; combine(avoid; adv); person; T) ∧ true(D;
not(combine(Sa; adv)); person; T) (R11)
Using the principle of equivalence between A and ¬¬A, "avoid" is replaced in this
rule by "do not hit". We prefer ¬¬A because A is already used in the body of the
connection rule, and also because we want to evoke the possibility that ¬A could have
occurred.
true(B; Ra; Agent; T) ∧ qualification(B; combine(Ra; V);
Agent; T) ∧ roll_action(Ra) ∧ qualif_speed(V) → true(S; Ra; Agent; T) (R12)
This rule is used to eliminate a redundancy: the fact that the vehicle of adv is moving
is already included in the information given by the qualification of its speed, which we
want to highlight.
As said above, good facts appear by default in the final description, but they can be
omitted if they provoke a negative effect. The basic rule is:

true(B; P; A; T) : true(D; P; A; T) [¬true(S; P; A; T)] (R13)

[if a fact is labeled good, it gets the label D to be included in the final description,
except if the default is blocked because this fact has received the status S (e.g. to avoid
redundancy, or because of a reformulation)].
Before giving the output of our system, we show an example of the translation of one
of our default rules into the syntax of Smodels:
shock_action(Sa) ∧ obstacle(O) ∧ true(M; combine(Sa; O); Person; T) :
true(S; combine(Sa; O); Person; T) [¬true(Mi; combine(Sa; O); Person; T)] (R14)
Translation:
true(S, combine(Sa, O), person, T) :- time(T), obstacle(O), shock_action(Sa),
true(M, combine(Sa, O), person, T), not true(Mi, combine(Sa, O), person, T),
not -true(S, combine(Sa, O), person, T). (R15)
Inferring that it is better not to mention the fact that the person satisfies the property
combine(Sa, O) at T consists in proving that Sa has the semantics of a shock action, i.e.
that it belongs to the class shock_action, that O is an obstacle, that T is a moment, that
this property is bad to say, that there is no proof that this fact belongs to the minimal list
[not true(Mi, ...)] and that there is no proof that we should not stay silent about it
[not -true(S, ...)].
Finally, we obtain a list of 9 propositions; put into English, they would read:
• The other car was moving
• I was leaving home
• The other car was driving too fast
• I was unable to see the other car
• I turned the wheels to avoid the other car
• I did not hit the other car
• The other car was moving
• I was unable to avoid the stone
• The stone was along the sidewalk.
This is still far from a finished text, but hopefully an NL generator might take it as
input to yield a more palatable report. Let us now show what would result from taking
exactly the same input (the 31 propositions plus the minimal list) and adopting the
viewpoint of adv. We skip the intermediary steps and give only the output:
• I was moving
• I was on the appropriate lane
• The other car was moving
• The other car had no priority over me
• The other car had the duty to check the lane {but} The other car did not check the lane
• The other car was leaving home
• The other car had the duty to stop {but} The other car did not stop
• The other car had the intention to join the lane
• The other car turned the wheel
• The other car was not on the appropriate lane
• The other car had the duty to avoid the stone {but} The other car hit the stone
• The stone was along the sidewalk
Here, not only should the style be improved by an NL generator, but the
overabundance of arguments against the other car could prove harmful. So a filtering
stage will be added to keep, among the facts labeled B, those which are not deducible
from the others and which contribute significantly to the defense of the agent.
6.2. Example 2
We now consider another example from our corpus: the case of a driver moving off at
the green light and hitting the back of the vehicle ahead of him. He barely touched this
vehicle, since only its bumper was slightly damaged, while his own vehicle suffered no
damage.
As for example 1, the above story has first been manually converted into a set of
propositions; each of them gets the label E.
(1) true(E; stop; adv; 0)
(2) true(E; stop; person; 0)
(3) true(E; combine(light; red); person; 0)
(4) true(E; combine(light; red); adv; 0)
(5) duty(E; stop; person; 0)
(6) true(E; stop; adv; 1)
(7) true(E; move_off; person; 1)
(8) true(E; combine(light; green); person; 1)
(9) duty(E; combine(approp_distance; adv); person; 1)
(10) duty(E; stop; person; 1)
(11) true(E; combine(light; green); adv; 1)
(12) true(E; combine(follow; adv); person; 1)
(13) true(E; not(combine(approp_distance; adv)); person; 2)
(14) true(E; combine(hit; adv); person; 2)
(15) qualification(E; combine(light_damage; back); adv; 2)
The minimal list ML here consists in stating that the person moved off at the green
light and hit the vehicle ahead of him, i.e.
(7) true(Mi; move_off; person; 1)
(11) true(Mi; combine(light; green); adv; 1)
(12) true(Mi; combine(follow; adv); person; 1)
(14) true(Mi; combine(hit; adv); person; 2)
Rules very similar to those described for our first example attach a B or an M label to
the facts, and it turns out that only (12) gets a B. So, by applying the rule R7, the facts
no. 7, 11 and 14 receive the label I.
Consider first the fact no. 7: the person moves off; anyone in a similar situation, i.e.
when a red light turns green, would find it quite legitimate to move forward. This is a
customary norm, appealed to in order to present the action when we have no rule to
insinuate or justify it. We also use a customary norm, "when the light turns green, start
the vehicle", to connect the fact no. 11 to the duty for adv to start his vehicle. For the fact
no. 14, we exploit the semantic implication: if A could not avoid B, then A hit B. As the
former pragmatically implies that A tried to avoid B, which is good to say for the defense
of A, we prefer the former expression to the latter.
Finally, we obtain a list of 9 propositions; put into English, they would read:
• The other vehicle was stopped.
• I was stopped.
• The light was red for the other car.
• The other car should normally move off when the light is green.
• The other vehicle stopped.
• I followed the other vehicle.
• The light was green so I started my car.
• I was not able to avoid the other car.
• The back of the other car was slightly damaged.
If we adopt the viewpoint of adv we get 9 propositions; put in English, they would
read:
• I was stopped.
• The other vehicle was stopped.
• The light was red for the other vehicle.
• The light is green for me.
• The other vehicle moved off while it should not do so.
• The other vehicle followed me.
• The other vehicle did not keep sufficient distance between our two vehicles.
• The other vehicle hit my vehicle.
• The back of my vehicle was damaged.
7. Validation
We have built a psychological experiment in order to tune some parameters of the
system, and we analyze here its first results. The experiment consists in presenting to
each subject 5 variations, called (0), (A), (+), (–), (B), of a given text. The version (0) is
always an original report taken from our corpus; from this report, we have inferred a
number of facts and a minimal list, as in the above examples. The version (A) is the
current output of our system, put into Natural Language, when it is asked to generate a
description favorable to the author of the report. In order to assess whether the system
generates a correct amount of arguments, we have erased two favorable facts from (A),
which yields the version called (–), and we have augmented (A) with two favorable facts,
which yields version (+). We also present to our subjects the output of the system when
asked to favor the other protagonist, but with the pronouns changed as if it had been
written by the author of the original report: this is version (B).
Our hypotheses are that:
• the subjects will prefer (A) to (–), i.e. providing too few arguments is harmful;
• they will also prefer (A) to (+), i.e. giving too many arguments is also harmful;
• (B) will be unanimously considered as the worst presentation when the goal is to
defend the author.
Each subject reads the 5 presentations (in a random order, to cancel order effects) and
has to answer three questions:
• For each presentation, is it very favorable, favorable, neutral, unfavorable, or very
unfavorable for the author?
• According to this presentation, what is the percentage of responsibility of the author in
the accident?
• Sort the 5 presentations from the most favorable to the most unfavorable for the author.
We have collected the answers of 62 subjects on 3 texts, i.e. 186 answers (one of the
texts being precisely the one taken as an example in Section 6). As far as the first
question is concerned, 99% of the answers (all but two of them) perceive a difference
between the presentations. But this difference does not necessarily change the degree of
responsibility: actually, 31 answers give 100% responsibility to the author whatever the
presentation. The change of degree, when it exists, is consistent with the answers given
to the first question, i.e. no subject has given a lesser degree of responsibility to a
presentation rated 'unfavorable' than to a presentation rated 'neutral' or 'favorable'. This
is an indication that the subjects took this questionnaire seriously and did not answer at
random.
The subjects were our students and colleagues. The rankings are not significantly
different between the two populations, except for the degree of responsibility: students
tend to be more forgiving, giving on average 20% less responsibility to the author of the
accident than the faculty staff do, possibly out of greater solidarity with poor drivers.
Our first hypothesis is confirmed: out of 184 answers, 139 prefer (A) to (–), 3 put
them at the same level, and only 42 prefer (–) to (A).
Our last hypothesis is also confirmed; however, a small minority (12 answers) prefer
the presentation (B), perhaps because they appreciate the fairness of giving all arguments
favoring one's adversary.
However, our second hypothesis is disconfirmed: only 51 answers out of 184 prefer
(A) to (+), while 9 judge them equivalent and 124 prefer (+) to (A).
If we now turn to the ranks from 1 to 5 given by the subjects when answering the last
question, where they were asked to sort the 5 presentations, a clear result is that the
presentation (+) is ranked best for all texts (its average rank is 1.99/5), followed by the
output of our system (average rank of (A) = 2.55/5). What is noticeable is that our
system beats, on average, the original report (average rank of (0) = 2.74/5). The
presentations (–) and (B) are clearly rejected (3.64/5 and 4.08/5, respectively). This
means that, according to a large majority of subjects, (A) has not reached the optimal
number of positive arguments, the optimum after which adding more arguments does
more harm than good.
It is hard, however, to derive conclusive facts from an experiment of this kind, for
many reasons. For one thing, our translation into NL of the output of the system can
introduce an unwanted bias, even though we strove to stay as close as possible to the
propositions. The choice of the arguments subtracted in (–) or added in (+) certainly
plays an important role too. The subjects (students and academics) are certainly not a
balanced sample of the population. All these factors can be mitigated, as there are several
ways in which the experiment could be improved.
But towards what goal? The current data show that the tastes of the subjects differ
considerably: e.g. some of them prefer short reports, while others vote systematically for
long ones. Therefore, the objective of building "the most favorable description" for one
of the agents is rather elusive. Producing a text that gets the highest average rank from a
well-balanced panel of subjects would not necessarily be of practical interest: averaging
all choices flattens the various profiles of the subjects, and this "best description" might
well be a second choice for every member of the panel. It could then be scientifically
more valuable to generate a number of texts, each of them judged as excellent by a
significant subset of the panel.
The analysis of this experiment is still in progress and may reveal other interesting
facts. But at this stage, the main encouraging conclusion is that our system succeeds in
producing descriptions whose argumentative quality competes with, and even slightly
outperforms, that of texts written by human drivers, for whom persuading their insurance
company is a real challenge.
8. Perspectives
Turning back to our initial objective: in its current state, our work brings together in a
single architecture various inferential and linguistic techniques in order to simulate a
basic form of argumentation used by humans in everyday life. The experiment described
in Section 7 shows that, besides a number of improvements mentioned throughout the
paper, better results would be obtained by adding more arguments to our descriptions.
It is by now already clear that useful results can be obtained if we continue in this
direction. On the practical side, the output should be connected to a good text generator
in order to produce a valuable tool. On the theoretical side, further argumentative
techniques should be explored, designed and evaluated, in order to gain a better
understanding of the criteria that influence readers. Being able to generalize from there,
and to determine what makes an argumentative strategy effective, would be an interesting
step in the cognitive study of natural language.
Acknowledgments
We express our gratitude to Denis Hilton, who provided invaluable help in the design
of the psychological experiment described in Section 7. We thank Farid Nouioua
for useful comments and stimulating discussions. The work presented here has been
supported by the MICRAC project, funded by ANR (Agence Nationale de la Recherche).