Interacting with LAS offers several potential benefits to the student: it encourages them to organise their knowledge into a coherent argument for their point of view, makes them appreciate the need to support their view rather than adopt it unquestioningly, helps them to gain new insights into the substantive issues involved, fosters debating and reasoning skills, and helps to erode the view of the teacher (human or machine) as a purveyor of reified and indisputable knowledge (cf Baker 1990). LAS also offers potential benefits in other areas, such as expert system explanation (Bench-Capon et al 1991) and on-line help systems (Pilkington 1992a), where knowledge negotiation is an important type of interaction.
The model adopted is derived from work on "logical dialogue games" (Walton 1984). A logical dialogue game is essentially a set of rules, regulating the participants as they make moves in the dialogues. These rules legislate as to permissible sequences of moves, and also as to the effect of moves on participants' "commitment stores", conceived as records of statements made or accepted.
Logical dialogue games have a number of apparent attractions from the point of view of their utilisation within a computer dialogue system. First, given that the games purport to be models of "what is fair and reasonable in argument and criticism" (Walton 1985), constraining both computer and user to such a game will, if the game is valid, yield "fair and reasonable" dialogue, and thus satisfy the requirements for educational debate. A second attraction concerns the treatment of dialogue rules. The aim would be to utilise the generic rules both to establish the legality of student input and to assist in formulating a response by restricting attention to the set of legal moves, and thus to guide the dialogue, in an economical and practical way, beyond a single question/answer interaction. The computational attractiveness of the dialogue game rules is enhanced by a consideration of their nature: Mackenzie's rules (Mackenzie 1979), for example, prescribe no more than three types of response to an incoming move type, thus promising to ease the task of checking for legality, and substantially to reduce both the search problem for the computer's rejoinder and the strategic issue of selecting between competing alternatives.
A third attraction is the games' concern with modelling commitment rather than belief. A sharp distinction is drawn between commitments and beliefs, in that "participants need neither believe their commitments, nor commit themselves to their beliefs" (Mackenzie 1981). Commitments are incurred during the dialogue, in line with clearly stated commitment rules, and the resulting "commitment store" is seen not as a psychological model of memory, but rather as akin to a publicly inspectable set of statements recorded on a slate (Mackenzie 1979). Moves made during a dialogue game may cause commitments to be added to or erased from the store, according to the system's commitment rules or commitment function (Mackenzie 1981). The commitment store has the potential to control the line of argument, and, given its public nature, to act as an agenda for discussion; the store can be used to check that the dialogue is well regulated: points that need challenging are challenged, points omitted are probed.
Hamblin argues that the concept of commitment is "natural and intuitive as soon as we theorise about how people talk to one another" (1987). Elsewhere (1970) he claims that if A is arguing with B, A will have to start with something B will accept, if he has any notion of winning the argument. A final plank in the case for a commitment store concerns the modelling of fallacious argumentation. Walton (1984) suggests that, for Hamblin, the manner of modification of commitment stores is "the key to modelling the fallacies", and elsewhere (1989) suggests that use of the store can prevent the "straw man" fallacy and explain when "ad hominem" argumentation becomes fallacious. The separation of commitment from belief has the advantage that one can allow for machines to argue, in principle at least (Hamblin 1970), without having to concede that they have beliefs (Mackenzie 1979). Equally important, the explicit nature of the commitment function intuitively suggests computational feasibility.
The characterisation of dialogue moves also appears advantageous from the computational point of view, for several reasons. First, the content of a move is generally restricted to one "locution", i.e. a statement together with a statement operator (e.g. assert, question, withdraw). Although this length restriction has the consequence that certain dialogues, e.g. the cannibalism dialogue of Woods and Walton (1982), cited approvingly by Walton himself (1989), would fall outside the scope of such systems, it has the computational advantage of avoiding complexities such as deciding on a practical length for turns (Clark and Schaefer 1989) or formulating a turn-length control policy (Frohlich and Luff 1990), and difficulties involving moves spanning more than one speaker turn (Reichman 1985).
Further, the restricted range of move types allowed by the games potentially makes it possible to use a menu scheme for user input. In addition, the rule bound nature of the moves enables relatively straightforward checks on the legitimacy of input. The strictures of the rules can be expressed as move type pre- and post-conditions, and this suggests the possibility of autonomous move type objects which have an awareness of their own availability at certain stages of the game, and also of internal consistency checks between the currently applicable rule and the currently satisfied pre-conditions.
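To indicate how this might look in practice, the following Python sketch (ours, with illustrative names) shows move type objects that carry their own pre- and post-conditions: the precondition determines the move's availability given the current game state, and the post-condition records its effect on the commitment stores.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class GameState:
    previous_move: Optional[tuple] = None          # (move type name, content)
    commitments: dict = field(default_factory=lambda: {"A": set(), "B": set()})

@dataclass
class MoveType:
    name: str
    precondition: Callable    # (state, speaker, content) -> bool: is the move legal here?
    postcondition: Callable   # (state, speaker, content) -> None: update the state

    def is_available(self, state, speaker, content):
        return self.precondition(state, speaker, content)

    def play(self, state, speaker, content):
        assert self.is_available(state, speaker, content), "illegal move"
        self.postcondition(state, speaker, content)
        state.previous_move = (self.name, content)

# Illustrative precondition: a statement to which both participants are already
# committed may not be (re)uttered.
def statement_pre(state, speaker, content):
    return not all(content in cs for cs in state.commitments.values())

# Illustrative post-condition: a statement is added to the stores of both parties.
def statement_post(state, speaker, content):
    for cs in state.commitments.values():
        cs.add(content)

statement = MoveType("statement", statement_pre, statement_post)

state = GameState()
statement.play(state, "B", "P")     # legal here: neither participant is yet committed to P
```

The object can thus report its own availability at any stage of the game, and internal consistency between the currently applicable rule and the currently satisfied preconditions can be checked locally within the move type itself.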
Perhaps the main computational attraction, however, is the clarity and expressiveness provided by the moves. Their pre- and post-conditions relate purely to explicit moves and to participants' (inspectable) commitment stores, so that complications concomitant upon implicit moves (see Moore 1993) never arise. Neither is there any requirement to attempt the difficult task of divining the intentionality of the maker of the move (cf Mackenzie 1990), given the moves' clearly defined function within the game. Intentionality is, in effect, embedded within the move. The difficulty of interpreting pragmatic content is, therefore, largely overcome: "the idea would be to limit formally the permissible forms of expressions so that intention in utterance would be unambiguous and the need for complex pragmatic parsing would be bypassed" (Pilkington 1992a).
In brief, within logical dialogue games, the moves and the concomitant commitments are clear, and dependent only upon the locution used (Mackenzie 1979), so that issues of illocutionary force, for example (Stenton 1988), are effectively by-passed, as are "dynamic social-psychological factors" such as politeness (Reichman 1985). The possible loss of richness this entails can be seen simply as a reflection of the different paradigm that computer-based communication offers, and the restriction greatly improves the prospects for successful computer implementation (cf Pilkington 1992a). It is also of potential educational value, since commitments (and hence obligations to defend) are clear, reflecting Clancey's favouring of "indexed remarks" (Clancey 1987), and Apostel's talk of logic's "pedagogical function" of "[helping] us participate more critically and more fruitfully in concrete disputes" (1982). Moreover, the rules seem intuitively amenable to computerisation.
A constrained yet powerful dialogue system of this kind can therefore be expected to offer considerable potential from the computational point of view.
The following specimen dialogue, produced using DC, concerns capital punishment (CP); A and B are the two participants.

1 A: Do you want CP brought back?
2 B: No, I don't believe in CP, it is wrong.
3 A: Why do you think CP is wrong?
4 B: CP is wrong because it is killing.
5 A: Most people in GB believe in it.
6 B: Are you sure of this statistic?
7 A: Yes.
8 B: Why do you think people should be killed for crimes they have committed?
9 A: Because it stops others from doing the same.
10 B: Will it stop all people from killing others?
11 A: No.
12 B: Why use this method if it is not fully effective?
13 A: Withdraw "it stops people from committing crimes".
14 B: Is it wrong to punish the wrong person?
15 A: Yes.
16 B: Do the wrong people ever get convicted wrongly?
17 A: Yes.
18 B: Would these people be hung if they were convicted?
19 A: Yes.
20 B: Do you consider this acceptable?
21 A: No.
22 B: Do you agree with CP?
23 A: Yes.
24 B: Will the wrong people ever be hung?
25 A: Yes.
26 B: You say killing the wrong people is wrong, but you say that CP will kill the wrong people.

Other examples of dialogues generated using DC can be found in Finklestein and Fuks (1990), Moore (1993) and Pilkington et al (1992). Such dialogues are realistic and of potential educational benefit to the users, and they represent a major advance over many ITS applications. The practicality of such dialogues, both from the point of view of students successfully adopting the dialogue model (DC) and from the point of view of having the computer act as a partner in them, is the major issue of our current research.
A second reason for the choice of DC is that a prima facie case can be made for its computational tractability. The rules of the system are clear and relatively simple; in order to assess the legality of an event within a dialogue, the computer would need to know only the state of the commitment stores, the previous event, and certain "syntactic relations between locutions" (Mackenzie 1979). Commitments of both parties are made clear by the commitment rules, and since at any stage the number of entries in a store is at most twice the number of moves made to that stage in the dialogue, combinatorial explosions of commitments cannot occur (Mackenzie 1990). Further, there are certain precedents for computational use of DC. It is used by Bench-Capon et al (1991) to enable co-operative interactions between knowledge based systems and their users, and is seen as potentially providing a novel means of knowledge elicitation. Hartley and Hintze (1990) utilise DC as the model for a system that facilitates dialogue-based diagnosis of "bugs" in students' thinking, thus enabling the actual construction and maintenance of a user model to be in itself of an instructive nature. DC is also used, albeit in somewhat modified form, by Finklestein and Fuks (1990) as the basis for a system supporting software specification.
The DC system allows five move types: (i) statements ('P', 'Q' etc., and truth-functional compounds); (ii) withdrawals ('no commitment to P'); (iii) questions ('P?'); (iv) challenges ('why P?'); and (v) resolution demands ('resolve whether P'). There are four rules regulating commitment stores: (i) stores are null at dialogue commencement; (ii) statements by either participant are added to the stores of each; (iii) a statement P in response to a challenge of Q results in both P and P → Q being added to each store; (iv) a challenge of P results in P being added to the store of the hearer, and why-P being added to, and P being removed from, the store of the maker of the move. There are six dialogue rules in our amended version of the system: (i) participants may utter individual permitted locutions in turn; (ii) mutual commitments may not be uttered; (iii) the question 'P?' must be answered by 'P', 'not P', or 'no commitment P'; (iv) 'why P?' must be responded to by a withdrawal of 'P', a statement not under challenge by its speaker, or a resolution demand of any commitments of the hearer which immediately imply 'P'; (v) resolution demands may be made only if the hearer is committed to an immediately inconsistent conjunction of statements, or withdraws or challenges an immediate consequent of his commitments; (vi) a resolution demand must be followed by withdrawal of one of the offending conjuncts, or affirmation of the disputed consequent.
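To give a concrete flavour of the rules' computational expression, the following Python sketch (ours, and deliberately simplified: truth-functional compounds and the dialogue rules are omitted) implements the four commitment rules.

```python
# A simplified sketch of DC's four commitment rules. 'speaker' makes the move
# and 'hearer' is the other participant; propositions are represented as strings.

def new_stores():
    # commitment rule (i): stores are null at dialogue commencement
    return {"A": set(), "B": set()}

def apply_statement(stores, speaker, hearer, p, challenged=None):
    if challenged is None:
        # commitment rule (ii): a statement is added to the stores of both parties
        stores[speaker].add(p)
        stores[hearer].add(p)
    else:
        # commitment rule (iii): P in response to "why Q?" adds P and P -> Q to both stores
        q = challenged
        for side in (speaker, hearer):
            stores[side].add(p)
            stores[side].add(p + " -> " + q)

def apply_challenge(stores, speaker, hearer, p):
    # commitment rule (iv): "why P?" adds P to the hearer's store, and adds "why P?"
    # to, and removes P from, the store of the challenger
    stores[hearer].add(p)
    stores[speaker].discard(p)
    stores[speaker].add("why " + p + "?")
```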
In order to illustrate the workings of the DC system, part of the specimen dialogue cited earlier will be analysed using DC. In line 1, A is using the move type question to establish B's view on CP. The move type has no effect on commitment stores (CSs), and, since this is the start of the dialogue, both stores remain empty. At line 2 B answers the question in the negative, via move type statement, in line with dialogue rule (iii). By commitment rule (ii) both CSs now contain the statement 'CP is wrong'. At line 3 A challenges this statement (move type (iv)), causing commitment rule (iv) to remove 'CP is wrong' from, and add 'why is CP wrong?' to, A's CS. At line 4, B responds in line with dialogue rule (iv); by commitment rule (iii) both CSs are expanded to include 'CP is killing' and 'CP is killing → CP is wrong'. Similar analysis applies to the rest of the dialogue; note that at line 26 B poses a resolution demand concerning A's apparently contradictory commitments, forcing A to address this apparent paradox in his position.
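Continuing the sketch given above, lines 2 to 4 of the specimen dialogue would be traced as follows.

```python
stores = new_stores()

# line 2: B states "CP is wrong"                      (commitment rule (ii))
apply_statement(stores, "B", "A", "CP is wrong")

# line 3: A challenges "CP is wrong"                  (commitment rule (iv))
apply_challenge(stores, "A", "B", "CP is wrong")

# line 4: B supports the challenged statement         (commitment rule (iii))
apply_statement(stores, "B", "A", "CP is killing", challenged="CP is wrong")

# A's store now holds: "why CP is wrong?", "CP is killing", "CP is killing -> CP is wrong"
# B's store now holds: "CP is wrong", "CP is killing", "CP is killing -> CP is wrong"
```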
A qualitative assessment of the debates generated via DC suggests that the system can be regarded as a valid prescriptive framework for educational dialogue. Further, a comparison with dialogues generated without any DC-imposed restrictions suggests that the qualitative differences between the dialogues produced by the two paradigms are in practice relatively small. Two concerns with the DC dialogues are the way questions are handled by the DC framework, and the relatively fixed initiative patterns that can result from its adoption. However, we argue that most of the apparent restrictions in DC can be overcome, and that DC avoids some difficulties that were found to beset the unconstrained dialogues.
An interface has been developed by Hintze (Hartley and Hintze 1990, Pilkington et al 1992) that allows two players to engage in a DC dialogue supported by a computer-based gameboard and referee. Certain educational advantages of this dialogue game interface can be suggested. It could be used as a spur to develop arguments logically, to remain consistent, and to improve debating skills; using the interface with different dialogue partners forces application of these skills to information offered by the other player, thus potentially generating new insights into the domain of enquiry. The system offers the facility to replay previous dialogues either in full or to some specified point, from which users may continue to add their own contributions.
Indeed, the successful implementation of the DC interface is all that is required, strictly speaking, to demonstrate the computational tractability of DC as a dialogue model. For DC is seen as a vehicle for dialogue, a framework through which two people can engage in dialogue with each other, by adhering to certain rules, and by being made accountable, via their commitment stores, for what they say and accept during the debate. Hintze's DC interface provides precisely such a vehicle. It provides a computerised version of the DC system, and DC's computational tractability is therefore demonstrated. However, of more interest educationally is the possibility of having the computer act as a participant in the dialogue. An architecture for LAS has therefore been designed to this end, and is the subject of on-going research and development work (Moore 1993).
One major concern in connection with the implementation of LAS is that, since DC operates at a very high level of abstraction, LAS will need suitable strategic knowledge to enable use of the model to generate dialogue. The DC data suggests that three levels of decision making are required: (i) whether to retain or change the current focus of argument; (ii) whether to seek to demolish partner's position, by having him remove from his commitment store propositions which he has used to support his thesis, or to seek to build up one's own position, by making statements the acceptance of which, or asking bipolar questions the answers to which, ultimately imply the truth of that position; (iii) which method to adopt in fulfilment of the objective set at levels (i) and (ii). A set of heuristics has been derived from the data, which can guide a participant in any given game situation. For example, if one's partner has made a statement, the following set of ordered heuristics applies: (i) if partner has uttered (but not subsequently withdrawn) contradictory propositions, then request resolution; (ii) if there is any evidence directly contradicting any of partner's statements, then state it; (iii) seek a substantive objection to partner's commitments, and pose questions with a view to making partner accept that objection; (iv) seek out by challenge partner's arguments, with a view to ultimately rebutting them.
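Since the heuristics are ordered, selection can be sketched simply as trying each in turn and playing the first applicable move. In the illustrative Python below, the helper functions are trivial stand-ins for the commitment-store and knowledge-base queries that LAS would need to supply.

```python
# Illustrative sketch of the ordered heuristics applied when the partner has just
# made a statement. The helpers are deliberately trivial placeholders.

def find_contradiction(store):
    # heuristic (i) trigger: a commitment and its explicit negation held together
    return next(((p, "not " + p) for p in store if "not " + p in store), None)

def find_counter_evidence(store, kb):
    # heuristic (ii) trigger: a knowledge-base item recorded as contradicting a commitment
    contradicts = kb.get("contradicts", {})
    return next((contradicts[p] for p in store if p in contradicts), None)

def find_objection(store, kb):
    # heuristic (iii) trigger: a substantive objection the partner might be led to accept
    objections = kb.get("objections", {})
    return next((objections[p] for p in store if p in objections), None)

def choose_challenge_target(store):
    # heuristic (iv): fall back to challenging one of the partner's commitments
    return next(iter(store), None)

def respond_to_statement(partner_store, kb):
    clash = find_contradiction(partner_store)
    if clash:
        return ("resolution demand", clash)            # heuristic (i)
    counter = find_counter_evidence(partner_store, kb)
    if counter:
        return ("statement", counter)                  # heuristic (ii)
    objection = find_objection(partner_store, kb)
    if objection:
        return ("question", objection)                 # heuristic (iii)
    return ("challenge", choose_challenge_target(partner_store))   # heuristic (iv)

kb = {"contradicts": {"CP deters all killing": "CP does not deter all killing"}}
print(respond_to_statement({"CP deters all killing"}, kb))
# ('statement', 'CP does not deter all killing')
```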
A second major concern involves LAS's representation of its knowledge base. This is a difficulty because the knowledge base must be able to furnish not only answers to bipolar questions, but also, for any given proposition, some further proposition that could reasonably be used to support it. Among the approaches currently under investigation is the use of rhetorical structure theory (RST) (Pilkington et al 1992). RST utilises "rhetorical predicates", seen as "explicit organising relations used in discourse" (McKeown 1986), perhaps grouped into relatively fixed patterns or "schemata", to generate coherent text at the paragraph level or beyond (Pilkington 1992b). RST has been used successfully in a number of computer-based implementations, e.g. Eurohelp (Pilkington 1992a), "Text" (McKeown 1986), and "Texplan" (Maybury 1992).
In the current context, the approach could be used to pool relevant information into the appropriate schema, in a similar manner to Eurohelp. The individual statements resulting from this process could each then form a move in the system's strategy, with the opportunity to question them made actively apparent to the user. The RST approach would therefore enable LAS to implement build strategies, by extracting from the knowledge base a schematised answer content, which would constrain what is judged relevant and thus appropriately focus the content of the strategy. Individual predicates may also be useful in the context of debate. The use of examples and counter-examples is seen as valuable in argument (e.g. Cohen 1987), and can be provided for by the rhetorical predicate "illustration" (Maybury 1992). Other generally useful predicates include "purpose", "analogy", and "inference" (Maybury 1992), and "evidence" (Moore and Swartout 1991). A comprehensive example of the use of RST within a dialogue game context is provided in Pilkington et al (1992).
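A minimal sketch of this pooling process is given below; the schema ordering, predicate names and toy knowledge base are our own illustrative assumptions, not those of Eurohelp, Text or Texplan.

```python
# Pool knowledge-base content into a schema of rhetorical predicates; each filled
# predicate then serves as one candidate move in a "build" strategy.

SUPPORT_SCHEMA = ["thesis", "evidence", "illustration", "inference"]

KNOWLEDGE_BASE = [
    # (rhetorical predicate, proposition) -- toy content for illustration
    ("thesis",       "CP is wrong"),
    ("evidence",     "people have been wrongly convicted and executed"),
    ("illustration", "a wrongly convicted person would be hung under CP"),
    ("inference",    "killing the wrong people is wrong, so CP is wrong"),
    ("purpose",      "punishment aims at deterrence"),   # not selected by this schema
]

def build_strategy(schema, kb):
    # pool propositions in schema order; each becomes a prospective statement move
    pooled = [prop for predicate in schema for pred, prop in kb if pred == predicate]
    return [("statement", prop) for prop in pooled]

for move in build_strategy(SUPPORT_SCHEMA, KNOWLEDGE_BASE):
    print(move)
```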
Summarising, then, the marriage of dialogue game theory with rhetorical structure theory offers a potentially attractive way of servicing the requirements of computer-based debate, given the successful implementation of the former theory in the dialogue game interface, and the successful implementation of the latter theory in systems such as Text and Texplan. The use of RST would allow LAS to utilise a generic knowledge base, potentially useful therefore for a variety of applications (including a variety of types of dialogue games), from which the rhetorical predicate operators abstract out propositions useful in the prevailing circumstances.
Whilst further experimental work is required to verify the utility of the interface, the expectation is that a similar interface, linked with a computational dialogue generator as discussed above, will provide a suitably user-friendly means of access to LAS.
This still leaves open the question, however, of how LAS will handle substantive student input. Computer processing of unrestricted natural language input is of course a large problem, beyond the scope of the current project. It may be possible, however, to use rhetorical predicates as an intermediary language, forming a level between the system's semantic domain representation and the user interface. The idea is that the predicates are iconised and the student uses them in the cut-and-paste construction of his substantive moves. In this way the student can construct relationships between objects, to form the propositional content of his moves; he may, for example, select a cause-consequence predicate, and the objects "angina" and "fatigue", to suggest that angina can give rise to fatigue.
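The following sketch illustrates how a move's propositional content might be assembled from an iconised predicate and two domain objects; the Proposition structure and its rendering rule are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    predicate: str       # e.g. "cause-consequence"
    subject: str         # e.g. "angina"
    object: str          # e.g. "fatigue"

    def render(self) -> str:
        # a simple, assumed mapping from predicate to surface form
        if self.predicate == "cause-consequence":
            return f"{self.subject} can give rise to {self.object}"
        return f"{self.subject} [{self.predicate}] {self.object}"

# the student "cuts and pastes" a predicate icon and two object icons:
content = Proposition("cause-consequence", "angina", "fatigue")
print(content.render())      # angina can give rise to fatigue
```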
In this way the system can provide a range of semantic links which can give rise to a common understanding between system and user. The system can then test for consistency between relationships and objects proffered by the user, and existing relationships and objects in the domain knowledge base, and decide, on this basis, whether to accept the statement or to seek to demolish it via the strategic heuristics. Indeed, it may be that in certain circumstances, the domain knowledge base itself should be altered as a result of relationships suggested by users, so that a learning element can be incorporated into the system (cf Pilkington et al 1992). In sum, whilst the provision of a full natural language interface remains a long term aim, the use of a combination of menus for move type selection, and rhetorical predicate links for propositional content, suggests that the natural language problem, the "Achilles' heel" of dialogue systems (Anderson 1988), can at least be overcome to the extent necessary to enable interesting debate to occur between system and user. An alternative means of substantive input, involving selection from pre-configured hypertext-like cards, is discussed elsewhere (Moore 1993).
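A sketch of such a consistency test is given below; the knowledge base representation and the response labels are illustrative assumptions.

```python
# Test a user-proffered relationship against the domain knowledge base and decide
# whether to accept it, contest it via the strategic heuristics, probe it, or learn it.

DOMAIN_KB = {
    # (predicate, subject, object) -> whether the relation is held to be valid
    ("cause-consequence", "angina", "fatigue"): True,
    ("cause-consequence", "fatigue", "angina"): False,
}

def evaluate_user_move(predicate, subject, obj, kb, learn_unknown=False):
    key = (predicate, subject, obj)
    if key in kb:
        # accept if consistent with the KB, otherwise contest via the strategic heuristics
        return "accept" if kb[key] else "contest"
    if learn_unknown:
        kb[key] = True       # optionally absorb the new relationship (cf Pilkington et al 1992)
        return "accept and learn"
    return "probe"           # unknown to the KB: question the user about it

print(evaluate_user_move("cause-consequence", "angina", "fatigue", DOMAIN_KB))   # accept
print(evaluate_user_move("cause-consequence", "fatigue", "angina", DOMAIN_KB))   # contest
```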
One important way in which the research can be carried forward is to combine it with our research in interactive multimedia (Hobbs and Moore, this volume). A need has been identified for interactive debriefs and discussions within multimedia systems (Hartley 1993), and DC can provide the vehicle for this. Further, LAS could use multimedia output as its contribution to the debate, and hypermedia links in the output to enable the student to clarify points where necessary.
The ultimate goal is a system that engages in a variety of interaction styles with students, in line with the demands of the subject matter and the educational interests of the individual student. Whilst many issues need to be addressed before this goal can be achieved, the research documented in this paper has, it is hoped, laid a foundation for important steps in this direction.
References

Apostel L (1982) Towards a General Theory of Argumentation; in Barth E M and Martens J L (eds.) Argumentation: Approaches to Theory Formation; Amsterdam: John Benjamins
Baker M (1990) Arguing with the Tutor; in Elsom-Cook M (ed.) Guided Discovery Learning; Paul Chapman Publishing Ltd.
Baker M (1992) Modelling Negotiation in Intelligent Teaching Dialogues; in Moyse R, Elsom-Cook M T (eds.) Knowledge Negotiation; Academic Press
Bench-Capon T J M, Dunne P E S, Leng P H (1991) Interacting with Knowledge Based Systems through Dialogue Games; in Proc Eleventh International Conference, Expert Systems and their Applications, vol. 1; Avignon, May 1991
Clancey W J (1987) Knowledge-Based Tutoring: The GUIDON Program; MIT Press
Clark H H, Schaefer E F (1989) Contributing to Discourse; Cognitive Science 13, pp 259-294
Cohen R (1987) Analysing the Structure of Argumentative Discourse; Computational Linguistics, vol. 13, nos. 1-2, pp 11-24
Finklestein A, Fuks H (1990) Conversation Analysis and Specification; in Luff N (ed.) Computers and Conversation; Academic Press
Frohlich D M, Luff P (1990) Applying the Technology of Conversation to the Technology for Conversation; in Luff N (ed.) Computers and Conversation; Academic Press
Girle R A (1986) Dialogue and Discourse; in Bishop G and Van Lint W (eds.) Proc Fourth Annual Computer Assisted Learning in Tertiary Education Conference, Adelaide 1986; distributed by Office of Continuing Education, University of Adelaide
Hamblin C L (1970) Fallacies; Methuen
Hamblin C L (1987) Imperatives; Basil Blackwell
Hartley J R (1993) Interacting with Multimedia; University Computing 15, pp 129-136
Hartley J R, Hintze D (1990) Dialogue and Learner Modelling; in Cheri S A (ed.) Student Model Acquisition in a Natural Laboratory (NATLAB); GEC DELTA Project D-1016 Final Report, Brussels; CEC
Mackenzie J D (1979) Question-Begging in Non-Cumulative Systems; Journal of Philosophical Logic 8, pp 117-133
Mackenzie J D (1981) The Dialectics of Logic; Logique et Analyse 94, pp 159-177
Mackenzie J (1990) Four Dialogue Systems; Studia Logica XLIX(4), pp 567-583
Maybury M (1992) Communicative Acts for Explanation Generation; IJMMS 37, pp 135-172
McKeown K R (1986) Text Generation; Cambridge University Press
Moore D (1993) Dialogue Game Theory for Intelligent Tutoring Systems; unpublished PhD thesis, Leeds Metropolitan University
Moore J D, Swartout W R (1991) A Reactive Approach to Explanation; in Paris C L, Swartout W R, Mann W C (eds.) Natural Language Generation in Artificial Intelligence and Computational Linguistics; Kluwer Academic Publishers
Moyse R, Elsom-Cook M T (1992) Knowledge Negotiation: An Introduction; in Moyse R, Elsom-Cook M T (eds.) Knowledge Negotiation; Academic Press
Petrie-Brown A (1989) Intelligent Tutoring Dialogue: The Structures of an Interaction; in Bierman D, Breuker J, Sandberg J (eds.) Artificial Intelligence and Education, Proc Fourth International Conference on AI and Education, 24-26 May 1989
Pilkington R M (1992a) Intelligent Help: Communicating with Knowledge Based Systems; Paul Chapman Publishing Ltd.
Pilkington R M (1992b) Question-Answering for Intelligent On-Line Help; Cognitive Science 16, pp 455-489
Pilkington R M, Hartley J R, Hintze D, Moore D J (1992) Learning to Argue and Arguing to Learn: An Interface for Computer-based Dialogue Games; Journal of Artificial Intelligence in Education, vol. 3, no. 3, pp 275-295
Reichman R (1985) Getting Computers to Talk Like You and Me; MIT Press
Shiffrin D (1985) "Lauri Carlson: Dialogue Games, An Approach to Discourse Analysis"; Language in Society, vol. 14, pp 98-100
Stenton S P (1988) Dialogue Management for Co-operative Knowledge Based Systems; The Knowledge Engineering Review, pp 99-122
Walton D N (1984) Logical Dialogue Games and Fallacies; University Press of America
Walton D N (1985) New Directions in the Logic of Dialogue; Synthese 63, pp 259-274
Walton D (1989) Question-Reply Argumentation; Greenwood Press
Wenger E (1987) Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge; Morgan Kaufmann
Woods J, Walton D (1982) Argument: The Logic of the Fallacies; McGraw-Hill Ryerson