The systematisation of reasoning, in such a way that it could potentially be carried out quite automatically, has been a central concern of philosophers at least since Aristotle (with echoes down the centuries in both medievals and moderns, such as Hobbes and Leibniz). Aristotle's theory of the syllogism remained supreme until the 19th century, when it was finally superseded by Frege's discovery of propositional and predicate logic, which form the basis of the modern discipline of deductive logic. Since then the field of automated reasoning has grown hugely, finding many practical uses and raising various interesting issues of potential philosophical significance (see, for example, the online Stanford Encyclopedia of Philosophy article on this topic).
A Puzzle about Non-Deductive Inference
Progress in the automation of non-deductive inference, and philosophical interest in it, have been far more limited, making Hume's reference (in his Abstract of 1740) to a comment of Leibniz still pertinent:
The celebrated Monsieur Leibnitz has observed it to be a defect in the common systems of logic, that they are very copious when they explain the operations of the understanding in the forming of demonstrations, but are too concise when they treat of probabilities, and those other measures of evidence on which life and action entirely depend, and which are our guides even in most of our philosophical speculations.
As Hume's remark indicates, it seems very surprising that philosophers in general should have focused so exclusively on deduction (not least in devising Philosophy teaching programmes, which standardly include deductive logic but no probability), when most human thinking is non-deductive. Moreover, deduction is comparatively straightforward and far better understood, so one might reasonably expect non-deductive inference to have attracted far more philosophical attention. In most areas of investigation, after all, it is the untamed boundaries that attract philosophers, rather than the settled domains of established subject specialists.
The explanation of this situation might well be that non-deductive reasoning has until recently been just too difficult to investigate. Some simple and fundamental results - notably Bayes' Theorem - are secure and well-known, and there have been interesting developments building on Bayesian probability theory, such as Inductive Logic and Bayesian Epistemology. But these developments have generally made very little impact in philosophical circles, perhaps for the following reasons:
- First, most material on Bayesian inference is seriously technical, well beyond the reach of the vast majority of philosophers (who, even if mathematically trained, are for cultural reasons likely to know little about probability as opposed to deductive systems).
- Secondly, progress in developing non-deductive logics has been slow and riddled with controversy, owing to major theoretical difficulties (e.g. coping in any practical way with the non-monotonicity of probabilistic reasoning, whereby new evidence can undermine a previously drawn inference).
- Thirdly, until very recently such theories had negligible practical value, because - even if their principles could have been agreed on - they were so hard to implement.
- Fourthly, and for the same reason, such theories could not seriously be tested or proven through significant results; hence philosophers were most unlikely to be convinced that it was worthwhile to invest in the technical learning necessary for understanding them.
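The difficulties listed above are easier to appreciate with a concrete case. The following sketch (in Python, with invented probabilities, a hypothetical `bayes_update` helper, and the traditional "Tweety" example from the non-monotonic reasoning literature) shows both Bayes' Theorem at work and the non-monotonicity mentioned in the second point: evidence that Tweety is a bird supports the hypothesis that Tweety can fly, while the further evidence that Tweety is a penguin undermines that earlier inference.

```python
# A minimal sketch of Bayesian updating, illustrating non-monotonicity:
# one piece of evidence raises confidence in a hypothesis, and a later
# piece of evidence undermines the inference just drawn.
# All numbers here are invented purely for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothesis H: "Tweety can fly", starting from an indifferent prior.
prior = 0.5

# Evidence E1: "Tweety is a bird" -- strongly supports H.
p_bird = bayes_update(prior, likelihood_if_true=0.9, likelihood_if_false=0.1)

# Evidence E2: "Tweety is a penguin" -- undermines the inference from E1.
p_penguin = bayes_update(p_bird, likelihood_if_true=0.001, likelihood_if_false=0.2)

print(f"after E1: {p_bird:.3f}")     # confidence rises well above the prior
print(f"after E2: {p_penguin:.3f}")  # and then collapses below it
```

Deductive logic has no analogue of this reversal: a valid conclusion remains valid however many premises are added, which is one reason why automating non-deductive inference has proved so much harder.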
The development of computers has fundamentally changed this landscape, making the experimental study of non-deductive reasoning systems tractable and potentially useful for the first time. The field has naturally been occupied enthusiastically by researchers in Artificial Intelligence (many of whom have - as it happens - come from a philosophical background). But as a central area of human thought and rationality, it remains very much part of the philosopher's traditional domain, and ripe for philosophical investigation and discussion.