
Research axis: History and Philosophy of Mathematics, 2010–2011

Philosophy of Mathematics – Paris Diderot

Organizers :
Brice Halimi, David Rabouin, Ivahn Smadja, Sean Walsh


The sessions are held at
Université Paris Diderot - site Rive Gauche, bâtiment Condorcet, 4, rue Elsa Morante, 75013, Paris.



2010 – 2011



Session 1 Tuesday, November 9, 2010, Room Mondrian, 9h30 – 18h.


Algebraic Geometry and Philosophical Perspectives on Uncountable Categoricity


- 9h30 - 11h
Boris Zilber (Mathematics, Oxford University)
webpage

On continuity and its alternatives


- 11h – 12h30
Andrew Arana (Philosophy, Kansas State University)
webpage

Geometric model theory and the foundations of mathematics


- 14h – 15h30
François Loeser (Mathematics, Université Pierre et Marie Curie)
webpage

All points are equal, but some are more equal than others (a non-categoricity survival guide)


- 15h30 – 17h00
Sean Walsh (Philosophy, Birkbeck, University of London) webpage

Explanation, rationality, and uncountable categoricity




Session 2 Tuesday, November 30, 2010, Room Mondrian, 9h30 - 18h.

Co-organized by : Isabelle Drouet (Philosophy, IHPST) webpage

Philosophy of Statistics

- 9h30 - 11h
Christian P. Robert (Statistics, Université Paris-Dauphine, Institut Universitaire de France and CREST)
webpage

Classical Bayesian model selection, alternatives and criticisms

Abstract : I will cover the most orthodox approach to Bayesian model selection, followed by alternatives that have been recently proposed in the literature and by a description of an on-going controversy with phylogeneticists about the nature of testing.
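
For readers less familiar with the orthodox approach mentioned in the abstract, the standard tool of Bayesian model selection is the Bayes factor (a textbook formula, not specific to the talk), which compares two models M_0 and M_1 through the ratio of their marginal likelihoods:

\[
B_{01}(x) \;=\; \frac{\int_{\Theta_0} f_0(x \mid \theta_0)\,\pi_0(\theta_0)\,\mathrm{d}\theta_0}{\int_{\Theta_1} f_1(x \mid \theta_1)\,\pi_1(\theta_1)\,\mathrm{d}\theta_1},
\]

with large values of \(B_{01}(x)\) read as evidence for M_0 over M_1.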


- 11h - 12h30
Jan Sprenger (LPS, Tilburg)
webpage

On the Impossibility of Objective Bayesianism

Abstract : Philosophers usually understand Objective Bayesianism as the claim that there is one particular rational assignment of degrees of belief to propositions, as opposed to the pluralist claim that in a particular situation several assignments can be rational. This is often found in conjunction with adherence to the principle of maximizing entropy (MaxEnt) when choosing between different probability distributions.
In statistics, Objective Bayesianism usually means something else : namely the use of standardized reference priors (Bernardo 1979) in data analysis, as opposed to priors that have been elicited from expert opinion, guesswork, etc. The aim of these procedures is to base the inference on the assumed statistical model and the data alone, so as to avoid equivocation in scientific communication.
This talk compares these two research programs in Objective Bayesianism, and argues that "Objective Bayesianism" is, strictly speaking, a misnomer : notwithstanding the enormous practical successes of both projects, their foundations differ significantly from orthodox Bayesian principles, and the classification as "objective" is equally shaky. It is then discussed how these programs should best be understood and assessed.


- 14h – 15h30
Bengt Autzen (Philosophy, London School of Economics)
webpage

Error Statistics : Objections and Replies

Abstract : Since disagreements about statistical methodology are fundamental to more general issues about statistical evidence and uncertain inference in science, the philosophical literature has found it relevant to consider and appraise the rival statistical methods in and of themselves, apart from their application in any particular area. In this paper I present the error-statistical theory of evidence and discuss several of the objections raised against this account. I argue that many of the objections typically found in the philosophical literature do not justify rejecting the error-statistical theory as a theory of evidence in favour of Bayesian or likelihoodist accounts.


- 15h30 - 17h
Stephen E. Fienberg (Department of Statistics, Machine Learning Department, Cylab, and i-Lab, Carnegie Mellon University)
webpage

On the Causes of Effects

Abstract :
While much of science is concerned with the effects of causes, relying upon evidence accumulated from randomized experiments and observational studies, the problem of inferring the causes of effects arises in many practical policy and legal contexts. Discussions of the two concepts, "the effects of causes" and "the causes of effects," go far back in the philosophical literature. I review the two concepts and how they are related, and I discuss some statistical aspects of inferring the causes of effects through a series of examples.




Session 3 Tuesday, February 1, 2011, Room Mondrian, 9h30 - 18h.

Co-organized by : Davide Crippa (Philosophy, Université Paris 7 - Denis Diderot)

Impossibility Proofs in Mathematics


- 9h30 - 11h
Pavel Pudlák (Math, Czech Academy of Sciences)
webpage

The natural numbers, reality and finitism

Abstract : The natural numbers as studied in mathematics form an entity with an ideal property. Roughly speaking, the natural numbers stretch to infinity as far as is consistent. One may question whether this structure corresponds to the physical natural numbers, that is, the numbers that can be represented by distinct physical objects or by configurations of physical objects. I will argue that proof complexity can offer an interestingly different kind of structure, namely one in which the natural numbers are "wide" rather than "long". This can be viewed as a new kind of finitism.


- 11h - 12h30
Michael Detlefsen (Philosophy, University of Notre Dame, Paris-Diderot, Nancy 2, ANR) webpage

Impossibility, Permanence and the Axiom of Solvability

Abstract : Impossibility proofs are considered in relation to such seemingly contrary principles as Hilbert’s so-called Axiom of Solvability. Attention is given to the conditions that govern problem-solution. Of particular interest to us in this connection is the so-called Principle of Permanence.


- 14h – 15h30
Henrik Kragh Sørensen (Science Studies, University of Aarhus)
webpage

Impossibility proofs in mathematics during the early 19th century

Abstract : During deep transitions in mathematics that took place in the first half of the nineteenth century, new kinds of proofs made their way into mathematics. Among these new types of proofs were mathematical proofs of the impossibility of performing some mathematical task such as the construction of a mathematical entity. Other, related, new types of proofs include non-constructive existence proofs and a variety of classification results.

The impossibility of effecting particular geometric constructions within the clearly delineated domain of Euclidean geometric constructions has been supposed since Antiquity and proofs were attempted at various times (see also recent work by J. Lützen). Also in algebra, the period around 1800 was a particularly important one for impossibility results because techniques and perspectives became available from which such “meta-results” could be treated.

In 1824/26, the Norwegian mathematician Niels Henrik Abel published proofs of the impossibility of expressing the root of a general polynomial equation of degree five using only algebraic operations. Such a result had been suspected since the last quarter of the eighteenth century and proofs had been offered which were neither universally known nor universally accepted.

In my talk, I shall contextualize Abel’s proofs of the algebraic unsolvability of the quintic equation within the transition from a predominantly formula-centred style in mathematics (algebra) towards a more concept-centred one during the first half of the nineteenth century. In so doing, I shall relate Abel’s approach and result to questions of classification and delineation as important aspects of concept-centred mathematics.




Session 4 Tuesday, March 1st, 2011, Room Mondrian, 9h30 – 16h.

Homotopy and Higher Categories

Presentation : Since the work of Daniel Quillen, the algebraic formalization of homotopical notions has given rise to a wealth of new category-theoretic constructions and concepts. In particular, model categories have been introduced as an abstract framework for homotopy theory and have recently been applied to type theory. In addition, higher categories have been introduced as a way to model higher-order homotopy in "iterated loop spaces" : identity is replaced with homotopy, identity of homotopies is replaced with homotopy between homotopies, and so on. Since then, the algebraic expression of that idea has turned out to be very fruitful in many contexts outside the strict domain of topology, for example in rewriting theory, and has become a research topic in its own right.
Category theory has clearly been instrumental in supporting homotopy theory and in articulating higher-order homotopy. The session will be devoted to that facet of category theory.

Wittgenstein says in the Tractatus (5.303) that an identity statement (in particular, in an arithmetic context) is always meaningless ("Roughly speaking, to say of two things that they are identical is nonsense, and to say of one thing that it is identical with itself is to say nothing at all"). As a way to take that argument into account, one can view a calculation as a path, and an identity as the exhibition of a homotopy between two different paths. Accordingly, identity between two calculations is never a fact, but always itself the result of a (higher-order) calculation. Quite generally, conceiving of the identity of mathematical objects as an identity up to some homotopy could contribute towards an understanding of that issue in the framework of category theory.

The goal of this session is to provide an interdisciplinary forum in which to further explore those issues. In terms of format, it will have three speakers, embracing respectively an historical, a philosophical and a mathematical point of view.


- 9h30 - 11h
Ralf Krömer (Universität Siegen)
webpage
Interactions of category theory and the concept of homotopy : historical and epistemological inquiries focusing on the fundamental groupoid

Abstract : In the first part of the talk, we will investigate early conceptions of homotopy (and, more generally, topology) and their role in the treatment of problems from analysis, starting with the work of Riemann and Jordan. This culminates in Poincaré’s series of papers on "Analysis situs", where he introduced, along with many other topological tools, the fundamental group. These matters are well known and well described in the literature, but a recapitulation is a prerequisite for the subsequent parts of the talk. In the second part, we take a look at the history of partial composition and the groupoid concept, starting with algebraic work by Heinrich Brandt on quadratic forms and ideals of algebras. Again focusing on the context of topology, notably on the concept of fundamental groupoid, we will examine the claim that the history of this latter concept should reasonably be divided into two periods : in the first (from Poincaré via Reidemeister up to Crowell and Fox’s book on knot theory), it did not play a central role and its uses were developed more or less independently of category theory, whereas things changed with Ronald Brown’s stressing of the utility of the groupoid concept for algebraic topology. In the last part of the talk, we analyze the epistemological significance of the technical concept of homotopy for the work with the fundamental groupoid in the historical episodes described before, especially examining whether Wittgenstein’s dictum recalled by the organizers of the seminar is significant in this case.


- 11h15 - 12h45
David Corfield (University of Kent)
webpage

Categorification and Logic

Abstract : Categorification designates the process of finding higher-level constructions whose ’shadows’ are familiar constructions. The categorified construction generally contains information which has been lost in its shadow. As we move up the n-category ladder for increasing n, an increasingly elaborate notion of sameness emerges : equality is replaced by isomorphism, isomorphism by equivalence, and so on. Now, categories of a certain kind will support a logic of an associated kind, so it is natural to expect that, as we move up the n-category ladder, we will find categorified versions of logic. In this talk I shall examine what sense we can make of a categorified logic.
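
A standard illustration of categorification (my example, not drawn from the abstract): the natural numbers are the 'shadow', under cardinality, of the category of finite sets. Addition and multiplication lift to disjoint union and cartesian product,

\[
|A \sqcup B| = |A| + |B|, \qquad |A \times B| = |A| \cdot |B| ,
\]

and an arithmetic equality such as \(a(b+c) = ab + ac\) is the shadow of a natural isomorphism \(A \times (B \sqcup C) \cong (A \times B) \sqcup (A \times C)\), one level up the ladder.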


- 14h30 – 16h
François Métayer (Paris-Diderot, PPS)
webpage
Homotopy and rewriting

Abstract : A 1987 result of Craig Squier relates the topology of a monoid to the properties of its presentations by rewriting systems. Precisely, if a monoid can be presented by a finite, confluent and terminating system, then its third homology group is of finite type.
I will show how the space of computations attached to such rewriting systems supports a structure of omega-category, and revisit Squier’s theorem from this omega-categorical point of view. This approach is based on the construction of a Quillen model structure on strict higher categories, recent joint work with Yves Lafont and Krzysztof Worytkiewicz.
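
As a concrete illustration of the hypotheses in Squier’s theorem (a standard textbook example, not taken from the talk), the free commutative monoid on two generators admits the finite presentation

\[
\Sigma = \{a, b\}, \qquad R = \{\, ba \to ab \,\},
\]

whose single rule is terminating (each step moves a b to the right past an a, decreasing the number of inversions) and has no critical pairs, hence is confluent by Newman's lemma.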




Session 5 Wednesday, April 6, 2011, Room Mondrian, 9h30 – 18h.

Co-organized by : Emmylou Haffner (Paris-Diderot) webpage

Anthony Strimple (University of Notre Dame) webpage


19th Century Philosophy of Mathematics


- 9h30 - 11h
Stefania Centrone (Philosophisches Seminar, Universität Hamburg)
webpage
Rigorous Proof and the Ban of the Metábasis eis állo génos.
An Investigation into Bernard Bolzano’s
Beyträge zu einer begründeteren Darstellung der Mathematik

Abstract : In his booklet ‘Contributions to a better founded presentation of mathematics’ of 1810 Bernard Bolzano made his first serious attempt to explain the notion of a rigorous proof. Although the system of logic he employed at that stage is in various respects far below the level of the achievements in his later Wissenschaftslehre, there is a striking continuity between his earlier and later work as regards the methodological constraints on rigorous proofs. This paper tries to give a perspicuous and critical account of the fragmentary logic of Beyträge, and it shows that there is a tension between that logic and Bolzano’s methodological ban on ‘kind crossing’.


- 11h10 - 12h40
Göran Sundholm (Instituut voor Wijsbegeerte, Universiteit Leiden)
webpage
Three kinds of function

Abstract : The development of the notion of function is commonly held to have gone from the idea that functions are (anchored in) expressions with free variables to the idea that they are mappings not tied to expressions, with the "sets of ordered pairs unique in the last component" conception as the precise version of the latter.
I shall, to the contrary, distinguish three notions and discuss examples :
1. Euler-Frege functions - dependent objects of lowest level, with substitution taking the role of application ;
2. Riemann-Dedekind mappings - independent objects of higher level, with a primitive notion of application ;
3. Courses of values ("graphs"), used by Frege, Von Neumann, and set theory (Russell, Hausdorff, ...) - independent objects of lowest level, where one needs a special application function of kind 1 (Frege’s curved arch, Von Neumann’s [x,y], Russell’s elevated inverted comma for descriptive functions ; the set theorists generally ignore the need ...).
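
A rough, informal illustration of the three kinds (my example, not the speaker's): the expression \(x^2 + 1\) with free variable \(x\), applied by substituting a term for \(x\) (kind 1); the mapping \(f : \mathbb{R} \to \mathbb{R},\ x \mapsto x^2 + 1\), applied through a primitive notion of application (kind 2); and the graph \(\{(x, x^2 + 1) : x \in \mathbb{R}\} \subseteq \mathbb{R} \times \mathbb{R}\), an object of lowest level for which a separate application function must be supplied (kind 3).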


- 14h30 - 16h00
Jeremy Gray (Mathematics and Statistics, Open University and
Mathematics Institute, Warwick)
webpage
Hermite, Poincaré, and mathematical rigour

Abstract : Hermite and Poincaré were two of the most influential French mathematicians in the second half of the 19th century, and alongside interesting differences there are many ways in which Hermite’s thoughts about the nature of mathematics were continued and extended by Poincaré. One of these concerns mathematical rigour, where they disagreed strongly with the emerging German paradigm of abstract mathematics. Their views illuminate a philosophy of mathematics animated by what it is to do original mathematics, rather than by questions of a logical kind.


- 16h10 - 17h40
Hourya Benis Sinaceur (Directeur de Recherche at the CNRS, permanent member of the IHPST)
webpage
Frege, Dedekind, and Logicism

Abstract : Dedekind wrote that arithmetic, algebra, and analysis are "a part of logic". Must this necessarily be understood as a profession of logicist faith ?
1. I recall the main theses of Frege’s logicism.
2. In the light of this reminder, I analyse a number of (well-known) texts by Dedekind. I show that claims linguistically close to Frege’s cover a completely different meaning and line of thought.
3. I propose an interpretation of the famous sentence from the first preface of Was sind und was sollen die Zahlen ? :
"Indem ich die Arithmetik (Algebra, Analysis) nur einen Teil der Logik nenne, spreche ich schon aus, dass ich den Zahlbegriff für gänzlich unabhängig von den Vorstellungen oder Anschauungen des Raumes und der Zeit, dass ich ihn vielmehr für einen unmittelbaren Ausfluss der reinen Denkgesetze halte".




Session 6 Tuesday, May 3, 2011, Room 086A, 9h30 - 18h.

Co-organized by : Mattia Petrolo (Université Paris Diderot - Paris 7, Laboratoire SPHERE, Equipe REHSEIS) webpage

Proof-theoretic semantics and the justification of logical laws

- 9h30 - 11h
Enrico Moriconi (Università di Pisa)
webpage

First Steps in Proof-Theoretic Semantics

Abstract : We examine the thesis that it is possible to fix the meaning of the logical constants by appealing only to the role they play in inference. Here the explanation is given in terms of "procedure" or "construction", broadly understood as an open concept, provided its constructivistic flavour is preserved. Starting from difficulties linked to the well-known Prawitz-Dummett perspective, we describe a different approach, which has been called "Definitional Reasoning", and which has its origin in Gentzen’s Sequent Calculus and in Lorenzen’s Operative Logic.


- 11h10 - 12h40
Noam Zeilberger (Paris Diderot-PPS)
webpage
From side-effects to types and contexts

Abstract : In programming language theory, a function has _side effects_ if it is "not merely a function", i.e., if its behavior is not fully described as a mapping from arguments to results. For example, a function computing the product of two matrices might have the side-effect of allocating memory, and aborting if the memory supply is exhausted.

From the point of view of semantics, side-effects are problematic because they seem to break compositionality : side-effects depend upon and modify the global state of the system, sometimes in ways that are difficult to predict. Indeed, Ken Shan has drawn an analogy between computational side-effects and apparently noncompositional phenomena in natural language, such as anaphora, intensionality and quantification. On the other hand, researchers in programming languages have supplied a few partial answers to the question of how to restore this lost compositionality, for example through concepts such as "continuations" and "monads", albeit with no unifying theory.

Without assuming any background in programming languages, I will explain how the problem of side-effects can be used as an opportunity to revisit basic questions about the meaning of types (propositions). In particular, I will describe work-in-progress that tries to take seriously the distinction between types and contexts emphasized in proof theory, and refine it towards a better understanding of the *duality* between types and contexts.
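
To make the notion of side-effect and its compositional treatment concrete, here is a minimal Haskell sketch (my illustration, not taken from the talk; the function names are hypothetical): the possibility of aborting when memory is exhausted, as in the matrix example above, is captured by the Maybe monad, one of the partial answers mentioned in the abstract.

-- A computation that may fail: allocation returns Nothing ("abort")
-- when the requested number of cells exceeds the available supply.
allocate :: Int -> Int -> Maybe Int
allocate available needed
  | needed <= available = Just (available - needed)  -- remaining memory
  | otherwise           = Nothing                    -- abort: out of memory

-- Two allocations composed in do-notation: the effect (possible abort)
-- threads through automatically, restoring compositionality.
allocateTwice :: Int -> Int -> Int -> Maybe Int
allocateTwice available n m = do
  rest  <- allocate available n
  rest' <- allocate rest m
  return rest'

main :: IO ()
main = do
  print (allocateTwice 10 3 4)   -- Just 3
  print (allocateTwice 10 8 4)   -- Nothing (second allocation aborts)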


- 14h30 - 16h00
Jan von Plato (University of Helsinki)
webpage
Classical natural deduction

Abstract : In a step of indirect inference in natural deduction, a negative temporary assumption ¬A is closed and A is concluded. When assumptions are closed otherwise, in the absence of non-normalities, they are subformulas of the conclusion, as in implication introduction, or of the major premiss, as in those elimination rules that close assumptions, and the subformula property of normal derivations follows. The situation is different with indirect proof, because the conclusion A can be a major premiss in an elimination, and trace of it can be lost even in the absence of non-normal instances of the rest of the rules. Solutions to the problem have included leaving disjunction and existence out of the language and restricting the conclusions of indirect proof to atomic formulas, by which they cannot be major premisses of elimination rules. Other solutions involve restrictions on the way elimination rules can be instantiated, and yet others contain global proof transformations.

It turns out that derivations in the full language of predicate logic can be so transformed by standard methods of local permutations that no conclusion of an indirect step of proof is the major premiss of an elimination rule. For the rest, a normal form can be defined as for intuitionistic derivations, in particular, with no a priori restrictions on rule instances : Normal derivations and whatever rule instances they may contain come out purely as results of a normalization procedure. It follows in particular that normal derivations have the subformula property. The situation is particularly clear if natural deduction is formulated in terms of general elimination rules with the definition : A derivation is normal if all major premisses of elimination rules are assumptions. This definition can be applied directly to classical natural deduction for predicate logic.
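
For readers who want to see the rule described in the first sentence of the abstract, the step of indirect inference (classical reductio) can be displayed in standard natural-deduction notation (my rendering, not taken from the talk): the temporary assumption ¬A is closed once a contradiction has been derived from it, and A is concluded.

\[
\begin{array}{c}
{[\neg A]} \\
\vdots \\
\bot \\
\hline
A
\end{array}
\]

The question raised in the abstract is then whether this conclusion A can go on to serve as the major premiss of an elimination rule in a normal derivation.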


- 16h10 - 17h40
Ian Rumfitt (University of London, Birkbeck)
webpage
The semantic justification of logical laws

Abstract : Attempts to resolve disputes between rival logical schools will be futile unless the logical laws applied in the resolution are common ground between the rival parties. For this reason, purely homophonic semantic theories are useless in resolving disputes over basic logical laws. However, non-homophonic semantic theories can be useful. Taking the dispute between classical and intuitionist logicians as an example, I construct non-homophonic theories that identify the metaphysical sources of the dispute. I argue that this way of approaching logical disagreements is superior to the proof-theoretic method recommended by Dummett and Prawitz.


- 17h40 - 18h
Göran Sundholm (Universiteit Leiden)
webpage

Discussion and comments on Rumfitt’s "The semantic justification of logical laws"




Session 7 Tuesday, May 17, 2011, !!! Room Malevitch (483A) !!!, 10h - 18h.

Algorithmic Randomness

- 10h - 11h15
Hector Zenil (Computer Science, Lille 1)
webpage

Towards a stable definition of Algorithmic Randomness

Abstract : Although the concept of algorithmic randomness has reached maturity after the converging results of the definitions in the late 70s and the development of Martin-Löf's work on infinite random sequences, the formal definition for finite strings is invariant only up to an additive constant. The range of possible additive constants across programming languages is so large that in practice it plays a major role in the actual evaluation of the algorithmic randomness (K) of any string. Some attempts have been made to arrive at a framework stable enough for a concrete definition of K, independent of any constant tied to a particular programming language, by appealing to the "naturalness" of the language in question. The aim of this talk is to present a novel approach to overcoming the problem by using Levin's universal distribution (Solomonoff's algorithmic probability), thereby providing a framework for a stable definition of K. A stable definition would allow the concept to be applied to a large range of real-world problems, including the philosophical question of how patterns are distributed in the world. We will start with a short introduction to the subject of algorithmic randomness and its basic definitions.
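
For reference, the standard facts behind the abstract (textbook statements, not specific to the talk): the invariance theorem says that for any two universal prefix machines U and V there is a constant \(c_{U,V}\) such that

\[
K_U(x) \;\le\; K_V(x) + c_{U,V} \quad \text{for all strings } x,
\]

and Levin's universal distribution (algorithmic probability) is tied to prefix complexity by the coding theorem:

\[
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}, \qquad K(x) \;=\; -\log_2 m(x) + O(1).
\]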


- 11h25 - 12h40
Christopher Porter (Philosophy/Math, Notre Dame)
webpage

On Analogues of the Church-Turing Thesis in Algorithmic Randomness

Abstract : According to the Church-Turing Thesis, the informal notion of an effectively calculable function has the same extension as the notion of a Turing computable function. Is there an analogue of the Church-Turing Thesis that holds for some definition of algorithmic randomness for infinite sequences ? While several analogues have been suggested, I will argue (i) that each of these suggestions is problematic, and (ii) that, rather than single out one definition of algorithmic randomness as adequate to our intuitions, a more promising approach is a pluralistic one according to which a number of non-equivalent definitions of algorithmic randomness play an important role in illuminating the concept of randomness.


- 14h30 - 16h00
Antony Eagle (Philosophy, Oxford)
webpage

Probability and Randomness

Abstract : Von Mises thought that an adequate account of objective probability required a condition of randomness. For frequentists, some such condition is needed to rule out those sequences "where the relative frequencies converge towards definite limiting values, and where it is nevertheless not appropriate to speak of probability… [because such a sequence] obeys an easily recognizable law" (von Mises, Probability, Statistics, and Truth, p. 23). But is a condition of randomness required for an adequate account of probability, given the existence of decisive arguments against frequentism ? To put it another way : is it characteristic of the probability role that probability should have a connection to randomness ? I will answer this question in the negative.


- 16h10-17h40
J. W. McAllister (Philosophy, Leiden)
webpage

Limits to the Algorithmic Compression of Empirical Data

Abstract : Many writers, including some of the co-founders of algorithmic information theory, have held that scientific laws and theories constitute algorithmic compressions of empirical data sets, consisting of the outcomes of scientific observations and measurements. Contrary to this view, I argue that laws and theories are only algorithmic compressions of additive components of empirical data sets, which leave a residual noise term. I further argue that, because of the incompressibility of this noise term, it should not be expected that science can provide an algorithmic compression of empirical data sets in their entirety.




Session 8 Tuesday, June 7, 2011, Room Klimt (366A), 10h - 12h30.

- Mark Wilson (Department of Philosophy, University of Pittsburgh)
webpage

What is Classical Physics Anyway ? Reflections on Hilbert’s 6th Problem.

Abstract : Various difficulties involving size scales and rigid bodies seem to preclude any axiomatization of the working principles of standard textbook "classical mechanics." I shall argue that valuable general lessons with respect to the structuring of effective descriptive language are suggested thereby.






