SEMINAR

Giorgio Venturi

In this talk we present recent developments in the study of non-classical models of ZFC. We show that there are algebras that are neither Boolean nor Heyting, but that still give rise to models of ZFC. This result is obtained by means of an algebra-valued construction similar to that of Boolean-valued models. We then show that, by a suitable modification of the interpretation of equality and membership, we can significantly extend the class of algebra-valued models of ZFC. Finally, we present an application of these constructions, showing the independence of CH from non-classical set theories.
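
For orientation, a standard sketch of the Boolean-valued clauses that such constructions generalize (following Bell's presentation; the talk's exact definitions may differ): for names x, y in the algebra-valued universe V^(A), the values of membership and equality are computed by mutual recursion,

  [[x ∈ y]] = ⋁_{t ∈ dom(y)} ( y(t) ∧ [[x = t]] )
  [[x = y]] = ⋀_{t ∈ dom(x)} ( x(t) ⇒ [[t ∈ y]] ) ∧ ⋀_{t ∈ dom(y)} ( y(t) ⇒ [[t ∈ x]] )

where ⇒ is the algebra's implication (in the Boolean case, a ⇒ b = ¬a ∨ b). Replacing the Boolean algebra by a suitable non-Boolean, non-Heyting algebra, and modifying these two clauses, is what yields the wider class of models mentioned above.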

Edson Bezerra

In this talk, I will introduce modalities that interpret the predicates "it is logically valid that" and "it is logically consistent that" in many-valued logics. These modalities are defined in such a way that the truth of □F and ◇F takes into consideration only whether or not F receives a designated value, reflecting the two-valued character of the designated/undesignated distinction in many-valued logics. With this analysis, we aim to establish the most general principles of validity in this family of logics.
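
One natural way to make this precise (an illustrative reconstruction, not necessarily the talk's exact definitions): fix a many-valued matrix ⟨V, D, O⟩ with designated values D ⊆ V, and stipulate, for any valuation v,

  v(□F) ∈ D  ⟺  w(F) ∈ D for every valuation w   (F is logically valid)
  v(◇F) ∈ D  ⟺  w(F) ∈ D for some valuation w   (F is logically consistent)

On such a reading, □F and ◇F behave two-valuedly even when F itself takes intermediate values, which is precisely the point stressed above.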

 

Santiago Jockwich

In this presentation, we will give an overview of Priest's material and model-theoretic approaches to (naive) paraconsistent set theory. We will highlight several criticisms put forward by Incurvati and Meadows. We will then outline the problems one encounters in constructing an algebra-valued model for naive LP set theory. We reject this approach as well, owing to the weak conditional of LP and to its questionable treatment of identity, but we propose a possible fix: first, to add a class function to the lattice-valued model, and second, to discriminate between faithful and unfaithful interpretations of identity. When restricted to faithful interpretations, we can show that our expanded algebra-valued model validates ZF, the axioms of naive LP set theory, and Leibniz's law. We compare our approach to Priest's model construction and discuss possible objections.
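
For readers unfamiliar with the weakness of LP's conditional (a standard observation, not specific to this talk): in LP with truth values {1, b, 0} and designated values {1, b}, the material conditional A → B := ¬A ∨ B fails modus ponens. Taking v(A) = b and v(B) = 0 gives v(A → B) = b, so A and A → B are both designated while B is not. This is why a naive set theory built over LP struggles to recover ordinary set-theoretic reasoning, and it motivates the repairs sketched above.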

Bruno Mendonça

Logical omniscience, a key theorem of normal epistemic logics, states that the knowledge set of ordinary rational agents is closed under logical consequence. Although epistemic logicians generally consider logical omniscience unrealistic, there is no clear consensus on how it should be restrained. The challenge is above all conceptual: we must find adequate criteria for separating obvious logical consequences (i.e., consequences for which epistemic closure certainly holds) from non-obvious ones. Non-classical game-theoretic semantics has been employed in this discussion with relative success. On the one hand, based on urn semantics [5], an expressive fragment of classical game semantics that weakens the dependence relations between quantifiers occurring in a formula, we can formalize, for a broad array of examples, epistemic scenarios in which an individual is ignorant of the validity of a given first-order argument or sentence. On the other hand, urn semantics imposes a disproportionate restriction on logical omniscience, and an improvement of this system is therefore required to obtain a more accurate solution to the problem.

In this paper, I propose one such improvement, based on two claims. First, to avoid the difficulties faced by accounts of logical obviousness in terms of easy provability [e.g., 2, 3, 4], I argue that we should instead conceive of logical knowledge in terms of a default-and-challenge model of justification [1, 6]. Second, I maintain that our linguistic competence in using quantifiers requires a sort of basic hypothetical logical knowledge that can be roughly formulated as follows: (R∀) when inquiring into the truth value of a sentence of the form ∀x p, an individual might be unaware of all the substitutional instances this sentence admits, but she must at least know that, if an element a is given, then ∀x p holds only if p(a/x) is true. Both claims admit of game-theoretic formalization in terms of a refinement of urn semantics.

I maintain that the resulting system (US + R∀) affords an improved solution to the logical omniscience problem. To show this, I prove that it is complete with respect to a special class of urn models and then characterize first-order theoremhood in this logic. Based on this characterization, we will be able to see that the classical first-order validities not preserved in US + R∀ form a class of formulas for which we have semantic reasons to affirm that an individual can possess perfect linguistic competence and still be ignorant of their logical truth.
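
To give a rough sense of the mechanism (a schematic sketch in the spirit of Rantala [5], not the paper's official definitions): in an urn model the domain from which individuals are drawn may change as the semantic game proceeds, so the clause for the universal quantifier at game depth n has the shape

  M ⊨_n ∀x p  ⟺  for every a in the domain D_n available at depth n, M ⊨_{n+1} p(a/x)

When all the D_n coincide, the model is invariant and classical first-order logic is recovered; letting them vary blocks some classical validities, which is how ignorance of validity is represented. The principle (R∀) above then constrains which variations count as compatible with linguistic competence.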
 

References

[1] Brandom, R. Making It Explicit: Reasoning, Representing, and Discursive Commitment, Cambridge: Harvard University Press, 1998.

[2] D'Agostino, M. "Tractable depth-bounded logics and the problem of logical omniscience", pages 245–275, in H. Hosni and F. Montagna (eds.), Probability, Uncertainty and Rationality, Dordrecht: Springer, 2010.

[3] Jago, M. "Logical information and epistemic space", Synthese, 167, 2 (2009): 327–341.

[4] Jago, M. "The content of deduction", Journal of Philosophical Logic, 42, 2 (2013): 317–334.

[5] Rantala, V. "Urn models: a new kind of non-standard model for first-order logic", pages 347–366, in E. Saarinen (ed.), Game-Theoretical Semantics, Dordrecht: Springer, 1979.

[6] Williams, M. Problems of Knowledge, Oxford: Oxford University Press, 2001.

Pedro Yago

"Abstraction" is a term that bears multiple meanings. Since the later half of the 20th century, however, the main interest of philosophers in abstraction has been related to the use of abstraction principles, for the role they play in the foundation of mathematics: given a suitable abstraction principle and some appropriate definitions, it is possible to derive second-order Peano's Arithmetic. This result set the field for a rebirth of the view that arithmetic knowledge is analytic, a view which is tempting for its epistemological implications. A couple of problems this view faces are that of justifying the accepting of an abstraction principle as being analytic, and that of explaining, given the analiticity of an abstraction principle, just how it can be that an analytic truth implies the existence of objects (indeed of infinitely many). In trying to solve the latter, Linnebo has introduced the concept of thin objects, which are lightweight objects, in the sense that nothing (or at least not much) is required for them to exist. Fine also provides a defense of abstraction, but of a different sort, closer to the original conception of it, by employing his theory of arbitrary objects (which are objects possessing all the properties common to a certain range of objects). Following Linnebo's suggestion and Fine's defense of Cantorian abstraction, I intend to argue that a proper conception of arbitrary objects may shed a light upon all the different sorts of abstraction - and, most importantly, upon abstraction principles.
