Formal criteria of theoretical equivalence are mathematical mappings between specific sorts of mathematical objects, notably including those objects used in mathematical physics. Proponents of formal criteria claim that results involving these criteria have implications that extend beyond pure mathematics. For instance, they claim that formal criteria bear on the project of using our best mathematical physics as a guide to what the world is like, and also have deflationary implications for various debates in the metaphysics of physics. In this paper, I investigate whether there is a defensible view according to which formal criteria have significant non-mathematical implications, of these sorts or any other, reaching a chiefly negative verdict. Along the way, I discuss various foundational issues concerning how we use mathematical objects to describe the world when doing physics, and how this practice should inform metaphysics. I diagnose the prominence of formal criteria as stemming from contentious views on these foundational issues, and endeavor to motivate some alternative views in their stead.
I investigate syntactic notions of theoretical equivalence between logical theories and a recent objection thereto. I show that this recent criticism of syntactic accounts as extensionally inadequate is unwarranted, by developing an account which is plausibly extensionally adequate and more philosophically motivated. This is important for recent anti-exceptionalist treatments of logic, since syntactic accounts require less theoretical baggage than semantic accounts.
When are two formal theories of broadly logical concepts, such as truth, equivalent? The paper investigates a case study, involving two well-known variants of Kripke-Feferman truth. The first, KF+CONS, features a consistent but partial truth predicate. The second, KF+COMP, features an inconsistent but complete truth predicate. It is well-known that the two truth predicates are dual to each other. We show that this duality reveals a much stricter correspondence between the two theories: they are intertranslatable. Intertranslatability under natural assumptions coincides with definitional equivalence, and is arguably the strictest notion of theoretical equivalence different from logical equivalence. The case of KF+CONS and KF+COMP raises a puzzle: the two theories can be proved to be strictly related, yet they appear to embody remarkably different conceptions of truth. The puzzle can be solved by reflecting on the scope and limitations of formal notions of theoretical equivalence in certain contexts.
Philosophers of science often assume that logically equivalent theories are theoretically equivalent. I argue that two theses, anti-exceptionalism about logic (which says, roughly, that logic is not a priori, that it is revisable, and that it is not special or set apart from other human inquiry) and logical realism (which says, roughly, that differences in logic reflect genuine metaphysical differences in the world), make trouble for both this commitment and the closely related commitment to theories being closed under logical consequence. I provide three arguments. The first two show that anti-exceptionalism about logic provides an epistemic challenge to both the closure and the equivalence claims; the third shows that logical realism provides a metaphysical challenge to both the closure and the equivalence claims. Along the way, I show that there are important methodological upshots for metaphysicians and philosophers of logic. In particular, there are lessons to be drawn about certain conceptions of naturalism as constraining the possibilities for metaphysics and the philosophy of logic.
Theories are metaphysically equivalent just if there is no fact of the matter that could render one theory true and the other false. In this paper I argue that if we are judiciously to resolve disputes about whether theories are equivalent or not, we need to develop testable criteria that will give us epistemic access to the obtaining of the relation of metaphysical equivalence holding between those theories. I develop such 'diagnostic' criteria. I argue that correctly inter-translatable theories are metaphysically equivalent, and what we need are ways of determining whether a putative translation is correct or not. To that end I develop a number of tools we can employ to discern whether a translation is a correct one.
Reformulating a scientific theory often leads to a significantly different way of understanding the world. Nevertheless, accounts of both theoretical equivalence and scientific understanding have neglected this important aspect of scientific theorizing. This essay provides a positive account of how reformulating theories changes our understanding. My account simultaneously addresses a serious challenge facing existing accounts of scientific understanding. These accounts have failed to characterize understanding in a way that goes beyond the epistemology of scientific explanation. By focusing on cases where we have differences in understanding without differences in explanation, I show that understanding cannot be reduced to explanation.
I here develop a specific version of the deflationary theory of truth. I adopt a terminology on which deflationism holds that an exhaustive account of truth is given by the equivalence between truth-ascriptions and de-nominalised (or disquoted) sentences. An adequate truth-theory, it is argued, must be finite, non-circular, and give a unified account of all occurrences of “true”. I also argue that it must descriptively capture the ordinary meaning of “true”, which is plausibly taken to be unambiguous. Ch. 2 is a critical historical survey of deflationary theories, where notably disquotationalism is found untenable as a descriptive theory of “true”. In Ch. 3, I aim to show that deflationism cannot be finitely and non-circularly formulated by using “true”, and so must only mention it. Hence, it must be a theory specifically about the word “true” (and its foreign counterparts). To capture the ordinary notion, the theory must thus be an empirical, use-theoretic, semantic account of “true”. The task of explaining facts about truth now becomes that of showing that various sentences containing “true” are (unconditionally) assertible. In Ch. 4, I defend the claim (D) that every sentence of the form “That p is true” and the corresponding “p” are intersubstitutable (in a use-theoretic sense), and show how this claim provides a unified and simple account of a wide variety of occurrences of “true”. Disquotationalism then only has the advantage of avoiding propositions. But in Ch. 5, I note that (D) is not committed to propositions. Use-theoretic semantics is then argued to serve nominalism better than its truth-theoretic counterpart. In particular, it can avoid propositions while sustaining a natural syntactic treatment of “that”-clauses as singular terms and of “Everything he says is true”, as any other quantification. Finally, Horwich’s problem of deriving universal truth-claims is given a solution by recourse to an assertibilist semantics of the universal quantifier.
We present a new “reason-based” approach to the formal representation of moral theories, drawing on recent decision-theoretic work. We show that any moral theory within a very large class can be represented in terms of two parameters: a specification of which properties of the objects of moral choice matter in any given context, and a specification of how these properties matter. Reason-based representations provide a very general taxonomy of moral theories, as differences among theories can be attributed to differences in their two key parameters. We can thus formalize several distinctions, such as between consequentialist and non-consequentialist theories, between universalist and relativist theories, between agent-neutral and agent-relative theories, between monistic and pluralistic theories, between atomistic and holistic theories, and between theories with a teleological structure and those without. Reason-based representations also shed light on an important but under-appreciated phenomenon: the “underdetermination of moral theory by deontic content”.
The presence of symmetries in physical theories implies a pernicious form of underdetermination. In order to avoid this theoretical vice, philosophers often espouse a principle called Leibniz Equivalence, which states that symmetry-related models represent the same state of affairs. Moreover, philosophers have claimed that the existence of non-trivial symmetries motivates us to accept the Invariance Principle, which states that quantities that vary under a theory’s symmetries aren’t physically real. Leibniz Equivalence and the Invariance Principle are often seen as part of the same package. I argue that this is a mistake: Leibniz Equivalence and the Invariance Principle are orthogonal to each other. This means that it is possible to hold that symmetry-related models represent the same state of affairs whilst having a realist attitude towards variant quantities. Various arguments have been presented in favour of the Invariance Principle: a rejection of the Invariance Principle is inter alia supposed to cause indeterminism, undetectability or failure of reference. I respond that these arguments at best support Leibniz Equivalence.
Context: Consistency of mathematical constructions in numerical analysis and the application of computerized proofs in the light of the occurrence of numerical chaos in simple systems. Purpose: To show that a computer in general and a numerical analysis in particular can add its own peculiarities to the subject under study. Hence the need for thorough theoretical studies on chaos in numerical simulation. Hence, a questioning of what e.g. a numerical disproof of a theorem in physics or a prediction in numerical economics could mean. Method: A simple algebraic model system is subjected to a deeper structure of underlying variables. With an algorithm simulating the steps in taking a limit of second order difference quotients, the error terms are studied against the background of their algebraic expression. Results: With the algorithm that was applied to a simple quadratic polynomial system we found unstably amplified round-off errors. The possibility of numerical chaos is already known, but not in such a simple system as used in our paper. The amplification of the errors implies that it is not possible with computer means to constructively show that the algebra and the numerical analysis will 'in the long run' converge to each other and the error term will vanish. The algebraic vanishing of the error term cannot be demonstrated with the use of the computer because the round-off errors are amplified. In philosophical terms, the amplification of the round-off error is equivalent to the continuum hypothesis. This means that the requirement of (numerical) construction of mathematical objects is no safeguard against inference-only conclusions about qualities of (numerical) mathematical objects. Unstably amplified round-off errors are the same type of problem as the ordering in size of transfinite cardinal numbers. The difference is that the former problem is created within the requirements of constructive mathematics. This can be seen as the reward for working in a numerically constructive way.
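The amplification phenomenon described above can be illustrated with a minimal sketch (an illustrative toy, not the paper's specific second-order difference algorithm): iterating a simple quadratic map in ordinary floating point alongside a high-precision reference shows the round-off gap growing rather than vanishing.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # treat 60-digit decimal arithmetic as the reference


def orbit_gap(steps):
    """Gap between a float orbit and a high-precision orbit of the
    quadratic map x -> 4x(1-x), starting from the same nominal seed."""
    x_float = 0.1            # 0.1 is not exactly representable: round-off enters here
    x_ref = Decimal("0.1")   # exact decimal seed, tracked to 60 digits
    for _ in range(steps):
        x_float = 4 * x_float * (1 - x_float)
        x_ref = 4 * x_ref * (1 - x_ref)
    return abs(x_float - float(x_ref))
```

After a handful of steps the gap is still near machine precision, but because the map roughly doubles any perturbation per iteration, a few dozen steps suffice for the initial round-off to be amplified by many orders of magnitude instead of converging away.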
The aim of this paper is to describe and analyze the epistemological justification of a proposal initially made by the biomathematician Robert Rosen in 1958. In this theoretical proposal, Rosen suggests using the mathematical concept of “category” and the correlative concept of “natural equivalence” in mathematical modeling applied to living beings. Our questions are the following: According to Rosen, to what extent does the mathematical notion of category give access to more “natural” formalisms in the modeling of living beings? Is the so-called “naturalness” of some kinds of equivalences (which the mathematical notion of category makes it possible to generalize and to put at the forefront) analogous to the naturalness of living systems? Rosen appears to answer “yes” and to ground this transfer of the concept of “natural equivalence” in biology on such an analogy. But this hypothesis, although fertile, remains debatable. Finally, this paper makes a brief account of the later evolution of Rosen’s arguments about this topic. In particular, it sheds light on the new role played by the notion of “category” in his more recent objections to the computational models that have pervaded almost every domain of biology since the 1990s.
The lattice operations of join and meet were defined for set partitions in the nineteenth century, but no new logical operations on partitions were defined and studied during the twentieth century. Yet there is a simple and natural graph-theoretic method presented here to define any n-ary Boolean operation on partitions. An equivalent closure-theoretic method is also defined. In closing, the question is addressed of why it took so long for all Boolean operations to be defined for partitions.
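For concreteness, the nineteenth-century lattice operations mentioned above can be sketched directly (this shows only the classical join and meet of set partitions, not the paper's graph-theoretic construction of arbitrary n-ary Boolean operations):

```python
def meet(p, q):
    """Meet of two partitions: the common refinement, whose blocks
    are the nonempty pairwise intersections of blocks of p and q."""
    return [b & c for b in p for c in q if b & c]


def join(p, q):
    """Join of two partitions: the finest common coarsening,
    obtained by repeatedly merging any blocks that overlap."""
    blocks = [set(b) for b in list(p) + list(q)]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:   # overlapping blocks belong together
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return blocks
```

For example, the meet of {{1,2},{3,4}} and {{1},{2,3},{4}} is the discrete partition {{1},{2},{3},{4}}, while their join is the single block {1,2,3,4}.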
The main algebraic foundations of quantum mechanics are quickly reviewed. They have been proposed from the birth of this theory up to recent years. They are the following ones: Heisenberg-Born-Jordan’s (1925), Weyl’s (1928), Dirac’s (1930), von Neumann’s (1936), Segal’s (1947), T.F. Jordan’s (1986), Morchio and Strocchi’s (2009), and Buchholz and Fredenhagen’s (2019). Four cases are stressed: 1) the misinterpretation of Dirac’s algebraic foundation; 2) von Neumann’s ‘conversion’ from the analytic approach of Hilbert space to the algebraic approach of the rings of operators; 3) Morchio and Strocchi’s improvement of Dirac’s analogy between commutators and Poisson brackets into an exact equivalence; 4) the recent foundation of quantum mechanics upon the algebra of perturbations. Some considerations on the alternating theoretical importance of the algebraic approach in the history of QM are offered. The level of formalism has increased from the mere introduction of matrices to group theory and C*-algebras, but has not led to a definition of the foundations of physics; in particular, an algebraic formulation of QM organized as a problem-based theory with an exclusive use of constructive mathematics is still to be discovered.
The Nash counterfactual considers the question: what would happen were I to change my behaviour assuming no one else does. By contrast, the Kantian counterfactual considers the question: what would happen were everyone to deviate from some behaviour. We present a model that endogenizes the decision to engage in this type of Kantian reasoning. Autonomous agents using this moral framework receive psychic payoffs equivalent to the cooperate-cooperate payoff in Prisoner’s Dilemma regardless of the other player’s action. Moreover, if both interacting agents play Prisoner’s Dilemma using this moral framework, their material outcomes are a Pareto improvement over the Nash equilibrium.
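The contrast between the two counterfactuals can be made concrete with a minimal sketch, using standard illustrative Prisoner's Dilemma payoffs (the specific numbers and the bare maximization are assumptions for illustration, not the paper's model of endogenized Kantian reasoning):

```python
# Material payoffs (row player, column player) for a standard Prisoner's
# Dilemma; "C" = cooperate, "D" = defect. Numbers are illustrative.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}


def best_reply(opponent_action):
    """Nash counterfactual: vary only my own action, holding the
    opponent's action fixed."""
    return max("CD", key=lambda a: PAYOFFS[(a, opponent_action)][0])


def kantian_action():
    """Kantian counterfactual: evaluate only profiles in which
    everyone deviates to the same action."""
    return max("CD", key=lambda a: PAYOFFS[(a, a)][0])
```

Defection is the best reply to either opponent action, so mutual defection is the Nash equilibrium; Kantian reasoners compare (C, C) with (D, D) and choose to cooperate, and the resulting material outcome (3, 3) is a Pareto improvement over the Nash outcome (1, 1).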
The mathematical structure of realist quantum theories has given rise to a debate about how our ordinary 3-dimensional space is related to the 3N-dimensional configuration space on which the wave function is defined. Which of the two spaces is our (more) fundamental physical space? I review the debate between 3N-Fundamentalists and 3D-Fundamentalists and evaluate it based on three criteria. I argue that when we consider which view leads to a deeper understanding of the physical world, especially given the deeper topological explanation from the unordered configurations to the Symmetrization Postulate, we have strong reasons in favor of 3D-Fundamentalism. I conclude that our evidence favors the view that our fundamental physical space in a quantum world is 3-dimensional rather than 3N-dimensional. I outline lines of future research where the evidential balance can be restored or reversed. Finally, I draw lessons from this case study to the debate about theoretical equivalence.
Quine argues that if sentences that are set theoretically equivalent are interchangeable salva veritate, then all transparent operators are truth-functional. Criticisms of this argument fail to take into account the conditional character of the conclusion. Quine also argues that, for any person P with minimal logical acuity, if ‘belief’ has a sense in which it is a transparent operator, then, in that sense of the word, P believes everything if P believes anything. The suggestion is made that he intends that result to show us that ‘believes’ has no transparent sense. Criticisms of this argument are either based on unwarranted assertions or on definitions of key terms that depart from Quine’s usage of those terms.
Philosophers sometimes give arguments that presuppose the following principle: two theories can fail to be empirically equivalent on the sole basis that they present different 'thick' metaphysical pictures of the world. Recently, a version of this principle has been invoked to respond to the argument that composite objects are dispensable to our best scientific theories. This response claims that our scientific evidence distinguishes between ordinary and composite-free theories, and it empirically favors the ordinary ones (Hofweber, 2016, 2018). In this paper, I ask whether this response to the dispensability argument is tenable. I claim that it is not. This is because it presupposes an indefensible thesis about when two empirical consequences are distinct or the same. My argument provides some insight into what our empirical consequences are, and I conclude that scientific evidence is radically metaphysically neutral. This gives us some insight into the significant content of our scientific theories---the content that a scientific realist is committed to---and I show how this insight relates to questions about theoretical equivalence more broadly.
One of the objections against the thesis of underdetermination of theories by observations is that it is unintelligible. Any two empirically equivalent theories, so the argument goes, are in principle intertranslatable, and hence cannot count as rivals in any non-trivial sense. Against that objection, this paper shows that empirically equivalent theories may contain theoretical sentences that are not intertranslatable. Examples are drawn from a related discussion about incommensurability that shows that theoretical non-intertranslatability is possible.
Choice-theoretic and philosophical accounts of rationality and reasoning address a multi-attitude psychology, including beliefs, desires, intentions, etc. By contrast, logicians traditionally focus on beliefs only. Yet there is 'logic' in multiple attitudes. We propose a generalization of the three standard logical requirements on beliefs -- consistency, completeness, and deductive closedness -- towards multiple attitudes. How do these three logical requirements relate to rational requirements, e.g., of transitive preferences or non-akratic intentions? We establish a systematic correspondence: each logical requirement (consistency, completeness, or closedness) is equivalent to a class of rational requirements. Loosely speaking, this correspondence connects the logical and rational approaches to psychology. Addressing John Broome's central question, we characterize the extent to which reasoning can help achieve consistent, complete, or closed attitudes, respectively.
Assertoric sentences are sentences which admit of truth or falsity. Non-assertoric sentences, imperatives and interrogatives, have long been a source of difficulty for the view that a theory of truth for a natural language can serve as the core of a theory of meaning. The trouble for truth-theoretic semantics posed by non-assertoric sentences is that, prima facie, it does not make sense to say that imperatives, such as 'Cut your hair', or interrogatives such as 'What time is it?', are true or false. Thus, the vehicle for giving the meaning of a sentence by using an interpretive truth theory, the T-sentence, is apparently unavailable for non-assertoric sentences. This paper shows how to incorporate non-assertoric sentences into a theory of meaning that gives central place to an interpretive truth theory for the language, without, however, reducing the non-assertorics to assertorics, or treating their utterances as semantically equivalent to one or more utterances of assertoric sentences. Four proposals for how to incorporate non-assertoric sentences into a broadly truth-theoretic semantics are reviewed. The proposals fall into two classes, those that attempt to explain the meaning of non-assertoric sentences solely by appeal to truth conditions, and those that attempt to explain the meaning of non-assertoric sentences by appeal to compliance conditions, which can be treated as one variety of fulfillment conditions for sentences of which truth conditions are another variety. The paper argues that none of the extant approaches is successful, but develops a version of the generalized fulfillment approach which avoids the difficulties of previous approaches and still exhibits a truth theory as the central component of a compositional meaning theory for all sentences of natural language.
The INBIOSA project brings together a group of experts across many disciplines who believe that science requires a revolutionary transformative step in order to address many of the vexing challenges presented by the world. It is INBIOSA’s purpose to enable the focused collaboration of an interdisciplinary community of original thinkers. This paper sets out the case for support for this effort. The focus of the transformative research program proposal is biology-centric. We admit that biology to date has been more fact-oriented and less theoretical than physics. However, the key leverageable idea is that careful extension of the science of living systems can be more effectively applied to some of our most vexing modern problems than the prevailing scheme, derived from abstractions in physics. While these have some universal application and demonstrate computational advantages, they are not theoretically mandated for the living. A new set of mathematical abstractions derived from biology can now be similarly extended. This is made possible by leveraging new formal tools to understand abstraction and enable computability. [The latter has a much expanded meaning in our context from the one known and used in computer science and biology today, that is "by rote algorithmic means", since it is not known if a living system is computable in this sense (Mossio et al., 2009).] Two major challenges constitute the effort. The first challenge is to design an original general system of abstractions within the biological domain. The initial issue is descriptive leading to the explanatory. There has not yet been a serious formal examination of the abstractions of the biological domain. What is used today is an amalgam; much is inherited from physics (via the bridging abstractions of chemistry) and there are many new abstractions from advances in mathematics (incentivized by the need for more capable computational analyses).
Interspersed are abstractions, concepts and underlying assumptions “native” to biology and distinct from the mechanical language of physics and computation as we know them. A pressing agenda should be to single out the most concrete and at the same time the most fundamental process-units in biology and to recruit them into the descriptive domain. Therefore, the first challenge is to build a coherent formal system of abstractions and operations that is truly native to living systems. Nothing will be thrown away, but many common methods will be philosophically recast, just as in physics relativity subsumed and reinterpreted Newtonian mechanics. This step is required because we need a comprehensible, formal system to apply in many domains. Emphasis should be placed on the distinction between multi-perspective analysis and synthesis and on what could be the basic terms or tools needed. The second challenge is relatively simple: the actual application of this set of biology-centric ways and means to cross-disciplinary problems. In its early stages, this will seem to be a “new science”. This White Paper sets out the case of continuing support of Information and Communication Technology (ICT) for transformative research in biology and information processing centered on paradigm changes in the epistemological, ontological, mathematical and computational bases of the science of living systems. Today, curiously, living systems cannot be said to be anything more than dissipative structures organized internally by genetic information. There is not anything substantially different from abiotic systems other than the empirical nature of their robustness. We believe that there are other new and unique properties and patterns comprehensible at this bio-logical level. The report lays out a fundamental set of approaches to articulate these properties and patterns, and is composed as follows.
Sections 1 through 4 (preamble, introduction, motivation and major biomathematical problems) are incipient. Section 5 describes the issues affecting Integral Biomathics, and Section 6 the aspects of the Grand Challenge we face with this project. Section 7 contemplates the effort to formalize a General Theory of Living Systems (GTLS) from what we have today. The goal is to have a formal system equivalent to that which exists in the physics community. Here we define how to perceive the role of time in biology. Section 8 describes the initial efforts to apply this general theory of living systems in many domains, with special emphasis on cross-disciplinary problems and multiple domains spanning both “hard” and “soft” sciences. The expected result is a coherent collection of integrated mathematical techniques. Section 9 discusses the first two test cases, project proposals, of our approach. They are designed to demonstrate the ability of our approach to address “wicked problems” which span across physics, chemistry, biology, societies and societal dynamics. The solutions require integrated measurable results at multiple levels known as “grand challenges” to existing methods. Finally, Section 10 issues an appeal for action, advocating the necessity of further long-term support of the INBIOSA program. The report is concluded with a preliminary, non-exclusive list of challenging research themes to address, as well as required administrative actions. The efforts described in the ten sections of this White Paper will proceed concurrently. Collectively, they describe a program that can be managed and measured as it progresses.
Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven’t done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of ‘procreative beneficence’ towards robots.
The paper has three objectives: to expound a set-theoretical triplet model of concepts; to introduce some triplet relations (symbolic, logical, and mathematical formalization; equivalence, intersection, disjointness) between object concepts; and to instantiate them by relations between certain physical object concepts.
In a recent paper, Jiri Benovsky argues that the bundle theory and the substratum theory, traditionally regarded as ‘deadly enemies’ in the metaphysics literature, are in fact ‘twin brothers’. That is, they turn out to be ‘equivalent for all theoretical purposes’ upon analysis. The only exception, according to Benovsky, is a particular version of the bundle theory whose distinguishing features render it unappealing. In the present reply article, I critically analyse these undoubtedly relevant claims, and reject them.
Book Description (Blurb): Cognitive Design for Artificial Minds explains the crucial role that human cognition research plays in the design and realization of artificial intelligence systems, illustrating the steps necessary for the design of artificial models of cognition. It bridges the gap between the theoretical, experimental and technological issues addressed in the context of AI of cognitive inspiration and computational cognitive science. Beginning with an overview of the historical, methodological and technical issues in the field of Cognitively-Inspired Artificial Intelligence, Lieto illustrates how the cognitive design approach has an important role to play in the development of intelligent AI technologies and plausible computational models of cognition. Introducing a unique perspective that draws upon Cybernetics and early AI principles, Lieto emphasizes the need for an equivalence between cognitive processes and implemented AI procedures, in order to realise biologically and cognitively inspired artificial minds. He also introduces the Minimal Cognitive Grid, a pragmatic method to rank the different degrees of biological and cognitive accuracy of artificial systems, in order to project and predict their explanatory power with respect to the natural systems taken as a source of inspiration. Providing a comprehensive overview of cognitive design principles in constructing artificial minds, this text will be essential reading for students and researchers of artificial intelligence and cognitive science.
The notion of strength has featured prominently in recent debates about abductivism in the epistemology of logic. Following Williamson and Russell, we distinguish between logical and scientific strength and discuss the limits of the characterizations they employ. We then suggest understanding logical strength in terms of interpretability strength and scientific strength as a special case of logical strength. We present applications of the resulting notions to comparisons between logics in the traditional sense and mathematical theories.
This is a chapter of the planned monograph "Out of Nowhere: The Emergence of Spacetime in Quantum Theories of Gravity", co-authored by Nick Huggett and Christian Wüthrich and under contract with Oxford University Press. (More information at www<dot>beyondspacetime<dot>net.) This chapter investigates the meaning and significance of string theoretic dualities, arguing they reveal a surprising physical indeterminateness to spacetime.
An important moral category—dishonest speech—has been overlooked in theoretical ethics despite its importance in legal, political, and everyday social exchanges. Discussion in this area has instead been fixated on a binary debate over the contrast between lying and ‘merely misleading’. Some see lying as a distinctive wrong; others see it as morally equivalent to deliberately omitting relevant truths, falsely insinuating, or any other species of attempted verbal deception. Parties to this debate have missed the relevance to their disagreement of the notion of communicative dishonesty. Communicative dishonesty need not take the form of a lie, yet its wrongness does not reduce to the wrongness of seeking to deceive. This paper therefore proposes a major shift of attention away from the lying/misleading debate and towards the topic of communicative dishonesty. Dishonesty is not a simple notion to define, however. It presupposes a difficult distinction between what is and is not expressed in a given utterance. This differs from the more familiar distinction between what is and is not said, the distinction at the heart of the lying/misleading debate. This paper uses an idea central to speech act theory to characterize dishonesty in terms of the utterer’s communicative intentions, and applies the resulting definition to a variety of contexts.
In this paper, I explore several versions of the bundle theory and the substratum theory and compare them, with the surprising result that they appear to be equivalent (in a sense of 'equivalent' to be specified). In order to see whether this is correct or not, I go through several steps: first, I examine different versions of the bundle theory with tropes and compare them to the substratum theory with tropes by going through various standard objections and arguing for a tu quoque in all cases. Emphasizing the theoretical role of the substratum and of the relation of compresence, I defend the claim that these views are equivalent for all theoretical purposes. I then examine two different versions of the bundle theory with universals, and show that one of them is, here again, equivalent to the substratum theory with universals, by examining how both views face the famous objection from the Identity of Indiscernibles in a completely parallel way. It is only the second, quite extreme and puzzling, version of the bundle theory with universals that is not equivalent to any other view; and the diagnosis of why this is so will show just how unpalatable the view is. Similarly, only a not-so-palatable version of the substratum theory is genuinely different from the other views; and here again it is precisely what makes it different that makes it less appealing.
Recent pragmatic accounts of slurs argue that the offensiveness of slurs is generated by a speaker's free choice to use a slur as opposed to a more appropriate and semantically equivalent neutral counterpart. I argue that the theoretical role of neutral counterparts on such views is overstated. I consider two recent pragmatic analyses, Bolinger (2017) and Nunberg (2018), which rely heavily upon the optionality of slurs, namely, that a speaker exercises a deliberate lexical choice to use a slur when they could have easily used a neutral counterpart instead. Against such views, I argue that across a range of different offensive uses of slurs, a speaker's choice to use a slur as opposed to a neutral counterpart plays little to no role in accounting for why the slur generates offence. Such cases cast serious doubt upon the explanatory depth of these pragmatic analyses, and raise more general concerns for views which draw upon the relationship between a slur and its neutral counterpart. The main upshot is this: theorists should exercise caution in assuming that neutral counterparts play any fundamental or systemic role in explaining why slurs are offensive.
We characterize access to empirical objects in biology from a theoretical perspective. Unlike objects in current physical theories, biological objects are the result of a history and their variations continue to generate a history. This property is the starting point of our concept of measurement. We argue that biological measurement is relative to a natural history which is shared by the different objects subjected to the measurement and is more or less constrained by biologists. We call symmetrization the theoretical and often concrete operation which leads to considering biological objects as equivalent in a measurement. Last, we use our notion of measurement to analyze research strategies. Some strategies aim to bring biology closer to the epistemology of physical theories, by studying objects as similar as possible, while others build on biological diversity.
We review some of the main implications of the free-energy principle (FEP) for the study of the self-organization of living systems – and how the FEP can help us to understand (and model) biotic self-organization across the many temporal and spatial scales over which life exists. In order to maintain its integrity as a bounded system, any biological system - from single cells to complex organisms and societies - has to limit the disorder or dispersion (i.e., the long-run entropy) of its constituent states. We review how this can be achieved by living systems that minimize their variational free energy. Variational free energy is an information-theoretic construct, originally introduced into theoretical neuroscience and biology to explain perception, action, and learning. It has since been extended to explain the evolution, development, form, and function of entire organisms, providing a principled model of biotic self-organization and autopoiesis. It has provided insights into biological systems across spatiotemporal scales, ranging from microscales (e.g., sub- and multicellular dynamics), to intermediate scales (e.g., groups of interacting animals and culture), through to macroscale phenomena (the evolution of entire species). A crucial corollary of the FEP is that an organism just is (i.e., embodies or entails) an implicit model of its environment. As such, organisms come to embody causal relationships of their ecological niche, which, in turn, is influenced by their resulting behaviors. Crucially, free-energy minimization can be shown to be equivalent to the maximization of Bayesian model evidence. This allows us to cast natural selection in terms of Bayesian model selection, providing a robust theoretical account of how organisms come to match or accommodate the spatiotemporal complexity of their surrounding niche.
In line with the theme of this volume, namely biological complexity and self-organization, this chapter will examine a variational approach to self-organization across multiple dynamical scales.
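The claimed equivalence between free-energy minimization and the maximization of Bayesian model evidence can be checked numerically on a toy discrete model (the model, state names, and numbers below are my own illustration, not the chapter's): variational free energy equals the negative log evidence plus the KL divergence from the approximate to the true posterior, so it is bounded below by the negative log evidence and attains that bound exactly at the true posterior.

```python
import math

# Toy check of F = -log p(o) + KL(q || p(.|o)) >= -log p(o),
# with equality when q is the exact Bayesian posterior.

def free_energy(q, prior, likelihood, obs):
    """Variational free energy for a discrete latent state and one observation.
    q, prior: dicts over states; likelihood[s][obs] = p(obs | s)."""
    return sum(q[s] * math.log(q[s] / (prior[s] * likelihood[s][obs]))
               for s in q if q[s] > 0)

prior = {"s1": 0.5, "s2": 0.5}
likelihood = {"s1": {"o": 0.9}, "s2": {"o": 0.1}}

# Model evidence p(o) and the exact posterior p(s | o).
evidence = sum(prior[s] * likelihood[s]["o"] for s in prior)
true_post = {s: prior[s] * likelihood[s]["o"] / evidence for s in prior}

exact = free_energy(true_post, prior, likelihood, "o")
sloppy = free_energy({"s1": 0.5, "s2": 0.5}, prior, likelihood, "o")

print(abs(exact - (-math.log(evidence))) < 1e-9)  # True: F = -log evidence at the true posterior
print(sloppy > exact)                             # True: any other q incurs a KL penalty
```

Since lower free energy means higher (bounded) model evidence, selecting the organism/model with the lowest achievable free energy is, in this toy setting, the same as Bayesian model selection.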
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard–Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
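The contrast between the two aggregation rules can be made concrete with the classic "doctrinal paradox" profile (a standard textbook example, not drawn from this paper), where the conclusion r is the conjunction of premises p and q:

```python
# Premise-based vs conclusion-based majority voting on the
# doctrinal-paradox profile: conclusion r = p AND q.

def majority(votes):
    """True iff a strict majority of the boolean votes are True."""
    return sum(votes) > len(votes) / 2

# Each judge's sincere judgments on the two premises.
judges = [
    {"p": True,  "q": True},   # judge 1 accepts both premises (and hence r)
    {"p": True,  "q": False},  # judge 2 rejects q (and hence r)
    {"p": False, "q": True},   # judge 3 rejects p (and hence r)
]

# Premise-based voting: take majorities on p and q, then derive r logically.
p_maj = majority([j["p"] for j in judges])
q_maj = majority([j["q"] for j in judges])
premise_based = p_maj and q_maj

# Conclusion-based voting: each judge derives r individually; take the majority on r.
conclusion_based = majority([j["p"] and j["q"] for j in judges])

print(premise_based)     # True: both premises gain majorities
print(conclusion_based)  # False: only one judge accepts the conclusion
```

The two rules disagree on the same profile, which is what opens the door to strategic voting: a judge who cares only about the outcome r has an incentive to misreport premise judgments under premise-based voting.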
A model-theoretic realist account of science places linguistic systems and their corresponding non-linguistic structures at different stages or different levels of abstraction of the scientific process. Apart from the obvious problem of underdetermination of theories by data, philosophers of science are also faced with the inverse (and very real) problem of overdetermination of theories by their empirical models, which is what this article will focus on. I acknowledge the contingency of the factors determining the nature – and choice – of a certain model at a certain time, but in my terms, this is a matter about which we can talk and whose structure we can formalise. In this article a mechanism for tracing "empirical choices" and their particularized observational-theoretical entanglements will be offered in the form of Yoav Shoham's version of non-monotonic logic. Such an analysis of the structure of scientific theories may clarify the motivations underlying choices in favor of certain empirical models (and not others) in a way that shows that "disentangling" theoretical and observation terms is more deeply model-specific than theory-specific. This kind of analysis offers a method for getting an articulable grip on the overdetermination of theories by their models – implied by empirical equivalence – which Kuipers' structuralist analysis of the structure of theories does not offer.
We investigate an enrichment of the propositional modal language ℒ with a "universal" modality ■ having semantics x ⊧ ■φ iff ∀y(y ⊧ φ), and a countable set of "names" - a special kind of propositional variables ranging over singleton sets of worlds. The obtained language ℒ_c proves to have great expressive power. It is equivalent with respect to modal definability to another enrichment ℒ(⍯) of ℒ, where ⍯ is an additional modality with the semantics x ⊧ ⍯φ iff ∀y(y ≠ x → y ⊧ φ). Model-theoretic characterizations of modal definability in these languages are obtained. Further we consider deductive systems in ℒ_c. Strong completeness of the normal ℒ_c logics is proved with respect to models in which all worlds are named. Every ℒ_c-logic axiomatized by formulae containing only names (but not propositional variables) is proved to be strongly frame-complete. Problems concerning transfer of properties ([in]completeness, filtration, finite model property, etc.) from ℒ to ℒ_c are discussed. Finally, further perspectives for names in a multimodal environment are briefly sketched.
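The two modal semantics quoted in the abstract are simple enough to evaluate directly. The sketch below (my own encoding; the operator tags "U" for the universal modality ■ and "E" for the "elsewhere" modality ⍯ are hypothetical names) checks formulas at worlds of a finite model:

```python
# Minimal evaluator for the universal modality (true at EVERY world)
# and the "elsewhere" modality (true at every world OTHER than the current one).

def holds(world, formula, worlds, val):
    """Evaluate a formula at a world. Formulas are nested tuples:
    a string is an atom; ('not', f), ('and', f, g), ('U', f), ('E', f)."""
    if isinstance(formula, str):
        return world in val[formula]
    op = formula[0]
    if op == "not":
        return not holds(world, formula[1], worlds, val)
    if op == "and":
        return (holds(world, formula[1], worlds, val)
                and holds(world, formula[2], worlds, val))
    if op == "U":   # x |= [U]phi  iff  for all y: y |= phi
        return all(holds(y, formula[1], worlds, val) for y in worlds)
    if op == "E":   # x |= [E]phi  iff  for all y != x: y |= phi
        return all(holds(y, formula[1], worlds, val)
                   for y in worlds if y != world)
    raise ValueError(f"unknown operator: {op}")

worlds = {1, 2, 3}
val = {"p": {1, 2, 3}, "q": {2, 3}}   # q fails only at world 1

print(holds(1, ("U", "p"), worlds, val))  # True: p holds everywhere
print(holds(1, ("U", "q"), worlds, val))  # False: q fails at world 1
print(holds(1, ("E", "q"), worlds, val))  # True: q holds at all worlds other than 1
```

Note that ■φ is then expressible as φ ∧ ⍯φ, the standard interdefinability that underlies equivalence claims of this kind.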
Stephen Jay Gould argued that replaying the ‘tape of life’ would result in radically different evolutionary outcomes. Recently, biologists and philosophers of science have paid increasing attention to the theoretical importance of convergent evolution—the independent origination of similar biological forms and functions—which many interpret as evidence against Gould’s thesis. In this paper, we examine the evidentiary relevance of convergent evolution for the radical contingency debate. We show that under the right conditions, episodes of convergent evolution can constitute valid natural experiments that support inferences regarding the deep counterfactual stability of macroevolutionary outcomes. However, we argue that proponents of convergence have problematically lumped causally heterogeneous phenomena into a single evidentiary basket, in effect treating all convergent events as if they are of equivalent theoretical import. As a result, the ‘critique from convergent evolution’ fails to engage with key claims of the radical contingency thesis. To remedy this, we develop ways to break down the heterogeneous set of convergent events based on the nature of the generalizations they support. Adopting this more nuanced approach to convergent evolution allows us to differentiate iterated evolutionary outcomes that are probably common among alternative evolutionary histories and subject to law-like generalizations from those that do little to undermine, and may even support, the Gouldian view of life.
We generalize and extend the class of Sahlqvist formulae in arbitrary polyadic modal languages to the class of so-called inductive formulae. To introduce them we use a representation of modal polyadic languages in a combinatorial style and thus, in particular, develop what we believe to be a better syntactic approach to elementary canonical formulae altogether. By generalizing the method of minimal valuations à la Sahlqvist–van Benthem and the topological approach of Sambin and Vaccaro, we prove that all inductive formulae are elementary canonical and thus extend Sahlqvist’s theorem over them. In particular, we give a simple example of an inductive formula which is not frame-equivalent to any Sahlqvist formula. Then, after a deeper analysis of the inductive formulae as set-theoretic operators in descriptive and Kripke frames, we establish a somewhat stronger model-theoretic characterization of these formulae in terms of a suitable equivalence to syntactically simpler formulae in the extension of the language with reversive modalities. Lastly, we study and characterize the elementary canonical formulae in reversive languages with nominals, where the relevant notion of persistence is with respect to discrete frames.
This PhD dissertation examines the conceptual and theoretical foundations of the most general and most widely used framework for understanding social evolution, W. D. Hamilton's theory of kin selection. While the core idea is intuitive enough (when organisms share genes, they sometimes have an evolutionary incentive to help one another), its apparent simplicity masks a host of conceptual subtleties, and the theory has proved a perennial source of controversy in evolutionary biology. To move towards a resolution of these controversies, we need a careful and rigorous analysis of the philosophical foundations of the theory. My aim in this work is to provide such an analysis. I begin with an examination of the concepts behavioural ecologists employ to describe and classify types of social behaviour. I stress the need to distinguish concepts that are often conflated: for example, we need to distinguish simple cooperation from collaboration in collective tasks, behaviours from strategies, and control from manipulation and coercion. I proceed from here to the formal representation of kin selection via George R. Price’s covariance selection mathematics. I address a number of interpretative issues the Price formalism raises, including the vexed question of whether kin selection theory is ‘formally equivalent’ to multi-level selection theory. In the second half of the dissertation, I assess the uses and limits of Hamilton’s rule for the evolution of social behaviour; I provide a precise statement of the conditions under which the rival neighbour-modulated fitness and inclusive fitness approaches in contemporary kin selection theory are equivalent (and describe cases in which they are not); and I criticize recent formal attempts to establish the controversial claim that kin selection leads to organisms behaving as if maximizing their inclusive fitness.
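Hamilton's rule, the centrepiece of the theory discussed above, has a compact standard statement: altruism is favoured when rb > c, with r the coefficient of relatedness, b the fitness benefit to the recipient, and c the fitness cost to the actor. The numbers below are my own toy illustration, not the dissertation's:

```python
# Hamilton's rule in its textbook form: altruism is selected for
# when relatedness-weighted benefit exceeds the actor's cost.

def favoured_by_kin_selection(r, b, c):
    """Return True when r*b > c (Hamilton's rule condition holds)."""
    return r * b > c

# Helping a full sibling (r = 0.5) pays off only when the benefit
# is more than twice the cost.
print(favoured_by_kin_selection(r=0.5, b=3.0, c=1.0))  # True:  0.5 * 3.0 > 1.0
print(favoured_by_kin_selection(r=0.5, b=1.5, c=1.0))  # False: 0.5 * 1.5 < 1.0
```

The conceptual controversies the dissertation addresses concern not this inequality itself but how r, b, and c are to be defined and measured, and under what assumptions the rule holds at all.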
The purpose of this note is to present a strong form of the liar paradox. It is strong because the logical resources needed to generate the paradox are weak, in each of two senses. First, few expressive resources are required: conjunction, negation, and identity. In particular, this form of the liar does not need to make any use of the conditional. Second, few inferential resources are required. These are: (i) conjunction introduction; (ii) substitution of identicals; and (iii) the inference: from ¬(p ∧ p), infer ¬p. It is, interestingly enough, also essential to the argument that the ‘strong’ form of the diagonal lemma be used: the one that delivers a term λ such that we can prove: λ = ⌈¬T(λ)⌉; rather than just a sentence Λ for which we can prove: Λ ≡ ¬T(⌈Λ⌉). The truth-theoretic principles used to generate the paradox are these: ¬(S ∧ T(⌈¬S⌉)); and ¬(¬S ∧ ¬T(⌈¬S⌉)). These are classically equivalent to the two directions of the T-scheme, but they are intuitively weaker. The lesson I would like to draw is: there can be no consistent solution to the Liar paradox that does not involve abandoning truth-theoretic principles that should be every bit as dear to our hearts as the T-scheme. So we shall have to learn to live with the Liar, one way or another.
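The shape of the argument can be sketched as follows. This is a hedged reconstruction, reading the strong diagonal lemma as supplying a term λ with λ = ⌈¬T(λ)⌉ and instantiating both truth-theoretic principles with S := T(λ); the note's own derivation may differ in detail:

```latex
\begin{align*}
(\mathrm{D})\quad & \lambda = \ulcorner \neg T(\lambda) \urcorner
  && \text{strong diagonal lemma} \\
(1)\quad & \neg\bigl(T(\lambda) \wedge T(\ulcorner \neg T(\lambda) \urcorner)\bigr)
  && \text{principle } \neg(S \wedge T(\ulcorner\neg S\urcorner)),\ S := T(\lambda) \\
(2)\quad & \neg\bigl(T(\lambda) \wedge T(\lambda)\bigr)
  && \text{from (1), (D), by substitution of identicals} \\
(3)\quad & \neg T(\lambda)
  && \text{from (2), by rule (iii)} \\
(4)\quad & \neg\bigl(\neg T(\lambda) \wedge \neg T(\ulcorner \neg T(\lambda) \urcorner)\bigr)
  && \text{principle } \neg(\neg S \wedge \neg T(\ulcorner\neg S\urcorner)),\ S := T(\lambda) \\
(5)\quad & \neg\bigl(\neg T(\lambda) \wedge \neg T(\lambda)\bigr)
  && \text{from (4), (D), by substitution of identicals} \\
(6)\quad & \neg\neg T(\lambda)
  && \text{from (5), by rule (iii) with } p := \neg T(\lambda) \\
(7)\quad & \neg T(\lambda) \wedge \neg\neg T(\lambda)
  && \text{from (3), (6), by conjunction introduction: contradiction}
\end{align*}
```

Note that neither the conditional nor the full T-scheme appears anywhere in the derivation, which is the point of calling the paradox "strong".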
Two expressive limitations of an infinitary higher-order modal language interpreted on models for higher-order contingentism – the thesis that it is contingent what propositions, properties and relations there are – are established: First, the inexpressibility of certain relations, which leads to the fact that certain model-theoretic existence conditions for relations cannot equivalently be reformulated in terms of being expressible in such a language. Second, the inexpressibility of certain modalized cardinality claims, which shows that in such a language, higher-order contingentists cannot express what is communicated using various instances of talk of ‘possible things’, such as ‘there are uncountably many possible stars’.
This paper describes a cubic water tank equipped with a movable partition receiving various amounts of liquid, used to represent joint probability distributions. This device is applied to the investigation of deductive inferences under uncertainty. The analogy is exploited to determine by qualitative reasoning the limits in probability of the conclusion of twenty basic deductive arguments (such as Modus Ponens, And-introduction, Contraposition, etc.) often used as benchmark problems by the various theoretical approaches to reasoning under uncertainty. The probability bounds imposed by the premises on the conclusion are derived on the basis of a few trivial principles such as "a part of the tank cannot contain more liquid than its capacity allows", or "if a part is empty, the other part contains all the liquid". This stems from the equivalence between the physical constraints imposed by the capacity of the tank and its subdivisions on the volumes of liquid, and the axioms and rules of probability. The device materializes de Finetti's coherence approach to probability. It also suggests a physical counterpart of Dutch book arguments to assess individuals' rationality in probability judgments, in the sense that individuals whose degrees of belief in a conclusion are out of the bounds of coherence intervals would commit themselves to executing physically impossible tasks.
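For the Modus Ponens benchmark, the water-tank reasoning has a well-known numeric counterpart (the function and numbers below are my own sketch of the standard coherence interval, not the paper's device): given P(p) and P(q|p), total probability confines P(q) to [P(p)P(q|p), P(p)P(q|p) + 1 − P(p)].

```python
# Coherence interval for the conclusion of Modus Ponens.
# Intuition in tank terms: the "q" liquid must include the q-part of the
# p-region, and can at most also fill the entire not-p region.

def modus_ponens_bounds(p_p, p_q_given_p):
    """Return (lower, upper) coherent bounds on P(q),
    given P(p) = p_p and P(q | p) = p_q_given_p."""
    lower = p_p * p_q_given_p        # P(p and q): q's guaranteed share
    upper = lower + (1.0 - p_p)      # plus, at most, the whole not-p region
    return lower, upper

lo, hi = modus_ponens_bounds(p_p=0.9, p_q_given_p=0.8)
print(lo, hi)  # approximately 0.72 and 0.82
```

A judged P(q) outside this interval is incoherent in de Finetti's sense; in the tank analogy it would correspond to a physically impossible distribution of liquid.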
Since the pioneering work of Birkhoff and von Neumann, quantum logic has been interpreted as the logic of (closed) subspaces of a Hilbert space. There is a progression from the usual Boolean logic of subsets to the "quantum logic" of subspaces of a general vector space--which is then specialized to the closed subspaces of a Hilbert space. But there is a "dual" progression. The notion of a partition (or quotient set or equivalence relation) is dual (in a category-theoretic sense) to the notion of a subset. Hence the Boolean logic of subsets has a dual logic of partitions. Then the dual progression is from that logic of partitions to the quantum logic of direct-sum decompositions (i.e., the vector space version of a set partition) of a general vector space--which can then be specialized to the direct-sum decompositions of a Hilbert space. This allows the logic to express measurement by any self-adjoint operators rather than just the projection operators associated with subspaces. In this introductory paper, the focus is on the quantum logic of direct-sum decompositions of a finite-dimensional vector space (including such a Hilbert space). The primary special case examined is finite vector spaces over ℤ₂ where the pedagogical model of quantum mechanics over sets (QM/Sets) is formulated. In the Appendix, the combinatorics of direct-sum decompositions of finite vector spaces over GF(q) is analyzed with computations for the case of QM/Sets where q=2.
When disability-adjusted life years are used to measure the burden of disease on a population in a time interval, they can be calculated in several different ways: from an incidence, pure prevalence, or hybrid perspective. I show that these calculation methods are not equivalent and discuss some of the formal difficulties each method faces. I show that if we don’t discount the value of future health, there is a sense in which the choice of calculation method is a mere question of accounting. Such questions can be important, but they don’t raise deep theoretical concerns. If we do discount, however, choice of calculation method can change the relative burden attributed to different conditions over time. I conclude by recommending that studies involving disability-adjusted life years be explicit in noting what calculation method is being employed and in explaining why that calculation method has been chosen.
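The point about discounting can be illustrated with a toy cohort (all numbers and the simplified YLD formulas below are my own invention, not the paper's): with a zero discount rate, attributing a case's future disability years to the year of onset (incidence perspective) or counting them in the years they are lived (prevalence perspective) yields the same total; with a positive rate, the incidence-perspective total shrinks while the undiscounted prevalence tally does not, so the two methods diverge.

```python
# Simplified years-lived-with-disability (YLD) under two perspectives.

def yld_incidence(cases, duration, weight, rate):
    """All future disability years attributed to (and discounted from)
    the year of onset."""
    return cases * weight * sum(1.0 / (1.0 + rate) ** t for t in range(duration))

def yld_prevalence(cases, duration, weight):
    """Disability years counted in the years they are actually lived,
    summed over the interval without discounting."""
    return cases * weight * duration

# One cohort: 100 incident cases, 10 years of disability, weight 0.2.
undiscounted = yld_incidence(100, 10, 0.2, rate=0.0)
discounted = yld_incidence(100, 10, 0.2, rate=0.03)

print(undiscounted)                   # 200.0: matches the prevalence total
print(yld_prevalence(100, 10, 0.2))   # 200.0
print(discounted)                     # less than 200: discounting shrinks the incidence total
```

Since different conditions have different onset-to-disability time profiles, a positive discount rate can therefore reorder their relative burdens depending on which calculation method a study adopts.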
Will Kymlicka argues that societal culture matters to liberalism because it contributes to its members’ freedom. If so, multiculturalism that advocates group rights to sustain minority societal cultures in the liberal West is in fact entailed by liberalism, the core value of which is individual freedom. “Freedom,” then, functions as the main bridge between liberalism and multiculturalism in Kymlicka’s position. Kymlicka is correct that societal culture contributes to its members’ freedom by providing them with meaningful options. The sense of freedom enabled by culture, however, is not equivalent to the notion of freedom advocated by mainstream liberalism, liberal autonomy. I argue in this paper that Kymlicka’s liberal multiculturalism is an inconsistent and therefore implausible theoretical construct because Kymlicka unwittingly equivocates on “freedom” in using its two distinct senses interchangeably.
Lewis reconstructs set theory using mereology and plural quantification (MPQ). In his reconstruction he assumes from the beginning that there is an infinite plurality of atoms, whose size is equivalent to that of the set-theoretical universe. Since this assumption is far beyond the basic axioms of mereology, it might seem that MPQ do not play any role in guaranteeing the existence of a large infinity of objects. However, we intend to demonstrate that mereology and plural quantification are, in some ways, particularly relevant to a certain conception of the infinite. More precisely, though the principles of mereology and plural quantification do not guarantee the existence of an infinite number of objects, nevertheless, once the existence of any infinite object is admitted, they are able to assure the existence of an uncountable infinity of objects. So, if MPQ were parts of logic, the implausible consequence would follow that, given a countable infinity of individuals, logic would be able to guarantee an uncountable infinity of objects.
Einstein structured the theoretical frame of his work on gravity on Special Relativity and Minkowski spacetime, using three guiding principles. The strong principle of equivalence establishes that acceleration and gravity are equivalent. Mach's principle explains the inertia of bodies and particles as completely determined by the total mass existing in the universe. And general covariance seeks to extend the principle of relativity from inertial motion to accelerated motion. Mach's principle was quickly abandoned; general covariance turned out to be a mathematical property of tensors; and the principle of equivalence proved inconsistent, applying only to point-like gravity, not to extended gravity. Also, the basic principle of Special Relativity, i.e., the constancy of the speed of the electromagnetic wave in vacuum, was abandoned; static Minkowski spacetime was replaced by a dynamic Lorentzian manifold; and for the main conceptual foundation of the theory, i.e. spacetime, it is not known what it is. On the other hand, gravity was never conceptually defined, nor is there an answer to what the law of gravity is in general. However, the predictions arising from Einstein's equations are rigorously exact. Thus, the conclusion is that, for gravity, we have only the equations. This work shows that the principle of equivalence really does apply to both point-like and extended gravity; that gravity can be defined as an effect of a change of coordinates (in the case of extended gravity, with a change of geometry from Minkowski spacetime to a Lorentzian manifold); and that gravitational motion is geodesic motion, which can well be declared the general law of gravity.
The two-fold ontological character of linguistic objects revealed by the distinction between “type” and “token” introduced by Ch. S. Peirce can be the basis of a two-fold, both theoretical and axiomatic, approach to language. Referring to some ideas included in A. A. Markov's work [1954] (in Russian) on the Theory of Algorithms, and in some earlier papers of the author, the problem, raised by J. Słupecki, of formalizing theories of concrete and abstract words is solved. The construction of the theories presented here has two levels. The axiomatic theory of label-tokens (material, physical linguistic objects) constitutes the first one. Label-types, following the literature of the subject, are defined on the other level as equivalence classes of equiform label-tokens. Assuming the opposite point of view, one can accept that the theory of label-types (abstract labels), formalized on the first level, in which it is possible to define the notion of label-token as well as the derivative notions on the second level, should become the basis of the formalization of the theory of linguistic expressions and the theory of language in general. The axioms and definitions of both theories of labels, T_k and T_p, representing the two approaches to the ontology of language, are included in the sequel of the abstract. The foundations of the theory of labels T_k, in which the primary assumption as to the existence of label-types is superfluous, have been presented on the basis of the author's monograph "Teorie Języków Syntaktycznie Kategorialnych" ("The Theories of Syntactically Categorial Languages"), PWN, Warszawa-Wrocław 1985. The basis of the theory of labels T_p, which takes the other position into account, is presented here for the first time. Some extended ideas of the paper will also be presented in the author's paper "Logiczne podstawy ontologii składni języka" ("Logical Foundations of Language Syntax Ontology"), Studia Filozoficzne 6-7 (271-272), (1988), pp. 263-284.
The metaphysics of representation poses questions such as: in virtue of what does a sentence, picture, or mental state represent that the world is a certain way? In the first instance, I have focused on the semantic properties of language: for example, what is it for a name such as ‘London’ to refer to something? Interpretationism concerning what it is for linguistic expressions to have meaning says that, constitutively, semantic facts are fixed by best semantic theory. As here developed, it promises to give a reductive, universal and non-revisionary account of the nature of linguistic representation.

Interpretationism in general, however, is threatened by severe internal tension, due to arguments for radical inscrutability. These contend that, given the interpretationist setting, there can be no fact of the matter what object an individual word refers to: for example, that there is no fact of the matter as to whether “London” refers to London or to Sydney.

A series of challenges emerge, forming the basis for this thesis. 1. What sort of properties is the interpretationist trying to reduce, and what kind of reductive story is she offering? 2. How are inscrutability theses best formulated? Are arguments for inscrutability effective in their own terms? What kinds of inscrutability arise? 3. Is endorsing radical inscrutability a stable position? 4. Are there theoretical virtues—such as simplicity—that can be appealed to in discrediting the rival (empirically equivalent) theories that underpin inscrutability arguments?

In addressing these questions, I concentrate on diagnosing the source of inscrutability, mapping the space of ways of resisting the arguments for radical inscrutability, and examining the challenges faced in developing a principled account of linguistic content that avoids radical inscrutability.

The effect is not to close down the original puzzles, but rather to sharpen them into a set of new and deeper challenges.
This thesis starts with three challenges to the structuralist accounts of applied mathematics. Structuralism views applied mathematics as a matter of building mapping functions between mathematical and target-ended structures. The first challenge concerns how it is possible for a non-mathematical target to be represented mathematically when the mapping functions per se are mathematical objects. The second challenge arises out of inconsistent early calculus, which suggests that mathematical representation does not require rigorous mathematical structures. The third challenge comes from renormalisation group (RG) explanations of universality. It is argued that the structural mapping between the world and a highly abstract minimal model adds little value to our understanding of how RG obtains its explanatory force. I will address the first and second challenges from the similarity perspective. The similarity account captures representations as similarity relations, providing a more flexible and broader conception of representation than structuralism. It is the specification of the respect and degree of similarity that forges mathematics into a context of representation and directs it to represent a specific system in reality. Structuralism is treatable as a tool for explicating similarity relations set-theoretically. The similarity account, combined with other approaches (e.g., Nguyen and Frigg’s extensional abstraction account and van Fraassen’s pragmatic equivalence), can dissolve the first challenge. Additionally, I will make a structuralist response to the second challenge, and suggestions regarding the role of infinitesimals from the similarity perspective. In light of the similarity account, I will propose the “hotchpotch picture” as a methodological reflection of our study of representation and explanation.
Its central insight is to dissect a representation or an explanation into several aspects and to use different theories (usually thought of as competing) to treat each of them. Based on the hotchpotch picture, RG explanations can be dissected into the “indexing” and “inferential” conceptions of explanation, which are captured or characterised by structural mappings. Therefore, structuralism accommodates RG explanations, and the third challenge is resolved.
In a previous paper we outlined a series of historical touchpoints between classical aether theories and modern theoretical physics which showed a shared conceptual lineage for the modern tools and methods of the most common interpretations and fluid-based “Hydrodynamic” treatments of an electromagnetic medium. It was proposed that, though the weight of modern experimentation leaves an extremely narrow and convoluted window for even a reconceptualization of a medium, all of modern physics recognizes a plethora of behaviors and attributes for free space, and these physics are interchangeable with modern methods for treating superfluid-like continuums. Thus the mathematical equivalence of the methods does not constitute alternative physics but an alternative interpretation of the same physics. Though many individual components describing a “neo-aether” or “quintessence” are available, an overarching structural outline of how these tools can work together to provide an alternative working overview of modern physics has remained undefined. This paper proposes a set of introductory concepts as the first outline of a toy model which will later connect the alternative tools and conceptualizations with their modern counterparts. This introductory paper provides the simpler “100-miles-out” overview of the whole of physics from this perspective, in an easily comprehensible, familiar, and intuitive informal dialog fashion. While this paper grants the largest and loosest introductory overview, subsequent papers in this series will address the finer connections between modern physics and this hydrodynamic view.