In the social sciences, within the explanatory paradigm of structural individualism, a theory of action – like rational choice theory – models how individuals behave and interact at the micro level in order to explain macro observations as the aggregation of these individuals' actions. A central epistemological issue is that such theoretical models are caught in a dilemma between the falsity of their basic assumptions and the triviality of their explanations. On the one hand, models with great empirical success often rest on unrealistic or even knowingly false assumptions; on the other hand, more complex models, with additional, more realistic hypotheses, can (trivially) adapt to a wide range of situations and thus lose their explanatory power. Our purpose here is epistemological and consists in asking to what extent demanding realistic assumptions in such cases is a relevant criterion for the acceptance of a given explanatory model. Via an analogical reasoning with physics, we argue that this criterion seems too strong and actually irrelevant. General physical principles are not just idealized or unrealistic; they can also be formulated in many different yet equivalent ways which do not imply the same fundamental unobservable entities or phenomena. However, the classification of phenomena that such principles allow us to highlight does not depend, in the end, on any particular formulation of these basic assumptions. This suggests that some hypotheses in theoretical models are not genuine empirical statements that could be independently tested but only substrates of modeling embodying a classification principle. Thus, we develop a structural invariance criterion that we then apply to rational choice models in the social sciences. We argue that this criterion allows us to escape from the epistemological dilemma without condemning formal approaches like rational choice theory for their lack of realism and without committing us to any antirealist viewpoint.
Many fields (social choice, welfare economics, recommender systems) assume people express what benefits them via their 'revealed preferences'. Revealed preferences have well-documented problems when used this way, but are hard to displace in these fields because, as an information source, they are simple, universally applicable, robust, and high-resolution. In order to compete, other information sources (about participants' values, capabilities and functionings, etc.) would need to match this. I present a conception of values as *attention policies resulting from constitutive judgements*, and use it to build an alternative preference relation, Meaningful Choice, which retains many desirable features of revealed preference.
I propose a novel model of the human ego (which I define as the tendency to measure one's value based on extrinsic success rather than intrinsic aptitude or ability). I further propose the conjecture that ego so defined is both a non-adaptive by-product of evolutionary pressures and, to some extent, an adaptation with evolutionary value (protecting self-interest). I explore ramifications of this model, including how it mediates individuals' reactions to perceived and actual limits of their power and their ability to cope with risk and uncertainty, and how the model may interpolate between rational choice models and cognitive psychology. I develop numerous examples and applications, including poverty traps, to demonstrate the model's predictive power to elucidate a broad range of social phenomena. [December 2018: Updated version to submit for publication. Expanded Sections 4 and 5.1, revised Section 5.7]
I criticise, from a critical rationalist perspective, Israel Kirzner's notion of entrepreneurial alertness and Matthew McCaffrey's endorsement of Joseph Salerno's rival account of entrepreneurial judgment.
Is the Kantian basis of valuing in humanity sufficient or sound enough to account for all valuing? At least two other such bases have been proposed across the ages: the sentiments, and the valuing of life itself. This article focuses on the Kantian view, the first of these three possible bases of valuing. The concern is: by which criteria can we assess whether a given theory of, or approach to, basing value is in fact usable and optimal, that is, whether it can explain why a value can be assuredly based on that theory? In other words, can a candidate theory of value indeed explain all other values? A "meta" analysis of the situation seems due. The article looks to five perspectives on the Kantian concept of humanity to determine whether humanity, as a theory of the basis of all valuing, is at least supportable and sustainable, if not yet sufficiently fleshed out. The article concludes that these five perspectives may each and severally leave some cogent doubt about humanity as the basis of all other value, while leaving unsettled the possibility that each perspective is itself flawed or not thorough enough for due criticism of humanity as that basis.
Adaptive preferences give rise to puzzles in ethics, political philosophy, decision theory, and the theory of action. Like our other preferences, adaptive preferences lead us to make choices, take action, and give consent. In 'False Consciousness for Liberals', recently published in The Philosophical Review, David Enoch (2020) proposes a criterion by which to identify when these choices, actions, and acts of consent are less than fully autonomous; that is, when they suffer from what Natalie Stoljar (2014) calls an 'autonomy deficit'. According to Enoch, such actions are not protected in the usual way against interference by others; there is not the same prohibition against trying to prevent someone from acting in a particular way when that action is motivated by such adaptive preferences and is an attempt to satisfy them. In this note, I raise two concerns about Enoch's criterion.
In this paper, I ask three questions of the liberal. In each, I fill in philosophical detail around a certain sort of complaint raised in current public debates about their position. In the first, I probe the limits of the liberal's tolerance for civil disobedience; in the second, I ask how the liberal can adjudicate the most divisive moral disputes of the age; and, in the third, I suggest the liberal faces a problem when there is substantial disagreement about the boundaries of the rational and the reasonable.
This paper concerns Warren Quinn's famous "The Puzzle of the Self-Torturer." I argue that even if we accept his assumption that practical rationality is purely instrumental, such that what the self-torturer ought to do is simply a function of how the relevant options compare to each other in terms of satisfying his actual preferences, it doesn't follow that every explanation as to why he shouldn't advance to the next level must appeal to the idea that so advancing would be suboptimal in terms of the satisfaction of his actual preferences. Rather, we can admit that his advancing would always be optimal, but argue that advancing isn't always what he ought to do, given that advancing sometimes fails to meet some necessary condition for being what he ought to do. For instance, something can be what he ought to do only if it's an option for him. What's more, something can be what he ought to do only if it's something that he can do without responding inappropriately to his reasons—or so I argue. Thus, the solution to the puzzle is, I argue, to realize that, in certain circumstances, advancing is not what the self-torturer ought to do, given that he can do so only by responding inappropriately to his reasons.
This paper aims to address the question of how one ought to choose when one is uncertain about what outcomes will result from one's choices, but when one can nevertheless assign probabilities to the different possible outcomes. These choices are commonly referred to as choices (or decisions) under risk. I assume in this paper that one ought to make instrumentally rational choices—more precisely, one ought to adopt suitable means to one's morally permissible ends. Expected utility (EU) theory is generally accepted as a normative theory of rational choice under risk, or, more specifically, as a theory of instrumental rationality. According to EU theory, when faced with a decision under risk, one ought to rank one's options (from least to most choiceworthy) according to their EU and one ought to choose whichever option carries the greatest EU (or one of them in the event that several alternatives are tied). The EU of an option is a probability-weighted sum of each of its possible utilities. In this paper, I argue that EU theory is not the correct theory of instrumental rationality. In its place, I argue for a new theory of instrumental rationality, namely expected comparative utility (ECU) theory. I first show that for any choice option, a, and for any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility of a in G—that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, I then argue, roughly speaking, that for any agent, S, faced with any decision under risk, S ought to rank her options (in terms of how choiceworthy they are) according to their ECU and S ought to choose whichever option carries the greatest ECU (or one of them in the event that several alternatives are tied). For any option, a, a's ECU is a probability-weighted sum of a's comparative utilities across the various possible states of the world. In this paper, I show that in some commonplace decisions under risk, ECU theory delivers different verdicts from those of EU theory.
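Since the abstract states the EU and ECU formulas only in prose, a minimal numerical sketch may help fix ideas. Everything in it is hypothetical: three options, two equiprobable states, and invented utilities; it is not an example from the paper.

```python
# Hypothetical illustration of EU vs ECU (expected comparative utility).
# Options, states, probabilities, and utilities are all invented.

probs = {"G1": 0.5, "G2": 0.5}            # states and their probabilities
utils = {                                  # utility of each option in each state
    "a": {"G1": 10, "G2": 0},
    "b": {"G1": 0,  "G2": 10},
    "c": {"G1": 6,  "G2": 6},
}

def eu(x):
    """Expected utility: probability-weighted sum of x's utilities."""
    return sum(probs[g] * utils[x][g] for g in probs)

def cu(x, g):
    """Comparative utility of x in state g: x's utility minus the greatest
    utility any alternative to x carries in g."""
    return utils[x][g] - max(utils[y][g] for y in utils if y != x)

def ecu(x):
    """Expected comparative utility: probability-weighted sum of x's comparative utilities."""
    return sum(probs[g] * cu(x, g) for g in probs)

for x in utils:
    print(x, "EU =", eu(x), "ECU =", ecu(x))
# EU ranks c (6) above a and b (5); ECU ranks a and b (-3) above c (-4).
```

With these invented numbers EU favours c while ECU favours a and b, which illustrates the kind of divergence the abstract claims. Note that with only two options the two rankings provably coincide, so any divergence requires a menu of at least three alternatives.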
Andy Egan's Smoking Lesion and Psycho Button cases are supposed to be counterexamples to Causal Decision Theory. This paper argues that they are not: more precisely, it argues that if CDT makes the right call in Newcomb's problem then it makes the right call in Egan cases too.
No existing normative decision theory adequately handles risk. Expected Utility Theory is overly restrictive in prohibiting a range of reasonable preferences. And theories designed to accommodate such preferences (for example, Buchak's (2013) Risk-Weighted Expected Utility Theory) violate the Betweenness axiom, which requires that you be indifferent to randomizing over two options between which you are already indifferent. Betweenness has been overlooked by philosophers, and we argue that it is a compelling normative constraint. Furthermore, neither Expected nor Risk-Weighted Expected Utility Theory allows for stakes-sensitive risk-attitudes—they require that risk matters in the same way whether you are gambling for loose change or for millions of dollars. We provide a novel normative interpretation of Weighted-Linear Utility Theory that solves all of these problems.
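For reference, one standard statement of the Betweenness requirement appealed to here (in generic notation, not taken from the paper) is:

\[ A \sim B \;\Longrightarrow\; A \sim \lambda A + (1-\lambda)B \quad \text{for all } \lambda \in [0,1], \]

where \(\lambda A + (1-\lambda)B\) denotes the gamble that yields \(A\) with probability \(\lambda\) and \(B\) otherwise: if you are indifferent between two options, you are also indifferent to any randomization over them.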
This is an edited transcript of a conversation to be included in the collection "Conversations on Rational Choice". The conversation was conducted in Munich on 7 and 9 February 2016.
The purpose of this paper is to illustrate, formally, an ambiguity in the exercise of political influence. To wit: A voter might exert influence with an eye toward maximizing the probability that the political system (1) obtains the correct (e.g. just) outcome, or (2) obtains the outcome that he judges to be correct (just). And these are two very different things. A variant of Condorcet's Jury Theorem which incorporates the effect of influence on group competence and interdependence is developed. Analytic and numerical results are obtained, the most important of which is that it is never optimal--from the point of view of collective accuracy--for a voter to exert influence without limit. He ought to either refrain from influencing other voters or else exert a finite amount of influence, depending on circumstance. Philosophical lessons are drawn from the model, to include a solution to Wollheim's "Paradox in the Theory of Democracy".
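The influence-augmented model itself is not reproduced in the abstract; as background, a minimal sketch of the baseline Condorcet Jury Theorem computation (independent voters with a common competence p, simple majority rule) might look as follows. The parameters are hypothetical, and the sketch deliberately ignores the influence effects the paper adds.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a simple majority of n independent voters, each correct
    with probability p, selects the correct outcome (n assumed odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
# Group accuracy rises with n when p > 1/2 (the baseline theorem). Influence
# correlates votes and shifts individual competence, which is roughly what the
# paper's richer model captures when it concludes that unlimited influence is
# never optimal for collective accuracy.
```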
In the face of an impossibility result, some assumption must be relaxed. The Mere Addition Paradox is an impossibility result in population ethics. Here, I explore substantially weakening the decision-theoretic assumptions involved. The central finding is that the Mere Addition Paradox persists even in the general framework of choice functions when we assume Path Independence as a minimal decision-theoretic constraint. Choice functions can be thought of either as generalizing the standard axiological assumption of a binary "betterness" relation, or as providing a general framework for a normative (rather than axiological) theory of population ethics. Path Independence, a weaker assumption than typically (implicitly) made in population ethics, expresses the idea that, in making a choice from a set of alternatives, the order in which options are assessed or considered is ethically arbitrary and should not affect the final choice. Since the result establishes a conflict between the relevant ethical principles and even very weak decision-theoretic principles, we have more reason to doubt the ethical principles.
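The Path Independence condition is not spelled out in the abstract; the standard Plott-style formulation for a choice function C, which may differ in detail from the one used in the paper, is:

\[ C(S \cup T) \;=\; C\big(C(S) \cup C(T)\big) \quad \text{for all menus } S, T, \]

i.e., first choosing from parts of a menu and then choosing from among the winners must yield the same result as choosing from the whole menu at once, so the order in which alternatives are considered cannot matter.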
Consideration of the moral and ethical consequences of competing scientific and social theories often proceeds on the assumption that such discourses are or ought to be rational. The concept of incommensurability, however, threatens this assumption. Literature on the incommensurability thesis consists mostly in explanation of the concept itself and the degree of damage it portends for the rationality of science. Most often this work is done via historical case studies. Exemplary is Thomas Kuhn's The Copernican Revolution. Historical studies do sometimes provide us with great insights, but their worth is limited by the backward-looking nature of anecdotes and single case investigations. The current work looks toward new and prospective methods of investigation. Through the lens of a novel and experimental approach to incommensurability the author asks, "What are the prospects for an inter-theoretic language in the social sciences?" In particular, there is a focus on the contentious claim by some that it is perfectly fine and even beneficial to have social, business or intimate relations with clients.
We present an abstract model of rationality that focuses on structural properties of attitudes. Rationality requires coherence between your attitudes, such as your beliefs, values, and intentions. We define three 'logical' conditions on attitudes: consistency, completeness, and closedness. They parallel the familiar logical conditions on beliefs, but contrast with standard rationality conditions like preference transitivity. We establish a formal correspondence between our logical conditions and standard rationality conditions. Addressing John Broome's programme 'rationality through reasoning', we formally characterize how you can (not) become more logical by reasoning. Our analysis connects rationality with logic, and enables logical talk about multi-attitude psychology.
This paper will be concerned with hard choices—that is, choice situations where an agent cannot make a rationally justified choice. Specifically, this paper asks: if an agent cannot optimize in a given situation, are they facing a hard choice? A pair of claims are defended in light of this question. First, situations where an agent cannot optimize because of incompleteness of the binary preference or value relation constitute a hard choice. Second, situations where agents cannot optimize because the binary preference or value relation violates acyclicity do not constitute a hard choice.
The Sure-Thing Principle famously appears in Savage's axiomatization of Subjective Expected Utility. Yet Savage introduces it only as an informal, overarching dominance condition motivating his separability postulate P2 and his state-independence postulate P3. Once these axioms are introduced, by and large, he does not discuss the principle any more. In this note, we pick up the analysis of the Sure-Thing Principle where Savage left it. In particular, we show that each of P2 and P3 is equivalent to a dominance condition; that they strengthen in different directions a common, basic dominance axiom; and that they can be explicitly combined in a unified dominance condition that is a candidate formal statement for the Sure-Thing Principle. Based on elementary proofs, our results shed light on some of the most fundamental properties of rational choice under uncertainty. In particular they imply, as corollaries, potential simplifications for Savage's and the Anscombe-Aumann axiomatizations of Subjective Expected Utility. Most surprisingly perhaps, they reveal that in Savage's axiomatization, P3 can be weakened to a natural strengthening of so-called Obvious Dominance.
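As a reminder of the informal principle the note starts from, stated here in generic notation rather than the paper's unified condition: for acts f and g and an event E with complement E^c,

\[ f \succsim g \ \text{conditional on } E \quad \text{and} \quad f \succsim g \ \text{conditional on } E^{c} \quad \Longrightarrow \quad f \succsim g, \]

that is, if you would weakly prefer f to g whether or not E obtains, you should weakly prefer f to g unconditionally. According to the abstract, each of P2 and P3 is equivalent to a dominance condition that strengthens this basic idea in a different direction.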
In his paper, 'Patients, doctors and risk attitudes', Makins argues that doctors, when choosing a treatment for their patient, need to follow the patient's risk profile.1 He presents a pair of fictitious diseases facing a patient who has either 'exemplitis', which requires no treatment, or 'caseopathy', which is severe and disabling and for which there is a treatment with unpleasant side effects. The doctor needs to decide whether the patient should pursue the unpleasant treatment, just in case he has caseopathy. Makins believes that rational-choice theory is a productive way to approach this problem. This theory frames all decisions as gambles. Each possible gamble (eg, not to pursue treatment) leads, probabilistically, to a set of outcomes (life with no disease and no side effects, but also, possibly, life with a disabling disease, since no treatment is taken). Utilities of these outcomes and probabilities that they will happen are combined to yield a number—expected utility. One is supposed to gamble—to make a choice—in a way that maximises this expected utility. Makins takes this framework for granted and is focused on an additional complication, the patient's risk preference. He believes that, in his hypothetical example, even if expected utility were exactly the same, withholding treatment …
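A stylized version of the expected-utility calculation just described, with entirely invented probabilities and utilities (the abstract gives no numbers), might run as follows.

```python
# Stylized EU calculation for the treat / don't-treat choice; all numbers are invented.

p_caseopathy = 0.10               # hypothetical probability the patient has caseopathy
utility = {
    ("treat", "exemplitis"):    70,   # healthy, but endures side effects needlessly
    ("treat", "caseopathy"):    80,   # disease treated, side effects endured
    ("no_treat", "exemplitis"): 100,  # healthy, no side effects
    ("no_treat", "caseopathy"): 10,   # disabling disease left untreated
}

def expected_utility(action):
    return ((1 - p_caseopathy) * utility[(action, "exemplitis")]
            + p_caseopathy * utility[(action, "caseopathy")])

for action in ("treat", "no_treat"):
    print(action, expected_utility(action))   # with these numbers, withholding treatment wins
```

Makins' question arises one level up: when the expected utilities are equal or nearly so, whose attitude to risk, the doctor's or the patient's, should settle the choice.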
Predictable polarization is everywhere: we can often predict how people's opinions, including our own, will shift over time. Extant theories either neglect the fact that we can predict our own polarization, or explain it through irrational mechanisms. They needn't. Empirical studies suggest that polarization is predictable when evidence is ambiguous, that is, when the rational response is not obvious. I show how Bayesians should model such ambiguity and then prove that—assuming rational updates are those which obey the value of evidence—ambiguity is necessary and sufficient for the rationality of predictable polarization. The main theoretical result is that there can be a series of such updates, each of which is individually expected to make you more accurate, but which together will predictably polarize you. Polarization results from asymmetric increases in accuracy. This mechanism is not only theoretically possible, but empirically plausible. I argue that cognitive search—searching a cognitively accessible space for a particular item—often yields asymmetrically ambiguous evidence, I present an experiment supporting its polarizing effects, and I use simulations to show how it can explain two of the core causes of polarization: confirmation bias and the group polarization effect.
This paper draws some bold conclusions from modest premises. My topic is an old one, the Neohumean view of practical rationality. First, I show that this view consists of two independent claims, instrumentalism and subjectivism. Most critics run these together. Instrumentalism is entailed by many theories beyond Neohumeanism, viz. by any theory that says rational actions maximize something. Second, I give a new argument against instrumentalism, using simple counterexamples. This argument systematically undermines consequentialism and rational choice theory, I show, using detailed examples of their many social science applications. There is no obvious fix.
Our values change. What we value, want, desire, prefer, and how much; for nearly everyone, these will be different at different times in their life. These changes can be gradual or abrupt; they can be long-lasting or short-lived; and they can be induced by forces outside yourself or they can come from within or they can have no specific catalyst at all. Such preference change raises a number of questions for our theorising about rational choice, and these have been discussed at length. In §2 and §3, I'll outline two of these questions along with some of the putative solutions that have been proposed. But preference change also raises questions for our theorising about autonomy, and these have hardly been considered at all. In §4, I'll outline three problems for personal autonomy; and in §5, I'll outline one problem for political autonomy. In §6, I conclude.
When is it legitimate for a government to 'nudge' its citizens, in the sense described by Thaler and Sunstein (2008)? In their original work on the topic, Thaler and Sunstein developed the *'as judged by themselves' (or AJBT) test* to answer this question (Thaler and Sunstein 2008, p. 5). In a recent paper, Paul and Sunstein (2019) raised a concern about this test: it often seems to give the wrong answer in cases in which we are nudged to make a decision that leads to what Paul calls a *personally transformative experience*, that is, one that results in our values changing (Paul 2014). In those cases, the nudgee will judge the nudge to be legitimate after it has taken place, but only because their values have changed as a result of the nudge. In this paper, I take up the challenge of finding an alternative test. I draw on my *aggregate utility account* of how to choose in the face of what Ullmann-Margalit (2006) calls *big decisions*, that is, decisions that lead to these personally transformative experiences (Pettigrew 2019, Chapters 6 and 7).
Seeking a decision theory that can handle both the Newcomb problems that challenge evidential decision theory and the unstable problems that challenge causal decision theory, some philosophers recently have turned to ‘graded ratifiability’. However, the graded ratifiability approach to decision theory is, despite its virtues, unsatisfactory; for it conflicts with the platitude that it is always rationally permissible for an agent to knowingly choose their best option.
While ordinary decision theory focuses on empirical uncertainty, real decision-makers also face normative uncertainty: uncertainty about value itself. From a purely formal perspective, normative uncertainty is comparable to (Harsanyian or Rawlsian) identity uncertainty in the 'original position', where one's future values are unknown. A comprehensive decision theory must address twofold uncertainty -- normative and empirical. We present a simple model of twofold uncertainty, and show that the most popular decision principle -- maximising expected value ('Expectationalism') -- has different formulations, namely Ex-Ante Expectationalism, Ex-Post Expectationalism, and hybrid theories. These alternative theories recommend different decisions, reasoning modes, and attitudes to risk. But they converge under an interesting (necessary and sufficient) condition.
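The abstract gives no formal definitions; one way to formalise the contrast, which may differ from the authors' own definitions, is, for an option a, empirical states s with probabilities p(s), and value theories v with credences π(v):

\[ V_{\text{ex-ante}}(a) \;=\; \sum_{v} \pi(v)\, V_{v}(a), \qquad V_{\text{ex-post}}(a) \;=\; \sum_{s} p(s) \sum_{v} \pi(v)\, v\big(a(s)\big), \]

where \(V_v(a)\) is theory v's own evaluation of the whole risky prospect a and \(v(a(s))\) is the value theory v assigns to the outcome of a in state s. On this rendering the two coincide whenever every theory evaluates prospects by their expected value, which gives a sense of why some special condition is needed for the two forms of Expectationalism to converge.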
People reason not only in beliefs, but also in intentions, preferences, and other attitudes. They form preferences from existing preferences, or intentions from existing beliefs and intentions, and so on. This often involves choosing between rival conclusions. Building on Broome (Rationality through reasoning, Hoboken, Wiley. https://doi.org/10.1002/9781118609088, 2013) and Dietrich et al. (J Philos 116:585–614. https://doi.org/10.5840/jphil20191161138, 2019), we present a philosophical and formal analysis of reasoning in attitudes, with or without facing choices in reasoning. We give different accounts of choosing, in terms of a conscious activity or a partly subconscious process. Reasoning in attitudes differs fundamentally from reasoning *about* attitudes, a form of theoretical reasoning in which one discovers rather than forms attitudes. We show that reasoning in attitudes has standard formal properties (such as monotonicity), but is indeterministic, reflecting choice in reasoning. Like theoretical reasoning, it need not follow logical entailment, but for a more radical reason, namely indeterminism. This makes reasoning in attitudes harder to model logically than theoretical reasoning. But it can be studied abstractly, using indeterministic consequence operators.
What I call "strategic injustice" involves a set of formal and informal regulatory rules and conventions that often lead to grossly unfair outcomes for a class of individuals despite their resistance. My goal in this paper is to provide the necessary conditions for such injustices and for eliminating their instances from our social practices. To do so, I follow Peter Vanderschraaf's analysis of circumstances of justice and expand his account by embedding "asymmetric conflictual coordination games" that summarize fair division problems in a dynamic social network. I use the network effect on such coordination games to explain the emergence of stable exploitative behavior and conventions by a class of individuals even in the presence of restraining efforts by others. I conclude that such unfair conventions are resilient to uncoordinated individual actions and interventions. In fact, maintaining a rough equality itself turns into another coordination problem. Finally, I show that something similar to a social movement that restructures the network of social relations is necessary to solve such coordination problems.
The phenomenon of weakness of will – not doing what we perceive as the best action – is not recognized by neoclassical economics, owing to the axiomatic assumption of revealed preference theory (RPT) that people do what is best for them. However, present bias shows that people have different preferences over time. As these preferences cannot be compared by utility measurements, economists need to decide normatively between selves (short- versus long-term preferences). A problem is that neoclassical economists perceive RPT as value-free and incorporate present bias within the economic framework. The axiomatic assumption that people do what is best for them leads to theoretical and practical dilemmas. This work examines weakness of will to resolve some shortcomings of RPT. The concept of intention is used to provide the multiple-self conception with a framework for deciding between selves, which has not been done before. The paper concludes that individuals should not always follow their revealed preferences (desires) but rather their intentions (reason), because the latter indicate what people really want.
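Present bias, which drives the conflict between selves described here, is standardly modelled with quasi-hyperbolic (beta-delta) discounting; a small sketch with invented parameters (not taken from the paper) shows the characteristic preference reversal.

```python
# Quasi-hyperbolic (beta-delta) discounting: a standard model of present bias.
# Parameters and payoffs are illustrative only.

beta, delta = 0.7, 0.95

def present_value(reward, t):
    """Value today of a reward received t periods from now."""
    return reward if t == 0 else beta * (delta ** t) * reward

# Smaller-sooner vs larger-later reward, judged from two vantage points.
print(present_value(100, 10), present_value(120, 11))  # far in advance: the later, larger reward wins
print(present_value(100, 0),  present_value(120, 1))   # once the sooner reward is imminent: it wins
```

The long-run self prefers the later reward while the short-run self grabs the earlier one; this is the clash between selves that, on the paper's proposal, intentions rather than revealed preferences should adjudicate.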
This paper argues that under conditions of uncertainty, there is frequently a positive option value to staying alive when compared to the alternative of dying right away. This value can make it prudentially rational for you to stay alive even if it appears highly unlikely that you have a bright future ahead of you. Drawing on the real options approach to investment analysis, the paper explores the conditions under which there is a positive option value to staying alive, and it draws out important implications for the problems of suicide and euthanasia.
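A stylized one-period illustration of the option value in question, with entirely invented numbers, is sketched below; the paper's real-options treatment is of course richer.

```python
# Stylized option-value sketch: staying alive keeps the choice open, dying forecloses it.
# All numbers are invented for illustration.

cost_of_bad_period = 10    # disutility of living through one more bad period
p_improvement      = 0.2   # probability that life improves next period
value_if_improved  = 100   # value of the remaining life if it does improve
value_of_dying     = 0     # normalised value of dying, now or later

# If life does not improve, the agent can still choose to die next period,
# so the downside is bounded below by value_of_dying.
value_of_waiting = (-cost_of_bad_period
                    + p_improvement * value_if_improved
                    + (1 - p_improvement) * value_of_dying)

print(value_of_waiting)    # 10.0 > 0: waiting is prudentially better despite the unlikely upside
```

The asymmetry is what generates the positive option value: the upside is captured if things improve, while the downside remains capped because the choice to die is not lost by waiting.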
This paper examines two strands of literature regarding economic models of cooperation. First, payoff transformation theories assume that people may not be exclusively motivated by self-interest, but also care about equality and fairness. Second, team reasoning theorists assume that people might reason from the perspective of the team, rather than an individualistic perspective. Can these two theories be unified? In contrast to the consensus among team reasoning theorists, I argue that team reasoning can be viewed as a particular type of payoff transformation. However, I also demonstrate that many payoff transformations yield actions that team reasoning rules out.
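One familiar payoff transformation, offered here purely as an illustration and not necessarily the one examined in the paper, is Fehr-Schmidt-style inequality aversion; applying it to a Prisoner's Dilemma shows how transformed payoffs can rationalize cooperation.

```python
# Fehr-Schmidt-style inequality-averse transformation of a Prisoner's Dilemma.
# Matrix entries and the aversion parameters are illustrative only.

material = {  # (row move, column move) -> (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
alpha, beta = 0.8, 0.5   # aversion to disadvantageous / advantageous inequality

def transform(own, other):
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

for moves, (row, col) in material.items():
    print(moves, (transform(row, col), transform(col, row)))
# With these parameters, mutual cooperation becomes a Nash equilibrium of the
# transformed game: defecting against a cooperator now yields 2.5 rather than 5.
```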
I find much to like in Craig Callender's [2022] arguments for the rational permissibility of non-exponential time discounting when these arguments are viewed in a conditional form: viz., if one thinks that time discounting is rationally permissible, as the social scientist does, then one should think that non-exponential time discounting is too. However, time neutralists believe that time discounting is rationally impermissible, and thus they take zero time discounting to be the normative standard. The time neutralist rejects time discounting because they think it is rationally impermissible to prefer to live a worse life in expectation because of arbitrariness. Callender's attack on the time-neutralist position is the following: the time-neutralist's non-arbitrariness intuition assumes the existence of nonexistent 'pure' time preferences. In response, I aim to clarify the time-neutralist position and show that the non-arbitrariness argument does not rely on the existence of pure time preferences. Instead, the debate between time neutralism and permissivism about time discounting boils down to a methodological question: can we ever criticize the content of preferences? If so, we should embrace time neutralism.
In this paper, I argue for a new normative theory of rational choice under risk, namely expected comparative utility (ECU) theory. I first show that for any choice option, a, and for any state of the world, G, the measure of the choiceworthiness of a in G is the comparative utility (CU) of a in G—that is, the difference in utility, in G, between a and whichever alternative to a carries the greatest utility in G. On the basis of this principle, I then argue that for any agent, S, faced with any decision under risk, S should rank his or her decision options (in terms of how choiceworthy they are) according to their comparative expected comparative utility (CECU) and should choose whichever option carries the greatest CECU. For any option, a, a's CECU is the difference between its ECU and that of whichever alternative to a carries the greatest ECU, where a's ECU is a probability-weighted sum of a's CUs across the various possible states of the world. I lastly demonstrate that in some ordinary decisions under risk, ECU theory delivers different verdicts from those of standard decision theory.
Individuals often face administrative hurdles in attempting to access health care, public programmes, and other legal statuses and entitlements. These ordeals are the products, directly or indirectly, of institutional and policy design choices. I argue that evaluating whether such ordeals are justifiable or desirable instruments of social policy depends on assessing, beyond their targeting effects, the process-related burdens they impose on those attempting to navigate them and these burdens' distributive effects. I here examine specifically how ordeals that levy time costs reduce and constrain individuals' free time, and how such time-cost ordeals may thereby create, deepen and compound disadvantages.
This paper considers contractarianism as a method of justification. The analysis accepts the key tenets of contractarianism: expected utility maximization, unanimity as the criterion of acceptance, and social-scientific uncertainty of modelled agents. In addition to these three features, however, the analysis introduces a fourth: a criterion of rational belief formation, viz. Bayesian belief updating. Using a formal model, this paper identifies a decisive objection to contractarian justification. Insofar as contractarian projects approximate the Agreement Model, therefore, they fail to justify their conclusions. Insofar as they fail to approximate the Agreement Model, they must explain which modelling assumption they reject.
This paper develops an argument against causal decision theory. I formulate a principle of preference, which I call the Guaranteed Principle. I argue that the preferences of rational agents satisfy the Guaranteed Principle, that the preferences of agents who embody causal decision theory do not, and hence that causal decision theory is false.
Neoclassical economists use expected utility theory to explain, predict, and prescribe choices under risk, that is, choices where the decision-maker knows---or at least deems suitable to act as if she knew---the relevant probabilities. Expected utility theory has been subject to both empirical and conceptual criticism. This chapter reviews expected utility theory and the main criticism it has faced. It ends with a brief discussion of subjective expected utility theory, which is the theory neoclassical economists use to explain, predict, and prescribe choices under uncertainty, that is, choices where the decision-maker cannot act on the basis of objective probabilities but must instead consult her own subjective probabilities.
This paper defends revealed preference theory against a pervasive line of criticism, according to which revealed preference methodology relies on appealing to some mental states, in particular an agent's beliefs, rendering the project incoherent or unmotivated. I argue that all that is established by these arguments is that revealed preference theorists must accept a limited mentalism in their account of the options an agent should be modelled as choosing between. This is consistent both with an essentially behavioural interpretation of preference and with standard revealed preference methodology. And it does not undermine the core motivations of revealed preference theory.
A hard choice is a situation in which an agent is unable to make a justifiable choice from a given menu of alternatives. Our objective is to present a systematic treatment of the axiomatic structure of such situations. To do so, we draw on and contribute to the study of choice functions that can be indecisive, i.e., that may fail to select a non-empty set for some menus. In this more general framework, we present new characterizations of two well-known choice rules, the maximally dominant choice rule and the top-cycle choice rule. Together with existing results, this yields an understanding of the circumstances in which hard choices arise.
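As a concrete illustration of the two rules characterized here, the sketch below computes them for a small dominance relation, assuming a tournament (a complete, asymmetric 'beats' relation); the paper's choice-function framework is more general, so this is only a special case and the exact definitions may differ.

```python
# Maximally dominant choice and top-cycle choice on a small tournament.
# The example relation is invented; 'a', 'b', 'c' form a cycle, and all beat 'd'.

beats = {("a", "b"), ("b", "c"), ("c", "a"), ("a", "d"), ("b", "d"), ("c", "d")}
menu = {"a", "b", "c", "d"}

def maximally_dominant(menu):
    """Alternatives that beat every other alternative in the menu (may be empty)."""
    return {x for x in menu if all((x, y) in beats for y in menu if y != x)}

def top_cycle(menu):
    """Alternatives from which every alternative in the menu is reachable via
    'beats' -- a standard characterisation of the top cycle for tournaments."""
    def reachable(start):
        seen, frontier = {start}, [start]
        while frontier:
            u = frontier.pop()
            for v in menu:
                if (u, v) in beats and v not in seen:
                    seen.add(v)
                    frontier.append(v)
        return seen
    return {x for x in menu if reachable(x) == menu}

print(maximally_dominant(menu))  # set(): no alternative beats all others, so the rule is indecisive here
print(top_cycle(menu))           # {'a', 'b', 'c'}: the cycle, excluding the dominated 'd'
```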
Scientific objectivity, one of the most basic features of scientific activity and scientific knowledge, is a subject frequently discussed in the philosophy of science, and various views have been put forward about how it is to be secured. In general, scientific objectivity is understood either as scientists reflecting the facts as they are in their studies or as scientists completing their studies from an impartial point of view. The reflections of these views in the philosophy of science are known, respectively, as objectivity as faithfulness to facts and objectivity as the view from nowhere. This perspective holds that scientific objectivity can be achieved by isolating personal interests and values from scientific work; in other words, the sciences can be objective only if they are value-free. Against this view, philosophers of science such as Helen Longino see values as a requirement of scientific objectivity. This study presents Longino's position, known as "contextual empiricism", which emphasizes that scientific objectivity cannot be realized by ignoring values. Taking the social aspects of scientific research into account, Longino rejects the value-free ideal altogether; she thinks that a value-laden science can be reliable both epistemically and in terms of objectivity.
One of the main purposes of science is to explain natural phenomena by increasing our understanding of the physical world and to make predictions about the future based on these explanations. In this context, scientific theories can be defined as large-scale explanations of phenomena. Historically, scientists have made various choices among the theories they encounter when solving the problems of their fields of study. This process, which can be called 'theory choice', is one of the most debated issues in twentieth-century philosophy of science, since it touches on such important questions as the use of logical arguments and the determination of scientific method. The members of the Vienna Circle and Karl Popper think that an objective criterion can be determined to which scientists can appeal in theory choice. While the Vienna Circle emphasizes that the best-confirmed theory should be chosen among competing theories, Popper states that competing theories should be tested ruthlessly with appropriate methods, and that successful, or corroborated, theories should be selected as a result of these tests. Contrary to these views, Kuhn holds that there are certain non-obligatory, subjective elements that guide scientists in theory choice. Accordingly, this study discusses how scientists make their choices among competing theories, highlighting Kuhn's arguments for the subjective nature of theory choice.
Ordeals are burdens placed on individuals that yield no benefits to others; hence they represent a dead-weight loss. Ordeals – the most common is waiting time – play a prominent role in rationing health care. The recipients most willing to bear them are those receiving the greatest benefit from scarce health-care resources. Health care is heavily subsidized; hence, moral hazard leads to excess use. Ordeals are intended to discourage expenditures yielding little benefit while simultaneously avoiding the undesired consequences of rationing methods such as quotas or pricing. This analysis diagnoses the economic underpinnings of ordeals. Subsidies for nursing-home care versus home care illustrate.
Recently, philosophers have investigated the emergence and evolution of the social contract. Yet extant work is limited as it focuses on the use of simple behavioral norms in rather rigid strategic settings. Drawing on axiomatic bargaining theory, we explore the dynamics of more sophisticated norms capable of guiding behavior in a wide range of scenarios. Overall, our investigation suggests the utilitarian bargaining solution has a privileged status as it has certain stability properties other social arrangements lack.
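For reference, the utilitarian bargaining solution singled out here selects, from a feasible set S with disagreement point d, a point maximising the sum of utilities, in contrast with, for example, the Nash solution, which maximises the product of gains over d:

\[ x^{U} \in \arg\max_{x \in S} \sum_i x_i, \qquad x^{N} \in \arg\max_{x \in S,\, x \geq d} \prod_i \big(x_i - d_i\big). \]

The stability claim in the abstract concerns the former; the comparison is included only to locate it among the standard axiomatic solutions.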
The critics of rational choice theory (RCT) frequently build on the contrast between so-called thick and thin applications of RCT to argue that thin RCT lacks the potential to explain the choices of real-world agents. In this paper, I draw on often-cited RCT applications in several decision sciences to demonstrate that despite this prominent critique there are at least two different senses in which thin RCT can explain real-world agents' choices. I then defend this thesis against the most influential objections put forward by the critics of RCT. In doing so, I explicate the implications of my thesis for the ongoing philosophical debate concerning the explanatory potential of RCT and the comparative merits of widely endorsed accounts of explanation.
This book reports on cutting-edge research concerning social practices. Merging perspectives from various disciplines, it discusses theoretical aspects of social behavior along with models to investigate them, and also presents key case studies. Further, it describes concepts related to habits, routines and rituals, and examines important features of human action such as intentionality and choice, exploring the influence of specific social practices in different situations.