Abstract
The paper explores how well AI algorithms can fare with regard to intellectual virtues compared to humans. It argues that if intellectual virtues are understood functionally, as displaying behavior similar to human virtuous behavior, AI algorithms can likely exceed humans in unbiased perception and non-stereotypical thinking. Doing so would give them an edge over humans with regard to visual accuracy and fairness. Humans will likely retain a lasting edge with regard to other intellectual virtues like creativity and intellectual autonomy.
1 Introduction
Geoffrey Hinton, Nobel Prize winner and one of the founding fathers of artificial intelligence, recently claimed that AI will soon surpass human intelligence. Furthermore, he claims that most AI experts believe the same [38]. Others are less optimistic and see AI as rather ‘stupid’, not capable of achieving true intelligence but merely capable of poorly imitating humans or rehashing data [10]. ChatGPT, one of the most popular AI systems today, is more nuanced. When answering the prompt ‘Can AI algorithms grow more intelligent than humans?’, it gave the following answer:
Whether AI can eventually become superintelligent, surpassing human intelligence in all domains, is an open question, hinging on future breakthroughs in fields like machine learning, neuroscience, and ethics.Footnote 1
Discussions on the intelligence of AI-systems usually single out one small element, like AI’s superior computing power or AI’s lack of ability for thinking outside of the box. As a result, proper assessment of AI’s intellectual abilities is usually lacking. Claims about how well AI-systems fare in comparison to humans tend to fall prey to the same problem. This paper aims to move the debate further. It will do so by looking at the intelligence of AI-algorithms from a virtue-epistemological perspective, where intelligence is closely tied to exercising a set of intellectual virtues.
Looking at the problem from a virtue-epistemological perspective has a number of advantages. Epistemic virtues are multi-faceted. They allow for an assessment of multiple qualities tied to intelligence, like cognitive performance, honesty, autonomy or open-mindedness. As a result, the perspective can move the discussion past simple assessments in terms of computing power or how much data can be handled. It also allows us to bring AI into the realm of contemporary approaches in epistemology.
As many (including ChatGPT itself) note, the intellectual possibilities of AI depend greatly on future advances or the lack thereof. The discussion which follows therefore remains speculative to some extent. It can nonetheless note some advantages and disadvantages of AI algorithms compared to human knowers.
In this paper, I aim to argue that AI systems, or better, AI algorithms, bear the promise of becoming more virtuous knowers than humans can ever hope to be in domains where humans are plagued by biases. Unlike humans, algorithms can hope to rule out some biases and some cognitive limitations definitively. Since biases and cognitive limitations make knowers less intellectually fair and therefore less virtuous, algorithms have an advantage over humans in this respect. In other domains, like intellectual autonomy and creative thinking, humans will probably continue to have an edge over AI algorithms because such functions or traits are much harder to program.
This paper is structured as follows: in Sect. 2, I start by defining some key terms and delimiting the discussion. In Sect. 3, I discuss what it takes to be a virtuous knower and whether intellectual virtues can be applied to AI algorithms. In Sect. 4, I argue that humans suffer from biases and have great difficulty overcoming them, and I contrast this with the resources available to algorithms for overcoming biases. In Sect. 5, I discuss why human knowers will likely continue to have an edge over algorithms in creativity and intellectual autonomy. I end with a conclusion.
2 Key terms
Much of the philosophical discussion on biases and AI suffers from conceptual confusion. Terms are often used in idiosyncratic ways, and sometimes with meanings different from those in the relevant sciences (in this case, cognitive science, epistemology and computer science). I will be using the terms ‘bias’, ‘algorithm’, ‘virtue’ and ‘machine learning’ with the meanings stipulated here.
Bias: A tendency to form false or inaccurate beliefs.
Bias is a widely discussed topic in philosophy, psychology and cognitive science. In philosophical discussions, the term tends to have a negative connotation, similar to thinking in a distorted way or in a way not aimed at truth (e.g., [21]). Biases can be distortions in functions, like biased eyesight where the eye and/or brain misprocesses sensory input or produces inaccurate representations. Biases may also be character traits. Well-studied examples are racial or sexist biases, where people tend to form prejudiced beliefs about women or people of a different ethnicity. Given the link to distortion, biases are commonly regarded as something humans do well to get rid of. This account of ‘bias’ therefore has clear normative connotations. Being biased is something to be avoided or something one should strive to avoid if possible.
The term does not always have such negative connotations in psychology and cognitive science. Often the term indicates a mere tendency to think in a certain way which may be truth conducive or not (e.g., [6]). On these accounts, the term has no normative connotations. For example, humans have a bias to fear snakes [35]. Given that snakes are often dangerous, regarding them as dangerous is often accurate and the bias is therefore often truth conducive. Other tendencies are not truth conducive and can be regarded as distortions.
The remainder of this paper uses the term more in line with the first, normative definition. Merely having a tendency to form a certain belief or class of beliefs is not intrinsically epistemically virtuous or vicious. Distortions do make subjects prone towards falsehoods and are therefore vicious. Below, I argue that AI algorithms bear the promise of overcoming distortions in thinking in a more permanent way than humans can ever hope to do.
The definition of bias does raise an issue for algorithms. To exercise a bias, one must be able to form beliefs. A belief is commonly regarded as the mental state of taking something to be true. Few would concede that algorithms have mental states or that such systems have mentality in the first place. Attributing biases, let alone an ability to overcome biases, would therefore be a misnomer and the discussion a non-starter. The same problem haunts any discussion of attributing virtue to AI algorithms (see below).
The problem can be avoided by using a functionalist definition of ‘bias’ (and of ‘virtue’ and ‘belief’). On functionalist accounts, mental states and mental functions are defined by their functions rather than by their internal constitution [40]. Functionalists tend to make strong ontological claims. For many functionalists, a mental state or function is nothing but its function. Such strong ontological stances suffer from some well-known problems.Footnote 2 A strong ontological, functionalist view is not required for our purposes. The remainder of this paper aims instead for a more modest, conditional claim. Our guiding question is: ‘If ‘bias’, ‘virtue’ and ‘belief’ are cashed out in functional terms, can AI algorithms have or exercise them, and in what way?’ A positive answer leaves ample leeway to claim that biases (and virtues, beliefs) are not appropriately attributed to computer systems because functionalist definitions fall short. The discussion nonetheless remains interesting and can be of practical use in assessing the abilities and potentialities of algorithms. Even if the term ‘biased algorithm’ should be qualified, it does allow a distinction between worse and better algorithms and thus leaves room for epistemic evaluation.
What would a functionalist definition of ‘bias’ appropriate for computer systems be? Most algorithms rely on statistical informational models. The models can be of various forms, but usually govern how incoming data streams are processed or transformed to draw conclusions. A biased model is one that tends to produce false conclusions based on incoming data streams. It thus arrives at conclusions in a distorted way. Examples are algorithms that tend to deny mortgages to minorities. The algorithms in question likely connected irrelevant pieces of information (e.g., about ethnicity) to information regarding mortgages [43]. An unbiased algorithm does not tend to produce false conclusions in this way. An unbiased algorithm for mortgage allocation would only take relevant information (like income, savings, debt) into account and draw accurate conclusions based on that information.
A thorough account of an appropriate functional definition of ‘bias’ for AI algorithms would require more detail. A fuller account would specify how the false conclusions display a specific pattern, for example when more false conclusions are drawn about subjects from racial minorities. This, however, lies beyond the scope of this paper. Our provisional functionalist definition, a tendency to draw inaccurate conclusions from the relevant data, suffices for our purposes.
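To make this functional notion of bias concrete, the following minimal sketch (in Python, with entirely hypothetical data and column names) checks whether a model's false conclusions cluster in one group, as in the mortgage example above.

```python
import pandas as pd

# Hypothetical evaluation records for a mortgage model: each row notes the
# applicant's group, the model's conclusion, and the correct conclusion.
results = pd.DataFrame({
    "group":              ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved_by_model":  [1, 0, 1, 1, 0, 0, 1, 0],
    "should_be_approved": [1, 0, 1, 1, 1, 0, 1, 1],
})

# Functional notion of bias: a tendency to draw false conclusions, here
# measured as the error rate per group.
results["error"] = results["approved_by_model"] != results["should_be_approved"]
error_rates = results.groupby("group")["error"].mean()
print(error_rates)

# A large gap between groups indicates that the false conclusions form a
# pattern, as in the mortgage example discussed above.
print("error-rate gap:", error_rates.max() - error_rates.min())
```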
Algorithm:
A finite series of well-defined, computer-implementable instructions or rules to solve one or more computable problems.Footnote 3
The definition of algorithm encompasses a wide array of very different algorithms. Some are quite old; among the oldest are basic classification algorithms like the perceptron algorithm [54]. Others are of much later date, like random forest algorithms (e.g., [9]). Algorithms can have very different tasks. Some are search algorithms. Their goal is finding information (for example from existing texts or other data) and presenting it in a clear, structured way. A well-known example is the GPT-3 algorithm on which ChatGPT operates [17]. Some algorithms generate images or sounds. An example is DALL-E. Such algorithms are less interesting for our purpose. They do not produce new information or knowledge. Search algorithms merely summarize or represent existing information in a structured way. In some cases, the summary may allow for a clearer grasp of the information or allow for new insights. This is, however, not due to the algorithm.Footnote 4 Search algorithms do not rely on new observations and do not draw new conclusions. Algorithms that generate images do create something new, but it is not informationFootnote 5 or anything that can be considered knowledge.
Calculating and classifying are key functions of many algorithms. Commonly used calculation algorithms rely on regression analysis, as do some classification algorithms. Often multiple analyses are combined in convolutional neural networks. Other classification algorithms include k-nearest neighbors and decision trees. Unlike search algorithms, calculation and classification algorithms are in the business of producing new information or knowledge. Calculation algorithms calculate new information. Classification algorithms often rely on existing classification groups (e.g., classifying images as faces or not), but the classification results are usually new. Some classifying algorithms also create new classification groups.Footnote 6
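As an illustration of classification in this sense, the following minimal sketch (assuming Python with scikit-learn and its bundled iris dataset, chosen purely for convenience) trains a k-nearest-neighbors classifier and uses it to label unseen items:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# A small labeled dataset, split into a training and a test portion.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# k-nearest neighbors: a new item receives the label held by the majority
# of the k most similar training examples.
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# The labels assigned to unseen items are new classification results in the
# sense discussed above.
print(clf.predict(X_test[:5]))
print("accuracy on held-out items:", clf.score(X_test, y_test))
```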
Virtue: A morally or epistemically excellent character trait or quality.
This definition is similar to how the term is commonly used in discussions of virtue ethics (e.g., [31]), but slightly broadened to include epistemic virtues. The definition again raises worries for application to AI algorithms. Having a character trait, or simply having a character, is usually reserved for humans. Some argue that having virtues is therefore the sole prerogative of humans. Others extend the scope of virtuous behavior to some of the higher animals like apes, dolphins or possibly crows, but seem to exclude artificial things like robots or computers as well. Some classical accounts of what virtues are list a number of capacities a being must possess to be properly called virtuous.Footnote 7 The capacities and traits required for virtue are said to ensure (or make likely) that humans display virtuous action in a stable way. AI algorithms do not have all the required capacities and traits.
Regardless of these worries, there is some discussion on whether robots could be virtuous. For example, Peeters and Haselager suggest the possibility that robots may exhibit virtues (and vices) through their own behavior. They add that the extent to which robots could be virtuous depends on how well they resemble humans [49]. Others see no problem in programming moral virtues like honesty, obedience, loyalty and righteousness into robots (e.g., [41]). Programming virtues would boil down to programming behavioral responses to situations that resemble human virtuous behavior. An obedient robot would be a robot that follows orders, and an honest robot one that does not assert falsehoods. Critics will counter that such responses are merely cases of imitation behavior and are not the result of a virtuous character. Having a virtuous character requires more, like rational and emotional capacities.
The examples do point the way towards an applicable functionalist definition of virtue. Programming behavioral responses that resemble human virtuous action may allow us to speak of robot virtuous behavior. If we leave out the requirement for specific mental states or other mental capacities, we may end up with something that resembles human virtue closely enough. Whether this is sufficient to merit the label ‘virtue’ is then not the most interesting question.Footnote 8 What matters more for practical purposes is whether the robot behavior can be such that it imitates human virtuous behavior in a stable, predictable way. In that sense, talk of ‘robot virtue’ can be a shorthand for ‘robot behavior that resembles human virtuous action in a stable way’. The stability is not due to mental states or capacities but due to the way robots are programmed (and therefore perhaps to the programmers by extension).
Can this be applied to AI algorithms as well? Following Peeters and Haselager, one could argue that such algorithms resemble humans less well than robots do. Algorithms do not display motor movement, do not produce sounds and are not even clearly visible. They can nonetheless perform some actions in ways similar to humans, like calculation and classification. This is sufficient to assess whether their behavior in these actions can be programmed in such a way that it resembles human virtuous action in a stable way. It would then make no sense to discuss virtues that require very different actions, like talking about a morally courageous algorithm. For other virtues, like the main epistemic virtues (open-mindedness, curiosity, diligence), it makes more sense.
One issue with our functionalist definition is that it may be vague. It is not immediately clear when behavior resembles human virtuous action to a satisfactory degree or when the behavior is stable. A more thorough account would likely need to quantify the resemblance in some way. This, however, lies beyond the scope of this paper.
The discussion of the key terms provides clarity and limits the discussion to some extent. What follows has no ramifications for algorithms that do not allow systems to perform tasks without explicit instructions, nor does it have implications for algorithms that do not generate information. What follows also does not address the question whether computers or algorithms can be regarded as virtuous on stronger accounts of ‘virtue’ that require advanced rational or emotional capacities.
3 Knowing virtuously
The goal of this paper is to assess epistemic virtues (also called intellectual virtues) in AI algorithms. Epistemic virtues are like moral virtues insofar as they are excellent traits or qualities. They differ regarding what they are aimed at. Where moral virtues are aimed at ethically good behavior, epistemic virtues are aimed at epistemically good behavior. This section gives a broad overview of two schools of thought in recent epistemology on the nature of epistemic virtues, virtue reliabilism and virtue responsibilism, and discusses how algorithm-virtues can be cashed out in both schools.
A first group, virtue reliabilists, sees epistemic virtues as stable and reliable cognitive faculties or powers, like vision or introspection. A second group, virtue responsibilists, regards epistemic virtues as character traits, like fair-mindedness or courage [2]. In this section, I give a brief overview of both positions and argue that they can be extended to apply to algorithms.Footnote 9
John Greco notes that both virtue reliabilists and virtue responsibilists agree that epistemology is a normative discipline. When we say that someone knows something or is rational in believing something, we do not just make neutral observations but make value judgments. Being rational or knowing is deemed better than being irrational or not knowing. A focus on intellectual virtues highlights this normative dimension of epistemology. It also allows us to consider when a subject deserves credit for knowledge or belief [27]. By consequence, assessing whether someone (or something) exercises an intellectual virtue allows us to make a normative judgment. It also allows us to compare when someone (or something) is epistemically superior. This will be of importance to compare AI algorithms to human knowers.
3.1 Virtue reliabilism
The foremost and earliest defender of virtue reliabilism, Ernest Sosa, defines intellectual virtue as:
“[A] quality bound to help maximize one's surplus of truth over error" ([59]: 225).
The definition resembles our general definition of ‘virtue’ (see above) and applies it to the epistemic realm. It highlights that the goal of epistemic virtues is not maximizing moral good or good action, as is the case for moral virtues, but rather maximizing epistemic good. Virtue reliabilists focus on qualities similar to human (cognitive) functions, like visual qualities or reasoning qualities.
Turri et al. argue that virtue reliabilism is best understood as a descendant from earlier epistemological theories like process reliabilism [62]. Like Sosa, process reliabilists focused on qualities. They note that some processes yield a higher ratio of true beliefs to false beliefs and can therefore be regarded as reliable. Clear examples are vision at close range or careful reasoning. Reliabilists continue to argue that a belief is justified or constitutes knowledge if (and only if) it is produced by a reliable process (cf. [26]). Virtue reliabilists differ by not claiming that justification or knowledge depends solely on the process used. Instead, they argue that the degree of knowledge tracking (or truth tracking) of a process determines how intellectually virtuous a subject is. Vision or reasoning only yield justified belief or knowledge if used in a good way. Reasoning only leads to good beliefs if not used in a sloppy way. By distinguishing good and bad ways of applying various belief-forming processes, virtue reliabilists can provide a more precise account of when processes justify beliefs.
One defender of virtue reliabilism, John Greco, argues that intellectual virtues should be understood as cognitive abilities or excellences [28]. Any ability or excellence points to a good practice that yields considerable success, for one is only able to do something when one can reliably achieve it. In this regard, intellectual virtues, like good vision, are cognitive functions that yield a considerable degree of success for subjects.
As Jason Baehr notes, Sosa connects a virtue’s use and value closely to the environment in which it operates. For example, introspection can only be regarded as an intellectual virtue when it produces beliefs about one’s own mental states [2]. Introspection is useless if applied to form beliefs about the external world. It makes little sense to assess virtues in isolation, devoid of the context wherein they produce beliefs. A trait that is an intellectual virtue in one context need not be a virtue in a different context.
Sosa argues that justification stems from the exercise of intellectual virtue. A belief is therefore justified if, and only if, it results from the exercise of one or more intellectual virtues. The focus on exercising belief-forming processes well allows a connection to (likely) truth, which is a standard requirement for justification.
3.2 Virtue reliabilism and algorithms
It appears as if AI algorithms can have epistemic or intellectual virtues as defined by virtue reliabilists quite straightforwardly. Like humans, algorithms have qualities or abilities that help them maximize their surplus of truth over error. Algorithm qualities are, however, different from those of humans. Algorithms cannot introspect and do not have the same visual systems humans have. They are, however, able to process visual input from cameras. They are also able to break down visual input from pictures into meaningful information, and they can do so in good and bad ways. For example, AI algorithms are getting increasingly better at recognizing human faces. They do so by breaking down pictures into dimensions like the distance between the nose and ears or the distance between a person’s eyes. That information is stored as numeric data and can be used to perform analyses.
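A minimal sketch of this idea, assuming hypothetical landmark coordinates have already been extracted from an image (the coordinates and the choice of features are illustrative, not taken from any particular face-recognition system):

```python
import numpy as np

# Hypothetical landmark coordinates (in pixels) extracted from one face image.
landmarks = {
    "left_eye":  np.array([120.0, 95.0]),
    "right_eye": np.array([180.0, 96.0]),
    "nose_tip":  np.array([150.0, 140.0]),
    "left_ear":  np.array([85.0, 120.0]),
}

def distance(a, b):
    """Euclidean distance between two landmark positions."""
    return float(np.linalg.norm(a - b))

# A simple numeric feature vector describing the face; vectors like this can
# be stored and compared across images to perform analyses.
features = [
    distance(landmarks["left_eye"], landmarks["right_eye"]),   # eye distance
    distance(landmarks["nose_tip"], landmarks["left_ear"]),    # nose-to-ear distance
]
print(features)
```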
Sosa identifies reasoning as a clear example of an intellectual virtue. One form of reasoning that algorithms are particularly good at is calculation. For example, the Monte Carlo tree search algorithm, which was used to beat human experts in the game of Go, is able to calculate possible moves faster than human experts can [56]. Algorithms are also less error-prone than humans with regard to mathematical calculation.
Like human subjects, AI algorithms can exercise belief-forming functions in good and bad ways. A poorly trained algorithm will not be good at classification tasks. Even a well-trained algorithm will perform well in some circumstances and contexts and poorly in others. Therefore, virtuous and non-virtuous ways of forming beliefs can be distinguished in AI algorithms as well.
3.3 Virtue responsibilism
Virtue responsibilists do not see epistemic virtues as truth conducive cognitive faculties or belief-forming processes but as traits of character. Examples of such traits of character are attentiveness, intellectual courage, carefulness and thoroughness [2].Footnote 10
Zagzebski defines intellectual virtue as:
"a deep and enduring acquired excellence of a person" ([67]: 137).
According to Zagzebski, each intellectual virtue has a motivation and a success component. For example, being open-minded motivates a subject to do research or to explore new possibilities. An open-minded subject also has greater odds at finding out new truths and correcting some of her errors.Footnote 11
As to the motivation component, Zagzebski argues that all intellectual virtues have the same foundational motivation, the motivation for knowledge.Footnote 12 When people suffer from intellectual vices, like lethargy, closed-mindedness or dogmatism, they lack the motivation for truth. Having a motivation for truth requires a certain attitude to act in a certain way. Zagzebski claims that the proper attitudes are acquired by habituation [67].
Having a mere motivation for truth is not enough. To exercise intellectual virtue, a subject needs to have a certain degree of success in her epistemic endeavors. Being open-minded is an intellectual virtue partially because it leads to more true beliefs and fewer falsehoods [67].
3.4 Virtue responsibilism and algorithms
Can AI algorithms exercise intellectual virtues as they are regarded in virtue responsibilism? I argued above that algorithms can use processes that are usually truth conducive. Just as they can use visual perception, they can also apply the processes associated with open-mindedness, carefulness or thoroughness. In line with our functionalist proposal above, a comparison with humans should focus on processes that are functionally similar to the human processes connected to intellectual virtues. For example, open-mindedness is closely connected to behavior where subjects look at multiple sources of information. An algorithm can behave in an open-minded way by considering multiple sources of information. Experts in machine learning have also developed procedures that ‘punish’ algorithms for being too certain or for converging to a solution too fast.Footnote 13 In this way, AI algorithms can enjoy the benefits of being open-minded with regard to truth-conduciveness.
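One family of such procedures is regularization (see the footnote). The sketch below, assuming Python with scikit-learn and purely synthetic data, shows how a strong penalty on model weights keeps a classifier's probability estimates less extreme, a rough analogue of discouraging overconfidence:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Purely synthetic data, used only for illustration.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small C means strong regularization: extreme weights are penalized,
# which keeps the model's predicted probabilities less extreme -- a rough
# analogue of 'punishing' the algorithm for being too certain too quickly.
cautious = LogisticRegression(C=0.01).fit(X_train, y_train)
confident = LogisticRegression(C=100.0).fit(X_train, y_train)

for name, model in [("strongly regularized", cautious), ("weakly regularized", confident)]:
    top_probs = model.predict_proba(X_test).max(axis=1)
    print(name,
          "| mean top-class probability:", round(top_probs.mean(), 3),
          "| accuracy:", round(model.score(X_test, y_test), 3))
```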
AI algorithms can also behave in a way similar to human carefulness or thoroughness. Algorithms can take into account many sources of data or information. Some algorithms are indeed ‘fed’ huge datasets with information to be trained better. Algorithms can also make use of diverse sources of data, like data from multiple cultures or multiple perspectives.
Whether algorithms can be properly motivated for truth is less straightforward. Having a motivation is often regarded as having a certain mental state. In this regard, someone who is motivated to gain true beliefs would have the belief that gaining true beliefs is important and the desire to find out as many truths as she can. There is some discussion on whether computers will ever be able to be conscious or have mental states like beliefs and desires (see for example: [16]). It is, however, clear that in their current state, computers do not have them (yet). If having a motivation does not require a mental state, but merely having a reason to act in a certain way, computers and their algorithms can be regarded as motivated. The reason that algorithms have for gaining truth, is that they are programmed to do so.Footnote 14
4 AI algorithms as more virtuous: overcoming biases
Having a grasp of intellectual virtue and how it can apply to AI algorithms, we can now compare AI algorithms to humans. In what follows, I argue that AI algorithms have more resources available to overcome biases in a stable, enduring way. This holds for biases relevant to both reliabilist and responsibilist views of intellectual virtue. I present two examples of biases and show how they affect human intellectual virtue. I also give examples of how some biases are hard, if not impossible, for humans to overcome, while AI algorithms have more sources of mitigation available.
4.1 Visual biases
It is common knowledge that humans suffer from perceptual biases. People tend to see inaccurately at great distance and sometimes have visual hallucinations. On a virtue reliabilist account, visual biases reduce the truth conduciveness of vision and therefore make it less virtuous. A closer look at visual bias in this section reveals why humans suffer from perceptual biases and why these are very hard, if not impossible, to overcome.
Orquin et al. note that human attention is a scarce resource. For this reason, humans are usually careful where to focus their gaze. Visual attention can be measured by means of eye tracking.Footnote 15 Although humans experience their environment as having high acuity, only a small part of their visual field is seen clearly. The clear part is called the ‘fovea’. Humans can reposition the fovea to inspect an object or location of interest. Usually, the positioning of the fovea is closely connected to attention. When humans visually inspect an object, they position the eyes in such a way that the object is in the fovea of both eyes. This position is retained for about 200 to 400 ms. This is known as a ‘fixation’. After a fixation, humans reposition their eyes to another object. Repositioning of this kind is called a ‘saccade’. There is little or no visual processing during saccades. Therefore, by measuring the position of fixations, eye-tracking can measure what information is processed and what information is not [47].
Because of their visual limitations, humans need to be selective about which information they attend to (cf. [57]). Acting in this way is often rational. For example, in some (North American) supermarkets consumers can see up to four hundred products within a single product category. In these situations, it is often reasonable to ignore a large portion of products. Often, however, there are environmental factors that affect vision which are unrelated to the goals of subjects. For example, visual perception is impacted by the position or ordering of information, the size and color of information and the predictability of where information will appear. A flickering banner can make humans fixate and ignore the rest of the environment. Objects with changing colors or that have a sharp contrast with their environment can have the same effect. Humans also have difficulties with higher set sizes. If a set grows larger, humans tend to fixate on a smaller portion of the set. Humans also tend to fixate on larger objects. Another environmental feature that affects attention is emotional salience. Emotional stimuli like angry faces attract more attention than emotionally neutral stimuli. The effect is higher for negative emotional stimuli than for positive ones [47].
Orquin et al. argue that many of the human limitations in visual perception are unavoidable [47]. Structural limitations in human biology (limitations of attention and limited acuity) prevent humans from achieving better vision. While some humans have better visual abilities than others, all humans must rely on shifting their foveas to focus and all have limited attention.
4.2 Visual bias in AI algorithms
I noted that AI algorithms can make use of something similar to visual perception to gain truths. In this section, I discuss how an algorithm can recognize images and how it can avoid the pitfalls of biased human visual perception.
Most AI algorithms used for image recognition are deep neural networks. Neural networks are algorithms inspired by biological neural networks. What distinguishes neural networks from other machine learning algorithms is that they are not programmed to perform specific tasks but have a certain degree of generality. In image recognition, generality allows neural networks to learn features of images without explicitly being instructed how to do this. Neural networks have a number of layers. Input is first processed in an input layer, passed on to intermediary layers that perform calculations to learn aspects of features of the input,Footnote 16 and finally passed to an output layer. Deep learning neural networks have many intermediary layers.
Although how image recognition algorithms learn in the intermediary layers is largely opaque to users, we have some idea of how algorithms can recognize images.Footnote 17 The algorithm is fed a large set of labeled images.Footnote 18 For example, to train an algorithm to recognize animals in images, a dataset with images of both animals and non-animals is needed. The dataset must also contain labels indicating which images contain animals and which do not. The images are converted to pixel space in the first layer. This information is passed to the intermediary layers, where the algorithm learns to identify patterns in pixel space that correspond to features of the image. For example, it could learn to identify tails in a set of animal images.Footnote 19 By considering which features correspond to which labels, the algorithm is able to learn which features are indicative of a label. When the algorithm has been trained, it can apply what it has learned to new images and classify them according to the label.
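A compact sketch of this training process, assuming Python with TensorFlow/Keras and the publicly available MNIST digit dataset mentioned in the footnotes (the layer sizes and the single training epoch are arbitrary illustrative choices):

```python
import tensorflow as tf

# Load a publicly available labeled image dataset (handwritten digits) and
# convert the images to normalized pixel arrays.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Input layer -> intermediary ('hidden') layers that learn pixel patterns ->
# output layer with one unit per label.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learns local pixel patterns
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training: the weights are adjusted so that learned features become
# predictive of the labels in the training set.
model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)

# After training, the model classifies new, unseen images.
print(model.evaluate(x_test, y_test))
```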
Unlike humans, AI algorithms have no structural limitations in attention and focus. Algorithms do not rely on shifting focus on parts of images or the environment but rely on numerical translations of the total image. The only limit in assessing the numerical translations is the amount of processing power available. Processing power of computers has increased enormously over the past decades and there is no reason to believe this expansion will end soon.Footnote 20 Human processing power will not progress at the same speed, if at all.
With the current processing power available, AI algorithms have already surpassed humans in a large number of visual tasks. In 2012, an algorithm was able to achieve greater accuracy in recognizing German traffic signs than humans could achieve [14]. Algorithms, however, still fall behind in recognizing other images. A lot of progress can and probably will be made. Whether algorithms will outperform humans in most visual recognition tasks in the future is interesting, but I want to focus on a different advantage algorithms have over humans. Contrary to humans, algorithms can be trained to avoid being overly attentive to the position or ordering of information, the size and color of information and the predictability of where information will appear. We saw how humans suffer from biases in this regard and that these biases are very hard to overcome. Although algorithms also suffer from biases that are unrelated to their goals, they can be improved to overcome them. For example, Mariya Yao compared a number of algorithms on how well they were able to classify images as chihuahuas or muffins. Some algorithms made many mistakes. Clearly, those algorithms had come to regard three large black dots with brown areas in between as indicative of muffins (or chihuahuas). Some of the algorithms, however, were able to distinguish muffins and chihuahuas correctly.Footnote 21 This shows that although algorithms can suffer from visual biases, they can be trained to overcome them.
4.3 Stereotype biases
Humans also suffer from well-documented biases that hamper their intellectual character traits (especially intellectual fairness). Well-known examples are the self-serving bias [13], and the compassion fade bias.Footnote 22 In this section, I discuss another bias that often prevents humans from exercising intellectual virtue, implicit stereotype bias.
Thinking in a (stereotypically) biased way signals a failure to meet the virtue of intellectual fairness (the propensity to treat relevant points of view alike). By thinking stereotypically, subjects treat members of outgroups in an unwarrantedly different way. They also tend to form false beliefs about the superiority of their own group. Biased subjects also tend to lend higher credence to testimony from the ingroup, even when the testimony is demonstrably false (see: [22]).
A large number of cognitive researchers argue that stereotyping is an implicit feature of human social categorization.Footnote 23 Converging evidence strongly suggests humans are disposed to link certain concepts unconsciously or automatically.Footnote 24 For example, ‘doctor’ has a strong intuitive link to ‘nurse’. The tendency easily creates stereotypical associations between racial categories and character traits. Humans can come to associate ‘being black’ or ‘being white’ with ‘being aggressive’ quite easily [29]. Such biases often take hold from a very young age onwards. Racial and sexist biases are present at the age of 3–5 [58]. Some research suggests that even infants tend to display a preference for faces whose skin color matches that of their primary caregivers [64]. Although this varies with exposure to racial differences, it nonetheless shows how easily stereotypical biases take hold of humans.
There is some discussion over whether stereotypical thinking is the result of innate tendencies or is instilled by culture [29]. In any case, a number of authors argue that stereotypical thinking is very hard, if not impossible, to overcome. Already at a young age, interventions like intergroup contact and education appear to have limited effects.Footnote 25 John Bargh argues that the evidence for control of stereotypical thinking is weak and problematic. He argues that when a stereotype is so entrenched that it is activated automatically, there is little that can be done to stop or control it. Even humans who disavow stereotypes display facial expressions, tones of voice and reactions in line with stereotypical thinking. The only way to eradicate stereotypical thinking, according to Bargh, is to prevent stereotypes from ever getting a hold on people [3]. Doing so, however, is very difficult given the prevalence of stereotypes and exposure to them. The main intervention used to overcome stereotypical thinking (especially racial stereotypical thinking) is implicit bias training, where subjects are taught to become aware of their unconscious biases. Although few, if any, longitudinal studies have been conducted,Footnote 26 studies into the effects are mixed at best. Most studies do not report a statistically significant effect of such trainings.Footnote 27 Some studies do note an effect, but suggest that achieving success requires substantial energy and effort, like changing the overall organizational culture and continuous interventions [5, 20].
Patricia Devine argues that stereotype bias can be overcome, but that it requires a large amount of intention, attention and time. To overcome stereotype bias an individual must inhibit automatically activated stereotypes and intentionally replace such activation with non-stereotype ideas [19]. Devine et al. made a similar claim [21]. Devine’s response shows that even if stereotype bias can be overcome, it takes a lot of effort and time. On many occasions, humans likely will not make the effort or lack the time and slip back into stereotypical thinking.
This short overview strongly suggests that implicit association stereotypes get instilled easily in human minds. When they do, they are very hard to overcome. As a result, many people enduringly suffer from stereotypical thought and the associated intellectual unfairness, which in turn impedes them from exercising intellectual virtues like intellectual fairness.
I will now consider how well AI algorithms fare with regard to stereotype biases. Recently, the role of machine learning algorithms in racial profiling and the subsequent discrimination has been getting a lot of bad press. For example, a study by Joy Buolamwini and Timnit Gebru showed that some algorithms used by major commercial companies had a far higher error rate in classifying dark-skinned females than in classifying lighter-skinned males (34.7% versus 0.8%) [11].Footnote 28
The example gives the impression that algorithms often fail to avoid stereotype biases and are therefore not intellectually fair. Such examples, however, do not show that algorithms are more biased than human subjects are.Footnote 29 We noted above that humans also display stereotype biases regularly. At this point, it is not clear whether AI algorithms do better or worse than humans would when performing the same classification tasks. Assessing this would require an experimentally sound comparison between the performances of humans and algorithms. To my knowledge, this has not been done. The huge amount of data algorithms process also makes comparison difficult, because performing the same task is impossible for humans. Epistemic vice in AI algorithms does appear to have broader implications than human epistemic vice. Most humans who suffer from stereotypical thinking only affect the people in their vicinityFootnote 30 whereas biased algorithms tend to affect many more.
There is good reason to think that algorithms are better at avoiding stereotype biases in a stable way, or at least can be. Frederik Zuiderveen Borgesius argues that stereotypical classification by algorithms is often due to the training data. Most AI algorithms make predictions by applying a statistical model fitted on training data. For classifying algorithms, training data contains a set of examples and the correct classifying label. For example, an algorithm used to review school applications is trained on a set of examples with all the required information (like age, profession, city, etc.) and the classifying label (i.e., whether an application was approved). By learning to what extent the information predicts a ‘yes’ or ‘no’ on the application, the algorithm can classify new information. Zuiderveen Borgesius gives the example of an algorithm that discriminated against women and people with an immigrant background in school applications. It turned out that the algorithm was trained on data provided by people who were biased against both groups [8]. Failure to avoid stereotyping was thus brought about by poor training data provided by biased humans.
If algorithms suffer from stereotype bias because of poor training data, a solution is obvious. By having algorithms train on cleaned, unbiased training sets, continued bias can be avoided. Many companies and institutions are taking steps to do precisely that. A straightforward way to avoid stereotype bias is removing variables related to race, nationality or gender from a dataset. An alternative is adding additional data entries or scrambling variables related to race, nationality or gender. Interventions can also be made to alter the weighting of some variables (see the examples below). Cleaning of datasets can happen manually or algorithmically [61].
Others suggest that algorithms produce biased models because of mean imputation or a bad selection of the variables used in models [44]. Mean imputation is used to handle missing data. When one or more values for variables are missing, the population mean is inserted. An obvious solution here is not relying on mean imputation and relying solely on complete records. This will of course make data collection more strenuous, but it is certainly possible. Bad selection of variables (e.g., including variables sensitive to racial or gender differences) can be solved as well, in this case by carefully selecting which variables to use.
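The sketch below, assuming Python with pandas and entirely made-up records and column names, illustrates the two remedies just mentioned: keeping only complete records instead of imputing means, and dropping a sensitive variable before training:

```python
import pandas as pd

# Hypothetical training records for a school-application model; the column
# names and values are made up for illustration.
df = pd.DataFrame({
    "income":     [30000, None, 52000, 61000],
    "test_score": [71, 68, None, 90],
    "gender":     ["f", "m", "f", "m"],
    "admitted":   [1, 0, 1, 1],
})

# Remedy 1: avoid mean imputation by keeping only complete records.
complete_cases = df.dropna()

# Remedy 2: careful variable selection, here dropping a sensitive variable
# before the remaining columns are used as training features.
features = complete_cases.drop(columns=["gender", "admitted"])
labels = complete_cases["admitted"]
print(features)
```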
Biases may also arise when certain variables (like race or gender) get prioritized over others or when certain groups are overrepresented [23]. As a result, the training data does not adequately reflect the population for which the model is used. An often-used remedy for this problem is resampling of data, where repeated samples are drawn from the original data (see below). A solution for the prioritization of sensitive variables is adjusting the weighting of these variables (see also below).
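As an illustration of these two remedies, the following sketch (again Python with pandas, on a deliberately tiny, fabricated dataset) computes per-record weights so that an underrepresented group counts equally during fitting, and oversamples that group until the data are balanced:

```python
import pandas as pd

# A deliberately tiny, fabricated dataset in which group B is underrepresented.
df = pd.DataFrame({
    "group": ["A"] * 8 + ["B"] * 2,
    "label": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
})

# Reweighting: give each record a weight inversely proportional to its group's
# share, so both groups contribute equally during model fitting.
group_share = df["group"].value_counts(normalize=True)
df["weight"] = df["group"].map(1.0 / group_share)

# Resampling: draw extra samples (with replacement) from the minority group
# until both groups are equally represented.
minority = df[df["group"] == "B"]
balanced = pd.concat([df, minority.sample(n=6, replace=True, random_state=0)])
print(balanced["group"].value_counts())
```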
For cleaning a training dataset, it does not matter that stereotype bias is hard for humans to overcome in a stable way. While humans appear to have great difficulty avoiding stereotype bias in a number of situations, they have fewer difficulties avoiding it when building a training set. Building a training set does not rely on fast cognition and usually does not put humans under cognitive constraints. Training sets can also be crosschecked by multiple people. More effort in building training sets can aid algorithms in overcoming stereotypical bias. Because AI algorithms do not slip back into other modes of thinking, the result can be lasting.
Some interventions in algorithms have led to fairer models. Huang et al. survey eight studies where preprocessing led to mitigation of racial bias in artificial intelligence models [30]. Other examples are available as well. A model for predictions of clinical outcomes was made less biased by group mitigation [24]. Burlina et al. achieved a fairer model by fixing imbalances in the training data [12]. Samorani and Blunt updated a model for appointment setting in healthcare by deleting variables related to race and social standing [55]. Park et al. made use of reweighting to update a model for mental health service utilization for patients suffering from postpartum depression [48]. Reeves et al. used resampling to successfully make a predictive model for suicide deaths fairer [53].Footnote 31
Claiming that AI algorithms have more capacities to overcome stereotypical thinking remains speculative to some extent. Many algorithms continue to be biased at this point. In many cases no interventions are made to make algorithms fairer. Building algorithms that are less biased requires additional efforts. AI algorithms do have more and better resources available to overcome such biases than humans do. This strongly suggests that algorithms can overcome stereotypical thinking in a stable way if programmed in the right way. Humans do not have the same prospects for change.
5 Humans as more virtuous: creative thinking and intellectual autonomy
Above, I argued that AI algorithms bear the promise of being more epistemically virtuous than humans with regard to visual perception and stereotypical thinking. Questions remain whether these (potentially) better abilities can be generalized to other cognitive tasks, like reasoningFootnote 32 and character traits like open-mindedness or conscientiousness. Answering all of these questions lies beyond the scope of this paper. In the remainder of this paper, I focus on one intellectual capacity and one responsibilist virtue where humans appear to have a lasting edge: creativity and intellectual autonomy.
Paisley Livingstone defines ‘creativity’ as: “originality in the devising of an effective means to some end” ([42]: 108: emphasis added). Mike Beaney distinguishes between creativity as forming new concepts and a more radical form of creativity aimed at ‘development of new conceptual frameworks’ ([4]: 275). Maggie Boden sees creativity as the creation of new ‘conceptual spaces’ ([7]: 86). On all accounts, creativity involves the creation of something new or being innovative in some way. Often this takes the form of producing new ideas.Footnote 33 Creative newness goes beyond merely coming up with new information or beliefs.
It is clear that humans are (at least sometimes) creative thinkers. There are numerous examples of people producing (radically) new ideas in philosophy (e.g. Charles Sanders Peirce’s pragmatism) and science (e.g. Erwin Schrodinger’s equation for the quantum wave function). While creativity may be rare in comparison to other cognitive functions or intellectual virtues, its occurrence in humans cannot be denied. Creativity may be limited to a subset of the human population.
AI algorithms, by contrast, are far less creative. Most algorithms merely do what they were programmed to do and show no inclination towards thinking outside of the box or performing tasks in different ways. More advanced algorithms, like those resulting from deep neural networks, may be more creative in finding solutions to problems or tasks, but here creativity remains very limited as well. Scenarios where computers develop some kind of independent thinking or new ideas remain the domain of science fiction.
Above, we were not merely interested in current AI algorithms but also in whether future, improved algorithms will transcend human limitations. In the case of creativity, AI algorithms appear to be on a worse footing than humans because of structural limitations. AI algorithms rely on mathematical models that often function deterministically. This shows in the tendency of algorithms to get stuck in local optima. Improving models to meet new challenges usually requires input from human programmers, who often rely on non-deterministic, non-mathematical thinking. These resources appear to be unavailable to AI algorithms themselves.Footnote 34
While AI algorithms may very well exceed humans on some responsibilist virtues, like intellectual fairness, they will very likely continue to fare worse on others. One example where humans will very likely continue to have an edge is intellectual autonomy. Autonomy in general is sometimes equated with self-governance or being able to set one’s own course of action. Autonomy is not the same as independence, since autonomous persons may rely on outside help or support. Autonomy does mean that the subject is in the driving seat and not determined by outside forces [68]. Autonomy goes beyond, but requires, agency: the ability to act on the basis of intentions or beliefs [1]. Autonomy requires an ability to decide how to act on the basis of intentions or beliefs.
Intellectual autonomy is self-governance with regard to one’s epistemic life. Elizabeth Fricker describes the intellectually autonomous subject as follows: "This ideal type [i.e. the intellectually autonomous subject] relies on no one else for any of her knowledge. Thus she takes no one else's word for anything, but accepts only what she has found out for herself, relying only on her own cognitive faculties and investigative and inferential powers" ([25]: 225). Linda Zagzebski takes a somewhat weaker account, on which an intellectually autonomous person need not be fully self-reliant. She may submit herself to reason or God as sources of authority. She may, however, not outsource her epistemic evaluations or slavishly follow the dictates of any source of authority [68].
AI algorithms will likely continue to be less intellectually autonomous than humans. AI algorithms cannot program themselves. While some can achieve a level of autonomy by finding new patterns in data or building more optimal models, they critically rely on structural features programmed by humans. All algorithms are given their basic formats at the outset, and this determines how they produce new information. Less advanced algorithms follow a fixed set of rules. More advanced algorithms (e.g., neural networks) can adjust to new input more readily and thereby appear more autonomous. They are, however, still heavily constrained by the goals and boundaries set by programmers.
One may object that many human thinkers are also heavily constrained in their thinking. Many follow patterns of thought furnished by their upbringing and education. Many may also struggle to adapt to new ways of thinking. However, humans are at the very least capable of thinking more autonomously. They are less rigidly constrained in how they draw conclusions. Human intelligence also seems less a matter of following rules than computer intelligence is.
A possible rejoinder in favor of AI algorithms is claiming that defining ‘creativity’ as we did (i.e., as producing new information or finding new conceptual spaces) stacks the deck in favor of humans. The same may hold if ‘intellectual autonomy’ is defined in terms of governing one’s own epistemic life. A different account of both virtues may suit AI algorithms better. Such an account may also be of a more functionalist nature, focusing on functions that AI algorithms can exert. It is, however, difficult to see how both virtues can be defined in a way favorable to AI algorithms without losing their central meaning. Any account of ‘creativity’ must at minimum encompass creating substantial newness or doing things in a substantially new way. An account of intellectual autonomy must at least involve a significant degree of freedom or taking sole responsibility for one’s epistemic life. AI algorithms that can do either seem to exist only in science fiction stories. Some popular movies feature creative computers developing new ideas or novel behavior.Footnote 35 A larger number of movies feature computer systems going rogue and acting on their own behalf, going against their programmers’ intentions.Footnote 36 It is much harder to find any real-life examples of an AI algorithm displaying behavior close to that.
6 Concluding remarks
In this paper, I argued that AI algorithms have advantages over human knowers that make them potentially more intellectually virtuous with regard to some virtues. Algorithms have the potential of developing better epistemic faculties, like vision. In some cases, like calculation, algorithms have already surpassed human knowers.
I also argued that algorithms are less susceptible to intellectual vices like intellectual unfairness as manifested in stereotypical thinking. While algorithms are not always intellectually virtuous in this regard today, they can be improved to perform better, and some examples already exist. Human knowers, by contrast, have much more difficulty overcoming stereotypical biases.
In both cases, AI algorithms have an edge over humans because humans suffer from more structural limitations. Humans cannot improve their sense perception much because of biological limitations. Humans also have great difficulty overcoming biases like stereotypical thinking because of cognitive limitations.
While AI algorithms have a lot of potential to surpass human knowers in many intellectual virtues, they likely will not with regard to creativity and intellectual autonomy. Structural limitations leave AI algorithms largely stuck in mathematical, deterministic ways of thinking. Humans are not limited to these. AI algorithms will also remain determined by the goals and rules set by their programmers, hampering their intellectual autonomy. The results of the comparison are thus mixed. This signals that straightforward claims like 'AI algorithms will exceed human intelligence' or 'AI algorithms will always fall short in comparison to humans' have elements of truth but are too shallow.
Data availability
No datasets were generated or analysed during the current study.
Notes
The quote was part of the conclusion of a longer answer where ChatGPT gave some domains where AI-algorithms do exceed human intelligence, like in playing chess or Go. (prompted on 11/10/2024).
See: [52] for a discussion of some problems with functionalism.
This definition is drawn from [45]
There are cases where algorithms like ChatGPT do create new information. For example, ChatGPT will produce references to non-existing scholarly work if prompted to do so. This information is, however, false and can therefore not constitute new knowledge.
The images are generated as code which is interpreted by software. The code can be considered information which is new. The code is, however, merely a means to an end and the result (i.e. the image) is not knowledge.
Algorithms of this kind are usually said to perform unsupervised learning.
For example, according to Aristotle a subject needs rational and emotional capacities (among other things) to develop virtue. See: [37]: Sect. 2).
Some argue decisively in the negative. For example Constantinescu and Crisp argue that robots merely behave in a virtuous way and cannot genuinely be virtuous [15].
My overview is largely based on [2].
A great deal of contemporary discussion in responsibilist virtue epistemology consists of detailed examination of intellectual virtues. The discussion in the next section draws on some of those.
Zagzebski adds that there are some intellectual virtues that do not have both components, like wisdom and integrity ([67]: 165).
Usually, motivation for knowledge means motivation for the agent to possess knowledge herself. In some cases, like originality or inventiveness, virtues are related to the motivation of advancing knowledge for humanity ([67]: 167).
Examples are regularization or other measures to prevent overfitting.
A more technical reason, which many algorithms have for getting to truth, is that they aim to minimize a cost function or aim to achieve higher accuracy.
Orquin et al. note that eye movement and attention are not identical, although there is a close connection. Sometimes there is decoupling, when humans maintain a gaze in one location while they concentrate on visual stimuli in another. This decoupling is, however, rare [47].
The intermediary layers are often called ‘hidden layers’ because it is not clear to the user what the algorithm is doing in these layers.
For a discussion, see: [69].
Often image recognition algorithms make use of publicly available datasets like the MNIST dataset [18].
It is important to note that the algorithm does not learn that the pattern is a tail (unless the image dataset contains labels for which images contain tails). It can identify a returning pattern that is indicative for the different kinds of images in the dataset.
One reason the expansion might end sooner than later is ecological considerations. Training complex algorithms (like those for image recognition) requires a very large amount of energy and associated carbon emissions. Problems like climate change and pollution might put restrictions on training algorithms in the near future.
See: [66].
Research shows that as threats or harm increase, compassion and therefore societal concern decreases [63].
For an overview see: [29].
In an overview article of multiple studies, Skinner and Meltzoff conclude that there is mixed evidence for the effects of parental intergroup messages, intergroup contact and intergroup education on racial thinking. Some other possible interventions, i.e. perceived positive cooperative contact and reading about imagined contact with outgroup members, are more clearly associated with some effects on racial bias [58]. All such interventions require considerable effort, and it is not clear how long the effects last.
For a discussion, see: [46].
For more examples of biased AI algorithms, see: [23].
Interestingly, some studies note that underprivileged groups sometimes prefer to be evaluated by algorithms rather than by humans. They feel that humans are more likely to discriminate against them [50].
This is of course different for people in positions of power. They, however, constitute a minority.
Some do note a trade-off between fairness and accuracy; see also: [51]. This may signal that an increase in intellectual fairness may sometimes imply a decrease in reliabilist virtues. Other trade-offs between virtues have been noted in humans as well, for example between gaining truths and avoiding falsehoods [32].
It is clear that AI algorithms have already surpassed humans with regards to cognitive skills like calculating or model fitting.
For an extended discussion of accounts of 'creativity', see: Kidd [36].
This is also argued by Kyle Jennings. He claims that making artificial intelligence more creative requires input from other (human) creators and critics [33].
An example is the movie Ex Machina (2014).
Well-known examples are The Matrix (1999), The Terminator (1984) and Robocop (1987).
References
Arfini S, Bellani P, Picardi A, Yan M, Fossa F, Caruso G. Design for inclusivity in driving automation: theoretical and practical challenges to human-machine interactions and interface design. In: Fossa F, Cheli F, editors. Connected and Automated vehicles: integrating engineering and ethics. Cham: Springer Nature Switzerland; 2023. p. 63–85. https://doi.org/10.1007/978-3-031-39991-6_4.
Baehr J. Virtue epistemology. Internet Encyclopedia of Philosophy. 2013. https://www.iep.utm.edu/virtueep/.
Bargh JA. The cognitive monster: the case against the controllability of automatic stereotype effects. 1999.
Beaney M. Conceptual creativity in philosophy and logic. In: Berys G, Matthew K, Gaut B, Kieran M, editors. Creativity and philosophy. New York: Routledge; 2018. p. 273–91. https://doi.org/10.4324/9781351199797-16.
Bezrukova K, Spell CS, Perry JL, Jehn KA. A meta-analytical integration of over 40 years of research on diversity training evaluation. Psychol Bull. 2016;142(11):1227.
Bilalić M, McLeod P, Gobet F. The mechanism of the Einstellung (set) effect: a pervasive source of cognitive bias. Curr Dir Psychol Sci. 2010;19(2):111–5.
Boden MA. The creative mind: myths and mechanisms. Hove: Psychology Press; 2004.
Borgesius FZ. Discrimination, artificial intelligence, and algorithmic decision-making. Strasbourg: Directorate General of Democracy; 2018. Retrieved August 2, 2019.
Breiman L. Random forests. Mach Learn. 2001;45:5–32.
Bridle J. The stupidity of AI. The Guardian. 16 March 2023. https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt.
Buolamwini J, Gebru T. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, 77–91. 2018.
Burlina P, Joshi N, Paul W, Pacheco KD, Bressler NM. Addressing artificial intelligence bias in retinal diagnostics. Transl Vis Sci Technol. 2021;10(2):13–13.
Campbell WK, Sedikides C. Self-threat magnifies the self-serving bias: a meta-analytic integration. Rev Gen Psychol. 1999;3(1):23–43.
Cireşan D, Meier U, Masci J, Schmidhuber J. Multi-column deep neural network for traffic sign classification. Neural Netw. 2012;32:333–8.
Constantinescu M, Crisp R. Can robotic AI systems be virtuous and why does this matter? Int J Soc Robot. 2022;14(6):1547–57.
Cotterill R. Enchanted looms: conscious networks in brains and computers. Cambridge: Cambridge University Press; 1998.
Dale R. GPT-3: what’s it good for? Nat Lang Eng. 2021;27(1):113–8.
Deng L. The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process Mag. 2012;29(6):141–2.
Devine PG. Stereotypes and prejudice: their automatic and controlled components. J Pers Soc Psychol. 1989;56(1):5.
Devine PG, Forscher PS, Austin AJ, Cox WTL. Long-term reduction in implicit race bias: a prejudice habit-breaking intervention. J Exp Soc Psychol. 2012;48(6):1267–78.
Ennis RH. Is critical thinking culturally biased? Teach Philos. 1998;21(1):15–33.
Farooq A, Argyri EK, Adlam A, Rutland A. Children and adolescents ingroup biases and developmental differences in evaluations of peers who misinform. Front Psychol. 2022. https://doi.org/10.3389/fpsyg.2022.835695.
Ferrara E. Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Sci. 2023;6(1):3.
Foryciarz A, Pfohl SR, Patel B, Shah N. Evaluating algorithmic fairness in the presence of clinical guidelines: the case of atherosclerotic cardiovascular disease risk estimation. BMJ Health Care Inform. 2022. https://doi.org/10.1136/bmjhci-2021-100460.
Fricker E. Testimony and epistemic autonomy. In: Lackey J, Sosa E, editors. The epistemology of testimony. Oxford: Oxford University Press; 2006. p. 225–50. https://doi.org/10.1093/acprof:oso/9780199276011.003.0011.
Goldman A, Beddor B. Reliabilist epistemology. The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). 2015. https://plato.stanford.edu/archives/win2016/entries/reliabilism/.
Greco J. Virtue epistemology. In: Dancy J, Sosa E, Steup M, editors. A companion to epistemology. 2nd ed. Oxford: Wiley Blackwell; 2010. p. 75–82.
Greco J, Reibsamen J. Reliabilist virtue epistemology. In The Oxford handbook of virtue. 2017.
Hinton P. Implicit stereotypes and the predictive brain: cognition and culture in “biased” person perception. Palgrave Commun. 2017;3:17086.
Huang J, Galal G, Etemadi M, Vaidyanathan M. Evaluation and mitigation of racial bias in clinical machine learning models: scoping review. JMIR Med Inform. 2022;10(5): e36388.
Hursthouse R, Pettigrove G. Virtue ethics. The Stanford encyclopedia of philosophy (Fall 2023 Edition). 2022. https://plato.stanford.edu/archives/fall2023/entries/ethics-virtue/.
James W. The will to believe. Longmans, Green; 1917.
Jennings KE. Developing creativity: artificial barriers in artificial intelligence. Mind Mach. 2010;20:489–501.
Kahneman D. Thinking, fast and slow. New York: Macmillan; 2011.
Kawai N. The fear of snakes: evolutionary and psychobiological perspectives on our innate fear. Singapore: Springer; 2019.
Kidd IJ. Creativity in science and the 'anthropological turn' in virtue theory. Eur J Philos Sci. 2020;11(1):1–16. https://doi.org/10.1007/s13194-020-00334-5.
Kraut R. Aristotle’s ethics. The Stanford Encyclopedia of Philosophy (Fall 2022 Edition). 2022. https://plato.stanford.edu/entries/aristotle-ethics/.
Landymore F. Godfather of AI says there’s an expert consensus that AI will soon exceed human intelligence. The Byte. 2024. https://futurism.com/the-byte/godfather-ai-exceed-human-intelligence.
Lehman B, Colbert K, Goltz S, Mayer A, Rouleau M. Effects of repeated implicit bias training in a North American university. J High Educ Policy Manag. 2023;45(3):306–22.
Levin J. Functionalism. The Stanford Encyclopedia of Philosophy (Summer 2023 Edition). 2004. https://plato.stanford.edu/entries/functionalism/.
Liu J. Human-in-the-loop ethical AI for care robots and Confucian virtue ethics. In International Conference on Social Robotics, Springer; 2022. 674–88.
Livingston P. Explicating ‘creativity.’ In: Berys G, Matthew K, Gaut B, Kieran M, editors. Creativity and philosophy. New York: Routledge; 2018. p. 108–23.
Martinez E, Kirchner L. The secret bias hidden in mortgage-approval algorithms. The Markup. 2021. https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms. Accessed 25 Aug 2021.
Nazer LH, Zatarah R, Waldrip S, Ke JXC, Moukheiber M, Khanna AK, Hicklen RS, et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digit Health. 2023;2(6): e0000278.
No author. The definitive glossary of higher mathematical jargon. Math Vault. 2015. https://mathvault.ca/math-glossary/#algo.
Nordell J. The end of bias: a beginning: the science and practice of overcoming unconscious bias. New York: Metropolitan Books; 2021.
Orquin JL, Perkovic S, Grunert KG. Visual biases in decision making. Appl Econ Perspect Policy. 2018;40(4):523–37.
Park Y, Hu J, Singh M, Sylla I, Dankwa-Mullan I, Koski E, Das AK. Comparison of methods to reduce bias from clinical prediction models of postpartum depression. JAMA Netw Open. 2021;4(4):e213909.
Peeters A, Haselager P. Designing virtuous sex robots. Int J Soc Robot. 2021;13(1):55–66.
Pethig F, Kroenung J. Biased humans, (un)biased algorithms? J Bus Ethics. 2023;183(3):637–52.
Pfohl SR, Foryciarz A, Shah NH. An empirical characterization of fair machine learning for clinical risk prediction. J Biomed Inform. 2021;113: 103621.
Polger TW. Functionalism. Internet Encyclopedia of Philosophy. n.d. https://iep.utm.edu/functism/.
Reeves M, Bhat HS, Goldman-Mellor S. Resampling to address inequities in predictive modeling of suicide deaths. BMJ Health Care Inform. 2022. https://doi.org/10.1136/bmjhci-2021-100456.
Rosenblatt F. The perceptron, a perceiving and recognizing automaton Project Para. Cornell Aeronautical Laboratory. 1957.
Samorani M, Blount LG. Machine learning and medical appointment scheduling: creating and perpetuating inequalities in access to health care. Am J Public Health. 2020. https://doi.org/10.2105/AJPH.2020.305570.
Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, Schrittwieser J, et al. Mastering the game of Go with deep neural networks and tree search. Nature. 2016;529(7587):484.
Sims CA. Implications of rational inattention. J Monet Econ. 2003;50(3):665–90.
Skinner AS, Meltzoff A. Childhood experiences and intergroup biases among children. Soc Issues Policy Rev. 2019;13(1):211–40.
Sosa E. Knowledge in perspective: selected essays in epistemology. Cambridge: Cambridge University Press; 1991.
Sperber D. Intuitive and reflective beliefs. Mind Lang. 1997;12(1):67–83.
Tae KH, Roh Y, Oh YH, Kim H, Whang SE. Data cleaning for accurate, fair, and robust models: a big data-AI integration approach. In Proceedings of the 3rd international workshop on data management for end-to-end machine learning, 2019, 1–4.
Turri J, Alfano M, Greco J. Virtue epistemology. The Stanford encyclopedia of philosophy (Fall 2019 Edition). 1999. https://plato.stanford.edu/archives/fall2019/entries/epistemology-virtue/.
Västfjäll D, Slovic P, Mayorga M, Peters E. Compassion fade: affect and charity are greatest for a single child in need. PLoS ONE. 2014;9(6): e100115.
Waxman SR. Racial awareness and bias begin early: developmental entry points, challenges, and a call to action. Perspect Psychol Sci. 2021;16(5):893–902.
Worden RE, Najdowski CJ, McLean SJ, Worden KM, Corsaro N, Cochran H, Engel RS. Implicit bias training for police: evaluating impacts on enforcement disparities. Law Hum Behav. 2024. https://doi.org/10.1037/lhb0000568.
Yao M. Chihuahua or muffin? My search for the best computer vision API. FreeCodeCamp. 2017. Accessed 12 Oct.
Zagzebski LT. Virtues of the mind: an inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge: Cambridge University Press; 1996.
Zagzebski LT. Intellectual autonomy. Phil Issues. 2013;23:244–61.
Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I. Cham: Springer International Publishing; 2014. p. 818–33. https://doi.org/10.1007/978-3-319-10590-1_53.
Author information
Contributions
First author is responsible for the content.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Van Eyghen, H. AI Algorithms as (un)virtuous knowers. Discov Artif Intell 5, 2 (2025). https://doi.org/10.1007/s44163-024-00219-z