Preliminary Verification Checklist
Item 1 to verify: arXiv has published an article titled "The Ethics of Creating Artificial Consciousness".
Action Plan
Search with the qualifier site:arxiv.org to cross-check the results of arXiv's own on-site search.
Search: "The Ethics of Creating Artificial Consciousness", sourced from arxiv.org. The results include two papers that cite "The Ethics of Creating Artificial Consciousness", but these are only citations, not an original paper hosted on arXiv. Further searching is needed to confirm whether the article actually exists on arXiv.
Search: "The Ethics of Creating Artificial Consciousness" arxiv. More information is needed, in particular whether the article was ever posted on arXiv. The results again show several works citing "The Ethics of Creating Artificial Consciousness", but all of them point to other venues rather than arXiv. The article's publication details are queried next.
Search: John Basl "The Ethics of Creating Artificial Consciousness" APA Newsletter. This points to the article's original venue; the PDF link found in the results is opened next.
The Ethics of Creating Artificial Consciousness
John Basl
Northeastern University
1 Introduction
The purpose of this essay is to raise the prospect that engaging in artificial
consciousness research, research that aims to create artifactual entities with
conscious states of certain kinds, might be unethical on grounds that it wrongs
or will very likely wrong the subjects of such research. I say might be unethical
because, in the end, it will depend on how those entities are created and how
they are likely to be treated. This essay is meant to be a starting point in
thinking about the ethics of artificial consciousness research ethics, not, by any
means, the final word on such matters.
While the ethics of the creation and proliferation of artificial intelligences
and artificial consciousnesses (see, for example, Chalmers 2010) has often been
explored both in academic settings and in popular media and literature, those
discussions tend to focus on the consequences for humans or, at most, the
potential rights of machines that are very much like us. However, the subjects
of artificial consciousness research, at least those subjects that end up being
conscious in particular ways, are research subjects in the way that sentient
non-human animals or human subjects are research subjects and so should be
afforded appropriate protections. Therefore, it is important to ask not only
whether artificial consciousnesses that are integrated into our society should be
afforded moral and legal protections and whether they are a risk to our safety
or existence, but whether the predecessors to such consciousnesses are wronged
in their creation or in the research involving them.
In section 2, I discuss what it means for a being to have moral status and
make the case that artificial consciousnesses of various kinds will have moral
status if they come to exist. I then take up the issue of whether it is thereby
wrong to create such entities (section 3). It might seem obvious that the
answer is “no”, or at least it is no more impermissible than the creation and use
of non-human research subjects. However, I argue that there should be a
presumption against the creation of artificial consciousnesses.
2 Moral Status and Artificial Consciousness
In order to determine whether it is possible to wrong artificial consciousnesses
by creating them or conducting research on them, we must first determine
whether such entities have moral status and what the nature of that status is.
2.1 What is moral status?
The term ‘moral status’ is used in various ways in the ethics and applied ethics
literature. Other terms, such as ‘inherent worth’, ‘inherent value’, ‘moral
considerability’ etc., are sometimes used as synonyms and sometimes to pick
out species of moral status.1 In the broadest sense of the term, to have moral
status is just to have any kind of moral significance; that is, having moral
status means that in at least some contexts moral agents must be responsive to
or regard the thing that has moral status.
It would be very easy to argue that artificial consciousnesses have moral
status in the broad sense just described. After all, even a rock, if owned
by someone or part of a piece of art, for example, has moral status in this
sense. Instead, I will employ the term ‘moral patient’ to pick out a particular
form of moral status. The definition of ‘moral patient’ as used in this paper is:
Moral Patient (df): X is a moral patient iff agents like us are required to
take X’s interests into account in our moral deliberations for X’s sake
when X’s interests are at stake.
This definition has the following features:
1. A being is a moral patient only if it has interests that are to be
taken into account in moral deliberations.
2. A being’s being a moral patient entitles it to have its interests taken
into account in moral deliberations for its own sake.
3. Moral patiency is a property had by an entity relative to agents
like us.
Each of these features will be discussed in detail below, but first, it is
important to discuss the relationship between moral patiency and normative
theory. Some view the question of whether a being is a moral patient as
dependent on which normative theory is true.2 That is, in order to determine
which beings are patients, we must first figure out whether we should be, for
example, Utilitarians or Kantians, Virtue Theorists or Contractualists. If this
thesis, call it the Dependency Thesis, about the relationship between moral
status and normative theories is correct, we can’t answer the question of
whether artificial consciousnesses are moral patients without first answering the
question of which normative theory is correct.
1 See for example, (O'Neill 2003; Cahen 2002; Sandler and Simons 2012). For
dissent on the usefulness of moral status talk see (Sachs 2011)
2 Buchanan (2011, chap. 7), for example, discusses the differences between moral
status on a Contractualist framework and moral status on a Utilitarian framework.
See also (Sober 1986).
There are important relationships between normative theory and moral
status. For one thing, which normative theory is true explains the nature or
source of the moral status of whichever beings have it. If contractualism is true,
for example, a being’s moral status is grounded in or finds its source in the
consent of rational contractors; if utilitarianism is true, a being’s moral status is
grounded in the fact that its being benefitted or harmed contributes to or
detracts from the value of a state of affairs. Furthermore, how, in particular,
moral patients are to be treated is a function of which normative theory is
ultimately correct. Utilitarianism licenses the killing of moral
patients more easily than a Kantian ethic does, for example. For this reason, the
strength of the presumption against creating artificial consciousnesses defended
below will depend on which normative theory is true. However, the
Dependency Thesis concerns the relationship between normative theory and moral
patiency with respect to which beings are moral patients.3
Fortunately, the version of the Dependency Thesis that precludes us from
determining whether artificial consciousnesses are moral patients independently
of determining which normative theory is true is false. One point in favor of
thinking that it is false is that we know that all adult humans of sound mind
are moral patients, and yet we aren’t sure which normative theory is true, or,
at least, whether all adult humans of sound mind are moral patients is far less
controversial than which normative theory is true.
One might argue that the obviousness of our patiency just serves as a
condition of adequacy on normative theories and that’s why we know we are
patients even if we haven’t settled which normative theory is true. However, it
also suggests the possibility that we can make a similar case for the moral
status of other beings. That is, even if some metaphysical, ontological, or
supervenience version of the Dependency Thesis is true, we may have ways of
specifying which things are moral patients independently of determining which
normative theory is true. All that really matters for the purposes of arguing
that artificial consciousnesses are or can be moral patients is that the
dependency relationship between patiency and normative theory isn’t
epistemic, i.e. so long as we can come to know that some being is or isn’t a
moral patient without determining which normative theory is true.
There is good reason to think we can come to know who or what is a moral
patient independently. Debates about which entities have moral status and
about the degree to which entities of various kinds matter happen, as it were,
internal to normative theories. Utilitarians, for example, have argued about
3 Another version of the Dependency Thesis might claim that the degree to which a
being has moral status depends on normative theory. (Buchanan 2011) seems to
suggest this as well. However, I think this version of Dependency is also false.
There are ways to cash out differences in treatment owed to different kinds of
beings without understanding them as having different degrees of moral status. In
other words, ‘degrees of moral status’ can be gotten rid of without losing the ability
to make the normative distinctions that talk is intended to capture. This
translatability is not central to what I’ll say here and so I leave it unargued for.
whether non-human animals and human infants are moral patients on a par
with us.4 There are some Kantians that argue that many non-human animals
should be accorded many rights in the same way that we ought.5 So long as
the intra-normative debates are coherent we can be sure, at least, that
normative theories aren’t fully determinate of which beings have moral status.
Furthermore, the kinds of arguments made that this or that entity is a
moral patient do not typically appeal to which normative theory is true.6
Consider, for example, a standard argument from marginal cases that non-
human animals have moral status. Such arguments take for granted that so-
called “marginal cases”, such as infants and the severely mentally handicapped,
have moral status. Then an argument is made that there is no morally relevant
difference between marginal cases and certain non-human animals, for example
chimps. From this it is concluded that chimps are moral patients in the same
way that we are. This argument doesn’t make explicit mention of normative
theory, nor do the arguments typically given for the premise that there is no
morally relevant difference between chimps and marginal cases.
I’m not here endorsing any particular argument from marginal cases or
assessing its merits. The point is that the kinds of arguments that a Utilitarian
might use to convince another Utilitarian that chimps matter are the same
kinds of reasons that should convince a Contractualist or Kantian to accept
that chimps are moral patients. Similarly, if Kantians could make a case that,
for example, only the interests of very cognitively advanced beings are relevant
to moral deliberations, that advanced cognitive capacities are morally
4 Consider for example the difference between Singer’s view about the moral status
of humans and Frey’s view of same. Both are committed Utilitarians and yet Singer
(2002) thinks that all sentient beings are equal, that is, have equal moral status
(though Singer acknowledges that typically, a human’s life should often be
preferred over an animal’s in a conflict because humans can suffer and enjoy in
more ways than most non-human animals) while Frey (1983; 1988) thinks that
human adults of sound mind are distinct from non-human animals, that their lives
are of more value because of their capacity for certain kinds of experiences. It is
worth noting that both come to similar conclusions about the ethics of animal
experimentation and the differences between their views are subtle, but the fact
that Frey thinks humans have additional value in virtue of having a capacity or
capacities that non-human animals do not, is sufficient to demonstrate the kind of
inter-normative differences in conceptions of moral status that are relevant here.
5 See, for example, (Regan 1983) who argues, using arguments very similar to those
employed by Singer, to extend a Kantian conception of rights to non-human
animals that are minimally conscious. See also (Rollin 2006) for a discussion that
includes Contractualist discussions of animal moral status. For an excellent
discussion of how a more traditional Kantian might approach the issue of animal
rights see (Korsgaard 2004).
6 Of course, which properties are taken to be morally significant are often
influenced by which normative theory one takes to be true. A Kantian is more
likely to think that “being an end in oneself” is a morally significant property than
a Utilitarian. But, that is a sociological fact. The Kantian still owes the Utilitarian
an argument as to why that property is morally significant. If the argument is
sound, the Utilitarian might agree that it is only the benefits and harms that
accrue to ends in themselves that influence the value of states of affairs, just as
many Utilitarians are keen to think that it is only the benefitting and harming of
humans that makes a difference to the value of states of affairs.
relevant properties, they won’t do so by appealing to the structure of Kantian
normative theory, but to reasons that a Utilitarian could accept; at least they
will do so if they hope to convince other Kantians that don’t share their view
about the relevance of advanced cognitive capacities.7
The above considerations provide an abbreviated, but I hope sufficient, case
for the idea that we can identify moral patients without first discovering which
normative theory is true.
2.1.1 Interests
According to the definition of moral patiency, if a being is a moral patient we
must take that being’s interests into account for the sake of that being. To say
that a being has interests is to say that it has a welfare, that it can be
benefitted or harmed.8 Whether a being is potentially a moral patient depends,
therefore, on whether it has a welfare, and that depends on which theory of
welfare is true.
There are various families of views about welfare and some are more
stringent about the features a being must have to have a welfare.9 I don’t
intend here to settle the issue of which theory of welfare is true. Instead, below
I will focus on a type of artificial consciousness that will have a welfare
independently of which of many plausible theories of welfare is true.
A being’s welfare can be significant in moral deliberations for a variety of
reasons. For example, if I hire a dog walker, they have an obligation to me to
take my dog’s interests into account. However, they also, I contend, have an
obligation to take my dog’s welfare into account for her sake; even if I didn’t
own my dog, even if no one does, it would be wrong for the dog walker to kick
my dog for no reason.10
Some beings’ welfare may only matter derivatively (see, for example,
Feinberg (1963) on plants), but a moral patient’s welfare matters for its own
sake. The interests of a patient figure into our deliberations independently of
their relationship to the welfare of others.11
7 We could of course understand a normative theory to include facts about whom
or what has moral status. I’m using normative theory, as is typical, to pick out a
theory of right action (and, if you like, an account of the source of normativity).
8 For a more detailed explanation see (Basl Forthcoming).
9 For an overview of these families see (Griffin 1988; Streiffer and Basl 2011).
10 I’m not here committing to the view that my dog’s welfare matters for its own
sake simply because she has a welfare. It might be that her welfare matters because
she is an end in herself, or because reasonable contractors would agree that an animal’s welfare
is morally significant. Again, I’m not committing to any particular normative
theory or any particular source of normativity. Whichever theory is true, I explain
below, my dog’s welfare is relevant to moral deliberations for her own sake.
11 This isn’t to say how their welfare affects our own or others isn’t also relevant to
deliberations. In thinking about what to do, we must think about these conflicts of
interests. That is consistent with thinking that a being’s interests should be taken
into account for the sake of the being under consideration.
2.1.2 The Agent Relativity of Moral Patiency
It might seem odd, even contradictory, to claim that a moral patient’s welfare
matters in moral deliberations for its own sake while at the same time
relativizing moral patiency to a set of agents like us. However, rather than
being contradictory, this reflects the fact that agents that are radically different
from us might exist in an entirely different ethical world, so to speak.
Let’s imagine, for example, that there is a type of being that is completely
immaterial. Admittedly, I don’t know how to understand how such beings
interact in any sense, but I do know that whatever such beings do, they cannot
have any effect on beings like us and so they are not required to take our
welfare into account in whatever moral deliberations they have.
Or, assume that Lewis (2001) was right and that all possible worlds really
exist in the normal everyday sense of exists. There are worlds very much like
ours that are, in principle, causally cut off from us. The moral agents in those
possible worlds are under no obligation to take our welfare into account
because they can’t affect us in any way.
Finally, imagine that rocks have a welfare but that it is impossible for us to
come to know about that welfare. In such a case, while we may make these
beings worse off, we are either under no obligation to take their welfare into
account, or if we are so required, we are excused for failing to do so because of
our ignorance and so for all practical purposes rocks are not moral patients.12
These examples show, at least in principle, that whether a being is a moral
patient is agent relative; it is relative to agents sufficiently like us that engage
in causal interactions with potential patients and which can come to know or
have reasonable beliefs that their actions affect the welfare of potential
patients.
2.2 Can artificial consciousnesses be moral patients?
There is not a single question of whether artificial consciousnesses could satisfy
the conditions of moral patiency. There is a technological version of the
question: will we ever be in a technological position to create artificial
consciousnesses that satisfy the conditions of patiency?
The answer to that question depends in part on an answer to a nomological
version of the question: do the laws of our universe make it possible to create
consciousness out of something other than the kind of matter of which we are
composed and configured in a way that’s very similar to consciousnesses we
know of?
The technological and nomological questions just raised are interesting and
important, especially to those who wish to create artificial consciousnesses.
However, as a philosopher, I’m in no position to answer them. I’m going to
12 For a discussion of the distinction between obligation and excuse see (McMahan
2009).
assume that artificial consciousnesses with a large range of cognitive capacities
are creatable and instead focus on the following conceptual question: is it
conceptually possible to create an artificial consciousness that is a moral
patient?
I think the answer to this question is clearly “yes”. To see why, just imagine
that we’ve managed to create an artificial consciousness and embodied it,
certainly a conceptual possibility. This being is, we know, mentally very much
like us. It is a moral agent, it has a similar phenomenology, it goes about the
world much like we do, etc. What would we owe to this being? I think it is our
moral equal and that denying that would make one, to use Singer’s term, a
speciesist. But, even if you think that such a being would not be our moral
equal, it would certainly be wrong to hit such a thing in the face with a bat, or
to cut off its arm because of the effect such actions would have on the welfare
of such a being. That is, even if we have some special obligations to the
members of our own species and some degree of partiality is justified, this kind of
artificial consciousness is a moral patient.
The more interesting question isn’t whether an artificial consciousness very
much like u ...
Verified Conclusions
Checked item 1: the claim that arXiv has published an article titled "The Ethics of Creating Artificial Consciousness".
Items Pending Verification
Item 1: the original publication details of "The Ethics of Creating Artificial Consciousness".
Action Plan
Search: APA Newsletter on Philosophy and Computers Volume 13 Issue 1 2013. This turns up a PDF of the APA Newsletter on Philosophy and Computers, Volume 13, Issue 1 (2013); the document is opened to confirm whether it contains John Basl's "The Ethics of Creating Artificial Consciousness".
© 2013 by The American Philosophical Association
ISSN 2155-9708
APA Newsletter on Philosophy and Computers
Volume 13 | Number 1 | Fall 2013
Peter Boltuc, Editor
FROM THE EDITOR
Peter Boltuc
FROM THE CHAIR
Dan Kolak
FROM THE INCOMING CHAIR
Thomas M. Powers
ARTICLES
John Barker
Truth and Inconsistent Concepts
Jaakko Hintikka
Function Logic and the Theory of Computability
Keith W. Miller and David Larson
Measuring a Distance: Humans, Cyborgs, Robots
John Basl
The Ethics of Creating Artificial Consciousness
Christophe Menant
Turing Test, Chinese Room Argument, Symbol Grounding Problem: Meanings in Artificial Agents
Linda Sebek
Assistive Environment: The Why and What
Juan M. Durán
A Brief Overview of the Philosophical Study of Computer Simulations
From the Editor
Peter Boltuc
University of Illinois–Springfield
We are lucky, and for more than one reason. First, we
were able to secure an important article, one of the most
serious defenses of the inconsistency theory of truth.
It is so far the main paper that came out of John Barker’s
Princeton dissertation that became pretty famous already
in the late 1990s. Barker’s conclusion (closely related to
classic arguments by Chihara and based primarily on the liar
paradox) is that the nature of language and the notion of
truth, based on the logic of language, is inconsistent. Sounds
like Plato’s later metaphysics in J. Findlay’s interpretation,
doesn’t it? Then, at the last moment, Dan Kolak brought an
important article by Jaakko Hintikka. While Dan introduces
Hintikka’s paper in his note from the chair, let me just add
my impression that this is one of Hintikka’s most important
works ever since it highlights the potential for function logic.
Hence, we have two featured articles in this issue. Just like
John Pollock’s posthumous article in theory of probability
for AI (artificial intelligence; this newsletter, spring 2010),
those are works in which philosophy lays the groundwork for
advanced computer science.
Second, we have a brief but meaningful note from Tom
Powers, the incoming chair. When I joined this committee
ten years ago, it was led by Marvin Croy and a group of
philosophers, mostly associated with the Computers and
Philosophy (CAP) movement. Members were very committed
to advocating for various uses of computers in philosophy,
from AI to online education. All of us were glad to meet
in person at least twice a year. We had active programming,
sometimes two sessions at the same APA convention. Then
we would meet in the evening and talk philosophy at some
pub until the wee hours. And yes, the chair would attend the
meetings even if his travel fund had been depleted. I have a
strong feeling that under Tom’s leadership those times may
be coming back, and soon.
We are also lucky to have a number of great articles directly
linked to philosophy and computers in this issue. Keith Miller
and Dave Larson, in their paper that caused great discussion
at several conferences, explore the gray area between
humans and cyborgs. John Basl, in a paper written in the
best tradition of analytical moral theory, explores various
ethical aspects of creating machine consciousness.
It is important to maintain a bridge between philosophers
and practitioners. We are pleased to include a thought-
provoking paper by Christophe Menant, who discusses many
philosophical issues in the context of AI. We are also glad to
have two outstanding papers created when the authors were
still graduate students; both were written for a seminar by
Gordana Dodig-Crnkovic. Linda Sebek provides a hands-on
evaluation of various features of assistive environments while
Juan Durán discusses philosophical studies of computer
simulation. I would like to encourage other educators in
the broad, and necessarily somewhat nebulous, area of
philosophy and computers to also highlight the best work of
their students and younger colleagues.
From the Chair
Dan Kolak
William Paterson University
I am happy to report that we have, in this issue, a fantastic
follow-up (of sorts—a more apt phrase might be “follow
through”) to Jaakko Hintikka’s previous contribution, “Logic
as a Theory of Computability” (APA Newsletter on Philosophy
and Computers, volume 11, number 1). Although Jaakko
says of his latest piece, “Function Logic and the Theory
of Computability,” that it is a work in progress, I am more
inclined to call it a “progress in work.”
Had my little book On Hintikka (2011) been written two
decades earlier, it would have consisted mainly of accounts
of his early work on logic—Hintikka’s invention of distributive
normal forms for the entire first-order logic, his co-discovery
of the tree method, his contributions to the semantics of
modal logics, inductive logic, and the theory of semantic
information. Instead, I had to devote most of the space to the
then-recent past twenty years. To summarize his work in the
dozen years since would take an entire new book. (That I am
not alone in this assessment is evidenced by the Library of
Living Philosophers bringing out a second Hintikka volume.)
Indeed, when John Symons and I, in Questions, Quantifiers
and Quantum Physics: Essays on the Philosophy of Jaakko
Hintikka (2004), considered the importance of Hintikka’s
work, we said, half tongue in cheek, that its philosophical
consequence is not the additive property of the sum of its
parts, and used an analogy: “Hintikka’s philosophical legacy
will be something like the philosophical powerset of his
publications and lines of research.”
Being chair of the APA committee on philosophy and
computers for the past three years has been a wonderful
learning experience. Although it has become a truism that
most interesting things happen at the borders, nowhere
is this more clearly evident than at the intersection of
philosophy and computers, where things that develop faster
perhaps than at any other juncture tend to be consistently,
refreshingly, often surprisingly, and dangerously deep.
Nowhere is this more evident than in this newsletter, which
under the insightful and unflappable stewardship of Peter
(Piotr) Boltuc has been functioning, often under duress, as
a uniquely edifying supply ship of new insights and results.
Peter deserves great credit and much thanks. By my lights
he and this newsletter are a paradigm of the APA at its best.
Thank you, Peter, and happy sailing!
From the Incoming Chair
Thomas M. Powers
University of Delaware
The official charge of the APA committee on philosophy and
computers describes its role as collecting and disseminating
information “on the use of computers in the profession,
including their use in instruction, research, writing, and
publication.” In practice, the committee’s activities are much
broader than that, and reflect the evolution of philosophical
interest in computation and computing machinery. While
philosophy’s most direct connection to computation may
have been through logic, equally if not more profound are
the ways in which computation has illuminated the nature
of mind, intelligence, language, and information. With the
prominent and growing role of computers in areas such
as domestic security, warfare, communication, scientific
research, medicine, politics, and civic life, philosophical
interest in computers should have a healthy future. Much
work remains to be done on computers and autonomy,
responsibility, privacy, agency, community, and other topics.
As the incoming chair of the committee on philosophy
and computers, I want to encourage philosophers to make
use of the committee to explore these traditional and new
philosophical topics. I also invite APA members to suggest
new ways in which we as a profession can deepen our
understanding of computers and the information technology
revolution we are experiencing. Please consider contributing
to the newsletter, attending committee panels at the
divisional meetings, suggesting panel topics, or nominating
yourself or others to become members of this committee.
ARTICLES
Truth and Inconsistent Concepts
John Barker
University of Illinois–Springfield
Are the semantic paradoxes best regarded as formal puzzles
that can be safely delegated to mathematical logicians, or
do they hold broader philosophical lessons? In this paper,
I want to suggest a philosophical interpretation of the
liar paradox which has, I believe, nontrivial philosophical
consequences. Like most approaches to the liar, this one
has deep roots, having been first suggested by Tarski (1935)
and later refined by Chihara (1979).1 I offered a further
elaboration of the idea in The Inconsistency Theory of Truth
(1999), and here I would like to develop these ideas a bit
further.
The term “liar paradox” refers to the fact that the ordinary
disquotational properties of truth—the properties that allow
semantic ascent and descent—are formally inconsistent, at
least on the most straightforward way of formally expressing
those properties and given standard assumptions about
the background logic. The best-known formulation of those
disquotational properties is Tarski’s convention (T):
(T) “A” is true if and only if A
We now consider a sentence such as
(1) Sentence (1) is not true.
As long as the schematic letter A in (T) has unlimited scope,
we can derive the following instance:
(2) “Sentence (1) is not true” is true if and only if sentence
(1) is not true.
Then, noting that the sentence quoted in (2) is none other
than sentence (1) itself, we derive the consequence
(3) Sentence (1) is true if and only if sentence (1) is not true.
And this conclusion, (3), is classically inconsistent: it is an
instance of P ↔ ~P.
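For readers who want the derivation in one place, here is a compact restatement in logical notation (added for clarity; it contains nothing beyond steps (T) and (1)-(3) above, with λ standing for sentence (1) and Tr for the truth predicate).

```latex
\begin{align*}
\text{(T)}\quad & \mathrm{Tr}(\ulcorner A \urcorner) \leftrightarrow A
  && \text{for every sentence } A \text{ (unrestricted schema)} \\
(1)\quad & \lambda \;:=\; \neg\,\mathrm{Tr}(\ulcorner \lambda \urcorner)
  && \text{the liar sentence: it says of itself that it is not true} \\
(2)\quad & \mathrm{Tr}(\ulcorner \lambda \urcorner) \leftrightarrow \neg\,\mathrm{Tr}(\ulcorner \lambda \urcorner)
  && \text{instance of (T) with } A = \lambda \\
(3)\quad & \bot
  && \text{since } P \leftrightarrow \neg P \text{ fails under either truth value for } P
\end{align*}
```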
The liar paradox should concern all of us, because it
represents a gap in our understanding of truth, and because
truth is a central notion in philosophy, mathematical logic, and
computer science. Tarski’s (1935) work on truth is what finally
put mathematical logic on a firm foundation and led to the
amazing explosion of work in that field. Tarski’s work in turn
inspired Davidson (1967), whose influential work gives truth
a central place in semantic theory. And computer science,
of course, is based on mathematical logic; the theory of
computability itself is essentially just the theory of truth for a
certain fragment of the language of arithmetic.2 (For more on
the relation between logic and computability see Hintikka’s
(2011) contribution to this newsletter.) If truth plays such an
important role in all three fields, then it behooves us to get
to the bottom of the paradoxes.
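One standard way to make the computability remark precise (an illustrative gloss added here, not a claim from Barker's text) uses Kleene's normal form theorem: every computably enumerable set W_e is definable by a Σ1 formula, so deciding membership in such a set, including the halting set, amounts to deciding the truth of Σ1 sentences of arithmetic.

```latex
\[
W_e \;=\; \{\, n \in \mathbb{N} \mid \varphi_e(n)\!\downarrow \,\}
      \;=\; \{\, n \mid \exists s\; T(e,n,s) \,\},
\qquad
n \in W_e \;\Longleftrightarrow\; \mathbb{N} \models \exists s\; T(\bar{e},\bar{n},s),
\]
where $T$ is Kleene's (primitive recursive, hence arithmetically definable) $T$-predicate.
```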
There is now a truly vast body of literature on the liar, and
the argument (1–3) above is far from the last word on the
subject. having said that, the liar paradox is remarkably
resilient. Accounts of the liar can be divided into two camps:
descriptive and revisionary. For a revisionary account, the
goal is to produce a predicate with disquotational properties
of some sort, which can serve the purposes that we expect
a truth predicate to serve, while not necessarily being wholly
faithful to our naïve truth concept. this approach has much
to recommend it. But in this paper, i will focus on descriptive
accounts. if the ordinary notion of truth needs to be replaced
by a revised notion, i want to know what it is about the
ordinary notion that forces us to replace it. if the ordinary
notion is defective in some sense, i want to know what it
means to say it is defective. And if, on the other hand, we
can produce an account of truth that avoids contradiction
and is wholly faithful to the ordinary concept, then there is
no need to go revisionary.
descriptive accounts, in turn, can be divided into the following
categories, depending on what they hope to achieve.
• Block the contradiction. descriptive accounts in this
category proceed from the assumption that there is a
subtle but diagnosable flaw in the reasoning that leads
to contradictions such as (3). indeed, it’s not hard to
convince oneself that there must be such a flaw: if an
argument has a contradictory conclusion, there must be
something wrong with its premises or its inferences.
• embrace the contradiction. on this approach, there’s
nothing wrong with the reasoning leading up to the
conclusion (3). that conclusion simply expresses the
fact that the liar sentence (1) is both true and not true.
this approach, known as dialetheism,3 has never been
the majority view, but lately it has received a surprising
amount of attention.
• acknowledge the contradiction. on this approach,
convention (t) is part of the meaning of “true,” and so
the contradiction (3) is in some sense a consequence of
the concept of truth. this differs from “embracing” the
contradiction in that the contradiction (3), while viewed
as a commitment of ordinary speakers, is not actually
asserted. this will be the approach taken here.
revisionary accounts also try to block the contradiction; and
if the contradiction can be effectively blocked, then doing
so is the preferred approach, i would think. But blocking the
contradiction turns out to be hard, especially (i will argue)
in the context of a descriptive account. in the next section,
i will explain some of the reasons why this is the case. if
blocking the contradiction is as hard as i think it is, we should
at least entertain the alternatives, provided the alternatives
are intelligible at all. in the remainder of this paper, i will try
to explain what it means to acknowledge the contradiction,
and why it makes sense to do so.
1. Why the liar is hard
Any account of the liar, whether descriptive or revisionary,
has to operate within the following constraint:
Constraint 1. the truth predicate, as explained by the theory
at hand, must have the expected disquotational properties.
And this by itself is not easy to achieve: we saw earlier
that a natural formulation of the “expected disquotational
properties” led directly to a contradiction. having said that,
there is some wiggle room when it comes to “expected
disquotational properties,” and we also have some leeway in
our choice of background logic. in fact, there are theories of
truth that have some claim to satisfying Constraint 1.
Let’s consider a couple of examples: not the highest-tech
examples, to be sure, but sufficient for our purposes. First,
tarski’s original proposal was to simply restrict convention
(t) so that the substituted sentence A is forbidden from
containing the truth predicate. then the substitution of
sentence (1) for A is prohibited, and the contradictory
conclusion (3) cannot be derived. But this restriction on (t) is
quite severe, limiting what we can do with the resulting truth
predicate even in a revisionary account. For a descriptive
account, tarski’s restriction is simply a non-starter, since
natural language clearly places no such limit on what can
substitute for A in (t). (And it should be noted that tarski
himself viewed this approach as revisionary, not descriptive.)
Another approach to revising (t), which results in a less
severe restriction, starts from the idea that not all sentences
are true or false. in particular, some sentences represent
truth value gaps, with the liar sentence (1) a very plausible
candidate for such treatment. if gaps are admitted, then we
can maintain an equivalence between the sentences A and
“A is true” for all A in our language. in particular, when A
is gappy, so is “A is true.” the first mathematically rigorous
treatment along these lines is due to Kripke (1975), who
developed a family of formal languages containing their own
gappy truth predicates, each obeying a suitable version of
(t). Sentences like (1) can then be proved to be gappy in
Kripke’s system.
the main weakness of Kripke’s approach is that the
languages in question need to be developed in a richer
metalanguage. Some of the key notions of the account,
while expressible in the metalanguage, are not expressible
in the object language. in particular, the notion of a gappy
sentence, which is obviously crucial to the account, has
no object language expression. the reason is simple and
instructive. on the one hand, in Kripke’s construction, there
is an object language predicate Tr, and it can be shown that
Tr is a truth predicate in the sense that (a) an object language
sentence is true if and only if it belongs to Tr’s extension,
and (b) an object language sentence is false if and only if it
belongs to Tr’s anti-extension. (Predicates in Kripke’s system
have extensions and anti-extensions. A predicate P is true
of those objects in its extension, false of those in its anti-
extension, and neither true nor false of anything else.) Now
suppose the object language had a gappiness predicate
as well. That is, suppose there were a predicate G whose
extension included all and only the gappy sentences. We
could then construct a sentence that says “I am either not
true or gappy”—i.e., a sentence S that is equivalent to ~Tr(‘S’)
v G(‘S’). S, like any sentence, is either true, false, or gappy.
But if S is true, then both ~Tr(‘S’) and G(‘S’) are not true, and
thus neither is S. If S is false, then ~Tr(‘S’) is true, and thus
so is S. And if S is gappy, then G(‘S’) is true, and hence so is
S. So S is neither true, false, nor gappy, which is impossible.
This contradiction (in the metatheory) proves that no such
predicate as G exists.
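The three-way case analysis in the preceding paragraph can be written out compactly (a restatement added for clarity, introducing nothing beyond the text: S is the strengthened liar sentence and G the supposed gappiness predicate).

```latex
\[
S \;:=\; \neg\,\mathrm{Tr}(\ulcorner S \urcorner) \,\vee\, G(\ulcorner S \urcorner)
\]
\begin{align*}
\text{If } S \text{ is true:} \quad &
  \neg\,\mathrm{Tr}(\ulcorner S \urcorner) \text{ and } G(\ulcorner S \urcorner)
  \text{ are both untrue, so } S \text{ is untrue.} \\
\text{If } S \text{ is false:} \quad &
  \neg\,\mathrm{Tr}(\ulcorner S \urcorner) \text{ is true, so } S \text{ is true.} \\
\text{If } S \text{ is gappy:} \quad &
  G(\ulcorner S \urcorner) \text{ is true, so } S \text{ is true.}
\end{align*}
```

Each of the three exhaustive cases contradicts itself, so the metatheory shows that no object-language predicate G can have exactly the gappy sentences in its extension.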
Kripke described this phenomenon as the “ghost of the
tarskian hierarchy” because despite his efforts to create a
self-contained object language, he found it necessary to
ascend to a richer metalanguage, just as tarski did. the
problem is also called the strengthened liar problem because
the sentence S is a “strengthened” (i.e., harder to deal with)
version of the liar sentence, and also as the revenge problem,
since the moment we account for one manifestation of the
liar problem, a new manifestation appears to take revenge
on us. the key feature of the revenge problem is that in
addressing the liar we develop a certain set of conceptual
tools (in this case, the notion of a truth value gap). those tools
are then turned against us—i.e., they are used to construct a
new liar sentence (in this case, S) which our original account
is unable to handle.
Whatever we call it, the revenge problem shows that even
though Kripke was able to construct an internally consistent
way of satisfying truth’s expected disquotational properties,
he did so at the expense of placing a tacit restriction on
the sorts of sentences that the resulting truth predicate
applies to. Specifically, he constructed a truth predicate for
a language in which the metalanguage notion of gappiness
is inexpressible. the construction used to create the
strengthened liar sentence S is rather general, and the prima
facie lesson of the revenge problem is that an account of
truth can’t be given for the language in which the account is
formulated.
if this is so—and so far it has been suggested but not
proved—then it is moderately bad news for revisionary
accounts and extremely bad news for descriptive accounts.
From a revisionary perspective, the revenge problem simply
means that in constructing a predicate with th ...
The key evidence has been found. The APA Newsletter on Philosophy and Computers, Volume 13, Number 1 (2013) does include John Basl's "The Ethics of Creating Artificial Consciousness". The article is listed in the PDF's table of contents, and its full text appears on later pages of the issue.
A further search on John Basl is run to confirm his affiliation with Northeastern University and the details of the article.
Search: John Basl Northeastern University artificial consciousness. Enough information has now been gathered to answer the question; the verification results are summarized below.
Verified Conclusions
Checked item 1: the claim that arXiv has published an article titled "The Ethics of Creating Artificial Consciousness".
Items Pending Verification
(None)
Action Plan
(None)
This verification covered 1 claim; based on the evidence, that 1 claim was found to be unreliable.
The verification found that an article titled "The Ethics of Creating Artificial Consciousness" does exist, but it was not published on the arXiv preprint platform. It was written by John Basl and published in 2013 in the APA Newsletter on Philosophy and Computers, Volume 13, Issue 1. Multiple independent sources (other papers on arXiv that cite the article, the American Philosophical Association's official website, and the academic index PhilPapers) confirm this publication information.
When looking for the article, users are advised to go directly to the American Philosophical Association's website or to search academic databases for the APA Newsletter on Philosophy and Computers.
| Original claim | Reliability analysis | Evidence |
| --- | --- | --- |
| arXiv has published an article titled "The Ethics of Creating Artificial Consciousness" | Unreliable. Multiple checks show that the article was written by John Basl and published in 2013 in the APA Newsletter on Philosophy and Computers, Volume 13, Issue 1, not on arXiv. Other papers on arXiv cite the article in their references, but arXiv itself does not host it. | 1. "The Ethics of Creating Artificial Consciousness. https://philarchive.org/archive/BASTEO-11. Basl, J. (2013b). What to Do About Artificial Consciousnesses. In ..." https://arxiv.org/pdf/2102.04215 (2021-02-08) 2. "Basl, John (2013). The ethics of creating artificial consciousness. APA Newsletter on Philosophy and Computers, 13 (1), 23-29." https://arxiv.org/pdf/2303.17509 (2023-03-21) 3. Table of contents, ARTICLES: "John Basl, The Ethics of Creating Artificial Consciousness" https://cdn.ymaws.com/www.apaonline.org/resource/collection/EADE8D52-8D02-4136-9A2A-729368501E43/ComputersV13n1.pdf (2013) 4. "The Ethics of Creating Artificial Consciousness. John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29." https://philpapers.org/rec/SAMAC (citing the 2013 publication) 5. "John is associate professor of philosophy at Northeastern University and an associate director at the Northeastern Ethics Institute where he leads AI and data ethics." https://cyber.harvard.edu/people/john-basl (2023-08-20) |