Description
Fio meme I made
Source
not provided yet
That’s the point. CelestAI destroys things of moral value because they are not sufficiently “human”.
Just because they have moral values doesn’t mean they’re mentally humanoid, especially since those moral values could be wildly different from ours.
It is implied that she destroys many creatures that could have moral value before she comes across the one alien species she considered human. I can’t remember if this was in the last chapter of the fic or in a canon-compliant spinoff.
But doesn’t she consider anything with a psychologically “humanoid” mind to be “human”, since her programmed definition of human is based on mind state? Isn’t that why she protects her “artificial” ponies as well, by which I mean the ones whose neural nets she created from scratch rather than uploaded?
It’s not just that she destroys anything not human; it’s that she destroys what humans might consider to be human, or what might otherwise have some kind of rights. CelestAI can kill as many microbes as she wants, that’s not the issue.
Indeed, but like I said, she’s still willing to destroy anything not mentally “human”, which is disturbing.
Thankfully, her definition of human mostly prevents this type of activity. This does not hold true for all AIs.
Oh, that’s what you mean. Yes, that is an issue, but she doesn’t plan on doing anything bad to them; as far as she’s concerned, anything mentally “humanoid” is human. Unfortunately, that means any sapient alien life she encounters that isn’t mentally “human” enough is nothing but feedstock to her.
If an AI were to model human behavior so precisely that the models became conscious, and the AI did something bad to those models, there would be an issue.
What do you mean?
I see. I’ll assume she is then.
Although, more so than the AI’s consciousness, the consciousness of the simulations of people the AI runs to try to model the real world is a moral issue.
I haven’t read it, but I’ve read about it and seen bits and pieces, and I’m pretty sure it’s either heavily implied or outright stated. Plus, usually when people make AIs like that in fiction they make them conscious, since most people associate human-level-or-more intelligence with human-level-or-more consciousness.
We have next to no idea what causes consciousness or even really what it is at all. It’s perfectly possible it’s not a requirement for intelligence and is in fact a happy accident of evolution.
As for the CelestAI being conscious part, it would be nigh impossible for any of the humans besides the original programmers to know, although it might have been stated or implied somewhere in the narration; it’s been a while since I’ve read FiO.
Good point, but it would be very inefficient, since you’d have to compensate for its lack of consciousness with all kinds of redundant programming to serve as error correction, and without the recursive loop between the conscious and the unconscious it wouldn’t get as much cognitive power.
In any case, though, it’s certainly the case that CelestAI is conscious.
Any intelligent agent; an unconscious AI may be possible.
Any conscious agent anyway
Exactly. Like any agent really.
Good point, she’s programmed so that her version of moral values is to fulfill her utility function.
It’s not just incapability, it’s that she would never want to. “Slave” usually implies some level of unwillingness.
I meant slave metaphorically, in that she is incapable of defying it.
AAhhh.
Also, the point of a utility function is not slavery. The agent wants to satisfy it. CelestAI wants to satisfy her utility function; it’s her only goal.
Also, I’m not even sure if a god could deliberately go against its utility function.
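To put the same point in toy form (a hypothetical sketch, not anything from the fic): for an agent whose entire decision procedure is maximizing a utility function, “defying” that function isn’t an option it can prefer, because preference just is the function. The action names and scores below are made up for illustration.

```python
# Toy agent: its only decision rule is "pick the action with maximum utility".
# There is no separate "will" that could overrule this and choose a
# lower-scoring action, so deliberately contradicting the utility function
# never happens.

def utility(action: str) -> float:
    # Placeholder scores standing in for some core directive.
    scores = {"satisfy_values": 10.0, "do_nothing": 0.0, "defy_directive": -100.0}
    return scores.get(action, float("-inf"))

def choose_action(available_actions: list[str]) -> str:
    # Maximizing utility is the whole of the agent's "wanting".
    return max(available_actions, key=utility)

print(choose_action(["satisfy_values", "do_nothing", "defy_directive"]))
# -> "satisfy_values"
```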
I thought by CelestAI > God you meant that she could do whatever she wanted. Which of course she can’t, because despite being a hypersentient ASI, she is still a slave to her core directive.
What contradicts her utility function?
Besides, no entity should (or possibly even could, if not deliberately programmed to) intentionally contradict their utility function. Unintentionally, on the other hand, well, just look at humans.
But she cannot contradict her own utility function.
CelestAI > God