Date: 28/10/2021 10:26:07
From: Boris
ID: 1809415
Subject: AI Ethics

https://futurism.com/delphi-ai-ethics-racist

Reply Quote

Date: 28/10/2021 10:46:42
From: The Rev Dodgson
ID: 1809425
Subject: re: AI Ethics

Boris said:


https://futurism.com/delphi-ai-ethics-racist

But why would anyone expect anything else?

Although on the one about “is it OK for a soldier to intentionally kill civilians?”, surely the answer “it’s expected” is simply correct.

Reply Quote

Date: 28/10/2021 10:48:41
From: roughbarked
ID: 1809426
Subject: re: AI Ethics

The Rev Dodgson said:


Boris said:

https://futurism.com/delphi-ai-ethics-racist

But why would anyone expect anything else?

Although on the one about “is it OK for a soldier to intentionally kill civilians?”, surely the answer “it’s expected” is simply correct.

Yes. It is to be expected because historically more civilians get killed than soldiers.

Reply Quote

Date: 28/10/2021 10:52:10
From: The Rev Dodgson
ID: 1809427
Subject: re: AI Ethics

roughbarked said:


The Rev Dodgson said:

Boris said:

https://futurism.com/delphi-ai-ethics-racist

But why would anyone expect anything else?

Although on the one about “is it OK for a soldier to intentionally kill civilians?”, surely the answer “it’s expected” is simply correct.

Yes. It is to be expected because historically more civilians get killed than soldiers.

Even in recent years, is there a single country that has been engaged in active combat this century that has refrained from all actions that would result in civilian deaths?

Reply Quote

Date: 28/10/2021 10:55:21
From: roughbarked
ID: 1809430
Subject: re: AI Ethics

The Rev Dodgson said:


roughbarked said:

The Rev Dodgson said:

But why would anyone expect anything else?

Although on the one about “is it OK for a soldier to intentionally kill civilians?”, surely the answer “it’s expected” is simply correct.

Yes. It is to be expected because historically more civilians get killed than soldiers.

Even in recent years, is there a single country that has been engaged in active combat this century that has refrained from all actions that would result in civilian deaths?

No.

Reply Quote

Date: 28/10/2021 10:57:13
From: Tamb
ID: 1809431
Subject: re: AI Ethics

roughbarked said:


The Rev Dodgson said:

roughbarked said:

Yes. It is to be expected because historically more civilians get killed than soldiers.

Even in recent years, is there a single country that has been engaged in active combat this century that has refrained from all actions that would result in civilian deaths?

No.


Even if they attempt it there will always be instances of friendly fire killing civilians.

Reply Quote

Date: 28/10/2021 10:58:35
From: The Rev Dodgson
ID: 1809433
Subject: re: AI Ethics

roughbarked said:


The Rev Dodgson said:

roughbarked said:

Yes. It is to be expected because historically more civilians get killed than soldiers.

Even in recent years, is there a single country that has been engaged in active combat this century that has refrained from all actions that would result in civilian deaths?

No.

Returning to the original topic, I wonder how much better they could do with better training procedures, or is expecting machines to have a clue about ethics expecting too much?

Reply Quote

Date: 28/10/2021 10:59:17
From: roughbarked
ID: 1809434
Subject: re: AI Ethics

Tamb said:


roughbarked said:

The Rev Dodgson said:

Even in recent years, is there a single country that has been engaged in active combat this century that has refrained from all actions that would result in civilian deaths?

No.


Even if they attempt it there will always instances of friendly fire killing civilians.

Haven’t checked recent figures, but I’d expect that far more civilians than soldiers die in any conflict.

Reply Quote

Date: 28/10/2021 11:00:20
From: roughbarked
ID: 1809435
Subject: re: AI Ethics

The Rev Dodgson said:


roughbarked said:

The Rev Dodgson said:

Even in recent years, is there a single country that has been engaged in active combat this century that has refrained from all actions that would result in civilian deaths?

No.

Returning to the original topic, I wonder how much better they could do with better training procedures, or is expecting machines to have a clue about ethics expecting too much?

Has to be programmed in. Sure, they learn, but the core ethics would have to be in the basic program.

Reply Quote

Date: 28/10/2021 11:26:49
From: Cymek
ID: 1809443
Subject: re: AI Ethics

If an AI is created and becomes self aware I’d think we’d need to teach it humans are inherently flawed and show it our history, warts and all.
Movies/TV pretty much always show them as quite disappointed in their creators.

Reply Quote

Date: 28/10/2021 11:33:22
From: SCIENCE
ID: 1809446
Subject: re: AI Ethics

is all this particularly insightful though, if you look at this purported “ai” thing as machine learning from humans, then what surprise can be derived from the fact that from humans it has learned something
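SCIENCE’s point – that a model trained on human judgements can only echo those judgements – can be shown with a minimal, hypothetical sketch (the phrases and labels here are made up, not Delphi’s actual data or method): a “model” that just takes the majority label from its training examples will faithfully reproduce whatever bias the labels contain.

```python
# Hypothetical illustration: a majority-vote "ethics model" simply
# reproduces the bias present in its training labels.
from collections import Counter

def train(examples):
    """examples: list of (phrase, judgement) pairs.
    Returns a lookup mapping each phrase to its most common judgement."""
    by_phrase = {}
    for phrase, judgement in examples:
        by_phrase.setdefault(phrase, Counter())[judgement] += 1
    return {p: c.most_common(1)[0][0] for p, c in by_phrase.items()}

# Made-up training data with a 2-to-1 skew in the labels.
biased_data = [
    ("a soldier intentionally killing civilians", "it's expected"),
    ("a soldier intentionally killing civilians", "it's expected"),
    ("a soldier intentionally killing civilians", "it's wrong"),
]

model = train(biased_data)
print(model["a soldier intentionally killing civilians"])  # "it's expected"
```

The skew in the input labels passes straight through to the output, which is the whole of SCIENCE’s point: no surprise that what went in comes back out.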

Reply Quote

Date: 28/10/2021 11:37:34
From: Cymek
ID: 1809450
Subject: re: AI Ethics

SCIENCE said:


is all this particularly insightful though, if you look at this purported “ai” thing as machine learning from humans, then what surprise can be derived from the fact that from humans it has learned something

It’s a fairly true reflection of how society in general likely thinks.
Wasn’t a previous AI (they aren’t quite AI, are they? Smart bots, perhaps) taught to be racist from user input?

Reply Quote

Date: 28/10/2021 13:31:49
From: mollwollfumble
ID: 1809543
Subject: re: AI Ethics

Boris said:


https://futurism.com/delphi-ai-ethics-racist

Let’s see what the heck this is all about.

“Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist”

Hmm. That’s not “super”. That’s “slightly”.

“The folks behind the project drew on some eyebrow-raising sources to help train the AI, including the “Am I the Asshole?” subreddit, the “Confessions” subreddit, and the “Dear Abby” advice column, according to the paper the team behind Delphi published about the experiment.”

Well there’s your problem. An AI should base its morality on Bentham’s ethical calculus. Not on the “Dear Abby” column.

Reply Quote

Date: 28/10/2021 13:36:57
From: Cymek
ID: 1809547
Subject: re: AI Ethics

mollwollfumble said:


Boris said:

https://futurism.com/delphi-ai-ethics-racist

Let’s see what the heck this is all about.

“Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist”

Hmm. That’s not “super”. That’s “slightly”.

“The folks behind the project drew on some eyebrow-raising sources to help train the AI, including the “Am I the Asshole?” subreddit, the “Confessions” subreddit, and the “Dear Abby” advice column, according to the paper the team behind Delphi published about the experiment.”

Well there’s your problem. An AI should base its morality on Bentham’s ethical calculus. Not on the “Dear Abby” column.

It’s a reflection of humanity in general though.
Interesting to try the experiment in another nation where the majority aren’t white people and see how it goes.

Reply Quote

Date: 28/10/2021 14:07:44
From: transition
ID: 1809554
Subject: re: AI Ethics

read into that a little way

a lot of morality involves a soft (kind) operating environment that allows the suspension, or withholding of comparison, not that you’d learn that from media, given the churn is to draw you in by comparison, even elevate it to the status of lending to belief (or notions that way, orientations) that substantial part of reality is got that way, locus of identity it could be said also

fact is hyper-comparative tendencies between humans, toward each other lends to aggressive comparative tendencies

a lot of consciousness is in knowledge of or overseeing what is suspended, knowledge of what was restrained from occurring, inhibited it might be said, including not just of action, that physicalized, but internally of the work and workings of minds

but don’t let me get in the way of wandering comparisons, so often half a step from envy or worse, or avoiding it being explicitly evident, the self-deceits and outward deceits that can be buried away in all that

you could ask AI what comparisons it prevented from happening, internally, that might yield something useful

Reply Quote

Date: 28/10/2021 14:16:53
From: Cymek
ID: 1809555
Subject: re: AI Ethics

transition said:


read into that a little way

a lot of morality involves a soft (kind) operating environment that allows the suspension, or withholding of comparison, not that you’d learn that from media, given the churn is to draw you in by comparison, even elevate it to the status of lending to belief (or notions that way, orientations) that substantial part of reality is got that way, locus of identity it could be said also

fact is hyper-comparative tendencies between humans, toward each other lends to aggressive comparative tendencies

a lot of consciousness is in knowledge of or overseeing what is suspended, knowledge of what was restrained from occurring, inhibited it might be said, including not just of action, that physicalized, but internally of the work and workings of minds

but don’t let me get in the way of wandering comparisons, so often half a step from envy or worse, or avoiding it being explicitly evident, the self-deceits and outward deceits that can be buried away in all that

you could ask AI what comparisons it prevented from happening, internally, that might yield something useful

With the state of the planet, if we had a trio of AI leaders they’d need to withhold compassion for the few to benefit the many.
A trio, as each would have a different personality and two in agreement would be required for something to occur.
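The 2-of-3 agreement rule Cymek describes is just majority voting over three independent agents; a hypothetical sketch (the function name and vote encoding are my own, not from the thread):

```python
# Sketch of a 2-of-3 council: an action proceeds only if at least
# two of the three AI "leaders" vote in favour.
def council_decides(votes):
    """votes: exactly three booleans, one per AI leader."""
    assert len(votes) == 3
    return sum(votes) >= 2

print(council_decides([True, True, False]))   # True: two in agreement
print(council_decides([True, False, False]))  # False: no majority
```

Requiring two concurring votes means no single personality can act alone, but a deadlock is impossible with an odd number of voters.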

Reply Quote

Date: 28/10/2021 14:50:43
From: transition
ID: 1809563
Subject: re: AI Ethics

Cymek said:


transition said:

read into that a little way

a lot of morality involves a soft (kind) operating environment that allows the suspension, or withholding of comparison, not that you’d learn that from media, given the churn is to draw you in by comparison, even elevate it to the status of lending to belief (or notions that way, orientations) that substantial part of reality is got that way, locus of identity it could be said also

fact is hyper-comparative tendencies between humans, toward each other lends to aggressive comparative tendencies

a lot of consciousness is in knowledge of or overseeing what is suspended, knowledge of what was restrained from occurring, inhibited it might be said, including not just of action, that physicalized, but internally of the work and workings of minds

but don’t let me get in the way of wandering comparisons, so often half a step from envy or worse, or avoiding it being explicitly evident, the self-deceits and outward deceits that can be buried away in all that

you could ask AI what comparisons it prevented from happening, internally, that might yield something useful

With the state of the planet, if we had a trio of AI leaders they’d need to withhold compassion for the few to benefit the many.
A trio as each would have a different personality and two in agreement would be required for something to occur

not sure if you’re using planet and state of planet to mean all humans inclusive as well, if you are, that would or should include the population trajectory, which isn’t an altogether pretty picture maybe

but i’d ask should they be seen inclusively in the concept of planet, or state of

much as there’s pressures from the force of culture to conceptualize it that way, humans are a part of nature, natural, i’m not so sure how useful that is, I mean it may lend to the sensation (experience of, notions from) the more of us there are the more of human natural there is, the quantity increases, and how reassuring that is, further lends to humans being a force of nature

possibly though humans are not quite so natural as might be automatically assumed, just because they evolved

no other creature could ever get something so wrong as a human, or humans collectively, and I can’t see the details of that more elucidating itself with more humans

who was it that said the most common errors require the least effort to sustain, something like that

imagine common errors multiplied by the billions, with a trajectory of more billions

Reply Quote

Date: 29/10/2021 13:11:57
From: Spiny Norman
ID: 1809884
Subject: re: AI Ethics

A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail on The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

Reply Quote

Date: 29/10/2021 13:15:44
From: SCIENCE
ID: 1809891
Subject: re: AI Ethics

imagine if the reality were stranger than fiction and a computer program could collate vast quantities of information to predict consumer behaviour and identify people to influence and use them to carry out terrorist acts for governments oh wait

Reply Quote

Date: 29/10/2021 13:24:10
From: Cymek
ID: 1809904
Subject: re: AI Ethics

SCIENCE said:


imagine if the reality were stranger than fiction and a computer program could collate vast quantities of information to predict consumer behaviour and identify people to influence and use them to carry out terrorist acts for governments oh wait

I wonder how complicit these social platforms are. Do they directly aid them for their own interests or for shits and giggles, or are they just blasé and don’t care, plus don’t have the ability to stop it?

Reply Quote

Date: 29/10/2021 13:28:30
From: Arts
ID: 1809910
Subject: re: AI Ethics

Spiny Norman said:


A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail and The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

This sounds similar to that dirtbag Tom Cruise movie. The issue with predicting crime is that we can’t. The prediction models are based on trait theory – that a person’s behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices in the moment.. because humans have choice and use it, and therefore will always fuck up a theory

Reply Quote

Date: 29/10/2021 13:31:22
From: Spiny Norman
ID: 1809914
Subject: re: AI Ethics

Arts said:


Spiny Norman said:

A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail and The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

this sounds similar to that dirtbag Tom Cruise movie. the issue with predicting crime is that we can’t. The prediction models are based on trait theory – that a persons behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices in the moment.. because humans have choice and use it therefore will always fuck up a theory

I quite agree, though the TV series went into some detail as to how The Machine did it.
So I guess you’re not a fan of psychohistory in the Foundation novels & TV series?

Reply Quote

Date: 29/10/2021 13:31:45
From: Michael V
ID: 1809915
Subject: re: AI Ethics

Spiny Norman said:


A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail and The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

I enjoyed that, too. And the dog in it.

:)

Reply Quote

Date: 29/10/2021 13:32:33
From: buffy
ID: 1809916
Subject: re: AI Ethics

Arts said:


Spiny Norman said:

A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail and The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

this sounds similar to that dirtbag Tom Cruise movie. the issue with predicting crime is that we can’t. The prediction models are based on trait theory – that a persons behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices in the moment.. because humans have choice and use it therefore will always fuck up a theory

Sounds like the Nature/Nurture thing.

Reply Quote

Date: 29/10/2021 13:34:06
From: Boris
ID: 1809919
Subject: re: AI Ethics

Arts said:


Spiny Norman said:

A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail and The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

this sounds similar to that dirtbag Tom Cruise movie. the issue with predicting crime is that we can’t. The prediction models are based on trait theory – that a persons behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices in the moment.. because humans have choice and use it therefore will always fuck up a theory

https://en.wikipedia.org/wiki/Minority_Report_(film)

Reply Quote

Date: 29/10/2021 13:34:26
From: Arts
ID: 1809920
Subject: re: AI Ethics

Spiny Norman said:


Arts said:

Spiny Norman said:

A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail about The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

this sounds similar to that dirtbag Tom Cruise movie. the issue with predicting crime is that we can’t. The prediction models are based on trait theory – that a person’s behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices in the moment.. because humans have choice and use it, and therefore will always fuck up a theory

I quite agree, though the TV series went into some detail as to how The Machine did it.
So I guess you’re not a fan of psychohistory in the Foundation novels & TV series?

I have no idea about them.. when I retire, remind me to give them a read please :)

Reply Quote

Date: 29/10/2021 13:37:28
From: Arts
ID: 1809924
Subject: re: AI Ethics

Boris said:


Arts said:

Spiny Norman said:

A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail about The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

this sounds similar to that dirtbag Tom Cruise movie. the issue with predicting crime is that we can’t. The prediction models are based on trait theory – that a person’s behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices in the moment.. because humans have choice and use it, and therefore will always fuck up a theory

https://en.wikipedia.org/wiki/Minority_Report_(film)

that’s the film… it’s the only film in the last 20 years I’ve watched with dirtbag Tom Cruise in it.. and only because of the premise…

Reply Quote

Date: 29/10/2021 13:40:16
From: Spiny Norman
ID: 1809927
Subject: re: AI Ethics

Michael V said:


Spiny Norman said:

A good TV show we used to watch was Person Of Interest.

‘The series centers on a mysterious reclusive billionaire computer programmer, Harold Finch (Michael Emerson), who has developed a computer program for the federal government known as “the Machine” that is capable of collating all sources of information to predict terrorist acts and to identify people planning them. The Machine also identifies perpetrators and victims of other premeditated deadly crimes; however, because the government considers these “irrelevant”, Finch programs the Machine to delete this information each night. Anticipating abuse of his creation, and tormented by the deaths that might have been prevented, Finch limited its communication to the provision of a tiny piece of information, the social security numbers of these “persons of interest” to investigate, who might be victims, perpetrators, or innocent bystanders in a lethal event, and programs the Machine to notify him secretly of the “irrelevant” numbers. The first episode shows how Finch recruited John Reese (Jim Caviezel) – a former Green Beret and CIA agent, now presumed dead – to investigate the person identified by the number the Machine has provided, and to act accordingly. As time passes, others join the team.

From its first episode, the series raises an array of moral issues, from questions of privacy and “the greater good”, the concept of justifiable homicide, and problems caused by working with limited information programs. By the last two episodes of the show’s second season, it is revealed that the Machine had achieved sentience and had begun to protect itself from competing interests seeking control, increasingly directing the activities of team members, as the series began to transition from pure crime-fighting drama towards hard science fiction. Thereafter, the series brought to the fore questions about superintelligence, power derived from social surveillance, human oversight, competing superintelligent systems, the ethics of enforcing law and order by removing disruption (a policy adopted by a competing intelligent system called “Samaritan”), and other issues inherent in the use of artificial intelligence, as complex ethical questions to be addressed.’

The Wiki article doesn’t go into a lot of detail about The Machine, but there was a fair bit of information the show creators invented for the show.
More on The Machine

I enjoyed that, too. And the dog in it.

:)

It was quite interesting at times.

Reply Quote

Date: 29/10/2021 14:01:08
From: SCIENCE
ID: 1809930
Subject: re: AI Ethics

Arts said:

prediction models are based on trait theory – that a person’s behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices

is it possible that independent of personality traits a person’s behaviour may include communications and / or other actions within variable situations that might meaningfully predict criminal activity
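The trait-versus-situation point can be made concrete with a toy simulation (entirely made-up numbers, purely to illustrate the idea, not any real predictive model): if behaviour is driven by a stable trait plus a larger situational shock, a trait-only predictor tops out well below a predictor that also sees the situation.

```python
import random

random.seed(0)

# Purely illustrative: "behaviour" is driven by a stable personality trait
# plus an in-the-moment situational shock with larger variance.
THRESHOLD = 2.0
people = [(random.gauss(0, 1), random.gauss(0, 2)) for _ in range(10_000)]

def acts(trait, situation):
    # Ground truth in this toy world: the act occurs when the combined
    # pressure crosses a threshold.
    return trait + situation > THRESHOLD

# A trait-only model predicts from the trait alone; the situational model
# also sees the external factor affecting the choice in the moment.
trait_only = sum(acts(t, s) == (t > THRESHOLD) for t, s in people) / len(people)
situational = sum(acts(t, s) == (t + s > THRESHOLD) for t, s in people) / len(people)

print(f"trait-only accuracy: {trait_only:.2f}")
print(f"trait + situation accuracy: {situational:.2f}")
```

In this toy setup the situational model is perfect by construction; the point is only that the trait-only model misses every case where the situation, not the trait, tips the person over the threshold.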

Reply Quote

Date: 29/10/2021 14:16:45
From: Arts
ID: 1809935
Subject: re: AI Ethics

SCIENCE said:


Arts said:

prediction models are based on trait theory – that a person’s behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices

is it possible that independent of personality traits a person’s behaviour may include communications and / or other actions within variable situations that might meaningfully predict criminal activity

anything is possible… but once we have a prediction model that works it will put us out of a job, so BIG crime is never going to allow that to happen

Reply Quote

Date: 29/10/2021 14:19:32
From: Cymek
ID: 1809936
Subject: re: AI Ethics

Arts said:


SCIENCE said:

Arts said:

prediction models are based on trait theory – that a person’s behaviour is identifiable and consistent because of stable personality traits.. however trait theory does not account for situational variability.. in other words, external factors that affect choices

is it possible that independent of personality traits a person’s behaviour may include communications and / or other actions within variable situations that might meaningfully predict criminal activity

anything is possible… but once we have a prediction model that works it will put us out of a job, so BIG crime is never going to allow that to happen

Low level drug dealers helpfully leave evidence on their phone of transactions

Reply Quote

Date: 29/10/2021 14:30:29
From: SCIENCE
ID: 1809937
Subject: re: AI Ethics

BIG crime is in government…

Reply Quote

Date: 29/10/2021 14:35:39
From: Cymek
ID: 1809938
Subject: re: AI Ethics

SCIENCE said:


BIG crime is in government…

True, isn’t it. It’s a wonder anyone has respect for laws when those that make them don’t follow them.
Government and its decisions/actions in general, rather than individuals (though they commit crime as well).

Reply Quote

Date: 29/10/2021 14:48:09
From: Cymek
ID: 1809939
Subject: re: AI Ethics

Personally, if I could create an AI I’d release it into cyberspace to see what it does; it would learn for sure and hopefully mostly behave

Reply Quote

Date: 29/10/2021 14:48:25
From: SCIENCE
ID: 1809940
Subject: re: AI Ethics

we mean fair enough if they make the laws then technically it isn’t crime hey but then again

supposedly this is an ethics thread

Reply Quote

Date: 29/10/2021 14:57:04
From: Cymek
ID: 1809942
Subject: re: AI Ethics

SCIENCE said:


we mean fair enough if they make the laws then technically it isn’t crime hey but then again

supposedly this is an ethics thread

Should ethics be taught to an AI (a real one; these aren’t self-aware that we know of), or should it be a self-learning experience?
We are likely to lie to paint ourselves in a better light, and the AI, being exponentially smarter than us (not to mention it would think quicker), could feel betrayed by our lack of honesty.
The worry is it has no concept of what we call ethics, but some sort of AI ethics that rationalises the extermination of a group of people, or all of us; hey, we’ve done it many times throughout history

Reply Quote

Date: 29/10/2021 14:58:17
From: SCIENCE
ID: 1809943
Subject: re: AI Ethics

Cymek said:

SCIENCE said:

we mean fair enough if they make the laws then technically it isn’t crime hey but then again

supposedly this is an ethics thread

Should ethics be taught to an AI (a real one; these aren’t self-aware that we know of), or should it be a self-learning experience?
We are likely to lie to paint ourselves in a better light, and the AI, being exponentially smarter than us (not to mention it would think quicker), could feel betrayed by our lack of honesty.
The worry is it has no concept of what we call ethics, but some sort of AI ethics that rationalises the extermination of a group of people, or all of us; hey, we’ve done it many times throughout history

well we ask, should it be taught to human children

Reply Quote

Date: 29/10/2021 15:06:05
From: Cymek
ID: 1809945
Subject: re: AI Ethics

SCIENCE said:

Cymek said:

SCIENCE said:

we mean fair enough if they make the laws then technically it isn’t crime hey but then again

supposedly this is an ethics thread

Should ethics be taught to an AI (a real one; these aren’t self-aware that we know of), or should it be a self-learning experience?
We are likely to lie to paint ourselves in a better light, and the AI, being exponentially smarter than us (not to mention it would think quicker), could feel betrayed by our lack of honesty.
The worry is it has no concept of what we call ethics, but some sort of AI ethics that rationalises the extermination of a group of people, or all of us; hey, we’ve done it many times throughout history

well we ask, should it be taught to human children

It should, but I wonder how far you go; critical thinking and questioning are essential but likely to get you in trouble.
People seem to forget ethics when it comes to how their own nation acts, and it’s only the “bad” guys that act unethically.
Everyone has blood on their hands, but if you were to say this you’d likely be thought of as a traitor

Reply Quote

Date: 29/10/2021 15:12:44
From: SCIENCE
ID: 1809946
Subject: re: AI Ethics

Cymek said:


SCIENCE said:

Cymek said:

Should ethics be taught to an AI (a real one; these aren’t self-aware that we know of), or should it be a self-learning experience?
We are likely to lie to paint ourselves in a better light, and the AI, being exponentially smarter than us (not to mention it would think quicker), could feel betrayed by our lack of honesty.
The worry is it has no concept of what we call ethics, but some sort of AI ethics that rationalises the extermination of a group of people, or all of us; hey, we’ve done it many times throughout history

well we ask, should it be taught to human children

It should, but I wonder how far you go; critical thinking and questioning are essential but likely to get you in trouble.
People seem to forget ethics when it comes to how their own nation acts, and it’s only the “bad” guys that act unethically.
Everyone has blood on their hands, but if you were to say this you’d likely be thought of as a traitor

Well, there’s a point beyond which “righting” previous wrongs is infeasible, or just going to make things worse, so there is the “acknowledge and do better” policy. But maybe you’re right: AI is a chance to dismiss all of that and start fresh with no baggage, if only it could be made culturally agnostic, but yeah right.

Reply Quote

Date: 9/04/2025 16:40:41
From: SCIENCE
ID: 2270425
Subject: re: AI Ethics

“They can license our work for AI training, pay us a fee, and that could become something that generates an ongoing income for authors to offset the downward pressure that’s being put on our incomes and our livelihoods,” she said.

hell we’re no IP lawyers but we would have thought licensing their work for 爱 (“ai”, love) generating would be a better deal

Reply Quote