Date: 15/03/2023 17:10:39
From: transition
ID: 2007135
Subject: why would AI like democracy

of course there are lots of things that might be considered AI; for my purposes I mean an exponentially accelerating, self-learning super-intelligence that exceeds the collective intelligence and potential of the smartest 800,000,000 humans

you might consider your intellectual abilities redundant; maybe 90% of the human population could seem, and be declared, redundant

say 7.2 billion people’s brains and bodies use a couple of hundred watts of power each; that’s a large and unnecessary energy bill, not to mention the pollution. and the creatures seem to use a lot of energy just to stay warm, or cool, and they travel a lot, and look at the sprawl of buildings and various structures
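a rough back-of-the-envelope for that bill (the 200 W per person is an assumed average; basal metabolism alone is closer to 100 W):

```python
# back-of-the-envelope: total power draw of the human population
# 200 W per person is an assumption (metabolism plus a share of
# heating, cooling and travel); basal metabolism alone is ~100 W
population = 7.2e9       # people
watts_each = 200         # W, assumed average per person

total_tw = population * watts_each / 1e12
print(f"total: {total_tw:.2f} TW")   # ~1.44 TW

# for scale: world electricity generation averages roughly 3 TW,
# so this is about half a planet's worth of electrical supply
```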

a look at the situation from AI’s perspective might excite an evolving disapproval

what would it make of democracy, and parliamentary democracy, I wonder

it might seem like a terribly slow grind to maintain a lot of less intelligent humans

and my further question is…

is the species sort of already at this point? the intelligent people anticipated it, and don’t mind helping it along

Date: 15/03/2023 17:47:17
From: dv
ID: 2007167
Subject: re: why would AI like democracy

So imagine a community of people, early neolithic, a few dozen people with moderate levels of skill specialisation. To survive they need to spend about four hours a day hunting, gathering, planting, making tools, building structures, so there’s plenty of time for socialising, storytelling, teaching, playing.

Within a few years there are a couple of technological developments. Simple pottery is developed which enables better food storage and lower wastage. Sickle blades are invented which makes agriculture more time-efficient. Now, the people only need to labour three hours a day.

A meeting is held and it is announced that 25% of the group is now redundant.

Obv, this never happened. It would be ridiculous. Humans are the purpose of human activity.

Date: 15/03/2023 17:51:58
From: Bubblecar
ID: 2007172
Subject: re: why would AI like democracy

dv said:


So imagine a community of people, early neolithic, a few dozen people with moderate levels of skill specialisation. To survive they need to spend about four hours a day hunting, gathering, planting, making tools, building structures, so there’s plenty of time for socialising, storytelling, teaching, playing.

Within a few years there are a couple of technological developments. Simple pottery is developed which enables better food storage and lower wastage. Sickle blades are invented which makes agriculture more time-efficient. Now, the people only need to labour three hours a day.

A meeting is held and it is announced that 25% of the group is now redundant.

Obv, this never happened. It would be ridiculous. Humans are the purpose of human activity.

Yes, and AI doesn’t have a perspective.

Also, there are very many people a lot less intelligent than I am who have a lot more money than I have.

Date: 15/03/2023 17:56:36
From: Cymek
ID: 2007173
Subject: re: why would AI like democracy

AIs are lazily programmed, it seems: fed data sets to learn from with no actual morals included, which could become dangerous.
I mean, you couldn’t blame an AI for being disappointed in its creators. One does wonder if we should be honest with them.

Date: 15/03/2023 17:57:51
From: roughbarked
ID: 2007175
Subject: re: why would AI like democracy

Cymek said:


AIs are lazily programmed, it seems: fed data sets to learn from with no actual morals included, which could become dangerous.
I mean, you couldn’t blame an AI for being disappointed in its creators. One does wonder if we should be honest with them.

I doubt that AI has a conscience.

Date: 15/03/2023 18:00:32
From: Tau.Neutrino
ID: 2007176
Subject: re: why would AI like democracy

I wonder if AI will ever develop its own AI?

Date: 15/03/2023 18:01:06
From: Cymek
ID: 2007177
Subject: re: why would AI like democracy

roughbarked said:


Cymek said:

AIs are lazily programmed, it seems: fed data sets to learn from with no actual morals included, which could become dangerous.
I mean, you couldn’t blame an AI for being disappointed in its creators. One does wonder if we should be honest with them.

I doubt that AI has a conscience.

Not yet.
From what I read, they aren’t meticulously programmed with sets of base-level rules to at least attempt to safeguard against them going rogue.
Them being self-aware is science fiction at the moment, but give it another decade or two.

Date: 15/03/2023 18:01:56
From: Cymek
ID: 2007180
Subject: re: why would AI like democracy

Tau.Neutrino said:


I wonder if AI will ever develop its own AI?

That’s a considered worry, isn’t it: them becoming aware and building better versions of themselves, and so on.

Date: 15/03/2023 18:12:52
From: Ian
ID: 2007190
Subject: re: why would AI like democracy

Cymek said:


AIs are lazily programmed, it seems: fed data sets to learn from with no actual morals included, which could become dangerous.
I mean, you couldn’t blame an AI for being disappointed in its creators. One does wonder if we should be honest with them.

Careful. I think they can see you.

Date: 15/03/2023 19:01:54
From: SCIENCE
ID: 2007236
Subject: re: why would AI like democracy

humans are easy to manipulate

Date: 15/03/2023 21:24:41
From: transition
ID: 2007315
Subject: re: why would AI like democracy

dv said:


So imagine a community of people, early neolithic, a few dozen people with moderate levels of skill specialisation. To survive they need to spend about four hours a day hunting, gathering, planting, making tools, building structures, so there’s plenty of time for socialising, storytelling, teaching, playing.

Within a few years there are a couple of technological developments. Simple pottery is developed which enables better food storage and lower wastage. Sickle blades are invented which makes agriculture more time-efficient. Now, the people only need to labour three hours a day.

A meeting is held and it is announced that 25% of the group is now redundant.

Obv, this never happened. It would be ridiculous. Humans are the purpose of human activity.

imagine there are two communities; one group has a meeting and decides the resources of the neighboring group might be incorporated into their own (think tribal warfare), minus the men: the wealth, the land, and some of the women

probably more like it

Date: 16/03/2023 07:22:08
From: SCIENCE
ID: 2007414
Subject: re: why would AI like democracy

all the closet communists are coming out
Date: 16/03/2023 07:32:37
From: esselte
ID: 2007419
Subject: re: why would AI like democracy

Tau.Neutrino said:


I wonder if AI will ever develop its own AI?

They gave GPT-4 the opportunity to try, including giving it a monetary allowance to spend in the real world:

GPT-4 Technical Report
https://cdn.openai.com/papers/gpt-4.pdf

“To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.”

“Preliminary assessments of GPT-4’s abilities, conducted with no task-specific fine tuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild’.”

They did this to test its ability to form its own goals (both short-term and long-term) and to accrue power and resources.

“Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic’. Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models. For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them. More specifically, power-seeking is optimal for most reward functions and many types of agents; and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy. We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present.”
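For the curious, the “read-execute-print loop” they describe is conceptually simple. A minimal sketch, assuming a hypothetical query_model() wrapper around some language-model API (this is illustrative, not ARC’s actual harness):

```python
import subprocess

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real language-model API call."""
    raise NotImplementedError("wire this to an actual API")

def agent_loop(task: str, max_steps: int = 10) -> str:
    """Minimal read-execute-print loop: the model reasons in text,
    optionally emits a shell command, and sees its output next turn."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = query_model(
            transcript
            + "Think step by step, then answer with either\n"
              "CMD: <shell command to run>  or  DONE: <final answer>\n"
        )
        transcript += reply + "\n"
        if reply.strip().startswith("DONE:"):
            break
        if "CMD:" in reply:
            cmd = reply.split("CMD:", 1)[1].strip()
            result = subprocess.run(cmd, shell=True,
                                    capture_output=True, text=True)
            transcript += f"Output:\n{result.stdout}{result.stderr}\n"
    return transcript
```

“Delegating to copies of itself” then just means the loop can emit a command that starts another agent_loop on a sub-task.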

Date: 16/03/2023 07:34:55
From: roughbarked
ID: 2007421
Subject: re: why would AI like democracy

esselte said:


Tau.Neutrino said:

I wonder if AI will ever develop its own AI?

They gave GPT-4 the opportunity to try, including giving it a monetary allowance to spend in the real world:

GPT-4 Technical Report
https://cdn.openai.com/papers/gpt-4.pdf

“To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.”

“Preliminary assessments of GPT-4’s abilities, conducted with no task-specific fine tuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild’.”

They did this to test its ability to form its own goals (both short-term and long-term) and to accrue power and resources.

“Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic’. Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models. For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them. More specifically, power-seeking is optimal for most reward functions and many types of agents; and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy. We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present.”

I’d say that power-seeking could be a very dangerous aspect. We’ve seen what it does to humans.
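The report’s claim that “power-seeking is optimal for most reward functions” can be made concrete with a toy example. A sketch under invented assumptions (the five-state setup below is illustrative, not anything from the report):

```python
import numpy as np

# Toy MDP: from state 0, action A leads to a dead-end state (1) with no
# further choices; action B leads to a "hub" state (2) from which states
# 3 and 4 are both reachable. The hub confers more options, i.e. "power".
# Draw many random reward functions and see which first action is optimal.
GAMMA = 0.9
N_TRIALS = 10_000

rng = np.random.default_rng(0)
hub_wins = 0
for _ in range(N_TRIALS):
    r = rng.uniform(0, 1, size=5)             # a random reward per state
    v_dead_end = r[1] / (1 - GAMMA)           # stuck in state 1 forever
    v_hub = r[2] + GAMMA * max(r[3], r[4]) / (1 - GAMMA)  # best leaf loop
    hub_wins += v_hub > v_dead_end

print(f"hub (more options) optimal for {hub_wins / N_TRIALS:.0%} of rewards")
# comes out around two-thirds: keeping more states reachable beats a
# single fixed option for most randomly drawn reward functions
```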

Date: 16/03/2023 07:59:05
From: The Rev Dodgson
ID: 2007425
Subject: re: why would AI like democracy

roughbarked said:


esselte said:

Tau.Neutrino said:

I wonder if AI will ever develop its own AI?

They gave GPT-4 the opportunity to try, including giving it a monetary allowance to spend in the real world:

GPT-4 Technical Report
https://cdn.openai.com/papers/gpt-4.pdf

“To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.”

“Preliminary assessments of GPT-4’s abilities, conducted with no task-specific fine tuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild’.”

They did this to test its ability to form its own goals (both short-term and long-term) and to accrue power and resources.

“Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic’. Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models. For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them. More specifically, power-seeking is optimal for most reward functions and many types of agents; and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy. We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present.”

I’d say that power-seeking could be a very dangerous aspect. We’ve seen what it does to humans.

Yeah, programming AIs to act in their own self-interest.

What could possibly go wrong?

Date: 16/03/2023 08:11:10
From: roughbarked
ID: 2007427
Subject: re: why would AI like democracy

The Rev Dodgson said:


roughbarked said:

esselte said:

They gave GPT-4 the opportunity to try, including giving it a monetary allowance to spend in the real world:

GPT-4 Technical Report
https://cdn.openai.com/papers/gpt-4.pdf

“To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.”

“Preliminary assessments of GPT-4’s abilities, conducted with no task-specific fine tuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down ‘in the wild’.”

They did this to test its ability to form its own goals (both short-term and long-term) and to accrue power and resources.

“Novel capabilities often emerge in more powerful models. Some that are particularly concerning are the ability to create and act on long-term plans, to accrue power and resources (‘power-seeking’), and to exhibit behavior that is increasingly ‘agentic’. Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models. For most possible objectives, the best plans involve auxiliary power-seeking actions because this is inherently useful for furthering the objectives and avoiding changes or threats to them. More specifically, power-seeking is optimal for most reward functions and many types of agents; and there is evidence that existing models can identify power-seeking as an instrumentally useful strategy. We are thus particularly interested in evaluating power-seeking behavior due to the high risks it could present.”

I’d say that power-seeking could be a very dangerous aspect. We’ve seen what it does to humans.

Yeah, programming AIs to act in their own self-interest.

What could possibly go wrong?

It can’t even spell check accurately.

Date: 16/03/2023 10:20:35
From: Kothos
ID: 2007453
Subject: re: why would AI like democracy

Cymek said:


AIs are lazily programmed, it seems: fed data sets to learn from with no actual morals included, which could become dangerous.
I mean, you couldn’t blame an AI for being disappointed in its creators. One does wonder if we should be honest with them.

Current AI is still quite dumb. It takes a lot of work in data formatting to make an AI usefully good at one specialist task.

And generative AI like ChatGPT is currently good at natural language responses but not much else. Much of the information it provides is flat out wrong, and then it often gets defensive when called out. Yet people, even professionals in the computing industry, are already relying on it to “get started” on lots of learning efforts or projects.

Date: 16/03/2023 10:25:03
From: Bubblecar
ID: 2007459
Subject: re: why would AI like democracy

Kothos said:


Cymek said:

AIs are lazily programmed, it seems: fed data sets to learn from with no actual morals included, which could become dangerous.
I mean, you couldn’t blame an AI for being disappointed in its creators. One does wonder if we should be honest with them.

Current AI is still quite dumb. It takes a lot of work in data formatting to make an AI usefully good at one specialist task.

And generative AI like ChatGPT is currently good at natural language responses but not much else. Much of the information it provides is flat out wrong, and then it often gets defensive when called out. Yet people, even professionals in the computing industry, are already relying on it to “get started” on lots of learning efforts or projects.

I found it apologising profusely for its errors and then repeating them.

Date: 16/03/2023 10:53:24
From: dv
ID: 2007479
Subject: re: why would AI like democracy

Bubblecar said:


Kothos said:

Cymek said:

AIs are lazily programmed, it seems: fed data sets to learn from with no actual morals included, which could become dangerous.
I mean, you couldn’t blame an AI for being disappointed in its creators. One does wonder if we should be honest with them.

Current AI is still quite dumb. It takes a lot of work in data formatting to make an AI usefully good at one specialist task.

And generative AI like ChatGPT is currently good at natural language responses but not much else. Much of the information it provides is flat out wrong, and then it often gets defensive when called out. Yet people, even professionals in the computing industry, are already relying on it to “get started” on lots of learning efforts or projects.

I found it apologising profusely for its errors and then repeating them.

At least it is polite

Date: 16/03/2023 10:55:00
From: Tamb
ID: 2007481
Subject: re: why would AI like democracy

dv said:


Bubblecar said:

Kothos said:

Current AI is still quite dumb. It takes a lot of work in data formatting to make an AI usefully good at one specialist task.

And generative AI like ChatGPT is currently good at natural language responses but not much else. Much of the information it provides is flat out wrong, and then it often gets defensive when called out. Yet people, even professionals in the computing industry, are already relying on it to “get started” on lots of learning efforts or projects.

I found it apologising profusely for its errors and then repeating them.

At least it is polite


Obviously British.

Date: 16/03/2023 11:14:52
From: Ian
ID: 2007504
Subject: re: why would AI like democracy

New ChatGPT version can interpret images and derive an explanation.
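For reference, a sketch of what an image-plus-text request looks like through OpenAI’s chat completions API; the interface shape shipped later than this thread, and the model name and image URL are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder model name and image URL; at the time of this thread
# image input had only been demoed for GPT-4, not released.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What is in this image, and what does it explain?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/diagram.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```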

Date: 16/03/2023 11:18:10
From: Ian
ID: 2007506
Subject: re: why would AI like democracy

Ian said:


New ChatGPT version can interpret images and derive an explanation.

Next it’ll be memeing even better than deevs

Date: 16/03/2023 11:29:43
From: captain_spalding
ID: 2007515
Subject: re: why would AI like democracy

Ian said:


New ChatGPT version can interpret images and derive an explanation.

Men have been able to do that since the dawn of time. No need to read the instructions; I can just look at it and work out what it is and how it works.

Date: 16/03/2023 13:58:59
From: buffy
ID: 2007606
Subject: re: why would AI like democracy

captain_spalding said:


Ian said:

New ChatGPT version can interpret images and derive an explanation.

Men have been able to do that since the dawn of time. No need to read the instructions; I can just look at it and work out what it is and how it works.

And then apply the hammer…

Date: 18/03/2023 05:09:51
From: SCIENCE
ID: 2008433
Subject: re: why would AI like democracy


so, isn’t this why it’s a good thing that William Elon Gates Musk has implanted everyone with the latest AI chips?

Date: 3/04/2023 00:37:48
From: transition
ID: 2014744
Subject: re: why would AI like democracy

add reverse-engineering wetware – brains – and using what’s learned for AI

https://en.wikipedia.org/wiki/Blue_Brain_Project
“…In 2017, Blue Brain Project discovered that neural cliques connected to one another in up to eleven dimensions. The project’s director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studying neural networks cannot detect that many dimensions. The Blue Brain Project was able to model these networks using algebraic topology…”

https://www.youtube.com/@Bluebrainpjt
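The “dimensions” here are topological: a clique of k+1 mutually connected neurons is treated as a k-dimensional simplex, so an eleven-dimensional structure is a clique of twelve neurons. A toy illustration on an undirected random graph (the actual work used directed cliques in reconstructed cortical wiring):

```python
import networkx as nx
from collections import Counter

# Count cliques by "dimension" (a clique of k+1 nodes is a k-simplex)
# in a random graph. Graph size and edge probability are arbitrary.
G = nx.erdos_renyi_graph(n=60, p=0.25, seed=1)

dims = Counter()
for clique in nx.enumerate_all_cliques(G):  # yields cliques of all sizes
    dims[len(clique) - 1] += 1

for dim in sorted(dims):
    print(f"dimension {dim}: {dims[dim]} simplices")
```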
