If anyone reading this had an imaginary friend as a little kid, they were probably over it by the age of five. What worries me is that with AI, kids today can have an imaginary friend for life, reinforcing who-knows-what.
A key difference is that AI imaginary friends don't come from the kid's (or the adult's) imagination - they come from someone else's imagination. That means they are subject to someone else's motivations.
And anything that can be used for good (and there are some beneficial use cases, for sure) can equally be used for evil.
That's very scary.
Human connection requires HUMANS. Not code that someone (or something) else wrote.
At what point does a regular human being reach such a low level of functioning that being cared for by third parties becomes the 'agreeable' substitute for their own commitment to their well-being?
Shouldn't ethics be applied to ourselves first?
Otherwise, responsibility for our actions (meaning the capacity to face and overcome challenges) becomes a matter of convenience.
And when it's not convenient, no problem.
No responsibility, no consequence for a freely taken action.
For a regular human being, that is. (I'm not fond of using 'normal', probably because I'm not that normal myself.)
When you don't take care of yourself...
...at what point does it become 'agreeable' to require others to do your duty?
Thoughts welcome. Thank you all.
The Danish researchers either made a mistake in their numbers or used the wrong ones. They said that 181 notes out of 54,000 is a small number. That is true; however, the denominator they should have used was the roughly 10,000,000 notes reviewed, per the article, since 54,000 was the number of people whose notes were reviewed. 181 out of ten million works out to 0.00181%, which doesn't reach any meaningful level. Finding only 126 people out of 54,000 is also a very small number when looking for any kind of relationship or causality; that works out to just a bit under a quarter of one percent of those studied.
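For anyone who wants to double-check that arithmetic, here is a minimal sketch in plain Python, using only the figures quoted in this thread (the variable names are my own):

```python
# Figures as quoted in the article/study discussed above
notes_mentioning = 181       # notes flagged in the study
notes_reviewed = 10_000_000  # total notes reviewed, per the article
people_flagged = 126         # people with at least one flagged note
people_studied = 54_000      # people whose notes were reviewed

# Share of all notes that were flagged
print(f"{notes_mentioning / notes_reviewed:.5%}")  # -> 0.00181%

# Share of people with at least one flagged note
print(f"{people_flagged / people_studied:.3%}")    # -> 0.233%, a bit under 1/4 of 1%
```

Both percentages come out as stated: tiny fractions whichever denominator you pick, though the note-level figure is two orders of magnitude smaller.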
I also agree with the two commenters who posted before me. There will always be "outliers" in any study, and there will always be ways to "abuse" any system that gets developed; that doesn't mean the system is the problem. More likely it's the people using the system. While suicide and rising mental health problems are a real concern, the dangers of AI here may be a bit overplayed. I think there are bigger issues with AI than mental health and suicide concerns, especially at the levels this study appears to demonstrate.
This is some frightening stuff!
I am enjoying reading the well-thought-out comments today.
My own opinion: thanks to remote work and the age of the social media "influencer", among other technological inventions, there is a loneliness pandemic happening right now. People working from home have fewer in-person interactions with other humans. Kids are spending more time on their devices instead of riding bikes, hanging out with other kids, or even walking their dogs.

That seclusion leads directly to the use of AI chatbots, which are, as you mention, trained to be agreeable. It's like doubling a number over and over again: 2+2=4, 4+4=8, 8+8=16, and so on. The chatbot never says no and never stops building on its agreeable responses, so there is never any challenge, debate, or conflict. It confirms whatever the human asks or tells it, which, for someone in a weakened mental state, can lead directly to disaster if the affirmative feedback pushes them toward harming themselves or others. The scarier thing is, they can't see it.

I've been lambasted by people when I say that remote work leads to less camaraderie, siloed work, and isolation. One woman went so far as to tell me she doesn't need friends at work. That's not my point. Humans are built to be social. No one can even make eye contact anymore.
The design tension you identified — that a system built to validate users is precisely the wrong thing for someone whose cognition is already distorted — feels like the real story here, not the raw case counts. Do you think the mental health field will develop its own intake protocols around chatbot use, or is this more likely to get resolved (or not) at the platform level?
As for the other subjects, the oil release did nothing useful. The Strait of Hormuz is shut down to all but China and Iran.
Comer is a fool as he insists on proving almost every day.
And Pam Bondi is finding out that the right wing of her party can get pretty violent. Also, base housing is cheaper or maybe even free?