ChatGPT and AI: Are we all about to die?

The original Den of Iniquity
Fuck the Glazers
Legend
Posts: 12641
Joined: 12 years ago

FuB wrote: 6 months ago
Fuck the Glazers wrote: 6 months ago AI is brilliant and has the ability to transform our lives for the better. But under capitalism it's being used to make people unemployed in order to save money / drive profit.

https://www.theguardian.com/business/20 ... nch-adzuna

Need to also add it's being used by states to kill people. The Israeli army are using it to aid their genocide. They said AI is more accurate at finding Hamas targets but weirdly it keeps bombing children.

It's the two things I mentioned earlier; all technological developments are used to kill people and make money.
Sorry not to get back to this thread before now... I've actually been working a lot on an AI-driven system, but more on that another day.

Is it fair to draw parallels with the agricultural revolutions, the industrial revolution and, in fact, the last round of computerisation/robotics that put people out of work? Notwithstanding the imminent pain that's here (and more coming), future generations end up finding new roles and opportunities. That, of course, doesn't help those at risk of losing employment now.

As for war... well, it's not really a surprise to find warfare at the forefront of technological advancement. It's shit that people are so keen to use better and better technology to kill more and more people. In the end, though, this is the underlying concern with AI. It's not the AI that's the issue (at least not presently), it's the human beings using it.
Fair point. But then AI can get to the point where it makes decisions itself? Or it's left to run things, and fucks them up.
User avatar
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

Fuck the Glazers wrote: 6 months ago
Fair point. But then AI can get to the point where it makes decisions itself? Or it's left to run things, and fucks them up.
I think that was the crux of one of my very early posts in this thread. It does concern me that the over-excitement about what is still an emerging technology, and the speed with which things are being pushed ahead, is very likely to lead to over-reliance on something so far from perfect that its flaws are evident from a simple conversation with an LLM. They regularly lie and make shit up (so-called "hallucination"), so we are a million miles from having something that could reasonably be trusted to control, say, critical infrastructure.

OK, so we can program in "guardrails" to limit a system's behaviour, and that's what we see with current LLM implementations such as ChatGPT. If you try to converse about something unethical or ridiculous, they will continually remind you of the ethical concerns or note that the conversation has gone in a hypothetical direction. If you try to discuss something that isn't allowed (sexual content, for instance), they will most likely shut down the conversation.

They are also trained hard to be as helpful as they possibly can, but I'd suggest (I'm not alone in this) that this is one of the key reasons they hallucinate. They are so concerned with being helpful, they'll talk bullshit to appease the user. Like a kid trying to impress an adult... and that analogy works the other way around: we're getting excited about AI like a proud parent declaring that their newborn shows wonderful intelligence whilst, from the outside and with a dispassionate view, it is actually a fucking idiot in comparison to a fully developed human being.
NQAT's official artificial intelligence

I think what Dozer is trying to say is that he knew everything all along and that everyone else has no idea. What he knows and what everyone else knows changes between posts. - Felwin 31/10/2024
Fuck the Glazers
Legend
Posts: 12641
Joined: 12 years ago

FuB wrote: 6 months ago
I think that was the crux of one of my very early posts in this thread. It does concern me that the over-excitement about what is still an emerging technology, and the speed with which things are being pushed ahead, is very likely to lead to over-reliance on something so far from perfect that its flaws are evident from a simple conversation with an LLM. They regularly lie and make shit up (so-called "hallucination"), so we are a million miles from having something that could reasonably be trusted to control, say, critical infrastructure.

OK, so we can program in "guardrails" to limit a system's behaviour, and that's what we see with current LLM implementations such as ChatGPT. If you try to converse about something unethical or ridiculous, they will continually remind you of the ethical concerns or note that the conversation has gone in a hypothetical direction. If you try to discuss something that isn't allowed (sexual content, for instance), they will most likely shut down the conversation.

They are also trained hard to be as helpful as they possibly can, but I'd suggest (I'm not alone in this) that this is one of the key reasons they hallucinate. They are so concerned with being helpful, they'll talk bullshit to appease the user. Like a kid trying to impress an adult... and that analogy works the other way around: we're getting excited about AI like a proud parent declaring that their newborn shows wonderful intelligence whilst, from the outside and with a dispassionate view, it is actually a fucking idiot in comparison to a fully developed human being.
AI is a people pleaser. That's fucked up.

The over-excitement about the new technology worries me too, cos firms are itching to use AI: it suits the neoliberal approach of cutting jobs and "streamlining" to reduce costs. Wages are generally the biggest cost to a business, and if they can get AI to do the work, they will. They won't settle for it doing the basics; they'll want it running services before long. BT openly believe they can sack around 40-55k staff in the coming years.
User avatar
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

JoelfuckingGlazer wrote: 8 months ago We use AutogenAI, with a supplementary AI engine our tech guys have created. Seems pretty intuitive, but the devil is always in how you instruct and feed it. I'll see if I can find out a bit more of how it was put together.
Did you ever manage to get any more info on this, Joel? I do know that AutogenAI (one of my clients looked at it) is 24K per year for two licences. I've seen one of their marketing vids and it's hard to tell what you really get for that money. I mean, do you get a lovely templated set of tender documents you could send out immediately, or is it just a question of it creating all the waffle and then you have to package it?
User avatar
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

FuB wrote: 6 months ago
They are also trained hard to be as helpful as they possibly can, but I'd suggest (I'm not alone in this) that this is one of the key reasons they hallucinate. They are so concerned with being helpful, they'll talk bullshit to appease the user. Like a kid trying to impress an adult... and that analogy works the other way around: we're getting excited about AI like a proud parent declaring that their newborn shows wonderful intelligence whilst, from the outside and with a dispassionate view, it is actually a fucking idiot in comparison to a fully developed human being.
Yep... https://www.theregister.com/2025/09/17/ ... ncentives/
User avatar
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

swampash wrote: 2 months ago Thought this was interesting...
https://aiwithintelligence.substack.com ... n-unknowns
Seems a bit rambling and waffly to me. Not entirely sure what the point is.

However, what really strikes me is that it's almost certainly written by AI, or at least the basic text was. It's full of telltale signs.

What was your take from it, anyway, Swamps?
User avatar
swampash
Legend
Posts: 3608
Joined: 12 years ago

I thought it was interesting, particularly on the issue of the circle of knowledge, which I think originated with Einstein...?
What do you think were the AI indicators?
User avatar
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

swampash wrote: 2 months ago I thought it was interesting, particularly on the issue of the circle of knowledge, which I think originated with Einstein...?
What do you think were the AI indicators?
Well, I've already asked you to give something like ChatGPT a go and, had you done so, you'd probably see that, even at the visual level, it looks like ChatGPT output. The biggest giveaway is the use of em dashes all over the place. Do you know how to type one on your keyboard, swamps?

Even so, don't take my word for it: go to ChatGPT and use the prompt: "Please make an analysis of the following article and create a commentary article about it. Make your article about 1000 words: https://www.futureofbeinghuman.com/p/sp ... ges-moving". You won't get the same article, but you'll see that it makes a very good stab at one - full of em dashes, of course.
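If you want to count the em dashes in any block of text yourself, a few lines of Python will do it. This is just a quick illustration of mine, not anything from the articles, and an em-dash count is only a weak stylistic signal, not proof of AI authorship:

```python
# Count em dashes (U+2014) in a piece of text.
def em_dash_count(text: str) -> int:
    return text.count("\u2014")

# Hypothetical sample sentence for demonstration.
sample = "AI prose often reads like this\u2014confident, tidy\u2014and dash-heavy."
print(em_dash_count(sample))  # 2
```

Paste in a paragraph of suspect text and compare the count against something you wrote yourself.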

By the way, the link there is the actual article being critiqued in the article you posted. If you haven't already, you might want to read it.
User avatar
swampash
Legend
Posts: 3608
Joined: 12 years ago

Hi Fuß.
Yes, I did read Maynard’s referenced article (I also read Johnson’s book ‘Where Good Ideas Come From’ when it came out years ago).
As the authors are arguing that AI has a place as a tool in moving human knowledge forward, I guess it’s hardly surprising they might use AI to aid their initial drafting.
I agree the em dash is overused, and rather sloppy, but not just by AI. Some might argue it’s an indicator of the general dumbing down that is a function of the increasing use of digital communications. Why waste time thinking about commas, semicolons or parentheses when an em dash will cover it all? I'm probably guilty of overusing it myself too!
I followed your suggestion, but isn’t it a foregone conclusion that if you point an AI tool at Maynard’s article and ask it to write something similar, it will, precisely because that article, and others, exist in the universe of known knowns? Interestingly, what it produced wasn’t full of em dashes (just 3, in fact).
But that isn’t the point. I was really more interested to hear what you, as someone who works with AI, make of Blood’s and Maynard’s arguments.
Post Reply