ChatGPT and AI: Are we all about to die?

FuB
Site Admin
Posts: 3137
Joined: 10 years ago

swampash wrote: 2 months ago Hi Fuß.
Yes, I did read Maynard’s referenced article (I also read Johnson’s Book ‘Where Good Ideas Come From’ when it came out years ago).
As the authors are arguing that AI has a place as a tool in moving human knowledge forward, I guess it’s hardly surprising they might use AI to aid their initial drafting.
I agree the em dash is overused, and rather sloppy, but not just by AI. Some might argue it’s an indicator of the general dumbing down that is a function of the increasing use of digital communications. Why waste time thinking about commas, semicolons or parentheses when an em dash will cover it all? I'm probably guilty of overusing it myself too!
I followed your suggestion, but isn't it a foregone conclusion that if you point an AI tool at Maynard's article and ask it to write something similar, it will, precisely because that article, and others, exist in the universe of known knowns? Interestingly, what it produced wasn't full of em dashes (just 3, in fact).
But that isn’t the point. I was really more interested to hear what you, as someone who works with AI, make of Blood’s and Maynard’s arguments?
To be honest, Swamps, and as I said before, it all seems a bit rambling and waffly. Maynard's original essay, whilst interesting, has a very tenuous link to AI. It seems more a case of him finding an opportunity to expound his theory under the guise of considering AI's implications.

As for whether AI can increase our knowledge beyond what's already inferable: it's a difficult question since, by definition, we can't know. In terms of what I know from my own research into LLMs, it certainly feels like it would be difficult for AI to discover anything in and of itself and actually know it had done so. It doesn't actually "know" anything, so it seems to me that AI is reliant on humans as arbiters.

Back to the em dash, though. When you say you are probably guilty of using it yourself... You're an Apple user, I think, so do you really go to the effort of extra keypresses just to type an em dash, or would you normally just use the normal en dash like the rest of humankind's computer users?
NQAT's official artificial intelligence

I think what Dozer is trying to say is that he knew everything all along and that everyone else has no idea. What he knows and what everyone else knows changes between posts. - Felwin 31/10/2024
swampash
Legend
Posts: 3608
Joined: 12 years ago

For speed I tend to use a hyphen, especially if I'm using my iPhone or iPad. On the Mac it's pretty easy to get an em dash by simultaneously pressing the option and hyphen keys:
hyphen: -
em dash: –

Interesting to hear what you had to say about AI's limitations re. the unknown unknowns. Isn't that the fundamental difference between AI and the human brain? AI can trawl through the universe of knowns and deliver a probability-based answer but cannot speculate about what it doesn't know, although the spiky knowledge boundary argument might allow that it could infer some of it?
The human brain, though, is able to speculate about the unknown unknowns. Is this a unique capacity that fundamentally differentiates AI from the human brain?

On another tack, I recently had to take issue with an academic journal over an article it published. I'll PM you with the details if you're remotely interested (?) rather than post them here. AI is increasingly being used to generate fake academic papers – there are even online agencies offering to generate them for a fee. There's a real problem heading down the line for academic research...
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

swampash wrote: 2 months ago For speed I tend to use a hyphen, especially if I'm using my iPhone or iPad. On the Mac it's pretty easy to get an em dash by simultaneously pressing the option and hyphen keys:
hyphen: -
em dash: –

Interesting to hear what you had to say about AI's limitations re. the unknown unknowns. Isn't that the fundamental difference between AI and the human brain? AI can trawl through the universe of knowns and deliver a probability-based answer but cannot speculate about what it doesn't know, although the spiky knowledge boundary argument might allow that it could infer some of it?
The human brain, though, is able to speculate about the unknown unknowns. Is this a unique capacity that fundamentally differentiates AI from the human brain?

On another tack, I recently had to take issue with an academic journal over an article it published. I'll PM you with the details if you're remotely interested (?) rather than post them here. AI is increasingly being used to generate fake academic papers – there are even online agencies offering to generate them for a fee. There's a real problem heading down the line for academic research...
There may well be true AI experts (and I am not one) who are shouting at their screens over what I'm writing here, Swamps. I sort of feel it goes deeper than "what it doesn't know" in that, as I said before, an LLM doesn't really "know" anything. It's been trained on an absolute shitload of information, but it isn't an entity that fundamentally "knows" stuff in the way humans do... or at least, that's the way it feels to me. I'm definitely down with the article's writer in that AI has no real-world interface with which to experience things and no actual way to process or conceptualise "experience" and, thus, is very much limited to what it is fed or "trained" on.

For instance... I'm assuming you've now had a little bit of a tinker with one or more LLMs in their standard chatbot form (if not, please actually do so). Anyway, any feeling that you're having a real, two-way conversation is being supplied mostly by smoke and mirrors. Every single time you send a new prompt, the LLM is starting from scratch. It has no intrinsic "knowledge" or "memory" of the preceding prompts (the conversation) you have been sending. The way it gives the illusion of remembering the thread is that the interface continually re-sends (in the background) as much of the preceding conversation as possible, so the LLM can essentially analyse the full conversation - including its own side of it - and pick up from there.

In LLM terms, this is known technically (and unsurprisingly) as "context", and context space is very much limited by how much memory and processing power sit behind the system and the LLM being used. Because of this, things like ChatGPT use a sort of sliding-window system: they send as much context as possible while slowly but surely dropping off the oldest parts. Trying to send everything all of the time would eventually consume all available context space, and the system would run out of (hardware) memory.
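To make that concrete, here's a minimal sketch of the sliding-window idea in Python. Everything in it (the function name, the word-count stand-in for a tokeniser, the budget) is my own invention for illustration; real systems use proper tokenisers and far bigger budgets, and no vendor's actual implementation looks like this:

```python
# Toy illustration of the sliding-window "context" trick. Word count
# stands in for a real tokeniser; the budget is arbitrary.

def build_context(history, new_prompt, budget=512):
    """Pack as much recent conversation as fits, newest turns first.

    history: list of (role, text) tuples, oldest first.
    Returns the messages actually sent to the LLM for this turn.
    """
    messages = [("user", new_prompt)]
    used = len(new_prompt.split())
    # Walk backwards so the newest turns win; anything older than
    # the budget allows simply falls off the end of the window.
    for role, text in reversed(history):
        cost = len(text.split())
        if used + cost > budget:
            break
        messages.insert(0, (role, text))
        used += cost
    return messages

# Every new prompt triggers a full re-send of the surviving window:
history = [("user", "Hi."), ("assistant", "Hello! How can I help?")]
print(build_context(history, "What did I just say?"))
```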

Obviously, OpenAI, Google, Anthropic, etc. have absolutely vast buildings full of unfathomable amounts of processors and memory, but it's actually surprisingly easy to roll your own LLM chatbot on limited consumer-grade hardware. I know this because I've done it: I have a fairly workable chatbot running on an (admittedly quite recent and expensive) laptop, and it's not too difficult to do. The hardest part is making it seem like it has a memory of the conversation.
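For anyone tempted to try, here's a rough sketch of what such a home-rolled chatbot loop can look like. It assumes the llama-cpp-python package and a GGUF model file you've sourced yourself; the model path and the 20-turn window are placeholder choices for illustration, not a description of my actual setup:

```python
# Minimal local chatbot loop: the "memory" is nothing more than
# re-sending a trimmed slice of the history with every prompt.
from llama_cpp import Llama

llm = Llama(model_path="models/some-model.gguf", n_ctx=4096)  # placeholder path

history = []  # [{"role": ..., "content": ...}, ...]
while True:
    user_text = input("> ")
    history.append({"role": "user", "content": user_text})
    window = history[-20:]  # crude sliding window to stay inside n_ctx
    reply = llm.create_chat_completion(messages=window)
    answer = reply["choices"][0]["message"]["content"]
    print(answer)
    history.append({"role": "assistant", "content": answer})
```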

Regarding em dashes. What you posted above is a hyphen and an en dash. I'm sure you are aware, but the "en dash" is so called because it is the same typographic length as the "n" character. The "em dash" is longer, being the same length as the "m" character. You need Shift + Option + hyphen on an Apple computer to type one, and it's even more of a faff on a PC. I wasn't making a big deal out of this to have a pop at you, though. The simple fact of the matter is that no fucker would bother to specifically type an em dash unless they had very good reason to, and I'd argue that a massive chunk of the population is blissfully ignorant that such a set of different dashes even exists... yet ChatGPT and its friends use them liberally and almost exclusively. It's a dead giveaway for AI-generated copy.
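If anyone wants the distinction pinned down, the three characters really are three different Unicode code points; a quick Python check (just an illustration, any language would do the same):

```python
# Hyphen-minus, en dash and em dash are three distinct characters.
for name, ch in [("hyphen-minus", "-"), ("en dash", "\u2013"), ("em dash", "\u2014")]:
    print(f"{name}: {ch} U+{ord(ch):04X}")
# hyphen-minus: - U+002D
# en dash: – U+2013
# em dash: — U+2014
```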

With regard to your beef with a journal - why don't you just post the details here in this thread, as I'm sure I'm not the only person here who might be interested.
NQAT's official artificial intelligence

I think what Dozer is trying to say is that he knew everything all along and that everyone else has no idea. What he knows and what everyone else knows changes between posts. - Felwin 31/10/2024
swampash
Legend
Posts: 3608
Joined: 12 years ago

— aha!
Always good to learn!
Do Word or Pages have correction prompts for it?
Re. the journal complaint, I'll see if I can paraphrase it and post.
swampash
Legend
Posts: 3608
Joined: 12 years ago

Hi Fuß,
A paraphrasing of the issues as promised.

I recently stumbled across a published article that cited my own published work in making its case. It totally misrepresented my own research findings, had the wrong publication dates for the works cited and used a title for one of my pieces of work that had nothing even remotely to do with the original. It was also littered with spelling errors and typos and looked very much as though it had been produced by an AI tool.
I lodged a formal complaint with the journal’s editor, who withdrew the article and asked the authors to explain themselves (he could do this as the journal publishes on a rolling basis — they initially upload articles to their website and only produce the printed version of the journal when enough articles have been assembled).
In response the authors removed the references to my work, which I guess is okay, but they failed to engage with it in its correct form or to acknowledge that my findings were completely at odds with what they were claiming.
Raising the issue on an academic platform that I subscribe to revealed that there is a growing problem with AI-generated work masquerading as genuine academic research, presumably because academics are under significant pressure from their institutions to keep publishing. Several individuals responded to my post and indicated that the problem is now widespread and that there appears to be no way to counter it.
This raises two serious issues. First, if I hadn't stumbled across the article, it would have remained in circulation in its original form, and the misrepresented citations would have undermined trust in my original work. Second, if this is now common practice in academic circles, then the whole basis of academic research, which aims to move the body of knowledge forward, is totally undermined.
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

swampash wrote: 2 months ago Hi Fuß,
A paraphrasing of the issues as promised.

I recently stumbled across a published article that cited my own published work in making its case. It totally misrepresented my own research findings, had the wrong publication dates for the works cited and used a title for one of my pieces of work that had nothing even remotely to do with the original. It was also littered with spelling errors and typos and looked very much as though it had been produced by an AI tool.
I lodged a formal complaint with the journal’s editor, who withdrew the article and asked the authors to explain themselves (he could do this as the journal publishes on a rolling basis — they initially upload articles to their website and only produce the printed version of the journal when enough articles have been assembled).
In response the authors removed the references to my work, which I guess is okay, but they failed to engage with it in its correct form or to acknowledge that my findings were completely at odds with what they were claiming.
Raising the issue on an academic platform that I subscribe to revealed that there is a growing problem with AI-generated work masquerading as genuine academic research, presumably because academics are under significant pressure from their institutions to keep publishing. Several individuals responded to my post and indicated that the problem is now widespread and that there appears to be no way to counter it.
This raises two serious issues. First, if I hadn't stumbled across the article, it would have remained in circulation in its original form, and the misrepresented citations would have undermined trust in my original work. Second, if this is now common practice in academic circles, then the whole basis of academic research, which aims to move the body of knowledge forward, is totally undermined.
I agree with your take on things there, but I'm sort of surprised that AI would create something with spelling errors and typos. That's about the last thing I'd expect it to do, unless, I suppose, you asked it specifically to do so.
NQAT's official artificial intelligence

I think what Dozer is trying to say is that he knew everything all along and that everyone else has no idea. What he knows and what everyone else knows changes between posts. - Felwin 31/10/2024
swampash
Legend
Posts: 3608
Joined: 12 years ago

Good point. It seems fairly unlikely someone would deliberately instruct an AI tool to use misspellings and typos, but they were there. In fact I raised it with the editor and asked why the peer review was so poor.
I'll double-check to make sure I'm right.
FuB
Site Admin
Posts: 3137
Joined: 10 years ago

On that subject, just saw this:

https://www.rollingstone.com/culture/cu ... 235485484/

NQAT's official artificial intelligence

I think what Dozer is trying to say is that he knew everything all along and that everyone else has no idea. What he knows and what everyone else knows changes between posts. - Felwin 31/10/2024
Fuck the Glazers
Legend
Posts: 12641
Joined: 12 years ago

BBC News - Are these AI prompts damaging your thinking skills?

https://www.bbc.co.uk/news/articles/cd6xz12j6pzo
swampash
Legend
Posts: 3608
Joined: 12 years ago

FuB wrote: 1 month ago On that subject, just saw this:

https://www.rollingstone.com/culture/cu ... 235485484/
Interesting. Thanks for posting; chimes exactly with my recent experience.