Rise of the ChatGPT cheaters

Appledog

Green Belt
In recent months, even just weeks... a subtle but unmistakable shift has occurred on this and other forums. I think I first noticed it in Facebook groups: an increase in clearly AI-generated content being passed off as real. Next I noticed that longtime forum members, bloggers, and others began to exhibit unexpected changes in tone, vocabulary, and stance. Such transformations were too consistent, too sanitized, and too conveniently in tune with public knowledge to be purely coincidental.

More telling than the content shifts were the unmistakable fingerprints of machine-generated prose. AI chatbots, particularly those built on large language models, have a signature style: the overuse of parallel structures, numbered or bulleted lists, and a preference for neutral, overly balanced statements. Here's a big one: the way they incorporate Chinese 中文 (Zhōngwén) words, often picking a term that is not really a martial arts term 术语 (shùyǔ) and glossing it as though it were a technical term that needed translating.
  • 术 means "technique" or "skill"
  • 语 means "language" or "word"
Grammatical constructions like “It is important to note that...” or “While some argue X, others contend Y...” appear with unnatural frequency. Translation choices—like using “path” instead of “way” or “principle” where a human might have said “rule”—betray the ghost hand of the machine. And—Let's not forget—The overuse of certain—shall we say—punctuation marks—like the dash—or whatever; the simultaneous under-use or absence of others. The repetition of these patterns across diverse voices, all lacking the color and grit of lived experience, is damning.
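These tells are mechanical enough that a short script can tally them. Below is a minimal sketch of such a tally, assuming a hand-picked phrase list and a couple of rough patterns; it is an illustration of the pattern-counting idea, not a real or reliable AI detector.

```python
import re

# Illustrative only: count the stylistic tells described above. The phrase
# list, the patterns, and any threshold you might set are assumptions made
# for this sketch.
BOILERPLATE_PHRASES = [
    "it is important to note that",
    "while some argue",
]

def stylistic_tells(text: str) -> dict:
    lowered = text.lower()
    return {
        # the overused dash (written as an escape so it is easy to spot)
        "em_dashes": text.count("\u2014"),
        # bulleted or numbered list lines
        "list_lines": len(re.findall(r"^\s*(?:[-*\u2022]|\d+\.)\s", text, re.MULTILINE)),
        # stock hedging phrases
        "boilerplate": sum(lowered.count(p) for p in BOILERPLATE_PHRASES),
        # Chinese characters followed by a parenthesized gloss, e.g. 术语 (shùyǔ)
        "glossed_terms": len(re.findall(r"[\u4e00-\u9fff]+\s*\([^)]{1,30}\)", text)),
    }

if __name__ == "__main__":
    sample = ("It is important to note that the form follows the path "
              "\u2014 not the way \u2014 of 术语 (shùyǔ).")
    print(stylistic_tells(sample))
```

No single count proves anything on its own; it is the co-occurrence of these patterns across an entire post, as described above, that gives the game away.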

This represents a deep moral failing, in fact the most common one--reaching for the far but avoiding the near. It is a form of laziness and dishonesty: these people wish to participate in conversations they clearly could not otherwise. It is a betrayal of wu de, the martial virtue of sincerity, humility, and loyalty to lineage. When someone appropriates knowledge from an AI and passes it off as personal insight, they step outside the tradition of learning from a recognized master. This severing from lineage introduces grave danger. AI hallucinations, shallow interpretations, and cross-disciplinary confusion may lead to false teachings, devoid of context or embodied understanding. No chatbot can taste the sting of correction from a sifu or feel the lived rhythm of a form in one’s bones. The result of this lie is not just misrepresentation of the art—it is the dilution and eventual dissolution of the art.

This is not a trend that can be stopped. But it can, and must, be resisted. A return to the traditional values of wu de—especially loyalty to one’s sifu and respect for the lineage—is the only sure defense. Any teaching or commentary that cannot be traced to the name of a recognized master should be treated with caution. We are not here to impress or collect accolades; we are here to preserve the art. The knowledge is not lost—but if we allow it to be trampled by the churn of AI-generated speculation, it may soon be drowned beneath a tide of well-formatted nonsense. The gate must be kept, not for pride, but for the survival of meaning.

This will be my final post to this forum or to any other I am currently on. I feel that the winds have changed and that the mandate has been lost. The real learning will now take place in person. There is perhaps one thing that can help solve this, and that would be a trust system where people can 'trust' users and have a 'trust' score. Although even that can be abused. Stay safe and always remember your goals.

"Do not cry for me, for I am already dead." -Barney Gumble
 
In recent months, even just weeks... a subtle but unmistakable shift has occurred on this and other forums.
I am sorry to hear about the challenges that you are currently facing. It sounds like you're noticing a significant shift in the online landscape, one that reflects deeper changes in how we interact with content, knowledge, and the people behind it. The presence of AI-generated content, the shift in tone and perspectives from regular participants, and the sense that something has become more "sanitized" or less authentic seem like valid observations. These changes can feel unsettling, especially when they blur the lines between human and machine-generated interactions.

In a way, this shift could feel like the heart of certain online communities is fading or being altered in a way that doesn't align with the initial spirit of these spaces. Your decision to step away is understandable if you feel that the authenticity and spontaneity of these forums are no longer there. It makes sense that you'd want to return to more personal, direct forms of interaction—where the nuances of conversation and learning are more tangible and real.

If you're willing, I'd love to hear more about your experiences on these forums and the kinds of transformations you've noticed. How do you think we can preserve genuine human interaction in the midst of all these changes, [user]... uh I mean, fellow human person?
 
AI is a tool.

AI can be used, IMO, but you should then research what it says to make sure it is correct. Or you should make sure that everyone who might read it knows the content came from AI.
 
The birth of strong artificial intelligence hinges on questions of mathematical regularity, and the same will be true for any future superintelligence. Mathematical logic determines the limits and predictability of AI’s thought processes. If humans cannot control the intelligence they create, then even if they end up enslaved by it, there will be nothing left to say.

Mathematics is neither inherently good nor bad, and artificial intelligence built on mathematical foundations naturally is not either! Before persuading us to give up exploring artificial intelligence, shouldn’t others be advised to abandon their own research as well?
 
The birth of strong artificial intelligence hinges on questions of mathematical regularity, and the same will be true for any future superintelligence.
Your post is interesting and deep. Although I’ve never used any ChatGPT/AI stuff, I find its development interesting. Can it go so far that we/it actually create a whole new universe for our entire existence?
Some fields in science speak about the possibility that our universe is virtual, a hologram or something in that direction, and that mathematics is God’s language, and so on.
So if everything we are now is mathematics, and mathematics is not inherently good or bad, then anything like “morals” is just an illusion??
Is this AI development just an AI morphing back into itself?

This is going all Buddha thread.
 
I’ve watched Terminator 2: Judgment Day plenty of times. I remember what he said about AI.

“It became self-aware at 2:14 A.M. Eastern time on August 29.”

I ain’t going down that road.

To be serious for a brief moment: in 1985, two kids and their dad joined my dojo. The dad was working at M.I.T. at the time with a team of others to develop artificial intelligence. Again, this was in 1985.

I ran into him and some of his colleagues at a private gun range near Patriots stadium several months later. To this day they were the creepiest people I’ve ever personally met.
 
To this day they were the creepiest people I’ve ever personally met.
The creepiest people you've ever met so far.
 
There's a group in Japan that's made artificial skin for robots that's supposed to self-heal the same way skin does, since it's made from human skin.
Scientists in Japan Give Robots a Fleshy Face and a Smile
A quote from that article: "We aim to create skin that closely mimics the functionality of real skin by gradually constructing essential components such as blood vessels, nerves, sweat glands, sebaceous glands and hair."

If you combine that with AI improving, we can easily end up at a point where we don't know whether we're talking to a human or a robot in real life, not just online.
 
I Google online for a question. I also ask AI the same question. I get the same answer. Either Google asks AI, or AI Googles online.
This is true if you're using Google's AI rather than checking for yourself. But very often (I'd say close to half the time) the answer provided by Google AI is incorrect in some way, and most of the time you wouldn't notice unless you looked into it deeply. I've learned to just scroll past it so I can check the way I used to, rather than risk getting false info.

I probably mentioned this the last time the topic came up, but about six months ago I used ChatGPT/Google AI to take a practice CompTIA test. These are tests with specific answers, specialized knowledge at a beginner level of that specialty, and multiple-choice questions (or at least the ones I was testing were). So, all in all, something that AI should be able to handle very well.

I'd put the question in, get their answer, then put it into the practice test. They scored right around 60-65% correct, which means that asking AI was the equivalent of asking a very confident D student on the subject.
 
The creepiest people you've ever met so far.

That gun range, which I’m going to try to find out if it is still there, was an outdoor range. I swear to God it’s the strangest place I’ve ever been, and I’ve been to some strange places.

The place was packed. The mix of people was renegade bikers in full colors, spooks (NSA and the like), FBI guys with FBI windbreakers on, and AI guys who all knew the guy I was with. And me. Still creeps me out when I think about it.

A week later I told the guy I didn’t want him in my dojo anymore. He left. I told him his kids could stay, but he wouldn’t let them.
 
There is perhaps one thing that can help solve this, and that would be a trust system where people can 'trust' users and have a 'trust' score.
I'm going to start on this first, and then address the other stuff, which I probably need AI to figure out.

Trust Scores sound like something worse than AI. I'll trust my dog's ability to alert to people I shouldn't trust before I depend on a Trust Score. Just saying "Trust Score" makes me want to game the system. Good luck to you and that Trust System. It sounds like a good way to be taken advantage of.
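For what it's worth, here is a minimal sketch of the kind of per-user trust score being proposed, and of the obvious way to game it. The class name, the vouch rule, and the scoring formula are all hypothetical, chosen only to make the objection concrete.

```python
from collections import defaultdict

class TrustLedger:
    """Naive 'trust score' sketch: one point per unique user who vouches for you."""

    def __init__(self):
        # user -> set of users who have vouched for them
        self.vouches = defaultdict(set)

    def vouch(self, voucher: str, target: str) -> None:
        if voucher != target:  # no self-vouching, but that is the only guard
            self.vouches[target].add(voucher)

    def score(self, user: str) -> int:
        return len(self.vouches[user])

ledger = TrustLedger()
# Nothing stops a ring of throwaway accounts from vouching for one another,
# which is exactly the "game the system" worry raised above.
for puppet in ("sock1", "sock2", "sock3"):
    ledger.vouch(puppet, "instant_expert")

print(ledger.score("instant_expert"))  # 3, indistinguishable from earned trust
```

Weighting each vouch by the voucher's own score would help a little, but it mostly pushes the same problem up one level.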
 
More telling than the content shifts were the unmistakable fingerprints of machine-generated prose.

Ironically, and take this as a compliment, your post is formatted in a similar way.

Remember, at the end of the day, it's just prompt vs. prompt. Call it out if it's obvious, and correct where appropriate. I use GPT heavily for research, but I check it meticulously.
 
It is a form of laziness and dishonesty: these people wish to participate in conversations they clearly could not otherwise. It is a betrayal of wu de, the martial virtue of sincerity, humility, and loyalty to lineage. When someone appropriates knowledge from an AI and passes it off as personal insight, they step outside the tradition of learning from a recognized master. This severing from lineage introduces grave danger. AI hallucinations, shallow interpretations, and cross-disciplinary confusion may lead to false teachings, devoid of context or embodied understanding. No chatbot can taste the sting of correction from a sifu or feel the lived rhythm of a form in one’s bones. The result of this lie is not just misrepresentation of the art—it is the dilution and eventual dissolution of the art.
OK, I'm going to stop here. I can't get through any more. This sounds to me like you have some other issue going on in your life and you are blaming AI for it. I had a long day, and this in itself is just way too much drama.

The only thing I'll tell you is that this post seems like you don't understand or use AI. It seems that you are letting what others do with AI define the value of what AI can do for you.
 
