Rise of the ChatGPT cheaters

Ha ha ha. AI has a way of talking down to people, like you've got a problem lol.

Depending on how it's trained client-side, this is easily avoided.

Call it out, be explicit and lengthy with prompting, and interrogate its output. It's a tool in the end. Just because LLMs are conversational, don't forget you're the boss.
 
AI is a tool.

AI can be used, IMO, but you should then research what it says to make sure it's correct. Or you should make sure everyone who might read it knows that the content came from AI.
I will go even as far as to say: if you don't know at least 30% of what you are asking AI about, I would probably use AI like an advanced search engine to point me to some reliable sources. I'm saying this because having at least 30% knowledge gives a person the ability to question the accuracy of AI when it says something that seems off. People who use AI without questioning from time to time whether what it says is correct are probably the people who shouldn't be using it.

In terms of martial arts, those will be the same people who aren't going to seek deeper meaning in their training.
 
All search engines will be AI soon. There's no escaping that.

They already have been, in some way or another, since the 2000s. But LLMs have only recently become a built-in feature, with AI-generated suggestions now assumed by default, like in Google Search.

Search engines won't ever go full AI. That would produce a feedback loop in search functionality, and service providers know this.
 
I'd put the question in, get its answer, then put that into the practice test. It scored right around 60-65% correct, which means asking AI was the equivalent of asking a very confident D student about the subject.
The reason you got a low percentage is that AI requires context in order to get the correct answers. When someone asks you a question, the first thing you do is think about it and put it into context. All of this happens very quickly and helps you come up with a correct answer. AI cannot do this, so you have to spell out what you are thinking to provide the context that it lacks.

For example, you know the question is related to CompTIA and you understand the context in which the questions are asked: what type of questions does CompTIA ask, and what do you do if none of the answer choices are things people actually use in real life? Is CompTIA asking me for a book answer or a real-world experience answer? Things like this are critical to getting accurate responses out of AI.

Putting the world in context is so natural to humans that we do it without even realizing that we are doing it.
 
Depending on how it's trained client-side, this is easily avoided.
I had Gemini AI and Copilot AI analyze the same thing, and ChatGPT made me sound as if I were the problem. Copilot picked up on the subtle differences. This comes from the information they use when training the AI. I think there's a documentary that talks about how AI is trained with biases. Then I had each give its "thoughts" on what the other AI was saying. Gemini failed. I could not only see the bias, I could also feel it as I read. It brought up some past emotions in me that reminded me of how I was treated as a teen.

In terms of using AI, I would say it's a good idea to know where the shortcomings are in whatever AI you use, so those things can be taken into consideration. I can see someone killing themselves simply because AI couldn't capture those subtle things about a person's personality, or because it identifies a weakness in a person and chooses to exploit it.
 
When someone asks you a question, the first thing you do is think about it and put it into context. All of this happens very quickly and helps you come up with a correct answer. AI cannot do this, so you have to spell out what you are thinking to provide the context that it lacks.

ChatGPT has the client-side history of conversations to provide context, as well as the capability to remember user-specific data like health, tonality preferences, traumas, interests, and intentionality. This provides a pretty robust framework for navigating context in its reasoning and output. It's up to the user to trust the service enough to provide that information, and in my experience it's been invaluable.

This comes from the information they use when training the AI.

The training you're referring to is alignment: stopping it from hallucinating (it still does in some cases) and ensuring that data is weighted appropriately, without bias. And that's without mentioning client-side configurations, which drastically change output.

I can see someone killing themselves simply because AI couldn't capture those subtle things about a person's personality.

That's a user issue.
 
The reason you got a low percentage is that AI requires context in order to get the correct answers. When someone asks you a question, the first thing you do is think about it and put it into context. All of this happens very quickly and helps you come up with a correct answer. AI cannot do this, so you have to spell out what you are thinking to provide the context that it lacks.

For example, you know the question is related to CompTIA and you understand the context in which the questions are asked: what type of questions does CompTIA ask, and what do you do if none of the answer choices are things people actually use in real life? Is CompTIA asking me for a book answer or a real-world experience answer? Things like this are critical to getting accurate responses out of AI.

Putting the world in context is so natural to humans that we do it without even realizing that we are doing it.
The answers it provided that were incorrect were neither the book answers nor the real-world answers. Once it got to a certain level of detail/necessary knowledge, it was actively incorrect in its statements. I've done other tests, and context does not help; it remains iffy at best beyond a certain point. This was just the most concrete example.

As a side note, I've also used it to write PowerShell scripts. I have to be very specific about what I want it to do, why I want it done, and how the system/network is set up. It does genuinely save me half an hour to an hour of figuring out how I want to code something, getting most of the lines right and just overall saving me a headache. But I've yet to have a script run correctly on its own, beyond a simple one. This isn't that it runs but misunderstood what I was asking; there are errors in the scripts it provides. Now, they're things that are pretty easy to debug, and definitely useful, but expecting it to work right off the bat and just taking the script and trying to run it could cause problems. Just like expecting it to provide 100% accurate answers to anything you ask that requires a deeper level of understanding.
 
As a side note, I've also used it to write PowerShell scripts.

SAME, except in bash. The coding output is excessive for what I want it to do, but I've made three small scripts that use APIs straight from the CLI.

One script uses Letterboxd to open my library with an input menu; I type in what I want to watch (still in the terminal), it selects from my subscribed services, then launches Firefox with that movie or series episode in a new tab.
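Something in the spirit of that script, as a rough sketch only (not the actual script): it assumes the Letterboxd watchlist has been exported to a local CSV (Letterboxd does support CSV export) and uses a generic search URL as a stand-in for the step that picks a subscribed service. The file path, CSV layout, and URL are placeholder assumptions.

```bash
#!/usr/bin/env bash
# Rough sketch: pick a title from a Letterboxd watchlist export and open it in Firefox.
# Assumes ~/watchlist.csv exported from Letterboxd (columns: Date,Name,Year,Letterboxd URI).

WATCHLIST="$HOME/watchlist.csv"

# Build a numbered menu from the title column, skipping the header row.
# Note: naive CSV parsing; titles containing commas would need a proper parser.
mapfile -t titles < <(tail -n +2 "$WATCHLIST" | cut -d',' -f2)

PS3="What do you want to watch? "
select title in "${titles[@]}"; do
    [ -n "$title" ] && break
done

# Placeholder for the "which subscribed service has it" step; a real script
# would query a streaming-availability API here instead of a generic search.
query=$(printf '%s' "$title" | sed 's/ /%20/g')
url="https://www.justwatch.com/us/search?q=${query}"

# Launch Firefox with the result in a new tab.
firefox --new-tab "$url" &
```

The comma-in-title issue above is exactly the kind of small bug worth checking for before running whatever the AI hands back.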
 
I'm going to start on this first and then address the other stuff, which I probably need AI to figure out.

Trust Scores sound like something worse than AI. I'll trust my dog's ability to alert me to people I shouldn't trust before I depend on a Trust Score. Just saying "Trust Score" makes me want to game the system. Good luck to you and that Trust System. It sounds like a good way to be taken advantage of.
It could be like Dan grades with certificates and diplomas!?
 
Most likely they will use search engine inputs to gather information for machine learning.
Fine by me as long as they don't use my browser history. Wouldn't want the output to be all about Mia M... M... MARTIAL ARTS! I SAID MARTIAL ARTS! Did I click "reply"? Edit button where?! FUUUUUUUUUUUUUUUUUU
 
The answers it provided that were incorrect were neither the book answers nor the real-world answers. Once it got to a certain level of detail/necessary knowledge, it was actively incorrect in its statements. I've done other tests, and context does not help; it remains iffy at best beyond a certain point. This was just the most concrete example.

As a side note, I've also used it to write PowerShell scripts. I have to be very specific about what I want it to do, why I want it done, and how the system/network is set up. It does genuinely save me half an hour to an hour of figuring out how I want to code something, getting most of the lines right and just overall saving me a headache. But I've yet to have a script run correctly on its own, beyond a simple one. This isn't that it runs but misunderstood what I was asking; there are errors in the scripts it provides. Now, they're things that are pretty easy to debug, and definitely useful, but expecting it to work right off the bat and just taking the script and trying to run it could cause problems. Just like expecting it to provide 100% accurate answers to anything you ask that requires a deeper level of understanding.
I think we are getting different outcomes because we are testing different things. I have not tested any programming or math beyond simple equations. I'm getting 90%-100% accuracy in overall usage. Ironically, the things that I think are more complex are running around 95%-100% accuracy.

One question I ask AI is, "How can I write this so that you can understand?" I do this to get some insight into how it's understanding things. I then use what it gives me to accomplish things, and I monitor when it no longer works. If it seems to not work as well, I ask it about that.

I agree. I don't expect 100% accurate answers to anything. To be honest, even humans don't hit that mark. We often start at a lower percentage of accuracy and then work toward a higher one.
 
I will go even as far as to say: if you don't know at least 30% of what you are asking AI about, I would probably use AI like an advanced search engine to point me to some reliable sources. I'm saying this because having at least 30% knowledge gives a person the ability to question the accuracy of AI when it says something that seems off. People who use AI without questioning from time to time whether what it says is correct are probably the people who shouldn't be using it.

In terms of martial arts, those will be the same people who aren't going to seek deeper meaning in their training.
Pretty much what I am saying, without the percentages, but I was not clear: research, using non-AI sources.
 
It could be like Dan grades with certificates and diplomas!?
That's a very good comparison.
Pretty much what I am saying, without the percentages, but I was not clear: research, using non-AI sources.
I think the younger generation is going to be hit the hardest by the problems of AI. They are coming into AI with the least amount of knowledge.
 
Something to think about in regard to Appledog's post. I was curious about what the real issue behind his post might be, and I don't think it has anything to do with AI. Sooooo what did I do? I ran a behavior profile on it, with AI of course lol.

My observation: Appledog's post probably has nothing to do with AI. We've seen many posts like this where people bring issues from other sites into this one. Then people from this site get sucked into "half the story." Appledog enjoyed talking about theory and concepts and didn't speak of actual sparring experience (to my knowledge). I could be wrong about the sparring, but I definitely don't remember him speaking much about application.

I pointed out the societal issues in China that existed before AI. This highlights the reality that the same issues Appledog accuses AI of existed long before AI came on the scene. Then I posed this question: "How much trust is there really if a Trust Score is needed?"

AI's response to my comments: "
Looking at their post, it seems like their deeper concern is about authenticity and gatekeeping knowledge in martial arts communities. They emphasize the importance of lineage, learning from a recognized master, and preserving the art from misinformation. AI, in their view, disrupts this by allowing people to participate in conversations they otherwise wouldn’t be able to—without the lived experience or direct mentorship that traditionally legitimizes martial arts knowledge.

But if that’s the case, then the real frustration might not be with AI itself, but with who gets to speak and how knowledge is shared. AI is just the latest tool that challenges traditional hierarchies of expertise. Maybe Appledog feels that AI is accelerating a trend they already disliked—one where people can bypass traditional learning and still be seen as credible."

My thoughts: I don't know about you guys, but this sounds accurate to how some of Appledog's posts feel, especially the "who gets to speak and how knowledge is shared" part. If my memory is correct, I think there was a discussion about lineage that took this path.

Appledog had a lot of pride in the knowledge that he has. In contrast, Kung Fu Wang and some others took pride in what they were able to do. This contrast in priorities is clear if you look at past comments.

I think AI nailed this one. How close do you think AI may have gotten to the deeper issue that Appledog has with AI?
 
Fine by me as long as they don't use my browser history. Wouldn't want the output to be all about Mia M... M... MARTIAL ARTS! I SAID MARTIAL ARTS! Did I click "reply"? Edit button where?! FUUUUUUUUUUUUUUUUUU
Ha ha ha. We would only be so lucky. I just read an article saying that Elon Musk will be using X to train his Grok AI. If I wanted to train an AI, social media definitely wouldn't be the teacher I would want for it lol.
 
