> If you're willing, I'd love to hear more about your experiences on these forums and the kinds of transformations you've noticed.

Ha ha ha. AI has a way of talking down to people, as if you've got a problem, lol.
I will go even as far as to say: if you don't know at least 30% of what you are asking AI about, I would use AI like an advanced search engine, to point me to some reliable sources. My point is that having at least 30% of the knowledge gives a person the ability to question the accuracy of AI when it says something that seems off. People who use AI without questioning, from time to time, whether what it says is correct are probably the people who shouldn't be using it. AI is a tool.
AI can be used, IMO, but you should then research what it says to make sure it is correct. Or, at the very least, you should make sure everyone who might read it knows that the content came from AI.
That brown stuff that comes out the other end is dirty oil.

Wait... maybe I'm not real! Maybe I AM a bot... but... all of a sudden... I feel so... sentient... conscious...
> I Google a question online. I also ask AI the same question. I get the same answer. Either Google asks AI, or AI Googles online.

All search engines will be AI soon. There's no escaping that.
> I'd put the question in, get their answer, then put it into the practice test. They scored right around 60-65% correct, which means that asking AI was the equivalent of asking a very confident D student on the subject.

The reason you got a low percentage is that AI requires context in order to get to the correct answers. When someone asks you a question, the first thing you do is think about it and put it into context. All of this happens very quickly, and it helps you come up with a correct answer. AI cannot do this, so you have to spell out what you are thinking in order to provide the context it lacks.
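The practice-test scoring procedure described above (feed each question to the AI, check its answer against the test's answer key, compute the percentage correct) can be sketched as a short script. The questions, answers, and function name here are made up for illustration; they are not from the original posts.

```python
# Hypothetical sketch of scoring AI answers against a practice-test answer key.

def score(ai_answers: dict, answer_key: dict) -> float:
    """Return the percentage of questions the AI answered correctly."""
    correct = sum(1 for q, ans in answer_key.items() if ai_answers.get(q) == ans)
    return 100.0 * correct / len(answer_key)

# Made-up example data: the AI gets 3 of 5 multiple-choice questions right.
answer_key = {"q1": "B", "q2": "D", "q3": "A", "q4": "C", "q5": "B"}
ai_answers = {"q1": "B", "q2": "A", "q3": "A", "q4": "C", "q5": "D"}

print(f"{score(ai_answers, answer_key):.0f}% correct")  # prints "60% correct"
```

With a larger, real question set, the same tally is what produces figures like the 60-65% quoted above.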
> Depending on how it's trained client-side, this is easily avoided.

I had Gemini and Copilot analyze the same thing, and ChatGPT made me sound as if I were the problem. Copilot picked up on the subtle differences. This comes from the information used when training the AI; I think there's a documentary that talks about how AI is trained with biases. Then I had each give its "thoughts" on what the other AI was saying. Gemini failed. I could not only see the bias, I could feel it as I read. It brought up past emotions that reminded me of how I was treated as a teen.
> Search engines won't go full AI, ever. That would create a feedback loop in search functionality, and service providers know this.

Most likely they will use search-engine inputs to gather information for machine learning.
I can see someone killing themselves simply because AI couldn't capture those subtle things about a person's personality.
> The reason you got a low percentage is that AI requires context in order to get to the correct answers. ...

The answers it provided that were incorrect were neither the book answers nor the real-world answers. Once it gets to a specific level of detail and necessary knowledge, it was actively incorrect in its statements. I've done other tests, and context does not help; it still remains iffy at best beyond a certain point. This was just the most concrete example.
For example, you know the question is related to CompTIA, and you understand the context in which the questions are asked: what type of questions does CompTIA ask, and what do you do if none of the answer choices are what people actually use in real life? Is CompTIA asking me for a book answer, or for a real-world-experience answer? Things like this are critical to getting accurate responses out of AI.
Putting the world in context is so natural to humans that we do it without even realizing that we are doing it.
> I'm going to start on this first, and then address the other stuff, which I probably need AI to figure out.

It could be like Dan grades, with certificates and diplomas!?
Trust Scores sound like something worse than AI. I'll trust my dog's ability to alert me to people I shouldn't trust before I depend on a Trust Score. Just saying "Trust Score" makes me want to game the system. Good luck to you and that Trust System; it sounds like a good way to be taken advantage of.
> Most likely they will use search-engine inputs to gather information for machine learning.

Fine by me, as long as they don't use my browser history. Wouldn't want the output to be all about Mia M... M... MARTIAL ARTS! I SAID MARTIAL ARTS! Did I click "reply"? Where's the edit button?! FUUUUUUUUUUUUUUUUUU
> The answers it provided that were incorrect were neither the book answers nor the real-world answers. ...

I think we are getting different outcomes because we are testing different things. I have not tested any programming, or math beyond simple equations. I'm getting 90%-100% accuracy in overall usage. Ironically, the things that I would consider more complex are running around 95%-100% accuracy.
As a side note, I've also used it to write PowerShell scripts. I have to be very specific about what I want a script to do, why I want it done, and how the system/network is set up. It genuinely saves me half an hour to an hour of figuring out how I want to code something, getting most of the lines right and overall sparing me a headache. But beyond a simple script, I've yet to have one run correctly on its own. It isn't that the script runs but misunderstood what I was asking; there are errors in the scripts it provides. They're pretty easy to debug, and definitely useful, but expecting a script to work right off the bat and running it as-is could cause real problems. Just like expecting AI to provide 100% accurate answers to anything that requires a deeper level of understanding.
> I will go even as far as to say: if you don't know at least 30% of what you are asking AI about, I would use AI like an advanced search engine, to point me to some reliable sources. ...

Pretty much what I am saying, without the percentages, but I was not clear: research, using non-AI sources.
In terms of martial arts, those will be the same people who aren't going to seek deeper meaning in their training.
> It could be like Dan grades, with certificates and diplomas!?

That's a very good comparison.
> Pretty much what I am saying, without the percentages, but I was not clear: research, using non-AI sources.

I think the younger generation is going to be hit hardest by the problems of AI. They are coming into AI with the least amount of knowledge.
> Fine by me, as long as they don't use my browser history. ...

Ha ha ha. We should be so lucky. I just read an article saying Elon Musk will be using X to train his Grok AI. If I wanted to train an AI, social media is definitely not the teacher I would pick, lol.