AI Tips and Tricks with a Dash of Martial Arts

@O'Malley
I think AI will "improve us." It is the only tool available where the more accurate the data we feed it, the more reliable the results will be.

I think the desire to have reliable information will outweigh the temptation to lie to AI. AI that is inaccurate is useless. Companies with inaccurate AI platforms will be the first to fade away.
 
TECH DYSTOPIA: Google Drops Pledge Not To Use AI for Weapons or Mass Surveillance Systems

So much for worrying about what AI from other countries can do when it's used in the US 😂

[Image: military-ai-robot-by-grok-22.jpg]


Feeling lucky today? Well, do ya, punk?

"Google has announced it has made ‘updates’ in their AI Principles – now, all the previous vows not to use AI for weapons and surveillance are gone." 😳
 
That was going to happen. Anyone who has played a video game where the computer can automatically target enemies already knew this was coming. Call of Duty sentry guns in multiplayer? Yep. Easily. If I knew how to do robotics and AI, I would have already created this guy to take out the chipmunks in my yard. I would need AI so it can learn and analyze chipmunk movement. lol.



But seriously, anyone who didn't think AI would be part of a weapon system was just fooling themselves.
I'm an anime fan, and this is screaming turret and AI.

Comparison

Yeah. China just raised the stakes. Just know it's coming. I don't know about you guys, but I'm going to invest in glue traps and string traps. lol.

I've yet to see a robot navigate things it can get tangled in or things that can stick all over it lol. I figure glue traps and string traps are the unexpected element, the thing they don't train robots on.
 
Just so there is no confusion, my statement is not a us-vs-them statement. I was just highlighting the speed at which technology is increasing and how easy it is to, say, give it an AI brain. The military leading the development of technology is the norm. As for Google's pledge, its Terms and Agreement say they can change their terms whenever they want. Having a government contract to add AI is a good business decision.

I'm more concerned about what criminals will do with AI than what countries will do with AI.
 
I uploaded sequence images of me punching the heavy bag. I was curious whether it could tell if I was punching the bag hard. It analyzed the images and gave formulas for power, velocity, and force.

Unfortunately, I couldn't get the AI to guess.
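
For anyone curious, here's a minimal sketch of the kind of velocity/force math this boils down to. All the numbers below (frame rate, fist positions, effective fist mass, contact time) are made-up placeholders, not measurements from my actual footage:

```python
# Estimating punch speed and force from sequence images.
# All values here are hypothetical placeholder numbers.

FPS = 240                 # assumed high-speed camera frame rate
FIST_MASS_KG = 3.0        # assumed effective striking mass (fist + part of arm)

# Horizontal fist position (metres) in consecutive frames (made-up data).
positions_m = [0.00, 0.01, 0.03, 0.06, 0.10]

dt = 1.0 / FPS

# Velocity between consecutive frames: v = dx / dt
velocities = [(b - a) / dt for a, b in zip(positions_m, positions_m[1:])]
peak_velocity = max(velocities)

# Average impact force: F = m * dv / dt_impact
impact_time_s = 0.010     # assumed contact time with the bag
force_n = FIST_MASS_KG * peak_velocity / impact_time_s

print(f"peak velocity: {peak_velocity:.1f} m/s")      # 9.6 m/s
print(f"estimated impact force: {force_n:.0f} N")     # 2880 N
```

The catch, of course, is getting real positions and timing out of the images, which is exactly the part the AI struggled with.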
 
Exactly. That's the problem that AI will run into all the time with martial arts: either AI doesn't understand body mechanics, or it doesn't understand martial arts or the terms, or all of the above.

I'll ask it again. Then I'll ask it to use its references on body mechanics (if that's the correct term) to see if that works. Maybe I have to tell it what to use in order to get an accurate answer?

That's where multi-shot prompting comes in, and the data AI has to draw upon to accurately answer anything within the context of martial arts is scarce.
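
Multi-shot (few-shot) prompting just means packing worked examples into the message history before the real question, so the model has in-context martial-arts data to imitate. A rough sketch of how such a request might be assembled (the example Q/A pairs are illustrative, not from a real dataset, and no API call is made here):

```python
# Few-shot prompting: prepend worked examples to the chat history.
# The example answers below are made up for illustration.

few_shot_examples = [
    ("Describe the body mechanics of a rear straight punch.",
     "Drive from the rear foot, rotate the hips and shoulders, and extend "
     "the arm so the fist arrives as the hip rotation peaks."),
    ("Describe the body mechanics of a lead hook.",
     "Pivot on the lead foot, turn the hips, and keep the elbow bent so "
     "the torso rotation carries the fist."),
]

def build_messages(question: str) -> list[dict]:
    """Assemble a chat request with few-shot examples before the real question."""
    messages = [{"role": "system",
                 "content": "You are a martial arts biomechanics coach."}]
    for q, a in few_shot_examples:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages("Describe the body mechanics of a roundhouse kick.")
print(len(msgs))  # 6: system + two example Q/A pairs + the real question
```

The idea is that a couple of good examples in the prompt can partially make up for the scarce training data.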

The problem I noticed is that ChatGPT seems to have a limited "memory".

It remembers specifics on how you want to instruct it, but it is significantly limited in compute power and reasoning without outsourcing certain processes to Python code, printing out scripts and encouraging you to run them yourself.


Avoid, avoid, avoid. It's a compromised LLM built by the Chinese government. I don't care how fast or efficient it appears; I can't get over the fact that it more than likely records and shares information from the user maliciously. It was recently banned in Australia for those working under the Australian government, and I think this policy extends to other Commonwealth nations.

It may be open source, but the methods of alignment and data gathering are completely unknown. You can create a fork of it and run it locally, but that will not change the fact that it has been trained very differently and uses the device's resources differently from other LLMs. It has backdoor vulnerabilities and kernel-level capabilities. Why would any LLM need either?
 
I just read that another AI platform was created in less time than DeepSeek, and for $50. It's said to have more accurate responses as well. They noticed that the more time the AI is given to think, the more accurate it will be. They say the extra time gave it enough room to double-check itself, check any assumptions, and catch any hallucination it comes up with. Researchers are working to make AI more accurate, and they used the same techniques DeepSeek used to train the AI.

I'm starting to guess that many of the billionaires who sold their Nvidia stock early knew this was coming. The company that is first to have the most reliable AI will be the king of the hill.
 
I just read that another AI platform was created in less time than DeepSeek, and for $50.

It wasn't forked from anything either? I really doubt that but would be open to reading about it!

the more time the AI is given to think, the more accurate it will be. They say the extra time gave it enough room to double-check itself, check any assumptions, and catch any hallucination it comes up with

Yeah, that's compute power/time, which is currently available in the 4o GPT model (you can even ask it to show its reasoning, which is interesting). I think the base paid plan is limited to under 50 prompts a day with that model, though. The problem with hallucination-checking is that the AI doesn't know when it's hallucinating, and detecting that requires huge retraining.
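
One cheap way to see the "more thinking time helps" effect is self-consistency: sample the same question several times and take the majority answer, so a single hallucinated reply gets outvoted. A toy sketch with a stand-in model (not a real LLM call, and the 70% accuracy figure is invented for the demo):

```python
# Self-consistency: trade extra compute (more samples) for fewer slips.
import random
from collections import Counter

def noisy_model(question: str, rng: random.Random) -> str:
    """Stand-in for an LLM: right 70% of the time, hallucinates otherwise."""
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 100))

def self_consistent_answer(question: str, samples: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    answers = [noisy_model(question, rng) for _ in range(samples)]
    # Majority vote: more samples means more compute but fewer lone hallucinations.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?", samples=15))
```

It doesn't make the model *know* it's hallucinating, but it does wash out one-off mistakes at the cost of running the query many times.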
 

Wow, you're right. It was a research project at Stanford using a fork of Google's early Gemini 2.0 LLM:


Fascinating stuff.

Code:
Step instructions are more costly than other methods. The step delimiters require around 6 tokens each, which for e.g. 64 steps adds up to a total of around 380 tokens. When ignoring the step delimiters in token counts as in Table 12, the model still requires 7551 thinking tokens on average to achieve only 33.3% on AIME24.

That's a LOT of tokens for computing.

"7,551 tokens for a 33.3% accuracy rate is inefficient and suggests that the AI model in question needs major improvements in reasoning efficiency. Ideally, AI should be able to solve reasoning problems with fewer tokens and much higher accuracy to be considered practical."

So it comes down to resource usage and patience in the end. I don't know where the $50 came from, but the power consumption is enormous.
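
The quoted numbers check out with quick arithmetic (the per-correct-answer figure at the end is my own back-of-envelope extrapolation, not from the paper):

```python
# Sanity-checking the paper excerpt's token numbers.

# Step delimiters: ~6 tokens each over 64 steps.
delimiter_tokens = 6 * 64
print(delimiter_tokens)  # 384, i.e. "around 380 tokens"

# 7,551 thinking tokens per attempt at 33.3% accuracy on AIME24 implies
# the expected token cost per *correct* answer is roughly tripled:
tokens_per_attempt = 7551
accuracy = 0.333
tokens_per_correct = tokens_per_attempt / accuracy
print(round(tokens_per_correct))  # 22676 thinking tokens per solved problem
```

That's the patience and power-consumption story in two lines of division.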

EDIT:
So it's a specialised logical-reasoning model, best used for maths and problem solving.
It doesn't appear to specialise in chatbot conversational contexts, hence the disregard for token usage.

It doesn't have image processing, file handling, or sympathetic training. That's what sets it apart.

I might start running it...
 
