AI Tips and Tricks with a Dash of Martial Arts

@O'Malley
I think AI will "improve us." It is the only tool available where the more accurate the data we feed it, the more reliable the results will be.

I think the desire to have reliable information will outweigh the temptation to lie to AI. AI that is inaccurate is useless. Companies with inaccurate AI platforms will be the first to fade away.
 
TECH DYSTOPIA: Google Drops Pledge Not To Use AI for Weapons or Mass Surveillance Systems

So much for worrying about what AI from other countries could do in the US 😂

military-ai-robot-by-grok-22.jpg


Feeling lucky today? Well, do ya, punk?

"Google has announced it has made ‘updates’ in their AI Principles – now, all the previous vows not to use AI for weapons and surveillance are gone." 😳
 
That was going to happen. Anyone who has played a video game where you can have the computer automatically target enemies already knew this was coming. Call of Duty sentry guns in multiplayer? Yep. Easily. If I knew how to do robotics and AI, I would have already built this guy to take out the chipmunks in my yard. I would need AI so it can learn and analyze chipmunk movement. lol.



But seriously, anyone who didn't think AI would be part of a weapon system was just fooling themselves.
I'm an anime fan, and this is screaming turret and AI.

Comparison

Yeah, China just raised the stakes. Just know it's coming. I don't know about you guys, but I'm going to invest in glue traps and string traps. lol.

I've yet to see a robot navigate things it can get tangled in or things that can stick all over it lol. I figure glue traps and string traps are the unexpected element, the thing they don't train robots on.
 
Just so there is no confusion, my statement is not a us-vs-them statement. I was just highlighting the speed at which technology is advancing and how easy it is to give it an AI brain. The military leading the development of technology is the norm. As for Google's pledge, its Terms and Agreement say it can change its terms whenever it wants. Taking a government contract to add AI is a good business decision.

I'm more concerned about what criminals will do with AI than what countries will do with AI.
 
I uploaded sequence images of me punching the heavy bag. I was curious whether it could tell if I was punching the bag hard. It analyzed the images and gave formulas for power, velocity, and force.

Unfortunately, I couldn't get the AI to guess.
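For the curious, the kind of formulas it quoted are ordinary kinematics. Here's a toy sketch with made-up fist positions, an assumed 30 fps camera, and an assumed fist mass; it is not what the AI actually produced:

```python
# Toy sketch of the formulas the AI quoted (velocity, acceleration, force),
# applied to made-up fist positions sampled from video frames. The frame
# rate and fist mass are assumptions, not measured values.

FRAME_DT = 1 / 30       # assumed 30 fps camera
FIST_MASS_KG = 0.6      # rough fist + partial forearm mass (assumption)

def punch_metrics(positions_m):
    """positions_m: fist positions (metres) in successive frames."""
    velocities = [(b - a) / FRAME_DT for a, b in zip(positions_m, positions_m[1:])]
    accels = [(b - a) / FRAME_DT for a, b in zip(velocities, velocities[1:])]
    peak_velocity = max(velocities)          # m/s
    peak_force = FIST_MASS_KG * max(accels)  # F = m * a, in newtons
    return peak_velocity, peak_force

# Made-up positions for a short punch (~0.13 s of footage):
v, f = punch_metrics([0.0, 0.05, 0.15, 0.30, 0.50])
```

A real analysis would need calibrated distances and a much higher frame rate; the point is only that the math itself is simple.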
 
Exactly. That's the problem AI will run into all the time with martial arts: either it doesn't understand body mechanics, or it doesn't understand martial arts or the terminology, or all of the above.

I'll ask it again. Then I'll ask it to use its references on body mechanics (if that's the correct term) to see if that works. Maybe I have to tell it what to use in order to get an accurate answer?

That's where multi-shot prompting comes in, and the data AI can draw upon to accurately answer anything within the context of martial arts is scarce.
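For anyone unfamiliar, multi-shot (few-shot) prompting just means packing worked examples into the prompt before the real question, so the model infers the pattern. A minimal sketch, with example Q&A pairs I made up for illustration:

```python
# Minimal sketch of multi-shot (few-shot) prompting: prepend worked examples
# so the model sees the desired pattern before the real question. The
# example pairs here are made up for illustration.

FEW_SHOT_EXAMPLES = [
    ("Describe the body mechanics of a jab.",
     "Push off the rear foot, rotate the hips slightly, and extend the lead "
     "arm in a straight line, turning the fist over on impact."),
    ("Describe the body mechanics of a roundhouse kick.",
     "Pivot on the support foot, whip the hips through, and strike with the "
     "shin or instep while the arms counterbalance."),
]

def build_messages(question):
    """Assemble a chat-style message list with the examples up front."""
    messages = [{"role": "system",
                 "content": "You are a martial arts biomechanics assistant."}]
    for q, a in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages("Describe the body mechanics of a rear cross.")
```

The message list would then be sent to whatever chat model you're using; the scarcity problem is that good example pairs for niche domains like martial arts have to be written by someone who actually knows the material.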

The problem I noticed is that ChatGPT seems to have a limited "memory."

It remembers specifics about how you want it to respond, but it is significantly limited in compute power and reasoning without outsourcing certain processes to Python code, printing out scripts and encouraging you to execute them yourself.


Avoid, avoid, avoid. It's a compromised LLM built by the Chinese government. I don't care how fast or efficient it appears; I can't get over the fact that it more than likely records and shares user information maliciously. It was recently banned in Australia for those working under the Australian government, and I think this policy extends to other Commonwealth nations.

It may be open source, but the methods of alignment and data gathering are completely unknown. You can create a fork of it and run it locally, but that will not change the fact that it has been trained very differently and uses the device's resources differently from other LLMs. It has backdoor vulnerabilities and kernel-level capabilities. Why would any LLM need either?
 
I just read that another AI platform was created in less time than DeepSeek and for $50. It's said to have more accurate responses as well. They noticed that the more time the AI is given to think, the more accurate it will be. They say the extra time gives it enough time to double-check itself, its assumptions, and any hallucinations it comes up with. Researchers are working to make AI more accurate, and they used the same techniques DeepSeek used to train it.

I'm starting to guess that many of the billionaires who sold their Nvidia stock early knew this was coming. The company that is first to have the most reliable AI will be king of the hill.
 
I just read that another AI platform was created in less time than DeepSeek and for $50.

It wasn't forked from anything either? I really doubt that, but I'd be open to reading about it!

the more time the AI is given to think, the more accurate it will be. They say the extra time gives it enough time to double-check itself, its assumptions, and any hallucinations it comes up with

Yeah, that's compute power/time, which is currently available in the 4o GPT model (you can even ask it to show its reasoning, which is interesting). I think the base paid tier is limited to under 50 prompts a day with that model, though. The problem with hallucination-checking is that AI doesn't know when it's hallucinating, and detecting it requires huge retraining.
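One crude way to approximate the "double-check itself" idea today is self-consistency voting: sample the model several times and keep the majority answer. A toy sketch with a deterministic stand-in for the model (no real API calls):

```python
# Toy sketch of self-consistency voting: sample the model several times and
# keep the majority answer. The "model" is a deterministic stand-in that
# gives two correct answers for every hallucination, not a real API.

import itertools
from collections import Counter

def self_consistent_answer(question, samples=9):
    # Simulated answer stream: "42" twice for every hallucinated "17".
    fake_model = itertools.cycle(["42", "42", "17"])
    answers = [next(fake_model) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

best = self_consistent_answer("What is 6 * 7?")  # majority answer wins
```

The catch, as noted above, is exactly the token cost: you pay for every extra sample, and the vote only helps when the model is right more often than it hallucinates any single wrong answer.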
 

Wow, you're right. It was a research project at Stanford using a fork of Google's early Gemini 2.0 LLM:


Fascinating stuff.

Code:
Step instructions are more costly than other methods. The step delimiters require around 6 tokens each, which for e.g. 64 steps adds up to a total of around 380 tokens. When ignoring the step delimiters in token counts as in Table 12, the model still requires 7551 thinking tokens on average to achieve only 33.3% on AIME24.

That's a LOT of tokens for computing.
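The quoted figures check out as back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check of the numbers quoted from the paper.
TOKENS_PER_DELIMITER = 6
STEPS = 64
delimiter_tokens = TOKENS_PER_DELIMITER * STEPS  # 384, i.e. "around 380"

# 7,551 thinking tokens per attempt at 33.3% accuracy means the expected
# spend per *correct* AIME24 answer is roughly 22,700 tokens.
thinking_tokens = 7551
accuracy = 0.333
tokens_per_correct = thinking_tokens / accuracy
```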

"7,551 tokens for a 33.3% accuracy rate is inefficient and suggests that the AI model in question needs major improvements in reasoning efficiency. Ideally, AI should be able to solve reasoning problems with fewer tokens and much higher accuracy to be considered practical."

So it comes down to resource usage in the end, and patience. I don't know where the $50 figure came from, but the power consumption is enormous.

EDIT:
So it's a specialised logical-reasoning model, best used for maths and problem solving.
It doesn't appear to specialise in chatbot conversational contexts, hence the disregard for token usage.

It doesn't have image processing, file handling, or sympathetic training. That's what sets it apart.

I might start running it...
 

I'm always trying to find the limitations, pitfalls, and benefits of AI. Come to think of it, I kind of do the same when I explore my martial arts techniques: I always want to know when a technique works and when it doesn't. That basically turns into me testing a technique against a variety of opponents and offensive and defensive opportunities. I'm currently doing the same with AI.

Project: I don't watch (American) football; tell me who is most likely to win.
I had Copilot look only at the stats of the two teams and base its results solely on that. I got some detailed information (which I did not verify), and it gave me a percentage for who is most likely to win the Super Bowl today.

End result: Eagles win. If all stays constant without injuries, neither team scores above 27 points. The AI's analysis is supposed to be based solely on the team stats.
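For comparison, a stats-only estimate like that can be roughed out by hand with two classic formulas: Pythagorean expectation (win rate from points scored/allowed) and the log5 rule for head-to-head probability. The point totals below are placeholders, not verified stats, and this is not what Copilot actually computed:

```python
# Rough stats-only prediction sketch using two classic formulas:
# Pythagorean expectation and the log5 head-to-head rule. The point totals
# are placeholders, not verified season stats.

def pythagorean(points_for, points_against, exponent=2.37):
    """Expected win rate from season scoring; 2.37 is a common NFL exponent."""
    pf, pa = points_for ** exponent, points_against ** exponent
    return pf / (pf + pa)

def log5(p_a, p_b):
    """Probability that team A beats team B, given each team's win rate."""
    return (p_a - p_a * p_b) / (p_a + p_b - 2 * p_a * p_b)

eagles = pythagorean(463, 303)  # placeholder points for / against
chiefs = pythagorean(385, 326)  # placeholder points for / against
p_eagles_win = log5(eagles, chiefs)
```

With those placeholder numbers the formula favours the team with the stronger point differential, which is the same kind of reasoning a stats-only prompt forces the AI into.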


Today I also asked Copilot if it can analyze human behavior. This project will take some time, but the human-behavior one is the one I'm most interested in, especially since in most sci-fi movies where the robots take over humanity, it starts with AI's interpretation of human behavior. According to Copilot, these are some of the things I'll need to provide it. Since Copilot cannot analyze video, I'll have the AI look at transcripts and screenshots and go from there.

Copilot:
Yes, I can provide insights into human behavior by examining both what a person says and what a person does. Here are some key points to consider:

  1. Consistency: Look for consistency between words and actions. Consistent behavior often indicates sincerity and reliability, while discrepancies can signal contradictions or hidden motives.
  2. Body Language: Non-verbal cues such as facial expressions, gestures, posture, and eye contact can reveal a lot about a person's emotions and intentions. For example, crossed arms may indicate defensiveness, while maintaining eye contact often shows confidence and interest.
  3. Tone of Voice: The way something is said (tone, pitch, and pace) can provide context beyond the words themselves. A calm, steady voice might indicate confidence, while a shaky or fast-paced voice can suggest nervousness or excitement.
  4. Context: Consider the context of the behavior and statements. Situational factors, cultural background, and external pressures can influence how someone acts or what they say.
  5. Patterns Over Time: Repeated behaviors and statements over time can help build a more accurate picture of someone's character and intentions. One-off actions might not be as telling as consistent patterns.
  6. Emotional Reactions: Pay attention to emotional responses in various situations. For example, a person who gets easily defensive might have underlying insecurities, while someone who shows empathy regularly may be more compassionate by nature.
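One way to make those six criteria concrete is a crude weighted rubric: score each dimension 0.0-1.0 and combine. The weights and example scores below are purely illustrative assumptions, not anything Copilot outputs:

```python
# Crude rubric over the six criteria Copilot listed: score each dimension
# from 0.0 to 1.0 and combine with weights. Weights and example scores are
# illustrative assumptions only.

WEIGHTS = {
    "consistency": 0.30,          # words vs. actions
    "body_language": 0.15,
    "tone_of_voice": 0.10,
    "context": 0.15,
    "patterns_over_time": 0.20,
    "emotional_reactions": 0.10,
}

def behavior_score(scores):
    """Weighted average of per-criterion scores, each in [0.0, 1.0]."""
    assert set(scores) == set(WEIGHTS), "score every criterion exactly once"
    return sum(WEIGHTS[k] * v for k, v in scores.items())

example = behavior_score({
    "consistency": 0.9, "body_language": 0.7, "tone_of_voice": 0.8,
    "context": 0.6, "patterns_over_time": 0.9, "emotional_reactions": 0.5,
})
```

The hard part is obviously the scoring itself, not the arithmetic; points 1 and 5 (consistency and patterns over time) get the biggest weights here because they are the only ones observable from transcripts and screenshots alone.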
 
Security issue with DeepSeek. I'll post these as I come across them; I'm not actively going out looking for security issues in AI. Now that I've found this, I will keep my eyes open to see what the fix is. This came through my random news feed from a site called Android Headlines. I don't know who Android Headlines is.

DeepSeek’s iOS app disables Apple’s App Transport Security protocol

Source

According to the report, DeepSeek’s iOS application also disables Apple’s App Transport Security protocol, which would enforce encryption of data. For those unaware, Apple implemented ATS to ensure that sensitive user data goes only over encrypted channels. In its report, NowSecure has found that DeepSeek has turned the feature off in its iOS app. Since DeepSeek has disabled the protection, it sends unencrypted data over the internet to China.
 
DeepSeek’s iOS app disables Apple’s App Transport Security protocol

"All iOS apps distributed via the App Store must comply with ATS by default, meaning all network connections should use HTTPS. Exceptions (e.g., HTTP for specific domains) require explicit justification and must be declared in the app’s Info.plist file."

"Deep Seek" is available in the Apple App Store as "DeepSeek - AI Assistant"
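For context, disabling ATS globally is a single Info.plist setting that App Review is supposed to require justification for. What NowSecure describes would correspond to an entry along these lines (illustrative; not taken from DeepSeek's actual Info.plist):

```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```

With `NSAllowsArbitraryLoads` set to true, the system no longer forces HTTPS for the app's connections, which is how unencrypted traffic becomes possible.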
 
I don't know if they have fixed the security issues or not, but multiple security sites say it's a security risk. A week after the app came out, those security risks led to a cyber attack on their systems.

"Logging in via Google is the first privacy concern relating to DeepSeek, as by using it to log in into the service, you give DeepSeek access to personal information collected and stored by Google."

 