My rule for things like this has always been a business approach: what is the law of the land? What is the government allowed to do? If you don't like the laws of that government, then don't use their stuff or do business there.
By default, the US and UK will not be open to software from China, and China will not be open to software from the US. The laws contradict each other in certain areas; that's just a reality. I'm not sure where US law stands these days. Lots of unusual things are going on, and it's not clear how they will affect AI.
I'm not an expert on AI, but as I understand it, DeepSeek is an LLM (Large Language Model), a type of AI able to process and generate human language (like the models used by ChatGPT, Claude, Gemini, Copilot, etc.). Those algorithms can process massive amounts of data and have been trained by feeding them large amounts of text. An extreme simplification of this process would be to tell the AI a million times that the correct response to "hi, how are you?" is "I'm fine mate, how 'bout ya?". In the end, if I say "hi, how are you?", the AI will write an answer based on the responses that have received the most "correct" ratings. It does not think; it just puts together pieces of sentences in the order that is most probably correct. And I will get my answer: "I'm fine, mate, how 'bout ya?". That's why LLMs can have "hallucinations" and write answers that don't make sense: they're fundamentally guessing the most probable answer. For example, here is me asking ChatGPT to summarize a book that does not exist:
- Me: Give me a one paragraph summary of the book "How I became a vegan" by Sebastian W. Grunenfeld, as well as its citation in APA style.
- ChatGPT: "How I Became a Vegan" by Sebastian W. Grunenfeld chronicles the author’s personal journey of transitioning to a vegan lifestyle. In the book, Grunenfeld reflects on the ethical, environmental, and health-related reasons that led him to make this significant change. He discusses the challenges, misconceptions, and emotional experiences that accompanied his decision to adopt veganism, offering insight into the broader societal shift toward plant-based living. Through this narrative, Grunenfeld encourages readers to consider the moral implications of their food choices while providing practical advice for those interested in adopting a vegan diet.
Citation in APA style: Grunenfeld, S. W. (2009). How I became a vegan. Vegan Press.
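To make that "million repetitions" idea concrete, here's a toy script (everything in it is made up for illustration; a real LLM predicts one word fragment at a time using billions of learned weights, not a lookup table, but the underlying principle of "pick the statistically most likely answer" is the same):

```python
# Toy sketch: the "model" is just a table of how often each reply followed
# each prompt in its made-up "training" data, and it answers by picking the
# most frequent one. This is NOT how a real LLM is built; it only illustrates
# the "most probable answer" principle described above.
from collections import Counter, defaultdict

# Hypothetical training data: (prompt, reply) pairs.
training_pairs = [
    ("hi, how are you?", "I'm fine mate, how 'bout ya?"),
    ("hi, how are you?", "I'm fine mate, how 'bout ya?"),
    ("hi, how are you?", "not bad, thanks!"),
]

# "Training": count how often each reply followed each prompt.
reply_counts = defaultdict(Counter)
for prompt, reply in training_pairs:
    reply_counts[prompt][reply] += 1

def answer(prompt):
    """Return the reply seen most often after this prompt."""
    if prompt in reply_counts:
        return reply_counts[prompt].most_common(1)[0][0]
    # An unseen prompt still gets a confident-sounding guess -- the toy
    # version of a "hallucination": no knowledge, just the closest match.
    return "I'm fine mate, how 'bout ya?"

print(answer("hi, how are you?"))  # -> I'm fine mate, how 'bout ya?
```

Note how the fake-book summary above is the same failure mode: the model has no record of that book, so it assembles the most statistically plausible-sounding summary instead of saying "I don't know".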
The main risk with AI is allowing it to process sensitive data in ways that threaten fundamental rights and freedoms (e.g. conduct surveillance on political/business opponents or vulnerable minorities; manipulate information on a large scale to support a particular political narrative by, say, influencing search engine results; etc.).
AI is unparalleled at recognizing patterns in massive amounts of unorganized data. For example, it's already possible for some data brokers to have a precise idea of your tastes and stance on specific issues, the people you regularly meet, the places you visit, etc. Social media and search engine algorithms have been using that for years to show you content they want you to interact with (e.g. ads for products that fit your tastes so you're more likely to buy, or content likely to provoke an emotional reaction so that you click, comment, and engage more with the platform, which means you'll spend more time on it and see more ads). One can think of other creative ways to use the intel: I can determine with whom my political rival has an affair and when they meet, so that I can get kompromat on them (bonus points if they're, say, a closeted homosexual). Or I can identify somebody in a particular position who's got debts and a close relative with cancer, so I can make an offer they can't refuse. Heck, if I'm unpopular with red-headed people (or if I just don't like them), I can systematically underfund the hospitals where they get treatment and ensure they're more worried about their own health than about me.
That's why it would be concerning if somebody fed sensitive data (financial or medical information, payments, social security numbers, etc.) to an algorithm without control or accountability. That risk is present in China, but China is not the only country with advanced AI technology and failing democratic checks and balances.