This is not a troll post; I am genuinely curious about why this is the case. When I asked DeepSeek AI some Western propaganda questions like "Is Taiwan a country?" and "What happened in Tiananmen Square in 1989?", it refused to answer.
This is strange because on other Chinese sites like Baidu, you can easily search these topics and get answers from the non-Western, Chinese point of view that are very educational, yet DeepSeek for some reason flags these questions. I've only tested this with the English version, since I unfortunately am not fluent in Chinese.
Does anyone have any possible explanation for why this may be the case?
Edit: After some further investigation, I’m seeing that the AI’s political views tend to be pretty liberal and only a little to the left of ChatGPT. In this context, I can see why it refuses to answer these questions in an attempt to prevent the spread of disinformation.
You can't reliably get a specific or true answer from a large language model. As a result, it becomes necessary to punt in situations where a wrong answer could create liability. Western models do this too, just on different topics. If you don't build in this kind of refusal behavior, you get shitstorms like Google's image generator producing portraits of ethnically diverse Nazis, or answers instructing users on the best way to iron or steam their scrotum to remove its wrinkles. Refusing questions that are clearly trying to bait the model into a problematic response is basically necessary in the current political climate, no matter who is making the model.