Yep. Story out today that OpenAI is claiming DeepSeek “inappropriately” used their data (ironic as it is).
All of this is straight out of the TV show Person of Interest from 10 years ago. That show was way ahead of its time.

The screen grabs I've seen, where it starts to answer and then stops itself as soon as it says the 'forbidden thing', have to be in the code, not its training. It's 'catching itself' after already formulating the forbidden answer.
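For anyone curious what that would look like in practice, here's a minimal sketch of a post-generation moderation wrapper. Everything here (the blocklist, the fake streaming function) is made up for illustration, not anything from DeepSeek's actual stack, but it shows why a filter layered on top of the model would yank an answer back only after part of it has already been shown.

```python
# Hypothetical sketch: a moderation layer wrapped around a streaming model.
# The filter only fires after the model has already produced the text,
# which matches the "starts answering, then retracts" behavior in the screen grabs.

BANNED_PHRASES = ["forbidden thing"]  # placeholder blocklist, purely illustrative

def fake_model_stream(prompt):
    """Stand-in for a model streaming tokens back to the client."""
    for token in "Here is the forbidden thing you asked about ...".split():
        yield token + " "

def moderated_stream(prompt):
    """Stream the answer, but withdraw it if the running text hits the blocklist."""
    so_far = ""
    for token in fake_model_stream(prompt):
        so_far += token
        if any(phrase in so_far.lower() for phrase in BANNED_PHRASES):
            # The check runs on output the model already generated,
            # so the user sees partial text appear and then get replaced.
            yield "\n[answer withdrawn by moderation layer]"
            return
        yield token

if __name__ == "__main__":
    for chunk in moderated_stream("tell me the forbidden thing"):
        print(chunk, end="", flush=True)
    print()
```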
Can someone ask it why Xi looks like Winnie the Pooh?