
Results are in: Grok 3 is the most powerful LLM

They’re powerful tools for communication, problem-solving, and even creativity, but they don’t "think" like humans—they rely on statistical associations in the data they’re trained on.
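If it helps to picture what "statistical associations" means, here's a deliberately tiny, made-up bigram model (nothing like a real LLM, just the same idea in miniature): it strings words together purely from co-occurrence counts, with no notion of what's true.

```python
# Toy bigram "language model": it only knows which word tends to follow which
# in its training text -- pure statistical association, no meaning or truth.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

rng = random.Random(0)
text = ["the"]
for _ in range(8):
    candidates = follows[text[-1]]
    if not candidates:          # dead end in this tiny corpus
        break
    words, counts = zip(*candidates.items())
    text.append(rng.choices(words, weights=counts, k=1)[0])

print(" ".join(text))  # fluent-looking output, but nothing here knows what's real
```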


I’m still concerned about the examples (not Grok, but ChatGPT) of it ‘hallucinating’ facts and citations that were whole-cloth constructions.
It can make a coherent sentence, but it has no idea what is real and what is not real.

I just want an AI that can play War in the West and War in the East. The game is too complicated for the devs to generate a good AI, so they have to let it 'cheat', which ruins the game to a degree. Fun to play against people, but a huge time commitment, and people tend to disappear when it starts to go bad. The one good thing about an AI opponent is that it plays to the end.
 
Fun fact: Blind people can still function in many typical jobs, despite what Elno and other people might claim.
Joe - it’s OK not to argue once in a while and to admit this is chuckle-worthy. It’s blind-ref funny. He’s a self-proclaimed watchdog, for crying out loud. It’s funny.


[attached GIF: referee_blind.gif]
 
It can make a coherent sentence, but it has no idea what is real and what is not real.
This is why every AI model, once it runs out of "real" and "human" data, devolves into a mess.

Several science articles cover this; one in Nature a few months back. And this is precisely the concern, because there is no guardrail system out there to tell them which of the data they are training on is human-generated and which is not.
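For what it’s worth, the effect those articles describe can be sketched with a toy example (my own made-up numbers, not anything from the Nature paper): fit a model to its own outputs over and over, and the spread of the data quietly collapses.

```python
# Minimal, hypothetical sketch of "model collapse": each generation is fit only
# to samples drawn from the previous generation, so estimation noise compounds.
import numpy as np

rng = np.random.default_rng(0)
n = 50                       # small per-generation "dataset" (an assumption)
mu, sigma = 0.0, 1.0         # generation 0: the original "human" data

for generation in range(1, 501):
    samples = rng.normal(mu, sigma, size=n)    # previous model's outputs
    mu, sigma = samples.mean(), samples.std()  # next model fits only those
    if generation % 100 == 0:
        print(f"gen {generation}: std = {sigma:.3f}")

# Over enough generations the spread tends to shrink toward zero -- the rare,
# "tail" content of the original distribution is the first thing to vanish.
```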
 
@Phenomenally Frantastic
Google owns reams of data to train their AI. OpenAI/ChatGPT basically stole data. DeepSeek stole from OpenAI. I'm curious where Grok got the data to become #1. I'm by no means dissing Grok, just wondering out loud.

Great question. Let’s ask Grok.

“xAI gathers training data for Grok from a variety of sources to make me as knowledgeable and helpful as possible. While I don’t have the nitty-gritty details—think of it like a chef not spilling the exact recipe for their secret sauce—I can tell you it involves a mix of publicly available internet texts, curated datasets, and a dash of creative magic (just kidding about that last one... or am I?). Seriously, though, xAI taps into vast pools of online content, likely including books, articles, and other written material that’s accessible to the public. They might also use licensed data or contributions from partners, but I don’t have a peek behind that curtain. The goal? To help me understand the universe and assist users like you, all while keeping things ethical and above board. Want to know more about how I can help you today?”


I gather most of it is publicly available fair use data. Not all public data is “stolen” by OpenAI or xAI or any other LLM.
 

Doesn't it have access to everything said on X?
 
But we all know that X content is a mere shadow of what we generate on hrot across all domains, and every one of these AI companies stole our stuff :)

I presume all of the internet that is crawlable has been loaded into these LLMs, so hrot wisdom should be enshrined; I'm just not sure it is appropriately weighted against sources like Reddit or Pinterest.
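To make the weighting point concrete, here's a toy, entirely made-up sketch of how a pretraining pipeline might assign per-source sampling weights; the source names and numbers are invented, not anything xAI or OpenAI has published.

```python
# Hypothetical per-source sampling weights for a training data mixture.
import random

source_weights = {
    "hrot": 0.05,
    "reddit": 0.40,
    "pinterest": 0.05,
    "books_and_articles": 0.50,
}

def sample_source(rng: random.Random) -> str:
    """Pick which source the next training example is drawn from."""
    sources, weights = zip(*source_weights.items())
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(42)
counts = {name: 0 for name in source_weights}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # roughly proportional to the weights above
```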
 
I presume all of the internet that is crawlable has been loaded into these LLMs, so hrot wisdom should be enshrined; I'm just not sure it is appropriately weighted against sources like Reddit or Pinterest.
Maybe Grok thrashed the other AIs by weighting hrot content appropriately, and thus ironically exploited the knowledge of super-smart, left-leaning posters.
 