Elon Musk’s new xAI company launches to “understand the true nature of the universe” (www.theverge.com)
Given the absolutely vast amounts of data that go into these models, especially the most recent ones, I'm sceptical that there was nothing at all in the training data from a WikiHow article about stacking objects, or from tutorials on writing code that draws animals. I read an article a few months ago about someone asking an "AI" to create a crochet pattern for a narwhal, and the resulting pattern did indeed look something like a narwhal, in that it had all the right parts in roughly the right places, even if it was still a ghastly abomination. There's no evidence that the "AI" actually understood what it was creating: there are plenty of narwhal crochet patterns online that were included in its training data, and it simply predicted a pattern based on those.
I'm inclined to believe the unicorn code is the same. The model doesn't need to understand the concept of a head, or even of a unicorn, to be able to predict code for a unicorn without a horn. In the vastness of the internet, there is undoubtedly a tutorial out there with some version of "you can turn your unicorn into a horse by removing this bit of code", and probably tutorials for "if you want your unicorn facing the other way, do it like this" as well. Its training data will always include the lines of code for the horn as part of the code for the head. It's not like there's code out there for "how to draw a unicorn with a horn on its butt" (although I'm open to being proved wrong on this; I'm sure somebody on the internet has a thing for unicorns with horns on their butts instead of their heads, but that's unlikely to be the most predictable structure for the code). So predictive-text ability alone makes it unlikely that the horn code would end up anywhere near the butt code.
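To make that concrete: none of us has seen the actual demo code or its training examples, so this is a purely hypothetical sketch (in Python with matplotlib, not whatever language the real demo used), showing how unicorn-drawing tutorial code tends to be organised. The point is just structural: the horn is almost always a small block sitting right next to the head code.

```python
# Hypothetical illustration only -- not the code from any real demo.
# Shows why "remove the horn" is a trivially local edit in typical
# drawing-tutorial code: the horn block lives alongside the head block.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def draw_unicorn(ax, with_horn=True):
    # body: a simple ellipse
    ax.add_patch(patches.Ellipse((0.5, 0.4), 0.5, 0.25, color="lavender"))
    # head: a circle at the front of the body
    ax.add_patch(patches.Circle((0.8, 0.6), 0.1, color="lavender"))
    if with_horn:
        # horn: drawn as part of the head block, as in nearly every
        # tutorial -- deleting these lines yields a horse
        ax.add_patch(patches.Polygon([(0.8, 0.7), (0.86, 0.7), (0.85, 0.88)],
                                     color="gold"))
    # legs
    for x in (0.35, 0.45, 0.6, 0.7):
        ax.add_patch(patches.Rectangle((x, 0.15), 0.04, 0.18, color="lavender"))

fig, ax = plt.subplots()
draw_unicorn(ax, with_horn=False)  # "turn your unicorn into a horse"
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_aspect("equal")
plt.show()
```

If nearly every example in the training data is shaped roughly like this, then "the horn code sits next to the head code" and "delete that block to get a horse" are exactly the kinds of regularities next-token prediction would pick up, with no concept of a head required.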
The training data likely also includes the many, many texts out there describing how to test for theory of mind. So the ability to predict what someone writing about theory of mind would say, including descriptions of how a child or animal who passes a theory-of-mind test will predict where objects are, doesn't prove that an "AI" actually has a theory of mind.
So I remain very, very sceptical that there is any general intelligence in the latest versions. They just have larger datasets and more refined predictive abilities, so the results are more accurate and less prone to hallucination. That's not the same as evidence of actual consciousness. I'd be more convinced if one correctly completed a brand-new puzzle, one that has never been solved before and hasn't been posted about on the internet or written up in scientific journals or textbooks. But so far, all the evidence for general intelligence amounts to predicting the response to a question or puzzle for which there is ample data about the correct answer.