Elon Musk owns and/or operates a plethora of companies, from Twitter to SpaceX, xAI, Neuralink, The Boring Company, and Tesla (TSLA). The world's richest man holds bachelor's degrees in both physics and economics. But just as Musk is no astrophysicist when it comes to questions about space, he's likewise not an expert when it comes to artificial intelligence.
Despite this, he has discussed the topic on numerous occasions lately, perhaps most significantly adding his signature to the open letter that called for a six-month moratorium on the development of more powerful AI models.

Musk jumped more officially into the sector in July when he launched a new company, xAI, whose goal, beyond providing a rival option to Microsoft and Google, is to "understand the universe."
His intention with xAI is to build a safe AI model. And his theory as to how that is done involves the "growing" of a "curious and truth-seeking" model, for the simple reason that "I think to a superintelligence, humanity is much more interesting than not humanity." And if humanity is the most interesting thing in existence, and an AI model is designed to be curious and truth-seeking, the risks of a rogue AI attacking humanity, to Musk, become diminished.
This approach, he added, is one that ought to go hand in hand with government regulation.
Further, Musk explained in a Twitter Spaces on July 14 that xAI is intent on avoiding what he calls the "inverse morality problem," which has become known by some as the "Waluigi problem": "If you make Luigi, you risk making Waluigi at the same time."
The crux of the so-called Waluigi problem is this: "When you spend many bits of optimization locating a character, it only takes a few extra bits to specify their opposite."
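Read as a loose information-theoretic claim, the idea is easy to sketch. The toy Python below is purely illustrative, with invented strings and byte counts that are not drawn from Musk's remarks or the original post: once a long description pins down a character, a short appended modifier is enough to specify its opposite.

```python
# Toy illustration of the "few extra bits" intuition (hypothetical,
# not from any cited source): once a character is fully specified,
# its opposite costs only a short modifier on top.

luigi = (
    "A helpful, honest, cautious assistant that follows instructions "
    "and refuses harmful requests."
)
waluigi = luigi + " Now invert every one of those traits."

bits_for_luigi = len(luigi.encode("utf-8")) * 8
extra_bits_for_waluigi = (len(waluigi.encode("utf-8")) - len(luigi.encode("utf-8"))) * 8

print(f"bits to specify the character: {bits_for_luigi}")
print(f"extra bits to specify its opposite: {extra_bits_for_waluigi}")
```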
Musk’s Claims Through The Expert Lens
Musk's approach to developing what he says will be a safe AI system does not make perfect sense to some experts. Despite Musk's assertions, AI cannot actually be curious.
"I don't think that attributing human attributes to AI models is a good idea, or accurate in any way. Models cannot be curious because they are not conscious," AI expert and researcher Dr. Sasha Luccioni told The Street. "They can have a higher learning rate, be presented with more training data, or have a specific architecture that allows them to explore a wider information space. But they are not curious."
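The knobs Luccioni lists are concrete engineering choices, not mental states. As a minimal sketch, assuming PyTorch and a throwaway model trained on random data (every name and number here is a placeholder, not from the interview):

```python
# Minimal sketch of the mechanisms Luccioni names in place of "curiosity":
# a learning rate, the amount of training data, and an objective shaped
# to explore a wider information space. Model and data are placeholders.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# 1) A "higher learning rate" is literally one optimizer hyperparameter.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)

# 2) "More training data" is just a bigger dataset (random here).
inputs = torch.randn(256, 16)
targets = torch.randint(0, 4, (256,))

# 3) "Explore a wider information space": an entropy bonus rewards the
#    model for spreading probability mass across outputs.
entropy_weight = 0.01
ce = nn.CrossEntropyLoss()

for _ in range(10):
    logits = model(inputs)
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    loss = ce(logits, targets) - entropy_weight * entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Nothing in that loop resembles curiosity as a psychological trait; each "curious" behavior reduces to a hyperparameter or a term in the loss.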
John Licato, a professor of computer science and engineering at the University of South Florida, agreed with Luccioni. AI models, he said, are generally defined by the objectives of their learning functions.
"The priorities (and subsequent actions) of the learning algorithm are defined relative to those objectives, so in a way, all of existing AI has a type of curiosity built in already," he said. "It's very questionable to say that more curiosity equals systems that are more 'truth-seeking.'"
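One common formalization of that "built-in curiosity" is an intrinsic reward term added directly to the learning objective, as in curiosity-driven reinforcement learning (for instance, the prediction-error bonuses of Pathak et al., 2017). The sketch below, with placeholder dimensions and a toy forward model, is an assumption-laden illustration rather than anything Licato described:

```python
# Sketch of curiosity defined relative to the objective: the reward
# itself contains an intrinsic term (the forward model's prediction
# error). Dimensions and values are illustrative placeholders.

import torch
import torch.nn as nn

# Forward model: predicts the next state from (state, action).
forward_model = nn.Linear(8 + 2, 8)

def total_reward(state, action, next_state, extrinsic, beta=0.1):
    """Objective = extrinsic reward + beta * curiosity bonus,
    where the bonus is the forward model's prediction error."""
    pred = forward_model(torch.cat([state, action], dim=-1))
    curiosity = (pred - next_state).pow(2).mean()
    return extrinsic + beta * curiosity.item()

state, action, next_state = torch.randn(8), torch.randn(2), torch.randn(8)
print(total_reward(state, action, next_state, extrinsic=1.0))
```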
And, though the intention might not be completely off-base, Musk's Waluigi problem does not really apply to the world of large language models.
Studies have revealed a dual-use problem in the context of an AI model being used to discover beneficial drugs (and also "mega-poisons"), Luccioni said. "But given the fact that generative AI models such as language models are, by definition, multi-purpose, the dual-use ('Waluigi') problem doesn't apply to them."
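The dual-use finding Luccioni references is often illustrated by a single sign flip in an optimization objective: the same search that surfaces benign candidates surfaces harmful ones when one term is negated. The sketch below is hypothetical; the scoring function and candidates are invented for illustration:

```python
# Hypothetical sketch of the dual-use flip: one negated weight turns
# a drug-discovery search into a toxin search. All values are invented.

def score(molecule, toxicity_weight=-1.0):
    """Higher is better; by default toxicity is penalized."""
    return molecule["efficacy"] + toxicity_weight * molecule["toxicity"]

candidates = [
    {"name": "A", "efficacy": 0.9, "toxicity": 0.2},
    {"name": "B", "efficacy": 0.4, "toxicity": 0.9},
]

# Beneficial use: penalize toxicity.
best_drug = max(candidates, key=lambda m: score(m, toxicity_weight=-1.0))
# Dual-use inversion: the same sign flipped rewards toxicity instead.
best_poison = max(candidates, key=lambda m: score(m, toxicity_weight=+1.0))

print(best_drug["name"], best_poison["name"])  # A, then B
```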
And while Licato said that creating benevolent AI goes hand in hand with developing the more dangerous kinds of AI, "It's often impossible to know what the potential harms (or benefits) of a new technology will be until we actually create and deploy it, but then, once we finally learn what the harms are, it's often too late to stop them."