Artificial intelligence is scary to a lot of people, even within the tech world. Just look at how industry insiders have co-opted a tentacled monster called a shoggoth as a semi-tongue-in-cheek symbol for their rapidly advancing work.
But their online memes and references to that creature — which originated in influential late author H.P. Lovecraft’s novella “At the Mountains of Madness” — aren’t quite perfect, according to the world’s leading Lovecraft scholar, S.T. Joshi.
If anyone knows Lovecraft and his wretched menagerie, which includes the ever-popular Cthulhu, it’s Joshi. He’s edited reams of Lovecraft collections, contributed scores of essays about the author and written more than a dozen books about him, including the monumental two-part biography “I Am Providence.”
So, after The New York Times recently published a piece from tech columnist Kevin Roose explaining that the shoggoth had caught on as “the most important meme in A.I.,” CNBC reached out to Joshi to get his take — and find out what he thought Lovecraft would say about the squirmy homage from the tech world.
“While I’m sure Lovecraft would be grateful (and amused) by the application of his creation to AI, the parallels are not very exact,” Joshi wrote. “Or, I should say, it appears that AI creators aren’t entirely accurate in their understanding of the shoggoth.”
First of all, it’s “shoggoth,” not “Shoggoth,” Joshi said. The capitalized version of the word, as it’s spelled in the Times article, has indeed appeared in many editions of “At the Mountains of Madness,” which was first published in “Astounding Stories” in 1936, the year before Lovecraft died at age 46. But decades ago, Joshi found that Lovecraft himself made it lowercase in his manuscript and typescript of the science fiction/horror tale set in Antarctica.
“It is a species name, not a proper name,” Joshi wrote in an email to CNBC.
But that’s a minor quibble. There are bigger thematic things to consider.
Workers and others in the generative-AI field use the shoggoth meme, which often appears as a squiggly cartoon festooned with eyes and appendages, to acknowledge the mysterious, at-times frightening potential of the technology. “That some A.I. insiders refer to their creations as Lovecraftian horrors, even as a joke, is unusual by historical standards,” Roose wrote in his Times column.
The recent advance of generative AI has already provoked references to science fiction classics such as “The Terminator” and “The Matrix,” as well as Harlan Ellison’s chilling short story “I Have No Mouth, and I Must Scream,” all of which portray sinister artificial intelligence wiping out most of humanity.
Bringing Lovecraft’s cosmic horrors into the mix might seem excessive at this point, even as the technology creates uncanny things. For instance, a recent fake Toronto Blue Jays ad, created by a TSN producer who used text-to-video AI tech, is packed with horrifying images such as people feasting on each other’s hot dog tentacles.
The shoggoth meme’s creator, known by the Twitter handle @TetraspaceWest, said the inspiration came about because Lovecraft’s monsters are “indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful A.I.”
The February 1936 issue of “Astounding Stories” (Street & Smith), featuring “At the Mountains of Madness” by H.P. Lovecraft. Cover art by Howard V. Brown.
The meme also tries to put a happy face on the shoggoth — literally — as it usually depicts the monster sporting a smile emoji on a tentacle. That’s in reference to efforts to train language models to be nice, according to the Times. It also reads like a commentary on how futile and absurd it might be to try.
Lovecraft’s shoggoths probably wouldn’t entertain the idea of sending a friendly signal, and, in the story, they certainly aren’t indifferent to their creators, whom they try to usurp.
While artificial intelligence exists in machines, the monsters in the novella are organically bred slave creatures that develop brains and wills of their own, Joshi pointed out. Lovecraft describes a shoggoth as a “column of foetid black iridescence” consisting of “protoplasmic bubbles, faintly self-luminous, and with myriads of temporary eyes forming and unforming as pustules of greenish light.”
A big concern among people who fear AI is that the programs will someday become more intelligent than humans and take over. There is no parallel event in Lovecraft’s story. The shoggoths don’t end up surpassing their masters, the ancient Old Ones, “in intelligence or any other capacity,” Joshi wrote. “Lovecraft clearly states otherwise.”
That’s not to say the meme totally misses the mark.
In the story, shoggoths rise up against the Old Ones in a series of slave revolts that surely contribute to the collapse of the Old Ones’ society, Joshi noted. The AI anxiety that inspired comparisons to the cartoon monster certainly resonates with the ultimate fate of that society.
“So the general metaphor of an artificial creation overwhelming its creator does have some sort of parallel to AI (or the fears of what AI might do in the future), but it’s a fairly inexact parallel,” Joshi wrote.
But even this imperfect metaphor pairs well with what happens in Lovecraft’s story, which describes a once-grand civilization that had too many problems to fix.
In our world — a world beset by toxic wildfire smoke and water shortages, violent insurrections in democracies, and the most military combat in Europe since World War II — AI is just part of a whole. There’s a lot of hype and confusion around it, as well as positive potential. There are also real concerns, namely in how AI could act as an accelerant for bigotry and extremism, or as an engine for misinformation, or as a job killer.
In the novella, the Old Ones fall prey to a variety of threats, including attacks from rival entities who come from outer space. The story ends with insinuations of even greater mind-shattering horrors that lie beyond the mountains of madness.
In reality, humans could well scale those terrible heights with the help of AI, but only if we let it happen. Maybe we should be the ones wearing the smiley faces.
Ryu Young-sang, CEO of South Korean telecoms giant SK Telecom, told CNBC that AI is helping telecoms firms improve efficiency in their networks.
South Korea has tasked some of its biggest companies and most promising startups with building a national foundational AI model using mainly domestic technology, a rare move aimed at keeping pace with the U.S. and China.
The project will feature South Korean technologies from semiconductors to software, as Seoul looks to create a near self-sufficient AI industry and position itself as an alternative to China and the U.S.
South Korea’s Ministry of Science and ICT (MSIT) announced that five consortia have been selected to develop the models. One is led by telecommunications giant SK Telecom and includes gaming firm Krafton and chip startup Rebellions, among other companies.
Other consortia are led by prominent South Korean firms, including LG and Naver.
“We are going through an important juncture in terms of our technological development. So Korea, at the national level, is focusing on ensuring that we lay the technical foundation to have our competitiveness,” Kim Taeyoon, head of the foundation model office at SK Telecom, who also leads the company’s consortium, told CNBC.
“Korea has many entities that would excel at creating a big AI industry. And we could clearly see the possibility that we are very capable of creating a good AI stack,” Kim added.
A “stack” refers to the collection of technologies that together make up a product or system.
South Korea’s forte
The initiative aims to draw on the strategic position of South Korean firms whose technologies are crucial to AI.
For example, SK Hynix makes high-bandwidth memory (HBM), which is critical to Nvidia’s products, and Samsung is another major memory player. SK Telecom has been expanding its business into data centers, while Rebellions, part of SKT’s consortium, is developing chips designed to handle AI workloads.
Samsung, meanwhile, has its own chip manufacturing business, known as a foundry.
“This means the country possesses the entire AI stack, from chips to cloud to AI models, and also benefits from a robust community of advanced AI researchers who are actively publishing papers and securing patents,” Nick Patience, practice lead for AI at The Futurum Group, told CNBC.
Given the intricacies of technology supply chains, no one country can do it alone. The consortia will still rely on graphics processing units (GPUs) from American firm Nvidia, which have become the gold standard for training AI models.
Meanwhile, SK Telecom will train the models it develops on its own Titan supercomputer, which is made up of Nvidia GPUs, and at an AI data center the company is developing with Amazon.
AI model roadmap
SK Telecom is not new to the AI model game. In 2022, the company launched a beta version of its first chatbot, “A.” (pronounced “A dot”), based on its own large language model. Since then, it has developed more advanced versions of the model and chatbot.
SK Telecom’s consortium plans to release its first model by the end of the year, Kim said. It will initially be focused on the market in South Korea, but could be used globally. The model will be open-source, meaning it will be free for developers to use and build on, potentially with some licensing requirements.
Any AI models coming out of South Korea’s project will face intense competition from players including OpenAI and Anthropic as well as many of the strong open-source offerings out of Chinese firms like Alibaba and DeepSeek.
Creating an AI model won’t be a problem, given that SK Telecom and other participating companies have already proven they can do so.
The bigger challenge will be putting forward models that can compete with those coming out of frontier AI labs, which are pouring billions of dollars into research and development. Another issue will be gaining traction among developers who build on these models. That is what has made other open-source models, like those from Alibaba, successful.
SK Telecom’s Kim said the goal is to create models that can rival these other companies.
“Our first goal is to create a very strong state-of-the-art open source model and we already have an example of those open source models which are on par in terms of performance with those large tech (players) like OpenAI or Anthropic,” Kim told CNBC.
He added that there will be models of different sizes that can be used by different industries.
An open-source national AI model could also provide benefits by giving businesses across the country access to the latest technology without having to rely on a tech giant from abroad.
Meanwhile, South Korean AI models could be positioned as an alternative to U.S. and Chinese-developed systems.
“Beyond domestic benefits, a proven sovereign AI model presents significant export potential. Just as Korea excelled in memory chips, this could become a valuable product for other nations seeking alternatives to U.S. or Chinese systems, strengthening Korea’s position in the global AI landscape,” Patience said.
AI sovereignty
Underpinning this push from South Korea is the concept of “sovereign AI,” which has gained traction with many nations.
This is the notion that AI models and services, which governments see as having strategic importance, should be built within a country and run on servers located domestically.
“All major nations are increasingly concerned about AI sovereignty as the US and China vie for AI dominance,” The Futurum Group’s Patience said.
“Given AI’s growing influence on critical sectors like healthcare, finance, defense, and government, countries cannot afford to cede control of their digital intelligence to foreign entities.”
The launch of an Instagram feature that displays users’ geolocation data elicited backlash from social media users on Thursday.
Meta debuted the Instagram Map tool on Wednesday, pitching the feature as a way to “stay up-to-date with friends” by letting users share their “last active location.” The tool is akin to Snapchat’s Snap Map feature, which lets people see where their friends are posting from.
Although Meta said in a blog post that the feature’s “location sharing is off unless you opt in,” several social media users said in posts that they were worried that was not the case.
“I can’t believe Instagram launched a map feature that exposes everyone’s location without any warning,” said one user who posted on Threads, Meta’s micro-blogging service.
Another Threads user said they were concerned that bad actors could exploit the map feature by spying on others.
“Instagram randomly updating their app to include a maps feature without actually alerting people is so incredibly dangerous to anyone who has a restraining order and actively making sure their abuser can’t stalk their location online…Why,” said the user in a Threads post.
Instagram chief Adam Mosseri responded to the complaints on Threads, disputing the notion that the map feature is exposing people’s locations against their will.
“We’re double checking everything, but so far it looks mostly like people are confused and assume that, because they can see themselves on the map when they open, other people can see them too,” Mosseri wrote on Thursday. “We’re still checking everything though to make sure nobody shares location without explicitly deciding to do so, which, by the way, requires a double consent by design (we ask you to confirm after you say you want to share).”
Still, some Instagram users claimed that their locations were being shared despite not opting in to using the map feature.
“Mine was set to on and shared with everyone in the app,” said a user in a Threads post. “My location settings on my phone for IG were set to never. So it was not automatically turned off for me.”
A Meta spokesperson reiterated Mosseri’s comments in a statement and said “Instagram Map is off by default, and your live location is never shared unless you choose to turn it on.”
“If you do, only people you follow back — or a private, custom list you select — can see your location,” the spokesperson said.
Tesla’s vice president of hardware design engineering, Pete Bannon, is leaving the company after first joining in 2016 from Apple, CNBC has confirmed.
Bannon was leading the development of Tesla’s Dojo supercomputer and reported directly to CEO Elon Musk. Bloomberg first reported on Bannon’s departure, adding that Musk ordered the Dojo team to be shut down, with engineers in the group getting reassigned to other initiatives.
Tesla didn’t immediately respond to a request for comment.
Since early last year, Musk has been trying to convince shareholders that Tesla, his only publicly traded business, is poised to become an artificial intelligence and robotics powerhouse, and not just an electric vehicle company.
A centerpiece of the transformation was Dojo, a custom-built supercomputer designed to process and train AI models drawing on the large amounts of video and other data captured by Tesla vehicles.
Tesla’s focus on Dojo and another computing cluster called Cortex was meant to improve the company’s advanced driver assistance systems, and to enable Musk to finally deliver on his promise to turn existing Teslas into robotaxis.
On Tesla’s earnings call in July, Musk said the company expected its newest version of Dojo to be “operating at scale sometime next year, with scale being somewhere around 100,000 H-100 equivalents,” referring to a supercomputer built using Nvidia’s state-of-the-art chips.
Tesla recently struck a $16.5 billion deal under which Samsung will produce more of Tesla’s own AI6 chips domestically.
Tesla is running a test Robotaxi service in Austin, Texas, and a related car service in San Francisco. In Austin, the company’s vehicles require a human safety supervisor in the front passenger seat ready to intervene if necessary. In San Francisco, the car service is operated by human drivers, though invited users can hail a ride through a “Tesla Robotaxi” app.
On the earnings call, Musk faced questions about how he sees Tesla and his AI company, xAI, keeping their distance given that they could be competing against one another for AI talent.
Musk said the companies “are doing different things.” He said, “xAI is doing like terabyte scale models and multi-terabyte scale models.” Tesla uses “100x smaller models,” he said, with the automaker focused on “real-world AI,” for its cars and robots and xAI focused on developing software that strives for “artificial super intelligence.”
Musk also said that some engineers wouldn’t join Tesla because “they wanted to work on AGI,” one reason he said he formed a new company.
Tesla has experienced an exodus of top talent this year due to a combination of job terminations and resignations. Milan Kovac, who was Tesla’s head of Optimus robotics engineering, departed, as did David Lau, a vice president of software engineering, and Omead Afshar, Musk’s former chief of staff.