
Seventh grade Alabama teacher Sarah Wildes relies on a tool called Checkology to teach her students how to spot real news and misinformation.
Courtesy of Sarah Wildes

When Sarah Wildes, a seventh grade teacher in Alabama, was asked by a student about the mass confusion surrounding the results of the 2020 U.S. presidential election, she knew she had a big job in front of her. 

“I have to tread lightly, but I pointed out that we do know,” said Wildes, a science and technology teacher at Sparkman Middle School in the small town of Toney. “There are facts. There have been committees who reviewed the election. The numbers show us a truth, but the social media bubbles confuse us about that truth.”

Wildes and teachers across the country face a vexing and evolving challenge as the new school year begins and students return to the classroom following a roughly 18-month hiatus from normal in-person learning. Since the last time full classrooms congregated, a whole industry of misinformation has exploded online, spreading conspiracy theories on everything from the alleged steal of the presidential election, which Joe Biden won, to the supposed presence of microchips in Covid-19 vaccines.

It’s bad enough that kids are exposed to dangerous untruths across their favorite social media apps like Facebook, YouTube and TikTok. An equally large problem is that, while stuck at home during the pandemic, many students had their days of virtual schooling interrupted by screaming parents, who themselves had fallen deep into the internet’s darkest rabbit holes.

Some 15% of Americans believe QAnon conspiracy theories, according to a May report from non-profit groups Public Religion Research Institute and Interfaith Youth Core. QAnon believers were largely responsible for spreading “stop the steal” content on social media, backing the lie that former president Donald Trump won the election.

Meanwhile, 22% of Americans self-identify as anti-vaxxers, according to an academic study published in May, even as scientists and public health officials agree on the extreme efficacy and importance of Covid-19 vaccines.

For kids who have yet to fully develop critical thinking skills, basic truths are being distorted by the combination of misinformation on social media and a growing population of duped and radicalized parents.

“They were at home consuming this information without really being able to bust out of their own bubble having been in quarantine,” Wildes said. “They were starved for guidance on how to navigate all the things that they were seeing.”

In addition to dealing with the standard curriculum and trying to make up for lost classroom time, Wildes is taking on the responsibility of helping students filter out misinformation and find reliable news outlets. She’s leaning on the News Literacy Project (NLP), a non-profit in Washington, D.C., that developed Checkology, an online tool for educators to help students spot and dispel misinformation.

Checkology teaches students about the various types of misinformation they may encounter, the role the press plays in democracy, how to recognize bias in the news and how people fall into conspiracies. Since its launch in May 2016, Checkology has registered more than 1.3 million students and about 36,300 teachers.

“The pandemic, the election, social justice issues — people are looking for information, and educators need support to navigate that disinformation out there,” said Shaelynn Farnsworth, NLP’s director of educator network expansion.

Finding a Reddit community

Other online communities are giving the children of conspiracy theorists ways to connect, share their experiences and detox.

Mobius, a 17-year-old who lives on the West Coast, said his mom is an anti-vaxxer who has started down the path of QAnon. Mobius, who asked us not to use his real name to preserve his family relationships, said his mom talks about the coronavirus as biological warfare and thinks the government is trying to profit from vaccines. He said 90% of her information comes from Facebook or TikTok.

In July, most of Mobius’s family was infected with Covid-19 after his mother contracted the virus and didn’t enter quarantine. She even traveled by plane while she was sick, said Mobius, adding that he was the only one in the family to get vaccinated and to avoid infection.

He said his mom wouldn’t let his siblings get the vaccine and that he missed several childhood immunizations growing up.

Mobius posted about his experience in QAnonCasualties, a Reddit group that says it offers “support, resources and a place to vent” for people who have friends or loved ones “taken in by QAnon.” The group was created in July 2019 and has 186,000 members. It’s flooded with stories that resemble Mobius’ experience.

A woman wearing a pin during an anti-mandatory coronavirus disease (COVID-19) vaccine protest held outside New York City Hall in New York, August 16, 2021.
Jeenah Moon | Reuters

One user post last month was from a university student recounting the anxiety she felt after her dad showed her a video that claimed Covid vaccines would make her infertile. A more recent post came from a 16-year-old girl, who claims she recently “escaped” her abusive QAnon parents and doesn’t know whether to get the Covid vaccination.

“I don’t know what’s real or not anymore,” she wrote on the Reddit board. “I’m terrified and confused. My parents told me I’d get blood clots, I’d die, be dead within five years, be sterile, microchipped, tracked by the government, controlled by the government etc.” 

QAnon is a far-right conspiracy theory movement that emerged after the 2016 election. Though the messaging is disjointed, members often claim the world is controlled by a cabal of Satanic and cannibalistic elites who conspired against former President Trump.

Mobius, who just entered college and needed the vaccination to attend, said he began to question his family’s views around the time Trump entered office. He got more proactive in seeking the facts, turning to news sources rather than listening to his mom. He landed on the Associated Press and BBC as his most trusted outlets.

Still, Mobius said he tries to avoid talking about anything remotely political with his mom’s side of the family. He said his mom has gotten better about spouting conspiracies since getting sick, though her beliefs haven’t changed.

On QAnonCasualties, divorcees mourn the loss of decades-long relationships, workers talk about leaving their jobs because of a supervisor’s anti-vaccine rants, and teens and young adults desperately vent about their parents.

Afraid of ‘vaccine toxicity’

Another member of the Reddit group, who asked to be called Vulture, posted on the board in early August, looking for support and advice on dealing with her mom. 

Vulture, who’s 18 and was only comfortable going by a pseudonym, described her mom as an anti-vaxxer who began diving into the QAnon conspiracy in early 2020, at the start of the pandemic.

She said her mom believes 5G cell phone towers are harmful (one QAnon theory says that 5G causes the coronavirus), and she doesn’t allow her children to have WiFi on at night because she’s concerned about radiation. Vulture said her mom gets her information from Facebook, YouTube, Telegram and even in-person groups. 

Vulture’s parents divorced and her mom is now married to another woman. Her mom’s wife got vaccinated earlier this year, creating a rift in the relationship because Vulture’s mom was afraid she had “vaccine toxicity” and told her wife she no longer loved her unconditionally.

Vulture said her mom has also threatened to kick her and her younger sibling out of the house if they get vaccinated, a threat that weighs heavily on her, especially as she prepares for her freshman year in college.

Jake A, 33, aka Yellowstone Wolf, from Phoenix, wrapped in a QAnon flag, addresses supporters of US President Donald Trump as they protest outside the Maricopa County Election Department as counting continues after the US presidential election in Phoenix, Arizona, on November 5, 2020.
Olivier Touron | AFP | Getty Images

While teenagers like Mobius and Vulture are finding like-minded people online, groups such as Polarization and Extremism Research Innovation Lab (PERIL) and the Southern Poverty Law Center (SPLC) are trying to protect kids from falling victim to hoaxes and disinformation.

Last year PERIL and the SPLC published “A Parents & Caregivers Guide to Online Youth Radicalization” to help adults deal with teenagers who are at risk of exposure to extremism and conspiracy theories.

“Radicalization is a problem for our entire society, from the innocent people it victimizes to the family bonds it breaks apart,” the guide says. It includes sections on how to recognize warning signs, understanding what drives people toward extremism and how caregivers can engage with at-risk youth.

PERIL and the SPLC also created supplements to the guide for educators, counselors and coaches and mentors.


Wildes, the Alabama school teacher, sees a bigger role for the classroom and technology like Checkology in combating the spread of misinformation.

“Once people start going down the rabbit hole, it’s hard to get them out,” she said.

Checkology isn’t dogmatic in its approach, Wildes said. Through interactive lessons, the program is designed to give kids the tools to figure out what’s a hoax and what’s a fact supported by evidence. NLP also puts together a weekly newsletter, The Sift, which is intended to help educators teach students news literacy and to understand why a hoax or conspiracy theory that’s spreading is inaccurate.

Based on the behavior she witnesses, Wildes thinks many middle school kids today are better equipped than adults to reject misinformation.

“I think they really enjoy being spoken to in a way that makes them responsible for their own thoughts,” she said.


Elon Musk’s xAI raises $10 billion in debt and equity as it steps up challenge to OpenAI


Elon Musk announced his new company xAI, which he says has the goal to understand the true nature of the universe.

Jaap Arriens | Nurphoto | Getty Images

xAI, the artificial intelligence startup run by Elon Musk, raised a combined $10 billion in debt and equity, Morgan Stanley said.

Half of that sum was raised through secured notes and term loans, while the other $5 billion came from strategic equity investment, the bank said on Monday.

The funding gives xAI more firepower to build out infrastructure and develop its Grok AI chatbot as it looks to compete with bitter rival OpenAI, as well as with a swathe of other players including Amazon-backed Anthropic.

In May, Musk told CNBC that xAI has already installed 200,000 graphics processing units (GPUs) at its Colossus facility in Memphis, Tennessee. Colossus is xAI’s supercomputer that trains the firm’s AI. Musk said at the time that the company would continue buying chips from semiconductor giants Nvidia and AMD and that xAI is planning a 1-million-GPU facility outside of Memphis.

Addressing the latest funds raised by the company, Morgan Stanley said “the proceeds will support xAI’s continued development of cutting-edge AI solutions, including one of the world’s largest data centers and its flagship Grok platform.”

xAI continues to release updates to Grok and unveiled the Grok 3 AI model in February. Musk has sought to boost the use of Grok by integrating the AI model with the X social media platform, formerly known as Twitter. In March, xAI acquired X in a deal that valued the site at $33 billion and the AI firm at $80 billion. It’s unclear if the new equity raise has changed that valuation.

xAI was not immediately available for comment.

Last year, xAI raised $6 billion at a valuation of $50 billion, CNBC reported.

Morgan Stanley said the latest debt offering was “oversubscribed and included prominent global debt investors.”

Competition among American AI startups is intensifying, with companies raising huge amounts of funding to buy chips and build infrastructure.

OpenAI in March closed a $40 billion financing round that valued the ChatGPT developer at $300 billion. Its big investors include Microsoft and Japan’s SoftBank.

Anthropic, the developer of the Claude chatbot, closed a funding round in March that valued the firm at $61.5 billion. The company then received a five-year $2.5 billion revolving credit line in May.

Musk has called Grok a “maximally truth-seeking” AI that is also “anti-woke,” in a bid to set it apart from its rivals. But this has not come without its fair share of controversy. Earlier this year, Grok responded to user queries with unrelated comments about the controversial topic of “white genocide” and South Africa.

Musk has also clashed with fellow AI leaders, including OpenAI’s Sam Altman. Most famously, Musk claimed that OpenAI, which he co-founded, has deviated from its original mission of developing AI to benefit humanity as a nonprofit and is instead focused on commercial success. In February, Musk, alongside a group of investors, put in a bid of $97.4 billion to buy control of OpenAI. Altman swiftly rejected the offer.

CNBC’s Lora Kolodny and Jonathan Vanian contributed to this report.


China’s Huawei open-sources AI models as it seeks adoption across the global AI market


In recent years, the company has transformed from a competent private sector telecommunications firm into a “muscular technology juggernaut straddling the entire AI hardware and software stack,” said Paul Triolo, partner and senior vice president for China at advisory firm DGA-Albright Stonebridge Group.

Ramon Costa | SOPA Images | Lightrocket | Getty Images

Huawei has open-sourced two of its artificial intelligence models — a move tech experts say will help the U.S.-blacklisted firm continue to build its AI ecosystem and expand overseas. 

The Chinese tech giant announced on Monday the open-sourcing of the AI models under its Pangu series, as well as some of its model reasoning technology.

The moves are in line with other Chinese AI players that continue to push an open-source development strategy. Baidu also open-sourced its large language model series Ernie on Monday. 

Tech experts told CNBC that Huawei’s latest announcements not only highlight how it is solidifying itself as an open-source LLM player, but also how it is strengthening its position across the entire AI value chain as it works to overcome U.S.-led AI chip export restrictions.

In recent years, the company has transformed from a competent private sector telecommunications firm into a “muscular technology juggernaut straddling the entire AI hardware and software stack,” said Paul Triolo, partner and senior vice president for China at advisory firm DGA-Albright Stonebridge Group.

In its announcement Monday, Huawei called the open-source moves another key measure for its “Ascend ecosystem strategy” that would help speed up the adoption of AI across “thousands of industries.”

The Ascend ecosystem refers to AI products built around the company’s Ascend AI chip series, which is widely considered to be China’s leading competitor to products from American chip giant Nvidia. Nvidia is restricted from selling its advanced products to China.

A Google-like strategy?

Making Pangu available as open source allows developers and businesses to test the models and customize them for their needs, said Lian Jye Su, chief analyst at Omdia. “The move is expected to incentivize the use of other Huawei products,” he added.

According to experts, the coupling of Huawei’s Pangu models with the company’s AI chips and related products gives the company a unique advantage, allowing it to optimize its AI solutions and applications. 

While competitors like Baidu have LLMs with broad capabilities, Huawei has focused on specialized AI models for sectors such as government, finance and manufacturing.

“Huawei is not as strong as companies like DeepSeek and Baidu at the overall software level – but it doesn’t need to be,” said Marc Einstein, research director at Counterpoint Research. 

“Its objective is to ultimately use open source products to drive hardware sales, which is a completely different model from others. It also collaborates with DeepSeek, Baidu and others and will continue to do so,” he added. 


Ray Wang, principal analyst at Constellation Research, said the chip-to-model strategy is similar to that of Google, a company that is also developing AI chips and AI models like its open-source Gemma models.

Huawei’s announcement on Monday could also help with its international ambitions. Huawei, along with players like Zhipu AI, has been slowly making inroads into new overseas markets.

In its announcement Monday, Huawei invited developers, corporate partners and researchers around the world to download and use its new open-source products in order to gather feedback and improve them.

“Huawei’s open-source strategy will resonate well in developing countries where enterprises are more price-sensitive as is the case with [Huawei’s] other products,” Einstein said. 

As part of its global strategy, the company has also been looking to bring its latest AI data center solutions to new countries. 


As nations build ‘sovereign AI,’ open-source models and cloud computing can help, experts say


Digital illustration of a glowing world map with “AI” text across multiple continents, representing the global presence and integration of artificial intelligence.

Fotograzia | Moment | Getty Images

As artificial intelligence becomes more democratized, it is important for emerging economies to build their own “sovereign AI,” panelists told CNBC’s East Tech West conference in Bangkok, Thailand, on Friday.

In general, sovereign AI refers to a nation’s ability to control its own AI technologies, data and related infrastructure, ensuring strategic autonomy while meeting its unique priorities and security needs.

However, this sovereignty has been lacking, according to panelist Kasima Tharnpipitchai, head of AI strategy at SCB 10X, the technology investment arm of Thailand-based SCBX Group. He noted that many of the world’s most prominent large language models, operated by companies such as Anthropic and OpenAI, are based on the English language.

“The way you think, the way you interact with the world, the way you are when you speak another language can be very different,” Tharnpipitchai said. 

It is, therefore, important for countries to take ownership of their AI systems, developing technology for specific languages, cultures, and countries, rather than simply translating English-based models.


Panelists agreed that the digitally savvy ASEAN region, with a total population of nearly 700 million people, is particularly well positioned to build its sovereign AI. People under the age of 35 make up around 61% of the population, and about 125,000 new users gain access to the internet daily.

Given this context, Jeff Johnson, managing director of ASEAN at Amazon Web Services, said, “I think it’s really important, and we’re really focused on how we can really democratize access to cloud and AI.”

Open-source models 

According to panelists, one key way that countries can build up their sovereign AI environments is through the use of open-source AI models. 

“There is plenty of amazing talent here in Southeast Asia and in Thailand, especially. To have that captured in a way that isn’t publicly accessible or ecosystem developing would feel like a shame,” said SCB 10X’s Tharnpipitchai. 

Doing open-source is a way to create a “collective energy” to help Thailand better compete in AI and push sovereignty in a way that is beneficial for the entire country, he added. 

Access to computing 


“We’re here in Thailand and across Southeast Asia to support all industries, all businesses of all shapes and sizes, from the smallest startup to the largest enterprise,” said AWS’s Johnson. 

He added that the economic model of the company’s cloud services makes it easy to “pay for what you use,” thus lowering the barriers to entry and making it very easy to build models and applications. 

In April, the U.N. Trade and Development Agency said in a report that AI was projected to reach $4.8 trillion in market value by 2033. However, it warned that the technology’s benefits remain highly concentrated, with nations at risk of lagging behind. 

Among UNCTAD’s recommendations to the international community for driving inclusive growth was shared AI infrastructure, the use of open-source AI models and initiatives to share AI knowledge and resources.
