One of the nation's leading Christian-based entertainment websites is urging parents not to take their children to watch the new Barbie movie.
Movieguide, founded by Ted Baehr and his wife, Lili Baehr, posted a warning to parents on Monday, asserting that the PG-13 Barbie film includes adult content, among it LGBT themes, that many moms and dads will not want their kids to see. "Warning: Don't take your daughter to see Barbie," the headline said.
"Many parents will wrongly believe Barbie is a kid-friendly film due to the film's ties to the popular Barbie doll franchise," Movieguide said.
"The film forgets its core audience of families and children while catering to nostalgic adults and pushing lesbian, gay, bisexual and transgender character stories," Movieguide added.
"They had a built-in market and audience for this franchise that they completely ignored," the site noted. "Millions of families would have turned out to the theaters and purchased tickets, but instead, Mattel chose to cater to a small percentage of the population who has proven over and over to abandon the box office. Movieguide's 40 years of research indicate this just isn't true, and Mattel has made a grievous mistake."
"Movies do best when they promote family and biblical values," Movieguide said.
"Even the Barbie cartoon movies promote redemption, compassion, teamwork, kindness to strangers, self-sacrifice and more," said Movieguide. "Parents trust the brand, and that is why they must know the truth about the upcoming movie."
The film's director, Greta Gerwig, previously said the film's LGBT storyline was essential. Gerwig directed Lady Bird and Little Women (2019) and is set to direct at least two Chronicles of Narnia movies for Netflix, according to Collider.
"There's no way we could have told this story without bringing in the LGBTQ+ community," Gerwig told Out, "and it was important for us to represent the diversity that Mattel has created with all of the different Barbies and Kens that exist today."
Barbie is rated PG-13 for suggestive references and brief language.
Michael Foust has covered the intersection of faith and news for 20 years. His stories have appeared in Baptist Press, Christianity Today, The Christian Post, the Leaf-Chronicle, the Toronto Star and the Knoxville News-Sentinel.
Yoshua Bengio (L) and Max Tegmark (R) discuss the development of artificial general intelligence during a live podcast recording of CNBC’s “Beyond The Valley” in Davos, Switzerland in January 2025.
Artificial general intelligence built like "agents" could prove dangerous as its creators might lose control of the system, two of the world's most prominent AI scientists told CNBC.
In the latest episode of CNBC’s “Beyond The Valley” podcast released on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and the President of the Future of Life Institute, and Yoshua Bengio, dubbed one of the “godfathers of AI” and a professor at the Université de Montréal, spoke about their concerns about artificial general intelligence, or AGI. The term broadly refers to AI systems that are smarter than humans.
Their fears stem from the world’s biggest firms now talking about “AI agents” or “agentic AI” — which companies claim will allow AI chatbots to act like assistants or agents and assist in work and everyday life. Industry estimates vary on when AGI will come into existence.
With that concept comes the idea that AI systems could have some “agency” and thoughts of their own, according to Bengio.
“Researchers in AI have been inspired by human intelligence to build machine intelligence, and, in humans, there’s a mix of both the ability to understand the world like pure intelligence and the agentic behavior, meaning … to use your knowledge to achieve goals,” Bengio told CNBC’s “Beyond The Valley.”
“Right now, this is how we’re building AGI: we are trying to make them agents that understand a lot about the world, and then can act accordingly. But this is actually a very dangerous proposition.”
Bengio added that pursuing this approach would be like “creating a new species or a new intelligent entity on this planet” and “not knowing if they’re going to behave in ways that agree with our needs.”
“So instead, we can consider, what are the scenarios in which things go badly and they all rely on agency? In other words, it is because the AI has its own goals that we could be in trouble.”
The idea of self-preservation could also kick in as AI gets even smarter, Bengio said.
“Do we want to be in competition with entities that are smarter than us? It’s not a very reassuring gamble, right? So we have to understand how self-preservation can emerge as a goal in AI.”
AI tools the key
For MIT’s Tegmark, the key lies in so-called “tool AI” — systems that are created for a specific, narrowly-defined purpose, but that don’t have to be agents.
Tegmark said a tool AI could be a system that tells you how to cure cancer, or something that possesses “some agency” like a self-driving car “where you can prove or get some really high, really reliable guarantees that you’re still going to be able to control it.”
“I think, on an optimistic note here, we can have almost everything that we’re excited about with AI … if we simply insist on having some basic safety standards before people can sell powerful AI systems,” Tegmark said.
“They have to demonstrate that we can keep them under control. Then the industry will innovate rapidly to figure out how to do that better.”
Tegmark’s Future of Life Institute in 2023 called for a pause to the development of AI systems that can compete with human-level intelligence. While that has not happened, Tegmark said people are talking about the topic, and now it is time to take action to figure out how to put guardrails in place to control AGI.
“So at least now a lot of people are talking the talk. We have to see if we can get them to walk the walk,” Tegmark told CNBC’s “Beyond The Valley.”
“It’s clearly insane for us humans to build something way smarter than us before we figured out how to control it.”
There are several views on when AGI will arrive, partly driven by varying definitions.
OpenAI CEO Sam Altman said his company knows how to build AGI and said it will arrive sooner than people think, though he downplayed the impact of the technology.
“My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” Altman said in December.
The ChargeX Consortium has figured out how to automatically restart failed EV charging sessions at fast chargers so drivers don’t have to.
Every EV driver has been there. You plug in, walk away to grab food or run errands, and expect your battery to be juicing up at a DC fast charger, only to return and realize nothing happened. Maybe the session failed, or maybe the charger glitched. Either way, you’re stuck unplugging, plugging back in, and now it’s going to take twice as long to charge.
The ChargeX Consortium (National Charging Experience Consortium), which is made up of researchers from the National Renewable Energy Laboratory (NREL), Idaho National Laboratory (INL), and Argonne National Laboratory (ANL), along with industry stakeholders, has come up with a smart fix for one of the most frustrating parts of public EV charging: failed sessions.
Its new report highlights the benefits of what it calls “seamless retry” – a hands-free tech solution that automatically restarts failed charging attempts. In other words, the driver no longer needs to physically unplug and replug the charging connector when a charging session fails.
The consortium’s new tech is designed specifically for DC fast charging. The “novel mechanism” automatically resets both the EV and the charger, then restarts the session in the background, so drivers don’t have to return to the car – or even have to think about it.
Ed Watt, a researcher at NREL and lead author of the "Recommended Practice Seamless Retry for Electric Vehicle Charging" report, said, "With a seamless retry mechanism in place, an EV driver at a retail center can plug in a charging connector, provide user input data, leave to shop, and feel confident that they will return to a charged vehicle."
The researchers didn’t just focus on the perks of seamless retry – they also looked at potential downsides. One concern was the extra time it might take for the system to restart a failed session, which could leave drivers frustrated. To tackle that, the consortium suggests that the EV industry provide transparency in the form of real-time status updates, insights into what went wrong, and recommendations based on the type of charging failure and number of attempts made.
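To make the idea concrete, here is a minimal sketch of what an automatic retry loop along these lines could look like. This is an illustration based on the behavior described above, not the ChargeX Consortium's actual mechanism; the retry cap, wait time, and the charger, vehicle, and notify interfaces are all assumptions for the example.

```python
import time

MAX_ATTEMPTS = 3          # assumed cap on automatic retries
RESET_WAIT_SECONDS = 10   # assumed settling time after a reset


def seamless_retry(charger, vehicle, notify):
    """Try to start a charging session, retrying automatically on failure."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        session = charger.start_session(vehicle)
        if session.succeeded:
            notify(f"Charging started on attempt {attempt}.")
            return session

        # Reset both sides in the background, then try again,
        # so the driver never has to unplug and replug the connector.
        notify(f"Attempt {attempt} failed ({session.error_code}); retrying automatically.")
        charger.reset()
        vehicle.reset_charge_port()
        time.sleep(RESET_WAIT_SECONDS)

    # After exhausting retries, surface a recommendation instead of failing silently,
    # in the spirit of the transparency the consortium suggests.
    notify(
        f"Charging could not start after {MAX_ATTEMPTS} attempts; "
        "please unplug and replug the connector or try another stall."
    )
    return None
```

In practice the status messages would feed the real-time updates and failure-specific recommendations described above, and the retry limit would keep a persistent fault from delaying the driver indefinitely.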
Going forward, as the user experience becomes clearer, more work will fine-tune seamless retry. The ChargeX Consortium will keep refining the system – developing smarter, more targeted retry methods, ironing out implementation details, and running verification tests to make sure everything works seamlessly in the real world.