As the generative AI field heats up, consumer-facing chatbots are fielding questions about business strategy, designing study guides for math class, offering advice on salary negotiation and even writing wedding vows. And things are just getting started. 

OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing and Anthropic’s Claude are a few of today’s leading chatbots, but over the coming year, we’ll likely see more emerge: In the venture capital space, generative AI-related deals totaled $1.69 billion worldwide in Q1 of this year, a 130% spike from the previous quarter’s $0.73 billion – with another $10.68 billion worth of deals announced but not yet completed in Q1, according to PitchBook data. 


Two months after ChatGPT’s launch, it surpassed 100 million monthly active users, making it the fastest-growing consumer application in history: “a phenomenal uptake – we’ve frankly never seen anything like it, and interest has grown ever since,” Brian Burke, a research VP at Gartner, told CNBC. “From its release on November 30 to now, our inquiry volume has shot up like a hockey stick; every client wants to know about generative AI and ChatGPT.” 

These types of chatbots are built atop large language models, or LLMs, a machine learning tool that uses large amounts of internet data to recognize patterns and generate human-sounding language. If you’re a beginner, many of the sources we spoke with agreed that the best way to start using a chatbot is to dive in and try things out. 

“People spend too much time trying to find the perfect prompt – 80% of it is just using it interactively,” Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work and education, told CNBC. 

Here are some tips from the pros:

Keep data privacy in mind. 

When you use a chatbot like ChatGPT or Bard, the information you put in – what you type, what you receive in response, and the changes you ask for – may be used to train future models. OpenAI says as much in its terms. Although some companies offer ways to opt out – OpenAI allows this under “data controls” in ChatGPT settings – it’s still best to refrain from sharing sensitive or private data in chatbot conversations, especially while companies are still finessing their privacy measures. For instance, a ChatGPT bug in March briefly allowed users to see parts of each other’s conversation histories. 

“If you wouldn’t post it on Facebook, don’t put it into ChatGPT,” Burke said. “Think about what you put into ChatGPT as being public information.”

Offer up context. 

For the best possible return on your time, give the chatbot context about how it should act and who it’s serving. For example, you can spell out the persona you want the chatbot to assume: “You are a [marketer, teacher, philosopher, etc.].” You can also add context like: “I am a [client, student, beginner, etc.].” This saves time by telling the chatbot directly which role to play and which “lens” to filter the information through so that the answer is actually useful to you. 

For instance, if you’re a creative consultant looking for a chatbot to help you with analysis on company logos, you could type out something like, “Act as if you are a graphic designer who studies logo design for companies. I am a client who owns a company and is looking to learn about which logos work best and why. Generate an analysis on the ‘best’ company logos for publicly listed companies and why they’re seen as good choices.” 
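For readers who reach these models through code rather than a chat window, the same persona-and-context pattern carries over. Below is a minimal sketch using OpenAI’s Python library, in which the “system” message carries the persona and the “user” message carries the reader’s context and request; the model name and prompt wording are illustrative assumptions, not recommendations from any of the companies mentioned.

# Minimal sketch: persona-plus-context prompting with OpenAI's Python library.
# The model name and prompt text below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[
        # The system message sets the persona the chatbot should assume.
        {"role": "system",
         "content": "Act as if you are a graphic designer who studies logo design for companies."},
        # The user message explains who is asking and what they want.
        {"role": "user",
         "content": "I am a client who owns a company and wants to learn which logos work best and why. "
                    "Generate an analysis of the 'best' logos among publicly listed companies and why "
                    "they're seen as good choices."},
    ],
)

print(response.choices[0].message.content)

The same structure works for the tips that follow: swap in a different persona line and a different request, then compare how the answers change.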

“If you ask Bard to write an inspirational speech, Bard’s response may be a bit more generic – but if you ask Bard to write a speech in a specific style, tone or format, you’ll likely get a much better response,” Sissie Hsiao, a VP at Google, told CNBC.

Make the chatbot do all the work.

Sometimes the best way to get what you want is to ask the chatbot itself for advice – whether you’re asking about what’s possible as a user, or about the best way to word your prompt.

“Ask it the simple question, what kinds of things can you do? And it’ll give you a list of things that would actually surprise most people,” Burke said. 

You can also game the system by asking something like, “What’s the best way to ask you for help writing a shopping list?” or even assigning the chatbot a prompt-writing job, like, “Your job is to generate the best and most efficient prompts for ChatGPT. Generate a list of the best prompts to ask ChatGPT for healthy one-pot dinner recipes.” 

Ask for help with brainstorming. 

Whether you’re looking for vacation destinations, date ideas, poetry prompts or content strategies for going viral on social media, many people are using chatbots as a jumping-off point for brainstorming sessions. 

“The biggest thing…that I find them to be helpful for is inspiring me as the user and helping me learn things that I wouldn’t have necessarily thought of on my own,” Josh Albrecht, CTO of Generally Intelligent, an AI research startup, told CNBC. “Maybe that’s why they’re called generative AI – they’re really helpful at the generative part, the brainstorming.” 

Create a crash course. 

Let’s say you’re trying to learn about geometry, and you consider yourself a beginner. You could kick off your studies by asking a chatbot something like, “Explain the basics of geometry as if I’m a beginner,” or, “Explain the Pythagorean Theorem as if I’m a five-year-old.” 

If you’re looking for something more expansive, you can ask a chatbot to create a “crash course” for you, specifying how much time you’ve got (three days, a week, a month) or how many hours you want to spend learning the new skill. You can write something like, “I’m a beginner who wants to learn how to skateboard. Create a two-week plan for how I can learn to skateboard and do a kickflip.” 

To expand your learning plan beyond the chatbot, you can also ask for a list of the most important books about a topic, some of the most influential people in the field and any other resources that could help you advance your skill set. 

Don’t be afraid to give notes and ask for changes. 

“The worst thing you could do if you’re actually trying to use the output of ChatGPT is [to] just ask it one thing once and then walk away,” Mollick said. “You’re going to get very generic output. You have to interact with it.”

Sometimes you won’t choose the perfect prompt, or the chatbot won’t generate the output you were looking for – and that’s okay. You can still tweak things to make the information more helpful, such as asking follow-up questions like, “Can you make it sound less generic?” or “Can you make the first paragraph more interesting?” or even restating your original ask in a different way. 

Take everything with many grains of salt.

Chatbots have a documented tendency to fabricate information, especially when their training data doesn’t fully cover an area you’re asking about, so it’s important to take everything with a grain of salt. Say you’re asking for a biography of Albert Einstein: A chatbot might tell you the famous scientist wrote a book called “How to Be Smart,” when, unfortunately, he never did. And because large language models are trained on large swaths of the internet, they reproduce the patterns in that data, which means they can also generate biased outputs or misinformation. 

“Where there’s less information, it just makes stuff up,” Burke said, adding, “These hallucinations are extraordinarily convincing…You can’t trust these models to give you accurate information all the time.”

Experiment and try different approaches.

Whether you’re asking a chatbot to generate a list of action items from a meeting transcript or translate something from English to Tagalog, there’s an untold range of use cases for generative AI. So when you’re using a chatbot, it’s worth thinking about the things you want to learn or need help with and experimenting with how well the system can deliver. 

“AI is a general-purpose technology; it does a lot of stuff, so the idea is that whatever field you’re in and whatever job you’re in, it’s going to affect aspects of your job differently than anyone else on the planet,” Mollick said. “It’s about thinking about how you want to use it…You have to figure out a way to work with the system…and the only way to do that is through experimenting.” 



X lawsuit vs. Apple and OpenAI stays in Fort Worth, Texas; judge suggests they move there


A judge ordered that X and xAI’s lawsuit accusing Apple and OpenAI of trying to maintain monopolies in artificial intelligence markets must remain in federal court in Fort Worth, Texas, despite “at best minimal connections” to that geographic area by any of the companies.

Judge Mark Pittman, in a sharply ironic four-page order on Thursday, encouraged the companies to relocate their headquarters to Fort Worth, given their preference for the antitrust lawsuit to be heard there.

In a footnote, he even pointed the companies to the website of the City of Fort Worth’s Business Services unit “to get the process started” on relocating there.

Pittman’s order implicitly takes aim at the tendency of some plaintiffs of a conservative bent to file lawsuits in the Fort Worth division of the federal court for the Northern District of Texas to increase their chances of winning favorable rulings from the two active judges there, both of whom were appointed by Republicans.

Those plaintiffs have included X and Tesla, both controlled by mega-billionaire Elon Musk, who, until earlier this year, was a top advisor to President Donald Trump.

Pittman was appointed by Trump, but has been critical of the practice of targeting lawsuits to specific judicial districts, known as forum-shopping.

In his order on Thursday, Pittman said that the Fort Worth division’s docket is two to three times busier than the docket of the Dallas division, which has more judges.

Pittman’s order noted that neither Apple nor OpenAI has a strong connection to Fort Worth, other than several Apple stores.

“And, of course, under that logic, there is not a district and division in the entire United States that would not be an appropriate venue for this lawsuit,” Pittman wrote.

X Corp. is headquartered in Bastrop, Texas — roughly 200 miles south of Fort Worth — while both Apple and OpenAI are headquartered in California. Musk’s xAI acquired his social media company X in March in an all-stock transaction.

“Given the present desire to have venue in Fort Worth, the numerous high-stakes lawsuits previously adjudicated in the Fort Worth Division, and the vitality of Fort Worth, the Court highly encourages the Parties to consider moving their headquarters to Fort Worth,” the judge wrote.

“Fort Worth has much more going for it than just the unique artwork on the fourth floor of its historic federal courthouse,” Pittman said.

The judge had asked the three companies to explain why the case belonged in the Fort Worth court.

But neither Apple nor OpenAI requested that the case be moved before the judge’s Oct. 9 deadline, Pittman noted in the order.


Still, Pittman opted to keep the case in the Fort Worth division.

“The fact that neither Defendant filed a motion to transfer venue serves as a consideration for the Court,” the judge wrote. “And the Court ‘respect[s]’ Plaintiffs’ choice of venue.”

“But the Court does not make its decision lightly or without reservations. This case contains at best minimal connections to the Fort Worth Division of the Northern District of Texas,” Pittman wrote. “Possibly one of the strongest points made by Plaintiffs is the mere fact that ‘Apple sell[s] iPhones [in this Division] (and many other products) and OpenAI offer[s] ChatGPT nationwide.'”

“After more than a decade of service presiding over thousands of cases in three different courts, the undersigned continues to feel strongly that ‘[v]enue is not a continental breakfast; you cannot pick and choose on a Plaintiffs’ whim where and how a lawsuit is filed,'” the judge sniped.

But Pittman noted that he had little, if any, choice in the decision to keep the suit in his courthouse.

The U.S. 5th Circuit Court of Appeals, whose jurisdiction includes federal courts in Texas, has raised “the standard for transferring venue to new heights,” Pittman wrote.

Last year, the 5th Circuit twice slapped down orders by Pittman to transfer to Washington, D.C., a lawsuit by trade groups representing large banks challenging a rule issued by the Consumer Financial Protection Bureau, which capped credit card late fees at $8 per month.

The 5th Circuit said Pittman’s court “clearly abused its discretion” in trying to move the case.

OpenAI declined to comment to CNBC, referring a reporter to its public filings in the lawsuit. X and Apple did not immediately respond to a request for comment.

Musk’s X and xAI sued Apple and OpenAI in August, accusing the companies of an “anticompetitive scheme” to maintain monopolies in artificial intelligence markets.

The lawsuit accused Apple of favoring OpenAI’s ChatGPT on its App Store rankings and deprioritizing other competitors, such as xAI’s Grok.

Earlier this month, a judge in Washington, D.C., blocked Musk’s request to move the Securities and Exchange Commission’s lawsuit over his alleged improper disclosure of his stake in Twitter to Texas. Musk renamed Twitter to X after purchasing the company.


Companies are blaming AI for job cuts. Critics say it’s a ‘good excuse’


More companies, from Salesforce to Accenture, are announcing AI-driven layoffs.


From tech to airlines, large global companies have been slashing staff as the real-world impact of artificial intelligence plays out, spooking employees. But critics say AI has become an easy excuse for firms looking to downsize.

Last month, tech consultancy Accenture announced a restructuring plan that includes quick exits for workers who cannot first be reskilled on AI. Days later, Lufthansa said it was going to eliminate 4,000 jobs by 2030 as it leans on AI to increase efficiency.

Salesforce also cut 4,000 customer support roles in September, saying that AI can do 50% of the work at the company. Meanwhile, fintech firm Klarna has reduced staff by 40% as it aggressively adopts AI tools.

Language-learning platform Duolingo has stated that it will gradually stop relying on contractors and use AI to fill the gaps.

The headlines are grim, but Fabian Stephany, assistant professor of AI and work at the Oxford Internet Institute, said there might be more to job cuts than meets the eye.

Previously there may have been some stigma attached to using AI, he said, but now companies are “scapegoating” the technology to take the fall for challenging business moves such as layoffs.

“I’m really skeptical whether the layoffs that we see currently are really due to true efficiency gains. It’s rather really a projection into AI in the sense of ‘We can use AI to make good excuses,'” Stephany said in an interview with CNBC.

Companies can essentially position themselves at the frontier of AI technology to appear innovative and competitive, and simultaneously conceal the real reasons for layoffs, according to Stephany.

“There might be various other reasons why companies are having to get rid of part of their workforce … Duolingo or Klarna are really prime candidates for this because there has been overhiring during Corona [Covid-19 pandemic] as well,” the professor said.

Some companies that flourished during the pandemic “significantly overhired,” he said, and the recent layoffs might just be a “market clearance.”

“It’s to some extent firing people for whom there had not been a sustainable long-term perspective, and instead of saying ‘we miscalculated this two, three years ago,’ they can now come to the scapegoating, and that is saying ‘it’s because of AI though,’” he added.

This pattern has sparked conversation online. One founder, Jean-Christophe Bouglé, even said in a popular LinkedIn post that AI adoption is moving at a “much slower pace” than is being claimed, and that in large corporations “there’s not much happening,” with AI projects even being rolled back due to cost or security concerns.

“At the same time there are announcements of big layoff plans ‘because of AI.’ It looks like a big excuse, in a context where the economy in many countries is slowing down, despite what the incredible performance of stock exchanges suggest,” said Bouglé, who co-founded Authentic.ly.

Feeding the fear of AI

Jasmine Escalera, a careers expert, said this concealment is “feeding the fear of AI,” with employees globally concerned about their jobs being replaced.

“So we already know that employees are scared because companies are not being honest, open and communicative about how they’re implementing AI,” Escalera told CNBC Make It. “Now companies are openly stating ‘We’re doing this [layoffs] because of AI’ so it’s feeding the frenzy.”

Escalera said big companies need to be more responsible, since they set the tone for what counts as the norm in business decision-making, and should avoid greenlighting “bad behavior.”

A Salesforce spokesperson clarified to CNBC that the company deployed its own AI agent, Agentforce, which reduced the number of customer support cases and eliminated the need to “backfill support engineer roles.”


“We’ve successfully redeployed hundreds of employees into other areas like professional services, sales, and customer success,” the Salesforce spokesperson added.

Klarna directed CNBC to its co-founder and CEO Sebastian Siemiatkowski’s comments on X where he explained that the company shrank its workforce from 5,500 to 3,000 people in two years but “AI is only part of that story.”

Siemiatkowski attributed the reduction to slimming down the company’s analytics function into a single “success team,” with many employees then leaving through natural attrition, as well as cuts to its customer success team.

Lufthansa and Accenture declined to comment on the matter and did not share any further details on their AI restructuring strategy. Duolingo did not respond to CNBC’s request for comment.

Mass AI layoffs are not here

The Budget Lab, a non-partisan policy research center at Yale University, released a report on Wednesday showing that the U.S. labor market has actually been little disrupted by AI automation since the release of ChatGPT in 2022.

The lab examined U.S. labor market data from November 2022 to July 2025 using a “dissimilarity index” which measured how much the occupational mix—the share of workers in different jobs—has shifted since AI’s debut and compared it to other technological shifts such as the introduction of computers and the internet. It found that AI hasn’t yet caused widespread job losses.
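As a rough illustration of what a dissimilarity index measures, here is a short sketch in Python. The occupation categories and share numbers are invented for the example, and the Budget Lab’s actual data and methodology may differ.

# Sketch of a dissimilarity index over occupational shares.
# Occupation names and share values are invented for illustration;
# the Budget Lab's actual data and methodology may differ.

def dissimilarity_index(a: dict, b: dict) -> float:
    # Half the sum of absolute differences between two sets of occupation shares:
    # 0 means an identical job mix, 1 means a completely different one. It can be
    # read as the fraction of workers who would have to change occupations for
    # the two distributions to match.
    occupations = set(a) | set(b)
    return 0.5 * sum(abs(a.get(occ, 0.0) - b.get(occ, 0.0)) for occ in occupations)

# Share of workers in each occupation at two points in time (each sums to 1).
shares_2022 = {"office/admin": 0.12, "software": 0.03, "retail": 0.10, "other": 0.75}
shares_2025 = {"office/admin": 0.11, "software": 0.04, "retail": 0.10, "other": 0.75}

print(f"Occupational shift since late 2022: {dissimilarity_index(shares_2022, shares_2025):.3f}")

On this toy data the index comes out to 0.01, meaning only about 1% of workers would have had to switch occupations to account for the change in mix.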

Additionally, New York Fed economists released research in early September showing that firms’ responses about AI use “do not point to significant reductions in employment” across the services and manufacturing industries in the New York–Northern New Jersey region.

It found that 40% of service firms said they were using AI this year, up from 25% last year, while manufacturing firms saw a similar jump, from 16% to 26%. But very few were using AI to lay off workers.

Only 1% of services firms reported AI as the reason for laying off workers in the past six months, down from 10% that had laid off workers because of AI in 2024. Meanwhile, 12% of services firms said AI led them to hire fewer workers in 2025.

By contrast, 35% of services firms have used AI to retrain employees and 11% have hired more as a result.

Stephany said there isn’t much evidence from his research that shows large levels of technological unemployment due to AI.

“Economists call this structural unemployment, so the pie of work is not big enough for everybody anymore, and so people will lose jobs definitely because of AI, [but] I don’t think that this is happening on a mass scale,” he said.

He added that concerns about technology putting an end to human work can be seen throughout history.

“It reoccurred this century alone a dozen times, you can go back to ancient times where Roman emperors put hold to certain machines because they were worried about this and always the contrary happened. The machine made companies, industries more productive.

“It allowed for the emergence of entirely new jobs. If you think about the internet 20 years ago, nobody would have known what a social media influencer is, what an app developer is because it didn’t exist.”



Close to half of Kalshi user base experienced glitches, delays during Saturday college football games


Close to half of Kalshi’s user base experienced glitches and delays on Saturday during college football games, a major source of trades, with some users saying they were temporarily unable to get orders processed.

In a message to a user, obtained by CNBC, the prediction market service apologized for any inconvenience and said it was “looking into” the issues traders were experiencing. 

“The Exchange is experiencing temporary delays,” the message read. “Balances and positions may not be accurately reflected at this time.” 

One user shared a screen recording and screenshots with CNBC that showed they were unable to see their balance or bets while the issues persisted.

A number of users on X reported the website was down when they were trying to place bets on college football games, with some saying they had open orders that wouldn’t process. When CNBC visited the website, it wouldn’t load, showing only a green K with a spinning circle around it for more than 20 minutes. The platform later loaded.

“Earlier today, Kalshi experienced minor glitches that temporarily affected some user experiences. No exchange outage occurred, no funds were affected, and the issues are now resolved,” the company said in a statement.

Earlier, a spokesperson denied there was an outage and said the exchange “never stopped functioning properly.” He added that there has been no impact on clearing, advanced trading, or institutional trading.

“There were some glitches and delays on our web and app product, which affected less than half of our user base,” the spokesperson said. 

A little over a week ago, Kalshi announced a $300 million Series D funding round that valued the company at $5 billion, more than double its $2 billion valuation in June after its Series C round. 

The round was co-led by Andreessen Horowitz (a16z) and Sequoia Capital, with participation from Paradigm. Additional backers included Coinbase Ventures, General Catalyst, Spark Capital and CapitalG. 

The company, founded in 2018, rose to prominence by offering bettors the ability to trade on a wide range of real-world events, from football games to who President Donald Trump could pardon this year.
