As the generative AI field heats up, consumer-facing chatbots are fielding questions about business strategy, designing study guides for math class, offering advice on salary negotiation and even writing wedding vows. And things are just getting started. 

OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing and Anthropic’s Claude are a few of today’s leading chatbots, but over the coming year, we’ll likely see more emerge: In the venture capital space, generative AI-related deals totaled $1.69 billion worldwide in Q1 of this year, a 130% spike from the previous quarter’s $0.73 billion – with another $10.68 billion worth of deals announced but not yet completed in Q1, according to PitchBook data. 


Two months after ChatGPT’s launch, it surpassed 100 million monthly active users, breaking records for the fastest-growing consumer application in history: “a phenomenal uptake – we’ve frankly never seen anything like it, and interest has grown ever since,” Brian Burke, a research VP at Gartner, told CNBC. “From its release on November 30 to now, our inquiry volume has shot up like a hockey stick; every client wants to know about generative AI and ChatGPT.” 

These types of chatbots are built atop large language models, or LLMs: machine learning models that use large amounts of internet data to recognize patterns and generate human-sounding language. If you’re a beginner, many of the sources we spoke with agreed that the best way to start using a chatbot is to dive in and try things out. 

“People spend too much time trying to find the perfect prompt – 80% of it is just using it interactively,” Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work and education, told CNBC. 

Here are some tips from the pros:

Keep data privacy in mind. 

When you use a chatbot like ChatGPT or Bard, the information you put in – what you type, what you receive in response, and the changes you ask for – may be used to train future models. OpenAI says as much in its terms. Although some companies offer ways to opt out – OpenAI allows this under “data controls” in ChatGPT settings – it’s still best to refrain from sharing sensitive or private data in chatbot conversations, especially while companies are still finessing their privacy measures. For instance, a ChatGPT bug in March briefly allowed users to see parts of each other’s conversation histories. 

“If you wouldn’t post it on Facebook, don’t put it into ChatGPT,” Burke said. “Think about what you put into ChatGPT as being public information.”

Offer up context. 

For the best possible return on your time, give the chatbot context about how it should act in this scenario, and who it’s serving with this information. For example, you can write out the persona you want the chatbot to assume in this scenario: “You are a [marketer, teacher, philosopher, etc.].” You can also add context like: “I am a [client, student, beginner, etc.].” This could save time by directly telling the chatbot which kind of role it should assume, and which “lens” to pass the information through in a way that’s helpful to you. 

For instance, if you’re a creative consultant looking for a chatbot to help you with analysis on company logos, you could type out something like, “Act as if you are a graphic designer who studies logo design for companies. I am a client who owns a company and is looking to learn about which logos work best and why. Generate an analysis on the ‘best’ company logos for publicly listed companies and why they’re seen as good choices.” 
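If you find yourself reusing this persona-plus-context pattern, it’s easy to template. The sketch below is a minimal, illustrative Python helper – the function name and parameters are our own, not part of any chatbot’s API – that assembles a prompt from a persona, an audience and a task:

```python
def build_prompt(persona: str, audience: str, task: str) -> str:
    """Assemble a chatbot prompt that sets a persona and audience before the ask.

    Illustrative pattern only, not any vendor's API: the resulting string
    can be pasted into ChatGPT, Bard, Bing or Claude as-is.
    """
    return (
        f"Act as if you are {persona}. "
        f"I am {audience}. "
        f"{task}"
    )


# Recreate the logo-analysis example from the article.
prompt = build_prompt(
    persona="a graphic designer who studies logo design for companies",
    audience=(
        "a client who owns a company and is looking to learn "
        "which logos work best and why"
    ),
    task=(
        "Generate an analysis on the 'best' company logos for publicly "
        "listed companies and why they're seen as good choices."
    ),
)
print(prompt)
```

The same helper works for any of the tips that follow – only the persona, audience and task strings change.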

“If you ask Bard to write an inspirational speech, Bard’s response may be a bit more generic – but if you ask Bard to write a speech in a specific style, tone or format, you’ll likely get a much better response,” Sissie Hsiao, a VP at Google, told CNBC.

Make the chatbot do all the work.

Sometimes the best way to get what you want is to ask the chatbot itself for advice – whether you’re asking about what’s possible as a user, or about the best way to word your prompt.

“Ask it the simple question, what kinds of things can you do? And it’ll give you a list of things that would actually surprise most people,” Burke said. 

You can also game the system by asking something like, “What’s the best way to ask you for help writing a shopping list?” or even assigning the chatbot a prompt-writing job, like, “Your job is to generate the best and most efficient prompts for ChatGPT. Generate a list of the best prompts to ask ChatGPT for healthy one-pot dinner recipes.” 

Ask for help with brainstorming. 

Whether you’re looking for vacation destinations, date ideas, poetry prompts or content strategies for going viral on social media, many people are using chatbots as a jumping-off point for brainstorming sessions. 

“The biggest thing…that I find them to be helpful for is inspiring me as the user and helping me learn things that I wouldn’t have necessarily thought of on my own,” Josh Albrecht, CTO of Generally Intelligent, an AI research startup, told CNBC. “Maybe that’s why they’re called generative AI – they’re really helpful at the generative part, the brainstorming.” 

Create a crash course. 

Let’s say you’re trying to learn about geometry, and you consider yourself a beginner. You could kick off your studies by asking a chatbot something like, “Explain the basics of geometry as if I’m a beginner,” or, “Explain the Pythagorean Theorem as if I’m a five-year-old.” 

If you’re looking for something more expansive, you can ask a chatbot to create a “crash course” for you, specifying how much time you’ve got (three days, a week, a month) or how many hours you want to spend learning the new skill. You can write something like, “I’m a beginner who wants to learn how to skateboard. Create a two-week plan for how I can learn to skateboard and do a kickflip.” 

To expand your learning plan beyond the chatbot, you can also ask for a list of the most important books about a topic, some of the most influential people in the field and any other resources that could help you advance your skill set. 

Don’t be afraid to give notes and ask for changes. 

“The worst thing you could do if you’re actually trying to use the output of ChatGPT is [to] just ask it one thing once and then walk away,” Mollick said. “You’re going to get very generic output. You have to interact with it.”

Sometimes you won’t choose the perfect prompt, or the chatbot won’t generate the output you were looking for – and that’s okay. You can still tweak your way to more helpful information: ask follow-up questions like, “Can you make it sound less generic?” or “Can you make the first paragraph more interesting?” or restate your original ask in a different way. 
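This back-and-forth works because chatbots receive the whole conversation history with each turn, so a follow-up like “make it less generic” can refer back to earlier output. If you ever script this yourself, the common shape is a growing list of role-tagged messages; in this sketch, `send_to_chatbot` is a hypothetical stand-in for a real API call:

```python
# Hypothetical stand-in for a real chatbot API call; any LLM client that
# accepts a list of {"role": ..., "content": ...} messages fits this shape.
def send_to_chatbot(messages: list[dict]) -> str:
    return f"(reply to: {messages[-1]['content']})"


# Each turn appends to the same history, so the model sees prior context.
messages = [{"role": "user", "content": "Write a product announcement."}]
messages.append({"role": "assistant", "content": send_to_chatbot(messages)})

# The follow-up is just another message on the same, resent history.
messages.append({"role": "user", "content": "Can you make it sound less generic?"})
messages.append({"role": "assistant", "content": send_to_chatbot(messages)})

print(len(messages))  # the full 4-message history is resent on every call
```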

Take everything with many grains of salt.

Chatbots have a documented tendency to fabricate information, especially when their training data doesn’t fully cover an area you’re asking about, so it’s important to take everything with a grain of salt. Say you’re asking for a biography of Albert Einstein: A chatbot might tell you the famous scientist wrote a book called “How to Be Smart,” when, unfortunately, he never did. And since large language models are trained on large swaths of the internet, they reproduce the patterns in that data – which means they can also reproduce its biases and misinformation. 

“Where there’s less information, it just makes stuff up,” Burke said, adding, “These hallucinations are extraordinarily convincing…You can’t trust these models to give you accurate information all the time.”

Experiment and try different approaches.

Whether you’re asking a chatbot to generate a list of action items from a meeting transcript or translate something from English to Tagalog, there’s an untold range of use cases for generative AI. So when you’re using a chatbot, it’s worth thinking about the things you want to learn or need help with and experimenting with how well the system can deliver. 

“AI is a general-purpose technology; it does a lot of stuff, so the idea is that whatever field you’re in and whatever job you’re in, it’s going to affect aspects of your job differently than anyone else on the planet,” Mollick said. “It’s about thinking about how you want to use it…You have to figure out a way to work with the system…and the only way to do that is through experimenting.” 



How Elon Musk’s plan to slash government agencies and regulation may benefit his empire

Elon Musk’s business empire is sprawling. It includes electric vehicle maker Tesla, social media company X, artificial intelligence startup xAI, computer interface company Neuralink, tunneling venture Boring Company and aerospace firm SpaceX. 

Some of his ventures already benefit tremendously from federal contracts. SpaceX has received more than $19 billion from contracts with the federal government, according to research from FedScout. Under a second Trump presidency, more lucrative contracts could come its way. SpaceX is on track to take in billions of dollars annually from prime contracts with the federal government for years to come, according to FedScout CEO Geoff Orazem.

Musk, who has frequently blamed the government for stifling innovation, could also push for less regulation of his businesses. Earlier this month, Musk and former Republican presidential candidate Vivek Ramaswamy were tapped by Trump to lead a government efficiency group called the Department of Government Efficiency, or DOGE.

In a recent commentary piece in the Wall Street Journal, Musk and Ramaswamy wrote that DOGE will “pursue three major kinds of reform: regulatory rescissions, administrative reductions and cost savings.” They went on to say that many existing federal regulations were never passed by Congress and should therefore be nullified, which President-elect Trump could accomplish through executive action. Musk and Ramaswamy also championed the large-scale auditing of agencies, calling out the Pentagon for failing its seventh consecutive audit. 

“The number one way Elon Musk and his companies would benefit from a Trump administration is through deregulation and defanging, you know, giving fewer resources to federal agencies tasked with oversight of him and his businesses,” says CNBC technology reporter Lora Kolodny.

To learn how else Elon Musk and his companies may benefit from having the ear of the president-elect, watch the video.

Why X’s new terms of service are driving some users to leave Elon Musk’s platform

Elon Musk attends the America First Policy Institute gala at Mar-A-Lago in Palm Beach, Florida, Nov. 14, 2024.

X’s new terms of service, which took effect Nov. 15, are driving some users off Elon Musk’s microblogging platform. 

The new terms include expansive permissions requiring users to allow the company to use their data to train X’s artificial intelligence models while also making users liable for as much as $15,000 in damages if they use the platform too much. 

The terms are prompting some longtime users of the service, both celebrities and everyday people, to post that they are taking their content to other platforms. 

“With the recent and upcoming changes to the terms of service — and the return of volatile figures — I find myself at a crossroads, facing a direction I can no longer fully support,” actress Gabrielle Union posted on X the same day the new terms took effect, while announcing she would be leaving the platform.

“I’m going to start winding down my Twitter account,” a user with the handle @mplsFietser said in a post. “The changes to the terms of service are the final nail in the coffin for me.”

It’s unclear just how many users have left X due specifically to the company’s new terms of service, but since the start of November, many social media users have flocked to Bluesky, a microblogging startup whose origins stem from Twitter, the former name for X. Some users with new Bluesky accounts have posted that they moved to the service due to Musk and his support for President-elect Donald Trump.

Bluesky’s U.S. mobile app downloads have skyrocketed 651% since the start of November, according to estimates from Sensor Tower. In the same period, X and Meta’s Threads are up 20% and 42%, respectively. 

X and Threads have much larger monthly user bases. Although Musk said in May that X has 600 million monthly users, market intelligence firm Sensor Tower estimates X had 318 million monthly users as of October. That same month, Meta said Threads had nearly 275 million monthly users. Bluesky told CNBC on Thursday it had reached 21 million total users this week.

Here are some of the noteworthy changes in X’s new service terms and how they compare with those of rivals Bluesky and Threads.

Artificial intelligence training

X has come under heightened scrutiny because of its new terms, which say that any content on the service can be used royalty-free to train the company’s artificial intelligence large language models, including its Grok chatbot.

“You agree that this license includes the right for us to (i) provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type,” X’s terms say.

Additionally, any “user interactions, inputs and results” shared with Grok can be used for what it calls “training and fine-tuning purposes,” according to the Grok section of the X app and website. This specific function, though, can be turned off manually. 

X’s terms do not specify whether users’ private messages can be used to train its AI models, and the company did not respond to a request for comment.

“You should only provide Content that you are comfortable sharing with others,” read a portion of X’s terms of service agreement.

Though X’s new terms may be expansive, Meta’s policies aren’t that different. 

The maker of Threads uses “information shared on Meta’s Products and services” to get its training data, according to the company’s Privacy Center. This includes “posts or photos and their captions.” There is also no direct way for users outside of the European Union to opt out of Meta’s AI training. Meta keeps training data “for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely and efficiently,” according to its Privacy Center. 

Under Meta’s policy, private messages with friends or family aren’t used to train AI unless one of the users in a chat chooses to share it with the models, which can include Meta AI and AI Studio.

Bluesky, which has seen a user growth surge since Election Day, doesn’t do any generative AI training. 

“We do not use any of your content to train generative AI, and have no intention of doing so,” Bluesky said in a post on its platform Friday, confirming the same to CNBC as well.


The Pentagon’s battle inside the U.S. for control of a new Cyber Force

A recent Chinese cyber-espionage attack inside the nation’s major telecom networks, which may have reached as high as the communications of President-elect Donald Trump and Vice President-elect J.D. Vance, was described this week by one U.S. senator as “far and away the most serious telecom hack in our history.”

The U.S. has yet to figure out the full scope of what China accomplished, or whether its spies are still inside U.S. communication networks.

“The barn door is still wide open, or mostly open,” Senator Mark Warner of Virginia, chairman of the Senate Intelligence Committee, told the New York Times on Thursday.

The revelations highlight the rising cyberthreats tied to geopolitics and nation-state actor rivals of the U.S., but inside the federal government, there’s disagreement on how to fight back, with some advocates calling for the creation of an independent federal U.S. Cyber Force. In September, the Department of Defense formally appealed to Congress, urging lawmakers to reject that approach.

One of the most prominent voices advocating for the new branch is the Foundation for Defense of Democracies, a national security think tank, but the issue extends far beyond any single group. In June, defense committees in both the House and Senate approved measures calling for independent evaluations of the feasibility of creating a separate cyber branch as part of the annual defense policy deliberations.

Drawing on insights from more than 75 active-duty and retired military officers experienced in cyber operations, the FDD’s 40-page report highlights what it says are chronic structural issues within the U.S. Cyber Command (CYBERCOM), including fragmented recruitment and training practices across the Army, Navy, Air Force, and Marines.

“America’s cyber force generation system is clearly broken,” the FDD wrote, citing 2023 comments by then-leader of U.S. Cyber Command, Army General Paul Nakasone, who took over the role in 2018 and described the current U.S. military cyber organization as unsustainable: “All options are on the table, except the status quo,” Nakasone said.

Concern with Congress and a changing White House

The FDD analysis points to “deep concerns” that have existed within Congress for a decade — among members of both parties — about whether the military can staff up to successfully defend cyberspace. Talent shortages, inconsistent training and misaligned missions are undermining CYBERCOM’s capacity to respond effectively to complex cyber threats, it says. Creating a dedicated branch, proponents argue, would better position the U.S. in cyberspace. The Pentagon, however, warns that such a move could disrupt coordination, increase fragmentation and ultimately weaken U.S. cyber readiness.

As the Pentagon doubles down on its resistance to establishment of a separate U.S. Cyber Force, the incoming Trump administration could play a significant role in shaping whether America leans toward a centralized cyber strategy or reinforces the current integrated framework that emphasizes cross-branch coordination.

Trump is known for assertive national security measures, and his administration’s 2018 National Cyber Strategy emphasized embedding cyber capabilities across all elements of national power, focusing on cross-departmental coordination and public-private partnerships rather than creating a standalone cyber entity. At the time, the Trump administration centralized civilian cybersecurity efforts under the Department of Homeland Security while tasking the Department of Defense with addressing more complex, defense-specific cyber threats. Trump’s pick for secretary of Homeland Security, South Dakota Governor Kristi Noem, has talked up her, and her state’s, focus on cybersecurity.

Former Trump officials believe that a second Trump administration will take an aggressive stance on national security, fill gaps at the Energy Department, and reduce regulatory burdens on the private sector. They anticipate a stronger focus on offensive cyber operations, tailored threat vulnerability protection, and greater coordination between state and local governments. Changes will be coming at the top of the Cybersecurity and Infrastructure Security Agency, which was created during Trump’s first term and where current director Jen Easterly has announced she will leave once Trump is inaugurated.

Cyber Command 2.0 and the U.S. military

John Cohen, executive director of the Program for Countering Hybrid Threats at the Center for Internet Security, is among those who share the Pentagon’s concerns. “We can no longer afford to operate in stovepipes,” Cohen said, warning that a separate cyber branch could worsen existing silos and further isolate cyber operations from other critical military efforts.

Cohen emphasized that adversaries like China and Russia employ cyber tactics as part of broader, integrated strategies that include economic, physical, and psychological components. To counter such threats, he argued, the U.S. needs a cohesive approach across its military branches. “Confronting that requires our military to adapt to the changing battlespace in a consistent way,” he said.

In 2018, CYBERCOM certified its Cyber Mission Force teams as fully staffed, but the FDD and others have expressed concerns that personnel were shifted between teams to meet staffing goals — a move they say masked deeper structural problems. Nakasone has called for a CYBERCOM 2.0, saying in comments early this year, “How do we think about training differently? How do we think about personnel differently?” and adding that a major issue has been the approach to military staffing within the command.

Austin Berglas, a former head of the FBI’s cyber program in New York who worked on consolidation efforts inside the Bureau, believes a separate cyber force could enhance U.S. capabilities by centralizing resources and priorities. “When I first took over the [FBI] cyber program … the assets were scattered,” said Berglas, who is now the global head of professional services at supply chain cyber defense company BlueVoyant. Centralization brought focus and efficiency to the FBI’s cyber efforts, he said, and it’s a model he believes would benefit the military’s cyber efforts as well. “Cyber is a different beast,” Berglas said, emphasizing the need for specialized training, advancement, and resource allocation that isn’t diluted by competing military priorities.

Berglas also pointed to the ongoing “cyber arms race” with adversaries like China, Russia, Iran, and North Korea. He warned that without a dedicated force, the U.S. risks falling behind as these nations expand their offensive cyber capabilities and exploit vulnerabilities across critical infrastructure.

Nakasone said in his comments earlier this year that a lot has changed since 2013 when U.S. Cyber Command began building out its Cyber Mission Force to combat issues like counterterrorism and financial cybercrime coming from Iran. “Completely different world in which we live in today,” he said, citing the threats from China and Russia.

Brandon Wales, a former executive director of CISA, said there is a need to bolster U.S. cyber capabilities, but he cautions against major structural changes during a period of heightened global threats.

“A reorganization of this scale is obviously going to be disruptive and will take time,” said Wales, who is now vice president of cybersecurity strategy at SentinelOne.

He cited China’s preparations for a potential conflict over Taiwan as a reason the U.S. military needs to maintain readiness. Rather than creating a new branch, Wales supports initiatives like Cyber Command 2.0 and its aim to enhance coordination and capabilities within the existing structure. “Large reorganizations should always be the last resort because of how disruptive they are,” he said.

Wales says it’s important to ensure any structural changes do not undermine integration across military branches and recognize that coordination across existing branches is critical to addressing the complex, multidomain threats posed by U.S. adversaries. “You should not always assume that centralization solves all of your problems,” he said. “We need to enhance our capabilities, both defensively and offensively. This isn’t about one solution; it’s about ensuring we can quickly see, stop, disrupt, and prevent threats from hitting our critical infrastructure and systems,” he added.
