Microsoft CEO Satya Nadella speaks at the company’s Ignite Spotlight event in Seoul on Nov. 15, 2022.

SeongJoon Cho | Bloomberg | Getty Images

Thanks to recent advances in artificial intelligence, new tools like ChatGPT are wowing consumers with their ability to create compelling writing based on people’s queries and prompts.

While these AI-powered tools have gotten much better at producing creative and sometimes humorous responses, they often include inaccurate information.

For instance, in February when Microsoft debuted its Bing chat tool, built using the GPT-4 technology created by Microsoft-backed OpenAI, people noticed that the tool was providing wrong answers during a demo related to financial earnings reports. Like other AI language tools, including similar software from Google, the Bing chat feature can occasionally present false information that users might believe to be true, a phenomenon researchers call a “hallucination.”

These problems with the facts haven’t slowed down the AI race between the two tech giants.

On Tuesday, Google announced it was bringing AI-powered chat technology to Gmail and Google Docs, letting the software help users compose emails or documents. On Thursday, Microsoft said its popular business apps like Word and Excel would soon come bundled with ChatGPT-like technology dubbed Copilot.

But this time, Microsoft is pitching the technology as being “usefully wrong.”

In an online presentation about the new Copilot features, Microsoft executives acknowledged the software’s tendency to produce inaccurate responses, but framed that as something that could be useful. As long as people realize that Copilot’s responses could be sloppy with the facts, they can edit the inaccuracies and still send their emails or finish their presentation slides more quickly.

For instance, if a person wants to create an email wishing a family member a happy birthday, Copilot can still be helpful even if it presents the wrong birth date. In Microsoft’s view, the mere fact that the tool generated text saved a person some time and is therefore useful. People just need to take extra care and make sure the text doesn’t contain any errors.

Researchers might disagree.

Indeed, some technologists like Noah Giansiracusa and Gary Marcus have voiced concerns that people may place too much trust in modern-day AI, taking to heart the advice such tools present when they ask questions about health, finance and other high-stakes topics.

“ChatGPT’s toxicity guardrails are easily evaded by those bent on using it for evil and as we saw earlier this week, all the new search engines continue to hallucinate,” the two wrote in a recent Time opinion piece. “But once we get past the opening day jitters, what will really count is whether any of the big players can build artificial intelligence that we can genuinely trust.”

It’s unclear how reliable Copilot will be in practice.

Microsoft chief scientist and technical fellow Jaime Teevan said that when Copilot “gets things wrong or has biases or is misused,” Microsoft has “mitigations in place.” In addition, Microsoft will be testing the software with only 20 corporate customers at first so it can discover how it works in the real world, she explained.

“We’re going to make mistakes, but when we do, we’ll address them quickly,” Teevan said.

The business stakes are too high for Microsoft to ignore the enthusiasm over generative AI technologies like ChatGPT. The challenge will be for the company to incorporate that technology so that it doesn’t create public mistrust in the software or lead to major public relations disasters.

“I studied AI for decades and I feel this huge sense of responsibility with this powerful new tool,” Teevan said. “We have a responsibility to get it into people’s hands and to do so in the right way.”

Watch: A lot of room for growth with Microsoft and Google, says Oppenheimer analyst Tim Horan

Microsoft Bing now uses OpenAI’s DALL-E A.I. to turn text into images

OpenAI displayed on screen with Microsoft Bing double photo exposure on mobile, seen in this photo illustration.

Nurphoto | Nurphoto | Getty Images

Microsoft on Tuesday added a new artificial intelligence-powered capability to its search slate: AI-generated visuals.

The new tool, powered by OpenAI’s DALL-E, will allow users to generate images using their own words, such as asking for a picture of “an astronaut walking through a galaxy of sunflowers,” the company explained in a press release.


The feature, called “Bing Image Creator,” will be available to Bing and Microsoft Edge users in preview. It will first roll out in the search engine’s “Creative Mode.” Eventually, it’ll become fully integrated into the Bing chat experience, the company added.

On Microsoft Edge, the image generator will become available in the browser’s search bar.

Microsoft has bolstered its AI-assisted search functions in recent months, first announcing AI-powered updates to Bing and Edge in early February.

Last week, the tech giant also announced it would add its generative AI technology to some of its most popular business apps, including Word, PowerPoint and Excel.

Excitement around the promise of generative AI has been driven in large part by the runaway success of ChatGPT, which was released by Microsoft-backed OpenAI in November.

As Microsoft’s new capabilities became available to users, some beta testers identified issues, including threats, unhelpful advice and other glitches.

Microsoft says it’s taken steps to curb the misuse of Bing Image Creator by working with OpenAI to develop safety measures for the public.

These safety measures include controls “that aim to limit the generation of harmful or unsafe images,” plus a modified Bing icon added to the bottom left corner of images to make clear they were created using AI, Microsoft said.

Microsoft’s tiered approach to Bing Image Creator’s rollout is also informed by the iterative approach the company took with past releases.

“People used it in some ways we expected and others we didn’t,” Microsoft said of Bing’s new capabilities. “In this spirit of learning and continuing to build new capabilities responsibly, we’re rolling out Bing Image Creator in a phased approach by flighting with a set of preview users before expanding more broadly.”

Google CEO tells employees that 80,000 of them helped test Bard A.I., warns ‘things will go wrong’

Alphabet CEO Sundar Pichai gestures during a session at the World Economic Forum (WEF) annual meeting in Davos, on January 22, 2020.

Fabrice COFFRINI | AFP | Getty Images

Google and Alphabet CEO Sundar Pichai told employees that the success of its newly launched Bard A.I. program now hinges on public testing.

“As more people start to use Bard and test its capabilities, they’ll surprise us. Things will go wrong,” Pichai wrote in an internal email to employees Tuesday viewed by CNBC. “But the user feedback is critical to improving the product and the underlying technology.”

The message to employees comes as Google launched Bard as “an experiment” Tuesday morning, after months of anticipation. The product, which is built on Google’s LaMDA, or Language Model for Dialogue Applications, can offer chatty responses to complicated or open-ended questions, such as “give me ideas on how to introduce my daughter to fly fishing.”

Alphabet shares were up almost 4% in midday trading following the announcement.

In many disclaimers in the product, the company warns that Bard may make mistakes or “give inaccurate or inappropriate responses.” 

The latest internal messaging comes as the company tries to keep pace with the quickly evolving advancements in generative AI technology over the last several months, especially from Microsoft-backed OpenAI and its ChatGPT technology.

Employees and investors criticized Google after Bard’s initial announcement in January, which appeared rushed to compete with Microsoft’s just-announced Bing integration of ChatGPT. In a recent all-hands meeting, employees’ top-rated questions included confusion around the purpose of Bard. At that meeting, executives defended Bard as an experiment and tried to make distinctions between the chatbot and its core search product.

Pichai’s Tuesday email also said 80,000 Google employees contributed to testing Bard, responding to his all-hands-on-deck call to action last month, which included a plea for workers to rewrite the chatbot’s bad answers.

Pichai’s Tuesday note also said the company is trying to test responsibly and invited 10,000 trusted testers “from a variety of backgrounds and perspectives.”

Pichai also said employees “should be proud of this work and the years of tech breakthroughs that led us here, including our 2017 Transformer research and foundational models such as PaLM and BERT.” He added: “Even after all this progress, we’re still in the early stages of a long AI journey.”

“For now, I’m excited to see how Bard sparks more creativity and curiosity in the people who use it,” he said, adding he looks forward to sharing “the breadth of our progress in AI” at Google’s annual developer conference in May.

Here’s the full memo:

Hi, Googlers

Last week was an important week in AI with our announcements around Cloud, Developer, and Workspace. There’s even more to come this week as we begin to expand access to Bard, which we first announced in February.

Starting today, people in the US and the UK can sign up at bard.google.com. This is just a first step, and we’ll continue to roll it out to more countries and languages over time.

I’m grateful to the Bard team who has probably spent more time with Bard than anything or anyone else over the past few weeks. Also hugely appreciative of the 80,000 Googlers who have helped test it in the company-wide dogfood. We should be proud of this work and the years of tech breakthroughs that led us here, including our 2017 Transformer research and foundational models such as PaLM and BERT.

Even after all this progress, we’re still in the early stages of a long AI journey. As more people start to use Bard and test its capabilities, they’ll surprise us. Things will go wrong. But the user feedback is critical to improving the product and the underlying technology.

We’ve taken a responsible approach to development, including inviting 10,000 trusted testers from a variety of backgrounds and perspectives, and we’ll continue to welcome all the feedback that’s about to come our way. We will learn from it and keep iterating and improving.

For now, I’m excited to see how Bard sparks more creativity and curiosity in the people who use it. And I look forward to sharing the full breadth of our progress in AI to help people, businesses and communities as we approach I/O in May.

—Sundar

TikTok CEO appeals to U.S. users ahead of House testimony

Shou Zi Chew, chief executive officer of TikTok Inc., speaks during the Bloomberg New Economy Forum in Singapore, on Wednesday, Nov. 16, 2022.

Bryan van der Beek | Bloomberg | Getty Images

TikTok CEO Shou Zi Chew appealed directly to the app’s users ahead of what’s expected to be a heated grilling before the U.S. House Energy and Commerce Committee this week, in a video posted to the platform Tuesday.

Filming from Washington, D.C., Chew emphasized how many U.S.-based users, small and medium-sized businesses and TikTok employees rely on the company. The message may preview his appeal to lawmakers Thursday, when he will face questions about the ability of TikTok’s Chinese parent company ByteDance, and the Chinese government, to access U.S. user information collected by the app.

TikTok says it has worked to create a risk mitigation plan to ensure that U.S. data doesn’t get into the hands of a foreign adversary through its app. The company has said U.S. user data is already stored outside of China.

But many lawmakers and intelligence officials seem to remain unconvinced that the information can be safe while TikTok is owned by a Chinese company. TikTok said last week that the Committee on Foreign Investment in the U.S., which is reviewing risks related to the app, is pushing for ByteDance to sell its stake or face a ban.

Chew disclosed in the video that TikTok has more than 150 million monthly active users, or MAUs, in the U.S., representing massive growth from August 2020, when the company said for the first time that it had about 100 million MAUs in the country. That number includes 5 million businesses that use the app to reach their customers, most of them small or medium-sized businesses. He also said TikTok has 7,000 U.S.-based employees.

“This comes at a pivotal moment for us,” Chew said, referencing lawmakers’ threats of a TikTok ban. “This could take TikTok away from all 150 million of you.”

Chew then appealed to users directly to share in the comments what they want their representatives to know about why they love TikTok.


WATCH: TikTok and ByteDance spied on this Forbes reporter

