
The price of bitcoin shot above the $54,000 level on Monday after waking up from a week of tepid trading.

The flagship cryptocurrency was last higher by 5% at $54,460.00, according to Coin Metrics. At its session high, bitcoin hit $54,965.26 and reached its highest level since December 2021.

“Today is settlement day for bitcoin futures, which is contributing to the price jump we’re seeing,” said Ryan Rasmussen, analyst at Bitwise Asset Management. “We’re approaching the window where we typically see traders positioning themselves ahead of the bitcoin halving, which will happen in the second half of April. I suspect this is the day people start rolling into bullish positions pre-halving.”

Most of the crypto market got a lift from bitcoin. Ether gained more than 2% to trade at $3,173.87. Solana added more than 5%, and Cardano’s ADA token advanced about 4%. Polygon’s MATIC token rose 8%.

Crypto-related equities surged. Coinbase and MicroStrategy leapt 16%. Riot Platforms and Marathon Digital, the largest bitcoin miners, soared 15% and 20%, respectively.

Bitcoin had traded flat in the week leading up to Monday's breakout, which put it on track for a 27% monthly gain.

“Bitcoin has been hovering around $52,000 for the past two weeks and looking for an opportunity to break out,” said Owen Lau, analyst at Oppenheimer, who cited positive idiosyncratic developments in crypto regulation and increasing retail participation.

In a recent note, JPMorgan’s Nikolaos Panigirtzoglou pointed out that after taking a pause in January, retail appetite for crypto rebounded in February and has been a significant driver of the upward price action. He pointed to three key catalysts that help explain the renewed retail interest: the bitcoin halving and Ethereum’s next tech upgrade — both of which JPMorgan sees as priced in — and the potential approval of spot ether ETFs.


OpenAI co-founder Ilya Sutskever says he will leave the startup


OpenAI co-founder Ilya Sutskever said Tuesday that he’s leaving the Microsoft-backed startup.

“I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time,” Sutskever wrote in an X post on Tuesday.

The departure comes months after OpenAI went through a leadership crisis in November involving co-founder and CEO Sam Altman.

In November, OpenAI’s board said in a statement that Altman had not been “consistently candid in his communications with the board.” The issue quickly came to look more complex. The Wall Street Journal and other media outlets reported that Sutskever had come to focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were eager to push ahead with delivering new technology.

Almost all of OpenAI’s employees signed an open letter saying they would leave in response to the board’s action. Days later, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Sutskever, who had voted to oust Altman, were out. Adam D’Angelo, who had also voted to push out Altman, stayed on the board.

When Altman was asked about Sutskever’s status on a Zoom call with reporters at the time, he said there were no updates to share. “I love Ilya… I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

On Tuesday, Altman shared his thoughts as Sutskever leaves.

“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.” Altman said research director Jakub Pachocki, who has been at OpenAI since 2017, will replace Sutskever as chief scientist.

OpenAI has announced new board members, including former Salesforce co-CEO Bret Taylor and former Treasury Secretary Larry Summers. Microsoft obtained a nonvoting board observer position.

In March, OpenAI announced its new board and the wrap-up of an internal investigation by U.S. law firm WilmerHale into the events leading up to Altman’s ouster. Altman rejoined OpenAI’s board, and three new board members were announced: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former EVP and Global General Counsel of Sony and President of Sony Entertainment; and Fidji Simo, CEO and Chair of Instacart.

The three new members will “work closely with current board members Adam D’Angelo, Larry Summers and Bret Taylor as well as Greg, Sam, and OpenAI’s senior management,” according to a company release in March.

News of Sutskever’s departure comes a day after OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface, the company’s latest effort to expand use of its popular chatbot.

The update brings the GPT-4 model to everyone, including OpenAI’s free users, technology chief Mira Murati said Monday in a livestreamed event. She added that the new model, GPT-4o, is “much faster,” with improved capabilities in text, video and audio. OpenAI said it eventually plans to allow users to video chat with ChatGPT. “This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said.

In 2015, Altman and Tesla CEO Elon Musk, another OpenAI co-founder, wanted Sutskever, then a research scientist at Google, to become the budding startup’s top scientist, according to the lawsuit Musk filed against OpenAI in March.

“Dr. Sutskever went back and forth on whether to leave Google and join the project, but it was ultimately a call from Mr. Musk on the day OpenAI, Inc. was publicly announced that convinced Dr. Sutskever to commit to joining the project as OpenAI, Inc.’s Chief Scientist,” the legal filing said.


Google CEO Pichai says company will ‘sort it out’ if OpenAI misused YouTube for AI training


Alphabet CEO Sundar Pichai speaks at the Asia-Pacific Economic Cooperation CEO Summit in San Francisco on Nov. 16, 2023.


Alphabet CEO Sundar Pichai said Google will “sort it out” if it determines Microsoft-backed OpenAI relied on YouTube content to train an artificial intelligence model that can generate videos.

The comments, in an interview Tuesday with CNBC’s Deirdre Bosa, come after OpenAI technology chief Mira Murati told the Wall Street Journal in March that she wasn’t sure if YouTube videos were part of the training data for the company’s Sora model introduced earlier in the year.

Murati said OpenAI had drawn on publicly available data and on licensed data. The New York Times later reported that OpenAI had transcribed over a million hours of YouTube videos.

Asked if Google would sue OpenAI if the startup violated the search company’s terms of service, Pichai didn’t offer specifics.

“Look, I think it’s a question for them to answer,” Pichai said. “I don’t have anything to add. We do have clear terms of service. And so, you know, I think normally in these things we engage with companies and make sure they understand our terms of service. And we’ll sort it out.”

Pichai said Google has processes in place to figure out if OpenAI failed to comply with the rules. Newspapers such as The New York Times have already taken aim at OpenAI for allegedly breaking copyright law and training models on their articles.

Pichai’s interview followed a keynote to developers at Google’s I/O conference, where executives announced new AI models, including one called Veo that can compose synthetic videos. Those looking to get early access will have to receive approval from Google.

OpenAI preempted the Google event on Monday. The company revealed an AI model called GPT-4o and showed how users of its ChatGPT mobile app would be able to hold realistic voice conversations, interrupting the AI assistant and having it analyze what appears in front of a smartphone camera. On Tuesday, Google showed off similar upcoming capabilities.

“I don’t think they’ve shipped their demo to their users yet,” Pichai said of OpenAI. “I don’t think it’s available in the product.”

OpenAI said in a blog post on Monday that customers of its ChatGPT Plus subscriptions will be able to try an early version of the new voice mode in the weeks ahead. Pichai said Google’s Project Astra multimedia chat capabilities will come to its Gemini chatbot later this year.

“We have a clear sense of how to approach it, and we’ll get it right,” Pichai said.

Google has reduced the cost of serving up AI models in web searches by 80% since showing off a preview last year, relying on its custom Tensor Processing Units (TPUs) and Nvidia’s popular graphics processing units, he said. Google said during the keynote that it’s starting to display its AI Overviews in search results for all users in the U.S.

In June, Apple will hold its Worldwide Developers Conference in Cupertino, California. Bloomberg reported in March that Apple was discussing the idea of adding Gemini to the iPhone. Pichai told Bosa that Google has enjoyed “a great partnership with Apple over the years.” A Google expert witness said in court last November that the company gives Apple 36% of its search advertising revenue from the Safari browser.

“We have focused on delivering great experiences for the Apple ecosystem,” Pichai said. “It is something we take very seriously and I’m confident — we have many ways to make sure our products are accessible. We see that today, AI Overviews have been a popular feature on iOS when we have tested, and so we’ll continue — including Gemini. We’ll continue working to bring that there.”



Google I/O wrap-up: Gemini AI updates, new search features and more


Google CEO Sundar Pichai speaks at the Google I/O developer conference. 


Google on Tuesday hosted its annual I/O developer conference, and rolled out a range of artificial intelligence products, from new search and chat features to AI hardware for cloud customers. The announcements underscore the company’s focus on AI as it fends off competitors, such as OpenAI.

Many of the features or tools Google unveiled are only in a testing phase or limited to developers, but they give an idea of how the tech giant is thinking about AI and where it’s investing. Google makes money from AI by charging developers who use its models and customers who pay for Gemini Advanced, its competitor to ChatGPT, which costs $19.99 per month and can help users summarize PDFs, Google Docs and more.

Tuesday’s announcements follow similar events held by its AI competitors. Earlier this month, Amazon-backed Anthropic announced its first-ever enterprise offering and a free iPhone app. Meanwhile, OpenAI on Monday launched a new AI model and desktop version of ChatGPT, along with a new user interface.

Here’s what Google announced.

Gemini AI updates

Google introduced updates to Gemini 1.5 Pro, its AI model, which will soon be able to handle even more data — for example, the tool can summarize 1,500 pages of text uploaded by a user.

There’s also a new Gemini 1.5 Flash AI model, which the company said is more cost-effective and designed for smaller tasks like quickly summarizing conversations, captioning images and videos and pulling data from large documents.

Google CEO Sundar Pichai highlighted improvements to Gemini’s translations, adding that the model will be available to developers worldwide in 35 languages. Within Gmail, Gemini 1.5 Pro will analyze attached PDFs and videos, giving summaries and more, Pichai said. That means that if you missed a long email thread while on vacation, Gemini will be able to summarize it along with any attachments.

The new Gemini updates are also helpful for searching Gmail. One example the company gave: If you’ve been comparing prices from different contractors to fix your roof and are looking for a summary to help you decide who to pick, Gemini could return three quotes along with the anticipated start dates offered in the different email threads.

Google said Gemini will eventually replace Google Assistant on Android phones, suggesting it’s going to be a more powerful competitor to Apple’s Siri on iPhone.

Google Veo, Imagen 3 and Audio Overviews

Google announced “Veo,” its latest model for generating high-definition video, and Imagen 3, its highest quality text-to-image model, which promises lifelike images and “fewer distracting visual artifacts than our prior models.”

The tools will be available for select creators on Monday and will come to Vertex AI, Google’s machine learning platform that lets developers train and deploy AI applications.

The company also showcased “Audio Overviews,” the ability to generate audio discussions based on text input. For instance, if a user uploads a lesson plan, the chatbot can speak a summary of it. Or, if you ask for an example of a science problem in real life, it can do so through interactive audio.


Separately, the company showcased “AI Sandbox,” a range of generative AI tools for creating music and sounds from scratch, based on user prompts.

Generative AI tools such as chatbots and image creators continue to have issues with accuracy, however.

Google search boss Prabhakar Raghavan told employees last month that competitors “may have a new gizmo out there that people like to play with, but they still come to Google to verify what they see there because it is the trusted source, and it becomes more critical in this era of generative AI.”

Earlier this year, Google introduced the Gemini-powered image generator. Users discovered historical inaccuracies that went viral online, and the company pulled the feature, saying it would relaunch it in the coming weeks. The feature has still not been re-released.

New search features

The tech giant is launching “AI Overviews” in Google Search on Monday in the U.S. AI Overviews show a quick summary of answers to the most complex search questions, according to Liz Reid, head of Google Search. For example, if a user searches for the best way to clean leather boots, the results page may display an “AI Overview” at the top with a multi-step cleaning process, gleaned from information it synthesized from around the web.

The company said it plans to introduce assistant-like planning capabilities directly within search. It explained that users will be able to search for something like “Create a 3-day meal plan for a group that’s easy to prepare” and get a starting point with a wide range of recipes from across the web.

As far as its progress to offer “multimodality,” or integrating more images and video within generative AI tools, Google said it will begin testing the ability for users to ask questions through video, such as filming a problem with a product they own, uploading it and asking the search engine to figure out the problem. In one example, Google showed someone filming a broken record player while asking why it wasn’t working. Google Search found the model of the record player and suggested that it could be malfunctioning because it wasn’t properly balanced.

Another new feature being tested is called “AI Teammate,” which will integrate into a user’s Google Workspace. It can build a searchable collection of work from messages, email threads, PDFs and documents. For instance, a founder-to-be could ask the AI Teammate, “Are we ready for launch?” and the assistant would provide an analysis and summary based on the information it can access in Gmail, Google Docs and other Workspace apps.


AI hardware

Google also announced Trillium, its sixth-generation TPU, or tensor processing unit — a piece of hardware integral to running complex AI operations — which will be available to cloud customers in late 2024.

The TPUs aren’t meant to compete with other chips, like Nvidia’s graphics processing units. Pichai noted during I/O, for example, that Google Cloud will begin offering Nvidia’s Blackwell GPUs in early 2025.

Nvidia said in March that Google will be using the Blackwell platform for “various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances,” and that access to Nvidia’s systems will help Google offer large-scale tools for enterprise developers building large language models.

In his speech, Pichai highlighted Google’s “longstanding partnership with Nvidia.” The companies have been working together for more than a decade, and Pichai has said in the past that he expects them to still be doing so a decade from now.
