IBM was early to AI, then lost its way. CEO Arvind Krishna explains what’s next
IBM is angling hard for an AI comeback story, and CEO Arvind Krishna is counting on a recent pivot to get it there.
Since May, the company has reintroduced the Watson brand as part of a larger strategy shift to monetize its AI products for businesses. WatsonX is a development studio that lets companies “train, tune and deploy” machine learning models. Krishna says the product has already generated “low hundreds of millions of dollars” in bookings in the third quarter and could be on track for a billion dollars in bookings per year.
But IBM has steep competition in the enterprise AI realm: Microsoft, Google, Amazon and others all have similar offerings. And the company has long been critiqued for falling behind in the AI race, particularly when it comes to making money from its products.
Nearly two years ago, IBM sold its Watson Health unit for an undisclosed amount to private equity firm Francisco Partners. Now, the company is in the midst of selling its weather unit, including The Weather Channel mobile app and websites, Weather.com, Weather Underground and Storm Radar, to the same firm, also for an undisclosed sum.
“I think that’s a fair criticism, that we were slow to monetize and slow to make really consumable the learnings from Watson winning Jeopardy, and the mistake we made was that I think we went after very big, monolithic answers, which the world was not ready to absorb,” IBM CEO Arvind Krishna told CNBC in an interview, adding, “Beginning that way was the wrong approach.”
Krishna talked with CNBC about his specific views on regulation, the business of generative AI, IBM’s mistakes and its future plan.
This interview has been lightly edited for length and clarity.
On the morning you took over as CEO in 2020, you sent an email to employees saying you’ll focus on AI and hybrid cloud as the future’s technologies. How has your view on AI’s use in business – real-life use cases, saturation – changed since that day?
If you don’t mind, I’ll use a baseball analogy just because it helps to sort of say – at the time when I called those two technologies, I think people understood cloud and AI as ‘Okay, he’s saying it, but not clear – is that a market, is it big, is it small, is it really that important? Cloud is 10 times bigger.’ So to use a baseball analogy, at that point cloud was maybe the third inning, and AI had not even entered the field.
If you fast-forward to today, I will tell you cloud is probably in its fifth or sixth inning of a game – so you know how it’s going, it’s a mature game, you kind of know where it’s going to play out. AI is in the first inning, so still unclear who all will be the winners, who all will not win, et cetera. The difference is that it is on the field, so it is a major league game. Unclear on who exactly is going to win – that may be the only question.
So my view: I looked at the amount of data, I looked at the nature of automation needed given the demographic shifts that are going on, and I looked at the sheer amount of work that we all have to do. And you go look at the backlog that’s sitting inside places, inside government – the VA has six months’ worth of claims to process, insurance companies take months to get going on the tougher claims, you look at the backlog in customer service. You look at all those things, and you say, ‘This mixture of the data explosion and this need to get work done – which technology could help us address that?’ And just from my experience, you look across and you say, ‘The only one I can think of is artificial intelligence.’
That’s why you get… a massive shift going on with people and with data, a big unmet need and a technology that could possibly address it. Now it’s up to us as innovators, as inventors, as technologists to go make it happen.
Biden’s recent executive order had a long list of sections related to AI-generated content and the risks involved, including a requirement that AI companies share safety test results with the U.S. government before the official release of AI systems. What changes will IBM need to make?
We are one of, I think, a total of a dozen companies who participated in the signing of the executive order on the 30th of October, and we endorsed it with no qualifications. Look, to me… all regulation is going to be imperfect, by its very nature. There’s no way that, even in this case a 100-page document, can capture the subtleties of such a massive, emerging, impactful, nascent technology. So if I put that [thought] on it, then we are completely fine with the EO as written – we support it, we believe that having something is better than not having something, we believe that having safeguards is better than having no guardrails.
Now, I think that this has now come down to how they want to implement it. Do I have any concerns with sharing what tests we have done with the federal government? Actually, I have none. I am one who’s publicly advocated that companies that put out AI models should be held accountable to their models. I actually go even further – I say you should put in legislation that requires us to be legally liable for what our models do, which means if your models do bad things, you can get sued. I’m not saying that’s a very popular viewpoint, but that is one that I have articulated.
So do I have concerns with sharing it with the government? No. Do I have concerns if the government is now going to put this into a public database so everybody else knows my secret recipes and what I do? Yeah, I do have concerns about that. Because I do believe that there should be competition – we should be allowed to have our own copyrighted ways of doing things, and those don’t need to be made public. So my concern is kind of on the edges, but they haven’t yet told us how they want us to do all those things, and I’m hoping that we can influence – whether it’s NIST or commerce or whoever is coming up with all these rules – to sort of allow for confidentiality. But behind confidentiality, I don’t really have concerns, per se, about this.
There’s an industry-wide debate, especially in light of the executive order, about too much regulation stifling innovation: Some say it’s irresponsible and even inefficient to move forward without oversight for bias and harms; some say oversight stifles advancement and open-source AI development. What are your thoughts, and where do you think trust and governance are headed?
I’m going to tell you what I told Senator Schumer… This is a really authentically and deeply-held point of view. Number one, we actually said that whatever we do should allow for a lot of open innovation and not stifle innovation. Two, I said that model developers should be held accountable for what they create. And three, I believe we should regulate use cases based on risk, not the technology or the algorithms themselves.
So… we strongly advocated that we should allow for open innovation. What does that then preclude? It would preclude a very onerous, hard licensing regime. If you create a licensing regime, you more or less shut out everybody who’s not part of the license – that is the thing that would shut innovation down. If somebody does open innovation and they can’t deploy because you need a license to deploy – if you’re two kids in a basement, it’s really hard to run the gauntlet of getting a license from the federal government. So we advocated for that to be open, so you can allow AI innovation.
Now, if somebody’s going to deploy it, how are you going to be accountable? Well, accountability always depends on the depth of your pocketbook. So if you’re a larger company with more resources, by definition, you have more to lose, and more to gain – so that seems like a fair system of competition. And the reason we said to regulate the use case, not the technology, is so that open innovation can flourish. Because if you regulate the technology, now you’re stomping on the innovation – but use case, if it’s in medicine or self-driving cars, you probably want to be more careful than if it’s summarizing an email for you. So there is a different risk that we should accept that comes from real life.
Speaking of WatsonX – the development studio IBM began rolling out in July for companies to train, tune and deploy AI – it’s a big bet for IBM. What sets it apart from competing offerings from other big tech companies?
At one level, most of the companies are going to have their own studios, they have ways that their clients can both experiment with AI models and put them into production – so at that level, you’d say, “Hey, it kind of smells similar to this.” We use the word assistant, others use the word copilots – I’ll look at you and I’ll acknowledge that it’s kind of the same difference. Now it comes down to how do you deploy it, how much can you trust it, how curated is the data that went into it and what kind of protections do you give the end users? That’s where I’ll walk through some of the differences.
So we don’t want to constrain where people deploy it. Many of the current tech players – I won’t say all, but many – insist that it gets deployed only in their public cloud environment. I have clients in the Middle East, and they want to deploy it on their sovereign territory; I have clients in India who want to deploy it in India; we have clients in Japan who want to deploy it in Japan; I might have, maybe, hypothetically, a bank that is worrying a lot about the data that they might put into it, so they want to deploy it in their private infrastructure. So as you go through those examples, we don’t want to constrain where people deploy it. So they want to deploy it on a large public cloud, we’ll do it there. If they want to deploy it at IBM, we’ll do it at IBM. If they want to do it on their own, and they happen to have enough infrastructure, we’ll do it there. I think that’s a pretty big difference.
Also, we believe that models, in the end, are not going to be generated by a single company. So we also want to allow for a hybrid model environment, meaning you might pick up models from open source, you might pick up models from other companies, you will get models from IBM, and then we want to give you the flexibility to say which is which because they will come with different attributes. Some could be more capable, some could be cheaper, some could be smaller, some could be larger, some may have IP protection, some may not.
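To make those trade-offs concrete, here is a minimal, purely illustrative Python sketch of how a platform might pick among models that differ in capability, cost, size and IP protection. The model names, scores and prices below are invented for the example; they are not drawn from IBM’s catalog or from WatsonX’s API.

```python
# Illustrative only: a toy "hybrid model catalog" where each entry carries the
# attributes Krishna mentions (capability, cost, size, IP indemnification),
# plus a selector that picks the cheapest model satisfying a use case's needs.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelEntry:
    name: str             # hypothetical model identifier
    source: str           # "open-source", "third-party", or "in-house"
    capability: int       # rough quality score, 1 (basic) to 10 (frontier)
    cost_per_1k_tokens: float
    params_billions: float
    ip_indemnified: bool  # whether the provider offers IP protection

CATALOG = [
    ModelEntry("community-7b", "open-source", 5, 0.0002, 7, False),
    ModelEntry("vendor-70b", "third-party", 8, 0.0030, 70, False),
    ModelEntry("inhouse-13b", "in-house", 6, 0.0008, 13, True),
]

def pick_model(min_capability: int, require_indemnity: bool) -> Optional[ModelEntry]:
    """Return the cheapest catalog model meeting the requirements, if any."""
    candidates = [
        m for m in CATALOG
        if m.capability >= min_capability
        and (m.ip_indemnified or not require_indemnity)
    ]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens, default=None)

if __name__ == "__main__":
    # A regulated workload that insists on indemnification lands on the
    # in-house model; a low-stakes summarization job can take the cheapest one.
    print(pick_model(min_capability=6, require_indemnity=True))
    print(pick_model(min_capability=4, require_indemnity=False))
```

The point is only the shape of the decision: which model wins depends on the requirements of the use case, not on a single capability score.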
And how is WatsonX doing – can you give us growth numbers, specific clients that differ from the initial ones announced, etc.? Or any industries/sectors it’s being used for that surprised you?
We released it at the end of July, so until the second quarter, the revenue was zero. We did say in our third-quarter earnings – and I think that that’s the number I’ll probably stick to – that we did low hundreds of millions of dollars in bookings, across both large and small.
So going from zero to low hundreds [of millions], I think, is a pretty good rate. Now, that’s not a growth rate, that’s… sort of quarter-to-quarter. But you know, if I was to extrapolate low hundreds [of millions] – if I was just hypothetically, I’m not saying it is, but if you call it 200 [million], and you say you get a bit more over time, you’re getting close to a billion dollars a year, if you can maintain that rate for a year. That feels pretty good – it feels like you’re taking share, you’re getting a footprint, you’re getting there. This is across a mixture of large and small. So that characterizes it financially, probably, as much as I would at this time.
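Krishna’s extrapolation is straightforward run-rate arithmetic. A quick sketch, using his explicitly hypothetical $200 million quarterly figure and an assumed modest growth rate, shows how the bookings annualize toward a billion dollars:

```python
# Back-of-the-envelope annualization of Krishna's hypothetical numbers.
# $200M in quarterly bookings, growing modestly each quarter, approaches
# $1B over four quarters. The figures are illustrative, not IBM's actuals.
quarterly_bookings = 200_000_000   # hypothetical quarterly bookings, per the interview
growth_per_quarter = 0.10          # assumed quarter-over-quarter growth

total = 0.0
for quarter in range(4):
    total += quarterly_bookings * (1 + growth_per_quarter) ** quarter

print(f"Annualized bookings: ${total / 1e9:.2f}B")  # roughly $0.93B
```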
Now, you said sectors – this actually is one of the surprising technologies where we’re finding interest across the sectors. Yes, you would expect that IBM is naturally going to get traction in financial and regulated industries, but it’s much, much more than that – it’s telecom, it’s retail, it’s manufacturing. I really am finding that there’s a lot of interest from a lot of things, but different use cases. Some want it for, “How do you answer phone calls?” Some want it for, “How do you train your own employees?” Some want it for, “How do I take bureaucracy out of an organization?” Some want it for, “How do I make the finance team more effective?” So you’re getting a lot of different use cases, across people.
Critics say that IBM has fallen behind in the AI race. What would you tell them?
Well, let’s see. Deep Blue was 1996, 1997 – we certainly did monetize it. And then I’d look at it tongue-in-cheek and say, “I don’t know, maybe 20 years of… all the supercomputing records had something to do with the fact that we built Deep Blue.” Because I think from ’96 to 2015, we typically had a supercomputer in the world’s top five list… and all of the work we did there, I think, applied to the way we did weather modeling…
I’d then roll forward to 2011, and when Watson won Jeopardy. I think, honestly, history should show… that maybe was the moment when the world woke up to the potential for AI. I think then, I’ve got to give OpenAI credit – it’s kind of like the Netscape moment. Suddenly, the Netscape moment made the internet very tangible, very personal to everybody, and I think ChatGPT made AI very tangible to most people. So now the market need exploded, “Okay, I can get a sense of what this can do.” I’ve also got to give credit to many universities that worked on the underlying technology of large language models.
So, while the critique that you stated is accurate – that’s what people say – I actually think that they really mean something different. What they mean is, “Hey, you guys talked about Watson and Jeopardy back in 2011. Where’s the proof? Where’s the pudding? Where’s the return? You’re talking about these clients now, why not five years ago?” So I think that’s a fair criticism, that we were slow to monetize and slow to make really consumable the learnings from Watson winning Jeopardy. And the mistake we made was that I think we went after very big, monolithic answers, which the world was not ready to absorb. People wanted to be able to tinker with it, people wanted to be able to fine-tune things, people wanted to be able to experiment, people wanted to be able to say, “I want to modify this for my use case.” And in hindsight – and hindsight is 20/20 – every technology market has gone like that. It begins with people wanting to experiment and iterate and tinker. And only then do you go towards the monolithic answer. And so beginning that way was the wrong approach.
So that’s how we pivoted early this year, and that’s why we very quickly took the things we had, and the innovations – because we’ve been working on the same innovations as the rest of the industry – and then put them into the WatsonX platform. Because as you could imagine, you couldn’t really do it in three months. It’s not like we announced it in May, and we had it in July. As you can imagine, we had been working on it for three or four years. And the moment was now. So that’s why now.
Let’s talk about the business of generative AI. This past quarter, IBM released Granite generative AI models for composing and summarizing text. There are consumer apps galore, but what does the technology really mean for businesses?
I think I would separate it across domains. In pure language, I think there will be a lot of – maybe not thousands, but there will be tens – of very successful models. I’ve got to give credit, in language, to what OpenAI does, what Microsoft does, what Google does, what Facebook does, because human language is a lot of what any consumer app is going to deal with. Now, you would say, “Okay, you give credit to all these people, and you’re acknowledging their very good models – why don’t you do it?” Well, because I do need a model in which I can offer indemnity to our clients, so I have to have something for which I know the data that is ingested, I know the guardrails built in… so we do our own.
I also want to separate the large language part and the generative part. I think the large language part is going to unlock massive productivity in enterprises. That is what I think the McKinsey number is grounded in – I like McKinsey’s number, and we triangulate to about the same – they say $4.4 trillion of annual productivity by 2030. That’s massive for what enterprises and governments can achieve. The simple use cases – “Hey, can you read this?” or “What is the example that my client was talking about yesterday…?” – that is the large language side.
The generative side, here, is important, but it’s a minor role, which is, “Give the output in a way that is appealing to me as opposed to kind of robotic.” Now, the other side of generative – in terms of modifying artwork, creating images, advertisements, pictorials, music – we’re not the experts, we’re not going to be doing any of that side of it. And I do worry a little bit about copyright and some of the issues that have been brought up by artists on that side of it. But making writing better so that it’s more appealing and easy to read? That’s a great use of generative, as far as I’m concerned.
In that same vein, IBM today launched a governance product for businesses that want to make sure their models comply with regulation, including “nutrition labels” for AI. What groups did the company work with to develop the bias and fairness monitoring metrics? Did you work with any minority leaders in the space?
We have been open, before, in terms of exposing everything we do to the whole community, both universities and some of the people from the past – I’m not going to name all the names – who have been pretty vocal about how these models can be…
Right now we try to be very careful. We don’t want to be the oracle, so we say, “What’s enshrined in law?” So in the US, I think there are 15 categories that are protected by law. Those are the categories that we will do the bias… Now, obviously, clients can choose to add more into that, but we try to stick to what’s enshrined in law in every place, and that is the way that we want to go forward…
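As one illustration of the kind of check a governance tool might report for each legally protected category, here is a minimal sketch of a common fairness metric, the demographic parity difference. The metric choice, data and flagging threshold are assumptions for the example, not a description of IBM’s product.

```python
# Illustrative fairness check: demographic parity difference for a single
# protected attribute. A governance dashboard would typically run a battery
# of such metrics, one per legally protected category, and flag breaches.
from collections import defaultdict

def demographic_parity_difference(groups, outcomes):
    """Return (gap, per-group rates), where gap is the max difference in
    favorable-outcome rates across groups.

    groups:   list of group labels (values of a protected attribute)
    outcomes: list of 0/1 model decisions, aligned with `groups`
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        favorable[g] += y
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy loan-approval decisions grouped by an illustrative protected attribute.
    groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
    outcomes = [1,   1,   0,   1,   1,   0,   0,   0]
    gap, rates = demographic_parity_difference(groups, outcomes)
    print(rates)                     # {'A': 0.75, 'B': 0.25}
    print(f"parity gap: {gap:.2f}")  # 0.50 -> would be flagged for review
```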
We want to be active, we want to influence, we want to advocate for these rules and safety standards, but I hesitate to say that we should be the complete arbiters… We should work with those in government and regulatory bodies, and in the larger community. I worry that the community doesn’t have enough resources to do this. If you want to go verify a large model and run some tests and see how it’s trained, you’re talking about hundreds of billions of dollars of infrastructure. So it’s got to be done by government, because I fear that even a well-intentioned NGO will not be able to get this done.
You’ve said in the past that AI will create more jobs than it takes, but in recent months, IBM announced a decision to replace about 8,000 jobs with AI. Does the company have any plans to use AI to upskill current employees in those areas, and which types of roles will it replace versus not?
We’re actually massively upskilling all of our employees on AI. In August, we took a week and ran a challenge inside IBM, where we encouraged all our employees to create what I call mini-applications using WatsonX as a platform – 160,000 of our employees participated for the week, and we had 30,000 teams, who all came up with really cool ideas. We picked the top dozen, which we rewarded, and we got to take those all the way to full production. In the next couple of months, we’ll do it again. So we really are taking a lot of time, we give them a lot of material, we encourage them to go learn about this and see how to use it and deploy it. I’m convinced that will make them much better employees, and it will also make them much more interesting to our clients. So it’s great – they’re good for us, and they’re more marketable, so it’s actually good for them.
I also think that, when many people hear this – I actually disagree with the way many economists and many people characterize it, that if you make somebody more productive, then you need fewer of them. That’s actually been false in history. If you are more productive, that means you have a natural economic advantage against your competition, which means you’re going to get more work, which means you’re going to need more people. And I think people forget that – they come at it with a zero-sum mentality and say it’s a zero-sum game… The world I live in, you’re more competitive, so that means you’re going to get more work, which means you need more people to do that work. So yes, certain roles will shrink because you don’t need so many people doing, maybe, email responses or phone calls, but then it will shift – maybe more applications will get done, or maybe you’ll be advertising to different markets that you previously couldn’t access. So there will be a shift – yes, the first bucket decreases, and everybody fixates on that. By the way, at our scale, that’s 3% of our entire employee population…
I fundamentally believe we’ll get more jobs. There wasn’t an internet job in 1995. How many are there today, 30 million…? There was no CNBC.com in 1995. There was a television channel.
In your eyes, what’s the most over-hyped and under-hyped aspect – specifically – of AI today?
The most overhyped is obviously this existential risk of AI taking over humanity. It is so overhyped that I think it’s fantastical, and I use that word publicly. The most underhyped is the productivity it’s going to bring to every one of the bureaucratic tasks we all live with, inside enterprises and with government.
Palantir jumps 9% to a record after announcing move to Nasdaq
Published November 15, 2024
Alex Karp, CEO of Palantir Technologies speaks during the Digital X event on September 07, 2021 in Cologne, Germany.
Andreas Rentz | Getty Images
Palantir shares continued their torrid run on Friday, soaring as much as 9% to a record, after the developer of software for the military announced plans to transfer its listing to the Nasdaq from the New York Stock Exchange.
The stock jumped past $64.50 in afternoon trading, lifting the company’s market cap to $147 billion. The shares are now up more than 50% since Palantir’s better-than-expected earnings report last week and have almost quadrupled in value this year.
Palantir said late Thursday that it expects to begin trading on the Nasdaq on Nov. 26, under its existing ticker symbol “PLTR.” While changing listing sites does nothing to alter a company’s fundamentals, board member Alexander Moore, a partner at venture firm 8VC, suggested in a post on X that the move could be a win for retail investors because “it will force” billions of dollars in purchases by exchange-traded funds.
“Everything we do is to reward and support our retail diamondhands following,” Moore wrote, referring to a term popularized in the crypto community for long-term believers.
Moore appears to have subsequently deleted his X account. His firm, 8VC, didn’t immediately respond to a request for comment.
Last Monday after market close, Palantir reported third-quarter earnings and revenue that topped estimates and issued a fourth-quarter forecast that was also ahead of Wall Street’s expectations. CEO Alex Karp wrote in the earnings release that the company “absolutely eviscerated this quarter,” driven by demand for artificial intelligence technologies.
U.S. government revenue increased 40% from a year earlier to $320 million, while U.S. commercial revenue rose 54% to $179 million. On the earnings call, the company highlighted a five-year contract to expand its Maven technology across the U.S. military. Palantir established Maven in 2017 to provide AI tools to the Department of Defense.
The post-earnings rally coincides with the period following last week’s presidential election. Palantir is seen as a potential beneficiary given the company’s ties to the Trump camp. Co-founder and Chairman Peter Thiel was a major booster of Donald Trump’s first victorious campaign, though he had a public falling out with Trump in the ensuing years.
When asked in June about his position on the 2024 election, Thiel said, “If you hold a gun to my head I’ll vote for Trump.”
Thiel’s Palantir holdings have increased in value by about $3.2 billion since the earnings report and $2 billion since the election.
In September, S&P Global announced Palantir would join the S&P 500 stock index.
Analysts at Argus Research say the rally has pushed the stock too high given the current financials and growth projections. The analysts still have a long-term buy rating on the stock and said in a report last week that the company had a “stellar” quarter, but they downgraded their 12-month recommendation to a hold.
The stock “may be getting ahead of what the company fundamentals can support,” the analysts wrote.
Super Micro faces deadline to keep Nasdaq listing after 85% plunge in stock
Published November 15, 2024
Charles Liang, chief executive officer of Super Micro Computer Inc., during the Computex conference in Taipei, Taiwan, on Wednesday, June 5, 2024. The trade show runs through June 7.
Annabelle Chih | Bloomberg | Getty Images
Super Micro Computer could be headed down a path to getting kicked off the Nasdaq as soon as Monday.
That’s the potential fate for the server company if it fails to file a viable plan for becoming compliant with Nasdaq regulations. Super Micro is late in filing its 2024 year-end report with the SEC, and has yet to replace its accounting firm. Many investors were expecting clarity from Super Micro when the company reported preliminary quarterly results last week. But they didn’t get it.
The primary component of that plan is how and when Super Micro will file its 2024 year-end report with the Securities and Exchange Commission, and why it was late. That report is something many expected would be filed alongside the company’s June fourth-quarter earnings but was not.
The Nasdaq delisting process represents a crossroads for Super Micro, which has been one of the primary beneficiaries of the artificial intelligence boom due to its longstanding relationship with Nvidia and surging demand for the chipmaker’s graphics processing units.
The one-time AI darling is reeling after a stretch of bad news. After Super Micro failed to file its annual report over the summer, activist short seller Hindenburg Research targeted the company in August, alleging accounting fraud and export control issues. The company’s auditor, Ernst & Young, stepped down in October, and Super Micro said last week that it was still trying to find a new one.
The stock is getting hammered. After soaring more than 14-fold from the end of 2022 to their peak in March of this year, the shares have since plummeted by 85%. After falling another 11% on Thursday, Super Micro’s stock is now back to where it was trading in May 2022.
Getting delisted from the Nasdaq could be next if Super Micro doesn’t file a compliance plan by the Monday deadline or if the exchange rejects the company’s submission. Super Micro could also get an extension from the Nasdaq, giving it months to come into compliance. The company said Thursday that it would provide a plan to the Nasdaq in time.
A spokesperson told CNBC the company “intends to take all necessary steps to achieve compliance with the Nasdaq continued listing requirements as soon as possible.”
While the delisting issue mainly affects the stock, it could also hurt Super Micro’s reputation and standing with its customers, who may prefer to simply avoid the drama and buy AI servers from rivals such as Dell or HPE.
“Given that Super Micro’s accounting concerns have become more acute since Super Micro’s quarter ended, its weakness could ultimately benefit Dell more in the coming quarter,” Bernstein analyst Toni Sacconaghi wrote in a note this week.
A representative for the Nasdaq said the exchange doesn’t comment on the delisting process for individual companies, but the rules suggest the process could take about a year before a final decision.
A plan of compliance
The Nasdaq warned Super Micro on Sept. 17 that it was at risk of being delisted. That gave the company 60 days to submit a plan of compliance to the exchange, and because the deadline falls on a Sunday, the effective date for the submission is Monday.
If Super Micro’s plan is acceptable to Nasdaq staff, the company is eligible for an extension of up to 180 days to file its year-end report. The Nasdaq wants to see if Super Micro’s board of directors has investigated the company’s accounting problem, what the exact reason for the late filing was and a timeline of actions taken by the board.
The Nasdaq says it looks at several factors when evaluating a plan of compliance, including the reasons for the late filing, upcoming corporate events, the overall financial status of the company and the likelihood of a company filing an audited report within 180 days. The review can also look at information provided by outside auditors, the SEC or other regulators.
Last week, Super Micro said it was doing everything it could to remain listed on the Nasdaq, and said a special committee of its board had investigated and found no wrongdoing. Super Micro CEO Charles Liang said the company would receive the board committee’s report as soon as last week. A company spokesperson didn’t respond when asked by CNBC if that report had been received.
If the Nasdaq rejects Super Micro’s compliance plan, the company can request a hearing from the exchange’s Hearings Panel to review the decision. Super Micro won’t be immediately kicked off the exchange – the hearing panel request starts a 15-day stay for delisting, and the panel can decide to extend the deadline for up to 180 days.
If the panel rejects that request or if Super Micro gets an extension and fails to file the updated financials, the company can still appeal the decision to another Nasdaq body called the Listing Council, which can grant an exception.
Ultimately, the Nasdaq says the extensions have a limit: 360 days from when the company’s first late filing was due.
A poor track record
There’s one factor at play that could hurt Super Micro’s chances of an extension. The exchange considers whether the company has any history of being out of compliance with SEC regulations.
Between 2015 and 2017, Super Micro misstated financials and published key filings late, according to the SEC. It was delisted from the Nasdaq in 2017 and was relisted two years later.
Super Micro “might have a more difficult time obtaining extensions as the Nasdaq’s literature indicates it will in part ‘consider the company’s specific circumstances, including the company’s past compliance history’ when determining whether an extension is warranted,” Wedbush analyst Matt Bryson wrote in a note earlier this month. He has a neutral rating on the stock.
History also reveals just how long the delisting process can take.
Charles Liang, chief executive officer of Super Micro Computer Inc., right, and Jensen Huang, co-founder and chief executive officer of Nvidia Corp., during the Computex conference in Taipei, Taiwan, on Wednesday, June 5, 2024.
Annabelle Chih | Bloomberg | Getty Images
Super Micro missed an annual report filing deadline in June 2017, got an extension to December and finally got a hearing in May 2018, which gave it another extension to August of that year. It was only when it missed that deadline that the stock was delisted.
In the short term, the bigger worry for Super Micro is whether customers and suppliers start to bail.
Aside from the compliance problems, Super Micro is a fast-growing company making one of the most in-demand products in the technology industry. Sales more than doubled last year to nearly $15 billion, according to unaudited financial reports, and the company has ample cash on its balance sheet, analysts say. Wall Street is expecting even more growth to about $25 billion in sales in its fiscal 2025, according to FactSet.
Super Micro said last week that the filing delay has “had a bit of an impact to orders.” In its unaudited September quarter results reported last week, the company showed growth that was slower than Wall Street expected. It also provided light guidance.
The company said one reason for its weak results was that it hadn’t yet obtained enough supply of Nvidia’s next-generation chip, called Blackwell, raising questions about Super Micro’s relationship with its most important supplier.
“We don’t believe that Super Micro’s issues are a big deal for Nvidia, although it could move some sales around in the near term from one quarter to the next as customers direct orders toward Dell and others,” wrote Melius Research analyst Ben Reitzes in a note this week.
Super Micro’s head of corporate development, Michael Staiger, told investors on a call last week that “we’ve spoken to Nvidia and they’ve confirmed they’ve made no changes to allocations. We maintain a strong relationship with them.”
Alibaba posts profit beat as China looks to prop up tepid consumer spend
Published November 15, 2024
Alibaba offices in Beijing.
Bloomberg | Getty Images
Chinese e-commerce behemoth Alibaba on Friday beat profit expectations in its September quarter, but sales fell short as sluggishness in the world’s second-largest economy hit consumer spending.
Alibaba said net income rose 58% year on year to 43.9 billion yuan ($6.07 billion) in the company’s quarter ended Sept. 30, on the back of the performance of its equity investments. This compares with an LSEG forecast of 25.83 billion yuan.
“The year-over-year increases were primarily attributable to the mark-to-market changes from our equity investments, decrease in impairment of our investments and increase in income from operations,” the company said of the annual profit jump in its earnings statement.
Revenue, meanwhile, came in at 236.5 billion yuan, 5% higher year on year but below an analyst forecast of 238.9 billion yuan, according to LSEG data.
The company’s New York-listed shares have gained ground this year to date, up more than 13%. The stock fell more than 2% in morning trading on Friday, after the release of the quarterly earnings.
Sales sentiment
Investors are closely watching the performance of Alibaba’s main business units, Taobao and Tmall Group, which reported a 1% annual uptick in revenue to 98.99 billion yuan in the September quarter.
The results come at a tricky time for Chinese commerce businesses, given a tepid retail environment in the country. Chinese e-commerce group JD.com also missed revenue expectations on Thursday, according to Reuters.
Markets are now watching whether a slew of recent stimulus measures from Beijing, including a five-year 1.4 trillion yuan package announced last week, will help resuscitate the country’s growth and curtail a long-lived real estate market slump.
The impact on the retail space looks promising so far, with sales rising by a better-than-expected 4.8% year on year in October, while China’s recent Singles’ Day shopping holiday — widely seen as a barometer for national consumer sentiment — regained some of its luster.
Alibaba touted “robust growth” in gross merchandise volume — an industry measure of sales over time that does not equate to the company’s revenue — for its Taobao and Tmall Group businesses during the festival, along with a “record number of active buyers.”
“Alibaba’s outlook remains closely aligned with the trajectory of the Chinese economy and evolving regulatory policies,” ING analysts said Thursday, noting that the company’s Friday report will shed light on the Chinese economy’s growth momentum.
The e-commerce giant’s overseas online shopping businesses, such as Lazada and Aliexpress, meanwhile posted a 29% year-on-year hike in sales to 31.67 billion yuan.
Cloud business accelerates
Alibaba’s Cloud Intelligence Group reported year-on-year sales growth of 7% to 29.6 billion yuan in the September quarter, compared with a 6% annual hike in the three-month period ended in June. The slight acceleration comes amid ongoing efforts by the company to leverage its cloud infrastructure and reposition itself as a leader in the booming artificial intelligence space.
“Growth in our Cloud business accelerated from prior quarters, with revenues from public cloud products growing in double digits and AI-related product revenue delivering triple-digit growth. We are more confident in our core businesses than ever and will continue to invest in supporting long-term growth,” Alibaba CEO Eddie Wu said in a statement Friday.
Stymied by Beijing’s sweeping 2022 crackdown on large internet and tech companies, Alibaba last year overhauled the division’s leadership and has been shaping it as a future growth driver, stepping up competition with rivals including Baidu and Huawei domestically, and Microsoft and OpenAI in the U.S.
Alibaba, which rolled out its own ChatGPT-style product Tongyi Qianwen last year, this week unveiled its own AI-powered search tool for small businesses in Europe and the Americas, and clinched a key five-year partnership to supply cloud services to Indonesian tech giant GoTo in September.
Speaking at the Apsara Conference in September, Alibaba’s Wu said the company’s cloud unit is investing “with unprecedented intensity, in the research and development of AI technology and the building of its global infrastructure,” noting that the future of AI is “only beginning.”
Correction: This article has been updated to reflect that Alibaba’s Cloud Intelligence Group reported quarterly revenue of 29.6 billion yuan in the September quarter.