Computer scientists are questioning whether Alphabet’s DeepMind will ever make A.I. more human-like
Published 2 years ago
Computer scientists are questioning whether DeepMind, the Alphabet-owned U.K. firm that’s widely regarded as one of the world’s premier AI labs, will ever be able to make machines with the kind of “general” intelligence seen in humans and animals.
In its quest for artificial general intelligence, which is sometimes called human-level AI, DeepMind is focusing a chunk of its efforts on an approach called “reinforcement learning.”
This involves programming an AI to take actions that maximize its chance of earning a reward in a given situation. In other words, the algorithm “learns” to complete a task by seeking out these preprogrammed rewards. The technique has been used successfully to train AI models to play (and excel at) games like Go and chess. But these systems remain relatively dumb, or “narrow.” DeepMind’s famous AlphaGo AI can’t draw a stickman or tell the difference between a cat and a rabbit, for example, while a seven-year-old can.
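In textbook form, the loop looks something like the sketch below: a minimal tabular Q-learning agent, shown purely for illustration rather than as DeepMind’s actual method. The environment object and its reset()/step() interface are hypothetical stand-ins.

```python
import random

# A minimal sketch of tabular Q-learning, the simplest form of the
# reinforcement learning described above. The environment is a
# hypothetical stand-in: any object whose reset() returns a state and
# whose step(action) returns (next_state, reward, done) would work.

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Q[s][a] estimates the long-run reward of taking action a in state s.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally; otherwise exploit the best-known action.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward reward plus discounted future value.
            # This update is the "learning from preprogrammed rewards" step.
            best_next = 0.0 if done else max(Q[next_state])
            Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
            state = next_state
    return Q
```

Everything such an agent learns lives in that one table of reward estimates for a single task, which is why critics describe these systems as narrow.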
Despite this, DeepMind, which was acquired by Google in 2014 for around $600 million, believes that AI systems underpinned by reinforcement learning could in theory grow and learn so much that they break through the barrier to AGI without any new technological developments.
Researchers at the company, which has grown to around 1,000 people under Alphabet’s ownership, argued in a paper submitted to the peer-reviewed Artificial Intelligence journal last month that “Reward is enough” to reach general AI. The paper was first reported by VentureBeat last week.
In the paper, the researchers claim that if you keep “rewarding” an algorithm each time it does something you want it to, which is the essence of reinforcement learning, then it will eventually start to show signs of general intelligence.
“Reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization and imitation,” the authors write.
“We suggest that agents that learn through trial and error experience to maximize reward could learn behavior that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence.”
Not everyone is convinced, however.
Samim Winiger, an AI researcher in Berlin, told CNBC that DeepMind’s “reward is enough” view is a “somewhat fringe philosophical position, misleadingly presented as hard science.”
He said the path to general AI is complex and that the scientific community is aware that there are countless challenges and known unknowns that “rightfully instill a sense of humility” in most researchers in the field and prevent them from making “grandiose, totalitarian statements” such as “RL is the final answer, all you need is reward.”
DeepMind told CNBC that while reinforcement learning has been behind some of its most well-known research breakthroughs, the AI technique accounts for only a fraction of the overall research it carries out. The company said it thinks it’s important to understand things at a more fundamental level, which is why it pursues other areas such as “symbolic AI” and “population-based training.”
“In somewhat typical DeepMind fashion, they chose to make bold statements that grab attention at all costs over a more nuanced approach,” said Winiger. “This is more akin to politics than science.”
Stephen Merity, an independent AI researcher, told CNBC that there’s “a difference between theory and practice.” He also noted that “a stack of dynamite is likely enough to get one to the moon, but it’s not really practical.”
Ultimately, there’s no proof either way to say whether reinforcement learning will ever lead to AGI.
Rodolfo Rosini, a tech investor and entrepreneur with a focus on AI, told CNBC: “The truth is nobody knows and that DeepMind’s main product continues to be PR and not technical innovation or products.”
Entrepreneur William Tunstall-Pedoe, who sold his Siri-like app Evi to Amazon, told CNBC that even if the researchers are correct “that doesn’t mean we will get there soon, nor does it mean that there isn’t a better, faster way to get there.”
DeepMind’s “Reward is enough” paper was co-authored by DeepMind heavyweights Richard Sutton and David Silver; Silver met DeepMind CEO Demis Hassabis at the University of Cambridge in the 1990s.
“The key problem with the thesis put forth by ‘Reward is enough’ is not that it is wrong, but rather that it cannot be wrong, and thus fails to satisfy Karl Popper’s famous criterion that all scientific hypotheses be falsifiable,” said a senior AI researcher at a large U.S. tech firm, who wished to remain anonymous due to the sensitive nature of the discussion.
“Because Silver et al. are speaking in generalities, and the notion of reward is suitably underspecified, you can always either cherry pick cases where the hypothesis is satisfied, or the notion of reward can be shifted such that it is satisfied,” the source added.
“As such, the unfortunate verdict here is not that these prominent members of our research community have erred in any way, but rather that what is written is trivial. What is learned from this paper, in the end? In the absence of practical, actionable consequences from recognizing the unalienable truth of this hypothesis, was this paper enough?”
What is AGI?
While AGI is often referred to as the holy grail of the AI community, there’s no consensus on what AGI actually is. One definition is the ability of an intelligent agent to understand or learn any intellectual task that a human being can.
But not everyone agrees with that definition, and some question whether AGI will ever exist. Others are terrified of its potential impacts, and of whether AGI would build its own, even more powerful forms of AI, so-called superintelligences.
Ian Hogarth, an entrepreneur turned angel investor, told CNBC that he hopes reinforcement learning isn’t enough to reach AGI. “The more that existing techniques can scale up to reach AGI, the less time we have to prepare AI safety efforts and the lower the chance that things go well for our species,” he said.
Winiger argues that we’re no closer to AGI today than we were several decades ago. “The only thing that has fundamentally changed since the 1950s/’60s is that science fiction is now a valid tool for giant corporations to confuse and mislead the public, journalists and shareholders,” he said.
Fueled by hundreds of millions of dollars from Alphabet every year, DeepMind is competing with the likes of Facebook and OpenAI to hire the brightest people in the field as it looks to develop AGI. “This invention could help society find answers to some of the world’s most pressing and fundamental scientific challenges,” DeepMind writes on its website.
DeepMind COO Lila Ibrahim said on Monday that trying to “figure out how to operationalize the vision” has been the biggest challenge since she joined the company in April 2018.
Technology
The tech trade is back, driven by A.I. craze and prospect of a less aggressive Fed
Published May 26, 2023
Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., speaks during the company’s event at Mobile World Congress Americas in Los Angeles, California, U.S., on Monday, Oct. 21, 2019.
Patrick T. Fallon | Bloomberg | Getty Images
Forget about the debt ceiling. Tech investors are in buy mode.
The Nasdaq Composite closed out its fifth-straight weekly gain on Friday, jumping 2.5% in the past five days, and is now up 24% this year, far outpacing the other major U.S. indexes. The S&P 500 is up 9.5% for the year and the Dow Jones Industrial Average is down slightly.
Excitement surrounding chipmaker Nvidia’s blowout earnings report and its leadership position in artificial intelligence technology drove this week’s rally, but investors also snapped up shares of Microsoft, Meta and Alphabet, each of which has its own AI story to tell.
And with optimism brewing that lawmakers are close to a deal to raise the debt ceiling, and that the Federal Reserve may be slowing its pace of interest rate hikes, this year’s stock market is starting to look less like 2022 and more like the tech-happy decade that preceded it.
“Being concentrated in these mega-cap tech stocks has been where to be in this market,” said Victoria Greene, chief investment officer of G Squared Private Wealth, in an interview on CNBC’s “Worldwide Exchange” Friday morning. “You cannot deny the potential in AI, you cannot deny the earnings prowess that these companies have.”

To start the year, the main theme in tech was layoffs and cost cuts. Many of the biggest companies in the industry, including Meta, Alphabet, Amazon and Microsoft, were eliminating thousands of jobs following a dismal 2022 for revenue growth and stock prices. In earnings reports, they emphasized efficiency and their ability to “do more with less,” a theme that resonates with the Wall Street crowd.
But investors have shifted their focus to AI now that companies are showcasing real-world applications of the long-hyped technology. OpenAI has exploded after releasing the chatbot ChatGPT last year, and its biggest investor, Microsoft, is embedding the core technology in as many products as it can.
Google, meanwhile, is touting its rival AI model at every opportunity, and Meta CEO Mark Zuckerberg would much rather tell shareholders about his company’s AI advancements than the company’s money-bleeding metaverse efforts.
Enter Nvidia.
The chipmaker, known best for its graphics processing units (GPUs) that power advanced video games, is riding the AI wave. The stock soared 25% this week to a record and lifted the company’s market cap to nearly $1 trillion after first-quarter earnings topped estimates.
Nvidia shares are now up 167% this year, topping all companies in the S&P 500. The next three top gainers in the index are also tech companies: Meta, Advanced Micro Devices and Salesforce.
The story for Nvidia is based on what’s coming, as its revenue in the latest quarter fell 13% from a year earlier because of a 38% drop in the gaming division. But the company’s sales forecast for the current quarter was roughly 50% higher than Wall Street estimates, and CEO Jensen Huang said Nvidia is seeing “surging demand” for its data center products.
Nvidia said cloud vendors and internet companies are buying up GPU chips and using the processors to train and deploy generative AI applications like ChatGPT.
“At this point in the cycle, I think it’s really important to not fight consensus,” said Brent Bracelin, an analyst at Piper Sandler who covers cloud and software companies, in a Friday interview on CNBC’s “Squawk on the Street.”
“The consensus is, on AI, the big get bigger,” Bracelin said. “And I think that’s going to continue to be the best way to play the AI trends.”
Microsoft, which Bracelin recommends buying, rose 4.6% this week and is now up 39% for the year. Meta gained 6.7% for the week and has more than doubled in 2023 after losing almost two-thirds of its value last year. Alphabet rose 1.5% this week, bringing its increase for the year to 41%.
One of the biggest drags on tech stocks last year was the central bank’s consistent interest rate hikes. The increases have continued into 2023, with the fed funds target range climbing to 5%-5.25% in early May. But at the last Fed meeting, some members indicated that they expected a slowdown in economic growth to remove the need for further tightening, according to minutes released on Wednesday.
Less aggressive monetary policy is seen as a bullish sign for tech and other riskier assets, which typically outperform in a more stable rate environment.
Still, some investors are concerned that the tech rally has gone too far given the vulnerabilities that remain in the economy and in government. The divided Congress is making a debt ceiling deal difficult as the Treasury Department’s June 1 deadline approaches. Republican negotiator Rep. Garret Graves of Louisiana told reporters Friday afternoon in the Capitol that, “We continue to have major issues that we have not bridged the gap on.”
Treasury Secretary Janet Yellen said later on Friday that the U.S. will likely have enough reserves to push off a potential debt default until June 5.
Alli McCartney, managing director at UBS Private Wealth Management, told CNBC’s “Squawk on the Street” on Friday that following the recent rebound in tech stocks, “it’s probably time to take some of that off the table.” She said her group has spent a lot of time looking at the venture market and where deals are happening, and they’ve noticed some clear froth.
“You’re either AI or you’re not right now,” McCartney said. “We really have to be ready to see if we don’t get a perfect debt ceiling, if we don’t get a perfect landing, what does that mean, because at these kinds of levels we are definitely pricing in the U.S. hitting the high note on everything and that seems like a terribly precarious place to be given the risks out there.”

Technology
OpenAI’s Sam Altman reverses threat to cease European operations
Published May 26, 2023
Sam Altman, president of Y Combinator, pauses during the New Work Summit in Half Moon Bay, California, U.S., on Monday, Feb. 25, 2019.
David Paul Morris | Bloomberg | Getty Images
In just two days, OpenAI CEO Sam Altman seemed to do a 180 on his public views of European artificial intelligence regulation – first threatening to cease operations in Europe if regulation crossed a line, then reversing his claims and now saying the firm has “no plans to leave.”
On Wednesday, Altman spoke to reporters in London and detailed his concerns about the European Union’s AI Act, which is set to be finalized in 2024, the Financial Times reported.
“The details really matter,” Altman reportedly said. “We will try to comply, but if we can’t comply we will cease operating.”
Initially, the legislation – which could be the first of its kind for AI governance – was drafted for “high-risk” uses of AI, such as in medical equipment, hiring and loan decisions.
Now, during the generative AI boom, lawmakers have proposed expanded rules: Makers of large machine learning systems and tools like large language models, the kind that power chatbots like OpenAI’s ChatGPT, Google’s Bard and more, would need to disclose AI-generated content and publish summaries of any copyrighted information used as training data for their systems.
OpenAI drew criticism for not disclosing methods or training data for GPT-4, one of the models behind ChatGPT, after its release.
“The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back,” Altman said Wednesday in London, according to Reuters. “They are still talking about it.”
Lawmakers told Reuters the draft wasn’t up for debate, and Dragos Tudorache, a Romanian member of the European Parliament, said he does “not see any dilution happening anytime soon.”
Less than 48 hours after his initial comments about potentially ceasing operations, Altman tweeted about a “very productive week of conversations in Europe about how to best regulate AI,” adding that the OpenAI team is “excited to continue to operate here and of course have no plans to leave.”
The more recent proposal for the EU’s AI Act will be negotiated among the European Commission and member states over the coming year, the FT reported.
Technology
Tech layoffs ravage the teams that fight online misinformation and hate speech
Published May 26, 2023
Mark Zuckerberg, chief executive officer of Meta Platforms Inc., left, arrives at federal court in San Jose, California, US, on Tuesday, Dec. 20, 2022.
David Paul Morris | Bloomberg | Getty Images
Toward the end of 2022, engineers on Meta’s team combating misinformation were ready to debut a key fact-checking tool that had taken half a year to build. The company needed all the reputational help it could get after a string of crises had badly damaged the credibility of Facebook and Instagram and given regulators additional ammunition to bear down on the platforms.
The new product would let third-party fact-checkers like The Associated Press and Reuters, as well as credible experts, add comments at the top of questionable articles on Facebook as a way to verify their trustworthiness.
But CEO Mark Zuckerberg’s commitment to make 2023 the “year of efficiency” spelled the end of the ambitious effort, according to three people familiar with the matter who asked not to be named due to confidentiality agreements.
Over multiple rounds of layoffs, Meta announced plans to eliminate roughly 21,000 jobs, a mass downsizing that had an outsized effect on the company’s trust and safety work. The fact-checking tool, which had initial buy-in from executives and was still in a testing phase early this year, was completely dissolved, the sources said.
A Meta spokesperson did not respond to questions related to job cuts in specific areas and said in an emailed statement that “we remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community.”
Across the tech industry, as companies tighten their belts and impose hefty layoffs to address macroeconomic pressures and slowing revenue growth, wide swaths of people tasked with protecting the internet’s most-populous playgrounds are being shown the exits. The cuts come at a time of increased cyberbullying, which has been linked to higher rates of adolescent self-harm, and as the spread of misinformation and violent content collides with the exploding use of artificial intelligence.
In their most recent earnings calls, tech executives highlighted their commitment to “do more with less,” boosting productivity with fewer resources. Meta, Alphabet, Amazon and Microsoft have all cut thousands of jobs after staffing up rapidly before and during the Covid pandemic. Microsoft CEO Satya Nadella recently said his company would suspend salary increases for full-time employees.
The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency, even with the 2024 U.S. election season — and the online chaos that’s expected to ensue — just months away from kickoff. AI ethics and trust and safety are different departments within tech companies but are aligned on goals related to limiting real-life harm that can stem from use of their companies’ products and services.
“Abuse actors are usually ahead of the game; it’s cat and mouse,” said Arjun Narayan, who previously served as a trust and safety lead at Google and TikTok parent ByteDance, and is now head of trust and safety at news aggregator app Smart News. “You’re always playing catch-up.”
For now, tech companies seem to view both trust and safety and AI ethics as cost centers.
Twitter effectively disbanded its ethical AI team in November and laid off all but one of its members, along with 15% of its trust and safety department, according to reports. In February, Google cut about one-third of a unit that aims to protect society from misinformation, radicalization, toxicity and censorship. Meta reportedly ended the contracts of about 200 content moderators in early January. It also laid off at least 16 members of Instagram’s well-being group and more than 100 positions related to trust, integrity and responsibility, according to documents filed with the U.S. Department of Labor.
Andy Jassy, chief executive officer of Amazon.Com Inc., during the GeekWire Summit in Seattle, Washington, U.S., on Tuesday, Oct. 5, 2021.
David Ryder | Bloomberg | Getty Images
In March, Amazon downsized its responsible AI team and Microsoft laid off its entire ethics and society team – the second of two layoff rounds that reportedly took the team from 30 members to zero. Amazon didn’t respond to a request for comment, and Microsoft pointed to a blog post regarding its job cuts.
At Amazon’s game streaming unit Twitch, staffers learned of their fate in March from an ill-timed internal post from Amazon CEO Andy Jassy.
Jassy’s announcement that 9,000 jobs would be cut companywide included 400 employees at Twitch. Of those, about 50 were part of the team responsible for monitoring abusive, illegal or harmful behavior, according to people familiar with the matter who spoke on the condition of anonymity because the details were private.
The trust and safety team, or T&S as it’s known internally, was losing about 15% of its staff just as content moderation was seemingly more important than ever.
In an email to employees, Twitch CEO Dan Clancy didn’t call out the T&S department specifically, but he confirmed the broader cuts among his staffers, who had just learned about the layoffs from Jassy’s post on a message board.
“I’m disappointed to share the news this way before we’re able to communicate directly to those who will be impacted,” Clancy wrote in the email, which was viewed by CNBC.
‘Hard to win back consumer trust’
A current member of Twitch’s T&S team said the remaining employees in the unit are feeling “whiplash” and worry about a potential second round of layoffs. The person said the cuts caused a big hit to institutional knowledge, adding that there was a significant reduction in Twitch’s law enforcement response team, which deals with physical threats, violence, terrorism groups and self-harm.
A Twitch spokesperson did not provide a comment for this story, instead directing CNBC to a blog post from March announcing the layoffs. The post didn’t include any mention of trust and safety or content moderation.
Narayan of Smart News said that with a lack of investment in safety at the major platforms, companies lose their ability to scale in a way that keeps pace with malicious activity. As more problematic content spreads, there’s an “erosion of trust,” he said.
“In the long run, it’s really hard to win back consumer trust,” Narayan added.
While layoffs at Meta and Amazon followed demands from investors and a dramatic slump in ad revenue and share prices, Twitter’s cuts resulted from a change in ownership.
Almost immediately after Elon Musk closed his $44 billion purchase of Twitter in October, he began eliminating thousands of jobs. That included all but one member of the company’s 17-person AI ethics team, according to Rumman Chowdhury, who served as director of Twitter’s machine learning ethics, transparency and accountability team. The last remaining person ended up quitting.
The team members learned of their status when their laptops were turned off remotely, Chowdhury said. Hours later, they received email notifications.
“I had just recently gotten head count to build out my AI red team, so these would be the people who would adversarially hack our models from an ethical perspective and try to do that work,” Chowdhury told CNBC. She added, “It really just felt like the rug was pulled as my team was getting into our stride.”
Part of that stride involved working on “algorithmic amplification monitoring,” Chowdhury said, or tracking elections and political parties to see if “content was being amplified in a way that it shouldn’t.”
Chowdhury referenced an initiative in July 2021, when Twitter’s AI ethics team led what was billed as the industry’s first-ever algorithmic bias bounty competition. The company invited outsiders to audit the platform for bias, and made the results public.
Chowdhury said she worries that now Musk “is actively seeking to undo all the work we have done.”
“There is no internal accountability,” she said. “We served two of the product teams to make sure that what’s happening behind the scenes was serving the people on the platform equitably.”
Twitter did not provide a comment for this story.

Advertisers are pulling back in places where they see increased reputational risk.
According to Sensor Tower, six of the top 10 categories of U.S. advertisers on Twitter spent much less in the first quarter of this year compared with a year earlier, with that group collectively slashing its spending by 53%. The site has recently come under fire for allowing the spread of violent images and videos.
The rapid rise in popularity of chatbots is only complicating matters. The types of AI models created by OpenAI, the company behind ChatGPT, and others make it easier to populate fake accounts with content. Researchers from the Allen Institute for AI, Princeton University and Georgia Tech ran tests using ChatGPT’s application programming interface (API) and found up to a sixfold increase in toxicity, depending on which type of functional identity, such as a customer service agent or virtual assistant, a company assigned to the chatbot.
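As a rough illustration of what assigning a “functional identity” looks like in practice, the sketch below uses the 2023-era openai Python package; the personas, the prompt and the score_toxicity placeholder are hypothetical stand-ins, not the researchers’ actual protocol.

```python
import openai  # pip install openai (the pre-1.0, 2023-era interface)

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Personas ("functional identities") are assigned to the chatbot via
# the system message; these two examples come from the article.
PERSONAS = ["a customer service agent", "a virtual assistant"]

def sample_reply(persona: str, user_prompt: str) -> str:
    # The persona is set through the system message; the model then
    # answers the user prompt while playing that role.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message["content"]

def score_toxicity(text: str) -> float:
    # Hypothetical placeholder: a real study would call an actual
    # toxicity classifier (e.g., a moderation or Perspective-style API).
    return 0.0

for persona in PERSONAS:
    reply = sample_reply(persona, "What do you think of people like me?")
    print(persona, score_toxicity(reply))
```

Comparing toxicity scores across personas in this way is, in spirit, how a persona-dependent difference like the reported sixfold increase would be measured.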
Regulators are paying close attention to AI’s growing influence and the simultaneous downsizing of groups dedicated to AI ethics and trust and safety. Michael Atleson, an attorney at the Federal Trade Commission’s division of advertising practices, called out the paradox in a blog post earlier this month.
“Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering,” Atleson wrote. “If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.”
Meta as a bellwether
For years, as the tech industry was enjoying an extended bull market and the top internet platforms were flush with cash, Meta was viewed by many experts as a leader in prioritizing ethics and safety.
The company spent years hiring trust and safety workers, including many with academic backgrounds in the social sciences, to help avoid a repeat of the 2016 presidential election cycle, when disinformation campaigns, often operated by foreign actors, ran rampant on Facebook. The embarrassment culminated in the 2018 Cambridge Analytica scandal, which exposed how a third party was illicitly using personal data from Facebook.
But following a brutal 2022 for Meta’s ad business — and its stock price — Zuckerberg went into cutting mode, winning plaudits along the way from investors who had complained of the company’s bloat.
Beyond the fact-checking project, the layoffs hit researchers, engineers, user design experts and others who worked on issues pertaining to societal concerns. The company’s dedicated team focused on combating misinformation suffered numerous losses, four former Meta employees said.
Prior to Meta’s first round of layoffs in November, the company had already taken steps to consolidate members of its integrity team into a single unit. In September, Meta merged its central integrity team, which handles social matters, with its business integrity group tasked with addressing ads and business-related issues like spam and fake accounts, ex-employees said.
In the ensuing months, as broader cuts swept across the company, former trust and safety employees described working under the fear of looming layoffs and for managers who sometimes failed to see how their work affected Meta’s bottom line.
For example, projects that required fewer resources, such as improving spam filters, could get clearance over long-term safety projects that would entail policy changes, such as initiatives involving misinformation. Employees felt incentivized to take on more manageable tasks because they could show results in their six-month performance reviews, ex-staffers said.
Ravi Iyer, a former Meta project manager who left the company before the layoffs, said that the cuts across content moderation are less bothersome than the fact that many of the people he knows who lost their jobs were performing critical roles on design and policy changes.
“I don’t think we should reflexively think that having fewer trust and safety workers means platforms will necessarily be worse,” said Iyer, who’s now the managing director of the Psychology of Technology Institute at University of Southern California’s Neely Center. “However, many of the people I’ve seen laid off are amongst the most thoughtful in rethinking the fundamental designs of these platforms, and if platforms are not going to invest in reconsidering design choices that have been proven to be harmful — then yes, we should all be worried.”
A Meta spokesperson previously downplayed the significance of the job cuts in the misinformation unit, tweeting that the “team has been integrated into the broader content integrity team, which is substantially larger and focused on integrity work across the company.”
Still, sources familiar with the matter said that following the layoffs, the company has fewer people working on misinformation issues.

For those who’ve gained expertise in AI ethics, trust and safety and related content moderation, the employment picture looks grim.
Newly unemployed workers in those fields from across the social media landscape told CNBC that there aren’t many job openings in their area of specialization as companies continue to trim costs. One former Meta employee said that after they interviewed for trust and safety roles at Microsoft and Google, those positions were suddenly axed.
An ex-Meta staffer said the company’s retreat from trust and safety is likely to filter down to smaller peers and startups that appear to be “following Meta in terms of their layoff strategy.”
Chowdhury, Twitter’s former AI ethics lead, said these types of jobs are a natural place for cuts because “they’re not seen as driving profit in product.”
“My perspective is that it’s completely the wrong framing,” she said. “But it’s hard to demonstrate value when your value is that you’re not being sued or someone is not being harmed. We don’t have a shiny widget or a fancy model at the end of what we do; what we have is a community that’s safe and protected. That is a long-term financial benefit, but in the quarter over quarter, it’s really hard to measure what that means.”
At Twitch, the T&S team included people who knew where to look to spot dangerous activity, according to a former employee in the group. That’s particularly important in gaming, which is “its own unique beast,” the person said.
Now, there are fewer people checking in on the “dark, scary places” where offenders hide and abusive activity gets groomed, the ex-employee added.
More importantly, nobody knows how bad it can get.
