Binghamton Herald
Sunday, April 19, 2026

Meta’s Findings Reveal AI Content Made Up Less Than 1% Of Election-Related Misinformation In 2024

by Binghamton Herald Report
December 5, 2024
in Trending

Meta has faced heavy criticism in recent years over accusations that it allowed AI-generated misinformation to circulate on its social media platforms during major elections and influence voters. The company now reports that AI-generated content accounted for less than one per cent of the misinformation fact-checked during elections held in over 40 countries this year, including India.

The finding comes from the social media giant’s analysis of content shared on its platforms during elections in countries including the US, Bangladesh, Indonesia, India, Pakistan, France, the UK, South Africa, Mexico, and Brazil, as well as the EU Parliament elections.


Nick Clegg, Meta’s president of global affairs, wrote in a blog post: “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in spreading propaganda and disinformation did not materialise on its platforms: Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections, saying it dismantled more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that its AI image generation tool, Imagine, denied more than 590,000 user requests to create election-related deepfakes, including AI-generated images of figures such as President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”

Meta has been under a lot of fire for the past few years, with accusations that it has allowed AI-generated misinforming content to be posted on its social media platforms during major elections to influence voters. Meta has now found out that AI-generated content made up even less than one per cent of misinformation that was fact-checked during elections held in over 40 countries this year, including in India. 

The discovery was made through the social media giant’s analysis of content shared on its platforms during elections in countries such as the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico, and Brazil.

ALSO READ | Samsung Galaxy S25 Ultra Leaks: Here’s How Much The Upcoming Flagship Smartphone Might Cost You

Nick Clegg, the global affairs president at Meta, in a blog post wrote, “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”

Meta’s statements indicate that earlier concerns about AI’s role in propagating propaganda and disinformation did not materialise on its platforms like Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections by dismantling more than 20 new “covert influence operations.”

Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”

The company also reported that it denied more than 590,000 requests from users to create election-related deepfakes, including AI-generated images of figures like President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden, on its AI image generation tool, Imagine.

Meta Admits Excess Content Moderation During Pandemic

Recently, Meta’s Nick Clegg admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”

He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”


© 2024 Binghamton Herald or its affiliated companies.
