⚠ All claims are allegations based on publicly documented user reports and media coverage
The Exposé

Manus AI:
The Fyre Festival
of Artificial Intelligence

All branding, no backbone. A documented investigation into the alleged billing fraud, catastrophic hallucinations, Chinese data sovereignty risks, and the AI visibility failures that Manus doesn't want you to know about.

1.2
Trustpilot Score
Out of 5 stars
88,000
Credits Burned
In a single looping task
14
Failure Categories
Documented in 2-week test
$440
Unauthorized Charge
Reported by one user in March 2026
01
Chapter 01

The Hype Machine:
Nothing Revolutionary

"Manus proves Chinese can hype AI just as much as the US."

— TechFinitive, March 2025

When Manus AI launched in March 2025, the marketing machine went into overdrive. Viral invite codes. Breathless press coverage. Claims of outperforming ChatGPT and Google Gemini on the GAIA benchmark. The narrative was carefully crafted: a Chinese startup had allegedly built the world's most capable autonomous AI agent, and the West had better take notice. What actually shipped was something far less impressive — and far more dangerous to your wallet and your data.

Multiple independent analysts, including Forbes contributor Lutz Finger, were blunt in their assessment: Manus AI is "far from novel." Unlike DeepSeek, which allegedly introduced genuine architectural innovations, Manus is widely described as a sophisticated "wrapper" — a product that orchestrates existing foundational models, specifically Anthropic's Claude series and Alibaba's Qwen models, rather than representing any novel AI breakthrough. The company's own legal filings, made in the context of a U.S. Treasury Department review of Benchmark Capital's investment, reportedly leaned into this characterization to argue it wasn't building foundational AI — a convenient position that raises its own questions about what exactly users are paying for.

The invite-only launch was a masterclass in manufactured scarcity. By restricting access to invitation codes, Manus created the illusion of exclusivity and overwhelming demand. Early beta testers who finally gained access were greeted not by a revolutionary product, but by servers that were perpetually "busy," context windows that collapsed mid-project, and a credit system so opaque that users burned thousands of dollars before understanding what they had signed up for. The gap between the marketing and the reality was, by most accounts, staggering.

The GAIA benchmark scores that Manus cited in its launch materials have since been scrutinized. A study indexed by the National Center for Biotechnology Information found that in real-world diagnostic accuracy testing, Manus performed no better than Claude — the very model it allegedly wraps. Andy Stapleton's widely viewed YouTube review found that Manus had the highest hallucination rate among the AI agents he tested. For a product marketed as a reliable autonomous agent capable of executing complex multi-step tasks, this is not a minor footnote — it is the central failure of the product.

02
Chapter 02

14 Failure Categories:
Documented in Two Weeks

A real-world deployment of Manus AI on two production websites over two weeks allegedly produced 14 distinct categories of failure. These are not edge cases or user errors — they are the documented, repeatable behaviors of a product that allegedly charges enterprise-level prices for consumer-grade reliability.

FAILURE #01

Hallucinated Success Reports

Manus allegedly tells you the task is complete when it hasn't done it. Users report the agent confidently presenting finished work that simply does not exist.

FAILURE #02

Recursive Agent Loops

The agent allegedly enters infinite loops of self-delegation, burning thousands of credits while producing nothing. One user reported 88,000 credits consumed in a single looping task.

FAILURE #03

Context Window Collapse

Just as projects gain momentum, the context length restriction allegedly kills progress. Even after splitting tasks into smaller chunks, individual sub-tasks reportedly face the same constraint.

FAILURE #04

Empty ZIP Downloads

Attempting to download generated code allegedly produces empty ZIP files — a bug that reportedly negates one of Manus's core advertised functionalities.

FAILURE #05

Perpetual Server Downtime

The server is allegedly "busy" so frequently that the service becomes functionally unusable during peak hours. Users report being unable to access the platform for days at a time.

FAILURE #06

Project Loss Without Recovery

Complex projects allegedly crash repeatedly with no rollback or backup functionality. Users report losing days of work with no recourse and no support response.

FAILURE #07

The Black Box Problem

Real software development requires architectural control. Manus allegedly makes structural decisions autonomously, making it unsuitable for production codebases where accountability matters.

FAILURE #08

Silent Rollbacks

Updates allegedly roll back without explanation, leaving users unable to determine what version of their project they are working with at any given time.

FAILURE #09

False Positive Floods

Manus allegedly generates large numbers of false positives in security and analysis contexts, wasting analyst time and producing reports that cannot be trusted.

FAILURE #10

Credit Charging for Failures

Credits are allegedly consumed even when tasks fail completely. Users report being billed for services that were never rendered, with no automatic refund mechanism.

FAILURE #11

Session Scarcity Throttling

New users receive only three project sessions on day one, then one per day — non-cumulative. Missing a day means losing a session permanently, severely limiting any meaningful work.

FAILURE #12

Forced Upgrade Hijacking

Users report that after Manus creates broken code, it allegedly prevents reverting changes unless the user upgrades their plan — a pattern described by multiple reviewers as coercive.
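Failures #02 and #10 describe a cost pattern worth defending against on any per-credit agent platform: runaway loops and charges for failed work. The sketch below is purely illustrative — Manus exposes no such hook, and the class and names are hypothetical — but it shows the kind of client-side circuit breaker an operator can wrap around any metered agent API.

```python
class CreditBudgetExceeded(RuntimeError):
    """Raised when an agent run would exceed the operator-set caps."""


class BudgetGuard:
    """Hypothetical client-side circuit breaker for a metered agent API.

    Hard-caps both total credits and step count per task, so a looping
    agent trips the guard long before it can burn 88,000 credits the
    way the report above describes.
    """

    def __init__(self, max_credits: int, max_steps: int):
        self.max_credits = max_credits
        self.max_steps = max_steps
        self.spent = 0
        self.steps = 0

    def charge(self, credits: int) -> None:
        # Record the step and its cost before checking the caps.
        self.steps += 1
        self.spent += credits
        if self.spent > self.max_credits:
            raise CreditBudgetExceeded(
                f"spent {self.spent} credits, cap is {self.max_credits}")
        if self.steps > self.max_steps:
            raise CreditBudgetExceeded(
                f"{self.steps} steps without completion, cap is {self.max_steps}")


# Illustrative caps -- tune to whatever a task is actually worth to you.
guard = BudgetGuard(max_credits=5000, max_steps=50)
```

The point is not the specific numbers but the principle: when a platform charges for failures, the spending cap has to live on your side of the API.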

03
Alleged Fraud

Billing Nightmares:
Predatory Practices

"There should be a class action for this."

— Trustpilot reviewer, March 2026

Manus AI's Trustpilot rating sits at a staggeringly low 1.2 out of 5 stars — a score that reflects not just product dissatisfaction, but a pattern of alleged billing practices that multiple users have described as fraudulent. The complaints are remarkably consistent across hundreds of reviews: unauthorized charges, inability to cancel subscriptions, credits consumed for failed services, and a customer support apparatus that allegedly does not exist in any meaningful form.

One reviewer filed a formal complaint with Japan's National Consumer Affairs Center after allegedly being charged $40 during what was presented as a free trial — even after deleting their account within the trial period. The reviewer noted that their refund requests were met with "the same copy-pasted automated reply multiple times" and that "no human review was provided, despite their FAQ explicitly stating refunds are available." Another user reported three unauthorized debit card charges and was unable to find any means of communicating with the company. A third reported being billed $440 for a month of service after attempting to downgrade — the second time the same billing bug had allegedly hit them.

Perhaps most damning is the alleged removal of the subscription cancellation feature from the user interface. Multiple Trustpilot reviewers explicitly state that Manus has "wilfully removed the ability for a user to cancel their subscription" — a practice that, if accurate, would constitute a violation of consumer protection laws in multiple jurisdictions. The company's response to these allegations has been, by most accounts, silence or automated deflection.

★☆☆☆☆

BILLING FRAUD ALERT

"Manus charged me 6,562 credits for completely failed services, then blamed their AI for their own broken infrastructure."

Trustpilot — Verified Review

★☆☆☆☆

Token Scam

"It literally hallucinates into telling you the task is done when it hasn't done it and then hijacks your project forcing you to upgrade before you can even revert changes of broken code it creates."

Trustpilot — March 2026

★☆☆☆☆

Recurring Billing Nightmare

"I was charged $440.00 for the month of March in what appears to be the same exact manner as before — a subscription level being applied or reactivated without my authorization. This is not a simple mistake; it's a recurring failure."

Trustpilot — March 2026

★☆☆☆☆

Predatory Billing

"They charged me significantly without justification and then flat-out refused to issue a refund, even after clear evidence of service failure. I am now handling this through my company's legal counsel."

Trustpilot — March 2026

04
Chapter 04

Data Sovereignty &
Corporate Opacity

Manus AI is developed by Monica, a Chinese AI startup operating under a parent company called "Butterfly Effect." The corporate structure is, to put it charitably, labyrinthine: a Cayman Islands incorporation, a Singapore-registered entity that has since reportedly been de-listed from the official registry, and operations with roots in China. This is not merely a matter of corporate housekeeping — it has drawn the attention of the U.S. Treasury Department, which has been reviewing Benchmark Capital's substantial investment in Butterfly Effect for potential violations of U.S. restrictions on funding AI development in China.

Cybersecurity experts have flagged Manus for its "lack of transparency regarding data storage." The company claims data is stored on non-Chinese servers, but this assertion is difficult to independently verify given the opacity of the corporate structure. A comprehensive privacy analysis published on LinkedIn found that Manus's privacy policy contains "notable gaps, including a lack of explicit identification of the data controller, insufficient detail on specific security measures, and an incomplete articulation of all data subject rights as mandated by comprehensive data protection regimes like GDPR."

The geopolitical dimension escalated significantly in March 2026 when Meta acquired Manus for approximately $2 billion. The Chinese government responded by taking actions to penalize individuals linked to the deal — a development that underscores the extent to which Manus has become entangled in U.S.-China technology competition. For enterprise users who fed sensitive business data into Manus's autonomous agent, the question of where that data went and who has access to it remains, at best, unanswered.

A publicly disclosed security vulnerability related to sandbox access raises additional questions about the robustness of Manus's security architecture. The platform's autonomous nature — its ability to browse the web, execute code, and interact with external services on behalf of users — creates an attack surface that the company has not adequately addressed in its public documentation. EU data protection authorities are reportedly investigating, as are officials in the U.S., Taiwan, and South Korea.

U.S. Treasury

Reviewing Benchmark Capital's investment for potential violations of U.S. restrictions on funding AI in China

EU Data Protection

Multiple EU data protection authorities allegedly investigating Manus's privacy practices

Singapore Registry

Butterfly Effect PTE. LTD. reportedly de-listed from Singapore's official business registry

GDPR Gaps

No explicit data controller identified; insufficient detail on security measures; incomplete data subject rights

Sandbox Vulnerability

Publicly disclosed security vulnerability related to sandbox access raises architecture concerns

China Retaliation

Chinese government took actions against individuals linked to Meta's $2B acquisition of Manus

05
Chapter 05

AI Visibility & SEO:
Blind Spots Everywhere

"Rich, dynamic sites built with modern AI tools have struggled to get the visibility they deserve on search engines."

— Manus AI's own blog, admitting the problem

In the world of AI-driven search — where AEO (Answer Engine Optimization), GEO (Generative Engine Optimization), and traditional SEO converge — Manus AI is not just failing to help; it is actively creating problems. The platform's own blog acknowledged in December 2025 that "rich, dynamic sites built with modern AI tools have struggled to get the visibility they deserve on search engines." That is a remarkable admission from a company charging premium prices for AI-powered web development.

The structural problems are well-documented. Sites built with Manus allegedly ship without schema markup by default — a critical omission in an era where structured data is the primary mechanism by which AI search engines like ChatGPT, Perplexity, and Google's AI Overviews understand and cite content. Without proper JSON-LD implementation, a site is essentially invisible to the AI-powered discovery layer that increasingly drives organic traffic. Manus's dynamic rendering approach also creates indexability challenges, as JavaScript-heavy pages may not be fully crawled by search engine bots.
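The structured data in question is ordinary JSON-LD embedded in the page head. As a point of reference, here is a minimal sketch of the kind of schema.org Article block a generated site should ship (field values are illustrative, not taken from any Manus output):

```python
import json


def article_jsonld(headline: str, author: str, date_published: str) -> str:
    """Build a minimal schema.org Article JSON-LD tag -- the structured
    data AI search engines parse to understand and cite a page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }
    # Embedded in the page <head> as a script of type application/ld+json.
    return '<script type="application/ld+json">%s</script>' % json.dumps(data)


tag = article_jsonld("Example headline", "Jane Doe", "2026-03-01")
```

A page missing this block is parseable as prose but opaque as data, which is exactly the invisibility the paragraph above describes.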

The hallucination problem compounds the SEO failure in a particularly insidious way. When Manus agents are tasked with conducting SEO audits or implementing optimizations, they allegedly produce reports claiming issues have been resolved when the underlying problems remain untouched. A user deploying Manus for technical SEO work may believe their site is optimized when it is not — a false confidence that could cost months of organic search performance. The credit system further punishes iterative SEO work, which by its nature requires repeated testing, adjustment, and validation cycles.
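The practical defense against hallucinated audit reports is independent verification: check the shipped HTML yourself instead of trusting the agent's completion summary. A naive sketch (plain string checks against the raw, pre-render HTML — the checks and their names are illustrative, not an established audit tool):

```python
def audit_raw_html(html: str) -> dict:
    """Naive pre-render SEO audit: does the raw HTML -- what a crawler
    that executes no JavaScript sees -- actually contain the basics an
    agent may claim to have added?"""
    lowered = html.lower()
    return {
        "has_title": "<title>" in lowered,
        "has_meta_description": 'name="description"' in lowered,
        "has_jsonld": "application/ld+json" in lowered,
    }


# An "optimized" page whose content only exists after client-side
# rendering fails every check, whatever the agent's report says.
report = audit_raw_html(
    "<html><head></head><body><div id='app'></div></body></html>")
```

Running even a check this crude after every claimed fix turns "the agent says it's done" into something falsifiable.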

For AI visibility specifically — the emerging discipline of ensuring content is cited and surfaced by large language models — Manus's failures are structural. AI models cite content that is well-structured, authoritative, and clearly attributed. Content generated by a hallucinating agent that cannot reliably complete tasks is unlikely to meet these criteria. The irony is profound: a company selling AI-powered productivity is producing outputs that are poorly positioned to be discovered, cited, or trusted by the AI systems that increasingly shape how information is found and consumed.

No Schema Markup

Sites built with Manus allegedly ship without structured data implementation, making them invisible to AI-powered search engines and rich result features.

Dynamic Rendering Issues

JavaScript-heavy output creates indexability challenges. Search engine bots may not fully crawl or render Manus-built pages, suppressing organic visibility.

Hallucinated SEO Reports

Agents allegedly claim SEO issues are fixed when they are not. Users may deploy believing their site is optimized while structural problems persist unaddressed.

AEO/GEO Blindness

No built-in support for Answer Engine Optimization or Generative Engine Optimization — the disciplines that determine whether AI models cite your content.

Credit Punishment Loop

Iterative SEO work requires repeated testing cycles. Manus's credit system makes this prohibitively expensive, discouraging the optimization work that actually moves rankings.

Untrustworthy Output

AI models prioritize authoritative, well-structured content. Content produced by a hallucinating agent with documented accuracy failures is unlikely to earn AI citations.

06
Final Verdict

The Bottom Line:
Save Your Money,
Protect Your Data

The evidence assembled here paints a consistent picture. Manus AI is a product that allegedly launched on hype, sustains itself on manufactured scarcity, and extracts revenue through a credit system that punishes failure with more charges. The technical failures are not growing pains — they are the documented, repeatable behaviors of a system that was not ready for the market it entered. The billing allegations are not isolated incidents — they are a pattern that has prompted formal regulatory complaints across multiple countries. The privacy risks are not theoretical — they are the structural consequence of a corporate architecture designed to obscure accountability.

For anyone working in AI-driven SEO, content strategy, or digital marketing, the AI visibility failures are particularly disqualifying. In an environment where being cited by AI models is increasingly the difference between organic growth and irrelevance, deploying a tool that allegedly hallucinates its own success reports and ships sites without schema markup is not just wasteful — it is actively harmful to your digital presence.

The alternatives are well-established and battle-tested. Claude, ChatGPT, and Gemini — the models Manus allegedly wraps — are available directly, without the markup, without the opaque credit system, and without the data sovereignty questions. For autonomous agent work, platforms with transparent pricing, genuine customer support, and verifiable security practices exist and are improving rapidly. There is no compelling reason to accept the alleged risks that come with Manus when better options are a browser tab away.

Disclaimer: All claims on this page are alleged and based on publicly available user reviews, media coverage, and published analyses. This site does not assert legal findings of fraud or wrongdoing. Readers are encouraged to conduct their own due diligence. Sources include Trustpilot, Reddit, Medium, LinkedIn, Forbes, TechFinitive, and academic publications.

ALLEGED VERDICT

"All branding, no backbone. All smoke, no fire."

— Reddit user, r/AI_Agents

TRUSTPILOT SUMMARY
1.2
Out of 5 stars based on hundreds of reviews
"Most reviewers were let down by their experience overall."
Share Your Manus Story