X Ads Manager Got a Makeover: New Dashboard, AI Targeting, Vertical Video. But the Migration Broke Things.
X redesigned its Ads Manager with real improvements: AI targeting, vertical video, unified dashboard. The product is better. But the migration brought data discrepancies between API and dashboard, a policy overhaul with 35% rejection rates, an aesthetic scoring system that financially penalizes non-compliant creative, and a global outage during the rollout.

In the last week of March 2026, X completed the rollout of its redesigned Ads Manager. This isn't a cosmetic facelift: it's an 18-month rebuild that changes how campaigns are created, optimized, and measured on the platform.
The new system introduces a unified dashboard that consolidates planning, management, and reporting into a single view, eliminating the fragmented navigation of the previous system. Real-time previews for desktop and mobile were added, solving a concrete operational problem: images that cropped unpredictably upon publishing. Conversion campaign setup was simplified to three steps (conversion event, budget, basic demographics), a change that lowers the barrier to entry, particularly for small teams.
The two biggest bets in the redesign are AI-powered Optimized Targeting and Vertical Video Ads. Optimized Targeting uses machine learning to expand audiences beyond manual parameters, functioning as an automated lookalike that adjusts based on real-time engagement signals. According to X's own beta testing data (no independent verification available), advertisers who activated it reported an average 10% increase in CTR and 16% increase in conversions, with 92% of beta participants choosing to keep it enabled. These numbers need context: there's no public information on sample size, verticals represented, or campaign objectives tested, which limits their applicability to operations outside the beta's profile.
Vertical Video Ads appear in the Immersive Media Viewer, and according to X, users are 7 times more likely to engage with a vertical video ad than with standard timeline formats. The format already accounts for 20% of total user time on the platform, suggesting the audience is there, though the conversion from user time to ad performance is a correlation advertisers will need to validate on their own.
Rounding out the redesign, X added a cost estimate tool that projects CPMs before launch, and adjacency controls for keyword exclusion to prevent ads from appearing next to unwanted content.
So far, the story is about a product that concretely improved. What follows is what happened when that product met the reality of migration.
What broke in the move
The most basic problem is that the data doesn't match. In the X Developer Community, advertisers and developers reported discrepancies between X Ads API data and what the new dashboard displays. For any team using external reporting tools (Supermetrics, Funnel, custom scripts) that pulls from the API to feed BI dashboards, this creates a concrete operational situation: internal report numbers don't match what the platform shows. X hasn't issued an official statement on these discrepancies or provided a resolution timeline. It's not publicly known which specific metrics are affected (impressions, conversions, spend), making it difficult to assess the real magnitude of the problem.
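A team affected by these discrepancies can at least quantify them. Below is a minimal reconciliation sketch, assuming you can export the same per-campaign metrics from both an API pull and a dashboard CSV; all field names are illustrative, not the real X Ads API schema:

```python
# Compare per-campaign metrics from two sources and flag mismatches
# beyond a relative tolerance. Field names are hypothetical.

def reconcile(api_rows, dashboard_rows, tolerance=0.02):
    """Return (campaign_id, metric, api_value, dash_value) tuples for
    every metric whose relative difference exceeds `tolerance`."""
    dash = {row["campaign_id"]: row for row in dashboard_rows}
    mismatches = []
    for row in api_rows:
        other = dash.get(row["campaign_id"])
        if other is None:
            mismatches.append((row["campaign_id"], "missing", None, None))
            continue
        for metric in ("impressions", "spend", "conversions"):
            a, d = row[metric], other[metric]
            baseline = max(abs(a), abs(d), 1e-9)
            if abs(a - d) / baseline > tolerance:
                mismatches.append((row["campaign_id"], metric, a, d))
    return mismatches
```

Even without knowing which side is correct, logging these deltas daily gives you a record to show a CMO when the numbers don't line up.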
On April 2, 2026, less than a week after the rollout, X experienced a global outage lasting approximately one hour. Downdetector logged over 25,000 reports in the United States alone, with confirmed impact in Europe, India, and South America. Timelines displayed blank or showed content from days earlier. Any active campaign stopped receiving delivery during that period. X didn't communicate the cause of the outage or publish a post-mortem, leaving open the question of whether it was an isolated event or a symptom of infrastructure instability during the migration.
The policy overhaul
The Ads Manager redesign coincided with the most aggressive enforcement in the platform's history. So far in 2026, X has implemented 14 major policy updates affecting more than 67% of active ad categories. The philosophical shift is radical: the platform moved from "review after complaint" to "block before delivery." The automated Guardian system rejects 78% of violations within 90 seconds of submission.
Three mechanisms concentrate the most operational friction.
The Advertiser Trust Score (ATS) is a credibility score from 0 to 100 that determines review speed and delivery priority. New advertisers start at 50, and each violation deducts between 5 and 15 points. The problem is that the rules changed: what was previously compliant can now be a violation, so an advertiser who operated without issues for months can see their ATS erode rapidly without having changed anything in their campaigns. Below 30, every new campaign requires pre-approval (a 48-to-96-hour delay). Below 25, re-verification is mandatory; failure to complete it within 14 days results in account suspension. For a multi-campaign operation, ATS erosion can become a bottleneck that slows down the entire team.
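The ATS mechanics reduce to a few rules. Here is a sketch of the thresholds as publicly described; X has not published its actual scoring formula, so the function names and clamping behavior are assumptions:

```python
ATS_START = 50           # new advertisers start here
PENALTY_RANGE = (5, 15)  # points deducted per violation, per the stated policy

def apply_violation(score, severity_penalty):
    """Deduct a violation penalty, clamped to the stated 5-15 range."""
    lo, hi = PENALTY_RANGE
    penalty = max(lo, min(hi, severity_penalty))
    return max(0, score - penalty)

def account_status(score):
    """Map an ATS value to the consequences described in the policy."""
    if score < 25:
        return "re-verification required (suspension after 14 days)"
    if score < 30:
        return "pre-approval required for every campaign (48-96h)"
    return "normal review"
```

The takeaway from the model: a new account at 50 is only two or three maximum-severity violations away from mandatory pre-approval.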
Geographic policy overlays add 42 country-specific compliance layers on top of global policy. The same ad can be compliant in one market and in violation in another. For any multi-market operation, this turns compliance into a combinatorial exercise that grows with each country added. In 2024, there were 12 overlays; the fact that they've tripled in two years suggests the trend is toward adding, not simplifying.
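The combinatorial nature of overlays is easy to see in code. A sketch, assuming each market's overlay is a set of rule predicates layered on top of global policy; every rule below is invented for illustration:

```python
# Each overlay is a dict of named predicates; an ad must pass the
# global rules plus the overlay of every market it targets.

GLOBAL_RULES = {
    "has_advertiser_name": lambda ad: bool(ad.get("advertiser")),
}

OVERLAYS = {  # hypothetical country-specific layers
    "DE": {"no_health_claims": lambda ad: "cure" not in ad["copy"].lower()},
    "FR": {"has_french_disclaimer": lambda ad: "mentions légales" in ad["copy"].lower()},
}

def compliance_failures(ad, markets):
    """Return (market, rule_name) pairs the ad fails. One ad can be
    compliant in one market and in violation in another."""
    failures = []
    for market in markets:
        rules = {**GLOBAL_RULES, **OVERLAYS.get(market, {})}
        for name, check in rules.items():
            if not check(ad):
                failures.append((market, name))
    return failures
```

With 42 overlays instead of two, the check set grows linearly per market, but the creative variants needed to satisfy every market simultaneously grow much faster, which is where the real cost sits.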
Post-delivery auditing represents an operational risk that extends beyond the paid team. Compliance crawlers continuously scan active campaigns. If a landing page changes after an ad was approved, or if user reports exceed 0.08% of impressions, the ad is pulled retroactively. This means an approved, running campaign can disappear without notice if another team (product, content, design) updates the landing page. In organizations with formal governance, this risk requires coordination between paid, product, and legal that the paid team can't resolve on its own.
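Both retroactive triggers can be monitored from the advertiser side. A minimal sketch: hash the landing page as approved and compare it against its current state, and check the report rate against the stated 0.08% threshold (the function names and inputs are mine, not an X API):

```python
import hashlib

REPORT_RATE_LIMIT = 0.0008  # 0.08% of impressions, per the stated policy

def page_changed(approved_html, current_html):
    """Flag a landing page that changed after ad approval."""
    digest = lambda text: hashlib.sha256(text.encode()).hexdigest()
    return digest(approved_html) != digest(current_html)

def at_risk(impressions, user_reports, approved_html, current_html):
    """True if either retroactive-pull trigger described above fires."""
    report_rate = user_reports / impressions if impressions else 0.0
    return report_rate > REPORT_RATE_LIMIT or page_changed(approved_html, current_html)
```

A daily job like this won't prevent a pull, but it can alert the paid team before they discover a missing campaign in the dashboard.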
For specific industries, the impact is disproportionate. Healthcare represents 3.8% of ad volume but generates 28% of all violations. Financial ads without a "Regulatory Info Card" are rejected in 94% of cases. 14% of "cause-based" ad rejections come from brands that didn't realize their messaging triggered political classification. If your vertical is healthcare, fintech, or anything adjacent to regulation, the compliance burden on X is significantly higher than on other platforms.
Aesthetic scoring: design as cost
On top of the policy changes, X introduced an aesthetic scoring system for ads. Each ad receives a score based on visual and textual criteria: clean, minimalist design raises the score; excessive emojis, hashtags, visible URLs, and cluttered creative lower it. This score functions as a modifier within the ad quality algorithm, directly impacting pricing and feed visibility. Ads with more than one emoji in the copy face lower quality scores and potentially higher prices. Ads that fail aesthetic standards are algorithmically buried.
For performance marketers who historically relied on urgency-driven copy, attention-grabbing emojis, and aggressive CTAs, this isn't a minor adjustment: it's a change in the rules of the game where what used to perform now costs more. There's no public data on how much CPM varies by aesthetic score, which constitutes a significant unknown for budget planning.
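X hasn't published the scoring function, but the stated criteria suggest a simple penalty model. A purely illustrative pre-flight check, with invented weights, that a team could run on copy before submission:

```python
import re

def aesthetic_score(copy_text, base=100):
    """Illustrative penalty model for the stated criteria: extra emojis,
    hashtags, and visible URLs lower the score. Weights are invented."""
    emoji = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough emoji ranges
    emojis = len(emoji.findall(copy_text))
    hashtags = copy_text.count("#")
    urls = len(re.findall(r"https?://\S+", copy_text))
    score = base
    score -= max(0, emojis - 1) * 10  # one emoji tolerated, per the stated rule
    score -= hashtags * 5
    score -= urls * 15
    return max(0, score)
```

The point of a check like this isn't to reverse-engineer X's algorithm; it's to give creative teams a deterministic gate so "cluttered" stops being a subjective argument.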
The central tension
The central tension isn't whether X improved (it did) or whether it has problems (it does). It's that the product and the infrastructure are at different stages of evolution, and operating in that gap has a cost that doesn't show up on any media plan.
The new Ads Manager is objectively better than its predecessor. But a good product on top of infrastructure that generates data discrepancies, suffers outages during rollout, and doesn't communicate problems transparently creates a specific kind of risk: operating with data you can't fully trust. For a team that needs to report results to a CMO or a board, that asterisk on data reliability can be more expensive than any CPM.
Aggressive policy enforcement is, in theory, better for brand safety. But the simultaneous execution of 14 policy updates, 42 geographic overlays, ATS, post-delivery audits, and aesthetic scoring creates an environment where compliance can consume more time than campaign optimization. For a small team, that means navigating 35% rejection rates, 48-to-96-hour delays when ATS drops below 30, and overlay complexity that multiplies with each additional market. For operations with formal governance (legal, procurement, brand safety), the post-delivery audit adds a layer of risk requiring cross-functional coordination that extends beyond the paid team.
The AI targeting data is promising, but it comes exclusively from X's beta testing with no independent verification, no detail on verticals, no sample size. The question isn't whether +10% CTR and +16% conversions are good numbers (they are), but whether they hold outside a controlled environment, across diverse verticals, at scale. There's no public evidence of advertisers reporting positive results post-migration outside the beta. That absence of evidence isn't evidence of absence, but it's not a basis for allocating budget either.
Who this is for (and who it isn't)
Operating on the new X Ads Manager makes sense for teams that meet three conditions: sufficient budget to absorb CPM volatility while the aesthetic scoring stabilizes (likely $50K+/month on the platform as a floor), a team with dedicated compliance capacity (at least one person who can monitor ATS, policy overlays, and post-delivery risks), and tolerance for data uncertainty while API/Dashboard discrepancies are resolved.
It doesn't make sense for 2 to 3 person teams with limited budgets who need every hour for direct optimization. Nor for multi-market operations in regulated industries (healthcare, fintech) where the 42 geographic overlays intersect with sector-specific regulations. And it doesn't make sense for organizations with formal governance where the post-delivery audit risk (campaigns disappearing due to landing page changes) creates a cross-functional problem the paid team can't solve alone.
For those who decide to wait: monitor two signals. First, that X resolves the API/Dashboard discrepancies and communicates it officially. Second, that independent AI targeting results emerge outside the beta. When those two conditions are met, the risk-to-opportunity equation changes fundamentally.
What we don't know
There's no public data on how many advertiser accounts experienced history or data loss during the migration. The API/Dashboard discrepancies are documented but without official acknowledgment from X or a resolution timeline, and it's unknown which metrics are affected.
We don't know if the AI targeting numbers hold outside the beta. There's no independent verification, no detail on sample size or verticals represented.
We don't know how much CPM varies by aesthetic score. X confirmed that ads with low aesthetic quality pay more, but hasn't published the variation ranges.
We don't know if the 42 policy overlays will stabilize or keep growing. The trend (from 12 in 2024 to 42 in 2026) suggests the latter.
We don't know the cause of the April 2 outage. Without a public post-mortem, there's no way to evaluate whether it was an isolated event or a pattern of instability during the migration.
We don't know if third-party reporting tools (Supermetrics, Funnel, etc.) have adapted their connectors to the new system, or if they're still using endpoints from the previous one.
An AI wrote the first draft. A team of humans made it better. Or so we hope.