Seven Years of AI in Google Ads - What We've Learned
In short: We deployed a German AI tool in 2018 that automatically adjusted bids by location and time of day - before Google's Smart Bidding was stable in the market. Seven years later we have guided the transition from external AI to Google's own AI in over 400 client accounts. The pattern has stayed constant: the problem in most accounts is not the AI, but what you feed it.
📋 Table of contents
- What was actually happening in 2018 - we used a German tool before Google had one
- How Smart Bidding gradually took over - what we learned
- Performance Max - data multiplier, not magic
- The mental model shift: from steering CPC to defining goals
- Data prerequisites before launching PMax
- Asset group architecture (4 years of practice observation)
- Drift correction - the planned re-tuning of AI goals
- Change discipline - one major change per 14 days
- Search themes - guide or killer
- Use Search Term Insights - the truth about your audience
- The typical SMB mistake
- Frequently Asked Questions
- Bottom Line: the AI is never the problem
We deployed a German AI tool in 2018 that automatically adjusted click prices by location and time of day. Google launched Smart Bidding shortly after. Seven years later, most advertisers still make the same mistake: they steer AI advertising the way they used to steer manual campaigns.
This article does something different from most Performance Max guides. It does not list "5 tips for better PMax campaigns". It shows what seven years of AI practice in online marketing across 400+ client accounts have taught us. And why the lessons from 2018 apply today the same way they did then.
What was actually happening in 2018 - we used a German tool before Google had one
Before Smart Bidding there was no AI control in Google Ads. Anyone who wanted bid adjustments did it via static rules such as: "If location equals Klagenfurt, then bid plus 20 percent." Those rules never changed on their own. They did not react to what was actually happening.
We used an external tool from Berlin at the time: Adspert. It already adjusted click prices automatically by location, day of week and time of day - seven years ago. The tool analyzed the historical conversion data of our clients. It then decided on its own: "At this location, at this time, on this weekday, search queries convert above average. So set the bids 30 percent higher." Conversely, it lowered the bids when conversion probability was low.
We used it because Google did not have it. Google's own AI control was only rolled out gradually in late 2018. By the time it worked stably across the board it was 2019.
A concrete example from this period: a plumbing and heating business in Graz ran with Adspert. After three months the tool detected a pattern we never would have found manually. Tuesday and Wednesday between 4 and 7 PM had a conversion rate in the Graz-City and Graz-Surroundings districts that was nearly twice the account average. Adspert raised bids in this exact window by 60 to 80 percent. That cut the cost per lead in this campaign by 34 percent within two months.
This kind of pattern recognition was unreachable for most advertisers in 2018. Today Performance Max does it at larger scale across all Google inventories. The logic has stayed the same.
What we learned from these years: Automated bid optimization lives on two things. First, clean conversion tracking - any gap poisons the optimization. Second, sufficient volume - below 30 conversions per month the system does not learn stably. These two lessons apply to Performance Max today exactly as they did then.
How Smart Bidding gradually took over - what we learned
When Google's Smart Bidding became stable, we phased out our use of external bidding tools. Not because we had to, but because Google's models, with their built-in advantage of internal auction data, learned faster than any external system could.
The transition ran in three phases:
Phase 1 (2018-2019): Hybrid setup. Adspert kept running, in parallel we tested Google's own Smart Bidding logic in single campaigns. Smart Bidding was unreliable at first. We saw accounts where Google's automated control delivered 40 percent CPL reduction in two weeks. And accounts where it broke the budget within a week.
Phase 2 (2019-2020): Migration. Where conversion volume was sufficient and tracking was clean, we migrated to Google's Smart Bidding. Where not, we kept control via Adspert. This phase taught us the most important lesson: AI advertising does not work equally well for every account. The data preconditions decide.
Phase 3 (from 2021 with Performance Max): Full migration. Performance Max arrived in Q3 2021 as a unification. One campaign that uses Smart Bidding and serves across all Google inventories. We have managed over 150 PMax accounts since then. We see the same data lessons repeated, just at larger scale.
Performance Max - data multiplier, not magic
Here comes the most important insight from seven years of practice. Performance Max multiplies what you put in. Clean data plus thoughtful architecture scales. Bad data plus messy architecture also scales. Just in the wrong direction.
That sounds banal. In practice it is not. We regularly see accounts where the founder asks why Performance Max does not deliver the promised results. The answer almost always lies in one of three layers.
Layer 1: Conversion tracking. A 20 percent tracking gap is enough to weaken AI optimization by 30 to 50 percent. In about 70 percent of accounts that come to us, we find tracking gaps above 20 percent.
Layer 2: First-party data. Customer Match Lists, conversion values, audience signals. Without these inputs the AI optimizes on standard demographic profiles instead of your real customers.
Layer 3: Asset architecture. Asset groups, search themes, final URL expansion. Anyone who sorts these messily gives the AI contradictory signals.
Looking for a structured plan for the first 90 days? The 90-day plan from our book is free as a PDF. It shows the routines that keep AI projects productive.
The mental model shift: from steering CPC to defining goals
Manual bidding works on a clear mental model. You set the maximum CPC, the algorithm tries to deliver as many clicks as possible at that price. You hold the wheel for every single click price.
AI bidding works completely differently. You define the goal - a cost-per-acquisition or a ROAS target. The algorithm then decides what CPC to use in each individual auction to reach this goal. Sometimes 0.40 EUR, sometimes 4.00 EUR, depending on expected conversion probability.
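The core of this goal-based logic can be sketched in a few lines. This is an illustrative toy model only - the real Smart Bidding system is far more complex, and the conversion probabilities below are made-up numbers chosen to match the 0.40 EUR / 4.00 EUR example above.

```python
# Toy model: target-CPA bidding picks a CPC per auction by asking
# "what is this click worth, given my CPA goal?"
# bid = target CPA x expected conversion probability (illustrative only).

def cpc_bid(target_cpa_eur: float, p_conversion: float) -> float:
    """Bid roughly what the click is worth toward the CPA goal."""
    return round(target_cpa_eur * p_conversion, 2)

target_cpa = 40.0  # the advertiser-defined goal in EUR

# Low-intent auction, ~1 percent expected conversion probability:
print(cpc_bid(target_cpa, 0.01))  # 0.4

# High-intent auction, ~10 percent expected conversion probability:
print(cpc_bid(target_cpa, 0.10))  # 4.0
```

The same target thus produces wildly different CPCs per auction, which is why the fluctuating click prices described below are expected behavior, not a malfunction.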
Advertisers with manual mindset make three predictable mistakes:
1. They panic-lower the tCPA target as soon as costs spike short-term. By doing this they pull the learning rug out from under the AI. The algorithm has to relearn, performance breaks.
2. They deactivate expensive search terms via Negative Keywords. Sometimes that very expensive search term is the one with the highest lifetime value. The AI understood it, the advertiser did not.
3. They distrust fluctuating CPCs and revert to manual bidding. A normal CPC corridor in Smart Bidding can vary by a factor of 5 between the lowest and highest auction. In manual mode that would be a disaster. In Smart Bidding mode it is healthy behavior.
The right mental model: You define the goal and the guardrails. The AI optimizes within those guardrails. Your job is to set good guardrails - not to steer every single car.
Data prerequisites before launching PMax
Anyone who launches Performance Max without addressing these six points first pays for Google's learning curve with their own budget.
1. Conversion tracking at server-side level. Browser tracking alone is not enough in 2026. iOS restrictions, cookie banners and ad blockers create gaps of 15 to 35 percent. Server-side tracking via Google Tag Manager Server Container or custom setup closes these gaps to under 5 percent.
2. Calculate conversion values realistically and attach them. A conversion without an attached value is a conversion worth zero from the AI's perspective. It will serve broadly but not optimize value-driven. Most SMBs set lead values by gut feeling: "A lead is worth 50 EUR." That is reading tea leaves. The clean calculation is: average customer lifetime value times lead-to-customer conversion rate. Example: if a customer brings 3,000 EUR in margin over the relationship and 20 percent of leads become customers, a lead is worth 600 EUR - not 50. Anyone who feeds the AI phantom values optimizes for phantom profit. Anyone who feeds real values optimizes for real margin.
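The lead-value formula above is simple enough to run as a one-liner. The figures are the illustrative ones from the text, not benchmarks:

```python
# Lead value = average customer lifetime value x lead-to-customer rate.
# Numbers below are the worked example from the article, not benchmarks.

def lead_value(customer_ltv_eur: float, lead_to_customer_rate: float) -> float:
    return customer_ltv_eur * lead_to_customer_rate

# 3,000 EUR margin per customer, 20 percent of leads become customers:
print(lead_value(3000.0, 0.20))  # 600.0 EUR per lead - not a gut-feel 50
```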
3. First-party Customer Match Lists. Existing customers, high-quality leads, untouched newsletter subscribers. Load this data as Customer Match in Google Ads. It gives the AI the actual profile characteristics of your most valuable target group - instead of standard demographics.
4. Maintain audience signals. Performance Max accepts audience signals as hints. These hints are not enforced strictly - the AI can go beyond them when it finds better performance. But the learning phase accelerates massively when good signals are in place at launch.
5. Asset group plan before setup start. Which products or services form a thematic unit? Which need their own asset groups because conversion value or target group differs? These architecture decisions are hard to fix later.
6. Activate Brand Exclusions. Performance Max bids on your own brand without restriction. The result: you pay for conversions that would have come without your ad. These brand conversions also distort the learning phase. The AI thinks: "Here it converts great, I'll push more." In reality the search queries converted because of brand loyalty, not because of the ad. Brand Exclusions have been available since 2024. They should be standard, but in most SMB accounts they are not. Anyone who cleanly excludes brand searches gains two things: less wasted budget and a clearer learning signal for the AI.
Asset group architecture (4 years of practice observation)
Performance Max knows two sorting logics for asset groups: by audience segments or by product/service lines. We have tested both intensively.
Audience sorting looks clean in theory. One asset group for young affluent buyers, one for mature decision-makers, one for existing customers. The problem in practice: the AI gets contradictory conversion signals because the same products run across multiple asset groups. The learning phase extends, ROAS oscillates.
Thematic sorting by product or service line performs more stably long-term. One asset group equals one clear offering with its own conversion definition. Audience steering happens via audience signals within the group, not via separate groups.
Rule of thumb from 150+ accounts: three to seven asset groups per account. Fewer means too generic; more means too fragmented for stable learning phases. For e-commerce accounts with a large catalog, use Shopping campaigns for long-tail and limit PMax to product categories.
Drift correction - the planned re-tuning of AI goals
Here is an insight that does not appear in any of the common Performance Max guides. Smart Bidding goals drift over time. Anyone who sets tCPA or tROAS once and then changes nothing for six months optimizes against a target that is no longer current.
How the drift happens: Smart Bidding optimizes toward the set goal. When the AI reaches the goal easily, it gradually pushes for more volume and uses up whatever tolerance the target allows. When it barely reaches the goal, it stays conservative and the set target becomes the effective ceiling on your performance. Both directions are technically correct. Both cost money if you do not look.
What we have seen in 4 years of Performance Max: Accounts without active re-tuning drift systematically. Within 6 months the goals typically deviate 15 to 30 percent from the original value. Sometimes "cheaper than planned but not enough volume". Sometimes "volume is fine but CPA explodes". Both can be corrected - but only if you see them.
Routine we run with our clients: Quarterly re-tuning of tCPA/tROAS values. Based on the last 90 days of performance plus the current business environment. For larger shifts (new season, new market entry, new competitors) also off-cycle. Anyone who skips this runs an AI campaign with outdated targets and wonders why performance does not match anymore.
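The quarterly routine above can be sketched as a small helper. This is a simplified illustration under the assumption that you export the last 90 days of cost and conversion data yourself; the 20 percent step cap and the field names are our own illustrative choices, not a Google Ads feature.

```python
# Sketch of a quarterly tCPA re-tuning: anchor the new target on the
# actual 90-day CPA, but cap the step at +/- 20 percent so the change
# does not reset the learning phase too hard. Illustrative heuristic only.

def retune_tcpa(current_tcpa: float, cost_90d: float, conversions_90d: float,
                max_step: float = 0.20) -> float:
    """Propose a new tCPA from 90-day actuals, capped to a bounded step."""
    actual_cpa = cost_90d / conversions_90d
    lower = current_tcpa * (1 - max_step)
    upper = current_tcpa * (1 + max_step)
    return round(min(max(actual_cpa, lower), upper), 2)

# Target drifted: tCPA was set to 50 EUR, but the actual 90-day CPA
# came in at 68 EUR (13,600 EUR cost / 200 conversions).
print(retune_tcpa(50.0, cost_90d=13600.0, conversions_90d=200))  # 60.0
```

The cap matters: jumping straight from 50 to 68 EUR would count as a major change in the sense of the next section, so the correction happens in bounded steps.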
Change discipline - one major change per 14 days
The second common mistake when maintaining Performance Max campaigns is hyperactivity. Marketing managers change five things at once - new asset group, different goal, higher budget, new audience signals, new final URL. Then they wonder why performance breaks.
Why it breaks: Every significant change to a Performance Max campaign partially or fully resets the learning phase. The AI has to recalibrate - which signal is relevant, which is not? When five changes hit at the same time, the AI cannot tell which change caused which effect. It experiments broadly, and that costs.
Rule of thumb from practice: maximum one major change per 14 days. Major means: budget change above 20 percent, new asset group, goal change above 10 percent, switching the bidding strategy. Minor changes (new ad variant, additional audience signals, small asset updates) are uncritical.
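The thresholds above can be written down as a checklist. To be clear, there is no Google Ads API concept of a "major change"; this is purely our own heuristic from the text, expressed as code:

```python
# The "one major change per 14 days" rule as a checklist, using the
# thresholds named in the text. Our own heuristic, not a Google Ads concept.

def is_major_change(budget_delta_pct: float = 0.0,
                    goal_delta_pct: float = 0.0,
                    new_asset_group: bool = False,
                    bidding_strategy_switch: bool = False) -> bool:
    return (abs(budget_delta_pct) > 20      # budget change above 20 percent
            or abs(goal_delta_pct) > 10     # tCPA/tROAS change above 10 percent
            or new_asset_group
            or bidding_strategy_switch)

print(is_major_change(budget_delta_pct=25))           # True: wait 14 days after
print(is_major_change(goal_delta_pct=5))              # False: minor tweak
print(is_major_change(bidding_strategy_switch=True))  # True
```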
This discipline feels wrong to advertisers. Anyone coming from manual bidding is used to reacting fast. In AI bidding, patience is the most economical trait. Anyone who makes one change and waits 14 days before the next gives the AI time to learn which lever caused what.
Search themes - guide or killer
Search themes have been available in Performance Max since Q4 2023. They tell the AI: "These search terms are relevant for me." Theoretically a guide that accelerates the learning phase.
In practice it is more complicated. We have tested search themes in over 80 accounts and see a clear pattern.
When search themes help: accounts with broad search volume and high search diversity. Themes help the AI narrow down to relevant search clusters. Result: faster learning phase, more stable performance.
When search themes hurt: accounts in early learning phase or with thin conversion volume. Specific themes fragment the already-thin data. The AI gets too few signals per theme to learn stably.
Practice rule: deploy search themes only after the campaign shows at least three months of stable performance. Not at launch. And not too specific - rather three broad themes than twelve narrow ones.
Use Search Term Insights - the truth about your audience
Since 2024 Performance Max shows the actual search terms that triggered ad serves in the Insights view. This data is gold - and roughly 90 percent of advertisers ignore it.
Why the data is so valuable: It shows what the AI is actually serving your ads against. Sometimes you see search terms you never thought of. Sometimes you also see search terms that obviously do not match your offering. The first are goldmines for content strategy. The second are negative-keyword candidates.
Concrete example from one of our accounts: A B2B software vendor wondered about weak lead quality. A look at the Search Term Insights showed: 30 percent of budget went to generic search queries like "software cheap" or "free software". The AI was correctly optimizing for conversions, but the conversions were students and hobbyists - not business customers. With targeted negative keywords we redirected budget to qualified search queries. Lead quality doubled, lead value rose by 80 percent.
Routine that works: monthly review of Search Term Insights per asset group. Derive negative keywords from junk queries. Take goldmine queries into your own search themes or additional Search campaigns. This costs 30 minutes per month and over the years has saved accounts that would otherwise have slowly drained budget into the wrong directions.
The typical SMB mistake
We see it almost weekly. Founder activates Performance Max in the account without addressing the data prerequisites first. Marketing manager argues "Google is still learning" when results fail to materialize after three months. Six months later the budget is burned.
What actually happened: the AI did not learn, because the ground to learn from was missing. It experiments broadly because it gets no clear optimization signals. Every click costs money, and the expected conversion lift never arrives.
The pattern has three stages:
Stage 1: Activation with default setup. Conversion tracking exists, but not server-side. Asset groups are default. Audience signals are empty. No Customer Match Lists.
Stage 2: First weeks look okay because PMax serves broadly and picks up all old search conversions. Marketing manager is optimistic.
Stage 3: After eight to twelve weeks performance flattens. CPL rises, ROAS drops. Marketing manager says "Google is still learning". Three months later it is clear: the system is not learning because the input is not stable enough.
What helps: data audit before PMax activation. Close tracking gaps, set conversion values, load Customer Match Lists, plan asset architecture. Only then activate. We call this the "data-prerequisites hour" and it is the most important hour in the entire PMax setup.
This logic applies beyond Google Ads. We describe comparable structural barriers in our analysis of why B2B marketing is really so hard. The data-multiplier pattern repeats across disciplines where AI amplifies the result.
Frequently Asked Questions
Why does Performance Max work better for some advertisers than others?
The main difference is the data foundation. Accounts with clean conversion tracking, sufficient first-party data and a thoughtful asset architecture give the AI clear optimization signals. Accounts with tracking gaps, weak audience signals or messy asset groups feed the AI phantom signals - it then optimizes on the wrong basis.
How many conversions does Smart Bidding need to learn?
Google recommends at least 30 conversions in 30 days per campaign. Our observation from 400+ accounts: below this threshold the learning phase is unstable and CPC fluctuates heavily. Above 50 conversions per month, optimization stabilizes significantly.
What is the difference between Smart Bidding and Performance Max?
Smart Bidding is the bidding strategy - tCPA, tROAS, Maximize Conversions - usable in any campaign type. Performance Max is a campaign type that uses Smart Bidding and additionally serves automatically across all Google inventories: Search, Display, YouTube, Discover, Gmail, Maps.
Why does Performance Max sometimes burn budget without results?
In most cases the data foundation is too thin or unclean. Gaps in conversion tracking, missing value definition per conversion, messy asset groups or overly narrow search themes prevent the AI from building a stable optimization model. It then experiments broadly and that costs.
When should I switch from manual bidding to Smart Bidding?
When three conditions are met: first, working conversion tracking without gaps over at least 30 days. Second, at least 30 conversions per month in the account. Third, clear definition of what a conversion is worth. Switching earlier means giving the AI an unstable learning ground.
How do I structure asset groups in Performance Max effectively?
Practice insight from 4 years with Performance Max: thematic sorting by product or service line performs more stably long-term than sorting by audience segments. One asset group equals one clear offering with its own conversion definition. Mixed forms create cross-confusion in AI optimization.
Bottom Line: the AI is never the problem
Seven years of AI-in-bidding practice deliver one clear conclusion: the problem in most accounts is not the AI. It is what data goes in and what mental models the advertisers apply.
Anyone who runs clean conversion tracking, maintains first-party data and thinks through asset architecture gets a real multiplier effect from Performance Max. Anyone who does not is left with an expensive reminder of good intentions.
The AI is no longer magic in 2026, and it was not magic in 2018. Back then it was an experimental tool. Today it is a thought-through tool with clear prerequisites. Those who understand and deliver these prerequisites build a measurable advantage. Those who ignore them finance Google's optimization instead of their own growth.
One last observation that does not appear in any Performance Max guide: Performance Max homogenizes advertisers. Anyone running PMax exclusively gets largely the same output from the algorithm as any competitor running PMax exclusively. The algorithm is neutral; it has no brand loyalty. Differentiation requires something the AI cannot deliver: your own brand, your own content, your own channels. Those who run Performance Max in isolation become interchangeable. Those who run it as one layer in a system that also includes brand, content and direct customer relationships get a lever competitors cannot copy. That is the strategic truth behind the tactical questions about PMax optimization.