MegaSeats: A/B Testing “Hot Deals” to Improve Conversion

OVERVIEW

Led an end-to-end A/B testing initiative focused on optimizing deal labeling and merchandising strategy—specifically solving for cognitive overload in the browsing experience. As the sole designer on the project, I owned the full process from hypothesis and UI design to experiment rollout and iteration. I introduced the concept of more intentional, high-signal deal indicators (like “Hot Deals” badges) to help users quickly identify valuable tickets without overwhelming the interface. The goal wasn’t to add more labels, but to design smarter ones that actually guided decision-making.

YEAR

2025

ROLE

Senior Product Designer

CATEGORY

A/B Testing • UI Design • UX Optimization

About the project

When everything is labeled a “Best Deal,” nothing really stands out—and users end up doing more work than they should.

We noticed that the browsing experience was starting to feel noisy, with too many competing signals and not enough clarity around what actually mattered. The opportunity was to reduce cognitive overload and make it easier for users to quickly identify high-value tickets without second-guessing every option.

Instead of immediately rolling changes out to TicketNetwork’s core platform, we used MegaSeats—a sister marketplace with similar inventory and user behavior—as a lower-risk testing ground. This allowed us to experiment quickly, validate what actually influenced user behavior, and scale only the highest-performing experiences back to TicketNetwork.

Goal

Increase click-through rate, add-to-cart rate, and overall revenue by improving how deals were surfaced and perceived—using data-driven experimentation to validate which messaging most effectively influenced user behavior before scaling to the core platform.

Challenges

We had a labeling problem. “Best Deal” was applied so broadly that it lost meaning—and users were starting to ignore it entirely. The tricky part? Internally, it felt like we were highlighting value everywhere. Externally, it just looked noisy. I audited how deal labels were used across the experience, then partnered with Product to define clearer criteria for what actually qualified as a deal. From there, I explored alternative labeling strategies (including “Hot Deal”) and designed multiple badge variations that felt more intentional and scannable. Because if everything is a “best deal”… nothing is.
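To make those criteria concrete for Engineering, the qualification rule had to be expressible as a simple predicate. Here's a minimal sketch of what that logic could look like, assuming a hypothetical rule where a listing only earns the badge when it prices well below comparable listings in its section; the interface, threshold, and field names are illustrative, not the production criteria.

```typescript
// Hypothetical deal-qualification sketch: a listing is a "Hot Deal" only
// when it is meaningfully cheaper than comparable inventory in the same
// section. The 15% threshold and minimum sample size are illustrative.
interface Listing {
  id: string;
  sectionId: string;
  price: number; // all-in price, USD
}

const DISCOUNT_THRESHOLD = 0.15; // must be >= 15% below the section median

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function isHotDeal(listing: Listing, sectionListings: Listing[]): boolean {
  if (sectionListings.length < 4) return false; // too little data to compare
  const sectionMedian = median(sectionListings.map((l) => l.price));
  return listing.price <= sectionMedian * (1 - DISCOUNT_THRESHOLD);
}
```

The key property is scarcity: by anchoring the badge to a relative price gap rather than applying it broadly, only a minority of listings can ever qualify, which is exactly what restores the label's signal.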

A/B testing sounds simple until you realize how many variables can accidentally skew results. To mitigate risk, I used MegaSeats as a controlled testing environment—allowing us to validate changes without immediately impacting TicketNetwork’s core revenue. I worked closely with Product and Engineering to define clean control vs. variant conditions, ensuring performance changes could be attributed directly to the badge. On the UI side, I iterated on badge placement, color, and hierarchy to make it noticeable—but not obnoxious (no one wants to shop inside a sticker book).
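Keeping control and variant conditions clean usually comes down to deterministic assignment: the same visitor should land in the same bucket on every visit, so any performance delta can be attributed to the badge rather than to users churning between experiences. A minimal sketch of hash-based bucketing, assuming a hypothetical experiment name and a 50/50 split; the hash choice is illustrative, not whatever the production experimentation stack actually uses.

```typescript
// Deterministic 50/50 bucketing sketch: hash (experiment + userId) to a
// stable bucket so a returning visitor never flips between control and
// variant mid-experiment. FNV-1a is used here purely for illustration.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

type Bucket = "control" | "hot-deal-badge";

function assignBucket(userId: string, experiment = "hot-deals-v1"): Bucket {
  return fnv1a(`${experiment}:${userId}`) % 2 === 0
    ? "control"
    : "hot-deal-badge";
}

// Usage: the variant renders the new badge; control keeps the existing UI.
console.log(assignBucket("user-123"));
```

Salting the hash with the experiment name means each new experiment reshuffles users independently, so one test's assignment never correlates with the next.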

As an individual contributor, I owned the full lifecycle—which meant I also got to solve every edge case that popped up along the way. Things like:

- Badge inconsistencies across desktop vs. mobile
- Multiple tickets qualifying for a deal at once
- Engineering constraints around dynamic deal states

I created flexible UI patterns and clear prioritization logic (sketched below), then partnered closely with Engineering through implementation and QA to make sure everything held up in production. A lot of this project was iteration-heavy—design, test, learn, refine, repeat—until we landed on something that both looked right and performed.
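That prioritization logic reduced to one rule worth showing: when a listing qualifies for more than one label, render only the highest-priority badge instead of stacking them. A minimal sketch under assumed badge names and ordering (both hypothetical):

```typescript
// Badge prioritization sketch: a listing may qualify for several labels at
// once, but we render at most one, chosen by a fixed priority order.
// Badge names and their ordering here are illustrative.
type Badge = "hot-deal" | "low-fees" | "best-seat-value";

const BADGE_PRIORITY: Badge[] = ["hot-deal", "low-fees", "best-seat-value"];

function pickBadge(qualifying: Set<Badge>): Badge | null {
  for (const badge of BADGE_PRIORITY) {
    if (qualifying.has(badge)) return badge; // first match wins
  }
  return null; // no badge: the default, unlabeled card
}

// Usage: a listing qualifying for two labels still shows a single badge.
const qualifying: Set<Badge> = new Set(["low-fees", "hot-deal"]);
console.log(pickBadge(qualifying)); // -> "hot-deal"
```

First match wins, so introducing a new badge type later is just a matter of slotting it into the priority list—no per-card special cases.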

Results

What started as a simple “maybe we should stop calling everything a best deal” experiment turned into a meaningful business win. By refining how we labeled and surfaced deals, we saw a 35% increase in revenue—not from changing prices, but from changing perception (which, it turns out, matters a lot). Engagement improved, users clicked with more confidence, and the experience felt less noisy and more intentional. We also proved that testing in MegaSeats before rolling changes into TicketNetwork was the right call—lower risk, faster learnings, better outcomes. And as a bonus, we walked away with a repeatable A/B testing framework we could apply to future merchandising and pricing experiments—because once you see what good experimentation can do, you don’t really go back.
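The repeatable part of that framework was largely discipline around the read-out: compare control vs. variant conversion with a standard significance check before declaring a winner. A minimal sketch using a two-proportion z-test; the counts below are placeholders, not the experiment's actual data.

```typescript
// Two-proportion z-test sketch: did the variant's conversion rate beat
// control's by more than noise? All counts below are placeholders.
function twoProportionZ(
  convControl: number, nControl: number,
  convVariant: number, nVariant: number,
): number {
  const pControl = convControl / nControl;
  const pVariant = convVariant / nVariant;
  const pooled = (convControl + convVariant) / (nControl + nVariant);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nControl + 1 / nVariant));
  return (pVariant - pControl) / se;
}

const z = twoProportionZ(480, 10_000, 560, 10_000);
// |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant" : "inconclusive");
```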
