A/B Testing · Game Development · Analytics · Community Management

The Role of Data in Game Design

August 29, 2025 · 6 min read

Why Community Feedback Isn’t Enough

In my last article, I wrote about the benefits and limits of community feedback in game design. Community voices are powerful, but they only represent a fraction of your players. The silent majority won’t post on Discord or Reddit, and many leave without ever telling you why. Even active community members encounter pain points they never bring up. If you want to understand those players, you need data.

Getting Feedback From Everyone

So if verbal feedback can only get you so far, how do you get better forms of feedback? In my role at Playsaurus, I also oversaw the implementation of an event tracking system to monitor how features were being used and how updates affected retention and revenue. Initially we used Amplitude to track in-game events and build dashboards, but since our games were free to play, Amplitude's cost was prohibitive to sustain long-term. We eventually built our own analytics platform with the features we wanted, informed by lessons from the book Game Analytics: Retention and Monetization in Free-to-Play Mobile Games. To visualize the data we were collecting, we built a Tableau dashboard for deeper, exploratory analysis, alongside an admin website that surfaced our KPIs.
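As a rough illustration of what an event tracking system like this involves, here is a minimal sketch in TypeScript. The endpoint, event names, and batching threshold are all hypothetical, not our actual implementation:

```typescript
// Minimal event tracker sketch: batches gameplay events and flushes
// them to an analytics endpoint. All names here are illustrative.
interface GameEvent {
  name: string;                        // e.g. "chest_opened", "scientist_hired"
  properties: Record<string, unknown>; // feature-specific context
  timestamp: number;
}

class EventTracker {
  private queue: GameEvent[] = [];

  constructor(private endpoint: string, private playerId: string) {}

  track(name: string, properties: Record<string, unknown> = {}): void {
    this.queue.push({ name, properties, timestamp: Date.now() });
    if (this.queue.length >= 20) void this.flush(); // batch to limit requests
  }

  async flush(): Promise<void> {
    if (this.queue.length === 0) return;
    const batch = this.queue.splice(0, this.queue.length);
    await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ playerId: this.playerId, events: batch }),
    });
  }
}

// Usage: record that a player opened a chest bought with tickets.
const tracker = new EventTracker("https://example.com/events", "player-123");
tracker.track("chest_opened", { source: "ticket_purchase", depth: 42 });
```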

In Tableau we could see which features people were playing and which ones were driving the most revenue, and, just as importantly, which ones people weren't playing or spending on.

In the initial development of Mr. Mine, we introduced many features simply because we thought they were fun, without necessarily considering how they would pay for themselves beyond a possible boost to retention.

Turning Data Into Design Decisions (Example: Scientists in Mr. Mine)

When we first introduced Scientists in Mr. Mine, the idea was simple: create a fun way for players to augment their game by gathering relics, which provided passive buffs so long as they were stored in Relic Slots. Players acquired Scientists from treasure chests, which they could either buy with in-game currency (tickets) or find for free while mining. On paper, this seemed fine, but the data told a different story.

After hours of play, we saw that players were swimming in free chests. That meant they never felt the need to actually purchase them, which left Scientists as a feature that had almost no monetization path. From a retention standpoint, the feature worked because players enjoyed Scientists. However, it wasn’t pulling its weight financially.

The data gave us two insights:

  1. Players liked Scientists enough to engage with them.
  2. Players had no reason to spend tickets on them.

So we made three changes:

  1. Relic Slots: Instead of unlocking extra relic slots through specific excavations, we let players buy additional slots directly with tickets.
  2. Revives: Scientists could now be revived with tickets after they burned out.
  3. Unlock Depth: Since Scientists seemed to improve retention significantly, we made them unlock earlier in the game, which extended the window during which buying chests was useful for Scientists.

These tweaks were small in scope, but they transformed Scientists from a “fun but free” feature into something players were happy to engage with and spend on. Without the event tracking and dashboards, we might have written Scientists off as “just a retention tool.” With data, we could see where the gaps were and fill them.

It was a good reminder that fun alone doesn’t always equal sustainable. With the right data, even small design changes can turn retention features into monetization opportunities without hurting the player experience.

The Strengths and Weaknesses of Developing on Vibes Alone

Of course, data isn’t the whole story. Early on, we leaned heavily on intuition and vibes alone. Sometimes that worked. Other times, it blinded us to obvious improvements.

Once we had data showing how features were received, we could start tweaking them by making them more intuitive, unlocking earlier in the loop, and adjusting monetization strategies. That shift moved our focus from just serving vocal long-term players to helping more players reach that point in the first place. After all, later features are pointless if players don’t make it there.

The next challenge was validating whether these changes were actually working, and that’s where A/B testing with Statsig came in. We gave cohorts of players different experiences and tracked which variant led to the outcomes we wanted.
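To make the cohort mechanics concrete, here is a minimal sketch of deterministic variant assignment, the core idea behind any A/B bucketing system. This is not Statsig's actual API (their SDK handles assignment, targeting, and exposure logging for you); the function and names here are illustrative:

```typescript
import { createHash } from "node:crypto";

// Deterministic variant assignment: hash playerId + experiment name so a
// player always lands in the same variant across sessions.
function assignVariant(playerId: string, experiment: string,
                       variants: string[]): string {
  const digest = createHash("sha256")
    .update(`${experiment}:${playerId}`)
    .digest();
  const bucket = digest.readUInt32BE(0) % variants.length;
  return variants[bucket];
}

// Example: the premium-ticket icon test described below.
const variant = assignVariant("player-123", "ticket_icon_color",
                              ["green_tickets", "gold_tickets"]);
// Render the assigned icon, then track downstream purchases per variant.
```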

For example, in Mr. Mine, we ran one test changing our premium purchase icon from green tickets to gold tickets, and another changing the order of the shop tabs. Both led to revenue increases of several percentage points. The changes don’t need to be big. Even a simple change, like turning a button from green to blue, can lift conversions by 3%. It may not sound like much, but stack enough of these improvements over time and they become the difference between a good shop and a great one. And you’re never going to hear players praise or complain about a color change.
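The arithmetic behind stacking small wins is worth spelling out. Using the same hypothetical 3% figure, a handful of independent multiplicative lifts compound quickly:

```typescript
// Ten independent changes, each lifting conversions ~3%, compound
// multiplicatively to roughly a 34% overall improvement.
const liftPerChange = 1.03;
const numChanges = 10;
const overallLift = (liftPerChange ** numChanges - 1) * 100;
console.log(`${overallLift.toFixed(1)}% overall lift`); // ≈ 34.4%
```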

Figuring Out Which Features to A/B Test

I’m being somewhat facetious when I say changing button colors alone will lead to significant improvements over time, but look at your data and ask questions like: “Why isn’t this feature leading to the ____ we expected?” or “What's the smallest thing we could do to improve ____ with this feature?” You probably have ideas of your own, but if you don’t, this is also where the community comes in handy. I wouldn’t necessarily ask them outright about purchasing habits, but I would ask questions like “What simple thing do you think would make this feature more interesting?” or, if I already had an idea, “Do you think ___ would make this feature more interesting?” This helped me find and validate ideas before my team or I ever spent time developing them.

My background is in community management, so I’ll always include the community where I can, but you should use any channel that gets you fast feedback. This especially applies to A/B testing, where results need to be statistically significant, which means gathering enough data by having as many players as possible interact with the feature. In practice, that usually means focusing most of your efforts on improvements earlier in the game loop so you get answers quickly. As retention improves and players get further into the loop, you can shift your efforts further out.
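On the statistical significance point: a standard check for whether a variant's conversion lift is real rather than noise is a two-proportion z-test. Tools like Statsig run this math for you, but here is a minimal sketch with made-up numbers to show why cohort size matters:

```typescript
// Two-proportion z-test: is variant B's conversion rate significantly
// different from variant A's? Returns the z statistic; |z| > 1.96 is
// significant at the usual 95% confidence level (two-sided).
function twoProportionZ(convA: number, totalA: number,
                        convB: number, totalB: number): number {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// Hypothetical example: 5,000 players per cohort, 8.0% vs 9.2% conversion.
const z = twoProportionZ(400, 5000, 460, 5000); // ≈ 2.14
console.log(z > 1.96 ? "significant lift" : "need more data");
```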

Community tells you what players think. Data shows you what they do. The strongest designs come from balancing both. Listen to players, watch behaviors, and iterate until intuition, feedback, and data align.

In my next post, I’ll share how we applied this same testing mindset to something much bigger: regional pricing.

Share Your Thoughts

Have questions or want to discuss this article? Feel free to reach out by email (matthewkanderson95@gmail.com) or connect with me on LinkedIn.
