How This Mobile Gaming Company Does Data

MobilityWare built the first-ever Solitaire mobile game app. Here’s how the company keeps its data pipeline on the cutting edge.

Written by Adrienne Teeley
Published on Jul. 22, 2021

It might not be immediately obvious, but building mobile games like Solitaire, Jigsaw Puzzle and Bubble Shooter Pop requires ironclad data pipelines. Between new releases, shifting compliance standards and partnerships, app developers have to work to keep their data game strong — and scalable. Sudhir Vallamkondu, chief technology officer at MobilityWare, said his team has several processes to do just that. 

“We follow a standard process of estimating any necessary changes and developing an agile and iterative approach to getting there,” Vallamkondu said. “For the most part, our data pipeline is able to accommodate many of these types of scalability needs.”

The company has a good system in place, but that's not to say Vallamkondu's team doesn't frequently reexamine how it handles data. To continue delighting users and keeping pace with MobilityWare's own growth, Vallamkondu said, data scientists on his team constantly monitor their pipeline — and keep their data available and easy to parse for those who need it.

To learn more about how data is managed at a popular mobile gaming company, Built In LA connected with Vallamkondu. He shed some light on the tools and processes that keep MobilityWare’s tech team on track for a winning hand, even as they juggle scaling, compliance and user satisfaction. 

 


Sudhir Vallamkondu
Chief Technology Officer • MobilityWare

What technologies or tools are you currently using to build your data pipeline?

We use various technologies in our data pipeline — prominently Apache Spark, Snowflake, Apache Airflow, AWS Kinesis and Python — that are built on the MPP (massively parallel processing) architecture. We pick technologies that are built to handle big data and can dynamically scale based on the volume of data. We also emphasize alignment with industry standards, which allows us to find talent more easily. 

When we have a choice between managed services and native technology (like AWS EMR versus a native Apache Spark cluster), we prefer managed services that provide us the right amount of visibility to optimize and debug issues, and still conform to the standard API of the native service to prevent lock-in. By using a managed service, we can focus more on the business problem than worrying about service setup and upkeep. 
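The lock-in concern Vallamkondu raises can be illustrated with a thin wrapper: job code targets one interface, and only a small factory function knows whether the backend is a managed service or a self-hosted cluster. This is a hypothetical sketch — the class and method names are illustrative, not MobilityWare's actual code:

```python
from abc import ABC, abstractmethod

class SparkBackend(ABC):
    """Anything that can run a Spark job; pipeline code only sees this interface."""
    @abstractmethod
    def submit(self, job_name: str) -> str: ...

class EmrBackend(SparkBackend):
    """Managed service: cluster setup and upkeep handled by the provider."""
    def submit(self, job_name: str) -> str:
        return f"emr:{job_name}"

class NativeSparkBackend(SparkBackend):
    """Self-hosted cluster: same interface, so jobs move over unchanged."""
    def submit(self, job_name: str) -> str:
        return f"native:{job_name}"

def get_backend(managed: bool = True) -> SparkBackend:
    # Swapping backends becomes a one-line config change, not a job rewrite.
    return EmrBackend() if managed else NativeSparkBackend()

job_id = get_backend(managed=True).submit("daily_retention_report")
```

Because both backends conform to the same interface, moving off the managed service later means changing the factory, not every job.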
 

“Data democratization is a key success metric for our data pipeline architecture.”


Data democratization is a key success metric for our data pipeline architecture. We process and make data accessible in various forms, like Tableau dashboards, AWS Athena, Apache Superset and Snowflake to meet the needs of all of our end users. For data observability, we use a series of internal tools that we have developed to automate monitoring, alerting and triaging to identify and evaluate data quality and discoverability issues.

 

As your company and its volume of data grows, what steps are you taking to ensure your data pipeline continues to scale?

Scalability for us involves two big aspects: the ability to scale with the growth of our business, which increases the volume of data, and the ability to adapt to business transformations and anticipate diverse future needs. 

Every part of the data pipeline is designed and has evolved to meet these two core scalability requirements. Iterative design enhancements and learnings from past failures have helped us get to this place. Typical scaling events for us include launching a new game, complying with a new regulatory standard and integrating a new type of data partner into our infrastructure. 

With any of these scaling events, we follow a standard process of estimating any necessary changes and developing an agile and iterative approach to getting there. For the most part, our data pipeline is able to accommodate many of these types of scalability needs. However, there are cases where we had to redesign and reconstruct parts of the pipeline. For example, when we had to meet GDPR and CCPA compliance requirements, we had to rethink our data storage format and our ability to meet users' needs quickly and cost-effectively. 
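One common storage-format answer to GDPR/CCPA erasure requests — a general industry pattern, not necessarily the redesign MobilityWare chose — is to partition data by a hash of the user ID, so that a deletion request rewrites a single partition instead of scanning the whole store:

```python
import hashlib

NUM_PARTITIONS = 16

def partition_for(user_id: str) -> int:
    """Stable bucket assignment so all of a user's rows land in one partition."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def write(store, row):
    """Append a row to the partition its user hashes to."""
    store.setdefault(partition_for(row["user_id"]), []).append(row)

def erase_user(store, user_id):
    """Erasure request: rewrite one partition, leave the rest untouched."""
    p = partition_for(user_id)
    store[p] = [r for r in store.get(p, []) if r["user_id"] != user_id]

store = {}
write(store, {"user_id": "u1", "event": "level_complete"})
write(store, {"user_id": "u2", "event": "purchase"})
erase_user(store, "u1")
```

In a columnar data lake the same idea shows up as partitioning files by a user-ID hash, which keeps deletion cost proportional to one partition rather than the full dataset.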

Data pipeline and storage cost is another key item that, if not monitored closely, can quickly spiral out of control and become misaligned with business value as you scale. 
