Professor Gibson
[Image: an AI-generated illustration of a Pokémon character who has turned evil and is taking over the world.]
Discerning the Digital Dilemma

Privacy, Trust, and AI: The Pokémon Go Data Controversy

Hello Trustworthy People,

It is sad that we are here again, but until regulators step in, we will continue to see technology companies erode public trust. On today’s discussion board: the recent Pokémon Go revelation.

Niantic, the developer behind Pokémon Go, recently announced that it is building a “large geospatial model” from data collected from players in order to map the real world. This is not unlike Google Earth. The main difference, however, is that Niantic’s model will be built from a “pedestrian perspective.”

While this innovation showcases the potential of AI and data to revolutionize how we interact with our environment, it also highlights a growing concern: the complexity of privacy policies and how they contribute to distrust.

When users don’t fully understand how their data is being used, even harmless or beneficial applications of that data can feel invasive. Worse, complex legal jargon and opaque practices often paint all data usage and AI applications as inherently “evil,” fostering suspicion and resistance.

This isn’t just a Niantic problem—it’s a systemic issue. Here’s how we can address it:

Demand Clear, Short, and Understandable Policies

Most privacy policies today are dense and filled with legalese, designed more to protect the company than to inform the user. The average person would need hours to read and comprehend the terms they agree to with a simple click.

If companies truly want to build trust, this needs to change. Privacy policies should be:

  • Concise: A few pages, not dozens.
  • Plainspoken: Written in everyday language, not legal jargon.
  • Transparent: Clearly outlining what data is being collected, how it will be used, and with whom it will be shared (a brief sketch of what this could look like follows below).
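To make “transparent” concrete, here is a minimal sketch of a machine-readable privacy summary, the kind of structure an app could render as a one-screen, plain-language notice. Everything in it is hypothetical: the field names (dataCollected, sharedWith, and so on) and the example values are illustrative assumptions, not an existing standard and not Niantic’s actual practice.

```typescript
// Hypothetical machine-readable privacy summary. All field names and
// values are illustrative assumptions, not an existing standard.
interface PrivacySummary {
  dataCollected: string[]; // what is gathered, in plain terms
  purpose: string;         // why it is gathered
  sharedWith: string[];    // third parties who receive it
  retention: string;       // how long it is kept
  optOutUrl: string;       // where a user can decline
}

// Example: the summary a location-based AR game might publish.
const summary: PrivacySummary = {
  dataCollected: [
    "GPS location while the app is open",
    "camera scans the player chooses to upload",
  ],
  purpose: "Building a pedestrian-level map used for in-game AR features",
  sharedWith: ["Cloud hosting provider"],
  retention: "Raw scans deleted after 30 days; anonymized map data retained",
  optOutUrl: "https://example.com/privacy/opt-out",
};
```

A structure like this is short enough to read, simple enough to render as an infographic, and auditable by regulators and researchers in a way that pages of prose rarely are.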

Companies have an opportunity to set a new standard by adopting user-friendly privacy statements that reflect a genuine commitment to transparency. And if companies are unwilling to do so, governments should compel them through legislation.

Educate Users About Privacy

Complex privacy policies don’t just frustrate users—they leave them uninformed. Many people don’t understand what happens to their data, whether it’s used responsibly or irresponsibly, or how AI algorithms process it. This gap in understanding contributes to fear and mistrust.

We need to focus on building privacy literacy—an essential skill in today’s digital world. Privacy literacy includes:

  • Understanding the basics of how data is collected, stored, and shared.
  • Evaluating risks and benefits of sharing personal information.
  • Knowing one’s rights under laws like the GDPR and the CCPA.

Imagine if every app or platform included brief, engaging tutorials on data use. This could demystify how data powers services while empowering users to make informed choices. Maybe it is time to require companies to participate in the effort to promote digital literacy.

After all, we’ve done this before: television stations are required to produce educational content as part of their public service obligations. Just as TV networks are expected to contribute to the public good, tech companies leveraging user data and AI could be held to a similar standard. Perhaps they should be required to invest in and create accessible, engaging educational content about privacy and digital tools.

Make Consent Simple and Meaningful

Another major hurdle is consent. Today’s “click-to-agree” model is neither meaningful nor sufficient. Users often feel they have no choice but to accept terms they don’t understand to access the services they need.

To rebuild trust, companies should implement user-friendly consent mechanisms:

  • Granular Controls: Allow users to choose specific data to share, rather than an all-or-nothing approach.
  • Visual Summaries: Replace walls of text with charts or infographics summarizing key points.
  • Ongoing Consent: Provide regular, easy-to-access opportunities for users to update their preferences (see the code sketch after this list).
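None of this is technically hard. Here is a minimal sketch of a granular, ongoing consent model; it assumes nothing about any real vendor’s API, and the names (ConsentRecord, DataCategory) and the review-interval idea are hypothetical illustrations.

```typescript
// Hypothetical granular-consent model; names and fields are illustrative
// assumptions, not any vendor's actual API.
type DataCategory = "location" | "camera" | "contacts" | "analytics";

interface ConsentRecord {
  category: DataCategory;
  granted: boolean;
  grantedAt: Date | null;  // when the user last said yes
  reviewAfterDays: number; // re-prompt interval ("ongoing consent")
}

const MS_PER_DAY = 86_400_000;

// Returns the categories that should trigger a fresh, plain-language prompt.
function dueForReview(
  records: ConsentRecord[],
  now: Date = new Date()
): DataCategory[] {
  return records
    .filter((r) => {
      if (!r.granted || r.grantedAt === null) return false;
      const ageDays = (now.getTime() - r.grantedAt.getTime()) / MS_PER_DAY;
      return ageDays >= r.reviewAfterDays;
    })
    .map((r) => r.category);
}

// Example: analytics consent is 400 days old with a yearly review, so it
// is due; location consent is only 200 days old, so it is not.
const records: ConsentRecord[] = [
  {
    category: "location",
    granted: true,
    grantedAt: new Date(Date.now() - 200 * MS_PER_DAY),
    reviewAfterDays: 365,
  },
  {
    category: "analytics",
    granted: true,
    grantedAt: new Date(Date.now() - 400 * MS_PER_DAY),
    reviewAfterDays: 365,
  },
];
console.log(dueForReview(records)); // ["analytics"]
```

The point of the sketch is the shape, not the specifics: per-category records make “all-or-nothing” impossible by construction, and a review interval turns consent from a one-time click into a standing, revisitable choice.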

By making consent a real choice, companies demonstrate respect for their users’ autonomy.

Highlight the Good Uses of Data

While we should hold companies accountable for protecting privacy, we should also celebrate examples where data is used responsibly and creatively. For instance, Niantic’s geospatial model could enable exciting AR experiences, help urban planners, or even contribute to public safety.

The problem isn’t data itself—it’s how it’s handled. Showing users the positive potential of their data, alongside clear protections, can help rebuild trust.

Address the Bigger Issue: Access and Power

Finally, we need to acknowledge a broader problem: the imbalance of power. Many users feel powerless when it comes to their data because they rely on essential services that demand it. Addressing that imbalance requires action on three fronts:

  • Governments need to enforce stronger regulations to protect user data and hold companies accountable.
  • Companies must prioritize ethical data practices, not just compliance with minimum standards.
  • Consumers should be empowered with tools and education to make informed decisions.

A Call for Change

If we continue down the path of complex privacy policies and opaque practices, the result will be more distrust and resistance to innovation. But it doesn’t have to be this way. We can create a digital future built on trust by demanding transparency, fostering privacy literacy, and implementing meaningful consent.

AI and data collection don’t have to be the villains of our stories. They can be powerful tools for good—if we commit to using them responsibly.

The choice is ours.


The image was created by Midjourney using the following prompt: Imagine a Pokemon who is changing from cute and friendly to evil. He has a plot to take over the world. He is aided by AI and the digital world. Make sure to represent the digital world in the image. Style is animation.