ChatGPT’s Arrival on iPhone Sparks Reprise of Privacy Concerns

The launch of the ChatGPT iOS app has intensified the ongoing privacy debate, renewing calls for governments to regulate AI development.

Since OpenAI introduced ChatGPT, privacy advocates have warned consumers about the potential threat to privacy posed by generative AI apps. The arrival of a ChatGPT app in the Apple App Store has ignited a fresh round of caution.

“[B]efore you jump headfirst into the app, beware of getting too personal with the bot and putting your privacy at risk,” warned Muskaan Saxena in Tech Radar.

The iOS app comes with an explicit tradeoff that users should be aware of, she explained, spelled out in this admonition: “Anonymized chats may be reviewed by our AI trainer to improve our systems.”

Anonymization, though, is no ticket to privacy. Anonymized chats are stripped of information that can link them to particular users. “However, anonymization may not be an adequate measure to protect consumer privacy because anonymized data can still be re-identified by combining it with other sources of information,” Joey Stanford, vice president of privacy and security at Platform.sh, a Paris-based maker of a cloud services platform for developers, told TechNewsWorld.
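A classic route to re-identification is a linkage attack: an “anonymized” record still carries quasi-identifiers, such as a zip code or birth year, that can be joined against a public dataset that does name people. The following Swift sketch illustrates the idea; the record types, field names, and data are entirely hypothetical.

    struct AnonymizedChat {
        // Identity removed, but quasi-identifiers remain.
        let zip: String
        let birthYear: Int
        let excerpt: String
    }

    struct PublicRecord {
        // e.g., a voter roll or marketing list that names individuals.
        let name: String
        let zip: String
        let birthYear: Int
    }

    let chats = [AnonymizedChat(zip: "02139", birthYear: 1984,
                                excerpt: "asked about a medical condition")]
    let publicList = [PublicRecord(name: "J. Doe", zip: "02139", birthYear: 1984)]

    // Re-identify by joining the datasets on the shared quasi-identifiers.
    for chat in chats {
        let matches = publicList.filter {
            $0.zip == chat.zip && $0.birthYear == chat.birthYear
        }
        if let person = matches.first, matches.count == 1 {
            print("Re-identified \(person.name): \(chat.excerpt)")
        }
    }

The more fields an attacker can join on, the fewer candidates each match leaves, which is why location data is especially sensitive.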

“It’s been found that it’s relatively easy to de-anonymize information, especially if location information is used,” explained Jen Caltrider, lead researcher for Mozilla’s Privacy Not Included project.

“Publicly, OpenAI says it isn’t collecting location data, but its privacy policy for ChatGPT says they could collect that data,” she told TechNewsWorld.

Nevertheless, OpenAI does warn users of the ChatGPT app that their information will be used to train its large language model. “They’re honest about that. They’re not hiding anything,” Caltrider said.

Taking Privacy Seriously

Caleb Withers, a research assistant at the Center for a New American Security, a national security and defense think tank in Washington, D.C., explained that if a user types their name, place of work, and other personal information into a ChatGPT query, that data will not be anonymized.

“You have to ask yourself, ‘Is this something I would say to an OpenAI employee?’” he told TechNewsWorld.

OpenAI has stated that it takes privacy seriously and implements measures to safeguard user data, noted Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.

“However, it’s always a good idea to review the specific privacy policies and practices of any service you use to understand how your data is handled and what protections are in place,” he told TechNewsWorld.

However dedicated to data security an organization might be, vulnerabilities may still exist that malicious actors could exploit, added James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“It’s always important to be cautious and consider the necessity of sharing sensitive information to ensure that your data is as secure as possible,” he told TechNewsWorld.

“Protecting your privacy is a shared responsibility between users and the companies that collect and use their data, which is documented in those long and often unread End User License Agreements,” he added.

Built-In Protections

McQuiggan noted that users of generative AI apps have been known to insert sensitive information such as birthdays, phone numbers, and postal and email addresses into their queries. “If the AI system is not adequately secured, it can be accessed by third parties and used for malicious purposes such as identity theft or targeted advertising,” he said.

He added that generative AI applications could also inadvertently reveal sensitive information about users through their generated content. “Therefore,” he continued, “users must know the potential privacy risks of using generative AI applications and take the necessary steps to protect their personal information.”
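One concrete step along those lines, suggested by that advice rather than by anything OpenAI provides, is scrubbing detectable PII from a prompt before it leaves the device. Here is a minimal Swift sketch using Foundation’s NSDataDetector; the scrubPII helper and its sample input are hypothetical.

    import Foundation

    // Hypothetical helper: redact phone numbers, street addresses,
    // and links before a prompt is sent to any chat service.
    func scrubPII(_ text: String) -> String {
        let types: NSTextCheckingResult.CheckingType = [.phoneNumber, .address, .link]
        guard let detector = try? NSDataDetector(types: types.rawValue) else { return text }
        var scrubbed = text
        let matches = detector.matches(in: text, range: NSRange(text.startIndex..., in: text))
        // Replace from last match to first so earlier ranges stay valid.
        for match in matches.reversed() {
            if let range = Range(match.range, in: scrubbed) {
                scrubbed.replaceSubrange(range, with: "[REDACTED]")
            }
        }
        return scrubbed
    }

    print(scrubPII("Reach me at 555-123-4567 or via https://example.com"))
    // Reach me at [REDACTED] or via [REDACTED]

A detector catches only what it can recognize; names, employers, and other freeform details still have to be left out by the user, per Withers’ rule of thumb above.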

Unlike desktops and laptops, mobile phones have some built-in security features that can curb privacy incursions by apps running on them.

However, as McQuiggan pointed out, “While some measures, such as application permissions and privacy settings, can provide some level of protection, they may not thoroughly safeguard your personal information from all types of privacy threats as with any application loaded on the smartphone.”

Vena agreed that built-in measures like app permissions, privacy settings, and app store regulations offer some level of protection. “But they may not be sufficient to mitigate all privacy threats,” he said. “App developers and smartphone manufacturers have different approaches to privacy, and not all apps adhere to best practices.”

Even OpenAI’s practices vary from desktop to mobile phone. “If you’re using ChatGPT on the website, you have the ability to go into the data controls and opt out of your chat being used to improve ChatGPT. That setting doesn’t exist on the iOS app,” Caltrider noted.

Beware App Store Privacy Info

Caltrider also found the permissions used by OpenAI’s iOS app a bit fuzzy. “In the Google Play Store, you can check and see what permissions are being used. You can’t do that through the Apple App Store,” she noted.

She warned users about depending on privacy information found in app stores. “The research that we’ve done into the Google Play Store safety information shows that it’s really unreliable,” she observed.

“Research by others into the Apple App Store shows it’s unreliable, too,” she continued. “Users shouldn’t trust the data safety information they find on app pages. They should do their own research, which is hard and tricky.”

“The companies need to be better at being honest about what they’re collecting and sharing,” she added. “OpenAI is honest about how they’re going to use the data they collect to train ChatGPT, but then they say that once they anonymize the data, they can use it in lots of ways that go beyond the standards in the privacy policy.”

Stanford noted that Apple has policies in place that can address some of the privacy threats posed by generative AI apps. They include:

  • Requiring user consent for data collection and sharing by apps that use generative AI technologies;
  • Providing transparency and control over how data is used and by whom through the App Tracking Transparency feature, which lets users opt out of cross-app tracking (a minimal request sketch follows this list);
  • Enforcing privacy standards and regulations for app developers through the App Store review process and rejecting apps that violate them.
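For context on the second item, cross-app tracking is gated by a runtime prompt that each app must trigger itself. A minimal Swift sketch of the request follows; where it sits in a real app’s lifecycle will vary.

    import AppTrackingTransparency

    // Requires an NSUserTrackingUsageDescription entry in Info.plist;
    // without it, the prompt never appears and tracking stays denied.
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now may the app read the advertising identifier (IDFA).
            print("Tracking allowed")
        case .denied, .restricted, .notDetermined:
            // The app must not track the user across other companies' apps.
            print("Tracking not allowed")
        @unknown default:
            print("Tracking not allowed")
        }
    }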

However, he acknowledged, “These measures may not be enough to prevent generative AI apps from creating inappropriate, harmful, or misleading content that could affect users’ privacy and security.”

Call for Federal AI Privacy Law

“OpenAI is just one company. There are several creating large language models, and many more are likely to crop up in the near future,” added Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy, in Washington, D.C.

“We need to have a federal data privacy law to ensure all companies adhere to a set of clear standards,” she told TechNewsWorld.

“With the rapid growth and expansion of artificial intelligence,” added Caltrider, “there definitely needs to be solid, strong watchdogs and regulations to keep an eye out for the rest of us as this grows and becomes more prevalent.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
