This year's Android and iPhone updates are set to be dominated by artificial intelligence (AI), with Google leading the charge. Amid the excitement, however, every smartphone user should be aware of the risks: the incorporation of AI into everyday smartphone apps raises serious security and privacy concerns.
One area of particular concern is generative AI chatbots. We may exercise caution over app installations, permissions, browsers, and data sharing with platforms like Facebook and Google, yet we often forget those precautions when engaging with AI chatbots. Suddenly we find ourselves in what feel like private conversations with seemingly helpful virtual friends. That sense of intimacy masks the fact that these chatbots are ultimately part of a massive computing ecosystem driven by advertising and data brokering.
This issue first surfaced when AI chat was introduced into private messaging apps. Now, as Bard rebrands to Gemini, Google has warned Android and iPhone users to be cautious with the new technology. In its notice, Google advises users not to disclose confidential information or share data they wouldn't want reviewers or Google to access and use for product improvement, service enhancement, and machine-learning development. The company acknowledges that Gemini Apps conversations, related product-usage information, location data, and user feedback are collected and used for these purposes.
Google does reassure users that Gemini Apps conversations are not currently used for targeted advertising, though it notes that this could change in the future and promises transparent communication if it does. The concern arises when people unknowingly share sensitive information while using AI chatbots to draft business plans or sales presentations, or even to cut corners on schoolwork. The questions asked and the answers given in these conversations are stored and can be retrieved and reviewed, posing significant privacy risks.
Yet standalone apps are just the beginning. Google also warns that integrating Gemini Apps with other Google services allows data to be accumulated and used in line with those services' privacy policies and Google's Privacy Policy. If users employ Gemini Apps to interact with third-party services, their data will be handled according to those third parties' privacy policies.
Awareness of the risks posed by generative AI is only just emerging. Google's Gemini (formerly Bard), for instance, seemingly requests access to users' past private messages to contextualize its suggestions, which may undermine end-to-end encryption in messaging services. The primary concern is the off-device storage of data: by default, Google retains this data for up to 18 months, adjustable to 3 or 36 months in the Gemini Apps Activity settings. Location information, including general areas, IP addresses, and the home or work addresses associated with users' Google Accounts, is also stored alongside Gemini Apps activity.
It is important to acknowledge that this issue is not unique to Google; data collection and usage of this nature are typical within the emerging generative AI industry. Evaluating the security and privacy practices of different players, such as Google and OpenAI, presents an ongoing challenge.
Jake Moore, an expert at ESET, points out that any data shared online, even through private channels, can be stored, analyzed, and potentially passed to third parties. As data becomes increasingly valuable, AI models are designed to extract vast amounts of personal information from users, and this sharing creates security and privacy risks that users are often unaware of.
Google notes that users can disable long-term data collection in Gemini's settings. Doing so ensures that future conversations are not subject to human review or used to improve generative machine-learning models. Users can also delete chats from their pinned and recent chats, which simultaneously removes the related activity from 'Your Gemini Apps Activity.' Even after deletion, however, conversations are retained for up to 72 hours to provide the service and process feedback, though they no longer appear in Gemini Apps Activity.
As previously mentioned, the split between on-device and off-device AI processing will become increasingly significant in enabling new smartphone functionality. Apple is likely to lean on on-device processing as much as possible, although it is also exploring a mix of on- and off-device approaches. Google's approach, by contrast, will rely primarily on its cloud infrastructure, reflecting its different architecture and focus.
For the millions of Android and iPhone users already using Gemini-powered apps, a decision must be made about the trade-off between privacy and the allure of AI built into mainstream apps. The rest of us will face the same choice soon.
Ultimately, we must decide whether we prioritize our privacy and the considerable progress made in recent years on private browsing, tracking prevention, and location sharing, or whether AI woven into everyday apps is too enticing to resist, accepting the consequences that follow. If it is the latter, we should heed the popular expression: 'be careful what you wish for.'