Pagoda Blog

The Risks of Sharing Personal Information with AI

January 18, 2024

AI is a powerful tool for business. It can quickly analyze and organize large data sets, generate marketing content, improve sales copy, handle HR tasks, offer 24/7 customer support, and even provide IT services. There are many ways to leverage AI for your business, but as with any technology, there are risks.

 

Tools such as ChatGPT, Jasper, and Copy.ai all have privacy policies, but regardless of the level of cybersecurity boasted by AI software, your information is still vulnerable. A good rule of thumb is to assume that any information you share online is never 100% secure. With AI, it’s also important to understand its limitations. The technology is continually evolving, and although it can tackle an incredible number of tasks, it is still nowhere close to replacing humans.

 

Here are a few reasons you should exercise extra caution when sharing personal information with AI.  

 

We can’t control how AI interprets and uses our data

We are continually learning how AI will interact with and use the data we give it. One thing we know about the current iteration of AI is that it lacks contextual understanding. This means it can take information out of context and interpret it in misleading and potentially harmful ways. Here’s one example of how this works: 

 

AI algorithms look for patterns in shared data and then use these patterns to draw a conclusion. This shared data can include personal and highly sensitive information such as income, race, age, health risks, job titles, marital status, number of kids, and behaviors and preferences. If left unchecked, AI can easily draw misleading correlations between data sets, resulting in practices or policies that unjustly impact certain populations both within a specific workplace and across entire regions. 

 

For example, AI can use personal information to determine whether an individual qualifies for a job opening, health and financial benefits, housing, and more. The algorithms simply aren’t sophisticated enough to recognize biases or coincidental patterns, so we run the risk of perpetuating discriminatory practices. Experts are sounding alarm bells specifically about using AI as a search engine because of the real risk of spreading both biased and inaccurate information.

 

As a marketing tool, AI may decide to show inappropriate or simply irrelevant sponsored content to an individual based on their online behavior and public data pulled from their social media platforms, or on data they voluntarily shared through an AI app. While typically not as harmful as using personal information to dictate benefits, AI-run marketing can have its own unintended consequences depending on the product or service it’s promoting. In short, when you share personal information with an AI app, it’s impossible to entirely control how that app will use the data.

 

AI apps are vulnerable to data breaches

AI programs learn from every user interaction. In order for this learning to occur, user interactions must be stored for prolonged periods of time. In addition, AI apps store voluntarily shared personal data such as your account information and communication with customer service or the app’s social media platforms. They also store data that’s automatically shared with them such as your browser, IP address, cookies, and how you interact with the app. 

 

This just scratches the surface of the data that may be collected, stored, and shared when you interact with AI. OpenAI, for example, states in its privacy policy that it may share any of the above information with third parties as required. These third parties may include affiliates, legal entities, and business account administrators. Should OpenAI or any of the third parties it works with experience a data breach, your personal information could be compromised.

 

In an article by The Economic Times, Vipin Vindal, CEO of Quarks Technosoft, addresses these concerns around the growing use of AI:

 

“It is critical to ensure that AI is developed and deployed responsibly. This involves ensuring that personal data is collected and used transparently and ethically, with clear guidelines around how it can be used and shared. It also means incorporating safeguards to prevent the misuse of AI technologies, such as developing mechanisms for individuals to control how their data is collected and used.”

 

Best practices when sharing data with AI 

As you explore the many uses of AI for your business, here are some best practices when sharing data with an AI app: 

 

1. Share only what’s necessary

Always limit the amount of personal or sensitive information you share with AI (or with any online application). The less data you disclose, the less vulnerable you’ll be to a data breach. 

 

2. Remain anonymous 

Avoid using identifiable information when possible. This means not disclosing full names, addresses, phone numbers, or any other sensitive data. A good gut check is to ask yourself, ‘Would this violate HIPAA if it were shared publicly?’
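For teams building AI prompts into a workflow, this tip can even be automated. Below is a minimal, hypothetical sketch of scrubbing common identifiers from text before it ever leaves your machine; the patterns and the `redact` helper are illustrative assumptions, not an exhaustive or production-grade PII filter.

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com, phone 831-555-0142."
print(redact(prompt))
# The email address and phone number are replaced with placeholders
# before the prompt is sent to any AI service.
```

The idea is simply to make anonymization the default step in the pipeline rather than relying on each user to remember it.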

 

3. Never share passwords or other highly sensitive information 

Your passwords should only ever be stored in a highly secure password management system and never shared with a third party, unless absolutely necessary and you’re sure the password will not be stored or misused. For more guidelines on securely sharing passwords, check out this post.

 

4. Don’t skim the privacy policy 

Read through the AI app’s privacy policy to ensure you understand how your information is stored and used. Do they share it with third parties? Review this policy every time you’re notified of an update.

 

As business owners, we can’t ignore the important role AI can play in revolutionizing the way we conduct our work. It’s becoming increasingly necessary for us to find ways to incorporate AI into our operations in order to stay competitive. It is also critical as AI continues to evolve that we all remain vigilant and demand robust privacy protections and policies around the collection, storage, and use of our data.

 

 

Feature photo by Sanket Mishra on Unsplash

 

 

Want to get more posts like these in your inbox? Sign up for the Pagoda newsletter and we’ll send you the occasional email with content that will sharpen your technical skills, from cybersecurity to digital marketing.

 

Did you know we also have a weekly LinkedIn newsletter? Make sure to subscribe for weekly actionable IT advice and tech tips to set your business up for success.

 

-------------------

 

About Pagoda Technologies IT services

Based in Santa Cruz, California, Pagoda Technologies provides trusted IT support to businesses and IT departments throughout Silicon Valley, the San Francisco Bay Area and across the globe. To learn how Pagoda Technologies can help your business, email us at support@pagoda-tech.com to schedule a complimentary IT consultation.

 


