Artificial intelligence (AI) has enormous potential to change the way we live and work, and businesses are scrambling to understand how they can leverage this milestone in technological innovation.
Customer service could be transformed, and jobs traditionally thought to require humans could soon be automated.
But amid the excitement, the consequences for online privacy must also be understood, and action must be taken.
The use cases for AI are growing: this is a technology that could become ubiquitous, meaning its implications for privacy must be taken extremely seriously.
AI can make us safer, by improving security, and healthier, by providing faster and more accurate medical diagnoses. AI and machine learning (ML) are being used for everything from creating web content and writing and checking computer code to powering driverless cars and modelling climate change.
Meanwhile chatbot services – accessible to the public via apps and web pages – use generative AI to produce human-sounding responses to questions and prompts.
AI is also big business. The worldwide market for AI-centric applications is expected to be worth over $150bn this year, according to IDC [1].
But what of the privacy concerns?
AI models, including chatbots and generative AI, rely on vast quantities of training data. The more data an AI system can access, the more accurate its models should be. The problem is that there are few, if any, controls over how data is captured and used to train these models [2]. With some AI tools connecting directly to the public internet, that could easily include your data.
Then there is the question of what happens to queries submitted to generative AI tools. Each service has its own policy for how it collects and stores personal data, as well as how it stores query results. Anyone who uses a public AI service needs to be very careful about sharing either personal information or sensitive business data. New laws will control the use of AI; the European Union, for example, plans to introduce its AI Act by the end of 2023 [3]. And individuals are, to an extent, protected from the misuse of their data by the GDPR and other privacy legislation.
But security professionals need to take special care of their confidential information.
Generative AI tools have already been used to create templates for phishing and ransomware attacks; large language models can craft attack vectors, such as emails, that are harder for both security software and human recipients to detect.
Security professionals are a key target for cybercriminals, and the more information about an individual is available online, the more effective such attacks can be, as AI tools trawl the web for data that can be used to compromise identities.
Fortunately, there are steps anyone can take to limit how much of their personal information is on the public internet. This is the best way to ensure that our data does not end up in AI systems in the first place.
Of course, anyone using a chatbot or generative AI tool should avoid sharing personal, financial, and medical information, as well as sensitive business data. This includes information about the business and its security measures.
And it’s pragmatic to limit your own, personal digital footprint. This is where personal data removal services come into their own.
Services such as Incogni can help by contacting the hundreds of search engines, websites, social media outlets and data brokers that hold personal data.
These services also keep your online presence under constant review, providing an invaluable additional layer of security. Review and manage your digital footprint using Incogni now.
[1] IDC Press Release, Worldwide Spending on AI-Centric Systems Forecast to Reach $154 Billion in 2023, According to IDC, March 2023, https://www.idc.com/getdoc.jsp?containerId=prUS50454123
[2] Politico, The struggle to control AI, May 2023, https://www.politico.eu/article/washington-eu-trade-and-tech-council-join-forces-to-stop-ai-harms/
[3] European Parliament, EU AI Act: first regulation on artificial intelligence, June 2023, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence