If you ask ChatGPT to write an introduction for a blog post about ChatGPT in a privacy context, you get the following response:
“In the wake of digitalization and the growing need for automated and personalized services, the use of AI-based chat systems is becoming increasingly important. ChatGPT, an AI technology from OpenAI, is a promising example of generating human-like responses to text input. However, in addition to the numerous benefits ChatGPT offers to businesses and users, there are also privacy challenges that must be considered when using AI-based chat systems. In this blog post, we will take a closer look at ChatGPT in a data privacy context and discuss the most important aspects for businesses and users.”
This is just one of many examples that show how far artificial intelligence (AI) has come. Such developments make it increasingly difficult to distinguish human-written texts from computer-generated ones, and they raise new questions in data protection law. This article explains what happens to input entered into ChatGPT and what should be considered in order to use ChatGPT securely and in compliance with data protection law.
What is ChatGPT?
ChatGPT is a product of the US company OpenAI, which was founded in 2015 by a group of technology entrepreneurs, including Elon Musk and Sam Altman. OpenAI also has several investors and partners, such as Microsoft.
The latest version of the underlying model – GPT-4 – can do even more than its predecessor. For example, it can handle much longer texts of up to 25,000 words and, conversely, generate texts of similar length itself. In addition, the new model can also interpret images and describe them. Another special feature is that GPT-4 can recognize and explain humor: at the presentation of the new version, it was shown that the model can describe, for example, why a comic is funny.
What happens with input to ChatGPT?
When you send an input to ChatGPT, it is processed by the US servers running the AI model. The input is entered as text (as of GPT-4, also as images) and is then interpreted by the model to generate a response. This processing is a complex sequence of mathematical operations based on a neural network that was trained on large amounts of data (machine learning).
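As a rough sketch, what actually leaves the user's device is a JSON payload containing the model name and the conversation so far, sent over HTTPS to OpenAI's servers. The endpoint and field layout below follow OpenAI's public API documentation; the concrete values are illustrative assumptions:

```python
import json

# Illustrative sketch of the JSON payload a client sends to OpenAI's
# chat completions endpoint (https://api.openai.com/v1/chat/completions).
# Everything entered here -- including any personal data -- is transmitted
# to US-based servers, where the model generates a response.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write an introduction about ChatGPT and privacy."},
    ],
}

# The payload is serialized before being sent over HTTPS.
body = json.dumps(payload)
```

Note that the `messages` list carries the full conversation context, which is why any personal data typed into the chat inevitably travels with the request.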
In principle, ChatGPT's responses do not belong to anyone, as they are generated by an automated model. The model was developed and trained by OpenAI, LLC to generate answers to questions or comments based on the text given to it as input. The answers ChatGPT generates are the result of processing the data used during the model's training, not the expression of an individual human opinion or perspective.
Nevertheless, it can of course happen that ChatGPT draws on copyrighted sources. How to proceed in such a case has not yet been conclusively settled from a legal point of view. It is also problematic that ChatGPT does not display the sources it used unless this was explicitly requested in the input. Users would therefore have to request the sources themselves and then check whether they are protected by copyright. According to reports, however, ChatGPT may invent sources that do not exist, or answer the same question differently when asked multiple times. One should be aware of these problems when using ChatGPT.
Privacy risks with ChatGPT
From a data protection point of view, it is problematic that any kind of input can be entered into ChatGPT, which may include personal data. In addition, the responses generated by ChatGPT may themselves qualify as personal data if they relate to identifiable individuals and their context. In other words, users may enter personal data into the chat, and ChatGPT may in turn use this data to generate new responses. The core of the problem is that, as a rule, there is no legal basis for this processing and no consent is obtained from the data subjects beforehand.
In addition, the data is transferred to the USA, which is currently considered an unsafe third country due to its level of data protection. This third-country transfer must be safeguarded, which is why standard contractual clauses must be concluded between OpenAI and companies that wish to use the service. For its business services, OpenAI provides a data processing agreement, which typically includes the standard data protection clauses. In addition, a Transfer Impact Assessment must be carried out for this processing activity.
Is it possible to integrate ChatGPT into your own software?
Yes, it is possible to integrate ChatGPT into your own software. For this purpose, OpenAI provides various APIs and tools that allow ChatGPT to be embedded in your own applications or systems. Via these APIs, ChatGPT's capabilities can be accessed from within one's own system. Companies can use the API to create their own chatbots and adapt them to specific user needs.
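A minimal integration sketch, assuming the public chat completions endpoint and an API key supplied via an environment variable. The `redact` helper and its patterns are our own illustrative addition for privacy-conscious use, not part of OpenAI's API:

```python
import json
import os
import re
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # public endpoint

def redact(text: str) -> str:
    """Illustrative pre-processing step: mask obvious personal data
    (e-mail addresses, phone-number-like digit runs) before the input
    leaves the company's own systems."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d /-]{7,}\d", "[PHONE]", text)
    return text

def ask_chatgpt(prompt: str, model: str = "gpt-4") -> str:
    """Send a redacted prompt to the chat completions API and return the answer."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": redact(prompt)}],
        }).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())
    return answer["choices"][0]["message"]["content"]

# Example (requires OPENAI_API_KEY and network access):
#   print(ask_chatgpt("Summarize GDPR obligations for chatbot providers."))
```

A redaction step like this does not replace a legal basis for processing, but it reduces the amount of personal data that reaches US servers in the first place.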
If a company integrates ChatGPT into its products, OpenAI must be listed as a subcontractor in the data processing agreements concluded with its customers.
ChatGPT can be integrated into a wide range of applications and platforms, for example newsletter tools, messaging platforms and e-commerce platforms. Microsoft also plans to use ChatGPT in its own products, such as Bing, and many more integrations are on the way.
How can ChatGPT be used securely?
Would you like to use an AI chatbot securely in your projects? Contact the Enkronos team today.