03. ChatGPT and your business
Without a doubt, “ChatGPT” is one of the buzzwords of 2023. As excitement builds around this powerful chatbot and other tools based on large language models, many businesses will be considering how to make the most of this new technology in a risk-informed way.
Here are our top 5 things to consider when assessing possible use cases for ChatGPT within your business:
1) Doing nothing is a risk. Confronted with a potential minefield of commercial, legal and regulatory issues, you might be tempted to ban ChatGPT altogether. But what are your competitors doing? Are they enabling their staff to test possible use cases in a controlled way? If so, you run the risk of being left behind.
2) Read the terms of use. How many of us have clicked through standard terms of use without reading them, eager to test out the exciting new technology at our fingertips? Any effective and fully informed ChatGPT risk assessment should include a close review of the terms applicable to your use.
3) Protect confidential information. The basic premise of the model underpinning ChatGPT is that it learns continuously from user inputs. The information your employees enter into the chatbot, and the responses they receive, should therefore be treated as non-confidential. As at the date of this post, the terms of use for ChatGPT indicate that users can opt out of having their content used for learning purposes, but it is not yet clear how this safeguard works in practice.
4) Be aware of the data protection risks. Data protection and privacy laws typically regulate the use and processing of personal data, give individuals rights over their data (for example, to request deletion) and require impact assessments to be carried out in certain circumstances. Given ChatGPT’s learning functionality, personal data introduced by a user may be reproduced to other users in later responses. Processing any amount of personal data through ChatGPT or other large language models requires extremely careful thought.
5) Use outputs with caution. Large language models are known to “hallucinate”, leading them to produce convincing but factually inaccurate answers. Given that they learn from others’ inputs, their responses may also incorporate intellectual property owned by others (for example, copyright-protected code or text). Finally, courts in several jurisdictions are still grappling with the complex question of who owns the intellectual property rights in works created using artificial intelligence. In short, think twice before relying blindly on the responses you receive from ChatGPT.
Clearly it is not possible to eliminate all risks associated with the use of ChatGPT. However, there are measures businesses can take to mitigate them. One option is to limit use to specified individuals for approved purposes, accompanied by strict rules (for example, banning all users from inputting personal data and confidential information), a careful review process and appropriate training.
For more information about Bloomworks Legal’s services (including support in assessing risk and drafting internal policies for use of emerging technology), click here.
The fine print:
This newsletter is provided for general information only and does not constitute legal advice or create any kind of solicitor/client relationship; please consult with a qualified professional if you need advice on a legal issue.