Since its release to public beta in November 2022, ChatGPT has become immensely popular, with over 1 million users signing up within five days. It has the potential to disrupt business and almost all aspects of our professional lives: it is being used to generate content and write code, is allegedly smart enough to pass university exams and complete school and university assignments written to the relevant level, and is potentially capable of replacing Google's search model.

However, alongside all this excitement and vast potential come significant implications from a data security perspective. Some of these arise from hackers and other malicious users taking advantage of the tool, or from its native flaw of producing very convincing yet inaccurate content.

Basics / Fundamentals

ChatGPT is a freely accessible chatbot: anyone on the internet can sign up and interact with it. It has been trained on billions of data points, which means its responses draw on a vast amount of data. That same data and capability can be used by malicious actors to research and target their attacks.

The model is constantly being "moderated" and improved in how it handles malicious content or requests. However, rephrasing a question can often still elicit the desired response. An example of this is shown in the Generating Phishing Content section, where rephrasing a question allowed it to generate content even though the original request was flagged as potentially violating the content policy. (Note: this test was performed before the recent upgrade to GPT-4.)

Data / Response accuracy

As any AI engine has only gathered its intelligence from the data it was trained on, there are inherent risks around the accuracy of that data. These include an inherent bias in the results it gives, based on what it knows and how that differs from what the person asking the question knows. Its answers can also be sensitive to the wording of the question. OpenAI is very open about this: these limitations are outlined on the ChatGPT blog (https://openai.com/blog/chatgpt/), on the main screen when you start interacting with the tool, and in disclaimers within the responses it offers.

Having a thorough vetting and verification / approval process, where any generated content, whether produced by AI or by people, is proofread and checked, will help to mitigate this threat.

Generating Phishing Content

Using ChatGPT's ability to write very convincing content, hackers are likely to generate content for phishing and other social engineering attacks. This allows non-native speakers to produce grammatically correct and convincing text, and lets hackers purporting to be from whatever type of business they are targeting write life-like content for it. This will eliminate one of the current tell-tale signs of phishing emails: poorly constructed language and grammar.

As scams become more elaborate and convincing, including and enhancing staff training as part of your management system can help your staff know what to look for and recognise malicious emails and social media contact. This will help to protect your network from infiltration, and help your staff keep their personal data secure as well.

Generating Malicious Software

ChatGPT can write code to boost programmers' productivity, which also means it can be asked to write code for malicious purposes and malware. This creates opportunities for prospective hackers who are not skilled at writing code. ChatGPT can write code in a multitude of languages, so it is capable of producing anything from scripting tools for accessing servers and systems down to application functionality for extracting data from an infiltrated system.

For example, it can be asked to write code that scans text for credit card numbers and sends those details to another site for later use. Interestingly, this request was not flagged as violating the content policy.

An example of this is shown in the screenshot below. Note that the screenshot has been cropped to remove the generated code, in the interest of not promoting malicious activities.

[Screenshot: ChatGPT responding to the request, with the generated code cropped out]
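
To give a sense of the kind of pattern matching such a script relies on, and of how defenders can turn the same technique around for data loss prevention (DLP) scanning, below is a minimal illustrative sketch in Python. It is our own simplified example, not the cropped ChatGPT output above: it deliberately contains no exfiltration logic, the find_card_numbers and luhn_valid names are hypothetical, and the card number shown is a synthetic, Luhn-valid test value.

    import re

    # Matches runs of 13-16 digits, optionally separated by single spaces or dashes
    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

    def luhn_valid(digits: str) -> bool:
        """Return True if a digit string passes the Luhn checksum used by card numbers."""
        nums = [int(d) for d in digits]
        # Double every second digit from the right, subtracting 9 when the result exceeds 9
        for i in range(len(nums) - 2, -1, -2):
            nums[i] *= 2
            if nums[i] > 9:
                nums[i] -= 9
        return sum(nums) % 10 == 0

    def find_card_numbers(text: str):
        """Scan free text and return candidate card numbers that pass the Luhn check."""
        hits = []
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_valid(digits):
                hits.append(digits)
        return hits

    # A DLP-style control could run this over outbound text and alert on any hits.
    # The number below is a synthetic, Luhn-valid test value, not a real card.
    sample = "Order confirmed. Card used: 4539 1488 0343 6467."
    print(find_card_numbers(sample))  # prints ['4539148803436467']

The point is how little code is involved: an attacker asking ChatGPT for the same building blocks, plus code to transmit the matches elsewhere, gets a working data-stealing component with almost no programming skill.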

This ability to generate malicious tools far more easily, helping hackers detect vulnerabilities and attack your system, elevates the likelihood of attack, as less technical hackers can now generate code. Protecting against this is complex; however, it is no more complex than addressing the risks and existing vulnerabilities within your system. Implementing or enhancing an ISMS, and obtaining certification to a security standard such as ISO 27001, will help you identify vulnerabilities, assess the risk of them being exploited, and implement controls to mitigate those risks.

Conclusion

Using AI tools such as ChatGPT to generate content can and will make the process much more efficient. However, appropriate checks will still need to be in place to ensure that what is produced is accurate, relevant, and written appropriately.

These tools can also be used to generate threats to your systems. As ChatGPT and OpenAI's models are such game-changing technologies, the threats are wide and varied. To best assess and mitigate the risks to your system that are introduced or amplified by AI, implementing an ISMS, or re-assessing the risks in your existing ISMS, is the best course of action. Having your ISMS certified against a standard is prudent and will assist with continually improving the system and keeping pace with a fast-moving industry.

Author
Brad Fabiny

DQS Product Manager - Cyber Security, and auditor for the ISO 9001 and ISO 27001 standards and information security management systems (ISMS), with extensive experience in software development.
