Impacts of Generative AI on Information Security

This field is young and rapidly evolving. GPT-4 (https://openai.com/product/gpt-4), which was only made available in March 2023, is a major advancement over ChatGPT’s original engine. New insights on these reverberating impacts are being discovered all the time and will continue to come to light. That said, there are some emerging concepts to be aware of that can help shape your specific, and our collective, approach to managing risk.

The Dangers

  • These tools are being used to create more convincing and grammatically correct phishing messages. This is concerning because several common indicators of phishing emails (such as bad or odd grammar) can now be easily corrected by amateur and advanced attackers alike. Though ChatGPT, for example, has safeguards in place, such as denying requests for phishing email language, they have been easy to circumvent (by asking for an example phishing email “for a security training,” for instance).
  • These tools are being used to write malicious code, and in at least one hacker contest they led contestants to more sophisticated attacks, helping one team win the Zero Day Initiative’s hack-a-thon in Miami last week. ChatGPT helped the team string together non-obvious vulnerabilities, then write malware to chain and exploit them.
  • A silver lining: malware written by these language models should be detected by modern anti-malware systems, since the models generate it from previously seen examples. They are not, yet, coming up with their own novel variations, but we could arrive at that point.
  • Teams are using these tools for business productivity, including uploading sensitive business information to ChatGPT. See Cyberhaven’s research on this (cited below).
  • OpenAI (along with several other companies) discloses that it may use data uploaded by users to train its models; uploaded data may thus be exposed to unknown recipients.

The Benefits

  • These tools are being added to security products (such as Orca) to provide more context and “plain language” clarity around what needs to be remediated. They are also being incorporated so you can ask the tools direct questions and get meaningful, contextual answers.
  • Related to the above, it is supercharging SIEM tools and may reduce or eliminate the need for fully staffed Security Operations Centers at certain companies.
  • It is helping to review code for potential vulnerabilities (as well as optimizations). Though not yet powerful or accurate enough to replace current static and dynamic code scanning, it is rapidly improving and finding things scanners miss (see the sketch below).
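
As an illustration of that last point, here is a minimal sketch of asking a model to review a code snippet for vulnerabilities. It assumes the OpenAI Python package (openai, pre-1.0 style) with an API key in the OPENAI_API_KEY environment variable; the vulnerable snippet is a contrived example of our own, and the output should be treated as a starting point for human review, not a verdict.

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Contrived example: classic SQL injection via string formatting.
    SNIPPET = """
    def get_user(conn, username):
        cur = conn.cursor()
        cur.execute("SELECT * FROM users WHERE name = '%s'" % username)
        return cur.fetchone()
    """

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a security code reviewer. List likely "
                        "vulnerabilities and suggest fixes."},
            {"role": "user", "content": SNIPPET},
        ],
        temperature=0,  # keep the review output as consistent as possible
    )

    print(response["choices"][0]["message"]["content"])

In practice this would run alongside, not instead of, existing scanners, and proprietary code should not be pasted into consumer-facing tools without the data-handling controls discussed below.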

What to think about and do?

  • Raise awareness among employees of the importance of handling business and personal data in accordance with its classification level. This includes uploading information to Generative AI tools such as ChatGPT, Bard, etc.
  • From a policy and monitoring perspective, treat Generative AI similarly to other SaaS tools businesses might be using. This includes leveraging SaaS / Cloud Security Posture Management (SSPM/CSPM) processes and tools (a minimal egress-monitoring sketch follows this list).
  • Give a special warning about processing personal data through ChatGPT or any other AI tool when that is not the purpose for which the data was collected (unless the data owners / concerned individuals have been told otherwise and given appropriate information about it).
    OpenAI’s business API, according to their disclosures, does not make use of customers’ data for training, so it might be a route forward for certain business uses. They also provide an opt-out for data uploaded through the web interface, but it requires an org ID and a specific email address for the account, which may be tough to manage.
  • Review and revise education and policy related to intellectual property rights and plagiarism issues; more at https://intellectual-property-helpdesk.ec.europa.eu/news-events/news/intellectual-property-chatgpt-2023-02-20_en
  • If you are going to use Generative AI, be transparent about it (both within and outside the org).
  • Teams should always check any results produced by Generative AI to verify whether they should be relied on for further usage.
  • In the Cyberhaven study mentioned previously, currently “…at the average company, just 0.9% of employees are responsible for 80% of egress events — incidents of pasting company data into the [ChatGPT] site.” So a few folks are major violators, for now (at a 1,000-person company, that would be roughly nine people); knowing who they are can help target training.
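
As a companion to the SSPM/CSPM point above, here is a minimal sketch of flagging egress to Generative AI domains in web proxy logs. The log format, file path, and domain watchlist are all hypothetical; a real deployment would live in your secure web gateway or DLP tooling rather than a standalone script.

    import csv
    from collections import Counter

    # Hypothetical export: one row per request, with "user" and "domain" columns.
    PROXY_LOG = "proxy_requests.csv"

    # Hypothetical watchlist of Generative AI endpoints to monitor or block.
    GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

    hits = Counter()
    with open(PROXY_LOG, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in GENAI_DOMAINS:
                hits[row["user"]] += 1

    # Surface the heaviest users first, consistent with Cyberhaven's finding
    # that a small fraction of employees drive most egress events.
    for user, count in hits.most_common(10):
        print(f"{user}: {count} requests to Generative AI services")

Depending on risk appetite, results like these would typically feed awareness training or DLP policy rather than an outright block.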
