Growing Concerns Around ChatGPT, Cybersecurity, and User Safety in 2024


Throughout 2023 and into 2024, there was no question about the impact of increasingly accessible artificial intelligence and of applications harnessing generative AI. The impacts proved more unpredictable than expected as ChatGPT moved immediately toward mainstream adoption. The groundbreaking application exploded in attention, dominating headlines in tech news. It has penetrated so many industries that Cisco predicts ChatGPT and similar tools will operate within the back end of IT systems and external products. One year after the product's public launch, we have seen greater investment by competitors in similar products, power struggles within parent companies and their primary investors, and cybersecurity risks that are expected to grow in scale and impact.

With vulnerabilities in sight and the capability to empower bad actors, cybersecurity must remain a critical priority to ensure the safe advancement of generative AI into the mainstream. 

Current Risks of AI

ChatGPT’s mainstream adoption has brought several challenges that have not only gone unaddressed but have been exacerbated by malicious actors and new methods of manipulating the underlying large language model. These challenges include: 

  • Malicious Code Generation – Tricking the AI into creating code for phishing, malware distribution, and other methods of cybercrime. 
  • Higher Quality Phishing Scams – The State of Phishing Report 2023 by SlashNext revealed a 1,265% increase in phishing emails since Q4 2022, including a 967% increase in credential phishing. 
  • Misinformation Campaigns – Generative AI can be used to manipulate public opinion and spread propaganda with greater reach and frequency than truthful sources. 
  • Social Engineering – AI can be applied to impersonate real people with high accuracy, convincing targeted individuals to give up sensitive information. 
  • Data Privacy – There is great concern about the legality of the data used to train ChatGPT, and a greater risk of information being leaked or misused. 
  • Lack of Transparency/Explainability – There is no way to show users how ChatGPT and similar applications arrive at their conclusions, making potential biases and security risks harder to detect and mitigate. 


Google Cloud’s official 2024 Cybersecurity Forecast notes that, “If an attacker has access to names, organizations, job titles, departments or even health data, they can now target a large set of people with very personal, tailored, convincing emails since there is nothing inherently malicious about, for example, using gen AI to draft an invoice reminder… We’ve already seen attackers have success with other underground-as-a-service offerings, including ransomware used in cybercrime operations.” 

Recognizing the need to disrupt threat actors, OpenAI terminated accounts operated by state-affiliated malicious actors. Actions taken by the attackers included researching companies, generating scripts, and creating content for phishing and other forms of social engineering. Not only can these actors create accurate, believable content, but they can go further by using LLMs to research companies and craft more targeted social engineering. 

Current Actions Taken in Cybersecurity AI 

In response to the heightened risks and increased use by bad actors, potential solutions to these challenges will require advances in the following areas: 

  • Improved Detection and Mitigation
  • Greater Transparency and Accountability
  • Guidelines and Regulations from World Governments
  • Public Education about Potential Risks and Protections from AI Models


Defenders and cybersecurity professionals in 2024 will need to detect and respond to online adversaries at scale and with greater speed. New AI technologies will help augment the human capability to analyze, infer, and draw more precise conclusions in the event of a cyberattack. With a general election in the United States happening in 2024, protection from malware, misinformation, and manipulation of public processes is an increasing priority, especially as nation-state actors grow in influence and capability. 
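To make the idea of automated detection concrete, here is a minimal, illustrative sketch of a rule-based phishing indicator scorer. It is not any vendor's actual detection method: the phrase list, the IP-address heuristic, and the threshold are all arbitrary assumptions chosen for the example; production systems rely on trained models, threat intelligence feeds, and far richer signals.

```python
import re

# Illustrative only: arbitrary indicator phrases, not a real threat feed.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "password expires",
    "click the link below",
]

def phishing_score(email_text: str) -> int:
    """Count simple phishing indicators present in an email body."""
    text = email_text.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links that use a raw IP address instead of a domain are a classic tell.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text):
        score += 1
    return score

def is_suspicious(email_text: str, threshold: int = 2) -> bool:
    """Flag an email once enough indicators accumulate (threshold is an assumption)."""
    return phishing_score(email_text) >= threshold
```

Even a toy heuristic like this shows why AI-written phishing is harder to catch: well-crafted generated text avoids the obvious keyword tells, pushing defenders toward model-based analysis of context and sender behavior rather than fixed rules.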

Currently, Microsoft Threat Intelligence is tracking over 300 unique threat actors, including nation-state actors, ransomware groups, and others leveraging large language models (LLMs) to attack digital infrastructure. To counter these tactics, techniques, and procedures, Microsoft is building technologies to continuously track malicious uses of LLMs and create countermeasures for the following actions and beyond: 

  • LLM-informed reconnaissance
  • LLM-enhanced scripting techniques 
  • LLM-aided development
  • LLM-supported social engineering
  • LLM-assisted vulnerability research
  • LLM-optimized payload crafting
  • LLM-enhanced anomaly detection evasion
  • LLM-directed security feature bypass
  • LLM-advised resource development


In response to the forecasted increase in threats and bad actors leveraging AI technology, the White House addressed the risks and dangers of artificial intelligence by meeting with seven leading companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to jointly commit to best practices that ensure product safety from the earliest conceptual stages through final launch. This includes internal and external security testing before release, with companies sharing the information across the industry to give public- and private-sector professionals the needed levels of awareness. Companies must also do their due diligence and research to ensure the safety of consumers.

Demand for More Trained Professionals and Solutions

Deepfakes, misinformation campaigns, and other forms of manipulated identities will proliferate, driven not only by the accessibility and capability of this software but also by the cybersecurity skills gap. At ThriveDX it’s our mission to train and prepare members of the workforce to take on the growing threat of cyberattacks from malicious actors. 2024 will prove to be a busy year for cybersecurity, but also one capable of solving a myriad of challenges through collaboration. For more information on how ThriveDX trains and prepares learners for digital transformation, we invite you to visit our website.
