Artificial intelligence (AI) has entered the chat in almost every industry in 2023. From art and creativity to cybersecurity, data protection and the ethical use of technology, AI is taking society by storm, with no signs of stopping anytime soon.
On the other hand, AI is predicted to create challenges for many roles that could become obsolete because of this new technology. Answering these challenges means pioneering a new path toward a safer and more ethical ecosystem. The benefits of new technologies create the potential for a greater quality of life, and we could see an acceleration of new opportunities and jobs in virtually every related industry. The year 2023 marks a turning point in how we apply this technology.
ChatGPT is the fastest-growing app of all time. Years after the explosive growth of TikTok, the chatbot developed by OpenAI is now officially the fastest-growing consumer app, reaching 100 million users in only two months; Instagram took roughly two and a half years to hit the same milestone. Earlier this year, Google also launched its own AI technology, Google Bard, a conversational AI service powered by Google’s Language Model for Dialogue Applications (LaMDA).
Generative AI tools like ChatGPT, Bard and other forms of language modeling could present various threats to how information is generated, organized and made accessible by search engines like Google.
Addressing The Risks Of Language Models
The pace of advancement in AI is doubling every six to 10 months. To address this growth, it’s important for companies to uphold the highest standards of science, privacy and responsible practices within their corner of the industry. While these tools bring greater capability to advance humanitarian, social and environmental causes, malicious actors may also grow empowered, using them to amplify misinformation and manipulation and to produce more convincing false impersonations.
Microsoft has doubled down on the need for research and development, committing $20 billion to cybersecurity, expanding its partnerships with federal, state and local governments, and providing educational programs that offer cybersecurity training.
Negative Applications Of New Technology
A study recently published by Georgetown University’s Center for Security and Emerging Technology, Stanford Internet Observatory, and OpenAI (the creators of ChatGPT themselves) notes: “There are also possible negative applications of generative language models, or ‘language models’ for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor.”
In these situations, there are three elements at play: the malicious actors and the motives behind a campaign, the behaviors used to deceive and disingenuously persuade the audience, and the content the machines themselves generate.
We’ve already seen several tech companies, both small and large, introduce AI tools with unpredictable and sometimes disastrous results, depending on how the chatbots were influenced.
There will always be unintended consequences with new technologies, but in the hands of malicious actors, AI could be weaponized in all the wrong ways. This could go beyond influence and misinformation to social engineering, cyberattacks and more advanced phishing attempts, with serious compromises of privacy ahead.
Many believe such attacks are inevitable. A recent survey by BlackBerry revealed that “51% of IT decision makers believe there will be a successful cyberattack credited to ChatGPT within the year.”
Closing The Skill Gap In Cybersecurity And Equipping Companies For The Future
These AI tools will likely grow not just in capability but in ease of use, with more hands at play engineering these language models. What practices or applications will be used to deter bad actors and keep misinformation from harming society and individuals?
I think the best way to improve cybersecurity and meet the growing needs of online businesses is to train and upskill a stronger workforce in technology and cybersecurity. At the end of 2022, the World Economic Forum noted a cyber workforce gap of 3.4 million people, meaning in-demand jobs are going unfilled due to a lack of training and expertise, and that gap grew by 26.2% from 2021 to 2022.
It is critical to train new employees and close the skill gap, as falling further behind the growth of artificial intelligence could increase risks and costs for businesses already suffering from existing vulnerabilities. By investing in our talent and advancing the digital transformation of the workforce, we can better equip ourselves against every imaginable cyberthreat.
Business leaders can improve their cybersecurity posture by adopting a security-centric culture, continuously rolling out security awareness training and developing an effective cybersecurity plan that tells the workforce where the company stands on data protection and security. It’s important to have the support and buy-in of managers, executives and C-suite leaders to ensure your cyber business needs are properly met.
When hiring, I suggest organizations focus less on the standard four-year degree and more on the mix of skills, experience and certifications professionals have obtained to stay relevant in the cybersecurity field. Many skills from other industries transfer readily to those needed by a successful cyber professional. Prioritize upskilling and reskilling your professionals on an ongoing basis; cyberattacks continue to grow more sophisticated, and staying ahead of that curve is essential to keeping your company secure.
In time, we will learn more about the risks and about the feasibility of creating a more ethical environment for artificial intelligence. This is a diverse and dynamic new frontier, ripe with new challenges and unique consequences on the road ahead. I think the businesses that cultivate new talent and create opportunities in the most underserved areas will gain the advantage as the next wave of innovations comes about.
Dan Vigdor is the co-founder of ThriveDX, a global leader in providing cybersecurity training to upskill and reskill learners.