The phrase “a picture is worth 1,000 words” is now an understatement. One of the major leaps of 2022 was a radical transformation in photography: a new trend has social media users oversaturating your feed with AI-generated selfie portraits.
The term “Magic Avatar”, popularized by apps such as Lensa, has become a new source of inspiration for many users. To others, it is an invasive species posing threats to privacy, intellectual property, and the way we see ourselves. What does it mean for users, for enterprises, and for the future?
How AI Uses Your Data: Consumer Privacy and Enterprise Data
Aside from reimagining and reassembling our facial data into posters of Vikings, space travelers, superheroes, and anime characters, AI is redefining its role in writing, studying, organization, and beyond. The steady advancement of algorithms and machine learning has matured from a bicycle on training wheels to a particle accelerator on autopilot. Combined with over a decade of filtering and masking our selfies and daily adventures, this new chemistry creates a new set of challenges for creators, entrepreneurs, everyday users, and enterprises.
How AI Makes Art — Where the Magic Avatars Happen
In 2022, a new image synthesis model known as Stable Diffusion was released. Originally created to generate lifelike, realistic images from text prompts, the process trains the AI on a subject and then, through its own form of “photoshopping,” renders that subject over and over until it can be realistically portrayed and presented as if crafted by the hands of one hundred artists.
For the more varied and detailed models, this process takes hours: not to generate a single image, but to work through trial and error across approaches and imperfections until the highest-scoring result can be served to the user.
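As a minimal sketch of what such a text-to-image pipeline looks like in practice, the open-source diffusers library can run Stable Diffusion in a few lines. The checkpoint name and prompt below are only examples, not what Lensa itself uses.

```python
# Minimal text-to-image sketch using the open-source diffusers library.
# The model checkpoint and prompt are illustrative; a GPU is assumed for speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The pipeline iteratively denoises random latents, guided by the text prompt.
image = pipe(
    "portrait of a viking warrior, digital painting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("avatar.png")
```

Each of those inference steps is one round of the trial-and-refinement process described above, which is why richer models and higher step counts take so much longer to produce a finished avatar.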
How this technology is built, and how it is used, raises real questions of ethics, privacy, and safety.
Ethics of AI-Generated Artwork
To build these image synthesis models, images are scraped from large image repositories. Some of these can be searched: the website Have I Been Trained? looks into the training data behind image models, including popular datasets such as LAION-5B used for Stable Diffusion and Google’s Imagen, to see whether any of your images were fed into the system.
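For the curious, the same LAION indexes can also be queried programmatically with the open-source clip-retrieval client. This is only a sketch: the service URL and index name below are assumptions based on the project’s public demo and may change over time.

```python
# Illustrative sketch: query a public LAION index for images matching a text
# description, using the open-source clip-retrieval client.
# The endpoint URL and index name are assumptions and may have changed.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # assumed public demo endpoint
    indice_name="laion5B-L-14",              # assumed LAION-5B index name
    num_images=10,
)

# Each result typically includes the source URL, caption, and similarity score.
for result in client.query(text="portrait photo of a person in viking armor"):
    print(result.get("url"), result.get("caption"))
```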
These repositories contain large volumes of original artwork pulled from popular websites such as DeviantArt, Pinterest, and Getty Images. This is done without permission or consultation, which has led to growing accusations of intellectual property theft from artists. Amid the controversy, some art communities and galleries have opted to ban AI-generated artwork. In that sense, the practice is seen not as innovation, but as intrusion.
Potential Dangers of Artificial Intelligence Art and Photography
Aside from studying photography and art from large volumes of scraped images, our own data is involved too. Looking at Lensa as an example, its privacy policy states that upon submission of the images you “make magic with”, you grant an “irrevocable, non-exclusive, royalty-free, worldwide, fully paid-up, transferable, sublicensable license to use, reproduce, modify, adapt, translate, create derivative works from, and transfer your user content, without any further compensation to you and always subject to your further explicit consent to such use where required by applicable law.”
While Lensa states that the images are stored in the cloud and deleted within 24 hours, you have agreed to terms that give Lensa consent to use the images as it sees fit.
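Users cannot undo that license after the fact, but they can at least limit what travels with an uploaded photo. Below is a minimal sketch, assuming the Pillow imaging library, that copies only the pixel data into a new file and drops metadata such as GPS coordinates and device identifiers; the file names are placeholders.

```python
# Strip EXIF and other metadata (GPS, device info) from a photo before upload.
# A small sketch using Pillow; file paths are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        # Copy only the raw pixel data into a fresh image, leaving metadata behind.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("selfie.jpg", "selfie_clean.jpg")
```

This does nothing about the license you grant to the image itself, but it keeps location and device details out of whatever the app stores or shares.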
Consider also where training images come from: in recent cases, private medical photos were discovered to have been scraped and used to train AI. Medical history and other sensitive information could go even further toward painting an accurate portrait of someone, crossing more dangerous boundaries in cases of stolen or mistaken identity.
With deep AI learning of a subject, images and content can be created and manipulated in realistic detail to show someone committing crimes or other acts they never performed. This deepens the threat that deepfakes and related technologies will be used to create false evidence.
Best Business Practices for AI and Cybersecurity
It’s critical for businesses to understand the importance of protecting their data not only from breaches and human error, but also from machine learning. If a machine is being trained and composing algorithms with data that is sensitive to a business, that data is as good as exposed. Furthermore, the quality of training data can be compromised by outside parties, causing the model to become biased or to produce false positives. Businesses that integrate with these AI applications should also implement best practices to avoid the common ways hackers exploit vulnerabilities in their code.
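As one simplified illustration of such a practice, sensitive identifiers can be redacted before any business data is sent to a third-party model or API. The patterns below are illustrative assumptions, not a complete PII solution.

```python
# Illustrative sketch: redact obvious identifiers before sending text to any
# third-party AI service. The regex patterns are simplistic examples, not a
# production-grade PII filter.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{2}[-.\s]?\d{4}\b"), "[SSN]"),  # US SSN-like numbers
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[CARD]"),           # card-like digit runs
]

def redact(text: str) -> str:
    # Apply each pattern in turn, replacing matches with a placeholder tag.
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

record = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(record))  # -> "Contact [EMAIL], card [CARD]."
```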
The increased capability of AI requires a more sophisticated approach to protecting data not only from the wrong hands, but from the wrong influences. Businesses should be aware of AI-based vulnerability management to keep the company and its employees safe, and should have the right plans and practices in place to respond should an incident arise.
Whether for research, recreation, or making your own magic avatars, what appears as a single image is the work of several algorithms and machines producing renderings in split seconds. It’s important for individuals and businesses to understand how their data is kept private, and what data is used to train machine learning models. Ignorance of machine learning algorithms creates vulnerabilities for users, and without the right practices in place, the risks extend further.
Conclusion
Artificial intelligence will continue to expand further into the creation of art, content, gaming, and other recreational uses. It’s important to understand the trend and the risks of participating as a user or a business. With growing questions about the ethics and practices of the technology, solutions and protections are no longer a distant concern.
Until we learn more about the consequences of these trendy new apps and what can really happen, we can mitigate certain risks, such as an enterprise’s data being leaked, through security awareness training. For future AI products that integrate with enterprise systems via APIs, we can also control some of the security vulnerabilities in the code.
To learn more about how ThriveDX can upskill you with the latest innovations for a cybersecurity role or how TDX can protect your enterprise, visit https://thrivedx.com/.