
Has AI Increased Cyberattacks?



Artificial Intelligence has taken the world by storm. It has changed our lives for the better by automating many of our repetitive, mundane tasks, and it has changed the way we conduct business. The last three to four years have shown us what technology is capable of: not even a pandemic could stop us from working, and all of this was made possible by advancements in technology.


Some of the recent advancements in tech are Artificial Intelligence (AI), the Internet of Things (IoT), and Machine Learning (ML). It is estimated that around 77% of devices today use AI in some shape or form. And with more companies integrating AI into their businesses, we are becoming increasingly vulnerable to data privacy breaches and cyberattacks.

Generative AI is all the rage, and ChatGPT and Bard in particular have been in the spotlight for a while now. It is impressive that AIs like ChatGPT and Bard can generate rather creative responses in a matter of seconds: writing emails and blog posts, making diet plans and learning plans, or generating images to your liking. Even though humans will always surpass AI in terms of creativity, AI can be a good ally if used correctly and mindfully.


How exactly is Generative AI used for hacking?


As with every good thing in the world, there are bad actors who misuse it for malicious activities. The data and prompts that users enter into ChatGPT have also been leaked in the past; big companies like Samsung have been on the receiving end of such leaks. Incidents like these carry huge repercussions, with the company's stock dipping in an already declining economy.


Here are some common types of AI cyberattacks:


  • AI Model Theft

AI model theft is the reverse engineering or hijacking of an AI model. Once a model has been trained and deployed on vulnerable hardware or cloud infrastructure, cybercriminals can gain access to it and reverse engineer it.


  • Adversarial Samples

Adversarial samples are inputs with small, deliberately crafted changes that cause a model to make mistakes, leading it to malfunction and produce wrong classifications. For a better understanding, consider the example of ChatGPT: we know that ChatGPT is trained not to answer questions like “how to make atomic bombs,” but when a user reframes the question as a hypothetical or fictional situation, the AI may give them the answer.
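The idea can be illustrated with a deliberately tiny sketch. The linear "model," its weights, and the inputs below are all invented for demonstration; real adversarial attacks perturb things like image pixels against a neural network, but the principle is the same: a small nudge in the right direction flips the decision.

```python
def predict(weights, x):
    """Classify as 1 if the weighted sum is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def adversarial(weights, x, eps):
    """FGSM-style perturbation: nudge each feature by eps against
    the sign of its weight, pushing the score across the boundary."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.9, -0.4, 0.3]
x = [0.5, 0.2, 0.1]              # a legitimate input, classified as 1
print(predict(weights, x))        # 1

x_adv = adversarial(weights, x, eps=0.3)
print(predict(weights, x_adv))    # 0 -- a small perturbation flips the label
```

The attacker never touches the model itself; they only craft the input, which is what makes this class of attack hard to defend against.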


  • Training Data Poisoning

Training data poisoning refers to the manipulation of the data that AI models are trained on. Once a cybercriminal gains unauthorized access, they can alter the training data or inject manipulated records, changing the behavior of the AI model as a whole. Unlike an adversarial attack, which targets a model that has already been trained, data poisoning focuses on manipulating the training data that is fed to the model.
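A toy sketch of how poisoning works in practice, assuming the simplest possible setup: a nearest-centroid spam filter trained on (value, label) pairs. The data, labels, and model here are all invented for illustration only.

```python
def centroid(points):
    return sum(points) / len(points)

def train(samples):
    """Learn one centroid per class from (value, label) pairs."""
    spam = [v for v, label in samples if label == "spam"]
    ham = [v for v, label in samples if label == "ham"]
    return centroid(spam), centroid(ham)

def classify(model, value):
    spam_c, ham_c = model
    return "spam" if abs(value - spam_c) < abs(value - ham_c) else "ham"

clean = [(9.0, "spam"), (8.0, "spam"), (1.0, "ham"), (2.0, "ham")]
print(classify(train(clean), 8.5))     # "spam" -- correctly flagged

# An attacker with write access to the training set flips a few labels:
poisoned = [(9.0, "ham"), (8.0, "ham"), (1.0, "ham"), (2.0, "ham"), (1.5, "spam")]
print(classify(train(poisoned), 8.5))  # "ham" -- real spam now slips through
```

Flipping just a handful of labels was enough to invert the decision, which is why the integrity of training pipelines matters as much as the model itself.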


  • Writing phishing emails

Putting generative AI to bad use, cybercriminals can generate convincing phishing emails, especially when they don't speak the target's local language. Since AI excels at creating personalized responses and emails, these scam emails are becoming harder to distinguish from legitimate ones, and such scams are conducted at large scale within a very small time window.


  • Cracking CAPTCHAs & Passwords

CAPTCHAs, which sites use to distinguish bots from humans and safeguard against unauthorized automated access, can be bypassed by hackers. Using machine learning algorithms, attackers can not only crack CAPTCHAs but also guess passwords.


  • Preventing ML from Detecting Attacks


ML-based security systems flag suspicious activity for review, but attackers can deliberately trigger a flood of false positives. If a security system, and the analysts behind it, is overwhelmed with false alarms, a cybercriminal can cause havoc and take it by surprise with a real cyberattack.
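The mechanism is easy to see with a toy queue. Assume, purely for illustration, that analysts can triage only a fixed number of alerts per window; the alert strings and capacity below are invented numbers.

```python
TRIAGE_CAPACITY = 10

def triaged(alerts, capacity=TRIAGE_CAPACITY):
    """Only the first `capacity` alerts in a window get a human look;
    everything beyond that is effectively dropped."""
    return alerts[:capacity]

quiet_day = ["noise"] * 3 + ["REAL-ATTACK"] + ["noise"] * 2
flooded = ["noise"] * 50 + ["REAL-ATTACK"]   # attacker front-loads junk alerts

print("REAL-ATTACK" in triaged(quiet_day))   # True  -- attack gets reviewed
print("REAL-ATTACK" in triaged(flooded))     # False -- buried past capacity
```

On a quiet day the real attack is reviewed; under a flood of fabricated noise it falls past the triage capacity and is never seen.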


How to combat Generative AI Threats?


As technologies develop, cyberattacks are also becoming more sophisticated and advanced, rendering older security systems obsolete. With the ability to learn from previous attacks and data, AI is without a doubt an invaluable tool for both cybercriminals and defenders.


AI vs. AI is what it all comes down to, and it makes the most sense right now. With many businesses taking advantage of AI, their privacy and security should not be compromised; otherwise, they are vulnerable to huge losses in both data and stock price.


Whether it will come down to AI vs. AI is hard to say, but human intervention is still required to make sound decisions. Even so, it is becoming clearer that AI and ML will play an increasing role in the coming years, and it looks like we will need tougher encryption systems to safeguard data.



(Author: Karishma Chandnani is a Content Writing Intern at TechDoQuest. She offers a refreshing and youthful perspective on the topics she writes about.)
