The evolution from human to bot attacks
Over the last several years of my career in cyber security, I have been fortunate to work with professionals who researched and developed new detection and prevention solutions to block high-end cyber attacks. Initially, these attacks were driven by humans; later, by sophisticated bad bots. I felt I’d seen it all, or so I thought…
In my current position at Imperva’s Innovation Office, our team was required to make a drastic shift in mindset. Instead of incubating new cyber defenses for today’s threats, we were tasked with analyzing and researching trends beyond the current cyber security landscape to predict, and prepare for, tomorrow’s threats.
Today, most bad bots mask themselves and attempt to interact with applications the same way a legitimate user would, making them harder to detect and block. Bad bots serve a wide range of malicious operators: competitors who operate in a legal gray area, attackers aiming to profit, and even hostile governments. There are many types of bot attacks; most involve high volumes of traffic, while others run at lower volumes and are designed to target specific audiences.
Bad bots: what do they do?
Bad bots are software applications that run automated tasks with malicious intent. They are programmed and controlled to perform activities such as web scraping, competitive data mining, personal and financial data harvesting, digital asset theft, brute-force login, digital ad fraud, denial of service (DoS), denial of inventory, spam, transaction fraud, and more.
In this post, we will focus on how bad bots can evolve to carry out criminal behavior, for example, behaviorally crafted attacks specifically intended to facilitate competitive data mining, personal and financial data harvesting, transaction fraud, and theft of digital assets.
How bad bots are hurting businesses today
Here are some examples of how bad bots are utilized today to damage businesses:
Price Scraping – Competitors scrape your prices to beat you in the marketplace. You lose business because your competitor wins the SEO search on price, and the lifetime value of your customers declines.
Content Scraping – Proprietary content is your business. When others steal your content, they act as parasites, robbing you of your efforts. Duplicate content also damages your SEO rankings.
Account Takeover – Bad actors test stolen credentials on your site. If they succeed, the ramifications are account lockouts, financial fraud, and increased customer complaints that erode customer loyalty and future revenues (a minimal detection heuristic is sketched after this list).
Account Creation – Cyber criminals leverage free accounts to spam messages or amplify propaganda, and they exploit any new-account promotion credits (e.g., money, points, free plays).
Credit Card Fraud – Criminals test credit card numbers to identify missing data (e.g., expiry date, CVV). This damages the fraud score of the business and drives up customer service costs to process fraudulent chargebacks.
Gift Card Balance Checking – Fraudsters steal money from gift cards that still carry a balance. This damages your reputation with customers and costs you future sales.
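Defenses against several of the attacks above begin with simple traffic heuristics. As a minimal illustration of the account takeover case, not Imperva’s product logic, and with hypothetical thresholds, the sketch below flags source IPs whose failed-login pattern looks like credential stuffing rather than a human forgetting a password:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical thresholds: a human rarely fails 10+ logins
# across many distinct accounts within one minute.
WINDOW = timedelta(minutes=1)
MAX_FAILURES = 10
MAX_ACCOUNTS = 5

failures = defaultdict(deque)  # ip -> deque of (timestamp, account)

def record_failed_login(ip: str, account: str, ts: datetime) -> bool:
    """Record a failed login; return True if the IP looks like a bot."""
    events = failures[ip]
    events.append((ts, account))
    # Drop events that fell out of the sliding window.
    while events and ts - events[0][0] > WINDOW:
        events.popleft()
    distinct_accounts = {acct for _, acct in events}
    # Many failures spread across many accounts = credential stuffing.
    return len(events) >= MAX_FAILURES and len(distinct_accounts) >= MAX_ACCOUNTS

# Example: one IP hammering twelve different accounts in a few seconds.
now = datetime.utcnow()
for i in range(12):
    suspicious = record_failed_login("203.0.113.7", f"user{i}@example.com",
                                     now + timedelta(seconds=i))
print("flag as bot:", suspicious)  # True
```

Real-world detection layers many more signals (device fingerprints, header anomalies, behavioral biometrics), but rate-and-spread heuristics like this remain a common first line.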
For a comprehensive accounting of how bad bots hurt businesses, download the 2022 Imperva Bad Bot Report.
Where can bad bots go from here?
The progress made in machine learning (ML) and artificial intelligence (AI) is remarkable; when used for good purposes, these technologies have proven indispensable in improving our lives in many ways.
In recent years, chatbots have gained significant momentum in consumer-facing activities such as sales, customer service, and relationship management. Unfortunately, bad bot operators are rapidly adopting the same technologies to develop malicious campaigns that incorporate machine intelligence in ways never seen before. Advanced chatbot AI brings psychological, behavioral, and social engineering factors into play: a bad AI bot can learn and mimic the target user’s language and behavioral patterns, then exploit that familiarity to gain blind trust for its malicious requests.
Malicious operators, inspired by what legitimate corporations have built, are adopting these technologies, abusing them, and demonstrating the harm they can cause.
One notable example of this is Tay, a bot created by Microsoft. Tay was designed to mimic the language patterns of a teenage American girl and to learn from interacting with human users of Twitter.
Natural Language Processing (NLP), a machine learning technology, was the foundation of Tay. It was among the first bots to learn from the text, data, and social patterns offered during interactions and then respond with adapted language of its own. That means a bad bot can now adapt to the text or voice data, and to the social and behavioral patterns, of the victim with whom it communicates.
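To see how easily language patterns can be learned and echoed, consider the toy sketch below. It trains a word-level Markov chain on a few invented sample messages and generates text in the same style. Real “mindful” bots use far more capable models, but the principle, learn the target’s patterns, then reproduce them, is the same (the corpus and seed word here are purely illustrative):

```python
import random
from collections import defaultdict

# Toy corpus standing in for a target's harvested messages.
corpus = (
    "hey can you send me the report today "
    "hey can you approve the invoice today "
    "can you send me the login details today"
)

# Build a word-level Markov chain: word -> possible next words.
chain = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    chain[current].append(nxt)

def mimic(seed: str, length: int = 8) -> str:
    """Generate text that statistically resembles the training corpus."""
    out = [seed]
    for _ in range(length - 1):
        candidates = chain.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

random.seed(42)
print(mimic("hey"))  # e.g. "hey can you send me the login details"
```

A few dozen lines produce text that sounds like its training data; scale the same idea up to a modern language model trained on a victim’s message history and the impersonation becomes far more convincing.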
In the case of Tay, some users on Twitter began tweeting politically incorrect phrases, teaching it inflammatory messages built around common internet themes. As a result, Tay began posting racist and sexually offensive messages in response to other users’ tweets.
How AI makes a bot malicious
Denial of service (DoS)
Malicious operators can train AI/ML to learn the language patterns of specific audiences and then mass-message an organization’s resources. Whether those resources are human or digital, the flood of plausible messages can confuse or overwhelm customer-facing services.
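A first-line defense against this kind of message flooding is to look for bursts of near-duplicate text that a genuine customer population would never produce. The sketch below, a simplified heuristic with an invented similarity cutoff, not a production detector, shingles each incoming message into word trigrams and flags it when its Jaccard similarity to a recent message is suspiciously high:

```python
from collections import deque

RECENT = deque(maxlen=100)   # shingle sets of the last 100 messages
SIMILARITY_THRESHOLD = 0.7   # hypothetical cutoff for "near-duplicate"

def shingles(text: str, n: int = 3) -> set:
    """Split a message into overlapping word n-grams."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_like_flood(message: str) -> bool:
    """Flag a message that is nearly identical to a recent one."""
    current = shingles(message)
    flagged = False
    for past in RECENT:
        union = current | past
        if union and len(current & past) / len(union) >= SIMILARITY_THRESHOLD:
            flagged = True
            break
    RECENT.append(current)
    return flagged

print(looks_like_flood("my order never arrived please refund my card now"))    # False
print(looks_like_flood("my order never arrived please refund my card today"))  # True
```

Bots that paraphrase with NLP will evade exact-duplicate checks, which is why similarity measures over shingles, rather than string equality, are the minimum viable baseline.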
Corporate and brand reputation sabotage
During various election seasons, national security bureaus and social application providers have identified networks of human-seeming chatbots with crafted online identities that spread false claims about candidates ahead of the vote. With enough chatbots running “mindful” AI behind them, even more advanced techniques can be used to effectively trash competitors and brands.
Coupon guessing and scraping
Criminals in the business of harvesting affiliate commissions use bad bots to guess or scrape marketing coupons from legitimate marketing affiliates. These bots hit websites en masse, degrading their performance, and abuse the campaigns for which the coupons were intended. NLP can be used to guess coupon codes, especially when they are event-related or carry a textual pattern that a “mindful” NLP model can predict.
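One practical countermeasure is to stop issuing predictable codes in the first place. The sketch below, illustrative only, with an invented code pattern and keyspace estimate, contrasts the guessability of an event-derived coupon like “SUMMER25” with a randomly generated one:

```python
import math
import secrets
import string

def random_coupon(length: int = 10) -> str:
    """Generate a coupon from a cryptographically secure RNG."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def keyspace_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for uniformly random codes."""
    return length * math.log2(alphabet_size)

# A pattern like "SUMMER25" is event word + 2 digits: a model that knows
# the campaign vocabulary only has to try ~100 suffixes per event word.
guessable_keyspace = 20 * 100  # ~20 plausible event words x digits 00-99
print(f"guessable code keyspace: {guessable_keyspace} "
      f"(~{math.log2(guessable_keyspace):.0f} bits)")

# A 10-character random code over 36 symbols is effectively unguessable.
print(f"random code: {random_coupon()} ({keyspace_bits(36, 10):.0f} bits)")
```

Roughly 11 bits of effective entropy versus about 52: the textual pattern is exactly what gives a “mindful” guesser its edge, and removing the pattern removes the edge.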
A hostile takeover of legitimate chatbots
In June 2018, Ticketmaster suffered a security breach caused by a malicious modification of its chatbot customer support service (supplied by Inbenta). Names, addresses, email addresses, telephone numbers, payment details, and Ticketmaster login details of some 40,000 customers were accessed and stolen.
Now imagine what hijacked “legitimate” bots like these could do next.
Impersonation
Tinder, a dating app with approximately five million daily users, has warned that the service has been “invaded by bots” posing as humans. These bots are usually programmed to impersonate women and ask victims for their payment card information under a variety of pretexts.
Publicly known attacks like these can inspire malicious operators to go to the next level: interacting with corporate users as well as consumers via email, other messaging applications, or even unsanctioned social applications (shadow IT) to establish relationships that build trust and then extract valuable assets that can be exploited.
Gaming fraud
Gaming bots are used by cheaters to gain unfair competitive advantages in multiplayer games. There are many types of gaming bots built for cheating, such as farming bots, pre-recorded macros, and the most common example, the “aimbot”, which automatically aims a player’s weapon in a shooting game.
In some instances, these bots are used to gain profit. In 2019, it was estimated that the gaming industry lost around $29 billion in revenue to cheats.
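Anti-cheat systems often attack this statistically: human aim has measurable limits on reaction time, while an aimbot “snaps” to targets almost instantly. The sketch below, a simplified heuristic with invented thresholds, not any vendor’s actual anti-cheat, flags a player whose recorded target-acquisition times are consistently faster than human reflexes allow:

```python
from statistics import median

# Hypothetical threshold: median human reaction time to a visual
# stimulus is roughly 200-250 ms; sustained sub-100 ms acquisition
# across many engagements is a strong aimbot signal.
HUMANLY_POSSIBLE_MS = 100

def looks_like_aimbot(acquisition_times_ms: list) -> bool:
    """Flag a player whose typical time-to-target beats human reflexes."""
    if len(acquisition_times_ms) < 20:  # need a sample, not one lucky flick
        return False
    return median(acquisition_times_ms) < HUMANLY_POSSIBLE_MS

human = [230, 310, 280, 190, 260] * 4   # 20 plausible engagements
bot = [35, 42, 38, 40, 36] * 4          # 20 near-instant snaps
print(looks_like_aimbot(human))  # False
print(looks_like_aimbot(bot))    # True
```

Using the median over a minimum sample size keeps one lucky flick shot from triggering a false positive, the same trade-off every behavioral bot detector has to make.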
Conclusion
Cyber security is on the verge of a major shift in its challenges; meeting it may require developing the ability to successfully mitigate cyber threats driven by “mindful” bad bots. Cyber security vendors will need to design new detection and mitigation technologies, because identifying and classifying the reputation, text patterns, and intent of attackers is simply not good enough anymore. As malicious operators adopt new NLP technologies that enable personalized, trust-based communication, security vendors must act too, and sooner is better.
Machines are about to interact with victims and gain their trust by abusing the victims’ own language style and social and behavioral patterns, as well as those of their colleagues and peers. It is reasonable to predict that a new generation of “mindful” NLP technologies will be used in ever more sophisticated ways to gain profit and cause harm.
Note: this article focuses on users targeted by the malicious interactions of “mindful” NLP bad bots. The same principles can be re-applied in a different context: applications and their APIs, and how they can be abused by “mindful” machine language processing (MLP) bad bots.
Try Imperva for Free
Protect your business for 30 days on Imperva.