Bot fraud covers all types of online fraud that can be perpetrated or assisted by malicious bots.

Bot fraud has become increasingly common as more cybercriminals use bots to carry out various forms of cyber attack: scams run by scam bots, malicious vulnerability scanning, account takeover, DDoS attacks, SQL injection, data breaches, and so on.

The giant Chinese e-commerce platform Taobao (owned by Alibaba) was hit by a bot-driven brute force attack between October and November 2015, compromising a whopping 20 million active user accounts. As you can see, even a single bot-driven attack can have a massive impact.

It’s also crucial to understand that malicious bots are no longer an issue exclusive to giant enterprises and tech companies. Smaller businesses and individuals can be targeted too.

Protecting yourself from bot fraud and scam bots, as well as other types of bot-driven attacks, is a necessity in today’s digital age, and in this post, we will learn how.


What are bots?

Bots—specifically internet bots—are programs designed to automatically perform specific tasks and operate over the internet. The term “bot traffic” refers to traffic coming from automated programs (aka “automated traffic”) to a website, application, or API. It is believed that bots account for over 40% of all internet traffic.

These bots are typically programmed to perform relatively simple but repetitive tasks, and a key reason why we use them is that they can execute these tasks at a much faster rate than a human user ever could. A relatively fast human typist can type around 75 words per minute. A bot? There are Python bots that can “type” over 7,000 words in a single minute.
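To illustrate the speed gap, here is a minimal Python sketch (the word-generation logic is purely illustrative) that “types” 7,000 words programmatically and times how long it takes:

```python
import time

def generate_words(n):
    """Simulate a bot 'typing' n words by generating them programmatically."""
    return " ".join(f"word{i}" for i in range(n))

start = time.perf_counter()
text = generate_words(7000)  # a human typing 75 wpm would need over 90 minutes
elapsed = time.perf_counter() - start

print(f"Generated {len(text.split())} words in {elapsed:.4f} seconds")
```

On any modern machine this completes in a small fraction of a second, which is the whole point: automation turns a 90-minute human task into an instant one.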

While the term bot is more often seen in a bad light, in truth, an internet bot is a neutral tool and isn’t inherently good or bad. 

There are bots owned and operated by reputable companies, such as Google or Facebook, that are actually beneficial to the websites and applications they are on. In fact, the presence of these “good bots” is a part of why defending against bot fraud can be extremely challenging (as we will discuss further below). 

It’s true, however, that there are malicious bots and scam bots owned by bad actors (hackers, fraudsters, cybercriminals), which are the focus of our bot fraud guide. 

What is bot fraud?

Bot fraud, or bot-driven fraud, is an umbrella term referring to all types of online fraud performed or assisted by malicious bots. The malicious bots dedicated to performing these bot fraud attempts are called scam bots.

The key benefit of using bots is that they are much faster than human users, and in practice, cybercriminals can use this advantage in three different areas of bot fraud: 

  • Preparatory activity for bot fraud, for example, performing rapid vulnerability scanning of a website.
  • Performing the main aspect of an automated fraud attack, for example, automated phishing, bot-driven account takeover attacks (brute force, credential stuffing), and other types of automated fraud attempts.
  • Assisting fraud attempts by evading the anti-fraud defenses implemented by the target, for example by mimicking human behavior to avoid anti-bot safety measures.

A scam bot can fully automate bot fraud by using advanced technologies like AI and machine learning to interact with online businesses (such as an e-commerce site) in a very human-like way, and a bot fraud attack is effective largely because of the speed and volume of attempts the bot can perform.

Only one attempt needs to succeed for the bot fraud to pay off, and even a simple, unsophisticated scam bot can be a serious threat to businesses and individuals.

Cybercriminals can perform bot fraud attacks in many different forms, with many different monetization schemes. Some attacks are easy to execute against low-value targets, so they rely on a high volume of attempts to drive enough profit. For example, scam bots can be used to send a massive volume of spam emails, blog comments, and social media posts. This type of bot fraud may have a very low success rate, but even if just one or two victims click on the fraudulent links, the attack can already be profitable.

Scam bots can also be used to perform attacks that won’t provide financial gains by themselves but will allow the bot operator to launch more severe subsequent attacks; in these cases, the bot fraud ‘only’ lays the foundation for the actual attack. A good example is using bots to attempt new account registrations on e-commerce websites, or to perform credential stuffing attacks. The attacker can then use the created or stolen accounts to perform the actual fraud in many different ways, as we will discuss below.

Different Types of Bot Fraud

As discussed, bot fraud can be direct (that is, it directly allows the bot operator to make financial gains) or indirect. Bot fraud is limited only by the attacker’s creativity and the available attack surface. However, here are some of the most common bot fraud scenarios:

1. Account Takeover (ATO)

In this type of bot fraud, the fraudster will use the malicious bots to gain unauthorized access to legitimate user accounts. Account takeover fraud comes in two basic forms: 

  • Credential Cracking: Also known as a brute force attack. Here, the fraudster programs the scam bot to guess the credentials of an account. The basic form of a brute force attack involves the bot trying every possible password combination (for a 4-digit numeric PIN, it tries everything from 0000 to 9999).
  • Credential Stuffing: In this type of account takeover attack, the fraudster already possesses a stolen credential or a list of stolen credentials (for example, purchased on the dark web), then uses the bot to test these credentials on many different websites. Credential stuffing leverages the fact that many of us reuse the same passwords across many different sites and services.

The objective of ATO fraud is to gain access to a legitimate user account and lock the real user out of it. This may not lead to direct monetization, but the fraudster can then use the account, or the information stored within it (such as credit card details), to commit other types of fraud.

However, sometimes ATO can lead to direct monetization. If, for example, the fraudster has gained access to an account of an e-commerce store, then they can make a purchase with the account right away and try to retrieve the goods for themselves. 
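As a defensive illustration, here is a minimal Python sketch of one common ATO mitigation: temporarily locking an account after repeated failed logins within a short window. The thresholds, function names, and in-memory storage are all illustrative assumptions, not a production design:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds -- tune per application
MAX_FAILURES = 5
WINDOW_SECONDS = 300

failed_attempts = defaultdict(deque)  # username -> timestamps of failed logins

def record_failure(username, now=None):
    """Record one failed login and drop failures outside the window."""
    now = time.time() if now is None else now
    attempts = failed_attempts[username]
    attempts.append(now)
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()

def is_locked(username, now=None):
    """An account is locked while it has too many recent failures."""
    now = time.time() if now is None else now
    recent = [t for t in failed_attempts[username] if now - t <= WINDOW_SECONDS]
    return len(recent) >= MAX_FAILURES

# A bot hammering one account trips the lock almost immediately
for _ in range(6):
    record_failure("alice", now=1000.0)
print(is_locked("alice", now=1000.0))  # True
```

A rate limit like this slows credential cracking considerably, though note that credential stuffing spread across many accounts and IP addresses needs additional signals to catch.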

2. SQL Injection 

Cybercriminals may use scam bots either to scan for SQL injection vulnerabilities or to target website form submissions, performing SQL injection to gain unauthorized access to the server-side database, post unwanted content, or even infect end users with malware.

When, for example, an attacker gains unauthorized access to a database through a successful SQL injection, they can access the sensitive data stored inside the database and then use this data (e.g., credit card information) to monetize the fraud.

Website forms are a very common target for bot fraud because they frequently expose JavaScript and SQL security vulnerabilities.
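To see why parameterized queries are the standard defense against form-based SQL injection, consider this minimal Python sketch using the standard-library sqlite3 module (the table and the malicious input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# UNSAFE: string interpolation lets a bot-submitted form value rewrite the query
malicious_input = "' OR '1'='1"
unsafe_query = f"SELECT * FROM users WHERE name = '{malicious_input}'"
unsafe_rows = conn.execute(unsafe_query).fetchall()
print(unsafe_rows)  # [('alice', 0)] -- the WHERE clause was bypassed

# SAFE: the placeholder treats the input strictly as data, never as SQL
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious_input,)
).fetchall()
print(safe_rows)  # [] -- no user is literally named that string
```

The injected `' OR '1'='1` turns the unsafe query’s condition into one that is always true, returning every row; the parameterized version never evaluates the input as SQL.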

3. Content Scraping

Fraudsters can also perform indirect fraud by using scam bots to perform scraping. 

Content scraping is the act of extracting data from websites or web applications using a scam bot (web crawler). Think of it as copy-pasting a website’s content, just performed rapidly by a bot.

Content scraping or web scraping, in general, is not illegal by itself and can be considered a gray area. In fact, Google, Bing, and other search engines technically perform web scraping on your website every day to index your content. 

However, scraped content can be used illegally, including to perform various types of bot fraud. Web scraper bots can scrape various types of content: text (blog posts), images, code (HTML, CSS), e-commerce product and pricing data, metadata, and so on. Fraudsters can then use this content in fraudulent ways, including:

  • Stealing HTML/CSS code to build a fake e-commerce site (with a similar layout and branding) to defraud your users, which is essentially a phishing scheme.
  • Collecting product pricing and/or inventory data and forwarding it to your competitors, so you lose your competitive advantage. This is common in industries where pricing is very sensitive, such as travel tickets and hotels.
  • Republishing your content on other websites, which may affect your SEO performance. In a worst-case scenario, your site may be penalized by Google for publishing duplicate content (if you haven’t tagged your content with appropriate canonical tags). In doing this, the fraudster also steals your site’s organic traffic.
  • Collecting customer information and/or contact information to sell to other businesses, or to target your customers with other scams.
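As a rough illustration of one scraping signal, the sketch below (the threshold, names, and log format are hypothetical) flags IP addresses whose per-minute request counts in an access log far exceed human browsing speed:

```python
from collections import defaultdict

REQUESTS_PER_MINUTE_LIMIT = 120  # illustrative threshold, not a standard value

def find_suspected_scrapers(request_log):
    """request_log: iterable of (ip, minute_bucket) pairs from access logs."""
    counts = defaultdict(int)
    for ip, minute in request_log:
        counts[(ip, minute)] += 1
    # Flag any IP that exceeded the per-minute limit in any minute
    return {ip for (ip, _), n in counts.items() if n > REQUESTS_PER_MINUTE_LIMIT}

# 500 hits in one minute from a single IP is far beyond human browsing speed
log = [("10.0.0.9", 0)] * 500 + [("192.168.1.5", 0)] * 12
print(find_suspected_scrapers(log))  # {'10.0.0.9'}
```

Note that sophisticated scrapers rotate IP addresses precisely to stay under such per-IP thresholds, which is why rate-based signals must be combined with the signature- and behavior-based checks discussed later.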

4. API Abuse

In another common form of bot fraud, scam bots leverage the fact that many APIs are now used to authorize access to sensitive data.

API stands for Application Programming Interface. In modern programming and especially for web applications, an API acts as a “bridge” between two or more software solutions (or applications). An API defines the kinds of requests an application can make when it communicates with another application, how the requests should be made, the data formats that should be used, and so on, so two applications can communicate and exchange data. 

With that being said, hackers can use scam bots to attack APIs in several different forms: 

  • APIs are used by organizations to provide access to sensitive data, so automated scam bots can be deployed to extract data from these APIs.
  • Hackers can perform DDoS (distributed denial of service) attacks by overloading the APIs with massive volumes of bot traffic. Once the system has been overloaded, the attacker can then hold the website/application owner for ransom. 
  • Scam bots can automatically send API calls to perform credential stuffing attacks by testing lists of stolen credentials. 

To prevent API abuse by bots, it’s crucial to continuously monitor and manage API calls coming from these scam bots, and also to implement adequate defensive infrastructure to prevent API access from sophisticated scam bots. 
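One common building block for managing API call volume is a token-bucket rate limiter. The minimal Python sketch below is illustrative only; the class name, rate, and capacity are assumptions, not any specific product’s API:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch for API endpoints."""

    def __init__(self, rate_per_second, capacity, start=0.0):
        self.rate = rate_per_second    # tokens refilled per second
        self.capacity = capacity       # maximum burst size
        self.tokens = capacity
        self.last = start

    def allow(self, now=None):
        """Consume one token if available; refuse the call otherwise."""
        now = time.monotonic() if now is None else now
        # Refill tokens based on elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A bot bursting 8 calls at the same instant: only the first 5 pass
bucket = TokenBucket(rate_per_second=2, capacity=5)
results = [bucket.allow(now=100.0) for _ in range(8)]
print(results)  # [True, True, True, True, True, False, False, False]
```

Per-key limits like this cap the damage from scraping and credential stuffing bursts, though a determined attacker spreading calls across many API keys or IPs still requires the detection techniques covered next.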

How to Protect Yourself From Bot Fraud

Essentially, protecting yourself and your digital assets from bot fraud perpetrated by scam bots involves three key areas:

  1. The ability to distinguish between bots and legitimate human users, and between scam bots (and malicious bots in general) and good bots.
  2. Taking appropriate action on scam bot activities and malicious requests. Blocking the bot activity is the ideal, most cost-effective response, but it’s not always possible, so case-by-case mitigation is required.
  3. Performing regular monitoring and surfacing actionable data to be used in your continuous bot management strategy.

Each, however, has its own challenges, as we will discuss one by one below:

Effectively Identifying Scam Bots: Key Challenges

In theory, protecting our system from malicious scam bots might seem quite easy: detect the presence of scam bots, block their activities, and voila! After all, with today’s technology, distinguishing these bots should be easy, right? 

Unfortunately, that’s not the case, and in identifying scam bots, we’ll always face two core challenges: 

  • Good Bots vs. Bad Bots: There are good bots that are going to be beneficial for your website and/or web application. Crawler bots from Google, Bing, and other search engines are crucial for ensuring your website gets ranked on these search engines, and you wouldn’t want to accidentally block these bots and lose their benefits. However, distinguishing a scam bot from beneficial good bots can be extremely difficult.

 

  • Bots vs. Human Users: Today’s highly sophisticated scam bots are getting better at mimicking human-like behaviors. Malicious bot programmers have adopted advanced technologies like AI and machine learning, so the scam bots can be even more effective in masking their identities, for example by performing non-linear mouse movements when interacting with the web application, rotating between hundreds of different IP addresses, and so on. 

These two issues alone can be extremely challenging, and a bot management solution can take one of three different approaches to tackle them: 

1. Challenge-Based Detection

In this approach, the bot detection solution will challenge incoming traffic with a test that is designed to be very easy to solve by human users but impossible to solve by automated programs/bots. CAPTCHA is a popular example of challenge-based detection.

Challenge-based detection used to be effective, but finding the right difficulty for the challenge was always tricky: if the challenge was too easy, bots could pass it, but too much difficulty negatively impacted the user experience. Today’s bots are sophisticated enough to account for 50% of passed reCAPTCHA challenges, and cybercriminals also use CAPTCHA farm services and other methods to bypass challenge-based detection, so it is not sufficient protection on its own. The latest CAPTCHA solutions avoid showing a challenge 99.99% of the time, and when they do, they weigh the challenge results among many other detection signals to identify bots.

2. Signature-Based Detection

The second approach is to detect the presence of scam bots based on known signatures. The basic principle of this method is to collect as many “fingerprints” as you can from the client’s requests, then analyze the consistency of these signatures and compare them to known fingerprints of scam bots.

The most basic type of signature-based detection is block-listing IP addresses that are known to be the source of malicious bots, but there are also various other types of signatures, including: 

  • Whether the client’s browser is running in a virtual machine (emulator)
  • Whether the OS the client claims to use is consistent with its actual behavior
  • The presence of headless (modified) browser signatures, like those of Nightmare, PhantomJS, and so on
  • The presence of properties that should, or should not, exist in the claimed browser
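A few of these signature checks can be sketched server-side in Python. The marker list, header names, and function below are illustrative assumptions; real engines combine hundreds of such signals:

```python
# Illustrative automation markers seen in some headless-browser User-Agents
KNOWN_AUTOMATION_MARKERS = ("HeadlessChrome", "PhantomJS", "Nightmare")

def signature_flags(headers):
    """Return a list of suspicious signature findings for one request."""
    flags = []
    ua = headers.get("User-Agent", "")
    if not ua:
        flags.append("missing User-Agent")
    if any(marker in ua for marker in KNOWN_AUTOMATION_MARKERS):
        flags.append("headless-browser user agent")
    # A real browser claiming Chrome normally sends an Accept-Language header
    if "Chrome" in ua and "Accept-Language" not in headers:
        flags.append("inconsistent header set for claimed browser")
    return flags

print(signature_flags({"User-Agent": "Mozilla/5.0 HeadlessChrome/119.0"}))
```

Each flag is only a weak signal on its own; it is the consistency check across many such signals that makes signature-based detection useful.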

While signature-based detection can indeed be effective, it has an obvious weakness: it can only detect scam bots with known signatures, so it’s not effective against brand-new scam bots and zero-day attacks. Additionally, blocking IP addresses is ineffective now that proxy IP addresses are so commonly used by bot developers.

Also, sophisticated bot developers can remove known signatures/attributes from their scam bots, rendering this approach ineffective. 

3. Behavior-Based Detection

As opposed to signature-based detection, behavior-based detection is performed by collecting clients’ behaviors when interacting with the website or application, then analyzing and comparing them to legitimate users’ behaviors. 

This approach is typically performed by AI and machine learning technology that has been “taught” using the data of legitimate human behaviors as a baseline or benchmark. 

Here are some behaviors monitored by solutions utilizing this technique: 

  • Mouse clicks (scam bots may exhibit distinctive patterns or clicking frequencies)
  • Mouse movements
  • Keypresses
  • Scroll speed and consistency
  • Total number of pages viewed per session
  • Total number of requests per session
  • Average dwell time per page

And so on. Well-trained, AI-powered behavioral bot detection software is effective not only at differentiating between scam bots and legitimate users but also between good bots and bad bots.

Managing Scam Bot Traffic: To Block or Not to Block

A bot manager can also help you control malicious bot activities. Bad bots can cause various negative impacts, from stealing and reposting your content elsewhere, to launching credential stuffing and brute force attacks, to mounting a full-scale DDoS attack.

But just as applying static rules to protect against bot traffic is not enough, a good bot manager should provide attack responses that are optimized for each different kind of threat.

Regular Monitoring and Management of Scam Bots

Not only can detecting and managing scam bot activity be a major challenge, as we’ve discussed above, but maintaining consistency in doing so can be an even bigger one. To achieve it, we need a way to continuously gather and analyze all web request data in real time, and a solution capable of doing so.

The DataDome bot protection solution blocks advanced bots before they reach your website, mobile app, or API. It deploys in minutes on any web architecture and runs on autopilot. You will receive real-time notifications whenever your site is under attack, but no intervention is required. Once you have created an allowlist of trusted partner bots, DataDome takes care of all unwanted traffic.

To protect against malicious vulnerability scanning, DataDome employs a two-layer bot detection engine based on artificial intelligence (AI) and machine learning. Our algorithm analyzes billions of daily events and continuously updates itself to pinpoint both known and zero-day threats.

Wrapping Up

E-commerce fraud prevention, and protecting yourself and your business from bot fraud perpetrated by scam bots, can be quite challenging. Fraudsters keep getting better at finding ways to exploit your system, network, and other digital assets, so if you don’t have a comprehensive strategy to defend against these scam bots, you are always at risk from many different types of bot fraud.

With that being said, you should be proactive in protecting your assets from bot fraud, and the key here is finding the right bot management solution capable of:

  1. Identifying scam bots and distinguishing them from legitimate human users and good, beneficial bots.
  2. Taking the right course of action to block or mitigate the scam bot traffic.
  3. Regularly and consistently monitoring requests to the website, mobile application, or API to maintain consistent protection.