Human-Like Bots

AI and machine learning have progressed significantly over the years. Unfortunately, their applications are not limited to things that benefit humanity. Cybercriminals have also learned how to take advantage of these tech marvels. The rise of human-like bots, in particular, makes cybersecurity more challenging.

Human-like bots are now being employed to perform automated online fraud. According to the Big Bad Bot Problem 2020 report, 62.7% of bad bots targeting login pages can mimic human behavior. Also, the report says that 57.5% of the bots attacking checkout pages exhibit human-like patterns as they perform carding attacks.

Human-like bad bots

The researchers behind the report identify four generations of bot evolution so far, with the third and fourth generations giving rise to human-like bots. Gen 3 bots use full-fledged browsers, as if they were humans browsing with Chrome, Safari, or Firefox. They follow human patterns of interaction, with similar mouse movements and keystrokes.

Gen 4 bots, on the other hand, are the most sophisticated, as they can carry out more advanced human-like interactions. They don't just imitate human patterns of web browsing; they are also capable of executing various security violations on their own. They appear to possess a high level of artificial intelligence, along with the ability to detect correlations and perform contextual analysis.

The emergence of human-like bots makes it difficult to identify and prevent security threats before they can do any damage. Businesses and organizations need more sophisticated detection and blocking tools. CAPTCHAs, IP-based protection systems, and other commonly used security challenges are no longer effective against Generation 3 and 4 bots, which can rotate through dynamic IP addresses to evade blocking. Something better is needed, such as tools that incorporate behavior analysis capabilities.
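
To illustrate what behavior analysis means in practice, here is a deliberately simple sketch in Python. It flags input events that arrive at suspiciously regular intervals, one of the crudest behavioral signals. The function name, the sample timestamps, and the threshold are all illustrative assumptions; production systems combine many more features with machine learning, precisely because Gen 3 and 4 bots can fake human-like timing.

```python
import statistics

def looks_automated(timestamps_ms, cv_threshold=0.1):
    """Flag input whose timing is suspiciously regular.

    Human mouse/keystroke gaps vary widely; simple scripts often fire
    events at near-constant intervals. The coefficient of variation
    (stddev / mean) of the gaps serves as a crude regularity score.
    """
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    if len(gaps) < 5:
        return False  # too little data to judge
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return True  # impossibly fast or out-of-order input
    return statistics.stdev(gaps) / mean_gap < cv_threshold

# A script clicking every 100 ms vs. a human's uneven rhythm
print(looks_automated([0, 100, 200, 300, 400, 500, 600]))  # True
print(looks_automated([0, 130, 310, 380, 620, 760, 900]))  # False
```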

Automated human-like bot attacks

Based on information from the Open Web Application Security Project (OWASP), three automated human-like bot attacks focusing on online fraud stand out. They are as follows:

Carding. Also referred to as credit card stuffing, carding is a cyber threat wherein attackers attempt to authorize stolen credit card credentials through many parallel attempts. It is an automated payment fraud tactic that seeks to identify which stolen card numbers and details are valid and can therefore be used to make purchases.

Successful carding fraud results in unauthorized charges for the owners of the stolen credit card information. It also harms the reputation of merchants that approved the unauthorized purchases. Carding victims usually report the attack and request chargebacks, which are disadvantageous for businesses, as they mean chargeback penalties and undesirable merchant histories.

Account takeover. OWASP lists two types of cyber threats that constitute an account takeover: credential cracking and credential stuffing. The former refers to identifying valid login credentials by trying different values for usernames and passwords. The latter entails mass login attempts to verify the validity of stolen username-password pairs.

Fake account creation. As the phrase implies, this attack results in the creation of new accounts, often in bulk. In some instances, the attack also involves profile population, or filling out the details of an account's profile to create some semblance of legitimacy. The newly created accounts are then used for various abusive or cybercriminal activities, including spamming, money laundering, malware distribution, and fake reviews and surveys.

What makes these attacks human-like is that their symptoms resemble those of human-driven attempts to breach security systems. In automated account takeovers, for example, the indications of the attack include a high number of failed login attempts, increased reports of account hijacking, and sequential attempts to log in to an account using different sets of credentials from the same HTTP client.
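
As a rough sketch of how those account-takeover signals could be tracked, consider the following Python snippet. The class name, the thresholds, and the choice of client key (an IP address here) are illustrative assumptions, not any vendor's actual implementation:

```python
from collections import defaultdict

class StuffingDetector:
    """Track the signals described above: failed logins and the number
    of distinct usernames attempted from a single HTTP client."""

    def __init__(self, max_failures=10, max_usernames=5):
        self.max_failures = max_failures   # illustrative thresholds
        self.max_usernames = max_usernames
        self.failures = defaultdict(int)   # client -> failed attempts
        self.usernames = defaultdict(set)  # client -> usernames tried

    def record_failed_login(self, client, username):
        self.failures[client] += 1
        self.usernames[client].add(username)

    def is_suspicious(self, client):
        # One user fat-fingering a password fails on one username;
        # a stuffing bot cycles through many credential pairs.
        return (self.failures[client] > self.max_failures
                or len(self.usernames[client]) > self.max_usernames)

detector = StuffingDetector()
for user in ("alice", "bob", "carol", "dave", "erin", "frank"):
    detector.record_failed_login("203.0.113.7", user)
print(detector.is_suspicious("203.0.113.7"))  # True: 6 distinct usernames
```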

In the case of fake account creation, the attack signs are comparable to the activities of politically motivated trolling operations or overeager "fandoms" (fan groups of celebrities) trying to sway public opinion, create an impression of mass support, or make it appear that someone is receiving heavy disapproval and criticism. These are common activities of real people online, which bots can now simulate.
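
A simple batch check can surface some of those signs. In this hedged sketch, the signup record schema, the throwaway-domain list, and the per-fingerprint threshold are all hypothetical:

```python
from collections import Counter

# Hypothetical throwaway-email domains; real blocklists are far larger.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}

def flag_fake_signups(signups, max_per_fingerprint=3):
    """Scan recent signups for bulk fake-account patterns.

    `signups` is a list of dicts with 'email' and 'fingerprint' keys
    (assumed schema). Flags accounts registered from a device
    fingerprint that appears too often, or using throwaway emails.
    """
    per_fp = Counter(s["fingerprint"] for s in signups)
    flagged = set()
    for s in signups:
        domain = s["email"].rsplit("@", 1)[-1].lower()
        if (domain in DISPOSABLE_DOMAINS
                or per_fp[s["fingerprint"]] > max_per_fingerprint):
            flagged.add(s["email"])
    return flagged
```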

For carding, the symptoms point to a scenario wherein someone tries to use a credit card numerous times but keeps getting rejected, much like a hacker who tirelessly tries different approaches to access an account and use its functions in an unauthorized manner.
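
In code, that repeated-rejection pattern reduces to a velocity check. The sketch below counts declined authorizations per client inside a sliding window; the window size and threshold are illustrative, and a real system would likely key on device fingerprints rather than plain IP addresses:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # illustrative 5-minute window
MAX_DECLINES = 5       # illustrative threshold

_declines = defaultdict(deque)  # client key -> timestamps of declined auths

def record_declined_auth(client_key, now=None):
    """Record a declined card authorization and report whether the
    client's decline velocity looks like carding.

    A lone shopper mistyping a card number produces one or two
    declines; a bot validating a stolen card dump produces dozens.
    """
    now = time.time() if now is None else now
    q = _declines[client_key]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:  # expire old entries
        q.popleft()
    return len(q) > MAX_DECLINES
```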

Addressing the problem

In an interview at the 2020 RSAC, anti-fraud and bot mitigation expert Tamer Hassan shared his take on the growing presence of human-like bots. "If you can look like a million humans, what can you do? And the answer is: a lot of things, and it ranges from everything from account takeover and financial fraud to changing the popularity of something."

Online fraud powered by "smart" bots capable of simulating human actions is a complex problem. As such, it requires solutions that treat the attacks as if they were really human-initiated. Simply filtering IP addresses known to belong to botnet herders no longer works.

Nevertheless, there are solutions that address these evolving threats effectively. Security firm Imperva cites a number of proven protective measures: multi-factor authentication, device fingerprinting, browser validation, machine learning-based behavior analysis, reputation analysis, and enhanced API security. However, applying all of these one by one can be tedious and inefficient.
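
As one example from that list, device fingerprinting at its simplest hashes stable request attributes into an identifier that survives IP rotation. This is only a sketch of the idea; commercial products combine far more signals (TLS parameters, canvas data, installed fonts) and are hardened against spoofing:

```python
import hashlib

def device_fingerprint(headers):
    """Derive a coarse device fingerprint from HTTP request headers.

    One fingerprint showing up behind hundreds of rotating IP
    addresses is a strong bot signal, even when each IP looks clean.
    """
    parts = [
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]
```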

What's needed is something that integrates most of the above-mentioned measures, such as a bot management system built to counter carding and other forms of online fraud. Bad bots can repeat their attacks ceaselessly and tirelessly, so they should be met with defenses that can go toe-to-toe with their boundless persistence.

The takeaway

Online fraud continues to be a serious cyber threat, and COVID-19 has failed to slow the attacks down. The pandemic has even become a tool to boost the effectiveness of some fraudulent schemes, such as bogus websites, fake apps, money muling, and deceptive investment opportunities.

The prevalence of human-like bots is an unwelcome complication. Nevertheless, online fraud through human-like bots can still be prevented. With the right automated system backed by machine learning and other anti-bot measures, organizations can get the protection they need. As these bad bots evolve, so can the solutions built to defeat them.