How Does It Feel Responding to a Question: Are You a Robot?

CAPTCHA’s “Are you a robot?” prompt reflects the rise of AI-powered malicious bots and the growing need for advanced security to protect digital spaces.

If you have spent any amount of time browsing the internet, you have likely encountered that familiar existential question: “Are you a robot?” It usually appears as part of a CAPTCHA test, prompting you to click a checkbox, solve a puzzle, or decipher blurry text to prove your humanity. While its purpose is clear—to distinguish humans from bots—the experience can be surprisingly amusing or even thought-provoking.

For a split second, being asked “Are you a robot?” can make you pause. It is such a simple question, yet the act of proving one’s humanness to a machine feels strangely unnatural. In many cases, checking a box is all it takes. But then there are times when the system decides you need an extra layer of verification. Suddenly, you are staring at a grid of blurry images, trying to identify bicycles, traffic lights, or crosswalks. Sometimes, the images are vague, leaving you second-guessing whether a tiny piece of a bus in the corner counts. It is in these moments that you might feel more robotic than ever—methodically scanning, analyzing, and making selections as if you were an AI model trained for object recognition.

On the flip side, these challenges serve as a reminder of how sophisticated AI has become. The very fact that websites must verify humanity in such a way highlights the growing capabilities of automated systems and the ongoing arms race between developers and malicious bots.


The Rise of Malicious Bots and Their Growing Threat to Cybersecurity

As artificial intelligence (AI) continues to evolve, so does the dark side of technological progress: malicious bots. These automated programs, engineered to execute harmful operations at speed and scale, are emerging as formidable adversaries in the realm of cybersecurity. No longer limited to rudimentary attacks, today’s bots are increasingly sophisticated, capable of infiltrating systems, manipulating data, and evading detection. Their targets span the spectrum, from unsuspecting individuals to multinational corporations and even government infrastructure. Whether they are stuffing credentials, orchestrating large-scale data breaches, spreading disinformation, or committing financial fraud, these bots are becoming more deceptive, agile, and challenging to detect. Consequently, traditional security mechanisms, such as intrusion detection systems and firewalls, are increasingly inadequate against modern threats, prompting an urgent need for more intelligent and adaptive defences (Mohamed, 2025).


Examples of Cyberattacks and How They Can Be Accelerated by AI

Credential-Stuffing Attacks
Cybercriminals are increasingly leveraging AI-driven bots to launch cyberattacks that are faster, more adaptive, and harder to detect. One of the most common threats is the credential-stuffing attack, in which usernames and passwords stolen in earlier data breaches are tested across multiple platforms to gain unauthorized access. These attacks typically exploit the reuse of the same passwords and usernames across services. For example, the Dunkin’ Donuts data breach exposed thousands of customers to unauthorized account access by hackers, who managed to steal tens of thousands of dollars. The integration of AI into credential stuffing has simplified both the identification of valid credentials and the bypassing of security measures: algorithms learn from past attack patterns, and machine learning models identify vulnerable accounts through user behavior. The scalability of these tools enables even small attacker groups to launch widespread campaigns targeting millions of users at once (Anglen, 2025).
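To make the defensive side concrete, below is a minimal, hypothetical sketch of one common server-side countermeasure: flagging a source address that accumulates failed logins across many distinct accounts in a short window, a signature typical of credential stuffing. The thresholds, names, and logic are illustrative assumptions, not a production recipe.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds; a real deployment would tune these against live traffic.
WINDOW_SECONDS = 300       # sliding five-minute window
MAX_FAILURES = 20          # failed logins tolerated per source in the window
MAX_DISTINCT_USERS = 10    # distinct accounts one source may plausibly mistype

failed_logins = defaultdict(deque)  # source IP -> deque of (timestamp, username)

def record_failed_login(ip: str, username: str) -> bool:
    """Record a failed login; return True if the source now looks like a
    credential-stuffing bot (many accounts attempted from one address)."""
    now = time.time()
    attempts = failed_logins[ip]
    attempts.append((now, username))
    # Evict attempts that have fallen out of the sliding window.
    while attempts and now - attempts[0][0] > WINDOW_SECONDS:
        attempts.popleft()
    distinct_users = {user for _, user in attempts}
    return len(attempts) > MAX_FAILURES or len(distinct_users) > MAX_DISTINCT_USERS
```

On its own, a velocity check like this is easy to evade with a distributed botnet, which is why it is usually combined with breached-password screening, device fingerprinting, and the MFA and CAPTCHA measures discussed later in this post.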

Scalping and Automated Fraud
Scalper bots are automated programs that mimic human behavior to buy up high-demand items, such as tickets, far faster than regular shoppers. These products are then resold at inflated prices on secondary markets, creating unfair access for typical customers. In 2020, the PlayStation 5 (PS5) launch was heavily disrupted by scalper bots, which acquired thousands of consoles for resale at inflated prices. Modern scalper bots incorporate state-of-the-art technologies, including AI, to complete transactions at machine speed and can interfere with website operations in the process (Glenn, 2023). They are growing increasingly sophisticated, even capable of bypassing defence mechanisms such as CAPTCHA tests.
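As a rough illustration of one countermeasure retailers use, the hypothetical check below flags orders that complete faster than a human plausibly could, or that exceed a per-customer limit. The three-second threshold and function names are assumptions made for this sketch, not an industry standard.

```python
from datetime import datetime, timedelta

# Hypothetical floor on human checkout time; real thresholds come from analytics.
MIN_HUMAN_CHECKOUT = timedelta(seconds=3)

def looks_like_scalper(page_loaded_at: datetime,
                       order_placed_at: datetime,
                       items_requested: int,
                       per_customer_limit: int = 1) -> bool:
    """Flag orders that were completed inhumanly fast or exceed the limit."""
    too_fast = (order_placed_at - page_loaded_at) < MIN_HUMAN_CHECKOUT
    over_limit = items_requested > per_customer_limit
    return too_fast or over_limit
```

Sophisticated bots deliberately randomize their delays to appear human, so timing checks like this are only one layer among several.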

Social Media Manipulation and Misinformation
Malicious bots also play a central role in spreading fake news, disinformation, and propaganda on social media. For example, during the 2016 U.S. presidential election, automated bots were used to amplify politically charged content and influence public perception. Using advanced social bot detection techniques, Bessi and Ferrara (2016) identified a substantial segment of the user base that appeared to be non-human and was responsible for generating nearly 20% of the total conversation on social media. Recent advances in deepfake technology further exacerbate the problem, enabling bots to generate highly realistic but fabricated media, and researchers warn that AI-generated misinformation is becoming harder to detect (Drolsbach and Pröllochs, 2025). The bots’ enhanced scalability, ability to operate across multiple languages, and use of diverse content formats make detection more difficult and present major obstacles to the traditional defence strategies used by digital platforms and users (Feuerriegel et al., 2023).
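The detection techniques referenced above rely on supervised models trained over many account features; the toy heuristic below is not the method used by Bessi and Ferrara (2016), merely a simplified sketch of the feature-based idea, with hand-set weights assumed purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    following: int
    account_age_days: int

def bot_likeness_score(a: Account) -> float:
    """Toy score in [0, 1]; higher means more bot-like. Real detectors such
    as Botometer combine hundreds of features in supervised models rather
    than hand-set weights like these."""
    score = 0.0
    if a.tweets_per_day > 50:                    # inhuman posting volume
        score += 0.4
    if a.following > 10 * max(a.followers, 1):   # follows far more than followed
        score += 0.3
    if a.account_age_days < 30:                  # freshly created account
        score += 0.3
    return min(score, 1.0)
```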


“The potential of AI to manipulate voters and sway public opinion during elections poses a significant threat to democratic processes around the world.”

 

Potential Security Measures

According to the OWASP Cheat Sheet Series, multi-factor authentication (MFA) remains one of the most effective defences against password-based attacks like credential stuffing. Microsoft suggests that enabling MFA reduces the likelihood of account compromise by over 99.9%. Thanks to widespread support in modern browsers and mobile devices for technologies like FIDO2 passkeys (cryptographic credentials bound to an individual’s account), MFA is now practical for most applications. To balance security and user experience, MFA can be triggered selectively, only when a login attempt appears suspicious; a sketch of this idea appears at the end of this section.

Similarly, CAPTCHAs can help detect and deter automated bot activity during login attempts or when accessing online platforms. While not foolproof, they can slow down attacks and flag unusual behavior, especially when applied dynamically based on risk signals, and monitoring solve rates can reveal whether bots are bypassing the challenges. Notably, advanced computer vision algorithms and OCR technology now enable bots to analyze and interpret images and text, allowing them to defeat visual CAPTCHAs. Several layers of defence are therefore necessary, for instance honeypots that deceive and trap bots, anti-spam plugins, and Web Application Firewalls.

Moreover, as often observed, AI is a double-edged sword: AI-driven techniques like deep learning and machine learning enable real-time data processing, pattern recognition, and rapid adaptation to emerging threats, making them essential for building scalable and resilient cybersecurity defences (Salem et al., 2024). For instance, Asiri et al. (2024) introduced PhishingRTDS, a deep learning-based system designed to detect and block phishing websites; it reports a precision of 99%, demonstrating high effectiveness in distinguishing phishing URLs from legitimate ones. A recent comprehensive review of AI-driven detection techniques for advancing cybersecurity can be found here.
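As promised above, here is a minimal sketch of the selective MFA trigger, assuming a simple additive risk score; the signals, weights, and threshold are hypothetical choices for illustration rather than any vendor’s actual algorithm.

```python
def login_risk_score(ip_is_new: bool, device_is_new: bool,
                     country_changed: bool, recent_failures: int) -> int:
    """Toy additive risk score for a single login attempt."""
    score = 0
    if ip_is_new:
        score += 2                    # unfamiliar network location
    if device_is_new:
        score += 2                    # no trusted-device cookie or fingerprint
    if country_changed:
        score += 3                    # improbable travel since the last login
    score += min(recent_failures, 5)  # cap the contribution of failed attempts
    return score

def requires_mfa(score: int, threshold: int = 3) -> bool:
    """Step up to MFA only when the attempt looks suspicious,
    keeping friction low for routine logins."""
    return score >= threshold
```

Scoring in this style keeps routine logins frictionless while reserving the extra challenge for the automated, high-volume patterns described earlier.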


Conclusion

Beyond the minor inconvenience, the question “Are you a robot?” can spark deeper thoughts about digital identity. As AI continues to evolve, blurring the line between human and machine interactions, proving our authenticity online may become even more complicated. Existing mechanisms like CAPTCHA—once a clever gatekeeper—are increasingly being outsmarted by sophisticated bots capable of mimicking human behavior with high precision. This shift signals a broader challenge: the tools we rely on to distinguish real users from automated threats are losing their edge. In the near future, we may need to adopt more advanced, context-aware methods of verification, not just to access a website but to engage responsibly with AI-driven platforms and protect our digital spaces. Biometric security, behavioral analytics, and dynamic trust scoring could replace the familiar checkbox, ushering in a new era of identity validation.

While answering “Are you a robot?” is usually just another step in internet browsing, it is also a reminder of the digital world we navigate daily, and of the escalating arms race between security and deception. Whether it distracts you or makes you ponder the nature of human-computer interactions, one thing is certain: for now, clicking that checkbox is a small price to pay for keeping the internet secure.

Suggested citation: Nyamawe, Ally. “How Does It Feel Responding to a Question: Are You a Robot?,” UNU Macau (blog), 14 August 2025, https://unu.edu/macau/blog-post/how-does-it-feel-responding-question-are-you-robot.