How does DataDome’s bot SOC team work?
A security operations center (SOC) provides both oversight and incident response services in cybersecurity. DataDome’s SOC team helps us counter the ever-increasing number of bot-driven attacks targeting our customers’ mobile apps, websites, and APIs.
Technology provides extremely powerful tools for bot detection, but human expertise is still fundamental. Behind the scenes, our bot SOC team stays hard at work to ensure that the DataDome solution constantly adapts to the changing patterns of automated threats.
Composed of expert threat researchers and data scientists, DataDome’s bot SOC team:
- Keeps a close watch on emerging and evolving browser technologies.
- Analyzes trends around bot usage and libraries.
- Infiltrates bot developer communities.
- Uses a red team approach to constantly challenge their technical findings.
The main output of the team’s effort is labeled datasets for our machine learning (ML) algorithms. ML is a branch of artificial intelligence (AI) that enables systems to learn and improve from experience without explicit human intervention. The datasets from our SOC team are fully qualified as either bot or human patterns and are used to fuel our “new model factory,” so every new ML model is trained on reliably labeled data.
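As a rough illustration of how such labeled datasets can feed a supervised model, here is a minimal sketch in Python; the feature names, the tiny toy dataset, and the scikit-learn pipeline are assumptions for illustration only, not a description of DataDome’s actual model factory.

```python
# Minimal sketch: training a classifier on SOC-labeled traffic.
# Feature names, data, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per request, labeled by the SOC team.
# label: 1 = bot, 0 = human
df = pd.DataFrame({
    "requests_per_minute": [250, 3, 180, 5, 400, 2],
    "has_mouse_events":    [0, 1, 0, 1, 0, 1],
    "tls_fp_known_bot":    [1, 0, 1, 0, 1, 0],
    "label":               [1, 0, 1, 0, 1, 0],
})

X = df.drop(columns="label")
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```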
Here are answers to some of the most common questions about our bot SOC team:
Bot SOC Team FAQs
1. How is the SOC team organized?
Led by our Head of Research Antoine Vastel, PhD, the SOC team is made up of experts ranging from analysts to data scientists and threat researchers. The team is built with redundancy and a diversity of skills so it can tackle different types of threats.
2. How does the SOC team monitor threats?
DataDome’s bot SOC team proactively monitors and mitigates customers’ automated traffic to ensure optimal security and performance at all times.
The team uses different types of monitoring, including:
- Real-Time Monitoring: Our streaming detection engine automatically runs outlier detection on several metrics and at different granularities (behavior per IP, behavior per session, distribution of countries on a website, etc.). If something highly abnormal happens, an alert is automatically triggered for review by someone on the SOC team. The process is discussed further in our OWASP presentation here.
- Aggregate Statistics: We also compute aggregate statistics to monitor false positives and false negatives, e.g. CAPTCHA pass rate and volume of suspicious requests, which can be based on a wide range of signals such as fingerprints, location of requests, and time of day. As with real-time monitoring, anything suspicious triggers an automatic alert for review by someone on the SOC team. (A simplified sketch of this kind of alerting follows this list.)
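As a rough illustration of this kind of alerting logic, the sketch below flags a per-IP request-rate outlier and a drop in CAPTCHA pass rate; the metric names, thresholds, and helper functions are hypothetical assumptions, not DataDome’s actual detection engine.

```python
# Hypothetical sketch of metric-based alerting for SOC review.
# Metric names and thresholds are illustrative assumptions.
from statistics import mean, stdev

def zscore_alert(history: list[float], current: float, threshold: float = 4.0) -> bool:
    """Flag the current value if it deviates strongly from its recent baseline."""
    if len(history) < 10 or stdev(history) == 0:
        return False
    z = (current - mean(history)) / stdev(history)
    return abs(z) > threshold

def captcha_pass_rate_alert(passed: int, served: int, floor: float = 0.85) -> bool:
    """Flag a drop in CAPTCHA pass rate, a possible sign of false positives."""
    return served > 0 and passed / served < floor

# Example: requests per minute from a single IP, then a sudden spike.
baseline = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9]
if zscore_alert(baseline, current=240):
    print("ALERT: abnormal per-IP request rate, needs SOC review")

if captcha_pass_rate_alert(passed=700, served=1000):
    print("ALERT: CAPTCHA pass rate dropped, check for false positives")
```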
Our team is always available to investigate suspicious activity or analyze mitigated bot attacks.
3. How does the SOC team report an event to the customer if needed?
The SOC team shares any relevant findings with the support team, who then contact the customer via their preferred method (email, Slack, or phone).
If needed, a SOC team member can join a call or video conference to quickly communicate with customers and answer questions.
4. What kind of actions does the SOC team take?
If the SOC team discovers that detection is not aggressive enough, or that part of an attack is getting through, their priority is to create an efficient short-term mitigation that quickly stops the residual traffic.
Our real-time detection engine is structured so that, once an analyst or researcher detects something malicious, they can deploy new mitigations across all our points of presence within seconds.
These mitigations can be based on different signals, such as the following (a simplified rule sketch appears after this list):
- Signatures: TLS fingerprints, HTTP headers, browser JS fingerprints, mobile SDK fingerprints, etc.
- Behavior: Client-side behavioral events (mouse movements, scroll speed, etc.) and server-side behavioral events (time series/graph analysis of how the user makes requests on the site, speed, frequency, browsing path, etc.).
- Reputation: The team computes reputation at the session and IP level. They also use information about the type of IP (data center or residential) to determine if the IP is part of a proxy network.
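Putting these signal families together, a short-term mitigation rule might look roughly like the sketch below; the field names, fingerprint value, and thresholds are hypothetical assumptions, not DataDome’s rule engine.

```python
# Hypothetical sketch of a signal-based mitigation rule.
# Field names, fingerprint values, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RequestSignals:
    tls_fingerprint: str          # signature signal
    requests_last_minute: int     # server-side behavioral signal
    has_pointer_events: bool      # client-side behavioral signal
    ip_type: str                  # "datacenter" or "residential"
    ip_reputation_score: float    # 0.0 (trusted) .. 1.0 (known bad)

KNOWN_BOT_TLS_FPS = {"aa5c3e91d0f2"}  # made-up fingerprint value

def should_block(sig: RequestSignals) -> bool:
    """Short-term mitigation: block when several independent signals agree."""
    signature_hit = sig.tls_fingerprint in KNOWN_BOT_TLS_FPS
    behavioral_hit = sig.requests_last_minute > 120 and not sig.has_pointer_events
    reputation_hit = sig.ip_type == "datacenter" and sig.ip_reputation_score > 0.8
    # Require at least two signal families to agree before blocking.
    return sum([signature_hit, behavioral_hit, reputation_hit]) >= 2

request = RequestSignals("aa5c3e91d0f2", 300, False, "datacenter", 0.9)
print(should_block(request))  # True: signature, behavior, and reputation all agree
```

Requiring several independent signal families to agree is one common way to keep a quickly deployed rule from introducing false positives.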
Once a short-term mitigation has been put in place, the team thoroughly reviews the traffic to understand more about what occurred and how our bot and fraud detection can be improved in the long term. An explanation is then provided to the customer.
This follow-up work can take different forms depending on the findings, such as a better understanding of the bot’s modus operandi and intent, new detection signals, or new features for our machine learning models.
If there are false positives, the SOC team identifies their root cause and adapts the detection models responsible.
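As a simplified illustration of how such a root cause might be narrowed down, the sketch below breaks CAPTCHA pass rates down per detection rule; the data layout, rule names, and threshold are hypothetical assumptions.

```python
# Hypothetical sketch: finding which detection rule drives false positives
# by comparing CAPTCHA pass rates per rule. Data and threshold are made up.
blocked_events = [
    {"rule": "tls_signature_42", "captcha_passed": False},
    {"rule": "tls_signature_42", "captcha_passed": False},
    {"rule": "rate_limit_ip",    "captcha_passed": True},
    {"rule": "rate_limit_ip",    "captcha_passed": True},
    {"rule": "rate_limit_ip",    "captcha_passed": False},
]

def pass_rate_per_rule(events):
    stats = {}
    for e in events:
        total, passed = stats.get(e["rule"], (0, 0))
        stats[e["rule"]] = (total + 1, passed + e["captcha_passed"])
    return {rule: passed / total for rule, (total, passed) in stats.items()}

# A high pass rate suggests real users are being challenged: a likely false-positive source.
for rule, rate in pass_rate_per_rule(blocked_events).items():
    if rate > 0.5:
        print(f"Review rule {rule}: CAPTCHA pass rate {rate:.0%} suggests false positives")
```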
Vital Role of the Bot SOC Team
Ultimately, DataDome’s solution would not be as effective without the human brains behind the technology. Our bot SOC team is a driving force behind our AI and ML models, which allow us to scale efficient bot and online fraud protection. Thanks to the team, the DataDome solution adapts quickly to changing threat patterns and protects our customers around the world 24/7.