
Automate your security whack-a-mole: Q&A with Exaforce

Security controls can be a bit of a cat and mouse game—you block one attack, new ones spring up.


January 22, 2026


*Security controls can be a bit of a cat and mouse game—you block one attack, new ones spring up. Malicious actors continue to innovate new ways to hack your software, so responses end up being attack-specific and often manual. It’s not just your software, it’s your third-party dependencies, too. So Exaforce built software that can automate some of the responses and attack detection.*

I spoke with Exaforce co-founders Ariful Huq, head of product, and Marco Rodrigues last month at AWS re:Invent.

---

Q: Tell us a little bit about what Exaforce does.

Ariful Huq: We are focused on helping organizations of all sizes, from high-growth startups all the way to mid-size enterprises, depending on where they are in their SOC journey. If you do not have a SOC, we enable you to build one in days, literally, without having to go buy tooling, hire detection engineers, and hire analysts. If you do have a security operations center where you have analysts, our goal is to amplify the capability of those analysts. Think about a team of two or three analysts: how do you make them a team of ten? That's essentially what we do.

Q: Where do you find that organizations are the most lacking, either before SOC 2 audits or after?

Marco Rodrigues: In our experience at least, customers tend to come to us once they have SOC 2 or ISO compliance, which is clearly an attestation- and evidence-driven security compliance framework. When it comes time to actually start putting together incident response plans, or where there's legal liability being driven through their customer contracts, that's where they tend to get a bit more serious.

A lot of these companies are early-stage startups. They barely have one or two security engineers to begin with. Where they're lacking usually depends on the journey of the company. A lot of them have no tools at all, and they need some detection framework. They need individuals monitoring and actually writing those detections. You need a routine that actually responds to and remediates them. So we've seen a variety of companies in that space.

Some of the larger companies, they just can't keep up with the growth of detections as they come in. They need to augment their teams. The reality is that the skill set is not there—they can't hire these people even if they wanted to. They're using AI SOC, as an example, to augment and fill in that gap.

*Q: When you do construct these sort of detection frameworks for these operations, how much existing infrastructure are you building on? I know a lot of folks have a CloudFlare base to help with that, or HAProxy to route traffic. What are you coming in to? Does anyone just have nothing?*

AH: Surprisingly, even in the largest organizations that we work with, sometimes they have nothing, specifically around cloud and SaaS. What we found in starting our journey building this AI SOC platform is that most of the market thinks about this as an AI analyst problem.

But we think about four primary tasks in the SOC, and detection is one of them: detections, triaging, investigations, and response. If you're a very small organization, typically a two- to three-person security team, you don't even have the bandwidth to go think about detection engineering or building detections.

What you're really looking for is getting off the ground, right? So we come with out-of-the-box detections: great! If you have existing detections from, for instance, CloudFlare, we'll leverage those detections for enrichments and those sorts of things.

Even the larger organizations, like Fortune 2000 companies that we work with, what we find is a lot of them don't even have detection coverage for SaaS services that you would think they would consider very critical.

Q: Open to the internet.

AH: Exactly. Like GitHub, Snowflake, OpenAI. These are critical services where a lot of critical data resides today. And they don't have detections on top of it. We help those organizations with detection and coverage for those SaaS services.

If they already have endpoint technology or email security that's getting them the right detections, there's no value we can add there. Where there is value is in creating additional coverage for critical data, and we help there.

*Q: We wrote something about our own DDoS mitigation. We got hit by a bunch of attacks, but it was almost like whack-a-mole. How do you do detections in a reliable and almost permanent way?*

AH: It's a tricky problem to solve. Anomaly detection has a bit of a bad rep for being noisy. I'll give you a little bit of how the approach has evolved, how the industry overall has evolved.

Most anomaly detection has been statistical in nature. It's based on baselining and those sorts of things. Sometimes these things are bespoke to every organization. With the anomaly detection we're doing now, we still have statistical modeling, because you certainly need to understand what is normal before you can separate known-good behavior from what's potentially bad, right?
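The baselining Huq describes can be sketched roughly as follows. This is an illustrative toy, not Exaforce's implementation; the threshold and the per-entity framing are assumptions.

```python
# Toy statistical baselining: learn what "normal" looks like from history,
# then flag observations that deviate sharply. Threshold is illustrative.
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float, z_threshold: float = 3.0) -> bool:
    """Flag `observed` if it sits more than `z_threshold` standard
    deviations from the baseline built from `history`."""
    if len(history) < 2:
        return False  # not enough data to baseline; stay quiet rather than noisy
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# A user who normally makes ~20 API calls an hour suddenly makes 400.
baseline = [18, 22, 19, 25, 21, 17, 23, 20]
print(is_anomalous(baseline, 400))  # flagged
print(is_anomalous(baseline, 24))   # within normal range
```

The noisiness he mentions comes from exactly this kind of model: a fixed threshold has no idea whether the spike is a data exfiltration or a developer running a load test, which is where the layering below comes in.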

But what's really interesting now is we're leveraging large language models, our AI agents, to actually do the triaging for these detections. We're helping make anomaly detection much more reliable. We leverage statistical modeling of behavior across lots of different types of data and then layer it with what we call a knowledge layer, based on large language models, into which we feed business context. Every customer has different business context, right? Different ways they leverage their technologies.

From there, we try to weed out what is potentially good behavior in the environment that's being flagged as anomalous, like developers working within your cloud environment. Sometimes they may be doing things that an attacker would do, but it's normal for this person.

That's how we think about anomaly detection and leveraging this new wave of AI agents. In the past, you couldn't create higher fidelity because you didn't have enough people to look at these detections. Now we actually have machines looking at them, so we can take even the lowest signals, put them all together, let machines do the stitching, and bring up the fidelity.
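The "stitching" of low-fidelity signals can be sketched like this: no single weak detection escalates on its own, but several weak signals tied to the same entity do. The field names, weights, and threshold are invented for illustration.

```python
# Toy signal stitching: group weak detections by entity and escalate only
# when their combined weight crosses a bar. Weights/threshold are invented.
from collections import defaultdict

def stitch_signals(signals: list[dict], escalate_at: float = 1.0) -> list[str]:
    """Return the entities whose combined signal weight warrants escalation."""
    score: defaultdict[str, float] = defaultdict(float)
    for s in signals:
        score[s["entity"]] += s["weight"]
    return [entity for entity, total in score.items() if total >= escalate_at]

signals = [
    {"entity": "dev-laptop-7", "weight": 0.3, "kind": "odd_login_hour"},
    {"entity": "dev-laptop-7", "weight": 0.4, "kind": "new_geo"},
    {"entity": "dev-laptop-7", "weight": 0.5, "kind": "bulk_download"},
    {"entity": "ci-runner-2",  "weight": 0.3, "kind": "odd_login_hour"},
]
print(stitch_signals(signals))  # only the entity with several correlated signals
```

In a real product the aggregation would be far richer (time windows, kill-chain ordering, the LLM knowledge layer), but the principle is the same: individually ignorable signals become high-fidelity in combination.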

Q: Do you find AI triaging reliable? Do you have guardrails to make it more reliable?

AH: It's really about how much guesswork you try to avoid. If somebody asks you and me a question without directional guidance, our responses would likely point in one direction, but they could deviate quite a bit. With LLMs, we try to give them as much directional guidance as possible.

That's where we leverage a lot of the data: we gather it, build semantics around it, build relationships, and pull in a lot of context. Then we essentially give the LLMs reasoning capabilities. We answer a bunch of questions that are critical to understanding the specific detection, and then we let the LLMs do the reasoning by giving them sufficient context, and by actually narrowing the amount of data we give them, too.

That's the other thing people have to avoid: giving too much data. It's like you reading a hundred-page book. The first page versus the last page, which are you most likely to remember?
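One way to picture the "directional guidance" idea: instead of dumping raw logs on a model, pre-answer a fixed set of triage questions and hand over only that narrowed context. The questions, fields, and prompt shape below are all assumptions for illustration, not Exaforce's prompt.

```python
# Toy context-narrowing for LLM triage: a fixed question set is answered
# upfront, so the model reasons over a small, directed brief rather than
# raw telemetry. All questions and field names are illustrative.
def build_triage_context(detection: dict, answers: dict) -> str:
    """Assemble a compact triage brief from a detection and pre-answered questions."""
    lines = [f"Detection: {detection['title']} (severity: {detection['severity']})"]
    lines.append("Pre-answered triage questions:")
    for question, answer in answers.items():
        lines.append(f"- {question} {answer}")
    lines.append("Task: decide benign or suspicious, and explain in two sentences.")
    return "\n".join(lines)

context = build_triage_context(
    {"title": "Unusual GitHub token use", "severity": "medium"},
    {
        "Is the actor a known employee?": "Yes, a developer on the platform team.",
        "Is this behavior in their baseline?": "No, first use from this network.",
        "Does the resource hold sensitive data?": "Yes, private infrastructure repos.",
    },
)
print(context)
```

The brief stays a few hundred tokens regardless of log volume, which sidesteps the hundred-page-book problem Huq describes: everything the model sees is near the "first page."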

[...]

