Latency

Last updated: February 20, 2026

A common question: How is latency impacted when ingesting via Realm/Pipeline?


For customers concerned about AI adding latency to our pipeline, it is worth being explicit that our rules strategy ensures the highest performance: rules are evaluated directly, without the latency of LLM or ML model inference. As Privacy Guard becomes more advanced, there may be cases where we want to run performant ML models in-line to detect PII, but we do not do that today; we rely on the rules approach.
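A minimal sketch of what "rules evaluated directly" can look like, assuming a hypothetical regex-based rule set (the rule names and patterns below are illustrative, not the actual Privacy Guard rules): each rule is a precompiled pattern checked synchronously per event, so no model inference sits in the hot path.

```python
import re

# Hypothetical rule set: each rule is a compiled regex evaluated in-line,
# so per-event cost is a few pattern scans rather than a model call.
PII_RULES = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def evaluate_rules(event_text: str) -> list[str]:
    """Return the names of all rules that match the event payload."""
    return [name for name, pattern in PII_RULES.items() if pattern.search(event_text)]

print(evaluate_rules("contact: jane@example.com"))  # → ['email']
```

Because evaluation is just pattern matching over the payload, latency stays deterministic and small, which is the property the rules strategy is trading on.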

Currently, overall end-to-end latency (the time from when we receive an event to when we deliver it to the output feed) is at most 30 seconds.

Under normal load, the average is well under 5 seconds.
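The latency figures above can be expressed as a simple end-to-end check, assuming you have receive and delivery timestamps for an event (the function and variable names here are illustrative, not part of the product's API):

```python
from datetime import datetime, timezone

# Ceiling from the SLO above: events should reach the output feed within 30s.
MAX_LATENCY_SECONDS = 30

def delivery_latency_seconds(received_at: datetime, delivered_at: datetime) -> float:
    """End-to-end latency: receipt of the event to delivery on the output feed."""
    return (delivered_at - received_at).total_seconds()

received = datetime(2026, 2, 20, 12, 0, 0, tzinfo=timezone.utc)
delivered = datetime(2026, 2, 20, 12, 0, 4, tzinfo=timezone.utc)

latency = delivery_latency_seconds(received, delivered)
print(latency, latency <= MAX_LATENCY_SECONDS)  # → 4.0 True
```

Under normal load this measured value should typically land well under 5 seconds, with 30 seconds as the worst case.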