Hi everyone,
I built a SIP monitoring service that uses eBPF to capture SIP traffic directly in the Linux kernel and exports metrics to Prometheus-compatible systems (Prometheus, VictoriaMetrics, Grafana Cloud, etc.).
How it works:
SIP Traffic → NIC → eBPF socket filter → ringbuf → Go poller → SIP parser → Prometheus
The eBPF program (C) is compiled with clang and loaded via cilium/ebpf. It attaches as a socket filter, intercepts UDP packets on SIP ports (5060/5061), and pushes them to userspace through a ring buffer. The Go side polls the ring buffer, parses raw SIP messages, and updates Prometheus counters.
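The parser's first job is to classify each raw message by its start line: requests carry the method, responses carry the status code. A minimal sketch of that step (the `classify` helper is hypothetical, not the service's actual parser):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// classify inspects the start line of a raw SIP message and returns
// either the request method (INVITE, BYE, ...) or the response status code.
func classify(raw string) (method string, status int, err error) {
	line, _, ok := strings.Cut(raw, "\r\n")
	if !ok {
		return "", 0, fmt.Errorf("no start line")
	}
	if strings.HasPrefix(line, "SIP/2.0 ") {
		// Response start line, e.g. "SIP/2.0 200 OK"
		fields := strings.Fields(line)
		if len(fields) < 2 {
			return "", 0, fmt.Errorf("malformed status line")
		}
		code, err := strconv.Atoi(fields[1])
		return "", code, err
	}
	// Request start line, e.g. "INVITE sip:bob@example.com SIP/2.0"
	m, rest, ok := strings.Cut(line, " ")
	if !ok || !strings.HasSuffix(rest, "SIP/2.0") {
		return "", 0, fmt.Errorf("malformed request line")
	}
	return m, 0, nil
}

func main() {
	m, _, _ := classify("INVITE sip:bob@example.com SIP/2.0\r\nCall-ID: a1\r\n\r\n")
	_, code, _ := classify("SIP/2.0 200 OK\r\nCall-ID: a1\r\n\r\n")
	fmt.Println(m, code) // INVITE 200
}
```

Everything after the start line (Call-ID, CSeq, etc.) is what the dialog tracking keys on.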
What it tracks:
- ~30 Prometheus counters: per-method (INVITE, BYE, REGISTER, OPTIONS, etc.), per-status code (1xx–6xx), active dialog count
- RFC 6076 Session Establishment Ratio — the success rate of INVITE → 200 OK, excluding 3xx redirects as the spec requires
- Dialog lifecycle: created on 200 OK to INVITE, terminated on 200 OK to BYE
Performance (userspace only, i7-8665U):
| Operation | Latency | Throughput |
|---|---|---|
| Packet parsing | ~124 ns/op | ~8M pkt/sec |
| Full processing with metrics | ~3 µs/op | ~300k pkt/sec |
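Numbers like these come from Go microbenchmarks. For anyone wanting to reproduce the methodology, `testing.Benchmark` can drive a hot loop without a test harness; the `parse` function below is a trivial stand-in for the real parser, not the project's code:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// parse is a stand-in for the real SIP parser: it just pulls
// the method out of the request line.
func parse(raw string) string {
	line, _, _ := strings.Cut(raw, "\r\n")
	m, _, _ := strings.Cut(line, " ")
	return m
}

func main() {
	msg := "INVITE sip:bob@example.com SIP/2.0\r\nCall-ID: a1\r\n\r\n"
	// testing.Benchmark is the same machinery behind `go test -bench`:
	// it grows b.N until the timing is stable.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			parse(msg)
		}
	})
	nsPerOp := float64(res.NsPerOp())
	if nsPerOp > 0 {
		fmt.Printf("%.0f ns/op, ~%.1fM msg/sec\n", nsPerOp, 1e3/nsPerOp)
	}
}
```

Throughput is just the reciprocal of the per-op latency, which is how ~124 ns/op maps to ~8M pkt/sec above.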
Implementation highlights:
- Zero-copy packet transfer from kernel via ringbuf
- VLAN-tagged frame support
- E2E tests with SIPp via testcontainers-go — real traffic generation, metric verification, dialog cleanup validation
- Single container deployment, no external dependencies
Known limitation: The eBPF verifier rejects bpf_skb_load_bytes calls with an unbounded variable length, so packets are copied in fixed 64-byte blocks. I’m planning to migrate to AF_PACKET with PACKET_RX_RING to support arbitrary packet sizes.
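For readers unfamiliar with the verifier constraint: each load must have a size the verifier can bound at load time, so the copy is unrolled into fixed-size reads. A userspace mimic of that chunking (the block and cap sizes here are illustrative assumptions, not the project's exact constants):

```go
package main

import "fmt"

const (
	blockSize = 64   // fixed read size the verifier will accept
	maxPacket = 1024 // upper bound on copied payload (assumption)
)

// copyInBlocks mimics the kernel-side loop: instead of one
// variable-length read, the packet is covered by fixed 64-byte reads.
// In eBPF each iteration would be a bpf_skb_load_bytes(skb, off, dst, 64)
// call with a constant length.
func copyInBlocks(pkt []byte) []byte {
	buf := make([]byte, 0, maxPacket)
	for off := 0; off < len(pkt) && off < maxPacket; off += blockSize {
		end := off + blockSize
		if end > len(pkt) {
			end = len(pkt)
		}
		buf = append(buf, pkt[off:end]...)
	}
	return buf
}

func main() {
	pkt := make([]byte, 150)
	fmt.Println(len(copyInBlocks(pkt))) // 150, copied as 64 + 64 + 22
}
```

The downside is the hard `maxPacket` cap and the per-block overhead, which is what the planned AF_PACKET/PACKET_RX_RING path removes.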
Happy to answer questions about the eBPF integration, SIP dialog state machine, or the Prometheus metric design.