Message-ID: <CABWYdi0ujdzC+MF_7fJ7h1m+16izL=pzAVWnRG296qNt_ati-w@mail.gmail.com>
Date: Thu, 11 Apr 2024 11:09:26 -0700
From: Ivan Babrou <ivan@...udflare.com>
To: bpf <bpf@...r.kernel.org>
Cc: kernel-team <kernel-team@...udflare.com>, Xu Kuohai <xukuohai@...weicloud.com>, 
	linux-kernel <linux-kernel@...r.kernel.org>, linux-arm-kernel@...ts.infradead.org
Subject: Incorrect BPF stats accounting for fentry on arm64

Hello,

We're seeing incorrect data for bpf runtime stats on arm64. Here's an example:

$ sudo bpftool prog show id 693110
693110: tracing  name __tcp_retransmit_skb  tag e37be2fbe8be4726  gpl
run_time_ns 2493581964213176 run_cnt 1133532 recursion_misses 1
    loaded_at 2024-04-10T22:33:09+0000  uid 62727
    xlated 312B  jited 344B  memlock 4096B  map_ids 8550445,8550441
    btf_id 8726522
    pids prometheus-ebpf(2224907)

According to bpftool, this program reported 66555800ns of runtime at
one point and then it jumped to 2493581675247416ns just 53s later when
we looked at it again. This is happening only on arm64 nodes in our
fleet on both v6.1.82 and v6.6.25.
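
For scale: the snapshot above works out to run_time_ns / run_cnt ≈
2493581964213176 / 1133532 ≈ 2.2e9 ns, i.e. roughly 2.2 seconds per
invocation on average, which is clearly not a plausible runtime for a
program like this.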

We have two services that are involved:

* ebpf_exporter attaches bpf programs to the kernel and exports
Prometheus metrics and OpenTelemetry traces driven by its probes
* bpf_stats_exporter runs bpftool every 53s to capture bpf runtime
metrics (a sketch of the equivalent libbpf calls follows below)
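
For reference, here is a minimal sketch (not our actual exporter code)
of reading the same counters via libbpf; it is essentially what bpftool
does under the hood. The program id is hardcoded for illustration, and
kernel.bpf_stats_enabled (or BPF_ENABLE_STATS) is assumed to already be
on:

/* stats_read.c -- hypothetical sketch, not bpf_stats_exporter itself.
 * Reads the run_time_ns/run_cnt/recursion_misses counters that bpftool
 * prints above. Build against libbpf. */
#include <stdio.h>
#include <bpf/bpf.h>

int main(void)
{
  __u32 id = 693110; /* program id from "bpftool prog show" */
  struct bpf_prog_info info = {};
  __u32 len = sizeof(info);

  int fd = bpf_prog_get_fd_by_id(id);
  if (fd < 0) {
    perror("bpf_prog_get_fd_by_id");
    return 1;
  }

  if (bpf_obj_get_info_by_fd(fd, &info, &len)) {
    perror("bpf_obj_get_info_by_fd");
    return 1;
  }

  printf("run_time_ns %llu run_cnt %llu recursion_misses %llu\n",
         (unsigned long long)info.run_time_ns,
         (unsigned long long)info.run_cnt,
         (unsigned long long)info.recursion_misses);
  return 0;
}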

The problematic fentry is attached to __tcp_retransmit_skb, but an
identical one is also attached to tcp_send_loss_probe, which does not
exhibit the same issue:

SEC("fentry/__tcp_retransmit_skb")
int BPF_PROG(__tcp_retransmit_skb, struct sock *sk)
{
  return handle_sk((struct pt_regs *) ctx, sk, sk_kind_tcp_retransmit_skb);
}

SEC("fentry/tcp_send_loss_probe")
int BPF_PROG(tcp_send_loss_probe, struct sock *sk)
{
  return handle_sk((struct pt_regs *) ctx, sk, sk_kind_tcp_send_loss_probe);
}

In handle_sk we do a map lookup and an optional ringbuf push. There is
no sleeping (I don't think it's even allowed on v6.1). It's
interesting that it only happens for the retransmit, but not for the
loss probe.
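
For context, handle_sk is roughly of the following shape. This is a
hypothetical reconstruction for illustration, not the exact code: the
map names, key/value layouts, and the sk_event struct are made up, and
the sk_kind_* values come from an enum defined elsewhere in our object:

/* Hypothetical reconstruction of handle_sk: one hash map lookup plus an
 * optional ring buffer push, nothing sleepable. Assumes the usual
 * vmlinux.h / bpf_helpers.h includes used by the rest of the object. */
struct sk_event {
  __u64 kind;
  __u64 sk;
};

struct {
  __uint(type, BPF_MAP_TYPE_HASH);
  __uint(max_entries, 64);
  __type(key, __u64);
  __type(value, __u64);
} sk_config SEC(".maps");

struct {
  __uint(type, BPF_MAP_TYPE_RINGBUF);
  __uint(max_entries, 256 * 1024);
} sk_events SEC(".maps");

static int handle_sk(struct pt_regs *ctx, struct sock *sk, __u64 kind)
{
  __u64 *enabled;
  struct sk_event *e;

  /* map lookup: is this event kind enabled at all? */
  enabled = bpf_map_lookup_elem(&sk_config, &kind);
  if (!enabled || !*enabled)
    return 0;

  /* optional ringbuf push */
  e = bpf_ringbuf_reserve(&sk_events, sizeof(*e), 0);
  if (!e)
    return 0;

  e->kind = kind;
  e->sk = (__u64)(unsigned long)sk;
  bpf_ringbuf_submit(e, 0);
  return 0;
}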

The issue manifests some time after we restart ebpf_exporter and
reattach the probes. It doesn't happen immediately, as we need two
captures 53s apart to produce a visible spike in the metrics.

There is no corresponding spike in execution count, only in execution time.

It doesn't happen deterministically. Some ebpf_exporter restarts show
it, some don't.

It doesn't keep happening after an ebpf_exporter restart: it happens
once and that's it.

Maybe recursion_misses plays a role here? We see none for
tcp_send_loss_probe. We do see some for the inet_sk_error_report
tracepoint, but its runtime doesn't spike like __tcp_retransmit_skb's
does.

The biggest smoking gun is that it only happens on arm64.

I'm happy to try out patches to figure this one out.
