Message-ID: <CAEf4BzY=YskvOg+cr_XFF42kDWOL3T1mzx=vAoQcS2oAzPOUsQ@mail.gmail.com>
Date:   Tue, 18 Aug 2020 09:48:42 -0700
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Yonghong Song <yhs@...com>
Cc:     bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <kernel-team@...com>,
        "Paul E . McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH bpf 1/2] bpf: fix a rcu_sched stall issue with bpf
 task/task_file iterator

On Tue, Aug 18, 2020 at 9:26 AM Yonghong Song <yhs@...com> wrote:
>
> In our production system, we observed rcu stalls when
> `bpftool prog` is running.

[...]

>
> Note that `bpftool prog` actually calls a task_file bpf iterator
> program to establish an association between prog/map/link/btf anon
> files and processes.
>
> In the case where the above rcu stall occurred, we had a process
> with 1587 tasks, each task having roughly 81305 files.
> This implied 129 million bpf prog invocations. Unfortunately, none of
> these files are prog/map/link/btf files, so the bpf iterator/prog needs
> to traverse all these files and is not able to return to user space,
> since there is no seq_file buffer overflow.
>
> The fix is to add cond_resched() while traversing tasks
> and files. Voluntarily releasing the cpu gives other tasks, e.g.,
> the rcu_sched kthread, a chance to run.

What are the performance implications of doing this for every task
and/or file? Have you benchmarked `bpftool prog` before/after? What
was the difference?

I wonder if it's possible to amortize those cond_resched() calls and
invoke them only every so often, based on CPU time or the number of
files/tasks processed, if cond_resched() does turn out to slow
bpf_iter down.
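
For illustration, the amortization could look roughly like the sketch
below (untested; the counter field and the interval constant are
invented for this example, not taken from the posted patch):

	/* Hypothetical sketch: yield at most once per N files visited,
	 * instead of calling cond_resched() on every invocation.
	 * info->files_visited and the interval are illustrative only.
	 */
	#define BPF_ITER_RESCHED_EVERY	256

	if ((++info->files_visited % BPF_ITER_RESCHED_EVERY) == 0)
		cond_resched();

Something like that would still bound how long the iterator can run
without yielding the cpu, while avoiding a function call per file.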

>
> Cc: Paul E. McKenney <paulmck@...nel.org>
> Signed-off-by: Yonghong Song <yhs@...com>
> ---
>  kernel/bpf/task_iter.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
> index f21b5e1e4540..885b14cab2c0 100644
> --- a/kernel/bpf/task_iter.c
> +++ b/kernel/bpf/task_iter.c
> @@ -27,6 +27,8 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
>         struct task_struct *task = NULL;
>         struct pid *pid;
>
> +       cond_resched();
> +
>         rcu_read_lock();
>  retry:
>         pid = idr_get_next(&ns->idr, tid);
> @@ -137,6 +139,8 @@ task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info,
>         struct task_struct *curr_task;
>         int curr_fd = info->fd;
>
> +       cond_resched();
> +
>         /* If this function returns a non-NULL file object,
>          * it held a reference to the task/files_struct/file.
>          * Otherwise, it does not hold any reference.
> --
> 2.24.1
>
