Message-ID: <dec87ef2-07c7-93ec-7a27-1902f257fb18@fb.com>
Date: Tue, 18 Aug 2020 10:07:14 -0700
From: Yonghong Song <yhs@...com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
CC: bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Rik van Riel <riel@...riel.com>
Subject: Re: [PATCH bpf 1/2] bpf: fix a rcu_sched stall issue with bpf
task/task_file iterator
On 8/18/20 9:48 AM, Andrii Nakryiko wrote:
> On Tue, Aug 18, 2020 at 9:26 AM Yonghong Song <yhs@...com> wrote:
>>
>> In our production system, we observed rcu stalls when
>> `bpftool prog` is running.
>
> [...]
>
>>
>> Note that `bpftool prog` actually calls a task_file bpf iterator
>> program to establish an association between prog/map/link/btf anon
>> files and processes.
>>
>> In the case where the above rcu stall occurred, we had a process
>> with 1587 tasks, each task having roughly 81305 files.
>> This implied 129 million bpf prog invocations. Unfortunately, none of
>> these files are prog/map/link/btf files, so the bpf iterator/prog needs
>> to traverse all these files and is not able to return to user space,
>> since there is no seq_file buffer overflow.
>>
>> The fix is to add cond_resched() while traversing tasks
>> and files. Voluntarily releasing the cpu gives other tasks, e.g.,
>> the rcu resched kthread, a chance to run.
>
> What are the performance implications of doing this for every task
> and/or file? Have you benchmarked `bpftool prog` before/after? What
> was the difference?
cond_resched() internally uses should_resched() to check whether
rescheduling should be done or not. Most kernel callers (if not
all) just call cond_resched() directly, without additional custom
logic to guess when to call it. I suppose should_resched() should
be cheap enough already. Maybe Rik can comment here.
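
For reference, on a non-preemptible kernel cond_resched() boils down
to roughly the following (simplified sketch of _cond_resched() in
kernel/sched/core.c; exact details vary by kernel config):

int __sched _cond_resched(void)
{
        /* cheap check: essentially a need-resched flag test */
        if (should_resched(0)) {
                preempt_schedule_common();
                return 1;
        }
        /* also reports an RCU quiescent state when needed */
        rcu_all_qs();
        return 0;
}

In the common case where no reschedule is pending, this is just a
flag test plus the quiescent-state report, so calling it once per
task/file should be cheap.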
Regarding the measurement, I used `strace -T ./bpftool prog` to
time how long each 'read' syscall took to complete, with and
without my patch.
e.g.,
read(7,
"#\0\0\0\322\23\0\0tcpeventd\0\0\0\0\0\0\0)\0\0\0\322\23\0\0"..., 4096)
= 4080 <27.094797>
or
read(7,
"#\0\0\0\322\23\0\0tcpeventd\0\0\0\0\0\0\0)\0\0\0\322\23\0\0"..., 4096)
= 4080 <34.281563>
The time varies a lot across different runs. But based on
my observations, with and without cond_resched(), the range
of read() elapsed times is roughly the same.
>
> I wonder if it's possible to amortize those cond_resched() and call
> them only ever so often, based on CPU time or number of files/tasks
> processed, if cond_resched() does turn out to slow bpf_iter down.
>
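If per-call cond_resched() did turn out to matter, amortizing could
look something like the following (untested illustration; the
RESCHED_BATCH constant and the counter are made up, and in practice
the counter would live in bpf_iter_seq_task_file_info rather than a
static):

#define RESCHED_BATCH 256

static unsigned int resched_cnt;

static inline void maybe_resched(void)
{
        /* check for rescheduling only once every RESCHED_BATCH
         * calls instead of on every call
         */
        if (++resched_cnt >= RESCHED_BATCH) {
                resched_cnt = 0;
                cond_resched();
        }
}

But since the measurements above do not show a difference, the
simple per-call version seems fine.
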
>>
>> Cc: Paul E. McKenney <paulmck@...nel.org>
>> Signed-off-by: Yonghong Song <yhs@...com>
>> ---
>> kernel/bpf/task_iter.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
>> index f21b5e1e4540..885b14cab2c0 100644
>> --- a/kernel/bpf/task_iter.c
>> +++ b/kernel/bpf/task_iter.c
>> @@ -27,6 +27,8 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
>> struct task_struct *task = NULL;
>> struct pid *pid;
>>
>> + cond_resched();
>> +
>> rcu_read_lock();
>> retry:
>> pid = idr_get_next(&ns->idr, tid);
>> @@ -137,6 +139,8 @@ task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info,
>> struct task_struct *curr_task;
>> int curr_fd = info->fd;
>>
>> + cond_resched();
>> +
>> /* If this function returns a non-NULL file object,
>> * it held a reference to the task/files_struct/file.
>> * Otherwise, it does not hold any reference.
>> --
>> 2.24.1
>>