Message-ID: <2b90f131-5cb0-3c67-ea2e-f2c66ad918a7@fb.com>
Date: Fri, 4 Dec 2020 00:01:53 -0800
From: Yonghong Song <yhs@...com>
To: Jonathan Lemon <jonathan.lemon@...il.com>,
<netdev@...r.kernel.org>, <ast@...nel.org>, <daniel@...earbox.net>
CC: <kernel-team@...com>
Subject: Re: [PATCH v2 bpf-next] bpf: increment and use correct thread
iterator
On 12/3/20 7:43 PM, Jonathan Lemon wrote:
> From: Jonathan Lemon <bsd@...com>
Could you explain in the commit log what problem this patch
tries to solve? What bad things could happen without this patch?
>
> If unable to obtain the file structure for the current task,
> proceed to the next task number after the one returned from
> task_seq_get_next(), instead of the next task number from the
> original iterator.
This seems a correct change. The current code should still work,
but it may do some redundant/unnecessary work in the kernel.
This only happens when a task does not have any file, so I am
not sure whether this is the culprit for the problem this
patch tries to address.
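To make the redundant-work point concrete, below is a small stand-alone
user-space model of that retry loop (the tid layout, helper names and
numbers are all made up for illustration; this is not the kernel code
itself). With the original curr_tid = ++(info->tid), every retry falls
back behind the tid that task_seq_get_next() already reached, so the
same tids get probed again; with curr_tid = curr_tid + 1 the walk only
moves forward:

    /* Toy model (not kernel code; all tids invented) of the retry loop
     * in task_file_seq_get_next().  tid 8 "exists" but has no
     * files_struct; tid 10 is the next task that does.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_TID 16

    static unsigned int probes;	/* total tids examined */

    static bool tid_exists(unsigned int tid)    { return tid == 8 || tid == 10; }
    static bool tid_has_files(unsigned int tid) { return tid == 10; }

    /* Models task_seq_get_next(): advance *tid to the next existing task. */
    static bool seq_get_next(unsigned int *tid)
    {
    	for (; *tid < MAX_TID; ++*tid) {
    		probes++;
    		if (tid_exists(*tid))
    			return true;
    	}
    	return false;
    }

    static void run(bool patched)
    {
    	unsigned int info_tid = 5;		/* saved position, like info->tid */
    	unsigned int curr_tid = info_tid;

    	probes = 0;
    again:
    	if (!seq_get_next(&curr_tid))
    		return;
    	if (!tid_has_files(curr_tid)) {
    		/* original: restart just past the saved info->tid, which
    		 * may be well behind curr_tid by now;
    		 * patched:  restart just past the tid we actually reached.
    		 */
    		curr_tid = patched ? curr_tid + 1 : ++info_tid;
    		goto again;
    	}
    	printf("%s: found tid %u with files after %u probes\n",
    	       patched ? "patched " : "original", curr_tid, probes);
    }

    int main(void)
    {
    	run(false);	/* original: re-probes tids 6..8 on every retry */
    	run(true);	/* patched: each tid is probed at most once */
    	return 0;
    }

On this made-up layout the original restart policy ends up probing 12
tids before reaching tid 10, while the patched one probes 6, i.e. the
result is the same either way, only the amount of work differs.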
>
> Use thread_group_leader() instead of comparing tgid vs pid, which
> might be racy.
I see
static inline bool thread_group_leader(struct task_struct *p)
{
return p->exit_signal >= 0;
}
I am not sure whether thread_group_leader(task) is equivalent
to task->tgid == task->pid. Is there any documentation or explanation?
Could you also explain why the task->tgid != task->pid check in the
original code could be racy?
>
> Only obtain the task reference count at the end of the RCU section
> instead of repeatedly obtaining/releasing it when iterating through
> a thread group.
I think this is an optimization and not a correctness fix.
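For anyone reading along, the reference-counting change in the first
hunk below, condensed (comments mine, unrelated lines omitted), is
roughly:

    /* before: every candidate task gets a reference via get_pid_task(),
     * and it is dropped again whenever the task is skipped as a
     * duplicate of its group leader:
     */
    task = get_pid_task(pid, PIDTYPE_PID);
    ...
    put_task_struct(task);      /* on the skip_if_dup_files path */

    /* after: the task is looked up with pid_task() while still under
     * rcu_read_lock(), without taking a reference, and a single
     * get_task_struct() is done only on the task that is actually
     * returned:
     */
    task = pid_task(pid, PIDTYPE_PID);
    ...
    get_task_struct(task);

so skipped thread-group members no longer cost a get/put pair each.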
>
> Fixes: eaaacd23910f ("bpf: Add task and task/file iterator targets")
> Fixes: 203d7b054fc7 ("bpf: Avoid iterating duplicated files for task_file iterator")
>
> Signed-off-by: Jonathan Lemon <jonathan.lemon@...il.com>
> ---
> kernel/bpf/task_iter.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
> index 0458a40edf10..66a52fcf589a 100644
> --- a/kernel/bpf/task_iter.c
> +++ b/kernel/bpf/task_iter.c
> @@ -33,17 +33,17 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
> pid = find_ge_pid(*tid, ns);
> if (pid) {
> *tid = pid_nr_ns(pid, ns);
> - task = get_pid_task(pid, PIDTYPE_PID);
> + task = pid_task(pid, PIDTYPE_PID);
> if (!task) {
> ++*tid;
> goto retry;
> - } else if (skip_if_dup_files && task->tgid != task->pid &&
> + } else if (skip_if_dup_files && !thread_group_leader(task) &&
> task->files == task->group_leader->files) {
> - put_task_struct(task);
> task = NULL;
> ++*tid;
> goto retry;
> }
> + get_task_struct(task);
> }
> rcu_read_unlock();
>
> @@ -164,7 +164,7 @@ task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info)
> curr_files = get_files_struct(curr_task);
> if (!curr_files) {
> put_task_struct(curr_task);
> - curr_tid = ++(info->tid);
> + curr_tid = curr_tid + 1;
> info->fd = 0;
> goto again;
> }
>