Message-ID: <eca39189-9258-b1cc-0a1d-a0d7e6027861@fb.com>
Date:   Fri, 20 May 2022 09:16:08 -0700
From:   Yonghong Song <yhs@...com>
To:     Yosry Ahmed <yosryahmed@...gle.com>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>, Hao Luo <haoluo@...gle.com>,
        Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Shuah Khan <shuah@...nel.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Michal Hocko <mhocko@...nel.org>,
        Stanislav Fomichev <sdf@...gle.com>,
        David Rientjes <rientjes@...gle.com>,
        Greg Thelen <gthelen@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        Cgroups <cgroups@...r.kernel.org>
Subject: Re: [PATCH bpf-next v1 2/5] cgroup: bpf: add cgroup_rstat_updated()
 and cgroup_rstat_flush() kfuncs



On 5/20/22 9:08 AM, Yosry Ahmed wrote:
> On Fri, May 20, 2022 at 8:15 AM Yonghong Song <yhs@...com> wrote:
>>
>>
>>
>> On 5/19/22 6:21 PM, Yosry Ahmed wrote:
>>> Add cgroup_rstat_updated() and cgroup_rstat_flush() kfuncs to bpf
>>> tracing programs. bpf programs that make use of rstat can use these
>>> functions to inform rstat when they update stats for a cgroup, and when
>>> they need to flush the stats.
>>>
>>> Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
>>> ---
>>>    kernel/cgroup/rstat.c | 35 ++++++++++++++++++++++++++++++++++-
>>>    1 file changed, 34 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
>>> index e7a88d2600bd..a16a851bc0a1 100644
>>> --- a/kernel/cgroup/rstat.c
>>> +++ b/kernel/cgroup/rstat.c
>>> @@ -3,6 +3,11 @@
>>>
>>>    #include <linux/sched/cputime.h>
>>>
>>> +#include <linux/bpf.h>
>>> +#include <linux/btf.h>
>>> +#include <linux/btf_ids.h>
>>> +
>>> +
>>>    static DEFINE_SPINLOCK(cgroup_rstat_lock);
>>>    static DEFINE_PER_CPU(raw_spinlock_t, cgroup_rstat_cpu_lock);
>>>
>>> @@ -141,7 +146,12 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
>>>        return pos;
>>>    }
>>>
>>> -/* A hook for bpf stat collectors to attach to and flush their stats */
>>> +/*
>>> + * A hook for bpf stat collectors to attach to and flush their stats.
>>> + * Together with providing bpf kfuncs for cgroup_rstat_updated() and
>>> + * cgroup_rstat_flush(), this enables a complete workflow where bpf progs that
>>> + * collect cgroup stats can integrate with rstat for efficient flushing.
>>> + */
>>>    __weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
>>>                                     struct cgroup *parent, int cpu)
>>>    {
>>> @@ -476,3 +486,26 @@ void cgroup_base_stat_cputime_show(struct seq_file *seq)
>>>                   "system_usec %llu\n",
>>>                   usage, utime, stime);
>>>    }
>>> +
>>> +/* Add bpf kfuncs for cgroup_rstat_updated() and cgroup_rstat_flush() */
>>> +BTF_SET_START(bpf_rstat_check_kfunc_ids)
>>> +BTF_ID(func, cgroup_rstat_updated)
>>> +BTF_ID(func, cgroup_rstat_flush)
>>> +BTF_SET_END(bpf_rstat_check_kfunc_ids)
>>> +
>>> +BTF_SET_START(bpf_rstat_sleepable_kfunc_ids)
>>> +BTF_ID(func, cgroup_rstat_flush)
>>> +BTF_SET_END(bpf_rstat_sleepable_kfunc_ids)
>>> +
>>> +static const struct btf_kfunc_id_set bpf_rstat_kfunc_set = {
>>> +     .owner          = THIS_MODULE,
>>> +     .check_set      = &bpf_rstat_check_kfunc_ids,
>>> +     .sleepable_set  = &bpf_rstat_sleepable_kfunc_ids,
>>
>> There is a compilation error here:
>>
>> kernel/cgroup/rstat.c:503:3: error: ‘const struct btf_kfunc_id_set’ has
>> no member named ‘sleepable_set’; did you mean ‘release_set’?
>>       503 |  .sleepable_set = &bpf_rstat_sleepable_kfunc_ids,
>>           |   ^~~~~~~~~~~~~
>>           |   release_set
>>     kernel/cgroup/rstat.c:503:19: warning: excess elements in struct
>> initializer
>>       503 |  .sleepable_set = &bpf_rstat_sleepable_kfunc_ids,
>>           |                   ^
>>     kernel/cgroup/rstat.c:503:19: note: (near initialization for
>> ‘bpf_rstat_kfunc_set’)
>>     make[3]: *** [scripts/Makefile.build:288: kernel/cgroup/rstat.o] Error 1
>>
>> Please fix.
> 
> This patch series is rebased on top of 2 patches in the mailing list:
> - bpf/btf: also allow kfunc in tracing and syscall programs
> - btf: Add a new kfunc set which allows to mark a function to be
>    sleepable
> 
> I specified this in the cover letter; do I need to do something else
> in this situation? Re-send the patches as part of my series?

At least put links to the above two patches in the cover letter?
That way, people can easily find them to double-check.

> 
> 
> 
>>
>>> +};
>>> +
>>> +static int __init bpf_rstat_kfunc_init(void)
>>> +{
>>> +     return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
>>> +                                      &bpf_rstat_kfunc_set);
>>> +}
>>> +late_initcall(bpf_rstat_kfunc_init);
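
For context, here is a minimal sketch of how a tracing BPF program might use
these kfuncs end to end. It is hypothetical: the vfs_read attach point, the map
names, and the per-cgroup read counter are illustrative assumptions; only the
kfunc signatures, the fentry hook on bpf_rstat_flush(), and the sleepable
marking of cgroup_rstat_flush() come from the patch above.

/* SPDX-License-Identifier: GPL-2.0 */
/* Hypothetical example: count vfs_read() calls per cgroup and fold the
 * per-cpu counts into per-cgroup totals through rstat.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* kfuncs registered for BPF_PROG_TYPE_TRACING by the patch above */
void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;
void cgroup_rstat_flush(struct cgroup *cgrp) __ksym;

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);	/* cgroup id */
	__type(value, __u64);	/* per-cpu, not-yet-flushed count */
} pending SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);	/* cgroup id */
	__type(value, __u64);	/* flushed total */
} totals SEC(".maps");

static __u64 *lookup_or_init(void *map, __u64 *key)
{
	__u64 zero = 0, *val;

	val = bpf_map_lookup_elem(map, key);
	if (val)
		return val;
	bpf_map_update_elem(map, key, &zero, BPF_NOEXIST);
	return bpf_map_lookup_elem(map, key);
}

/* Updater side: bump the current cgroup's per-cpu counter and tell rstat
 * that this (cgroup, cpu) pair has pending updates.
 */
SEC("fentry/vfs_read")
int BPF_PROG(count_read, struct file *file, char *buf, size_t count)
{
	struct task_struct *curr = bpf_get_current_task_btf();
	struct cgroup *cgrp = curr->cgroups->dfl_cgrp;
	__u64 key = cgrp->kn->id, *cnt;

	cnt = lookup_or_init(&pending, &key);
	if (cnt) {
		*cnt += 1;
		cgroup_rstat_updated(cgrp, bpf_get_smp_processor_id());
	}
	return 0;
}

/* Flusher side: rstat calls bpf_rstat_flush() for every (cgroup, cpu) pair
 * with pending updates, so fold the per-cpu count into the total here.
 * (A real collector would also propagate the delta into @parent.)
 */
SEC("fentry/bpf_rstat_flush")
int BPF_PROG(rstat_flusher, struct cgroup *cgrp, struct cgroup *parent, int cpu)
{
	__u64 key = cgrp->kn->id, *pcpu, *total;

	pcpu = bpf_map_lookup_percpu_elem(&pending, &key, cpu);
	if (!pcpu)
		return 0;

	total = lookup_or_init(&totals, &key);
	if (total) {
		*total += *pcpu;
		*pcpu = 0;
	}
	return 0;
}

A reader that wants up-to-date numbers would call cgroup_rstat_flush(cgrp)
from a sleepable program before dumping "totals", which is why the patch
places that kfunc in the sleepable set.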
