Message-ID: <ZwYt-GJfzMoozTOU@google.com>
Date: Wed, 9 Oct 2024 00:17:12 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Roman Gushchin <roman.gushchin@...ux.dev>, Song Liu <song@...nel.org>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	Andrii Nakryiko <andrii@...nel.org>,
	Martin KaFai Lau <martin.lau@...ux.dev>,
	Eduard Zingerman <eddyz87@...il.com>,
	Yonghong Song <yonghong.song@...ux.dev>,
	John Fastabend <john.fastabend@...il.com>,
	KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>,
	Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>, bpf@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Kees Cook <kees@...nel.org>
Subject: Re: [PATCH v4 bpf-next 2/3] mm/bpf: Add bpf_get_kmem_cache() kfunc

On Mon, Oct 07, 2024 at 02:57:08PM +0200, Vlastimil Babka wrote:
> On 10/4/24 11:25 PM, Roman Gushchin wrote:
> > On Fri, Oct 04, 2024 at 01:10:58PM -0700, Song Liu wrote:
> >> On Wed, Oct 2, 2024 at 11:10 AM Namhyung Kim <namhyung@...nel.org> wrote:
> >>>
> > >>> The bpf_get_kmem_cache() kfunc returns slab cache information
> > >>> for a virtual address, like virt_to_cache().  If the address
> > >>> points to a slab object, it returns a valid kmem_cache pointer;
> > >>> otherwise it returns NULL.
> > >>>
> > >>> It doesn't take a reference count on the kmem_cache, so the
> > >>> caller is responsible for managing access.  The intended use
> > >>> case for now is to symbolize locks in slab objects from the lock
> > >>> contention tracepoints.
> >>>
> >>> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
> >>> Acked-by: Roman Gushchin <roman.gushchin@...ux.dev> (mm/*)
> >>> Acked-by: Vlastimil Babka <vbabka@...e.cz> #mm/slab
> >>> Signed-off-by: Namhyung Kim <namhyung@...nel.org>
> 
> 
> So IIRC from our discussions with Namhyung and Arnaldo at LSF/MM I
> thought the perf use case was:
> 
> - at the beginning it iterates the kmem caches and stores anything of
> possible interest in bpf maps or somewhere - hence we have the iterator
> - during profiling, from an object it gets to a cache, but doesn't
> need to access the cache - it just stores the kmem_cache address in
> the perf record
> - after profiling itself, it uses the information in the maps from the
> first step together with the cache pointers from the second step to
> calculate whatever is necessary

Correct.
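
Concretely, the first step would look something like this.  A minimal,
untested sketch: it assumes the iterator from patch 1/3 exposes the
cache as ctx->s; struct cache_info, the slab_caches map and the program
name are made up for illustration:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* illustrative only: info snapshotted per cache before profiling */
struct cache_info {
	char name[32];
	unsigned int obj_size;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, u64);
	__type(value, struct cache_info);
} slab_caches SEC(".maps");

SEC("iter/kmem_cache")
int record_caches(struct bpf_iter__kmem_cache *ctx)
{
	struct kmem_cache *s = ctx->s;
	struct cache_info info = {};
	u64 key;

	if (!s)
		return 0;

	/* snapshot anything of possible interest into the map */
	bpf_probe_read_kernel_str(info.name, sizeof(info.name), s->name);
	info.obj_size = s->object_size;

	/* the kmem_cache pointer value is used only as a map key */
	key = (u64)s;
	bpf_map_update_elem(&slab_caches, &key, &info, BPF_ANY);
	return 0;
}

char _license[] SEC("license") = "GPL";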

> 
> So at no point should it be necessary to take a refcount on a kmem_cache?
> 
> But maybe "bpf_get_kmem_cache()" as implemented here is too generic
> given the above use case, and it should instead be implemented in a
> way that the pointer it returns cannot be used to access anything
> (which could be unsafe), but only as a bpf map key - so it should
> return e.g. an unsigned long instead?

Yep, this should work for my use case.  Maybe we wouldn't need the
iterator if bpf_get_kmem_cache() returned a valid pointer, since we
could get the necessary info at that moment.  But I think that would be
less efficient, as more work would need to be done at the event (lock
contention).  It'd be better to set up the necessary info in a map
before monitoring (using the iterator), and then just look up the map
with the kfunc while monitoring lock contention, as in the sketch below.
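
Continuing the sketch file above (again untested; it attaches to the
existing lock:contention_begin tracepoint, and the program name is made
up), the lookup during monitoring would be roughly:

#include <bpf/bpf_tracing.h>	/* for BPF_PROG */

/* kfunc from this patch; KF_RET_NULL, so the result must be checked */
extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym;

SEC("tp_btf/contention_begin")
int BPF_PROG(lock_contention, void *lock, unsigned int flags)
{
	struct kmem_cache *s;
	struct cache_info *info;
	u64 key;

	/* NULL means the lock is not embedded in a slab object */
	s = bpf_get_kmem_cache((u64)lock);
	if (!s)
		return 0;

	/* only a map lookup; the returned pointer is never dereferenced */
	key = (u64)s;
	info = bpf_map_lookup_elem(&slab_caches, &key);
	if (info)
		bpf_printk("contended lock in %s (object size %u)",
			   info->name, info->obj_size);
	return 0;
}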

Thanks,
Namhyung

> 
> >>> ---
> >>>  kernel/bpf/helpers.c |  1 +
> >>>  mm/slab_common.c     | 19 +++++++++++++++++++
> >>>  2 files changed, 20 insertions(+)
> >>>
> >>> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> >>> index 4053f279ed4cc7ab..3709fb14288105c6 100644
> >>> --- a/kernel/bpf/helpers.c
> >>> +++ b/kernel/bpf/helpers.c
> >>> @@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
> >>>  BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
> >>>  BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
> >>>  BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
> >>> +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
> >>>  BTF_KFUNCS_END(common_btf_ids)
> >>>
> >>>  static const struct btf_kfunc_id_set common_kfunc_set = {
> >>> diff --git a/mm/slab_common.c b/mm/slab_common.c
> >>> index 7443244656150325..5484e1cd812f698e 100644
> >>> --- a/mm/slab_common.c
> >>> +++ b/mm/slab_common.c
> >>> @@ -1322,6 +1322,25 @@ size_t ksize(const void *objp)
> >>>  }
> >>>  EXPORT_SYMBOL(ksize);
> >>>
> >>> +#ifdef CONFIG_BPF_SYSCALL
> >>> +#include <linux/btf.h>
> >>> +
> >>> +__bpf_kfunc_start_defs();
> >>> +
> >>> +__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
> >>> +{
> >>> +       struct slab *slab;
> >>> +
> >>> +       if (!virt_addr_valid(addr))
> >>> +               return NULL;
> >>> +
> >>> +       slab = virt_to_slab((void *)(long)addr);
> >>> +       return slab ? slab->slab_cache : NULL;
> >>> +}
> >>
> >> Do we need to hold a refcount on the slab_cache? Given
> >> we make this kfunc available everywhere, including
> >> sleepable contexts, I think it is necessary.
> > 
> > It's a really good question.
> > 
> > If the caller somehow owns the slab object, as in the example
> > provided in the series (current task), it's not necessary.
> > 
> > If a user can pass a random address, you're right, we need to
> > grab the slab_cache's refcnt. But then we also can't guarantee
> > that the object still belongs to the same slab_cache; the
> > function becomes racy by definition.
