Message-ID: <zjvmhbfzxpv4ujc5v7c4aojpsecmaqrznyd34lukst57kx5h43@2necqcieafy5>
Date: Wed, 28 Aug 2024 13:16:14 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>, David Rientjes <rientjes@...gle.com>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, Eric Dumazet <edumazet@...gle.com>,
"David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>, cgroups@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH v2] memcg: add charging of already allocated slab objects
On Wed, Aug 28, 2024 at 12:42:02PM GMT, Yosry Ahmed wrote:
> On Wed, Aug 28, 2024 at 12:14 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> >
> > On Tue, Aug 27, 2024 at 05:34:24PM GMT, Yosry Ahmed wrote:
> > > On Tue, Aug 27, 2024 at 4:52 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> > [...]
> > > > +
> > > > +#define KMALLOC_TYPE (SLAB_KMALLOC | SLAB_CACHE_DMA | \
> > > > + SLAB_ACCOUNT | SLAB_RECLAIM_ACCOUNT)
> > > > +
> > > > +static __fastpath_inline
> > > > +bool memcg_slab_post_charge(void *p, gfp_t flags)
> > > > +{
> > > > + struct slabobj_ext *slab_exts;
> > > > + struct kmem_cache *s;
> > > > + struct folio *folio;
> > > > + struct slab *slab;
> > > > + unsigned long off;
> > > > +
> > > > + folio = virt_to_folio(p);
> > > > + if (!folio_test_slab(folio)) {
> > > > + return __memcg_kmem_charge_page(folio_page(folio, 0), flags,
> > > > + folio_order(folio)) == 0;
> > >
> > > Will this charge the folio again if it was already charged? It seems
> > > like we avoid this for already charged slab objects below but not
> > > here.
> > >
> >
> > Thanks for catching this. It's an easy fix and will do in v3.
> >
> > > > + }
> > > > +
> > > > + slab = folio_slab(folio);
> > > > + s = slab->slab_cache;
> > > > +
> > > > + /* Ignore KMALLOC_NORMAL cache to avoid circular dependency. */
> > > > + if ((s->flags & KMALLOC_TYPE) == SLAB_KMALLOC)
> > > > + return true;
> > >
> > > Would it be clearer to check if the slab cache is one of
> > > kmalloc_caches[KMALLOC_NORMAL]? This should be doable by comparing the
> > > address of the slab cache with the addresses of
> > > kmalloc_cache[KMALLOC_NORMAL] (perhaps in a helper). I need to refer
> > > to your reply to Roman to understand why this works.
> > >
> >
> > Do you mean looping over kmalloc_caches[KMALLOC_NORMAL] and comparing
> > the given slab cache address? Nah man why do long loop of pointer
> > comparisons when we can simply check the flag of the given kmem cache.
> > Also this array will grow with the recently proposed random kmalloc
> > caches.
>
> Oh I thought kmalloc_caches[KMALLOC_NORMAL] is an array of the actual
> struct kmem_cache objects, so I thought we can just check if:
> s >= kmalloc_caches[KMALLOC_NORMAL][0] &&
> s <= kmalloc_caches[KMALLOC_NORMAL][LAST_INDEX]
>
> I just realized it's an array of pointers, so we would need to loop
> and compare them.
>
> I still find the flags comparisons unclear and not very future-proof
> tbh. I think we can just store the type in struct kmem_cache? I think
> there are multiple holes there.
Do you mean adding a new SLAB_KMALLOC_NORMAL? I will wait for the SLAB
maintainers for their opinion on that. BTW this kind of check already
exists in the kernel, particularly for gfp flags.