Message-ID: <CAJuCfpGy_RrQBUy2yxvcZzAXO5cJU5BHxRko+b8p7wWLjQwXvA@mail.gmail.com>
Date: Wed, 31 Aug 2022 08:52:19 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Mel Gorman <mgorman@...e.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Kent Overstreet <kent.overstreet@...ux.dev>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Davidlohr Bueso <dave@...olabs.net>,
Matthew Wilcox <willy@...radead.org>,
"Liam R. Howlett" <liam.howlett@...cle.com>,
David Vernet <void@...ifault.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Laurent Dufour <ldufour@...ux.ibm.com>,
Peter Xu <peterx@...hat.com>,
David Hildenbrand <david@...hat.com>,
Jens Axboe <axboe@...nel.dk>, mcgrof@...nel.org,
masahiroy@...nel.org, nathan@...nel.org, changbin.du@...el.com,
ytcoode@...il.com, Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Benjamin Segall <bsegall@...gle.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Christopher Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>, 42.hyeyoo@...il.com,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>, dvyukov@...gle.com,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <songmuchun@...edance.com>, arnd@...db.de,
jbaron@...mai.com, David Rientjes <rientjes@...gle.com>,
Minchan Kim <minchan@...gle.com>,
Kalesh Singh <kaleshsingh@...gle.com>,
kernel-team <kernel-team@...roid.com>,
linux-mm <linux-mm@...ck.org>, iommu@...ts.linux.dev,
kasan-dev@...glegroups.com, io-uring@...r.kernel.org,
linux-arch@...r.kernel.org, xen-devel@...ts.xenproject.org,
linux-bcache@...r.kernel.org, linux-modules@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 10/30] mm: enable page allocation tagging for
__get_free_pages and alloc_pages
On Wed, Aug 31, 2022 at 8:45 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Wed, Aug 31, 2022 at 3:11 AM Mel Gorman <mgorman@...e.de> wrote:
> >
> > On Tue, Aug 30, 2022 at 02:48:59PM -0700, Suren Baghdasaryan wrote:
> > > Redefine alloc_pages, __get_free_pages to record allocations done by
> > > these functions. Instrument deallocation hooks to record object freeing.
> > >
> > > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > > +#ifdef CONFIG_PAGE_ALLOC_TAGGING
> > > +
> > > #include <linux/alloc_tag.h>
> > > #include <linux/page_ext.h>
> > >
> > > @@ -25,4 +27,37 @@ static inline void pgalloc_tag_dec(struct page *page, unsigned int order)
> > > alloc_tag_sub(get_page_tag_ref(page), PAGE_SIZE << order);
> > > }
> > >
> > > +/*
> > > + * Redefinitions of the common page allocators/destructors
> > > + */
> > > +#define pgtag_alloc_pages(gfp, order) \
> > > +({ \
> > > + struct page *_page = _alloc_pages((gfp), (order)); \
> > > + \
> > > + if (_page) \
> > > + alloc_tag_add(get_page_tag_ref(_page), PAGE_SIZE << (order));\
> > > + _page; \
> > > +})
> > > +
> >
> > Instead of renaming alloc_pages, why is the tagging not done in
> > __alloc_pages()? At least __alloc_pages_bulk() is also missed. The branch
> > can be guarded with IS_ENABLED.
>
> Hmm. Assuming all the other allocators using __alloc_pages are inlined, that
> should work. I'll try that and, if it works, incorporate it in the next
> respin.
> Thanks!
>
> I don't think IS_ENABLED is required because the tagging functions are already
> defined as empty if the appropriate configs are not enabled. Unless I
> misunderstood your note.
>
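To make sure I read your suggestion correctly, what I have in mind is
roughly the below (untested sketch; pgalloc_tag_inc() is just a name I'm
picking here as the counterpart of pgalloc_tag_dec() from this patch,
with an empty stub in the #else branch like the other helpers):

/*
 * Sketch only: accounting hook to be called from the core allocator
 * instead of wrapping alloc_pages()/__get_free_pages().
 * Would live in include/linux/pgalloc_tag.h under CONFIG_PAGE_ALLOC_TAGGING,
 * next to pgalloc_tag_dec(), and compile away when the config is off.
 */
static inline void pgalloc_tag_inc(struct page *page, unsigned int order)
{
	if (page)
		alloc_tag_add(get_page_tag_ref(page), PAGE_SIZE << order);
}

plus a single pgalloc_tag_inc(page, order) call right before
__alloc_pages() returns its page, and one per page that
__alloc_pages_bulk() adds to the list, so the bulk allocator is covered
as well.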
> >
> > > +#define pgtag_get_free_pages(gfp_mask, order) \
> > > +({ \
> > > + struct page *_page; \
> > > + unsigned long _res = _get_free_pages((gfp_mask), (order), &_page);\
> > > + \
> > > + if (_res) \
> > > + alloc_tag_add(get_page_tag_ref(_page), PAGE_SIZE << (order));\
> > > + _res; \
> > > +})
> > > +
> >
> > Similar, the tagging could happen in a core function instead of a wrapper.
Ack.
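If the accounting is done inside __alloc_pages() as discussed above, I
believe __get_free_pages() would not need the extra struct page *
plumbing at all, since it already goes through alloc_pages(). It could
then stay essentially as it is in mainline (sketch; the comment is only
there to show where the tagging ends up):

unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order)
{
	struct page *page;

	/* accounting happens in __alloc_pages(), nothing extra needed here */
	page = alloc_pages(gfp_mask & ~__GFP_HIGHMEM, order);
	if (!page)
		return 0;
	return (unsigned long) page_address(page);
}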
> >
> > > +#else /* CONFIG_PAGE_ALLOC_TAGGING */
> > > +
> > > +#define pgtag_alloc_pages(gfp, order) _alloc_pages(gfp, order)
> > > +
> > > +#define pgtag_get_free_pages(gfp_mask, order) \
> > > + _get_free_pages((gfp_mask), (order), NULL)
> > > +
> > > +#define pgalloc_tag_dec(__page, __size) do {} while (0)
> > > +
> > > +#endif /* CONFIG_PAGE_ALLOC_TAGGING */
> > > +
> > > #endif /* _LINUX_PGALLOC_TAG_H */
> > > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > > index b73d3248d976..f7e6d9564a49 100644
> > > --- a/mm/mempolicy.c
> > > +++ b/mm/mempolicy.c
> > > @@ -2249,7 +2249,7 @@ EXPORT_SYMBOL(vma_alloc_folio);
> > > * flags are used.
> > > * Return: The page on success or NULL if allocation fails.
> > > */
> > > -struct page *alloc_pages(gfp_t gfp, unsigned order)
> > > +struct page *_alloc_pages(gfp_t gfp, unsigned int order)
> > > {
> > > struct mempolicy *pol = &default_policy;
> > > struct page *page;
> > > @@ -2273,7 +2273,7 @@ struct page *alloc_pages(gfp_t gfp, unsigned order)
> > >
> > > return page;
> > > }
> > > -EXPORT_SYMBOL(alloc_pages);
> > > +EXPORT_SYMBOL(_alloc_pages);
> > >
> > > struct folio *folio_alloc(gfp_t gfp, unsigned order)
> > > {
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index e5486d47406e..165daba19e2a 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -763,6 +763,7 @@ static inline bool pcp_allowed_order(unsigned int order)
> > >
> > > static inline void free_the_page(struct page *page, unsigned int order)
> > > {
> > > +
> > > if (pcp_allowed_order(order)) /* Via pcp? */
> > > free_unref_page(page, order);
> > > else
> >
> > Spurious wide-space change.
Ack.
> >
> > --
> > Mel Gorman
> > SUSE Labs