Date:   Tue, 12 Oct 2021 20:24:19 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Shakeel Butt <shakeelb@...gle.com>
Cc:     Vasily Averin <vvs@...tuozzo.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Roman Gushchin <guro@...com>,
        Uladzislau Rezki <urezki@...il.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Cgroups <cgroups@...r.kernel.org>, Linux MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, kernel@...nvz.org
Subject: Re: [PATCH mm v3] memcg: enable memory accounting in __alloc_pages_bulk

On Tue 12-10-21 09:08:38, Shakeel Butt wrote:
> On Tue, Oct 12, 2021 at 8:36 AM Michal Hocko <mhocko@...e.com> wrote:
> >
> > On Tue 12-10-21 17:58:21, Vasily Averin wrote:
> > > Enable memory accounting for bulk page allocator.
> >
> > ENOCHANGELOG
> >
> > And I have to say I am not very happy about the solution. It adds very
> > tricky code that splits the different charging steps apart.
> >
> > Would it be just too inefficient to charge page by page once all pages
> > have already been taken off the pcp lists? The bulk should be small, so
> > this shouldn't really cause massive problems. I mean something like
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index b37435c274cf..8bcd69195ef5 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -5308,6 +5308,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
> >
> >         local_unlock_irqrestore(&pagesets.lock, flags);
> >
> > +       if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT)) {
> > +               /* charge pages here */
> > +       }
> 
> It is not that simple because __alloc_pages_bulk only allocates pages
> for the empty slots in the page_array provided by the caller.
> 
> The failure handling for post-charging would be more complicated.
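
For concreteness, a minimal caller-side sketch (not from this thread) of
the page-by-page post-charging idea, including the failure handling
mentioned above. It assumes the simplest case, where page_array starts
out all NULL so every populated slot was filled by this call;
alloc_pages_bulk_array(), memcg_kmem_enabled(), __memcg_kmem_charge_page()
and __free_pages() are existing kernel APIs circa v5.15, while
bulk_alloc_accounted() itself is hypothetical:

#include <linux/gfp.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: bulk-allocate into an all-NULL page_array and
 * charge each returned page to the current memcg afterwards.  If a
 * charge fails, free the uncharged tail so the caller never sees
 * unaccounted __GFP_ACCOUNT pages.  Returns the number of pages that
 * were both allocated and charged.
 */
static unsigned long bulk_alloc_accounted(gfp_t gfp, unsigned long nr_pages,
					  struct page **page_array)
{
	unsigned long nr, i, j;

	nr = alloc_pages_bulk_array(gfp, nr_pages, page_array);

	if (!memcg_kmem_enabled() || !(gfp & __GFP_ACCOUNT))
		return nr;

	for (i = 0; i < nr; i++) {
		if (__memcg_kmem_charge_page(page_array[i], gfp, 0))
			break;
	}

	/* If charging stopped early at slot i, drop the uncharged tail. */
	for (j = i; j < nr; j++) {
		__free_pages(page_array[j], 0);
		page_array[j] = NULL;
	}

	return i;
}

Whether charging outside the IRQ-disabled section like this is cheap
enough for the intended users is exactly the open question in this
thread.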

If this is really that complicated (I haven't tried) then it would be
much simpler to skip the bulk allocator entirely for __GFP_ACCOUNT
rather than add tricky code. The bulk allocator is meant for ultra hot
paths, and memcg charging, along with the reclaim it can trigger,
doesn't really fit into that model anyway. Or are there any actual
users who really need the bulk allocator optimization and also need
memcg accounting?
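
As a reference point only (this is not a patch from the thread), that
bail-out could be as small as a single check near the top of
__alloc_pages_bulk(), reusing the function's existing 'failed:' label,
which already falls back to one regular __alloc_pages() call; the exact
placement relative to the surrounding checks is approximated here:

	/*
	 * Sketch: memcg charging (and the reclaim it may trigger) does
	 * not fit the ultra-hot-path model of the bulk allocator, so
	 * send accounted allocations to the single-page fallback.
	 */
	if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT))
		goto failed;
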
-- 
Michal Hocko
SUSE Labs
