Message-ID: <CALvZod4o0sA8CM961ZCCp-Vv+i6awFY0U07oJfXFDiVfFiaZfg@mail.gmail.com>
Date:   Thu, 23 May 2019 11:49:41 -0700
From:   Shakeel Butt <shakeelb@...gle.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Kernel Team <kernel-team@...com>
Subject: Re: xarray breaks thrashing detection and cgroup isolation

On Thu, May 23, 2019 at 11:37 AM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Thu, May 23, 2019 at 01:43:49PM -0400, Johannes Weiner wrote:
> > I noticed that recent upstream kernels don't account the xarray nodes
> > of the page cache to the allocating cgroup, like we used to do for the
> > radix tree nodes.
> >
> > This results in broken isolation for cgrouped apps, allowing them to
> > escape their containment and harm other cgroups and the system with an
> > excessive build-up of nonresident information.
> >
> > It also breaks thrashing/refault detection because the page cache
> > lives in a different domain than the xarray nodes, and so the shadow
> > shrinker can reclaim nonresident information way too early when there
> > isn't much cache in the root cgroup.
> >
> > I'm not quite sure how to fix this, since the xarray code doesn't seem
> > to have per-tree gfp flags anymore like the radix tree did. We cannot
> > add SLAB_ACCOUNT to the radix_tree_node_cachep slab cache. And the
> > xarray api doesn't seem to really support gfp flags, either (xas_nomem
> > does, but the optimistic internal allocations have fixed gfp flags).
>
> Would it be a problem to always add __GFP_ACCOUNT to the fixed flags?
> I don't really understand cgroups.
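
If I read that suggestion right, for the optimistic path it would mean
something like the below (a from-memory, heavily abbreviated sketch of
xas_alloc() in lib/xarray.c, not the actual upstream code):

static void *xas_alloc_sketch(struct xa_state *xas, unsigned int shift)
{
	/* The optimistic internal allocation uses fixed flags today: */
	gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN;

	/* The suggestion, as I read it, is to bake accounting into
	 * that fixed mask unconditionally: */
	gfp |= __GFP_ACCOUNT;

	return kmem_cache_alloc(radix_tree_node_cachep, gfp);
}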

Does the xarray cache allocated nodes, similar to the radix tree's:

static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };

Nodes handed out from that cache don't get the __GFP_ACCOUNT flag.
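
Roughly what I mean (a simplified sketch of the radix tree's node
allocation path, not the exact lib/radix-tree.c code):

static struct radix_tree_node *node_alloc_sketch(gfp_t gfp)
{
	/* The real code runs with preemption disabled here. */
	struct radix_tree_preload *rtp = this_cpu_ptr(&radix_tree_preloads);
	struct radix_tree_node *node;

	if (rtp->nr) {
		/* Preloaded node: allocated earlier with whatever gfp
		 * the preloader used, so the caller's gfp (including
		 * __GFP_ACCOUNT) is ignored on this path. */
		node = rtp->nodes;
		rtp->nodes = node->parent;	/* free list via ->parent */
		rtp->nr--;
		return node;
	}
	return kmem_cache_alloc(radix_tree_node_cachep, gfp);
}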

Also, some users of the xarray may not want __GFP_ACCOUNT. That's the
reason we passed __GFP_ACCOUNT for the page cache instead of
hard-coding it in the radix tree.
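
For reference, that per-tree opt-in looked something like this before
the xarray conversion (from memory, field names may be off):

static void init_sketch(struct address_space *mapping,
			struct radix_tree_root *other_tree)
{
	/* The page cache opted in to memcg accounting per tree: */
	INIT_RADIX_TREE(&mapping->i_pages, GFP_ATOMIC | __GFP_ACCOUNT);

	/* ...while other users sharing the same node slab cache
	 * stayed unaccounted: */
	INIT_RADIX_TREE(other_tree, GFP_KERNEL);
}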

Shakeel
