Message-ID: <20200116140545.GT19428@dhcp22.suse.cz>
Date: Thu, 16 Jan 2020 15:05:45 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Vlastimil Babka <vbabka@...e.cz>,
Dan Carpenter <dan.carpenter@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Lee Schermerhorn <lee.schermerhorn@...com>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
syzbot <syzbot+e64a13c5369a194d67df@...kaller.appspotmail.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Al Viro <viro@...iv.linux.org.uk>, yang.shi@...ux.alibaba.com
Subject: Re: [PATCH] mm/mempolicy.c: Fix out of bounds write in
mpol_parse_str()
On Thu 16-01-20 13:41:39, Dmitry Vyukov wrote:
> On Thu, Jan 16, 2020 at 12:51 PM Michal Hocko <mhocko@...nel.org> wrote:
> >
> > On Thu 16-01-20 11:13:09, Dmitry Vyukov wrote:
> > > On Thu, Jan 16, 2020 at 8:39 AM Michal Hocko <mhocko@...nel.org> wrote:
> > [...]
> > > > > $ grep vmalloc\( net/netfilter/*.c
> > > > > net/netfilter/nf_tables_api.c: return kvmalloc(alloc, GFP_KERNEL);
> > > > > net/netfilter/x_tables.c: xt[af].compat_tab = vmalloc(mem);
> > > > > net/netfilter/x_tables.c: mem = vmalloc(len);
> > > > > net/netfilter/x_tables.c: info = kvmalloc(sz, GFP_KERNEL_ACCOUNT);
> > > > > net/netfilter/xt_hashlimit.c: /* FIXME: don't use vmalloc() here or anywhere else -HW */
> > > > > net/netfilter/xt_hashlimit.c: hinfo = vmalloc(struct_size(hinfo, hash, size));
> > > > >
> > > > > These are not bound to processes/threads as namespaces are orthogonal to tasks.
> > > >
> > > > I cannot really comment on those. This is for networking people to
> > > > examine and find out whether they allow runaway allocations by an
> > > > untrusted user.
> > >
> > > Unless I am missing an elephant in this whole picture, kernel code
> > > contains 20K+ unaccounted allocations, and if I am not mistaken only
> > > a few of them have been audited and are intentionally unaccounted
> > > rather than unaccounted just because that's the default. So if we
> > > want DoS protection, it's really up to every kernel
> > > developer/maintainer to audit and fix these allocation sites. And
> > > since it is all one shared kernel, a single unaccounted allocation
> > > may compromise the whole kernel. I assume we would need something
> > > like GFP_UNACCOUNTED to mark audited allocations that don't need
> > > accounting and then slowly reduce the number of allocations that
> > > carry neither ACCOUNTED nor UNACCOUNTED.
> >
> > This is the original approach, which led to all sorts of problems, so
> > we switched from opt-out to opt-in. Have a look at a9bb7e620efd ("memcg:
> > only account kmem allocations marked as __GFP_ACCOUNT").
> > Our protection will never be perfect because that would require
> > designing the system with the protection in mind.
>
> I don't mean to switch the default. I mean adding a way to distinguish
> between an allocation that was reviewed and is intentionally unaccounted
> and an unreviewed allocation that is unaccounted just because that's the
> default. This would allow us to progress incrementally, rather than
> redoing the same work again and again.
I am not really sure this would be viable just because of the sheer
number of allocations we have in the kernel. But I would be more than
happy to be proven wrong ;)
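
For concreteness, a minimal sketch of the opt-in scheme above (the helper
and variable names are made up for illustration; __GFP_ACCOUNT and
GFP_KERNEL_ACCOUNT are real, while the GFP_UNACCOUNTED marker discussed
above does not exist in the kernel):

#include <linux/slab.h>		/* kmalloc, kfree */
#include <linux/gfp.h>		/* GFP_KERNEL, GFP_KERNEL_ACCOUNT */

static void *alloc_example(size_t size)
{
	/* Default: not charged to any memcg, invisible to its limits. */
	void *unaccounted = kmalloc(size, GFP_KERNEL);

	/*
	 * Opt-in: __GFP_ACCOUNT (folded into GFP_KERNEL_ACCOUNT) charges
	 * the allocating task's memcg, so a runaway user is capped by
	 * the memcg hard limit.
	 */
	void *accounted = kmalloc(size, GFP_KERNEL_ACCOUNT);

	kfree(unaccounted);
	return accounted;
}
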
> > > > > Somebody told me that it's not good to use GFP_ACCOUNT if the
> > > > > allocation is not tied to the lifetime of the process. Is it still
> > > > > true?
> > > >
> > > > Those are more tricky. Mostly because there is no way to reclaim the
> > > > memory once the hard limit is hit. Even the memcg oom killer will not
> > > > help much. So care should be taken when adding GFP_ACCOUNT for those.
> > > > On the other hand it would at least prevent unbounded allocations, so
> > > > the DoS would be reduced to the hard-limited memcg.
> > >
> > > What exactly is this care in practice?
> > > It seems that in a148ce15375fc664ad64762c751c0c2aecb2cafe you just
> > > added it and the allocation is not tied to the process. At least I
> > > don't see any explanation as to why that one is safe, while accounting
> > > other similar allocations is not...
> >
> > My memory is dim but AFAIR the memcg accounting was a compromise between
> > usability and whole-system stability. Really large tables could be
> > allocated by untrusted users and that was seen as a _real_ problem. The
> > previous solution added _some_ protection but led to regressions even
> > for reasonable cases. Memcg accounting was deemed a reasonable middle
> > ground.
> >
> > The result is that a completely depleted memcg requires admin
> > intervention and the admin has to know what to do to tear it down. The
> > kernel cannot do anything about that. And that is the trickiness I've
> > had in mind. Listing page tables is something admins can do quite
> > easily, right? There are many other objects which are much harder to
> > act on. E.g. what are you going to do with tmpfs mounts? Are you going
> > to remove them and cause potential data loss? That being said, some
> > objects really have to be limited even before they start consuming
> > memory IMHO.
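
To make the trade-off concrete, here is a rough sketch of accounting a
table whose lifetime is bound to a namespace rather than a task (the
struct and helper names are hypothetical, and this is not what any
particular commit did): once the caller's memcg hits its hard limit the
charge fails and the allocation returns NULL, so the damage is bounded by
the memcg, but nothing short of tearing down the owning namespace gives
the memory back.

#include <linux/mm.h>		/* kvmalloc */
#include <linux/overflow.h>	/* struct_size */
#include <linux/types.h>	/* struct hlist_head */

/* Hypothetical namespace-lived hash table. */
struct user_table {
	unsigned int size;
	struct hlist_head hash[];
};

static struct user_table *user_table_alloc(unsigned int size)
{
	struct user_table *t;

	/*
	 * GFP_KERNEL_ACCOUNT charges the caller's memcg. When the hard
	 * limit is hit the charge fails and we return NULL, so an
	 * untrusted user is capped by the memcg rather than the whole
	 * machine. The memcg OOM killer cannot reclaim the table; it
	 * stays charged until the owning namespace is torn down.
	 */
	t = kvmalloc(struct_size(t, hash, size), GFP_KERNEL_ACCOUNT);
	if (t)
		t->size = size;
	return t;
}
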
>
> Interesting. But there is really no admin today, or at least nothing
> should rely on one in any way. Either because of the scale (you have
> thousands/millions of machines and spending human time on each of them
> individually is not going to fly), and/or because there is nobody
> qualified enough around (e.g. who is the admin of a median Android
> phone? and what do they know about tearing down namespaces and
> mounts?), or because there is nobody interested enough (it's fun
> sometimes, but not always)...
If there is nobody in control then there should at least be a reasonably
safe default policy. This is what I mentioned earlier when saying that
not all objects can be easily accounted. There has to be a strategy
defined for corner cases. And user namespaces seem to really beg for
that when they allow you to control resources like tmpfs and the like.
> but I agree that retrofitting this level of resource
> control into a large existing complex system is close to impossible.
>
> Thanks a lot for bearing with me and answering my questions. I have a
> better understanding of this now.
I am glad I could help.
Thanks for giving some scary examples ;)
--
Michal Hocko
SUSE Labs