Message-ID: <20210728122448.lh2e3nr4txhsmcwt@example.org>
Date: Wed, 28 Jul 2021 14:24:48 +0200
From: Alexey Gladkov <legion@...nel.org>
To: Hillf Danton <hdanton@...a.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
syzbot+01985d7909f9468f013c@...kaller.appspotmail.com,
syzbot+59dd63761094a80ad06d@...kaller.appspotmail.com,
syzbot+6cd79f45bb8fa1c9eeae@...kaller.appspotmail.com,
syzbot+b6e65bd125a05f803d6b@...kaller.appspotmail.com,
"Eric W. Biederman" <ebiederm@...ssion.com>
Subject: Re: [PATCH v1] ucounts: Fix race condition between alloc_ucounts and put_ucounts
On Wed, Jul 28, 2021 at 10:58:37AM +0800, Hillf Danton wrote:
> On Tue, 27 Jul 2021 17:24:18 +0200 Alexey Gladkov wrote:
> > +++ b/kernel/ucount.c
> > @@ -160,6 +160,7 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
> > {
> > struct hlist_head *hashent = ucounts_hashentry(ns, uid);
> > struct ucounts *ucounts, *new;
> > + long overflow;
> >
> > spin_lock_irq(&ucounts_lock);
> > ucounts = find_ucounts(ns, uid, hashent);
> > @@ -184,8 +185,12 @@ struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
> > return new;
> > }
> > }
> > + overflow = atomic_add_negative(1, &ucounts->count);
> > spin_unlock_irq(&ucounts_lock);
> > - ucounts = get_ucounts(ucounts);
> > + if (overflow) {
> > + put_ucounts(ucounts);
>
> Given if (atomic_add_unless(atomic, -1, 1))
> return 0;
>
> put can not help roll back overflow.
In case of overflow, we don't try to roll it back; we just return an error
to the caller.
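
To illustrate the pattern, here is a minimal userspace sketch (not kernel
code: C11 atomics stand in for the kernel's atomic_t and
atomic_add_negative(), and demo_get() is a made-up name):

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int count = 1;		/* a fresh ucounts starts at 1 */

/* Take a reference; fail if the increment makes the count negative,
 * mirroring the atomic_add_negative() check in alloc_ucounts() above. */
static bool demo_get(void)
{
	int old = atomic_fetch_add(&count, 1);	/* C11 atomics wrap silently */

	/* new value (old + 1) is negative iff old == INT_MAX or old < -1 */
	if (old == INT_MAX || old < -1) {
		atomic_fetch_sub(&count, 1);	/* the kernel calls put_ucounts() here */
		return false;			/* alloc_ucounts() returns NULL */
	}
	return true;
}

int main(void)
{
	atomic_store(&count, INT_MAX);		/* simulate a saturated counter */
	printf("%s\n", demo_get() ? "ok" : "overflow -> error returned");
	return 0;
}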
> BTW can you specify a bit on the real workloads with the risk of count overflow?
For example, when one user has too many processes in one namespace. Since
the reference count is a plain atomic_t, enough outstanding references will
eventually drive it negative, so the possibility of counter overflow has to
be checked and handled in this case. Linus described it here:
https://lore.kernel.org/lkml/CAHk-%3dwjYOCgM%2bmKzwTZwkDDg12DdYjFFkmoFKYLim7NFmR9HBg@mail.gmail.com/
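
The flip side is that callers of alloc_ucounts() now have to be prepared
for it to fail. A simplified, hypothetical caller (loosely modeled on
set_cred_ucounts(); not the actual kernel code):

static int example_set_ucounts(struct cred *new,
			       struct user_namespace *ns, kuid_t uid)
{
	struct ucounts *uc = alloc_ucounts(ns, uid);

	if (!uc)		/* the reference count would overflow */
		return -EAGAIN;	/* refuse instead of wrapping the counter */

	new->ucounts = uc;
	return 0;
}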
> > + return NULL;
> > + }
> > return ucounts;
> > }
> >
> > @@ -193,8 +198,7 @@ void put_ucounts(struct ucounts *ucounts)
> > {
> > unsigned long flags;
> >
> > - if (atomic_dec_and_test(&ucounts->count)) {
> > - spin_lock_irqsave(&ucounts_lock, flags);
> > + if (atomic_dec_and_lock_irqsave(&ucounts->count, &ucounts_lock, flags)) {
> > hlist_del_init(&ucounts->node);
> > spin_unlock_irqrestore(&ucounts_lock, flags);
> > kfree(ucounts);
> > --
> > 2.29.3
>
--
Rgrds, legion