Message-ID: <87o9ieq1ud.fsf@notabene.neil.brown.name>
Date: Fri, 20 Apr 2018 08:16:10 +1000
From: NeilBrown <neilb@...e.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Mike Snitzer <snitzer@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Kees Cook <keescook@...omium.org>,
Segher Boessenkool <segher@...nel.crashing.org>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
Andrew Morton <akpm@...uxfoundation.org>,
Boris Brezillon <boris.brezillon@...e-electrons.com>,
Richard Weinberger <richard@....at>,
David Woodhouse <dwmw2@...radead.org>,
Alasdair Kergon <agk@...hat.com>,
Anton Vorontsov <anton@...msg.org>,
Colin Cross <ccross@...roid.com>,
Tony Luck <tony.luck@...el.com>
Subject: Re: [patch V2 7/8] dm verity fec: Check result of init_rs()
On Thu, Apr 19 2018, Thomas Gleixner wrote:
> On Thu, 19 Apr 2018, Mike Snitzer wrote:
>
>> On Thu, Apr 19 2018 at 6:04am -0400,
>> Thomas Gleixner <tglx@...utronix.de> wrote:
>>
>> > From: Thomas Gleixner <tglx@...utronix.de>
>> >
>> > The allocation of the reed solomon control structure can fail, but
>> > fec_alloc_bufs() ignores that and subsequent operations in dm verity use
>> > the potential NULL pointer unconditionally.
>> >
>> > Add a proper check and abort if init_rs() fails.
>>
>> This changelog makes little sense: init_rs() isn't in play relative to
>> this patch.
>
> 	fio->rs = mempool_alloc(v->fec->rs_pool, GFP_NOIO);
>
> 	f->rs_pool = mempool_create(num_online_cpus(), fec_rs_alloc,
> 				    fec_rs_free, (void *) v);
>
> static void *fec_rs_alloc(gfp_t gfp_mask, void *pool_data)
> {
> 	struct dm_verity *v = (struct dm_verity *)pool_data;
>
> 	return init_rs(8, 0x11d, 0, 1, v->fec->roots);
> }
>
> So init_rs() is part of the chain, right?
>
> Yes. I missed the NOIO part. But....
>
>> And it runs counter to this commit's changelog:
>>
>> commit 34c96507e8f6be497c15497be05f489fb34c5880
>> Author: NeilBrown <neilb@...e.com>
>> Date: Mon Apr 10 12:13:00 2017 +1000
>>
>> dm verity fec: fix GFP flags used with mempool_alloc()
>>
>> mempool_alloc() cannot fail for GFP_NOIO allocation, so there is no
>> point testing for failure.
>>
>> One place the code tested for failure was passing "0" as the GFP
>> flags. This is most unusual and is probably meant to be GFP_NOIO,
>> so that is changed.
>>
>> Also, allocation from ->extra_pool and ->prealloc_pool are repeated
>> before releasing the previous allocation. This can deadlock if the code
>> is servicing a write under high memory pressure. To avoid deadlocks,
>> change these to use GFP_NOWAIT and leave the error handling in place.
>>
>> Signed-off-by: NeilBrown <neilb@...e.com>
>> Signed-off-by: Mike Snitzer <snitzer@...hat.com>
>>
>> Seems there is no real need for this patch. Neil, what do you think?
I think the code is correct as-is.
>
> The analysis above forgot to look at the mempool->alloc() callback. So
> yes, while the NOIO is good at the mempool level, init_rs() uses
> GFP_KERNEL, so there might be a different can of worms lurking.
The ->alloc callback is not relevant to the question of when
mempool_alloc() can return NULL.
If the ->alloc() callback returns a non-NULL value, that value is
returned by mempool_alloc().
If it returns NULL, that NULL is not passed back to the caller;
mempool_alloc() falls back to the elements pre-allocated in the pool.
mempool_alloc() *only* returns NULL in one place:
	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
		spin_unlock_irqrestore(&pool->lock, flags);
		return NULL;
	}
so a NULL return is purely dependent on the GFP flags passed.
GFP_NOIO contains __GFP_DIRECT_RECLAIM, so NULL cannot be returned.
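For reference, the relevant definitions in include/linux/gfp.h are
(roughly):

	#define __GFP_RECLAIM	((__force gfp_t)(___GFP_DIRECT_RECLAIM | \
						 ___GFP_KSWAPD_RECLAIM))
	#define GFP_NOIO	(__GFP_RECLAIM)
	#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)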
It seems quite broken that init_rs() uses GFP_KERNEL. It should take a
gfp_t arg for the allocation.
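A gfp-aware variant, say an init_rs_gfp() taking the flags as an extra
argument (sketch only, not claiming such a helper exists in the tree
today), would let the mempool callback simply pass its gfp_mask through:

	struct rs_control *init_rs_gfp(int symsize, int gfpoly, int fcr,
				       int prim, int nroots, gfp_t gfp);

	static void *fec_rs_alloc(gfp_t gfp_mask, void *pool_data)
	{
		struct dm_verity *v = (struct dm_verity *)pool_data;

		return init_rs_gfp(8, 0x11d, 0, 1, v->fec->roots, gfp_mask);
	}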
If the mempool_alloc() above really needs GFP_NOIO, then it could
theoretically deadlock, as the ->alloc callback performs GFP_KERNEL
allocations inside init_rs(). So in that sense, the code is not correct
as-is.
It could possibly be fixed by calling memalloc_noio_save() /
memalloc_noio_restore() around the call to init_rs() in fec_rs_alloc().
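Something along these lines (untested sketch) in dm-verity-fec.c:

	#include <linux/sched/mm.h>	/* memalloc_noio_save/restore */

	static void *fec_rs_alloc(gfp_t gfp_mask, void *pool_data)
	{
		struct dm_verity *v = (struct dm_verity *)pool_data;
		unsigned int noio_flags;
		void *rs;

		/*
		 * init_rs() allocates with GFP_KERNEL internally; force
		 * NOIO context for the duration of the call so it cannot
		 * recurse into the I/O path under memory pressure.
		 */
		noio_flags = memalloc_noio_save();
		rs = init_rs(8, 0x11d, 0, 1, v->fec->roots);
		memalloc_noio_restore(noio_flags);

		return rs;
	}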
Thanks,
NeilBrown