Message-ID: <1370356941.26799.120.camel@gandalf.local.home>
Date: Tue, 04 Jun 2013 10:42:21 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Mikulas Patocka <mpatocka@...hat.com>
Cc: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
stable@...nel.org, Alasdair G Kergon <agk@...hat.com>
Subject: Re: [07/65] dm bufio: avoid a possible __vmalloc deadlock
On Tue, 2013-06-04 at 08:59 -0400, Mikulas Patocka wrote:
>
> On Mon, 3 Jun 2013, Steven Rostedt wrote:
>
> > 3.6.11.5 stable review patch.
> > If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Mikulas Patocka <mpatocka@...hat.com>
> >
> > [ Upstream commit 502624bdad3dba45dfaacaf36b7d83e39e74b2d2 ]
> >
> > This patch uses memalloc_noio_save to avoid a possible deadlock in
> > dm-bufio. (It could happen only with a large block size, at most
> > PAGE_SIZE << MAX_ORDER, typically 8MiB.)
> >
> > __vmalloc doesn't fully respect gfp flags. The specified gfp flags are
> > used for allocation of requested pages, structures vmap_area, vmap_block
> > and vm_struct and the radix tree nodes.
> >
> > However, kernel pagetables are always allocated with GFP_KERNEL.
> > Thus the allocation of pagetables can recurse back to the I/O layer and
> > cause a deadlock.
> >
> > This patch uses the function memalloc_noio_save to set per-process
> > PF_MEMALLOC_NOIO flag and the function memalloc_noio_restore to restore
> > it. When this flag is set, all allocations in the process are done with
> > implied GFP_NOIO flag, thus the deadlock can't happen.
> >
> > This should be backported to stable kernels, but they don't have the
> > PF_MEMALLOC_NOIO flag and memalloc_noio_save/memalloc_noio_restore
> > functions. So, PF_MEMALLOC should be set and restored instead.
> >
> > Signed-off-by: Mikulas Patocka <mpatocka@...hat.com>
> > Cc: stable@...nel.org
> > Signed-off-by: Alasdair G Kergon <agk@...hat.com>
> > [ Set and clear PF_MEMALLOC manually - SR ]
> > Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
> > ---
> > drivers/md/dm-bufio.c | 26 +++++++++++++++++++++++++-
> > 1 file changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
> > index c0fc827..a1f2487 100644
> > --- a/drivers/md/dm-bufio.c
> > +++ b/drivers/md/dm-bufio.c
> > @@ -321,6 +321,9 @@ static void __cache_size_refresh(void)
> > static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
> > enum data_mode *data_mode)
> > {
> > + unsigned noio_flag;
> > + void *ptr;
> > +
> > if (c->block_size <= DM_BUFIO_BLOCK_SIZE_SLAB_LIMIT) {
> > *data_mode = DATA_MODE_SLAB;
> > return kmem_cache_alloc(DM_BUFIO_CACHE(c), gfp_mask);
> > @@ -334,7 +337,28 @@ static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
> > }
> >
> > *data_mode = DATA_MODE_VMALLOC;
> > - return __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL);
> > +
> > + /*
> > + * __vmalloc allocates the data pages and auxiliary structures with
> > + * gfp_flags that were specified, but pagetables are always allocated
> > + * with GFP_KERNEL, no matter what was specified as gfp_mask.
> > + *
> > + * Consequently, we must set per-process flag PF_MEMALLOC_NOIO so that
> > + * all allocations done by this process (including pagetables) are done
> > + * as if GFP_NOIO was specified.
> > + */
> > +
> > + if (gfp_mask & __GFP_NORETRY) {
> > + noio_flag = current->flags;
>
> There should be noio_flag = current->flags & PF_MEMALLOC; because we don't
> want to restore other flags.
Thanks for the review. Will fix.
-- Steve
>
> > + current->flags |= PF_MEMALLOC;
> > + }
> > +
> > + ptr = __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL);
> > +
> > + if (gfp_mask & __GFP_NORETRY)
> > + current->flags = (current->flags & ~PF_MEMALLOC) | noio_flag;
> > +
> > + return ptr;
> > }
> >
> > /*
> > --
> > 1.7.10.4
>
> Mikulas