Message-ID: <alpine.LSU.2.11.1402032014140.29889@eggly.anvils>
Date: Mon, 3 Feb 2014 20:20:21 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Weijie Yang <weijie.yang@...sung.com>, hughd@...gle.com,
Minchan Kim <minchan@...nel.org>, shli@...nel.org,
Bob Liu <bob.liu@...cle.com>, weijie.yang.kh@...il.com,
Seth Jennings <sjennings@...iantweb.net>,
Heesub Shin <heesub.shin@...sung.com>, mquzik@...hat.com,
Linux-MM <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
stable@...r.kernel.org
Subject: Re: [PATCH 3/8] mm/swap: prevent concurrent swapon on the same
S_ISBLK blockdev
On Mon, 3 Feb 2014, Andrew Morton wrote:
> On Mon, 27 Jan 2014 18:03:04 +0800 Weijie Yang <weijie.yang@...sung.com> wrote:
>
> > When swapon is called concurrently on the same S_ISBLK blockdev, the
> > two allocated swap_info structures can end up holding the same
> > block_device, because claim_swapfile() allows the same holder (here,
> > the sys_swapon function).
> >
> > To prevent this, this patch adds a check under swap_lock so that the
> > situation is detected and one of the swapon calls returns -EBUSY.
> >
> > As for an S_ISREG swapfile, claim_swapfile() already prevents this
> > scenario by holding inode->i_mutex.
> >
> > This patch covers a rare scenario; it aims at correctness of the code.
> >
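A minimal sketch of the check described above, taken under swap_lock
(the function name and placement are illustrative of the idea, not
taken from the actual patch): after claim_swapfile() succeeds, re-scan
the registered swap_info entries and return -EBUSY if another active
entry already holds the same block_device.

/*
 * Illustrative sketch only: detect another swap_info entry that
 * already holds this block_device.  Intended to be called from
 * sys_swapon() after claim_swapfile(), before the new entry is
 * enabled.
 */
static int swap_bdev_already_in_use(struct swap_info_struct *p)
{
	struct swap_info_struct *si;
	int type;
	int error = 0;

	spin_lock(&swap_lock);
	for (type = 0; type < nr_swapfiles; type++) {
		si = swap_info[type];
		if (si == p || !(si->flags & SWP_USED))
			continue;
		if (si->bdev && si->bdev == p->bdev) {
			error = -EBUSY;
			break;
		}
	}
	spin_unlock(&swap_lock);
	return error;
}
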
>
> hm, OK. Would it be saner to pass a unique `holder' to
> claim_swapfile()? Say, `p'?
>
> Truly, I am fed up with silly swapon/swapoff races. How often does
> anyone call these things? Let's slap a huge lock around the whole
> thing and be done with it?
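For comparison, a sketch of the suggestion above: make the exclusive
claim's holder unique per swapon call by passing the swap_info_struct
`p' instead of the shared sys_swapon address, so the second concurrent
swapon of the same S_ISBLK device fails inside blkdev_get_by_dev() with
-EBUSY.  This is a simplified rendering of claim_swapfile(), not the
function as it stands in the tree:

static int claim_swapfile(struct swap_info_struct *p, struct inode *inode)
{
	if (S_ISBLK(inode->i_mode)) {
		/*
		 * p is unique to this swapon call, so a concurrent swapon
		 * of the same device no longer shares the holder and the
		 * exclusive claim fails with -EBUSY for the loser.
		 */
		p->bdev = blkdev_get_by_dev(inode->i_rdev,
				FMODE_READ | FMODE_WRITE | FMODE_EXCL, p);
		if (IS_ERR(p->bdev)) {
			int error = PTR_ERR(p->bdev);
			p->bdev = NULL;
			return error;
		}
		p->flags |= SWP_BLKDEV;
	} else if (S_ISREG(inode->i_mode)) {
		/* S_ISREG swapons are already serialized by inode->i_mutex */
		p->bdev = inode->i_sb->s_bdev;
		mutex_lock(&inode->i_mutex);
		if (IS_SWAPFILE(inode))
			return -EBUSY;
	} else
		return -EINVAL;

	return 0;
}
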
That answer makes me sad: we can't be bothered to get it right,
even when Weijie goes to the trouble of presenting a series to do so.
But I sure don't deserve a vote until I've actually looked through it.
Hugh