Message-ID: <CAC=cRTPFDNpCKvjqMj+ggMoQND9tme4w+AGX31Yu2B4uzzPWZg@mail.gmail.com>
Date: Sat, 4 Mar 2017 19:53:10 +0800
From: huang ying <huang.ying.caritas@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "Huang, Ying" <ying.huang@...el.com>,
Hugh Dickins <hughd@...gle.com>, Shaohua Li <shli@...nel.org>,
Minchan Kim <minchan@...nel.org>,
Rik van Riel <riel@...hat.com>,
Tim Chen <tim.c.chen@...el.com>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm, swap: Fix a race in free_swap_and_cache()
Hi, Andrew,
Sorry, I clicked the wrong button in my mail client and forgot to Cc
the mailing list. Sorry for the duplicated mail.
On Sat, Mar 4, 2017 at 6:43 AM, Andrew Morton <akpm@...ux-foundation.org> wrote:
> On Wed, 1 Mar 2017 22:38:09 +0800 "Huang, Ying" <ying.huang@...el.com> wrote:
>
>> Before the cluster lock was used in free_swap_and_cache(),
>> swap_info_struct->lock was held across both freeing the swap entry
>> and acquiring the page lock, so the page swap count could not change
>> when the page information was tested later. But with the cluster
>> lock, the cluster lock (or swap_info_struct->lock) is held only while
>> the swap entry is freed. So before the page lock is acquired, the
>> page swap count may be changed by another thread. If the page swap
>> count is not 0, we should not delete the page from the swap cache.
>> This is fixed by checking the page swap count again after acquiring
>> the page lock.
>
> What are the user-visible runtime effects of this bug? Please always
> include this info when fixing things, thanks.
Sure. I found the race while reviewing the code, so I haven't
triggered it with a test program. If the race occurs for an anonymous
page shared by multiple processes via fork, multiple pages will be
allocated and swapped in from the swap device for what was previously
a single shared page. That is, the user-visible runtime effect is
that more memory is used and the access latency for the page is
higher, i.e. a performance regression.
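
In case it helps to see the shape of the fix, below is a simplified
sketch of free_swap_and_cache() after the change. Helper names and the
locking are approximated from the mm/swapfile.c of that era; this is
illustrative only, not the literal patch.

    /*
     * Illustrative sketch only -- simplified from mm/swapfile.c;
     * locking and helper signatures are approximated.
     */
    int free_swap_and_cache(swp_entry_t entry)
    {
        struct swap_info_struct *si;
        struct page *page = NULL;

        si = swap_info_get(entry);      /* takes si->lock */
        if (si) {
            /*
             * With the cluster lock, only the freeing of the swap
             * entry itself is serialized here ...
             */
            if (swap_entry_free(si, entry, SWAP_HAS_CACHE) == SWAP_HAS_CACHE)
                page = find_get_page(swap_address_space(entry),
                                     swp_offset(entry));
            spin_unlock(&si->lock);
        }

        if (page) {
            if (trylock_page(page)) {
                /*
                 * ... so by the time the page lock is taken, another
                 * thread may have swapped the entry back in and raised
                 * the swap count.  Re-check it under the page lock;
                 * only a count of 0 makes it safe to drop the page
                 * from the swap cache.
                 */
                if (PageSwapCache(page) && !PageWriteback(page) &&
                    page_swapcount(page) == 0)
                    delete_from_swap_cache(page);
                unlock_page(page);
            }
            put_page(page);
        }
        return si != NULL;
    }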
Best Regards,
Huang, Ying