Message-ID: <z2y28c262361004260307m22f38d23y95c5615072b8f3a7@mail.gmail.com>
Date: Mon, 26 Apr 2010 19:07:56 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: Mel Gorman <mel@....ul.ie>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Christoph Lameter <cl@...ux.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [BUGFIX][mm][PATCH] fix migration race in rmap_walk
On Mon, Apr 26, 2010 at 6:49 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@...fujitsu.com> wrote:
> On Mon, 26 Apr 2010 18:48:42 +0900
> Minchan Kim <minchan.kim@...il.com> wrote:
>
>> On Mon, Apr 26, 2010 at 6:28 PM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@...fujitsu.com> wrote:
>> > On Mon, 26 Apr 2010 08:49:01 +0900
>> > KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
>> >
>> >> On Sat, 24 Apr 2010 11:43:24 +0100
>> >> Mel Gorman <mel@....ul.ie> wrote:
>> >
>> >> > It looks nice, but it still broke after 28 hours of running. The
>> >> > seq-counter is still insufficient to catch all changes that are made
>> >> > to the list. I'm beginning to wonder a) whether this really can be
>> >> > fully and safely locked with the anon_vma changes, and b) whether it
>> >> > has to be a spinlock to catch the majority of cases, with a lazy
>> >> > cleanup left for any race that still happens. It's unsatisfactory,
>> >> > and I expect I'll either gain some insight into the new anon_vma
>> >> > changes that allows them to be locked, or Rik knows how to restore
>> >> > the original behaviour, which, as Andrea pointed out, was safe.
>> >> >
>> >> Ouch.
>> >
>> > Ok, reproduced. Here is the status from my test + printk().
>> >
>> > * The race doesn't seem to happen with swap off.
>> >   I need to run swapon to cause the bug.
>>
>> FYI,
>>
>> Do you have a swapon/off bomb test?
>
> No. I just run the test with swap off, and then run the same test after swapon.
>
>
>> When I saw your mail, I felt it might be the culprit.
>>
>> http://lkml.org/lkml/2010/4/22/762.
>>
>> It is just a guess. I don't have time to look into it now.
>>
> Hmm. BTW.
>
> ==
> static int expand_downwards(struct vm_area_struct *vma,
>                             unsigned long address)
> {
> ....
>         /* Somebody else might have raced and expanded it already */
>         if (address < vma->vm_start) {
>                 unsigned long size, grow;
>
>                 size = vma->vm_end - address;
>                 grow = (vma->vm_start - address) >> PAGE_SHIFT;
>
>                 error = acct_stack_growth(vma, size, grow);
>                 if (!error) {
>                         vma->vm_start = address;
>                         vma->vm_pgoff -= grow;
>                 }
>         }
> ==
>
> I feel this part needs care. No?
Yes, Andrea pointed it out.
I haven't followed the whole thread yet, but it seems Mel and Andrea
want to restore the old anon_vma atomicity rather than heal things up
one by one.
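
To make the concern about expand_downwards() concrete, here is a rough
illustration of why those two assignments matter for the rmap walk. It is
only a paraphrase of vma_address() in mm/rmap.c (names and details
approximate, not taken from this thread): the walk uses this calculation to
find where an anon page should be mapped in a given vma, and if vm_start and
vm_pgoff can change underneath it, it can compute a bogus address.

==
/*
 * Illustration only: a paraphrase of vma_address() in mm/rmap.c,
 * which the rmap walk uses to locate an anon page's mapping in a
 * given vma.  Details are approximate.
 */
static unsigned long vma_address(struct page *page,
                                 struct vm_area_struct *vma)
{
        pgoff_t pgoff = page->index;
        unsigned long address;

        /*
         * If expand_downwards() can update vm_start and vm_pgoff
         * concurrently (the concern above), this calculation can
         * combine an old vm_pgoff with a new vm_start, or vice versa.
         */
        address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
        if (address < vma->vm_start || address >= vma->vm_end)
                return -EFAULT;        /* treated as "not mapped here" */
        return address;
}
==

If the computed address is off, the walker simply skips that vma and the
migration pte is left in place, which is the kind of race this thread is
about.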
--
Kind regards,
Minchan Kim