Message-ID: <CACT4Y+Z86=NoNPrS-vgtJiB54Akwq6FfAPf2wnBA1FX2BHafWQ@mail.gmail.com>
Date: Wed, 27 Jan 2016 22:11:44 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Konstantin Khlebnikov <koct9i@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Chen Gang <gang.chen.5i5j@...il.com>,
Michal Hocko <mhocko@...e.com>,
Piotr Kwapulinski <kwapulinski.piotr@...il.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Hugh Dickins <hughd@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>,
syzkaller <syzkaller@...glegroups.com>,
Kostya Serebryany <kcc@...gle.com>,
Alexander Potapenko <glider@...gle.com>
Subject: Re: mm: BUG in expand_downwards
On Wed, Jan 27, 2016 at 8:41 PM, Oleg Nesterov <oleg@...hat.com> wrote:
> On 01/27, Dmitry Vyukov wrote:
>>
>> On Wed, Jan 27, 2016 at 1:24 PM, Dmitry Vyukov <dvyukov@...gle.com> wrote:
>> > On Wed, Jan 27, 2016 at 12:49 PM, Konstantin Khlebnikov
>> > <koct9i@...il.com> wrote:
>> >> It seems an anon_vma appeared between the lock and the unlock.
>> >>
>> >> This should fix the bug and make the code faster (a write lock isn't
>> >> required here); a standalone sketch of the race follows the diff:
>> >>
>> >> --- a/mm/mmap.c
>> >> +++ b/mm/mmap.c
>> >> @@ -453,12 +453,16 @@ static void validate_mm(struct mm_struct *mm)
>> >> struct vm_area_struct *vma = mm->mmap;
>> >>
>> >> while (vma) {
>> >> + struct anon_vma *anon_vma = vma->anon_vma;
>> >> struct anon_vma_chain *avc;
>> >>
>> >> - vma_lock_anon_vma(vma);
>> >> - list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
>> >> - anon_vma_interval_tree_verify(avc);
>> >> - vma_unlock_anon_vma(vma);
>> >> + if (anon_vma) {
>> >> + anon_vma_lock_read(anon_vma);
>> >> + list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
>> >> + anon_vma_interval_tree_verify(avc);
>> >> + anon_vma_unlock_read(anon_vma);
>> >> + }
>> >> +
>> >> highest_address = vma->vm_end;
>> >> vma = vma->vm_next;
>> >> i++;
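[Editorial note: the old vma_lock_anon_vma()/vma_unlock_anon_vma() pair each
read vma->anon_vma independently, so a pointer installed between the two calls
makes the unlock path release a lock that was never taken (the "bad unlock
balance" splat mentioned below). Here is a minimal userspace sketch of the two
patterns, using a pthread rwlock as a stand-in for the kernel's anon_vma rwsem;
all names here (fake_vma, validate_racy, validate_fixed) are hypothetical, not
kernel code.]

    /*
     * Standalone sketch, not kernel code: pthread_rwlock_t stands in
     * for the anon_vma rwsem, fake_vma for the vma fields involved.
     */
    #include <pthread.h>
    #include <stddef.h>

    struct fake_vma {
            pthread_rwlock_t *anon_vma;     /* set lazily by another thread */
    };

    /*
     * Racy pattern (old code): the pointer is read twice.  If it goes
     * from NULL to non-NULL in between, the lock is skipped but the
     * unlock runs, releasing a lock that was never taken.
     */
    static void validate_racy(struct fake_vma *vma)
    {
            if (vma->anon_vma)
                    pthread_rwlock_wrlock(vma->anon_vma);
            /* ... interval-tree verification would go here ... */
            if (vma->anon_vma)
                    pthread_rwlock_unlock(vma->anon_vma);
    }

    /*
     * Fixed pattern (the patch above): one snapshot of the pointer, so
     * lock and unlock always agree; a read lock suffices because the
     * verification only reads the interval tree.
     */
    static void validate_fixed(struct fake_vma *vma)
    {
            pthread_rwlock_t *anon_vma = vma->anon_vma;

            if (anon_vma) {
                    pthread_rwlock_rdlock(anon_vma);
                    /* ... interval-tree verification would go here ... */
                    pthread_rwlock_unlock(anon_vma);
            }
    }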
>> >
>> >
>> > Now testing with this patch. Thanks for quick fix!
>>
>>
>> Hit the same BUG with this patch.
>
> Do you mean the same "bad unlock balance detected" BUG? This should be "obviously"
> fixed by the patch above...
>
> Or you mean the 2nd VM_BUG_ON_MM() ?
>
>> Please try to reproduce it locally and test.
>
> I tried to reproduce it, but couldn't.
Sorry, I meant only the second one: the mm bug.
I guess you need at least CONFIG_DEBUG_VM. Run it in a tight parallel
loop with CPU oversubscription (e.g. 32 parallel processes on 2 cores)
for at least an hour.
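[Editorial note: a rough sketch of that kind of stress driver, assuming the
syzkaller reproducer has been built as ./repro; that path is a placeholder,
the actual program is not in this mail.]

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROC 32        /* oversubscribe: e.g. 32 workers on 2 cores */

    int main(void)
    {
            for (int i = 0; i < NPROC; i++) {
                    if (fork() == 0) {
                            /* worker: rerun the reproducer in a tight loop */
                            for (;;) {
                                    pid_t pid = fork();
                                    if (pid == 0) {
                                            execl("./repro", "repro", (char *)NULL);
                                            _exit(1);       /* exec failed */
                                    }
                                    if (pid > 0)
                                            waitpid(pid, NULL, 0);
                            }
                    }
            }
            for (;;)
                    pause();        /* let the workers run (>= 1 hour) */
    }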