Message-ID: <4BB0D500.1090409@redhat.com>
Date: Mon, 29 Mar 2010 12:27:44 -0400
From: Rik van Riel <riel@...hat.com>
To: Kent Overstreet <kent.overstreet@...il.com>
CC: linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: KVM bug, git bisected
On 03/29/2010 12:11 PM, Rik van Riel wrote:
> On 03/27/2010 08:43 AM, Kent Overstreet wrote:
>> commit 5beb49305251e5669852ed541e8e2f2f7696c53e
>> Author: Rik van Riel <riel@...hat.com>
>> Date: Fri Mar 5 13:42:07 2010 -0800
>>
>> mm: change anon_vma linking to fix multi-process server scalability issue
>>
>> I get this when starting kvm. The warning hasn't caused me problems, but
>> I've also been getting a scheduling while atomic panic when I start kvm
>> that I can only reproduce when I don't want to. It's definitely config
>> dependent, I'd guess preempt might have something to do with it.
>
> From your trace, it looks like mm_take_all_locks is taking close
> to 256 locks, which is where the preempt_count could overflow into
> the softirq count.
>
> Since kvm-qemu is exec'd, I am guessing you have a very large
> number of VMAs in your qemu process. Is that correct?
I just took a look at a qemu-kvm process on my own system.
It has a staggering 385 VMAs!
That definitely has the potential to overflow PREEMPT_BITS
when undergoing mm_take_all_locks...
> Peter, would it be safe to increase PREEMPT_BITS to e.g. 10?
Looks like we'll have to. At least on 64 bits...
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/