Message-ID: <1417707038.21214.4@mail.thefacebook.com>
Date: Thu, 4 Dec 2014 10:30:38 -0500
From: Chris Mason <clm@...com>
To: Dave Hansen <dave.hansen@...el.com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
John Stultz <john.stultz@...aro.org>,
Dave Jones <davej@...hat.com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Dâniel Fraga <fragabr@...il.com>,
Sasha Levin <sasha.levin@...cle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: frequent lockups in 3.18rc4

On Thu, Dec 4, 2014 at 10:22 AM, Dave Hansen <dave.hansen@...el.com> wrote:
> On 12/03/2014 09:49 PM, Linus Torvalds wrote:
>> On Wed, Dec 3, 2014 at 7:15 PM, Chris Mason <clm@...com> wrote:
>>>
>>> One guess is that trinity is generating a huge number of tlb
>>> invalidations over sparse and horrible ranges. Perhaps the old code
>>> was falling back to full tlb flushes before Dave Hansen's string of
>>> fixes?
>>
>> Hmm. I agree that we've had some of the backtraces look like TLB
>> flushing might be involved. Not all, though. And I'm not seeing where
>> a loop over up to 33 pages should matter over doing a full TLB flush.
>>
>> What *might* matter is if we somehow get that number wrong, and the
>> loops like
>>
>>         addr = f->flush_start;
>>         while (addr < f->flush_end) {
>>                 __flush_tlb_single(addr);
>>                 addr += PAGE_SIZE;
>>         }
>>
>> ends up looping a *lot* due to some bug, and then the IPI itself would
>> take so long that the watchdog could trigger.
>>
>> But I do not see how that could actually happen. As far as I can tell,
>> either the number of pages is limited to less than 33, or we have that
>> TLB_FLUSH_ALL case.
>>
>> Do you see something I don't?
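
For context on where that "less than 33" bound comes from: in this time
frame, flush_tlb_mm_range() compares the range against
tlb_single_page_flush_ceiling (33), and anything larger is demoted to a
full flush before the IPI is ever sent, so flush_end should never cover a
huge range by the time the handler loops. The snippet below is a
from-memory sketch of that decision, not a quote of the 3.18 source; the
surrounding locking and accounting are omitted:

        /* Sketch only: roughly the shape of flush_tlb_mm_range(). */
        if ((end - start) >> PAGE_SHIFT > tlb_single_page_flush_ceiling) {
                /* Too many pages: flush everything locally and ask the
                 * other CPUs to do the same via TLB_FLUSH_ALL. */
                local_flush_tlb();
                start = 0UL;
                end = TLB_FLUSH_ALL;
        } else {
                unsigned long addr;

                /* Small range: one INVLPG per page, at most ~33 of them. */
                for (addr = start; addr < end; addr += PAGE_SIZE)
                        __flush_tlb_single(addr);
        }
        flush_tlb_others(mm_cpumask(mm), mm, start, end);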
>
> The one thing I _do_ see now is a missed TLB flush if we're flushing one
> page at the end of the address space. We'd overflow flush_end back around,
> so flush_end=0:
>
>         if (!f->flush_end)
>                 f->flush_end = f->flush_start + PAGE_SIZE; <-- overflow
>
> and we'll never enter the while loop where we actually do the flush:
>
>         while (addr < f->flush_end) {
>                 __flush_tlb_single(addr);
>                 addr += PAGE_SIZE;
>         }
>
> But we have a hole up there on x86_64, so this will never happen in
> practice there. It might theoretically apply to 32-bit, but this still
> doesn't help with the bug.
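
To make that wrap concrete: for the very last page of the address space,
flush_start + PAGE_SIZE doesn't fit in the word, flush_end comes out as 0,
the "addr < flush_end" test fails on the first pass, and nothing is
flushed. A tiny stand-alone illustration of the arithmetic (the 32-bit
values and addresses are just assumptions for the demo, not taken from
the report):

        #include <stdio.h>

        int main(void)
        {
                /* Model a 32-bit address space with 32-bit "addresses". */
                unsigned int page_size   = 4096;
                unsigned int flush_start = 0xfffff000u; /* last page */
                unsigned int flush_end   = flush_start + page_size; /* wraps to 0 */
                unsigned int addr        = flush_start;
                int flushed = 0;

                while (addr < flush_end) { /* 0xfffff000 < 0: loop is skipped */
                        flushed++;         /* stands in for __flush_tlb_single() */
                        addr += page_size;
                }

                printf("flush_end = %#x, pages flushed = %d\n", flush_end, flushed);
                return 0;
        }

This prints "flush_end = 0, pages flushed = 0", i.e. the flush is silently
lost.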
>
> Oh, and the tracepoint is spitting out bogus numbers because we need
> some parentheses around the 'nr_pages' calculation.
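
The exact 'nr_pages' expression isn't quoted in this thread, but a
precedence slip of that shape would explain the bogus numbers: '/' binds
tighter than '-', so without parentheses the start address gets divided
on its own and the subtraction happens afterwards. A quick stand-alone
check (the addresses below are made up for the demo):

        #include <stdio.h>

        int main(void)
        {
                unsigned long page_size   = 4096;
                unsigned long flush_start = 0x40000000UL;
                unsigned long flush_end   = flush_start + 33 * page_size;

                /* Parses as flush_end - (flush_start / page_size): */
                unsigned long bogus    = flush_end - flush_start / page_size;
                /* What the tracepoint presumably wants: */
                unsigned long nr_pages = (flush_end - flush_start) / page_size;

                printf("without parens: %lu\n", bogus);    /* huge, meaningless */
                printf("with parens:    %lu\n", nr_pages); /* 33 */
                return 0;
        }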
Yeah, I didn't see any problems with your changes, but I was hoping
that even a small change like doing 33 flushes at a time was pushing
Dave's box just over the line.
-chris