[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CA+55aFz-QhQOQuFPKbxBDAOjJ+fR02m3-unA4iD9kLwhWS38cA@mail.gmail.com>
Date: Wed, 3 Dec 2014 21:49:04 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Chris Mason <clm@...com>, Thomas Gleixner <tglx@...utronix.de>,
John Stultz <john.stultz@...aro.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Dave Jones <davej@...hat.com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Dâniel Fraga <fragabr@...il.com>,
Sasha Levin <sasha.levin@...cle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: frequent lockups in 3.18rc4
On Wed, Dec 3, 2014 at 7:15 PM, Chris Mason <clm@...com> wrote:
>
> One guess is that trinity is generating a huge number of tlb
> invalidations over sparse and horrible ranges. Perhaps the old code was
> falling back to full tlb flushes before Dave Hansen's string of fixes?
Hmm. I agree that we've had some of the backtraces look like TLB
flushing might be involved. Not all, though. And I'm not seeing where
a loop over up to 33 pages should matter over doing a full TLB flush.
What *might* matter is if we somehow get that number wrong, and the loops like
addr = f->flush_start;
while (addr < f->flush_end) {
__flush_tlb_single(addr);
addr += PAGE_SIZE;
}
ends up looping a *lot* due to some bug, and then the IPI itself would
take so long that the watchdog could trigger.
But I do not see how that could actually happen. As far as I can tell,
either the number of pages is limited to less than 33, or we have that
TLB_FLUSH_ALL case.
Do you see something I don't?
Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists