Message-ID: <519CBB30.3060200@redhat.com>
Date:	Wed, 22 May 2013 08:33:52 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Steven Rostedt <rostedt@...dmis.org>
CC:	Stanislav Meduna <stano@...una.org>,
	"linux-rt-users@...r.kernel.org" <linux-rt-users@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Hai Huang <hhuang@...hat.com>
Subject: Re: [PATCH - sort of] x86: Livelock in handle_pte_fault

On 05/21/2013 08:39 PM, Steven Rostedt wrote:
> On Fri, 2013-05-17 at 10:42 +0200, Stanislav Meduna wrote:
>> Hi all,
>>
>> I don't know whether this is linux-rt specific or applies to
>> the mainline too, so I'll repeat some things the linux-rt
>> readers already know.
>>
>> Environment:
>>
>> - Geode LX or Celeron M
>> - _not_ CONFIG_SMP
>> - linux 3.4 with realtime patches and full preempt configured
>> - an application consisting of several mostly RR-class threads
>
> The threads do a mlockall too, right? I'm not sure mlock will lock memory
> for a new thread's stack.
>
>> - the application runs with mlockall()
>
> With both MCL_FUTURE and MCL_CURRENT set, right?
>
>> - there is no swap
>
> Hmm, that doesn't mean code can't be paged out, as it is just mapped
> from the file it came from. But you'd think mlockall would prevent that.
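
For reference, the setup being asked about looks something like this
(a sketch; error handling abbreviated):

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
          /* MCL_CURRENT locks everything mapped now; MCL_FUTURE also
           * locks anything mapped later, which should include the
           * stacks of threads created after this call. */
          if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                  perror("mlockall");
                  return 1;
          }

          /* ... create the RT threads, run the application ... */
          return 0;
  }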
>
>>
>> Problem:
>>
>> - after several hours to 1-2 weeks some of the threads start to loop
>>    in the following way
>>
>>    0d...0 62811.755382: function:  do_page_fault
>>    0....0 62811.755386: function:     handle_mm_fault
>>    0....0 62811.755389: function:        handle_pte_fault
>>    0d...0 62811.755394: function:  do_page_fault
>>    0....0 62811.755396: function:     handle_mm_fault
>>    0....0 62811.755398: function:        handle_pte_fault
>>    0d...0 62811.755402: function:  do_page_fault
>>    0....0 62811.755404: function:     handle_mm_fault
>>    0....0 62811.755406: function:        handle_pte_fault
>>
>>    and stay in the loop until the RT throttling gets activated.
>>    One of the faulting addresses was in code (after returning
>>    from a syscall), a second one in stack (inside put_user right
>>    before a syscall ends), both were surely mapped.
>>
>> - After the RT throttler activates, it somehow magically fixes itself,
>>    probably (not verified) because another _process_ gets scheduled.
>>    When throttled, the RR and FF threads are not allowed to run for
>>    a while (20 ms in my configuration). The livelock lasts around
>>    1-3 seconds, and there is a SCHED_OTHER process that runs every
>>    2 seconds.
>
> Hmm, if there was a missed TLB flush, and we are faulting due to a stale
> TLB entry, and it goes into an infinite faulting loop, the only thing
> that will stop it is the RT throttle. Then a new task gets scheduled,
> we flush the TLB, and everything is fine again.

That sounds like maybe we DO want a TLB flush on spurious
page faults, so we get rid of this problem.
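
For context, the path in question is the tail of handle_pte_fault()
in mm/memory.c, which in a 3.4-era tree looks roughly like this:

	entry = pte_mkyoung(entry);
	if (ptep_set_access_flags(vma, address, pte, entry,
				  flags & FAULT_FLAG_WRITE)) {
		update_mmu_cache(vma, address, pte);
	} else {
		/*
		 * Spurious fault: the PTE already has the right bits,
		 * so the fault must have come from a stale TLB entry.
		 * On x86 this hook is currently compiled away.
		 */
		if (flags & FAULT_FLAG_WRITE)
			flush_tlb_fix_spurious_fault(vma, address);
	}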

Last fall we thought this problem could not happen on x86,
but your bug report suggests that it might.

We can get flush_tlb_fix_spurious_fault to do a local TLB
invalidate of just the address in question by removing the
x86-specific dummy version, so that we fall back to the
asm-generic version, which actually does something.
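
If I am reading things right, the change amounts to something like
the following (just a sketch of the idea -- the attached patch is
the authoritative version):

  --- a/arch/x86/include/asm/pgtable.h
  +++ b/arch/x86/include/asm/pgtable.h
  @@ ... @@
  -#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)

With that override gone, the generic fallback in
include/asm-generic/pgtable.h takes effect:

  #ifndef flush_tlb_fix_spurious_fault
  #define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
  #endif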

Can you test the attached patch?

-- 
All rights reversed

View attachment "flush-tlb-on-spurious-fault.patch" of type "text/x-patch" (1004 bytes)
