Message-ID: <519C7475.5080400@meduna.org>
Date: Wed, 22 May 2013 09:32:05 +0200
From: Stanislav Meduna <stano@...una.org>
To: Steven Rostedt <rostedt@...dmis.org>
CC: "linux-rt-users@...r.kernel.org" <linux-rt-users@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
riel <riel@...hat.com>
Subject: Re: [PATCH - sort of] x86: Livelock in handle_pte_fault
On 22.05.2013 02:39, Steven Rostedt wrote:
> The threads do a mlockall too right? I'm not sure mlock will lock memory
> for a new thread's stack.
They don't. However,
https://rt.wiki.kernel.org/index.php/Threaded_RT-application_with_memory_locking_and_stack_handling_example
claims
"Threads started after a call to mlockall(MCL_CURRENT | MCL_FUTURE) will
generate page faults immediately since the new stack is immediately forced
to RAM (due to the MCL_FUTURE flag)."
and since ps -o min_flt reports zero page faults for the threads,
I think that is indeed the case here.
Anyway, both particular addresses were surely mapped long before
the fault.
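For reference, a minimal userspace sketch of what the application does
(not the actual code, just illustrating the mlockall() + per-thread
fault-counter check; RUSAGE_THREAD is the Linux-specific way to read
the same counters that ps -o min_flt shows):

/* build (hypothetical file name): gcc -O2 -o mlocktest mlocktest.c -lpthread */
#define _GNU_SOURCE             /* for RUSAGE_THREAD */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

/* Touch a chunk of local stack, then report this thread's fault counters. */
static void *worker(void *arg)
{
	volatile char buf[256 * 1024];  /* would normally grow the stack */
	struct rusage ru;

	(void)arg;
	memset((void *)buf, 0, sizeof(buf));

	getrusage(RUSAGE_THREAD, &ru);  /* per-thread counters, Linux-specific */
	printf("worker: minflt=%ld majflt=%ld\n", ru.ru_minflt, ru.ru_majflt);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	/* Lock current and future mappings; per the rt.wiki text quoted
	 * above, the new thread's stack should be pre-faulted into RAM,
	 * so the worker should report (almost) no page faults. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return 1;
	}

	pthread_create(&tid, NULL, worker, NULL);
	pthread_join(tid, NULL);
	return 0;
}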
>> - the application runs with mlockall()
>
> With both MCL_FUTURE and MCL_CURRENT set, right?
Yes.
>> - there is no swap
>
> Hmm, doesn't mean that code can't be swapped out, as it is just mapped
> from the file it came from. But you'd think mlockall would prevent that.
mlockall also forces the stack to be mapped immediately instead of
generating page faults as it incrementally expands.
> Seems a bit extreme. Looks to me there's a missing flush TLB somewhere.
Probably.
One interesting thing: the test for "need to reload something"
looks a bit different on the ARM architecture, in
arch/arm/include/asm/mmu_context.h:
if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) {
and they also do something for the
!CONFIG_SMP && !cpumask_test_and_set_cpu(cpu, mm_cpumask(next))
case. I don't know exactly what the semantics of mm_cpumask is,
but the difference is suspicious (a rough comparison is sketched below).
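For comparison, the corresponding x86 path (paraphrased from memory of
the 3.x arch/x86/include/asm/mmu_context.h, heavily abridged; exact
code differs between kernel versions, so treat it only as a sketch)
reloads CR3 on the prev != next fast path, and handles the lazy-TLB
case separately, and only under CONFIG_SMP:

/* x86 switch_mm(), abridged -- sketch only, not a verbatim quote */
if (likely(prev != next)) {
	cpumask_set_cpu(cpu, mm_cpumask(next));
	load_cr3(next->pgd);            /* full TLB reload */
	cpumask_clear_cpu(cpu, mm_cpumask(prev));
}
#ifdef CONFIG_SMP
else if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next))) {
	/* we were in lazy TLB mode: leave_mm() cleared our bit and
	 * stopped flush IPIs, so CR3 must be reloaded here */
	load_cr3(next->pgd);
}
#endif

Whether the structural difference between the two checks actually
matters depends on the exact mm_cpumask semantics, which is the open
question above.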
> Do you have a reproducer you can share. That way, maybe we can all share
> the joy.
Unfortunately not, and I have really tried :( If I get new ideas, I will
try again.
Thanks
--
Stano