Message-ID: <8b52b0d6-c973-f959-b44a-1b54fb808a04@efficios.com>
Date: Wed, 11 Jan 2023 09:52:22 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Florian Weimer <fweimer@...hat.com>, linux-kernel@...r.kernel.org
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
libc-alpha@...rceware.org
Subject: Re: rseq CPU ID not correct on 6.0 kernels for pinned threads
On 2023-01-11 06:26, Florian Weimer wrote:
> The glibc test suite contains a test that verifies that sched_getcpu
> returns the expected CPU number for a thread that is pinned (via
> sched_setaffinity) to a specific CPU. There are other threads running
> which attempt to de-schedule the pinned thread from its CPU. I believe
> the test is correctly doing what it is expected to do; it is invalid
> only if one believes that it is okay for the kernel to disregard the
> affinity mask for scheduling decisions.
>
> These days, we use the cpu_id rseq field as the return value of
> sched_getcpu if the kernel has rseq support (which it has in these
> cases).
>
> This test has started failing sporadically for us, some time around
> kernel 6.0. I see failure occasionally on a Fedora builder, it runs:
>
> Linux buildvm-x86-26.iad2.fedoraproject.org 6.0.15-300.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Dec 21 18:33:23 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
>
> I think I've seen it on the x86-64 builder only, but that might just be
> an accident.
>
> The failing tests log this output:
>
> =====FAIL: nptl/tst-thread-affinity-pthread.out=====
> info: Detected CPU set size (in bits): 64
> info: Maximum test CPU: 5
> error: Pinned thread 1 ran on impossible cpu 0
> error: Pinned thread 0 ran on impossible cpu 0
> info: Main thread ran on 4 CPU(s) of 6 available CPU(s)
> info: Other threads ran on 6 CPU(s)
> =====FAIL: nptl/tst-thread-affinity-pthread2.out=====
> info: Detected CPU set size (in bits): 64
> info: Maximum test CPU: 5
> error: Pinned thread 1 ran on impossible cpu 1
> error: Pinned thread 2 ran on impossible cpu 0
> error: Pinned thread 3 ran on impossible cpu 3
> info: Main thread ran on 5 CPU(s) of 6 available CPU(s)
> info: Other threads ran on 6 CPU(s)
>
> I also encountered one local failure, but it is rare. Maybe it's
> load-related. There shouldn't be any CPU unplug or anything like that
> involved here.
>
> I am not entirely sure if something is changing CPU affinities from
> outside the process (which would be quite wrong, but not a kernel bug).
> But in the past, our glibc test has detected real rseq cpu_id
> brokenness, so I'm leaning towards that as the cause this time, too.
It can be caused by rseq failing to update the cpu number field on
return to userspace. This could be validated by printing the regular
getcpu vDSO value and/or the value returned by the getcpu system call
when the error is triggered, and checking whether it matches the rseq
cpu_id value.
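
For instance, something along these lines (an untested sketch, not taken
from the glibc test; it assumes glibc 2.35+, which exports __rseq_offset
in <sys/rseq.h>, and a compiler providing __builtin_thread_pointer()):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/rseq.h>

static struct rseq *rseq_area(void)
{
	/* glibc places the per-thread rseq area at thread pointer + __rseq_offset. */
	return (struct rseq *)((char *)__builtin_thread_pointer() + __rseq_offset);
}

int main(void)
{
	unsigned int sys_cpu = 0;

	/* Raw system call, bypassing the glibc rseq/vDSO fast paths. */
	if (syscall(SYS_getcpu, &sys_cpu, NULL, NULL) != 0)
		perror("getcpu");

	printf("rseq cpu_id:      %u\n", rseq_area()->cpu_id);
	printf("getcpu syscall:   %u\n", sys_cpu);
	printf("sched_getcpu():   %d\n", sched_getcpu());
	return 0;
}

If the rseq value disagrees with the syscall value at the point of
failure, that would point at rseq not being updated on return to
userspace.
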
It can also be caused by the scheduler failing to take the affinity
mask into account.

As you also point out, it can be caused by some other task concurrently
modifying the affinity of your task. You could print the result of
sched_getaffinity() on error to get a better idea of the expected vs.
actual mask.
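
For example, a small hypothetical helper (not part of the test) that
could be called when an "impossible" CPU is reported:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Print the CPUs present in the calling thread's current affinity mask. */
static void dump_affinity(void)
{
	cpu_set_t set;
	int cpu;

	if (sched_getaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_getaffinity");
		return;
	}
	printf("affinity mask:");
	for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
		if (CPU_ISSET(cpu, &set))
			printf(" %d", cpu);
	printf("\n");
}

Comparing that output with the mask passed to sched_setaffinity would
show whether something changed the affinity behind the test's back.
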
Lastly, it could be caused by CPU hotplug, which would set all bits in
the affinity mask as a fallback. As you mention, that should not be the
cause here.

Can you share your kernel configuration?

Thanks,
Mathieu
>
> Thanks,
> Florian
>
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com