Message-ID: <alpine.DEB.2.20.1709052131130.2393@nanos>
Date: Tue, 5 Sep 2017 21:37:03 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: "chengjian (D)" <cj.chengjian@...wei.com>
cc: huawei.libin@...wei.com, mingo@...hat.com, peterz@...radead.org,
dvhart@...radead.org, linux-kernel@...r.kernel.org
Subject: Re: a competition when some threads acquire futex
On Tue, 5 Sep 2017, chengjian (D) wrote:
> int main(int argc, char **argv)
> {
> 	pthread_t id1;
> 	pthread_t id2;
>
> 	printf("main pid = %d\n", getpid());
>
> 	pthread_mutex_init(&mutex, NULL);
So this is a plain mutex, which maps to a plain futex.
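As an aside (an illustration, not part of the report - the code below
is made up for the example): only a PTHREAD_PRIO_INHERIT mutex takes
the FUTEX_LOCK_PI/FUTEX_UNLOCK_PI path that can end up in
wake_futex_pi(); a default mutex contends via FUTEX_WAIT/FUTEX_WAKE.
Running the sketch below under 'strace -f -e trace=futex' shows the
difference:

/*
 * Sketch (not from this thread), build with: gcc -pthread.
 * Under contention the plain mutex issues FUTEX_WAIT/FUTEX_WAKE,
 * the PI mutex FUTEX_LOCK_PI/FUTEX_UNLOCK_PI.
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t plain = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t pi;

static void *contender(void *arg)
{
	pthread_mutex_t *m = arg;
	int i;

	for (i = 0; i < 100; i++) {
		pthread_mutex_lock(m);
		usleep(1000);	/* hold the lock so the peer really blocks */
		pthread_mutex_unlock(m);
	}
	return NULL;
}

static void run(pthread_mutex_t *m)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, contender, m);
	pthread_create(&t2, NULL, contender, m);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
}

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	pthread_mutex_init(&pi, &attr);

	run(&plain);	/* plain futex: FUTEX_WAIT / FUTEX_WAKE */
	run(&pi);	/* PI futex: FUTEX_LOCK_PI / FUTEX_UNLOCK_PI */
	return 0;
}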
> We used perf ftrace to show the graph of the function calls. We found
> that process 17327 acquires the lock again quickly after calling
> futex_wake(), so process 17328 stays in futex_wait() all the time.
The observed functions look correct for that futex type.
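And the starvation itself is a property of that type: a plain futex is
not fair. The unlocker releases the lock word in user space before it
issues FUTEX_WAKE, so it can win the next fast-path cmpxchg before the
woken waiter is ever scheduled. A toy mutex in the style of Drepper's
"Futexes Are Tricky" (a sketch, not glibc's actual implementation)
makes the window visible:

/*
 * Toy futex mutex (sketch, not glibc's actual implementation) showing
 * why the just-woken thread loses: the lock is free from the moment
 * the unlocker stores 0, well before the wakee is back on a CPU.
 */
#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

static atomic_int lock_word;	/* 0 = free, 1 = locked, 2 = contended */

static void toy_lock(void)
{
	int c = 0;

	/* fast path: 0 -> 1, no syscall - the race the waker wins */
	if (atomic_compare_exchange_strong(&lock_word, &c, 1))
		return;
	/* slow path: mark contended, sleep until the word is free again */
	while (atomic_exchange(&lock_word, 2) != 0)
		syscall(SYS_futex, &lock_word, FUTEX_WAIT, 2, NULL, NULL, 0);
}

static void toy_unlock(void)
{
	/* the lock is up for grabs from this store on ... */
	if (atomic_exchange(&lock_word, 0) == 2)
		/* ... and the waiter is woken only afterwards, so any
		 * running thread may steal it via the fast path first */
		syscall(SYS_futex, &lock_word, FUTEX_WAKE, 1, NULL, NULL, 0);
}

int main(void)
{
	toy_lock();	/* uncontended: stays in user space */
	toy_unlock();
	return 0;
}

By the time the wakee returns from FUTEX_WAIT and retries the exchange,
the other thread has usually re-taken the lock through the fast path.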
> diff --git a/kernel/futex.c b/kernel/futex.c
> index 3d38eaf..0b2d17a 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -1545,6 +1545,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
>  	spin_unlock(&hb->lock);
>  	wake_up_q(&wake_q);
> +	_cond_resched();
What's less correct is the placement of the cond_resched() which you
patch into the function:

	wake_futex_pi()

wake_futex_pi() is not even remotely involved in this problem, so I
have a hard time understanding how this patch 'solves' it.
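For completeness, a rough sketch of the dispatch (reconstructed, not
quoted verbatim from kernel/futex.c of that era):

	sys_futex(FUTEX_WAKE)      -> do_futex() -> futex_wake()      -> wake_up_q()
	sys_futex(FUTEX_UNLOCK_PI) -> do_futex() -> futex_unlock_pi() -> wake_futex_pi()

A plain mutex only ever takes the first chain, so a cond_resched()
added to wake_futex_pi() is never executed on its behalf.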
Thanks,
tglx