Message-ID: <5149C76D.7090409@redhat.com>
Date: Wed, 20 Mar 2013 10:27:57 -0400
From: Rik van Riel <riel@...hat.com>
To: Davidlohr Bueso <davidlohr.bueso@...com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Emmanuel Benisty <benisty.e@...il.com>,
"Vinod, Chegu" <chegu_vinod@...com>,
"Low, Jason" <jason.low2@...com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>, aquini@...hat.com,
Michel Lespinasse <walken@...gle.com>,
Ingo Molnar <mingo@...nel.org>,
Larry Woodman <lwoodman@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH v2 4/4] ipc: sem: do not hold ipc lock more than necessary
On 03/05/2013 04:36 AM, Davidlohr Bueso wrote:
> @@ -1476,8 +1539,8 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
> queue.sleeper = current;
>
> sleep_again:
> - current->state = TASK_INTERRUPTIBLE;
> sem_unlock(sma);
> + current->state = TASK_INTERRUPTIBLE;
>
> if (timeout)
> jiffies_left = schedule_timeout(jiffies_left);
After modifying my test case so that every semaphore starts with a
value of 1, and each process does a down followed by an up (so that
only one process holds each semaphore at a time), I started seeing
lost wakeups and the test case getting stuck.
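For clarity, the per-semaphore pattern in the test is essentially the
following (a rough sketch of what I mean, not the actual harness; set
creation, SETVAL initialization and error handling are elided):

    #include <sys/sem.h>

    /* every semaphore starts at 1, and each process does a down
     * immediately followed by an up, so at most one process holds
     * any given semaphore at a time */
    static void down_up(int semid, unsigned short semnum)
    {
            struct sembuf down = { semnum, -1, 0 };
            struct sembuf up   = { semnum, +1, 0 };

            semop(semid, &down, 1); /* may block in semtimedop() */
            semop(semid, &up, 1);   /* wakes the next sleeper */
    }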
I believe the change above is the cause of that issue.
By unlocking before setting current->state to TASK_INTERRUPTIBLE,
we open a small window in which the next lock holder can grab the
lock and wake us up before we have set ourselves to
TASK_INTERRUPTIBLE and gone to sleep. That wakeup sets us to
TASK_RUNNING, the subsequent state write overwrites it, and we then
sleep with nobody left to wake us.
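The two orderings, simplified from the semtimedop() path (schedule()
shown for clarity; timeout handling elided):

    /* patched (broken) ordering: */
    sem_unlock(sma);
    /* window: the next lock holder can complete our ops and call
     * wake_up_process() here; that sets TASK_RUNNING, but the
     * write below overwrites it... */
    current->state = TASK_INTERRUPTIBLE;
    schedule();     /* ...so we sleep with nobody left to wake us */

    /* original (safe) ordering: */
    current->state = TASK_INTERRUPTIBLE;
    sem_unlock(sma);
    /* a wakeup in this window sets us back to TASK_RUNNING, so
     * schedule() returns immediately instead of blocking */
    schedule();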
I have reverted your change in my code and am building a test kernel
now.
If things work, I'll clean up the whole patch series for a re-posting
today.
--
All rights reversed