Message-ID: <CA+icZUU5TQEH-SX9G97RFqfgUs1i2YHPU=HvUOY+YDKrU4RNzQ@mail.gmail.com>
Date: Wed, 4 Sep 2013 00:46:50 +0200
From: Sedat Dilek <sedat.dilek@...il.com>
To: Vineet Gupta <Vineet.Gupta1@...opsys.com>
Cc: Manfred Spraul <manfred@...orfullife.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Davidlohr Bueso <dave.bueso@...il.com>,
Davidlohr Bueso <davidlohr.bueso@...com>,
linux-next <linux-next@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>, Andi Kleen <andi@...stfloor.org>,
Rik van Riel <riel@...hat.com>,
Jonathan Gonzalez <jgonzalez@...ets.cl>
Subject: Re: ipc msg now works (was Re: ipc-msg broken again on 3.11-rc7?)
On Tue, Sep 3, 2013 at 12:32 PM, Vineet Gupta
<Vineet.Gupta1@...opsys.com> wrote:
> On 09/03/2013 03:47 PM, Manfred Spraul wrote:
>> Hi Vineet,
>>
>> On 09/03/2013 11:51 AM, Vineet Gupta wrote:
>>> On 09/03/2013 02:53 PM, Manfred Spraul wrote:
>>>> The access to msq->q_cbytes is not protected.
>>>>
>>>> Vineet, could you try to move the test for free space after ipc_lock?
>>>> I.e. the lock must not get dropped between testing for free space and
>>>> enqueueing the messages.
>>> Hmm, the code movement is not trivial. I broke even the simplest of cases (patch
>>> attached). This includes the additional change which Linus/Davidlohr had asked for.
>> The attached patch should work. Could you try it?
>>
>
> Yes this did the trick, now the default config of 100k iterations + 16 processes
> runs to completion.
>
Manfred's patch "ipc/msg.c: Fix lost wakeup in msgsnd()." is now upstream.
- Sedat -
[1] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=bebcb928c820d0ee83aca4b192adc195e43e66a2
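For readers following along: the bug Manfred describes is the classic lost-wakeup pattern, where the sender tests for free space without holding the lock that protects the queue, so a receiver can free space (and issue its wakeup) in the window before the sender goes to sleep. Below is a minimal userspace sketch of the correct pattern using pthreads; it is not the kernel code, and names such as `queue_bytes`, `QUEUE_MAX`, `msg_send`, and `msg_recv` are illustrative assumptions, not identifiers from ipc/msg.c.

```c
/* Sketch: why the free-space test and the enqueue must happen under
 * one lock, with no drop in between (cf. "lost wakeup in msgsnd()").
 * All names here are illustrative, not taken from the kernel source. */
#include <pthread.h>

#define QUEUE_MAX 4                 /* analogous to msg_qbytes */

static pthread_mutex_t q_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_space = PTHREAD_COND_INITIALIZER;
static int queue_bytes = 0;         /* analogous to msq->q_cbytes */

/* Correct sender: test for free space while holding q_lock.
 * pthread_cond_wait() drops the lock and sleeps atomically, so a
 * receiver cannot free space and signal in between the test and the
 * sleep -- the window Manfred's patch closes in msgsnd(). */
static void msg_send(int bytes)
{
    pthread_mutex_lock(&q_lock);
    while (queue_bytes + bytes > QUEUE_MAX)
        pthread_cond_wait(&q_space, &q_lock);
    queue_bytes += bytes;           /* enqueue under the same lock */
    pthread_mutex_unlock(&q_lock);
}

/* Receiver: free space and wake blocked senders, all under q_lock. */
static void msg_recv(int bytes)
{
    pthread_mutex_lock(&q_lock);
    queue_bytes -= bytes;
    pthread_cond_broadcast(&q_space);
    pthread_mutex_unlock(&q_lock);
}
```

The buggy shape is testing `queue_bytes + bytes > QUEUE_MAX` before taking `q_lock` (or after dropping it) and only then sleeping: the receiver's broadcast can fire in that gap and the sender sleeps forever, which matches the hang Vineet saw under 16 concurrent processes.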