Message-ID: <bcf68a7f-d860-2c7e-69ed-341eb81c69e6@colorfullife.com>
Date: Sat, 2 Dec 2017 07:03:30 +0100
From: Manfred Spraul <manfred@...orfullife.com>
To: Davidlohr Bueso <dave@...olabs.net>,
Philippe Mikoyan <philippe.mikoyan@...t.systems>
Cc: akpm@...ux-foundation.org, viro@...iv.linux.org.uk,
linux-kernel@...r.kernel.org, edgar.kaziakhmedov@...tuozzo.com
Subject: Re: [PATCH 2/2] ipc: Fix ipc data structures inconsistency
Hi,
On 12/01/2017 06:20 PM, Davidlohr Bueso wrote:
> On Thu, 30 Nov 2017, Philippe Mikoyan wrote:
>
>> As described in the title, this patch fixes <ipc>id_ds inconsistency
>> when <ipc>ctl_stat runs concurrently with some ds-changing function,
>> e.g. shmat, msgsnd or whatever.
>>
>> For instance, if shmctl(IPC_STAT) is running concurrently with shmat,
>> the following data structure can be returned:
>> {... shm_lpid = 0, shm_nattch = 1, ...}
>
The patch appears to be good. I'll try to run some tests, but I'm not
sure when I will be able to.
Especially, I don't know the shm code well enough to immediately check
the change you make to shm_nattch.
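Something along these lines might reproduce it (an untested sketch,
error handling omitted):

        /* Untested sketch: repeatedly race a fresh segment's first
         * shmat() against shmctl(IPC_STAT) and look for the impossible
         * snapshot (shm_nattch != 0 while shm_lpid is still 0).
         */
        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
                for (int i = 0; i < 100000; i++) {
                        struct shmid_ds ds;
                        int hit = 0;
                        int id = shmget(IPC_PRIVATE, 4096,
                                        IPC_CREAT | 0600);

                        if (fork() == 0) {
                                /* first attach: updates shm_nattch and
                                 * shm_lpid, racing with IPC_STAT */
                                shmat(id, NULL, 0);
                                _exit(0);
                        }
                        for (int j = 0; j < 100; j++) {
                                shmctl(id, IPC_STAT, &ds);
                                if (ds.shm_nattch != 0 &&
                                    ds.shm_lpid == 0) {
                                        printf("race: nattch=%lu, lpid=0\n",
                                               (unsigned long)ds.shm_nattch);
                                        hit = 1;
                                        break;
                                }
                        }
                        wait(NULL);
                        shmctl(id, IPC_RMID, NULL);
                        if (hit)
                                return 1;
                }
                return 0;
        }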
And, perhaps as side information: there appears to be a use-after-free
in shm; I just got a second mail about it from syzbot:
http://lkml.iu.edu/hypermail/linux/kernel/1702.3/02480.html
> Hmm yeah that's pretty fishy, also shm_atime = 0, no?
>
> So I think this patch is fine, as we can obviously race at the user
> level. This is another justification for converting the ipc lock to a
> rwlock; performance-wise they are pretty much the same (being
> queued)... but that's irrelevant to this patch. I like that you manage
> to still do the security and related checks only under rcu, the way
> all ipc calls work; *_stat() is no longer special.
>
I don't like rwlocks: they add complexity without reducing cache line
pressure.
What I would like to try is to create a mutex_lock_rcu() function, and
then convert everything to a mutex.
As pseudocode, using the real rcu/mutex primitives (getref()/putref()
stand in for the object's reference counting):

        rcu_read_lock();
        ipcp = idr_find(&ids->ipcs_idr, id);
        if (!mutex_trylock(&ipcp->lock)) {
                /* contended: pin the object so it cannot be freed,
                 * then sleep outside the rcu read section */
                getref(ipcp);
                rcu_read_unlock();
                mutex_lock(&ipcp->lock);
                putref(ipcp);
        } else {
                rcu_read_unlock();
        }
Obviously, the getref would then move into the mutex framework itself,
i.e. the reference would only be taken if mutex_lock() really has to
sleep.
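Roughly, such a helper could look like this (the name and the
getref/putref hooks are made up, this is not existing API):

        /*
         * Sketch only: hypothetical mutex_lock_rcu().  The caller holds
         * rcu_read_lock(); on return the mutex is held and the rcu read
         * section has been left.  getref/putref pin the object across
         * the sleeping slow path.
         */
        static void mutex_lock_rcu(struct mutex *lock, void *obj,
                                   void (*getref)(void *),
                                   void (*putref)(void *))
        {
                if (mutex_trylock(lock)) {
                        /* fast path: lock acquired without sleeping */
                        rcu_read_unlock();
                        return;
                }
                /* slow path: we may sleep, take a reference first */
                getref(obj);
                rcu_read_unlock();
                mutex_lock(lock);
                putref(obj);
        }

A real version would also have to recheck that the object was not
deleted while we slept, similar to the perm.deleted check that
ipc_lock() does today.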
If the code in ipc gets significantly simpler, then perhaps convert it
to an rw mutex.