Message-ID: <58c2536e-aebd-e21e-6d78-003dcd10443d@colorfullife.com>
Date: Sun, 18 Sep 2016 07:10:45 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 4/5] ipc/msg: Lockless security checks for msgsnd

Hi Davidlohr,

> Just as with msgrcv (along with the rest of sysvipc since a few years
> ago), perform the security checks without holding the ipc object lock.
Thinking about it: isn't this wrong?

CPU1:
* msgrcv()
* ipcperms()
<sleep>

CPU2:
* msgctl(), change permissions
** msgctl() returns, new permissions should now be in effect
* msgsnd(), send secret message
** msgsnd() returns, new message stored.

CPU1: resumes, receives secret message
Obviously, we could argue that the msgrcv() was already ongoing and
therefore the old permissions still apply - but then we don't need to
recheck after sleeping at all.
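
To make the window concrete, here is a hypothetical (untested) userspace
sketch of that ordering - the mode, message type and message text are made
up, and the two functions would have to run in separate processes owned by
different users:

#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>
#include <string.h>

struct secret_msg { long mtype; char mtext[64]; };

/* Process 1 ("CPU1" above): not the owner, currently allowed to read. */
static void receiver(int msqid)
{
        struct secret_msg m;

        /* ipcperms() passes here, then the task sleeps on the empty queue */
        if (msgrcv(msqid, &m, sizeof(m.mtext), 1, 0) >= 0)
                printf("received: %s\n", m.mtext);  /* secret leaks here */
}

/* Process 2 ("CPU2" above): the queue owner, runs while process 1 sleeps. */
static void owner(int msqid)
{
        struct msqid_ds ds;
        struct secret_msg m = { .mtype = 1 };

        msgctl(msqid, IPC_STAT, &ds);
        ds.msg_perm.mode = 0600;                /* revoke the other user's access */
        msgctl(msqid, IPC_SET, &ds);            /* new permissions "in effect"    */

        strcpy(m.mtext, "secret");
        msgsnd(msqid, &m, sizeof(m.mtext), 0);  /* stored anyway; the sleeping    */
                                                /* receiver is woken and gets it  */
}

If process 1 is already sleeping in msgrcv() when process 2 runs, the message
is delivered even though the IPC_SET came first.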
> This also reduces the hogging of the lock for the entire duration of a
> sender, as we drop the lock upon every iteration -- and this is
> exactly
> why we also check for racing with RMID in the first place.
Which hogging do you mean? The lock is already dropped on every iteration,
with the schedule() in the middle.

With your patch, the lock is now dropped twice per iteration:
> -
> for (;;) {
> struct msg_sender s;
>
> err = -EACCES;
> if (ipcperms(ns, &msq->q_perm, S_IWUGO))
> - goto out_unlock0;
> + goto out_unlock1;
> +
> + ipc_lock_object(&msq->q_perm);
>
> /* raced with RMID? */
> if (!ipc_valid_object(&msq->q_perm)) {
> @@ -681,6 +681,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
> goto out_unlock0;
> }
>
> + ipc_unlock_object(&msq->q_perm);
> }
>
>
This means the lock is dropped and re-acquired just for ipcperms(), which
doubles the lock acquire/release cycles per iteration.
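
For reference, roughly the per-iteration lock choreography that results, as
I read the patch (a simplified sketch of the do_msgsnd() loop, not the
literal code; msg_fits_inqueue() is just a placeholder name for the "does
the message fit" test, and the post-wakeup error handling is omitted):

for (;;) {
        err = -EACCES;
        if (ipcperms(ns, &msq->q_perm, S_IWUGO))  /* unlocked, RCU only   */
                goto out_unlock1;

        ipc_lock_object(&msq->q_perm);            /* acquire #1           */

        if (!ipc_valid_object(&msq->q_perm)) {    /* raced with RMID?     */
                err = -EIDRM;
                goto out_unlock0;
        }

        if (msg_fits_inqueue(msq, msgsz))         /* placeholder for the  */
                break;                            /* space check          */

        /* queue is full: register as a sender and go to sleep */
        ss_add(msq, &s, msgsz);
        ipc_unlock_object(&msq->q_perm);          /* release #1           */
        rcu_read_unlock();

        schedule();

        rcu_read_lock();
        ipc_lock_object(&msq->q_perm);            /* acquire #2           */
        /* (wakeup error handling omitted) */
        ipc_unlock_object(&msq->q_perm);          /* release #2: only so  */
}                                                 /* the next ipcperms()  */
                                                  /* runs unlocked        */

Before the patch, only the release/acquire pair around schedule() was there;
the second pair exists solely so that ipcperms() can run without the lock.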
--
Manfred