Date:   Wed, 21 Sep 2016 15:21:22 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     Manfred Spraul <manfred@...orfullife.com>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        akpm@...ux-foundation.org
Subject: Re: [PATCH 4/5] ipc/msg: Lockless security checks for msgsnd

On Sun, 18 Sep 2016, Manfred Spraul wrote:

>>    Just as with msgrcv (along with the rest of sysvipc for a few years
>>    now), perform the security checks without holding the ipc object lock.
>Thinking about it: isn't this wrong?
>
>CPU1:
>* msgrcv()
>* ipcperms()
><sleep>
>
>CPU2:
>* msgctl(), change permissions
>** msgctl() returns, new permissions should now be in effect
>* msgsnd(), send secret message
>** msgsnd() returns, new message stored.
>
>CPU1: resumes, receives secret message

Hmm, wouldn't this apply to everything IPC_SET touches? We do lockless
ipcperms() all over the place.
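
For reference, the shape of that pattern is roughly the following (a
simplified sketch of the msgsnd path with this patch applied; names as
in ipc/msg.c, error handling and most details elided):

	rcu_read_lock();
	msq = msq_obtain_object_check(ns, msqid);	/* rcu-protected lookup */

	err = -EACCES;
	if (ipcperms(ns, &msq->q_perm, S_IWUGO))	/* no object lock held */
		goto out_unlock1;

	ipc_lock_object(&msq->q_perm);			/* now serialize */
	if (!ipc_valid_object(&msq->q_perm)) {		/* raced with RMID? */
		err = -EIDRM;
		goto out_unlock0;
	}

So an IPC_SET that sneaks in between the ipcperms() call and
ipc_lock_object() is simply not seen by the task doing the check.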

>Obviously, we could argue that the msgrcv() was already ongoing and 
>therefore the old permissions still apply - but then we don't need to 
>recheck after sleeping at all.

There is that, and furthermore we make no such guarantees under concurrency.
Another way of looking at it would be to have IPC_SET return EPERM while
there's an unserviced msgrcv -- but I'm not suggesting we do that, btw ;)

>
>>    This also reduces the hogging of the lock for the entire duration of a
>>    sender, as we drop the lock upon every iteration -- and this is exactly
>>    why we also check for racing with RMID in the first place.
>
>Which hogging do you mean? The lock is dropped upon every iteration;
>the schedule() is in the middle.
>With your patch, the lock is now dropped twice:
>>-
>>  	for (;;) {
>>  		struct msg_sender s;
>>  		err = -EACCES;
>>  		if (ipcperms(ns, &msq->q_perm, S_IWUGO))
>>-			goto out_unlock0;
>>+			goto out_unlock1;
>>+
>>+		ipc_lock_object(&msq->q_perm);
>>  		/* raced with RMID? */
>>  		if (!ipc_valid_object(&msq->q_perm)) {
>>@@ -681,6 +681,7 @@ long do_msgsnd(int msqid, long mtype, void __user *mtext,
>>  			goto out_unlock0;
>>  		}
>>+		ipc_unlock_object(&msq->q_perm);
>>  	}
>>
>>
>This means the lock is dropped just so that ipcperms() can run without it.
>This doubles the lock acquire/release cycles.

How effective that is depends entirely on the workload and the degree of
contention. That said, I have no problem dropping this patch, although the
lockless check is the standard pattern for all things ipc.
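
Fwiw, to make the tradeoff concrete: per loop iteration we go from a
single lock acquire/release cycle (dropped only around the schedule())
to two. Roughly, the loop shape with this patch would be (a sketch
only, not the exact code):

	for (;;) {
		if (ipcperms(ns, &msq->q_perm, S_IWUGO))  /* lockless check */
			goto out_unlock1;

		ipc_lock_object(&msq->q_perm);		/* cycle 1 */
		/* revalidate against RMID, try to enqueue the message */
		...
		ipc_unlock_object(&msq->q_perm);	/* sleep waiting for */
		schedule();				/* queue space */

		ipc_lock_object(&msq->q_perm);		/* cycle 2 */
		/* raced with RMID while sleeping? */
		ipc_unlock_object(&msq->q_perm);
	}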

Thanks,
Davidlohr
