Message-ID: <20140624183024.GA1258@redhat.com>
Date: Tue, 24 Jun 2014 20:30:24 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Kees Cook <keescook@...omium.org>
Cc: linux-kernel@...r.kernel.org,
Andy Lutomirski <luto@...capital.net>,
Alexei Starovoitov <ast@...mgrid.com>,
"Michael Kerrisk (man-pages)" <mtk.manpages@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Borkmann <dborkman@...hat.com>,
Will Drewry <wad@...omium.org>,
Julien Tinnes <jln@...omium.org>,
David Drysdale <drysdale@...gle.com>,
linux-api@...r.kernel.org, x86@...nel.org,
linux-arm-kernel@...ts.infradead.org, linux-mips@...ux-mips.org,
linux-arch@...r.kernel.org, linux-security-module@...r.kernel.org
Subject: Re: [PATCH v7 3/9] seccomp: introduce writer locking
I am puzzled by the usage of smp_load_acquire().
On 06/23, Kees Cook wrote:
>
> static u32 seccomp_run_filters(int syscall)
> {
> - struct seccomp_filter *f;
> + struct seccomp_filter *f = smp_load_acquire(&current->seccomp.filter);
> struct seccomp_data sd;
> u32 ret = SECCOMP_RET_ALLOW;
>
> /* Ensure unexpected behavior doesn't result in failing open. */
> - if (WARN_ON(current->seccomp.filter == NULL))
> + if (WARN_ON(f == NULL))
> return SECCOMP_RET_KILL;
>
> populate_seccomp_data(&sd);
> @@ -186,9 +186,8 @@ static u32 seccomp_run_filters(int syscall)
> * All filters in the list are evaluated and the lowest BPF return
> * value always takes priority (ignoring the DATA).
> */
> - for (f = current->seccomp.filter; f; f = f->prev) {
> + for (; f; f = smp_load_acquire(&f->prev)) {
> u32 cur_ret = SK_RUN_FILTER(f->prog, (void *)&sd);
> -
> if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
> ret = cur_ret;
OK, in this case the 1st one is probably fine, although it is not
clear to me why it is better than read_barrier_depends().
But why do we need a 2nd one inside the loop? And if we actually need
it (I don't think so), then why is it safe to use f->prog without
load_acquire?
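
IOW, I would expect something like the (untested) sketch below: a single
acquire when we pick up the list head, and plain loads of ->prev and ->prog
afterwards, assuming each filter is fully initialised before it is published
as the new head:

	static u32 seccomp_run_filters(int syscall)
	{
		struct seccomp_filter *f =
			smp_load_acquire(&current->seccomp.filter);
		struct seccomp_data sd;
		u32 ret = SECCOMP_RET_ALLOW;

		/* Ensure unexpected behavior doesn't result in failing open. */
		if (WARN_ON(f == NULL))
			return SECCOMP_RET_KILL;

		populate_seccomp_data(&sd);

		for (; f; f = f->prev) {
			/* plain loads of ->prev/->prog, no extra acquire */
			u32 cur_ret = SK_RUN_FILTER(f->prog, (void *)&sd);

			if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
				ret = cur_ret;
		}
		return ret;
	}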
> void get_seccomp_filter(struct task_struct *tsk)
> {
> - struct seccomp_filter *orig = tsk->seccomp.filter;
> + struct seccomp_filter *orig = smp_load_acquire(&tsk->seccomp.filter);
> if (!orig)
> return;
This one looks unneeded.
First of all, afaics atomic_inc() should work correctly without any barriers,
otherwise it is buggy. But even this doesn't matter.
With these changes get_seccomp_filter() must be called under ->siglock, so it can't
race with add-filter and thus tsk->seccomp.filter should be stable.
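
IOW (untested sketch, and assuming the caller really does hold ->siglock as
this patch requires), a plain load should be enough here:

	void get_seccomp_filter(struct task_struct *tsk)
	{
		/* ->siglock is held, so ->filter can't change under us */
		struct seccomp_filter *orig = tsk->seccomp.filter;

		if (!orig)
			return;
		/* Reference count is bounded by the number of total processes. */
		atomic_inc(&orig->usage);
	}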
> /* Reference count is bounded by the number of total processes. */
> @@ -361,7 +364,7 @@ void put_seccomp_filter(struct task_struct *tsk)
> /* Clean up single-reference branches iteratively. */
> while (orig && atomic_dec_and_test(&orig->usage)) {
> struct seccomp_filter *freeme = orig;
> - orig = orig->prev;
> + orig = smp_load_acquire(&orig->prev);
> seccomp_filter_free(freeme);
> }
This one looks unneeded too. And note that this patch does not add
smp_load_acquire() to read tsk->seccomp.filter.
atomic_dec_and_test() implies mb(), so we do not need any more barriers to access
->prev, do we?
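
So I'd simply keep the plain load here, something like (untested):

	/* Clean up single-reference branches iteratively. */
	while (orig && atomic_dec_and_test(&orig->usage)) {
		struct seccomp_filter *freeme = orig;
		/*
		 * atomic_dec_and_test() already implies a full barrier,
		 * a plain load of ->prev should be fine.
		 */
		orig = orig->prev;
		seccomp_filter_free(freeme);
	}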
Oleg.