Message-ID: <AANLkTikK=dy6U0QBjkJxZXeqYXVHwZqjmRhnmYBH-22r@mail.gmail.com>
Date: Thu, 2 Dec 2010 10:30:48 +0800
From: Changli Gao <xiaosuo@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>, hagen@...u.net,
wirelesser@...il.com, netdev@...r.kernel.org,
Dan Rosenberg <drosenberg@...curity.com>
Subject: Re: [PATCH net-next-2.6] filter: add a security check at install time
On Thu, Dec 2, 2010 at 4:45 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Wednesday 01 December 2010 at 12:23 -0800, David Miller wrote:
>> From: Eric Dumazet <eric.dumazet@...il.com>
>> Date: Wed, 01 Dec 2010 20:48:57 +0100
>>
>> > On Wednesday 01 December 2010 at 10:44 -0800, David Miller wrote:
>> >> From: Eric Dumazet <eric.dumazet@...il.com>
>> >> Date: Wed, 01 Dec 2010 19:24:53 +0100
>> >>
>> >> > A third work in progress (from my side) is to add a check in
>> >> > sk_chk_filter() to remove the memvalid we added lately to protect the
>> >> > LOAD M(K).
>> >>
>> >> I understand your idea, but the static checkers are still going to
>> >> complain. So better add a huge comment in sk_run_filter() explaining
>> >> why the checker's complaint should be ignored :-)
>> >
>> > Sure, here is the patch I plan to test ASAP
>>
>> Looks good to me.
>
> Yes, it survives the tests I did.
>
> I am submitting the patch and Cc'ing Dan Rosenberg; I would like him to
> double-check it if he likes.
>
> Thanks
>
> [PATCH net-next-2.6] filter: add a security check at install time
>
> We added some security checks in commit 57fe93b374a6
> (filter: make sure filters dont read uninitialized memory) to close a
> potential leak of kernel information to userspace.
>
> This added a potential extra cost at run time, while we can instead perform
> a check of the filter itself at install time, to make sure a malicious user
> doesn't try to abuse us.
>
> This patch adds a check_loads() function, whose sole purpose is to
> perform this check, allocating a temporary array of masks. We scan the
> filter and propagate a bitmask, telling us whether a load M(K) is
> allowed because a previous store M(K) is guaranteed. (So that
> sk_run_filter() never reads uninitialized memory.)
>
> Note: this can uncover an application bug, denying a filter attach that
> was previously allowed.
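
(For illustration only, not part of the patch: a minimal filter hit by the
note above, using the BPF_STMT() macro from <linux/filter.h>. With the
memvalid guard it attaches fine and the load simply reads 0; with
check_loads() the attach fails with -EINVAL, because no ST M(0) ever
precedes the load.)

	struct sock_filter buggy[] = {
		BPF_STMT(BPF_LD | BPF_MEM, 0),	/* A = M[0], never written */
		BPF_STMT(BPF_RET | BPF_A, 0),	/* return A */
	};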
>
> Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
> Cc: Dan Rosenberg <drosenberg@...curity.com>
> ---
> net/core/filter.c | 70 ++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 61 insertions(+), 9 deletions(-)
>
> diff --git a/net/core/filter.c b/net/core/filter.c
> index a44d27f..00a0d50 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -166,11 +166,9 @@ unsigned int sk_run_filter(struct sk_buff *skb, const struct sock_filter *fentry
> u32 A = 0; /* Accumulator */
> u32 X = 0; /* Index Register */
> u32 mem[BPF_MEMWORDS]; /* Scratch Memory Store */
> - unsigned long memvalid = 0;
> u32 tmp;
> int k;
>
> - BUILD_BUG_ON(BPF_MEMWORDS > BITS_PER_LONG);
> /*
> * Process array of filter instructions.
> */
> @@ -318,12 +316,10 @@ load_b:
> X = K;
> continue;
> case BPF_S_LD_MEM:
> - A = (memvalid & (1UL << K)) ?
> - mem[K] : 0;
> + A = mem[K];
> continue;
> case BPF_S_LDX_MEM:
> - X = (memvalid & (1UL << K)) ?
> - mem[K] : 0;
> + X = mem[K];
> continue;
> case BPF_S_MISC_TAX:
> X = A;
> @@ -336,11 +332,9 @@ load_b:
> case BPF_S_RET_A:
> return A;
> case BPF_S_ST:
> - memvalid |= 1UL << K;
> mem[K] = A;
> continue;
> case BPF_S_STX:
> - memvalid |= 1UL << K;
> mem[K] = X;
> continue;
> default:
> @@ -419,6 +413,64 @@ load_b:
> }
> EXPORT_SYMBOL(sk_run_filter);
>
> +/*
> + * Security:
> + * A BPF program is able to use 16 cells of memory to store intermediate
> + * values (check u32 mem[BPF_MEMWORDS] in sk_run_filter())
> + * As we don't want to clear the mem[] array for each packet going through
> + * sk_run_filter(), we check that the filter loaded by the user never tries
> + * to read a cell that was not previously written, and we check all branches
> + * to be sure a malicious user doesn't try to abuse us.
> + */
> +static int check_loads(struct sock_filter *filter, int flen)
> +{
> + u16 *masks, memvalid = 0; /* one bit per cell, 16 cells */
> + int pc, ret = 0;
> +
> + BUILD_BUG_ON(BPF_MEMWORDS > 16);
> + masks = kmalloc(flen * sizeof(*masks), GFP_KERNEL);
> + if (!masks)
> + return -ENOMEM;
> + memset(masks, 0xff, flen * sizeof(*masks));
> +
> + for (pc = 0; pc < flen; pc++) {
> + memvalid &= masks[pc];
> +
It seems wrong. Think about the following instructions:

        /* m[1] isn't set */
        jeq  jt, jf
jt:     st   m[1]
        jmp  ja
jf:     jmp  ja2        /* m[1] is invalidated by masks */
ja:     ld   m[1]       /* -EINVAL is returned */
ja2:
        ...

The load at "ja" is only reachable through "jt", where m[1] has been stored,
yet the linear scan carries the memvalid cleared at "jf" straight into "ja"
and rejects the filter. So you need to search all the possible branches to
validate the instructions.
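
For illustration only (my sketch, not part of the patch): the same layout
written as a classic BPF program with the BPF_STMT()/BPF_JUMP() macros from
<linux/filter.h>. The compared constant and the final return are arbitrary
filler; only the branch structure matters.

	struct sock_filter prog[] = {
		/* 0: if (A == 0) fall through to 1 (jt), else jump to 3 (jf) */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0, 0, 2),
		/* 1: jt: M[1] = A */
		BPF_STMT(BPF_ST, 1),
		/* 2: jmp to 4 (ja), skipping the jf block */
		BPF_STMT(BPF_JMP | BPF_JA, 1),
		/* 3: jf: jmp to 5 (ja2), never falls through to the load */
		BPF_STMT(BPF_JMP | BPF_JA, 1),
		/* 4: ja: A = M[1], only reachable via 2, i.e. after the store */
		BPF_STMT(BPF_LD | BPF_MEM, 1),
		/* 5: ja2: */
		BPF_STMT(BPF_RET | BPF_K, 0),
	};

Tracing check_loads() on it: the JEQ at 0 runs with memvalid == 0, so it
clears masks[1] and masks[3]; when the scan reaches 3, memvalid is zeroed
again, and since masks[] can only clear bits, the store recorded at 1 is
forgotten by the time the scan reaches 4, so the LD of M[1] gets -EINVAL.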
> + switch (filter[pc].code) {
> + case BPF_S_ST:
> + case BPF_S_STX:
> + memvalid |= (1 << filter[pc].k);
> + break;
> + case BPF_S_LD_MEM:
> + case BPF_S_LDX_MEM:
> + if (!(memvalid & (1 << filter[pc].k))) {
> + ret = -EINVAL;
> + goto error;
> + }
> + break;
> + case BPF_S_JMP_JA:
> + /* a jump must set masks on target */
> + masks[pc + 1 + filter[pc].k] &= memvalid;
> + break;
> + case BPF_S_JMP_JEQ_K:
> + case BPF_S_JMP_JEQ_X:
> + case BPF_S_JMP_JGE_K:
> + case BPF_S_JMP_JGE_X:
> + case BPF_S_JMP_JGT_K:
> + case BPF_S_JMP_JGT_X:
> + case BPF_S_JMP_JSET_X:
> + case BPF_S_JMP_JSET_K:
> + /* a jump must set masks on targets */
> + masks[pc + 1 + filter[pc].jt] &= memvalid;
> + masks[pc + 1 + filter[pc].jf] &= memvalid;
> + break;
> + }
> + }
> +error:
> + kfree(masks);
> + return ret;
> +}
> +
> /**
> * sk_chk_filter - verify socket filter code
> * @filter: filter to verify
> @@ -547,7 +599,7 @@ int sk_chk_filter(struct sock_filter *filter, int flen)
> switch (filter[flen - 1].code) {
> case BPF_S_RET_K:
> case BPF_S_RET_A:
> - return 0;
> + return check_loads(filter, flen);
> }
> return -EINVAL;
> }
>
>
>
--
Regards,
Changli Gao(xiaosuo@...il.com)