Message-ID: <1291280402.2871.20.camel@edumazet-laptop>
Date: Thu, 02 Dec 2010 10:00:02 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Changli Gao <xiaosuo@...il.com>
Cc: David Miller <davem@...emloft.net>, hagen@...u.net,
wirelesser@...il.com, netdev@...r.kernel.org,
Dan Rosenberg <drosenberg@...curity.com>
Subject: Re: [PATCH net-next-2.6] filter: add a security check at install time
On Thursday, 02 December 2010 at 09:53 +0100, Eric Dumazet wrote:
> On Thursday, 02 December 2010 at 16:11 +0800, Changli Gao wrote:
>
> > It seems correct to me now.
> >
> > Acked-by: Changli Gao <xiaosuo@...il.com>
> >
>
> Thanks for reviewing, Changli.
>
> Now I am thinking about not denying the filter installation, but
> changing the problematic LOAD M(1) and LOADX M(1) to LOADI #0 (BPF_S_LD_IMM
> K=0) and LOADIX #0 (BPF_S_LDX_IMM K=0)
>
> (i.e. pretend the value of the memory cell is 0, not a random value taken
> from the stack)
>
>
> [PATCH v3 net-next-2.6] filter: add a security check at install time
Doh, I sent a version with the old (V1) check_load_and_stores() name; here
is a V4 with the shorter name check_loads(), as mentioned in the changelog.
Sorry for the mess.

[PATCH v4 net-next-2.6] filter: add a security check at install time

We added some security checks in commit 57fe93b374a6
(filter: make sure filters dont read uninitialized memory) to close a
potential leak of kernel information to user space.

This added a potential extra cost at run time, while we can instead check
the filter itself once, at install time, to make sure a malicious user
doesn't try to abuse us.

This patch adds a check_loads() function, whose sole purpose is to
perform this check, allocating a temporary array of masks. We scan the
filter and propagate bitmask information, telling us whether a load M(K)
is allowed because a previous store M(K) is guaranteed on every path
reaching it.

If we detect a problematic load M(K), we replace it by a load of
immediate value 0.
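
As an illustration (this example is made up, not part of the patch), take
this small filter, written with the BPF_STMT()/BPF_JUMP() macros from
linux/filter.h :

	struct sock_filter prog[] = {
		BPF_STMT(BPF_LD | BPF_IMM, 1),                /* 0: A = 1 */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 1, 0, 1), /* 1: if (A == 1) goto 2 else goto 3 */
		BPF_STMT(BPF_ST, 0),                          /* 2: mem[0] = A */
		BPF_STMT(BPF_LD | BPF_MEM, 0),                /* 3: A = mem[0] */
		BPF_STMT(BPF_RET | BPF_A, 0),                 /* 4: return A */
	};

Instruction 3 is reachable from the false branch of instruction 1, a path
on which mem[0] was never written, so the intersected mask for pc 3 lacks
bit 0 and check_loads() rewrites that load into BPF_S_LD_IMM with K = 0.
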
Signed-off-by: Eric Dumazet <eric.dumazet@...il.com>
Cc: Dan Rosenberg <drosenberg@...curity.com>
Cc: Changli Gao <xiaosuo@...il.com>
---
v4: really use check_loads(), not check_load_and_stores()
v3: replace problematic loads M(K) by a load of immediate 0 value;
don't report an error to the user.
v2: set memvalid to ~0 on JMP instructions
net/core/filter.c | 78 ++++++++++++++++++++++++++++++++++++++------
1 file changed, 69 insertions(+), 9 deletions(-)
diff --git a/net/core/filter.c b/net/core/filter.c
index a44d27f..2bd7dbc 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -166,11 +166,9 @@ unsigned int sk_run_filter(struct sk_buff *skb, const struct sock_filter *fentry
u32 A = 0; /* Accumulator */
u32 X = 0; /* Index Register */
u32 mem[BPF_MEMWORDS]; /* Scratch Memory Store */
- unsigned long memvalid = 0;
u32 tmp;
int k;
- BUILD_BUG_ON(BPF_MEMWORDS > BITS_PER_LONG);
/*
* Process array of filter instructions.
*/
@@ -318,12 +316,10 @@ load_b:
X = K;
continue;
case BPF_S_LD_MEM:
- A = (memvalid & (1UL << K)) ?
- mem[K] : 0;
+ A = mem[K];
continue;
case BPF_S_LDX_MEM:
- X = (memvalid & (1UL << K)) ?
- mem[K] : 0;
+ X = mem[K];
continue;
case BPF_S_MISC_TAX:
X = A;
@@ -336,11 +332,9 @@ load_b:
case BPF_S_RET_A:
return A;
case BPF_S_ST:
- memvalid |= 1UL << K;
mem[K] = A;
continue;
case BPF_S_STX:
- memvalid |= 1UL << K;
mem[K] = X;
continue;
default:
@@ -419,6 +413,72 @@ load_b:
}
EXPORT_SYMBOL(sk_run_filter);
+/*
+ * Security:
+ * A BPF program is able to use 16 cells of memory to store intermediate
+ * values (see u32 mem[BPF_MEMWORDS] in sk_run_filter()).
+ * As we don't want to clear the mem[] array for each packet going through
+ * sk_run_filter(), we check that a filter loaded by the user never tries
+ * to read a cell that was not previously written, and we check all
+ * branches to be sure a malicious user doesn't try to abuse us.
+ * If such a malicious (or buggy) read is detected, it is replaced by a
+ * load of immediate zero value.
+ */
+static int check_loads(struct sock_filter *filter, int flen)
+{
+ u16 *masks, memvalid = 0; /* one bit per cell, 16 cells */
+ int pc;
+
+ BUILD_BUG_ON(BPF_MEMWORDS > 16);
+ masks = kmalloc(flen * sizeof(*masks), GFP_KERNEL);
+ if (!masks)
+ return -ENOMEM;
+ memset(masks, 0xff, flen * sizeof(*masks));
+
+ for (pc = 0; pc < flen; pc++) {
+ memvalid &= masks[pc];
+
+ switch (filter[pc].code) {
+ case BPF_S_ST:
+ case BPF_S_STX:
+ memvalid |= (1 << filter[pc].k);
+ break;
+ case BPF_S_LD_MEM:
+ if (!(memvalid & (1 << filter[pc].k))) {
+ filter[pc].code = BPF_S_LD_IMM;
+ filter[pc].k = 0;
+ }
+ break;
+ case BPF_S_LDX_MEM:
+ if (!(memvalid & (1 << filter[pc].k))) {
+ filter[pc].code = BPF_S_LDX_IMM;
+ filter[pc].k = 0;
+ }
+ break;
+ case BPF_S_JMP_JA:
+ /* a jump must set masks on target */
+ masks[pc + 1 + filter[pc].k] &= memvalid;
+ memvalid = ~0;
+ break;
+ case BPF_S_JMP_JEQ_K:
+ case BPF_S_JMP_JEQ_X:
+ case BPF_S_JMP_JGE_K:
+ case BPF_S_JMP_JGE_X:
+ case BPF_S_JMP_JGT_K:
+ case BPF_S_JMP_JGT_X:
+ case BPF_S_JMP_JSET_X:
+ case BPF_S_JMP_JSET_K:
+ /* a jump must set masks on targets */
+ masks[pc + 1 + filter[pc].jt] &= memvalid;
+ masks[pc + 1 + filter[pc].jf] &= memvalid;
+ memvalid = ~0;
+ break;
+ }
+ }
+ kfree(masks);
+ return 0;
+}
+
/**
* sk_chk_filter - verify socket filter code
* @filter: filter to verify
@@ -547,7 +607,7 @@ int sk_chk_filter(struct sock_filter *filter, int flen)
switch (filter[flen - 1].code) {
case BPF_S_RET_K:
case BPF_S_RET_A:
- return 0;
+ return check_loads(filter, flen);
}
return -EINVAL;
}
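
For anyone wanting to exercise this path from user space, here is a rough
sketch (untested, illustration only, not part of the patch) of a program
attaching a filter whose M[1] load is never preceded by a store; with this
patch the kernel silently rewrites that load to an immediate 0 instead of
letting it read stack garbage:

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <linux/filter.h>

	int main(void)
	{
		struct sock_filter insns[] = {
			BPF_STMT(BPF_LDX | BPF_MEM, 1),  /* X = mem[1], never stored */
			BPF_STMT(BPF_MISC | BPF_TXA, 0), /* A = X */
			BPF_STMT(BPF_RET | BPF_A, 0),    /* accept A bytes of the packet */
		};
		struct sock_fprog prog = {
			.len = sizeof(insns) / sizeof(insns[0]),
			.filter = insns,
		};
		int fd = socket(AF_INET, SOCK_DGRAM, 0);

		if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
					 &prog, sizeof(prog)) < 0)
			perror("SO_ATTACH_FILTER");
		else
			printf("filter attached\n");
		close(fd);
		return 0;
	}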
--