Message-Id: <20180330002100.5724-3-bhole_prashant_q7@lab.ntt.co.jp>
Date: Fri, 30 Mar 2018 09:21:00 +0900
From: Prashant Bhole <bhole_prashant_q7@....ntt.co.jp>
To: Daniel Borkmann <daniel@...earbox.net>,
Alexei Starovoitov <ast@...nel.org>,
"David S . Miller" <davem@...emloft.net>
Cc: Prashant Bhole <bhole_prashant_q7@....ntt.co.jp>,
John Fastabend <john.fastabend@...il.com>,
netdev@...r.kernel.org
Subject: [PATCH v2 bpf-next 2/2] bpf: sockmap: initialize sg table entries properly

When CONFIG_DEBUG_SG is set, sg->sg_magic is initialized in
sg_init_table() and verified by the sg API while navigating the
scatterlist. We hit a BUG_ON when the magic check fails.
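
For reference, the check that fires lives in the scatterlist walk
helpers; sg_next() in lib/scatterlist.c, for example, looks roughly
like this (quoted for context, not part of this patch):

	struct scatterlist *sg_next(struct scatterlist *sg)
	{
	#ifdef CONFIG_DEBUG_SG
		/* trips if the entry was never initialized with SG_MAGIC */
		BUG_ON(sg->sg_magic != SG_MAGIC);
	#endif
		if (sg_is_last(sg))
			return NULL;

		sg++;
		if (unlikely(sg_is_chain(sg)))
			sg = sg_chain_ptr(sg);

		return sg;
	}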

In bpf_tcp_sendpage() and bpf_tcp_sendmsg(), the struct containing the
scatterlist is already zeroed out, so to avoid an extra memset we use
sg_init_marker() to initialize sg_magic.
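
sg_init_marker() (the helper added in patch 1/2 of this series) only
writes the debug magic and the end marker and skips the memset that
sg_init_table() does; roughly:

	static inline void sg_init_marker(struct scatterlist *sgl,
					  unsigned int nents)
	{
	#ifdef CONFIG_DEBUG_SG
		unsigned int i;

		/* satisfy the CONFIG_DEBUG_SG checks without zeroing */
		for (i = 0; i < nents; i++)
			sgl[i].sg_magic = SG_MAGIC;
	#endif
		sg_mark_end(&sgl[nents - 1]);
	}

On an already-zeroed sk_msg_buff this leaves the entries in the same
state sg_init_table() would, minus the redundant memset.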

Fixed the following:
- In bpf_tcp_sendpage: initialize sg using sg_init_marker
- In bpf_tcp_sendmsg: replace sg_init_table with sg_init_marker
- In bpf_tcp_push: replace memset with sg_init_table where the consumed
  sg entry needs to be re-initialized (see the sketch below)
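
The bpf_tcp_push case is different because the consumed entry holds
stale page/length/offset values, so it still needs to be zeroed;
sg_init_table() provides that plus the magic/end-marker setup, since it
is essentially (sketch, not the exact lib/scatterlist.c body):

	void sg_init_table(struct scatterlist *sgl, unsigned int nents)
	{
		memset(sgl, 0, sizeof(*sgl) * nents);
		sg_init_marker(sgl, nents);
	}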
Signed-off-by: Prashant Bhole <bhole_prashant_q7@....ntt.co.jp>
---
kernel/bpf/sockmap.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 69c5bccabd22..b4f01656c452 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -312,7 +312,7 @@ static int bpf_tcp_push(struct sock *sk, int apply_bytes,
 		md->sg_start++;
 		if (md->sg_start == MAX_SKB_FRAGS)
 			md->sg_start = 0;
-		memset(sg, 0, sizeof(*sg));
+		sg_init_table(sg, 1);
 
 		if (md->sg_start == md->sg_end)
 			break;
@@ -656,7 +656,7 @@ static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	}
 
 	sg = md.sg_data;
-	sg_init_table(sg, MAX_SKB_FRAGS);
+	sg_init_marker(sg, MAX_SKB_FRAGS);
 	rcu_read_unlock();
 
 	lock_sock(sk);
@@ -763,10 +763,14 @@ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
 
 	lock_sock(sk);
 
-	if (psock->cork_bytes)
+	if (psock->cork_bytes) {
 		m = psock->cork;
-	else
+		sg = &m->sg_data[m->sg_end];
+	} else {
 		m = &md;
+		sg = m->sg_data;
+		sg_init_marker(sg, MAX_SKB_FRAGS);
+	}
 
 	/* Catch case where ring is full and sendpage is stalled. */
 	if (unlikely(m->sg_end == m->sg_start &&
@@ -774,7 +778,6 @@ static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
 		goto out_err;
 
 	psock->sg_size += size;
-	sg = &m->sg_data[m->sg_end];
 	sg_set_page(sg, page, size, offset);
 	get_page(page);
 	m->sg_copy[m->sg_end] = true;
--
2.14.3