Message-ID: <CAM_iQpVT5=yptx-Q-e5hKvyOJ7+gi1uRLX_KXzcczSrDSA_6Dw@mail.gmail.com>
Date: Thu, 11 Mar 2021 16:47:45 -0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Jakub Sitnicki <jakub@...udflare.com>
Cc: Linux Kernel Network Developers <netdev@...r.kernel.org>,
bpf <bpf@...r.kernel.org>, duanxiongchun@...edance.com,
Dongdong Wang <wangdongdong.6@...edance.com>,
Jiang Wang <jiang.wang@...edance.com>,
Cong Wang <cong.wang@...edance.com>,
John Fastabend <john.fastabend@...il.com>,
Daniel Borkmann <daniel@...earbox.net>,
Lorenz Bauer <lmb@...udflare.com>
Subject: Re: [Patch bpf-next v4 03/11] skmsg: introduce skb_send_sock() for sock_map
On Thu, Mar 11, 2021 at 3:42 AM Jakub Sitnicki <jakub@...udflare.com> wrote:
>
> On Wed, Mar 10, 2021 at 06:32 AM CET, Cong Wang wrote:
> > From: Cong Wang <cong.wang@...edance.com>
> >
> > We only have skb_send_sock_locked(), which requires callers
> > to use lock_sock(). Introduce a variant skb_send_sock()
> > which locks the socket on its own, so callers no longer
> > need to lock it. This saves us from adding a ->sendmsg_locked
> > for each protocol.
> >
> > To reuse the code, pass function pointers to __skb_send_sock()
> > and build skb_send_sock() and skb_send_sock_locked() on top.
> >
> > Cc: John Fastabend <john.fastabend@...il.com>
> > Cc: Daniel Borkmann <daniel@...earbox.net>
> > Cc: Jakub Sitnicki <jakub@...udflare.com>
> > Cc: Lorenz Bauer <lmb@...udflare.com>
> > Signed-off-by: Cong Wang <cong.wang@...edance.com>
> > ---
> > include/linux/skbuff.h | 1 +
> > net/core/skbuff.c | 52 ++++++++++++++++++++++++++++++++++++------
> > 2 files changed, 46 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> > index 0503c917d773..2fc8c3657c53 100644
> > --- a/include/linux/skbuff.h
> > +++ b/include/linux/skbuff.h
> > @@ -3626,6 +3626,7 @@ int skb_splice_bits(struct sk_buff *skb, struct sock *sk, unsigned int offset,
> > unsigned int flags);
> > int skb_send_sock_locked(struct sock *sk, struct sk_buff *skb, int offset,
> > int len);
> > +int skb_send_sock(struct sock *sk, struct sk_buff *skb, int offset, int len);
> > void skb_copy_and_csum_dev(const struct sk_buff *skb, u8 *to);
> > unsigned int skb_zerocopy_headlen(const struct sk_buff *from);
> > int skb_zerocopy(struct sk_buff *to, struct sk_buff *from,
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index 545a472273a5..396586bd6ae3 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -2500,9 +2500,12 @@ int skb_splice_bits(struct sk_buff *skb, struct sock *sk, unsigned int offset,
> > }
> > EXPORT_SYMBOL_GPL(skb_splice_bits);
> >
> > -/* Send skb data on a socket. Socket must be locked. */
> > -int skb_send_sock_locked(struct sock *sk, struct sk_buff *skb, int offset,
> > - int len)
> > +typedef int (*sendmsg_func)(struct sock *sk, struct msghdr *msg,
> > + struct kvec *vec, size_t num, size_t size);
> > +typedef int (*sendpage_func)(struct sock *sk, struct page *page, int offset,
> > + size_t size, int flags);
> > +static int __skb_send_sock(struct sock *sk, struct sk_buff *skb, int offset,
> > + int len, sendmsg_func sendmsg, sendpage_func sendpage)
> > {
> > unsigned int orig_len = len;
> > struct sk_buff *head = skb;
> > @@ -2522,7 +2525,7 @@ int skb_send_sock_locked(struct sock *sk, struct sk_buff *skb, int offset,
> > memset(&msg, 0, sizeof(msg));
> > msg.msg_flags = MSG_DONTWAIT;
> >
> > - ret = kernel_sendmsg_locked(sk, &msg, &kv, 1, slen);
> > + ret = sendmsg(sk, &msg, &kv, 1, slen);
>
>
> Maybe use INDIRECT_CALLABLE_DECLARE() and INDIRECT_CALL_2() since there
> are just two possibilities? Same for sendpage below.
Yeah. I actually wanted to call __skb_send_sock() in espintcp for
tcp_sendmsg(), but it could be TCP over IPv6 too, so I decided
not to touch it.
Thanks.