Message-ID: <ace8e72488fbf2473efaed9fc0680886897939ab.camel@redhat.com>
Date: Wed, 25 Mar 2020 12:52:11 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>,
"David S . Miller" <davem@...emloft.net>
Cc: netdev <netdev@...r.kernel.org>,
Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH net-next] net: use indirect call wrappers for
skb_copy_datagram_iter()
On Tue, 2020-03-24 at 19:23 -0700, Eric Dumazet wrote:
> TCP recvmsg() calls skb_copy_datagram_iter(), which
> calls an indirect function (cb pointing to simple_copy_to_iter())
> for every MSS (fragment) present in the skb.
>
> CONFIG_RETPOLINE=y forces a very expensive operation
> that we can avoid thanks to indirect call wrappers.
>
> This patch gives a 13% increase of performance on
> a single flow, if the bottleneck is the thread reading
> the TCP socket.
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> ---
> net/core/datagram.c | 14 +++++++++++---
> 1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/net/core/datagram.c b/net/core/datagram.c
> index 4213081c6ed3d4fda69501641a8c76e041f26b42..639745d4f3b94a248da9a685f45158410a85bec7 100644
> --- a/net/core/datagram.c
> +++ b/net/core/datagram.c
> @@ -51,6 +51,7 @@
> #include <linux/slab.h>
> #include <linux/pagemap.h>
> #include <linux/uio.h>
> +#include <linux/indirect_call_wrapper.h>
>
> #include <net/protocol.h>
> #include <linux/skbuff.h>
> @@ -403,6 +404,11 @@ int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags)
> }
> EXPORT_SYMBOL(skb_kill_datagram);
>
> +INDIRECT_CALLABLE_DECLARE(static size_t simple_copy_to_iter(const void *addr,
> + size_t bytes,
> + void *data __always_unused,
> + struct iov_iter *i));
> +
> static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
> struct iov_iter *to, int len, bool fault_short,
> size_t (*cb)(const void *, size_t, void *,
> @@ -416,7 +422,8 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
> if (copy > 0) {
> if (copy > len)
> copy = len;
> - n = cb(skb->data + offset, copy, data, to);
> + n = INDIRECT_CALL_1(cb, simple_copy_to_iter,
> + skb->data + offset, copy, data, to);
> offset += n;
> if (n != copy)
> goto short_copy;
> @@ -438,8 +445,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
>
> if (copy > len)
> copy = len;
> - n = cb(vaddr + skb_frag_off(frag) + offset - start,
> - copy, data, to);
> + n = INDIRECT_CALL_1(cb, simple_copy_to_iter,
> + vaddr + skb_frag_off(frag) + offset - start,
> + copy, data, to);
> kunmap(page);
> offset += n;
> if (n != copy)
I wondered if we could add 'csum_and_copy_to_iter' as a second expected
target, but I guess that is a slower path anyway and more data points
would be needed. The patch LGTM, thanks!
Acked-by: Paolo Abeni <pabeni@...hat.com>