Message-ID: <CA+FuTSeYGYr3Umij+Mezk9CUcaxYwqEe5sPSuXF8jPE2yMFJAw@mail.gmail.com>
Date:   Wed, 5 Feb 2020 16:43:51 -0500
From:   Willem de Bruijn <willemdebruijn.kernel@...il.com>
To:     Yadu Kishore <kyk.segfault@...il.com>
Cc:     Network Development <netdev@...r.kernel.org>
Subject: Re: TCP checksum not offloaded during GSO

On Tue, Feb 4, 2020 at 12:55 AM Yadu Kishore <kyk.segfault@...il.com> wrote:
>
> Hi,
>
> I'm working on enhancing a driver for a network controller that
> supports checksum offloads, so I'm offloading TCP/UDP checksum
> computation in the network driver using NETIF_F_HW_CSUM on Linux
> kernel version 4.19.23 aarch64 for the hikey Android platform. The
> network controller does not support scatter-gather (SG) DMA, hence
> I'm not enabling the NETIF_F_SG feature.
> I see that GSO for TCP is enabled by default in kernel 4.19.23.
> When running iperf TCP traffic I observed that the TCP checksum is
> not offloaded for the majority of the TCP packets. Most of the skbs
> received in the output path in the driver have skb->ip_summed set to
> CHECKSUM_NONE.
> The csum is offloaded only for the initial TCP connection
> establishment packets. For UDP I do not observe this problem.
> It appears that a decision was taken not to offload the TCP csum
> (during GSO) if the network driver does not support SG:
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 02c638a643ea..9c065ac72e87 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -3098,8 +3098,9 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
>  		if (nskb->len == len + doffset)
>  			goto perform_csum_check;
>
> -		if (!sg && !nskb->remcsum_offload) {
> -			nskb->ip_summed = CHECKSUM_NONE;
> +		if (!sg) {
> +			if (!nskb->remcsum_offload)
> +				nskb->ip_summed = CHECKSUM_NONE;
>  		SKB_GSO_CB(nskb)->csum =
>  			skb_copy_and_csum_bits(head_skb, offset,
>  					       skb_put(nskb, len),
>
> The above is a code snippet from the actual commit:
>
> commit 7fbeffed77c130ecf64e8a2f7f9d6d63a9d60a19

This behavior goes back to the original introduction of GSO and
skb_segment, commit f4c50d990dcf ("[NET]: Add software TSOv4").

Without scatter-gather, the data has to be copied from the skb linear
area to the nskb linear area. The code is probably the way it is
because if it has to copy anyway, it might as well perform the
copy-and-checksum optimization.
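To make the copy-and-checksum idea concrete, here is a hypothetical
userspace sketch (it is not the kernel's skb_copy_and_csum_bits(),
which works over skb fragments via csum_partial_copy()): it copies a
buffer while accumulating the 16-bit ones'-complement Internet
checksum in the same pass, so each payload byte is touched only once
instead of once for the copy and again for the checksum.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper, not a kernel API: copy len bytes from src to
 * dst while accumulating the ones'-complement sum over 16-bit words
 * (big-endian word order, odd trailing byte zero-padded), then fold
 * the carries and return the complemented checksum per RFC 1071. */
static uint16_t copy_and_csum(uint8_t *dst, const uint8_t *src, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2) {
		dst[i] = src[i];
		dst[i + 1] = src[i + 1];
		sum += (uint32_t)((src[i] << 8) | src[i + 1]);
	}
	if (i < len) {			/* odd trailing byte */
		dst[i] = src[i];
		sum += (uint32_t)(src[i] << 8);
	}
	while (sum >> 16)		/* fold carries into low 16 bits */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

The single pass is the point: once the no-SG path has committed to
copying the payload into nskb anyway, the checksum comes along almost
for free, which is why skb_segment() stores it in
SKB_GSO_CB(nskb)->csum rather than deferring to hardware.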
