Date: Tue, 26 Mar 2024 09:40:32 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Richard Gobert <richardbgobert@...il.com>, 
 Willem de Bruijn <willemdebruijn.kernel@...il.com>, 
 davem@...emloft.net, 
 edumazet@...gle.com, 
 kuba@...nel.org, 
 pabeni@...hat.com, 
 dsahern@...nel.org, 
 xeb@...l.ru, 
 shuah@...nel.org, 
 idosch@...dia.com, 
 amcohen@...dia.com, 
 petrm@...dia.com, 
 jbenc@...hat.com, 
 bpoirier@...dia.com, 
 b.galvani@...il.com, 
 liujian56@...wei.com, 
 horms@...nel.org, 
 linyunsheng@...wei.com, 
 therbert@...gle.com, 
 netdev@...r.kernel.org, 
 linux-kernel@...r.kernel.org, 
 linux-kselftest@...r.kernel.org
Subject: Re: [PATCH net-next v4 4/4] net: gro: move L3 flush checks to
 tcp_gro_receive

Richard Gobert wrote:
> Willem de Bruijn wrote:
> > In v3 we discussed how the flush on network-layer differences (like
> > TTL or ToS) currently only affects the TCP GRO path, but should
> > apply more broadly.
> > 
> > We agreed that it is fine to leave that to a separate patch series.
> > 
> > But seeing this patch: it not only introduces a lot of churn, it
> > also makes it harder to address that issue for UDP, as it now moves
> > network-layer checks directly into the TCP code.
> Currently the flush_id logic is scattered across tcp_gro_receive and
> {inet,ipv6}_gro_receive, with conditionals rewriting ->flush and
> ->flush_id, so IMO the code is more concise when it lives in one place
> (sketched below) - in addition to not running checks against packets
> that are not relevant.
> 
> With this patch, the fix will probably be simple, most likely just
> calling gro_network_flush from skb_gro_receive or from the relevant
> flow in udp_gro_receive_segment. Since this bug fix should be simple
> and is not relevant to the optimization, I'd like to solve it in a
> separate series and properly test that new flow. Do you agree?
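
For reference, the consolidation Richard describes would look roughly
like the following. The helper name matches the discussion, but the
body is only a sketch of the idea, not the code from the patch:

#include <linux/ip.h>
#include <linux/ipv6.h>
#include <net/ipv6.h>

/* Compare the network headers of the new skb (nh) and of a packet
 * already held on the GRO list (nh2) once, at transport-GRO time,
 * instead of scattering ->flush / ->flush_id rewrites across
 * {inet,ipv6}_gro_receive and tcp_gro_receive.
 */
static bool gro_network_flush(const void *nh, const void *nh2,
                              bool is_ipv6, u16 count)
{
        if (is_ipv6) {
                const struct ipv6hdr *h = nh, *h2 = nh2;

                /* Hop limit and traffic class must match for a merge. */
                return h->hop_limit != h2->hop_limit ||
                       ip6_tclass(ip6_flowinfo(h)) !=
                       ip6_tclass(ip6_flowinfo(h2));
        } else {
                const struct iphdr *h = nh, *h2 = nh2;

                /* TTL and ToS must match... */
                if ((h->ttl ^ h2->ttl) | (h->tos ^ h2->tos))
                        return true;

                /* ...and the IPv4 ID must either stay fixed (only valid
                 * with DF set) or increment by one per segment already
                 * merged into the held packet.
                 */
                if ((h->frag_off & htons(IP_DF)) &&
                    ntohs(h->id) == ntohs(h2->id))
                        return false;
                return ntohs(h->id) != (u16)(ntohs(h2->id) + count);
        }
}

The gain is that tcp_gro_receive calls this once per candidate that
already matched the flow, rather than each network layer rewriting
->flush/->flush_id for every held packet.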

My main concern is with moving this code to tcp_offload.c if it will
likely soon be moved elsewhere again.
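
The follow-up fix would then amount to reusing the same helper from
the UDP fraglist path. Again only a sketch - the wrapper below and its
is_ipv6 parameter are assumptions for illustration, not code or fields
from the patch:

#include <net/gro.h>

/* Hypothetical call site for udp_gro_receive_segment(): p is a packet
 * already held on the GRO list, skb the newly arrived one. is_ipv6 is
 * an assumed parameter, not an existing napi_gro_cb field.
 */
static void udp_flush_network_diffs(struct sk_buff *p, struct sk_buff *skb,
                                    bool is_ipv6)
{
        if (gro_network_flush(skb_network_header(skb),
                              skb_network_header(p),
                              is_ipv6, NAPI_GRO_CB(p)->count))
                NAPI_GRO_CB(p)->flush = 1; /* no merge across TTL/ToS/ID changes */
}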
