Message-ID: <1305663288.2691.2.camel@edumazet-laptop>
Date:	Tue, 17 May 2011 22:14:48 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	David Miller <davem@...emloft.net>
Cc:	netdev@...r.kernel.org
Subject: Re: small RPS cache for fragments?

On Tue, 17 May 2011 at 14:33 -0400, David Miller wrote:
> It seems to me that we can solve the UDP fragmentation problem for
> flow steering very simply by creating a (saddr/daddr/IPID) entry in a
> table that maps to the corresponding RPS flow entry.
> 
> When we see the initial frag with the UDP header, we create the
> saddr/daddr/IPID mapping, and we tear it down when we hit the
> saddr/daddr/IPID mapping and the packet has the IP_MF bit clear.
> 
> We only inspect the saddr/daddr/IPID cache when iph->frag_off is
> non-zero.
> 
> It's best effort and should work quite well.
> 
> Even a one-behind cache, per-NAPI instance, would do a lot better than
> what happens at the moment.  Especially since the IP fragments mostly
> arrive as one packet train.
> --
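
For concreteness, a minimal sketch of what such a one-behind,
per-NAPI cache could look like. The names below (frag_rps_cache,
frag_rps_steer, first_frag_flow) are illustrative, not existing
kernel symbols; only struct iphdr, IP_MF/IP_OFFSET and htons() are
the real definitions from <linux/ip.h> and <net/ip.h>:

#include <linux/ip.h>
#include <linux/types.h>
#include <net/ip.h>	/* IP_MF, IP_OFFSET */

/* Hypothetical one-behind cache, one instance per NAPI context. */
struct frag_rps_cache {
	__be32	saddr;
	__be32	daddr;
	__be16	id;		/* IPID of the datagram being tracked */
	u32	rps_flow;	/* RPS flow entry taken from the first frag */
	bool	valid;
};

/*
 * Called only when iph->frag_off is non-zero, i.e. for fragments.
 * The initial fragment (offset 0, MF set) still carries the UDP
 * header, so the caller can compute the normal flow hash and pass
 * it in as first_frag_flow; later fragments are steered from the
 * cache, and the entry is torn down once the last fragment (MF
 * clear) goes by.  Best effort: a miss simply falls back to the
 * default RPS behaviour.
 */
static bool frag_rps_steer(struct frag_rps_cache *c,
			   const struct iphdr *iph,
			   u32 first_frag_flow, u32 *flow)
{
	if (!(iph->frag_off & htons(IP_OFFSET))) {
		/* Initial fragment: create the saddr/daddr/IPID mapping. */
		c->saddr	= iph->saddr;
		c->daddr	= iph->daddr;
		c->id		= iph->id;
		c->rps_flow	= first_frag_flow;
		c->valid	= true;
		*flow = first_frag_flow;
		return true;
	}

	if (!c->valid || c->saddr != iph->saddr ||
	    c->daddr != iph->daddr || c->id != iph->id)
		return false;		/* one-behind miss */

	*flow = c->rps_flow;

	/* Last fragment: MF clear, non-zero offset -> tear down. */
	if (!(iph->frag_off & htons(IP_MF)))
		c->valid = false;

	return true;
}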

OK, but do we have workloads that actually need this optimization at all?

(IP defrag hits a read_lock(&ip4_frags.lock), so maybe steer all frags
to a given CPU?)
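
One way to read that last suggestion: hash fragments on the address
pair alone, so every fragment of a datagram lands on the same CPU and
its reassembly state is only ever touched from one place. A minimal
sketch, where frag_rxhash() is a made-up name but jhash_2words() is
the real helper from <linux/jhash.h>:

#include <linux/ip.h>
#include <linux/jhash.h>

/*
 * Reduced hash for fragments: saddr/daddr only, since the transport
 * header (and hence the full 4-tuple) is absent from non-initial
 * fragments.  Fragments sharing an address pair share a CPU.
 */
static u32 frag_rxhash(const struct iphdr *iph)
{
	return jhash_2words((__force u32)iph->saddr,
			    (__force u32)iph->daddr, 0);
}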
