Date:   Fri, 20 Jul 2018 16:48:21 +0200
From:   Paolo Abeni <pabeni@...hat.com>
To:     Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Cc:     "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Florian Westphal <fw@...len.de>, NeilBrown <neilb@...e.com>
Subject: Re: [RFC PATCH] ip: re-introduce fragments cache worker

Hi,

On Mon, 2018-07-09 at 05:50 -0700, Eric Dumazet wrote:
> On 07/09/2018 04:39 AM, Eric Dumazet wrote:
> 
> > Alternatively, you could try to patch fq_codel to drop all frags of one UDP datagram
> > instead of few of them.
> 
> A first step would be to make sure fq_codel_hash() (using skb_get_hash(skb)) selects
> the same bucket for all frags of a datagram :/

I gave the above a shot and have some code that is not upstream-ready
but somewhat working. Anyway, it has some issues I'm unable to solve:
* it's very invasive for fq_codel, because I need to parse each packet
looking for the fragment ID
* the parsing overhead can't easily be avoided for non-fragments
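To illustrate the idea being discussed (not the actual patch), here is a minimal userspace sketch of a fragment-aware bucket selection for fq_codel: for fragments, hash on (saddr, daddr, protocol, IP ID) rather than the 5-tuple, since non-first fragments carry no L4 ports. All names, constants, and the mixing function are invented for illustration; the kernel would use jhash and the real flow dissector, and would fold in L4 ports for non-fragments.

```c
#include <stdint.h>

#define FQ_BUCKETS 1024

/* Minimal stand-in for the IPv4 header fields we need. */
struct ipv4_hdr_fields {
	uint32_t saddr;
	uint32_t daddr;
	uint16_t id;        /* fragment identification */
	uint16_t frag_off;  /* flags + fragment offset, host order */
	uint8_t  protocol;
};

#define IP_MF     0x2000  /* "more fragments" flag */
#define IP_OFFSET 0x1fff  /* fragment offset mask */

/* Simple multiplicative mix; a stand-in for the kernel's jhash. */
static uint32_t mix32(uint32_t h, uint32_t v)
{
	h ^= v;
	h *= 0x9e3779b1u;
	return h ^ (h >> 16);
}

unsigned int frag_aware_bucket(const struct ipv4_hdr_fields *ip)
{
	int is_frag = ip->frag_off & (IP_MF | IP_OFFSET);
	uint32_t h = mix32(0, ip->saddr);

	h = mix32(h, ip->daddr);
	h = mix32(h, ip->protocol);
	if (is_frag)
		h = mix32(h, ip->id);  /* same ID => same bucket for every frag */
	/* (A real implementation would mix in L4 ports for non-fragments.) */
	return h % FQ_BUCKETS;
}
```

With this, the first fragment (MF set, offset 0) and the last fragment (offset nonzero) of one datagram land in the same bucket, at the cost of parsing every packet down to the IP header, which is exactly the overhead mentioned above.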

I also tried something hopefully along the lines of your other
suggestion (drop the fragment queues earlier when above the low
threshold): when allocating a new frag queue while the ipfrag memory is
above the low threshold, another frag queue is selected in a
pseudorandom way and dropped.

This latter patch is much smaller, copes quite well with fragment
drops, and the goodput degrades gracefully when the ipfrag cache is
overloaded.
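A userspace sketch of that eviction scheme, with all names, thresholds, and per-queue costs invented for illustration (the real code would charge actual skb truesizes against the ipfrag sysctls):

```c
#include <stdlib.h>

#define MAX_QUEUES 64
#define LOW_THRESH (3 * 1024 * 1024)  /* stand-in for ipfrag low thresh */
#define QUEUE_COST (64 * 1024)        /* pretend each queue pins this much */

struct frag_cache {
	int  nqueues;        /* number of live frag queues */
	long mem;            /* memory charged to the cache */
	int  id[MAX_QUEUES]; /* stand-in for per-queue state */
};

static void drop_queue(struct frag_cache *c, int idx)
{
	c->id[idx] = c->id[--c->nqueues];  /* swap-remove */
	c->mem -= QUEUE_COST;
}

/* Allocate a queue; if memory is above the low threshold, first evict
 * one existing queue chosen pseudorandomly. Returns 0 on success. */
int alloc_frag_queue(struct frag_cache *c, int id)
{
	if (c->mem > LOW_THRESH && c->nqueues > 0)
		drop_queue(c, rand() % c->nqueues);
	if (c->nqueues == MAX_QUEUES)
		return -1;
	c->id[c->nqueues++] = id;
	c->mem += QUEUE_COST;
	return 0;
}
```

The memory charged to the cache then stays within one queue's cost of the low threshold: each allocation above the threshold is paired with one eviction, so overload pressure degrades existing reassemblies gradually instead of refusing all new fragments.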

I'm wondering if you could consider this second option, too.

Thank you,

Paolo
