Date:   Fri, 06 Jul 2018 13:56:56 +0200
From:   Paolo Abeni <pabeni@...hat.com>
To:     Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Cc:     "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Florian Westphal <fw@...len.de>, NeilBrown <neilb@...e.com>
Subject: Re: [RFC PATCH] ip: re-introduce fragments cache worker

Hi,

On Fri, 2018-07-06 at 04:23 -0700, Eric Dumazet wrote:
> Ho hum. No please.
> 
> I do not think adding back a GC is wise, since my patches were going in the direction
> of allowing us to increase limits on current hardware.
> 
> Meaning that the amount of frags to evict would be quite big under DDOS.
> (One inet_frag_queue allocated for every incoming tiny frame :/ )
> 
> A GC is a _huge_ problem, burning one cpu (you would have to provision for this CPU)
> compared to letting the normal per-frag timer do its job.
> 
> My plan was to reduce the per frag timer under load (default is 30 seconds), since
> this is exactly what your patch is indirectly doing, by aggressively pruning
> frags under stress.
> 
> That would be a much simpler heuristic. [1]
> 
> BTW my own results (before patch) are :
> 
> lpaa5:/export/hda3/google/edumazet# ./super_netperf 10 -H 10.246.7.134 -t UDP_STREAM -l 60
>    9602
> lpaa5:/export/hda3/google/edumazet# ./super_netperf 200 -H 10.246.7.134 -t UDP_STREAM -l 60
>    9557
> 
> On receiver (normal settings here) I had :
> 
> lpaa6:/export/hda3/google/edumazet# grep . /proc/sys/net/ipv4/ipfrag_*
> /proc/sys/net/ipv4/ipfrag_high_thresh:104857600
> /proc/sys/net/ipv4/ipfrag_low_thresh:78643200
> /proc/sys/net/ipv4/ipfrag_max_dist:0
> /proc/sys/net/ipv4/ipfrag_secret_interval:0
> /proc/sys/net/ipv4/ipfrag_time:30
> 
> lpaa6:/export/hda3/google/edumazet# grep FRAG /proc/net/sockstat
> FRAG: inuse 824 memory 53125312

Thank you for the feedback.

With your settings, you need a few more concurrent connections (400?)
to saturate the ipfrag cache. Above that number, performance will
still sink.
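
Rough math from the numbers you quoted: ~53 MB in use for 200 flows is
~265 KB per flow, so the 100 MB high_thresh fills at roughly 400
concurrent flows.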

> diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
> index c9e35b81d0931df8429a33e8d03e719b87da0747..88ed61bcda00f3357724e5c4dbcb97400b4a8b21 100644
> --- a/net/ipv4/inet_fragment.c
> +++ b/net/ipv4/inet_fragment.c
> @@ -155,9 +155,15 @@ static struct inet_frag_queue *inet_frag_alloc(struct netns_frags *nf,
>                                                struct inet_frags *f,
>                                                void *arg)
>  {
> +       long high_thresh = READ_ONCE(nf->high_thresh);
>         struct inet_frag_queue *q;
> +       u64 timeout;
> +       long usage;
>  
> -       if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh)
> +       if (!high_thresh)
> +               return NULL;
> +       usage = frag_mem_limit(nf);
> +       if (usage > high_thresh)
>                 return NULL;
>  
>         q = kmem_cache_zalloc(f->frags_cachep, GFP_ATOMIC);
> @@ -171,6 +177,8 @@ static struct inet_frag_queue *inet_frag_alloc(struct netns_frags *nf,
>         timer_setup(&q->timer, f->frag_expire, 0);
>         spin_lock_init(&q->lock);
>         refcount_set(&q->refcnt, 3);
> +       timeout = (u64)nf->timeout * (high_thresh - usage);
> +       mod_timer(&q->timer, jiffies + div64_long(timeout, high_thresh));
>  
>         return q;
>  }

This looks nice. I'll try to test it in my use case and report back
here.

Perhaps we can use the default timeout when usage < low_thresh, to
avoid some maths in a possibly common scenario?
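
Something like this is what I mean; a completely untested sketch,
reusing the fields from your patch above:

	/* Untested sketch: keep the default timeout while below
	 * low_thresh, scale it linearly between low_thresh and
	 * high_thresh (guarding against low_thresh >= high_thresh).
	 */
	timeout = nf->timeout;
	if (high_thresh > nf->low_thresh && usage > nf->low_thresh)
		timeout = div64_long((u64)nf->timeout *
				     (high_thresh - usage),
				     high_thresh - nf->low_thresh);
	mod_timer(&q->timer, jiffies + timeout);

At usage == low_thresh this still gives the full default timeout, and
it decays to 0 only as usage approaches high_thresh.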

I have a doubt: under DDoS we will trigger <max numfrags> timeouts per
jiffy; can that keep a CPU busy, too?
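
(Back-of-the-envelope, assuming each tiny-frame queue charges on the
order of 1 KB: a 100 MB high_thresh admits ~100k concurrent queues,
and with the scaled timeouts squeezed towards zero their expirations
would all land within a few jiffies.)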

Cheers,

Paolo
