Message-ID: <20170830142208.1c08bbaa@redhat.com>
Date:   Wed, 30 Aug 2017 14:22:08 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Florian Westphal <fw@...len.de>
Cc:     "liujian (CE)" <liujian56@...wei.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "edumazet@...gle.com" <edumazet@...gle.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "Wangkefeng (Kevin)" <wangkefeng.wang@...wei.com>,
        "weiyongjun (A)" <weiyongjun1@...wei.com>, brouer@...hat.com
Subject: Re: Question about ip_defrag

On Wed, 30 Aug 2017 13:58:20 +0200
Florian Westphal <fw@...len.de> wrote:

> Jesper Dangaard Brouer <brouer@...hat.com> wrote:
> > > I take 2) back.  It's wrong to do this; for large NR_CPU values it
> > > would even overflow.  
> > 
> > Alternatively solution 3:
> > Why do we want to maintain a (4MBytes) memory limit, across all CPUs?
> > Couldn't we just allow each CPU to have a memory limit?  
> 
> Consider ipv4, ipv6, nf ipv6 defrag, 6lowpan, and 8k cpus... This will
> render any limit useless.

With 8K CPUs I agree that this might be a bad idea!

> > > > To me it looks like we/I have been using the wrong API for comparing
> > > > against percpu_counters.  I guess we should have used __percpu_counter_compare().    
> > > 
> > > Are you sure?  For liujian use case (64 cores) it looks like we would
> > > always fall through to percpu_counter_sum() so we eat spinlock_irqsave
> > > cost for all compares.
> > > 
> > > Before we entertain this we should consider reducing frag_percpu_counter_batch
> > > to a smaller value.  
> > 
> > Yes, I agree, we really need to lower/reduce frag_percpu_counter_batch.
> > As you say, otherwise the __percpu_counter_compare() call will be useless
> > (on systems with >= 32 CPUs).
> > 
> > I think the bug is in frag_mem_limit().  It just reads the global
> > counter (fbc->count), without considering that other CPUs can hold up
> > to 130K that hasn't been subtracted yet (due to the 3M low limit, this
> > becomes dangerous at >=24 CPUs).  The __percpu_counter_compare() does
> > the right thing, and takes into account the number of (online) CPUs
> > and the batch size to account for this.  
> 
> Right, I think we should at very least use __percpu_counter_compare
> before denying a new frag queue allocation request.
> 
> I'll create a patch.

Oh, I've already started working on a patch that I'm testing now.  But
if you want to take the assignment then I'm fine with that!  I just
thought that it was my responsibility to fix, given I introduced
percpu_counter usage (back in 2013-01-28 / 6d7b857d541e).

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
