Date:   Thu, 8 Dec 2016 11:43:08 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        Michal Hocko <mhocko@...e.com>,
        Vlastimil Babka <vbabka@...e.cz>,
        Johannes Weiner <hannes@...xchg.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Linux-MM <linux-mm@...ck.org>,
        Linux-Kernel <linux-kernel@...r.kernel.org>, brouer@...hat.com
Subject: Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7

On Thu, 8 Dec 2016 09:18:06 +0000
Mel Gorman <mgorman@...hsingularity.net> wrote:

> On Thu, Dec 08, 2016 at 09:22:31AM +0100, Jesper Dangaard Brouer wrote:
> > On Wed, 7 Dec 2016 23:25:31 +0000
> > Mel Gorman <mgorman@...hsingularity.net> wrote:
> >   
> > > On Wed, Dec 07, 2016 at 09:19:58PM +0000, Mel Gorman wrote:  
> > > > At small packet sizes on localhost, I see relatively low page allocator
> > > > activity except during the socket setup and other unrelated activity
> > > > (khugepaged, irqbalance, some btrfs stuff) which is curious as it's
> > > > less clear why the performance was improved in that case. I considered
> > > > the possibility that it was cache hotness of pages but that's not a
> > > > good fit. If that were true, the first test would be slow and the rest
> > > > relatively fast, and I'm not seeing that. The other side-effect is that
> > > > all the high-order pages that are allocated at the start are physically
> > > > close together but that shouldn't have that big an impact. So for now,
> > > > the gain is unexplained even though it happens consistently.
> > > >     
> > > 
> > > Further investigation led me to conclude that the netperf automation on
> > > my side had some methodology errors that could account for an artificially
> > > low score in some cases. The netperf automation is years old and would
> > > have been developed against a much older and smaller machine, which may be
> > > why I missed it until I went back looking at exactly what the automation
> > > was doing. Minimally, in a server/client test against a remote machine
> > > there was potentially higher packet loss than is acceptable. This would
> > > account for why some machines "benefitted" while others did not -- there
> > > would be boot-to-boot variations where some machines happened to be "lucky".
> > > I believe I've corrected the errors, discarded all the old data and
> > > scheduled a retest to see what falls out.  
> > 
> > I guess you are talking about setting the netperf socket queue low
> > (+256 bytes above msg size), which I pointed out in [1].
> 
> Primarily, yes.
> 
> > From the same commit [2] I can see you explicitly set (local+remote):
> > 
> >   sysctl net.core.rmem_max=16777216
> >   sysctl net.core.wmem_max=16777216
> >   
> 
> Yes, I set it for higher speed networks as a starting point to remind me
> to examine rmem_default or socket configurations if any significant packet
> loss is observed.
> 
> > Eric do you have any advice on this setting?
> > 
> > And later [4] you further increased this to 32MiB.  Notice that the
> > netperf UDP_STREAM test will still use the default value from:
> > net.core.rmem_default = 212992.
> >   
> 
> That's expected. In the initial sniff-test, I saw negligible packet loss.
> I'm waiting to see what the full set of network tests looks like before
> doing any further adjustments.

For netperf I would not recommend adjusting the global default
/proc/sys/net/core/rmem_default, as netperf has its own means of adjusting
this value from the application (which were the options you had set too low
and just removed). I think you should keep this at the default for now
(unless Eric says something else), as that should cover most users.
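
As a rough sketch (the host address, buffer sizes and message size below
are only illustrative values), the test-specific -s/-S options size the
local and remote socket buffers for a single run, and -m sets the message
size:

  # per-run socket buffers set by netperf itself, no global sysctl change
  netperf -H 198.51.100.2 -t UDP_STREAM -- -s 262144 -S 262144 -m 1472

That way each invocation controls its own SO_SNDBUF/SO_RCVBUF (still
capped by net.core.rmem_max/wmem_max) instead of relying on
net.core.rmem_default.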

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
