Date:   Thu, 21 Dec 2017 11:10:00 -0600 (CST)
From:   Christopher Lameter <cl@...ux.com>
To:     kemi <kemi.wang@...el.com>
cc:     Michal Hocko <mhocko@...nel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Johannes Weiner <hannes@...xchg.org>,
        YASUAKI ISHIMATSU <yasu.isimatu@...il.com>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Nikolay Borisov <nborisov@...e.com>,
        Pavel Tatashin <pasha.tatashin@...cle.com>,
        David Rientjes <rientjes@...gle.com>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Dave <dave.hansen@...ux.intel.com>,
        Andi Kleen <andi.kleen@...el.com>,
        Tim Chen <tim.c.chen@...el.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Ying Huang <ying.huang@...el.com>,
        Aaron Lu <aaron.lu@...el.com>, Aubrey Li <aubrey.li@...el.com>,
        Linux MM <linux-mm@...ck.org>,
        Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 3/5] mm: enlarge NUMA counters threshold size

On Thu, 21 Dec 2017, kemi wrote:

> Some thoughts on that:
> a) the overhead due to cache bouncing caused by NUMA counter updates in the fast path
> increases severely as the number of CPU cores grows
> b) AFAIK, the typical usage scenario (or similar at least) that this optimization can
> benefit is 10/40G NICs used in the high-speed data center networks of cloud service providers.
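
For concreteness, here is a minimal user-space sketch of the batching idea the
threshold controls (this is not the actual mm/vmstat.c code; NUMA_THRESHOLD,
numa_count_event and the struct layout are illustrative assumptions only): each
CPU accumulates updates in a private diff and folds them into the shared node
counter only when the diff crosses the threshold, so the shared cacheline
bounces far less often in the fast path.

    #include <stdatomic.h>
    #include <stdio.h>

    #define NUMA_THRESHOLD 32768    /* larger threshold => rarer shared-counter updates */

    struct numa_counter {
            _Atomic long shared;    /* node-wide total: the contended cacheline */
    };

    struct numa_counter_pcpu {
            unsigned int diff;      /* per-CPU pending delta: private, uncontended */
    };

    /* Fast path: called on the local CPU for every event. */
    static void numa_count_event(struct numa_counter *nc,
                                 struct numa_counter_pcpu *pcpu)
    {
            if (++pcpu->diff >= NUMA_THRESHOLD) {
                    /* Fold the batched delta into the shared counter only rarely. */
                    atomic_fetch_add_explicit(&nc->shared, pcpu->diff,
                                              memory_order_relaxed);
                    pcpu->diff = 0;
            }
    }

    int main(void)
    {
            struct numa_counter nc = { 0 };
            struct numa_counter_pcpu pcpu = { 0 };

            for (int i = 0; i < 100000; i++)
                    numa_count_event(&nc, &pcpu);

            /* 100000 events => three folds of 32768; 1696 still pending per-CPU. */
            printf("shared=%ld pending=%u\n", atomic_load(&nc.shared), pcpu.diff);
            return 0;
    }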

I think you are fighting a losing battle there. As is evident from the timing
constraints on packet processing at 10/40G, you will have a hard time
processing data if the packets are of regular ethernet size. And we already
have 100G NICs in operation here.
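
As a rough per-packet budget (assuming full-size frames, i.e. 1538 bytes on
the wire including preamble, FCS and inter-frame gap):

     10 Gbit/s / (1538 B * 8)  ~  0.81 M frames/s  ->  ~1.2 us per frame
     40 Gbit/s                 ~  3.25 M frames/s  ->  ~310 ns per frame
    100 Gbit/s                 ~  8.13 M frames/s  ->  ~120 ns per frame

With 64-byte minimum frames at 100 Gbit/s this becomes ~148.8 M frames/s,
i.e. under 7 ns per packet, so a single cross-node cacheline transfer
(typically on the order of 100 ns) already exceeds the budget.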

We can try to get the performance as high as possible, but full-rate
high-speed networking invariably must use offload mechanisms, and thus the
statistics would only be available from the hardware devices that can do
wire-speed processing.
