Message-ID: <alpine.DEB.2.20.1712261300040.10830@nuc-kabylake>
Date: Tue, 26 Dec 2017 13:05:34 -0600 (CST)
From: Christopher Lameter <cl@...ux.com>
To: kemi <kemi.wang@...el.com>
cc: Michal Hocko <mhocko@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>,
Johannes Weiner <hannes@...xchg.org>,
YASUAKI ISHIMATSU <yasu.isimatu@...il.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Nikolay Borisov <nborisov@...e.com>,
Pavel Tatashin <pasha.tatashin@...cle.com>,
David Rientjes <rientjes@...gle.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Dave <dave.hansen@...ux.intel.com>,
Andi Kleen <andi.kleen@...el.com>,
Tim Chen <tim.c.chen@...el.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Ying Huang <ying.huang@...el.com>,
Aaron Lu <aaron.lu@...el.com>, Aubrey Li <aubrey.li@...el.com>,
Linux MM <linux-mm@...ck.org>,
Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 3/5] mm: enlarge NUMA counters threshold size
On Fri, 22 Dec 2017, kemi wrote:
> > I think you are fighting a lost battle there. As evident from the timing
> > constraints on packet processing at 10/40G, you will have a hard time
> > processing data if the packets are of regular Ethernet size. And we already
> > have 100G NICs in operation here.
> >
>
> Not really.
> For 10/40G NICs, or even 100G, I admit that DPDK is widely used in data
> center networks rather than the kernel driver in production environments.
Shudder. I would rather have a user space API that is vendor neutral and
that allows the use of multiple NICs. The Linux kernel has an RDMA
subsystem that does just that.
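For illustration (my sketch, nothing to do with this patch): enumerating
whatever RDMA-capable NICs are present through that same vendor-neutral
verbs API, via libibverbs:

	/* Sketch: list RDMA devices through the vendor-neutral verbs API.
	 * Build with: gcc rdma_list.c -libverbs
	 */
	#include <stdio.h>
	#include <infiniband/verbs.h>

	int main(void)
	{
		int num = 0;
		struct ibv_device **list = ibv_get_device_list(&num);

		if (!list) {
			perror("ibv_get_device_list");
			return 1;
		}
		for (int i = 0; i < num; i++)
			printf("found RDMA device: %s\n",
			       ibv_get_device_name(list[i]));
		ibv_free_device_list(list);
		return 0;
	}

Same code regardless of which vendor's hardware sits underneath.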
But the time budget is difficult to meet even with RDMA or DPDK, where we
can avoid the OS overhead.
> That's due to the slow page allocator and the long pipeline processing in
> the network protocol stack.
Right, the timing budget for processing a single packet drops below a
microsecond at some point, and there it is going to be difficult to do
much. Some aggregation / offloading is required, and that need increases
as speeds become higher.
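To put a number on it (back-of-the-envelope, my arithmetic, not anything
from the patch): counting preamble, SFD and interframe gap, a full
1500-byte frame occupies 1538 bytes on the wire and a minimum-size frame
84 bytes, so the per-packet budget at line rate works out as:

	/* Back-of-the-envelope per-packet time budget at line rate. */
	#include <stdio.h>

	int main(void)
	{
		/* bytes on the wire: frame + 7B preamble + 1B SFD + 12B IFG */
		const double wire_1500 = 1538.0;	/* 1500B MTU frame */
		const double wire_64   = 84.0;		/* minimum-size frame */
		const double rates[] = { 10e9, 40e9, 100e9 };	/* bits/s */

		for (int i = 0; i < 3; i++) {
			double pps_large = rates[i] / (wire_1500 * 8);
			double pps_small = rates[i] / (wire_64 * 8);
			printf("%3.0fG: %6.0f ns/pkt @1500B, %5.1f ns/pkt @64B\n",
			       rates[i] / 1e9,
			       1e9 / pps_large, 1e9 / pps_small);
		}
		return 0;
	}

At 100G that is roughly 123ns per full-size packet and under 7ns for
minimum-size frames, so anything done per packet in the allocator path
has to be nearly free.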
> It's not easy to change this state in a short time, but if we can do
> something here to improve it a little, why not.
How much of an improvement is this going to be? If it is significant then
by all means let's do it.
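For anyone skimming the thread: the mechanism being tuned is the usual
per-cpu delta counter that only folds into the shared global counter once
the local delta crosses a threshold. A rough userspace sketch of the idea
(the names and the threshold value are mine, this is not the kernel code):

	/* Per-cpu counter with a fold threshold: updates stay in a
	 * cpu-local delta and only touch the shared (contended) global
	 * counter when the delta exceeds the threshold. Enlarging the
	 * threshold trades accuracy of intermediate readings for fewer
	 * contended updates.
	 */
	#include <stdio.h>
	#include <stdatomic.h>

	static _Atomic long global_count;
	static _Thread_local long cpu_delta;	/* stand-in for a per-cpu var */
	static long threshold = 32768;		/* the knob being enlarged */

	static void counter_add(long x)
	{
		cpu_delta += x;
		if (cpu_delta > threshold || cpu_delta < -threshold) {
			atomic_fetch_add(&global_count, cpu_delta);
			cpu_delta = 0;
		}
	}

	int main(void)
	{
		for (int i = 0; i < 100000; i++)
			counter_add(1);
		/* global may lag the truth by up to `threshold` per cpu */
		printf("global: %ld (still local: %ld)\n",
		       atomic_load(&global_count), cpu_delta);
		return 0;
	}

With a larger threshold the atomic add on the shared cacheline happens far
less often on allocation-heavy workloads, which is where any improvement
numbers would have to come from.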
> > We can try to get the performance as high as possible but full rate high
> > speed networking invariable must use offload mechanisms and thus the
> > statistics would only be available from the hardware devices that can do
> > wire speed processing.
> >
>
> I think you may be talking about SmartNICs (e.g. OpenVswitch offload +
> VF pass-through). Those are usually used in virtualization environments to
> eliminate the overhead of device emulation and of packet processing in the
> software virtual switch (OVS or Linux bridge).
The switch offloads can also be used elsewhere. Also, the RDMA subsystem
has counters like that.
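E.g. the per-port hardware counters exported under sysfs. A trivial reader
(the device name "mlx5_0" is just an example, check /sys/class/infiniband/
on your box):

	/* Read one RDMA per-port hardware counter from sysfs. */
	#include <stdio.h>

	int main(void)
	{
		const char *path = "/sys/class/infiniband/mlx5_0"
				   "/ports/1/counters/port_rcv_packets";
		unsigned long long v;
		FILE *f = fopen(path, "r");

		if (!f) {
			perror(path);
			return 1;
		}
		if (fscanf(f, "%llu", &v) == 1)
			printf("port_rcv_packets: %llu\n", v);
		fclose(f);
		return 0;
	}

Those are maintained by the NIC at wire speed, with no per-packet cost on
the host.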