Message-ID: <20170125162039-mutt-send-email-mst@kernel.org>
Date: Wed, 25 Jan 2017 16:23:03 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: John Fastabend <john.fastabend@...il.com>,
David Miller <davem@...emloft.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jason Wang <jasowang@...hat.com>,
virtualization@...ts.linux-foundation.org,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH v2] virtio_net: fix PAGE_SIZE > 64k
On Tue, Jan 24, 2017 at 08:07:40PM -0800, Alexei Starovoitov wrote:
> On Tue, Jan 24, 2017 at 7:48 PM, John Fastabend
> <john.fastabend@...il.com> wrote:
> >
> > It is a concern on my side. I want XDP and the Linux stack to work
> > reasonably well together.
>
> btw the micro benchmarks showed that the page-per-packet approach
> that xdp took in mlx4 should be ~10% slower vs normal operation
> for the tcp/ip stack.
Interesting. TCP only, or UDP too? What's the packet size? Are you tuning
your rmem limits at all? The slowdown would be more noticeable with
UDP at default rmem values and small packet sizes.
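
For reference, a minimal sketch of the per-socket side of the rmem
tuning I mean (the kernel doubles the requested value and clamps it to
net.core.rmem_max unless SO_RCVBUFFORCE is used; buffer size and
address family here are just for illustration):

/* Request a larger receive buffer on a UDP socket and read back
 * the effective value the kernel actually granted.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int rcvbuf = 4 * 1024 * 1024;	/* ask for 4 MB */
	socklen_t len = sizeof(rcvbuf);

	if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
		       &rcvbuf, sizeof(rcvbuf)) < 0)
		perror("setsockopt(SO_RCVBUF)");

	/* The kernel reports the doubled (and possibly clamped) value. */
	getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
	printf("effective SO_RCVBUF: %d\n", rcvbuf);

	close(fd);
	return 0;
}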
> We thought that for our LB use case
> it would be an acceptable slowdown, but it turned out that overall we
> got a performance boost: the xdp model simplified user space
> and made the data path faster, so we magically got extra free cpu
> that is used for other apps on the same host, and an overall
> perf win despite the extra overhead in tcp/ip.
> Not all use cases are the same and not everyone will be as lucky,
> so I'd like to see the performance of xdp_pass improving too, though
> it turned out to be not as high a priority as I initially estimated.
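
In case it helps anyone reproducing the xdp_pass measurements: the
program under test can be as small as the sketch below (the section
name and the bpf_helpers.h include follow libbpf conventions, so
adjust for your build setup). Returning XDP_PASS is the cheapest
possible verdict, so any remaining slowdown vs no-XDP is pure
infrastructure cost such as the page-per-packet allocation model.

/* Pass every packet straight through to the regular stack. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass_prog(struct xdp_md *ctx)
{
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Attached with something like: ip link set dev eth0 xdp obj xdp_pass.o
(device name is just an example).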