Message-ID: <CACGkMEvTLG0Ayg+TtbN4q4pPW-ycgCCs3sC3-TF8cuRTf7Pp1A@mail.gmail.com>
Date: Thu, 24 Feb 2022 11:23:39 +0800
From: Jason Wang <jasowang@...hat.com>
To: Harold Huang <baymaxhuang@...il.com>
Cc: users@...k.org, Maxime Coquelin <maxime.coquelin@...hat.com>,
Chenbo Xia <chenbo.xia@...el.com>,
netdev <netdev@...r.kernel.org>
Subject: Re: Question about the sndbuf of the tap interface with vhost-net
Adding netdev.
On Wed, Feb 23, 2022 at 9:46 PM Harold Huang <baymaxhuang@...il.com> wrote:
>
> Sorry, a correction: the performance tested by iperf degraded from 4.5 Gbps
> to 750 Mbps per flow.
>
> Harold Huang <baymaxhuang@...il.com> wrote on Wed, Feb 23, 2022 at 21:13:
> >
> > I see that in the dpdk virtio-user driver, the tap sndbuf is initialized to
> > INT_MAX via TUNSETSNDBUF, see: https://github.com/DPDK/dpdk/blob/main/drivers/net/virtio/virtio_user/vhost_kernel_tap.c#L169
Note that Linux uses INT_MAX as the default sndbuf for tuntap.
> > That is fine because the tap driver uses it to support tx batching, see this
> > patch: https://github.com/torvalds/linux/commit/0a0be13b8fe2cac11da2063fb03f0f39359b3069
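For anyone following along, the sndbuf is plumbed down to the tap socket with
the TUNSETSNDBUF ioctl. A minimal sketch of what the virtio-user tap setup
effectively does (tapfd is assumed to be a tap fd already configured with
TUNSETIFF; error handling omitted):

'''
#include <limits.h>
#include <sys/ioctl.h>
#include <linux/if_tun.h>

/* Sketch only: set the tap socket send buffer.  DPDK's virtio-user
 * backend passes INT_MAX here, which leaves sk_sndbuf effectively
 * unlimited and keeps the batched tx path available to vhost-net.
 */
static int tap_set_sndbuf(int tapfd)
{
	int sndbuf = INT_MAX;

	return ioctl(tapfd, TUNSETSNDBUF, &sndbuf);
}
'''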
> >
> > But in tun_xdp_one, NAPI is not supported, and I want to use NAPI in
> > tun_get_user to enable GRO.
NAPI is not enabled in this path; do you want to send a patch to do that?
Btw, NAPI mode was originally added for hardening the kernel networking stack,
but it would be interesting to see whether it also helps performance.
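For reference, NAPI mode on tun/tap is selected from userspace with the
IFF_NAPI flag at TUNSETIFF time, and today it only covers the tun_get_user()
path. A rough sketch, assuming a privileged process and ignoring error
handling on open():

'''
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Rough sketch: create a tap device with NAPI enabled.  With IFF_NAPI,
 * packets written to the device are handed to a NAPI instance (and can
 * therefore go through GRO) instead of netif_rx().
 */
static int tap_open_napi(const char *name)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI;
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
	if (ioctl(fd, TUNSETIFF, &ifr) < 0)
		return -1;
	return fd;
}
'''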
> > As a result, I changed the sndbuf to a smaller
> > value such as 212992, the default in /proc/sys/net/core/wmem_default.
Can you describe your setup in detail? Where did you run the iperf
server and client and where did you change the wmem_default?
> > But the
> > performance tested by iperf is greatly degraded, from 4.5 Gbps to
> > 750 Mbps per flow. I see the iperf server consuming 100% of a CPU core,
> > which should be the bottleneck of this test. The perf top result
> > for the iperf server's CPU core is as follows:
> >
> > '''
> > Samples: 72 of event 'cycles', 4000 Hz, Event count (approx.):
> > 22685278 lost: 0/0 drop: 0/0
> > Overhead Shared O Symbol
> > 59.86% [kernel] [k] report_bug
> > 20.66% [kernel] [k] module_find_bug
> > 6.51% [kernel] [k] common_interrupt
> > 2.82% [kernel] [k] __slab_free
> > 1.48% [kernel] [k] copy_user_enhanced_fast_string
> > 1.44% [kernel] [k] __skb_datagram_iter
> > 1.42% [kernel] [k] notifier_call_chain
> > 1.41% [kernel] [k] irq_work_run_list
> > 1.41% [kernel] [k] update_irq_load_avg
> > 1.41% [kernel] [k] task_tick_fair
> > 1.41% [kernel] [k] cmp_ex_search
> > 0.16% [kernel] [k] __ghes_peek_estatus.isra.12
> > 0.02% [kernel] [k] acpi_os_read_memory
> > 0.00% [kernel] [k] native_apic_mem_write
> > '''
> > I am not clear about this test result. Can we change the sndbuf size in
> > DPDK? Is there any way to enable vhost_net to use NAPI without changing the
> > tun kernel driver?
You can do this by not using INT_MAX as sndbuf.
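As an illustration only (a hypothetical local change, not a tunable DPDK
exposes today), the tap setup could pass a bounded value instead of INT_MAX,
e.g. whatever /proc/sys/net/core/wmem_default reports:

'''
#include <stdio.h>
#include <limits.h>
#include <sys/ioctl.h>
#include <linux/if_tun.h>

/* Hypothetical sketch: bound the tap sndbuf by net.core.wmem_default
 * instead of passing INT_MAX.  Note that vhost-net only takes the
 * batched tx (tun_xdp_one) path when the socket sndbuf is INT_MAX,
 * so a smaller value steers packets through tun_get_user() instead.
 */
static int tap_set_bounded_sndbuf(int tapfd)
{
	int sndbuf = INT_MAX;
	FILE *f = fopen("/proc/sys/net/core/wmem_default", "r");

	if (f) {
		if (fscanf(f, "%d", &sndbuf) != 1)
			sndbuf = INT_MAX;
		fclose(f);
	}
	return ioctl(tapfd, TUNSETSNDBUF, &sndbuf);
}
'''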
Thanks
>