Message-ID: <1173ab1f-e2b6-26b3-8c3c-bd5ceaa1bd8e@redhat.com>
Date: Wed, 20 Sep 2017 14:27:20 +0800
From: Jason Wang <jasowang@...hat.com>
To: Matthew Rosato <mjrosato@...ux.vnet.ibm.com>,
netdev@...r.kernel.org
Cc: davem@...emloft.net, mst@...hat.com
Subject: Re: Regression in throughput between kvm guests over virtual bridge
On 2017-09-19 02:11, Matthew Rosato wrote:
> On 09/18/2017 03:36 AM, Jason Wang wrote:
>>
>> On 2017-09-18 11:13, Jason Wang wrote:
>>>
>>> On 2017-09-16 03:19, Matthew Rosato wrote:
>>>>> It looks like vhost is slowed down for some reason, which leads to more
>>>>> idle time on 4.13+VHOST_RX_BATCH=1. It would be appreciated if you could
>>>>> collect the perf.diff on the host, one for rx and one for tx.
>>>>>
>>>> perf data below for the associated vhost threads, baseline=4.12,
>>>> delta1=4.13, delta2=4.13+VHOST_RX_BATCH=1
>>>>
>>>> Client vhost:
>>>>
>>>> 60.12% -11.11% -12.34% [kernel.vmlinux] [k] raw_copy_from_user
>>>> 13.76% -1.28% -0.74% [kernel.vmlinux] [k] get_page_from_freelist
>>>> 2.00% +3.69% +3.54% [kernel.vmlinux] [k] __wake_up_sync_key
>>>> 1.19% +0.60% +0.66% [kernel.vmlinux] [k] __alloc_pages_nodemask
>>>> 1.12% +0.76% +0.86% [kernel.vmlinux] [k] copy_page_from_iter
>>>> 1.09% +0.28% +0.35% [vhost] [k] vhost_get_vq_desc
>>>> 1.07% +0.31% +0.26% [kernel.vmlinux] [k] alloc_skb_with_frags
>>>> 0.94% +0.42% +0.65% [kernel.vmlinux] [k] alloc_pages_current
>>>> 0.91% -0.19% -0.18% [kernel.vmlinux] [k] memcpy
>>>> 0.88% +0.26% +0.30% [kernel.vmlinux] [k] __next_zones_zonelist
>>>> 0.85% +0.05% +0.12% [kernel.vmlinux] [k] iov_iter_advance
>>>> 0.79% +0.09% +0.19% [vhost] [k] __vhost_add_used_n
>>>> 0.74% [kernel.vmlinux] [k] get_task_policy.part.7
>>>> 0.74% -0.01% -0.05% [kernel.vmlinux] [k] tun_net_xmit
>>>> 0.60% +0.17% +0.33% [kernel.vmlinux] [k] policy_nodemask
>>>> 0.58% -0.15% -0.12% [ebtables] [k] ebt_do_table
>>>> 0.52% -0.25% -0.22% [kernel.vmlinux] [k] __alloc_skb
>>>> ...
>>>> 0.42% +0.58% +0.59% [kernel.vmlinux] [k] eventfd_signal
>>>> ...
>>>> 0.32% +0.96% +0.93% [kernel.vmlinux] [k] finish_task_switch
>>>> ...
>>>> +1.50% +1.16% [kernel.vmlinux] [k] get_task_policy.part.9
>>>> +0.40% +0.42% [kernel.vmlinux] [k] __skb_get_hash_symmetr
>>>> +0.39% +0.40% [kernel.vmlinux] [k] _copy_from_iter_full
>>>> +0.24% +0.23% [vhost_net] [k] vhost_net_buf_peek
>>>>
>>>> Server vhost:
>>>>
>>>> 61.93% -10.72% -10.91% [kernel.vmlinux] [k] raw_copy_to_user
>>>> 9.25% +0.47% +0.86% [kernel.vmlinux] [k] free_hot_cold_page
>>>> 5.16% +1.41% +1.57% [vhost] [k] vhost_get_vq_desc
>>>> 5.12% -3.81% -3.78% [kernel.vmlinux] [k] skb_release_data
>>>> 3.30% +0.42% +0.55% [kernel.vmlinux] [k] raw_copy_from_user
>>>> 1.29% +2.20% +2.28% [kernel.vmlinux] [k] copy_page_to_iter
>>>> 1.24% +1.65% +0.45% [vhost_net] [k] handle_rx
>>>> 1.08% +3.03% +2.85% [kernel.vmlinux] [k] __wake_up_sync_key
>>>> 0.96% +0.70% +1.10% [vhost] [k] translate_desc
>>>> 0.69% -0.20% -0.22% [kernel.vmlinux] [k] tun_do_read.part.10
>>>> 0.69% [kernel.vmlinux] [k] tun_peek_len
>>>> 0.67% +0.75% +0.78% [kernel.vmlinux] [k] eventfd_signal
>>>> 0.52% +0.96% +0.98% [kernel.vmlinux] [k] finish_task_switch
>>>> 0.50% +0.05% +0.09% [vhost] [k] vhost_add_used_n
>>>> ...
>>>> +0.63% +0.58% [vhost_net] [k] vhost_net_buf_peek
>>>> +0.32% +0.32% [kernel.vmlinux] [k] _copy_to_iter
>>>> +0.19% +0.19% [kernel.vmlinux] [k] __skb_get_hash_symmetr
>>>> +0.11% +0.21% [vhost] [k] vhost_umem_interval_tr
>>>>
>>> Looks like there are more wakeups for some unknown reason.
>>>
>>> Could you please try the attached patch to see if it solves or mitigates
>>> the issue?
>>>
>>> Thanks
>> My bad, please try this.
>>
>> Thanks
> Thanks Jason. I built 4.13 + the supplied patch and see some decrease in
> wakeups, but there are still quite a few more compared to 4.12
> (baseline=4.12, delta1=4.13, delta2=4.13+patch):
>
> client:
> 2.00% +3.69% +2.55% [kernel.vmlinux] [k] __wake_up_sync_key
>
> server:
> 1.08% +3.03% +1.85% [kernel.vmlinux] [k] __wake_up_sync_key
>
>
> Throughput was roughly equivalent to base 4.13 (so, still seeing the
> regression w/ this patch applied).
>
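For reference, the baseline/delta columns quoted above look like multi-file
perf diff output. A hedged sketch of how such a comparison can be collected
for a vhost thread (the PID variable and durations below are placeholders;
the exact commands used are not stated in this thread):

  # On each kernel under test, record the vhost thread serving the guest:
  perf record -p $VHOST_TID -o perf.data.4.12 -- sleep 30          # on 4.12
  perf record -p $VHOST_TID -o perf.data.4.13 -- sleep 30          # on 4.13
  perf record -p $VHOST_TID -o perf.data.4.13-batch1 -- sleep 30   # 4.13 + VHOST_RX_BATCH=1
  # The first file is the baseline; the others are reported as deltas:
  perf diff perf.data.4.12 perf.data.4.13 perf.data.4.13-batch1
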
Seems to make some progress on wakeup mitigation. The previous patch tries
to reduce unnecessary traversal of the waitqueue during rx. The attached
patch goes even further and disables rx polling while processing tx.
Please try it to see if it makes any difference.
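
For illustration only, a minimal sketch of that idea (this is not the
attached patch, and locking and error handling are glossed over); it assumes
the existing handle_tx(), vhost_net_disable_vq() and vhost_net_enable_vq()
helpers in drivers/vhost/net.c:

static void handle_tx_kick(struct vhost_work *work)
{
	struct vhost_virtqueue *vq = container_of(work, struct vhost_virtqueue,
						  poll.work);
	struct vhost_net *net = container_of(vq->dev, struct vhost_net, dev);
	struct vhost_virtqueue *rx_vq = &net->vqs[VHOST_NET_VQ_RX].vq;

	/* Stop watching the tap socket for rx readiness so incoming packets
	 * do not keep waking the vhost thread while it is busy with tx. */
	vhost_net_disable_vq(net, rx_vq);
	handle_tx(net);
	/* Resume rx polling once tx processing is done. */
	vhost_net_enable_vq(net, rx_vq);
}
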
And two questions:
- Does the issue still exist if you run uperf between 2 VMs (instead of 4 VMs)?
- Does enabling batching in the tap of the sending VM improve the performance
  (ethtool -C $tap rx-frames 64)? A concrete invocation is sketched below.
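
For the second question, a concrete invocation, assuming the sending VM's
tap device on the host is named tap0 (adjust to the actual interface name):

  ethtool -C tap0 rx-frames 64    # batch up to 64 frames in the tap rx path
  ethtool -C tap0 rx-frames 0     # set back to 0 to disable batching again
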
Thanks
View attachment "0001-vhost_net-avoid-unnecessary-wakeups-during-tx.patch" of type "text/x-patch" (1939 bytes)