Message-ID: <b8e2770f-f273-ac7d-5bbf-041313d7e51c@redhat.com>
Date:   Wed, 11 Oct 2017 10:41:48 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Matthew Rosato <mjrosato@...ux.vnet.ibm.com>,
        netdev@...r.kernel.org
Cc:     davem@...emloft.net, mst@...hat.com
Subject: Re: Regression in throughput between kvm guests over virtual bridge



On 2017-10-06 04:07, Matthew Rosato wrote:
> On 09/25/2017 04:18 PM, Matthew Rosato wrote:
>> On 09/22/2017 12:03 AM, Jason Wang wrote:
>>>
>>> On 2017-09-21 03:38, Matthew Rosato wrote:
>>>>> Seems to make some progress on wakeup mitigation. The previous patch
>>>>> tried to reduce unnecessary traversal of the waitqueue during rx. The
>>>>> attached patch goes even further and disables rx polling while
>>>>> processing tx. Please try it and see if it makes any difference.
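
A minimal sketch of the sort of gating described above, assuming the
vhost_net_disable_vq()/vhost_net_enable_vq() helpers that already exist in
drivers/vhost/net.c; this is illustrative only, not the actual attached patch:

    /* Sketch only: drop the RX socket from the poll table while TX
     * processing runs, so TX completions do not take an extra trip
     * through the RX waitqueue, then restore polling on the way out. */
    static void handle_tx(struct vhost_net *net)
    {
            struct vhost_virtqueue *rx_vq = &net->vqs[VHOST_NET_VQ_RX].vq;

            vhost_net_disable_vq(net, rx_vq);   /* stop RX polling */

            /* ... existing TX processing loop ... */

            vhost_net_enable_vq(net, rx_vq);    /* resume RX polling */
    }
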
>>>> Unfortunately, this patch doesn't seem to have made a difference.  I
>>>> tried runs with both this patch and the previous patch applied, as well
>>>> as with only this patch applied for comparison (numbers are from the
>>>> vhost thread of the sending VM):
>>>>
>>>> 4.12    4.13     patch1   patch2   patch1+2
>>>> 2.00%   +3.69%   +2.55%   +2.81%   +2.69%   [...] __wake_up_sync_key
>>>>
>>>> In each case, the regression in throughput was still present.
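
Per-thread numbers like the ones above are typically collected with perf; as
an illustrative sketch (with <vhost-tid> standing in for the vhost worker's
TID):

    # sample the sending VM's vhost thread with call graphs for 30s
    perf record -t <vhost-tid> -g -- sleep 30
    perf report --stdio --sort symbol
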
>>> This probably means some other sources of wakeups were missed. Could
>>> you please record the callers of __wake_up_sync_key()?
>>>
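
One standard way to capture those callers is the ftrace function tracer with
per-hit stack dumps (a sketch, assuming tracefs is mounted under
/sys/kernel/debug):

    cd /sys/kernel/debug/tracing
    echo __wake_up_sync_key > set_ftrace_filter   # trace only this symbol
    echo 1 > options/func_stack_trace             # record a stack per call
    echo function > current_tracer
    cat trace

The trace quoted below is in exactly this format.
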
>> Hi Jason,
>>
>> With your 2 previous patches applied, every call to __wake_up_sync_key
>> (for both sender and server vhost threads) shows the following stack trace:
>>
>>       vhost-11478-11520 [002] ....   312.927229: __wake_up_sync_key <-sock_def_readable
>>       vhost-11478-11520 [002] ....   312.927230: <stack trace>
>>   => dev_hard_start_xmit
>>   => sch_direct_xmit
>>   => __dev_queue_xmit
>>   => br_dev_queue_push_xmit
>>   => br_forward_finish
>>   => __br_forward
>>   => br_handle_frame_finish
>>   => br_handle_frame
>>   => __netif_receive_skb_core
>>   => netif_receive_skb_internal
>>   => tun_get_user
>>   => tun_sendmsg
>>   => handle_tx
>>   => vhost_worker
>>   => kthread
>>   => kernel_thread_starter
>>   => kernel_thread_starter
>>
> Ping...  Jason, any other ideas or suggestions?
>

Sorry for the late reply, I was recovering from a long holiday. Will get 
back to this soon.

Thanks
