Message-ID: <9459e227-a943-8553-732b-d7f5225a0f22@redhat.com>
Date: Fri, 14 Dec 2018 11:42:18 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next 0/3] vhost: accelerate metadata access through
vmap()
On 2018/12/13 11:27 PM, Michael S. Tsirkin wrote:
> On Thu, Dec 13, 2018 at 06:10:19PM +0800, Jason Wang wrote:
>> Hi:
>>
>> This series tries to access virtqueue metadata through kernel virtual
>> address instead of copy_user() friends since they had too much
>> overheads like checks, spec barriers or even hardware feature
>> toggling.
> Userspace accesses through remapping tricks, and next time there's a need
> for a new barrier we are left to figure it out by ourselves.
I don't get this part; do you mean spec barriers? They are completely
unnecessary for vhost, which runs as a kernel thread. And even if you're
right, vhost is not the only place; there are lots of vmap()-based
accesses in the kernel. Looked at from another direction, this means we
won't suffer from unnecessary barriers for kthreads like vhost in the
future; we can manually pick the ones we really need (though that should
rarely be necessary).
Please notice we only access the metadata through the remapping, not the
data itself. This idea has been used by high speed userspace backends for
years, e.g. packet sockets or the recent AF_XDP. The only difference is
that there the pages were remapped from the kernel to userspace.
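For reference, here is a minimal, hand-written sketch (the helper names
and the avail-ring focus are mine, not taken from the patch) of the
remapping idea: pin the user pages backing the avail ring once, vmap()
them, and afterwards read the index through a plain kernel pointer
instead of get_user(). Endianness handling and locking are omitted.

/*
 * Hypothetical sketch, not the actual patch: pin and vmap() the pages
 * backing the avail ring so the hot path reads a kernel pointer.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/virtio_ring.h>

struct avail_map {
        struct page *pages[16];         /* enough for any sane avail ring */
        int npages;
        struct vring_avail *avail;      /* kernel alias of the user ring */
};

static int map_avail(struct avail_map *m, unsigned long uaddr, size_t size)
{
        unsigned long start = uaddr & PAGE_MASK;
        int npages = DIV_ROUND_UP(uaddr + size - start, PAGE_SIZE);
        void *vaddr;
        int got;

        if (npages > ARRAY_SIZE(m->pages))
                return -EINVAL;
        /* The avail ring is only read by vhost, so pin read-only. */
        got = get_user_pages_fast(start, npages, 0, m->pages);
        if (got != npages) {
                while (got-- > 0)
                        put_page(m->pages[got]);
                return -EFAULT;
        }

        vaddr = vmap(m->pages, npages, VM_MAP, PAGE_KERNEL);
        if (!vaddr) {
                while (npages-- > 0)
                        put_page(m->pages[npages]);
                return -ENOMEM;
        }

        m->npages = npages;
        m->avail = vaddr + (uaddr & ~PAGE_MASK);
        return 0;
}

static u16 read_avail_idx(struct avail_map *m)
{
        /* Plain load: no access_ok(), no SMAP toggling, no spec barrier. */
        return READ_ONCE(m->avail->idx);
}

static void unmap_avail(struct avail_map *m)
{
        int i;

        vunmap((void *)((unsigned long)m->avail & PAGE_MASK));
        for (i = 0; i < m->npages; i++)
                put_page(m->pages[i]);
}

A real implementation of course also has to tear the mapping down and
rebuild it whenever the memory table of the device changes; the sketch
ignores that entirely.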
> I don't
> like the idea I have to say. As a first step, why don't we switch to
> unsafe_put_user/unsafe_get_user etc?
Several reasons:
- They only have an x86 variant, so this would make no difference for
the rest of the architectures.
- unsafe_put_user()/unsafe_get_user() are not sufficient for accessing
structures (e.g. accessing a descriptor) or arrays (batching).
- Unless we can batch accesses to at least two of the three areas
(avail, used and descriptor) in one run, there will be no difference.
E.g. we can batch updates to the used ring, but that won't make any
difference in this case (see the sketch below).
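To make the second point concrete, here is a rough illustration (my own
code, not part of the series): with a vmap()ed descriptor table a whole
16-byte struct vring_desc, or a run of them, can be pulled in with a
single memcpy(), while a per-field primitive like unsafe_get_user()
still costs one user access per field.

#include <linux/string.h>
#include <linux/virtio_ring.h>

/*
 * Illustration only: copy n descriptors starting at 'head' from a
 * vmap()ed descriptor table in one contiguous copy, instead of 4 * n
 * per-field accesses (addr, len, flags, next for each descriptor).
 * Assumes head + n does not wrap past the end of the table.
 */
static void fetch_descs(struct vring_desc *dst,
                        const struct vring_desc *desc_table, /* vmap()ed */
                        unsigned int head, unsigned int n)
{
        memcpy(dst, desc_table + head, n * sizeof(*dst));
}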
> That would be more of an apples to apples comparison, would it not?
An apples to apples comparison only helps if we are No.1, but the fact
is we are not. If we want to compete with e.g. DPDK or AF_XDP, vmap() is
the fastest method AFAIK.
Thanks
>
>
>> Test shows about 24% improvement on TX PPS. It should benefit other
>> cases as well.
>>
>> Please review
>>
>> Jason Wang (3):
>> vhost: generalize adding used elem
>> vhost: fine grain userspace memory accessors
>> vhost: access vq metadata through kernel virtual address
>>
>> drivers/vhost/vhost.c | 281 ++++++++++++++++++++++++++++++++++++++----
>> drivers/vhost/vhost.h | 11 ++
>> 2 files changed, 266 insertions(+), 26 deletions(-)
>>
>> --
>> 2.17.1