Message-ID: <3b2a6309-9d21-7172-a581-9f0f1d5c1427@redhat.com>
Date:   Sat, 12 Oct 2019 15:28:49 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>, linux-kernel@...r.kernel.org
Cc:     kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
        netdev@...r.kernel.org
Subject: Re: [PATCH RFC v1 1/2] vhost: option to fetch descriptors through an
 independent struct


On 2019/10/11 9:45 PM, Michael S. Tsirkin wrote:
> The idea is to support multiple ring formats by converting
> to a format-independent array of descriptors.
>
> This costs extra cycles, but we gain in ability
> to fetch a batch of descriptors in one go, which
> is good for code cache locality.
>
> To simplify benchmarking, I kept the old code
> around so one can switch back and forth by
> writing into a module parameter.
> This will go away in the final submission.
>
> This patch causes a minor performance degradation,
> it's been kept as simple as possible for ease of review.
> Next patch gets us back the performance by adding batching.
>
> Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> ---
>   drivers/vhost/test.c  |  17 ++-
>   drivers/vhost/vhost.c | 299 +++++++++++++++++++++++++++++++++++++++++-
>   drivers/vhost/vhost.h |  16 +++
>   3 files changed, 327 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> index 056308008288..39a018a7af2d 100644
> --- a/drivers/vhost/test.c
> +++ b/drivers/vhost/test.c
> @@ -18,6 +18,9 @@
>   #include "test.h"
>   #include "vhost.h"
>   
> +static int newcode = 0;
> +module_param(newcode, int, 0644);
> +
>   /* Max number of bytes transferred before requeueing the job.
>    * Using this limit prevents one virtqueue from starving others. */
>   #define VHOST_TEST_WEIGHT 0x80000
> @@ -58,10 +61,16 @@ static void handle_vq(struct vhost_test *n)
>   	vhost_disable_notify(&n->dev, vq);
>   
>   	for (;;) {
> -		head = vhost_get_vq_desc(vq, vq->iov,
> -					 ARRAY_SIZE(vq->iov),
> -					 &out, &in,
> -					 NULL, NULL);
> +		if (newcode)
> +			head = vhost_get_vq_desc_batch(vq, vq->iov,
> +						       ARRAY_SIZE(vq->iov),
> +						       &out, &in,
> +						       NULL, NULL);
> +		else
> +			head = vhost_get_vq_desc(vq, vq->iov,
> +						 ARRAY_SIZE(vq->iov),
> +						 &out, &in,
> +						 NULL, NULL);
>   		/* On error, stop handling until the next kick. */
>   		if (unlikely(head < 0))
>   			break;
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 36ca2cf419bf..36661d6cb51f 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -301,6 +301,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
>   			   struct vhost_virtqueue *vq)
>   {
>   	vq->num = 1;
> +	vq->ndescs = 0;
>   	vq->desc = NULL;
>   	vq->avail = NULL;
>   	vq->used = NULL;
> @@ -369,6 +370,9 @@ static int vhost_worker(void *data)
>   
>   static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
>   {
> +	kfree(vq->descs);
> +	vq->descs = NULL;
> +	vq->max_descs = 0;
>   	kfree(vq->indirect);
>   	vq->indirect = NULL;
>   	kfree(vq->log);
> @@ -385,6 +389,10 @@ static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
>   
>   	for (i = 0; i < dev->nvqs; ++i) {
>   		vq = dev->vqs[i];
> +		vq->max_descs = dev->iov_limit;
> +		vq->descs = kmalloc_array(vq->max_descs,
> +					  sizeof(*vq->descs),
> +					  GFP_KERNEL);


Is iov_limit too much here? It obviously increases the memory footprint. I 
guess batching can only be done for descriptors without the indirect or 
next flag set, so we could batch 16 or 64 instead.
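
To put rough numbers on the footprint point (a minimal sketch; the entry
layout, the IOV_LIMIT value and the 64-entry batch below are assumptions
for illustration, not taken from this patch):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical format-independent descriptor entry: addr + len +
 * flags + id, 16 bytes.  An assumed layout, not the struct added
 * by this patch. */
struct desc_entry {
	uint64_t addr;
	uint32_t len;
	uint16_t flags;
	uint16_t id;
};

#define IOV_LIMIT   1024	/* assuming iov_limit is around UIO_MAXIOV */
#define SMALL_BATCH 64		/* a smaller fixed batch, as suggested above */

int main(void)
{
	size_t per_desc = sizeof(struct desc_entry);

	/* Compare sizing vq->descs by iov_limit vs. by a small fixed batch. */
	printf("per entry: %zu bytes\n", per_desc);
	printf("iov_limit-sized array: %zu bytes per vq\n",
	       IOV_LIMIT * per_desc);
	printf("%d-entry batch array:  %zu bytes per vq\n",
	       SMALL_BATCH, SMALL_BATCH * per_desc);
	return 0;
}

With those assumptions that is roughly 16 KiB vs 1 KiB of descriptor
state per virtqueue.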

Thanks
