Message-ID: <20230518-lokomotive-aufziehen-dbc432136b76@brauner>
Date:   Thu, 18 May 2023 16:18:28 +0200
From:   Christian Brauner <brauner@...nel.org>
To:     Mike Christie <michael.christie@...cle.com>
Cc:     oleg@...hat.com, linux@...mhuis.info, nicolas.dichtel@...nd.com,
        axboe@...nel.dk, ebiederm@...ssion.com,
        torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org, mst@...hat.com,
        sgarzare@...hat.com, jasowang@...hat.com, stefanha@...hat.com
Subject: Re: [RFC PATCH 5/8] vhost: Add callback that stops new work and
 waits on running ones

On Wed, May 17, 2023 at 07:09:17PM -0500, Mike Christie wrote:
> When the vhost_task gets a SIGKILL we want to stop new work from being
> queued and also wait for and handle completions for running work. For the
> latter, we still need the vhost_task to handle the completing work, so we
> can't just exit right away. Instead we kick off the stopping and flushing
> of the device/vhost_task/worker to the system workqueue while the
> vhost_task handles completions. When all completions are done we then call
> vhost_task_stop and exit.
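
Spelling that flow out for myself - a rough, untested sketch of what the
destroy work could look like on the vhost core side. vhost_dev_destroy_worker()
and dev->stop_dev_work are only placeholder names for the work handler and the
new callback this series passes to vhost_dev_init():

	static void vhost_dev_destroy_worker(struct work_struct *work)
	{
		struct vhost_dev *dev = container_of(work, struct vhost_dev,
						     destroy_worker);

		/* Driver callback (placeholder name): stop new work from
		 * being queued and deal with outstanding command
		 * completions.
		 */
		if (dev->stop_dev_work)
			dev->stop_dev_work(dev);

		/* Wait for work items that are already running. */
		vhost_dev_flush(dev);

		/* Tell vhost_worker() it can exit now. */
		vhost_task_stop(dev->worker->vtsk);
	}
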
> 
> Signed-off-by: Mike Christie <michael.christie@...cle.com>
> ---
>  drivers/vhost/net.c   |  2 +-
>  drivers/vhost/scsi.c  |  4 ++--
>  drivers/vhost/test.c  |  3 ++-
>  drivers/vhost/vdpa.c  |  2 +-
>  drivers/vhost/vhost.c | 48 ++++++++++++++++++++++++++++++++++++-------
>  drivers/vhost/vhost.h | 10 ++++++++-
>  drivers/vhost/vsock.c |  4 ++--
>  7 files changed, 58 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 8557072ff05e..90c25127b3f8 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -1409,7 +1409,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
>  	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX,
>  		       UIO_MAXIOV + VHOST_NET_BATCH,
>  		       VHOST_NET_PKT_WEIGHT, VHOST_NET_WEIGHT, true,
> -		       NULL);
> +		       NULL, NULL);
>  
>  	vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev);
>  	vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev);
> diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
> index bb10fa4bb4f6..40f9135e1a62 100644
> --- a/drivers/vhost/scsi.c
> +++ b/drivers/vhost/scsi.c
> @@ -1820,8 +1820,8 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
>  		vqs[i] = &vs->vqs[i].vq;
>  		vs->vqs[i].vq.handle_kick = vhost_scsi_handle_kick;
>  	}
> -	vhost_dev_init(&vs->dev, vqs, nvqs, UIO_MAXIOV,
> -		       VHOST_SCSI_WEIGHT, 0, true, NULL);
> +	vhost_dev_init(&vs->dev, vqs, nvqs, UIO_MAXIOV, VHOST_SCSI_WEIGHT, 0,
> +		       true, NULL, NULL);
>  
>  	vhost_scsi_init_inflight(vs, NULL);
>  
> diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> index 42c955a5b211..11a2823d7532 100644
> --- a/drivers/vhost/test.c
> +++ b/drivers/vhost/test.c
> @@ -120,7 +120,8 @@ static int vhost_test_open(struct inode *inode, struct file *f)
>  	vqs[VHOST_TEST_VQ] = &n->vqs[VHOST_TEST_VQ];
>  	n->vqs[VHOST_TEST_VQ].handle_kick = handle_vq_kick;
>  	vhost_dev_init(dev, vqs, VHOST_TEST_VQ_MAX, UIO_MAXIOV,
> -		       VHOST_TEST_PKT_WEIGHT, VHOST_TEST_WEIGHT, true, NULL);
> +		       VHOST_TEST_PKT_WEIGHT, VHOST_TEST_WEIGHT, true, NULL,
> +		       NULL);
>  
>  	f->private_data = n;
>  
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 8c1aefc865f0..de9a83ecb70d 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -1279,7 +1279,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
>  		vqs[i]->handle_kick = handle_vq_kick;
>  	}
>  	vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false,
> -		       vhost_vdpa_process_iotlb_msg);
> +		       vhost_vdpa_process_iotlb_msg, NULL);
>  
>  	r = vhost_vdpa_alloc_domain(v);
>  	if (r)
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 1ba9e068b2ab..4163c86db50c 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -336,6 +336,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
>  static int vhost_worker(void *data)
>  {
>  	struct vhost_worker *worker = data;
> +	struct vhost_dev *dev = worker->dev;
>  	struct vhost_work *work, *work_next;
>  	struct llist_node *node;
>  
> @@ -352,12 +353,13 @@ static int vhost_worker(void *data)
>  		if (!node) {
>  			schedule();
>  			/*
> -			 * When we get a SIGKILL our release function will
> -			 * be called. That will stop new IOs from being queued
> -			 * and check for outstanding cmd responses. It will then
> -			 * call vhost_task_stop to exit us.
> +			 * When we get a SIGKILL we kick off a work to
> +			 * run the driver's helper to stop new work and
> +			 * handle completions. When they are done they will
> +			 * call vhost_task_stop to tell us to exit.
>  			 */
> -			vhost_task_get_signal();
> +			if (vhost_task_get_signal())
> +				schedule_work(&dev->destroy_worker);
>  		}

I'm pretty sure you still need to actually call exit here. Basically
mirror what's done in io_worker_exit() minus the io-specific bits.
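
To sketch the shape I mean - completely untested, and with io_worker_exit()'s
io-wq bookkeeping dropped it boils down to "tear down our own state, then
actually exit": once the destroy work has run and called vhost_task_stop(),
the worker has to leave its loop instead of going back to schedule() with a
dequeued SIGKILL. And since do_exit() is no longer exported, the exit itself
presumably has to stay in vhost_task_fn(), so in vhost_worker() this would
look roughly like:

	/* After the schedule()/vhost_task_get_signal() handling and once
	 * the remaining completions have been drained:
	 */
	if (vhost_task_should_stop(worker->vtsk)) {
		/* The destroy work has called vhost_task_stop(); return
		 * so vhost_task_fn() can do the final do_exit() - the
		 * non-io tail of io_worker_exit().
		 */
		return 0;
	}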
