Message-ID: <53991bdf-bff8-6d94-77dc-c8aba70eeba5@oracle.com>
Date:   Wed, 14 Jun 2023 12:59:07 -0500
From:   Mike Christie <michael.christie@...cle.com>
To:     "Eric W. Biederman" <ebiederm@...ssion.com>
Cc:     Oleg Nesterov <oleg@...hat.com>, linux@...mhuis.info,
        nicolas.dichtel@...nd.com, axboe@...nel.dk,
        torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org, mst@...hat.com,
        sgarzare@...hat.com, jasowang@...hat.com, stefanha@...hat.com,
        brauner@...nel.org
Subject: Re: Can vhost translate to io_uring?

On 6/14/23 1:25 AM, michael.christie@...cle.com wrote:
> It would be nice if the vhost layer could use the io-wq code as a sort
> of generic worker. I can look into what that would take if Jens is ok
> with that type of thing.

We could use the io-wq code, but we hit the same problems as before:

1. We still need to modify the vhost drivers like I mentioned below, so that
when the task gets SIGKILL the drivers fail the work instead of running it
like normal (see the sketch after this list).

2. We still need some code like the patch below, so that when the worker task
exits and is freed, the vhost drivers stop calling io_wq_enqueue and no
longer access the io_wq.

3. There are some other small things which seem easy to change, like needing
to create the worker thread/task_struct when io_wq_create is run instead of
at io_wq_enqueue time. The problem is that work can be queued from threads
whose mm differs from the one we want the worker to use.
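
Purely as illustration, here is a minimal sketch of the shape #1 and #2
describe. The helpers vhost_worker_dequeue(), vhost_work_fail() and
vhost_worker_enqueue(), and the dev->worker pointer published under RCU,
are hypothetical stand-ins for the real vhost structures; this is not the
actual patch:

#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/rcupdate.h>

/* Sketch of a worker loop that fails work after SIGKILL (#1) and
 * unpublishes itself before it is freed (#2). */
static int vhost_worker_fn(void *data)
{
        struct vhost_worker *worker = data;
        bool dead = false;

        for (;;) {
                struct vhost_work *work;
                struct ksignal ksig;

                set_current_state(TASK_INTERRUPTIBLE);

                if (!dead && signal_pending(current)) {
                        __set_current_state(TASK_RUNNING);
                        if (get_signal(&ksig))
                                dead = true;    /* SIGKILL: stop running work */
                        continue;
                }

                work = vhost_worker_dequeue(worker);    /* hypothetical */
                if (!work) {
                        if (dead) {
                                __set_current_state(TASK_RUNNING);
                                break;
                        }
                        schedule();
                        continue;
                }
                __set_current_state(TASK_RUNNING);

                if (dead)
                        vhost_work_fail(work);  /* hypothetical: complete with error */
                else
                        work->fn(work);
        }

        /* #2: make the worker invisible to queueing paths before freeing. */
        rcu_assign_pointer(worker->dev->worker, NULL);
        synchronize_rcu();
        return 0;
}

/* Queue side: only enqueue while a live worker is published. */
static bool vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
        struct vhost_worker *worker;
        bool queued = false;

        rcu_read_lock();
        worker = rcu_dereference(dev->worker);
        if (worker)
                queued = vhost_worker_enqueue(worker, work);    /* hypothetical */
        rcu_read_unlock();

        return queued;
}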


I've done #2 in the patch below. I'm almost done with #1 and am just testing
it now. When that's done, we can remove the signal hacks and then decide if
we want to go further and switch to io-wq.


> 
> For vhost, I just submitted a patch to the vhost layer that allows us to
> switch out the vhost worker thread while IO is running:
> 
> https://lists.linuxfoundation.org/pipermail/virtualization/2023-June/067246.html
> 
> After that patch, I just need to modify vhost_worker/vhost_task_fn so that
> when get_signal returns true we set the worker to NULL and do a synchronize_rcu.
> Then I just need to modify vhost-scsi so it detects when the worker is NULL,
> and modify the flush code so it handles work that didn't get to run.
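
To make the quoted plan a bit more concrete, here is a rough sketch of the
NULL-worker detection plus flush handling it describes. The pending_cmds
list and vhost_scsi_fail_cmd() are hypothetical stand-ins, and the
worker-alive check is simplified; this is not the submitted patch:

#include <linux/rcupdate.h>
#include <linux/list.h>

static void vhost_scsi_flush(struct vhost_scsi *vs)
{
        struct vhost_scsi_cmd *cmd, *tmp;
        bool worker_alive;

        rcu_read_lock();
        worker_alive = rcu_dereference(vs->dev.worker) != NULL;
        rcu_read_unlock();

        if (worker_alive) {
                vhost_dev_flush(&vs->dev);      /* worker alive: normal flush */
                return;
        }

        /* Worker is gone: queued commands will never run, so fail them
         * instead of waiting on a flush that can't make progress. */
        list_for_each_entry_safe(cmd, tmp, &vs->pending_cmds, list) {
                list_del(&cmd->list);
                vhost_scsi_fail_cmd(cmd);       /* hypothetical: complete with error */
        }
}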
