Date:	Thu, 18 Dec 2008 14:40:06 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Vladislav Bolkhovitin <vst@...b.net>
Cc:	Fabio Checconi <fabio@...dalf.sssup.it>,
	linux-kernel@...r.kernel.org
Subject: Re: Dynamic switching of io_context

On Wed, Dec 17 2008, Vladislav Bolkhovitin wrote:
> >I haven't seen the rest of the code, so I may be wrong, but I suppose
> >that a better approach would be to use CLONE_IO to share io contexts,
> >if possible.
> 
> Unfortunately, that would be far from optimal. It is well known that
> async IO achieves the best performance when submitted by a limited
> number of threads, no more than the CPU count. So the only way to
> submit IO for each of, say, 100 clients in a dedicated per-client IO
> context is to dynamically switch the io_context of the current thread
> to the client's io_context before submission.
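
A minimal sketch of what such switching could look like, in kernel C
(struct client, clt->ioc and submit_on_behalf_of() are names invented
here purely for illustration; current->io_context and submit_bio() are
real kernel symbols, though their details vary across versions). Note
that CLONE_IO, by contrast, fixes the shared io_context at thread
creation time, which is why a bounded pool of submitter threads cannot
use it to serve many clients:

/*
 * Illustrative sketch only, not a complete implementation: swap the
 * submitting thread's io_context to the per-client one around
 * submission, so the IO scheduler (e.g. CFQ) accounts the IO to that
 * client.  Real code must take references on both contexts and guard
 * against races with exit_io_context() and fork.
 */
struct client {
	struct io_context *ioc;		/* dedicated per-client context */
};

static void submit_on_behalf_of(struct client *clt, int rw,
				struct bio *bio)
{
	struct io_context *saved = current->io_context;

	current->io_context = clt->ioc;	/* attach the client's context */
	submit_bio(rw, bio);		/* IO now charged to the client */
	current->io_context = saved;	/* restore our own context */
}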

There's also likely to be another user of exactly this kind of
mechanism, the acall patches from Zach. At least my vision of the
punt-to-thread approach would be very similar: grab an available thread
and attach it to a given IO context.

So while I did mention exactly what Fabio outlines in my initial mail
on this thread, a generic way to attach and detach IO contexts from
processes/threads would be useful beyond this project. nfsd comes to
mind as well.
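
Such a generic interface might look roughly like the following
(ioc_attach_task() and ioc_detach_task() are hypothetical names, not
existing kernel functions; put_io_context() is real, and the exact
type/name of the io_context refcount field has varied across kernel
versions):

/*
 * Hypothetical API sketch: attach an io_context to a task so that IO
 * it subsequently submits is scheduled in that context, and detach to
 * restore independence.  SCST, nfsd, or a punt-to-thread path would
 * bracket submission with these.  Locking and the interaction with
 * exit_io_context() are deliberately glossed over here.
 */
int ioc_attach_task(struct io_context *ioc, struct task_struct *tsk)
{
	if (tsk->io_context)
		return -EBUSY;		/* caller must save/stack it */
	atomic_inc(&ioc->refcount);	/* field name varies by version */
	tsk->io_context = ioc;
	return 0;
}

void ioc_detach_task(struct task_struct *tsk)
{
	struct io_context *ioc = tsk->io_context;

	tsk->io_context = NULL;
	if (ioc)
		put_io_context(ioc);	/* drop our reference */
}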

-- 
Jens Axboe

