Message-ID: <4941636A.3050303@vlnb.net>
Date:	Thu, 11 Dec 2008 22:00:58 +0300
From:	Vladislav Bolkhovitin <vst@...b.net>
To:	Jens Axboe <jens.axboe@...cle.com>
CC:	linux-scsi@...r.kernel.org,
	James Bottomley <James.Bottomley@...senPartnership.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>,
	Mike Christie <michaelc@...wisc.edu>,
	Jeff Garzik <jeff@...zik.org>,
	Boaz Harrosh <bharrosh@...asas.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, scst-devel@...ts.sourceforge.net,
	Bart Van Assche <bart.vanassche@...il.com>,
	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>
Subject: Re: [PATCH][RFC 13/23]: Export of alloc_io_context() function

Jens Axboe wrote:
> On Thu, Dec 11 2008, Vladislav Bolkhovitin wrote:
>> Jens Axboe wrote:
>>> On Wed, Dec 10 2008, Vladislav Bolkhovitin wrote:
>>>> This patch exports the alloc_io_context() function. For performance
>>>> reasons SCST queues commands using a pool of IO threads. Performance
>>>> is considerably better (>30% increase on sequential reads) if the
>>>> threads in a pool share the same IO context. Since SCST can be built
>>>> as a module, it needs the alloc_io_context() function exported.
>>>>
>>>> Signed-off-by: Vladislav Bolkhovitin <vst@...b.net>
>>>> ---
>>>>  block/blk-ioc.c |    1 +
>>>>  1 file changed, 1 insertion(+)
>>>>
>>>> diff -upkr linux-2.6.27.2/block/blk-ioc.c linux-2.6.27.2/block/blk-ioc.c
>>>> --- linux-2.6.27.2/block/blk-ioc.c	2008-10-10 02:13:53.000000000 +0400
>>>> +++ linux-2.6.27.2/block/blk-ioc.c	2008-11-25 21:27:01.000000000 +0300
>>>> @@ -105,6 +105,7 @@ struct io_context *alloc_io_context(gfp_
>>>>
>>>> 	return ret;
>>>> }
>>>> +EXPORT_SYMBOL(alloc_io_context);
>>> Why is this needed, can't you just use CLONE_IO?
>> There are two reasons for that:
>>
>> 1. The kthread interface doesn't support passing the CLONE_IO flag.
> 
> Then you fix that instead of working around it! :-)

It isn't worth the effort, because of (2) below.
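
For reference, the clone flags in the kthread creation path are
hardcoded, so there is no way to request CLONE_IO through it. In
2.6.27, create_kthread() in kernel/kthread.c does (roughly):

	pid = kernel_thread(kthread, create, CLONE_FS | CLONE_FILES | SIGCHLD);

Supporting CLONE_IO would mean plumbing a flags argument through the
whole kthread_create() path.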

>> 2. Each (virtual) device has its own pool of threads serving it.
>> Threads in each such pool should share a common IO context, but
>> different pools should have different IO contexts. So it would be
>> necessary to implement a two-stage start of the IO threads in each
>> pool: first, one thread would be started; then it would call
>> get_io_context() to obtain an io_context; then it would create the
>> remaining threads with the CLONE_IO flag. That is definitely a lot
>> more complicated than a simple call of alloc_io_context() and
>> assignment of the returned context to each just-created thread in a
>> loop before they are run.
> 
> Just start the first thread without CLONE_IO, and subsequent threads
> fork off that with CLONE_IO set? 

Yes, that would be the two-stage thread creation, which is a *LOT* more
complicated than direct io_context assignment using alloc_io_context().
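
To illustrate, with the export the per-pool startup is just something
like the following untested sketch (scst_pool, pool_thread_fn and
start_pool_threads are made-up names for illustration, not actual SCST
code):

#include <linux/blkdev.h>	/* alloc_io_context(), put_io_context() */
#include <linux/iocontext.h>	/* ioc_task_link() */
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

struct scst_pool;			/* hypothetical pool state */
int pool_thread_fn(void *data);		/* hypothetical thread body */

static int start_pool_threads(struct scst_pool *pool, int nr)
{
	struct io_context *ioc;
	int i;

	/* One shared IO context for the whole pool */
	ioc = alloc_io_context(GFP_KERNEL, -1);
	if (ioc == NULL)
		return -ENOMEM;

	for (i = 0; i < nr; i++) {
		struct task_struct *t;

		t = kthread_create(pool_thread_fn, pool, "scst_pool/%d", i);
		if (IS_ERR(t))
			break;
		/* Assign the shared context before the thread starts running */
		t->io_context = ioc_task_link(ioc);
		wake_up_process(t);
	}

	put_io_context(ioc);	/* drop the initial creation reference */
	return 0;
}

No second stage, no handshake with a first thread -- just one
allocation and a plain assignment per thread.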

> I think we need to make sure that we
> allocate an IO context for the 'parent' if it doesn't have one already
> and CLONE_IO is set, but that is something that can easily be rectified.

Sorry, I'm not sure I understand you here...

> It may seem more complex, but if you use this approach you are pretty
> much free of having to worry about any future changes there.

Worrying about future changes is routine in the Linux kernel, where 
there is no stable internal API ;-)


