Message-ID: <49FE5FEB.6040207@cn.fujitsu.com>
Date:	Mon, 04 May 2009 11:24:27 +0800
From:	Li Zefan <lizf@...fujitsu.com>
To:	Ryo Tsuruta <ryov@...inux.co.jp>
CC:	Alan.Brunelle@...com, dm-devel@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH dm-ioband] Added in blktrace msgs for dm-ioband

Ryo Tsuruta wrote:
> Hi Alan,
> 
>> Hi Ryo -
>>
>> I don't know if you are taking in patches, but whilst trying to uncover
>> some odd behavior I added some blktrace messages to dm-ioband-ctl.c. If
>> you're keeping one code base for old stuff (2.6.18-ish RHEL stuff) and
>> upstream you'll have to #if around these (the blktrace message stuff
>> came in around 2.6.26 or 27 I think).
>>
>> My test case was to take a single 400GB storage device, put two 200GB
>> partitions on it and then see what the "penalty" or overhead is for adding
>> dm-ioband on top. To do this I simply created an ext2 FS on each
>> partition in parallel (two processes each doing a mkfs to one of the
>> partitions). Then I put two dm-ioband devices on top of the two
>> partitions (setting the weight to 100 in both cases - thus they should
>> have equal access).
>>
>> Using default values I was seeing /very/ large differences - on the
>> order of 3X. When I bumped the number of tokens to a large number
>> (10,240) the timings got much closer (<2%). I have found that using
>> weight-iosize performs worse than weight (closer to 5% penalty).
> 
> I could reproduce similar results. One dm-ioband device seems to stop
> issuing I/Os for a few seconds at times. I'll investigate this further.
>  
>> I'll try to formalize these results as I go forward and report out on
>> them. In any event, I thought I'd share this patch with you if you are
>> interested...
> 
> Thanks. I'll include your patch in the next release.
>  

IMO we should use TRACE_EVENT instead of adding new blk_add_trace_msg() calls.
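For reference, a TRACE_EVENT-based version of one of these messages might look roughly like the sketch below. The event name and fields are hypothetical, not taken from the patch, and a real conversion would also need the usual define-trace header boilerplate:

```c
/* Hypothetical tracepoint sketch -- name and fields are illustrative. */
#include <linux/tracepoint.h>

TRACE_EVENT(ioband_hold_bio,
	TP_PROTO(int ioband_id, int tokens),
	TP_ARGS(ioband_id, tokens),
	TP_STRUCT__entry(
		__field(int, ioband_id)
		__field(int, tokens)
	),
	TP_fast_assign(
		__entry->ioband_id = ioband_id;
		__entry->tokens = tokens;
	),
	TP_printk("ioband=%d tokens=%d",
		  __entry->ioband_id, __entry->tokens)
);
```

Unlike free-form blk_add_trace_msg() strings, a tracepoint gives typed fields and can be enabled per-event from userspace.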

>> Here's a sampling from some blktrace output (sorry for the wrapping) - I
>> should note that I'm a bit scared to see such large numbers of holds
>> going on when the token count should be >5,000 for each device...
>> Holding these back in an equal-access situation prevents the block
>> I/O layer from merging (most of) these (as mkfs performs lots & lots of
>> small but sequential I/Os).

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
