Date:	Thu, 22 Jan 2015 14:12:49 -0700
From:	Jens Axboe <axboe@...com>
To:	Jeff Moyer <jmoyer@...hat.com>
CC:	Huang Ying <ying.huang@...el.com>, Christoph Hellwig <hch@....de>,
	LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: Re: [LKP] [block] 34b48db66e0: +3291.6% iostat.sde.wrqm/s

On 01/22/2015 02:08 PM, Jeff Moyer wrote:
> Jens Axboe <axboe@...com> writes:
> 
>> On 01/22/2015 01:49 PM, Jeff Moyer wrote:
>>> Jens Axboe <axboe@...com> writes:
>>>
>>>>> Agreed on all above, but are the actual benchmark numbers included
>>>>> somewhere in all this mess?  I'd like to see if the benchmark numbers
>>>>> improved first, before digging into the guts of which functions are
>>>>> called more or which stats changed.
>>>>
>>>> I deleted the original email, but the latter tables had drive throughput
>>>> rates and it looked higher for the ones I checked on the newer kernel.
>>>> Which the above math would indicate as well, multiplying reqs-per-sec
>>>> and req-size.
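As a quick back-of-envelope check of that multiplication (the numbers below
are made up for illustration, not taken from the report; avgrq-sz is in
512-byte sectors):

#include <stdio.h>

int main(void)
{
	/* hypothetical values, only roughly in the range of the iostat columns */
	double w_per_sec = 20.0;      /* writes completed per second          */
	double avgrq_sz  = 13682.0;   /* avg request size, 512-byte sectors   */
	double mb_per_s  = w_per_sec * avgrq_sz * 512.0 / (1024.0 * 1024.0);

	printf("~%.1f MB/s\n", mb_per_s);
	return 0;
}

The only point is that reqs-per-sec times request size lands in a sane MB/s
range, so the throughput can be read off the tables even without a dedicated
column.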
>>>
>>> Looking back at the original[1], I think I see the throughput numbers for
>>> iozone.  The part that confused me was that each table mixes different
>>> types of data.  I'd much prefer if different data were put in different
>>> tables, along with column headers that stated what was being reported
>>> and the units for the measurements.
>>>
>>> Anyway, I find the increased service time troubling, especially this
>>> one:
>>>
>>> testbox/testcase/testparams: ivb44/fsmark/performance-1x-1t-1HDD-xfs-4M-60G-NoSync
>>>
>>>        544 ±  0%   +1268.9%       7460 ±  0%  iostat.sda.w_await
>>>        544 ±  0%   +1268.5%       7457 ±  0%  iostat.sda.await
>>>
>>> I'll add this to my queue of things to look into.
>>
>> From that same table:
>>
>>       1009 ±  0%   +1255.7%      13682 ±  0%  iostat.sda.avgrq-sz
>>
>> the average request size has gone up equally. This is clearly a
>> stream-oriented benchmark, if the IOs get that big.
> 
> Hmm, ok, I'll buy that.  However, I am surprised that the relationship
> between i/o size and service time is 1:1 here...

Should be pretty close to 1:1, given that the smaller requests are still
sequential. And we're obviously doing a good enough job of not servicing
them out of sequence.
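To make that concrete, a toy model (my own sketch, with assumed overhead and
bandwidth numbers, not measurements from this box): per-request service time
for sequential writes is roughly a fixed per-request cost plus transfer time,
and with multi-megabyte requests the transfer term dominates:

#include <stdio.h>

int main(void)
{
	double overhead_ms = 0.5;    /* assumed fixed per-request cost        */
	double bw_mb_s     = 150.0;  /* assumed sequential drive bandwidth    */
	double size_kb[]   = { 504.5, 6841.0 };  /* old/new avgrq-sz * 512B, in KB */

	for (int i = 0; i < 2; i++) {
		double xfer_ms = size_kb[i] / 1024.0 / bw_mb_s * 1000.0;
		printf("%7.1f KB request: ~%5.1f ms service time\n",
		       size_kb[i], overhead_ms + xfer_ms);
	}
	return 0;
}

The absolute milliseconds are invented; the point is that once transfer time
dominates, per-request service time scales almost linearly with request size,
which is why await and avgrq-sz move together here.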

My original worry with bumping max_sectors was that we'd introduce bubbles
in the pipeline, e.g. for interleaved IO patterns where one stream does
large sequential IO and the other small non-sequential IO. So it'd be
interesting to see a test for something like that.
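Something along these lines might do as a crude starting point (purely a
sketch; the device path, IO sizes, thread layout and the use of O_DIRECT
pwrite/pread are my assumptions, not a tested case):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <time.h>

#define BIG_IO   (4 * 1024 * 1024)   /* large streamed writes        */
#define SMALL_IO (4 * 1024)          /* small random reads           */
#define SPAN     (1ULL << 30)        /* region the reader seeks over */

static const char *path = "/dev/sdX";   /* placeholder scratch device */
static volatile int stop;

/* one thread streams large sequential writes... */
static void *streamer(void *arg)
{
	void *buf;
	off_t off = 0;
	int fd = open(path, O_WRONLY | O_DIRECT);

	if (fd < 0 || posix_memalign(&buf, 4096, BIG_IO))
		return NULL;
	memset(buf, 0xa5, BIG_IO);
	while (!stop) {
		if (pwrite(fd, buf, BIG_IO, off) != BIG_IO)
			break;
		off += BIG_IO;
	}
	free(buf);
	close(fd);
	return NULL;
}

/* ...while the main thread issues small random reads and reports latency */
int main(void)
{
	pthread_t thr;
	void *buf;
	int fd = open(path, O_RDONLY | O_DIRECT);

	if (fd < 0 || posix_memalign(&buf, 4096, SMALL_IO)) {
		perror("setup");
		return 1;
	}
	srand(1234);
	pthread_create(&thr, NULL, streamer, NULL);

	for (int i = 0; i < 1000; i++) {
		off_t off = (off_t)(rand() % (int)(SPAN / SMALL_IO)) * SMALL_IO;
		struct timespec t0, t1;

		clock_gettime(CLOCK_MONOTONIC, &t0);
		if (pread(fd, buf, SMALL_IO, off) != SMALL_IO)
			break;
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("%d %.3f ms\n", i,
		       (t1.tv_sec - t0.tv_sec) * 1e3 +
		       (t1.tv_nsec - t0.tv_nsec) / 1e6);
	}
	stop = 1;
	pthread_join(thr, NULL);
	free(buf);
	close(fd);
	return 0;
}

fio with two such jobs (one big sequential writer, one small random reader
with latency reporting) would of course be the less painful way to run the
same experiment and compare before/after the max_sectors bump.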

-- 
Jens Axboe

