Message-ID: <20200107134106.GD25547@quack2.suse.cz>
Date: Tue, 7 Jan 2020 14:41:06 +0100
From: Jan Kara <jack@...e.cz>
To: kernel test robot <rong.a.chen@...el.com>
Cc: Matthew Bobrowski <mbobrowski@...browski.org>,
Theodore Ts'o <tytso@....edu>, Jan Kara <jack@...e.cz>,
Ritesh Harjani <riteshh@...ux.ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
lkp@...ts.01.org
Subject: Re: [ext4] b1b4705d54: filebench.sum_bytes_mb/s -20.2% regression
Hello,
On Tue 24-12-19 08:59:15, kernel test robot wrote:
> FYI, we noticed a -20.2% regression of filebench.sum_bytes_mb/s due to commit:
>
>
> commit: b1b4705d54abedfd69dcdf42779c521aa1e0fbd3 ("ext4: introduce direct I/O read using iomap infrastructure")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>
> in testcase: filebench
> on test machine: 8 threads Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 8G memory
> with following parameters:
>
> disk: 1HDD
> fs: ext4
> test: fivestreamreaddirect.f
> cpufreq_governor: performance
> ucode: 0x27
I tried to reproduce this but failed with my test VM. I had a SATA SSD as
the backing store though, so maybe that's what makes the difference. Perhaps
the new code results in somewhat more seeks because the five threads
competing to submit sequential IO end up being more interleaved?
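
For reference, the workload boils down to five threads each streaming
through its own file with sequential O_DIRECT reads. A minimal standalone
sketch of that access pattern is below (file paths, file size and block
size are made up for illustration, not taken from fivestreamreaddirect.f):

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS 5
#define BLOCK_SIZE (1024 * 1024)	/* 1 MiB per read, aligned for O_DIRECT */
#define NBLOCKS 1024			/* read 1 GiB per file */

static void *reader(void *arg)
{
	const char *path = arg;
	void *buf;
	int fd;

	/* O_DIRECT requires a suitably aligned buffer. */
	if (posix_memalign(&buf, 4096, BLOCK_SIZE))
		return NULL;

	fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		free(buf);
		return NULL;
	}

	/* Each thread streams through its own file sequentially. */
	for (long i = 0; i < NBLOCKS; i++) {
		if (pread(fd, buf, BLOCK_SIZE, i * (off_t)BLOCK_SIZE) <= 0)
			break;
	}

	close(fd);
	free(buf);
	return NULL;
}

int main(void)
{
	pthread_t tids[NTHREADS];
	static char paths[NTHREADS][32];

	for (int i = 0; i < NTHREADS; i++) {
		/* Pre-created test files on the ext4 filesystem under test. */
		snprintf(paths[i], sizeof(paths[i]), "/mnt/ext4/stream%d", i);
		pthread_create(&tids[i], NULL, reader, paths[i]);
	}
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}

With the streams interleaved like this, a rotational disk has to seek
between the five file extents, which is where I'd expect the HDD vs SSD
difference to show up.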
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR