Date:   Thu, 9 Jan 2020 13:34:27 +0100
From:   Jan Kara <jack@...e.cz>
To:     "Theodore Y. Ts'o" <tytso@....edu>
Cc:     Ritesh Harjani <riteshh@...ux.ibm.com>, Jan Kara <jack@...e.cz>,
        Xiaoguang Wang <xiaoguang.wang@...ux.alibaba.com>,
        Ext4 Developers List <linux-ext4@...r.kernel.org>,
        joseph.qi@...ux.alibaba.com, Liu Bo <bo.liu@...ux.alibaba.com>
Subject: Re: Discussion: is it time to remove dioread_nolock?

On Wed 08-01-20 12:42:59, Theodore Y. Ts'o wrote:
> On Wed, Jan 08, 2020 at 04:15:13PM +0530, Ritesh Harjani wrote:
> > > Yes, that's a good point. And I'm not opposed to that if it makes the life
> > > simpler. But I'd like to see some performance numbers showing how much is
> > > writeback using unwritten extents slower so that we don't introduce too big
> > > regression with this...
> > > 
> > 
> > Yes, let me try to get some performance numbers with dioread_nolock as
> > the default option for buffered write on my setup.
> 
> I started running some performance runs last night, and the

Thanks for the numbers! What is the difference between 'default-1' and
'default-2' configurations (and similarly between dioread_nolock-1 and -2
configurations)?

> interesting thing that I found was that fs_mark actually *improved*
> with dioread_nolock (with fsync enabled).  That may be an example of
> where fixing the commit latency caused by writeback can actually show
> up in a measurable way with benchmarks.

Yeah, that could be.

> Dbench was slightly impacted; I didn't see any real differences with
> compilebench or postmark.

Interestingly, dbench is also an fsync(2)-bound workload (because the working
set is too small for anything else to actually reach the disk on
contemporary systems). But file sizes with dbench are smaller (under 100k)
than with fs_mark (1MB), so that is probably what makes the difference.
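To make that concrete, here is a toy sketch (not fs_mark or dbench themselves,
just an illustrative Python loop, with file count and sizes chosen arbitrarily)
of the fsync-per-file, small-file pattern both benchmarks exercise; the per-file
fsync is what makes journal commit latency visible in the results:

```python
import os
import time


def fsync_smallfile_workload(dirpath, n_files=32, size=100 * 1024):
    """Write n_files files of `size` bytes, fsync-ing each one,
    and return the elapsed wall-clock time in seconds.

    Every fsync(2) forces a journal commit to finish, so commit
    latency (e.g. from writeback stalls) dominates the runtime.
    """
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(n_files):
        path = os.path.join(dirpath, f"file-{i}")
        with open(path, "wb") as f:
            f.write(payload)
            os.fsync(f.fileno())  # wait for data + journal commit
    return time.perf_counter() - start
```

Running this on filesystems mounted with and without dioread_nolock, and with
`size` set to ~100k (dbench-like) vs 1MB (fs_mark-like), would show whether the
file-size difference is really what separates the two results.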

>  dioread_nolock did improve fio with
> sequential reads, which is interesting, since I would have expected
> that with the inode_lock improvements, there shouldn't have been any
> difference.  So that may be a bit of weirdness that we should try to
> understand.

Yes, this is indeed strange. I don't see anything in the DIO read path where
dioread_nolock would actually still matter.

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
