Date:	Thu, 21 Jul 2011 20:35:23 +0200
From:	Jan Kara <jack@...e.cz>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Jan Kara <jack@...e.cz>, Curt Wohlgemuth <curtw@...gle.com>,
	Al Viro <viro@...iv.linux.org.uk>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	fengguang.wu@...el.com
Subject: Re: [PATCH] writeback: Don't wait for completion in writeback_inodes_sb_nr

On Tue 19-07-11 12:56:39, Christoph Hellwig wrote:
> On Fri, Jul 15, 2011 at 01:08:54AM +0200, Jan Kara wrote:
> >   Actually, it's the other way around: writeback_inodes_sb() is superfluous
> > because of wakeup_flusher_threads(). So something like the attached patch
> > could improve sync times (especially in the presence of other IO). So far I
> > have only checked that sync times look reasonable with it but didn't really
> > compare them with the original kernel...
> 
> This changes the order in which ->quota_sync is called relative to other
> operations, see my other mail about it.  Also, the code gets really
> confusing at this point; I think you're better off not trying to share
> code between syncfs, umount & co. and sys_sync with these bits.
  Good point. I'll stop sharing the code for these two cases.
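To be clear about what gets split: the helper currently shared by the
syncfs()/umount path and sys_sync() does roughly the following (simplified
and from memory, the bdi checks and error handling are omitted):

	/* roughly the shared helper in fs/sync.c, simplified */
	static int __sync_filesystem(struct super_block *sb, int wait)
	{
		if (sb->s_qcop && sb->s_qcop->quota_sync)
			sb->s_qcop->quota_sync(sb, -1, wait);
		if (wait)
			sync_inodes_sb(sb);		/* synchronous inode writeback */
		else
			writeback_inodes_sb(sb);	/* kick async inode writeback */
		if (sb->s_op->sync_fs)
			sb->s_op->sync_fs(sb, wait);
		return __sync_blockdev(sb->s_bdev, wait);
	}

I'd keep something along these lines for syncfs()/umount and give sys_sync()
its own sequence (see the sketch further down).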

> You're also skipping the ->sync_fs and ->quota_sync calls in the first round.
  Well, kind of. Since writeback is running asynchronously, we have no way
to call ->sync_fs(sb, 0) just after async inode writeback is done (and it
doesn't really make sense to call it before that moment). So we call
synchronous inode writeback first and only then ->sync_fs().
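
  In code, the ordering I have in mind for sys_sync() is roughly the
following (just a sketch to illustrate the intended ordering, not the actual
patch; the iterate_supers() callback is simplified and quota / block device
syncing is omitted):

	#include <linux/fs.h>
	#include <linux/writeback.h>
	#include <linux/syscalls.h>

	/* Sketch only - shows the intended ordering, not the real patch. */
	static void sync_one_sb(struct super_block *sb, void *arg)
	{
		if (sb->s_flags & MS_RDONLY)
			return;
		sync_inodes_sb(sb);		/* wait for inode writeback */
		if (sb->s_op->sync_fs)
			sb->s_op->sync_fs(sb, 1);	/* only after inodes are written */
	}

	SYSCALL_DEFINE0(sync)
	{
		wakeup_flusher_threads(0);		/* kick async writeback everywhere */
		iterate_supers(sync_one_sb, NULL);	/* then one waiting pass per sb */
		return 0;
	}

That way ->sync_fs() runs only after the synchronous inode writeback for that
superblock has finished.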

> I know for XFS sync_fs without wait is already mostly a no-op, but we'll
> need to benchmark and document this change, and apply it to the non-sync
> caller as well.
  I already have some numbers. I've checked pure sync overhead with:
  sync; time for (( i = 0; i < 100; i++ )); do sync; done
The results of 5 runs are:
  avg 1.130400s, stddev 0.027369 (unpatched kernel)
  avg 1.073200s, stddev 0.040848 (patched kernel)
so, as expected, the patched kernel is slightly faster.

And a test with 20000 4k files in 100 directories on xfs:
  avg 155.995600s, stddev 1.879084 (unpatched kernel)
  avg 155.942200s, stddev 1.843881 (patched kernel)
so no real difference.

The same test with ext3:
  avg 154.597200s, stddev 1.556965 (unpatched kernel)
  avg 153.109800s, stddev 1.339094 (patched kernel)
again, almost no difference.

  Where would you like to document the change?

								Honza

-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR