Date:	Wed, 13 Jun 2012 11:34:20 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Fengguang Wu <fengguang.wu@...el.com>
Cc:	Wanpeng Li <liwp.linux@...il.com>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	Gavin Shan <shangw@...ux.vnet.ibm.com>
Subject: Re: [PATCH V2] writeback: fix hung_task alarm when sync block

Fengguang Wu <fengguang.wu@...el.com> writes:

> Hi Jeff,
>
> On Wed, Jun 13, 2012 at 10:27:50AM -0400, Jeff Moyer wrote:
>> Wanpeng Li <liwp.linux@...il.com> writes:
>> 
>> > diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
>> > index f2d0109..df879ee 100644
>> > --- a/fs/fs-writeback.c
>> > +++ b/fs/fs-writeback.c
>> > @@ -1311,7 +1311,11 @@ void writeback_inodes_sb_nr(struct super_block *sb,
>> >  
>> >  	WARN_ON(!rwsem_is_locked(&sb->s_umount));
>> >  	bdi_queue_work(sb->s_bdi, &work);
>> > -	wait_for_completion(&done);
>> > +	if (sysctl_hung_task_timeout_secs)
>> > +		while (!wait_for_completion_timeout(&done, HZ/2))
>> > +			;
>> > +	else
>> > +		wait_for_completion(&done);
>> >  }
>> >  EXPORT_SYMBOL(writeback_inodes_sb_nr);
>> 
>> Is it really expected that writeback_inodes_sb_nr will routinely queue
>> up more than 2 seconds' worth of I/O?  (Yes, I understand that it isn't
>> the only entity issuing I/O.)
>
> Yes, in the case of syncing the whole superblock.
> Basically sync() does its job in two steps:
>
> for all sb:
>         writeback_inodes_sb_nr() # WB_SYNC_NONE
>         sync_inodes_sb()         # WB_SYNC_ALL
>
>> For devices that are really slow, it may make
>> more sense to tune the system so that you don't have too much writeback
>> I/O submitted at once.  Dropping nr_requests for the given queue should
>> fix this situation, I would think.
>
> The worrying case is sync() having to wait
>
>         (nr_dirty + nr_writeback) / write_bandwidth
>
> time, where it is nr_dirty that could grow rather large.
>
> For example, if the dirty threshold is 1GB and write_bandwidth is
> 10MB/s, sync() will have to wait for 100 seconds. If there are heavy
> dirtiers running during the sync, it will typically take several
> hundred seconds (which is not great, but still much better than the
> livelocks seen in some old kernels).
>
>> This really feels like we're papering over the problem.
>
> That's true. The majority of users probably don't want to cache 100
> seconds' worth of data in memory. It may be worthwhile to add a new
> per-bdi limit whose unit is number-of-seconds (of dirty data).

Hi, Fengguang,

Another option is to limit the amount of time we wait to the amount of
time we expect to have to wait.  IOW, if we can estimate the amount of
time we think the I/O will take to complete, we can set the
hung_task_timeout[1] to *that* (with some fudge factor).  Do you have a
mechanism in place today to make such an estimate?  The benefit of this
solution is obvious: you still get notified when tasks are actually
hung, but you don't get false warnings.

Thanks for your quick and detailed response, by the way!

-Jeff

[1] I realize that hung_task_timeout is global.  We could simulate a
per-task timeout by simply looping in wait_for_completion_timeout until
expected_time - waited_time <= hung_task_timeout, and then doing
the wait_for_completion (without the timeout).
