Message-ID: <87627d6lae.fsf@devron.myhome.or.jp>
Date:	Mon, 17 Sep 2012 18:39:05 +0900
From:	OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
To:	Jan Kara <jack@...e.cz>
Cc:	Fengguang Wu <fengguang.wu@...el.com>, viro@...iv.linux.org.uk,
	hch@....de, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Fix queueing work if !bdi_cap_writeback_dirty()

Jan Kara <jack@...e.cz> writes:

>> I think you know how to solve it, though. You can do the periodic flush
>> in your own task. And you can check bdi->dirty_exceeded in any of your
>> handlers.
>   Sure, you can have your private thread. That is possible but you will
> have to duplicate flusher logic and you will still get odd behavior e.g.
> when your filesystem is on one partition and another filesystem is on a
> different partition of the same disk.

Right. But that is more or less what current FSes are doing.

>> Well, ok. The alternative plan, a bigger change, is to add a handler to
>> the writeback task path. This would be the better way, and the core
>> should be able to request a flush in the usual way (I guess this is
>> what you are concerned about).  And I believe some FSes could implement
>> a simpler and more efficient writeback path this way.
>> 
>> But this would look like what reiser4 submitted in the past (before
>> bdi was introduced), which unfortunately was never accepted.
>> 
>> Since the situation has changed, would we accept it now?
>> 
>> OK, why does my FS require it? Because its basic strategy is to keep
>> the consistency of the user-visible view, not only internal metadata
>> consistency. I.e., it works by flushing a snapshot of the user view.
>> 
>> So, flushing metadata/data in arbitrary order, as the current writeback
>> task does, is unacceptable (except, of course, when requested by the
>> user). And the writeback task will never know the correct order for
>> the FS.
>   OK, thanks for the explanation. Now I understand what you are trying to
> do. Would it be enough if you could track dirty inodes inside your
> filesystem and provide some callback for the flusher, so that you can
> queue these inodes in the IO queue yourself?

Yes, I guess so. I haven't experimented with this plan yet, so I'm not
sure though.  If we provided a callback along the lines of
->writeback_sb_inodes(), it would work.

And a better design would be to remove the duplication of dirty inode
tracking between the writeback task and the FS itself. (However, this is
quite optional.)

Thanks.
-- 
OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
