Message-ID: <49D0D4D0.5000108@oss.ntt.co.jp>
Date:	Mon, 30 Mar 2009 23:18:56 +0900
From:	Fernando Luis Vázquez Cao 
	<fernando@....ntt.co.jp>
To:	Jens Axboe <jens.axboe@...cle.com>
CC:	Jeff Garzik <jeff@...zik.org>,
	Christoph Hellwig <hch@...radead.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Theodore Tso <tytso@....edu>, Ingo Molnar <mingo@...e.hu>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Arjan van de Ven <arjan@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Nick Piggin <npiggin@...e.de>, David Rees <drees76@...il.com>,
	Jesper Krogh <jesper@...gh.cc>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	chris.mason@...cle.com, david@...morbit.com, tj@...nel.org
Subject: Re: [PATCH 5/7] vfs: Add wbcflush sysfs knob to disable storage
 device writeback cache flushes

Jens Axboe wrote:
> On Mon, Mar 30 2009, Fernando Luis Vázquez Cao wrote:
>> Add a sysfs knob to disable storage device writeback cache flushes.
>>
>> Signed-off-by: Fernando Luis Vazquez Cao <fernando@....ntt.co.jp>
>> ---
>>
>> diff -urNp linux-2.6.29-orig/block/blk-barrier.c linux-2.6.29/block/blk-barrier.c
>> --- linux-2.6.29-orig/block/blk-barrier.c	2009-03-24 08:12:14.000000000 +0900
>> +++ linux-2.6.29/block/blk-barrier.c	2009-03-30 17:08:28.000000000 +0900
>> @@ -318,6 +318,9 @@ int blkdev_issue_flush(struct block_devi
>>  	if (!q)
>>  		return -ENXIO;
>>
>> +	if (blk_queue_nowbcflush(q))
>> +		return -EOPNOTSUPP;
>> +
>>  	bio = bio_alloc(GFP_KERNEL, 0);
>>  	if (!bio)
>>  		return -ENOMEM;
>> diff -urNp linux-2.6.29-orig/block/blk-core.c linux-2.6.29/block/blk-core.c
>> --- linux-2.6.29-orig/block/blk-core.c	2009-03-24 08:12:14.000000000 +0900
>> +++ linux-2.6.29/block/blk-core.c	2009-03-30 17:08:28.000000000 +0900
>> @@ -1452,7 +1452,8 @@ static inline void __generic_make_reques
>>  			goto end_io;
>>  		}
>>  		if (bio_barrier(bio) && bio_has_data(bio) &&
>> -		    (q->next_ordered == QUEUE_ORDERED_NONE)) {
>> +		    (blk_queue_nowbcflush(q) ||
>> +		     q->next_ordered == QUEUE_ORDERED_NONE)) {
>>  			err = -EOPNOTSUPP;
>>  			goto end_io;
>>  		}
> 
> This (and the above hunk) should be changed. -EOPNOTSUPP means the
> target does not support barriers, that is a different thing to flushes
> not being needed. A file system issuing a barrier and getting
> -EOPNOTSUPP back will disable barriers, since it now thinks that
> ordering cannot be guaranteed.

The reason I decided to use -EOPNOTSUPP was that I wanted to keep
barriers and device flushes from entering the block layer when they
are not needed. I feared that if we passed them down the block stack
(knowing in advance they would never actually be submitted to the
disk) we might end up slowing things down unnecessarily.

As you mentioned, filesystems such as ext3/4 will disable barriers
if they get -EOPNOTSUPP when issuing one. I was planning to add a
notifier mechanism so that we can notify filesystems that there has
been a change in the barrier settings. This might be over-engineering,
though, especially considering that "-o remount,barrier=1" will bring
barriers back. What I have in mind is roughly the sketch below.

> A more appropriate change would be to successfully complete a flush
> without actually sending it down to the device if blk_queue_nowbcflush()
> is true. Then blkdev_issue_flush() would just work as well. It also
> needs to take stacking into account, or stacked drivers will have to
> propagate the settings up the stack. If you allow simply the barrier to
> be passed down, you get that for free.

Aren't we risking slowing things down by passing those barriers all
the way down the stack? Or does the small optimization above still
make sense (especially taking the remount trick into account)?

Maybe I am worrying too much about the possible performance penalty.
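
If we do go that route, I guess the blkdev_issue_flush() part of this
patch would become something like the following instead (untested
sketch, against vanilla 2.6.29):

--- linux-2.6.29-orig/block/blk-barrier.c
+++ linux-2.6.29/block/blk-barrier.c
@@ -318,6 +318,10 @@ int blkdev_issue_flush(struct block_devi
 	if (!q)
 		return -ENXIO;
 
+	/* no write-back cache to flush: complete successfully right away */
+	if (blk_queue_nowbcflush(q))
+		return 0;
+
 	bio = bio_alloc(GFP_KERNEL, 0);
 	if (!bio)
 		return -ENOMEM;

The __generic_make_request() hunk would then be dropped entirely, so
barriers keep flowing down the stack and stacking works for free, as
you point out.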

>> +static struct queue_sysfs_entry queue_wbcflush_entry = {
>> +	.attr = {.name = "wbcflush", .mode = S_IRUGO | S_IWUSR },
>> +	.show = queue_wbcflush_show,
>> +	.store = queue_wbcflush_store,
>> +};
>> +
> 
> Naming is also pretty bad, perhaps something like "honor_cache_flush"
> would be better, or perhaps "cache_flush_needed". At least something
> that is more descriptive of what this setting actually controls;
> "wbcflush" does not do that.

You are right, wbcflush is a pretty ugly name. I will use
"honor_cache_flush" in the next iteration of the patches.


Thanks,

Fernando
