Message-ID: <x49siwsewhx.fsf@segfault.boston.devel.redhat.com>
Date: Wed, 25 Sep 2013 11:44:58 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: majianpeng <majianpeng@...il.com>
Cc: axboe <axboe@...nel.dk>, viro <viro@...iv.linux.org.uk>,
LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH V2 0/2] Auto stop async-write on block device when device removed.
majianpeng <majianpeng@...il.com> writes:
>>The bigger question is whether we want to change this long-standing
>>behaviour of how our write-back cache works. I don't know that it's
>>really worth it, honestly. If you want to ensure data is on disk, you
>>open the file O_SYNC or you issue an fsync, and those calls will return
>>an error for a removed block device. So, I guess I'll ask the same
>>question again: why are you looking at this? Is there some application
>>you care about that does buffered I/O to the block device and never does
>>an fsync?
>>
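[Editorial note: for reference, a minimal userspace sketch of the pattern described above, not taken from the patch set. A buffered write to a block device can land in the page cache without error; it is the fsync (or an O_SYNC open) that reports the failure once the device is gone. /dev/sdX is a placeholder path.]

	/* Sketch only: buffered write to a block device followed by fsync.
	 * If the device has been removed, the error surfaces at fsync time
	 * (or at write time if the fd was opened with O_SYNC).
	 * /dev/sdX is a placeholder, not a real device from this thread.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		memset(buf, 0xab, sizeof(buf));

		int fd = open("/dev/sdX", O_WRONLY);	/* add O_SYNC to fail at write time */
		if (fd < 0) {
			perror("open");
			return 1;
		}

		if (write(fd, buf, sizeof(buf)) < 0)	/* may "succeed" into the page cache */
			perror("write");

		if (fsync(fd) < 0)			/* removed device is reported here, e.g. EIO */
			perror("fsync");

		close(fd);
		return 0;
	}
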
> Yes, in my company we use our own userspace filesystem on top of the
> block device. For performance we use buffered writes, not sync writes.
> For our workload, users are allowed to remove a disk whether or not it
> is in use. Right now we poll the state of the disk from
> /proc/partitions at a fixed interval.
>
> This patchset doesn't change how the write-back cache works. It only
> lets the VFS know the state of the lower device. I think that makes
> sense.
I'm still curious to know how you maintain a consistent file system
without the use of fsync, but that's an unrelated issue.

I looked at the rescan partition code path more closely, and it will
only really trigger if the partitions themselves aren't open.  So, I
don't think there is a problem with your approach.

I'll ack patch 1.  I still think patch 2 is not necessary.  Please
correct me if I'm wrong.
Cheers,
Jeff