Message-ID: <CAEsagEirOdtisf5DrnmnEsfdWpAt31qDm9rwD_i4LhmBgYoPVw@mail.gmail.com>
Date: Thu, 27 Jun 2013 00:22:18 -0700
From: Daniel Phillips <daniel.raymond.phillips@...il.com>
To: OGAWA Hirofumi <hirofumi@...l.parknet.co.jp>
Cc: Dave Chinner <david@...morbit.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"tux3@...3.org" <tux3@...3.org>, Al Viro <viro@...iv.linux.org.uk>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] Optimize wait_sb_inodes()
On Wed, Jun 26, 2013 at 10:18 PM, OGAWA Hirofumi
<hirofumi@...l.parknet.co.jp> wrote:
> ...the vfs can't know whether data is from before or after the sync
> point, and so whether it has to wait for it. An FS with behavior like
> data=journal already tracks this, and can reuse that tracking.
This clearly shows why the current interface is inefficient.
In the old days, core liked to poke its nose into the buffers of
every fs too, and those days are thankfully gone (thanks to
akpm, IIRC). Maybe the days of core sticking its nose into the
inode dirty state of every fs are soon to end as well.
Why does core even need to know about the inodes a fs is
using? Maybe for some filesystems it saves implementation
code, but for Tux3 it just wastes CPU and adds extra code.
Regards,
Daniel