Message-Id: <20080923164623.ce82c1c2.akpm@linux-foundation.org>
Date: Tue, 23 Sep 2008 16:46:23 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mikulas Patocka <mpatocka@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...r.kernel.org,
agk@...hat.com, mbroz@...hat.com, chris@...chsys.com
Subject: Re: [PATCH] Memory management livelock
On Tue, 23 Sep 2008 19:11:51 -0400 (EDT)
Mikulas Patocka <mpatocka@...hat.com> wrote:
>
>
> > > wait_on_page_writeback_range is another example where the livelock
> > > happened, there is no protection at all against starvation.
> >
> > um, OK. So someone else is initiating IO for this inode and this
> > thread *never* gets to initiate any writeback. That's a bit of a
> > surprise.
> >
> > How do we fix that? Maybe decrement nr_to_write for these pages as
> > well?
>
> And what do you want to do with wait_on_page_writeback_range?
Don't know. I was asking you.
> When I
> solved that livelock in write_cache_pages(), I got another livelock in
> wait_on_page_writeback_range.
>
> > > BTW. that .nr_to_write = mapping->nrpages * 2 looks like a dangerous thing
> > > to me.
> > >
> > > Imagine this case: You have two pages with indices 4 and 5 dirty in a
> > > file. You call fsync(). It sets nr_to_write to 4.
> > >
> > > Meanwhile, another process makes pages 0, 1, 2, 3 dirty.
> > >
> > > The fsync() process goes to write_cache_pages, writes the first 4 dirty
> > > pages and exits because it goes over the limit.
> > >
> > > result --- you violate fsync() semantics, pages that were dirty before
> > > call to fsync() are not written when fsync() exits.
> >
> > yup, that's pretty much unfixable, really, unless new locks are added
> > which block threads which are writing to unrelated sections of the
> > file, and that could hurt some workloads quite a lot, I expect.
>
> It is fixable with the patch I sent --- it doesn't take any locks unless
> the starvation happens. Then, you don't have to use .nr_to_write for
> fsync anymore.
I agree that the patch is low-impact and relatively straightforward.
The main problem is making the address_space larger - there can be (and
often are) millions and millions of these things in memory. Making it
larger is a big deal. We should work hard to seek an alternative and
afaict that isn't happening here.
We already have existing code and design which attempts to avoid
livelock without adding stuff to the address_space. Can it be modified
so as to patch up this quite obscure and rarely-occurring problem?
> Another solution could be to record in the page structure the jiffies
> value at which the page entered the dirty state and the writeback state.
> The start-writeback/wait-on-writeback functions could then trivially
> ignore pages that were dirtied or put under writeback while the function
> was in progress.
>
> > Hopefully high performance applications are instantiating the file
> > up-front and are using sync_file_range() to prevent these sorts of
> > things from happening. But they probably aren't.
>
> --- for databases it is quite possible that one thread is writing
> already-journaled data (so it doesn't care when the data are really
> written) and another thread is calling fsync() on the same inode
> simultaneously --- so fsync() could mistakenly spend its nr_to_write
> budget on the data generated by the first thread and skip the data
> generated by the second thread, which is what it should really write.
>
> Mikulas