Message-Id: <20181018184726.fb8da5c733da5e0c6a235101@linux-foundation.org>
Date: Thu, 18 Oct 2018 18:47:26 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Andrea Arcangeli <aarcange@...hat.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Davidlohr Bueso <dave@...olabs.net>,
Alexander Viro <viro@...iv.linux.org.uk>,
stable@...r.kernel.org
Subject: Re: [PATCH] hugetlbfs: dirty pages as they are added to pagecache
On Thu, 18 Oct 2018 20:46:21 -0400 Andrea Arcangeli <aarcange@...hat.com> wrote:
> On Thu, Oct 18, 2018 at 04:16:40PM -0700, Mike Kravetz wrote:
> > I was not sure about this, and expected someone could come up with
> > something better. It just seems there are filesystems like hugetlbfs
> > where it makes no sense to waste cycles traversing the filesystem. So,
> > let's not even try.
> >
> > Hoping someone can come up with a better method than the hard coding
> > I have done above.
>
> It's not strictly required after marking the pages dirty though; the
> real fix is the other one. Could we just drop the hardcoding and let
> drop_caches run as-is once the real fix is applied?
>
> The performance of drop_caches doesn't seem critical, especially with
> gigapages. tmpfs isn't skipped by drop_caches either, and the gain
> from skipping it would be bigger for tmpfs if THP is not enabled in
> the mount, so I'm not sure we should worry about hugetlbfs first.
I guess so. I can't immediately see a clean way of expressing this, so
perhaps it would need a new BDI_CAP_NO_BACKING_STORE. Such a thing
hardly seems worthwhile just for drop_caches.
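To illustrate what I mean (purely a sketch; neither the flag nor the
check below exists in the tree today):

	/* include/linux/backing-dev.h: hypothetical new capability */
	#define BDI_CAP_NO_BACKING_STORE	0x00000080

	/* fs/drop_caches.c */
	static void drop_pagecache_sb(struct super_block *sb, void *unused)
	{
		/*
		 * Filesystems whose pagecache *is* the backing store
		 * (hugetlbfs, ramfs) have nothing droppable, so don't
		 * bother walking their inodes at all.
		 */
		if (sb->s_bdi->capabilities & BDI_CAP_NO_BACKING_STORE)
			return;
		...
	}

hugetlbfs (and ramfs) would then set that capability on their bdi at
mount time.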
And drop_caches really shouldn't be there anyway. It's a standing
workaround for ongoing suckage in pagecache and metadata reclaim
behaviour :(
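For reference, the "real fix" discussed above amounts to marking each
huge page dirty as it is inserted into the pagecache, roughly along
these lines (a simplified sketch of the idea, not Mike's exact patch):

	/* mm/hugetlb.c */
	int huge_add_to_page_cache(struct page *page,
				   struct address_space *mapping, pgoff_t idx)
	{
		int err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);

		if (err)
			return err;
		ClearPagePrivate(page);

		/*
		 * hugetlbfs pages have no backing store: dirty them at
		 * insertion time so that generic reclaim/invalidation
		 * paths never treat them as clean, droppable pagecache.
		 */
		set_page_dirty(page);
		return 0;
	}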