Message-ID: <de213e3e-01c7-0fef-d5cf-6a69ec670c70@amazon.com>
Date: Thu, 4 Feb 2021 16:32:45 +0200
From: Gal Pressman <galpress@...zon.com>
To: Peter Xu <peterx@...hat.com>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
CC: Wei Zhang <wzam@...zon.com>, Matthew Wilcox <willy@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jason Gunthorpe <jgg@...pe.ca>, Christoph Hellwig <hch@....de>,
Andrea Arcangeli <aarcange@...hat.com>,
Jan Kara <jack@...e.cz>,
Kirill Shutemov <kirill@...temov.name>,
David Gibson <david@...son.dropbear.id.au>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
Jann Horn <jannh@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 0/4] mm/hugetlb: Early cow on fork, and a few cleanups
On 03/02/2021 23:08, Peter Xu wrote:
> As reported by Gal [1], we still miss the code to handle early cow for the
> hugetlb case, which is true. Again, it still feels odd to me to fork() after
> using a few huge pages, especially if they're privately mapped.. However I do
> agree with Gal and Jason that we should still have it, since that'll complete
> the early cow on fork effort at least, and it'll still fix issues where
> buffers are not well under control and it's not easy to apply MADV_DONTFORK.
>
> The first two patches (1-2) are some cleanups I noticed when reading the
> hugetlb reserve map code. I think they're good to have, but they're not
> necessary for fixing the fork issue.
>
> The last two patches (3-4) are the real fix.
>
> I tested this with a fork() after some vfio-pci assignment, so I'm pretty sure
> the page copy path can trigger (the page will be accounted right after the
> fork()), but I didn't do a data check since the card I assigned is some random
> nic. Gal, please feel free to try this if you have a better way to verify the
> series.
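
For context, here is a minimal userspace sketch of the scenario described
above: a privately mapped hugetlb buffer that a driver would pin for DMA
(e.g. via vfio or an RDMA memory registration, stubbed out as a comment
here), followed by a fork(), with MADV_DONTFORK shown as the traditional
workaround when the application actually controls the buffer. This is an
illustration only, assuming a 2M huge page size; it is not part of the
series.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BUF_SIZE (2UL * 1024 * 1024)	/* one 2M huge page (assumption) */

int main(void)
{
	/* Privately mapped hugetlb buffer, as in the case discussed above. */
	void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}

	memset(buf, 0x5a, BUF_SIZE);

	/*
	 * A real application would hand 'buf' to a driver here (vfio-pci,
	 * RDMA registration, ...) which takes a long-term pin on the page.
	 */

	/*
	 * Traditional workaround when the buffer is under the application's
	 * control: exclude it from the child so fork() never marks it COW.
	 * The series makes the kernel copy such pinned pages eagerly at
	 * fork() instead, covering buffers where this call is not feasible.
	 */
	if (madvise(buf, BUF_SIZE, MADV_DONTFORK))
		perror("madvise(MADV_DONTFORK)");

	pid_t pid = fork();
	if (pid == 0) {
		/* Child: 'buf' is not mapped here due to MADV_DONTFORK. */
		_exit(0);
	}
	waitpid(pid, NULL, 0);

	/* Parent keeps using the buffer; device DMA still hits this page. */
	munmap(buf, BUF_SIZE);
	return 0;
}
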
Thanks Peter. Once v2 is submitted I'll pull the patches and we'll run the tests
that discovered the issue to verify that it works.