Message-ID: <20200917220900.GO8409@ziepe.ca>
Date: Thu, 17 Sep 2020 19:09:00 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: Peter Xu <peterx@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
John Hubbard <jhubbard@...dia.com>,
Leon Romanovsky <leonro@...dia.com>,
Linux-MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Maya B . Gokhale" <gokhale2@...l.gov>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Marty Mcfadden <mcfadden8@...l.gov>,
Kirill Shutemov <kirill@...temov.name>,
Oleg Nesterov <oleg@...hat.com>, Jann Horn <jannh@...gle.com>,
Jan Kara <jack@...e.cz>, Kirill Tkhai <ktkhai@...tuozzo.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Christoph Hellwig <hch@....de>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 1/4] mm: Trial do_wp_page() simplification
On Thu, Sep 17, 2020 at 05:40:59PM -0400, Peter Xu wrote:
> On Thu, Sep 17, 2020 at 01:35:56PM -0700, Linus Torvalds wrote:
> > For that to happen, we'd need to have the vma flag so that we wouldn't
> > have any worry about non-pinners, but as you suggested, I think even
> > just a mm-wide counter - or flag - to deal with the fast-gup case is
> > likely perfectly sufficient.
>
> Would mm_struct.pinned_vm suffice?
I think that could be a good long term goal.
IIRC, the last time we dug into the locked_vm vs pinned_vm mess it
didn't get fixed. There is a mix of both kinds, as you saw, and some
resistance to changing it that I don't clearly remember.
My advice for this -rc fix is to go with a single bit in the
mm_struct, set on any call to pin_user_pages*.
Then users who call pin_user_pages and then fork are the only ones
who would ever do extra COW on fork. I think that is OK for -rc; this
workload should be rare due to the various historical issues. Anyhow,
a slowdown regression is better than an "it is broken" regression.
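
To be concrete, something along these lines is what I am imagining.
This is only an untested sketch: the has_pinned field and the
copy_present_page() helper are invented for illustration here;
page_maybe_dma_pinned() is the only piece that exists today:

  /* sketch: new one-way flag in struct mm_struct, set once, never cleared */
  atomic_t has_pinned;

  /* in pin_user_pages*(), before doing the actual pinning */
  if (!atomic_read(&mm->has_pinned))
          atomic_set(&mm->has_pinned, 1);

  /*
   * in copy_present_pte() at fork time: if this mm has ever pinned
   * and the page looks DMA-pinned, copy the page now instead of
   * write-protecting it for COW
   */
  if (atomic_read(&src_mm->has_pinned) &&
      page_maybe_dma_pinned(page))
          return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
                                   addr, page); /* hypothetical helper */

The one-way bit is deliberately racy but safe: a false positive only
costs an extra page copy at fork.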
This can be improved into a counter later. Due to the pinned_vm
accounting, all call sites should have the mm_struct at unpin, but I
have a feeling it will take a lot of driver patches to sort it all
out.
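
i.e. the one-way bit eventually becomes something like the below,
assuming every unpin path can be taught which mm it is operating on
(names invented for the sketch):

  /* at pin time */
  atomic64_inc(&mm->pinned_pages);

  /*
   * at unpin time - this is the part that needs the driver patches,
   * since unpin_user_pages() today takes only the pages, not the mm
   */
  atomic64_dec(&mm->pinned_pages);

  /* the fork-time check then sees pins go away again */
  if (atomic64_read(&src_mm->pinned_pages))
          /* do the early copy as above */;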
Jason