Message-ID: <20181105083719.GA6953@quack2.suse.cz>
Date: Mon, 5 Nov 2018 09:37:19 +0100
From: Jan Kara <jack@...e.cz>
To: John Hubbard <jhubbard@...dia.com>
Cc: Jason Gunthorpe <jgg@...pe.ca>, Jan Kara <jack@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
john.hubbard@...il.com, Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...nel.org>,
Christopher Lameter <cl@...ux.com>,
Dan Williams <dan.j.williams@...el.com>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
linux-rdma <linux-rdma@...r.kernel.org>,
linux-fsdevel@...r.kernel.org, Al Viro <viro@...iv.linux.org.uk>,
Jerome Glisse <jglisse@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
Ralph Campbell <rcampbell@...dia.com>
Subject: Re: [PATCH v4 2/3] mm: introduce put_user_page*(), placeholder versions
On Sun 04-11-18 23:17:58, John Hubbard wrote:
> On 10/22/18 12:43 PM, Jason Gunthorpe wrote:
> > On Thu, Oct 11, 2018 at 06:23:24PM -0700, John Hubbard wrote:
> >> On 10/11/18 6:20 AM, Jason Gunthorpe wrote:
> >>> On Thu, Oct 11, 2018 at 10:49:29AM +0200, Jan Kara wrote:
> >>>
> >>>>> This is a real worry. If someone uses a mistaken put_page() then how
> >>>>> will that bug manifest at runtime? Under what set of circumstances
> >>>>> will the kernel trigger the bug?
> >>>>
> >>>> At runtime such a bug will manifest as a page that can never be evicted
> >>>> from memory. We could warn in put_page() if the page reference count drops
> >>>> below the bare minimum for the given user pin count, which would catch
> >>>> some issues, but it won't be 100% reliable. So at this point I'm leaning
> >>>> more towards making get_user_pages() return a different type than just
> >>>> struct page *, to make it much harder for the refcount to go wrong...
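To make the "different type" idea a bit more concrete, here is a very rough
sketch. All names below (user_page, gup_pin_pages, gup_unpin_page) are
invented for illustration; the posted series only adds put_user_page*() and
still deals in struct page *:

/*
 * Hypothetical sketch only: give gup a distinct return type, so that a
 * plain put_page() on a pinned page becomes a compile error.
 */
struct user_page {
        struct page *page;
};

static inline struct page *user_page_ptr(struct user_page upage)
{
        return upage.page;
}

/* Callers would get struct user_page back, not struct page *, ... */
long gup_pin_pages(unsigned long start, unsigned long nr_pages,
                   unsigned int gup_flags, struct user_page *upages);

/* ...and the only way to drop the pin would be the matching helper. */
void gup_unpin_page(struct user_page upage);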
> >>>
> >>> At least for the infiniband code being used as an example here, we take
> >>> the struct page from get_user_pages(), then stick it in an sgl, and at
> >>> put_page() time we get the page back out of the sgl via sg_page().
> >>>
> >>> So type safety will not help this case... I wonder how many other
> >>> users are similar? I think this is a pretty reasonable flow for DMA
> >>> with user pages.
> >>>
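For reference, that flow looks roughly like the sketch below. The helper
names (umem_pin_range / umem_unpin_range) are made up; get_user_pages_fast(),
sg_set_page(), sg_page() and put_page() are the existing kernel APIs, and the
scatterlist is assumed to be allocated and initialized by the caller:

/* Pin user pages and park them in a caller-provided scatterlist. */
static int umem_pin_range(unsigned long start, int npages,
                          struct scatterlist *sgl)
{
        struct page **pages;
        struct scatterlist *sg;
        int i, ret;

        pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        ret = get_user_pages_fast(start, npages, 1 /* write */, pages);
        if (ret > 0) {
                for_each_sg(sgl, sg, ret, i)
                        sg_set_page(sg, pages[i], PAGE_SIZE, 0);
        }

        kfree(pages);           /* the sgl now holds the page pointers */
        return ret;
}

/*
 * Tear-down: the struct page comes back out of the sgl via sg_page(), so
 * type safety on the gup return value alone cannot force the matching
 * release call here.
 */
static void umem_unpin_range(struct scatterlist *sgl, int nents)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i)
                put_page(sg_page(sg));  /* would become put_user_page() */
}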
> >>
> >> That is true. The infiniband code, fortunately, never mixes the two page
> >> types into the same pool (or sg list), so it's actually an easier example
> >> than some other subsystems. But, yes, type safety doesn't help there. I can
> >> take a moment to look around at the other areas, to quantify how much a type
> >> safety change might help.
> >
> > Are most (all?) of the places working with SGLs?
>
> I finally put together a spreadsheet to answer this sort of question.
> Some notes:
>
> a) There are around 100 call sites that either call get_user_pages*() directly
> or go through iov_iter_get_pages*() indirectly.
Quite a bit...
> b) There are only a few SGL users. Most are ad-hoc, instead: some loop that
> either can be collapsed nicely into the new put_user_pages*() APIs, or...
> cannot.
>
> c) The real problem is the roughly 20+ iov_iter_get_pages*() call sites. I need
> to change the iov_iter system a little bit, and also change the callers so
> that they don't pile all the gup-pinned pages into the same page** array
> that also contains other allocation types. This can be done; it just takes
> time, and that's the good news.
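To illustrate the kind of collapse mentioned in (b), a typical caller-side
conversion would look roughly like this. The before/after is illustrative
rather than lifted from any particular call site; the helper names follow
the placeholder patch under discussion:

/* Before: ad-hoc release loop, open-coded at the call site. */
for (i = 0; i < npages; i++) {
        if (dirty)
                set_page_dirty_lock(pages[i]);
        put_page(pages[i]);
}

/* After: collapsed into the new helpers from this series. */
if (dirty)
        put_user_pages_dirty_lock(pages, npages);
else
        put_user_pages(pages, npages);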
Yes, but looking into the iov_iter_get_pages() users, a lot of them end up
feeding the result into an SGL, an SKB (which is basically the same thing,
just for networking), or a BVEC (which is again a very similar thing, just
for the generic block layer). I'm not saying that we must have an _sgl()
interface, as untangling all those users might be just too complex, but there
is certainly some space for unification and common interfaces ;)
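Just to sketch what such an _sgl() interface might look like (the name and
signature below are invented here, not part of the series):

/*
 * Hypothetical helper: release gup pins straight from a scatterlist, so
 * that SGL-based users never open-code the sg_page() + release loop
 * themselves.  Purely illustrative, not part of the posted series.
 */
void put_user_pages_sgl(struct scatterlist *sgl, int nents, bool dirty)
{
        struct scatterlist *sg;
        int i;

        for_each_sg(sgl, sg, nents, i) {
                struct page *page = sg_page(sg);

                if (dirty)
                        set_page_dirty_lock(page);
                put_user_page(page);
        }
}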
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR