Message-ID: <224908c1-5d0f-8e01-baa9-94ec2374971f@nvidia.com>
Date:   Mon, 21 Sep 2020 16:53:38 -0700
From:   John Hubbard <jhubbard@...dia.com>
To:     Peter Xu <peterx@...hat.com>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>
CC:     Jason Gunthorpe <jgg@...pe.ca>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>, Michal Hocko <mhocko@...e.com>,
        Kirill Tkhai <ktkhai@...tuozzo.com>,
        Kirill Shutemov <kirill@...temov.name>,
        Hugh Dickins <hughd@...gle.com>,
        Christoph Hellwig <hch@....de>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Oleg Nesterov <oleg@...hat.com>,
        Leon Romanovsky <leonro@...dia.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        "Jann Horn" <jannh@...gle.com>
Subject: Re: [PATCH 1/5] mm: Introduce mm_struct.has_pinned

On 9/21/20 2:17 PM, Peter Xu wrote:
> (Commit message collected from Jason Gunthorpe)
> 
> Reduce the chance of false positives from page_maybe_dma_pinned() by keeping

Not yet, it doesn't. :)  More:

> track if the mm_struct has ever been used with pin_user_pages(). mm_structs
> that have never been passed to pin_user_pages() cannot have a positive
> page_maybe_dma_pinned() by definition. This allows cases that might drive up
> the page ref_count to avoid any penalty from handling dma_pinned pages.
> 
> Due to complexities with unpinning, this trivial version is a permanent sticky
> bit; future work will be needed to make this a counter.

How about this instead:

Subsequent patches intend to reduce the chance of false positives from
page_maybe_dma_pinned(), by also considering whether or not a page has
ever been part of an mm struct that has ever had pin_user_pages*()
applied to any of its pages.

In order to allow that, provide a boolean value (even though it's not
implemented exactly as a boolean type) within the mm struct that is
simply set once and never cleared. This will suffice for an early, rough
implementation that fixes a few problems.

Future work is planned, to provide a more sophisticated solution, likely
involving a counter, and *not* involving something that is set and never
cleared.
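
To make the intended semantics concrete, here is a rough sketch of how a
caller might consult the flag (the helper below is hypothetical and not
part of this patch; only the has_pinned field and page_maybe_dma_pinned()
come from the series):

	/*
	 * Hypothetical fast path: if this mm has never pinned anything,
	 * page_maybe_dma_pinned() cannot be a true positive for any of
	 * its pages, so callers could skip the pinned-page handling.
	 */
	static bool page_needs_pin_handling(struct mm_struct *mm,
					    struct page *page)
	{
		if (!READ_ONCE(mm->has_pinned))
			return false;
		return page_maybe_dma_pinned(page);
	}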

> 
> Suggested-by: Jason Gunthorpe <jgg@...pe.ca>
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
>   include/linux/mm_types.h | 10 ++++++++++
>   kernel/fork.c            |  1 +
>   mm/gup.c                 |  6 ++++++
>   3 files changed, 17 insertions(+)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 496c3ff97cce..6f291f8b74c6 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -441,6 +441,16 @@ struct mm_struct {
>   #endif
>   		int map_count;			/* number of VMAs */
>   
> +		/**
> +		 * @has_pinned: Whether this mm has pinned any pages.  This can
> +		 * be either replaced in the future by @pinned_vm when it
> +		 * becomes stable, or grow into a counter on its own. We're
> +		 * aggressive on this bit now - even if the pinned pages were
> +		 * unpinned later on, we'll still keep this bit set for the
> +		 * lifecycle of this mm just for simplicity.
> +		 */
> +		int has_pinned;

I think this would be elegant as an atomic_t, using atomic_set() and
atomic_read(), which seem even more self-documenting than what you have here.

But it's admittedly a cosmetic point, combined with my perennial fear that
I'm missing something when I look at a READ_ONCE()/WRITE_ONCE() pair. :)

It's completely OK to just ignore this comment, but I didn't want to miss
the opportunity to make it a tiny bit cleaner for the reader.
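
For concreteness, a rough sketch of the atomic_t version I have in mind
(the call sites are illustrative, not lifted from the patch):

	/* In struct mm_struct, replacing the plain int: */
	atomic_t has_pinned;	/* set once, never cleared (for now) */

	/* Writer side, e.g. on the pin_user_pages*() path: */
	if (!atomic_read(&mm->has_pinned))
		atomic_set(&mm->has_pinned, 1);

	/* Reader side: */
	bool maybe_pinned = atomic_read(&mm->has_pinned);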

thanks,
-- 
John Hubbard
NVIDIA
