Message-ID: <2f790427-ea87-b41e-b386-820ccdb7dd38@nvidia.com>
Date: Thu, 28 Mar 2019 14:59:50 -0700
From: John Hubbard <jhubbard@...dia.com>
To: <jglisse@...hat.com>, <linux-mm@...ck.org>
CC: <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH v2 07/11] mm/hmm: add default fault flags to avoid the
need to pre-fill pfns arrays.
On 3/25/19 7:40 AM, jglisse@...hat.com wrote:
> From: Jérôme Glisse <jglisse@...hat.com>
>
> The HMM mirror API can be used in two fashions. The first one is where
> the HMM user coalesces multiple page faults into one request and sets
> flags per pfn for each of those faults. The second one is where the HMM
> user wants to pre-fault a range with specific flags. For the latter, it
> is a waste to have the user pre-fill the pfn array with a default flags
> value.
>
> This patch adds a default flags value, allowing the user to set flags
> for a whole range without having to pre-fill the pfn array.
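
To make sure I'm reading the intent right: a caller of that second
fashion would then look roughly like the sketch below (hypothetical
driver snippet, not from this patch; the mydrv_hmm_flags and
mydrv_hmm_values tables are made-up driver-side arrays):

    struct hmm_range range = {
            .vma            = vma,
            .start          = start,
            .end            = end,
            .pfns           = pfns,
            .flags          = mydrv_hmm_flags,
            .values         = mydrv_hmm_values,
            .pfn_shift      = PAGE_SHIFT,
            /* every page must end up valid and writable ... */
            .default_flags  = mydrv_hmm_flags[HMM_PFN_VALID] |
                              mydrv_hmm_flags[HMM_PFN_WRITE],
            /* ... and whatever was in pfns[] on entry is ignored: */
            .pfn_flags_mask = 0,
    };
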
>
> Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
> Reviewed-by: Ralph Campbell <rcampbell@...dia.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: John Hubbard <jhubbard@...dia.com>
> Cc: Dan Williams <dan.j.williams@...el.com>
> ---
> include/linux/hmm.h | 7 +++++++
> mm/hmm.c | 12 ++++++++++++
> 2 files changed, 19 insertions(+)
>
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 79671036cb5f..13bc2c72f791 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -165,6 +165,8 @@ enum hmm_pfn_value_e {
> * @pfns: array of pfns (big enough for the range)
> * @flags: pfn flags to match device driver page table
> * @values: pfn value for some special case (none, special, error, ...)
> + * @default_flags: default flags for the range (write, read, ...)
> + * @pfn_flags_mask: allows masking pfn flags so that only default_flags matter
> * @pfn_shift: pfn shift value (should be <= PAGE_SHIFT)
> * @valid: pfns array did not change since it has been filled by an HMM function
> */
> @@ -177,6 +179,8 @@ struct hmm_range {
> uint64_t *pfns;
> const uint64_t *flags;
> const uint64_t *values;
> + uint64_t default_flags;
> + uint64_t pfn_flags_mask;
> uint8_t pfn_shift;
> bool valid;
> };
> @@ -521,6 +525,9 @@ static inline int hmm_vma_fault(struct hmm_range *range, bool block)
> {
> long ret;
>
> + range->default_flags = 0;
> + range->pfn_flags_mask = -1UL;
Hi Jerome,

This is nice to have. Let's constrain it a little bit more, though: the
pfn_flags_mask definitely does not need to be a run-time value. And we
want some assurance that the mask is

a) large enough for the flags, and
b) small enough to avoid overrunning the pfns field.

Those are less certain with a run-time struct field, and more obviously
correct with something like, approximately:

    #define PFN_FLAGS_MASK 0xFFFF

or something.

In other words, this is more flexibility than we need; just a touch too
much, IMHO.
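
Something in this direction, as a sketch only (PFN_FLAGS_MASK and
HMM_PFN_MIN_SHIFT are invented names here; the "large enough" half would
still need a run-time check, since the flags tables are per driver):

    #define PFN_FLAGS_MASK  0xFFFFUL

    static inline void hmm_pfn_flags_mask_checks(void)
    {
            /* b) the mask must stay strictly below the pfn bits for
             * the smallest pfn_shift any driver may pass in: */
            BUILD_BUG_ON(PFN_FLAGS_MASK >= (1UL << HMM_PFN_MIN_SHIFT));
    }
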
> +
> ret = hmm_range_register(range, range->vma->vm_mm,
> range->start, range->end);
> if (ret)
> diff --git a/mm/hmm.c b/mm/hmm.c
> index fa9498eeb9b6..4fe88a196d17 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -415,6 +415,18 @@ static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
> if (!hmm_vma_walk->fault)
> return;
>
> + /*
> + * So we not only consider the individual per-page request, we also
> + * consider the default flags requested for the range. The API can
> + * be used in 2 fashions. The first one is where the HMM user
> + * coalesces multiple page faults into one request and sets flags
> + * per pfn for each of those faults. The second one is where the
> + * HMM user wants to pre-fault a range with specific flags. For
> + * the latter one it is a waste to have the user pre-fill the pfn
> + * arrays with a default flags value.
> + */
> + pfns = (pfns & range->pfn_flags_mask) | range->default_flags;
Need to verify that the mask isn't too large or too small.
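
One possible shape for that check, run once at hmm_range_register() time
rather than per pte (sketch only; the helper below is invented):

    static bool hmm_range_flags_mask_ok(const struct hmm_range *range)
    {
            uint64_t flags = 0;
            int i;

            for (i = 0; i < HMM_PFN_FLAG_MAX; i++)
                    flags |= range->flags[i];

            /* too small: a flag the walker needs would be masked off
             * and not put back by default_flags */
            if (flags & ~(range->pfn_flags_mask | range->default_flags))
                    return false;

            /* too large: default_flags is OR'ed in unconditionally, so
             * it must never set bits in the pfn part of the entry */
            if (range->default_flags >> range->pfn_shift)
                    return false;

            return true;
    }
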
> +
> /* We aren't asked to do anything ... */
> if (!(pfns & range->flags[HMM_PFN_VALID]))
> return;
>
thanks,
--
John Hubbard
NVIDIA