Message-ID: <d2008b88-962f-b7b4-8351-9e1df95ea2cc@nvidia.com>
Date: Thu, 28 Mar 2019 16:28:47 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Jerome Glisse <jglisse@...hat.com>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [PATCH v2 07/11] mm/hmm: add default fault flags to avoid the
need to pre-fill pfns arrays.
On 3/28/19 4:21 PM, Jerome Glisse wrote:
> On Thu, Mar 28, 2019 at 03:40:42PM -0700, John Hubbard wrote:
>> On 3/28/19 3:31 PM, Jerome Glisse wrote:
>>> On Thu, Mar 28, 2019 at 03:19:06PM -0700, John Hubbard wrote:
>>>> On 3/28/19 3:12 PM, Jerome Glisse wrote:
>>>>> On Thu, Mar 28, 2019 at 02:59:50PM -0700, John Hubbard wrote:
>>>>>> On 3/25/19 7:40 AM, jglisse@...hat.com wrote:
>>>>>>> From: Jérôme Glisse <jglisse@...hat.com>
[...]
>> Hi Jerome,
>>
>> I think you're talking about flags, but I'm talking about the mask. The
>> above link doesn't appear to use the pfn_flags_mask, and the default_flags
>> that it uses are still in the same lower 3 bits:
>>
>> +static uint64_t odp_hmm_flags[HMM_PFN_FLAG_MAX] = {
>> + ODP_READ_BIT, /* HMM_PFN_VALID */
>> + ODP_WRITE_BIT, /* HMM_PFN_WRITE */
>> + ODP_DEVICE_BIT, /* HMM_PFN_DEVICE_PRIVATE */
>> +};
>>
>> So I still don't see why we need the flexibility of a full 0xFFFFFFFFFFFFFFFF
>> mask that is *also* runtime changeable.
>
> So the pfn array uses a device-driver-specific format and we have
> no idea, nor do we need to know, where the valid, write, ... bits are
> in that format. Those bits can be anywhere, including the top bits
> like 63, 62, 61, ... we do not care. There are devices with the bits
> at the top, and for those you need a mask that allows you to mask
> those bits out or not, depending on what the user wants to do.
>
> The mask here is against an _unknown_ (from HMM's POV) format. So we
> cannot presume where the bits will be, and thus we cannot presume
> what a proper mask is.
>
> So that's why a full unsigned long mask is used here.
>
> Maybe an example will help. Let's say the device flags are:
>     VALID (1ULL << 63)
>     WRITE (1ULL << 62)
>
> Now let's say the device wants to fault a range with at least read
> permission. It sets:
>     range->default_flags = (1ULL << 63);
>     range->pfn_flags_mask = 0;
>
> This will fault in all pages in the range with at least read
> permission.
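>
> In driver code that might look something like the following minimal
> sketch (the MY_DEV_PFN_* names are made up here for illustration;
> they are not real defines):
>
>     /* Hypothetical device flag bits matching the example above. */
>     #define MY_DEV_PFN_VALID (1ULL << 63)
>     #define MY_DEV_PFN_WRITE (1ULL << 62)
>
>     /* Fault the whole range with at least read permission. */
>     range->default_flags = MY_DEV_PFN_VALID;
>     range->pfn_flags_mask = 0;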
>
> Now let's say it wants to do the same, except for one page in the
> range for which it wants write permission. Now the driver sets:
>     range->default_flags = (1ULL << 63);
>     range->pfn_flags_mask = (1ULL << 62);
>     range->pfns[index_of_write] = (1ULL << 62);
>
> With this, HMM will fault in all pages with at least read (ie valid),
> and for the address range->start + (index_of_write << PAGE_SHIFT) it
> will fault with write permission, ie if the CPU pte does not have
> write permission set then handle_mm_fault() will be called asking for
> write permission.
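>
> Reusing the made-up MY_DEV_PFN_* defines from the sketch above, that
> would be:
>
>     /* At least read everywhere, plus write on one specific page. */
>     range->default_flags = MY_DEV_PFN_VALID;
>     range->pfn_flags_mask = MY_DEV_PFN_WRITE;
>     range->pfns[index_of_write] = MY_DEV_PFN_WRITE;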
>
>
> Note that in the above, HMM will populate the pfns array with write
> permission for any entry that has write permission within the CPU
> pte, ie default_flags and pfn_flags_mask are only the minimum
> requirement, but HMM always returns all the flags that are set in
> the CPU pte.
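>
> For instance (again just a sketch with the made-up defines): with
> default_flags requesting only read, a page whose CPU pte is already
> writable comes back with both bits set:
>
>     /* Write is reported even though only VALID was requested. */
>     range->pfns[i] == (MY_DEV_PFN_VALID | MY_DEV_PFN_WRITE)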
>
>
> Now let's say you are an "old" driver like upstream nouveau; then you
> are setting each individual entry within range->pfns with the exact
> flags you want for each address, hence here what you want is:
>     range->default_flags = 0;
>     range->pfn_flags_mask = -1UL;
>
> So what we do is (for each entry):
>     (range->pfns[index] & range->pfn_flags_mask) | range->default_flags
> and we end up with the flags that were set by the driver for each of
> the individual range->pfns entries.
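>
> As a self-contained sketch of that per-entry computation (the struct
> and helper names are made up; this is not the actual upstream code):
>
>     #include <linux/types.h>
>
>     struct hmm_range_sketch {
>         u64 *pfns;           /* device-specific per-page entries */
>         u64 default_flags;   /* minimum flags applied to every page */
>         u64 pfn_flags_mask;  /* which per-entry bits to honor */
>     };
>
>     /* Combine the per-entry flags with the range-wide defaults. */
>     static u64 fault_flags_for(const struct hmm_range_sketch *range,
>                                unsigned long index)
>     {
>         return (range->pfns[index] & range->pfn_flags_mask) |
>                range->default_flags;
>     }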
>
>
> Does this help?
>
Yes, the key point for me was that this is an entirely device-driver-specific
format. OK. But then we have HMM setting it, so a comment to the effect that
this is device-specific might be nice; I'll leave it up to you whether that
is useful.
Either way, you can add:
Reviewed-by: John Hubbard <jhubbard@...dia.com>
thanks,
--
John Hubbard
NVIDIA