Message-ID: <3b10057c-e117-89fa-1bd4-23fb5a4efb5f@redhat.com>
Date: Mon, 8 Feb 2021 19:18:56 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, jgg@...pe.ca,
linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
dan.j.williams@...el.com
Subject: Re: [PATCH 1/2] mm: provide a sane PTE walking API for modules
On 08/02/21 18:39, Christoph Hellwig wrote:
>> +int follow_pte(struct mm_struct *mm, unsigned long address,
>> +               pte_t **ptepp, spinlock_t **ptlp)
>> +{
>> +        return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
>> +}
>> +EXPORT_SYMBOL_GPL(follow_pte);
>
> I still don't think this is good as a general API. Please document this
> as KVM only for now, and hopefully next merge window I'll finish an
> export variant restricting us to specific modules.
Fair enough. I would expect that pretty much everyone using follow_pfn
will at least want to switch to this one (as it's less bad and not
impossible to use correctly), but I'll squash this in:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 90b527260edf..24b292fce8e5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1659,8 +1659,8 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
 int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-                          struct mmu_notifier_range *range, pte_t **ptepp, pmd_t **pmdpp,
-                          spinlock_t **ptlp);
+                          struct mmu_notifier_range *range, pte_t **ptepp,
+                          pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
                pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 3632f7416248..c8679b15c004 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4792,6 +4792,9 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
  * Only IO mappings and raw PFN mappings are allowed. The mmap semaphore
  * should be taken for read.
  *
+ * KVM uses this function. While it is arguably less bad than
+ * ``follow_pfn``, it is not a good general-purpose API.
+ *
  * Return: zero on success, -ve otherwise.
  */
 int follow_pte(struct mm_struct *mm, unsigned long address,
@@ -4809,6 +4812,9 @@ EXPORT_SYMBOL_GPL(follow_pte);
  *
  * Only IO mappings and raw PFN mappings are allowed.
  *
+ * This function does not allow the caller to read the permissions
+ * of the PTE. Do not use it.
+ *
  * Return: zero and the pfn at @pfn on success, -ve otherwise.
  */
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
(apologies in advance if Thunderbird destroys the patch).
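For reference (an illustrative sketch only, not part of the patch), this is
roughly what a follow_pfn() caller looks like after switching to follow_pte();
the helper name and the "writable" output are invented for the example:

#include <linux/mm.h>

/*
 * Illustrative only: translate a userspace address to a PFN via
 * follow_pte().  The PTE pointer is only valid until pte_unmap_unlock()
 * drops the page table lock, so the PFN and the permission bits are
 * read while the lock is held.
 */
static int example_addr_to_pfn(struct mm_struct *mm, unsigned long addr,
                               unsigned long *pfn, bool *writable)
{
        pte_t *ptep;
        spinlock_t *ptl;
        int r;

        mmap_read_lock(mm);                     /* follow_pte() wants mmap_lock held for read */
        r = follow_pte(mm, addr, &ptep, &ptl);
        if (!r) {
                *pfn = pte_pfn(*ptep);          /* stable while ptl is held */
                *writable = pte_write(*ptep);   /* permissions, which follow_pfn() cannot report */
                pte_unmap_unlock(ptep, ptl);
        }
        mmap_read_unlock(mm);
        return r;
}

The point is that the caller gets the PTE and its lock back, so it can look at
the permission bits and knows the PFN cannot change under it until it drops ptl.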
Paolo