Message-ID: <22d79ad1-0064-42d2-9f69-883ec2872c11@lucifer.local>
Date: Mon, 27 Jan 2025 18:53:23 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Li Zhijian <lizhijian@...itsu.com>, Sumit Garg <sumit.garg@...aro.org>
Subject: Re: [PATCH v2] intel_th: avoid using deprecated page->mapping, index
fields
+cc some commit signers due to total lack of response here.

Hi Alexander/anybody at Intel/elsewhere who can look at this.

These fields are deprecated and will soon be removed unconditionally; it
would be good to confirm this works on actual hardware before the change
goes in.

So this patch is probably more urgent than you think.

If there is a problem with maintainership in this area, MAINTAINERS should
be updated accordingly so I know who to contact.
Thanks.
On Mon, Jan 06, 2025 at 11:06:56AM +0000, Lorenzo Stoakes wrote:
> Hi Alexander,
>
> Happy New Year! I realise it's been the holidays recently, but I'm wondering
> if you'd had a chance to look at this?
>
> Thanks, Lorenzo
>
> On Tue, Dec 03, 2024 at 08:00:01AM +0000, Lorenzo Stoakes wrote:
> > The struct page->mapping, index fields are deprecated and soon to be only
> > available as part of a folio.
> >
> > It is likely that the intel_th code which sets page->mapping, index was
> > implemented out of concern that some aspect of the page fault logic might
> > encounter unexpected problems should it not do so.
> >
> > However, the appropriate interface for inserting kernel-allocated memory is
> > vm_insert_page() within a VM_MIXEDMAP mapping. By using the helper function
> > vmf_insert_mixed() we can do this with minimal churn in the existing fault
> > handler.
> >
> > By doing so, we bypass the remainder of the faulting logic. The pages are
> > still pinned so there is no possibility of anything unexpected being done
> > with the pages once established.
> >
> > It would also be reasonable to pre-map everything rather than mapping pages
> > on fault; however, to minimise churn we retain the existing fault handler.
> >
> > We also eliminate all code which clears page->mapping on teardown as this
> > has now become unnecessary.
> >
> > The MSU code relies on faulting to function correctly, so is by definition
> > dependent on CONFIG_MMU. We avoid spurious reports about compilation
> > failure for unsupported platforms by making this requirement explicit in
> > Kconfig as part of this change too.
> >
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> > ---
> >
> > v2:
> > * Make the MSU driver dependent on CONFIG_MMU to avoid spurious compilation
> > failure reports.
> >
> > v1:
> > https://lore.kernel.org/all/20241202122127.51313-1-lorenzo.stoakes@oracle.com/
> >
> > drivers/hwtracing/intel_th/Kconfig | 1 +
> > drivers/hwtracing/intel_th/msu.c | 31 +++++++-----------------------
> > 2 files changed, 8 insertions(+), 24 deletions(-)
> >
> > diff --git a/drivers/hwtracing/intel_th/Kconfig b/drivers/hwtracing/intel_th/Kconfig
> > index 4b6359326ede..4f7d2b6d79e2 100644
> > --- a/drivers/hwtracing/intel_th/Kconfig
> > +++ b/drivers/hwtracing/intel_th/Kconfig
> > @@ -60,6 +60,7 @@ config INTEL_TH_STH
> >
> > config INTEL_TH_MSU
> > tristate "Intel(R) Trace Hub Memory Storage Unit"
> > + depends on MMU
> > help
> > Memory Storage Unit (MSU) trace output device enables
> > storing STP traces to system memory. It supports single
> > diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
> > index 66123d684ac9..93b65a9731d7 100644
> > --- a/drivers/hwtracing/intel_th/msu.c
> > +++ b/drivers/hwtracing/intel_th/msu.c
> > @@ -19,6 +19,7 @@
> > #include <linux/io.h>
> > #include <linux/workqueue.h>
> > #include <linux/dma-mapping.h>
> > +#include <linux/pfn_t.h>
> >
> > #ifdef CONFIG_X86
> > #include <asm/set_memory.h>
> > @@ -967,7 +968,6 @@ static void msc_buffer_contig_free(struct msc *msc)
> > for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) {
> > struct page *page = virt_to_page(msc->base + off);
> >
> > - page->mapping = NULL;
> > __free_page(page);
> > }
> >
> > @@ -1149,9 +1149,6 @@ static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win)
> > int i;
> >
> > for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) {
> > - struct page *page = msc_sg_page(sg);
> > -
> > - page->mapping = NULL;
> > dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
> > sg_virt(sg), sg_dma_address(sg));
> > }
> > @@ -1592,22 +1589,10 @@ static void msc_mmap_close(struct vm_area_struct *vma)
> > {
> > struct msc_iter *iter = vma->vm_file->private_data;
> > struct msc *msc = iter->msc;
> > - unsigned long pg;
> >
> > if (!atomic_dec_and_mutex_lock(&msc->mmap_count, &msc->buf_mutex))
> > return;
> >
> > - /* drop page _refcounts */
> > - for (pg = 0; pg < msc->nr_pages; pg++) {
> > - struct page *page = msc_buffer_get_page(msc, pg);
> > -
> > - if (WARN_ON_ONCE(!page))
> > - continue;
> > -
> > - if (page->mapping)
> > - page->mapping = NULL;
> > - }
> > -
> > /* last mapping -- drop user_count */
> > atomic_dec(&msc->user_count);
> > mutex_unlock(&msc->buf_mutex);
> > @@ -1617,16 +1602,14 @@ static vm_fault_t msc_mmap_fault(struct vm_fault *vmf)
> > {
> > struct msc_iter *iter = vmf->vma->vm_file->private_data;
> > struct msc *msc = iter->msc;
> > + struct page *page;
> >
> > - vmf->page = msc_buffer_get_page(msc, vmf->pgoff);
> > - if (!vmf->page)
> > + page = msc_buffer_get_page(msc, vmf->pgoff);
> > + if (!page)
> > return VM_FAULT_SIGBUS;
> >
> > - get_page(vmf->page);
> > - vmf->page->mapping = vmf->vma->vm_file->f_mapping;
> > - vmf->page->index = vmf->pgoff;
> > -
> > - return 0;
> > + get_page(page);
> > + return vmf_insert_mixed(vmf->vma, vmf->address, page_to_pfn_t(page));
> > }
> >
> > static const struct vm_operations_struct msc_mmap_ops = {
> > @@ -1667,7 +1650,7 @@ static int intel_th_msc_mmap(struct file *file, struct vm_area_struct *vma)
> > atomic_dec(&msc->user_count);
> >
> > vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> > - vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY);
> > + vm_flags_set(vma, VM_DONTEXPAND | VM_DONTCOPY | VM_MIXEDMAP);
> > vma->vm_ops = &msc_mmap_ops;
> > return ret;
> > }
> > --
> > 2.47.1