Message-ID: <20241129085252.GA15382@noisy.programming.kicks-ass.net>
Date: Fri, 29 Nov 2024 09:52:52 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Ingo Molnar <mingo@...hat.com>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	Namhyung Kim <namhyung@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>,
	Adrian Hunter <adrian.hunter@...el.com>,
	Kan Liang <kan.liang@...ux.intel.com>,
	linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH] perf: map pages in advance

On Thu, Nov 28, 2024 at 08:47:28PM +0000, Lorenzo Stoakes wrote:
> Peter - not sure whether it's easy for you to make a simple adjustment to this
> patch or if you want me to just send a v2, but I have to pop an #ifdef CONFIG_MMU
> into the code.
> 

> > +static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
> > +{
> > +	unsigned long nr_pages = vma_pages(vma);
> > +	int err = 0;
> > +	unsigned long pgoff;
> > +
> > +	for (pgoff = 0; pgoff < nr_pages; pgoff++) {
> > +		unsigned long va = vma->vm_start + PAGE_SIZE * pgoff;
> > +		struct page *page = perf_mmap_to_page(rb, pgoff);
> > +
> > +		if (page == NULL) {
> > +			err = -EINVAL;
> > +			break;
> > +		}
> > +
> > +		/* Map readonly, perf_mmap_pfn_mkwrite() called on write fault. */
> > +		err = remap_pfn_range(vma, va, page_to_pfn(page), PAGE_SIZE,
> > +				      vm_get_page_prot(vma->vm_flags & ~VM_SHARED));
> > +		if (err)
> > +			break;
> > +	}
> > +
> 
> Need a:
> 
> #ifdef CONFIG_MMU
> > +	/* Clear any partial mappings on error. */
> > +	if (err)
> > +		zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
> #endif
> 
> Here to work around the wonders of nommu :)

All good, I'll edit the thing.
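For reference, a sketch of what map_range() would look like with the suggested
guard folded in. The trailing "return err;" is assumed here, since the quoted
hunk above is cut off before the end of the function; the exact placement of
the #ifdef is whatever the final edit settles on.

static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
{
	unsigned long nr_pages = vma_pages(vma);
	int err = 0;
	unsigned long pgoff;

	for (pgoff = 0; pgoff < nr_pages; pgoff++) {
		unsigned long va = vma->vm_start + PAGE_SIZE * pgoff;
		struct page *page = perf_mmap_to_page(rb, pgoff);

		if (page == NULL) {
			err = -EINVAL;
			break;
		}

		/* Map readonly, perf_mmap_pfn_mkwrite() called on write fault. */
		err = remap_pfn_range(vma, va, page_to_pfn(page), PAGE_SIZE,
				      vm_get_page_prot(vma->vm_flags & ~VM_SHARED));
		if (err)
			break;
	}

#ifdef CONFIG_MMU
	/* Clear any partial mappings on error; zap_page_range_single() only exists with an MMU. */
	if (err)
		zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
#endif

	/* Assumed: not part of the quoted hunk. */
	return err;
}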
