Message-ID: <aGzTEA6tx4rqubbF@kernel.org>
Date: Tue, 8 Jul 2025 11:13:04 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andy Lutomirski <luto@...nel.org>, Borislav Petkov <bp@...en8.de>,
	Daniel Gomez <da.gomez@...sung.com>,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Ingo Molnar <mingo@...hat.com>,
	Luis Chamberlain <mcgrof@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Masami Hiramatsu <mhiramat@...nel.org>,
	"H. Peter Anvin" <hpa@...or.com>, Petr Pavlu <petr.pavlu@...e.com>,
	Sami Tolvanen <samitolvanen@...gle.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, linux-modules@...r.kernel.org,
	linux-trace-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH 3/8] execmem: rework execmem_cache_free()

On Tue, Jul 08, 2025 at 09:26:49AM +0200, Peter Zijlstra wrote:
> On Mon, Jul 07, 2025 at 06:12:26PM +0300, Mike Rapoport wrote:
> > On Mon, Jul 07, 2025 at 11:06:25AM -0400, Liam R. Howlett wrote:
> > > * Mike Rapoport <rppt@...nel.org> [250707 07:32]:
> > > > On Mon, Jul 07, 2025 at 01:11:02PM +0200, Peter Zijlstra wrote:
> > > > > 
> > > > > 	err = __execmem_cache_free(&mas, ptr, GFP_KERNEL | __GFP_NORETRY);
> > > > > 	if (err) {
> > > > > 		mas_store_gfp(&mas, pending_free_set(ptr), GFP_KERNEL);
> > > > > 		execmem_cache.pending_free_cnt++;
> > > > > 		schedule_delayed_work(&execmem_cache_free_work, FREE_DELAY);
> > > > > 		return true;
> > > > > 	}
> > > > > 
> > > > > 	schedule_work(&execmem_cache_clean_work);
> > > > > 	return true;
> > > > > }
> > > > > 
> > > > > And now I have to ask what happens if mas_store_gfp() returns an error?
> > > > 
> > > > AFAIU it won't. mas points to the exact slot we got the area from, and nothing
> > > > else can modify the tree because of the mutex, so that mas_store_gfp()
> > > > essentially updates the value at an existing entry.
> > > > 
> > > > I'll add a comment about it.
> > > > 
> > > > Added @Liam to make sure I'm not saying nonsense :)
> > > > 
> > > 
> > > Yes, if there is already a node with a value for the same range, no
> > > allocations will happen; it'll just change the pointer for you.  This
> > > is a slot store operation.
> > > 
> > > But, if it's possible to have no entries (an empty tree, or a single
> > > value at 0), you will most likely allocate a node to store it, which is
> > > 256B.
> > > 
> > > I don't think this is a concern in this particular case though as you
> > > are searching for an entry and storing, so it needs to exist.  So
> > > really, the only scenario here is if you store 1 - ULONG_MAX (without
> > > having expanded a root node) or 0 - ULONG_MAX, and that seems invalid.
> > 
> > Thanks for clarification, Liam!
> > The tree cannot be empty at that point and if it has a single value, it
> > won't be at 0, I'm quite sure no architecture has execmem areas at 0.
> 
> Would it make sense to have something like GFP_NO_ALLOC to pass to
> functions like this where we know it won't actually allocate -- and
> which, when it does reach the allocator, generates a WARN and returns
> NULL?

We can add a WARN at the caller as well; that won't require a new gfp flag.
The question is how to recover if such a thing happens: I don't really see
what execmem can do here if mas_store_gfp() returns an error :/

-- 
Sincerely yours,
Mike.
