Message-ID: <87y0qerbld.fsf@toke.dk>
Date: Tue, 16 Sep 2025 11:27:42 +0200
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Mina Almasry <almasrymina@...gle.com>, Helge Deller <deller@....de>
Cc: Helge Deller <deller@...nel.org>, David Hildenbrand <david@...hat.com>,
 Jesper Dangaard Brouer <hawk@...nel.org>, Ilias Apalodimas
 <ilias.apalodimas@...aro.org>, "David S. Miller" <davem@...emloft.net>,
 Linux Memory Management List <linux-mm@...ck.org>, netdev@...r.kernel.org,
 Linux parisc List <linux-parisc@...r.kernel.org>, Andrew Morton
 <akpm@...ux-foundation.org>
Subject: Re: [PATCH][RESEND][RFC] Fix 32-bit boot failure due to inaccurate
 page_pool_page_is_pp()

Mina Almasry <almasrymina@...gle.com> writes:

> On Mon, Sep 15, 2025 at 6:08 AM Helge Deller <deller@....de> wrote:
>>
>> On 9/15/25 13:44, Toke Høiland-Jørgensen wrote:
>> > Helge Deller <deller@...nel.org> writes:
>> >
>> >> Commit ee62ce7a1d90 ("page_pool: Track DMA-mapped pages and unmap them when
>> >> destroying the pool") changed PP_MAGIC_MASK from 0xFFFFFFFC to 0xc000007c on
>> >> 32-bit platforms.
>> >>
>> >> The function page_pool_page_is_pp() uses PP_MAGIC_MASK to identify page pool
>> >> pages, but the remaining bits are not sufficient to unambiguously identify
>> >> such pages any longer.
>> >
>> > Why not? What values end up in pp_magic that are mistaken for the
>> > pp_signature?
>>
>> As I wrote, PP_MAGIC_MASK changed from 0xFFFFFFFC to 0xc000007c.
>> And we have PP_SIGNATURE == 0x40 (since POISON_POINTER_DELTA is zero on 32-bit platforms).
>> That means that before the change, page_pool_page_is_pp() could clearly identify such pages,
>> since (value & 0xFFFFFFFC) == 0x40 holds only for the value 0x40 itself.
>> So basically only the value 0x40 indicated a PP page.
>>
>> Now, with the new mask, a whole bunch of pointer values suddenly qualify as being a pp page;
>> here are just a few examples:
>> 0x01111040
>> 0x082330C0
>> 0x03264040
>> 0x0ad686c0 ....
>>
>> For me, it crashes immediately at boot-up when memblock pages are handed
>> over to become normal pages.
>>
>
> I tried to take a look to double-check here, and AFAICT Helge is correct.
>
> Before the breaking patch, with PP_MAGIC_MASK == 0xFFFFFFFC, 0x40 was
> basically the only value that could be mistaken for a valid pp_magic.
> AFAICT each bit we zero in PP_MAGIC_MASK (aside from the 3 least
> significant bits) doubles the number of pointers that can be mistaken
> for pp_magic. So with 0xFFFFFFFC only one value (0x40) can be
> mistaken for a valid pp_magic, while with 0xc000007c AFAICT 2^22 values can
> be mistaken for pp_magic?
>
> I don't think there are any bits we can take away from
> PP_MAGIC_MASK? As each bit doubles the probability :(
>
> I would usually say we could check the 3 least significant bits to tell
> whether pp_magic is a pointer or not, but pp_magic is unioned with
> page->lru, I believe, which will use those bits.
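
To put a number on that doubling argument: every bit the mask ignores
doubles the set of values that alias PP_SIGNATURE. A quick illustrative
userspace computation (my sketch, not kernel code; the mask value is the
one quoted above):

#include <stdio.h>

int main(void)
{
	unsigned long mask = 0xc000007cUL; /* 32-bit PP_MAGIC_MASK after the change */
	int free_bits = 32 - __builtin_popcountl(mask);

	/* 25 ignored bits -> 2^25 aliasing values; two of them are the
	 * LSBs that 4-byte alignment already forces to zero, leaving
	 * 2^23 aliases that are valid aligned pointer values. */
	printf("%d free bits, %lu aliasing values\n",
	       free_bits, 1UL << free_bits);
	return 0;
}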

So if the pointers stored in the same field can be any arbitrary value,
you are quite right, there is no safe value. The critical assumption in
the bit stuffing scheme is that the pointers stored in the field will
always be above PAGE_OFFSET, and that PAGE_OFFSET has one (or both) of
the two top-most bits set (that is what the VMSPLIT reference in the
comment above the PP_DMA_INDEX_SHIFT definition is alluding to).
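
Concretely, here is a minimal userspace sketch (mine, not the kernel
code; the constants are the ones quoted above) of how that assumption is
supposed to work: a pointer at or above a VMSPLIT-style PAGE_OFFSET such
as 0xc0000000 keeps a top bit through the mask and can never produce the
signature, whereas the low values from Helge's report collide with it:

#include <stdio.h>

#define PP_SIGNATURE	0x40UL		/* POISON_POINTER_DELTA == 0 on 32-bit */
#define PP_MAGIC_MASK	0xc000007cUL	/* 32-bit mask after ee62ce7a1d90 */

static int looks_like_pp(unsigned long val)
{
	return (val & PP_MAGIC_MASK) == PP_SIGNATURE;
}

int main(void)
{
	/* Above PAGE_OFFSET == 0xc0000000: a top bit survives the mask */
	printf("%d\n", looks_like_pp(0xc1234040UL));	/* 0 */

	/* Helge's examples sit below any such PAGE_OFFSET and collide */
	printf("%d\n", looks_like_pp(0x01111040UL));	/* 1: false positive */
	printf("%d\n", looks_like_pp(0x0ad686c0UL));	/* 1: false positive */
	return 0;
}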

The crash Helge reported obviously indicates that this assumption
doesn't hold. What I'd like to understand is whether this is because I
have completely misunderstood how things work, or whether it is only on
*some* 32-bit systems that this assumption on the range of kernel
pointers doesn't hold?
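
If it is the latter, the dependency could in principle be made explicit
at build time. A hypothetical sketch (not part of the patch below;
EXAMPLE_PAGE_OFFSET is a stand-in for the arch's real constant, and the
low pointer values in Helge's report suggest parisc would fail it):

/* Compile-time check of the bit-stuffing assumption */
#define EXAMPLE_PAGE_OFFSET	0xc0000000UL	/* e.g. x86-32 VMSPLIT_3G */

_Static_assert((EXAMPLE_PAGE_OFFSET & 0xc0000000UL) != 0,
	       "pp_magic DMA index needs PAGE_OFFSET with a top bit set");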

> AFAICT, the only proper resolution I see is a revert of the breaking patch,
> plus a reland once we can make pp a page flag and deprecate using
> pp_magic. Sorry about that. Thoughts, Toke? Anything better you can
> think of here?

We can just conditionally disable the tracking if we don't have enough
bits? Something like the below (which could maybe be narrowed down
further depending on the answer to my question above).

-Toke


diff --git i/include/linux/mm.h w/include/linux/mm.h
index 1ae97a0b8ec7..3e3b090104d9 100644
--- i/include/linux/mm.h
+++ w/include/linux/mm.h
@@ -4175,8 +4175,8 @@ int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
  */
 #define PP_DMA_INDEX_BITS MIN(32, __ffs(POISON_POINTER_DELTA) - PP_DMA_INDEX_SHIFT)
 #else
-/* Always leave out the topmost two; see above. */
-#define PP_DMA_INDEX_BITS MIN(32, BITS_PER_LONG - PP_DMA_INDEX_SHIFT - 2)
+/* Can't store the DMA index if we don't have a poison offset */
+#define PP_DMA_INDEX_BITS 0
 #endif
 
 #define PP_DMA_INDEX_MASK GENMASK(PP_DMA_INDEX_BITS + PP_DMA_INDEX_SHIFT - 1, \
diff --git i/net/core/netmem_priv.h w/net/core/netmem_priv.h
index cd95394399b4..afc5a56bba03 100644
--- i/net/core/netmem_priv.h
+++ w/net/core/netmem_priv.h
@@ -38,6 +38,7 @@ static inline void netmem_set_dma_addr(netmem_ref netmem,
 
 static inline unsigned long netmem_get_dma_index(netmem_ref netmem)
 {
+#if PP_DMA_INDEX_BITS > 0
 	unsigned long magic;
 
 	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
@@ -46,11 +47,16 @@ static inline unsigned long netmem_get_dma_index(netmem_ref netmem)
 	magic = __netmem_clear_lsb(netmem)->pp_magic;
 
 	return (magic & PP_DMA_INDEX_MASK) >> PP_DMA_INDEX_SHIFT;
+#else
+	/* Index tracking compiled out; 0 means "no index" to callers */
+	return 0;
+#endif
 }
 
 static inline void netmem_set_dma_index(netmem_ref netmem,
 					unsigned long id)
 {
+#if PP_DMA_INDEX_BITS > 0
 	unsigned long magic;
 
 	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
@@ -58,5 +64,6 @@ static inline void netmem_set_dma_index(netmem_ref netmem,
 
 	magic = netmem_get_pp_magic(netmem) | (id << PP_DMA_INDEX_SHIFT);
 	__netmem_clear_lsb(netmem)->pp_magic = magic;
+#endif
 }
 #endif
diff --git i/net/core/page_pool.c w/net/core/page_pool.c
index ba70569bd4b0..427fdf92b82c 100644
--- i/net/core/page_pool.c
+++ w/net/core/page_pool.c
@@ -495,6 +495,7 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t g
 		goto unmap_failed;
 	}
 
+#if PP_DMA_INDEX_BITS > 0
 	if (in_softirq())
 		err = xa_alloc(&pool->dma_mapped, &id, netmem_to_page(netmem),
 			       PP_DMA_INDEX_LIMIT, gfp);
@@ -507,6 +508,7 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t g
 	}
 
 	netmem_set_dma_index(netmem, id);
+#endif
 	page_pool_dma_sync_for_device(pool, netmem, pool->p.max_len);
 
 	return true;
@@ -688,6 +690,7 @@ static __always_inline void __page_pool_release_netmem_dma(struct page_pool *poo
 		 */
 		return;
 
+#if PP_DMA_INDEX_BITS > 0
 	id = netmem_get_dma_index(netmem);
 	if (!id)
 		return;
@@ -698,7 +701,7 @@ static __always_inline void __page_pool_release_netmem_dma(struct page_pool *poo
 		old = xa_cmpxchg_bh(&pool->dma_mapped, id, page, NULL, 0);
 	if (old != page)
 		return;
-
+#endif
 	dma = page_pool_get_dma_addr_netmem(netmem);
 
 	/* When page is unmapped, it cannot be returned to our pool */

