Message-ID: <319b09bc-56a2-207f-6180-3cc7d8cd43d1@arm.com>
Date:   Thu, 20 Jan 2022 12:22:35 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     "Russell King (Oracle)" <linux@...linux.org.uk>
Cc:     Matthew Wilcox <willy@...radead.org>,
        Yury Norov <yury.norov@...il.com>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Nicholas Piggin <npiggin@...il.com>,
        Ding Tianhong <dingtianhong@...wei.com>,
        Anshuman Khandual <anshuman.khandual@....com>,
        Alexey Klimov <aklimov@...hat.com>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [PATCH] vmap(): don't allow invalid pages

On 2022-01-19 19:12, Russell King (Oracle) wrote:
> On Wed, Jan 19, 2022 at 06:43:10PM +0000, Robin Murphy wrote:
>> Indeed, my impression is that the only legitimate way to get hold of a page
>> pointer without assumed provenance is via pfn_to_page(), which is where
>> pfn_valid() comes in. Thus pfn_valid(page_to_pfn()) really *should* be a
>> tautology.
> 
> That can only be true if pfn == page_to_pfn(pfn_to_page(pfn)) for all
> values of pfn.
> 
> Given how pfn_to_page() is defined in the sparsemem case:
> 
> #define __pfn_to_page(pfn)                              \
> ({	unsigned long __pfn = (pfn);                    \
> 	struct mem_section *__sec = __pfn_to_section(__pfn);    \
> 	__section_mem_map_addr(__sec) + __pfn;          \
> })
> #define pfn_to_page __pfn_to_page
> 
> that isn't the case, especially when looking at page_to_pfn():
> 
> #define __page_to_pfn(pg)                                       \
> ({	const struct page *__pg = (pg);                         \
> 	int __sec = page_to_section(__pg);                      \
> 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec))); \
> })
> #define page_to_pfn __page_to_pfn
> 
> Where:
> 
> static inline unsigned long page_to_section(const struct page *page)
> {
> 	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
> }
> 
> So if page_to_section() returns something bogus, e.g. zero for an
> invalid page that actually lives in a non-zero section, you're not
> going to end up with the right pfn from page_to_pfn().

Right, I emphasised "should" in an attempt to imply "in the absence of 
serious bugs that have further-reaching consequences anyway".
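
To make that failure mode concrete, here is a minimal userspace sketch
(not from the thread) of the sparsemem round trip. section_mem_map(),
the section sizes and SECTIONS_PGSHIFT below are simplified stand-ins
for the kernel's __section_mem_map_addr()/__nr_to_section() machinery,
not the real definitions:

#include <stdint.h>
#include <stdio.h>

#define SECTIONS_PGSHIFT  8
#define SECTIONS_MASK     0xf
#define PAGES_PER_SECTION 4

struct page { unsigned long flags; };

/* Two sections: section 0 covers pfns 0-3, section 1 covers pfns 4-7. */
static struct page mem_map0[PAGES_PER_SECTION];
static struct page mem_map1[PAGES_PER_SECTION];

/* Stand-in for __section_mem_map_addr(__nr_to_section(sec)): the stored
 * base is pre-offset by the section's first pfn, as in the kernel, so
 * "base + pfn" lands on the right struct page. */
static uintptr_t section_mem_map(int sec)
{
	return sec == 0 ? (uintptr_t)mem_map0
			: (uintptr_t)mem_map1 - 4 * sizeof(struct page);
}

/* As quoted above: the section number lives in page->flags. */
static unsigned long page_to_section(const struct page *page)
{
	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
}

/* Mirrors __page_to_pfn(); integer arithmetic instead of pointer
 * subtraction so the bogus-pointer case below stays well-defined. */
static unsigned long page_to_pfn(const struct page *pg)
{
	int sec = page_to_section(pg);
	return ((uintptr_t)pg - section_mem_map(sec)) / sizeof(struct page);
}

int main(void)
{
	/* A genuine page in section 1: its flags encode section 1, and
	 * the round trip yields its true pfn, 5. */
	mem_map1[1].flags = 1UL << SECTIONS_PGSHIFT;
	printf("valid page -> pfn %lu\n", page_to_pfn(&mem_map1[1]));

	/* A bogus "struct page" that never came from any mem_map: its
	 * flags decode to section 0, so page_to_pfn() measures the
	 * offset against the wrong base and returns garbage. Nothing
	 * stops that garbage pfn from satisfying pfn_valid(). */
	struct page junk = { .flags = 0 };
	printf("bogus page -> pfn %lu\n", page_to_pfn(&junk));
	return 0;
}

The point being that page_to_pfn() trusts the section bits in
page->flags; feed it a pointer that was never produced by pfn_to_page()
and the result is arbitrary, which is why pfn_valid(page_to_pfn(page))
cannot validate the pointer itself.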

> As I've said now a couple of times, trying to determine whether a struct
> page pointer is valid is the wrong question to be asking.

And doing so in one single place, on the justification of avoiding an 
incredibly niche symptom, is even more so. Not to mention that an 
address size fault is one of the best possible outcomes anyway, vs. the 
untold damage that may stem from accesses actually going through to 
random parts of the physical memory map.

Robin.
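
For context, the check being debated is of roughly this shape, a
paraphrase of the proposed vmap() guard rather than the exact hunk from
the patch:

/* Rough shape of the proposed guard in vmap() (mm/vmalloc.c),
 * paraphrased; the exact patch may differ. Per the discussion above, a
 * corrupt struct page pointer can still slip through, since
 * page_to_pfn() on it may produce a pfn that happens to be valid. */
void *vmap(struct page **pages, unsigned int count,
	   unsigned long flags, pgprot_t prot)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		if (WARN_ON_ONCE(!pfn_valid(page_to_pfn(pages[i]))))
			return NULL;

	/* ... existing mapping logic unchanged ... */
}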
