Message-Id: <1641871726.fshx7g5r92.astroid@bobo.none>
Date: Tue, 11 Jan 2022 14:37:25 +1000
From: Nicholas Piggin <npiggin@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Kees Cook <keescook@...omium.org>,
Laura Abbott <labbott@...hat.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linuxppc-dev@...ts.ozlabs.org, Mark Rutland <mark.rutland@....com>,
Michael Ellerman <mpe@...erman.id.au>,
Paul Mackerras <paulus@...ba.org>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH v2 1/2] powerpc: Fix virt_addr_valid() check
Excerpts from Kefeng Wang's message of January 8, 2022 9:58 pm:
> Hi PPC maintainers, ping..
Hmm. I might have confused myself about this. I'm going back and
trying to work out what I was thinking when I suggested it. This
works on 64e because vmalloc space is below the kernel linear map,
right?
On 64s it is the other way around and it is still possible to enable
flatmem on 64s. Although we might just not hit the problem there because
__pa() will not mask away the vmalloc offset for 64s so it will still
return something that's outside the pfn_valid range for flatmem. That's
very subtle though.
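(For reference, the 64s __pa() is roughly the following -- the mask
constant is from memory, so treat it as illustrative:

	/* book3s/64: __pa() just masks off the top quadrant bits, */
	/* it does not subtract PAGE_OFFSET                         */
	#define __pa(x)	((unsigned long)(x) & 0x0fffffffffffffffUL)

so a vmalloc address keeps its large offset above the linear map and
yields a pfn outside the flatmem pfn_valid() range.)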
The checks added to __pa() don't actually prevent vmalloc memory from
being passed to it on 64s either; they only perform a more basic test.
I think 64s wants (addr >= PAGE_OFFSET && addr < KERN_VIRT_START) as
the condition. Could possibly add that check to __pa as well to
catch vmalloc addresses.
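Something like this, as a completely untested sketch (KERN_VIRT_START
here is assumed to be the end of the 64s linear map, where the vmalloc
region begins):

	#define virt_addr_valid(vaddr)	({				\
		unsigned long _addr = (unsigned long)(vaddr);		\
		/* 64s linear map is [PAGE_OFFSET, KERN_VIRT_START) */	\
		_addr >= PAGE_OFFSET && _addr < KERN_VIRT_START &&	\
		pfn_valid(virt_to_pfn(_addr));				\
	})

The same range check could also go into the VIRTUAL_BUG_ON() in __pa().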
Thanks,
Nick
>
> On 2021/12/25 20:06, Kefeng Wang wrote:
>> When running ethtool eth0, the following BUG occurred:
>>
>> usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>> kernel BUG at mm/usercopy.c:99
>> ...
>> usercopy_abort+0x64/0xa0 (unreliable)
>> __check_heap_object+0x168/0x190
>> __check_object_size+0x1a0/0x200
>> dev_ethtool+0x2494/0x2b20
>> dev_ioctl+0x5d0/0x770
>> sock_do_ioctl+0xf0/0x1d0
>> sock_ioctl+0x3ec/0x5a0
>> __se_sys_ioctl+0xf0/0x160
>> system_call_exception+0xfc/0x1f0
>> system_call_common+0xf8/0x200
>>
>> The offending code is shown below:
>>
>> data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>> copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
>>
>> The data is allocated by vmalloc(), but virt_addr_valid(ptr) still
>> returns true on PowerPC64, which leads to the panic.
>>
>> As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
>> and __pa addresses") does, make sure the virtual address is above
>> PAGE_OFFSET in virt_addr_valid().
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
>> ---
>> arch/powerpc/include/asm/page.h | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index 254687258f42..300d4c105a3a 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
>> #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
>> #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
>>
>> -#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
>> +#define virt_addr_valid(vaddr)	({				\
>> +	unsigned long _addr = (unsigned long)(vaddr);		\
>> +	_addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));	\
>> +})
>>
>> /*
>> * On Book-E parts we need __va to parse the device tree and we can't
>