Message-Id: <1640429980.38ev9qg7xc.astroid@bobo.none>
Date:   Sat, 25 Dec 2021 21:04:38 +1000
From:   Nicholas Piggin <npiggin@...il.com>
To:     Andrew
 Morton <akpm@...ux-foundation.org>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Christophe Leroy <christophe.leroy@...roup.eu>,
        Kees Cook <keescook@...omium.org>,
        Laura Abbott <labbott@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
        Mark Rutland <mark.rutland@....com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Paul Mackerras <paulus@...ba.org>,
        Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: Re: [PATCH] Revert "mm/usercopy: Drop extra is_vmalloc_or_module()
 check"

Excerpts from Kefeng Wang's message of December 25, 2021 12:05 pm:
> 
> On 2021/12/24 21:18, Christophe Leroy wrote:
>>
>> Le 24/12/2021 à 08:06, Kefeng Wang a écrit :
>>> On 2021/12/24 14:01, Christophe Leroy wrote:
>>>> Le 23/12/2021 à 11:21, Kefeng Wang a écrit :
>>>>> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
>>>>>
>>>>>      usercopy: Kernel memory exposure attempt detected from SLUB
>>>>> object not in SLUB page?! (offset 0, size 1048)!
>>>>>      kernel BUG at mm/usercopy.c:99
>>>>>      ...
>>>>>      usercopy_abort+0x64/0xa0 (unreliable)
>>>>>      __check_heap_object+0x168/0x190
>>>>>      __check_object_size+0x1a0/0x200
>>>>>      dev_ethtool+0x2494/0x2b20
>>>>>      dev_ioctl+0x5d0/0x770
>>>>>      sock_do_ioctl+0xf0/0x1d0
>>>>>      sock_ioctl+0x3ec/0x5a0
>>>>>      __se_sys_ioctl+0xf0/0x160
>>>>>      system_call_exception+0xfc/0x1f0
>>>>>      system_call_common+0xf8/0x200
>>>>>
>>>>> When running ethtool eth0, the BUG occurred; the offending code is shown below:
>>>>>
>>>>>      data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>>>>      copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
>>>>>
>>>>> The data is allocated by vmalloc(), and virt_addr_valid(ptr) returns true
>>>>> on PowerPC64, which leads to the panic. Add back the
>>>>> is_vmalloc_or_module() check to fix it.
>>>> Is it expected that virt_addr_valid() returns true on PPC64 for
>>>> vmalloc'ed memory ? If that's the case it also means that
>>>> CONFIG_DEBUG_VIRTUAL won't work as expected either.
>>> Our product team reported this bug to me. After asking them to run some
>>> tests, I found that virt_addr_valid() returns true for vmalloc'ed memory
>>> on their board.
>>>
>>> I suspect DEBUG_VIRTUAL does not work correctly either, but I cannot test it.
>>>
>>>> If it is unexpected, I think you should fix PPC64 instead of adding this
>>>> hack back. Maybe the ARM64 fix can be used as a starting point, see
>>>> commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
>>>> __is_lm_address()")
>>> Yes, I checked the history; fixing virt_addr_valid() on PowerPC is what I
>>> originally wanted to do, but I am not familiar with PPC, and
>>> HARDENED_USERCOPY on other architectures could have this issue too, so I
>>> added the workaround back.
>>>
>>>
>>> 1) PPC maintainers/experts, any suggestions?
>>>
>>> 2) Maybe we could add a check to WARN about this scenario.
>>>
>>> --- a/mm/usercopy.c
>>> +++ b/mm/usercopy.c
>>> @@ -229,6 +229,8 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>>>           if (!virt_addr_valid(ptr))
>>>                   return;
>>>
>>> +       WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));
>>>
>>>> In the meantime, can you provide more information on your config,
>>>> especially which memory model is used ?
>>> Some useful configs,
>>>
>>> CONFIG_PPC64=y
>>> CONFIG_PPC_BOOK3E_64=y
>>> CONFIG_E5500_CPU=y
>>> CONFIG_TARGET_CPU_BOOL=y
>>> CONFIG_PPC_BOOK3E=y
>>> CONFIG_E500=y
>>> CONFIG_PPC_E500MC=y
>>> CONFIG_PPC_FPU=y
>>> CONFIG_FSL_EMB_PERFMON=y
>>> CONFIG_FSL_EMB_PERF_EVENT=y
>>> CONFIG_FSL_EMB_PERF_EVENT_E500=y
>>> CONFIG_BOOKE=y
>>> CONFIG_PPC_FSL_BOOK3E=y
>>> CONFIG_PTE_64BIT=y
>>> CONFIG_PHYS_64BIT=y
>>> CONFIG_PPC_MMU_NOHASH=y
>>> CONFIG_PPC_BOOK3E_MMU=y
>>> CONFIG_SELECT_MEMORY_MODEL=y
>>> CONFIG_FLATMEM_MANUAL=y
>>> CONFIG_FLATMEM=y
>>> CONFIG_FLAT_NODE_MEM_MAP=y
>>> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
>>>
>> OK, so it is PPC64 book3e with flatmem.
>>
>> The problem is virt_to_pfn(), which uses __pa().
>>
>> __pa(x) on PPC64 is (x) & 0x0fffffffffffffffUL
>>
>> And on book3e/64 we have
>>
>> VMALLOC_START = KERN_VIRT_START = ASM_CONST(0x8000000000000000)
>>
>>
>> It means that __pa() will return a valid-looking PFN for vmalloc'ed addresses.
>>
>>
>> So an additional check is required in virt_addr_valid(), maybe check
>> that (kaddr & PAGE_OFFSET) == PAGE_OFFSET
>>
>> Can you try that ?
>>
>> #define virt_addr_valid(kaddr)	((((unsigned long)(kaddr) & PAGE_OFFSET) == PAGE_OFFSET) && \
>> 	pfn_valid(virt_to_pfn(kaddr)))
> 
> I got this commit,
> 
> commit 4dd7554a6456d124c85e0a4ad156625b71390b5c
> 
> Author: Nicholas Piggin <npiggin@...il.com>
> Date:   Wed Jul 24 18:46:37 2019 +1000
> 
>      powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
> 
>      Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
>      given a virtual address above PAGE_OFFSET.
> 
> It already checks PAGE_OFFSET in __pa(); I will test it and resend the
> patch (with the above warning changes).

What did you get with this commit? Is this what causes the crash?

riscv, for example, with flatmem also relies on pfn_valid() to do the right
thing, so as far as I can see the check should exclude vmalloc addresses;
it's just a matter of virt_addr_valid() not handing virt_to_pfn() an
address below PAGE_OFFSET.

If we take riscv's implementation

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 254687258f42..7713188516a6 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -132,7 +132,10 @@ static inline bool pfn_valid(unsigned long pfn)
 #define virt_to_page(kaddr)    pfn_to_page(virt_to_pfn(kaddr))
 #define pfn_to_kaddr(pfn)      __va((pfn) << PAGE_SHIFT)
 
-#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
+#define virt_addr_valid(vaddr) ({                                      \
+       unsigned long _addr = (unsigned long)(vaddr);                   \
+       _addr >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));          \
+})
 
 /*
  * On Book-E parts we need __va to parse the device tree and we can't
