Message-ID: <6e2ddc83-bec3-fdd4-4d91-3ade0de0b7c8@csgroup.eu>
Date:   Fri, 24 Dec 2021 13:18:08 +0000
From:   Christophe Leroy <christophe.leroy@...roup.eu>
To:     Kefeng Wang <wangkefeng.wang@...wei.com>,
        Kees Cook <keescook@...omium.org>,
        Laura Abbott <labbott@...hat.com>,
        Mark Rutland <mark.rutland@....com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Michael Ellerman <mpe@...erman.id.au>,
        Benjamin Herrenschmidt <benh@...nel.crashing.org>,
        Paul Mackerras <paulus@...ba.org>,
        "linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
        Nick Piggin <npiggin@...il.com>
Subject: Re: [PATCH] Revert "mm/usercopy: Drop extra is_vmalloc_or_module()
 check"



On 24/12/2021 at 08:06, Kefeng Wang wrote:
> 
> On 2021/12/24 14:01, Christophe Leroy wrote:
>>
>> On 23/12/2021 at 11:21, Kefeng Wang wrote:
>>> This reverts commit 517e1fbeb65f5eade8d14f46ac365db6c75aea9b.
>>>
>>>     usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
>>>     kernel BUG at mm/usercopy.c:99
>>>     ...
>>>     usercopy_abort+0x64/0xa0 (unreliable)
>>>     __check_heap_object+0x168/0x190
>>>     __check_object_size+0x1a0/0x200
>>>     dev_ethtool+0x2494/0x2b20
>>>     dev_ioctl+0x5d0/0x770
>>>     sock_do_ioctl+0xf0/0x1d0
>>>     sock_ioctl+0x3ec/0x5a0
>>>     __se_sys_ioctl+0xf0/0x160
>>>     system_call_exception+0xfc/0x1f0
>>>     system_call_common+0xf8/0x200
>>>
>>> When running ethtool eth0, the BUG occurs; the code involved is shown below:
>>>
>>>     data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
>>>     copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
>>>
>>> The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
>>> on PowerPC64, which leads to the panic. Add back the
>>> is_vmalloc_or_module() check to fix it.
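(For reference, the restored check is roughly an early bail-out in
check_heap_object() before the pointer is treated as page/slab-backed
memory; this is only a sketch, see the reverted commit for the exact code:)

	/* vmalloc/module addresses are not part of the linear map,
	 * so skip the slab/page checks for them.
	 */
	if (is_vmalloc_or_module_addr(ptr))
		return;
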
>> Is it expected that virt_addr_valid() returns true on PPC64 for
>> vmalloc'ed memory? If that's the case, it also means that
>> CONFIG_DEBUG_VIRTUAL won't work as expected either.
> 
> Our product team reported this bug to me. After asking them to run some
> tests, I found that virt_addr_valid() returns true for vmalloc'ed memory
> on their board.
> 
> I think DEBUG_VIRTUAL would not work correctly either, but I can't test it.
> 
>>
>> If it is unexpected, I think you should fix PPC64 instead of adding this
>> hack back. Maybe the ARM64 fix can be used as a starting point, see
>> commit 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using
>> __is_lm_address()")
> 
> Yes, I checked the history; fixing virt_addr_valid() on PowerPC is what I
> wanted to do first, but I am not familiar with PPC, and HARDENED_USERCOPY
> on other architectures could have this issue too, so I added the
> workaround back.
> 
> 
> 1) PPC maintainers/experts, any suggestions?
> 
> 2) Maybe we could add a check to WARN about this scenario:
> 
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -229,6 +229,8 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>          if (!virt_addr_valid(ptr))
>                  return;
> 
> +       WARN_ON_ONCE(is_vmalloc_or_module_addr(ptr));
> 
>> In the meantime, can you provide more information on your config,
>> especially which memory model is used?
> 
> Some relevant configs:
> 
> CONFIG_PPC64=y
> CONFIG_PPC_BOOK3E_64=y
> CONFIG_E5500_CPU=y
> CONFIG_TARGET_CPU_BOOL=y
> CONFIG_PPC_BOOK3E=y
> CONFIG_E500=y
> CONFIG_PPC_E500MC=y
> CONFIG_PPC_FPU=y
> CONFIG_FSL_EMB_PERFMON=y
> CONFIG_FSL_EMB_PERF_EVENT=y
> CONFIG_FSL_EMB_PERF_EVENT_E500=y
> CONFIG_BOOKE=y
> CONFIG_PPC_FSL_BOOK3E=y
> CONFIG_PTE_64BIT=y
> CONFIG_PHYS_64BIT=y
> CONFIG_PPC_MMU_NOHASH=y
> CONFIG_PPC_BOOK3E_MMU=y
> CONFIG_SELECT_MEMORY_MODEL=y
> CONFIG_FLATMEM_MANUAL=y
> CONFIG_FLATMEM=y
> CONFIG_FLAT_NODE_MEM_MAP=y
> CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
> 

OK, so it is PPC64 book3e with FLATMEM.

The problem is virt_to_pfn(), which uses __pa().

__pa(x) on PPC64 is (x) & 0x0fffffffffffffffUL.

And on book3e/64 we have:

VMALLOC_START = KERN_VIRT_START = ASM_CONST(0x8000000000000000)

This means that __pa() will return a valid PFN for vmalloc'ed addresses.
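
To illustrate with a made-up address (numbers are for illustration only):

	void *p = (void *)0x8000000000020000UL;	/* inside the vmalloc area */

	/* __pa(p) = 0x8000000000020000 & 0x0fffffffffffffffUL = 0x20000,
	 * so virt_to_pfn(p) yields a small PFN that really exists, and
	 * with FLATMEM pfn_valid() accepts it. virt_addr_valid(p) then
	 * wrongly reports this vmalloc address as linear-map memory.
	 */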


So an additional check is required in virt_addr_valid(); maybe check
that (kaddr & PAGE_OFFSET) == PAGE_OFFSET.

Can you try the following?

#define virt_addr_valid(kaddr)	((kaddr & PAGE_OFFSET) == PAGE_OFFSET && \
				 pfn_valid(virt_to_pfn(kaddr)))
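
Note: if callers can pass a pointer rather than an unsigned long, a cast
would be needed; an untested sketch of the same idea:

#define virt_addr_valid(kaddr)	\
	((((unsigned long)(kaddr)) & PAGE_OFFSET) == PAGE_OFFSET && \
	 pfn_valid(virt_to_pfn(kaddr)))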


Thanks
Christophe
