Message-ID: <Zrqb8n3DikE+K+Xm@MiWiFi-R3L-srv>
Date: Tue, 13 Aug 2024 07:34:10 +0800
From: Baoquan He <bhe@...hat.com>
To: Will Deacon <will@...nel.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Zhaoyang Huang <zhaoyang.huang@...soc.com>,
	"Hailong . Liu" <hailong.liu@...o.com>,
	Uladzislau Rezki <urezki@...il.com>,
	Christoph Hellwig <hch@...radead.org>,
	Lorenzo Stoakes <lstoakes@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>, stable@...r.kernel.org
Subject: Re: [PATCH] mm: vmalloc: Ensure vmap_block is initialised before
 adding to queue

On 08/12/24 at 06:16pm, Will Deacon wrote:
> Commit 8c61291fd850 ("mm: fix incorrect vbq reference in
> purge_fragmented_block") extended the 'vmap_block' structure to contain
> a 'cpu' field which is set at allocation time to the id of the
> initialising CPU.
> 
> When a new 'vmap_block' is being instantiated by new_vmap_block(), the
> partially initialised structure is added to the local 'vmap_block_queue'
> xarray before the 'cpu' field has been initialised. If another CPU is
> concurrently walking the xarray (e.g. via vm_unmap_aliases()), then it
> may perform an out-of-bounds access to the remote queue thanks to an
> uninitialised index.
> 
> This has been observed as UBSAN errors in Android:
> 
>  | Internal error: UBSAN: array index out of bounds: 00000000f2005512 [#1] PREEMPT SMP
>  |
>  | Call trace:
>  |  purge_fragmented_block+0x204/0x21c
>  |  _vm_unmap_aliases+0x170/0x378
>  |  vm_unmap_aliases+0x1c/0x28
>  |  change_memory_common+0x1dc/0x26c
>  |  set_memory_ro+0x18/0x24
>  |  module_enable_ro+0x98/0x238
>  |  do_init_module+0x1b0/0x310
> 
> Move the initialisation of 'vb->cpu' in new_vmap_block() ahead of the
> addition to the xarray.
> 
> Cc: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> Cc: Hailong.Liu <hailong.liu@...o.com>
> Cc: Uladzislau Rezki (Sony) <urezki@...il.com>
> Cc: Baoquan He <bhe@...hat.com>
> Cc: Christoph Hellwig <hch@...radead.org>
> Cc: Lorenzo Stoakes <lstoakes@...il.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: <stable@...r.kernel.org>
> Fixes: 8c61291fd850 ("mm: fix incorrect vbq reference in purge_fragmented_block")
> Signed-off-by: Will Deacon <will@...nel.org>
> ---

Good catch, this could indeed happen and crash the system.

Reviewed-by: Baoquan He <bhe@...hat.com>
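
For what it's worth, the race is easier to see in a minimal userspace
sketch (not kernel code; the names, types and sizes below are
illustrative only): the writer publishes a partially initialised block,
and a concurrent reader can pick up the uninitialised ->cpu before the
writer's store lands.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct block {
	int cpu;			/* set last by the buggy writer */
};

static struct block *_Atomic published;	/* stand-in for the xarray slot */

static void *writer(void *arg)
{
	struct block *b = malloc(sizeof(*b));

	/* BUG: publish before initialising b->cpu, mirroring the old
	 * ordering of the xarray insertion vs. the vb->cpu assignment. */
	published = b;
	b->cpu = 1;
	return NULL;
}

static void *reader(void *arg)
{
	struct block *b;

	while (!(b = published))
		;
	/* In the kernel this value indexes per_cpu(vmap_block_queue, ...),
	 * so reading it uninitialised becomes an out-of-bounds access. */
	printf("observed cpu = %d\n", b->cpu);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}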

> 
> I _think_ the insertion into the free list is ok, as the vb shouldn't be
> considered for purging if it's clean. It would be great if somebody more
> familiar with this code could confirm either way, however.

It's OK, please see the comment below.

static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
{
......
        vaddr = vmap_block_vaddr(va->va_start, 0);
        spin_lock_init(&vb->lock);
        vb->va = va;
        /* At least something should be left free */
        BUG_ON(VMAP_BBMAP_BITS <= (1UL << order));
        bitmap_zero(vb->used_map, VMAP_BBMAP_BITS);
        vb->free = VMAP_BBMAP_BITS - (1UL << order);
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         Here one piece of size (1UL << order) has already been carved
         out for the caller (matching what vb_alloc() would take), and
         vb->free is set accordingly.
        vb->dirty = 0;
        vb->dirty_min = VMAP_BBMAP_BITS;
        vb->dirty_max = 0;
        bitmap_set(vb->used_map, 0, (1UL << order));
        INIT_LIST_HEAD(&vb->free_list);
...
}

static bool purge_fragmented_block(struct vmap_block *vb,
                struct list_head *purge_list, bool force_purge)
{
        struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, vb->cpu);

        if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ With vb->free and
                vb->dirty set as above, this condition is true, so a
                freshly created block is never purged.
            vb->dirty == VMAP_BBMAP_BITS)
                return false;
 
        /* Don't overeagerly purge usable blocks unless requested */
        if (!(force_purge || vb->free < VMAP_PURGE_THRESHOLD))
                return false;
...
}
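
To spell out the arithmetic: right after new_vmap_block() returns,
vb->free = VMAP_BBMAP_BITS - (1UL << order) and vb->dirty = 0, so the
sum falls short of VMAP_BBMAP_BITS by exactly (1UL << order) bits and
purge_fragmented_block() bails out on its first check. A standalone
illustration (the constant below is made up, not the kernel's):

#include <assert.h>

#define VMAP_BBMAP_BITS 1024UL	/* illustrative value only */

int main(void)
{
	unsigned int order = 0;		/* smallest vb_alloc() request */
	unsigned long free  = VMAP_BBMAP_BITS - (1UL << order);
	unsigned long dirty = 0;

	/* First condition in purge_fragmented_block(): a freshly
	 * created block always trips it, so it is never purged. */
	assert(free + dirty != VMAP_BBMAP_BITS);
	return 0;
}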
> 
>  mm/vmalloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6b783baf12a1..64c0a2c8a73c 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2626,6 +2626,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
>  	vb->dirty_max = 0;
>  	bitmap_set(vb->used_map, 0, (1UL << order));
>  	INIT_LIST_HEAD(&vb->free_list);
> +	vb->cpu = raw_smp_processor_id();
>  
>  	xa = addr_to_vb_xa(va->va_start);
>  	vb_idx = addr_to_vb_idx(va->va_start);
> @@ -2642,7 +2643,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
>  	 * integrity together with list_for_each_rcu from read
>  	 * side.
>  	 */
> -	vb->cpu = raw_smp_processor_id();
>  	vbq = per_cpu_ptr(&vmap_block_queue, vb->cpu);
>  	spin_lock(&vbq->lock);
>  	list_add_tail_rcu(&vb->free_list, &vbq->free);
> -- 
> 2.46.0.76.ge559c4bf1a-goog
> 

