Date: Fri, 31 May 2024 09:40:01 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: hailong liu <hailong.liu@...o.com>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Uladzislau Rezki <urezki@...il.com>, Christoph Hellwig <hch@...radead.org>, 
	Lorenzo Stoakes <lstoakes@...il.com>, Baoquan He <bhe@...hat.com>, 
	Thomas Gleixner <tglx@...utronix.de>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	steve.kang@...soc.com
Subject: Re: [PATCHv2] mm: fix incorrect vbq reference in purge_fragmented_block

On Fri, May 31, 2024 at 9:27 AM hailong liu <hailong.liu@...o.com> wrote:
>
> On Fri, 31. May 08:50, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> >
> > vmalloc area runs out on our ARM64 system during an erofs test,
> > with vm_map_ram() failing[1]. The debug log shows that
> > vm_map_ram()->vb_alloc() allocates a new vb->va (each backed by a
> > 4MB vmalloc area) on every call, because list_for_each_entry_rcu
> > returns immediately once vbq->free->next points back to vbq->free.
> > In other words, after the list is broken, 65536 such page faults
> > are enough to exhaust the whole vmalloc area. The breakage is
> > caused by a vbq->free->next that points to vbq->free itself, which
> > prevents list_for_each_entry_rcu from iterating the list, so the
> > corruption goes unnoticed. (A minimal userspace illustration of
> > this failure mode appears at the end of this message.)
> >
> > [1]
> > PID: 1        TASK: ffffff80802b4e00  CPU: 6    COMMAND: "init"
> >  #0 [ffffffc08006afe0] __switch_to at ffffffc08111d5cc
> >  #1 [ffffffc08006b040] __schedule at ffffffc08111dde0
> >  #2 [ffffffc08006b0a0] schedule at ffffffc08111e294
> >  #3 [ffffffc08006b0d0] schedule_preempt_disabled at ffffffc08111e3f0
> >  #4 [ffffffc08006b140] __mutex_lock at ffffffc08112068c
> >  #5 [ffffffc08006b180] __mutex_lock_slowpath at ffffffc08111f8f8
> >  #6 [ffffffc08006b1a0] mutex_lock at ffffffc08111f834
> >  #7 [ffffffc08006b1d0] reclaim_and_purge_vmap_areas at ffffffc0803ebc3c
> >  #8 [ffffffc08006b290] alloc_vmap_area at ffffffc0803e83fc
> >  #9 [ffffffc08006b300] vm_map_ram at ffffffc0803e78c0
> >
> > Fixes: fc1e0d980037 ("mm/vmalloc: prevent stale TLBs in fully utilized blocks")
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > ---
> > v2: introduce a 'cpu' field in vmap_block to record the CPU the block belongs to
> > ---
> >  mm/vmalloc.c | 11 +++++++----
> >  1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 22aa63f4ef63..ca962b554fa0 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -2458,6 +2458,7 @@ struct vmap_block {
> >       struct list_head free_list;
> >       struct rcu_head rcu_head;
> >       struct list_head purge;
> > +     unsigned int cpu;
> >  };
> >
> >  /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */
> > @@ -2574,6 +2575,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> >       vb->dirty = 0;
> >       vb->dirty_min = VMAP_BBMAP_BITS;
> >       vb->dirty_max = 0;
> If the task migrates to another CPU at this point, this may lead to an incorrect vbq being referenced.
OK, thanks for pointing that out. Would this work?

    vb->cpu = get_cpu();
    ...
    put_cpu();
    return vaddr;
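
To spell the suggestion out in context, here is a sketch of how
new_vmap_block() could look with this change. It only illustrates the
get_cpu()/put_cpu() idea, not the final patch; the elided parts are
unchanged:

    static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
    {
            struct vmap_block_queue *vbq;
            struct vmap_block *vb;
            void *vaddr;

            ... /* allocation and initialization of vb, as in the patch */

            /*
             * get_cpu() disables preemption, so the CPU recorded in
             * vb->cpu is guaranteed to be the CPU whose per-CPU queue
             * the block is added to below; the task cannot migrate
             * in between.
             */
            vb->cpu = get_cpu();
            vbq = per_cpu_ptr(&vmap_block_queue, vb->cpu);
            spin_lock(&vbq->lock);
            list_add_tail_rcu(&vb->free_list, &vbq->free);
            spin_unlock(&vbq->lock);
            put_cpu();

            return vaddr;
    }

With preemption disabled across that window, purge_fragmented_block()
can later derive the very same vbq from vb->cpu and delete the block
from the list it actually sits on.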

> > +     vb->cpu = smp_processor_id();
> >       bitmap_set(vb->used_map, 0, (1UL << order));
> >       INIT_LIST_HEAD(&vb->free_list);
> >
> > @@ -2614,9 +2616,10 @@ static void free_vmap_block(struct vmap_block *vb)
> >  }
> >
> >  static bool purge_fragmented_block(struct vmap_block *vb,
> > -             struct vmap_block_queue *vbq, struct list_head *purge_list,
> > -             bool force_purge)
> > +             struct list_head *purge_list, bool force_purge)
> >  {
> > +     struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, vb->cpu);
> > +
> >       if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
> >           vb->dirty == VMAP_BBMAP_BITS)
> >               return false;
> > @@ -2664,7 +2667,7 @@ static void purge_fragmented_blocks(int cpu)
> >                       continue;
> >
> >               spin_lock(&vb->lock);
> > -             purge_fragmented_block(vb, vbq, &purge, true);
> > +             purge_fragmented_block(vb, &purge, true);
> >               spin_unlock(&vb->lock);
> >       }
> >       rcu_read_unlock();
> > @@ -2801,7 +2804,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
> >                        * not purgeable, check whether there is dirty
> >                        * space to be flushed.
> >                        */
> > -                     if (!purge_fragmented_block(vb, vbq, &purge_list, false) &&
> > +                     if (!purge_fragmented_block(vb, &purge_list, false) &&
> >                           vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
> >                               unsigned long va_start = vb->va->va_start;
> >                               unsigned long s, e;
> > --
> > 2.25.1
> >
> >
>
> --
>
> Best Regards,
> Hailong.
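
For readers following the analysis above: a minimal userspace sketch
of the failure mode described in the commit message, using simplified
stand-ins for the kernel's list primitives (this is not kernel code).
Once the list head's next pointer loops back to the head, an
entry-iteration loop of the list_for_each_entry_rcu() shape sees an
empty list, so the caller never finds an existing block and allocates
a fresh one on every call:

    #include <stdio.h>
    #include <stddef.h>

    struct list_head { struct list_head *next, *prev; };

    /* Userspace stand-in for the kernel's container_of(). */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct vmap_block_sketch {
            struct list_head free_list;
            int id;
    };

    int main(void)
    {
            struct list_head free_head = { &free_head, &free_head };
            struct vmap_block_sketch vb = { .id = 1 };

            /* Link vb into the list the usual way... */
            vb.free_list.next = &free_head;
            vb.free_list.prev = &free_head;
            free_head.next = &vb.free_list;
            free_head.prev = &vb.free_list;

            /* ...then simulate the corruption: the head's next pointer
             * loops back to the head, making vb unreachable. */
            free_head.next = &free_head;

            int found = 0;
            for (struct list_head *p = free_head.next; p != &free_head;
                 p = p->next) {
                    struct vmap_block_sketch *b =
                            container_of(p, struct vmap_block_sketch,
                                         free_list);
                    printf("found block %d\n", b->id);
                    found = 1;
            }
            if (!found)
                    printf("list looks empty; a new 4MB block would be allocated\n");
            return 0;
    }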
