Message-ID: <CAGWkznE=akrSBEQyq+f6tDN6fJ_J59WhJ-bvxpfrLUgTJ73h4g@mail.gmail.com>
Date: Thu, 30 May 2024 15:35:52 +0800
From: Zhaoyang Huang <huangzhaoyang@...il.com>
To: Baoquan He <bhe@...hat.com>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>, Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>, Christoph Hellwig <hch@...radead.org>,
Lorenzo Stoakes <lstoakes@...il.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
steve.kang@...soc.com, Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] mm: fix incorrect vbq reference in purge_fragmented_block
On Thu, May 30, 2024 at 3:19 PM Baoquan He <bhe@...hat.com> wrote:
>
> On 05/30/24 at 10:51am, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> >
> > A broken vbq->free list was reported on a v6.6 based system, caused
> > by purge_fragmented_block() manipulating vbq->free under the wrong
> > vbq->lock. This was introduced by the commit in the Fixes tag below,
> > which overlooked which vbq->lock actually protects the list.
>
> It would be helpful to provide more details: what's the symptom of the
> breakage, and in which case vbq->free gets broken.
Vmalloc space runs out on our ARM64 system during an erofs test because
vm_map_ram() fails[1]. We found one vbq->free whose ->next points back
at vbq->free itself, which prevents list_for_each_entry_rcu() from
iterating the list; that is how we found the bug. A sketch of how the
wrong lock can corrupt the list follows the backtrace below.
[1]
PID: 1 TASK: ffffff80802b4e00 CPU: 6 COMMAND: "init"
#0 [ffffffc08006afe0] __switch_to at ffffffc08111d5cc
#1 [ffffffc08006b040] __schedule at ffffffc08111dde0
#2 [ffffffc08006b0a0] schedule at ffffffc08111e294
#3 [ffffffc08006b0d0] schedule_preempt_disabled at ffffffc08111e3f0
#4 [ffffffc08006b140] __mutex_lock at ffffffc08112068c
#5 [ffffffc08006b180] __mutex_lock_slowpath at ffffffc08111f8f8
#6 [ffffffc08006b1a0] mutex_lock at ffffffc08111f834
#7 [ffffffc08006b1d0] reclaim_and_purge_vmap_areas at ffffffc0803ebc3c
#8 [ffffffc08006b290] alloc_vmap_area at ffffffc0803e83fc
#9 [ffffffc08006b300] vm_map_ram at ffffffc0803e78c0
#10 [ffffffc08006b420] z_erofs_lz4_decompress at ffffffc0806a49b0
#11 [ffffffc08006b670] z_erofs_decompress_queue at ffffffc0806a8fd0
#12 [ffffffc08006b860] z_erofs_runqueue at ffffffc0806a8744
#13 [ffffffc08006b970] z_erofs_readahead at ffffffc0806a6cfc
#14 [ffffffc08006ba00] read_pages at ffffffc08037ed78
#15 [ffffffc08006ba70] page_cache_ra_unbounded at ffffffc08037eb58
#16 [ffffffc08006bb00] page_cache_ra_order at ffffffc08037f42c
#17 [ffffffc08006bbb0] do_sync_mmap_readahead at ffffffc080371d3c
#18 [ffffffc08006bc40] filemap_fault at ffffffc080371774
#19 [ffffffc08006bd60] handle_mm_fault at ffffffc0803cc118
#20 [ffffffc08006bdc0] do_page_fault at ffffffc08112a618
#21 [ffffffc08006be20] do_translation_fault at ffffffc08112a36c
#22 [ffffffc08006be30] do_mem_abort at ffffffc0800bfbf0
#23 [ffffffc08006be70] el0_ia at ffffffc08111583c
#24 [ffffffc08006bea0] el0t_64_sync_handler at ffffffc0811156a4
#25 [ffffffc08006bfe0] el0t_64_sync at ffffffc080091584
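
To make the race easier to see, here is a minimal user-space mock
(hypothetical names, loosely modelled on include/linux/list.h; a sketch,
not kernel code). It plays out, by hand, what can happen when the purge
path deletes a block from a free list while holding a different
vbq->lock than the one that actually protects that list: two different
locks give no mutual exclusion, so the internal steps of list_add() and
list_del() interleave.

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

int main(void)
{
	struct list_head free_head;   /* stands in for CPU y's vbq->free */
	struct list_head vb1, vb2;    /* stand in for vmap_block::free_list */

	free_head.next = free_head.prev = &free_head;

	/* CPU y added vb1 earlier, correctly under its own vbq->lock: */
	vb1.next = free_head.next;
	vb1.prev = &free_head;
	free_head.next->prev = &vb1;
	free_head.next = &vb1;

	/* CPU y begins list_add(&vb2) under the RIGHT lock and reads
	 * the current head->next ... */
	struct list_head *snap = free_head.next;      /* == &vb1 */

	/* ... while CPU x, holding the WRONG vbq->lock, runs a full
	 * list_del(&vb1) in the middle of it: */
	vb1.prev->next = vb1.next;    /* free_head.next = &free_head */
	vb1.next->prev = vb1.prev;    /* free_head.prev = &free_head */

	/* CPU y finishes list_add(&vb2) with the stale snapshot: */
	vb2.next = snap;              /* still &vb1, which is deleted */
	vb2.prev = &free_head;
	vb2.next->prev = &vb2;        /* writes into the deleted vb1  */
	free_head.next = &vb2;

	/* Forward walk is now head -> vb2 -> vb1 -> head: iteration
	 * steps through a block that was deleted and, in the kernel,
	 * may already be freed and reused. */
	printf("head->next = %p, vb2 = %p\n", (void *)free_head.next, (void *)&vb2);
	printf("vb2->next  = %p, deleted vb1 = %p\n", (void *)vb2.next, (void *)&vb1);
	return 0;
}

Once vb1's memory is reused, the stale ->next can end up pointing
anywhere, including back at the head, which is one plausible way to
arrive at the self-pointing vbq->free we observed.
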
>
> >
> > Fixes: fc1e0d980037 ("mm/vmalloc: prevent stale TLBs in fully utilized blocks")
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > ---
> > mm/vmalloc.c | 11 +++++++----
> > 1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 22aa63f4ef63..112b50431725 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -2614,9 +2614,10 @@ static void free_vmap_block(struct vmap_block *vb)
> > }
> >
> > static bool purge_fragmented_block(struct vmap_block *vb,
> > - struct vmap_block_queue *vbq, struct list_head *purge_list,
> > - bool force_purge)
> > + struct list_head *purge_list, bool force_purge)
> > {
> > + struct vmap_block_queue *vbq;
> > +
> > if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
> > vb->dirty == VMAP_BBMAP_BITS)
> > return false;
> > @@ -2625,6 +2626,8 @@ static bool purge_fragmented_block(struct vmap_block *vb,
> > if (!(force_purge || vb->free < VMAP_PURGE_THRESHOLD))
> > return false;
> >
> > + vbq = container_of(addr_to_vb_xa(vb->va->va_start),
> > + struct vmap_block_queue, vmap_blocks);
> > /* prevent further allocs after releasing lock */
> > WRITE_ONCE(vb->free, 0);
> > /* prevent purging it again */
> > @@ -2664,7 +2667,7 @@ static void purge_fragmented_blocks(int cpu)
> > continue;
> >
> > spin_lock(&vb->lock);
> > - purge_fragmented_block(vb, vbq, &purge, true);
> > + purge_fragmented_block(vb, &purge, true);
> > spin_unlock(&vb->lock);
> > }
> > rcu_read_unlock();
> > @@ -2801,7 +2804,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
> > * not purgeable, check whether there is dirty
> > * space to be flushed.
> > */
> > - if (!purge_fragmented_block(vb, vbq, &purge_list, false) &&
> > + if (!purge_fragmented_block(vb, &purge_list, false) &&
> > vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
> > unsigned long va_start = vb->va->va_start;
> > unsigned long s, e;
> > --
> > 2.25.1
> >
> >
>
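
For reference on how the fix computes the right queue: a vmap_block's
xarray bucket is chosen by hashing the block's start address across the
possible CPUs, so the vbq owning that bucket can be recovered from
vb->va->va_start regardless of which CPU runs the purge. In mainline
around v6.6, addr_to_vb_xa() looks roughly like this (reproduced from
memory, treat as a sketch):

static struct xarray *
addr_to_vb_xa(unsigned long addr)
{
	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();

	return &per_cpu(vmap_block_queue, index).vmap_blocks;
}

container_of() on the returned xarray then gives back the enclosing
vmap_block_queue, so purge_fragmented_block() takes that queue's lock
instead of whichever vbq the caller happened to pass in.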