Message-ID: <a1e006af-c935-4246-a239-669debb4717d@paulmck-laptop>
Date:   Wed, 5 Apr 2023 11:46:29 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Ziwei Dai <ziwei.dai@...soc.com>, urezki@...il.com,
        frederic@...nel.org, quic_neeraju@...cinc.com,
        josh@...htriplett.org, rostedt@...dmis.org,
        mathieu.desnoyers@...icios.com, jiangshanlai@...il.com,
        rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
        shuang.wang@...soc.com, yifan.xin@...soc.com, ke.wang@...soc.com,
        xuewen.yan@...soc.com, zhiguo.niu@...soc.com,
        zhaoyang.huang@...soc.com
Subject: Re: [PATCH V2] rcu: Make sure new krcp free business is handled
 after the wanted rcu grace period.

On Wed, Apr 05, 2023 at 02:12:02PM -0400, Joel Fernandes wrote:
> On Wed, Apr 5, 2023 at 1:39 PM Joel Fernandes <joel@...lfernandes.org> wrote:
> >
> > On Fri, Mar 31, 2023 at 8:43 AM Ziwei Dai <ziwei.dai@...soc.com> wrote:
> > >
> > > In kfree_rcu_monitor(), new free business at krcp is attached to any free
> > > channel at krwp. kfree_rcu_monitor() is responsible for making sure the
> > > new free business is handled after the RCU grace period. But if krwp
> > > already has any non-free channel, there is an on-going RCU work, which
> > > causes the kvfree_call_rcu()-triggered free business to be done before
> > > the wanted RCU grace period ends.
> > >
> > > This commit makes kfree_rcu_monitor() skip any krwp that still has a
> > > non-free channel, fixing the issue of kvfree_call_rcu() losing
> > > effectiveness.
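
A minimal sketch of that check, assuming a helper name chosen here for
illustration (FREE_N_CHANNELS, bulk_head_free[], and head_free are the
existing kvfree_rcu() machinery referenced in the trace below; the actual
v2 helper body is not shown in this mail):

static bool krwp_has_inflight_free(struct kfree_rcu_cpu_work *krwp)
{
	int i;

	/* A non-empty channel means an earlier batch is still in flight. */
	for (i = 0; i < FREE_N_CHANNELS; i++)
		if (!list_empty(&krwp->bulk_head_free[i]))
			return true;

	return !!krwp->head_free;
}

kfree_rcu_monitor() would then skip such a krwp and leave the new free
business queued on krcp until a later invocation, after the in-flight
batch has completed.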
> > >
> > > Below is the css_set object "from_cset" use-after-free case caused by
> > > kvfree_call_rcu() losing effectiveness.
> > > CPU 0 calls rcu_read_lock() and then uses "from_cset"; a hard irq
> > > arrives and the task is scheduled out.
> > > CPU 1 calls kfree_rcu(cset, rcu_head), intending to free "from_cset"
> > > after a new gp. But "from_cset" is freed right after the current gp
> > > ends, and is then reallocated. When CPU 0's task is scheduled back in,
> > > it references a member of "from_cset", which causes the crash.
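
In generic RCU terms, the broken contract looks like this (an illustrative
sketch with made-up names, not the actual cgroup code):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct obj {
	int member;
	struct rcu_head rcu;
};

static struct obj __rcu *gp_obj;

/* Reader side, as on CPU 0 in the trace below. */
static int reader(void)
{
	struct obj *p;
	int val = 0;

	rcu_read_lock();
	p = rcu_dereference(gp_obj);
	if (p)
		val = p->member;	/* crashes if p was freed too early */
	rcu_read_unlock();
	return val;
}

/* Updater side, as on CPU 1 in the trace below. */
static void updater(struct obj *newp)
{
	struct obj *old;

	old = rcu_replace_pointer(gp_obj, newp, true);
	if (old)
		kfree_rcu(old, rcu);	/* must not free before a new full gp */
}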
> > >
> > > CPU 0                                   CPU 1
> > > count_memcg_event_mm()
> > > |rcu_read_lock()  <---
> > > |mem_cgroup_from_task()
> > >  |// css_set_ptr is the "from_cset" mentioned on CPU 1
> > >  |css_set_ptr = rcu_dereference((task)->cgroups)
> > >  |// Hard irq comes, current task is scheduled out.
> > >
> > >                                         cgroup_attach_task()
> > >                                         |cgroup_migrate()
> > >                                         |cgroup_migrate_execute()
> > >                                         |css_set_move_task(task, from_cset, to_cset, true)
> > >                                         |cgroup_move_task(task, to_cset)
> > >                                         |rcu_assign_pointer(.., to_cset)
> > >                                         |...
> > >                                         |cgroup_migrate_finish()
> > >                                         |put_css_set_locked(from_cset)
> > >                                         |from_cset->refcount return 0
> > >                                         |kfree_rcu(cset, rcu_head) // means to free from_cset after new gp
> > >                                         |add_ptr_to_bulk_krc_lock()
> > >                                         |schedule_delayed_work(&krcp->monitor_work, ..)
> > >
> > >                                         kfree_rcu_monitor()
> > >                                         |krcp->bulk_head[0]'s work attached to krwp->bulk_head_free[]
> > >                                         |queue_rcu_work(system_wq, &krwp->rcu_work)
> > >                                         |if rwork->rcu.work is not in WORK_STRUCT_PENDING_BIT state,
> > >                                         |call_rcu(&rwork->rcu, rcu_work_rcufn) <--- request a new gp
> > >
> > >                                         // There is a previous call_rcu(.., rcu_work_rcufn).
> > >                                         // That earlier gp ends, and rcu_work_rcufn() is called.
> > >                                         rcu_work_rcufn()
> > >                                         |__queue_work(.., rwork->wq, &rwork->work);
> > >
> > >                                         |kfree_rcu_work()
> > >                                         |krwp->bulk_head_free[0] bulk is freed before the new gp ends!!!
> > >                                         |The "from_cset" is freed before the new gp ends.
> > >
> > > // The task is scheduled back in after many ms.
> > >  |css_set_ptr->subsys[subsys_id] <--- Causes kernel crash, because css_set_ptr was freed.
> > >
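
The pending-bit check at the heart of this race lives in queue_rcu_work().
Simplified from kernel/workqueue.c of this era, it looks roughly like the
following; note that when the rcu_work is already pending, no new grace
period is requested at all:

bool queue_rcu_work(struct workqueue_struct *wq, struct rcu_work *rwork)
{
	struct work_struct *work = &rwork->work;

	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
		rwork->wq = wq;
		call_rcu(&rwork->rcu, rcu_work_rcufn);	/* request a new gp */
		return true;
	}

	/* Already pending: the freeing rides the earlier, older gp. */
	return false;
}

So pointers newly attached to a krwp whose rcu_work is still pending are
freed by the earlier grace period's callback, not after a grace period
that began after their kvfree_call_rcu().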
> > > v2: Use a helper function instead of an inline code block in
> > > kfree_rcu_monitor().
> > >
> > > Fixes: c014efeef76a ("rcu: Add multiple in-flight batches of kfree_rcu() work")
> > > Signed-off-by: Ziwei Dai <ziwei.dai@...soc.com>
> >
> > Please update the fixes tag to:
> > 5f3c8d620447 ("rcu/tree: Maintain separate array for vmalloc ptrs")
> 
> Vlad pointed out in another thread that the fix is actually to 34c881745549.
> 
> So just to be sure, it could be updated to:
> Fixes: 34c881745549 ("rcu: Support kfree_bulk() interface in kfree_rcu()")
> Fixes: 5f3c8d620447 ("rcu/tree: Maintain separate array for vmalloc ptrs")

Ziwei Dai, does this change in Fixes look good to you?

If so, I will update the commit log in this commit that I am planning
to submit into v6.3.  It is strictly speaking not a v6.3 regression,
but it is starting to show up in the wild and the patch is contained
enough to be considered an urgent fix.

							Thanx, Paul
