Date:   Tue, 10 Mar 2020 09:20:29 -0700
From:   Bjorn Andersson <bjorn.andersson@...aro.org>
To:     Arnaud POULIQUEN <arnaud.pouliquen@...com>
Cc:     Andy Gross <agross@...nel.org>, Ohad Ben-Cohen <ohad@...ery.com>,
        linux-arm-msm@...r.kernel.org, linux-remoteproc@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Mathieu Poirier <mathieu.poirier@...aro.org>
Subject: Re: [PATCH v4 1/4] remoteproc: Traverse rproc_list under RCU read lock

On Tue 10 Mar 06:41 PDT 2020, Arnaud POULIQUEN wrote:

> Hi Bjorn,
> 
> 
> On 3/10/20 7:38 AM, Bjorn Andersson wrote:
> > In order to be able to traverse the mostly read-only rproc_list without
> > locking during panic, migrate traversal to be done under rcu_read_lock().
> > 
> > Mutual exclusion for modifications of the list continues to be handled
> > by the rproc_list_mutex and a synchronization point is added before
> > releasing objects that are popped from the list.
> > 
> > Signed-off-by: Bjorn Andersson <bjorn.andersson@...aro.org>
> > ---
> > 
> > Change v3:
> > - New patch
> > 
> >  drivers/remoteproc/remoteproc_core.c | 13 ++++++++-----
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> > index 097f33e4f1f3..f0a77c30c6b1 100644
> > --- a/drivers/remoteproc/remoteproc_core.c
> > +++ b/drivers/remoteproc/remoteproc_core.c
> > @@ -1854,8 +1854,8 @@ struct rproc *rproc_get_by_phandle(phandle phandle)
> >  	if (!np)
> >  		return NULL;
> >  
> > -	mutex_lock(&rproc_list_mutex);
> > -	list_for_each_entry(r, &rproc_list, node) {
> > +	rcu_read_lock();
> > +	list_for_each_entry_rcu(r, &rproc_list, node) {
> >  		if (r->dev.parent && r->dev.parent->of_node == np) {
> >  			/* prevent underlying implementation from being removed */
> >  			if (!try_module_get(r->dev.parent->driver->owner)) {
> > @@ -1868,7 +1868,7 @@ struct rproc *rproc_get_by_phandle(phandle phandle)
> >  			break;
> >  		}
> >  	}
> > -	mutex_unlock(&rproc_list_mutex);
> > +	rcu_read_unlock();
> >  
> >  	of_node_put(np);
> >  
> > @@ -1925,7 +1925,7 @@ int rproc_add(struct rproc *rproc)
> >  
> >  	/* expose to rproc_get_by_phandle users */
> >  	mutex_lock(&rproc_list_mutex);
> > -	list_add(&rproc->node, &rproc_list);
> > +	list_add_rcu(&rproc->node, &rproc_list);
> >  	mutex_unlock(&rproc_list_mutex);
> >  
> >  	return 0;
> > @@ -2140,9 +2140,12 @@ int rproc_del(struct rproc *rproc)
> >  
> >  	/* the rproc is downref'ed as soon as it's removed from the klist */
> >  	mutex_lock(&rproc_list_mutex);
> > -	list_del(&rproc->node);
> > +	list_del_rcu(&rproc->node);
> >  	mutex_unlock(&rproc_list_mutex);
> I'm not familiar with RCU, but as rproc_panic_handler() can be called in
> interrupt context, should the mutex be replaced by a spinlock?
> 

Code traversing the list doesn't need to hold a lock, because the
rculist implementation ensures that the list itself is always
consistent.

Updates, however, cannot be done concurrently, which is why we keep this
lock. It can remain a mutex, because it now only protects modifications.

And then the last piece is to guarantee that a node is not freed while
it's being accessed by code traversing the list. This is ensured by the
synchronize_rcu() call below, which makes sure that no code holding an
rcu_read_lock() is still traversing the list.
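
For illustration, a minimal, self-contained sketch of that pattern, using a
hypothetical struct item rather than struct rproc: readers take only
rcu_read_lock(), updates stay serialized by a mutex, and a node is freed
only after synchronize_rcu() has returned.

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical list element, standing in for struct rproc. */
struct item {
	int id;
	struct list_head node;
};

static LIST_HEAD(item_list);
static DEFINE_MUTEX(item_list_mutex);

/* Reader: no lock held, only an RCU read-side critical section. */
static bool item_exists(int id)
{
	struct item *it;
	bool found = false;

	rcu_read_lock();
	list_for_each_entry_rcu(it, &item_list, node) {
		if (it->id == id) {
			found = true;
			break;
		}
	}
	rcu_read_unlock();

	return found;
}

/* Writers: additions and removals are serialized by the mutex. */
static void item_add(struct item *it)
{
	mutex_lock(&item_list_mutex);
	list_add_rcu(&it->node, &item_list);
	mutex_unlock(&item_list_mutex);
}

static void item_del(struct item *it)
{
	mutex_lock(&item_list_mutex);
	list_del_rcu(&it->node);
	mutex_unlock(&item_list_mutex);

	/* Wait for all pre-existing readers before freeing the node. */
	synchronize_rcu();
	kfree(it);
}

The mutex only serializes writers against each other; readers never block on
it, which is why it can stay a mutex even though the list may be walked from
atomic context.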

Regards,
Bjorn

> Regards,
> Arnaud
> >  
> > +	/* Ensure that no readers of rproc_list are still active */
> > +	synchronize_rcu();
> > +
> >  	device_del(&rproc->dev);
> >  
> >  	return 0;
> > 
