Date:	Tue, 11 Sep 2012 09:28:47 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Michael Wang <wangyun@...ux.vnet.ibm.com>
Cc:	NeilBrown <neilb@...e.de>, LKML <linux-kernel@...r.kernel.org>,
	linux-raid@...r.kernel.org
Subject: Re: [PATCH 1/3] raid: replace list_for_each_continue_rcu with new interface

On Tue, Sep 11, 2012 at 02:27:42PM +0800, Michael Wang wrote:
> On 09/11/2012 02:21 PM, NeilBrown wrote:
> > On Mon, 10 Sep 2012 16:30:11 +0800 Michael Wang <wangyun@...ux.vnet.ibm.com>
> > wrote:
> > 
> >> On 08/24/2012 08:51 AM, Michael Wang wrote:
> >>> On 08/17/2012 12:33 PM, Michael Wang wrote:
> >>>> From: Michael Wang <wangyun@...ux.vnet.ibm.com>
> >>>>
> >>>> This patch replaces list_for_each_continue_rcu() with
> >>>> list_for_each_entry_continue_rcu() to save a few lines
> >>>> of code and allow removing list_for_each_continue_rcu().
> >>>>
> >>>
> >>> Hi, Neil
> >>>
> >>> Could I get some comments on this patch?
> >>
> >> Hi, Neil
> >>
> >> Could I get some comments?
> >>
> >> And please forgive me, and let me know, if this patch has come to the wrong
> >> place...I got the address from get_maintainer.
> > 
> > Sorry, August was a bad month.
> > 
> > Yes, the patch looks good.  Shall I include it in my tree, or do you want to
> > submit it through some rcu tree?
> > Either way is fine by me.  If you want to submit it through some other tree, 
> >    Acked-by: NeilBrown <neilb@...e.de>
> 
> Thanks for your review ;-)
> 
> I think submitting it through the rcu tree may be better; what's your opinion, Paul?

I am fine either way.  If I don't see it in mainline this coming merge
window, I will pull it into the -rcu tree.

							Thanx, Paul

> Regards,
> Michael Wang
> 
> 
> > 
> > If not, it'll probably appear in my -next soonish.
> > 
> > Thanks,
> > NeilBrown
> > 
> > 
> >>
> >> Regards,
> >> Michael Wang
> >>
> >>>
> >>> Regards,
> >>> Michael Wang
> >>>
> >>>> Signed-off-by: Michael Wang <wangyun@...ux.vnet.ibm.com>
> >>>> ---
> >>>>  drivers/md/bitmap.c |    9 +++------
> >>>>  1 files changed, 3 insertions(+), 6 deletions(-)
> >>>>
> >>>> diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
> >>>> index 15dbe03..b160828 100644
> >>>> --- a/drivers/md/bitmap.c
> >>>> +++ b/drivers/md/bitmap.c
> >>>> @@ -163,20 +163,17 @@ static struct md_rdev *next_active_rdev(struct md_rdev *rdev, struct mddev *mdde
> >>>>  	 * As devices are only added or removed when raid_disk is < 0 and
> >>>>  	 * nr_pending is 0 and In_sync is clear, the entries we return will
> >>>>  	 * still be in the same position on the list when we re-enter
> >>>> -	 * list_for_each_continue_rcu.
> >>>> +	 * list_for_each_entry_continue_rcu.
> >>>>  	 */
> >>>> -	struct list_head *pos;
> >>>>  	rcu_read_lock();
> >>>>  	if (rdev == NULL)
> >>>>  		/* start at the beginning */
> >>>> -		pos = &mddev->disks;
> >>>> +		rdev = list_entry_rcu(&mddev->disks, struct md_rdev, same_set);
> >>>>  	else {
> >>>>  		/* release the previous rdev and start from there. */
> >>>>  		rdev_dec_pending(rdev, mddev);
> >>>> -		pos = &rdev->same_set;
> >>>>  	}
> >>>> -	list_for_each_continue_rcu(pos, &mddev->disks) {
> >>>> -		rdev = list_entry(pos, struct md_rdev, same_set);
> >>>> +	list_for_each_entry_continue_rcu(rdev, &mddev->disks, same_set) {
> >>>>  		if (rdev->raid_disk >= 0 &&
> >>>>  		    !test_bit(Faulty, &rdev->flags)) {
> >>>>  			/* this is a usable devices */
> >>>>
> >>>
> > 
> 

