Date:	Thu, 8 May 2008 21:59:48 -0700
From:	"Dan Williams" <dan.j.williams@...el.com>
To:	"Neil Brown" <neilb@...e.de>
Cc:	"Jens Axboe" <jens.axboe@...cle.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	"Jacek Luczak" <difrost.kernel@...il.com>,
	"Prakash Punnoor" <prakash@...noor.de>,
	"Linux Kernel list" <linux-kernel@...r.kernel.org>,
	linux-raid@...r.kernel.org
Subject: Re: WARNING in 2.6.25-07422-gb66e1f1

On Thu, May 8, 2008 at 7:15 PM, Neil Brown <neilb@...e.de> wrote:
> On Thursday May 8, dan.j.williams@...el.com wrote:
>  > @@ -133,8 +137,10 @@ static linear_conf_t *linear_conf(mddev_t *mddev, int raid_disks)
>  >
>  >               disk->rdev = rdev;
>  >
>  > +             spin_lock(&conf->device_lock);
>  >               blk_queue_stack_limits(mddev->queue,
>  >                                      rdev->bdev->bd_disk->queue);
>  > +             spin_unlock(&conf->device_lock);
>  >               /* as we don't honour merge_bvec_fn, we must never risk
>  >                * violating it, so limit ->max_sector to one PAGE, as
>  >                * a one page request is never in violation.
>
>  This shouldn't be necessary.
>  There is no actual race here -- mddev->queue->queue_flags is not going to be
>  accessed by anyone else until do_md_run does
>         mddev->queue->make_request_fn = mddev->pers->make_request;
>  which is much later.
>  So we only need to be sure that "queue_is_locked" doesn't complain.
>  And as q->queue_lock is still NULL at this point, it won't complain.
>
>  I think that the *only* change that is needed is to put
>
>
>  > +     /* blk-core uses queue_lock to verify protection of the queue flags */
>  > +     mddev->queue->queue_lock = &conf->device_lock;
>
>  after each
>
> > +     spin_lock_init(&conf->device_lock);
>
>  i.e. in raid1.c, raid10.c and raid5.c
>
>  ??
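
(i.e. pairing the two lines quoted above directly after the existing lock
init in each personality's setup -- a rough, untested sketch, assuming conf
and mddev->queue are both set up at that point:)

	spin_lock_init(&conf->device_lock);
	/* blk-core uses queue_lock to verify protection of the queue flags */
	mddev->queue->queue_lock = &conf->device_lock;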

Yes, locking shouldn't be needed at those points; however, the warning
still fires because blk_queue_stack_limits() is using
queue_flag_clear() instead of queue_flag_clear_unlocked().  Taking a look at
converting it to queue_flag_clear_unlocked() uncovered a couple more
overlooked sites (multipath.c:multipath_add_disk and
raid1.c:raid1_add_disk) where ->run has already been called...
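
(From memory, the distinction in blk-core is roughly the following --
queue_flag_clear() warns unless the queue lock is actually held, while the
_unlocked variant skips the check, so a stacking call made without
q->queue_lock held trips the WARN_ON_ONCE:)

	static inline void queue_flag_clear_unlocked(unsigned int flag,
						     struct request_queue *q)
	{
		__clear_bit(flag, &q->queue_flags);
	}

	static inline void queue_flag_clear(unsigned int flag,
					    struct request_queue *q)
	{
		WARN_ON_ONCE(!queue_is_locked(q));
		__clear_bit(flag, &q->queue_flags);
	}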

The options I am thinking of all seem ugly:
1/ keep the unnecessary locking in MD
2/ make blk_queue_stack_limits() use queue_flag_clear_unlocked() even
though it needs to be locked sometimes
3/ conditionally use queue_flag_clear_unlocked() if !t->queue_lock (rough
sketch below)
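
A literal reading of 3/ would be something like this in
blk_queue_stack_limits() -- untested, and assuming the QUEUE_FLAG_CLUSTER
clearing is the only flag update it does:

	if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags)) {
		if (t->queue_lock) {
			/* caller is still expected to hold t->queue_lock */
			queue_flag_clear(QUEUE_FLAG_CLUSTER, t);
		} else {
			queue_flag_clear_unlocked(QUEUE_FLAG_CLUSTER, t);
		}
	}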

--
Dan
