Date:	Tue, 11 Sep 2012 11:36:28 -0700
From:	Muthu Kumar <muthu.lkml@...il.com>
To:	Kent Overstreet <koverstreet@...gle.com>
Cc:	Tejun Heo <tj@...nel.org>, linux-bcache@...r.kernel.org,
	linux-kernel@...r.kernel.org, dm-devel@...hat.com, axboe@...nel.dk,
	Vivek Goyal <vgoyal@...hat.com>,
	Mikulas Patocka <mpatocka@...hat.com>, bharrosh@...asas.com,
	david@...morbit.com
Subject: Re: [PATCH 2/2] block: Avoid deadlocks with bio allocation by
 stacking drivers

Kent,

On Mon, Sep 10, 2012 at 2:56 PM, Kent Overstreet <koverstreet@...gle.com> wrote:
..
<snip>
..

>
> +static void bio_alloc_rescue(struct work_struct *work)
> +{
> +       struct bio_set *bs = container_of(work, struct bio_set, rescue_work);
> +       struct bio *bio;
> +
> +       while (1) {
> +               spin_lock(&bs->rescue_lock);
> +               bio = bio_list_pop(&bs->rescue_list);
> +               spin_unlock(&bs->rescue_lock);
> +
> +               if (!bio)
> +                       break;
> +
> +               generic_make_request(bio);
> +       }
> +}
> +
> +static void punt_bios_to_rescuer(struct bio_set *bs)
> +{
> +       struct bio_list punt, nopunt;
> +       struct bio *bio;
> +
> +       /*
> +        * In order to guarantee forward progress we must punt only bios that
> +        * were allocated from this bio_set; otherwise, if there was a bio on
> +        * there for a stacking driver higher up in the stack, processing it
> +        * could require allocating bios from this bio_set, and doing that from
> +        * our own rescuer would be bad.
> +        *
> +        * Since bio lists are singly linked, pop them all instead of trying to
> +        * remove from the middle of the list:
> +        */
> +
> +       bio_list_init(&punt);
> +       bio_list_init(&nopunt);
> +
> +       while ((bio = bio_list_pop(current->bio_list)))
> +               bio_list_add(bio->bi_pool == bs ? &punt : &nopunt, bio);
> +
> +       *current->bio_list = nopunt;
> +
> +       spin_lock(&bs->rescue_lock);
> +       bio_list_merge(&bs->rescue_list, &punt);
> +       spin_unlock(&bs->rescue_lock);
> +
> +       queue_work(bs->rescue_workqueue, &bs->rescue_work);
> +}
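
One thing that isn't visible in the quoted hunk is how the allocation path
actually drives this. My reading, purely a sketch of the idea rather than
the patch itself (the bs->bio_pool field name and the retry shape are my
assumptions, and I'm also assuming rescue_workqueue is created with
WQ_MEM_RECLAIM so the rescuer exists under memory pressure), is roughly:

	static void *rescue_aware_alloc(struct bio_set *bs, gfp_t gfp_mask)
	{
		void *p;

		/* First try without blocking, so we can't deadlock on the mempool. */
		p = mempool_alloc(bs->bio_pool, gfp_mask & ~__GFP_WAIT);
		if (p || !(gfp_mask & __GFP_WAIT) || !current->bio_list)
			return p;

		/*
		 * We're a stacking driver holding bios on current->bio_list.
		 * Punt the ones allocated from this bio_set to the rescuer so
		 * they can complete and release mempool entries, then retry
		 * the allocation, this time allowed to block.
		 */
		punt_bios_to_rescuer(bs);
		return mempool_alloc(bs->bio_pool, gfp_mask);
	}

Is that the right mental model?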


Does this preserve the CPU on which the bio was originally submitted? I'm
not familiar with cmwq, so maybe Tejun can clarify.

Tejun - the question is: do we still honor rq_affinity with the above
rescue worker implementation?
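
If the answer is no and it ends up mattering for rq_affinity, I guess the
obvious knob would be queue_work_on(). Just a sketch to illustrate what I
mean (raw_smp_processor_id() used only as a hint, since we may be
preemptible here; not suggesting the patch needs this):

	/* instead of: queue_work(bs->rescue_workqueue, &bs->rescue_work); */
	queue_work_on(raw_smp_processor_id(),
		      bs->rescue_workqueue, &bs->rescue_work);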

Regards,
Muthu
