Message-ID: <4E5C7359.7070602@suse.de>
Date:	Tue, 30 Aug 2011 10:51:29 +0530
From:	Suresh Jayaraman <sjayaraman@...e.de>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Jens Axboe <axboe@...nel.dk>, LKML <linux-kernel@...r.kernel.org>,
	Shaohua Li <shaohua.li@...el.com>,
	Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH v2] block: document blk-plug

On 08/30/2011 03:18 AM, Andrew Morton wrote:
> On Mon, 29 Aug 2011 16:58:21 +0530
> Suresh Jayaraman <sjayaraman@...e.de> wrote:
> 
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -863,17 +863,23 @@ struct request_queue *blk_alloc_queue_node(gfp_t, int);
>>  extern void blk_put_queue(struct request_queue *);
>>  
>>  /*
>> + * blk_plug allows building up a queue of related requests by holding back
>> + * the I/O fragments for a short period. This allows merging of sequential
>> + * requests into a single larger request. As the requests are moved from the
>> + * per-task list to the device's request_queue in a batch, this results in
>> + * improved scalability, since contention on the request_queue lock is reduced.
>> + *
>>   * Note: Code in between changing the blk_plug list/cb_list or element of such
>>   * lists is preemptable, but such code can't do sleep (or be very careful),
>>   * otherwise data is corrupted. For details, please check schedule() where
>>   * blk_schedule_flush_plug() is called.
> 
> What does the older part of this comment mean?  If a code section is
> preemptible then it *will* sleep.  That's what preemption does.
> 

From what I can understand, we don't need to explicitly disable preemption
when modifying blk_plug->list because interrupts are disabled by the time we
get there:

void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
        ..
        /*
         * Save and disable interrupts here, to avoid doing it for every
         * queue lock we have to take.
         */
        local_irq_save(flags);
        while (!list_empty(&list)) {
                rq = list_entry_rq(list.next);
                list_del_init(&rq->queuelist);
                BUG_ON(!rq->q);
                if (rq->q != q) {
                        /*
                         * This drops the queue lock
                         */
                        if (q)
                                queue_unplugged(q, depth, from_schedule);
                        q = rq->q;
                        depth = 0;
                        spin_lock(q->queue_lock);
                }
        ..
}
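
For context, the per-task plug list being drained above is filled by callers
of the plugging API. Roughly, the pattern looks like this (my_submit_batch()
is a made-up example; blk_start_plug()/blk_finish_plug() and the current
submit_bio(rw, bio) are the real interfaces):

static void my_submit_batch(struct bio **bios, int nr)
{
        struct blk_plug plug;
        int i;

        blk_start_plug(&plug);             /* requests collect on current->plug */
        for (i = 0; i < nr; i++)
                submit_bio(READ, bios[i]); /* held back and merged on the plug list */
        blk_finish_plug(&plug);            /* flush the batch to the request_queue */
}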

When blk_flush_plug_list() is called from schedule() via
blk_schedule_flush_plug(), we must be very careful not to cause need_resched
to be set and thereby trigger a preemption check?

Is that what your comment intends to mean? Shaohua?
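
If I'm reading the code right, the from_schedule flag exists for exactly this
case: when the flush comes from schedule(), queue_unplugged() punts the queue
run to kblockd instead of running the queue directly, so we never re-enter the
scheduler from within schedule(). Roughly (quoting block/blk-core.c and
include/linux/blkdev.h from memory, so treat this as a sketch):

static inline void blk_schedule_flush_plug(struct task_struct *tsk)
{
        struct blk_plug *plug = tsk->plug;

        if (plug)
                blk_flush_plug_list(plug, true);    /* from_schedule == true */
}

static void queue_unplugged(struct request_queue *q, unsigned int depth,
                            bool from_schedule)
{
        trace_block_unplug(q, depth, !from_schedule);

        if (from_schedule)
                blk_run_queue_async(q);     /* defer to kblockd, don't recurse */
        else
                __blk_run_queue(q);         /* run the queue directly */
        spin_unlock(q->queue_lock);
}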



-- 
Suresh Jayaraman