Date:	Thu, 08 Nov 2012 16:32:57 -0500
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Lukas Czerner <lczerner@...hat.com>, axboe@...nel.dk,
	dchinner@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] loop: Limit the number of requests in the bio list

Andrew Morton <akpm@...ux-foundation.org> writes:

> On Tue, 16 Oct 2012 11:21:45 +0200
> Lukas Czerner <lczerner@...hat.com> wrote:
>
>> Currently there is no limit on the number of requests in the loop bio
>> list. This can lead to some nasty situations when the caller spawns
>> tons of bio requests taking up a huge amount of memory. This is even
>> more obvious with discard, where blkdev_issue_discard() will submit all
>> bios for the range and wait for them to finish afterwards. On a really
>> big loop device with a slow backing file system this can lead to an OOM
>> situation, as reported by Dave Chinner.
>> 
>> With this patch we will wait in loop_make_request() if the number of
>> bios in the loop bio list would exceed 'nr_requests'. We'll wake the
>> waiting process up as we take bios off the list. Some threshold
>> hysteresis is in place to avoid high-frequency oscillation.
>> 
>
> What's happening with this?

Still waiting for review, I guess.  I'll have a look.

>> --- a/drivers/block/loop.c
>> +++ b/drivers/block/loop.c
>> @@ -463,6 +463,7 @@ out:
>>   */
>>  static void loop_add_bio(struct loop_device *lo, struct bio *bio)
>>  {
>> +	lo->lo_bio_count++;
>>  	bio_list_add(&lo->lo_bio_list, bio);
>>  }
>>  
>> @@ -471,6 +472,7 @@ static void loop_add_bio(struct loop_device *lo, struct bio *bio)
>>   */
>>  static struct bio *loop_get_bio(struct loop_device *lo)
>>  {
>> +	lo->lo_bio_count--;
>>  	return bio_list_pop(&lo->lo_bio_list);
>>  }
>>  
>> @@ -489,6 +491,14 @@ static void loop_make_request(struct request_queue *q, struct bio *old_bio)
>>  		goto out;
>>  	if (unlikely(rw == WRITE && (lo->lo_flags & LO_FLAGS_READ_ONLY)))
>>  		goto out;
>> +	if (lo->lo_bio_count >= lo->lo_queue->nr_requests) {
>> +		unsigned int nr;
>> +		spin_unlock_irq(&lo->lo_lock);
>> +		nr = lo->lo_queue->nr_requests - (lo->lo_queue->nr_requests/8);
>> +		wait_event_interruptible(lo->lo_req_wait,
>> +					 lo->lo_bio_count < nr);
>> +		spin_lock_irq(&lo->lo_lock);
>> +	}
>
> Two things.
>
> a) wait_event_interruptible() will return immediately if a signal is
>    pending (eg, someone hit ^C).  This is not the behaviour you want. 
>    If the calling process is always a kernel thread then
>    wait_event_interruptible() is OK and is the correct thing to use. 
>    Otherwise, it will need to be an uninterruptible sleep.

Good catch, this needs fixing.
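
For reference, a minimal way to address (a) would be to switch to the
uninterruptible variant, along these lines (just a sketch, not tested):

	if (lo->lo_bio_count >= lo->lo_queue->nr_requests) {
		unsigned int nr;
		spin_unlock_irq(&lo->lo_lock);
		nr = lo->lo_queue->nr_requests - (lo->lo_queue->nr_requests/8);
		/* wait_event() sleeps in TASK_UNINTERRUPTIBLE, so a pending
		 * signal (e.g. ^C) will not make it return early. */
		wait_event(lo->lo_req_wait, lo->lo_bio_count < nr);
		spin_lock_irq(&lo->lo_lock);
	}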

> b) Why is it safe to drop lo_lock here?  What data is that lock protecting?

lo_lock is protecting access to state and the bio list.  Dropping the
lock looks okay to me.
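
The wait can't happen with lo_lock held anyway (we'd be sleeping with a
spinlock held and interrupts off), so the unlock/lock pair around it is
needed regardless.  The consumer side isn't quoted above, but from the
changelog I'd expect it to do something roughly like this when it pulls
bios off the list (just a sketch of the idea, not the actual hunk):

	spin_lock_irq(&lo->lo_lock);
	bio = loop_get_bio(lo);		/* also decrements lo->lo_bio_count */
	if (lo->lo_bio_count < lo->lo_queue->nr_requests -
				(lo->lo_queue->nr_requests/8))
		wake_up(&lo->lo_req_wait);
	spin_unlock_irq(&lo->lo_lock);

That lower threshold is where the hysteresis comes from: with the default
nr_requests of 128, a blocked submitter isn't woken until the list has
drained below 112.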

Cheers,
Jeff
