Message-ID: <Pine.LNX.4.64.0804291554490.26643@engineering.redhat.com>
Date:	Tue, 29 Apr 2008 16:02:09 -0400 (EDT)
From:	Mikulas Patocka <mpatocka@...hat.com>
To:	Jens Axboe <jens.axboe@...cle.com>
cc:	linux-kernel@...r.kernel.org,
	Mike Anderson <andmike@...ux.vnet.ibm.com>,
	Alasdair Graeme Kergon <agk@...hat.com>
Subject: Re: [PATCH] Optimize lock in queue unplugging

On Tue, 29 Apr 2008, Jens Axboe wrote:

> On Tue, Apr 29 2008, Mikulas Patocka wrote:
>> Hi
>>
>> Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
>> disks mapped to one logical device via device mapper.
>>
>> He found contention on request_queue->lock in generic_unplug_device.
>> It happens because when some code calls unplug on the device mapper
>> device, device mapper calls unplug on all the physical disks. These
>> unplug calls take the lock, find that the queue is already unplugged,
>> release the lock and exit.
>>
>> With the below patch, performance of the benchmark was increased by 18%
>> (the whole OLTP application, not just block layer microbenchmarks).
>>
>> So I'm submitting this patch for upstream. I think the patch is correct,
>> because when multiple threads call plug and unplug simultaneously, it is
>> unspecified whether the queue ends up plugged or unplugged (so the patch
>> can't make this worse). And the caller that plugged the queue should
>> unplug it anyway (if it doesn't, there's a 3ms timeout).
>
> Where were these unplug calls coming from? The block layer will
> generally only unplug when it is actually plugged, so if you are seeing
> so many unplug calls that the patch reduces overhead by as much as
> described, perhaps the callsite is buggy?
>
> -- 
> Jens Axboe

unplug is called on every wait_on_buffer (and similar calls):
__wait_on_buffer -> sync_buffer -> blk_run_address_space ->
blk_run_backing_dev -> bdi->unplug_io_fn(bdi, page);

(I'm not sure this was IBM's case, I'm just guessing; it is the most
obvious example of unplug being called repeatedly.)
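
From memory, the tail of that path looks roughly like this (a sketch of
the ~2.6.25-era helper, not quoted verbatim):

	static inline void blk_run_backing_dev(struct backing_dev_info *bdi,
					       struct page *page)
	{
		/* every waiter comes through here, regardless of
		 * whether the underlying queue is still plugged */
		if (bdi && bdi->unplug_io_fn)
			bdi->unplug_io_fn(bdi, page);
	}

For a dm device, unplug_io_fn fans the call out to all the member
queues, so with 48 disks each wait turns into 48 lock acquisitions just
to discover the queues are already unplugged.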


There is no test whether the queue is plugged, and there shouldn't be
one. If you have this situation:

dm-linear(unplugged) -> physical-disk(plugged)

then unplug should be called on dm-linear (which will call dm's unplug
method, dm_unplug_all, and that will unplug the disk). If you add the
plugged-queue test to the upper layer, you break this stacked-driver
situation completely.
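
For reference, dm's unplug method is roughly this (a sketch from
memory, not the exact source):

	static void dm_unplug_all(struct request_queue *q)
	{
		struct mapped_device *md = q->queuedata;
		struct dm_table *map = dm_get_table(md);

		if (map) {
			/* walk every underlying device and unplug its
			 * queue, whatever its current plug state */
			dm_table_unplug_all(map);
			dm_table_put(map);
		}
	}

An upper-layer "already unplugged?" check on the dm queue would skip
this call entirely and leave the physical disks plugged until the
timeout fires.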

The test for an already-plugged queue should be in the lowest-level
physical device driver, not in the upper layers.
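
That is, the unlocked check belongs in the bottom queue's own unplug
path, roughly like this (a sketch of what the patch presumably does,
since the diff itself isn't quoted here):

	void generic_unplug_device(struct request_queue *q)
	{
		/* unlocked peek: racing plug/unplug is unspecified
		 * anyway, and whoever plugged the queue will unplug it
		 * (or the 3ms timer will), so a stale read is harmless */
		if (blk_queue_plugged(q)) {
			spin_lock_irq(q->queue_lock);
			__generic_unplug_device(q);
			spin_unlock_irq(q->queue_lock);
		}
	}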

Mikulas
