Date:	Wed, 26 Mar 2014 15:35:42 -0700 (PDT)
From:	David Lang <david@...g.hm>
To:	Andy Lutomirski <luto@...capital.net>
cc:	Andres Freund <andres@...quadrant.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Linux FS Devel <linux-fsdevel@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	lsf@...ts.linux-foundation.org,
	Wu Fengguang <fengguang.wu@...el.com>, rhaas@...razel.de
Subject: Re: [Lsf] Postgresql performance problems with IO latency, especially
 during fsync()

On Wed, 26 Mar 2014, Andy Lutomirski wrote:

>>> I'm not sure I understand the request queue stuff, but here's an idea.
>>>  The block core contains this little bit of code:
>>
>> I haven't read enough of the code yet to comment intelligently ;)
>
> My little patch doesn't seem to help.  I'm either changing the wrong
> piece of code entirely or I'm penalizing readers and writers too much.
>
> Hopefully some real block layer people can comment as to whether a
> refinement of this idea could work.  The behavior I want is for
> writeback to be limited to using a smallish fraction of the total
> request queue size -- I think that writeback should be able to enqueue
> enough requests to get decent sorting performance but not enough
> requests to prevent the io scheduler from doing a good job on
> non-writeback I/O.

The thing is, if there are no reads waiting, why not use every bit of disk I/O 
available for writing? If you can do that reliably while only using part of the 
queue, fine, but with such a restriction aren't you getting fairly close to just 
having separate queues for reading and writing?
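To make that concrete, here is a minimal userspace sketch of the policy being 
discussed. None of these names exist in the block layer; it only models "cap 
writeback at roughly a quarter of the queue (the same (nr_requests + 3) / 4 as 
in the patch quoted below), but let it use the whole queue when nothing 
synchronous is waiting":

#include <stdbool.h>
#include <stdio.h>

struct model_queue {
	unsigned int nr_requests;	/* total slots, like q->nr_requests */
	unsigned int count_sync;	/* reads and sync writes in flight */
	unsigned int count_async;	/* writeback (async) writes in flight */
};

/* Should another writeback request be admitted to the queue? */
static bool admit_writeback(const struct model_queue *q)
{
	unsigned int cap = (q->nr_requests + 3) / 4;	/* ~25%, as in the patch */

	/*
	 * If nothing synchronous is waiting, the cap only wastes disk
	 * bandwidth, so let writeback fill the whole queue.
	 */
	if (q->count_sync == 0)
		return q->count_async < q->nr_requests;

	return q->count_async < cap;
}

int main(void)
{
	struct model_queue q = { .nr_requests = 128, .count_sync = 0, .count_async = 100 };

	printf("no reads waiting, 100 writes queued: admit? %d\n", admit_writeback(&q));
	q.count_sync = 4;
	printf("4 reads waiting, 100 writes queued: admit? %d\n", admit_writeback(&q));
	return 0;
}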

> As an even more radical idea, what if there was a way to submit truly
> enormous numbers of lightweight requests, such that the queue will
> give the requester some kind of callback when the request is nearly
> ready for submission so the requester can finish filling in the
> request?  This would allow things like dm-crypt to get the benefit of
> sorting without needing to encrypt hundreds of MB of data in advance
> of having that data actually be written to the backing device.  It might also
> allow writeback to submit multiple gigabytes of writes, in arbitrarily
> large pieces, but not to need to pin pages or do whatever expensive
> things are needed until the IO actually happens.

The problem with a callback is that you then need to wait for that source to get 
the CPU and finish doing its work. What happens if that takes long enough for 
you to run out of data to write? And is it worth the extra context switches to 
bounce around when the writing process was already finished with that block?
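As a rough illustration of the idea being debated (purely a userspace model; 
none of these types or hooks are real block layer API), a "lightweight request" 
carries just enough metadata to sort on, and a prepare() callback is invoked 
right before dispatch. That callback is where dm-crypt would encrypt or 
writeback would pin pages, and it is exactly the spot where a slow submitter 
would stall the queue:

#include <stdio.h>
#include <stdlib.h>

struct light_req {
	unsigned long long sector;		/* enough to sort on */
	void (*prepare)(struct light_req *);	/* fill in data just before dispatch */
	void *data;				/* payload, NULL until prepare() runs */
};

/* qsort comparator: elevator-style ordering by sector */
static int cmp_sector(const void *a, const void *b)
{
	const struct light_req *ra = a, *rb = b;
	return (ra->sector > rb->sector) - (ra->sector < rb->sector);
}

/*
 * Example prepare hook: stands in for encrypting or pinning pages.
 * If this is slow, the dispatch loop below waits on it.
 */
static void fill_payload(struct light_req *req)
{
	req->data = malloc(4096);
}

static void dispatch(struct light_req *reqs, size_t n)
{
	qsort(reqs, n, sizeof(*reqs), cmp_sector);	/* sort while still "light" */
	for (size_t i = 0; i < n; i++) {
		reqs[i].prepare(&reqs[i]);		/* callback just before the IO */
		printf("write sector %llu, data %p\n", reqs[i].sector, reqs[i].data);
		free(reqs[i].data);
	}
}

int main(void)
{
	struct light_req reqs[] = {
		{ .sector = 900, .prepare = fill_payload },
		{ .sector = 100, .prepare = fill_payload },
		{ .sector = 500, .prepare = fill_payload },
	};

	dispatch(reqs, sizeof(reqs) / sizeof(reqs[0]));
	return 0;
}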

David Lang

> For reference, here's my patch that doesn't work well:
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 4cd5ffc..c0dedc3 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -941,11 +941,11 @@ static struct request *__get_request(struct request_list *
>        }
>
>        /*
> -        * Only allow batching queuers to allocate up to 50% over the defined
> -        * limit of requests, otherwise we could have thousands of requests
> -        * allocated with any setting of ->nr_requests
> +        * Only allow batching queuers to allocate up to 25% of the
> +        * defined limit of requests, so that non-batching queuers can
> +        * get into the queue and thus be scheduled properly.
>         */
> -       if (rl->count[is_sync] >= (3 * q->nr_requests / 2))
> +       if (rl->count[is_sync] >= (q->nr_requests + 3) / 4)
>                return NULL;
>
>        q->nr_rqs[is_sync]++;