Open Source and information security mailing list archives
 
Message-ID: <Z0f05sdTiL8kW9U8@casper.infradead.org>
Date: Thu, 28 Nov 2024 04:43:18 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: Bharata B Rao <bharata@....com>, linux-block@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	linux-mm@...ck.org, nikunj@....com, vbabka@...e.cz,
	david@...hat.com, akpm@...ux-foundation.org, yuzhao@...gle.com,
	axboe@...nel.dk, viro@...iv.linux.org.uk, brauner@...nel.org,
	jack@...e.cz, joshdon@...gle.com, clm@...a.com
Subject: Re: [RFC PATCH 0/1] Large folios in block buffered IO path

On Thu, Nov 28, 2024 at 05:22:41AM +0100, Mateusz Guzik wrote:
> This means that the folio waiting stuff has poor scalability, but
> without digging into it I have no idea what can be done. The easy way

Actually the easy way is to change:

#define PAGE_WAIT_TABLE_BITS 8

to a larger number.
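For reference, here is a hypothetical user-space sketch of how that table
is used (the real code lives in mm/filemap.c; the struct and hash details
here are stand-ins): every folio hashes to one of 2^PAGE_WAIT_TABLE_BITS
shared waitqueues, so raising the bit count spreads unrelated folios
across more queues and shortens each list.

```c
#include <stdint.h>

#define PAGE_WAIT_TABLE_BITS 8          /* the knob to turn up */
#define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)

struct wait_queue_head { int nwaiters; };   /* stand-in for wait_queue_head_t */

static struct wait_queue_head folio_wait_table[PAGE_WAIT_TABLE_SIZE];

/* Every folio hashing to the same slot shares one waitqueue; the
 * multiplier is the 64-bit golden-ratio constant in the spirit of
 * the kernel's hash_64(). */
static struct wait_queue_head *folio_waitqueue(const void *folio)
{
	uint64_t h = (uint64_t)(uintptr_t)folio * 0x61C8864680B583EBULL;
	return &folio_wait_table[h >> (64 - PAGE_WAIT_TABLE_BITS)];
}
```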

> out would be to speculatively spin before buggering off, but one would
> have to check what happens in real workloads -- presumably the lock
> owner can be off cpu for a long time (I presume there is no way to
> store the owner).

So ...

 - There's no space in struct folio to put a rwsem.
 - But we want to be able to sleep waiting for a folio to (eg) do I/O.

This is the solution we have.  For the read case, there are three
important bits in folio->flags to pay attention to:

 - PG_locked.  This is held during the read.
 - PG_uptodate.  This is set if the read succeeded.
 - PG_waiters.  This is set if anyone is waiting for PG_locked. [*]

The first thread comes along, allocates a folio, locks it, inserts
it into the mapping.
The second thread comes along, finds the folio, sees it's !uptodate,
sets the waiter bit, adds itself to the waitqueue.
The third thread, ditto.
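The waiter side can be sketched in a few lines.  This is a hypothetical
model, not the kernel code (the real path goes through folio_lock() and
folio_wait_bit_common() in mm/filemap.c), with the actual sleep on the
hashed waitqueue elided and illustrative flag bit positions:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* flags stands in for folio->flags; bit values are illustrative. */
enum { PG_locked = 1 << 0, PG_uptodate = 1 << 1, PG_waiters = 1 << 2 };

struct folio { _Atomic unsigned flags; };

/* Second/third-thread path: folio found, but read not yet complete. */
static bool folio_try_wait(struct folio *f)
{
	unsigned v = atomic_load(&f->flags);
	if (!(v & PG_uptodate)) {
		/* Advertise a sleeper *before* re-checking PG_locked, so
		 * the completion path is guaranteed to see PG_waiters. */
		atomic_fetch_or(&f->flags, PG_waiters);
		if (atomic_load(&f->flags) & PG_locked)
			return true;  /* would queue on folio_waitqueue()
					 and sleep */
	}
	return false;                 /* read already done, no wait */
}
```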
The read completes.  In interrupt or maybe softirq context, the
BIO completion sets the uptodate bit, clears the locked bit and tests
the waiter bit.  Since the waiter bit is set, it walks the waitqueue
looking for waiters which match the locked bit and folio (see
folio_wake_bit()).
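The completion step can be sketched the same way (cf. the kernel's
folio_end_read(), which folds the set/clear/test into a single atomic
XOR; flag bit positions here are again illustrative, and the flag
definitions are repeated so the sketch stands alone):

```c
#include <stdatomic.h>
#include <stdbool.h>

enum { PG_locked = 1 << 0, PG_uptodate = 1 << 1, PG_waiters = 1 << 2 };

struct folio { _Atomic unsigned flags; };

static bool folio_end_read_sketch(struct folio *f)
{
	/* The caller holds PG_locked and the read succeeded, so PG_locked
	 * is set and PG_uptodate clear: XOR-ing both bits sets uptodate
	 * and clears locked in one atomic operation. */
	unsigned old = atomic_fetch_xor(&f->flags, PG_locked | PG_uptodate);
	/* Only walk the shared waitqueue if a sleeper advertised itself. */
	return old & PG_waiters;   /* true => go wake waiters, as in
				      folio_wake_bit() */
}
```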

So there's not _much_ of a thundering herd problem here.  Most likely
the waitqueue is just too damn long with a lot of threads waiting for
I/O.

[*] oversimplification; don't worry about it.
