Message-ID: <20240902215705.GF26776@twin.jikos.cz>
Date: Mon, 2 Sep 2024 23:57:05 +0200
From: David Sterba <dsterba@...e.cz>
To: Luca Stefani <luca.stefani.ge1@...il.com>
Cc: Jens Axboe <axboe@...nel.dk>, Chris Mason <clm@...com>,
	Josef Bacik <josef@...icpanda.com>, David Sterba <dsterba@...e.com>,
	linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-btrfs@...r.kernel.org
Subject: Re: [PATCH v2 1/3] block: Export bio_discard_limit

On Mon, Sep 02, 2024 at 10:56:10PM +0200, Luca Stefani wrote:
> It can be used to calculate the sector size limit of each
> discard call allowing filesystem to implement their own
> chunked discard logic with customized behavior, for example
> cancellation due to signals.

Maybe to add context for block layer people why we want to export this:

The fs trim loops over ranges and sends discard requests; some ranges
can be large, so it's all transparently handled by blkdev_issue_discard()
and processed in smaller chunks.

We need to insert checks for cancellation (or suspend) requests into
the loop. Rather than setting an arbitrary chunk length at the
filesystem level, I've suggested using bio_discard_limit(), assuming it
will result in an optimal number of IO requests. Then we don't have to
guess whether 1G or 10G is the right value, unnecessarily increasing
the number of requests when the device could handle larger ranges.
