Message-ID: <20240520154425.GB1104@lst.de>
Date: Mon, 20 May 2024 17:44:25 +0200
From: Christoph Hellwig <hch@....de>
To: Mike Snitzer <snitzer@...nel.org>
Cc: Christoph Hellwig <hch@....de>, Theodore Ts'o <tytso@....edu>,
	dm-devel@...ts.linux.dev, fstests@...r.kernel.org,
	linux-ext4@...r.kernel.org, regressions@...ts.linux.dev,
	linux-block@...r.kernel.org
Subject: Re: dm: use queue_limits_set

On Mon, May 20, 2024 at 11:39:14AM -0400, Mike Snitzer wrote:
> That's fair.  My criticism was more about having to fix up DM targets
> to cope with the new normal of max_discard_sectors being set as a
> function of max_hw_discard_sectors and max_user_discard_sectors.
> 
> With stacked devices in particular it is _very_ hard for the user to
> know whether the max discard limit they impose is correct.

The user forcing a limit is always very sketchy, which is why I'm
not a fan of it.
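
For context, the derivation in blk_validate_limits() boils down to
roughly this (paraphrased, not the literal upstream code):

	/* max_user_discard_sectors defaults to UINT_MAX, i.e. no user cap */
	lim->max_discard_sectors =
		min(lim->max_hw_discard_sectors, lim->max_user_discard_sectors);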

> Yeah, but my concern is that if a user sets a value that is too low
> it'll break targets like DM thinp (which Ted reported).  So forcibly
> setting both to indirectly set the required max_discard_sectors seems
> necessary.

dm-thin requiring a minimum discard size is rather odd.
Is this just a debug assert, or is there a real technical reason
for it?  If so we can introduce a knob to force a minimum size or
disable setting the value from userspace entirely.
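
Such a knob could be as simple as a validation-time floor, something
like this (entirely hypothetical sketch, min_discard_sectors does not
exist upstream):

	/* hypothetical: driver-supplied floor, clamped at validation time */
	if (lim->max_user_discard_sectors &&
	    lim->max_user_discard_sectors < lim->min_discard_sectors)
		lim->max_user_discard_sectors = lim->min_discard_sectors;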

> diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
> index 4793ad2aa1f7..c196f39579af 100644
> --- a/drivers/md/dm-thin.c
> +++ b/drivers/md/dm-thin.c
> @@ -4497,7 +4499,8 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
>  
>  	if (pool->pf.discard_enabled) {
>  		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
> -		limits->max_discard_sectors = pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
> +		limits->max_hw_discard_sectors = limits->max_user_discard_sectors =
> +			pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
>  	}

Drivers really have no business setting max_user_discard_sectors,
the whole point of the field is to separate device/driver capabilities
from user policy.  So if dm-thin really has no way of handling
smaller discards, we need to ensure they can't be set.
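
That is, thin_io_hints() would only touch the hardware limit, along
these lines (untested sketch of the alternative to the diff above):

	if (pool->pf.discard_enabled) {
		limits->discard_granularity =
			pool->sectors_per_block << SECTOR_SHIFT;
		limits->max_hw_discard_sectors =
			pool->sectors_per_block * BIO_PRISON_MAX_RANGE;
	}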

I'm also kinda curious what actually sets a user limit in Ted's case
as that feels weird.
