Message-ID: <3B8BC663-3B34-454D-AE79-4FCE50001D6E@didiglobal.com>
Date: Sun, 24 Nov 2024 13:44:41 +0000
From: 戴坤海 Tony Dai <daikunhai@...iglobal.com>
To: Yu Kuai <yukuai1@...weicloud.com>, "tj@...nel.org" <tj@...nel.org>,
	"josef@...icpanda.com" <josef@...icpanda.com>, "axboe@...nel.dk"
	<axboe@...nel.dk>
CC: "cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
	"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "yukuai (C)"
	<yukuai3@...wei.com>
Subject: Re: [PATCH] block: iocost: ensure hweight_inuse is at least 1

In fact, we did encounter such a special situation: the kernel printed `iocg: invalid donation weights in /a/b/c: active=1 donating=1 after=0` and then immediately panicked. I analyzed the code but could not figure out how this happened; it might be a concurrency issue or some other hidden bug.

Our kernel is not the latest, but it includes the patch edaa26334c117a584add6053f48d63a988d25a6e (iocost: Fix divide-by-zero on donation from low hweight cgroup).
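
For concreteness, below is a minimal userspace sketch of the arithmetic (not the kernel code: plain division stands in for div64_u64, and WEIGHT_ONE and gamma here are only illustrative stand-ins). Plugging the reported values active=1, donating=1, after=0 into the b' formula from transfer_surpluses() gives 0, and a zero hweight_inuse on an inner node would then appear as the divisor when its children are processed:

#include <stdio.h>
#include <stdint.h>

/* Stand-ins for the kernel definitions, for illustration only. */
#define WEIGHT_ONE (1u << 16)
#define DIV64_U64_ROUND_UP(ll, d) \
	({ uint64_t _tmp = (d); ((ll) + _tmp - 1) / _tmp; })

int main(void)
{
	/* Values from the reported message: active=1 donating=1 after=0 */
	uint32_t hweight_active = 1, hweight_donating = 1;
	uint32_t hweight_after_donation = 0;
	uint64_t gamma = WEIGHT_ONE;	/* any value; it is multiplied by 0 below */

	/* b' = gamma * b_f + b_t', as in transfer_surpluses() */
	uint64_t hweight_inuse = DIV64_U64_ROUND_UP(
			gamma * (hweight_active - hweight_donating),
			WEIGHT_ONE) + hweight_after_donation;

	printf("hweight_inuse = %llu\n", (unsigned long long)hweight_inuse);

	/*
	 * If this iocg is an inner node, a later iteration computes
	 * w' = s' * b' / b'_p with this value as the divisor, hence
	 * the divide-by-zero once it reaches 0.
	 */
	return 0;
}

Built with gcc (the macro relies on a statement expression), this prints hweight_inuse = 0.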

On 2024/11/22 16:16, Yu Kuai <yukuai1@...weicloud.com> wrote:


Hi,


On 2024/11/22 15:26, Kunhai Dai wrote:
> The hweight_inuse calculation in transfer_surpluses() could potentially
> result in a value of 0, which would lead to division by zero errors in
> subsequent calculations that use this value as a divisor.
> 
> Signed-off-by: Kunhai Dai <daikunhai@...iglobal.com>
> ---
> block/blk-iocost.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/block/blk-iocost.c b/block/blk-iocost.c
> index 384aa15e8260..65cdb55d30cc 100644
> --- a/block/blk-iocost.c
> +++ b/block/blk-iocost.c
> @@ -1999,9 +1999,10 @@ static void transfer_surpluses(struct list_head *surpluses, struct ioc_now *now)
>  		parent = iocg->ancestors[iocg->level - 1];
>  
>  		/* b' = gamma * b_f + b_t' */
> -		iocg->hweight_inuse = DIV64_U64_ROUND_UP(
> -			(u64)gamma * (iocg->hweight_active - iocg->hweight_donating),
> -			WEIGHT_ONE) + iocg->hweight_after_donation;
> +		iocg->hweight_inuse = max_t(u64, 1,
> +			DIV64_U64_ROUND_UP(
> +			(u64)gamma * (iocg->hweight_active - iocg->hweight_donating),
> +			WEIGHT_ONE) + iocg->hweight_after_donation);


I'm confused: how could DIV64_U64_ROUND_UP() end up less than 1?


#define DIV64_U64_ROUND_UP(ll, d) \
({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })


AFAIK, the only case where that could happen is when
iocg->hweight_active - iocg->hweight_donating is 0. I don't see
how an active iocg could end up donating all of its hweight; if this
really happens, perhaps the better solution is to avoid that case in
the first place.
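
(A small userspace check of the point above, added for illustration and not part of the thread: with the ROUND_UP definition quoted earlier, any non-zero dividend yields at least 1, so the whole expression can only reach 0 when the dividend is 0, i.e. hweight_active == hweight_donating, and hweight_after_donation is also 0. That matches the reported "active=1 donating=1 after=0" state.)

#include <assert.h>
#include <stdint.h>

#define WEIGHT_ONE (1u << 16)	/* illustrative stand-in for the kernel constant */
#define DIV64_U64_ROUND_UP(ll, d) \
	({ uint64_t _tmp = (d); ((ll) + _tmp - 1) / _tmp; })

int main(void)
{
	/* Any non-zero dividend rounds up to at least 1 ... */
	assert(DIV64_U64_ROUND_UP(1ull, WEIGHT_ONE) == 1);
	assert(DIV64_U64_ROUND_UP((uint64_t)WEIGHT_ONE - 1, WEIGHT_ONE) == 1);

	/* ... so only a zero dividend (active == donating) together with a
	 * zero hweight_after_donation can make the sum 0. */
	assert(DIV64_U64_ROUND_UP(0ull, WEIGHT_ONE) == 0);
	return 0;
}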


Thanks,
Kuai


> 
>  		/* w' = s' * b' / b'_p */
>  		inuse = DIV64_U64_ROUND_UP(
> 




