Message-ID: <28688fa8-718b-4ee6-8417-822efac8b603@arm.com>
Date: Tue, 14 Jan 2025 13:29:04 +0000
From: Christian Loehle <christian.loehle@....com>
To: Florian Schmaus <flo@...kplace.eu>, Ingo Molnar <mingo@...hat.com>,
 Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
 Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
 Kent Overstreet <kent.overstreet@...ux.dev>
Cc: linux-bcachefs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] bcachefs: set rebalance thread to SCHED_BATCH and
 nice 19

On 1/14/25 12:47, Florian Schmaus wrote:
> While the rebalance thread is isually not compute bound, it does cause

s/isually/usually/

> a considerable amount of I/O. Since "reducing" the nice level from 0
> to 19, also implicitly reduces the threads best-effort I/O scheduling
> class level from 4 to 7, the reblance thread's I/O will be depriotized

s/depriotized/deprioritized/

> over normal I/O.
> 
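For context, the 4 -> 7 above comes from the block layer's fallback when a
task has no explicit I/O priority set: the best-effort level is derived from
the nice value. Roughly (paraphrasing task_nice_ioprio() from
include/linux/ioprio.h from memory, so treat the exact helper name/location
as an assumption):

	/*
	 * Fallback best-effort level for a task without an explicit ioprio:
	 * nice 0  -> (0 + 20) / 5  = 4
	 * nice 19 -> (19 + 20) / 5 = 7
	 */
	static inline int task_nice_ioprio(struct task_struct *task)
	{
		return (task_nice(task) + 20) / 5;
	}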
> Furthermore, we set the rebalance thread's scheduling class to BATCH,
> which means that it will potentially receive a higher scheduling
> latency. Making room for threads that need a low
> schedulinglatency (e.g., interactive onces).

s/schedulinglatency/scheduling latency/ (and s/onces/ones/)

I know nothing about bcachefs internals, but could this also be a problem?
The rebalance thread might then not run for O(seconds) or so?
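FWIW I haven't looked at how the patch actually switches the thread over; one
way to do it from inside the kernel would be something like the sketch below
(sched_setattr_nocheck() is my assumption here, not necessarily what the patch
uses):

	#include <linux/sched.h>
	#include <uapi/linux/sched/types.h>	/* struct sched_attr */

	/* Hypothetical helper: move a kthread to SCHED_BATCH with nice 19. */
	static void rebalance_set_batch(struct task_struct *p)
	{
		struct sched_attr attr = {
			.sched_policy = SCHED_BATCH,
			.sched_nice   = 19,
		};

		/* No permission checks needed for our own kernel thread. */
		sched_setattr_nocheck(p, &attr);
	}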

