Message-ID: <6604a1a1-e1cb-4d34-997e-de5a4263a68a@flourine.local>
Date: Tue, 12 Nov 2024 10:45:47 +0100
From: Daniel Wagner <dwagner@...e.de>
To: Costa Shulyupin <costa.shul@...hat.com>
Cc: ming.lei@...hat.com, Jens Axboe <axboe@...nel.dk>,
Waiman Long <longman@...hat.com>, Zefan Li <lizefan.x@...edance.com>, Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>, Michal Koutný <mkoutny@...e.com>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [RFC PATCH v1] blk-mq: isolate CPUs from hctx

On Fri, Nov 08, 2024 at 07:48:30AM +0200, Costa Shulyupin wrote:
> The housekeeping CPU masks, set up by the "isolcpus" and "nohz_full"
> boot command line options, are used at boot time to exclude selected
> CPUs from running some kernel housekeeping subsystems to minimize
> disturbance to latency-sensitive userspace applications such as DPDK.
> These options can only be changed with a reboot. This is a problem for
> containerized workloads running on OpenShift/Kubernetes, where a
> mix of low-latency and "normal" workloads can be created/destroyed
> dynamically and the number of CPUs allocated to each workload is often
> not known at boot time.
>
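
For context, this is roughly how a subsystem checks those boot-time
masks today (a minimal sketch based on include/linux/sched/isolation.h;
the wrapper name here is made up):

#include <linux/cpumask.h>
#include <linux/sched/isolation.h>

/* False for CPUs excluded via "isolcpus=managed_irq,..." */
static bool cpu_ok_for_managed_irq(int cpu)
{
        return housekeeping_test_cpu(cpu, HK_TYPE_MANAGED_IRQ);
}
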
> Cgroups allow configuring isolated_cpus at runtime.
> However, blk-mq may still use managed interrupts on the
> newly isolated CPUs.
>
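
To make the runtime side concrete, an isolated partition can be set up
from userspace along these lines (sketch only; the "rt" cgroup path is
hypothetical and is assumed to already exist with the cpuset controller
enabled):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int cg_write(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);
        ssize_t n;

        if (fd < 0)
                return -1;
        n = write(fd, val, strlen(val));
        close(fd);
        return n < 0 ? -1 : 0;
}

int main(void)
{
        /* Pin CPUs 2-3 to the child cgroup, then turn it into an
         * isolated partition; the kernel updates isolated_cpus. */
        cg_write("/sys/fs/cgroup/rt/cpuset.cpus", "2-3");
        cg_write("/sys/fs/cgroup/rt/cpuset.cpus.partition", "isolated");
        return 0;
}
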
> Rebuild hctx->cpumask considering isolated CPUs to avoid
> managed interrupts on those CPUs and reclaim non-isolated ones.
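
If I read the intent right, the rebuild boils down to something like
this (rough sketch with a made-up function name, not the actual patch):

#include <linux/blk-mq.h>
#include <linux/sched/isolation.h>

/* Keep only housekeeping CPUs in the hctx mapping. */
static void blk_mq_hctx_drop_isolated(struct blk_mq_hw_ctx *hctx)
{
        cpumask_and(hctx->cpumask, hctx->cpumask,
                    housekeeping_cpumask(HK_TYPE_MANAGED_IRQ));
}
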
As far as I understand, this doesn't address the issue that the drivers
also need to be aware of isolcpus mask changes. That means even though
the cpumask is updated in the block layer, the driver doesn't know about
it and its managed interrupts still land on the isolated CPUs.
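
For example, a typical driver computes its managed irq spread exactly
once at probe time (simplified, NVMe-style sketch; the helper name is
illustrative):

#include <linux/interrupt.h>
#include <linux/pci.h>

/* The managed affinity spread is computed once inside this call;
 * the block layer has no hook to redo it when hctx->cpumask changes
 * later. */
static int alloc_queue_irqs(struct pci_dev *pdev, unsigned int max_vecs)
{
        struct irq_affinity affd = { .pre_vectors = 1 };

        return pci_alloc_irq_vectors_affinity(pdev, 1, max_vecs,
                        PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
}

Nothing re-runs that spreading when hctx->cpumask is rebuilt later, so
the vectors keep the affinity computed at probe time until the driver
reallocates them.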