Message-ID: <20170206170304.GF24601@localhost.localdomain>
Date: Mon, 6 Feb 2017 12:03:05 -0500
From: Keith Busch <keith.busch@...el.com>
To: Christoph Hellwig <hch@....de>
Cc: Joe Korty <joe.korty@...r.com>, linux-nvme@...ts.infradead.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/6] genirq: allow assigning affinity to present but not
online CPUs
On Sun, Feb 05, 2017 at 05:40:23PM +0100, Christoph Hellwig wrote:
> Hi Joe,
>
> On Fri, Feb 03, 2017 at 08:58:09PM -0500, Joe Korty wrote:
> > IIRC, some years ago I ran across a customer system where
> > the #cpus_present was twice as big as #cpus_possible.
> >
> > Hyperthreading was turned off in the BIOS so it was not
> > entirely out of line for the extra cpus to be declared
> > present, even though none of them would ever be available
> > for use.
>
> This sounds like a system we should quirk around instead of optimizing
> for it. Unless I totally misunderstand the idea behind cpu_possible
> and cpu_present.
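For reference, the kernel's documented expectation is that cpu_online_mask is a
subset of cpu_present_mask, which is itself a subset of cpu_possible_mask, which
is why firmware reporting more present than possible CPUs reads as something to
quirk around. A minimal, hypothetical sanity check (not part of this patch
series) that would flag the system Joe describes:

	/*
	 * Illustrative only: warn if the usual mask hierarchy
	 * online <= present <= possible does not hold.
	 */
	#include <linux/cpumask.h>
	#include <linux/kernel.h>

	static void check_cpu_mask_sanity(void)
	{
		WARN_ON(!cpumask_subset(cpu_online_mask, cpu_present_mask));
		WARN_ON(!cpumask_subset(cpu_present_mask, cpu_possible_mask));
	}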
Can we use just the online CPUs and add a new hot-cpu notifier to the nvme
driver to free/reallocate resources as needed? We were doing that before
blk-mq. Now blk-mq can change the number of hardware contexts on a live
queue, so we can reintroduce that behavior in nvme and only allocate what
we need.
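Something along these lines is a rough sketch of that idea, assuming a
driver-private tag set pointer and using the cpuhp dynamic-state API; it is
not the actual nvme change, just an illustration of resizing hardware
contexts from hotplug callbacks:

	/*
	 * Rough sketch only: resize the hw contexts of an assumed nvme tag
	 * set whenever a CPU comes online or goes offline.  The tag-set
	 * pointer and the registration point in the driver are assumptions
	 * for illustration.
	 */
	#include <linux/cpu.h>
	#include <linux/cpuhotplug.h>
	#include <linux/blk-mq.h>

	static struct blk_mq_tag_set *nvme_tagset;	/* assumed driver-private pointer */

	static int nvme_cpu_up_down(unsigned int cpu)
	{
		/* Match the number of hardware contexts to the online CPU count. */
		blk_mq_update_nr_hw_queues(nvme_tagset, num_online_cpus());
		return 0;
	}

	static int nvme_register_hotcpu(struct blk_mq_tag_set *set)
	{
		int ret;

		nvme_tagset = set;
		/* Dynamic hotplug state: same callback on CPU online and offline. */
		ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "nvme:hctx_resize",
					nvme_cpu_up_down, nvme_cpu_up_down);
		return ret < 0 ? ret : 0;
	}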