Message-ID: <alpine.DEB.2.21.1810301824570.5984@nanos.tec.linutronix.de>
Date: Tue, 30 Oct 2018 18:25:47 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Jens Axboe <axboe@...nel.dk>
cc: Keith Busch <keith.busch@...el.com>, linux-block@...r.kernel.org,
linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 11/14] irq: add support for allocating (and affinitizing)
sets of IRQs
Jens,
On Tue, 30 Oct 2018, Jens Axboe wrote:
> On 10/30/18 10:02 AM, Keith Busch wrote:
> > pci_alloc_irq_vectors_affinity() starts at the provided max_vecs. If
> > that doesn't work, it will iterate down to min_vecs without returning to
> > the caller. The caller doesn't have a chance to adjust its sets between
> > iterations when you provide a range.
> >
> > The 'masks' overrun problem happens if the caller provides min_vecs
> > as a smaller value than the sum of the sets (plus any reserved).
> >
> > If it's up to the caller to ensure that doesn't happen, then min and
> > max must both be the same value, and that value must also be the same as
> > the set sum + reserved vectors. The range just becomes redundant since
> > it is already bounded by the set.
> >
> > Using the nvme example, it would need something like this to prevent the
> > 'masks' overrun:
>
> OK, now I hear what you are saying. And you are right, the caller needs
> to provide minvec == maxvec for sets, and then have a loop around that
> to adjust as needed.
But then we should enforce it in the core code, right?
Thanks,
tglx
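
For reference, a minimal sketch of the pattern Jens describes above
(minvec == maxvec equal to the set sum plus reserved vectors, with the
caller looping to shrink its sets on failure). This is illustrative only,
not the actual nvme conversion: it assumes the nr_sets/sets members added
to struct irq_affinity by this series, and the function name, set sizes
and shrink policy are made up.

    /*
     * Illustrative sketch only -- not the actual nvme patch. Assumes the
     * nr_sets/sets members of struct irq_affinity from this series.
     */
    #include <linux/interrupt.h>
    #include <linux/kernel.h>
    #include <linux/pci.h>

    static int hypothetical_alloc_queue_irqs(struct pci_dev *pdev)
    {
            int irq_sets[2];
            struct irq_affinity affd = {
                    .pre_vectors = 1,       /* reserved vector, e.g. admin queue */
                    .nr_sets     = ARRAY_SIZE(irq_sets),
                    .sets        = irq_sets,
            };
            int nr_read = 4, nr_write = 4;  /* made-up initial split */

            for (;;) {
                    int total, ret;

                    irq_sets[0] = nr_read;
                    irq_sets[1] = nr_write;
                    total = irq_sets[0] + irq_sets[1] + affd.pre_vectors;

                    /*
                     * minvec == maxvec == set sum + reserved, so the core
                     * cannot iterate down on its own and overrun 'masks'.
                     */
                    ret = pci_alloc_irq_vectors_affinity(pdev, total, total,
                                    PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY,
                                    &affd);
                    if (ret >= 0)
                            return ret;

                    /* Shrink the sets and retry, until nothing is left to trim. */
                    if (ret != -ENOSPC || nr_read + nr_write <= 2)
                            return ret;
                    if (nr_read > 1)
                            nr_read--;
                    if (nr_write > 1)
                            nr_write--;
            }
    }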