Message-ID: <cfc3b77b-7e02-d5b7-382c-00cccc4e2914@nvidia.com>
Date: Tue, 31 Jan 2023 10:45:38 -0600
From: Shanker Donthineni <sdonthineni@...dia.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Marc Zyngier <maz@...nel.org>, Michael Walle <michael@...le.cc>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Hans de Goede <hdegoede@...hat.com>,
Wolfram Sang <wsa+renesas@...g-engineering.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] genirq: Use the maple tree for IRQ descriptors
management
On 1/31/23 03:52, Thomas Gleixner wrote:
>
> On Sun, Jan 29 2023 at 18:57, Shanker Donthineni wrote:
>> The current implementation uses a static bitmap and a radix tree
>> to manage IRQ allocation and to store irq_desc pointers, respectively.
>> However, the size of the bitmap is constrained by the build-time
>> macro MAX_SPARSE_IRQS, which may not be sufficient to support
>> high-end servers, particularly those with GICv4.1 hardware, which
>> require a large interrupt space to cover LPIs and vSGIs.
>>
>> The maple tree is a highly efficient data structure for storing
>> non-overlapping ranges and can handle a large number of entries,
>> up to ULONG_MAX. It can be utilized for both storing IRQD and
>
> IRQD ??. Please write it out: interrupt descriptors
>
> Changelogs have no space constraints and there is zero value to
> introduce unreadable acronyms.
>
>> static DEFINE_MUTEX(sparse_irq_lock);
>> -static DECLARE_BITMAP(allocated_irqs, MAX_SPARSE_IRQS);
>> +static struct maple_tree sparse_irqs = MTREE_INIT_EXT(sparse_irqs,
>> +                                        MT_FLAGS_ALLOC_RANGE |
>> +                                        MT_FLAGS_LOCK_EXTERN |
>> +                                        MT_FLAGS_USE_RCU, sparse_irq_lock);
>
> Nit. Can we please format this properly:
>
> static struct maple_tree sparse_irqs = MTREE_INIT_EXT(sparse_irqs,
>                                                        MT_FLAGS_ALLOC_RANGE |
>                                                        MT_FLAGS_LOCK_EXTERN |
>                                                        MT_FLAGS_USE_RCU,
>                                                        sparse_irq_lock);
>
> Other than that this looks really good.
>
I'll update this in the v2 patch.
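
For reference, below is a rough sketch of how the allocation and lookup
helpers can sit on top of the maple tree API. This is illustrative only:
the helper names and the MAX_SPARSE_IRQS bound are placeholders and not
necessarily what v2 will contain. The modifying helpers assume
sparse_irq_lock is held by the callers, matching MT_FLAGS_LOCK_EXTERN
in the declaration above.

#include <linux/irq.h>
#include <linux/maple_tree.h>

/* Find the lowest free range of 'cnt' interrupts at or above 'from'. */
static int irq_find_free_area(unsigned int from, unsigned int cnt)
{
        MA_STATE(mas, &sparse_irqs, 0, 0);

        if (mas_empty_area(&mas, from, MAX_SPARSE_IRQS, cnt))
                return -ENOSPC;
        return mas.index;
}

/* Record the descriptor for 'irq'; sparse_irq_lock is held. */
static void irq_insert_desc(unsigned int irq, struct irq_desc *desc)
{
        MA_STATE(mas, &sparse_irqs, irq, irq);

        WARN_ON(mas_store_gfp(&mas, desc, GFP_KERNEL) != 0);
}

/* Remove the descriptor for 'irq'; sparse_irq_lock is held. */
static void delete_irq_desc(unsigned int irq)
{
        MA_STATE(mas, &sparse_irqs, irq, irq);

        WARN_ON(mas_erase(&mas) == NULL);
}

/* Lookup side: an RCU-safe load keyed by the Linux IRQ number. */
static struct irq_desc *example_irq_to_desc(unsigned int irq)
{
        return mtree_load(&sparse_irqs, irq);
}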
Thanks,
Shanker