Message-ID: <d0fd4794-c22d-910d-8287-8ae5e319b094@hisilicon.com>
Date: Mon, 22 May 2023 20:52:55 +0800
From: wangwudi <wangwudi@...ilicon.com>
To: Marc Zyngier <maz@...nel.org>
CC: <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] irqchip: gic-v3: Collection table support multi pages
On 2023/5/16 15:16, Marc Zyngier wrote:
> On Tue, 16 May 2023 03:53:06 +0100,
> wangwudi <wangwudi@...ilicon.com> wrote:
>>
>>
>>
>> On 2023/5/16 9:57, wangwudi wrote:
>>>
>>>
>>> -----Original Message-----
>>> From: Marc Zyngier [mailto:maz@...nel.org]
>>> Sent: 15 May 2023 20:45
>>> To: wangwudi <wangwudi@...ilicon.com>
>>> Cc: linux-kernel@...r.kernel.org; Thomas Gleixner <tglx@...utronix.de>
>>> Subject: Re: [PATCH] irqchip: gic-v3: Collection table support multi pages
>>>
>>> On Mon, 15 May 2023 13:10:04 +0100,
>>> wangwudi <wangwudi@...ilicon.com> wrote:
>>>>
>>>> Only one page is allocated for the collection table.
>>>> Recalculate the number of collection table pages based on the number
>>>> of CPUs.
>>>
>>> Please document *why* we should even consider this. Do you know of
>>> any existing implementation that is so large (or needs so much
>>> memory for its collections) that it would result in overflowing the
>>> collection table?
>>
>> Each CPU occupies an entry in the collection table. When there are a
>> large number of CPUs but only one page in the collection table, the
>> ITS MAPC command fails for some CPUs, and those CPUs cannot receive
>> LPI interrupts.
>>
>> For example, if GITS_BASER indicates that the page size of the
>> collection table is 4 KB and the entry size is 16 bytes, only 256
>> entries can be stored in one page. When there are more than 256 CPUs
>> (which is common in server SMP systems), the remaining CPUs cannot
>> receive LPIs.
>
> You're stating the obvious. My question was whether we were anywhere
> close to that limit on any existing, or even planned HW.
>
>> It was noticed by code review, not on actual HW.
>
> Right. So let me repeat my question: do you know of any existing or
> planned implementation that is all of the following:
>
> - using a small ITS page size
> - having large per-collection memory requirements
> - with a potentially large number of CPUs
>
> that would result in CPUs not fitting in the collection table?
>
Yes, it was noticed in internal simulation research (the arithmetic is
worked through in the small example below):
- the page size of the collection table is 4 KB
- the entry size of the collection table is 16 bytes
- with 512 CPUs
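
A trivial user-space illustration of that arithmetic (in practice the
page size and entry size are read from GITS_BASER; the values below are
just the ones from the simulation):

#include <stdio.h>

int main(void)
{
	unsigned int page_size  = 4096;	/* collection table page size (4 KB) */
	unsigned int entry_size = 16;	/* bytes per collection table entry  */
	unsigned int nr_cpus    = 512;	/* one collection entry per CPU      */

	unsigned int per_page = page_size / entry_size;			/* 256 */
	unsigned int pages = (nr_cpus + per_page - 1) / per_page;	/* 2   */

	printf("%u entries per page, %u page(s) needed for %u CPUs\n",
	       per_page, pages, nr_cpus);
	return 0;
}

With a single 4 KB page, CPUs 256..511 never get a collection entry.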
> Assuming this is the case, is the CPU numbering space so large and
> potentially sparse that it would benefit from 2 level tables instead
> of a larger single-level table?
>
Makes sense.
> Finally, assuming all the above conditions are satisfied, what
> actually populates the second level table in your patch? I don't see
> anything that does. Which makes me think that it was never properly
> tested.
>
What do you think about populating the second level table in its_cpu_init_collection, like this:
-static void its_cpu_init_collection(struct its_node *its)
+static void its_cpu_init_collection(struct its_node *its, struct its_baser *baser)
 {
 	int cpu = smp_processor_id();
 	u64 target;
@@ -3210,6 +3265,9 @@ static void its_cpu_init_collection(struct its_node *its)
 		return;
 	}
+	its_alloc_table_entry(its, baser, cpu);
+
 	/*
 	 * We now have to bind each collection to its target
 	 * redistributor.
@@ -3237,11 +3295,14 @@ static void its_cpu_init_collection(struct its_node *its)
 static void its_cpu_init_collections(void)
 {
 	struct its_node *its;
+	struct its_baser *baser;
 	raw_spin_lock(&its_lock);
-	list_for_each_entry(its, &its_nodes, entry)
-		its_cpu_init_collection(its);
+	list_for_each_entry(its, &its_nodes, entry) {
+		baser = its_get_baser(its, GITS_BASER_TYPE_COLLECTION);
+		its_cpu_init_collection(its, baser);
+	}
 	raw_spin_unlock(&its_lock);
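
For reference, this is roughly what the level-2 population amounts to
for an indirect collection baser (a simplified sketch with a made-up
function name, not the exact code; the cache maintenance needed for
non-coherent ITSes and error reporting are omitted):

static bool example_alloc_coll_entry(struct its_node *its,
				     struct its_baser *baser, u32 cpu)
{
	u32 esz = GITS_BASER_ENTRY_SIZE(baser->val);
	__le64 *table = baser->base;
	struct page *page;
	u32 idx;

	/* Flat table: only check that the cpu number fits. */
	if (!(baser->val & GITS_BASER_INDIRECT))
		return cpu < ((PAGE_SIZE << baser->order) / esz);

	/* Index of the level-1 entry covering this cpu. */
	idx = cpu / (baser->psz / esz);
	if (idx >= ((PAGE_SIZE << baser->order) / sizeof(__le64)))
		return false;

	if (!table[idx]) {
		/* Allocate a level-2 page and hook it into level 1. */
		page = alloc_pages_node(its->numa_node,
					GFP_KERNEL | __GFP_ZERO,
					get_order(baser->psz));
		if (!page)
			return false;

		table[idx] = cpu_to_le64(page_to_phys(page) | GITS_BASER_VALID);

		/* Make the level-1 update visible to the ITS. */
		dsb(sy);
	}

	return true;
}

The point is only that the level-1 entry must point at a valid level-2
page before the MAPC command for that CPU is issued; the patch reuses
the existing helper for this.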
> Thanks,
>
> M.
>