Message-ID: <11b1c67e-ab07-5eed-0e2d-8c85f6a89725@huawei.com>
Date: Thu, 10 Nov 2022 19:39:56 +0800
From: Xu Kuohai <xukuohai@...wei.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
CC: <bpf@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>
Subject: Re: [PATCH bpf-next v3] bpf: Initialize same number of free nodes for
each pcpu_freelist
On 11/10/2022 2:23 PM, Xu Kuohai wrote:
> On 11/10/2022 12:05 PM, Andrii Nakryiko wrote:
>> On Wed, Nov 9, 2022 at 7:33 PM Xu Kuohai <xukuohai@...wei.com> wrote:
>>>
>>> pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
>>> free nodes for some CPUs, then possibly one CPU with fewer nodes, and
>>> leaves the remaining CPUs with 0 nodes. For example, when nr_elems == 256
>>> and num_possible_cpus() == 32, CPUs 0~27 each get 9 free nodes, CPU 28
>>> gets 4 free nodes, and CPUs 29~31 get 0 free nodes, while in fact each
>>> CPU should get 8 nodes equally.
>>>
>>> This patch first initializes nr_elems / num_possible_cpus() free nodes
>>> for each CPU, then distributes the remaining free nodes one per CPU
>>> until none are left.
>>>
>>> Signed-off-by: Xu Kuohai <xukuohai@...wei.com>
>>> Acked-by: Yonghong Song <yhs@...com>
>>> ---
>>> v3: Simplify code as suggested by Andrii
>>> v2: Update commit message and add Yonghong's ack
>>> ---
>>> kernel/bpf/percpu_freelist.c | 27 ++++++++++++++-------------
>>> 1 file changed, 14 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
>>> index b6e7f5c5b9ab..bd60070c079f 100644
>>> --- a/kernel/bpf/percpu_freelist.c
>>> +++ b/kernel/bpf/percpu_freelist.c
>>> @@ -100,22 +100,23 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>>> u32 nr_elems)
>>> {
>>> struct pcpu_freelist_head *head;
>>> - int i, cpu, pcpu_entries;
>>> + unsigned int cpu, cpu_idx, i, j, n, m;
>>>
>>> - pcpu_entries = nr_elems / num_possible_cpus() + 1;
>>> - i = 0;
>>> + n = nr_elems / num_possible_cpus();
>>> + m = nr_elems % num_possible_cpus();
>>> +
>>> + cpu_idx = 0;
>>>
>>> for_each_possible_cpu(cpu) {
>>> -again:
>>> - head = per_cpu_ptr(s->freelist, cpu);
>>> - /* No locking required as this is not visible yet. */
>>> - pcpu_freelist_push_node(head, buf);
>>> - i++;
>>> - buf += elem_size;
>>> - if (i == nr_elems)
>>> - break;
>>> - if (i % pcpu_entries)
>>> - goto again;
>>> + j = min(n + (cpu_idx < m ? 1 : 0), nr_elems);
>>
>> why the min() here?
>>
>
> to avoid out-of-bounds access in case nr_elems is less than the total
> number of CPUs. It does not seem strictly necessary, but the original code
> guards against this case as well, so I kept the logic.
>
Sorry, there is actually no out-of-bounds here: when nr_elems is less than the
total number of CPUs, n == 0 and m == nr_elems, so j is already 0 for
cpu_idx >= m and the min() is redundant. Will post v4.
>>> + for (i = 0; i < j; i++) {
>>> + head = per_cpu_ptr(s->freelist, cpu);
>>> + /* No locking required as this is not visible yet. */
>>> + pcpu_freelist_push_node(head, buf);
>>> + buf += elem_size;
>>> + }
>>> + nr_elems -= j;
>>> + cpu_idx++;
>>> }
>>> }
>>>
>>> --
>>> 2.30.2
>>>