Message-ID: <AM0PR04MB448161D9ED7D152AD58B53E9887B0@AM0PR04MB4481.eurprd04.prod.outlook.com>
Date: Tue, 26 Feb 2019 00:09:28 +0000
From: Peng Fan <peng.fan@....com>
To: "dennis@...nel.org" <dennis@...nel.org>
CC: "tj@...nel.org" <tj@...nel.org>, "cl@...ux.com" <cl@...ux.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"van.freenix@...il.com" <van.freenix@...il.com>
Subject: RE: [RFC] percpu: decrease pcpu_nr_slots by 1
Hi Dennis,
> -----Original Message-----
> From: dennis@...nel.org [mailto:dennis@...nel.org]
> Sent: February 25, 2019 23:24
> To: Peng Fan <peng.fan@....com>
> Cc: tj@...nel.org; cl@...ux.com; linux-mm@...ck.org;
> linux-kernel@...r.kernel.org; van.freenix@...il.com
> Subject: Re: [RFC] percpu: decrease pcpu_nr_slots by 1
>
> On Sun, Feb 24, 2019 at 09:17:08AM +0000, Peng Fan wrote:
> > Entry pcpu_slot[pcpu_nr_slots - 2] is wasted with the current code
> > logic. pcpu_nr_slots is calculated as `__pcpu_size_to_slot(size) + 2`.
> > Take pcpu_unit_size as 1024 for example: __pcpu_size_to_slot() returns
> > max(11 - PCPU_SLOT_BASE_SHIFT + 2, 1), which is 8, so pcpu_nr_slots
> > will be 10.
> >
> > A chunk with free_bytes 1024 will be linked into pcpu_slot[9], but
> > free_bytes in the range [512, 1024) will be linked into pcpu_slot[7],
> > because `fls(512) - PCPU_SLOT_BASE_SHIFT + 2` is 7. So pcpu_slot[8]
> > has no chance to be used.
> >
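To make the arithmetic above concrete, here is a small userspace sketch
of the slot math (my own reconstruction for illustration, assuming
PCPU_SLOT_BASE_SHIFT = 5 and emulating the kernel's fls() with
__builtin_clz; it is not the kernel code itself):

#include <stdio.h>

#define PCPU_SLOT_BASE_SHIFT 5

/* 1-based index of the highest set bit, like the kernel's fls() */
static int fls(int x)
{
	return x ? 32 - __builtin_clz((unsigned int)x) : 0;
}

static int max(int a, int b) { return a > b ? a : b; }

static int size_to_slot(int size)
{
	return max(fls(size) - PCPU_SLOT_BASE_SHIFT + 2, 1);
}

int main(void)
{
	/* pcpu_unit_size = 1024: nr_slots = 8 + 2 = 10.  A fully free
	 * chunk goes to the last slot, 9, while free_bytes in
	 * [512, 1024) land in slot 7, so slot 8 is unreachable. */
	printf("nr_slots=%d slot(512)=%d slot(1023)=%d\n",
	       size_to_slot(1024) + 2, size_to_slot(512),
	       size_to_slot(1023));
	return 0;
}

This prints "nr_slots=10 slot(512)=7 slot(1023)=7", matching the
numbers above.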
> > According to the comments for PCPU_SLOT_BASE_SHIFT, 1~31 bytes share
> > the same slot, and PCPU_SLOT_BASE_SHIFT is defined as 5. But actually
> > 1~15 bytes share slot 1 (if we do not take PCPU_MIN_ALLOC_SIZE into
> > consideration) and 16~31 bytes share slot 2. Calculation as below:
> > highbit = fls(16) -> highbit = 5
> > max(5 - PCPU_SLOT_BASE_SHIFT + 2, 1) equals 2, not 1.
> >
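Spelling out the same formula for the small sizes: fls(1) = 1 through
fls(15) = 4 all give max(highbit - 5 + 2, 1) = 1, while fls(16) through
fls(31) give highbit = 5 and therefore slot 2. With the `+ 1` variant
proposed below, every size from 1 to 31 collapses into slot 1.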
> > This patch decreases pcpu_nr_slots to avoid wasting one slot and lets
> > [PCPU_MIN_ALLOC_SIZE, 31] really share the same slot.
> >
> > Signed-off-by: Peng Fan <peng.fan@....com>
> > ---
> >
> > V1:
> > I am not very sure whether it is intended to leave the slot there.
> >
> > mm/percpu.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 8d9933db6162..12a9ba38f0b5 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -219,7 +219,7 @@ static bool pcpu_addr_in_chunk(struct pcpu_chunk *chunk, void *addr)
> >  static int __pcpu_size_to_slot(int size)
> >  {
> >  	int highbit = fls(size);	/* size is in bytes */
> > -	return max(highbit - PCPU_SLOT_BASE_SHIFT + 2, 1);
> > +	return max(highbit - PCPU_SLOT_BASE_SHIFT + 1, 1);
> >  }
>
> Honestly, it may be better to just have [1-16) and [16-31) be separate.
> I'm working on a change to this area, so I may change what's going on
> here.
>
> >
> >  static int pcpu_size_to_slot(int size)
> > @@ -2145,7 +2145,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> >  	 * Allocate chunk slots.  The additional last slot is for
> >  	 * empty chunks.
> >  	 */
> > -	pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
> > +	pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 1;
> >  	pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
> >  				   SMP_CACHE_BYTES);
> >  	for (i = 0; i < pcpu_nr_slots; i++)
> > --
> > 2.16.4
> >
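Putting both hunks together for the 1024-byte example: after the change,
__pcpu_size_to_slot(1024) = max(11 - 5 + 1, 1) = 7, so pcpu_nr_slots
becomes 8; a fully free chunk sits in the last slot, 7, and free_bytes
in [512, 1024) map to max(10 - 5 + 1, 1) = 6, leaving no unreachable
slot in between.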
>
> This is a tricky change. The nice thing about keeping the additional
> slot around is that it ensures a distinction between a completely empty
> chunk and a nearly empty chunk. It happens to be that the logic creates
> power of 2 chunks which ends up being an additional slot anyway.

Were any issues seen before when the unused slot was not kept? From
reading the code and the git history I could not find any information.
I tried this change on aarch64 under qemu and did not hit any issues.
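If I read mm/percpu.c correctly, that distinction comes from
pcpu_size_to_slot() special-casing a fully free chunk (quoting the
function as it stands in the tree this patch is against):

static int pcpu_size_to_slot(int size)
{
	/* a completely free chunk is kept in the extra, last slot */
	if (size == pcpu_unit_size)
		return pcpu_nr_slots - 1;
	return __pcpu_size_to_slot(size);
}

so only completely empty chunks ever reach that last slot.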
> So, given that this logic is tricky and architecture dependent, I don't
> feel comfortable making this change as the risk greatly outweighs the
> benefit.

Could you share more information about how this is architecture
dependent?

Thanks,
Peng.
>
> Thanks,
> Dennis