Message-ID: <20201120050423.GE3113267@google.com>
Date: Thu, 19 Nov 2020 21:04:23 -0800
From: Minchan Kim <minchan@...nel.org>
To: Zhenhua Huang <zhenhuah@...eaurora.org>
Cc: vjitta@...eaurora.org, linux-mm <linux-mm@...ck.org>,
glider@...gle.com, Dan Williams <dan.j.williams@...el.com>,
broonie@...nel.org, mhiramat@...nel.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Yogesh Lal <ylal@...eaurora.org>,
Vinayak Menon <vinmenon@...eaurora.org>, tingwei@...eaurora.org
Subject: Re: [PATCH] lib: stackdepot: Add support to configure STACK_HASH_SIZE
On Thu, Nov 19, 2020 at 11:34:32AM +0800, Zhenhua Huang wrote:
> On Wed, Nov 04, 2020 at 07:27:03AM +0800, Minchan Kim wrote:
> > Sorry if this mail corrupts the thread or is heavily mangled; I lost
> > the original from my mailbox, so I am sending this reply via the Gmail
> > web interface.
> >
> > On Thu, Oct 22, 2020 at 10:18 AM <vjitta@...eaurora.org> wrote:
> > >
> > > From: Yogesh Lal <ylal@...eaurora.org>
> > >
> > > Use STACK_HASH_ORDER_SHIFT to configure STACK_HASH_SIZE.
> > >
> > > The aim is to make STACK_HASH_SIZE configurable, so it can be
> > > tuned per use case.
> > >
> > > One example is page owner: the default value of STACK_HASH_SIZE
> > > leads stack depot to consume 8MB of static memory. Making it
> > > configurable and using a lower value helps enable features like
> > > CONFIG_PAGE_OWNER without significant overhead.
> > >
> > > Signed-off-by: Yogesh Lal <ylal@...eaurora.org>
> > > Signed-off-by: Vinayak Menon <vinmenon@...eaurora.org>
> > > Signed-off-by: Vijayanand Jitta <vjitta@...eaurora.org>
> > > ---
> > > lib/Kconfig | 9 +++++++++
> > > lib/stackdepot.c | 3 +--
> > > 2 files changed, 10 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/Kconfig b/lib/Kconfig
> > > index 18d76b6..b3f8259 100644
> > > --- a/lib/Kconfig
> > > +++ b/lib/Kconfig
> > > @@ -651,6 +651,15 @@ config STACKDEPOT
> > > bool
> > > select STACKTRACE
> > >
> > > +config STACK_HASH_ORDER_SHIFT
> > > + int "stack depot hash size (12 => 4KB, 20 => 1024KB)"
> > > + range 12 20
> > > + default 20
> > > + depends on STACKDEPOT
> > > + help
> > > +         Select the hash size as a power of 2 for the stackdepot hash table.
> > > + Choose a lower value to reduce the memory impact.
> > > +
> > > config SBITMAP
> > > bool
> > >
> > > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > > index 2caffc6..413c20b 100644
> > > --- a/lib/stackdepot.c
> > > +++ b/lib/stackdepot.c
> > > @@ -142,8 +142,7 @@ static struct stack_record *depot_alloc_stack(unsigned long *entries, int size,
> > > return stack;
> > > }
> > >
> > > -#define STACK_HASH_ORDER 20
> > > -#define STACK_HASH_SIZE (1L << STACK_HASH_ORDER)
> > > +#define STACK_HASH_SIZE (1L << CONFIG_STACK_HASH_ORDER_SHIFT)
> > > #define STACK_HASH_MASK (STACK_HASH_SIZE - 1)
> > > #define STACK_HASH_SEED 0x9747b28c
> > >
> > > --
> > > QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
> > > of Code Aurora Forum, hosted by The Linux Foundation
> > > 2.7.4
> > >
> >
> > 1. When we don't use page_owner, we don't want to waste any memory on
> > the stackdepot hash array.
> > 2. When we do use page_owner, we want a reasonably sized stackdepot
> > hash array.
> >
> > A build-time Kconfig value cannot satisfy both, since a reasonable
> > size must always be reserved for the array.
> > Can't we make the hash size a kernel parameter?
> > With it, we could use it like this.
> >
> > 1. page_owner=off, stackdepot_stack_hash=0 -> no more wasted memory
> > when we don't use page_owner
> > 2. page_owner=on, stackdepot_stack_hash=8M -> reasonable hash size
> > when we use page_owner.
> It seems there are other stackdepot users, such as KASAN and the
> dma_buf_ref feature we introduced, and we can't guarantee there won't
> be further users. Wouldn't it be better not to tie this to page owner
> alone?
I didn't mean to make it page_owner dependent. What I suggested was
simply to define a kernel parameter for the stackdepot hash size, so an
admin could override it with the right size when it is really needed.