Message-ID: <20200929102851.3m5ardu2orfbhe3d@linutronix.de>
Date: Tue, 29 Sep 2020 12:28:51 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
Cc: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
"davem@...emloft.net" <davem@...emloft.net>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Luis Claudio R . Goncalves" <lgoncalv@...hat.com>,
Mahipal Challa <mahipalreddy2006@...il.com>,
Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Vitaly Wool <vitaly.wool@...sulko.com>,
"Wangzhou (B)" <wangzhou1@...ilicon.com>,
"fanghao (A)" <fanghao11@...wei.com>,
Colin Ian King <colin.king@...onical.com>
Subject: Re: [PATCH v6] mm/zswap: move to use crypto_acomp API for hardware
acceleration
On 2020-09-29 10:02:15 [+0000], Song Bao Hua (Barry Song) wrote:
> > My point was that there will be a warning at run-time and you don't want
> > that. There are raw_ accessors if you know what you are doing. But…
>
> I have only seen that get_cpu_ptr/var() disables preemption. I don't think
> we will get a warning, as this_cpu_ptr() doesn't disable preemption.
Good. Just enable CONFIG_DEBUG_PREEMPT and please tell me what happens.
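Something along these lines (a made-up per-CPU counter, not the zswap
code) should produce the splat once CONFIG_DEBUG_PREEMPT is enabled:

	/* hypothetical demo; demo_counter is not a real zswap symbol */
	static DEFINE_PER_CPU(int, demo_counter);

	static void demo(void)
	{
		int *p;

		/* preemptible context: CONFIG_DEBUG_PREEMPT complains with
		 * "BUG: using smp_processor_id() in preemptible" here */
		p = this_cpu_ptr(&demo_counter);
		(*p)++;

		/* get_cpu_ptr() disables preemption, so no warning */
		p = get_cpu_ptr(&demo_counter);
		(*p)++;
		put_cpu_ptr(&demo_counter);
	}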
> > Earlier you had compression/decompression with disabled preemption and
>
> No. With this patch, that is now done with preemption enabled. The code
> before this patch did (de)compression in a preemption-disabled context by
> using get_cpu_ptr and get_cpu_var.
Exactly what I am saying. And within this get_cpu_ptr() section the
compression/decompression was sitting. So compression/decompression
happened while preemption was off.
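From memory, the pre-patch compress path was shaped roughly like this (a
simplified sketch; names as in the old mm/zswap.c):

	u8 *src, *dst;		/* src = mapped source page */
	unsigned int dlen = PAGE_SIZE;
	struct crypto_comp *tfm;
	int ret;

	dst = get_cpu_var(zswap_dstmem);	/* preemption off from here */
	tfm = *get_cpu_ptr(entry->pool->tfm);
	ret = crypto_comp_compress(tfm, src, PAGE_SIZE, dst, &dlen);
	put_cpu_ptr(entry->pool->tfm);
	/* ... store dst into the zpool ... */
	put_cpu_var(zswap_dstmem);		/* preemption on again */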
> > strict per-CPU memory allocation. Now if you keep this per-CPU memory
> > allocation then you gain a possible bottleneck.
> > In the previous email you said that there may be a bottleneck in the
> > upper layer where you can't utilize all that memory you allocate. So you
> > may want to rethink that strategy before that rework.
>
> we are probably not talking about the same thing :-)
> I was talking about a possible generic swap bottleneck. For example, the LRU
> is global, so while swapping, multiple cores might contend on this LRU's
> locks. For example, if we have 8 inactive pages to swap out, I am not sure
> mm can use 8 cores to swap them out at the same time.
In that case you probably don't need 8* per-CPU memory for this task.
> >
> > > 2. While allocating the mutex, we can put the mutex into local memory
> > > by using kmalloc_node(). If we move to "struct mutex lock" directly,
> > > most CPUs in a NUMA server will have to access remote memory to
> > > read/write the mutex; therefore, this will increase the latency
> > > dramatically.
> >
> > If you need something per-CPU then DEFINE_PER_CPU() will give it to you.
>
> Yes. It is true.
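For completeness, a per-CPU mutex via DEFINE_PER_CPU() would look roughly
like this (demo_lock and demo_init are made-up names; needs
<linux/percpu.h> and <linux/mutex.h>):

	static DEFINE_PER_CPU(struct mutex, demo_lock);

	static int __init demo_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			mutex_init(per_cpu_ptr(&demo_lock, cpu));
		return 0;
	}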
>
> > It would be very bad for performance if these allocations were not from
> > CPU-local memory, right? So what makes you think this is worse than
> > kmalloc_node()-based allocations?
>
> Yes. If you read the zswap code, you will see it considers NUMA very
> carefully by allocating various memory locally. And in the crypto framework,
> I also added an API to allocate the compressor locally:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7bc13b5b60e94
> This zswap patch uses the new node-aware API.
>
> Memory access latency when crossing a NUMA node, in practice crossing
> packages, can increase dramatically, e.g. doubling, tripling or more.
So you are telling me that DEFINE_PER_CPU() does not allocate the memory
for each CPU locally, but kmalloc_node() does?
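For reference, the node-aware call from the commit Barry links is used
roughly like this in the patch (a sketch, not a verbatim quote):

	struct crypto_acomp *acomp;

	/* allocate the acomp transform on the node owning this CPU */
	acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0,
					cpu_to_node(cpu));
	if (IS_ERR(acomp))
		return PTR_ERR(acomp);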
> Thanks
> Barry
Sebastian