Message-ID: <20200929093113.3cv63szruo3c4inu@linutronix.de>
Date: Tue, 29 Sep 2020 11:31:13 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
Cc: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"herbert@...dor.apana.org.au" <herbert@...dor.apana.org.au>,
"davem@...emloft.net" <davem@...emloft.net>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Luis Claudio R . Goncalves" <lgoncalv@...hat.com>,
Mahipal Challa <mahipalreddy2006@...il.com>,
Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Vitaly Wool <vitaly.wool@...sulko.com>,
"Wangzhou (B)" <wangzhou1@...ilicon.com>,
"fanghao (A)" <fanghao11@...wei.com>,
Colin Ian King <colin.king@...onical.com>
Subject: Re: [PATCH v6] mm/zswap: move to use crypto_acomp API for hardware
acceleration
On 2020-09-29 05:14:31 [+0000], Song Bao Hua (Barry Song) wrote:
> On second thought, and after trying to make this change, I would like to change my
> mind and disagree with this idea. Two reasons:
> 1. When using this_cpu_ptr() without disabling preemption, people usually put everything
> bound to one CPU into a single structure, so that once we get the pointer to the whole
> structure, we get all of its parts belonging to the same CPU. If we move the dstmem and
> the mutex out of the structure containing them, we will have to do:
> a. get_cpu_ptr() for the acomp_ctx // disables preemption
> b. this_cpu_ptr() for the dstmem and the mutex
> c. put_cpu_ptr() for the acomp_ctx // enables preemption again
> d. mutex_lock()
> sg_init_one()
> compress/decompress etc.
> ...
> mutex_unlock()
>
> As get() and put() disable/enable preemption, this makes certain that the this_cpu_ptr()
> calls in step "b" return the dstmem and the mutex which belong to the same CPU as
> step "a".
>
> The steps from "a" to "c" are quite silly and confusing (both variants are sketched
> below). I believe the existing code aligns better with similar code elsewhere in
> the kernel:
> a. this_cpu_ptr() // get everything for one CPU
> b. mutex_lock()
> sg_init_one()
> compress/decompress etc.
> ...
> mutex_unlock()
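The two sequences above, as a minimal sketch (untested; all identifiers here are made
up for this discussion):

#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/scatterlist.h>

struct acomp_ctx {
	struct mutex mutex;
	u8 *dstmem;
};

static struct acomp_ctx __percpu *acomp_ctx;	  /* alloc_percpu() */
static DEFINE_PER_CPU(u8 *, zswap_dstmem);	  /* variant 1 only */
static DEFINE_PER_CPU(struct mutex, zswap_mutex); /* variant 1 only */

/* variant 1: dstmem and mutex split out of the per-CPU struct */
static void compress_split(struct scatterlist *sg)
{
	struct acomp_ctx *ctx = get_cpu_ptr(acomp_ctx);	/* a. disables preemption */
	u8 *dst = *this_cpu_ptr(&zswap_dstmem);		/* b. same CPU as ctx */
	struct mutex *lock = this_cpu_ptr(&zswap_mutex);

	put_cpu_ptr(acomp_ctx);				/* c. enables preemption */
	mutex_lock(lock);				/* d. */
	sg_init_one(sg, dst, PAGE_SIZE);
	/* compress/decompress etc. */
	mutex_unlock(lock);
}

/* variant 2: everything in one per-CPU struct (the existing code) */
static void compress_combined(struct scatterlist *sg)
{
	/* this lookup warns with CONFIG_DEBUG_PREEMPT while preemptible */
	struct acomp_ctx *ctx = this_cpu_ptr(acomp_ctx);

	mutex_lock(&ctx->mutex);
	sg_init_one(sg, ctx->dstmem, PAGE_SIZE);
	/* compress/decompress etc. */
	mutex_unlock(&ctx->mutex);
}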
My point was that there will be a warning at run-time and you don't want
that. There are raw_ accessors if you know what you are doing. But…
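For completeness, the raw_ variant of the per-CPU lookup in the sketch above
(untested):

	ctx = raw_cpu_ptr(acomp_ctx);	/* no debug check; only valid if the
					 * caller can tolerate or prevents
					 * migration at this point */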
Earlier you had compression/decompression with preemption disabled and strict per-CPU
memory allocation. If you now keep this strict per-CPU memory allocation while the code
runs preemptible, you gain a possible bottleneck: several tasks on the same CPU may
contend for that CPU's single buffer and mutex.
In the previous email you said that there may be a bottleneck in the upper layer so that
you can't utilize all the memory you allocate. You may want to rethink that strategy
before doing that rework.
> 2. When allocating the mutex, we can put it into node-local memory by using kmalloc_node().
> If we move to a plain "struct mutex lock" directly, most CPUs in a NUMA server will have
> to access remote memory to read/write the mutex; therefore, this will increase the
> latency dramatically.
If you need something per-CPU then DEFINE_PER_CPU() will give it to you. It would be
very bad for performance if these allocations were not from CPU-local memory, right?
So what makes you think this is worse than a kmalloc_node()-based allocation?
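A minimal sketch (untested, the name is made up); the static per-CPU area is allocated
node-local on NUMA machines:

static DEFINE_PER_CPU(struct mutex, zswap_mutex);

static int __init zswap_mutex_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		mutex_init(per_cpu_ptr(&zswap_mutex, cpu));
	return 0;
}

per_cpu_ptr() then hands each CPU its own node-local instance, i.e. the same locality
kmalloc_node() would give you, without the extra allocation.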
> Thanks
> Barry
Sebastian