Message-ID: <87d18d122e.fsf@yhuang-dev.intel.com>
Date: Thu, 03 Aug 2017 16:35:21 +0800
From: "Huang\, Ying" <ying.huang@...el.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: "Huang\, Ying" <ying.huang@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
<linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...nel.org>,
Michael Ellerman <mpe@...erman.id.au>,
Borislav Petkov <bp@...e.de>,
Thomas Gleixner <tglx@...utronix.de>,
"Juergen Gross" <jgross@...e.com>, Aaron Lu <aaron.lu@...el.com>
Subject: Re: [PATCH 3/3] IPI: Avoid to use 2 cache lines for one call_single_data
Eric Dumazet <eric.dumazet@...il.com> writes:
> On Wed, 2017-08-02 at 16:52 +0800, Huang, Ying wrote:
>> From: Huang Ying <ying.huang@...el.com>
>>
>> struct call_single_data is used by the IPI code to transfer
>> information between CPUs. Its size is bigger than sizeof(unsigned
>> long) and smaller than the cache line size. Currently it is
>> allocated with no alignment requirement, so an allocated
>> call_single_data may cross two cache lines, which doubles the number
>> of cache lines that need to be transferred among CPUs. Resolve this
>> by aligning the allocated call_single_data to the cache line size.
>>
>> To test the effect of the patch, we use the vm-scalability multiple
>> thread swap test case (swap-w-seq-mt). The test creates multiple
>> threads, and each thread eats memory until all RAM and part of swap
>> is used, so that a huge number of IPIs are triggered when unmapping
>> memory. In the test, memory write throughput improves by ~5%
>> compared with the misaligned call_single_data, because of faster IPIs.
>>
>> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
>> Cc: Peter Zijlstra <peterz@...radead.org>
>> Cc: Ingo Molnar <mingo@...nel.org>
>> Cc: Michael Ellerman <mpe@...erman.id.au>
>> Cc: Borislav Petkov <bp@...e.de>
>> Cc: Thomas Gleixner <tglx@...utronix.de>
>> Cc: Juergen Gross <jgross@...e.com>
>> Cc: Aaron Lu <aaron.lu@...el.com>
>> ---
>> kernel/smp.c | 6 ++++--
>> 1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/smp.c b/kernel/smp.c
>> index 3061483cb3ad..81d9ae08eb6e 100644
>> --- a/kernel/smp.c
>> +++ b/kernel/smp.c
>> @@ -51,7 +51,7 @@ int smpcfd_prepare_cpu(unsigned int cpu)
>>  		free_cpumask_var(cfd->cpumask);
>>  		return -ENOMEM;
>>  	}
>> -	cfd->csd = alloc_percpu(struct call_single_data);
>> +	cfd->csd = alloc_percpu_aligned(struct call_single_data);
>
> I do not believe allocating 64 bytes (per cpu) for this structure is
> needed. That would be an increase of cache lines.
>
> What we can do instead is to force an alignment on 4*sizeof(void *).
> (32 bytes on 64bit, 16 bytes on 32bit arches)
>
> Maybe something like this :
>
> diff --git a/include/linux/smp.h b/include/linux/smp.h
> index 68123c1fe54918c051292eb5ba3427df09f31c2f..f7072bf173c5456e38e958d6af85a4793bced96e 100644
> --- a/include/linux/smp.h
> +++ b/include/linux/smp.h
> @@ -19,7 +19,7 @@ struct call_single_data {
> 	smp_call_func_t func;
> 	void *info;
> 	unsigned int flags;
> -};
> +} __attribute__((aligned(4 * sizeof(void *))));
>
>  /* total number of cpus in this system (may exceed NR_CPUS) */
>  extern unsigned int total_cpus;
OK. And if sizeof(struct call_single_data) changes, we need to change
the alignment accordingly too. So I have added some BUILD_BUG_ON()
checks for that.
Best Regards,
Huang, Ying
------------------>8------------------
From 2c400e9b1793f1c1d33bc278f5bc066e32ca4fee Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@...el.com>
Date: Thu, 27 Jul 2017 16:43:20 +0800
Subject: [PATCH -v2] IPI: Avoid to use 2 cache lines for one call_single_data
struct call_single_data is used by the IPI code to transfer information
between CPUs. Its size is bigger than sizeof(unsigned long) and smaller
than the cache line size. Currently it is allocated with no alignment
requirement, so an allocated call_single_data may cross two cache
lines, which doubles the number of cache lines that need to be
transferred among CPUs.

Resolve this by aligning the allocated call_single_data to 4 *
sizeof(void *). If the size of struct call_single_data changes in the
future, the alignment should be changed accordingly: it must be at
least sizeof(struct call_single_data) and a power of 2.

To test the effect of the patch, we use the vm-scalability multiple
thread swap test case (swap-w-seq-mt). The test creates multiple
threads, and each thread eats memory until all RAM and part of swap is
used, so that a huge number of IPIs are triggered when unmapping
memory. In the test, memory write throughput improves by ~5% compared
with the misaligned call_single_data, because of faster IPIs.
[Align with 4 * sizeof(void*) instead of cache line size]
Suggested-by: Eric Dumazet <eric.dumazet@...il.com>
Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Michael Ellerman <mpe@...erman.id.au>
Cc: Borislav Petkov <bp@...e.de>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Juergen Gross <jgross@...e.com>
Cc: Aaron Lu <aaron.lu@...el.com>
---
include/linux/smp.h | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/include/linux/smp.h b/include/linux/smp.h
index 68123c1fe549..4d3b372d50b0 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -13,13 +13,22 @@
 #include <linux/init.h>
 #include <linux/llist.h>
 
+#define CSD_ALIGNMENT	(4 * sizeof(void *))
+
 typedef void (*smp_call_func_t)(void *info);
 struct call_single_data {
 	struct llist_node llist;
 	smp_call_func_t func;
 	void *info;
 	unsigned int flags;
-};
+} __aligned(CSD_ALIGNMENT);
+
+/* Avoid allocating a csd that crosses 2 cache lines */
+static inline void check_alignment_of_csd(void)
+{
+	BUILD_BUG_ON((CSD_ALIGNMENT & (CSD_ALIGNMENT - 1)) != 0);
+	BUILD_BUG_ON(sizeof(struct call_single_data) > CSD_ALIGNMENT);
+}
 
 /* total number of cpus in this system (may exceed NR_CPUS) */
 extern unsigned int total_cpus;
--
2.13.2