Message-ID: <aaea79e3-c0af-1a0c-f1c3-d8b12b3c2bb7@c-s.fr>
Date: Tue, 3 Sep 2019 08:51:57 +0200
From: Christophe Leroy <christophe.leroy@....fr>
To: Alastair D'Silva <alastair@....ibm.com>
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Thomas Gleixner <tglx@...utronix.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Qian Cai <cai@....pw>, Nicholas Piggin <npiggin@...il.com>,
Allison Randal <allison@...utok.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
David Hildenbrand <david@...hat.com>,
linuxppc-dev@...ts.ozlabs.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 4/6] powerpc: Chunk calls to flush_dcache_range in arch_*_memory
On 03/09/2019 at 08:25, Alastair D'Silva wrote:
> On Tue, 2019-09-03 at 08:19 +0200, Christophe Leroy wrote:
>>
>> On 03/09/2019 at 07:23, Alastair D'Silva wrote:
>>> From: Alastair D'Silva <alastair@...ilva.org>
>>>
>>> When presented with large amounts of memory being hotplugged
>>> (in my test case, ~890GB), the call to flush_dcache_range takes
>>> a while (~50 seconds), triggering RCU stalls.
>>>
>>> This patch breaks up the call into 1GB chunks, calling
>>> cond_resched() in between to allow the scheduler to run.
>>>
>>> Signed-off-by: Alastair D'Silva <alastair@...ilva.org>
>>> ---
>>> arch/powerpc/mm/mem.c | 18 ++++++++++++++++--
>>> 1 file changed, 16 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>> index cd540123874d..854aaea2c6ae 100644
>>> --- a/arch/powerpc/mm/mem.c
>>> +++ b/arch/powerpc/mm/mem.c
>>> @@ -104,11 +104,14 @@ int __weak remove_section_mapping(unsigned long start, unsigned long end)
>>> return -ENODEV;
>>> }
>>>
>>> +#define FLUSH_CHUNK_SIZE SZ_1G
>>
>> Maybe the name is a bit long for a local define. See if we could
>> reduce code line splits below by shortening this name.
>>
>>> +
>>> int __ref arch_add_memory(int nid, u64 start, u64 size,
>>> struct mhp_restrictions *restrictions)
>>> {
>>> unsigned long start_pfn = start >> PAGE_SHIFT;
>>> unsigned long nr_pages = size >> PAGE_SHIFT;
>>> + u64 i;
>>> int rc;
>>>
>>> resize_hpt_for_hotplug(memblock_phys_mem_size());
>>> @@ -120,7 +123,12 @@ int __ref arch_add_memory(int nid, u64 start, u64 size,
>>> start, start + size, rc);
>>> return -EFAULT;
>>> }
>>> - flush_dcache_range(start, start + size);
>>> +
>>> + for (i = 0; i < size; i += FLUSH_CHUNK_SIZE) {
>>> + flush_dcache_range(start + i,
>>> + min(start + size, start + i + FLUSH_CHUNK_SIZE));
>>
>> My eyes don't like it.
>>
>> What about
>> for (i = 0; i < size; i += FLUSH_CHUNK_SIZE) {
>> int len = min(size - i, FLUSH_CHUNK_SIZE);
>>
>> flush_dcache_range(start + i, start + i + len);
>> cond_resched();
>> }
>>
>> or
>>
>> end = start + size;
>> for (; start < end; start += FLUSH_CHUNK_SIZE, size -= FLUSH_CHUNK_SIZE) {
>> int len = min(size, FLUSH_CHUNK_SIZE);
>>
>> flush_dcache_range(start, start + len);
>> cond_resched();
>> }
>>
>>> + cond_resched();
>>> + }
>>>
>>> return __add_pages(nid, start_pfn, nr_pages, restrictions);
>>> }
>>> @@ -131,13 +139,19 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size,
>>> unsigned long start_pfn = start >> PAGE_SHIFT;
>>> unsigned long nr_pages = size >> PAGE_SHIFT;
>>> struct page *page = pfn_to_page(start_pfn) +
>>> vmem_altmap_offset(altmap);
>>> + u64 i;
>>> int ret;
>>>
>>> __remove_pages(page_zone(page), start_pfn, nr_pages, altmap);
>>>
>>> /* Remove htab bolted mappings for this section of memory */
>>> start = (unsigned long)__va(start);
>>> - flush_dcache_range(start, start + size);
>>> + for (i = 0; i < size; i += FLUSH_CHUNK_SIZE) {
>>> + flush_dcache_range(start + i,
>>> + min(start + size, start + i + FLUSH_CHUNK_SIZE));
>>> + cond_resched();
>>> + }
>>> +
>>
>> This piece of code looks pretty similar to the one before. Can we
>> refactor into a small helper ?
>>
>
> Not much point; it's removed in a subsequent patch.
>
But you tell me you want to leave people the option of not applying that
subsequent patch, and that this is why you didn't put that patch before
this one. In that case, adding a helper is worth it.
Christophe