Message-ID: <c47924cbaf94448691cdd2503ecbf5ad@svr-chch-ex1.atlnz.lc>
Date: Tue, 28 May 2019 21:03:38 +0000
From: Chris Packham <Chris.Packham@...iedtelesis.co.nz>
To: Paul Burton <paul.burton@...s.com>
CC: "ralf@...ux-mips.org" <ralf@...ux-mips.org>,
James Hogan <jhogan@...nel.org>,
"linux-mips@...r.kernel.org" <linux-mips@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Hamish Martin <Hamish.Martin@...iedtelesis.co.nz>
Subject: Re: MIPS r4k cache operations with SMP enabled
Hi Paul,
On 29/05/19 9:01 AM, Paul Burton wrote:
> Hi Chris,
>
> On Tue, May 28, 2019 at 05:19:37AM +0000, Chris Packham wrote:
>> On 28/05/19 2:52 PM, Chris Packham wrote:
>>> Hi,
>>>
>>> I'm trying to port a fairly old Broadcom integrated chip (BCM6818) to
>>> the latest Linux kernel using the mips/bmips support.
>>>
>>> The chip has a BMIPS4355 core. This has two "thread processors" (cpu
>>> cores) with separate I-caches but a shared D-cache.
>>>
>>> I've got things booting but I encounter the following BUG()
>>>
>>> BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
>>> caller is blast_dcache16+0x24/0x154
>>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.1.0-at1 #5
>>> Stack : 00000036 8008d0d0 806a0000 807c0000 80754e10 0000000b 80754684 8f831c8c
>>>         80900000 8f828424 807986e7 8071348c 00000000 10008f00 8f831c30 7fb69e2a
>>>         00000000 00000000 80920000 00000056 00002335 00000000 807a0000 00000000
>>>         6d6d3a20 00000000 00000056 73776170 00000000 ffffffff 10008f01 807c0000
>>>         80790000 00002cc2 ffffffff 80900000 00000010 8f83198c 00000000 80900000
>>> ...
>>> Call Trace:
>>> [<8001c208>] show_stack+0x30/0x100
>>> [<8063282c>] dump_stack+0x9c/0xd0
>>> [<802f1cec>] debug_smp_processor_id+0xfc/0x110
>>> [<8002e274>] blast_dcache16+0x24/0x154
>>> [<80122978>] map_vm_area+0x58/0x70
>>> [<80123888>] __vmalloc_node_range+0x1fc/0x2b4
>>> [<80123b54>] vmalloc+0x44/0x50
>>> [<807d15d0>] jffs2_zlib_init+0x24/0x94
>>> [<807d1354>] jffs2_compressors_init+0x10/0x30
>>> [<807d151c>] init_jffs2_fs+0x68/0xf8
>>> [<8001016c>] do_one_initcall+0x7c/0x1f0
>>> [<807bee30>] kernel_init_freeable+0x17c/0x258
>>> [<80650d1c>] kernel_init+0x10/0xf8
>>> [<80015e6c>] ret_from_kernel_thread+0x14/0x1c
>>>
>>> blast_dcache16() uses current_cpu_data, which invokes
>>> smp_processor_id() and so triggers the BUG(). I can fix this by
>>> sprinkling preempt_disable()/preempt_enable() through
>>> arch/mips/mm/c-r4k.c, but that seems kind of wrong. Does anyone have a
>>> suggestion as to the right way to avoid this BUG()?
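
To be concrete, the "sprinkling" I mean is something like this at each
affected call site -- a sketch only; it silences the BUG() but doesn't
say anything about which CPU's cache actually needs flushing:

static void r4k__flush_cache_vmap(void)
{
	/* Pin to the current CPU so that current_cpu_data (and hence
	 * smp_processor_id()) inside r4k_blast_dcache() is safe. */
	preempt_disable();
	r4k_blast_dcache();
	preempt_enable();
}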
>
> Ah, cache aliasing, will it ever cease to provide surprises? :)
>
>> I think the following might do the trick
>>
>> ---- 8< ----
>> diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
>> index 5166e38cd1c6..1fa7f093b59c 100644
>> --- a/arch/mips/mm/c-r4k.c
>> +++ b/arch/mips/mm/c-r4k.c
>> @@ -559,14 +559,19 @@ static inline int has_valid_asid(const struct mm_struct *mm, unsigned int type)
>>  	return 0;
>>  }
>>  
>> -static void r4k__flush_cache_vmap(void)
>> +static inline void local_r4k_flush_cache(void *args)
>>  {
>>  	r4k_blast_dcache();
>>  }
>>  
>> +static void r4k__flush_cache_vmap(void)
>> +{
>> +	r4k_on_each_cpu(R4K_INDEX, local_r4k_flush_cache, NULL);
>> +}
>> +
>>  static void r4k__flush_cache_vunmap(void)
>>  {
>> -	r4k_blast_dcache();
>> +	r4k_on_each_cpu(R4K_INDEX, local_r4k_flush_cache, NULL);
>>  }
>>  
>>  /*
>> @@ -1758,6 +1763,43 @@ static int __init cca_setup(char *str)
>>  	return 0;
>>  }
>> ---- 8< ----
>>
>> The rest of the call sites for r4k_blast_dcache() already run with
>> preemption disabled.
>
> That looks reasonable, but I'm wondering why these are separate to our
> implementation of flush_kernel_vmap_range(). The latter already handles
> SMP & avoids flushing the whole dcache(s) when the area to flush is
> smaller than the cache.
>
> Would it work to just redefine flush_cache_vmap() & flush_cache_vunmap()
> as calls to flush_kernel_vmap_range?
>
I imagine it would. I'll give it a try and send a proper patch if it's
successful.
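
For the record, what I plan to try is roughly this (a sketch only -- it
assumes the existing flush_cache_vmap(start, end)/flush_cache_vunmap(start,
end) macros in arch/mips/include/asm/cacheflush.h and the
flush_kernel_vmap_range(void *vaddr, int size) prototype there; I still
need to check the details):

/* Reuse the SMP-aware, size-limited range flush for vmap/vunmap. */
#define flush_cache_vmap(start, end) \
	flush_kernel_vmap_range((void *)(start), (end) - (start))
#define flush_cache_vunmap(start, end) \
	flush_kernel_vmap_range((void *)(start), (end) - (start))

That should pick up the r4k_on_each_cpu() handling and the "don't blast
the whole dcache for a small range" logic you mention for free.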