Message-ID: <5420FB25.8050102@jp.fujitsu.com>
Date: Tue, 23 Sep 2014 13:46:29 +0900
From: Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Wanpeng Li <wanpeng.li@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>, <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...nel.org>, <x86@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
David Rientjes <rientjes@...gle.com>,
Prarit Bhargava <prarit@...hat.com>,
Steven Rostedt <srostedt@...hat.com>,
Toshi Kani <toshi.kani@...com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5] x86, cpu-hotplug: fix llc shared map unreleased during
cpu hotplug
(2014/09/17 16:17), Wanpeng Li wrote:
> BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
> IP: [..] find_busiest_group
> PGD 5a9d5067 PUD 13067 PMD 0
> Oops: 0000 [#3] SMP
> [...]
> Call Trace:
> load_balance
> ? _raw_spin_unlock_irqrestore
> idle_balance
> __schedule
> schedule
> schedule_timeout
> ? lock_timer_base
> schedule_timeout_uninterruptible
> msleep
> lock_device_hotplug_sysfs
> online_store
> dev_attr_store
> sysfs_write_file
> vfs_write
> SyS_write
> system_call_fastpath
>
> This bug can be triggered by repeatedly hot-adding and hot-removing a
> large number of Xen domain0 vCPUs.
>
> The last level cache (LLC) shared map is built during CPU up, and the
> build-sched-domain routine takes advantage of it to set up the sched
> domain CPU topology. However, the LLC shared map is not released
> during CPU disable, which leads to an invalid sched domain CPU
> topology. This patch fixes that by releasing the LLC shared map
> correctly during CPU disable.
>
> Reviewed-by: Toshi Kani <toshi.kani@...com>
> Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>
> Tested-by: Linn Crosetto <linn@...com>
> Signed-off-by: Wanpeng Li <wanpeng.li@...ux.intel.com>
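The fix amounts to tearing down the LLC shared map on the CPU-offline
path. As a minimal sketch of the idea, modeled on the sibling-map
teardown in arch/x86/kernel/smpboot.c (the helper name here is
illustrative and the exact merged hunk may differ):

#include <linux/cpumask.h>
#include <asm/smp.h>		/* cpu_llc_shared_mask() */

static void cleanup_llc_shared_map(int cpu)
{
	int sibling;

	/*
	 * Drop this CPU from every LLC sibling's shared mask, then
	 * clear its own mask, so that a later hot-add rebuilds the
	 * topology from a clean state instead of a stale mask.
	 */
	for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
		cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling));
	cpumask_clear(cpu_llc_shared_mask(cpu));
}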
Yasuaki reported that this can happen on our real hardware as well:
https://lkml.org/lkml/2014/7/22/1018
Our case is quoted below.
==
Here is an example from my system.
The system has 4 sockets, each socket has 15 cores, and HT is enabled.
In this case, the cores of each socket are numbered as follows:
| CPU#
Socket#0 | 0-14 , 60-74
Socket#1 | 15-29, 75-89
Socket#2 | 30-44, 90-104
Socket#3 | 45-59, 105-119
The llc_shared_mask of CPU#30 is then 0x3fff80000001fffc0000000,
meaning that the last level cache of Socket#2 is shared among
CPU#30-44 and 90-104.
After hot-removing Socket#2 and #3, the cores are numbered as
follows:
| CPU#
Socket#0 | 0-14 , 60-74
Socket#1 | 15-29, 75-89
But llc_shared_mask is not cleared, so the llc_shared_mask of CPU#30
still holds 0x3fff80000001fffc0000000.
After that, when Socket#2 and #3 are hot-added, the cores are
numbered as follows:
| CPU#
Socket#0 | 0-14 , 60-74
Socket#1 | 15-29, 75-89
Socket#2 | 30-59
Socket#3 | 90-119
The llc_shared_mask of CPU#30 then becomes 0x3fff8000fffffffc0000000,
which says that the last level cache of Socket#2 is shared with
CPU#30-59 and 90-104. The stale bits for CPU#90-104 make the mask
wrong.
At first, I cleared the hot-removed CPU's bit in llc_shared_map when
hot-removing the CPU. But Borislav suggested that the problem would
also disappear if a re-added CPU were assigned the same CPU number as
before, in which case llc_shared_map would not need to be changed at
all.
==
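To make the example above concrete, here is a small stand-alone
user-space C model (not kernel code; the CPU numbers are taken from
the example above) of what happens when the mask is not cleared
across the remove/add cycle:

#include <stdio.h>

#define NR_CPUS 120

static unsigned char llc_mask[NR_CPUS];	/* one flag per CPU, kept simple */

static void set_range(int lo, int hi)
{
	int i;

	for (i = lo; i <= hi; i++)
		llc_mask[i] = 1;
}

/* Print the set bits as CPU ranges, e.g. "30-59,90-104". */
static void print_ranges(void)
{
	const char *sep = "";
	int i = 0;

	while (i < NR_CPUS) {
		if (!llc_mask[i]) {
			i++;
			continue;
		}
		int lo = i;

		while (i < NR_CPUS && llc_mask[i])
			i++;
		printf("%s%d-%d", sep, lo, i - 1);
		sep = ",";
	}
	printf("\n");
}

int main(void)
{
	/* Socket#2 online: CPU#30 shares its LLC with CPU#30-44, 90-104. */
	set_range(30, 44);
	set_range(90, 104);

	/* Hot-remove Socket#2/#3: nothing clears the mask (the bug). */

	/* Hot-add: Socket#2 now spans CPU#30-59, so CPU-up sets 30-59. */
	set_range(30, 59);

	/* Stale bits from the old numbering are still set. */
	printf("llc mask of CPU#30 now covers CPU#");
	print_ranges();		/* prints 30-59,90-104: a wrong topology */
	return 0;
}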
So, please apply this patch.
Thanks,
-Kame