Message-ID: <501E3EAD.9090905@huawei.com>
Date:	Sun, 5 Aug 2012 17:36:45 +0800
From:	Jiang Liu <jiang.liu@...wei.com>
To:	Yinghai Lu <yinghai@...nel.org>
CC:	Jiang Liu <liuj97@...il.com>, Len Brown <lenb@...nel.org>,
	Tony Luck <tony.luck@...el.com>,
	Bob Moore <robert.moore@...el.com>,
	Huang Ying <ying.huang@...el.com>,
	Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
	Kenji Kaneshige <kaneshige.kenji@...fujitsu.com>,
	Wen Congyang <wency@...fujitsu.com>,
	Taku Izumi <izumi.taku@...fujitsu.com>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	Hanjun Guo <guohanjun@...wei.com>,
	<linux-kernel@...r.kernel.org>, <linux-acpi@...r.kernel.org>,
	<linux-pci@...r.kernel.org>, Gaohuai Han <hangaohuai@...wei.com>
Subject: Re: [RFC PATCH 2/3] ACPIHP: ACPI system device hotplug slot enumerator

On 2012-8-5 4:14, Yinghai Lu wrote:
> On Sat, Jul 28, 2012 at 4:42 AM, Jiang Liu <liuj97@...il.com> wrote:
>> The first is an ACPI hotplug slot enumerator, which enumerates ACPI hotplug
>> slots on load and provides callbacks to manage those hotplug slots.
>> An ACPI hotplug slot is an abstraction of a receptacle to which a group of
>> system devices can be connected. This patch implements the skeleton of the
>> ACPI system device hotplug slot enumerator. On loading, the driver scans the
>> whole ACPI namespace for hotplug slots and creates a device node for each
>> hotplug slot. Every slot is associated with a device class named
>> acpihp_slot_class and will be managed by ACPI hotplug drivers.
> 
> I was thinking:
>    We can have a module in the ACPI DSDT, and every module corresponds
> to a SystemModule.
>    so it will be
> 	\_SB.NOD1
> 		CPU0
> 		CPU1
> 		CPU2
> 		CPU3
> 		MEM0
> 		MEM1
> 		MEM2
> 		MEM3
> 		PCI0
> 		PCI1
> 		PCI2
> 		PCI3
> 		NTFY
> 		STAT
> 		STOP
>     NTFY will be something like:
> 	Notify(\_SB.NOD1.CPU0,....)
> 	Notify(\_SB.NOD1.CPU1,....)
> 	Notify(\_SB.NOD1.CPU2,....)
> 	Notify(\_SB.NOD1.CPU3,....)
> 
> 	Notify(\_SB.NOD1.MEM0,....)
> 	Notify(\_SB.NOD1.MEM1,....)
> 	Notify(\_SB.NOD1.MEM2,....)
> 	Notify(\_SB.NOD1.MEM3,....)
> 
> 	Notify(\_SB.NOD1.PCI0,....)
> 	Notify(\_SB.NOD1.PCI1,....)
> 	Notify(\_SB.NOD1.PCI2,....)
> 	Notify(\_SB.NOD1.PCI3,....)
> 
>    and the SystemModule's GPE button handler will be wired up to call NTFY.
> 
>    STAT could be a 32-bit integer used for the final power-off.
> 	Every CPU, MEM, PCI will own one bit, and will clear that bit in its own
> 	_EJ0.
> 	Every _EJ0 will double-check whether all bits are cleared, and then call
> 	STOP to power off the whole SystemModule.
> 
> If the OS already has separate handlers for those object types (CPU, MEM,
> PCI), we may not need to change the OS too much.
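
[For concreteness, the scheme Yinghai sketches might look roughly like the
following ASL. This is a hypothetical sketch, not taken from any real DSDT:
NOD1, NTFY, STAT, STOP and the PWRC power-control field are all assumed names,
and only one of the twelve sub-devices is spelled out.]

	Scope (\_SB)
	{
		Device (NOD1)				// one hot-pluggable module (FRU)
		{
			Name (_HID, "ACPI0004")		// module device
			Name (STAT, 0xFFF)		// one bit per sub-device, 12 set

			OperationRegion (PWRR, SystemIO, 0x1000, 1)	// assumed power port
			Field (PWRR, ByteAcc, NoLock, Preserve) { PWRC, 8 }

			Method (STOP, 0)		// cut module power once STAT hits 0
			{
				Store (Zero, PWRC)
			}

			Device (CPU0)
			{
				Name (_HID, "ACPI0007")		// processor device
				Method (_EJ0, 1, Serialized)	// Serialized: STAT is read-modify-write
				{
					And (STAT, Not (0x001), STAT)	// clear this device's bit
					If (LEqual (STAT, Zero))
					{
						STOP ()		// last one out powers off the module
					}
				}
			}
			// CPU1-CPU3, MEM0-MEM3 and PCI0-PCI3 would follow the same
			// pattern, each owning its own bit in STAT.

			Method (NTFY, 0)		// invoked from the module's GPE handler
			{
				Notify (CPU0, 3)	// 3 == eject request per the ACPI spec
				// ...one Notify per sub-device...
			}
		}
	}

[Devices carrying _EJ0, like CPU0 above, are presumably also what the slot
enumerator in the original patch would discover while walking the namespace.]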

Hi Yinghai,
	Thanks for your comments.
	One of our major concerns is that we may need to make too many changes
to existing code, and may even break backward compatibility :(

	There are two possible ways to support hotplug in the ACPI BIOS:
	1) send hotplug notifications to each sub-component of an FRU/module.
	2) send hotplug notifications to the FRU itself. 
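
	[As a rough illustration, with hypothetical device names and using 3,
	the ACPI "eject request" notification code, the two options differ
	only in where the Notify lands:]

		// 1) BIOS notifies each sub-component of the module:
		Notify (\_SB.NOD1.CPU0, 3)
		Notify (\_SB.NOD1.MEM0, 3)
		Notify (\_SB.NOD1.PCI0, 3)
		// ...and so on for every sub-device.

		// 2) BIOS notifies the FRU itself, once; the OS is expected to
		// tear down all children before evaluating the FRU's _EJ0:
		Notify (\_SB.NOD1, 3)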

	We have discussed this with the BIOS team and chose the second
solution because:
	1) It's more convenient for users to operate on FRUs instead of sub-components.
	2) BIOS will be simpler because it only needs to track the status of the FRU
	itself instead of its sub-components.
	3) It will be much more complex to do error recovery if the OS and BIOS
	cooperate at sub-component granularity.
	Any suggestions here?
	Regards!
	Gerry

