Date: Tue, 4 Apr 2017 16:23:44 +0530
From: George Cherian <gcherian@...iumnetworks.com>
To: Hoan Tran <hotran@....com>,
"Prakash, Prashanth" <pprakash@...eaurora.org>
Cc: George Cherian <george.cherian@...ium.com>,
linux acpi <linux-acpi@...r.kernel.org>,
lkml <linux-kernel@...r.kernel.org>, devel@...ica.org,
Ashwin Chaugule <ashwin.chaugule@...aro.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Len Brown <lenb@...nel.org>,
Jassi Brar <jassisinghbrar@...il.com>,
Robert Moore <robert.moore@...el.com>,
Lv Zheng <lv.zheng@...el.com>
Subject: Re: [PATCH 0/2] Make cppc acpi driver aware of pcc subspace ids
Hi Hoan/Prashanth,
On 04/03/2017 11:20 PM, Hoan Tran wrote:
> Hi George,
>
> On Mon, Apr 3, 2017 at 9:44 AM, Prakash, Prashanth
> <pprakash@...eaurora.org> wrote:
>> Hi George,
>>
>> On 3/31/2017 12:24 AM, George Cherian wrote:
>>> The current cppc acpi driver works with only one pcc subspace id.
>>> It maintains and registers only one pcc channel even if the acpi table
>>> defines different pcc subspace ids. This series addresses that by making
>>> the cppc acpi driver aware of multiple possible pcc subspace ids.
>> The current ACPI 6.1 spec restricts the CPPC to a single PCC subspace. See section:
>> 8.4.7.1.9 Using PCC Registers, which states "If the PCC register space is used, all PCC
>> registers must be defined to be in the same subspace."
>
> Agreed with Prashanth. Besides that, the spec also says: "To amortize
> the cost of PCC transactions, OSPM should read or write all PCC
> registers via a single read or write command when possible."
Yes, the spec indeed says so, but a single PCC subspace is not a
scalable solution when the platform has a large number of CPUs and CPU
domains. That is why we took this approach.
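
To make that concrete, here is a minimal sketch of the idea (not the
actual patch; the MAX_PCC_SUBSPACES sizing and the struct/field names
are illustrative): keep per-subspace channel state, indexed by the
subspace id parsed from each CPU's _CPC PCC register entries, instead
of one global channel.

#define MAX_PCC_SUBSPACES	256	/* subspace id is an 8-bit field */

struct cppc_pcc_chan {
	struct mbox_chan *channel;	/* from pcc_mbox_request_channel() */
	void __iomem *comm_base;	/* mapped PCC shared memory region */
	bool in_use;
};

static struct cppc_pcc_chan pcc_chans[MAX_PCC_SUBSPACES];

/*
 * Request the mailbox channel for a given subspace id only once;
 * CPUs sharing a subspace reuse the already-registered channel.
 */
static int register_pcc_channel(int subspace_id)
{
	struct cppc_pcc_chan *pchan = &pcc_chans[subspace_id];

	if (pchan->in_use)
		return 0;

	pchan->channel = pcc_mbox_request_channel(&cppc_mbox_cl, subspace_id);
	if (IS_ERR(pchan->channel))
		return -ENODEV;

	pchan->in_use = true;
	return 0;
}

Commands for CPUs in different domains then go out on their own
channels instead of serializing on a single one.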
>
> Thanks
> Hoan
>
>>
>> --
>> Thanks,
>> Prashanth