Message-ID: <551D99F3.2080804@intel.com>
Date: Thu, 02 Apr 2015 22:35:15 +0300
From: Adrian Hunter <adrian.hunter@...el.com>
To: Len Brown <lenb@...nel.org>
CC: Ulf Hansson <ulf.hansson@...aro.org>,
linux-mmc <linux-mmc@...r.kernel.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Len Brown <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
Kevin Hilman <khilman@...aro.org>,
Tomeu Vizoso <tomeu.vizoso@...labora.com>,
Linux PM list <linux-pm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 0/4] mmc: sdhci: Support maximum DMA latency request
via PM QoS
On 1/04/2015 10:59 p.m., Len Brown wrote:
>> Ad hoc testing with a Lenovo Thinkpad 10 showed a stress
>> test could run for at least 24 hours with the patches,
>> compared to less than an hour without.
>
> There is a patch in linux-next to delete C1E from BYT,
> since it is problematic on multiple platforms.
> I don't suppose that just disabling that state without disabling C6
> is sufficient to fix the Thinkpad 10? (I'm betting not, but
> it can't hurt to try -- you can use the "disable" attribute for the state
> in /sys/devices/system/cpu/cpu*/cpuidle/stateN)
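
For anyone following along, disabling a state from user space is just
writing 1 to its "disable" attribute. A minimal sketch in C - the state
index 2 for C1E here is an assumption and varies by platform, so check
each state's "name" attribute first:

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		char path[128];
		int cpu;

		/* Disable cpuidle state 2 on CPUs 0-3; adjust the
		 * state index and CPU count for the machine at hand. */
		for (cpu = 0; cpu < 4; cpu++) {
			FILE *f;

			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/cpuidle/state2/disable",
				 cpu);
			f = fopen(path, "w");
			if (!f) {
				perror(path);
				return EXIT_FAILURE;
			}
			fputs("1", f);
			fclose(f);
		}
		return EXIT_SUCCESS;
	}

It needs root, and the setting does not survive a reboot.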
>
> I think your choice of the PM_QOS sub-system here is the right one,
> and that your selection of a 20usec threshold is also a good choice
> for what you want to do -- though on a non-intel_idle machine someplace,
> there may be some ACPI BIOS _CST with a random number for C6 latency.
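
For reference, the request in the patches boils down to the standard
kernel PM QoS API, roughly as below. This is only a sketch - the helper
names are illustrative, not the actual patch code:

	#include <linux/pm_qos.h>

	static struct pm_qos_request sdhci_dma_latency_req;

	/* Before starting a DMA transfer: ask cpuidle to keep the CPU
	 * wakeup latency at or below 20 usec, which rules out deep
	 * states such as C6 while I/O is in flight. */
	static void sdhci_dma_latency_start(void)
	{
		pm_qos_add_request(&sdhci_dma_latency_req,
				   PM_QOS_CPU_DMA_LATENCY, 20);
	}

	/* After the transfer completes: remove the constraint so the
	 * deep C-states become available again. */
	static void sdhci_dma_latency_end(void)
	{
		pm_qos_remove_request(&sdhci_dma_latency_req);
	}

The 20 usec cap works because cpuidle skips any state whose advertised
exit latency exceeds it, so a _CST entry that under-reports the C6
latency would indeed defeat the request, as you say.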
>
> It would be interesting to see how your C6 residency and your battery
> life are changed by disabling C6 during MMC activity (turbostat --debug
> will show you the C6 residency).
I will do some more testing as you suggest, although it will have to
wait until next week due to Easter holidays here.