Date:	Thu, 27 Feb 2014 11:45:51 +0800
From:	Li Guang <lig.fnst@...fujitsu.com>
To:	Juan Manuel Cabo <juanmanuel.cabo@...il.com>
CC:	Kieran Clancy <clancy.kieran@...il.com>,
	Len Brown <lenb@...nel.org>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>,
	linux-acpi@...r.kernel.org, linux-kernel@...r.kernel.org,
	Lan Tianyu <tianyu.lan@...el.com>,
	Dennis Jansen <dennis.jansen@....de>
Subject: Re: [PATCH] ACPI / EC: Clear stale EC events on Samsung systems

Juan Manuel Cabo wrote:
> On 02/27/2014 12:14 AM, Li Guang wrote:
>    
>> oh, sorry, I'm referring to the internal EC firmware code
>> for Q event queuing, not the ACPI spec. ;-)
>> For the machines you tested, 8 is the queue size,
>> but some unknown and nasty EC firmwares (let's suppose they exist)
>> may queue more Q events.
>> I've also seen several firmwares that queue 32 events by default;
>> if, say, those were used in some Samsung products
>> and they also forgot to deal with the sleep/resume state,
>> then we would again be left with stale Q events there.
>>
>> Thanks!
>>
>>      
> We tested on each of our different Samsung models (Intel, AMD), and it
> was 8 across the board. But you're right, there might be more in the future.
>
>       I even saw a bug report in Ubuntu's Launchpad about an HP with a
> similar-sounding problem ( https://bugs.launchpad.net/ubuntu/+source/linux-source-2.6.20/+bug/89860 ).
> I have no idea whether it was caused by the same issue, but if, in the
> future, the ec_clear_on_resume flag is used to match other DMIs, it might
> be a good idea to make the max iteration count bigger.
>
>        The only reason there is a max iteration count was to prevent
> an unexpected case in which an unknown EC never returns 0 after the
> queue is emptied. So far that hasn't been the case; can we count on it?
> The loop currently does finish early when there are no more events.
>
> I guess changing it to 255 or 1000 would be enough, right?
>
>    

I can't imagine 1K bytes being spent on the Q event queue;
an EC's RAM is usually expensive.
I think 255 is really enough. :-)
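
For anyone following along, here is a minimal, standalone sketch of the
bounded drain loop being discussed. ec_read_query() is only a stub standing
in for the real EC query transaction, and EC_CLEAR_MAX is just the 255 limit
suggested above; this is illustrative, not the actual patch code.

#include <stdio.h>

#define EC_CLEAR_MAX 255            /* upper bound suggested in this thread */

/* Stand-in for the real EC query transaction: returns the next queued
 * _Q event number, or 0 once the firmware's queue is empty. */
static unsigned char ec_read_query(void)
{
	static int pending = 8;     /* e.g. the 8-deep queue seen on the tested Samsungs */
	return pending ? (unsigned char)(0x50 + pending--) : 0;
}

int main(void)
{
	int i;
	unsigned char value = 0;

	/* Drain stale events, but never loop forever if a broken EC
	 * keeps reporting non-zero values. */
	for (i = 0; i < EC_CLEAR_MAX; i++) {
		value = ec_read_query();
		if (!value)         /* queue emptied: stop early */
			break;
	}

	if (i == EC_CLEAR_MAX)
		printf("warning: hit the %d-event safety limit\n", EC_CLEAR_MAX);
	else
		printf("cleared %d stale EC events\n", i);
	return 0;
}

On a well-behaved EC the loop exits as soon as the query returns 0, so the
limit only matters for a firmware that never empties its queue.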

Thanks!



