Message-Id: <75f758e5-a26f-4f41-8009-288ca2a4d182@app.fastmail.com>
Date: Mon, 19 Aug 2024 15:15:58 +0200
From: "Arnd Bergmann" <arnd@...db.de>
To: "Rohit Agarwal" <rohiagar@...omium.org>,
 "Tomas Winkler" <tomas.winkler@...el.com>,
 "Greg Kroah-Hartman" <gregkh@...uxfoundation.org>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [RFC] Order 4 allocation failures in the MEI client driver

On Tue, Aug 13, 2024, at 10:45, Rohit Agarwal wrote:
> Hi All,
>
> I am seeing an intermittent allocation (kmalloc) failure in the mei
> client driver [1] on Chromebooks. The crash indicates the driver is
> requesting an order-4 allocation that is unavailable at that
> particular moment in the system.
>
> I am new to this and do not know the history behind the roundup to order
> 4 [2]. From what I have read, order-4 allocations are not guaranteed to
> succeed and should generally be avoided. Given the limited memory on
> Chromebooks, such failures may be expected behavior.

I don't see how that commit is related to the failure in
mei_cl_alloc_cb(); that commit only rounds the size up to a multiple
of four bytes.
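
To make the distinction concrete, here is an illustrative sketch (not
the actual mei code; the function name and sizes are made up):

#include <linux/kernel.h>
#include <linux/slab.h>

/* Illustration only: rounding a length to a 4-byte multiple vs. an
 * order-4 page allocation. */
static void *size_vs_order_example(size_t length)
{
	/* Rounding up to a multiple of four bytes barely changes the
	 * requested size. */
	size_t rounded = roundup(length, 4);

	/* An order-4 failure means 16 physically contiguous pages were
	 * requested: with 4 KiB pages, any request above 32 KiB and up
	 * to 64 KiB is backed by a 64 KiB (order-4) allocation, which
	 * can fail once memory is fragmented. */
	return kmalloc(rounded, GFP_KERNEL);
}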

What is the call chain you see in the kernel messages? Is it
always the same?

> Can we have more details on why an order-4 allocation is required?
> Or could a lower-order allocation request be used instead, which
> would help low-memory platforms?
>
> Some solutions that I explored and that weren't applicable/helpful here:
> 1. Using vmalloc/kvmalloc instead of kmalloc (ruled out due to DMA usage).
> 2. Using a scatter-gather list (would require a lot of rework in the
> driver, and I am not sure it would work, since it would also require
> changes in the underlying layer).
> 3. A retry mechanism (would help in only a few instances).
> 4. Allocating from the DMA pool?

Those (1, 2 and 4) would have been my suggestions as well ;-)
I don't think 4 helps on x86 machines though, since there
is no CMA area (and there should not need to be either).
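
For reference, the trade-off between 1 and 4 looks roughly like this
(a sketch only; "dev", "len" and the function name are placeholders,
not mei identifiers):

#include <linux/slab.h>
#include <linux/dma-mapping.h>

static void *alternatives_sketch(struct device *dev, size_t len,
				 dma_addr_t *dma)
{
	/* Option 1: kvmalloc() falls back to vmalloc() when contiguous
	 * pages are not available, but the result is then not physically
	 * contiguous and cannot be mapped for DMA as one contiguous
	 * buffer. */
	void *buf = kvmalloc(len, GFP_KERNEL);

	kvfree(buf);

	/* Option 4: a coherent DMA allocation; on x86 without a CMA
	 * area this still comes from the normal page allocator, so it
	 * does not by itself avoid the high-order request. */
	return dma_alloc_coherent(dev, len, dma, GFP_KERNEL);
}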

If this happens at runtime (after the system is already
fully booted), another idea would be to move the allocation
to boot time, where it is very likely to succeed, and just never
free it again. Whether that works depends on the
exact call chain.
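
As a sketch of that idea (the probe hook, struct and names here are
hypothetical, not the actual mei probe path):

#include <linux/device.h>
#include <linux/sizes.h>
#include <linux/slab.h>

struct example_dev {
	void *big_buf;	/* reserved once, reused for every request */
};

static int example_probe(struct device *dev)
{
	struct example_dev *ed = devm_kzalloc(dev, sizeof(*ed), GFP_KERNEL);

	if (!ed)
		return -ENOMEM;

	/* Grab the order-4 buffer once, early after boot while memory is
	 * not yet fragmented, and keep it for the lifetime of the device
	 * instead of reallocating it for every request. */
	ed->big_buf = devm_kmalloc(dev, SZ_64K, GFP_KERNEL);
	if (!ed->big_buf)
		return -ENOMEM;

	dev_set_drvdata(dev, ed);
	return 0;
}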

Allocating 64KB of consecutive pages repeatedly is clearly
a problem at runtime, but having a single allocation during
probe time is not as bad.

       Arnd
