Message-ID: <CACRpkdYPn3xxZQP+xXggPpoHercBL3L7dmMBnbXww5SEsFx5tg@mail.gmail.com>
Date:   Wed, 4 Oct 2017 09:53:08 +0200
From:   Linus Walleij <linus.walleij@...aro.org>
To:     Adrian Hunter <adrian.hunter@...el.com>
Cc:     Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
        Pavel Machek <pavel@....cz>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
        linux-mm@...ck.org
Subject: Re: 4.14-rc2 on thinkpad x220: out of memory when inserting mmc card

On Tue, Oct 3, 2017 at 8:30 AM, Adrian Hunter <adrian.hunter@...el.com> wrote:
> On 02/10/17 17:09, Linus Walleij wrote:
>> On Sun, Oct 1, 2017 at 12:57 PM, Tetsuo Handa
>> <penguin-kernel@...ove.sakura.ne.jp> wrote:
>>
>>>>> I inserted a u-SD card, only to realize that it was not detected as
>>>>> it should be. And dmesg indeed reveals:
>>>>
>>>> Tetsuo asked me to report this to linux-mm.
>>>>
>>>> But 2^4 is 16 contiguous pages; IIRC an allocation that large can't
>>>> be expected to work reliably, and thus this sounds like an MMC bug,
>>>> not an mm bug.
>>
>> I'm not sure I fully understand this error message:
>> "worker/2:1: page allocation failure: order:4"
>>
>> What I guess from context is that the mmc_init_request() call is
>> failing to allocate 16 contiguous pages, i.e. 64KB with 4K pages,
>> which is the typical bounce buffer size.
>>
>> This is what the code has always allocated as a bounce buffer, but it
>> used to happen upfront, when probing the MMC block layer, rather than
>> when allocating the requests.
>
> That is not exactly right.  As I already wrote, the memory allocation used
> to be optional but became mandatory with:
>
>   commit 304419d8a7e9204c5d19b704467b814df8c8f5b1
>   Author: Linus Walleij <linus.walleij@...aro.org>
>   Date:   Thu May 18 11:29:32 2017 +0200
>
>       mmc: core: Allocate per-request data using the block layer core

Yes, you are right: it used to look like this, with the bounce buffer
hiding behind a Kconfig symbol:

#ifdef CONFIG_MMC_BLOCK_BOUNCE
    /* Only single-segment hosts got a bounce buffer */
    if (host->max_segs == 1) {
        unsigned int bouncesz;

        bouncesz = MMC_QUEUE_BOUNCESZ;

        /* Clamp the buffer size to what the host controller can take */
        if (bouncesz > host->max_req_size)
            bouncesz = host->max_req_size;
        if (bouncesz > host->max_seg_size)
            bouncesz = host->max_seg_size;
        if (bouncesz > (host->max_blk_count * 512))
            bouncesz = host->max_blk_count * 512;

        if (bouncesz > 512 &&
            mmc_queue_alloc_bounce_bufs(mq, bouncesz)) {
            /* Tell the block layer the MMC layer bounces by itself */
            blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_ANY);
            blk_queue_max_hw_sectors(mq->queue, bouncesz / 512);
            blk_queue_max_segments(mq->queue, bouncesz / 512);
            blk_queue_max_segment_size(mq->queue, bouncesz);

            ret = mmc_queue_alloc_bounce_sgs(mq, bouncesz);
            if (ret)
                goto cleanup_queue;
            bounce = true;
        }
    }
#endif
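
For reference, the "order:4" in the failure message is exactly this
buffer: with 4K pages, the 64KB bounce buffer (MMC_QUEUE_BOUNCESZ) is
16 contiguous pages, i.e. an order-4 allocation. A minimal userspace
sketch of that arithmetic (illustrative only, assuming 4K pages, not
kernel code):

    /* Illustrative: why a 64KB buffer shows up as
     * "page allocation failure: order:4" */
    #include <stdio.h>

    int main(void)
    {
        unsigned int page_size = 4096;  /* assuming 4K pages */
        unsigned int bouncesz = 65536;  /* the 64KB bounce buffer */
        unsigned int pages = bouncesz / page_size;
        unsigned int order = 0;

        /* order n means 2^n physically contiguous pages */
        while ((1u << order) < pages)
            order++;

        printf("%u bytes = %u pages = order %u\n",
               bouncesz, pages, order);
        return 0;
    }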

I recently concluded that there is no evidence whatsoever that anyone
actually turned this symbol on (I checked the defconfigs and distro
configs). The option was just sitting there unused; it was never
exercised except by some people who enabled it in their custom kernels
in the past. In practice it was dead code.

My patch started to allocate and use bounce buffers for all hosts
with max_segs == 1, unless specifically flagged NOT to use bounce
buffers.
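
To make the failure mode concrete: after that commit, the per-request
path does the big kmalloc() itself, roughly like the simplified sketch
below (not the verbatim code), so on a fragmented system the order-4
allocation can fail at card insertion time:

    /* Simplified sketch of the post-patch per-request allocation */
    static int mmc_init_request(struct request_queue *q,
                                struct request *req, gfp_t gfp)
    {
        struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
        struct mmc_queue *mq = q->queuedata;
        struct mmc_card *card = mq->card;

        if (card->bouncesz) {
            /* 64KB == 16 pages == order 4: this is the
             * allocation that fails in the report */
            mq_rq->bounce_buf = kmalloc(card->bouncesz, gfp);
            if (!mq_rq->bounce_buf)
                return -ENOMEM;
        }

        return 0;
    }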

That wasn't smart; I should have just deleted them instead. Mea culpa.

So that is why I asked Ulf to simply put the patch deleting the bounce
buffers that no one is using into the fixes branch; it should fix this
problem.

Yours,
Linus Walleij
