lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Wed, 30 Nov 2016 11:27:43 -0700
From:   Jens Axboe <axboe@...com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Marc MERLIN <marc@...lins.org>,
        Kent Overstreet <kent.overstreet@...il.com>,
        Tejun Heo <tj@...nel.org>
CC:     Michal Hocko <mhocko@...nel.org>, Vlastimil Babka <vbabka@...e.cz>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        "Joonsoo Kim" <iamjoonsoo.kim@....com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: 4.8.8 kernel trigger OOM killer repeatedly when I have lots of
 RAM that should be free

On 11/30/2016 11:14 AM, Linus Torvalds wrote:
> On Wed, Nov 30, 2016 at 9:47 AM, Marc MERLIN <marc@...lins.org> wrote:
>>
>> I gave it a thought again, I think it is exactly the nasty situation you
>> described.
>> bcache takes I/O quickly while sending to SSD cache. SSD fills up, now
>> bcache can't handle IO as quickly and has to hang until the SSD has been
>> flushed to spinning rust drives.
>> This actually is exactly the same as filling up the cache on a USB key
>> and now you're waiting for slow writes to flash, is it not?
> 
> It does sound like you might hit exactly the same kind of situation, yes.
> 
> And the fact that you have dmcrypt running too just makes things pile
> up more. All those IO's end up slowed down by the scheduling too.
> 
> Anyway, none of this seems new per se. I'm adding Kent and Jens to the
> cc (Tejun already was), in the hope that maybe they have some idea how
> to control the nasty worst-case behavior wrt workqueue lockup (it's
> not really a "lockup", it looks like it's just hundreds of workqueues
> all waiting for IO to complete and much too deep IO queues).

Honestly, the easiest would be to wire it up to the blk-wbt stuff that
is queued up for 4.10, which attempts to limit the queue depths to
something reasonable instead of letting them run amok. This is largely
(exclusively, almost) a problem with buffered writeback.

On devices utilizing the stacked interface, they never get any depth
throttling. Obviously it's worse if each IO ends up queueing work, but
it's a big problem even if they do not.

> I think it's the traditional "throughput is much easier to measure and
> improve" situation, where making queues big help some throughput
> situation, but ends up causing chaos when things go south.

Yes, and the longer queues never buy you anything, but they end up
causing tons of problems at the other end of the spectrum.

Still makes sense to limit dirty memory for highmem, though.

-- 
Jens Axboe

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ