Message-ID: <cf7542c0-46ed-009f-a214-561855561b3c@fb.com>
Date:   Thu, 1 Dec 2016 11:46:00 -0700
From:   Jens Axboe <axboe@...com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
CC:     Kent Overstreet <kent.overstreet@...il.com>,
        Tejun Heo <tj@...nel.org>, Marc MERLIN <marc@...lins.org>,
        Michal Hocko <mhocko@...nel.org>,
        "Vlastimil Babka" <vbabka@...e.cz>, linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        "Greg Kroah-Hartman" <gregkh@...uxfoundation.org>
Subject: Re: 4.8.8 kernel trigger OOM killer repeatedly when I have lots of
 RAM that should be free

On 12/01/2016 11:37 AM, Linus Torvalds wrote:
> On Thu, Dec 1, 2016 at 10:30 AM, Jens Axboe <axboe@...com> wrote:
>>
>> It's two different kinds of throttling. The vm absolutely should
>> throttle at dirty time, to avoid having insane amounts of memory dirty.
>> On the block layer side, throttling is about avoiding the device queues
>> getting too long. It's very similar to the buffer bloat problem on the
>> networking side. The block layer throttling is not a fix for the vm
>> allowing too much memory to be dirty and causing issues, it's about
>> keeping the device response latencies in check.
> 
> Sure. But if we really do just end up blocking in the block layer (in
> situations where we didn't use to), that may be a bad thing. It might
> be better to feed that information back to the VM instead,
> particularly for writes, where the VM layer already tries to ratelimit
> the writes.

It's not a new blocking point; it's the same blocking point we always
end up at if we run out of requests. The problem with bcache and other
stacked drivers is that they don't have a request pool of their own, so
they never really need to block there.
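
To make that blocking point concrete, here is a minimal user-space
sketch (not the kernel's actual request allocation code, and the pool
size is made up) of a finite request pool where submitters sleep until
a completion returns a request. A stacked driver without a pool of its
own never reaches that wait, which is why it gets no throttling for
free:

/*
 * Simplified model of a bounded request pool.  get_request() is the
 * blocking point: callers sleep until put_request() frees a slot.
 */
#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 128	/* hypothetical queue depth */

static int free_requests = POOL_SIZE;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t have_free = PTHREAD_COND_INITIALIZER;

/* Allocate a request, sleeping while the pool is exhausted. */
static void get_request(void)
{
	pthread_mutex_lock(&lock);
	while (free_requests == 0)
		pthread_cond_wait(&have_free, &lock);
	free_requests--;
	pthread_mutex_unlock(&lock);
}

/* Completion path: return a request to the pool and wake one waiter. */
static void put_request(void)
{
	pthread_mutex_lock(&lock);
	free_requests++;
	pthread_cond_signal(&have_free);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	get_request();
	printf("%d requests left in the pool\n", free_requests);
	put_request();
	return 0;
}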

> And frankly, it's almost purely writes that matter. There just aren't
> a lot of ways to get that many parallel reads in real life.

Exactly, it's almost exclusively a buffered write problem, as I wrote in
the initial reply. Most other things tend to throttle nicely on their
own.

> I haven't looked at your patches, so maybe you already do this.

It's currently not fed back, but that would be pretty trivial to do. The
mechanism we have for that (queue congestion) is a bit of a mess,
though, so it would need to be revamped a bit.
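
As a rough illustration (this is not the actual queue-congestion API,
just a sketch of the idea, with an invented depth threshold), the
feedback amounts to the block side publishing a "congested" bit when
its queue gets too deep, and the writeback side checking that bit
before queueing more buffered writes:

/*
 * Sketch of congestion feedback between the block side and writeback.
 * The block side flips a flag around a depth threshold; the writeback
 * side polls it to decide whether to back off.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_LIMIT 64			/* hypothetical depth threshold */

static atomic_int queued;		/* requests currently in flight */
static atomic_bool congested;		/* the bit the VM side would read */

/* Block side: track depth as requests are queued and completed. */
static void block_queue_io(void)
{
	if (atomic_fetch_add(&queued, 1) + 1 >= QUEUE_LIMIT)
		atomic_store(&congested, true);
}

static void block_complete_io(void)
{
	if (atomic_fetch_sub(&queued, 1) - 1 < QUEUE_LIMIT)
		atomic_store(&congested, false);
}

/* Writeback side: consult the flag before issuing more I/O. */
static bool writeback_should_throttle(void)
{
	return atomic_load(&congested);
}

int main(void)
{
	for (int i = 0; i < QUEUE_LIMIT; i++)
		block_queue_io();
	printf("throttle writeback: %s\n",
	       writeback_should_throttle() ? "yes" : "no");
	block_complete_io();
	printf("throttle writeback: %s\n",
	       writeback_should_throttle() ? "yes" : "no");
	return 0;
}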

-- 
Jens Axboe
