Message-ID: <87k1wfgcmb.fsf@anholt.net>
Date:   Thu, 18 Jan 2018 12:01:32 -0800
From:   Eric Anholt <eric@...olt.net>
To:     Michal Hocko <mhocko@...nel.org>,
        Andrey Grodzovsky <andrey.grodzovsky@....com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        dri-devel@...ts.freedesktop.org, amd-gfx@...ts.freedesktop.org,
        Christian.Koenig@....com
Subject: Re: [RFC] Per file OOM badness

Michal Hocko <mhocko@...nel.org> writes:

> On Thu 18-01-18 18:00:06, Michal Hocko wrote:
>> On Thu 18-01-18 11:47:48, Andrey Grodzovsky wrote:
>> > Hi, this series is a revised version of an RFC sent by Christian König
>> > a few years ago. The original RFC can be found at 
>> > https://lists.freedesktop.org/archives/dri-devel/2015-September/089778.html
>> > 
>> > This is the same idea; I've just addressed his concern from the original RFC
>> > and switched to a callback into file_ops instead of a new member in struct file.
>> 
>> Please add the full description to the cover letter and do not make
>> people hunt for links.
>> 
>> Here is the original cover letter text
>> : I'm currently working on the issue that when device drivers allocate memory
>> : on behalf of an application, the OOM killer usually doesn't know about it
>> : unless the application also gets this memory mapped into its address space.
>> : 
>> : This is especially annoying for graphics drivers, where a lot of the VRAM
>> : usually isn't CPU accessible, so it doesn't make sense to map it into the
>> : address space of the process using it.
>> : 
>> : The problem is that when an application starts to use a lot of VRAM, those
>> : buffer objects sooner or later get swapped out to system memory. But when we
>> : then run into an out-of-memory situation, the OOM killer obviously doesn't
>> : know anything about that memory and so usually kills the wrong process.
>
> OK, but how do you attribute that memory to a particular OOM-killable
> entity? And how do you actually enforce that those resources get freed
> when the OOM killer acts?
>
>> : The following set of patches tries to address this problem by introducing a
>> : per-file OOM badness score, which device drivers can use to give the OOM
>> : killer a hint about how many resources are bound to a file descriptor so
>> : that it can make better decisions about which process to kill.
>
> But files are not killable, and they can be shared... In other words, this
> doesn't help the oom killer make an educated guess at all.

Maybe some more context would help the discussion?

The struct file in patch 3 is the DRM fd.  That's effectively "my
process's interface to talking to the GPU", not "a single GPU resource".
Once that file is closed, all of the process's private, idle GPU buffers
will be immediately freed (this will be most of their allocations), and
some will be freed once the GPU completes some work (this will be most
of the rest of their allocations).
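
(To make that concrete, the driver side of the hook would look roughly
like the sketch below.  The names are illustrative, not the actual
patch; "allocated_pages" stands in for whatever per-fd bookkeeping a
driver would keep.)

  /* Hypothetical: a new file_operations callback so the OOM killer can
   * ask "how many pages of device memory hang off this file?"
   *
   *     unsigned long (*oom_file_badness)(struct file *file);
   */
  static unsigned long drm_oom_file_badness(struct file *file)
  {
          struct drm_file *file_priv = file->private_data;

          /* Assumes the driver keeps a running page count as BOs are
           * created and freed on this fd; "allocated_pages" is
           * illustrative, not a real struct drm_file member. */
          return atomic_long_read(&file_priv->allocated_pages);
  }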

Some GEM BOs won't be freed just by closing the fd, if they've been
shared between processes.  Those are usually about 8-24MB total in a
process, rather than the GBs that modern apps use (or that our testcases
like to allocate, thus triggering OOM killing of the test harness instead
of the offending testcase...)

Even if we just had the private+idle buffers being accounted in OOM
badness, that would be a huge step forward in system reliability.
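
(And on the consuming side, as I read the series, oom_badness() would
sum each open file's contribution when scoring a task, roughly as in
the sketch below.  Locking is elided and the hook name is illustrative;
this is not the patch itself.)

  /* Hypothetical sketch of folding per-file scores into the OOM score.
   * The fd-table walk is simplified; real code would need
   * rcu_read_lock() and task_lock() around it. */
  static unsigned long oom_file_points(struct task_struct *task)
  {
          struct fdtable *fdt = files_fdtable(task->files);
          unsigned long points = 0;
          unsigned int fd;

          for (fd = 0; fd < fdt->max_fds; fd++) {
                  struct file *file = fcheck_files(task->files, fd);

                  if (file && file->f_op->oom_file_badness)
                          points += file->f_op->oom_file_badness(file);
          }
          return points;
  }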

>> : So, a question for everyone: what do you think about this approach?
>
> I think this is just wrong semantically. Non-reclaimable memory is a
> pain, especially when there is way too much of it. If you can free that
> memory somehow, then you can hook into the slab shrinker API and react to
> memory pressure. If you can account such memory to a particular
> process and make sure that the consumption is bounded by the process
> lifetime, then we can think of an accounting that oom_badness can consider
> when selecting a victim.

For graphics, we can't free most of our memory without also effectively
killing the process.  i915 and vc4 have "purgeable" interfaces for
userspace (on i915 this is exposed all the way to GL applications and is
hooked into the shrinker, and on vc4 it is so far just used for
userspace-internal buffer caches, purged when a CMA allocation
fails).  However, those purgeable pools are expected to be a tiny
fraction of the GPU allocations by the process.
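
(For completeness, the shrinker route is the standard count/scan pair,
roughly as below; "purgeable_pages" and "drop_purgeable_pages" stand in
for driver-internal bookkeeping and are not real symbols.)

  #include <linux/shrinker.h>

  /* Pages currently marked purgeable; the driver would update this as
   * BOs are marked/unmarked.  Illustrative only. */
  static atomic_long_t purgeable_pages;

  /* Driver-internal, hypothetical: free up to nr purgeable pages and
   * return how many were actually freed. */
  unsigned long drop_purgeable_pages(unsigned long nr);

  static unsigned long purgeable_count(struct shrinker *s,
                                       struct shrink_control *sc)
  {
          return atomic_long_read(&purgeable_pages);
  }

  static unsigned long purgeable_scan(struct shrinker *s,
                                      struct shrink_control *sc)
  {
          return drop_purgeable_pages(sc->nr_to_scan);
  }

  static struct shrinker purgeable_shrinker = {
          .count_objects = purgeable_count,
          .scan_objects  = purgeable_scan,
          .seeks         = DEFAULT_SEEKS,
  };

  /* at driver init: register_shrinker(&purgeable_shrinker); */

So hooking into the shrinker only ever covers that small purgeable
fraction, not the bulk of the process's allocations.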
