Message-ID: <3026d8c5-9313-cb8b-91ef-09c02baf27db@amd.com>
Date: Tue, 30 Jan 2018 11:40:06 +0100
From: Christian König <christian.koenig@....com>
To: Michel Dänzer <michel@...nzer.net>,
Michal Hocko <mhocko@...nel.org>,
dri-devel@...ts.freedesktop.org, Roman Gushchin <guro@...com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
amd-gfx@...ts.freedesktop.org
Subject: Re: [RFC] Per file OOM badness
Am 30.01.2018 um 10:43 schrieb Michel Dänzer:
> [SNIP]
>> Would it be ok to hang onto potentially arbitrary mmget references
>> essentially forever? If that's ok I think we can do your process based
>> account (minus a few minor inaccuracies for shared stuff perhaps, but no
>> one cares about that).
> Honestly, I think you and Christian are overthinking this. Let's try
> charging the memory to every process which shares a buffer, and go from
> there.
My problem is that this needs to be bulletproof.
For example, imagine an application which allocates a lot of BOs, then
calls fork() and lets the parent process die. The file descriptor lives
on in the child process, but the memory is not accounted against the child.
Otherwise we would allow easy construction of denial-of-service attacks.
To avoid that I think we need to add something like new file_operations
callbacks which inform a file descriptor that it is starting to be used
in a new process or has stopped being used in a process.
Regards,
Christian.