Message-ID: <MWHPR1201MB0127CEE71F679F43BF0D25B6FDFA0@MWHPR1201MB0127.namprd12.prod.outlook.com>
Date:   Thu, 1 Feb 2018 06:13:20 +0000
From:   "He, Roger" <Hongbo.He@....com>
To:     Michal Hocko <mhocko@...nel.org>,
        "Koenig, Christian" <Christian.Koenig@....com>
CC:     "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>
Subject: RE: [PATCH] mm/swap: add function get_total_swap_pages to expose
 total_swap_pages

Hi Michal:

How about only
EXPORT_SYMBOL_GPL(total_swap_pages)?
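Concretely, that would be a one-line addition in mm/swapfile.c (a sketch against a 4.15-era tree, where total_swap_pages is an atomic_long_t; callers such as TTM would then read it with atomic_long_read()):

```diff
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@
 atomic_long_t total_swap_pages;
+EXPORT_SYMBOL_GPL(total_swap_pages);
```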

Thanks
Roger(Hongbo.He)

-----Original Message-----
From: He, Roger 
Sent: Wednesday, January 31, 2018 1:52 PM
To: 'Michal Hocko' <mhocko@...nel.org>; Koenig, Christian <Christian.Koenig@....com>
Cc: linux-mm@...ck.org; linux-kernel@...r.kernel.org; dri-devel@...ts.freedesktop.org
Subject: RE: [PATCH] mm/swap: add function get_total_swap_pages to expose total_swap_pages

> I do think you should completely ignore the size of the swap space. IMHO you should forbid further allocations when your current buffer storage cannot be reclaimed. So you need some form of feedback mechanism that would tell you: "Your buffers have grown too much". If you cannot do that then simply assume that you cannot swap at all rather than rely on having some portion of it for yourself.

If we always assume the swap size is zero, that is overkill: it restricts the GTT size the user can actually get, and I don't think it makes sense either.

> There are many other users of memory outside of your subsystem. Any scaling based on the 50% of resource belonging to me is simply broken.

And that is only a threshold to avoid overuse, not space really reserved for TTM at the start. In addition, in most cases TTM uses only a little swap, or none at all; only special test cases use more, and that is probably intentional.


Thanks
Roger(Hongbo.He)

-----Original Message-----
From: Michal Hocko [mailto:mhocko@...nel.org]
Sent: Tuesday, January 30, 2018 8:29 PM
To: Koenig, Christian <Christian.Koenig@....com>
Cc: He, Roger <Hongbo.He@....com>; linux-mm@...ck.org; linux-kernel@...r.kernel.org; dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH] mm/swap: add function get_total_swap_pages to expose total_swap_pages

On Tue 30-01-18 11:32:49, Christian König wrote:
> Am 30.01.2018 um 11:18 schrieb Michal Hocko:
> > On Tue 30-01-18 10:00:07, Christian König wrote:
> > > Am 30.01.2018 um 08:55 schrieb Michal Hocko:
> > > > On Tue 30-01-18 02:56:51, He, Roger wrote:
> > > > > Hi Michal:
> > > > > 
> > > > > We need an API to tell the TTM module how much swap
> > > > > space the system has in total.  Then TTM can use it to
> > > > > restrict how much swap it uses, to prevent triggering
> > > > > the OOM killer.  For now we set the threshold of swap
> > > > > size TTM uses as 1/2 * total size and leave the rest
> > > > > for others to use.
> > > > Why do you need so much memory? Are you going to use TB of memory 
> > > > on large systems? What about memory hotplug when the memory is added/released?
> > > For graphics and compute applications on GPUs it isn't unusual to 
> > > use large amounts of system memory.
> > > 
> > > Our standard policy in TTM is to allow 50% of system memory to be 
> > > pinned for use with GPUs (the hardware can't do page faults).
> > > 
> > > When that limit is exceeded (or the shrinker callbacks tell us to 
> > > make room) we wait for any GPU work to finish and copy buffer 
> > > content into a shmem file.
> > > 
> > > This copy into a shmem file can easily trigger the OOM killer if 
> > > there isn't any swap space left and that is something we want to avoid.
> > > 
> > > So what we want to do is to apply this 50% rule to swap space as 
> > > well and deny allocation of buffer objects when it is exceeded.
> > How does that help when the rest of the system might eat swap?
> 
> Well it doesn't, but that is not the problem here.
> 
> When an application keeps calling malloc() it sooner or later is 
> confronted with an OOM killer.
> 
> But when it keeps for example allocating OpenGL textures the 
> expectation is that this sooner or later starts to fail because we run 
> out of memory and not trigger the OOM killer.

There is nothing like running out of memory and not triggering the OOM killer. You can make a _particular_ allocation bail out without the oom killer. Just use __GFP_NORETRY. But that doesn't make much difference when you have already depleted your memory and live with the bare remains. Any desperate soul trying to get its memory will simply trigger the OOM.
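Michal's suggestion maps to setting __GFP_NORETRY on the particular allocation so it fails fast under pressure instead of invoking the OOM killer. A kernel-side sketch (ttm_alloc_page is an illustrative name, not an actual TTM function):

```c
#include <linux/gfp.h>

/* Illustrative: give up after lightweight reclaim rather than
 * retrying until the OOM killer fires; the caller must handle
 * a NULL return by denying the buffer allocation. */
static struct page *ttm_alloc_page(void)
{
	return alloc_pages(GFP_KERNEL | __GFP_NORETRY, 0);
}
```

As Michal notes, this only changes the behavior of this one allocation; it does not protect the rest of the system once memory is already depleted.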

> So what we do is to allow the application to use all of video memory + 
> a certain amount of system memory + swap space as last resort fallback (e.g.
> when you Alt+Tab from your full screen game back to your browser).
> 
> The problem we try to solve is that we haven't limited the use of swap 
> space somehow.

I do think you should completely ignore the size of the swap space. IMHO you should forbid further allocations when your current buffer storage cannot be reclaimed. So you need some form of feedback mechanism that would tell you: "Your buffers have grown too much". If you cannot do that then simply assume that you cannot swap at all rather than rely on having some portion of it for yourself. There are many other users of memory outside of your subsystem. Any scaling based on the 50% of resource belonging to me is simply broken.
--
Michal Hocko
SUSE Labs
