Date:	Fri, 15 Feb 2013 17:48:28 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Rob van der Heij <rvdheij@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>, Hugh Dickins <hughd@...gle.com>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: fadvise: Drain all pagevecs if POSIX_FADV_DONTNEED
 fails to discard all pages

On Fri 15-02-13 17:14:10, Rob van der Heij wrote:
> On 15 February 2013 12:04, Michal Hocko <mhocko@...e.cz> wrote:
> > On Thu 14-02-13 12:39:26, Andrew Morton wrote:
> >> On Thu, 14 Feb 2013 12:03:49 +0000
> >> Mel Gorman <mgorman@...e.de> wrote:
> >>
> >> > Rob van der Heij reported the following (paraphrased) in private mail.
> >> >
> >> >     The scenario is that I want to keep backups from filling up the
> >> >     page cache and purging stuff that is more likely to be used again
> >> >     (this is with s390x Linux on z/VM, so I don't give it so much
> >> >     memory that we stop caring). So I have something with LD_PRELOAD
> >> >     that intercepts the close() call (from tar, in this case) and
> >> >     issues a posix_fadvise() just before closing the file.
> >> >
> >> >     This mostly works, except for small files (less than 14 pages)
> >> >     that remain in page cache after the fact.
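
(For reference, such a shim fits in a handful of lines of C. This is a
sketch of the technique described above, not Rob's actual code; the file
name, the build line, and the fsync before the fadvise are illustrative
assumptions:

  /* fadv_close.c -- hypothetical LD_PRELOAD shim: drop a file's pages
   * from the page cache when it is closed.
   * Build: gcc -shared -fPIC -o fadv_close.so fadv_close.c -ldl
   * Use:   LD_PRELOAD=./fadv_close.so tar cf /backup/data.tar /data
   */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <fcntl.h>
  #include <unistd.h>

  int close(int fd)
  {
          static int (*real_close)(int);

          if (!real_close)
                  real_close = (int (*)(int))dlsym(RTLD_NEXT, "close");

          /* POSIX_FADV_DONTNEED does not discard dirty pages, so write
           * them back first; both calls fail harmlessly on sockets and
           * pipes. */
          fsync(fd);
          posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

          return real_close(fd);
  }

Because the preload applies only to the wrapped program -- tar here --
the rest of the system's page cache is untouched.)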
> >>
> >> Sigh.  We've had the "my backups swamp pagecache" thing for 15 years
> >> and it's still happening.
> >>
> >> It should be possible nowadays to toss your backup application into a
> >> container to constrain its pagecache usage.  So we can type
> >>
> >>       run-in-a-memcg -m 200MB /my/backup/program
> >>
> >> and voila.  Does such a script exist and work?
> >
> > The script would be as simple as:
> > cgcreate -g memory:backups/`whoami`                    # per-user subgroup
> > cgset -r memory.limit_in_bytes=200M backups/`whoami`   # hard cap
> > cgexec -g memory:backups/`whoami` /my/backup/program   # run confined
> >
> > It just expects that the admin sets up a backups group which allows the
> > user to create a subgroup (w permission on the directory) and probably
> > sets some reasonable cap for all backups.
> 
> Cool. This is promising enough to bridge my skills gap. It appears to
> work as promised, but I would have to understand why it takes
> significantly more CPU than my ugly posix_fadvise() call on close...

I would guess that a lot of reclaim is the answer. Note that each
memcg has its own LRU, and the limit is enforced by per-group
reclaim.
I wouldn't expect the difference to be very big, though. What do you
mean by significantly more?
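
As an aside, the run-in-a-memcg wrapper Andrew asked for doesn't strictly
need libcgroup; the sketch below talks to the cgroup v1 memory controller
files directly. The mount point, the group naming, and the minimal error
handling are illustrative assumptions, not an existing tool:

  /* run-in-a-memcg.c -- hypothetical sketch: run a program with its
   * page cache confined to a throwaway memcg.
   * Build: gcc -o run-in-a-memcg run-in-a-memcg.c
   * Use:   ./run-in-a-memcg 200M /my/backup/program
   */
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  static void put(const char *dir, const char *file, const char *val)
  {
          char path[512];
          FILE *f;

          snprintf(path, sizeof(path), "%s/%s", dir, file);
          f = fopen(path, "w");
          if (!f || fprintf(f, "%s\n", val) < 0) {
                  perror(path);
                  _exit(1);
          }
          fclose(f);
  }

  int main(int argc, char **argv)
  {
          char dir[256], pid[32];

          if (argc < 3) {
                  fprintf(stderr, "usage: %s <limit> <prog> [args]\n",
                          argv[0]);
                  return 1;
          }

          /* One throwaway group per invocation. */
          snprintf(dir, sizeof(dir),
                   "/sys/fs/cgroup/memory/backup-%d", (int)getpid());
          if (mkdir(dir, 0755)) {
                  perror(dir);
                  return 1;
          }

          /* Hard limit, e.g. "200M"; per-group reclaim keeps the
           * group's pages below it. */
          put(dir, "memory.limit_in_bytes", argv[1]);

          /* Move ourselves into the group; the exec'ed program
           * inherits the membership. */
          snprintf(pid, sizeof(pid), "%d", (int)getpid());
          put(dir, "tasks", pid);

          execvp(argv[2], argv + 2);
          perror(argv[2]);
          return 1;
  }

The empty group is left behind on exit (an rmdir once it is empty would
clean it up); that bookkeeping, and the permission setup, is what the
cgcreate line above handles.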

-- 
Michal Hocko
SUSE Labs