Message-ID: <20170301174955.GB20360@dhcp22.suse.cz>
Date:   Wed, 1 Mar 2017 18:49:56 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Shaohua Li <shli@...com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Kernel-team@...com, minchan@...nel.org, hughd@...gle.com,
        hannes@...xchg.org, riel@...hat.com, mgorman@...hsingularity.net,
        akpm@...ux-foundation.org
Subject: Re: [PATCH V5 6/6] proc: show MADV_FREE pages info in smaps

On Wed 01-03-17 09:37:10, Shaohua Li wrote:
> On Wed, Mar 01, 2017 at 02:36:24PM +0100, Michal Hocko wrote:
> > On Fri 24-02-17 13:31:49, Shaohua Li wrote:
> > > Show MADV_FREE pages info for each VMA in smaps. The interface is
> > > for diagnostic or monitoring purposes; userspace can use it to
> > > understand what happens in the application. Since userspace can
> > > dirty MADV_FREE pages without the kernel noticing, this interface
> > > is the only place we can get accurate accounting info about
> > > MADV_FREE pages.
> > 
> > I have just gotten around to testing this patchset and noticed
> > something a bit surprising:
> > 
> > madvise(mmap(len), len, MADV_FREE)
> > Size:             102400 kB
> > Rss:              102400 kB
> > Pss:              102400 kB
> > Shared_Clean:          0 kB
> > Shared_Dirty:          0 kB
> > Private_Clean:    102400 kB
> > Private_Dirty:         0 kB
> > Referenced:            0 kB
> > Anonymous:        102400 kB
> > LazyFree:         102368 kB
> > 
> > It took me some time to realize that LazyFree is not accurate because
> > there are still pages on the per-cpu lru_lazyfree_pvecs. I believe this
> > is an implementation detail which shouldn't be visible to userspace.
> > Should we simply drain the pagevec? A crude way would be to simply
> > call lru_add_drain_all after we are done with the given range. We could
> > also make this lru_lazyfree_pvecs specific, but I am not sure that is
> > worth the additional code.
> 
> Minchan's original patch included a drain of the pvec. I dropped it because I
> think it's not worth the effort. There isn't much memory in the per-cpu vecs.

but multiply that by the number of CPUs.
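
For reference, the 32 kB gap in the output above (102400 - 102368) is eight
4 kB pages still parked in per-cpu pagevecs; with PAGEVEC_SIZE at 14, each
CPU can hold back up to 56 kB. A minimal userspace sketch of the observation
follows; the 100 MB size and the crude smaps scan are illustrative
assumptions, not part of the patch, and the LazyFree field only exists on a
kernel with this series applied:

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 100UL << 20;	/* 100 MB, as in the output above */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		char line[256];
		FILE *f;

		if (p == MAP_FAILED)
			return 1;
		memset(p, 1, len);		/* fault the pages in (Rss/Anonymous) */
		if (madvise(p, len, MADV_FREE))	/* needs a 4.5+ kernel */
			return 1;

		/* Without a drain, LazyFree lags Size by whatever sits in pvecs. */
		f = fopen("/proc/self/smaps", "r");
		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			if (strstr(line, "LazyFree:"))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}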

> As you said, I doubt this is noticeable to userspace.

maybe I wasn't clear enough. I've noticed it, and I expect others will as
well. We really shouldn't leak implementation details like that, so I
_believe_ this should be fixed. Draining all pagevecs is rather coarse,
but it is the simplest thing to do. If you do not want to fold this
into the original patch, I can send a standalone one. Or do you have any
concerns about draining?
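
For concreteness, the crude variant would be something along these lines
(the hunk context in mm/madvise.c is an assumption, sketched against a tree
with this series applied, not an actual patch):

	--- a/mm/madvise.c
	+++ b/mm/madvise.c
	@@ ... @@ static int madvise_free(struct vm_area_struct *vma,
	 	madvise_free_page_range(&tlb, vma, start, end);
	+	/*
	+	 * Flush the per-cpu lru_lazyfree_pvecs so the pages reach the
	+	 * LRU, and hence the LazyFree counter in smaps, before
	+	 * madvise() returns. lru_add_drain_all() schedules work on
	+	 * every CPU, which is what makes it "rather coarse".
	+	 */
	+	lru_add_drain_all();
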
-- 
Michal Hocko
SUSE Labs
