Date:   Sun, 10 May 2020 18:24:58 -0700 (PDT)
From:   David Rientjes <>
To:     Guilherme Piccoli <>
cc:     "Guilherme G. Piccoli" <>,
        Andrew Morton <>,,, Gavin Guo <>,
        Mel Gorman <>
Subject: Re: [PATCH] mm, compaction: Indicate when compaction is manually
 triggered by sysctl

On Fri, 8 May 2020, Guilherme Piccoli wrote:

> On Fri, May 8, 2020 at 3:31 PM David Rientjes <> wrote:
> > It doesn't make sense because it's only being done here for the entire
> > system, there are also per-node sysfs triggers so you could do something
> > like iterate over the nodemask of all nodes with memory and trigger
> > compaction manually and then nothing is emitted to the kernel log.
> >
> > There is new statsfs support that Red Hat is proposing that can be used
> > for things like this.  It currently only supports KVM statistics but
> > adding MM statistics is something that would be a natural extension and
> > avoids polluting both the kernel log and /proc/vmstat.
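
[For context, the two trigger interfaces being compared above can be exercised as below. This is a sketch only: it assumes root and a kernel built with compaction support, the sysctl/sysfs paths are the standard kernel interfaces, and the helper name is illustrative, not anything from the patch under discussion.]

```python
import glob

def compact_all_memory_nodes():
    # Iterate the per-node sysfs triggers: writing "1" to
    # /sys/devices/system/node/nodeN/compact compacts that node's
    # memory, much as writing to /proc/sys/vm/compact_memory does
    # for the whole system. Returns the paths actually triggered.
    triggered = []
    for path in sorted(glob.glob("/sys/devices/system/node/node*/compact")):
        try:
            with open(path, "w") as f:
                f.write("1")
            triggered.append(path)
        except OSError:
            # Needs root; the file is absent without compaction/NUMA support.
            pass
    return triggered
```

[As the review notes, nothing is emitted to the kernel log when compaction is triggered this way, which is the asymmetry being discussed.]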
> Thanks for the review. Is this what you're talking about [0]? Very interesting!

> Also, I agree about the per-node compaction, it's a good point. But at
> the same time, having information on the number of manual
> compactions triggered is interesting, at least for some users. What if
> we add that as a per-node stat in zoneinfo?

The kernel log is not preferred for this (or for drop_caches, really) because 
the volume of messages can cause important information to be lost.  We don't 
really gain anything by printing that someone manually triggered 
compaction; they could just write to the kernel log themselves if they 
really wanted to.  The reverse is not true: we can't suppress your kernel 
message with this patch.

Instead, a statsfs-like approach could be used to indicate when this has 
happened, with no chance of losing events because they scrolled off the 
kernel log.  It has the added benefit of not requiring the entire log to 
be parsed for such events.
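
[For reference, compaction outcomes are already exposed as counters in /proc/vmstat (compact_stall, compact_success, compact_fail, and so on), and reading them sketches what a counter-style consumer looks like compared to grepping the log. The helper below is illustrative only, not part of any proposed patch:]

```python
def read_compaction_counters(path="/proc/vmstat"):
    # /proc/vmstat is a list of "<name> <value>" lines; pick out the
    # compaction-related counters and return them as a dict of ints.
    counters = {}
    try:
        with open(path) as f:
            for line in f:
                name, _, value = line.partition(" ")
                if name.startswith("compact_"):
                    counters[name] = int(value)
    except (FileNotFoundError, PermissionError):
        # Non-Linux or restricted environment: return an empty dict.
        pass
    return counters
```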
