Date:	Thu, 25 Oct 2012 15:40:09 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Anton Vorontsov <anton.vorontsov@...aro.org>
Cc:	Mel Gorman <mgorman@...e.de>, Pekka Enberg <penberg@...nel.org>,
	Leonid Moiseichuk <leonid.moiseichuk@...ia.com>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	Bartlomiej Zolnierkiewicz <b.zolnierkie@...sung.com>,
	John Stultz <john.stultz@...aro.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, linaro-kernel@...ts.linaro.org,
	patches@...aro.org, kernel-team@...roid.com,
	linux-man@...r.kernel.org
Subject: Re: [RFC v2 0/2] vmevent: A bit reworked pressure attribute + docs +
 man page

Hi Anton,

On Mon, Oct 22, 2012 at 04:19:28AM -0700, Anton Vorontsov wrote:
> Hi all,
> 
> So this is the second RFC. The main change is that I decided to go with
> discrete levels of the pressure.

I am very happy with that, because I have already yelled about it several times.

> 
> When I started writing the man page, I had to describe the 'reclaimer
> inefficiency index', and while doing this I realized that I'm describing
> how the kernel is doing the memory management, which we try to avoid in
> the vmevent. And applications don't really care about these details:
> reclaimers, their inefficiency indexes, scanning window sizes, priority
> levels, etc. -- it's all "not interesting", purely kernel internals. So
> I guess Mel Gorman was right, we need some sort of levels.
> 
> What applications (well, activity managers) are really interested in is
> this:
> 
> 1. Do we sacrifice resources for new memory allocations (e.g. the file
>    cache)?
> 2. Does the cost of new memory allocations become too high, so that the
>    system suffers because of it?
> 3. Are we about to OOM soon?

Good, but I think 3 is never easy.
Still, an early notification is better than a late one that ends up killing
someone.

> 
> And here are the answers:
> 
> 1. VMEVENT_PRESSURE_LOW
> 2. VMEVENT_PRESSURE_MED
> 3. VMEVENT_PRESSURE_OOM
> 
> There is no "high" pressure, since I really don't see any definition of
> it, but it's possible to introduce new levels without breaking the ABI.
> The levels are described in more detail in the patches, and the stuff is
> still tunable, but now via sysctls, not the vmevent_fd() call itself
> (i.e. we don't need to rebuild applications to adjust the window size or
> other mm "details").
> 
> What I couldn't fix in this RFC is making vmevent_{scanned,reclaimed}
> stuff per-CPU (there's a comment describing the problem with this). But I
> made it lockless and tried to make it very lightweight (plus I moved the
> vmevent_pressure() call to a more "cold" path).

Your description doesn't include why we need the new vmevent_fd(2) syscall.
Of course, it's very flexible and has the potential to grow new VM knobs
easily, but the only thing we are about to use now is VMEVENT_ATTR_PRESSURE.
Are there any other use cases for swap or free attributes, or potential
users? Adding vmevent_fd() without them is rather overkill.

And I want to avoid timer-based polling in vmevent if possible.
KOSAKI's mem_notify doesn't use such a timer.

I don't object, but we need a rationale for adding a new system call, which
will have to be maintained forever once we add it.
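
What I'd like from userspace's point of view is a plain blocking wait on
the fd, roughly like the sketch below (hypothetical: the vmevent_fd()
setup and the exact event layout come from this RFC series only, so treat
the read side as pseudo-code):

	#include <poll.h>
	#include <unistd.h>

	/* 'efd' is the fd returned by vmevent_fd() after registering a
	 * VMEVENT_ATTR_PRESSURE attribute; the setup is elided here. */
	static int wait_for_pressure_event(int efd, char *buf, size_t len)
	{
		struct pollfd pfd = { .fd = efd, .events = POLLIN };

		/* Block until the kernel signals a pressure change: no
		 * periodic timer in userspace, same as mem_notify. */
		if (poll(&pfd, 1, -1) != 1)
			return -1;

		/* The event body (attribute type + level) comes via read();
		 * parsing it depends on the proposed vmevent ABI. */
		return read(efd, buf, len) > 0 ? 0 : -1;
	}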

> 
> Thanks,
> Anton.
> 

-- 
Kind regards,
Minchan Kim
