Message-ID: <4c067b75e06aadd34eff5b60fc7c59967aa30809.camel@redhat.com>
Date: Thu, 05 Dec 2024 15:33:46 +0100
From: Gabriele Monaco <gmonaco@...hat.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Ingo Molnar	
 <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>, Andrew Morton	
 <akpm@...ux-foundation.org>, Mel Gorman <mgorman@...e.de>,
 linux-mm@...ck.org, 	linux-kernel@...r.kernel.org
Cc: Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
	 <vincent.guittot@...aro.org>
Subject: Re: [PATCH] sched: Move task_mm_cid_work to mm delayed work

The patch is fundamentally broken: I somehow dropped the line in
task_mm_cid_work that calls schedule_delayed_work to re-arm the work
item, so it runs only once. Before sending a v2, however, I'd like to
get some more insight into the requirements of this function.

The current behaviour upstream is to call task_mm_cid_work for the
task running when the scheduler tick fires. The function checks that
we don't run too often for the same mm, but it seems possible that a
process with a short runtime would rarely be running when the tick
fires, so its mm would rarely get scanned.

The behaviour imposed by this patch (at least the intended one) is to
run task_mm_cid_work with the configured periodicity (plus scheduling
latency) for each active mm.
This behaviour seems more predictable to me, but is that even required
for rseq, or is it overkill?

In other words, was the tick chosen out of simplicity or is there some
property that has to be preserved?

P.S. I ran the rseq self-tests on both this and the previous patch
(both broken) and saw no failures.

Thanks,
Gabriele

