Message-ID: <c35722db-bb79-7e09-ac02-e82ab827e1e3@gmail.com>
Date:   Sat, 7 Sep 2019 23:12:59 +0200
From:   Viktor Rosendahl <viktor.rosendahl@...il.com>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 1/4] ftrace: Implement fs notification for
 tracing_max_latency

On 9/6/19 4:17 PM, Joel Fernandes wrote:
> On Thu, Sep 05, 2019 at 03:25:45PM +0200, Viktor Rosendahl wrote:
<clip>
>> +
>> +__init static int latency_fsnotify_init(void)
>> +{
>> +	fsnotify_wq = alloc_workqueue("tr_max_lat_wq",
>> +				      WQ_UNBOUND | WQ_HIGHPRI, 0);
>> +	if (!fsnotify_wq) {
>> +		pr_err("Unable to allocate tr_max_lat_wq\n");
>> +		return -ENOMEM;
>> +	}
> 
> Why not just use the system workqueue instead of adding another workqueue?
> 

For the latency-collector to work properly in the worst case, when a
new latency occurs immediately, the fsnotify event must be received in
less time than the threshold is set to. If we are always slower, we
will always lose certain latencies.

My intention was to minimize latency in some important cases, so that
user space receives the notification sooner rather than later.

There doesn't seem to be any system workqueue with both WQ_UNBOUND and
WQ_HIGHPRI set. My thinking was that WQ_UNBOUND might help reduce the
latency in those cases.

If we use:

queue_work(system_highpri_wq, &tr->fsnotify_work);

then the work will (almost) always execute on the submitting CPU. If
we are unlucky, that CPU could be too busy, even though another CPU in
the system would have been able to process the work soon enough.

queue_work_on() could be used to queue the work on a specific CPU, but
it seems difficult to select the right one.
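
Roughly, what I have in mind looks like the sketch below. It is only an
illustration, not the exact patch: the allocation mirrors the hunk
quoted above, the fsnotify_work member is the one added to struct
trace_array by this series, and the helper name latency_fsnotify is
just used here for the example.

/* Sketch: a dedicated WQ_UNBOUND | WQ_HIGHPRI workqueue lets the
 * scheduler run the notification work on any idle CPU instead of
 * pinning it to the (possibly busy) submitting CPU.
 */
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *fsnotify_wq;

__init static int latency_fsnotify_init(void)
{
	fsnotify_wq = alloc_workqueue("tr_max_lat_wq",
				      WQ_UNBOUND | WQ_HIGHPRI, 0);
	return fsnotify_wq ? 0 : -ENOMEM;
}

static void latency_fsnotify(struct trace_array *tr)
{
	if (!fsnotify_wq)
		return;
	/* Unbound: any idle CPU may pick this up, not only the current one. */
	queue_work(fsnotify_wq, &tr->fsnotify_work);
}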

best regards,

Viktor
