Date:   Sat, 20 May 2017 05:26:46 -0300
From:   Marcelo Tosatti <mtosatti@...hat.com>
To:     Christoph Lameter <cl@...ux.com>
Cc:     Luiz Capitulino <lcapitulino@...hat.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Rik van Riel <riel@...hat.com>,
        Linux RT Users <linux-rt-users@...r.kernel.org>,
        cmetcalf@...lanox.com
Subject: Re: [patch 2/2] MM: allow per-cpu vmstat_threshold and vmstat_worker
 configuration

On Fri, May 19, 2017 at 12:13:26PM -0500, Christoph Lameter wrote:
> On Fri, 19 May 2017, Marcelo Tosatti wrote:
> 
> > Use-case: a realtime application on an isolated core which for some reason
> > updates vm statistics.
> 
> OK, that is already happening only every 2 seconds by default, and that
> interval is configurable via the vmstat_interval proc setting.
> 
> > > Just a measurement of vmstat_worker. Pointless.
> >
> > Shouldn't the focus be on general scenarios rather than particular
> > use cases, so that the solution covers a wider range of them?
> 
> Yes indeed, and as far as I can tell the wider use cases are covered. I am
> not sure that anything more is required here.
> 
> > The situation as I see it is as follows:
> >
> > Your point of view is: the set of applications on an "isolated CPU" must
> > not update vm statistics, otherwise they pay the vmstat_update cost:
> >
> >      kworker/5:1-245   [005] ....1..   673.454295: workqueue_execute_start: work struct ffffa0cf6e493e20: function vmstat_update
> >      kworker/5:1-245   [005] ....1..   673.454305: workqueue_execute_end: work struct ffffa0cf6e493e20
> >
> > That's 10us, in this example.
> 
> Well, with a decent CPU that is 3 usec, and it occurs infrequently, on the
> order of once per multiple seconds.
> 
> > So if you want to customize a realtime setup whose code updates vm
> > statistics, you are dead. You have to avoid any system call which could
> > possibly update vm statistics (now and in future kernel versions).
> 
> You are already dead because you allow IPIs and other kernel processing
> which create far more overhead. I still fail to see the point.
> 
> > The point is that these vmstat updates are rare. From
> > http://www.7-cpu.com/cpu/Haswell.html:
> >
> > RAM Latency = 36 cycles + 57 ns (3.4 GHz i7-4770)
> > RAM Latency = 62 cycles + 100 ns (3.6 GHz E5-2699 dual)
> >
> > Let's round that to 100ns = 0.1us.
> 
> That depends on the kernel functionality used.
> 
> > You need 100 vmstat updates (all misses to RAM, the worst possible case)
> > to add up to the 10us cost of one run of the batching version
> > (100 x 0.1us = 10us).
> 
> The batching version runs only every couple of seconds, if at all.
> 
> > But that's not the point. The point is the 10us interruption to the
> > execution of the realtime app (which can mean either that your current
> > deadline requirements are not met, or that another application with a
> > stricter latency requirement can't be used).
> 
> OK, then I think you first need to get rid of the IPIs and the other OS
> activity that you have going on.

I'll measure the cost of all IPIs in the system to confirm that
vmstat_update's cost is larger than the cost of any IPI.
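
For example, one way to measure both sides is with the same ftrace
tracepoints used for the trace above. This is only a sketch: it assumes
debugfs is mounted at /sys/kernel/debug, and that the kernel exposes the
irq_vectors events (x86; ARM exposes ipi:* instead):

    # cd /sys/kernel/debug/tracing
    # echo 1 > events/workqueue/workqueue_execute_start/enable
    # echo 1 > events/workqueue/workqueue_execute_end/enable
    # echo 1 > events/irq_vectors/enable   # entry/exit events for IPI vectors
    # echo 1 > tracing_on
    # cat trace_pipe | grep -E 'vmstat_update|call_function|reschedule'

The deltas between the matching entry/exit (and execute_start/execute_end)
timestamps then give the per-event cost directly.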

> > So why are you against integrating this simple, isolated patch, which
> > does not affect how the current logic works?
> 
> Frankly, the argument does not make sense. Vmstat updates occur very
> infrequently (probably even less frequently than your IPIs and the other OS
> activity that also causes additional latencies you seem willing to tolerate).
> 
> And you can configure the interval of vmstat updates freely. Why not set
> vmstat_interval to 60 seconds instead of 2 and try? Is that rare
> enough?

Not rare enough. Never is rare enough.
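
(For reference, the interval Christoph refers to is exposed as the
vm.stat_interval sysctl, in seconds, so the 60-second experiment would be:

    # echo 60 > /proc/sys/vm/stat_interval
    # sysctl vm.stat_interval
    vm.stat_interval = 60

But as argued above, for a hard realtime deadline even a single deferred
vmstat_update on the isolated core is one too many.)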
