Message-ID: <576BAE3F.5030603@linaro.org>
Date:	Thu, 23 Jun 2016 11:39:11 +0200
From:	Daniel Lezcano <daniel.lezcano@...aro.org>
To:	Thomas Gleixner <tglx@...utronix.de>
Cc:	nicolas.pitre@...aro.org, shreyas@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org, peterz@...radead.org,
	rafael@...nel.org, vincent.guittot@...aro.org
Subject: Re: [PATCH V7] irq: Track the interrupt timings

On 06/23/2016 10:41 AM, Thomas Gleixner wrote:
> On Fri, 17 Jun 2016, Daniel Lezcano wrote:
>> The interrupt framework gives a lot of information about each interrupt.
>>
>> It does not keep track of when those interrupts occur though.
>>
>> This patch provides a means to record the elapsed time between
>> successive interrupt occurrences in a per-IRQ per-CPU circular buffer,
>> to help predict the next occurrence using a statistical model.
>>
>> A new function is added to browse the different interrupts and
>> retrieve the timing information stored for them.
>>
>> A static key is introduced so that when irq prediction is switched
>> off at runtime, the overhead is reduced to nearly zero. The irq
>> timings are expected to be used by several sub-systems; for this
>> reason the static key is a reference counter, so when the last user
>> releases the irq timings, the measurement is effectively deactivated.
>
> Before merging this I really have to ask a few more questions. I'm a bit
> worried about the usage site of this. It's going to iterate over all
> interrupts in the system to do a next interrupt prediction. On larger machines
> that's going to be quite some work and you touch a gazillion cache lines
> and many of them just to figure out that nothing happened.
>
> Is it really required to do this per interrupt rather than providing per cpu
> statistics of interrupts which arrived in the last X seconds or whatever
> timeframe is relevant for this?

Perhaps I am misunderstanding, but if the statistics are kept per cpu 
without tracking per irq timings, it is not possible to extract a 
repeating pattern for each irq and make an accurate prediction. For 
example, two devices firing every 10ms and every 7ms on the same cpu 
produce a merged interval stream where neither period is visible.

Today, the code stores per cpu and per irq timings, and the usage is to 
compute the next irq event by taking the earliest predicted irq event 
on the current cpu.
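
To make that concrete, here is a minimal sketch of that selection loop 
(NR_IRQS_SKETCH, predict_next_event() and next_irq_event() are 
illustrative names, not the actual API of the patch):

/* Sketch: take the earliest predicted irq event on the current cpu.
 * All names are illustrative; none of this is the patch's actual API. */
#include <limits.h>

#define NR_IRQS_SKETCH	32
#define NO_EVENT	ULLONG_MAX

/* hypothetical per-irq prediction computed from the recorded timings */
unsigned long long predict_next_event(int irq, int cpu);

unsigned long long next_irq_event(int cpu)
{
	unsigned long long next, earliest = NO_EVENT;
	int irq;

	for (irq = 0; irq < NR_IRQS_SKETCH; irq++) {
		next = predict_next_event(irq, cpu);
		if (next < earliest)
			earliest = next;
	}
	return earliest;
}

The per-irq per-cpu storage itself hangs off irq_desc, as this hunk of 
the patch shows: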

@@ -51,6 +52,9 @@ struct irq_desc {
         struct irq_data         irq_data;
         unsigned int __percpu   *kstat_irqs;
         irq_flow_handler_t      handle_irq;
+#ifdef CONFIG_IRQ_TIMINGS
+       struct irq_timings __percpu *timings;
+#endif
  #ifdef CONFIG_IRQ_PREFLOW_FASTEOI
         irq_preflow_handler_t   preflow_handler;
  #endif
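
For reference, the structure behind that __percpu pointer could look 
roughly like the sketch below: a circular buffer as described in the 
changelog, where the buffer size and the field names are my assumptions:

/* Sketch of the per-irq per-cpu circular buffer from the changelog.
 * The size and the field names are assumptions, not the real layout. */
#define IRQ_TIMINGS_SIZE	32	/* power of two, the index wraps cheaply */

struct irq_timings {
	unsigned long long values[IRQ_TIMINGS_SIZE];	/* ns between occurrences */
	unsigned int count;				/* monotonic write index */
};

/* record the elapsed time since the previous occurrence of this irq */
static void irq_timings_record(struct irq_timings *t,
			       unsigned long long now,
			       unsigned long long *last)
{
	if (*last)
		t->values[t->count++ & (IRQ_TIMINGS_SIZE - 1)] = now - *last;
	*last = now;
}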

If we step back and look at the potential users of this framework, we have:

  - mobile: by nature the number of interrupt lines is small and the 
devices are "slow"

  - desktop and laptop: only a few interrupts really interest us, 
ethernet and ssd (the other ones are rare, or ignored like timers or IPIs)

  - server: the number of interrupt lines is bigger, but not by much

  - other big systems: I don't know

Usually, servers and super-sized systems want full performance and low 
latency. For this reason the kernel is configured with a periodic tick, 
which makes the next-event prediction algorithm superfluous, especially 
when the latency is set to 0. So I don't think the irq timings + next 
irq event code path will ever be used in this case.

As you mentioned, there are some parts we can make evolve and optimize, 
like avoiding lookups on a cpu whose irq events are empty.
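
For instance, something along the lines of the sketch below could help: 
a per-cpu bitmap of the irqs which actually recorded new timings, so the 
scan skips the empty ones (entirely hypothetical, none of this is in the 
patch):

/* Sketch: skip irqs with no recorded events using a per-cpu bitmap.
 * Entirely hypothetical; nothing of this exists in the patch. */
#include <limits.h>

#define NO_EVENT	ULLONG_MAX

/* hypothetical per-irq prediction, as in the earlier sketch */
unsigned long long predict_next_event(int irq, int cpu);

/* bit n set => irq n recorded new timings on this cpu since last scan */
static unsigned long long irq_active_bitmap;

unsigned long long next_irq_event_sparse(int cpu)
{
	unsigned long long pending = irq_active_bitmap;
	unsigned long long next, earliest = NO_EVENT;

	while (pending) {
		int irq = __builtin_ctzll(pending);	/* lowest set bit */

		pending &= pending - 1;			/* clear that bit */
		next = predict_next_event(irq, cpu);
		if (next < earliest)
			earliest = next;
	}
	return earliest;
}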

-- 
  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs

Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog
