Date:   Wed, 9 Jan 2019 11:59:44 -0800
From:   Matthew Wilcox <>
To:     Waiman Long <>
Cc:     Andrew Morton <>,
        Alexey Dobriyan <>,
        Kees Cook <>,
        Thomas Gleixner <>,
        Davidlohr Bueso <>,
        Miklos Szeredi <>,
        Daniel Colascione <>,
        Dave Chinner <>,
        Randy Dunlap <>
Subject: Re: [PATCH v2 0/4] /proc/stat: Reduce irqs counting performance

On Wed, Jan 09, 2019 at 01:54:36PM -0500, Waiman Long wrote:
> If you read patch 4, you can see that quite a bit of CPU time was
> spent looking up the radix tree to locate the IRQ descriptor for each
> of the interrupts. That overhead will still be there even if I use
> percpu counters, so using percpu counters alone won't be as performant
> as this patch or my previous v1 patch.

Hm, if that's the overhead, then the radix tree (and the XArray) have
APIs that can reduce that overhead.  Right now, there's only one caller
of kstat_irqs_usr() (the proc code).  If we change that to fill an array
instead of returning a single value, it can look something like this:

void kstat_irqs_usr(unsigned int *sums)
{
	XA_STATE(xas, &irq_descs, 0);
	struct irq_desc *desc;
	int cpu;

	xas_for_each(&xas, desc, ULONG_MAX) {
		unsigned int sum = 0;

		if (desc->kstat_irqs)
			for_each_possible_cpu(cpu)
				sum += *per_cpu_ptr(desc->kstat_irqs, cpu);

		sums[xas.xa_index] = sum;
	}
}
