Message-ID: <20190109195944.GQ6310@bombadil.infradead.org>
Date:   Wed, 9 Jan 2019 11:59:44 -0800
From:   Matthew Wilcox <willy@...radead.org>
To:     Waiman Long <longman@...hat.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Alexey Dobriyan <adobriyan@...il.com>,
        Kees Cook <keescook@...omium.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        Davidlohr Bueso <dave@...olabs.net>,
        Miklos Szeredi <miklos@...redi.hu>,
        Daniel Colascione <dancol@...gle.com>,
        Dave Chinner <david@...morbit.com>,
        Randy Dunlap <rdunlap@...radead.org>
Subject: Re: [PATCH v2 0/4] /proc/stat: Reduce irqs counting performance
 overhead

On Wed, Jan 09, 2019 at 01:54:36PM -0500, Waiman Long wrote:
> If you read patch 4, you can see that quite a few CPU cycles were
> spent looking up the radix tree to locate the IRQ descriptor for each of
> the interrupts. That overhead will still be there even if I use percpu
> counters. So using percpu counters alone won't be as performant as this
> patch or my previous v1 patch.

Hm, if that's the overhead, then the radix tree (and the XArray) have
APIs that can reduce that overhead.  Right now, there's only one caller
of kstat_irqs_usr() (the proc code).  If we change that to fill an array
instead of returning a single value, it can look something like this:

void kstat_irqs_usr(unsigned int *sums)
{
	XA_STATE(xas, &irq_descs, 0);
	struct irq_desc *desc;
	int cpu;

	/* One walk over the XArray instead of one lookup per interrupt */
	rcu_read_lock();
	xas_for_each(&xas, desc, ULONG_MAX) {
		unsigned int sum = 0;

		if (!desc->kstat_irqs)
			continue;
		for_each_possible_cpu(cpu)
			sum += *per_cpu_ptr(desc->kstat_irqs, cpu);

		sums[xas.xa_index] = sum;
	}
	rcu_read_unlock();
}
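On the proc side, the change would then be roughly: allocate an array of
nr_irqs entries, have kstat_irqs_usr() fill it in a single walk, and print
from the array. A rough, untested sketch of what that could look like
(show_stat_irqs() is an illustrative helper name, not existing code; the
seq_file, kcalloc and for_each_irq_nr helpers are the usual ones):

/* Sketch: emit the per-interrupt counts for /proc/stat from one array */
static int show_stat_irqs(struct seq_file *p)
{
	unsigned int *sums;
	int j;

	sums = kcalloc(nr_irqs, sizeof(*sums), GFP_KERNEL);
	if (!sums)
		return -ENOMEM;

	kstat_irqs_usr(sums);	/* one XArray walk fills every slot */

	for_each_irq_nr(j)
		seq_put_decimal_ull(p, " ", sums[j]);

	kfree(sums);
	return 0;
}

Descriptors with no kstat_irqs allocated are simply skipped by the walk
and stay zero in the kcalloc'ed array, which matches what /proc/stat
prints for them today.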
