Date:	Thu, 10 Jul 2008 19:57:22 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	Mike Travis <travis@....com>
Cc:	"H. Peter Anvin" <hpa@...or.com>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jack Steiner <steiner@....com>, linux-kernel@...r.kernel.org,
	Arjan van de Ven <arjan@...radead.org>
Subject: Re: [RFC 00/15] x86_64: Optimize percpu accesses

Mike Travis <travis@....com> writes:

> If you could dig that up, that would be great.  Another engr here at SGI
> took that task off my hands and he's been able to do a few things to reduce
> the "# irqs" but irq_desc is still one of the bigger static arrays (>256k).

So I posted the part I had completed, which takes the NR_IRQS array out of
kernel_stat.  Here are my mental notes on how to handle the rest.

Also, if you will notice, on x86_64 everything that is per-irq is in irq_cfg,
which explains why irq_cfg grows.  We have those crazy, almost useless bitmaps
of which cpu we want to direct irqs to in the irq configuration, so that
doesn't help.

The arrays sized by NR_IRQS are in:

drivers/char/random.c:static struct timer_rand_state *irq_timer_state[NR_IRQS];
  looks like it should go in irq_desc (it's a generic feature).

drivers/pcmcia/pcmcia_resource.c:static u8 pcmcia_used_irq[NR_IRQS];
  That number should be 16, possibly 32 for sanity, not NR_IRQS.

drivers/net/hamradio/scc.c:static struct irqflags { unsigned char used : 1; } Ivec[NR_IRQS];
drivers/serial/68328serial.c:struct m68k_serial *IRQ_ports[NR_IRQS];
drivers/serial/8250.c:static struct irq_info irq_lists[NR_IRQS];
drivers/serial/m32r_sio.c:static struct irq_info irq_lists[NR_IRQS];
  These are all drivers and should allocate a proper per-irq structure like every other driver.

drivers/xen/events.c:static struct packed_irq irq_info[NR_IRQS];
drivers/xen/events.c:static int irq_bindcount[NR_IRQS];
  For all intents and purposes this is another architecture, that should be fixed up
  at some point.


The interfaces from include/linux/interrupt.h that take an irq number
are slow path.

So it is just a matter of writing an irq_descp(irq) that takes an irq
number and returns an irq_desc.  The definition would go something
like:

#ifndef CONFIG_DYNAMIC_NR_IRQ
#define irq_descp(irq) \
	(((irq) >= 0 && (irq) < NR_IRQS) ? (irq_desc + (irq)) : NULL)
#else
struct irq_desc *irq_descp(int irq)
{
	struct irq_desc *desc, *found = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(desc, &irq_list, list) {
		if (desc->irq == irq) {
			found = desc;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}
#endif

Then the generic irq code just needs to use irq_descp throughout,
and the arch code needs to allocate/free irq_descs and add them to the
list with, say:
int add_irq_desc(int irq, struct irq_desc *desc)
{
	struct irq_desc *old;
	int error = -EINVAL;

	spin_lock(&irq_list_lock);
	old = irq_descp(irq);
	if (old)
		goto out;
	list_add_rcu(&desc->list, &irq_list);
	error = 0;
out:
	spin_unlock(&irq_list_lock);
	return error;
}
With the architecture picking the irq number, so it can be stable and have
meaning to users.

Starting from that direction it isn't too hard and it should yield timely results.

Eric

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
