Date:	Thu, 19 Nov 2009 11:08:05 +0100
From:	Andi Kleen <andi@...stfloor.org>
To:	Tom Herbert <therbert@...gle.com>
Cc:	Andi Kleen <andi@...stfloor.org>,
	David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [PATCH 1/2] rps: core implementation

On Mon, Nov 16, 2009 at 09:02:32AM -0800, Tom Herbert wrote:

Sorry for the late answer.

> >> +     case __constant_htons(ETH_P_IPV6):
> >> +             if (!pskb_may_pull(skb, sizeof(*ip6)))
> >> +                     return -1;
> >> +
> >> +             ip6 = (struct ipv6hdr *) skb->data;
> >> +             ip_proto = ip6->nexthdr;
> >> +             addr1 = ip6->saddr.s6_addr32[3];
> >> +             addr2 = ip6->daddr.s6_addr32[3];
> >
> > Why only [3] ? Is this future proof?
> >
> No.  But it's the same as inet6_ehashfn :-)

Perhaps it would be good to consolidate all these IPv6 hashes
into one place where they could at least be fixed easily.
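Roughly something like this, kept in a single header (untested sketch,
the helper name is made up here), would at least give one spot to fix,
and could fold in all four 32-bit words instead of only s6_addr32[3]:

	#include <linux/jhash.h>
	#include <linux/ipv6.h>

	/* Hypothetical helper, not in the patch: hash the full IPv6
	 * source/destination addresses plus the next header. */
	static inline u32 ipv6_flow_hash(const struct ipv6hdr *ip6, u32 hashrnd)
	{
		u32 s = jhash2(ip6->saddr.s6_addr32, 4, hashrnd);
		u32 d = jhash2(ip6->daddr.s6_addr32, 4, hashrnd);

		return jhash_3words(s, d, ip6->nexthdr, hashrnd);
	}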

> 
> >> +     for_each_cpu_mask_nr(cpu, __get_cpu_var(rps_remote_softirq_cpus)) {
> >> +             struct softnet_data *queue = &per_cpu(softnet_data, cpu);
> >> +             __smp_call_function_single(cpu, &queue->csd, 0);
> >
> > How do you get around the standard deadlocks with IPI called from
> > irq disabled section?
> >
> 
> What are the standard deadlocks?  Looks like __send_remote_softirq
> will call __smp_call_function with irqs disabled...

The traditional deadlock (from before the queued smp_call_function) was:

A                            B
                             grab lock
interrupts off
spin on lock
                             send IPI to A
                             wait for A to answer

never answers, because
interrupts are off
                             hangs forever, never
                             releases the lock


I think with the queued smp_call_function it's better, because the
locks are only held for much shorter periods and that particular
scenario is gone, but I'm not sure the problem has fully gone away.

At least there are still plenty of WARN_ON( ... irqs_disabled()) checks
in kernel/smp.c.
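The pattern those checks guard against is roughly this (simplified
illustration only, names are made up):

	#include <linux/smp.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);

	static void remote_work(void *info) { }

	/* CPU A: takes the lock with interrupts off; spins because B holds it. */
	static void cpu_a_path(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&example_lock, flags);
		/* ... */
		spin_unlock_irqrestore(&example_lock, flags);
	}

	/* CPU B: already holds example_lock and now does a synchronous IPI. */
	static void cpu_b_path(int cpu_a)
	{
		/* wait == 1: block until CPU A has run remote_work().  A cannot
		 * take the IPI with interrupts off, so B never returns and never
		 * releases the lock A is spinning on. */
		smp_call_function_single(cpu_a, remote_work, NULL, 1);
	}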


> > It's a standard pet peeve of mine, but it's quite unlikely you'll
> > get any useful entropy at this time of kernel startup.
> >
> > Normally it's always the same.
> >
> Would it make sense to just use skb_tx_hashrnd for the receive hash
> key also (renaming it to be more general)?

That has the same problem; although it's initialized at least a bit
later, I suspect it would still not be very random.

You could just drop it and always use a constant hash rnd?
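
i.e. something along these lines (sketch only, names and the constant
are made up):

	#include <linux/init.h>
	#include <linux/random.h>

	/* Option 1: seed once at init; this early in boot there is
	 * usually little real entropy behind it. */
	static u32 rps_hashrnd;

	static int __init rps_hashrnd_init(void)
	{
		get_random_bytes(&rps_hashrnd, sizeof(rps_hashrnd));
		return 0;
	}
	late_initcall(rps_hashrnd_init);

	/* Option 2: accept a predictable hash and just use a constant. */
	#define RPS_HASHRND 0x3a8f0e2bU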

-Andi

-- 
ak@...ux.intel.com -- Speaking for myself only.