Date:	Wed, 23 Apr 2008 03:11:54 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Ingo Molnar <mingo@...e.hu>, Jens Axboe <jens.axboe@...cle.com>,
	linux-arch@...r.kernel.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	peterz@...radead.org, sam@...nborg.org
Subject: Re: [PATCH 2/11] x86: convert to generic helpers for IPI function calls

On Tue, Apr 22, 2008 at 12:50:30PM -0700, Linus Torvalds wrote:
> 
> 
> On Tue, 22 Apr 2008, Ingo Molnar wrote:
> > 
> > ok. In which case the reschedule vector could be consolidated into that 
> > as well (it's just a special single-CPU call). Then there would be no 
> > new vector allocations needed at all, just the renaming of 
> > RESCHEDULE_VECTOR to something more generic.
> 
> Yes.
> 
> Btw, don't get me wrong - I'm not against multiple vectors per se. I just 
> wonder if there is any real reason for the code duplication. 
> 
> And there certainly *can* be tons of valid reasons for it. For example,
> some LAPICs can only hold something like two pending interrupts per
> vector, and after that IPIs would get lost.
> 
> However, since the queuing is actually done in the data structures, I
> don't think it matters for the IPIs - they don't need any hardware
> queuing at all, afaik: even if two IPIs were merged into one (due to
> the lack of hw queuing), the IPI handling code still has its list of
> events, so nothing is lost.
> 
> And performance can be a valid reason ("too expensive to check the
> shared queue if we only have per-cpu events"), although I$ (instruction
> cache) issues can cut that argument both ways.
> 
> I was also wondering whether there are deadlock issues (i.e. one type
> of IPI has to complete even if a lock is held for the other type).
> 
> So I don't dislike the patch per se; I just wanted to understand _why_
> the IPIs wanted separate vectors.

The "too expensive to check the shared queue" is one aspect of it. The
shared queue need not have events *for us* (at least, unless Jens has
changed the implementation a bit) but it can still have events that we
would need to check through.
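
To make that concrete, here is a minimal sketch of the kind of
shared-queue scheme we're talking about. This is illustrative only, not
Jens's actual patch: the names (struct call_entry, call_queue,
queue_call, call_function_interrupt) and the send_IPI_mask() /
CALL_FUNCTION_VECTOR arch hook usage are all assumptions made up for
the example, and the locking is deliberately simplified.

/*
 * Illustrative sketch only -- not the real generic-IPI patch.
 */
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

struct call_entry {
	struct list_head list;
	void (*func)(void *info);
	void *info;
	cpumask_t cpumask;	/* CPUs that still have to run this */
};

static LIST_HEAD(call_queue);
static DEFINE_SPINLOCK(call_queue_lock);

/* Sender side: queue the work, then kick the target CPUs. */
static void queue_call(struct call_entry *e)
{
	unsigned long flags;

	/* irqsave so the IPI handler can't deadlock us on this CPU */
	spin_lock_irqsave(&call_queue_lock, flags);
	list_add_tail(&e->list, &call_queue);
	spin_unlock_irqrestore(&call_queue_lock, flags);

	/*
	 * Merged or dropped-duplicate IPIs are harmless: the handler
	 * below works off the list, not off a per-vector count.
	 */
	send_IPI_mask(e->cpumask, CALL_FUNCTION_VECTOR);
}

/*
 * Receiver side, run from the IPI vector with interrupts off.  Note
 * that we walk *every* queued entry, including ones whose cpumask
 * doesn't contain us -- that scan is the cost I mean.  (Simplified:
 * real code wouldn't invoke the callback while holding the queue
 * lock.)
 */
static void call_function_interrupt(void)
{
	struct call_entry *e, *tmp;
	int cpu = smp_processor_id();

	spin_lock(&call_queue_lock);
	list_for_each_entry_safe(e, tmp, &call_queue, list) {
		if (!cpu_isset(cpu, e->cpumask))
			continue;	/* not for us, but we paid to look */

		e->func(e->info);
		cpu_clear(cpu, e->cpumask);
		if (cpus_empty(e->cpumask))
			list_del(&e->list);
	}
	spin_unlock(&call_queue_lock);
}

With a separate vector per IPI type, a reschedule IPI never has to take
call_queue_lock or walk that list at all, which is where the per-cpu
performance argument comes from.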

I don't think deadlock is a problem (any more than with multiple vectors).


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
