Date:	Wed, 23 Apr 2008 09:08:21 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>, linux-arch@...r.kernel.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	peterz@...radead.org, sam@...nborg.org
Subject: Re: [PATCH 2/11] x86: convert to generic helpers for IPI function calls

On Wed, Apr 23 2008, Nick Piggin wrote:
> On Tue, Apr 22, 2008 at 12:50:30PM -0700, Linus Torvalds wrote:
> > 
> > 
> > On Tue, 22 Apr 2008, Ingo Molnar wrote:
> > > 
> > > ok. In which case the reschedule vector could be consolidated into that 
> > > as well (it's just a special single-CPU call). Then there would be no 
> > > new vector allocations needed at all, just the renaming of 
> > > RESCHEDULE_VECTOR to something more generic.
> > 
> > Yes.
> > 
> > Btw, don't get me wrong - I'm not against multiple vectors per se. I just 
> > wonder if there is any real reason for the code duplication. 
> > 
> > And there certainly *can* be tons of valid reasons for it. For example, 
> > some of the LAPICs can only have something like two pending interrupts per 
> > vector, and after that IPIs would get lost.
> > 
> > However, since the queuing is actually done with the data structures, I 
> > don't think it matters for the IPIs - they don't need any hardware 
> > queuing at all, afaik: even if two IPIs were merged into one 
> > (due to lack of hw queuing), the IPI handling code still has its list of 
> > events, so it doesn't matter.
> > 
> > And performance can be a valid reason ("too expensive to check the shared 
> > queue if we only have per-cpu events"), although I$ issues can cause that 
> > argument to go both ways.
> > 
> > I was also wondering whether there are deadlock issues (ie one type of IPI 
> > has to complete even if a lock is held for the other type). 
> > 
> > So I don't dislike the patch per se, I just wanted to understand _why_ the 
> > IPIs wanted separate vectors.
> 
> The "too expensive to check the shared queue" is one aspect of it. The
> shared queue need not have events *for us* (at least, unless Jens has
> changed the implementation a bit) but it can still have events that we
> would need to check through.

That is still the case; the loop works the same way.
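
For reference, the shared-queue walk we're talking about looks roughly
like the below. This is only a simplified sketch with made-up names
(struct call_entry, call_queue, complete_and_free()), not the actual
kernel/smp.c code, and locking is omitted:

#include <linux/list.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

struct call_entry {
        struct list_head list;
        void (*func)(void *info);
        void *info;
        cpumask_t cpumask;      /* CPUs that still have to run func */
};

static LIST_HEAD(call_queue);   /* one queue shared by all CPUs */

void call_function_interrupt_sketch(void)
{
        struct call_entry *entry, *next;
        int cpu = smp_processor_id();

        list_for_each_entry_safe(entry, next, &call_queue, list) {
                /* the entry may be addressed to other CPUs only */
                if (!cpu_isset(cpu, entry->cpumask))
                        continue;

                entry->func(entry->info);
                cpu_clear(cpu, entry->cpumask);

                /* last CPU to run the entry unlinks and frees it */
                if (cpus_empty(entry->cpumask)) {
                        list_del(&entry->list);
                        complete_and_free(entry);       /* made-up helper */
                }
        }
}

Since the handler drains whatever is on the list, two IPIs that get
merged by the hardware still find all pending entries; the flip side is
that every CPU taking the interrupt scans entries that may be meant for
other CPUs, which is the cost Nick points at above.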

To answer Linus's question about why it was done the way it was: the
thought of sharing the IPI simply didn't occur to me. For performance
reasons I'd like to keep the current setup, but sharing is certainly a
viable alternative for archs with a limited number of IPIs available
(like the mips case that Ralf described).
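
To make that alternative a bit more concrete: an arch with only a
single IPI to spare could fold the reschedule into the same vector
along the lines below. Again purely a sketch under assumed names
(resched_pending and arch_send_single_ipi() are made up, and
barriers/ordering are glossed over), not something that exists in the
tree:

#include <linux/percpu.h>
#include <linux/sched.h>

static DEFINE_PER_CPU(int, resched_pending);

void smp_send_reschedule_sketch(int cpu)
{
        per_cpu(resched_pending, cpu) = 1;
        arch_send_single_ipi(cpu);      /* same vector as the call queue */
}

void shared_vector_interrupt_sketch(void)
{
        /* drain the queued function-call events first */
        call_function_interrupt_sketch();

        /* then honour a pending reschedule request, if any */
        if (xchg(&__get_cpu_var(resched_pending), 0))
                set_tsk_need_resched(current);
}

Whether that wins is exactly the performance trade-off above: the
reschedule path picks up an extra per-cpu check on every interrupt.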

-- 
Jens Axboe

