Message-ID: <20110226001611.GA19630@random.random>
Date: Sat, 26 Feb 2011 01:16:11 +0100
From: Andrea Arcangeli <aarcange@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Mel Gorman <mel@....ul.ie>,
Andrew Morton <akpm@...ux-foundation.org>,
Arthur Marsh <arthur.marsh@...ernode.on.net>,
Clemens Ladisch <cladisch@...glemail.com>,
Linux-MM <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] mm: compaction: Minimise the time IRQs are
disabled while isolating pages for migration
On Fri, Feb 25, 2011 at 11:32:04PM +0100, Johannes Weiner wrote:
> I don't understand why this conditional is broken up like this.
> cond_resched() will have the right checks anyway. Okay, you would
> save fatal_signal_pending() in the 'did one cluster' case. Is it that
> expensive? Couldn't this be simpler like
>
> 	did_cluster = ((low_pfn + 1) % SWAP_CLUSTER_MAX) == 0;
> lock_contended = spin_is_contended(&zone->lru_lock);
> if (did_cluster || lock_contended || need_resched()) {
> spin_unlock_irq(&zone->lru_lock);
> cond_resched();
> spin_lock_irq(&zone->lru_lock);
> if (fatal_signal_pending(current))
> break;
> }
>
> instead?
If we don't release irqs first, how can need_resched get set in the
first place if the local apic irq can't run? I guess that's why
there's no cond_resched_lock_irq. BTW, I never liked disabling
interrupts too much for locks that can never be taken from irq context
(it's a scalability boost to reduce contention, but it makes things
like the above confusing and it increases irq latency a bit).
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/