Message-ID: <CAAmzW4MxZYXCV3UqmPpCfzunLS5ufcqNOjeTSHABEyfTASTn=w@mail.gmail.com>
Date: Thu, 15 Nov 2012 02:09:04 +0900
From: JoonSoo Kim <js1304@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Mel Gorman <mel@....ul.ie>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH v2 4/5] mm, highmem: makes flush_all_zero_pkmaps() return
index of first flushed entry
Hi, Minchan.
2012/11/14 Minchan Kim <minchan@...nel.org>:
> On Tue, Nov 13, 2012 at 11:12:28PM +0900, JoonSoo Kim wrote:
>> 2012/11/13 Minchan Kim <minchan@...nel.org>:
>> > On Tue, Nov 13, 2012 at 09:30:57AM +0900, JoonSoo Kim wrote:
>> >> 2012/11/3 Minchan Kim <minchan@...nel.org>:
>> >> > Hi Joonsoo,
>> >> >
>> >> > On Sat, Nov 03, 2012 at 04:07:25AM +0900, JoonSoo Kim wrote:
>> >> >> Hello, Minchan.
>> >> >>
>> >> >> 2012/11/1 Minchan Kim <minchan@...nel.org>:
>> >> >> > On Thu, Nov 01, 2012 at 01:56:36AM +0900, Joonsoo Kim wrote:
>> >> >> >> In the current code, after flush_all_zero_pkmaps() is invoked,
>> >> >> >> all pkmaps are re-iterated. This can be optimized if
>> >> >> >> flush_all_zero_pkmaps() returns the index of the first flushed entry.
>> >> >> >> With this index, we can immediately map a highmem page to the
>> >> >> >> virtual address represented by that index. So change the return type
>> >> >> >> of flush_all_zero_pkmaps() and return the index of the first flushed entry.
>> >> >> >>
>> >> >> >> Additionally, update last_pkmap_nr to this index. Every entry below
>> >> >> >> this index is certainly occupied by another mapping, so updating
>> >> >> >> last_pkmap_nr to this index is a reasonable optimization.
>> >> >> >>
>> >> >> >> Cc: Mel Gorman <mel@....ul.ie>
>> >> >> >> Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>
>> >> >> >> Cc: Minchan Kim <minchan@...nel.org>
>> >> >> >> Signed-off-by: Joonsoo Kim <js1304@...il.com>
>> >> >> >>
>> >> >> >> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
>> >> >> >> index ef788b5..97ad208 100644
>> >> >> >> --- a/include/linux/highmem.h
>> >> >> >> +++ b/include/linux/highmem.h
>> >> >> >> @@ -32,6 +32,7 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
>> >> >> >>
>> >> >> >> #ifdef CONFIG_HIGHMEM
>> >> >> >> #include <asm/highmem.h>
>> >> >> >> +#define PKMAP_INVALID_INDEX (LAST_PKMAP)
>> >> >> >>
>> >> >> >> /* declarations for linux/mm/highmem.c */
>> >> >> >> unsigned int nr_free_highpages(void);
>> >> >> >> diff --git a/mm/highmem.c b/mm/highmem.c
>> >> >> >> index d98b0a9..b365f7b 100644
>> >> >> >> --- a/mm/highmem.c
>> >> >> >> +++ b/mm/highmem.c
>> >> >> >> @@ -106,10 +106,10 @@ struct page *kmap_to_page(void *vaddr)
>> >> >> >> return virt_to_page(addr);
>> >> >> >> }
>> >> >> >>
>> >> >> >> -static void flush_all_zero_pkmaps(void)
>> >> >> >> +static unsigned int flush_all_zero_pkmaps(void)
>> >> >> >> {
>> >> >> >> int i;
>> >> >> >> - int need_flush = 0;
>> >> >> >> + unsigned int index = PKMAP_INVALID_INDEX;
>> >> >> >>
>> >> >> >> flush_cache_kmaps();
>> >> >> >>
>> >> >> >> @@ -141,10 +141,13 @@ static void flush_all_zero_pkmaps(void)
>> >> >> >> &pkmap_page_table[i]);
>> >> >> >>
>> >> >> >> set_page_address(page, NULL);
>> >> >> >> - need_flush = 1;
>> >> >> >> + if (index == PKMAP_INVALID_INDEX)
>> >> >> >> + index = i;
>> >> >> >> }
>> >> >> >> - if (need_flush)
>> >> >> >> + if (index != PKMAP_INVALID_INDEX)
>> >> >> >> flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));
>> >> >> >> +
>> >> >> >> + return index;
>> >> >> >> }
>> >> >> >>
>> >> >> >> /**
>> >> >> >> @@ -152,14 +155,19 @@ static void flush_all_zero_pkmaps(void)
>> >> >> >> */
>> >> >> >> void kmap_flush_unused(void)
>> >> >> >> {
>> >> >> >> + unsigned int index;
>> >> >> >> +
>> >> >> >> lock_kmap();
>> >> >> >> - flush_all_zero_pkmaps();
>> >> >> >> + index = flush_all_zero_pkmaps();
>> >> >> >> + if (index != PKMAP_INVALID_INDEX && (index < last_pkmap_nr))
>> >> >> >> + last_pkmap_nr = index;
>> >> >> >
>> >> >> > I don't know whether kmap_flush_unused is really a fast path, so I'm
>> >> >> > not sure how effective my nitpick is. Anyway,
>> >> >> > what problem happens if we do the following?
>> >> >> >
>> >> >> > lock()
>> >> >> > index = flush_all_zero_pkmaps();
>> >> >> > if (index != PKMAP_INVALID_INDEX)
>> >> >> > last_pkmap_nr = index;
>> >> >> > unlock();
>> >> >> >
>> >> >> > Normally, last_pkmap_nr is increased while searching for an empty slot
>> >> >> > in map_new_virtual. So I expect the return value of flush_all_zero_pkmaps
>> >> >> > in kmap_flush_unused to normally be either less than last_pkmap_nr
>> >> >> > or last_pkmap_nr + 1.
>> >> >>
>> >> >> There is a case where the return value of flush_all_zero_pkmaps() is
>> >> >> larger than last_pkmap_nr.
>> >> >
>> >> > I see, but why is that a problem? flush_all_zero_pkmaps() returning a
>> >> > larger value than last_pkmap_nr means that there is no free slot below
>> >> > that value. So an unconditional last_pkmap_nr update is valid.
>> >>
>> >> I think that this is not true.
>> >> Look at a slightly different example.
>> >>
>> >> Assume LAST_PKMAP = 20, and indices 1-9 and 12-19 are kmapped while
>> >> 10 and 11 are kunmapped.
>> >>
>> >> do kmap_flush_unused() => flushes indices 10, 11 => last_pkmap_nr = 10
>> >> do kunmap() with index 17
>> >> do kmap_flush_unused() => flushes index 17 => last_pkmap_nr = 17?
>> >>
>> >> In this case, an unconditional last_pkmap_nr update would skip the
>> >> still-free slots below 17. So a conditional update is needed.
>> >
>> > Thanks for pointing this out, Joonsoo.
>> > You're right. I misunderstood your flush_all_zero_pkmaps change.
>> > With your change, flush_all_zero_pkmaps returns the first *flushed* free slot index.
>> > What's the benefit of returning the first flushed free slot index rather than the first free slot index?
>>
>> If flush_all_zero_pkmaps() returned the first free slot index rather than
>> the first flushed one, we would need another comparison such as
>> 'if (pkmap_count[i] == 0)' and another local variable to determine
>> whether a flush occurred. I want to minimize this overhead and code
>> churn, although both are negligible.
>>
>> > I think flush_all_zero_pkmaps should return the first free slot, because
>> > the caller of flush_all_zero_pkmaps doesn't care whether a slot was just
>> > flushed or not. All it wants to know is whether the slot is free. In that
>> > case, we can remove the above check, which makes flush_all_zero_pkmaps
>> > more intuitive.
>>
>> Yes, it is more intuitive, but as I mentioned above, it needs another
>> comparison, so the benefit of avoiding re-iteration when there is no
>> free slot may disappear.
>
> If you're so keen on performance, why do you have code like this?
> You could remove the branch below if you were keen on performance.
>
> diff --git a/mm/highmem.c b/mm/highmem.c
> index c8be376..44a88dd 100644
> --- a/mm/highmem.c
> +++ b/mm/highmem.c
> @@ -114,7 +114,7 @@ static unsigned int flush_all_zero_pkmaps(void)
>
> flush_cache_kmaps();
>
> - for (i = 0; i < LAST_PKMAP; i++) {
> + for (i = LAST_PKMAP - 1; i >= 0; i--) {
> struct page *page;
>
> /*
> @@ -141,8 +141,7 @@ static unsigned int flush_all_zero_pkmaps(void)
> pte_clear(&init_mm, PKMAP_ADDR(i), &pkmap_page_table[i]);
>
> set_page_address(page, NULL);
> - if (index == PKMAP_INVALID_INDEX)
> - index = i;
> + index = i;
> }
> if (index != PKMAP_INVALID_INDEX)
> flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));
>
>
> Anyway, if you are concerned about performance, okay, let's give up on
> making the code clearer, although I haven't seen any report about kmap
> performance. Instead, please consider the above optimization, since you
> have already compromised the goal you mentioned. If we can't make the
> function clear, another option is to add a function comment. Please do.
Yes, I haven't seen any report about kmap performance either.
Through your review comments, I've come to the conclusion that this patch
will not give any benefit.
So how about dropping it?
Thanks for the review.