Date:	Tue, 13 Jul 2010 17:06:56 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	linux@....linux.org.uk, Yinghai Lu <yinghai@...nel.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Shaohua Li <shaohua.li@...el.com>,
	Yakui Zhao <yakui.zhao@...el.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	arm-kernel@...ts.infradead.org, kgene.kim@...sung.com,
	Mel Gorman <mel@....ul.ie>
Subject: Re: [RFC] Tight check of pfn_valid on sparsemem

On Tue, Jul 13, 2010 at 3:40 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@...fujitsu.com> wrote:
> On Tue, 13 Jul 2010 15:04:00 +0900
> Minchan Kim <minchan.kim@...il.com> wrote:
>
>> >> >  2. This can't help for a case where a section has multiple small holes.
>> >>
>> >> I agree. But this (a never-filled section, not a punched hole) isn't
>> >> such a case. Still, it would be better to handle both altogether. :)
>> >>
>> >> >
>> >> > Then, my proposal for HOLES_IN_MEMMAP sparsemem is below.
>> >> > ==
>> >> > Some architectures unmap memmap[] for memory holes even with SPARSEMEM.
>> >> > To handle that, pfn_valid() should check whether the memmap really exists.
>> >> > For that purpose, __get_user() can be used.
>> >>
>> >> Look at free_unused_memmap. We don't unmap the pte of the hole memmap.
>> >> Is __get_user() still effective?
>> >>
>> > __get_user() goes through the TLB and page table, so it tells whether the
>> > vaddr is really mapped. If it isn't, __get_user() returns -EFAULT. It works
>> > at page granularity.
>>
>> I mean the following.
>> For example, there is a struct page at 0x20000000.
>>
>> int pfn_valid_mapped(unsigned long pfn)
>> {
>>        struct page *page = pfn_to_page(pfn); /* hole page is 0x20000000 */
>>        char *lastbyte = (char *)(page+1)-1;  /* lastbyte is 0x2000001f */
>>        char byte;
>>
>>        /* We pass this test since free_unused_memmap doesn't unmap pte */
>>        if (__get_user(byte, (char *)page) != 0)
>>                return 0;
>
> Why? When the page size is 4096 bytes,
>
>      0x1ffff000 - 0x1fffffff
>      0x20000000 - 0x20000fff are on the same page. And memory is mapped per page.

sizeof(struct page) is 32 bytes,
so lastbyte is the address of the struct page + 32 bytes - 1.

> What we access by above __get_user() is a byte at [0x20000000, 0x20000001)

Right.

> and it's unmapped if 0x20000000 is unmapped.

free_unused_memmap doesn't unmap the pte, although it returns the page to
the buddy allocator's free list.

>
> Thanks,
> -Kame
>
>



-- 
Kind regards,
Minchan Kim
