Message-ID: <alpine.LNX.2.00.0904022114040.4265@swampdragon.chaosbits.net>
Date:	Thu, 2 Apr 2009 21:22:26 +0200 (CEST)
From:	Jesper Juhl <jj@...osbits.net>
To:	Izik Eidus <ieidus@...hat.com>
cc:	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	linux-mm@...ck.org, avi@...hat.com, aarcange@...hat.com,
	chrisw@...hat.com, riel@...hat.com, jeremy@...p.org,
	mtosatti@...hat.com, hugh@...itas.com, corbet@....net,
	yaniv@...hat.com, dmonakhov@...nvz.org
Subject: Re: [PATCH 0/4] ksm - dynamic page sharing driver for linux

Hi,

On Tue, 31 Mar 2009, Izik Eidus wrote:

> KSM is a Linux driver that allows dynamically sharing identical memory
> pages between one or more processes.
> 
> Unlike traditional page sharing, which is done when the memory is
> allocated, ksm does it dynamically after the memory has been created.
> Memory is periodically scanned; identical pages are identified and
> merged.
> The sharing is unnoticeable by the processes that use this memory.
> (the shared pages are marked as read-only, and in case of a write
> do_wp_page() takes care of creating a new copy of the page)
> 
> To find identical pages ksm uses an algorithm that is split into three
> primary levels:
> 
> 1) Ksm will start scanning the memory and will calculate a checksum
>    for each page that is registered to be scanned.
>    (In the first round of the scanning, ksm would only calculate
>     this checksum for all the pages)
> 

One question:

Calculating a checksum is a fine way to find pages that are "likely to be 
identical", but there is no guarantee that two pages with the same 
checksum really are identical - there *will* be checksum collisions 
eventually. So, I really hope that your implementation actually checks 
that two pages it finds with identical checksums really are 100% 
identical, by comparing them bit by bit, before throwing one away.
If you rely only on a checksum, then eventually a user will get bitten 
by a checksum collision; in the best case something will crash, and in 
the worst case data will silently be corrupted.

Do you rely only on the checksum, or do you actually compare the pages 
to check that they are 100% identical before sharing?
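
Just to be explicit about what I mean, here is a rough sketch of the 
check I would hope for - written by me, not taken from your patch, and 
page_checksum() is only a toy stand-in for whatever checksum ksm really 
uses:

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096	/* illustration only */

/* Toy checksum - just here to make the point, not what ksm uses. */
static uint32_t page_checksum(const unsigned char *page)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i < PAGE_SIZE; i++)
		sum += page[i];
	return sum;
}

/*
 * A matching checksum only makes the two pages *candidates* for
 * sharing; the full comparison is what makes merging them safe.
 */
static int pages_can_be_shared(const unsigned char *a, const unsigned char *b)
{
	if (page_checksum(a) != page_checksum(b))
		return 0;			/* cheap reject */

	return memcmp(a, b, PAGE_SIZE) == 0;	/* authoritative check */
}

In other words, the checksum should only earn a pair of pages the full 
memcmp(); it should never by itself be the reason one copy is dropped.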

I must admit that I have not read through the patch to find the answer; 
I just read your description and became concerned.

-- 
Jesper Juhl <jj@...osbits.net>             http://www.chaosbits.net/
Plain text mails only, please      http://www.expita.com/nomime.html
Don't top-post  http://www.catb.org/~esr/jargon/html/T/top-post.html

