Message-ID: <CALZtONAuJhgZLJECxwQOyKPj2n02d+521d+eHCkqLjjc=Ba9FQ@mail.gmail.com>
Date:	Fri, 12 Sep 2014 12:43:22 -0400
From:	Dan Streetman <ddstreet@...e.org>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Linux-MM <linux-mm@...ck.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Nitin Gupta <ngupta@...are.org>,
	Seth Jennings <sjennings@...iantweb.net>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 01/10] zsmalloc: fix init_zspage free obj linking

On Fri, Sep 12, 2014 at 12:59 AM, Minchan Kim <minchan@...nel.org> wrote:
> On Thu, Sep 11, 2014 at 04:53:52PM -0400, Dan Streetman wrote:
>> When zsmalloc creates a new zspage, it initializes each object it contains
>> with a link to the next object, so that the zspage has a singly-linked list
>> of its free objects.  However, the logic that sets up the links is wrong,
>> and in the case of objects that are precisely aligned with the page boundaries
>> (e.g. a zspage with objects that are 1/2 PAGE_SIZE) the first object on the
>> next page is skipped, due to incrementing the offset twice.  The logic can be
>> simplified, as it doesn't need to calculate how many objects can fit on the
>> current page; simply checking the offset for each object is enough.
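>>
>> For example, with class->size == PAGE_SIZE/2 the loop's last pass leaves
>> off == PAGE_SIZE, and the end-of-page update
>> "off = (off + class->size) % PAGE_SIZE" bumps it a second time to
>> PAGE_SIZE/2, so the object at offset 0 of the next page is never linked.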
>
> If objects are precisely aligned with the page boundary, pages_per_zspage
> should be 1 so there is no next page.

ah, ok.  I wonder if it should be changed anyway so it doesn't rely on
that detail, in case that ever changes.  It's not obvious that the
existing logic depends on it for correct operation.  And this
simplifies the logic too.
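
For reference, here's a rough userspace sketch of that selection:
zsmalloc picks the zspage size with the least per-page waste, and a
class size that divides PAGE_SIZE exactly wastes nothing on a single
page, so one page always wins.  (The MAX_PAGES cap and the percentage
math below are illustrative, loosely modeled on get_pages_per_zspage(),
not copied from the kernel.)

#include <stdio.h>

#define PAGE_SIZE	4096
#define MAX_PAGES	4	/* assumed cap on pages per zspage */

static int pages_per_zspage(int class_size)
{
	int i, best = 1, best_usedpc = 0;

	for (i = 1; i <= MAX_PAGES; i++) {
		/* bytes left over after packing whole objects */
		int waste = (i * PAGE_SIZE) % class_size;
		int usedpc = (i * PAGE_SIZE - waste) * 100 / (i * PAGE_SIZE);

		if (usedpc > best_usedpc) {
			best_usedpc = usedpc;
			best = i;
		}
	}
	return best;
}

int main(void)
{
	/* zero waste on one page can't be beaten, so this prints 1 */
	printf("%d\n", pages_per_zspage(PAGE_SIZE / 2));
	/* a non-dividing size can prefer a bigger zspage: prints 3 */
	printf("%d\n", pages_per_zspage(3000));
	return 0;
}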

>
>>
>> Change zsmalloc init_zspage() logic to iterate through each object on
>> each of its pages, checking the offset to verify the object is on the
>> current page before linking it into the zspage.
>>
>> Signed-off-by: Dan Streetman <ddstreet@...e.org>
>> Cc: Minchan Kim <minchan@...nel.org>
>> ---
>>  mm/zsmalloc.c | 14 +++++---------
>>  1 file changed, 5 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> index c4a9157..03aa72f 100644
>> --- a/mm/zsmalloc.c
>> +++ b/mm/zsmalloc.c
>> @@ -628,7 +628,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>>       while (page) {
>>               struct page *next_page;
>>               struct link_free *link;
>> -             unsigned int i, objs_on_page;
>> +             unsigned int i = 1;
>>
>>               /*
>>                * page->index stores offset of first object starting
>> @@ -641,14 +641,10 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>>
>>               link = (struct link_free *)kmap_atomic(page) +
>>                                               off / sizeof(*link);
>> -             objs_on_page = (PAGE_SIZE - off) / class->size;
>>
>> -             for (i = 1; i <= objs_on_page; i++) {
>> -                     off += class->size;
>> -                     if (off < PAGE_SIZE) {
>> -                             link->next = obj_location_to_handle(page, i);
>> -                             link += class->size / sizeof(*link);
>> -                     }
>> +             while ((off += class->size) < PAGE_SIZE) {
>> +                     link->next = obj_location_to_handle(page, i++);
>> +                     link += class->size / sizeof(*link);
>>               }
>>
>>               /*
>> @@ -660,7 +656,7 @@ static void init_zspage(struct page *first_page, struct size_class *class)
>>               link->next = obj_location_to_handle(next_page, 0);
>>               kunmap_atomic(link);
>>               page = next_page;
>> -             off = (off + class->size) % PAGE_SIZE;
>> +             off %= PAGE_SIZE;
>>       }
>>  }
>>
>> --
>> 1.8.3.1
>>
>
> --
> Kind regards,
> Minchan Kim