Open Source and information security mailing list archives
Message-ID: <CAMJBoFOWKHuarYuf3=0WWE3uDg8+7nCKgKw6y5rW3YZGokL8uA@mail.gmail.com>
Date:   Sat, 26 Nov 2016 10:12:08 +0100
From:   Vitaly Wool <vitalywool@...il.com>
To:     Dan Streetman <ddstreet@...e.org>
Cc:     Linux-MM <linux-mm@...ck.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH 2/3] z3fold: don't fail kernel build if z3fold_header is
 too big

On Fri, Nov 25, 2016 at 7:33 PM, Dan Streetman <ddstreet@...e.org> wrote:
> On Fri, Nov 25, 2016 at 11:25 AM, Vitaly Wool <vitalywool@...il.com> wrote:
>> On Fri, Nov 25, 2016 at 4:59 PM, Dan Streetman <ddstreet@...e.org> wrote:
>>> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool <vitalywool@...il.com> wrote:
>>>> Currently the whole kernel build will be stopped if the size of
>>>> struct z3fold_header is greater than the size of one chunk, which
>>>> is 64 bytes by default. This may stand in the way of automated
>>>> test/debug builds so let's remove that and just fail the z3fold
>>>> initialization in such case instead.
>>>>
>>>> Signed-off-by: Vitaly Wool <vitalywool@...il.com>
>>>> ---
>>>>  mm/z3fold.c | 11 ++++++++---
>>>>  1 file changed, 8 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/mm/z3fold.c b/mm/z3fold.c
>>>> index 7ad70fa..ffd9353 100644
>>>> --- a/mm/z3fold.c
>>>> +++ b/mm/z3fold.c
>>>> @@ -870,10 +870,15 @@ MODULE_ALIAS("zpool-z3fold");
>>>>
>>>>  static int __init init_z3fold(void)
>>>>  {
>>>> -       /* Make sure the z3fold header will fit in one chunk */
>>>> -       BUILD_BUG_ON(sizeof(struct z3fold_header) > ZHDR_SIZE_ALIGNED);
>>>
>>> Nak.  this is the wrong way to handle this.  The build bug is there to
>>> indicate to you that your patch makes the header too large, not as a
>>> runtime check to disable everything.
>>
>> Okay, let's agree to drop it.
>>
>>> The right way to handle it is to change the hardcoded assumption that
>>> the header fits into a single chunk; e.g.:
>>>
>>> #define ZHDR_SIZE_ALIGNED round_up(sizeof(struct z3fold_header), CHUNK_SIZE)
>>> #define ZHDR_CHUNKS (ZHDR_SIZE_ALIGNED >> CHUNK_SHIFT)
>>>
>>> then use ZHDR_CHUNKS in all places where it's currently assumed the
>>> header is 1 chunk, e.g. in num_free_chunks:
>>>
>>>   if (zhdr->middle_chunks != 0) {
>>>     int nfree_before = zhdr->first_chunks ?
>>> -      0 : zhdr->start_middle - 1;
>>> +      0 : zhdr->start_middle - ZHDR_CHUNKS;
>>>
>>> after changing all needed places like that, the build bug isn't needed
>>> anymore (unless we want to make sure the header isn't larger than some
>>> arbitrary number N chunks)
>>
>> That sounds overly complicated to me. I would rather use bit_spin_lock
>> as Arnd suggested. What would you say?
>
> using the correctly-calculated header size instead of a hardcoded
> value is overly complicated?  i don't agree with that...i'd say it
> should have been done in the first place ;-)
>
> bit spin locks are hard to debug and slower and should only be used
> where space really is absolutely required to be minimal, which
> definitely isn't the case here.  this should use regular spin locks
> and change the hardcoded assumption of zhdr size < chunk size (which
> always was a bad assumption) to calculate it correctly.  it's really
> not that hard; there are only a few places where the offset position
> of the chunks is calculated.

I gave this a second thought after having run some quick benchmarking
using bit_spin_lock. The performance drop is substantial, so your
suggestion is the way to go, thanks.

~vitaly
