Date:   Wed, 17 Oct 2018 22:23:59 -0400
From:   "Theodore Y. Ts'o" <tytso@....edu>
To:     Andreas Dilger <adilger@...ger.ca>
Cc:     liu.song11@....com.cn, fishland@...yun.com, wang.yi59@....com.cn,
        Ext4 Developers List <linux-ext4@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] ext4: direct return when jinode allocate failed

On Wed, Oct 17, 2018 at 05:28:55PM -0600, Andreas Dilger wrote:
> 
> Looking at the patch there are two effects that it has:
> - optimize a very rare case where there is an allocation failure before locking
> - return an unnecessary error if "ei->jinode" was already allocated by a racing process before locking
> 
> I don't think it is worthwhile to optimize this case, since an allocation failure
> will have a serious impact on the application anyway, and I'd rather preserve the
> rare case where we avoid returning an unnecessary error than make the error path faster.

To be fair, the "unnecessary error" case is also extremely rare.  In
order for that to happen, two processes would need to be racing to
allocate the jinode structure for the same inode (since we bail out
earlier, before taking the lock, if ei->jinode is already non-NULL),
where the first process succeeds in allocating memory --- but the
second one fails.
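
To make the race concrete, the function in question (presumably
ext4_inode_attach_jinode() in fs/ext4/inode.c) looks roughly like
the sketch below --- simplified and from memory, so treat the
details as illustrative rather than verbatim.  The patch would
return -ENOMEM immediately after a failed jbd2_alloc_inode() call,
before taking i_lock:

int ext4_inode_attach_jinode(struct inode *inode)
{
	struct ext4_inode_info *ei = EXT4_I(inode);
	struct jbd2_inode *jinode;

	/* Unlocked fast path: nothing to do if jinode already exists. */
	if (ei->jinode || !EXT4_SB(inode->i_sb)->s_journal)
		return 0;

	/*
	 * Allocate before taking i_lock; a racing process may have
	 * installed ei->jinode by the time we acquire the lock.
	 */
	jinode = jbd2_alloc_inode(GFP_KERNEL);
	spin_lock(&inode->i_lock);
	if (!ei->jinode) {
		if (!jinode) {
			spin_unlock(&inode->i_lock);
			return -ENOMEM;
		}
		ei->jinode = jinode;
		jbd2_journal_init_jbd_inode(ei->jinode, inode);
		jinode = NULL;
	}
	spin_unlock(&inode->i_lock);
	/* We lost the race; free our now-unneeded allocation. */
	if (unlikely(jinode != NULL))
		jbd2_free_inode(jinode);
	return 0;
}

Note that in the unpatched code, a process whose allocation fails
still returns 0 if the winner of the race has already installed
ei->jinode --- which is exactly the unnecessary error the patch
would introduce by returning early.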

Both of these are super-rare cases, and both require the system to be
thrashing under extreme memory pressure --- at which point worrying
about a micro-optimization or an unnecessary failure is going to be
the last of the system's problems.

This is why I said, "the patch probably doesn't hurt, but I also don't
see it helping much".

Cheers,

						- Ted
