Message-ID: <ZLNMvZnHjTiqJwTD@casper.infradead.org>
Date:   Sun, 16 Jul 2023 02:49:49 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Chao Yu <chao@...nel.org>
Cc:     viro@...iv.linux.org.uk, brauner@...nel.org,
        linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] fs: select: reduce stack usage in do_sys_poll()

On Sun, Jul 16, 2023 at 09:07:14AM +0800, Chao Yu wrote:
> struct poll_wqueues table caused the stack usage of do_sys_poll() to
> grow beyond the warning limit on 32-bit architectures w/ gcc.
> 
> fs/select.c: In function ‘do_sys_poll’:
> fs/select.c:1053:1: warning: the frame size of 1328 bytes is larger than 1024 bytes [-Wframe-larger-than=]
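(As an aside, the 1024 byte limit here isn't a gcc default; it's
CONFIG_FRAME_WARN, which the top-level Makefile passes through roughly
like this -- from memory, so check your tree:

	ifneq ($(CONFIG_FRAME_WARN),0)
	KBUILD_CFLAGS += -Wframe-larger-than=$(CONFIG_FRAME_WARN)
	endif

and the default for most 32-bit configs is 1024, hence the warning.)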

That seems particularly high.  But struct poll_wqueues itself is only
604 bytes, so about half of that frame:

struct poll_wqueues {
        poll_table                 pt;                   /*     0     8 */
        struct poll_table_page *   table;                /*     8     4 */
        struct task_struct *       polling_task;         /*    12     4 */
        int                        triggered;            /*    16     4 */
        int                        error;                /*    20     4 */
        int                        inline_index;         /*    24     4 */
        struct poll_table_entry    inline_entries[18];   /*    28   576 */

        /* size: 604, cachelines: 10, members: 7 */
        /* last cacheline: 28 bytes */
};
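
Doing the arithmetic on that layout:

	18 inline entries * 32 bytes   =  576
	header fields                  =   28
	                                 -----
	struct poll_wqueues            =  604

	reported frame size            = 1328
	struct poll_wqueues            = -604
	                                 -----
	unaccounted for                =  724

so more than 700 bytes of that frame come from something other than the
poll_wqueues itself.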

Also, you can see it's deliberately sized to fit on the stack (see
include/linux/poll.h).  So you're completely destroying that optimisation.
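
For reference, the sizing logic I mean looks roughly like this in
include/linux/poll.h (quoted from memory, so check the exact values in
your tree):

	/* ~832 bytes of stack space used max in sys_select/sys_poll before
	   allocating additional memory. */
	#define MAX_STACK_ALLOC		832
	#define FRONTEND_STACK_ALLOC	256
	#define POLL_STACK_ALLOC	FRONTEND_STACK_ALLOC
	#define WQUEUES_STACK_ALLOC	(MAX_STACK_ALLOC - FRONTEND_STACK_ALLOC)
	#define N_INLINE_POLL_ENTRIES	(WQUEUES_STACK_ALLOC / sizeof(struct poll_table_entry))

With a 32-byte poll_table_entry on 32-bit, that works out to the
inline_entries[18] you can see in the layout above.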

You need to figure out why the stack is now so big.  This isn't the
right solution.
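
(If it helps, a rough way to see where the bytes are going -- a sketch
only, flags and paths from memory: build the file with per-function
stack usage reporting, e.g.

	$ make fs/select.o KCFLAGS=-fstack-usage
	$ cat fs/select.su

and compare the struct layout on the config that warns, e.g. with
pahole -C poll_wqueues against a vmlinux built with debug info.)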
