Message-Id: <20220923202822.2667581-8-keescook@chromium.org>
Date: Fri, 23 Sep 2022 13:28:13 -0700
From: Kees Cook <keescook@...omium.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Kees Cook <keescook@...omium.org>, Chris Mason <clm@...com>,
	Josef Bacik <josef@...icpanda.com>, linux-btrfs@...r.kernel.org,
	David Sterba <dsterba@...e.com>, "Ruhl, Michael J" <michael.j.ruhl@...el.com>,
	Hyeonggon Yoo <42.hyeyoo@...il.com>, Christoph Lameter <cl@...ux.com>,
	Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>, Andrew Morton <akpm@...ux-foundation.org>,
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Nick Desaulniers <ndesaulniers@...gle.com>,
	Alex Elder <elder@...nel.org>, Sumit Semwal <sumit.semwal@...aro.org>,
	Christian König <christian.koenig@....com>, Jesse Brandeburg <jesse.brandeburg@...el.com>,
	Daniel Micay <danielmicay@...il.com>, Yonghong Song <yhs@...com>,
	Marco Elver <elver@...gle.com>, Miguel Ojeda <ojeda@...nel.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, netdev@...r.kernel.org,
	linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
	linaro-mm-sig@...ts.linaro.org, linux-fsdevel@...r.kernel.org,
	intel-wired-lan@...ts.osuosl.org, dev@...nvswitch.org, x86@...nel.org,
	llvm@...ts.linux.dev, linux-hardening@...r.kernel.org
Subject: [PATCH v2 07/16] btrfs: send: Proactively round up to kmalloc bucket size

Instead of discovering the kmalloc bucket size _after_ allocation, round
up proactively so the allocation is explicitly made for the full size,
allowing the compiler to correctly reason about the resulting size of
the buffer through the existing __alloc_size() hint.

Cc: Chris Mason <clm@...com>
Cc: Josef Bacik <josef@...icpanda.com>
Cc: linux-btrfs@...r.kernel.org
Acked-by: David Sterba <dsterba@...e.com>
Link: https://lore.kernel.org/lkml/20220922133014.GI32411@suse.cz
Signed-off-by: Kees Cook <keescook@...omium.org>
---
 fs/btrfs/send.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index e7671afcee4f..d40d65598e8f 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -435,6 +435,11 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
 	path_len = p->end - p->start;
 	old_buf_len = p->buf_len;
 
+	/*
+	 * Allocate to the next largest kmalloc bucket size, to let
+	 * the fast path happen most of the time.
+	 */
+	len = kmalloc_size_roundup(len);
 	/*
 	 * First time the inline_buf does not suffice
 	 */
@@ -448,11 +453,7 @@ static int fs_path_ensure_buf(struct fs_path *p, int len)
 	if (!tmp_buf)
 		return -ENOMEM;
 	p->buf = tmp_buf;
-	/*
-	 * The real size of the buffer is bigger, this will let the fast path
-	 * happen most of the time
-	 */
-	p->buf_len = ksize(p->buf);
+	p->buf_len = len;
 
 	if (p->reversed) {
 		tmp_buf = p->buf + old_buf_len - path_len - 1;
-- 
2.34.1
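
[Editor's sketch, not part of the patch above: the commit message describes rounding a
requested size up to the kmalloc bucket size *before* allocating, instead of asking
ksize() afterwards, so the tracked capacity matches what the compiler sees through
__alloc_size(). The snippet below illustrates that generic pattern with a hypothetical
grow_buf structure; it assumes kmalloc_size_roundup() (introduced earlier in this
series) is available, and is not btrfs code.]

/*
 * Illustrative buffer that grows in whole kmalloc-bucket steps.
 * Struct and function names are made up for this example.
 */
#include <linux/slab.h>

struct grow_buf {
	char *data;
	size_t len;	/* usable capacity actually allocated */
};

static int grow_buf_ensure(struct grow_buf *b, size_t want)
{
	char *tmp;

	if (b->len >= want)
		return 0;

	/* Ask for the full bucket up front; no post-hoc ksize() call. */
	want = kmalloc_size_roundup(want);

	tmp = krealloc(b->data, want, GFP_KERNEL);
	if (!tmp)
		return -ENOMEM;

	b->data = tmp;
	b->len = want;	/* record the size we explicitly requested */
	return 0;
}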