Message-ID: <CA+55aFyv0k9e6bBqGm-LL3CUwimS4+rSu341P7SOV5ezYrrW_g@mail.gmail.com>
Date: Tue, 25 Oct 2016 18:33:26 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Dave Jones <davej@...emonkey.org.uk>, Chris Mason <clm@...com>,
Andy Lutomirski <luto@...capital.net>,
Andy Lutomirski <luto@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Jens Axboe <axboe@...com>, Al Viro <viro@...iv.linux.org.uk>,
Josef Bacik <jbacik@...com>, David Sterba <dsterba@...e.com>,
linux-btrfs <linux-btrfs@...r.kernel.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Dave Chinner <david@...morbit.com>
Subject: Re: bio linked list corruption.
On Tue, Oct 25, 2016 at 5:27 PM, Dave Jones <davej@...emonkey.org.uk> wrote:
>
> DaveC: Do these look like real problems, or is this more "looks like
> random memory corruption" ? It's been a while since I did some stress
> testing on XFS, so these might not be new..
Andy, do you think we could just do some poisoning of the stack as we
free it, to see if that catches anything?
Something truly stupid like just
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -218,6 +218,7 @@ static inline void free_thread_stack(struct task_struct *tsk)
 		unsigned long flags;
 		int i;
 
+		memset(tsk->stack_vm_area->addr, 0xd0, THREAD_SIZE);
 		local_irq_save(flags);
 		for (i = 0; i < NR_CACHED_STACKS; i++) {
 			if (this_cpu_read(cached_stacks[i]))
or similar?
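(A sketch, not part of the patch above: the complementary check would be
to scan a stack when it comes back out of the per-cpu cache and warn if
the 0xd0 pattern was disturbed -- any changed byte means something wrote
through a stale reference after the poisoning. The helper name and call
site here are hypothetical.)

/*
 * Hypothetical helper, not part of the patch above: call it on a
 * stack pulled back out of the per-cpu cache.  Any byte that no
 * longer reads 0xd0 means something wrote through a stale
 * reference after free_thread_stack() poisoned it.
 */
static void check_stack_poison(void *stack)
{
	unsigned char *p = stack;
	unsigned long i;

	for (i = 0; i < THREAD_SIZE; i++) {
		if (p[i] != 0xd0) {
			pr_err("stale write to freed stack: offset %lu = 0x%02x\n",
			       i, (unsigned int)p[i]);
			break;
		}
	}
}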
It seems like DaveJ had an easier time triggering these problems with
the stack cache, but they clearly didn't go away when the stack cache
was disabled. So maybe the stack cache just made the reuse more likely
and faster, making the problem show up faster too. But if we actively
poison things, we'll corrupt the free'd stack *immediately* if there
is some stale use..
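(An aside, as illustration rather than anything from the thread: part of
why a 0xd0 fill trips things up immediately on x86-64 is that, repeated
across a word, it forms a non-canonical address, so a frame pointer or
saved return address reloaded from poisoned stack memory faults on the
very first dereference instead of silently pointing at valid memory. A
standalone userspace check of that property:)

/* Standalone illustration, not kernel code: 0xd0 repeated across a
 * 64-bit word is a non-canonical x86-64 address, so using it as a
 * pointer traps instead of hitting valid memory.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t poison = 0xd0d0d0d0d0d0d0d0ULL;
	uint64_t top17 = poison >> 47;

	/* Canonical x86-64 addresses have bits 63..47 all 0 or all 1. */
	assert(top17 != 0 && top17 != 0x1ffff);
	printf("0x%016" PRIx64 " is non-canonical (bits 63..47 = 0x%" PRIx64 ")\n",
	       poison, top17);
	return 0;
}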
Completely untested. Maybe there's some reason we can't write to the
whole thing like that?
Linus