Date:	Tue, 6 May 2008 18:56:32 +1000
From:	David Chinner <dgc@....com>
To:	Marco Berizzi <pupilla@...mail.com>
Cc:	David Chinner <dgc@....com>, linux-kernel@...r.kernel.org,
	xfs@....sgi.com
Subject: Re: XFS shutdown in xfs_iunlink_remove() (was Re: 2.6.25: swapper: page allocation failure. order:3, mode:0x4020)

On Tue, May 06, 2008 at 09:03:06AM +0200, Marco Berizzi wrote:
> David Chinner wrote:
> > > May  5 14:31:38 Pleiadi kernel: xfs_inactive:^Ixfs_ifree() returned an
> > > error = 22 on hda8
> >
> > Is it reproducible?
> 
> Honestly, I don't know. As you can see from the
> dmesg output, this box was started on 24 April
> and the crash happened yesterday.

Yeah, I noticed that it happened after substantial uptime.

> IMHO the crash happened because of this:
> at 12:23 squid complained that there was no space
> left on the device and started shrinking its cache_dir,
> and at 12:57 the kernel started logging...
> This box is pretty slow (Celeron) and the hda8 filesystem
> is about 2786928 1k-blocks.

Hmmmmm - interesting. Both reports of this problem are from machines
running as squid proxies. Are you using AUFS for the cache?

The ENOSPC condition is interesting, but I'm not sure it's relevant
at all - the other case seemed to be triggered by a cron job doing
cache cleanup, so I think it's just the removal of files that is
triggering this....

> > What were you doing at the time the problem occurred?
> 
> This box is running squid (an HTTP proxy): hda8 is where
> the squid cache and logs are stored.
> I haven't rebooted this box since the problem happened.
> If you need ssh access, just email me.
> This is the output from xfs_repair:

You've run repair, so there's not much I can look at now.

As a suggestion, when the cache gets close to full next time, can
you take a metadump of the filesystem (it obfuscates names and
contains no data) and then trigger the cache cleanup function? If
the filesystem falls over, I'd be very interested in getting a copy
of the metadump image and trying to reproduce the problem locally.
(BTW, you'll need a newer xfsprogs to get xfs_metadump.)
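
For what it's worth, the rough sequence would look something like
this - the mount point and dump path here are just examples, so
adjust them for your setup:

    # unmount (or at least remount read-only) so the metadata is stable
    umount /var/cache/squid

    # dump the filesystem metadata - names are obfuscated and no file
    # data is copied, so the image is safe to send
    xfs_metadump /dev/hda8 /root/hda8.metadump

    # remount and let squid run its cleanup; if the filesystem shuts
    # down again, that hda8.metadump image is what I'm after
    mount /dev/hda8 /var/cache/squid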

Still, thank you for the information - the bit about squid proxies
is definitely relevant, I think...

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group
