Date: Sat, 08 Sep 2012 19:22:54 +0100
From: Tvrtko Ursulin <tvrtko@...ulin.net>
To: David Howells <dhowells@...hat.com>
CC: linux-cachefs@...hat.com, linux-kernel@...r.kernel.org
Subject: fscache scheduling while atomic bugs under 3.4.10-rt18

Hi,

I get a lot of these and was wondering whether it is a generic fscache issue or in some way related to the RT patchset. Perhaps the two do not play well together?

Traces typically look like this:

BUG: scheduling while atomic: kworker/u:2/4151/0x00000002
Modules linked in: snd_usb_audio snd_hwdep snd_usbmidi_lib fuse ip6table_filter ip6_tables ebtable_nat ebtables cachefiles ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT iptable_mangle xt_tcpudp iptable_filter ip_tables x_tables bridge stp llc kvm_amd kvm autofs4 dm_crypt joydev usbhid hid snd_ice1712 nfsd snd_ice17xx_ak4xxx bnep snd_ak4xxx_adda snd_cs8427 snd_ac97_codec nfs bluetooth snd_pcm lockd fscache auth_rpcgss snd_page_alloc nfs_acl ac97_bus binfmt_misc sunrpc snd_i2c snd_mpu401_uart snd_seq_midi microcode snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device snd psmouse soundcore ext4 firewire_ohci mbcache r8169 firewire_core jbd2 mii crc_itu_t pata_atiixp fglrx(PO) i2c_piix4 it87 hwmon_vid raid10 raid456 async_pq async_xor xor async_memcpy async_raid6_recov raid6_pq async_tx raid1 linear xfs raid0 ahci libahci
Pid: 4151, comm: kworker/u:2 Tainted: P O 3.4.10-rt18 #1
Call Trace:
 [<ffffffff813d8b0f>] __schedule_bug+0x43/0x45
 [<ffffffff813ddcda>] __schedule+0x64a/0x680
 [<ffffffff813de149>] schedule+0x29/0x80
 [<ffffffff813deda7>] rt_spin_lock_slowlock+0x14b/0x1e4
 [<ffffffff813df271>] rt_spin_lock+0x21/0x30
 [<ffffffff81045520>] __queue_work+0x180/0x330
 [<ffffffff8105644a>] ? ttwu_do_activate.constprop.76+0x4a/0x60
 [<ffffffff8104571d>] queue_work_on+0x1d/0x30
 [<ffffffff8104577e>] queue_work+0x2e/0x50
 [<ffffffff81055c9b>] ? migrate_enable+0xcb/0x1b0
 [<ffffffffa047aca5>] fscache_enqueue_object+0x65/0xd0 [fscache]
 [<ffffffffa047b2d2>] fscache_object_work_func+0x342/0xa20 [fscache]
 [<ffffffff81055c9b>] ? migrate_enable+0xcb/0x1b0
 [<ffffffffa047af90>] ? fscache_drop_object+0x170/0x170 [fscache]
 [<ffffffff81045a87>] process_one_work+0x117/0x3a0
 [<ffffffff81045d34>] process_scheduled_works+0x24/0x40
 [<ffffffff81046781>] worker_thread+0x271/0x340
 [<ffffffff81046510>] ? manage_workers.isra.30+0x230/0x230
 [<ffffffff8104bbde>] kthread+0x8e/0xa0
 [<ffffffff810546f9>] ? finish_task_switch+0x49/0xf0
 [<ffffffff813e1034>] kernel_thread_helper+0x4/0x10
 [<ffffffff8104bb50>] ? __init_kthread_worker+0x50/0x50
 [<ffffffff813e1030>] ? gs_change+0xb/0xb

Thanks,

Tvrtko

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/