watchdog: BUG: soft lockup - CPU#0 stuck for 186s
Even after increasing the memory on QEMU to 512 MB, I still occasionally see this:
/ # [90216.140000] watchdog: BUG: soft lockup - CPU#0 stuck for 186s! [kworker/u3:3:2073]
[90216.140000] Modules linked in:
[90216.140000] CPU: 0 PID: 2073 Comm: kworker/u3:3 Tainted: G W 5.7.0-rc2-simple-smp-00001-g6bd140e14d9a-dirty #109
[90216.140000] Workqueue: xprtiod xs_stream_data_receive_workfn
[90216.140000] CPU #: 0
[90216.140000] PC: c03c1694 SR: 00008a7f SP: d89a3c90
[90216.140000] GPR00: 00000000 GPR01: d89a3c90 GPR02: d89a3cb4 GPR03: 00000000
[90216.140000] GPR04: eb895d10 GPR05: 00000000 GPR06: c8000000 GPR07: 00000000
[90216.140000] GPR08: c029c738 GPR09: c006b898 GPR10: d89a2000 GPR11: fffffe5c
[90216.140000] GPR12: 1a69ac61 GPR13: 00000000 GPR14: df4f5800 GPR15: 00000000
[90216.140000] GPR16: c08c2c20 GPR17: eb895d10 GPR18: 20000000 GPR19: 00000001
[90216.140000] GPR20: 098a4d3e GPR21: 000001a3 GPR22: 000051df GPR23: 00000000
[90216.140000] GPR24: 0000001a GPR25: 00000000 GPR26: 01125cf2 GPR27: 00000000
[90216.140000] GPR28: 002c771e GPR29: c04c4fa1 GPR30: df4f5b9c GPR31: c04b63c0
[90216.140000] RES: fffffe5c oGPR11: ffffffff
[90216.140000] Process kworker/u3:3 (pid: 2073, stackpage=d8850000)
[90216.140000]
[90216.140000] Stack:
[90216.140000] Call trace:
[90216.140000] [<8558de7f>] ktime_get+0x70/0x1bc
[90216.140000] [<8d9c6218>] tcp_mstamp_refresh+0x30/0x190
[90216.140000] [<d9699fc9>] tcp_rcv_space_adjust+0x38/0x300
[90216.140000] [<93cbbb5f>] tcp_recvmsg+0x558/0xebc
[90216.140000] [<a2442133>] ? __switch_to+0x50/0x7c
[90216.140000] [<e17f0602>] ? __enqueue_entity.constprop.0+0xc4/0x104
[90216.140000] [<85044956>] inet_recvmsg+0x3c/0x64
[90216.140000] [<96e0a9e6>] sock_recvmsg+0x24/0x34
[90216.140000] [<a02d272c>] xs_sock_recvmsg.constprop.0+0x44/0x88
[90216.140000] [<4c68e7cc>] xs_stream_data_receive_workfn+0x110/0xaac
[90216.140000] [<e17f0602>] ? __enqueue_entity.constprop.0+0xc4/0x104
[90216.140000] [<ee8f26ff>] process_one_work+0x254/0x538
[90216.140000] [<a246f55d>] worker_thread+0x64/0x694
[90216.140000] [<3bf47ca9>] ? worker_thread+0x0/0x694
[90216.140000] [<818fceb2>] kthread+0x120/0x150
[90216.140000] [<0db975eb>] ? kthread+0x0/0x150
[90216.140000] [<336cbdd5>] ret_from_fork+0x1c/0x9c
[90434.850000] watchdog: BUG: soft lockup - CPU#0 stuck for 186s! [kworker/u3:0:2435]
[90434.850000] Modules linked in:
[90434.850000] CPU: 0 PID: 2435 Comm: kworker/u3:0 Tainted: G W L 5.7.0-rc2-simple-smp-00001-g6bd140e14d9a-dirty #109
[90434.850000] Workqueue: xprtiod xs_stream_data_receive_workfn
[90434.850000] CPU #: 0
[90434.850000] PC: c0006d40 SR: 0000827f SP: d88a5a90
[90434.850000] GPR00: 00000000 GPR01: d88a5a90 GPR02: d88a5a98 GPR03: 0000827f
[90434.850000] GPR04: bc00440c GPR05: 00000000 GPR06: d88a5b10 GPR07: d88a5a8c
[90434.850000] GPR08: 00000001 GPR09: c0274acc GPR10: d88a4000 GPR11: df0f0600
[90434.850000] GPR12: 00004000 GPR13: 00000138 GPR14: df0beec0 GPR15: 00000003
[90434.850000] GPR16: df0beec0 GPR17: 00008279 GPR18: d974da90 GPR19: fffffff9
[90434.850000] GPR20: 00424800 GPR21: 000f0000 GPR22: 00000408 GPR23: 00ff0000
[90434.850000] GPR24: 1f0f0600 GPR25: 00000000 GPR26: 0000040c GPR27: 00000008
[90434.850000] GPR28: 00000001 GPR29: 00000008 GPR30: 00000000 GPR31: 0000013a
[90434.850000] RES: df0f0600 oGPR11: ffffffff
[90434.850000] Process kworker/u3:0 (pid: 2435, stackpage=d97e3b90)
[90434.850000]
[90434.850000] Stack:
[90434.850000] Call trace:
[90434.850000] [<948bc844>] ethoc_start_xmit+0x1e4/0x36c
[90434.850000] [<dc228f84>] ? skb_checksum_help+0xc4/0x1d4
[90434.850000] [<28d011c4>] ? netif_skb_features+0x148/0x3a0
[90434.850000] [<74dd9b51>] dev_hard_start_xmit+0xc4/0x1bc
[90434.850000] [<f3ac5f3d>] sch_direct_xmit+0x174/0x32c
[90434.850000] [<707490e7>] __qdisc_run+0x194/0x658
[90434.850000] [<8f2a1121>] ? __do_softirq+0x170/0x320
[90434.850000] [<ed9217a4>] __dev_queue_xmit+0x65c/0x830
[90434.850000] [<e10c603b>] dev_queue_xmit+0x18/0x28
[90434.850000] [<ea3a2b2b>] ip_finish_output2+0x490/0x784
[90434.850000] [<6fd69efd>] __ip_finish_output+0x248/0x298
[90434.850000] [<fb555f04>] ip_output+0xcc/0xec
[90434.850000] [<439f8ee8>] ip_local_out+0x58/0x74
[90434.850000] [<41232458>] __ip_queue_xmit+0x1d4/0x4f8
[90434.850000] [<d7f4ed47>] ? tcp_queue_rcv+0x54/0x21c
[90434.850000] [<b56edd9a>] ip_queue_xmit+0x18/0x28
[90434.850000] [<f41f34ed>] __tcp_transmit_skb+0x670/0xe2c
[90434.850000] [<8ff6d6ed>] __tcp_send_ack.part.0+0xf4/0x15c
[90434.850000] [<9b1f6bc5>] tcp_send_ack+0x2c/0x44
[90434.850000] [<39ea58f7>] tcp_cleanup_rbuf+0xb0/0x1bc
[90434.850000] [<cd8c1332>] tcp_recvmsg+0x358/0xebc
[90434.850000] [<f45595f9>] ? queue_work_on+0x78/0xb4
[90434.850000] [<85044956>] inet_recvmsg+0x3c/0x64
[90434.850000] [<96e0a9e6>] sock_recvmsg+0x24/0x34
[90434.850000] [<a02d272c>] xs_sock_recvmsg.constprop.0+0x44/0x88
[90434.890000] BUG: workqueue lockup - pool cpus=0 flags=0x4 nice=0 stuck for 197s!
[90434.890000] Showing busy workqueues and worker pools:
[90434.890000] workqueue events: flags=0x0
[90434.890000] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
[90434.890000] pending: stop_one_cpu_nowait_workfn
[90434.890000] workqueue events_power_efficient: flags=0x80
[90434.890000] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=4/256 refcnt=5
[90434.890000] pending: phy_state_machine, check_lifetime, do_cache_clean, neigh_periodic_work
[90434.890000] workqueue rpciod: flags=0xa
[90434.890000] pwq 2: cpus=0 flags=0x4 nice=0 active=4/256 refcnt=6
[90434.890000] in-flight: 2067:rpc_async_schedule
[90434.890000] pending: rpc_async_schedule, rpc_async_schedule, __rpc_queue_timer_fn
[90434.900000] workqueue xprtiod: flags=0x1a
[90434.900000] pwq 3: cpus=0 flags=0x4 nice=-20 active=1/256 refcnt=3
[90434.900000] in-flight: 2435:xs_stream_data_receive_workfn
[90434.900000] pool 2: cpus=0 flags=0x4 nice=0 hung=197s workers=10 idle: 1304 2070 2445 2071 2444 2075 2066 1625 2076
[90434.900000] pool 3: cpus=0 flags=0x4 nice=-20 hung=197s workers=6 idle: 2082 2073 2074 2062 2068
[90434.910000] nfs: server 10.0.0.27 not responding, still trying
We had a similar case with ktime_get_ts64, this time while handling a syscall from a user space program:
[93353.230000] watchdog: BUG: soft lockup - CPU#0 stuck for 187s! [ld-linux-or1k.s:92]
[93353.230000] Modules linked in:
[93353.230000] CPU: 0 PID: 92 Comm: ld-linux-or1k.s Tainted: G W L 5.7.0-rc2-simple-smp-00001-g6bd140e14d9a-dirty #109
[93353.230000] CPU #: 0
[93353.230000] PC: c03c16d4 SR: 00008a7f SP: df49be34
[93353.230000] GPR00: 00000000 GPR01: df49be34 GPR02: df49be58 GPR03: 00000000
[93353.230000] GPR04: eb895f45 GPR05: 00000000 GPR06: c8000000 GPR07: 7fb158d4
[93353.230000] GPR08: 00000000 GPR09: c006acd0 GPR10: df49a000 GPR11: 0000c800
[93353.230000] GPR12: 4a6de800 GPR13: 301ac9b4 GPR14: 00000000 GPR15: 0000827e
[93353.230000] GPR16: 00016be3 GPR17: 4a6de800 GPR18: 20000000 GPR19: b8030800
[93353.230000] GPR20: df49beac GPR21: 00000000 GPR22: 16a0793e GPR23: 00000000
[93353.230000] GPR24: 00000000 GPR25: 00000010 GPR26: 0000001a GPR27: fffffff9
[93353.230000] GPR28: 0041ec48 GPR29: c011dad8 GPR30: 00000000 GPR31: d8895c24
[93353.230000] RES: 0000c800 oGPR11: ffffffff
[93353.230000] Process ld-linux-or1k.s (pid: 92, stackpage=df538000)
[93353.230000]
[93353.230000] Stack:
[93353.230000] Call trace:
[93353.230000] [<48f53e3b>] ktime_get_ts64+0x80/0x2a4
[93353.230000] [<a5ff5e89>] poll_select_set_timeout+0xbc/0x134
[93353.230000] [<bd428b75>] do_pselect+0x94/0x148
[93353.230000] [<b1841f6e>] sys_pselect6_time32+0x70/0x94
[93353.230000] [<cfae1e30>] _syscall_return+0x0/0x4
Another warning; it is strange that it's always ~187 seconds. I have a hunch that it's something in QEMU, and I also notice that the CPU time used by QEMU is very high.
# [ 672.530000] watchdog: BUG: soft lockup - CPU#0 stuck for 187s! [find:68]
[ 672.530000] Modules linked in:
[ 672.530000] CPU: 0 PID: 68 Comm: find Not tainted 5.7.0-rc2-simple-smp-00001-g6bd140e14d9a-dirty #109
[ 672.530000] CPU #: 0
[ 672.530000] PC: c03c1694 SR: 00008a7f SP: df107c3c
[ 672.530000] GPR00: 00000000 GPR01: df107c3c GPR02: df107c60 GPR03: 00000000
[ 672.530000] GPR04: eb8a9e88 GPR05: 00000000 GPR06: c8000000 GPR07: df107bcf
[ 672.530000] GPR08: fffffff0 GPR09: c006b898 GPR10: df106000 GPR11: fffffffd
[ 672.530000] GPR12: 2224ce92 GPR13: 00000000 GPR14: df107df4 GPR15: 00000000
[ 672.530000] GPR16: c08c2c20 GPR17: eb8a9e88 GPR18: e0000000 GPR19: 00000001
[ 672.530000] GPR20: 5e948ab4 GPR21: 00000002 GPR22: 0000006e GPR23: 00000010
[ 672.530000] GPR24: 0000001a GPR25: 00000563 GPR26: 00013640 GPR27: 00000001
[ 672.530000] GPR28: 00db0fc5 GPR29: d8634000 GPR30: dca46010 GPR31: d86356c0
[ 672.530000] RES: fffffffd oGPR11: ffffffff
[ 672.530000] Process find (pid: 68, stackpage=df136000)
[ 672.530000]
[ 672.530000] Stack:
[ 672.530000] Call trace:
[ 672.530000] [<a747bb5c>] ktime_get+0x70/0x1bc
[ 672.530000] [<27329367>] rpc_new_task+0x160/0x1d4
[ 672.530000] [<d7ee08da>] rpc_run_task+0x24/0x240
[ 672.530000] [<4d80ff3e>] rpc_call_sync+0x60/0xec
[ 672.530000] [<2b88228d>] nfs3_rpc_wrapper+0x48/0xbc
[ 672.530000] [<ff2084be>] nfs3_proc_getattr+0x78/0x98
[ 672.530000] [<02cfa7c7>] __nfs_revalidate_inode+0xd4/0x22c
[ 672.530000] [<7f7a6fa3>] nfs_lookup_verify_inode+0xec/0x120
[ 672.530000] [<4b0a1b28>] nfs_do_lookup_revalidate+0x2a4/0x394
[ 672.530000] [<0f326b1c>] nfs_lookup_revalidate+0x90/0xc0
[ 672.530000] [<8711b743>] lookup_fast+0x120/0x1e4
[ 672.530000] [<a5a2bce5>] path_openat+0x160/0x124c
[ 672.530000] [<adae4104>] do_filp_open+0x78/0xf8
[ 672.530000] [<5a437d71>] ? __alloc_fd+0x5c/0x2c4
[ 672.530000] [<054df2bd>] do_sys_openat2+0x32c/0x404
[ 672.530000] [<6510ae45>] do_sys_open+0x54/0xa0
[ 672.530000] [<b8f86578>] sys_openat+0x14/0x24
[ 672.530000] [<30023bc1>] ? _syscall_return+0x0/0x4
My hunch is that the CPU is being starved and cannot service the queue/timer requests. Doing some profiling I found that QEMU is spending close to 100% of its CPU time running l.mtspr to flush TLB entries.
I traced this to the fact that our kernel flushes every TLB entry whenever any TLB flush is requested.
With the patch below, the CPU usage goes from 100% to 30% and the timeouts seem to go away, but we now sometimes get failures (a small sketch of the new flush walk follows the patch):
diff --git a/arch/openrisc/mm/tlb.c b/arch/openrisc/mm/tlb.c
index 4b680aed8f5f..36f65f554e61 100644
--- a/arch/openrisc/mm/tlb.c
+++ b/arch/openrisc/mm/tlb.c
@@ -28,7 +28,7 @@
#define NO_CONTEXT -1
-#define NUM_DTLB_SETS (1 << ((mfspr(SPR_IMMUCFGR) & SPR_IMMUCFGR_NTS) >> \
+#define NUM_DTLB_SETS (1 << ((mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_NTS) >> \
SPR_DMMUCFGR_NTS_OFF))
#define NUM_ITLB_SETS (1 << ((mfspr(SPR_IMMUCFGR) & SPR_IMMUCFGR_NTS) >> \
SPR_IMMUCFGR_NTS_OFF))
@@ -49,13 +49,16 @@ void local_flush_tlb_all(void)
unsigned long num_tlb_sets;
/* Determine number of sets for IMMU. */
- /* FIXME: Assumption is I & D nsets equal. */
num_tlb_sets = NUM_ITLB_SETS;
for (i = 0; i < num_tlb_sets; i++) {
- mtspr_off(SPR_DTLBMR_BASE(0), i, 0);
mtspr_off(SPR_ITLBMR_BASE(0), i, 0);
}
+
+ num_tlb_sets = NUM_DTLB_SETS;
+ for (i = 0; i < num_tlb_sets; i++) {
+ mtspr_off(SPR_DTLBMR_BASE(0), i, 0);
+ }
}
#define have_dtlbeir (mfspr(SPR_DMMUCFGR) & SPR_DMMUCFGR_TEIRI)
@@ -116,20 +119,14 @@ void local_flush_tlb_range(struct vm_area_struct *vma,
}
}
-/*
- * Invalidate the selected mm context only.
- *
- * FIXME: Due to some bug here, we're flushing everything for now.
- * This should be changed to loop over over mm and call flush_tlb_range.
- */
-
void local_flush_tlb_mm(struct mm_struct *mm)
{
+ struct vm_area_struct *vma = mm->mmap;
- /* Was seeing bugs with the mm struct passed to us. Scrapped most of
- this function. */
- /* Several architctures do this */
- local_flush_tlb_all();
+ while (vma != NULL) {
+ local_flush_tlb_range(vma, vma->vm_start, vma->vm_end);
+ vma = vma->vm_next;
+ }
}
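For reference, here is a minimal stand-alone sketch of the walk the patched local_flush_tlb_mm() now does: visit each VMA in the mm and flush only the ranges it actually maps, instead of wiping the whole TLB. The types are mocked up for illustration, not the kernel's real mm_struct/vm_area_struct, and the 8 KiB page size is OpenRISC's.

/*
 * Stand-alone user-space sketch (mocked-up types) of the per-VMA walk
 * that the patched local_flush_tlb_mm() performs.
 */
#include <stdio.h>

#define PAGE_SIZE 8192UL		/* OpenRISC page size */

struct vma {				/* stand-in for vm_area_struct */
	unsigned long start;
	unsigned long end;
	struct vma *next;
};

struct mm {				/* stand-in for mm_struct */
	struct vma *mmap;		/* linked list of mappings */
};

/*
 * Stand-in for the per-page invalidate; in the kernel this is an mtspr
 * write to the ITLB/DTLB match registers (or the EIR registers).
 */
static void flush_one_page(unsigned long addr)
{
	printf("  invalidate page at 0x%08lx\n", addr);
}

static void flush_range(unsigned long start, unsigned long end)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE)
		flush_one_page(addr);
}

/* Shape of the patched local_flush_tlb_mm(): walk mmap, flush each VMA. */
static void flush_mm(struct mm *mm)
{
	struct vma *vma;

	for (vma = mm->mmap; vma != NULL; vma = vma->next)
		flush_range(vma->start, vma->end);
}

int main(void)
{
	struct vma text  = { 0x08048000, 0x08050000, NULL };
	struct vma stack = { 0x7ffe0000, 0x7fff0000, &text };
	struct mm mm = { &stack };

	flush_mm(&mm);		/* invalidates the 12 mapped pages, nothing else */
	return 0;
}

Compared to local_flush_tlb_all(), which writes to every ITLB and DTLB match register on each flush, this only invalidates the pages the mm actually maps, which is presumably why the l.mtspr load in QEMU drops.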
The failures are crashes caused by accessing bad pages:
[ 4.850000] Unable to handle kernel access
[ 4.850000] at virtual address 0x7665006f
[ 4.860000]
[ 4.860000] Oops#: 0000
[ 4.860000] CPU #: 0
[ 4.860000] PC: c000b3a8 SR: 0000827f SP: c1f9fd8c
[ 4.860000] GPR00: 00000000 GPR01: c1f9fd8c GPR02: c1f9fd98 GPR03: 7665006b
[ 4.860000] GPR04: 6b6e2d3e GPR05: 61637469 GPR06: 00000000 GPR07: 00000001
[ 4.860000] GPR08: 00000000 GPR09: c000b3b4 GPR10: c1f9e000 GPR11: 00000006
[ 4.860000] GPR12: 00000007 GPR13: 00000000 GPR14: fffffff8 GPR15: 00000000
[ 4.860000] GPR16: 7665006b GPR17: 00000000 GPR18: 00000000 GPR19: 0000001c
[ 4.860000] GPR20: c1f98ce0 GPR21: 0000001c GPR22: c1f98ce0 GPR23: 00000002
[ 4.860000] GPR24: c1b84d70 GPR25: 00000000 GPR26: c1b92bc0 GPR27: 00000000
[ 4.860000] GPR28: c1f99030 GPR29: 00000000 GPR30: 00000000 GPR31: ffffffff
[ 4.860000] RES: 00000006 oGPR11: ffffffff
[ 4.860000] Process swapper (pid: 1, stackpage=c1f98ce0)
[ 4.860000]
[ 4.860000] Stack:
[ 4.860000] Call trace:
[ 4.860000] [<(ptrval)>] local_flush_tlb_mm+0x34/0x5c
[ 4.870000] [<(ptrval)>] switch_mm+0x28/0x40
[ 4.870000] [<(ptrval)>] begin_new_exec+0x658/0x970
[ 4.870000] [<(ptrval)>] ? kernel_read+0x24/0x3c
[ 4.870000] [<(ptrval)>] load_elf_binary+0x720/0x146c
[ 4.870000] [<(ptrval)>] ? lock_acquire+0x118/0x4b8
[ 4.870000] [<(ptrval)>] ? __do_execve_file+0x6d8/0xcf8
[ 4.870000] [<(ptrval)>] ? lock_release+0x1bc/0x360
[ 4.870000] [<(ptrval)>] __do_execve_file+0x6cc/0xcf8
[ 4.870000] [<(ptrval)>] ? __do_execve_file+0x598/0xcf8
[ 4.870000] [<(ptrval)>] do_execve+0x3c/0x4c
[ 4.870000] [<(ptrval)>] run_init_process+0xdc/0x100
[ 4.870000] [<(ptrval)>] ? kernel_init+0x0/0x144
[ 4.870000] [<(ptrval)>] kernel_init+0x5c/0x144
[ 4.870000] [<(ptrval)>] ? schedule_tail+0x54/0x94
[ 4.870000] [<(ptrval)>] ret_from_fork+0x1c/0x150
The errors showing the xs_stream_data_receive_workfn worker starving the RCU task are related to NFS. During these failures a find was running over NFS; it seems that caused an extra-high load of NFS activity, which started the RCU cleanup thread.
We are also seeing this warning:
/ # ./setup.sh
creating user shorne ...
mounting home work ...
[ 34.290000] mount (71) used greatest stack depth: 5360 bytes left
setting clock ...
Tue Aug 11 08:27:56 GMT 2020
enabling login for shorne ...
setting coredumps ...
setting up /dev/stderr /dev/stdin /dev/stdout ...
configuring dropbear private key ...
generating entropy for ssh, must be higher than 500 ...
before: 12
after: 12
starting dropbear ...
[ 73.300000] ------------[ cut here ]------------
[ 73.300000] WARNING: CPU: 1 PID: 16 at net/sched/sch_generic.c:442 dev_watchdog+0x36c/0x374
[ 73.300000] NETDEV WATCHDOG: eth0 (ethoc): transmit queue 0 timed out
[ 73.300000] Modules linked in:
[ 73.300000] CPU: 1 PID: 16 Comm: ksoftirqd/1 Not tainted 5.8.0-simple-smp-00012-g55b2662ec665-dirty #257
[ 73.300000] Call trace:
[ 73.300000] [<(ptrval)>] dump_stack+0xc4/0x100
[ 73.300000] [<(ptrval)>] __warn+0xf4/0x124
[ 73.300000] [<(ptrval)>] ? dev_watchdog+0x36c/0x374
[ 73.300000] [<(ptrval)>] warn_slowpath_fmt+0x7c/0x94
[ 73.300000] [<(ptrval)>] dev_watchdog+0x36c/0x374
[ 73.300000] [<(ptrval)>] ? dev_watchdog+0x0/0x374
[ 73.320000] [<(ptrval)>] call_timer_fn.isra.0+0x24/0xa0
[ 73.320000] [<(ptrval)>] run_timer_softirq+0x2a4/0x2b8
[ 73.320000] [<(ptrval)>] ? dev_watchdog+0x0/0x374
[ 73.320000] [<(ptrval)>] ? set_next_entity+0x154/0x558
[ 73.320000] [<(ptrval)>] __do_softirq+0x16c/0x364
[ 73.320000] [<(ptrval)>] ? __schedule+0x2fc/0xa30
[ 73.320000] [<(ptrval)>] run_ksoftirqd+0x74/0x94
[ 73.320000] [<(ptrval)>] smpboot_thread_fn+0x1e4/0x2a0
[ 73.320000] [<(ptrval)>] ? _raw_spin_unlock_irqrestore+0x24/0x34
[ 73.320000] [<(ptrval)>] ? smpboot_thread_fn+0x0/0x2a0
[ 73.320000] [<(ptrval)>] kthread+0x124/0x154
[ 73.320000] [<(ptrval)>] ? kthread+0x0/0x154
[ 73.320000] [<(ptrval)>] ret_from_fork+0x1c/0x9c
[ 73.320000] ---[ end trace 60dafde39c415454 ]---
Note this is right after starting up dropbear, which is accessed over NFS.
On another boot we see:
[ 7806.080000] INFO: timekeeping: Cycle offset (3951585492) is larger than the 'openrisc_timer' clock's 50% safety margin (2147483647)
[ 7806.080000] timekeeping: Your kernel is still fine, but is feeling a bit nervous
[ 7967.090000] watchdog: BUG: soft lockup - CPU#1 stuck for 151s! [swapper/1:0]
[ 7967.090000] Modules linked in:
[ 7967.090000] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.8.0-simple-smp-00012-g55b2662ec665-dirty #257
[ 7967.090000] CPU #: 1
[ 7967.090000] PC: c0005730 SR: 0000807f SP: df039fb0
[ 7967.090000] GPR00: 00000000 GPR01: df039fb0 GPR02: df039fc0 GPR03: 0000827f
[ 7967.090000] GPR04: df038000 GPR05: df08fe60 GPR06: df08c050 GPR07: 00000000
[ 7967.090000] GPR08: 00000000 GPR09: c000570c GPR10: df038000 GPR11: 00000006
[ 7967.090000] GPR12: 08f0d180 GPR13: 00000000 GPR14: 00000000 GPR15: 00000000
[ 7967.090000] GPR16: c0572550 GPR17: 00000010 GPR18: 00000001 GPR19: 00000000
[ 7967.090000] GPR20: c0573ab4 GPR21: df038004 GPR22: 00010000 GPR23: 00000000
[ 7967.090000] GPR24: c053be64 GPR25: 587a8000 GPR26: c04e186c GPR27: 00000719
[ 7967.090000] GPR28: 00000060 GPR29: df03a000 GPR30: c05458e0 GPR31: fefefeff
[ 7967.090000] RES: 00000006 oGPR11: ffffffff
[ 7967.090000] Process swapper/1 (pid: 0, stackpage=df033320)
[ 7967.090000]
[ 7967.090000] Stack:
[ 7967.090000] Call trace:
[ 7967.090000] [<(ptrval)>] arch_cpu_idle+0x18/0x4c
[ 7967.090000] [<(ptrval)>] default_idle_call+0x58/0x80
[ 7967.090000] [<(ptrval)>] do_idle+0x140/0x260
[ 7967.090000] [<(ptrval)>] cpu_startup_entry+0x34/0x3c
[ 7967.090000] [<(ptrval)>] ? secondary_start_kernel+0x11c/0x134