About the author: a middle-aged programmer who has worked in the telecom, mobile phone, security, and chip industries, making a living with Linux.

Introduction

Kdump provides a mechanism to dump all of the system's memory and register state into a file when the kernel crashes; that file can later be analyzed and debugged with tools such as gdb and crash. It is analogous to the coredump mechanism for user-space programs. The main flow is shown in the figure below:

(figure: overall kdump flow)

The core idea is to reserve a region of memory and preload a backup kernel into it. When the main kernel crashes, control jumps to the backup kernel, which dumps the memory used by the main kernel, plus the register state at the time of the fault, into a file on disk for later analysis. The file is in elf core format.

kdump is mainly aimed at capturing pure software faults. In the embedded world you also need to capture hardware faults; by imitating kdump's principles and strengthening them, you can build your own coredump mechanism.

The following sections analyze the kdump mechanism in detail.

Installation

In the past, installing kdump meant manually installing kexec-tools, kdump-tools, and crash one by one, and manually configuring the grub cmdline parameters. On current Ubuntu, installing the single linux-crashdump package takes care of all of it:

sudo apt-get install linux-crashdump

After installation, use the kdump-config command to check that the system is configured correctly:

$ kdump-config show
DUMP_MODE:        kdump
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash            // storage directory for kdump files
crashkernel addr: 0x
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-5.8.18
kdump initrd:
   /var/lib/kdump/initrd.img: symbolic link to /var/lib/kdump/initrd.img-5.8.18
current state:    ready to kdump        // "ready" means the kdump mechanism is set up
kexec command:
  /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-5.8.18 root=UUID=9ee42fe2-4e73-4703-8b6d-bb238ffdb003 ro find_preseed=/preseed.cfg auto noprompt priority=critical locale=en_US quiet reset_devices systemd.unit=kdump-tools-dump.service nr_cpus=1 irqpoll nousb ata_piix.prefer_ms_hyperv=0" --initrd=/var/lib/kdump/initrd.img /var/lib/kdump/vmlinuz

linux-crashdump is in essence still composed of the same separate packages:

$ sudo apt-get install linux-crashdump -d
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  crash efibootmgr grub-common grub-efi-arm64 grub-efi-arm64-bin
  grub-efi-arm64-signed grub2-common kdump-tools kexec-tools libfreetype6
  libsnappy1v5 makedumpfile os-prober
Suggested packages:
  multiboot-doc xorriso desktop-base
Recommended packages:
  secureboot-db
The following NEW packages will be installed:
  crash efibootmgr grub-common grub-efi-arm64 grub-efi-arm64-bin
  grub-efi-arm64-signed grub2-common kdump-tools kexec-tools libfreetype6
  libsnappy1v5 linux-crashdump makedumpfile os-prober
0 upgraded, 14 newly installed, 0 to remove and 67 not upgraded.
Need to get 6611 kB of archives.

Triggering kdump

With kdump ready, trigger a panic by hand:

$ sudo bash
# echo c > /proc/sysrq-trigger

After the kdump completes and the system reboots, the memory dump file generated by kdump can be found under /var/crash:

$ ls -l /var/crash/202107011353/
total 65324
-rw------- 1 root whoopsie   119480 Jul 1 13:53 dmesg.202107011353  // kernel log
-rw------- 1 root whoopsie 66766582 Jul 1 13:53 dump.202107011353   // memory dump, compressed format
$ sudo file /var/crash/202107011353/dump.202107011353
/var/crash/202107011353/dump.202107011353: Kdump compressed dump v6, system Linux, node ubuntu, release 5.8.18, version #18 SMP Thu Jul 1 11:24:39 CST 2021, machine x86_64, domain (none)

By default the dump file is compressed by makedumpfile; with a few configuration changes we can produce the raw elf core file instead:

$ ls -l /var/crash/202107011132/
total 1785584
-rw------- 1 root whoopsie     117052 Jul 1 11:32 dmesg.202107011132   // kernel log
-r-----r-- 1 root whoopsie 1979371520 Jul 1 11:32 vmcore.202107011132  // memory dump, raw ELF format
$ file /var/crash/202107011132/vmcore.202107011132
/var/crash/202107011132/vmcore.202107011132: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style

Debugging the kdump file

The crash tool makes analyzing a kdump file convenient: crash wraps gdb and adds many shortcut commands for kernel debugging. gdb and trace32 can also be used for analysis:

$ sudo crash /usr/lib/debug/boot/vmlinux-5.8.0-43-generic /var/crash/202106170338/dump.202106170338

Flow analysis of kdump-tools-dump.service

We mentioned earlier that kdump's default compressed format can be changed to the raw ELF core file format; this section implements that.

Copying the /proc/vmcore file from memory to disk is done by the kdump-tools-dump.service service running in the crash kernel. Let's walk through the flow in detail:

  1. First, from the kdump-config output we can see that after the second (crash) kernel boots, systemd only needs to start one service, kdump-tools-dump.service:

# kdump-config show
DUMP_MODE:        kdump
USE_KDUMP:        1
KDUMP_SYSCTL:     kernel.panic_on_oops=1
KDUMP_COREDIR:    /var/crash
crashkernel addr: 0x73000000
   /var/lib/kdump/vmlinuz: symbolic link to /boot/vmlinuz-5.8.0-43-generic
kdump initrd:
   /var/lib/kdump/initrd.img: symbolic link to /var/lib/kdump/initrd.img-5.8.0-43-generic
current state:    ready to kdump
kexec command:
  /sbin/kexec -p --command-line="BOOT_IMAGE=/boot/vmlinuz-5.8.0-43-generic root=UUID=9ee42fe2-4e73-4703-8b6d-bb238ffdb003 ro find_preseed=/preseed.cfg auto noprompt priority=critical locale=en_US quiet reset_devices systemd.unit=kdump-tools-dump.service nr_cpus=1 irqpoll nousb ata_piix.prefer_ms_hyperv=0" --initrd=/var/lib/kdump/initrd.img /var/lib/kdump/vmlinuz

  2. kdump-tools-dump.service essentially invokes the kdump-tools start script:

# systemctl cat kdump-tools-dump.service
# /lib/systemd/system/kdump-tools-dump.service
[Unit]
Description=Kernel crash dump capture service
Wants=network-online.target dbus.socket systemd-resolved.service
After=network-online.target dbus.socket systemd-resolved.service
[Service]
Type=oneshot
StandardOutput=syslog+console
EnvironmentFile=/etc/default/kdump-tools
ExecStart=/etc/init.d/kdump-tools start
ExecStop=/etc/init.d/kdump-tools stop
RemainAfterExit=yes

  3. kdump-tools in turn calls kdump-config savecore:

# vim /etc/init.d/kdump-tools
KDUMP_SCRIPT=/usr/sbin/kdump-config
echo -n "Starting $DESC: "
$KDUMP_SCRIPT savecore

  4. kdump-config calls makedumpfile -c -d 31 /proc/vmcore dump.xxxxxx:

MAKEDUMP_ARGS=${MAKEDUMP_ARGS:="-c -d 31"}
vmcore_file=/proc/vmcore
makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP

So by default kdump-tools-dump.service calls makedumpfile to produce a compressed dump file. But we want to analyze the raw elf-format vmcore file. What to do?

4.1) First, modify the MAKEDUMP_ARGS parameter in /usr/sbin/kdump-config so that makedumpfile fails:

MAKEDUMP_ARGS=${MAKEDUMP_ARGS:="-xxxxx -c -d 31"}   // -xxxxx is an arbitrary invalid option

4.2) kdump-config then falls back to cp /proc/vmcore vmcore.xxxxxx, which produces the raw elf-format vmcore file:

log_action_msg "running makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP"
makedumpfile $MAKEDUMP_ARGS $vmcore_file $KDUMP_CORETEMP    # first try makedumpfile to create a compressed dump
ERROR=$?
if [ $ERROR -ne 0 ] ; then                                  # if makedumpfile failed
    log_failure_msg "$NAME: makedumpfile failed, falling back to 'cp'"
    logger -t $NAME "makedumpfile failed, falling back to 'cp'"
    KDUMP_CORETEMP="$KDUMP_STAMPDIR/vmcore-incomplete"
    KDUMP_COREFILE="$KDUMP_STAMPDIR/vmcore.$KDUMP_STAMP"
    cp $vmcore_file $KDUMP_CORETEMP                         # fall back to cp of the raw vmcore elf file
    ERROR=$?
fi

How it works

kexec implements loading of the crash kernel. Its core has two parts:

  • kexec_file_load/kexec_load load the backup kernel and initrd into memory ahead of time.

  • __crash_kexec jumps into the backup kernel when a fault occurs.

kdump itself mainly copies the vmcore file from memory to disk, with some slimming along the way.

We will not dissect kexec's kernel loading and address translation, nor kdump's copy-and-trim flow, in full detail here. Instead we focus on two key files, /proc/kcore and /proc/vmcore:

  • /proc/kcore, in the normal kernel, presents the normal kernel's own memory as an elf core file, so gdb can debug the live system. Because the system is debugging itself, there are some limitations.

  • /proc/vmcore, in the crash kernel, presents the normal kernel's memory as an elf core file. Since the normal kernel has stopped running by then, debugging is unrestricted. The dump file kdump finally produces is simply /proc/vmcore copied from memory to disk, possibly with some trimming and compression.

These two files, /proc/kcore and /proc/vmcore, are the heart of the whole mechanism, so we will analyze their implementations in depth.

The elf core file format

We are familiar with three forms of the ELF file format: .o files (ET_REL), executables (ET_EXEC), and .so files (ET_DYN). But its fourth form, the core file (ET_CORE), has remained mysterious, and almost magical: load it into gdb and the crash scene is restored.

Here is the rough layout of an elf core file:

(figure: elf core file layout)

An elf core file cares only about runtime state, so it contains only segment information, no section information. It mainly carries two kinds of segments:

  1. PT_LOAD. Each such segment records one memory region, together with that region's physical address, virtual address, and length.

  2. PT_NOTE. This is the segment type specific to elf core files, and it records the key information needed to interpret the memory regions. The PT_NOTE segment is divided into multiple elf_note structures: NT_PRSTATUS entries record the CPU register state before reset, NT_TASKSTRUCT records the process's task_struct, and, most crucially, a custom type-0 VMCOREINFO note records key information about the kernel.

Most of an elf core file consists of PT_LOAD segments recording memory contents, but the keys for making sense of that memory are recorded in the PT_NOTE segment.
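To make the layout concrete, here is a minimal Python sketch (illustrative only, not from the kernel) that packs and unpacks the 64-byte Elf64_Ehdr and 56-byte Elf64_Phdr structures exactly as they appear in a vmcore. The field values are assumptions chosen to match the x86_64 example below.

```python
import struct

PT_LOAD, PT_NOTE = 1, 4

def build_ehdr(e_phnum):
    """Pack a minimal little-endian ELF64 header of type ET_CORE (e_type == 4)."""
    e_ident = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8  # ELF64, LSB, v1
    return e_ident + struct.pack("<HHIQQQIHHHHHH",
        4,        # e_type: ET_CORE
        62,       # e_machine: EM_X86_64
        1,        # e_version
        0,        # e_entry
        64,       # e_phoff: program headers follow the 64-byte ELF header
        0,        # e_shoff: core files carry no sections
        0,        # e_flags
        64,       # e_ehsize
        56,       # e_phentsize
        e_phnum,  # e_phnum
        0, 0, 0)  # e_shentsize, e_shnum, e_shstrndx

def pack_phdr(p_type, p_offset, p_vaddr, p_paddr, p_filesz, p_flags=0):
    """Pack one Elf64_Phdr; p_memsz mirrors p_filesz, p_align left at 0."""
    return struct.pack("<IIQQQQQQ", p_type, p_flags, p_offset,
                       p_vaddr, p_paddr, p_filesz, p_filesz, 0)

def parse_phdr(buf, index):
    """Unpack program header `index`, assuming e_phoff=64, e_phentsize=56."""
    off = 64 + index * 56
    p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align = \
        struct.unpack_from("<IIQQQQQQ", buf, off)
    return {"type": p_type, "offset": p_offset, "vaddr": p_vaddr,
            "paddr": p_paddr, "filesz": p_filesz}
```

Feeding these packers the values from the readelf output below reproduces the same header table byte for byte.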

Let's look at a concrete vmcore file:

  1. First, inspect the elf header:

$ sudo readelf -e vmcore.202107011132
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              CORE (Core file)   // file type is ET_CORE
  Machine:                           Advanced Micro Devices X86-64
  Version:                           0x1
  Entry point address:               0x0
  Start of program headers:          64 (bytes into file)
  Start of section headers:          0 (bytes into file)
  Flags:                             0x0
  Size of this header:               64 (bytes)
  Size of program headers:           56 (bytes)
  Number of program headers:         6
  Size of section headers:           0 (bytes)
  Number of section headers:         0
  Section header string table index: 0

There are no sections in this file.

// the file contains the two segment types, PT_NOTE and PT_LOAD
Program Headers:
  Type  Offset             VirtAddr           PhysAddr
        FileSiz            MemSiz              Flags  Align
  NOTE  0x0000000000001000 0x0000000000000000 0x0000000000000000
        0x0000000000001318 0x0000000000001318         0x0
  LOAD  0x0000000000003000 0xffffffffb7200000 0x0000000006c00000
        0x000000000202c000 0x000000000202c000  RWE    0x0
  LOAD  0x000000000202f000 0xffff903a00001000 0x0000000000001000
        0x000000000009d800 0x000000000009d800  RWE    0x0
  LOAD  0x00000000020cd000 0xffff903a00100000 0x0000000000100000
        0x0000000072f00000 0x0000000072f00000  RWE    0x0
  LOAD  0x0000000074fcd000 0xffff903a7f000000 0x000000007f000000
        0x0000000000ee0000 0x0000000000ee0000  RWE    0x0
  LOAD  0x0000000075ead000 0xffff903a7ff00000 0x000000007ff00000
        0x0000000000100000 0x0000000000100000  RWE    0x0

  2. Then look at the concrete contents of the PT_NOTE segment:

$ sudo readelf -n vmcore.202107011132
Displaying notes found at file offset 0x00001000 with length 0x00001318:
  Owner       Data size   Description
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)  // the system has 8 CPUs, so 8 copies of prstatus are saved
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  VMCOREINFO  0x000007dd  Unknown note type: (0x00000000)   // the custom VMCOREINFO note
   description data: 4f 53 52 45 4c 45 41 53 45 3d 35 2e 38 2e 31 38 2b 0a 50 41 47 45 53 49 5a 45 3d 34 30 39 36 0a 53 59 4d 42 4f 4c 28 69 6e 69 74 5f 75 74 73 5f 6e 73 29 3d 66 66 66 66 66 66 66 66 ...

  3. Finally, decode the VMCOREINFO payload. Converting the hex byte stream after `description data` yields:

OSRELEASE=5.8.0-43-generic
PAGESIZE=4096
SYMBOL(init_uts_ns)=ffffffffa5014620
SYMBOL(node_online_map)=ffffffffa5276720
SYMBOL(swapper_pg_dir)=ffffffffa500a000
SYMBOL(_stext)=ffffffffa3a00000
SYMBOL(vmap_area_list)=ffffffffa50f2560
SYMBOL(mem_section)=ffff91673ffd2000
LENGTH(mem_section)=2048
SIZE(mem_section)=16
OFFSET(mem_section.section_mem_map)=0
SIZE(page)=64
SIZE(pglist_data)=171968
SIZE(zone)=1472
SIZE(free_area)=88
SIZE(list_head)=16
SIZE(nodemask_t)=128
OFFSET(page.flags)=0
OFFSET(page._refcount)=52
OFFSET(page.mapping)=24
OFFSET(page.lru)=8
OFFSET(page._mapcount)=48
OFFSET(page.private)=40
OFFSET(page.compound_dtor)=16
OFFSET(page.compound_order)=17
OFFSET(page.compound_head)=8
OFFSET(pglist_data.node_zones)=0
OFFSET(pglist_data.nr_zones)=171232
OFFSET(pglist_data.node_start_pfn)=171240
OFFSET(pglist_data.node_spanned_pages)=171256
OFFSET(pglist_data.node_id)=171264
OFFSET(zone.free_area)=192
OFFSET(zone.vm_stat)=1280
OFFSET(zone.spanned_pages)=120
OFFSET(free_area.free_list)=0
OFFSET(list_head.next)=0
OFFSET(list_head.prev)=8
OFFSET(vmap_area.va_start)=0
OFFSET(vmap_area.list)=40
LENGTH(zone.free_area)=11
SYMBOL(log_buf)=ffffffffa506a6e0
SYMBOL(log_buf_len)=ffffffffa506a6dc
SYMBOL(log_first_idx)=ffffffffa55f55d8
SYMBOL(clear_idx)=ffffffffa55f55a4
SYMBOL(log_next_idx)=ffffffffa55f55c8
SIZE(printk_log)=16
OFFSET(printk_log.ts_nsec)=0
OFFSET(printk_log.len)=8
OFFSET(printk_log.text_len)=10
OFFSET(printk_log.dict_len)=12
LENGTH(free_area.free_list)=5
NUMBER(NR_FREE_PAGES)=0
NUMBER(PG_lru)=4
NUMBER(PG_private)=13
NUMBER(PG_swapcache)=10
NUMBER(PG_swapbacked)=19
NUMBER(PG_slab)=9
NUMBER(PG_hwpoison)=23
NUMBER(PG_head_mask)=65536
NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE)=-129
NUMBER(HUGETLB_PAGE_DTOR)=2
NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE)=-257
NUMBER(phys_base)=1073741824
SYMBOL(init_top_pgt)=ffffffffa500a000
NUMBER(pgtable_l5_enabled)=0
SYMBOL(node_data)=ffffffffa5271da0
LENGTH(node_data)=1024
KERNELOFFSET=22a00000
NUMBER(KERNEL_IMAGE_SIZE)=1073741824
NUMBER(sme_mask)=0
CRASHTIME=1623937823
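Since VMCOREINFO is just newline-separated KEY=value text, tooling can recover it with a few lines of parsing. Here is a hedged Python sketch (my own helper, not part of any kdump tool); it assumes SYMBOL(...) and KERNELOFFSET values are bare hex while other numeric values are decimal, which matches the listing above.

```python
def parse_vmcoreinfo(text):
    """Parse VMCOREINFO 'KEY=value' lines into a dict.

    SYMBOL(...) entries and KERNELOFFSET are bare hex kernel addresses;
    other values are decimal numbers or plain strings (e.g. OSRELEASE).
    """
    info = {}
    for line in text.strip().splitlines():
        key, _, val = line.partition("=")
        if key.startswith("SYMBOL(") or key == "KERNELOFFSET":
            info[key] = int(val, 16)
        else:
            try:
                info[key] = int(val, 10)   # handles negatives like -129 too
            except ValueError:
                info[key] = val            # e.g. OSRELEASE=5.8.0-43-generic
    return info
```

This is essentially what crash/makedumpfile do internally to locate symbols and structure offsets in a kernel they were not compiled against.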

/proc/kcore

People cleaning up disk space often stumble over /proc/kcore, because its displayed size is enormous, sometimes as large as 128T. In fact it occupies no disk space at all: it is a file in an in-memory filesystem. Nor does it occupy much memory. Apart from a few control headers, the large regions are simulated, read from the corresponding memory only when a user actually reads the file.

The previous section explained that /proc/kcore presents the current system's memory as an elf core file, which gdb can use to debug the live system. This section looks at how that simulation works.

Preparing the data

Initialization is the process of building the kclist_head list, where each member corresponds to one PT_LOAD segment. At read time, these members are then presented as elf PT_LOAD segments.

static int __init proc_kcore_init(void)
{
    /* (1) create the /proc/kcore file */
    proc_root_kcore = proc_create("kcore", S_IRUSR, NULL, &kcore_proc_ops);
    if (!proc_root_kcore) {
        pr_err("couldn't create /proc/kcore\n");
        return 0; /* Always returns 0. */
    }
    /* Store text area if it's special */
    /* (2) add the kernel text segment (_text) to the kclist_head list;
     * each member of kclist_head corresponds to one PT_LOAD segment */
    proc_kcore_text_init();
    /* Store vmalloc area */
    /* (3) add the VMALLOC area to kclist_head */
    kclist_add(&kcore_vmalloc, (void *)VMALLOC_START,
           VMALLOC_END - VMALLOC_START, KCORE_VMALLOC);
    /* (4) add the MODULES_VADDR module area to kclist_head */
    add_modules_range();
    /* Store direct-map area from physical memory map */
    /* (5) walk the system memory map and add the valid RAM regions */
    kcore_update_ram();
    register_hotmemory_notifier(&kcore_callback_nb);
    return 0;
}

static int kcore_update_ram(void)
{
    LIST_HEAD(list);
    LIST_HEAD(garbage);
    int nphdr;
    size_t phdrs_len, notes_len, data_offset;
    struct kcore_list *tmp, *pos;
    int ret = 0;

    down_write(&kclist_lock);
    if (!xchg(&kcore_need_update, 0))
        goto out;

    /* (5.1) walk the system memory map and add regions matching
     * `IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY` to the local list */
    ret = kcore_ram_list(&list);
    if (ret) {
        /* Couldn't get the RAM list, try again next time. */
        WRITE_ONCE(kcore_need_update, 1);
        list_splice_tail(&list, &garbage);
        goto out;
    }

    /* (5.2) drop the old KCORE_RAM/KCORE_VMEMMAP entries from kclist_head,
     * since the new list already covers them */
    list_for_each_entry_safe(pos, tmp, &kclist_head, list) {
        if (pos->type == KCORE_RAM || pos->type == KCORE_VMEMMAP)
            list_move(&pos->list, &garbage);
    }
    /* (5.3) splice the new list onto the existing kclist_head */
    list_splice_tail(&list, &kclist_head);

    /* (5.4) update the member count of kclist_head (one member per PT_LOAD
     * segment), compute the PT_NOTE segment length, and compute the size of
     * the /proc/kcore file. That size is virtual; at most it spans the whole
     * virtual address range. */
    proc_root_kcore->size = get_kcore_size(&nphdr, &phdrs_len, &notes_len,
                           &data_offset);

out:
    up_write(&kclist_lock);
    /* (5.5) free the members removed above */
    list_for_each_entry_safe(pos, tmp, &garbage, list) {
        list_del(&pos->list);
        kfree(pos);
    }
    return ret;
}

A key step is walking the system memory map; the relevant code:

kcore_ram_list → walk_system_ram_range:

int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
              void *arg, int (*func)(unsigned long, unsigned long, void *))
{
    resource_size_t start, end;
    unsigned long flags;
    struct resource res;
    unsigned long pfn, end_pfn;
    int ret = -EINVAL;

    start = (u64) start_pfn << PAGE_SHIFT;
    end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
    /* (5.1.1) look up resource ranges matching
     * IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY in the iomem_resource tree */
    flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
    while (start < end &&
           !find_next_iomem_res(start, end, flags, IORES_DESC_NONE,
                    false, &res)) {
        pfn = PFN_UP(res.start);
        end_pfn = PFN_DOWN(res.end + 1);
        if (end_pfn > pfn)
            ret = (*func)(pfn, end_pfn - pfn, arg);
        if (ret)
            break;
        start = res.end + 1;
    }
    return ret;
}

This is effectively equivalent to:

$ sudo cat /proc/iomem | grep "System RAM"
00001000-0009e7ff : System RAM
00100000-7fedffff : System RAM
7ff00000-7fffffff : System RAM
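The "System RAM" walk above can be sketched in user space too. This is an illustrative Python helper of mine (not a kernel API): it parses /proc/iomem-style text into (start, end) physical ranges, mirroring what kcore_ram_list collects from the iomem_resource tree.

```python
def system_ram_ranges(iomem_text):
    """Return (start, end) physical address pairs for lines tagged
    'System RAM', mimicking the ranges walk_system_ram_range() visits."""
    ranges = []
    for line in iomem_text.splitlines():
        head, _, desc = line.partition(" : ")
        if desc.strip() == "System RAM":
            lo, hi = head.strip().split("-")
            ranges.append((int(lo, 16), int(hi, 16)))
    return ranges
```

Note that on a real system reading /proc/iomem addresses requires root; unprivileged reads show zeroed ranges.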

Reading the elf core

With the data prepared, the content is presented in elf core format at the moment /proc/kcore is read.

static const struct proc_ops kcore_proc_ops = {
    .proc_read    = read_kcore,
    .proc_open    = open_kcore,
    .proc_release = release_kcore,
    .proc_lseek   = default_llseek,
};
↓
static ssize_t
read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
{
    char *buf = file->private_data;
    size_t phdrs_offset, notes_offset, data_offset;
    size_t phdrs_len, notes_len;
    struct kcore_list *m;
    size_t tsz;
    int nphdr;
    unsigned long start;
    size_t orig_buflen = buflen;
    int ret = 0;

    down_read(&kclist_lock);

    /* (1) get the PT_LOAD segment count, PT_NOTE segment length, etc.;
     * the elf core file is constructed on the fly from here on */
    get_kcore_size(&nphdr, &phdrs_len, &notes_len, &data_offset);
    phdrs_offset = sizeof(struct elfhdr);
    notes_offset = phdrs_offset + phdrs_len;

    /* ELF file header. */
    /* (2) construct the ELF file header and copy it to the user buffer */
    if (buflen && *fpos < sizeof(struct elfhdr)) {
        struct elfhdr ehdr = {
            .e_ident = {
                [EI_MAG0] = ELFMAG0,
                [EI_MAG1] = ELFMAG1,
                [EI_MAG2] = ELFMAG2,
                [EI_MAG3] = ELFMAG3,
                [EI_CLASS] = ELF_CLASS,
                [EI_DATA] = ELF_DATA,
                [EI_VERSION] = EV_CURRENT,
                [EI_OSABI] = ELF_OSABI,
            },
            .e_type = ET_CORE,
            .e_machine = ELF_ARCH,
            .e_version = EV_CURRENT,
            .e_phoff = sizeof(struct elfhdr),
            .e_flags = ELF_CORE_EFLAGS,
            .e_ehsize = sizeof(struct elfhdr),
            .e_phentsize = sizeof(struct elf_phdr),
            .e_phnum = nphdr,
        };

        tsz = min_t(size_t, buflen, sizeof(struct elfhdr) - *fpos);
        if (copy_to_user(buffer, (char *)&ehdr + *fpos, tsz)) {
            ret = -EFAULT;
            goto out;
        }

        buffer += tsz;
        buflen -= tsz;
        *fpos += tsz;
    }

    /* ELF program headers. */
    /* (3) construct the ELF program headers and copy them to user space */
    if (buflen && *fpos < phdrs_offset + phdrs_len) {
        struct elf_phdr *phdrs, *phdr;

        phdrs = kzalloc(phdrs_len, GFP_KERNEL);
        if (!phdrs) {
            ret = -ENOMEM;
            goto out;
        }

        /* (3.1) the PT_NOTE segment needs no physical or virtual address */
        phdrs[0].p_type = PT_NOTE;
        phdrs[0].p_offset = notes_offset;
        phdrs[0].p_filesz = notes_len;

        phdr = &phdrs[1];
        /* (3.2) fill in the physical address, virtual address, and length
         * of each PT_LOAD segment one by one */
        list_for_each_entry(m, &kclist_head, list) {
            phdr->p_type = PT_LOAD;
            phdr->p_flags = PF_R | PF_W | PF_X;
            phdr->p_offset = kc_vaddr_to_offset(m->addr) + data_offset;
            if (m->type == KCORE_REMAP)
                phdr->p_vaddr = (size_t)m->vaddr;
            else
                phdr->p_vaddr = (size_t)m->addr;
            if (m->type == KCORE_RAM || m->type == KCORE_REMAP)
                phdr->p_paddr = __pa(m->addr);
            else if (m->type == KCORE_TEXT)
                phdr->p_paddr = __pa_symbol(m->addr);
            else
                phdr->p_paddr = (elf_addr_t)-1;
            phdr->p_filesz = phdr->p_memsz = m->size;
            phdr->p_align = PAGE_SIZE;
            phdr++;
        }

        tsz = min_t(size_t, buflen, phdrs_offset + phdrs_len - *fpos);
        if (copy_to_user(buffer, (char *)phdrs + *fpos - phdrs_offset,
                 tsz)) {
            kfree(phdrs);
            ret = -EFAULT;
            goto out;
        }
        kfree(phdrs);

        buffer += tsz;
        buflen -= tsz;
        *fpos += tsz;
    }

    /* ELF note segment. */
    /* (4) construct the PT_NOTE segment and copy it to user space */
    if (buflen && *fpos < notes_offset + notes_len) {
        struct elf_prstatus prstatus = {};
        struct elf_prpsinfo prpsinfo = {
            .pr_sname = 'R',
            .pr_fname = "vmlinux",
        };
        char *notes;
        size_t i = 0;

        strlcpy(prpsinfo.pr_psargs, saved_command_line,
            sizeof(prpsinfo.pr_psargs));

        notes = kzalloc(notes_len, GFP_KERNEL);
        if (!notes) {
            ret = -ENOMEM;
            goto out;
        }

        /* (4.1) append NT_PRSTATUS */
        append_kcore_note(notes, &i, CORE_STR, NT_PRSTATUS, &prstatus,
                  sizeof(prstatus));
        /* (4.2) append NT_PRPSINFO */
        append_kcore_note(notes, &i, CORE_STR, NT_PRPSINFO, &prpsinfo,
                  sizeof(prpsinfo));
        /* (4.3) append NT_TASKSTRUCT */
        append_kcore_note(notes, &i, CORE_STR, NT_TASKSTRUCT, current,
                  arch_task_struct_size);
        /*
         * vmcoreinfo_size is mostly constant after init time, but it
         * can be changed by crash_save_vmcoreinfo(). Racing here with a
         * panic on another CPU before the machine goes down is insanely
         * unlikely, but it's better to not leave potential buffer
         * overflows lying around, regardless.
         */
        /* (4.4) append VMCOREINFO */
        append_kcore_note(notes, &i, VMCOREINFO_NOTE_NAME, 0,
                  vmcoreinfo_data,
                  min(vmcoreinfo_size, notes_len - i));

        tsz = min_t(size_t, buflen, notes_offset + notes_len - *fpos);
        if (copy_to_user(buffer, notes + *fpos - notes_offset, tsz)) {
            kfree(notes);
            ret = -EFAULT;
            goto out;
        }
        kfree(notes);

        buffer += tsz;
        buflen -= tsz;
        *fpos += tsz;
    }

    /*
     * Check to see if our file offset matches with any of
     * the addresses in the elf_phdr on our list.
     */
    start = kc_offset_to_vaddr(*fpos - data_offset);
    if ((tsz = (PAGE_SIZE - (start & ~PAGE_MASK))) > buflen)
        tsz = buflen;

    m = NULL;
    /* (5) produce the PT_LOAD segment data and copy it to user space */
    while (buflen) {
        /*
         * If this is the first iteration or the address is not within
         * the previous entry, search for a matching entry.
         */
        if (!m || start < m->addr || start >= m->addr + m->size) {
            list_for_each_entry(m, &kclist_head, list) {
                if (start >= m->addr &&
                    start < m->addr + m->size)
                    break;
            }
        }

        if (&m->list == &kclist_head) {
            if (clear_user(buffer, tsz)) {
                ret = -EFAULT;
                goto out;
            }
            m = NULL;    /* skip the list anchor */
        } else if (!pfn_is_ram(__pa(start) >> PAGE_SHIFT)) {
            if (clear_user(buffer, tsz)) {
                ret = -EFAULT;
                goto out;
            }
        } else if (m->type == KCORE_VMALLOC) {
            vread(buf, (char *)start, tsz);
            /* we have to zero-fill user buffer even if no read */
            if (copy_to_user(buffer, buf, tsz)) {
                ret = -EFAULT;
                goto out;
            }
        } else if (m->type == KCORE_USER) {
            /* User page is handled prior to normal kernel page: */
            if (copy_to_user(buffer, (char *)start, tsz)) {
                ret = -EFAULT;
                goto out;
            }
        } else {
            if (kern_addr_valid(start)) {
                /*
                 * Using bounce buffer to bypass the
                 * hardened user copy kernel text checks.
                 */
                if (copy_from_kernel_nofault(buf, (void *)start,
                        tsz)) {
                    if (clear_user(buffer, tsz)) {
                        ret = -EFAULT;
                        goto out;
                    }
                } else {
                    if (copy_to_user(buffer, buf, tsz)) {
                        ret = -EFAULT;
                        goto out;
                    }
                }
            } else {
                if (clear_user(buffer, tsz)) {
                    ret = -EFAULT;
                    goto out;
                }
            }
        }
        buflen -= tsz;
        *fpos += tsz;
        buffer += tsz;
        start += tsz;
        tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
    }

out:
    up_read(&kclist_lock);
    if (ret)
        return ret;
    return orig_buflen - buflen;
}

/proc/vmcore

/proc/vmcore, running in the crash kernel, presents the normal kernel's memory as an elf core file.

Its file format construction is similar to /proc/kcore in the previous section. The difference is that its data preparation is split across two kernels:

  • The normal kernel prepares the elf header in advance.

  • The crash kernel wraps the elf header handed over to it into the /proc/vmcore file and saves it to disk.

Let's analyze the process in detail.

Preparing the elf header (runs in the normal kernel)

When a fault occurs the system state is very unstable and time is short, so the normal kernel prepares the elf header data of the /proc/vmcore file as early as possible, even though /proc/vmcore itself is never exposed by the normal kernel, only by the crash kernel.

When kexec-tools loads the crash kernel via the kexec_file_load system call, most of the data needed by /proc/vmcore's elf header is prepared along the way:

kexec_file_load → kimage_file_alloc_init → kimage_file_prepare_segments
  → arch_kexec_kernel_image_load → image->fops->load → kexec_bzImage64_ops.load
  → bzImage64_load → crash_load_segments → prepare_elf_headers
  → crash_prepare_elf64_headers:

static int prepare_elf_headers(struct kimage *image, void **addr,
                   unsigned long *sz)
{
    struct crash_mem *cmem;
    int ret;

    /* (1) walk the system memory map to count the valid memory regions,
     * and allocate the cmem buffer accordingly */
    cmem = fill_up_crash_elf_data();
    if (!cmem)
        return -ENOMEM;

    /* (2) walk the system memory map again, recording the valid memory
     * regions into cmem */
    ret = walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callback);
    if (ret)
        goto out;

    /* Exclude unwanted mem ranges */
    /* (3) exclude some memory regions that will not be used */
    ret = elf_header_exclude_ranges(cmem);
    if (ret)
        goto out;

    /* By default prepare 64bit headers */
    /* (4) start constructing the elf header */
    ret = crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz);

out:
    vfree(cmem);
    return ret;
}

int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
                void **addr, unsigned long *sz)
{
    Elf64_Ehdr *ehdr;
    Elf64_Phdr *phdr;
    unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
    unsigned char *buf;
    unsigned int cpu, i;
    unsigned long long notes_addr;
    unsigned long mstart, mend;

    /* extra phdr for vmcoreinfo elf note */
    nr_phdr = nr_cpus + 1;
    nr_phdr += mem->nr_ranges;

    /*
     * kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
     * area (for example, ffffffff80000000 - ffffffffa0000000 on x86_64).
     * I think this is required by tools like gdb. So same physical
     * memory will be mapped in two elf headers. One will contain kernel
     * text virtual addresses and other will have __va(physical) addresses.
     */
    nr_phdr++;
    elf_sz = sizeof(Elf64_Ehdr) + nr_phdr * sizeof(Elf64_Phdr);
    elf_sz = ALIGN(elf_sz, ELF_CORE_HEADER_ALIGN);

    buf = vzalloc(elf_sz);
    if (!buf)
        return -ENOMEM;

    /* (4.1) construct the ELF file header */
    ehdr = (Elf64_Ehdr *)buf;
    phdr = (Elf64_Phdr *)(ehdr + 1);
    memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
    ehdr->e_ident[EI_CLASS] = ELFCLASS64;
    ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
    ehdr->e_ident[EI_VERSION] = EV_CURRENT;
    ehdr->e_ident[EI_OSABI] = ELF_OSABI;
    memset(ehdr->e_ident + EI_PAD, 0, EI_NIDENT - EI_PAD);
    ehdr->e_type = ET_CORE;
    ehdr->e_machine = ELF_ARCH;
    ehdr->e_version = EV_CURRENT;
    ehdr->e_phoff = sizeof(Elf64_Ehdr);
    ehdr->e_ehsize = sizeof(Elf64_Ehdr);
    ehdr->e_phentsize = sizeof(Elf64_Phdr);

    /* Prepare one phdr of type PT_NOTE for each present cpu */
    /* (4.2) construct the program headers: one PT_NOTE segment per cpu.
     * The segment data lives in the per_cpu_ptr(crash_notes, cpu) variable.
     * Note that crash_notes holds no data yet; only its physical address is
     * recorded here. Real data is stored into it only after a crash occurs. */
    for_each_present_cpu(cpu) {
        phdr->p_type = PT_NOTE;
        notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
        phdr->p_offset = phdr->p_paddr = notes_addr;
        phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
        (ehdr->e_phnum)++;
        phdr++;
    }

    /* Prepare one PT_NOTE header for vmcoreinfo */
    /* (4.3) construct a separate PT_NOTE segment for VMCOREINFO.
     * Again, only the physical address of vmcoreinfo_note is recorded here;
     * the actual data is also filled in in several stages. */
    phdr->p_type = PT_NOTE;
    phdr->p_offset = phdr->p_paddr = paddr_vmcoreinfo_note();
    phdr->p_filesz = phdr->p_memsz = VMCOREINFO_NOTE_SIZE;
    (ehdr->e_phnum)++;
    phdr++;

    /* Prepare PT_LOAD type program header for kernel text region */
    /* (4.4) construct the PT_LOAD segment for the kernel text region */
    if (kernel_map) {
        phdr->p_type = PT_LOAD;
        phdr->p_flags = PF_R|PF_W|PF_X;
        phdr->p_vaddr = (unsigned long) _text;
        phdr->p_filesz = phdr->p_memsz = _end - _text;
        phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
        ehdr->e_phnum++;
        phdr++;
    }

    /* Go through all the ranges in mem->ranges and prepare phdr */
    /* (4.5) walk cmem, creating a PT_LOAD segment for each valid memory range */
    for (i = 0; i < mem->nr_ranges; i++) {
        mstart = mem->ranges[i].start;
        mend = mem->ranges[i].end;

        phdr->p_type = PT_LOAD;
        phdr->p_flags = PF_R|PF_W|PF_X;
        phdr->p_offset = mstart;

        phdr->p_paddr = mstart;
        phdr->p_vaddr = (unsigned long) __va(mstart);
        phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
        phdr->p_align = 0;
        ehdr->e_phnum++;
        phdr++;
        pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
             phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
             ehdr->e_phnum, phdr->p_offset);
    }

    *addr = buf;
    *sz = elf_sz;
    return 0;
}

1. Updating the crash_notes data

Actual CPU register data is saved into crash_notes only after a panic happens. The update path:

__crash_kexec → machine_crash_shutdown → crash_save_cpu
ipi_cpu_crash_stop → crash_save_cpu:

void crash_save_cpu(struct pt_regs *regs, int cpu)
{
    struct elf_prstatus prstatus;
    u32 *buf;

    if ((cpu < 0) || (cpu >= nr_cpu_ids))
        return;

    /* Using ELF notes here is opportunistic.
     * I need a well defined structure format
     * for the data I pass, and I need tags
     * on the data to indicate what information I have
     * squirrelled away. ELF notes happen to provide
     * all of that, so there is no need to invent something new.
     */
    buf = (u32 *)per_cpu_ptr(crash_notes, cpu);
    if (!buf)
        return;
    /* (1) zero the structure */
    memset(&prstatus, 0, sizeof(prstatus));
    /* (2) save the pid */
    prstatus.pr_pid = current->pid;
    /* (3) save the registers */
    elf_core_copy_kernel_regs(&prstatus.pr_reg, regs);
    /* (4) store it into crash_notes in elf_note format */
    buf = append_elf_note(buf, KEXEC_CORE_NOTE_NAME, NT_PRSTATUS,
                  &prstatus, sizeof(prstatus));
    /* (5) append an all-zero elf_note as the terminator */
    final_note(buf);
}
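The elf_note record layout that append_elf_note() and final_note() produce is simple enough to sketch byte for byte. Here is an illustrative Python model of mine (matching the generic ELF note format: a 12-byte header of namesz/descsz/type, then the NUL-terminated name and the descriptor, each padded to a 4-byte boundary):

```python
import struct

NT_PRSTATUS = 1  # note type used for per-CPU register state

def append_elf_note(buf, name, note_type, desc):
    """Append one ELF note record to buf and return the new buffer."""
    name_b = name.encode() + b"\x00"          # namesz counts the NUL
    hdr = struct.pack("<III", len(name_b), len(desc), note_type)
    pad4 = lambda b: b + b"\x00" * (-len(b) % 4)  # pad to 4-byte boundary
    return buf + hdr + pad4(name_b) + pad4(desc)

def final_note(buf):
    """Terminate the note buffer with an all-zero note header."""
    return buf + struct.pack("<III", 0, 0, 0)
```

For a "CORE" owner the name field occupies 8 bytes after padding, which is why readelf shows each prstatus note starting at a nicely aligned offset.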

2. Updating the vmcoreinfo_note data

vmcoreinfo_note is updated in two parts:

  • 2.1 The first part prepares most of the data at system initialization:

static int __init crash_save_vmcoreinfo_init(void)
{
    /* (1.1) allocate the vmcoreinfo_data buffer */
    vmcoreinfo_data = (unsigned char *)get_zeroed_page(GFP_KERNEL);
    if (!vmcoreinfo_data) {
        pr_warn("Memory allocation for vmcoreinfo_data failed\n");
        return -ENOMEM;
    }

    /* (1.2) allocate the vmcoreinfo_note buffer */
    vmcoreinfo_note = alloc_pages_exact(VMCOREINFO_NOTE_SIZE,
                        GFP_KERNEL | __GFP_ZERO);
    if (!vmcoreinfo_note) {
        free_page((unsigned long)vmcoreinfo_data);
        vmcoreinfo_data = NULL;
        pr_warn("Memory allocation for vmcoreinfo_note failed\n");
        return -ENOMEM;
    }

    /* (2.1) record the system's key information into vmcoreinfo_data as
     * strings, via the VMCOREINFO_xxx family of macros */
    VMCOREINFO_OSRELEASE(init_uts_ns.name.release);
    VMCOREINFO_PAGESIZE(PAGE_SIZE);

    VMCOREINFO_SYMBOL(init_uts_ns);
    VMCOREINFO_SYMBOL(node_online_map);
#ifdef CONFIG_MMU
    VMCOREINFO_SYMBOL_ARRAY(swapper_pg_dir);
#endif
    VMCOREINFO_SYMBOL(_stext);
    VMCOREINFO_SYMBOL(vmap_area_list);

#ifndef CONFIG_NEED_MULTIPLE_NODES
    VMCOREINFO_SYMBOL(mem_map);
    VMCOREINFO_SYMBOL(contig_page_data);
#endif
#ifdef CONFIG_SPARSEMEM
    VMCOREINFO_SYMBOL_ARRAY(mem_section);
    VMCOREINFO_LENGTH(mem_section, NR_SECTION_ROOTS);
    VMCOREINFO_STRUCT_SIZE(mem_section);
    VMCOREINFO_OFFSET(mem_section, section_mem_map);
#endif
    VMCOREINFO_STRUCT_SIZE(page);
    VMCOREINFO_STRUCT_SIZE(pglist_data);
    VMCOREINFO_STRUCT_SIZE(zone);
    VMCOREINFO_STRUCT_SIZE(free_area);
    VMCOREINFO_STRUCT_SIZE(list_head);
    VMCOREINFO_SIZE(nodemask_t);
    VMCOREINFO_OFFSET(page, flags);
    VMCOREINFO_OFFSET(page, _refcount);
    VMCOREINFO_OFFSET(page, mapping);
    VMCOREINFO_OFFSET(page, lru);
    VMCOREINFO_OFFSET(page, _mapcount);
    VMCOREINFO_OFFSET(page, private);
    VMCOREINFO_OFFSET(page, compound_dtor);
    VMCOREINFO_OFFSET(page, compound_order);
    VMCOREINFO_OFFSET(page, compound_head);
    VMCOREINFO_OFFSET(pglist_data, node_zones);
    VMCOREINFO_OFFSET(pglist_data, nr_zones);
#ifdef CONFIG_FLAT_NODE_MEM_MAP
    VMCOREINFO_OFFSET(pglist_data, node_mem_map);
#endif
    VMCOREINFO_OFFSET(pglist_data, node_start_pfn);
    VMCOREINFO_OFFSET(pglist_data, node_spanned_pages);
    VMCOREINFO_OFFSET(pglist_data, node_id);
    VMCOREINFO_OFFSET(zone, free_area);
    VMCOREINFO_OFFSET(zone, vm_stat);
    VMCOREINFO_OFFSET(zone, spanned_pages);
    VMCOREINFO_OFFSET(free_area, free_list);
    VMCOREINFO_OFFSET(list_head, next);
    VMCOREINFO_OFFSET(list_head, prev);
    VMCOREINFO_OFFSET(vmap_area, va_start);
    VMCOREINFO_OFFSET(vmap_area, list);
    VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER);
    log_buf_vmcoreinfo_setup();
    VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES);
    VMCOREINFO_NUMBER(NR_FREE_PAGES);
    VMCOREINFO_NUMBER(PG_lru);
    VMCOREINFO_NUMBER(PG_private);
    VMCOREINFO_NUMBER(PG_swapcache);
    VMCOREINFO_NUMBER(PG_swapbacked);
    VMCOREINFO_NUMBER(PG_slab);
#ifdef CONFIG_MEMORY_FAILURE
    VMCOREINFO_NUMBER(PG_hwpoison);
#endif
    VMCOREINFO_NUMBER(PG_head_mask);
#define PAGE_BUDDY_MAPCOUNT_VALUE    (~PG_buddy)
    VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE);
#ifdef CONFIG_HUGETLB_PAGE
    VMCOREINFO_NUMBER(HUGETLB_PAGE_DTOR);
#define PAGE_OFFLINE_MAPCOUNT_VALUE    (~PG_offline)
    VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE);
#endif

    /* (2.2) add some architecture-specific vmcoreinfo */
    arch_crash_save_vmcoreinfo();
    /* (3) store the data collected in vmcoreinfo_data into vmcoreinfo_note,
     * in elf_note format */
    update_vmcoreinfo_note();

    return 0;
}

  • 2.2 The second part appends data after a panic occurs:

__crash_kexec → crash_save_vmcoreinfo:

void crash_save_vmcoreinfo(void)
{
    if (!vmcoreinfo_note)
        return;

    /* Use the safe copy to generate vmcoreinfo note if have */
    if (vmcoreinfo_data_safecopy)
        vmcoreinfo_data = vmcoreinfo_data_safecopy;

    /* (1) append the "CRASHTIME=xxx" entry */
    vmcoreinfo_append_str("CRASHTIME=%lld\n", ktime_get_real_seconds());
    update_vmcoreinfo_note();
}

vmcoreinfo corresponds to the data that readelf -n xxx reads out:

$ readelf -n vmcore.202106170650
Displaying notes found at file offset 0x00001000 with length 0x00000ac8:
  Owner       Data size   Description
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  CORE        0x00000150  NT_PRSTATUS (prstatus structure)
  VMCOREINFO  0x000007e6  Unknown note type: (0x00000000)
   description data: 4f 53 52 45 4c 45 41 53 45 3d 35 2e 38 2e 30
// the description data decodes to ascii:
OSRELEASE=5.8.0-43-generic
PAGESIZE=4096
SYMBOL(init_uts_ns)=ffffffffa5014620
SYMBOL(node_online_map)=ffffffffa5276720
SYMBOL(swapper_pg_dir)=ffffffffa500a000
SYMBOL(_stext)=ffffffffa3a00000
SYMBOL(vmap_area_list)=ffffffffa50f2560
SYMBOL(mem_section)=ffff91673ffd2000
LENGTH(mem_section)=2048
SIZE(mem_section)=16
OFFSET(mem_section.section_mem_map)=0
SIZE(page)=64
SIZE(pglist_data)=171968
SIZE(zone)=1472
SIZE(free_area)=88
...
CRASHTIME=1623937823

Preparing the cmdline (runs in the normal kernel)

How does the prepared elf header data get handed over to the crash kernel? It is passed via the cmdline:

kexec_file_load → kimage_file_alloc_init → kimage_file_prepare_segments
  → arch_kexec_kernel_image_load → image->fops->load → kexec_bzImage64_ops.load
  → bzImage64_load → setup_cmdline:

static int setup_cmdline(struct kimage *image, struct boot_params *params,
             unsigned long bootparams_load_addr,
             unsigned long cmdline_offset, char *cmdline,
             unsigned long cmdline_len)
{
    char *cmdline_ptr = ((char *)params) + cmdline_offset;
    unsigned long cmdline_ptr_phys, len = 0;
    uint32_t cmdline_low_32, cmdline_ext_32;

    /* (1) append the "elfcorehdr=0x%lx " parameter to the crash kernel's
     * cmdline */
    if (image->type == KEXEC_TYPE_CRASH) {
        len = sprintf(cmdline_ptr,
            "elfcorehdr=0x%lx ", image->arch.elf_load_addr);
    }
    memcpy(cmdline_ptr + len, cmdline, cmdline_len);
    cmdline_len += len;

    cmdline_ptr[cmdline_len - 1] = '\0';

    pr_debug("Final command line is: %s\n", cmdline_ptr);
    cmdline_ptr_phys = bootparams_load_addr + cmdline_offset;
    cmdline_low_32 = cmdline_ptr_phys & 0xffffffffUL;
    cmdline_ext_32 = cmdline_ptr_phys >> 32;
    params->hdr.cmd_line_ptr = cmdline_low_32;
    if (cmdline_ext_32)
        params->ext_cmd_line_ptr = cmdline_ext_32;

    return 0;
}

Booting the crash kernel (runs in the normal kernel)

After the normal kernel panics, it jumps into the crash kernel:

die → crash_kexec → __crash_kexec → machine_kexec

Receiving the elf header (runs in the crash kernel)

The crash kernel first receives, via the cmdline, the elf header information of the vmcore file passed over by the normal kernel:

static int __init setup_elfcorehdr(char *arg)
{
    char *end;

    if (!arg)
        return -EINVAL;
    elfcorehdr_addr = memparse(arg, &end);
    if (*end == '@') {
        elfcorehdr_size = elfcorehdr_addr;
        elfcorehdr_addr = memparse(end + 1, &end);
    }
    return end > arg ? 0 : -EINVAL;
}
early_param("elfcorehdr", setup_elfcorehdr);
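The parameter accepts either `elfcorehdr=addr` or `elfcorehdr=size@addr`. A hedged Python sketch of that parsing (my own illustration of the kernel's memparse() behavior, covering only the hex/decimal forms and K/M/G suffixes relevant here):

```python
def memparse(s):
    """Parse a leading number with an optional K/M/G suffix, like the
    kernel's memparse(); returns (value, remainder-of-string)."""
    i = 0
    while i < len(s) and (s[i].isdigit() or s[i] in "xXabcdefABCDEF"):
        i += 1
    val = int(s[:i], 0)          # base 0 honors a leading 0x
    if i < len(s) and s[i].lower() in "kmg":
        val <<= {"k": 10, "m": 20, "g": 30}[s[i].lower()]
        i += 1
    return val, s[i:]

def parse_elfcorehdr(arg):
    """Mimic setup_elfcorehdr(): accept 'addr' or 'size@addr'.
    Returns (addr, size), with size None when not given."""
    addr, rest = memparse(arg)
    size = None
    if rest.startswith("@"):
        size = addr              # the first number was actually the size
        addr, rest = memparse(rest[1:])
    return addr, size
```

This mirrors the C code's trick of reusing the first parsed value as the size once an '@' separator is seen.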

Parsing and organizing the ELF header (runs in the crash kernel)

Next, the vmcore file's ELF header information is read out, parsed, and reorganized:

static int __init vmcore_init(void)
{
	int rc = 0;

	/* Allow architectures to allocate ELF header in 2nd kernel */
	rc = elfcorehdr_alloc(&elfcorehdr_addr, &elfcorehdr_size);
	if (rc)
		return rc;
	/*
	 * If elfcorehdr= has been passed in cmdline or created in 2nd kernel,
	 * then capture the dump.
	 */
	if (!(is_vmcore_usable()))
		return rc;
	/* (1) parse the elf header information passed over by the normal kernel */
	rc = parse_crash_elf_headers();
	if (rc) {
		pr_warn("Kdump: vmcore not initialized\n");
		return rc;
	}
	elfcorehdr_free(elfcorehdr_addr);
	elfcorehdr_addr = ELFCORE_ADDR_ERR;

	/* (2) create the /proc/vmcore file interface */
	proc_vmcore = proc_create("vmcore", S_IRUSR, NULL, &vmcore_proc_ops);
	if (proc_vmcore)
		proc_vmcore->size = vmcore_size;
	return 0;
}
fs_initcall(vmcore_init);
↓
parse_crash_elf_headers()
↓
static int __init parse_crash_elf64_headers(void)
{
	int rc = 0;
	Elf64_Ehdr ehdr;
	u64 addr;

	addr = elfcorehdr_addr;

	/* Read Elf header */
	/* (1.1) read out the elf header passed over.
	   Note: this reads another system's memory, so the physical address
	   must first be mapped with ioremap_cache() before it can be read.
	   Many of the later reads work the same way. */
	rc = elfcorehdr_read((char *)&ehdr, sizeof(Elf64_Ehdr), &addr);
	if (rc < 0)
		return rc;

	/* Do some basic Verification. */
	/* (1.2) sanity-check the elf header we just read, in case it has
	   been corrupted */
	if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
		(ehdr.e_type != ET_CORE) ||
		!vmcore_elf64_check_arch(&ehdr) ||
		ehdr.e_ident[EI_CLASS] != ELFCLASS64 ||
		ehdr.e_ident[EI_VERSION] != EV_CURRENT ||
		ehdr.e_version != EV_CURRENT ||
		ehdr.e_ehsize != sizeof(Elf64_Ehdr) ||
		ehdr.e_phentsize != sizeof(Elf64_Phdr) ||
		ehdr.e_phnum == 0) {
		pr_warn("Warning: Core image elf header is not sane\n");
		return -EINVAL;
	}

	/* Read in all elf headers. */
	/* (1.3) allocate two buffers in the crash kernel to hold local copies:
	   elfcorebuf   stores the elf header + elf program headers
	   elfnotes_buf stores the PT_NOTE segment */
	elfcorebuf_sz_orig = sizeof(Elf64_Ehdr) +
				ehdr.e_phnum * sizeof(Elf64_Phdr);
	elfcorebuf_sz = elfcorebuf_sz_orig;
	elfcorebuf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
					      get_order(elfcorebuf_sz_orig));
	if (!elfcorebuf)
		return -ENOMEM;
	addr = elfcorehdr_addr;
	/* (1.4) read the whole elf header + elf program headers into elfcorebuf */
	rc = elfcorehdr_read(elfcorebuf, elfcorebuf_sz_orig, &addr);
	if (rc < 0)
		goto fail;

	/* Merge all PT_NOTE headers into one. */
	/* (1.5) reorganize the data: merge the multiple PT_NOTEs into one,
	   and copy the PT_NOTE data into elfnotes_buf */
	rc = merge_note_headers_elf64(elfcorebuf, &elfcorebuf_sz,
				      &elfnotes_buf, &elfnotes_sz);
	if (rc)
		goto fail;

	/* (1.6) adjust each PT_LOAD segment header one by one so that every
	   segment is page aligned */
	rc = process_ptload_program_headers_elf64(elfcorebuf, elfcorebuf_sz,
						  elfnotes_sz, &vmcore_list);
	if (rc)
		goto fail;

	/* (1.7) matching the page-alignment adjustment above, compute the
	   offset of each entry in the vmcore_list */
	set_vmcore_list_offsets(elfcorebuf_sz, elfnotes_sz, &vmcore_list);
	return 0;
fail:
	free_elfcorebuf();
	return rc;
}
↓
static int __init merge_note_headers_elf64(char *elfptr, size_t *elfsz,
					   char **notes_buf, size_t *notes_sz)
{
	int i, nr_ptnote = 0, rc = 0;
	char *tmp;
	Elf64_Ehdr *ehdr_ptr;
	Elf64_Phdr phdr;
	u64 phdr_sz = 0, note_off;

	ehdr_ptr = (Elf64_Ehdr *)elfptr;

	/* (1.5.1) update each individual PT_NOTE's size, dropping the
	   trailing all-zero elf_note entries */
	rc = update_note_header_size_elf64(ehdr_ptr);
	if (rc < 0)
		return rc;

	/* (1.5.2) compute the total size of all PT_NOTE data combined */
	rc = get_note_number_and_size_elf64(ehdr_ptr, &nr_ptnote, &phdr_sz);
	if (rc < 0)
		return rc;

	*notes_sz = roundup(phdr_sz, PAGE_SIZE);
	*notes_buf = vmcore_alloc_buf(*notes_sz);
	if (!*notes_buf)
		return -ENOMEM;

	/* (1.5.3) copy all the PT_NOTE data together into notes_buf */
	rc = copy_notes_elf64(ehdr_ptr, *notes_buf);
	if (rc < 0)
		return rc;

	/* Prepare merged PT_NOTE program header. */
	/* (1.5.4) build a new PT_NOTE program header that addresses notes_buf */
	phdr.p_type   = PT_NOTE;
	phdr.p_flags  = 0;
	note_off = sizeof(Elf64_Ehdr) +
			(ehdr_ptr->e_phnum - nr_ptnote + 1) * sizeof(Elf64_Phdr);
	phdr.p_offset = roundup(note_off, PAGE_SIZE);
	phdr.p_vaddr  = phdr.p_paddr = 0;
	phdr.p_filesz = phdr.p_memsz = phdr_sz;
	phdr.p_align  = 0;

	/* Add merged PT_NOTE program header*/
	/* (1.5.5) copy in the new PT_NOTE program header */
	tmp = elfptr + sizeof(Elf64_Ehdr);
	memcpy(tmp, &phdr, sizeof(phdr));
	tmp += sizeof(phdr);

	/* Remove unwanted PT_NOTE program headers. */
	/* (1.5.6) remove the now-unused PT_NOTE program headers */
	i = (nr_ptnote - 1) * sizeof(Elf64_Phdr);
	*elfsz = *elfsz - i;
	memmove(tmp, tmp + i, ((*elfsz) - sizeof(Elf64_Ehdr) - sizeof(Elf64_Phdr)));
	memset(elfptr + *elfsz, 0, i);
	*elfsz = roundup(*elfsz, PAGE_SIZE);

	/* Modify e_phnum to reflect merged headers. */
	ehdr_ptr->e_phnum = ehdr_ptr->e_phnum - nr_ptnote + 1;

	/* Store the size of all notes.  We need this to update the note
	 * header when the device dumps will be added.
	 */
	elfnotes_orig_sz = phdr.p_memsz;

	return 0;
}
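Step (1.5.1) above depends on knowing the real size of a note segment, which is found by walking its Elf64_Nhdr entries until the trailing zero-filled area. A user-space sketch of that walk, under the assumption of the standard 4-byte name/desc padding (real_note_size is an illustrative name, not the kernel function):

```c
#include <assert.h>
#include <elf.h>
#include <stddef.h>
#include <string.h>

/* Sketch of what update_note_header_size_elf64() computes for one
 * PT_NOTE segment: sum the sizes of the Elf64_Nhdr entries, stopping
 * at the first all-zero (i.e. trailing padding) note. */
static size_t real_note_size(const void *buf, size_t max)
{
    size_t off = 0;

    while (off + sizeof(Elf64_Nhdr) <= max) {
        const Elf64_Nhdr *n = (const Elf64_Nhdr *)((const char *)buf + off);

        if (n->n_namesz == 0 && n->n_descsz == 0 && n->n_type == 0)
            break;                       /* reached the zero-filled tail */
        /* name and desc are each padded up to 4-byte boundaries */
        off += sizeof(Elf64_Nhdr) +
               ((n->n_namesz + 3) & ~3u) +
               ((n->n_descsz + 3) & ~3u);
    }
    return off;
}
```

For example, a single note with n_namesz = 5 ("CORE\0") and n_descsz = 4 occupies 12 + 8 + 4 = 24 bytes after padding, no matter how much zero filler follows it in the buffer.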

Reading the ELF core (runs in the crash kernel)

After the parsing in the previous section, the ELF header data is essentially ready: elfcorebuf holds the ELF header + ELF program headers, and elfnotes_buf holds the PT_NOTE segment.

The ELF core data can now be read through read operations on the /proc/vmcore file:

static const struct proc_ops vmcore_proc_ops = {
	.proc_read	= read_vmcore,
	.proc_lseek	= default_llseek,
	.proc_mmap	= mmap_vmcore,
};
↓
read_vmcore
↓
static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
			     int userbuf)
{
	ssize_t acc = 0, tmp;
	size_t tsz;
	u64 start;
	struct vmcore *m = NULL;

	if (buflen == 0 || *fpos >= vmcore_size)
		return 0;

	/* trim buflen to not go beyond EOF */
	if (buflen > vmcore_size - *fpos)
		buflen = vmcore_size - *fpos;

	/* Read ELF core header */
	/* (1) read the elf header + elf program headers from elfcorebuf and
	   copy them to the user read buffer */
	if (*fpos < elfcorebuf_sz) {
		tsz = min(elfcorebuf_sz - (size_t)*fpos, buflen);
		if (copy_to(buffer, elfcorebuf + *fpos, tsz, userbuf))
			return -EFAULT;
		buflen -= tsz;
		*fpos += tsz;
		buffer += tsz;
		acc += tsz;

		/* leave now if filled buffer already */
		if (buflen == 0)
			return acc;
	}

	/* Read Elf note segment */
	/* (2) read the PT_NOTE segment from elfnotes_buf and copy it to the
	   user read buffer */
	if (*fpos < elfcorebuf_sz + elfnotes_sz) {
		void *kaddr;

		/* We add device dumps before other elf notes because the
		 * other elf notes may not fill the elf notes buffer
		 * completely and we will end up with zero-filled data
		 * between the elf notes and the device dumps. Tools will
		 * then try to decode this zero-filled data as valid notes
		 * and we don't want that. Hence, adding device dumps before
		 * the other elf notes ensure that zero-filled data can be
		 * avoided.
		 */
#ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
		/* Read device dumps */
		if (*fpos < elfcorebuf_sz + vmcoredd_orig_sz) {
			tsz = min(elfcorebuf_sz + vmcoredd_orig_sz -
				  (size_t)*fpos, buflen);
			start = *fpos - elfcorebuf_sz;
			if (vmcoredd_copy_dumps(buffer, start, tsz, userbuf))
				return -EFAULT;

			buflen -= tsz;
			*fpos += tsz;
			buffer += tsz;
			acc += tsz;

			/* leave now if filled buffer already */
			if (!buflen)
				return acc;
		}
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */

		/* Read remaining elf notes */
		tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos, buflen);
		kaddr = elfnotes_buf + *fpos - elfcorebuf_sz - vmcoredd_orig_sz;
		if (copy_to(buffer, kaddr, tsz, userbuf))
			return -EFAULT;

		buflen -= tsz;
		*fpos += tsz;
		buffer += tsz;
		acc += tsz;

		/* leave now if filled buffer already */
		if (buflen == 0)
			return acc;
	}

	/* (3) read the PT_LOAD segments via the vmcore_list and copy them to
	   the user read buffer.  Each physical address must be mapped with
	   ioremap_cache() before it can be read. */
	list_for_each_entry(m, &vmcore_list, list) {
		if (*fpos < m->offset + m->size) {
			tsz = (size_t)min_t(unsigned long long,
					    m->offset + m->size - *fpos,
					    buflen);
			start = m->paddr + *fpos - m->offset;
			tmp = read_from_oldmem(buffer, tsz, &start,
					       userbuf, mem_encrypt_active());
			if (tmp < 0)
				return tmp;
			buflen -= tsz;
			*fpos += tsz;
			buffer += tsz;
			acc += tsz;

			/* leave now if filled buffer already */
			if (buflen == 0)
				return acc;
		}
	}

	return acc;
}
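The three branches above give /proc/vmcore a flat layout: [ELF header + program headers] from elfcorebuf, then [merged notes] from elfnotes_buf, then [PT_LOAD data] read out of old memory via the vmcore_list. A tiny sketch of mapping a file offset to its backing region, ignoring the CONFIG_PROC_VMCORE_DEVICE_DUMP case (vmcore_region and its 0/1/2 return encoding are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* Which buffer serves a given /proc/vmcore file offset?
 * 0 = elfcorebuf, 1 = elfnotes_buf, 2 = old memory (vmcore_list). */
static int vmcore_region(size_t fpos, size_t elfcorebuf_sz,
                         size_t elfnotes_sz)
{
    if (fpos < elfcorebuf_sz)
        return 0;                      /* elf header + program headers */
    if (fpos < elfcorebuf_sz + elfnotes_sz)
        return 1;                      /* merged PT_NOTE data */
    return 2;                          /* PT_LOAD data from old memory */
}
```

With elfcorebuf_sz = 4096 and elfnotes_sz = 8192, offset 0 hits the headers, offset 4096 is the first byte of the notes, and anything from 12288 on is served straight out of the crashed kernel's memory. This page-aligned layout is exactly why steps (1.6) and (1.7) in the previous section adjust the PT_LOAD headers and list offsets.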
