From b4f2a515331734bfbce2ec0867042d760d972c15 Mon Sep 17 00:00:00 2001
From: Yu-Ping Wu
Date: Thu, 12 Jun 2025 09:43:12 +0800
Subject: [PATCH] libpayload/arch/arm64/mmu: Fix missing CBMEM in used ranges
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We use mmu_ranges to track the list of memory ranges and their types
for MMU initialization. We also keep track of used memory ranges in
usedmem_ranges, to prevent them from being re-allocated in
mmu_alloc_range().

The problem is that the CBMEM range (CB_MEM_TABLE) is added to
mmu_ranges, but is never marked as "used" in usedmem_ranges. This
potentially causes an allocation (for example, the framebuffer) to
overlap with CBMEM.

This issue was observed when DMA_DEFAULT_SIZE was reduced from 32MB to
1MB [1]. Prior to that change, because there wasn't enough space above
the coreboot table (with the 4GB upper limit) to fit the 32MB requested
region, the DMA heap was always allocated *below* the coreboot table.
And because the coreboot table is usually the lowest range within
CBMEM, the DMA heap was allocated *below* the whole of CBMEM, which
happened to avoid the issue.

Fix the bug by adding CB_MEM_TABLE ranges to usedmem_ranges. The ranges
in usedmem_ranges don't need to be combined, because they are not used
for MMU initialization (and there's only one CB_MEM_TABLE range).
[1] commit aedc177f000a ("libpayload: arm64: Reduce DMA allocator space
    to 1MB")

BUG=b:424107889
TEST=emerge-skywalker libpayload
BRANCH=none

Change-Id: Ie9ecafc17546e524253c60ab684ec10ff3495998
Signed-off-by: Yu-Ping Wu
Reviewed-on: https://review.coreboot.org/c/coreboot/+/88063
Tested-by: build bot (Jenkins)
Reviewed-by: Bartłomiej Grzesik
Reviewed-by: Jakub "Kuba" Czapiga
---
 payloads/libpayload/arch/arm64/mmu.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/payloads/libpayload/arch/arm64/mmu.c b/payloads/libpayload/arch/arm64/mmu.c
index 3822a891cf..e99fa37c5d 100644
--- a/payloads/libpayload/arch/arm64/mmu.c
+++ b/payloads/libpayload/arch/arm64/mmu.c
@@ -606,16 +606,23 @@ static void mmu_extract_ranges(struct memrange *cb_ranges,
 	/* Extract memory ranges to be mapped */
 	for (; i < ncb; i++) {
+		uint64_t base = cb_ranges[i].base;
+		uint64_t size = cb_ranges[i].size;
+
 		switch (cb_ranges[i].type) {
-		case CB_MEM_RAM:
 		case CB_MEM_TABLE:
-			if (prev_range && (prev_range->base + prev_range->size
-					   == cb_ranges[i].base)) {
-				prev_range->size += cb_ranges[i].size;
+			/* Mark this memrange as used memory */
+			if (mmu_add_memrange(&usedmem_ranges, base, size,
+					     TYPE_NORMAL_MEM) == NULL)
+				mmu_error();
+			__fallthrough;
+		case CB_MEM_RAM:
+			if (prev_range &&
+			    prev_range->base + prev_range->size == base) {
+				prev_range->size += size;
 			} else {
 				prev_range = mmu_add_memrange(mmu_ranges,
-							      cb_ranges[i].base,
-							      cb_ranges[i].size,
+							      base, size,
 							      TYPE_NORMAL_MEM);
 				if (prev_range == NULL)
 					mmu_error();