libpayload: usbmsc: Correctly deal with disks larger than 2TB

Disks larger than 2TB (technically, disks with more than 2^32 blocks, which
for the common block size of 512 bytes comes out to 2TB) cannot report
their full block count through the SCSI READ_CAPACITY(10) command used
by libpayload's USB mass storage driver. The rest of the driver isn't
written to support block addresses larger than 32 bits anyway.

The SCSI command has been designed in a clever way so that devices are
supposed to return the maximum value (0xffffffff) if the actual value
doesn't fit. However, our code adds one to the returned value (because it
is actually the address of the last block, but we want to know the number
of blocks). For 0xffffffff that addition overflows back to 0, making the
disk appear to have zero blocks.

This patch caps the result before incrementing it so that the overflow
cannot occur, allowing us to at least access the first 2TB of super
large USB sticks.

Change-Id: Ic445923b7d588c4f523c4ed95a06291bc1969261
Signed-off-by: Julius Werner <jwerner@chromium.org>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/87506
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Yu-Ping Wu <yupingso@google.com>
commit f2cf732997 (Julius Werner, 2025-05-01 14:32:34 -07:00)

@@ -28,6 +28,7 @@
 //#define USB_DEBUG
 #include <endian.h>
+#include <limits.h>
 #include <usb/usb.h>
 #include <usb/usbmsc.h>
 #include <usb/usbdisk.h>
@@ -520,7 +521,7 @@ read_capacity(usbdev_t *dev)
 		MSC_INST(dev)->numblocks = 0xffffffff;
 		MSC_INST(dev)->blocksize = 512;
 	} else {
-		MSC_INST(dev)->numblocks = ntohl(buf[0]) + 1;
+		MSC_INST(dev)->numblocks = MIN(ntohl(buf[0]), UINT_MAX - 1) + 1;
 		MSC_INST(dev)->blocksize = ntohl(buf[1]);
 	}
 	usb_debug(" %d %d-byte sectors (%d MB)\n", MSC_INST(dev)->numblocks,