Introduction to cross-compilation

This lesson serves as an introduction to cross-compilation and user-mode emulation. We assume the reader has some previous knowledge of C/C++ and is familiar with Linux and the command line. It's recommended to review the following resources before diving into the practical aspects:

In this lesson you will learn how to build a simple C program with a cross GCC toolchain targeting the riscv64 architecture and run it on your computer via qemu-user.

Compiling our first hello world program

First, clone this repo and install the cross toolchain for riscv64. The exact steps depend on your distribution. We assume a Debian-based operating system, but feel free to use any distro you want.

sudo apt install build-essential gcc-riscv64-linux-gnu qemu-user
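
A quick sanity check (assuming the packages above installed cleanly) is to ask both tools for their versions:

riscv64-linux-gnu-gcc --version
qemu-riscv64 --version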

Clone and navigate to the labs/01-hello-riscv directory:

git clone https://github.com/riscv-technologies-lab/riscv-toolchain-labs.git cross-labs
cd cross-labs/labs/01-hello-riscv

Let's compile a simple hello world program natively. Take a look at the Makefile that we've prepared.

{{#include ../../labs/01-hello-riscv/hello.c}}
make run # compile the program natively
# gcc hello.c -o build/hello.elf
# build/hello.elf
# Hello, RISC-V!

Let's take a look at the generated executable with the file command:

make show # run: file build/hello.elf
# gcc hello.c -o build/hello.elf
# file build/hello.elf
# build/hello.elf: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=52deb0fc601275b33ab8a638447f4bf2dcc1bb4a, for GNU/Linux 3.2.0, not stripped

We can dig deeper with the ldd command.

ldd build/hello.elf
# linux-vdso.so.1 (0x00007fff042e9000)
# libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f058ff59000)
# /lib64/ld-linux-x86-64.so.2 (0x00007f0590169000)

What is vDSO? Now run the readelf utility on the compiled binary and explore the output by yourself. What happens if you compile with CCFLAGS=-static and run readelf again? Why is the output bigger?
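
If you are unsure where to start, these standard readelf invocations are a reasonable first look:

readelf -h build/hello.elf   # ELF header: class, machine, entry point
readelf -S build/hello.elf   # section headers
readelf -d build/hello.elf   # dynamic section (compare with the -static build)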

Running under qemu usermode emulation

Run hello.elf with native usermode qemu. There's a make target to do that:

make run-qemu

Compile and run for riscv64 arch

Repeat the previous steps but with the riscv64 toolchain. You can set make variables from the command line if they are defined with ?=. Take a look here. Set the compiler to riscv64-linux-gnu-gcc and the qemu executable to qemu-riscv64. On ubuntu-23.04, make run-qemu fails with Could not open '/lib/ld-linux-riscv64-lp64d.so.1'. In case you encounter such an error, you can either compile with the -static flag or create a symlink (with ln -s) pointing to the correct loader location. See the issue.
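
As a sketch (assuming the lab's Makefile uses the same CC, QEMU_USER and CCFLAGS variables shown in the next lab), the override could look like this:

CC=riscv64-linux-gnu-gcc QEMU_USER=qemu-riscv64 make run-qemu
# if the dynamic loader cannot be found, link statically instead:
CC=riscv64-linux-gnu-gcc QEMU_USER=qemu-riscv64 CCFLAGS=-static make run-qemu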

Downloading toolchain and compiling your own

In this lab you will take a look at the GNU and LLVM toolchains, where to find them and how to set them up on your system. You will also get acquainted with Docker and its applications; we will use a Docker image with the prebuilt sc-dt package, which includes the GNU and LLVM toolchains.

Docker usage

Navigate to the docker tutorials in order to set up Docker.

Pull image and run container:

docker run \
    --interactive \
    --tty \
    --detach \
    --env "TERM=xterm-256color" \
    --mount type=bind,source="$(pwd)",target="$(pwd)" \
    --name cpp \
    --ulimit nofile=1024:1024 \
    ghcr.io/riscv-technologies-lab/rv_tools_image:1.0.1

Wait for the image to pull; the container will then start in the background and you can enter it.
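
Because the container was started with --detach, it runs in the background; you can get a shell inside it with docker exec (assuming bash is available in the image):

docker exec -it cpp bash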

Glancing at your first toolchain

The Docker container has both the GNU and LLVM toolchains in it. They are both installed in the /opt/sc-dt/ directory.

  • GNU toolchain: /opt/sc-dt/riscv-gcc
  • LLVM toolchain: /opt/sc-dt/llvm/

You will also see an env.sh script in the /opt/sc-dt/ directory. This script exports environment variables into your environment. Try running the following:

source /opt/sc-dt/env.sh

Task: what changed after running the script? Take a look at your environment variables before and after running the script. Compare them and describe the differences. What does the source command do?

Now you can access the toolchains from sc-dt either by absolute path: /opt/sc-dt/riscv-gcc/ or via the environment variable: $SC_GCC_PATH/
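
For example, both of the following should print the same compiler version once env.sh has been sourced (assuming SC_GCC_PATH points at the GNU toolchain root):

/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc --version
$SC_GCC_PATH/bin/riscv64-unknown-linux-gnu-gcc --version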

Navigate to the labs/02-toolchain/ directory. Take a look at hello.c; this is the same program you tried in the previous lab, with the same Makefile. We will use those with our new toolchain.

As you can see in the Makefile, there are a few variables at the top:

CC ?= gcc
QEMU_USER ?= qemu-x86_64
CCFLAGS ?= 

It is very common and good practice to set such variables and use them throughout the Makefile. The reason is that it is much more readable, and the variables can be redefined with new values. For instance, we can change the compiler or the flags while using the same Makefile for our program.
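
A minimal sketch of how these variables are typically consumed by the targets (the actual rules in the lab's Makefile may differ):

build: hello.c
	mkdir -p build
	$(CC) $(CCFLAGS) hello.c -o build/hello.elf

run-qemu: build
	$(QEMU_USER) build/hello.elf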

Let's change the default compiler in our Makefile to the one from the sc-dt GNU toolchain:

Note: how do variables work in the shell? Try accomplishing the same goal using export and explain the difference.
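
A sketch of the export-based alternative (the command below shows the one-shot, per-invocation form):

export CC=/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc
export QEMU_USER=/opt/sc-dt/tools/bin/qemu-riscv64
make build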

CC=/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc QEMU_USER=/opt/sc-dt/tools/bin/qemu-riscv64 make build

Remember, if you encounter an error like the following:

qemu-riscv64: Could not open '/lib/ld-linux-riscv64-lp64d.so.1': No such file or directory

additionally pass CCFLAGS=-static along with CC and QEMU_USER.

We redefined the CC and QEMU_USER variables (and CCFLAGS, if needed) to different values and ran the build target. We can see that it ran successfully using the specified compiler:

/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc  hello.c -o build/hello.elf

Now run on QEMU:

CC=/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc QEMU_USER=/opt/sc-dt/tools/bin/qemu-riscv64 CCFLAGS=-static make run-qemu

Connecting with debugger

It's time for us to use the debugger. We'll stick with GDB from sc-dt for now.

The debugger is located at $SC_GCC_PATH/bin/riscv64-unknown-linux-gnu-gdb

First, add -g to the CCFLAGS variable. This adds debug symbols to the final binary. They significantly increase the binary size, but without them we cannot do proper debugging. Task: compare the binary size with and without debug symbols enabled. Use the objdump tool from the toolchain to find the debug information added by the compiler.
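
A possible way to do the comparison (a sketch; it assumes the hello.c program and the build/hello.elf output path used above):

CC=/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc CCFLAGS=-static make build
ls -l build/hello.elf      # size without debug info
rm -f build/hello.elf      # force a rebuild with the new flags
CC=/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc CCFLAGS="-static -g" make build
ls -l build/hello.elf      # size with debug info
# debug information lives in .debug_* sections:
/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-objdump -h build/hello.elf | grep debug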

Now start qemu and make it wait for a gdb connection:

/opt/sc-dt/tools/bin/qemu-riscv64 -g 1234 build/hello.elf

Open another terminal, and use gdb to connect to it:

/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gdb

Inside gdb, connect to qemu on port 1234. Set a breakpoint on main and run the application:

target remote localhost:1234 # note: you can use "tar rem :1234"
b main
continue

Now you should see something like the following:

(gdb) tar rem :1234
Remote debugging using :1234
Reading /home/stanislav/mipt/masters/riscv-toolchain-labs/labs/02-toolchain/fibb.elf from remote target...
warning: File transfers from remote targets can be slow. Use "set sysroot" to access files locally instead.
Reading /home/stanislav/mipt/masters/riscv-toolchain-labs/labs/02-toolchain/fibb.elf from remote target...
Reading symbols from target:/home/stanislav/mipt/masters/riscv-toolchain-labs/labs/02-toolchain/fibb.elf...
0x000000000001054c in _start ()
(gdb)
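
Once you are connected and the breakpoint is hit, a few standard gdb commands are worth trying:

info registers        # dump the RISC-V general-purpose registers
x/8i $pc              # disassemble the next 8 instructions
stepi                 # execute a single instruction
backtrace             # show the call stack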

Major tasks

Porting application to RISC-V

In this task, you must choose an application to port (for instance, your quadratic equation solver) and port it to RISC-V:

  • Create a Makefile for your application. The Makefile should have build and run-on-qemu targets; you must be able to change the compiler and the simulator (and also the compilation flags)
  • For convenience, add a Makefile target that runs on qemu / runs gdb / builds the application using docker instead of the console:
make build-docker # This will enter docker container and build application inside it
  • Build your app and verify that it runs on the simulator
  • Build with both the GNU and LLVM toolchains. Try different optimization levels, compare the generated assembly and list some differences (see the sketch after this list).
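
A sketch of such a comparison for the hello program (the clang path and flags are assumptions about the sc-dt layout and may need adjusting):

/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc -O0 -S hello.c -o hello-gcc-O0.s
/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc -O2 -S hello.c -o hello-gcc-O2.s
/opt/sc-dt/llvm/bin/clang --target=riscv64-unknown-linux-gnu \
    --sysroot=/opt/sc-dt/riscv-gcc/sysroot -O2 -S hello.c -o hello-clang-O2.s
diff hello-gcc-O2.s hello-clang-O2.s | less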

Downloading toolchain and compiling your own

  • Show docker usage with the riscv-tools image
  • Briefly show the necessary tools from the toolchain and how to work with them
  • Demonstrate building a RISC-V toolchain from sources
  • Demonstrate compiler output for a sample app (GCC and Clang)
  • Create a sample app with a Makefile and try compiling it using a custom toolchain, the sc-dt toolchain and the thead toolchain
  • Run on qemu, connect with the debugger and test it out

Learning linkers

In this lab, you will gain hands-on experience with relocations and how linkers resolve them, as well as get some knowledge about static / dynamic linking. Navigate to labs/03-linkers to see the examples we've prepared for you.

Definitions and declarations

A declaration in C introduces an identifier and describes its type, whether it is a type, an object or a function.

A definition in C instantiates / implements the identifier. It is what the linker needs in order to resolve references to those entities.

Take a look at the following declarations:

extern int bar;
extern int mul(int a, int b);
double sum(int a, double b);
struct foo;

Take a look at the provided main.c and fact.c.

// main.c
#include <stdio.h>

int main() {
  unsigned f = fact(5); // fact is neither declared nor defined in this file
  printf("%u\n", f);
  return 0;
}

// fact.c
unsigned fact(unsigned x) {
  if (x < 2)
    return 1;

  return x * fact(x - 1);
}

From here on, let's use only the RISC-V toolchain:

source /opt/sc-dt/env.sh # NOTE: if you are using something other than bash, this might not work. If so, try the old fashioned path export
export CC=/opt/sc-dt/riscv-gcc/bin/riscv64-unknown-linux-gnu-gcc
export PATH=${PATH}:/opt/sc-dt/riscv-gcc/bin
export PATH=${PATH}:/opt/sc-dt/tools/bin # For QEMU

main here does not know that the fact function exists. If we try to compile main into an executable (make exec), we will get the following error:

/opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/../../../../riscv64-unknown-linux-gnu/bin/ld: /tmp/ccqIh9oC.o: in function `main':
main.c:(.text+0xa): undefined reference to `fact'
collect2: error: ld returned 1 exit status

The linker failed to find the definition of the fact function.

Task 3.1

Use the readelf and file utilities to investigate the main.o file and its contents and answer the following questions (some starting-point commands are shown after the list):

Format for the following assignment: answer the questions in a markdown file.

  • What is the type of the file?
  • How many sections are there?
  • List all entries in the same format readelf prints it
  • Why do the entries for the printf and fact functions have NOTYPE type?
  • Modify the example so that an executable is produced correctly.
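
Some starting-point commands (standard readelf and file options; adjust the object file name if yours differs):

riscv64-unknown-linux-gnu-gcc -c main.c -o main.o   # produce the object file
file main.o
riscv64-unknown-linux-gnu-readelf -h main.o   # ELF header: file type
riscv64-unknown-linux-gnu-readelf -S main.o   # section headers
riscv64-unknown-linux-gnu-readelf -s main.o   # symbol table: look at printf and fact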

Relocations

Let's take a look at the objdump output:

riscv64-unknown-linux-gnu-objdump -d main.o

We will notice that the address of the fact function is all zeroes:

    e:	000080e7          	jalr	ra # a <main+0xa>

Now compile both files, link them into a single executable and look at the call address:

make bin
riscv64-unknown-linux-gnu-objdump -d fact | grep fact

You will see that fact has now been assigned an address and main knows how to call it:

fact:     file format elf64-littleriscv
   105fc:	028000ef          	jal	ra,10624 <fact>
0000000000010624 <fact>:
   1063c:	00e7e463          	bltu	a5,a4,10644 <fact+0x20>
   10642:	a839                	j	10660 <fact+0x3c>
   1064e:	fd7ff0ef          	jal	ra,10624 <fact>

The linker managed to find the fact function and insert the correct address for it. It used relocations to do this. Relocations are entries in the object file that tell the linker which locations in the code need to be patched once final addresses are known. Take a look at them with readelf:

riscv64-unknown-linux-gnu-readelf -r main.o

Possible output:

Relocation section '.rela.text' at offset 0x268 contains 8 entries:
  Offset          Info           Type           Sym. Value    Sym. Name + Addend
00000000000a  000c00000012 R_RISCV_CALL      0000000000000000 fact + 0
00000000000a  000000000033 R_RISCV_RELAX                        0
00000000001e  00080000001a R_RISCV_HI20      0000000000000000 .LC0 + 0
00000000001e  000000000033 R_RISCV_RELAX                        0
000000000022  00080000001b R_RISCV_LO12_I    0000000000000000 .LC0 + 0
000000000022  000000000033 R_RISCV_RELAX                        0
000000000026  000d00000012 R_RISCV_CALL      0000000000000000 printf + 0
000000000026  000000000033 R_RISCV_RELAX                        0

From the output we see that the calls to both fact and printf have relocations. These relocations are provided by the compiler to assist the linker in resolving symbols.

Static libraries

The following command:

riscv64-unknown-linux-gnu-gcc main.c fact.c  -o fact

compiles the program straight to an executable. But no linker was invoked here... or was it?

Pass the --verbose flag to dive deeper into what gcc actually calls under the hood.
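
For example (the driver prints its internal commands to stderr, so redirect it if you want to page through the output):

riscv64-unknown-linux-gnu-gcc --verbose main.c fact.c -o fact 2>&1 | less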

Find the collect2 call line:

/opt/sc-dt/riscv-gcc/bin/../libexec/gcc/riscv64-unknown-linux-gnu/12.2.1/collect2 -plugin /opt/sc-dt/riscv-gcc/bin/../libexec/gcc/riscv64-unknown-linux-gnu/12.2.1/liblto_plugin.so -plugin-opt=/opt/sc-dt/riscv-gcc/bin/../libexec/gcc/riscv64-unknown-linux-gnu/12.2.1/lto-wrapper -plugin-opt=-fresolution=/tmp/cc1yCRZY.res -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lgcc_s --sysroot=/opt/sc-dt/riscv-gcc/bin/../sysroot --eh-frame-hdr -melf64lriscv -dynamic-linker /lib/ld-linux-riscv64-lp64d.so.1 -o fact /opt/sc-dt/riscv-gcc/bin/../sysroot/usr/lib64/lp64d/crt1.o /opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/crti.o /opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/crtbegin.o -L/opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1 -L/opt/sc-dt/riscv-gcc/bin/../lib/gcc -L/opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/../../../../riscv64-unknown-linux-gnu/lib/../lib64/lp64d -L/opt/sc-dt/riscv-gcc/bin/../sysroot/lib/../lib64/lp64d -L/opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/../../../../riscv64-unknown-linux-gnu/lib -L/opt/sc-dt/riscv-gcc/bin/../sysroot/lib64/lp64d -L/opt/sc-dt/riscv-gcc/bin/../sysroot/usr/lib64/lp64d -L/opt/sc-dt/riscv-gcc/bin/../sysroot/lib /tmp/ccOEmH9J.o /tmp/cc6v5KHP.o -lgcc --push-state --as-needed -lgcc_s --pop-state -lc -lgcc --push-state --as-needed -lgcc_s --pop-state /opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/crtend.o /opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/crtn.o

collect2 is the actual command called in the process of linking; it is a thin GCC wrapper that ultimately invokes the linker (ld).

Try examining every argument and describe what it is responsible for.

Most of the arguments are paths to libraries.

Static libraries are embedded directly into the application's code.

Let's create our own little static library:

riscv64-unknown-linux-gnu-ar cr libfact.a fact.o

Use the nm utility to get the list of symbols available in the archive:

riscv64-unknown-linux-gnu-nm libfact.a
fact.o:
0000000000010294 r __abi_tag
0000000000012040 B __BSS_END__
0000000000012038 B __bss_start
0000000000012038 b completed.0
0000000000012000 D __DATA_BEGIN__
0000000000012000 D __data_start
0000000000012000 W data_start
000000000001048a t deregister_tm_clones
00000000000104d0 t __do_global_dtors_aux
0000000000011e18 d __do_global_dtors_aux_fini_array_entry
0000000000012030 D __dso_handle
0000000000011e20 d _DYNAMIC
0000000000012038 D _edata
0000000000012040 B _end
0000000000010524 T fact
00000000000104f2 t frame_dummy
0000000000011e10 d __frame_dummy_init_array_entry
0000000000010608 r __FRAME_END__
0000000000012020 d _GLOBAL_OFFSET_TABLE_
0000000000012800 A __global_pointer$
00000000000105cc r __GNU_EH_FRAME_HDR
0000000000011e18 d __init_array_end
0000000000011e10 d __init_array_start
0000000000012028 D _IO_stdin_used
00000000000105c2 T __libc_csu_fini
000000000001056a T __libc_csu_init
                 U __libc_start_main@GLIBC_2.27
000000000001047e t load_gp
00000000000104f4 T main
                 U printf@GLIBC_2.27
0000000000010410 t _PROCEDURE_LINKAGE_TABLE_
00000000000104a8 t register_tm_clones
0000000000012028 D __SDATA_BEGIN__
0000000000010450 T _start
0000000000012000 D __TMC_END__

To link with your static or dynamic library, pass the -l<name> argument to the compilation flags, where <name> is the library name without the lib prefix and the file extension (e.g. -lfact for libfact.a or libfact.so).

Note that linking directly with ld is strongly discouraged; instead, use the gcc or clang driver and pass additional options to the linker if needed.
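
A minimal sketch of using the archive, assuming libfact.a sits in the current directory (static linking also lets qemu-riscv64 run the result without a sysroot, as the -static examples earlier did):

riscv64-unknown-linux-gnu-gcc main.c -o fact -L. -lfact -static
qemu-riscv64 ./fact
# 120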

Task 3.2

  • Create a separate directory with files for your static library
  • Write a Makefile target which creates the static library
  • Use nm to find out what symbols your library provides
  • Write a Makefile target which links your application against the library
  • Create your own static library for RISC-V. It would be even better if the application were useful, for instance a custom C logging library.

Dynamic linking

Static linking is portable, because all library code is embedded in the application and no platform support is required. But this makes the application's size increase dramatically.

The solution to this problem is dynamic libraries.

Let's create our little dynamic library and link our application against it:

CFLAGS=-fPIC make fact
riscv64-unknown-linux-gnu-gcc -shared fact.o -o libfact.so
$ file libfact.so
libfact.so: ELF 64-bit LSB shared object, UCB RISC-V, RVC, double-float ABI, version 1 (SYSV), dynamically linked, not stripped

Now link your application against libfact.so:

riscv64-unknown-linux-gnu-gcc main.o -o fact -lfact
/opt/sc-dt/riscv-gcc/bin/../lib/gcc/riscv64-unknown-linux-gnu/12.2.1/../../../../riscv64-unknown-linux-gnu/bin/ld: cannot find -lfact: No such file or directory
collect2: error: ld returned 1 exit status

This happened because our libfact.so is in the current working directory, and the linker does not know it should look there.

You can pass paths where the linker should search for libraries with the -L option:

riscv64-unknown-linux-gnu-gcc main.o -o fact -L. -lfact

We told the linker to search for libfact inside our current working directory.

file fact
fact: ELF 64-bit LSB executable, UCB RISC-V, RVC, double-float ABI, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-riscv64-lp64d.so.1, for GNU/Linux 4.15.0, not stripped

Now let's run it with qemu:

❯ qemu-riscv64 ./fact
./fact: error while loading shared libraries: libfact.so: cannot open shared object file: No such file or directory

What is wrong? We linked the library, didn't we?

The reason is that though we specified where to look for the dynamic library at link time, we didn't put that information into the binary.

Let's do it using rpath:

riscv64-unknown-linux-gnu-gcc main.o -o fact -L. -lfact -Wl,-rpath,.
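
You can check that the path was actually recorded in the binary's dynamic section:

riscv64-unknown-linux-gnu-readelf -d fact | grep -iE 'rpath|runpath'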

Now let's try again:

qemu-riscv64  ./fact
./fact: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory

Now qemu failed to find the C standard library. We already know how to fix it: let's pass the path to the sysroot with glibc (qemu-user's -L option sets the prefix where the guest's dynamic loader and libraries are looked up):

❯ qemu-riscv64 -L . -L /opt/sc-dt/riscv-gcc/sysroot/ ./fact
120

Our factorial finally works, and we have learned to create dynamic libraries.

Task 3.3

  • Create your own little dynamic library. First do it with the x86 toolchain, then for RISC-V.
  • Link an application with the library and make it run on QEMU and on the LicheePi (when available).

Intro to Linux and building it for embedded systems

In this lab you will be introduced to basic concepts of Linux-based systems. A basic structure and overview of Linux distributions (or distros) will be presented. You will use qemu-system to boot an existing image for the LicheePi 4A RISC-V SBC.

First of all, you need to build some of the required tools and download a system image, which you will need going forward. Follow the preparations below.

Prerequisites

The official Linux distro for the Lichee Pi 4A is RevyOS.

Revyos image

Download a prebuilt system image from the manufacturer here. Here's a direct link. Please choose the latest LPI4A_BASIC archive. Mega has arbitrary restrictions on download size, so LPI4A_FULL is too large and you will get throttled.

Distrobox for building inside ubuntu-22.04 container.

Distrobox is a tool for working inside OCI containers in your shell. Visit the homepage or the official website. Follow the installation instructions for your distribution.

Excerpt from the official installation docs:

If you like to live your life dangerously, or you want the latest release, you can trust [the author of distrobox] and simply run this in your terminal:

curl -s https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh
# or using wget
wget -qO- https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh

After you've installed distrobox you can create a fresh ubuntu-22.04 container with a single command:

distrobox create --image ubuntu:22.04 --name ubuntu-22.04-revyos-sdk

When the command has finished downloading the OCI container image, you can enter it like so:

distrobox enter ubuntu-22.04-revyos-sdk

All of the subsequent instructions assume you are working inside the newly created container. If you have any questions/problems please refer to the official quick-start guide.

Build RevyOS QEMU

The vendor has implemented some custom RISC-V extensions, so in order to run binaries compiled for their system you will need a custom build of QEMU.

Build instructions for ubuntu-22.04.

Install the build-time dependencies.

sudo apt install ninja-build python3-venv build-essential libglib2.0-dev flex bison libpixman-1-dev git fdisk file tsocks

Fetch the repository and compile it from source.

git clone https://github.com/revyos/qemu revyos-qemu
cd revyos-qemu
git submodule init
git submodule update --recursive
./configure --target-list=riscv64-softmmu,riscv64-linux-user --with-git='tsocks git'
make -j $(nproc)
sudo make install # This will install qemu-riscv64 and qemu-system-riscv64 inside the container
# Your host system will not be affected.
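
A quick check that the freshly built binaries are the ones on your PATH:

which qemu-system-riscv64 qemu-riscv64
qemu-system-riscv64 --version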

Prerequisites

You will need to complete all of the steps described in the preparations. Do not forget to enter the container with distrobox enter ubuntu-22.04-revyos-sdk.

Mounting the filesystem

mkdir revyos-with-qemu
cd revyos-with-qemu
cp <path to LPI4A_BASIC_20240111.zip> ./
unzip LPI4A_BASIC_20240111.zip
# Archive:  LPI4A_BASIC_20240111.zip
#    creating: LPI4A_BASIC_20240111/
#   inflating: LPI4A_BASIC_20240111/root.ext4
#   inflating: LPI4A_BASIC_20240111/boot.ext4
#   inflating: LPI4A_BASIC_20240111/flash_image.sh
#   inflating: LPI4A_BASIC_20240111/fastboot
#   inflating: LPI4A_BASIC_20240111/u-boot-with-spl-lpi4a.bin
#   inflating: LPI4A_BASIC_20240111/u-boot-with-spl-lpi4a-16g.bin 

The extracted image contains:

  • u-boot-with-spl-lpi4a.bin - The secondary program loader (SPL) and primary u-boot for the 8 GB DDR board.
  • u-boot-with-spl-lpi4a-16g.bin - Same as u-boot-with-spl-lpi4a.bin, but for the 16 GB variant.
  • root.ext4 - Root filesystem
  • boot.ext4 - Boot partition

fdisk -l LPI4A_BASIC_20240111/root.ext4
# Disk LPI4A_BASIC_20240111/root.ext4: 4 GiB, 4294967296 bytes, 8388608 sectors
# Units: sectors of 1 * 512 = 512 bytes
# Sector size (logical/physical): 512 bytes / 512 bytes
# I/O size (minimum/optimal): 512 bytes / 512 bytes
fdisk -l LPI4A_BASIC_20240111/boot.ext4
# Disk LPI4A_BASIC_20240111/boot.ext4: 500 MiB, 524288000 bytes, 1024000 sectors
# Units: sectors of 1 * 512 = 512 bytes
# Sector size (logical/physical): 512 bytes / 512 bytes
# I/O size (minimum/optimal): 512 bytes / 512 bytes

In order to view the contents of the filesystems you need to mount them.

sudo mkdir /mnt/boot /mnt/root -p
sudo mount LPI4A_BASIC_20240111/boot.ext4 /mnt/boot -v
# mount: /dev/loop0 mounted on /mnt/boot.
sudo mount LPI4A_BASIC_20240111/root.ext4 /mnt/root -v
# mount: /dev/loop1 mounted on /mnt/root.
cd /mnt/boot
# ⬢ [Docker] ❯ lla
# Permissions Size User Date Modified Name
# drwxr-xr-x     - root  9 Oct  2023   dtbs
# drwxr-xr-x     - root 14 Dec  2023   extlinux
# drwx------     - root  9 Oct  2023   lost+found
# .rw-r--r--  167k root  6 Sep  2023   config-5.10.113-lpi4a
# .rwxr-xr-x   86k root  8 Oct  2023   fw_dynamic.bin
# .rwxr-xr-x   23M root 20 Dec  2023   Image
# .rw-r--r--  5.2M root  9 Oct  2023   initrd.img-5.10.113-lpi4a
# .rwxr-xr-x   50k root  6 Sep  2023   light_aon_fpga.bin
# .rwxr-xr-x  6.1M root  8 Oct  2023   light_c906_audio.bin
# .rw-r--r--  6.2M root  6 Sep  2023   System.map-5.10.113-lpi4a
# .rwxr-xr-x   23M root  6 Sep  2023   vmlinux-5.10.113-lpi4a

Files of primary interest are:

  • fw_dynamic.bin - OpenSBI with dynamic information. More info at fw_dynamic.md
  • dtbs - Compiled device tree binaries (see the sketch after this list for one way to inspect them).
  • Image - Kernel, which is run by the bootloader (uboot). This is a statically linked binary.
  • vmlinux-5.10.113-lpi4a - Kernel binary produced during the compilation process. It is not bootable.
  • initrd.img-5.10.113-lpi4a - Initial ram disk for Phase 1 init before rootfs can be mounted.
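
If you want to peek inside the device tree binaries, dtc can decompile them back to source form (a sketch; it assumes you install the device-tree-compiler package and substitute a .dtb path from the find output):

sudo apt install device-tree-compiler
find /mnt/boot/dtbs -name '*.dtb'
dtc -I dtb -O dts <path-from-find-output> | less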

To view the contents of the initrd, it first needs to be decompressed.

cd <path to revyos-with-qemu>
cp /mnt/boot/initrd.img-5.10.113-lpi4a .
file initrd.img-5.10.113-lpi4a
# initrd.img-5.10.113-lpi4a: gzip compressed data, was "mkinitramfs-MAIN_oNzF9f", last modified: Mon Oct  9 14:10:38 2023, from Unix, original size modulo 2^32 12879360
gunzip initrd.img-5.10.113-lpi4a -d -c > initrd.img-5.10.113-lpi4a.cpio
file initrd.img-5.10.113-lpi4a.cpio
# initrd.img-5.10.113-lpi4a.cpio: ASCII cpio archive (SVR4 with no CRC)
cpio --list -i < ./initrd.img-5.10.113-lpi4a.cpio | head -n 15
# .
# bin
# conf
# conf/arch.conf
# conf/conf.d
# conf/initramfs.conf
# etc
# etc/fstab
# etc/ld.so.cache
# etc/ld.so.conf
# etc/ld.so.conf.d
# etc/ld.so.conf.d/libc.conf
# etc/ld.so.conf.d/riscv64-linux-gnu.conf
# etc/modprobe.d
# etc/mtab
# 25155 blocks
mkdir initrd-contents
cd initrd-contents
cpio -idmv < ../initrd.img-5.10.113-lpi4a.cpio
# ⬢ [Docker] ❯ lla
# Permissions Size Date Modified Name
# lrwxrwxrwx     - 26 Mar 12:02   bin -> usr/bin
# drwxr-xr-x     - 26 Mar 12:02   conf
# drwxr-xr-x     - 26 Mar 12:02   etc
# lrwxrwxrwx     - 26 Mar 12:02   lib -> usr/lib
# drwxr-xr-x     -  9 Oct  2023   run
# lrwxrwxrwx     - 26 Mar 12:02   sbin -> usr/sbin
# drwxr-xr-x     - 26 Mar 12:02   scripts
# drwxr-xr-x     - 26 Mar 12:02   usr
# .rwxr-xr-x  6.6k 10 Apr  2022   init

Running the extracted image with QEMU

Now that we have everything extracted it's easy to run the image with the custom vendor fork of qemu, which you've previously compiled from source.

First you will need to modify the /mnt/root/etc/fstab file a bit. Replace the contents of the file with the following:

# UNCONFIGURED FSTAB FOR BASE SYSTEM
/dev/vda /   auto    defaults    1 1
/dev/vdb /boot   auto    defaults    0 0

To do this you can use your favourite editor.

echo "# UNCONFIGURED FSTAB FOR BASE SYSTEM
/dev/vda /   auto    defaults    1 1
/dev/vdb /boot   auto    defaults    0 0
" > fstab
sudo cp fstab /mnt/root/etc/fstab
cat /mnt/root/etc/fstab
# # UNCONFIGURED FSTAB FOR BASE SYSTEM
# /dev/vda /   auto    defaults    1 1
# /dev/vdb /boot   auto    defaults    0 0

Now you can launch qemu with the following command:

sudo umount /mnt/root
e2fsck -y LPI4A_BASIC_20240111/root.ext4
e2fsck -y LPI4A_BASIC_20240111/boot.ext4
sync
revyos-qemu/build/qemu-system-riscv64 -M virt -cpu c910v -smp 1 -m 3G -kernel /mnt/boot/Image -append "root=/dev/vda rw console=ttyS0" -drive file=LPI4A_BASIC_20240111/root.ext4,format=raw,id=hd0 -device virtio-blk-device,drive=hd0 -drive file=LPI4A_BASIC_20240111/boot.ext4,format=raw,id=hd1 -device virtio-blk-device,drive=hd1 -initrd /mnt/boot/initrd.img-5.10.113-lpi4a -nographic

You will see a boot log that looks something like this:

OpenSBI v0.9
   ____                    _____ ____ _____
  / __ \                  / ____|  _ \_   _|
 | |  | |_ __   ___ _ __ | (___ | |_) || |
 | |  | | '_ \ / _ \ '_ \ \___ \|  _ < | |
 | |__| | |_) |  __/ | | |____) | |_) || |_
  \____/| .__/ \___|_| |_|_____/|____/_____|
        | |
        |_|

Platform Name             : riscv-virtio,qemu
Platform Features         : timer,mfdeleg
Platform HART Count       : 1
Firmware Base             : 0x80000000
Firmware Size             : 100 KB
Runtime SBI Version       : 0.2

....

[    0.000000] Linux version 5.10.113+ (ubuntu@ubuntu-2204-buildserver) (riscv64-unknown-linux-gnu-gcc (Xuantie-900 linux-5.10.4 glibc gcc Toolchain V2.6.1 B-20220906) 10.2.0, GNU ld (GNU Binutils) 2.35) #1 SMP PREEMPT Wed Dec 20 08:25:29 UTC 2023
[    0.000000] OF: fdt: Ignoring memory range 0x80000000 - 0x80200000
[    0.000000] efi: UEFI not found.
[    0.000000] Initial ramdisk at: 0x(____ptrval____) (5160960 bytes)
[    0.000000] Zone ranges:
[    0.000000]   DMA32    [mem 0x0000000080200000-0x00000000ffffffff]
[    0.000000]   Normal   [mem 0x0000000100000000-0x000000013fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000080200000-0x000000013fffffff]

After a while you should see the login prompt:

[  OK  ] Started serial-getty@ttyS0…rvice - Serial Getty on ttyS0.
[  OK  ] Reached target getty.target - Login Prompts.
[  OK  ] Reached target multi-user.target - Multi-User System.
[  OK  ] Reached target graphical.target - Graphical Interface.
         Starting systemd-update-ut… Record Runlevel Change in UTMP...
[  OK  ] Finished systemd-update-ut… - Record Runlevel Change in UTMP.

Debian GNU/Linux 12 lpi4a ttyS0

lpi4a login:

Enter the default login debian and password debian as documented in the docs.

You should now be in the shell:


   ____              _ ____  ____  _  __  ____  _                     _
  |  _ \ _   _ _   _(_) ___||  _ \| |/ / / ___|(_)_ __   ___  ___  __| |
  | |_) | | | | | | | \___ \| | | | ' /  \___ \| | '_ \ / _ \/ _ \/ _` |
  |  _ <| |_| | |_| | |___) | |_| | . \   ___) | | |_) |  __/  __/ (_| |
  |_| \_\\__,_|\__, |_|____/|____/|_|\_\ |____/|_| .__/ \___|\___|\__,_|
               |___/                             |_|
                                        -- Presented by ISCAS and Sipeed

  Debian GNU/Linux 12 (bookworm) (kernel 5.10.113+)

Linux lpi4a 5.10.113+ #1 SMP PREEMPT Wed Dec 20 08:25:29 UTC 2023 riscv64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
debian@lpi4a:~$

The image contains python3, so you can run a basic hello-world in the Python REPL:

python3
# Python 3.11.4 (main, Jun  7 2023, 10:13:09) [GCC 12.2.0] on linux
# Type "help", "copyright", "credits" or "license" for more information.
# >>> print("Hello, World")
# Hello, World
<Ctrl + D>

Run the uname utility to print information about the current machine:

uname -a
# Linux lpi4a 5.10.113+ #1 SMP PREEMPT Wed Dec 20 08:25:29 UTC 2023 riscv64 GNU/Linux

Using buildroot to cross-compile embedded linux systems

Buildroot is a simple, efficient and easy-to-use tool to generate embedded Linux systems through cross-compilation.

Buildroot is a widely used tool for cross-compilation. It's quite easy to use compared to alternatives like Yocto. It takes care of bootstrapping a suitable toolchain with support for various standard library implementations (glibc, musl, uClibc-ng). Most importantly, it can build a rootfs from scratch, including the necessary bootloaders.

In this lab we will be using mainline buildroot to build a bootable disk image for Sipeed LP4A.

Cross compiling a system for RISC-V with Buildroot

Getting buildroot

First off you will need to install dependencies and download mainline buildroot.

wget https://buildroot.org/downloads/buildroot-2024.02.1.tar.gz
tar -xzvf buildroot-2024.02.1.tar.gz
cd buildroot-2024.02.1/
sudo apt install libncurses-dev file cpio libssl-dev fdisk dosfstools cmake ccache build-essential

Create a defconfig file with the following command. This defines a minimal set of buildroot options.

echo 'BR2_riscv=y
BR2_RISCV_ISA_RVC=y
BR2_TOOLCHAIN_BUILDROOT_MUSL=y
BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_5_10=y
BR2_GCC_VERSION_13_X=y
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_CUSTOM_GIT=y
BR2_LINUX_KERNEL_CUSTOM_REPO_URL="https://github.com/revyos/thead-kernel"
BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION="lpi4a"
BR2_LINUX_KERNEL_DEFCONFIG="revyos"
BR2_LINUX_KERNEL_DTS_SUPPORT=y
BR2_LINUX_KERNEL_INTREE_DTS_NAME="thead/light-lpi4a"
BR2_LINUX_KERNEL_NEEDS_HOST_PAHOLE=y
BR2_TARGET_ROOTFS_EXT2=y
BR2_TARGET_ROOTFS_EXT2_4=y
BR2_TARGET_ROOTFS_EXT2_SIZE="512M"
BR2_TARGET_ROOTFS_INITRAMFS=y
# BR2_TARGET_ROOTFS_TAR is not set
BR2_TARGET_OPENSBI=y
BR2_TARGET_OPENSBI_CUSTOM_GIT=y
BR2_TARGET_OPENSBI_CUSTOM_REPO_URL="https://github.com/revyos/opensbi"
BR2_TARGET_OPENSBI_CUSTOM_REPO_VERSION="th1520-v1.4"
BR2_TARGET_OPENSBI_PLAT="generic"
# BR2_TARGET_OPENSBI_INSTALL_JUMP_IMG is not set
BR2_TARGET_UBOOT=y
BR2_TARGET_UBOOT_BUILD_SYSTEM_KCONFIG=y
BR2_TARGET_UBOOT_CUSTOM_GIT=y
BR2_TARGET_UBOOT_CUSTOM_REPO_URL="https://github.com/revyos/thead-u-boot"
BR2_TARGET_UBOOT_CUSTOM_REPO_VERSION="th1520"
BR2_TARGET_UBOOT_BOARD_DEFCONFIG="light_lpi4a"
BR2_TARGET_UBOOT_NEEDS_OPENSBI=y
BR2_TARGET_UBOOT_SPL=y
BR2_PACKAGE_HOST_UBOOT_TOOLS=y' > defconfig

Then create .config via make:

make defconfig BR2_DEFCONFIG=./defconfig

Now the only thing left to do is launch make and wait for it to complete. Depending on the machine, this might take a while.
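
The build itself is just make in the buildroot directory; per the defconfig above it downloads and compiles the cross toolchain, the kernel, OpenSBI and u-boot, so keeping a log can be handy:

make 2>&1 | tee build.log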

Next you will need to fetch vendor-provided binary blobs (boot firmware):

wget https://github.com/revyos/th1520-boot-firmware/archive/refs/tags/20240127+sdk1.4.2.tar.gz
tar -xzvf 20240127+sdk1.4.2.tar.gz

Now we can create our boot filesystem partition:

dd if=/dev/zero of=output/images/bootfs.ext4 bs=4M count=32 && sync
sudo mkfs.ext4 output/images/bootfs.ext4
sudo mkdir /mnt/boot -p
sudo mount output/images/bootfs.ext4 /mnt/boot
sudo mkdir /mnt/boot/extlinux/dtbs -p
sudo cp output/images/fw_dynamic.bin output/images/Image /mnt/boot
sudo cp th1520-boot-firmware-20240127-sdk1.4.2/addons/boot/light_* /mnt/boot/
sudo cp output/images/light-lpi4a.dtb /mnt/boot/extlinux/dtbs
sudo rm /mnt/boot/lost+found -rf

Create extlinux.conf:

echo 'DEFAULT makeshiftos-default
MENU TITLE ------------------------------------------------------------
TIMEOUT 50

label makeshiftos-default
  MENU LABEL MakeShiftOS Default
  LINUX /Image
  FDT dtbs/light-lpi4a.dtb
  append root=/dev/mmcblk0p3 console=ttyS0,115200 rootwait rw earlycon clk_ignore_unused loglevel=7 eth= rootrwoptions=rw,noatime rootrwreset=yes' > extlinux.conf
sudo mv extlinux.conf /mnt/boot/extlinux/extlinux.conf

Unmount the filesystem and flash it to the LP4A board:

sync
sudo umount /mnt/boot
sudo ./fastboot flash ram output/images/u-boot-spl.bin
sudo ./fastboot reboot
sudo ./fastboot flash uboot output/images/u-boot-spl.bin
sudo ./fastboot flash boot output/images/bootfs.ext4
sudo ./fastboot flash root output/images/rootfs.ext4

(Optional) When connecting via UART you will need to set serial options in the following way:

sudo stty -F /dev/ttyUSB0 ispeed 115200 ospeed 115200 cs8 raw
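
After that, attach any serial terminal program to the port (picocom and GNU screen are common choices; install whichever you prefer):

sudo picocom -b 115200 /dev/ttyUSB0
# or
sudo screen /dev/ttyUSB0 115200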

References