Programming

Embedded Rust debugging. Setting up a debugging workflow from… | by Mattia Fiumara | September 2023


Photo by Christopher Gower on Unsplash

In my last article, I explored what it means to code a simple IoT application for the ESP32 series using Rust bindings for the ESP-IDF. This time, I want to delve into an aspect I haven’t talked about yet: debugging, particularly on bare metal.

Since coding Rust on embedded targets with standard library support might have seemed easy, I would like to show what it looks like to program Rust on a target that does not support the standard library, and how to set up such a project from scratch. I set the following goals for myself:

  • Navigate the bare metal Rust ecosystem and determine what we need to set up a project from scratch when using no_std
  • Configure a logging system capable of printing logs via JTAG/SWD
  • Attach a debugger to the application and step through the code using VSCode

Hardware configuration

I will use a Nordic Thingy:91, targeting its nRF52840, in combination with a Black Magic Probe to run and debug the code. This is what it looks like:

Thingy:91 with Black Magic Probe

If you want to follow along with the article, you can find the associated code here. Just make sure you have Rust installed on your system and you are good to go.

For some context, let’s look at the definition of Bare metal according to Wikipedia:

In computer science, bare machine (or bare metal) refers to a computer executing instructions directly on logic hardware without an intervening operating system.

Normally, when you write a program in Rust, the standard library calls upon the operating system when, for example, you read and write files, open network sockets, or do other types of I/O (like printing to a console).

When programming for embedded devices with limited memory, the code usually runs directly on the processor. In these cases, Rust allows you to compile your application without the standard library. This is called no_std, and the implications are as follows:

  • You must bring your own runtime
  • You will need to use some sort of HAL (Hardware Abstraction Layer) or PAC (Peripheral Access Crate) to control the hardware
  • If you want to use dynamic memory, you need to bring your own allocator (see the sketch after this list)
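
As a minimal sketch of that last point, here is what bringing your own allocator can look like using the embedded-alloc crate. The crate choice, heap size, and versions are illustrative and not part of this article’s project, and the example borrows #[entry] from cortex-m-rt, which we only add a few sections further down:

#![no_std]
#![no_main]

extern crate alloc;

use alloc::vec::Vec;
use core::mem::MaybeUninit;
use core::panic::PanicInfo;
use cortex_m_rt::entry;
// embedded-alloc ~0.5; newer versions may ask you to select an allocator feature
use embedded_alloc::Heap;

// Register the heap as the global allocator so `alloc` types work
#[global_allocator]
static HEAP: Heap = Heap::empty();

#[entry]
fn main() -> ! {
    // Hand the allocator a static chunk of RAM before first use
    const HEAP_SIZE: usize = 1024;
    static mut HEAP_MEM: [MaybeUninit<u8>; HEAP_SIZE] = [MaybeUninit::uninit(); HEAP_SIZE];
    unsafe { HEAP.init(core::ptr::addr_of_mut!(HEAP_MEM) as usize, HEAP_SIZE) }

    // Dynamic collections are now available without the standard library
    let mut addresses: Vec<u8> = Vec::new();
    addresses.push(0x42);

    loop {}
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}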

Getting started

To begin, let’s initialize a new project using cargo init, then use the following code listing as a starting point:

#![no_std]
// The #[entry] attribute from cortex-m-rt (added in the next section) also requires #![no_main]
#![no_main]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

fn main() -> ! {
    loop {}
}

We specify at the top that we operate in a bare-metal environment without an OS using #![no_std]. This ensures that we can compile our code for our target hardware, for which there is no standard library support (see Platform Support for a list of targets that do or do not support the Rust standard library).

When we use #![no_std], we also need to specify a panic handler ourselves using the #[panic_handler] attribute. This is necessary because the standard panic behavior relies on functionality from the standard library, which means our program will not compile if we do not provide this ourselves.
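
Since we will be attaching a debugger later anyway, a slightly nicer handler than an empty loop is one that executes a breakpoint instruction, so the debugger halts right at the panic site. This is a sketch of my own, not part of the article’s project, and it relies on the cortex-m crate that we add further down:

use core::panic::PanicInfo;
use cortex_m::asm;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    // Trigger a breakpoint so an attached debugger stops exactly here.
    // Without a debugger attached, BKPT typically escalates to a fault,
    // so only use this in debug setups.
    asm::bkpt();
    loop {}
}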

Runtime

When compiling to executable code, we need to tell our processor where the entry point of our application is and set up processor registers such as the program counter and the stack pointer. For this, we need to include a runtime. By far the most popular choice for cortex-m targets is cortex-m-rt. Add the crate to your project using cargo add:

$ cargo add cortex-m-rt
Updating crates.io index
Adding cortex-m-rt v0.7.3 to dependencies.
Features:
- device
- set-sp
- set-vtor
Updating crates.io index

The only thing we need to change now is to annotate our main function using #[entry] to indicate that this is the application entry point:

use cortex_m_rt::entry;

#[entry]
fn main() -> ! {
    loop {}
}

HAL and PAC

To control the Thingy:91’s peripherals from our binary, we use a high-level hardware abstraction layer (HAL); for some less well-supported targets you will need to resort to a peripheral access crate (PAC). Fortunately, the Nordic series of microcontrollers is well supported by the embedded Rust community (check out the nrf-hal GitHub), so we don’t need to poke at the registers directly. For my specific chip, I will add the nrf52840-hal:

$ cargo add nrf52840-hal
Updating crates.io index
Adding nrf52840-hal v0.16.0 to dependencies.
Features:
+ rt
- doc
Updating crates.io index

To specify how everything gets linked for the target, we copy the memory.x file from the HAL to the root of our project. This linker script tells the linker where to place our code in flash and where our statically defined variables live in RAM.
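
For reference, the memory.x for the nRF52840 looks roughly like this (the chip has 1 MB of flash and 256 KB of RAM); if you use a different chip, adjust the origins and lengths to match its datasheet:

MEMORY
{
  /* nRF52840: 1 MB of flash, 256 KB of RAM */
  FLASH : ORIGIN = 0x00000000, LENGTH = 1024K
  RAM : ORIGIN = 0x20000000, LENGTH = 256K
}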

Adding logging

To log via Real-Time Transfer (RTT), I found two well-supported crates to choose from: rtt-target and defmt.

If you have a larger project, I highly recommend looking into defmt, as its logging capabilities really shine in larger projects with multiple modules. Since we only have one source file and defmt requires additional configuration in combination with a BMP, I’ll use rtt-target.

You know the drill by now: cargo add rtt-target adds the crate to your project. Additionally, you must provide rtt-target with a critical-section implementation. This is needed to ensure that different threads can access the same logging instance without risk of memory corruption, even though we only have a single thread to worry about here. For the cortex-m CPU architecture, the easiest option is to use the feature provided by the cortex-m crate. Manually edit the dependency in your Cargo.toml file to include the feature:

cortex-m = { version = "0.7.7", features = ["critical-section-single-core"] }
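
After all the cargo add invocations, the dependency section of Cargo.toml should look roughly like this. The rtt-target version below is a placeholder; use whichever version cargo add resolved for you:

[dependencies]
cortex-m = { version = "0.7.7", features = ["critical-section-single-core"] }
cortex-m-rt = "0.7.3"
nrf52840-hal = "0.16.0"
rtt-target = "0.4" # placeholder, use the version cargo add picked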

Now that we’ve got that out of the way, we can finally write some code! The code we’ll be looking at is a simple application that scans the Thingy:91’s I2C bus and prints a table of the devices present on the bus (similar to i2cdetect under Linux). Here is the new content of our main function:

use nrf52840_hal::{self as hal, pac};
use rtt_target::{rprint, rprintln, rtt_init_print};

#[entry]
fn main() -> ! {
    rtt_init_print!();

    // Acquire a reference to the peripherals and GPIO port 1
    let p = pac::Peripherals::take().unwrap();
    let port1 = hal::gpio::p1::Parts::new(p.P1);
    // The I2C pins of the nrf52840 on the thingy91, replace if you're using different hardware
    let sda = port1.p1_08.into_floating_input();
    let scl = port1.p1_09.into_floating_input();
    // Instantiate and enable the Two-Wire Interface Master peripheral (I2C)
    let mut twim = hal::Twim::new(
        p.TWIM0,
        hal::twim::Pins {
            sda: sda.degrade(),
            scl: scl.degrade(),
        },
        hal::twim::Frequency::K400,
    );
    twim.enable();

    rprintln!("Scanning I2C bus...\r");
    // Print the I2C table header
    rprintln!(" 0 1 2 3 4 5 6 7 8 9 a b c d e f\r");
    rprint!("00: ");
    // Loop over all addresses on the I2C bus
    for i in 1..0xFF {
        if i % 0x10 == 0 {
            rprint!("\r\n{:X}: ", i);
        }
        // We issue a simple read to check if there's an ACK.
        // We do not care about the result in the buffer, but we need to
        // provide a non-empty one
        let mut buffer: [u8; 1] = [0xFF];
        match twim.read(i, &mut buffer) {
            Ok(_) => {
                rprint!("{:X} ", i);
            }
            Err(err) => {
                match err {
                    // In case of a NACK we print -- similar to i2cdetect on Linux
                    hal::twim::Error::AddressNack => {
                        rprint!("-- ");
                    }
                    _ => {
                        // Handle other error types if needed
                        rprintln!("Error reading from TWIM: {:?}\r", err);
                        break;
                    }
                }
            }
        }
    }
    rprintln!("\r\nDone!\r");
    loop {}
}

A brief summary of what’s happening here:

  1. First, RTT logging is initialized using the rtt_init_print macro. This ensures that we can print to our logging console (in our case, the BMP’s serial device)
  2. The NRF52840’s TWIM device is initialized and enabled with the pins corresponding to the I2C bus on the Thingy91
  3. Next, we print some logs and a table of all I2C addresses, showing which devices are present on the bus

Compilation

To compile the application, we need to specify the target we are compiling for. The most practical way is to create a .cargo/config file where we specify this, which saves us from passing it as a command-line parameter to cargo each time. Here are the contents:

[build]
target = "thumbv7em-none-eabihf"

Make sure to install the required target using rustup, after which you can compile your application and check the binary size to make sure everything is linked correctly. It should look like this:

$ rustup target add thumbv7em-none-eabihf
info: downloading component ‘rust-std’ for ‘thumbv7em-none-eabihf’
info: installing component ‘rust-std’ for ‘thumbv7em-none-eabihf’
$ cargo build
Compiling rust-baremetal-debug v0.1.0 (/Users/mfiumara/repos/rust-debug-2023)
Finished dev (unoptimized + debuginfo) target(s) in 0.08s
$ arm-none-eabi-size target/thumbv7em-none-eabihf/debug/rust-baremetal-debug
text data bss dec hex filename
20056 0 1092 21148 529c target/thumbv7em-none-eabihf/debug/rust-baremetal-debug

Note that this is an unoptimized build. When building with cargo build --release, the binary size is reduced by more than half:

$ arm-none-eabi-size target/thumbv7em-none-eabihf/release/rust-baremetal-debug
text data bss dec hex filename
8092 0 1092 9184 23e0 target/thumbv7em-none-eabihf/release/rust-baremetal-debug
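
If binary size matters even more, the usual optional knobs live in the release profile of Cargo.toml. These are standard Cargo settings and not something the setup above requires:

[profile.release]
opt-level = "z"   # optimize for size instead of speed
lto = true        # link-time optimization across crates
codegen-units = 1 # better optimization at the cost of compile time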

Flash and debug the program

Interestingly, Rust doesn’t ship with its own debugger; it relies on existing debuggers like GDB or LLDB. If you are using a J-Link or similar probe, I highly recommend taking a look at probe-rs. It is an amazing project that makes flashing and debugger setup easy with commands like cargo flash and cargo embed, for a great debugging workflow.

Since I’m using a Black Magic Probe, things are slightly different: it hosts a full GDB server on the probe itself, which unfortunately is not compatible with probe-rs. Instead of probe-rs, we will connect VS Code to GDB using the Cortex-Debug extension, which supports the Black Magic Probe. The following .vscode/launch.json file launches the application code (note that we enable RTT using mon rtt after attaching to GDB, via the postLaunchCommands option):

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Cortex Debug",
            "cwd": "${workspaceFolder}",
            "executable": "target/thumbv7em-none-eabihf/debug/rust-baremetal-debug",
            "request": "launch",
            "type": "cortex-debug",
            "BMPGDBSerialPort": "/dev/cu.usbmodem98B724951",
            "servertype": "bmp",
            "interface": "swd",
            "runToEntryPoint": "main",
            "postLaunchCommands": ["mon rtt"]
        }
    ]
}

Now it’s finally time to press the magic “debug” button in VS Code and see if we have everything set up correctly. Just make sure you have a window open to monitor the RTT output. In my case, screen /dev/tty.usbmodem98B724953:

The final debugging setup in VS Code

Looks like we finally made it! Our application launches and stops at the entry point we defined using #[entry]. We can step through the code and even go into the library code to explore how the HAL configures the TWIM peripheral registers or how cortex-m-rt handles CPU initialization.

Once the program completes, we can see the RTT output being sent to our serial device, showing up in the terminal window. This is what it looks like:

Scanning I2C bus...
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: 50 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- 76 -- -- -- -- -- -- -- -- --
80: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
90: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
A0: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
B0: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
C0: -- -- -- -- -- -- C6 -- -- -- -- -- -- -- -- --
D0: D0 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
E0: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
F0: -- -- -- -- -- -- F6 -- -- -- -- -- -- -- --
Done!

Our program successfully detects six devices on the I2C bus and then enters the infinite loop at the end of main.

This illustrates everything that is necessary to set up a no_std project in Rust from scratch with some basic debugging capabilities. It’s always a bit of a pain to set things up in any embedded project, so it’s good to go through a project setup like this in Rust once to really understand how everything works together, what is needed, and what isn’t.

For your future projects, I recommend checking out one of the available project templates instead of going through the whole exercise of setting everything up from scratch. A good example for cortex-m targets is the cortex-m-quickstart project.

This sets up the basic runtime, although you still need to set up a logging system yourself and add a HAL. Also check if probe-rs is an option for you, especially if you plan to combine it with defmt.
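
As a usage example, the quickstart template is typically instantiated with cargo-generate; the commands below assume you have not installed cargo-generate yet:

$ cargo install cargo-generate
$ cargo generate --git https://github.com/rust-embedded/cortex-m-quickstart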
