# Best Practice for RIOT Programming
- Use the methodology described below.
- Use ccache to speed up compilation.
- Use static memory. See also Static vs. Dynamic Memory.
- Select thread priorities carefully.
- Minimize stack usage with `DEVELHELP` and `CREATE_STACKTEST` (see the thread sketch after this list).
- Use threads to increase flexibility, modularity, and robustness by leveraging IPC.
- Use an unsigned or signed integer type (`unsigned`, `int`, `size_t` or `ssize_t`) for loop variables wherever possible, but keep in mind that on some platforms an `int` is only 16 bits wide. In general, avoid types like `uint8_t` for loop iterators, as they will likely be more expensive on some platforms.
- Join and factor out parts of your code with existing code in RIOT, where it makes sense.
- Check all size/length parameters when passing memory, e.g. using `sizeof(x)` or `strlen(x)` as appropriate. Make sure you don't use the wrong one with a pointer (see the length-checking sketch after this list).
- Make sure all code paths can be reached. Make sure there are no always-true/false conditions.
- Make sure all critical sections (`lock`/`unlock`, `acquire`/`release`, ...) are always closed on every code path (see the critical-section sketch after this list).
- Make sure return values are consistent with our API documentation.
- Use `assert()` statements to check parameters rather than returning an error code at run-time, to keep the code size down (see the assert/DEBUG sketch after this list).
- Use the `DEBUG(...)` macro rather than `log_x(...)`.
- Declare all internal module variables and functions `static`.
- Make sure variables are reduced in scope as much as possible.
- Use an appropriate signedness for your variables.
- Make sure the variables are big enough to prevent overflow. Be aware that the code may run on platforms with different sizes of variables; for example, `int`/`unsigned` is only 16-bit on msp430 and avr8. If in doubt, use portable types.
- Reduce the number of function calls as far as possible without duplicating code.
- Use good judgement when using `static inline` functions and macros. If they are used in multiple places, is the increase in performance worth the penalty in code size?
- Use memory judiciously in general. For example:
```c
typedef enum {
    A,
    B,
    /* ... */
} foo_t;

int bar(foo_t v)
{
    int abc;
    /* ... */
    switch (v) {
        case A:
            abc = 23;
            break;
        case B:
            abc = 42;
            break;
        /* ... */
    }
    /* ... */
}

/* VS */

typedef enum {
    A = 23,
    B = 42,
    /* ... */
} foo_t;

int bar(foo_t v)
{
    int abc = v;
    /* ... */
}
```
- Don't use too many threads. Try not to use more than one thread per module. Don't create threads for one-time tasks.
- Don't use the POSIX wrapper if implementing something from scratch.
- Don't allocate big chunks of memory (for instance the IPC message queue) on the stack, but rather use static memory for that (see the thread sketch after this list).
- Don't over-provision memory.
- Don't pass stack memory between different contexts unless you can prove conclusively that it won't be a problem.
- Don't use enums for flags, because flags have a width in memory that is in most cases smaller than `sizeof(enum)` (most bitfields are 16 bits max, while on most of our newer platforms `sizeof(enum)` is 32 bits). This results in every assignment needing to be cast to either `uint8_t` or `uint16_t`. With macros you don't need to cast, since they are typeless. Making the enum packed makes its width unpredictable in terms of alignment issues when used in a struct (see the flags sketch after this list).
- Don't duplicate code from elsewhere in the RIOT code base, unless there is a very good reason to do so.
- Don't duplicate code within your own code, unless there is a very good reason to do so. Use internal functions to this end.
- Don't mix up logical and bitwise operations (`!` vs. `~`, or `&&` vs. `&`); see the operator sketch after this list.
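A few of the points above are illustrated with short sketches below; all function, variable, and module names in them are made up for illustration. First, the thread sketch: a statically allocated stack and IPC message queue, assuming the `thread_create()` and `msg_init_queue()` APIs from RIOT's `thread.h` and `msg.h`, and `THREAD_CREATE_STACKTEST` so that the actual stack usage can be inspected later (e.g. via the `ps` shell command in a `DEVELHELP` build).

```c
#include "thread.h"
#include "msg.h"

/* Static memory for the stack and the IPC message queue: no big chunks on
 * any thread's stack, and the memory usage is known at compile time. */
static char worker_stack[THREAD_STACKSIZE_DEFAULT];
static msg_t worker_msg_queue[8];

static void *worker_thread(void *arg)
{
    (void)arg;

    /* The message queue lives in static memory, not on this stack. */
    msg_init_queue(worker_msg_queue, 8);

    msg_t msg;
    while (1) {
        msg_receive(&msg);
        /* handle msg ... */
    }
    return NULL;
}

void start_worker(void)
{
    /* THREAD_CREATE_STACKTEST fills the stack with a pattern so the stack
     * space actually used can be measured afterwards, which helps
     * right-size the stack. */
    thread_create(worker_stack, sizeof(worker_stack),
                  THREAD_PRIORITY_MAIN - 1, THREAD_CREATE_STACKTEST,
                  worker_thread, NULL, "worker");
}
```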
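The length-checking sketch, in plain C: `sizeof()` applied to a pointer yields the size of the pointer, not of the data it points to; `copy_name()` and `struct record` are hypothetical.

```c
#include <string.h>

#define NAME_LEN 16

struct record {
    char name[NAME_LEN];
};

/* `src` is a NUL-terminated string of unknown length. */
int copy_name(struct record *rec, const char *src)
{
    /* WRONG: sizeof(src) is the size of the pointer (4 or 8 bytes),
     * not the length of the string it points to:
     *     memcpy(rec->name, src, sizeof(src));
     */

    /* Right: compare the string length against the destination buffer,
     * whose size sizeof() does know, because it is an array here. */
    if (strlen(src) >= sizeof(rec->name)) {
        return -1;  /* would not fit, including the terminating NUL byte */
    }
    strcpy(rec->name, src);
    return 0;
}
```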
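The critical-section sketch, assuming RIOT's `mutex.h`; `do_update()` is a hypothetical helper. The early-return path is the typical way a lock accidentally stays held.

```c
#include "mutex.h"

static mutex_t table_lock = MUTEX_INIT;

/* Hypothetical helper, stubbed out for this sketch. */
static int do_update(int key, int value)
{
    (void)key;
    (void)value;
    return 0;
}

int update_table(int key, int value)
{
    mutex_lock(&table_lock);

    int res = do_update(key, value);
    if (res < 0) {
        /* A bare `return res;` here would leave the mutex locked forever:
         * every exit path out of the critical section must release it. */
        mutex_unlock(&table_lock);
        return res;
    }

    mutex_unlock(&table_lock);
    return 0;
}
```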
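The assert/DEBUG sketch, assuming RIOT's `debug.h` convention of defining `ENABLE_DEBUG` before the include; `store_t` and `store_put()` are made up.

```c
#include <assert.h>
#include <stdint.h>

#define ENABLE_DEBUG 0  /* flip to 1 locally while debugging this module */
#include "debug.h"

typedef struct {        /* hypothetical module state */
    uint8_t buf[32];
    unsigned count;
} store_t;

int store_put(store_t *s, uint8_t byte)
{
    /* Programming errors are caught with assert() instead of returning an
     * error code at run time: smaller code, and with NDEBUG the checks
     * compile away entirely. */
    assert(s != NULL);
    assert(s->count < sizeof(s->buf));

    /* DEBUG() compiles to nothing while ENABLE_DEBUG is 0. */
    DEBUG("store_put: byte=0x%02x count=%u\n", byte, s->count);

    s->buf[s->count++] = byte;
    return 0;
}
```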
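The flags sketch: flags defined as macros instead of an enum, so they can be stored in a `uint8_t` field without casts; the `CONF_FLAG_*` names are made up.

```c
#include <stdint.h>

/* Macros are typeless, so assigning them to a narrow field needs no cast.
 * An enum would typically be 32 bits wide, and a packed enum has an
 * unpredictable width/alignment when placed in a struct. */
#define CONF_FLAG_ENABLED   (0x01)
#define CONF_FLAG_VERBOSE   (0x02)
#define CONF_FLAG_PERSIST   (0x04)

typedef struct {
    uint8_t flags;      /* one byte is enough for up to eight flags */
} conf_t;

static inline void conf_enable(conf_t *c)
{
    c->flags |= (CONF_FLAG_ENABLED | CONF_FLAG_PERSIST);
}
```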
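The operator sketch: the classic confusion of logical `!`/`&&` with bitwise `~`/`&`; `FLAG_READY` is a hypothetical flag.

```c
#include <stdbool.h>
#include <stdint.h>

#define FLAG_READY  (0x04)

bool is_ready(uint8_t flags)
{
    /* WRONG: `!flags` is a logical negation yielding 0 or 1, and because of
     * precedence this parses as `(!flags) & FLAG_READY`, which is almost
     * always 0:
     *     return !flags & FLAG_READY;
     */

    /* Right: test the bit with bitwise `&` and turn it into a boolean. */
    return (flags & FLAG_READY) != 0;
}
```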
The methodology below is recommended, using well-known de facto standard tools from the FLOSS community that are compatible with RIOT. Following this workflow improves time-to-running-code compared to typical IoT software workflows (which can be as retro as "LED-driven" debugging).
1. For newbies, the preliminaries are typically faster with the provisioned virtual environment setup, e.g. with Vagrant.
2. To check your code, first use the available static analysis as much as possible, which means (i) enabling all compiler warnings and fixing all problems found, then (ii) using a supported linter such as cppcheck to find bad coding patterns (i.e. code smells) and to identify misuse of standard APIs (see the static-analysis sketch after this list).
3. Next, use the available dynamic analysis tools to find further defects while running the code on RIOT native, which means (i) running unit tests and integration tests on the RIOT native emulator, and (ii) using Valgrind memcheck, as well as the GCC stack-smashing detection, to detect and avoid undefined behavior due to invalid memory access (see the dynamic-analysis sketch after this list).
4. In case of networked applications or protocols, test several instances of native communicating via a virtual network mimicking the targeted scenario, which means (i) using either the default virtual full mesh or other topologies configured via DESvirt, and (ii) using Wireshark to capture and analyze virtual network traffic, e.g. to ensure protocol packets are syntactically correct and to observe network communication patterns.
5. In case of incorrect behavior at this stage, analyze the system state for semantic errors on native using the standard debugger gdb, which allows virtually unlimited conditional breakpoints, record and replay, catchpoints, tracepoints, and watchpoints (see the gdb sketch after this list).
6. In case of a suspected performance bottleneck, use a performance profiler such as gprof or cachegrind to identify the bottlenecks precisely.
7. At this stage the implementation has proven bug-free on the native emulator. One can thus finally move on to hardware-in-the-loop, which means (i) flashing the binary onto the targeted IoT hardware, typically using a standard flasher such as OpenOCD or edbg, and (ii) using the RIOT shell running on the target IoT device(s) for easier debugging on the target hardware (see the flashing sketch after this list).
8. In case the hardware is not available on-site, consider remotely flashing and testing the binary on supported open-access testbeds; e.g. IoT-LAB hardware is fully supported by RIOT.
9. In case of failure, after analyzing the failure and attempting to fix the defect, go back to step 1 to make sure the fix did not itself introduce a new defect.
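The static-analysis sketch referenced above, run from the application directory; the cppcheck options are only one reasonable selection, and extra warnings can be added in the application Makefile (e.g. `CFLAGS += -Wextra`).

```sh
# Build for the native target and fix every warning the compiler reports.
make BOARD=native

# Run a linter over your own sources to catch code smells and API misuse.
cppcheck --enable=warning,style --std=c11 .
```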
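The dynamic-analysis sketch: running the native build under Valgrind and enabling GCC's stack protector. The application name `my_app` and the `bin/native/` path follow the usual RIOT layout, but are assumptions here.

```sh
# Build and run the application (or its tests) on the native emulator.
make BOARD=native all term

# Run the same ELF binary under Valgrind memcheck to catch invalid memory
# accesses and leaks.
valgrind --leak-check=full ./bin/native/my_app.elf

# For an additional run with GCC's stack-smashing detection, add this to the
# application Makefile and rebuild:
#   CFLAGS += -fstack-protector-all
```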
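The gdb sketch, assuming the build system's `debug` target is available for your setup (otherwise start gdb directly on the ELF); the breakpoint location, condition, and watched variable are illustrative.

```sh
# Start gdb attached to the native binary.
make BOARD=native debug
# or directly:
gdb ./bin/native/my_app.elf

# Inside gdb, for example a conditional breakpoint and a watchpoint:
(gdb) break my_module.c:123 if counter > 10
(gdb) watch my_global_state
(gdb) run
```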
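The flashing sketch: `samr21-xpro` is only an example of a supported board; `flash` uses the board's default programmer (OpenOCD, edbg, ...) and `term` opens a serial terminal with the RIOT shell.

```sh
make BOARD=samr21-xpro flash term
```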