Part of integrating the Memfault Firmware SDK into your devices' firmware is creating a path to the Memfault cloud. The SDK collects data from your devices in the field, such as coredumps, heartbeats, and events. This data needs to be sent to the Memfault cloud for analysis.
The ways in which devices get data back to the internet vary widely. Some devices have a direct internet connection, for example through an LTE modem. Others are indirectly connected and send data back through a "gateway" of sorts, for example by connecting over Bluetooth to a phone app that relays the data back to the internet.
To make the integration as easy as possible while catering to as many different connectivity paths as possible, the Memfault Firmware SDK contains a "data packetizer". The data packetizer breaks up all data that the SDK needs to send out (heartbeats, coredumps, events, etc.) into smaller pieces called chunks. The chunks can be sized as large or small as required to match the capabilities and constraints of the device and its connectivity stack.
Each of these chunks needs to be posted to Memfault's chunks HTTP API. The buffering, reassembly, and interpretation of the chunks is handled by the Memfault cloud.
Building the path from getting chunks out of the SDK to posting them to the HTTP API is the only integration work needed to get data flowing into the Memfault cloud. A few things to keep in mind:
- The mechanism used to send chunks back to the Memfault cloud needs to be reliable: data integrity must be checked, and chunks must not be dropped unknowingly (they should be retransmitted in case of data corruption or drops). The Memfault cloud will detect missing and corrupt data, but those chunks will be discarded.
- The device firmware is expected to periodically check whether there is data available and send the chunks out. See the `data_packetizer.h` header file for the C API.
- The Memfault cloud buffers chunks until the full sequence of related chunks has been received. However, if it takes a prolonged period of time to post the remaining related chunks, the buffered chunks may be dropped. Because of this, and to minimize reporting latency, it is recommended to drain the data packetizer at least daily.
- Chunks from a given device need to be posted to the chunks HTTP API sequentially, in the order in which the Firmware SDK's packetizer created them. When posting chunks concurrently, ensure that requests for the same device are serialized, to avoid violating this ordering requirement.
- To minimize overhead and optimize throughput, batch-upload chunks to the chunks HTTP API using `multipart/mixed` requests and re-use HTTP connections to Memfault's servers.
- The smallest allowed chunk size is 9 bytes. That said, it is recommended to use the largest chunk size your transport path allows; smaller chunk sizes generally equate to slower transfers. The (maximum) chunk size can be changed from chunk to chunk (see the `memfault_packetizer_get_next` C API).
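The periodic drain described above can be sketched with the SDK's single-call `memfault_packetizer_get_chunk()` API from `data_packetizer.h`. The packetizer and transport below are local stubs so the example runs standalone; `user_transport_send_chunk()` is a hypothetical hook, not an SDK function:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Stub standing in for the SDK's memfault_packetizer_get_chunk() (declared
// in "memfault/core/data_packetizer.h"): it hands out three fake chunks and
// then reports that no data is left. The real SDK provides this function.
static int s_chunks_remaining = 3;
static bool memfault_packetizer_get_chunk(void *buf, size_t *buf_len) {
  if (s_chunks_remaining == 0) {
    return false; // nothing queued
  }
  s_chunks_remaining--;
  memset(buf, 0xAB, *buf_len); // pretend this is chunk payload
  return true;
}

// Hypothetical transport hook: a real port would post the chunk to the
// chunks HTTP API here (directly, or via a gateway such as a phone app).
static int s_chunks_sent = 0;
static void user_transport_send_chunk(const void *buf, size_t len) {
  (void)buf;
  (void)len;
  s_chunks_sent++;
}

// Drain all pending SDK data one chunk at a time, preserving order.
// Returns the number of chunks handed to the transport.
int memfault_try_drain(void) {
  uint8_t buf[64]; // chunk size tuned to the transport's MTU
  while (true) {
    size_t buf_len = sizeof(buf);
    if (!memfault_packetizer_get_chunk(buf, &buf_len)) {
      break; // all queued data has been drained
    }
    user_transport_send_chunk(buf, buf_len);
  }
  return s_chunks_sent;
}
```

In a real integration the loop would typically run from a periodic task, rate-limited to fit the transport's bandwidth budget.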
In this mode, a call to `memfault_packetizer_get_next` always returns a complete "chunk". The size of the "chunk" is completely up to you (it just needs to be ≥9 bytes). It is your responsibility to get the "chunk" reliably to the Memfault cloud. Typically, the size of the chunk will align with the MTU size of the underlying transport. Some size examples:
- For BLE, the "chunk" size may be close to 20 bytes to align with the minimal MTU size (23 bytes)
- For a network stack, the "chunk" size may be closer to 1500 bytes to align with the size of an Ethernet frame
The Memfault packetizer has two API calls that operate as a pair:
- `memfault_packetizer_begin(...)` lets you configure the operation mode of the packetizer and returns true if there is more data to send
- `memfault_packetizer_get_next(...)` fills a user-provided buffer with the next "chunk" of data to send out over the transport
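A minimal sketch of that call pattern in single-chunk mode. The SDK types and functions are replaced here by local stubs so the example compiles standalone; in a real integration they come from `data_packetizer.h`, and the stub `sPacketizerMetadata` omits fields the real struct carries:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Simplified stand-ins for the SDK's packetizer types.
typedef struct { bool enable_multi_packet_chunk; } sPacketizerConfig;
typedef struct { bool send_in_progress; } sPacketizerMetadata;
typedef enum {
  kMemfaultPacketizerStatus_NoMoreData,
  kMemfaultPacketizerStatus_EndOfChunk,
} eMemfaultPacketizerStatus;

// Stub packetizer that pretends two chunks are queued.
static int s_chunks_left = 2;
static bool memfault_packetizer_begin(const sPacketizerConfig *cfg,
                                      sPacketizerMetadata *md) {
  (void)cfg;
  md->send_in_progress = false;
  return s_chunks_left > 0; // true while there is more data to send
}
static eMemfaultPacketizerStatus memfault_packetizer_get_next(void *buf,
                                                              size_t *len) {
  memset(buf, 0xAB, *len); // pretend chunk payload
  s_chunks_left--;
  return kMemfaultPacketizerStatus_EndOfChunk; // complete chunk produced
}

// Send every queued chunk; returns the number of chunks sent.
int send_all_chunks(void) {
  int sent = 0;
  const sPacketizerConfig cfg = { .enable_multi_packet_chunk = false };
  sPacketizerMetadata md;
  while (memfault_packetizer_begin(&cfg, &md)) {
    uint8_t chunk[20]; // e.g. sized for a minimal BLE MTU
    size_t len = sizeof(chunk);
    eMemfaultPacketizerStatus rv = memfault_packetizer_get_next(chunk, &len);
    if (rv == kMemfaultPacketizerStatus_EndOfChunk) {
      // user_transport_send(chunk, len); // hypothetical transport hook
      sent++;
    }
  }
  return sent;
}
```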
In this mode, the packetizer is capable of building "chunks" which span multiple calls to `memfault_packetizer_get_next()`. This mode can be used as an optimization when a transport is capable of sending messages of arbitrary size, for example a raw TCP socket or a serial streaming abstraction such as Bluetooth Classic SPP. In these situations it's unlikely the entire message can be read into RAM all at once, so the API can be configured to split the read of a single "chunk" across multiple calls.
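A sketch of that streaming pattern, again with the SDK types stubbed out locally so the example runs standalone. The stub hands out one 100-byte message in pieces, and `stream_socket_write()` (commented out) stands in for whatever byte-oriented transport is available:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Simplified stand-ins for the SDK's packetizer types.
typedef struct { bool enable_multi_packet_chunk; } sPacketizerConfig;
typedef struct { bool send_in_progress; } sPacketizerMetadata;
typedef enum {
  kMemfaultPacketizerStatus_NoMoreData,
  kMemfaultPacketizerStatus_MoreDataForChunk,
  kMemfaultPacketizerStatus_EndOfChunk,
} eMemfaultPacketizerStatus;

// Stub: pretends one 100-byte message is queued and doles it out across
// multiple reads, as the real packetizer does in multi-packet mode.
static size_t s_msg_remaining = 100;
static bool memfault_packetizer_begin(const sPacketizerConfig *cfg,
                                      sPacketizerMetadata *md) {
  (void)cfg;
  md->send_in_progress = false;
  return s_msg_remaining > 0;
}
static eMemfaultPacketizerStatus memfault_packetizer_get_next(void *buf,
                                                              size_t *len) {
  size_t n = (*len < s_msg_remaining) ? *len : s_msg_remaining;
  memset(buf, 0xAB, n); // pretend message payload
  *len = n;
  s_msg_remaining -= n;
  return (s_msg_remaining == 0) ? kMemfaultPacketizerStatus_EndOfChunk
                                : kMemfaultPacketizerStatus_MoreDataForChunk;
}

// Stream the entire queued message through a small working buffer;
// returns the total number of bytes pushed to the transport.
size_t stream_all_data(void) {
  size_t total = 0;
  const sPacketizerConfig cfg = { .enable_multi_packet_chunk = true };
  sPacketizerMetadata md;
  if (!memfault_packetizer_begin(&cfg, &md)) {
    return 0; // nothing queued
  }
  while (true) {
    uint8_t buf[32]; // far smaller than the message being sent
    size_t len = sizeof(buf);
    eMemfaultPacketizerStatus rv = memfault_packetizer_get_next(buf, &len);
    if (rv == kMemfaultPacketizerStatus_NoMoreData) {
      break;
    }
    // stream_socket_write(buf, len); // hypothetical streaming transport
    total += len;
    if (rv == kMemfaultPacketizerStatus_EndOfChunk) {
      break; // whole message streamed
    }
  }
  return total;
}
```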
Occasionally, you may want to read data from one of the data sources the packetizer accesses in an asynchronous / event-driven fashion. For example, this may be desirable in a bare-metal environment where coredumps are saved to a storage medium that can be slow to access (such as external flash).
There are several properties of the SDK packetizer that are helpful for achieving this behavior:
- The packetizer guarantees individual data sources (e.g. platform coredump storage) will be read sequentially.
- Each call to `memfault_packetizer_get_chunk()` will result in exactly one read from the data source which is active. The size of the read will be less than or equal to the size of the chunk requested.
- The Memfault SDK allows for different APIs to be used when saving data and when reading it back via the packetizer. For the "coredump" feature, `memfault_coredump_read()` is the routine called when the packetizer reads data.
These features are all verified on each release as part of the Memfault Firmware SDK unit test suite.
Using these features, we can load data from slower storage in an asynchronous fashion. Once the data has been preloaded, we can call the Memfault packetizer APIs. Let's walk through some example code showing how asynchronous reads from coredump storage could be achieved.
In order to operate in asynchronous mode, RLE data source compression must be disabled, because RLE compression requires several passes over the underlying data. The feature can be disabled either by removing `memfault_data_source_rle.c` from your build system or by adding the define `MEMFAULT_DATA_SOURCE_RLE_ENABLED=0` to your compilation flags.
Then, in the platform coredump code, a helper such as `memfault_platform_coredump_prepare_async()` would look something like:
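A runnable sketch of such a helper. Everything here is illustrative: `memfault_platform_coredump_prepare_async()` is a name chosen for this example (not an SDK API), and the flash driver is faked as a synchronous call so the sketch runs standalone:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Hypothetical RAM staging area that external-flash reads land in.
#define CORE_READ_BUFFER_SIZE 256
static struct {
  uint8_t buf[CORE_READ_BUFFER_SIZE];
  uint32_t flash_offset; // where in coredump storage this data came from
  size_t bytes_valid;    // 0 means empty (or a read still in flight)
} s_read_cache;

static uint32_t s_next_offset = 0;

// Pretend flash driver: synchronous here so the sketch runs standalone; a
// real port would enqueue a DMA/interrupt-driven read and return at once.
static void flash_read_async_start(uint32_t offset, void *buf, size_t len) {
  memset(buf, (uint8_t)offset, len); // fake flash contents
}

// Called from the main loop / event handler: stage the next piece of
// coredump storage in RAM so it is ready before the packetizer asks for it.
void memfault_platform_coredump_prepare_async(void) {
  if (s_read_cache.bytes_valid != 0) {
    return; // previously staged data has not been consumed yet
  }
  s_read_cache.flash_offset = s_next_offset;
  flash_read_async_start(s_next_offset, s_read_cache.buf,
                         CORE_READ_BUFFER_SIZE);
  // In a real port, these updates would run from the driver's
  // "read complete" callback rather than inline:
  s_read_cache.bytes_valid = CORE_READ_BUFFER_SIZE;
  s_next_offset += CORE_READ_BUFFER_SIZE;
}

// Small accessor so the staging state can be observed.
size_t coredump_cache_bytes_staged(void) { return s_read_cache.bytes_valid; }
```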
With data staged this way, `memfault_coredump_read()` can just access data from the RAM buffer.
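One way `memfault_coredump_read()` (the SDK's platform read hook for coredump data) could serve reads from a staged RAM cache. The cache layout is illustrative; because the packetizer reads a data source sequentially, a request can be served whenever it falls inside the staged window:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Illustrative RAM cache, pre-populated here as if an async flash read of
// the first 256 bytes of coredump storage had already completed.
#define CORE_READ_BUFFER_SIZE 256
static struct {
  uint8_t buf[CORE_READ_BUFFER_SIZE];
  uint32_t flash_offset; // storage offset the cached window starts at
  size_t bytes_valid;    // number of staged bytes
} s_read_cache = { .flash_offset = 0, .bytes_valid = CORE_READ_BUFFER_SIZE };

// Sketch of the platform read hook: serve the request straight from the
// cache when it lies inside the staged window, otherwise report failure so
// the caller can retry after the next async read completes.
bool memfault_coredump_read(uint32_t offset, void *data, size_t read_len) {
  const uint32_t cache_start = s_read_cache.flash_offset;
  const uint32_t cache_end = cache_start + (uint32_t)s_read_cache.bytes_valid;
  if ((offset < cache_start) || ((offset + read_len) > cache_end)) {
    return false; // data not staged yet
  }
  memcpy(data, &s_read_cache.buf[offset - cache_start], read_len);
  return true;
}
```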
Finally, when the data is cleared at the end of reading the data source, we reset the RAM buffer so it looks empty and dispatch an erase to the backing storage.
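That clear step can be sketched as below. `memfault_platform_coredump_storage_clear()` is the SDK's platform hook invoked once the coredump has been fully drained; `flash_erase_async_start()` is a hypothetical non-blocking driver call, stubbed here so the example runs standalone:

```c
#include <stdbool.h>
#include <stddef.h>

// Illustrative cache state, as if one window of coredump data were staged.
static size_t s_cache_bytes_valid = 256;
static bool s_erase_dispatched = false;

// Hypothetical driver hook: queue a non-blocking erase of the coredump
// region in external flash; completion would arrive via callback/IRQ.
static void flash_erase_async_start(void) {
  s_erase_dispatched = true;
}

// Invalidate the RAM cache immediately so subsequent reads see "empty",
// and let the slow flash erase complete in the background.
void memfault_platform_coredump_storage_clear(void) {
  s_cache_bytes_valid = 0;
  flash_erase_async_start();
}

// Small accessors so the effect of the clear can be observed.
bool coredump_cache_is_empty(void) { return s_cache_bytes_valid == 0; }
bool coredump_erase_was_dispatched(void) { return s_erase_dispatched; }
```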