Part of integrating the Memfault Firmware SDK into your devices' firmware is creating a path to the Memfault cloud. The SDK collects data from your devices in the field, such as coredumps, heartbeats and events. This data needs to be sent to the Memfault cloud for analysis.
The ways in which devices get data back to the internet vary a lot. Some devices have a direct internet connection, for example through an LTE modem. Others are indirectly connected and send data back through a "gateway" of sorts, for example by connecting over Bluetooth to a phone app that relays the data to the internet.
To make the integration as easy as possible while catering to as many different connectivity paths as possible, the Memfault Firmware SDK contains a "data packetizer". The data packetizer breaks up all data that the SDK needs to send out (heartbeats, coredumps, events, etc.) into smaller pieces called chunks. The chunks can be sized as large or small as required to match the capabilities and constraints of the device and its connectivity stack.
Each of these chunks needs to be posted to Memfault's chunks HTTP API. The buffering, reassembly and interpretation of the chunks is done by the Memfault cloud.
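As a sketch, a single chunk upload looks like the HTTP request below. The endpoint and header names follow Memfault's chunks API, but double-check them against the current API reference; the device serial, project key and payload are placeholders.

```
POST /api/v0/chunks/{device_serial} HTTP/1.1
Host: chunks.memfault.com
Memfault-Project-Key: <your project key>
Content-Type: application/octet-stream
Content-Length: <length of chunk in bytes>

<raw chunk bytes>
```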
Building the path from getting chunks out of the SDK to posting them to the HTTP API is the only integration work needed to get the data to the Memfault cloud. A few things to keep in mind:
- The mechanism to send the chunks back to the Memfault cloud will need to be reliable. By that we mean that data integrity is checked and that chunks are not dropped unknowingly (and are retransmitted in case of data corruption or drops). Missing data and corrupt data errors will be detected by the Memfault cloud, but those chunks will be discarded.
- The device firmware is expected to periodically check whether there is data available and send the chunks out. See the data_packetizer.h header file for the C API.
- The Memfault cloud buffers chunks until the sequence of related chunks is received. However, if it takes a prolonged period of time to post the remainder of the related chunks, the chunks may be dropped. Because of this, and to minimize reporting latencies, it is recommended to drain the data packetizer at least daily.
- Chunks from a given device need to be posted to the chunks HTTP API sequentially, in the order in which the Firmware SDK's packetizer created them. When posting chunks for multiple devices concurrently, ensure that requests for the same device are never in flight at the same time, to avoid violating this ordering requirement.
- To minimize overhead and optimize throughput, batch-upload chunks to the chunks HTTP API using multipart/mixed requests, and re-use HTTP connections to Memfault's servers.
- The smallest allowed chunk size is 9 bytes. That said, it is recommended to use the largest possible chunk size that your transport path allows. Smaller chunk sizes generally equate to slower transfers. The (maximum) chunk size can be changed from chunk to chunk (see the memfault_packetizer_get_next C API).
In the default mode, a call to memfault_packetizer_get_next always returns a complete "chunk". The size of the "chunk" is completely up to you (it just needs to be ≥9 bytes). It is your responsibility to get the "chunk" reliably to the Memfault cloud. Typically, the size of the chunk will align with the MTU size of the underlying transport. Some size examples:
- For BLE, the "chunk" size may be close to 20 bytes to align with the minimal MTU size (23 bytes)
- For a network stack, the "chunk" size may be closer to 1500 bytes to align with the size of an Ethernet frame
The Memfault packetizer has two API calls that operate as a pair:
- memfault_packetizer_begin(...) lets you configure the operation mode of the packetizer and returns true if there is more data to send
- memfault_packetizer_get_next(...) fills a user-provided buffer with the next "chunk" of data to send out over the transport
In this mode, the packetizer can build "chunks" which span multiple calls to memfault_packetizer_get_next(). This can be used as an optimization when the transport is capable of sending messages of arbitrarily large size, for example a raw TCP socket or a serial streaming abstraction such as Bluetooth Classic SPP. In these situations it's unlikely the entire message can be read into RAM all at once, so the API can be configured to split the read of a chunk across multiple calls.