# Reference: memfaultd Configuration
## Basic Usage

Running `memfaultd --help` prints the following:

```
Usage: memfaultd [-c <config-file>] [-s] [-Z] [-v] [-V]

Memfault daemon.

Options:
  -c, --config-file   use configuration file
  -s, --show-settings show settings and exit immediately
  -Z, --daemonize     daemonize (fork to background)
  -v, --version       show version
  -V, --verbose       verbose output
  --help              display usage information
```
The `--config-file` path defaults to `/etc/memfaultd.conf`. The settings you
add in `/etc/memfaultd.conf` extend the built-in configuration file.

As of v1.2.0, we recommend using `memfaultctl` to enable/disable data
collection and view current settings. `--show-settings` is still present on
`memfaultd` for backwards compatibility but will be removed in a future major
version of the SDK.
## Usage in a systemd service

The meta-memfault Yocto layer already includes a service file, so you don't
need to add one if you're using meta-memfault.

To run `memfaultd` as a systemd service, use the `--daemonize` flag. See this
example, taken from the meta-memfault layer:
```ini
[Unit]
Description=memfaultd daemon
After=local-fs.target network.target
Before=swupdate.service collectd.service

[Service]
Type=forking
PIDFile=/run/memfaultd.pid
ExecStart=/usr/bin/memfaultd --daemonize
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
The `Before=` parameter is there to ensure `memfaultd` starts before
`swupdate.service` as well as `collectd.service`. The configuration files used
by these daemons are generated (at least partly) by `memfaultd` at startup, and
need to be present before the respective services start.

If you're not using OTA or Metrics, you may remove the corresponding part of
the `Before=` parameter. You may also wish to opt out of building the feature
into `memfaultd`.
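For example, on an image that uses Metrics but not OTA, the unit's `Before=`
line might be trimmed like this (a sketch based on the unit above; adapt it to
your image):

```ini
[Unit]
Description=memfaultd daemon
After=local-fs.target network.target
# OTA is not used on this image, so swupdate.service is dropped from Before=
Before=collectd.service

[Service]
Type=forking
PIDFile=/run/memfaultd.pid
ExecStart=/usr/bin/memfaultd --daemonize
Restart=on-failure

[Install]
WantedBy=multi-user.target
```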
## /etc/memfaultd.conf

A full configuration example can be found in the source directory of
`memfaultd`. Here's a copy:
```json
{
"persist_dir": "/media/memfault",
"tmp_dir": null,
"tmp_dir_min_headroom_kib": 10240,
"tmp_dir_min_inodes": 100,
"tmp_dir_max_usage_kib": 102400,
"upload_interval_seconds": 3600,
"heartbeat_interval_seconds": 3600,
"enable_data_collection": false,
"enable_dev_mode": false,
"software_version": "<YOUR_SOFTWARE_VERSION>",
"software_type": "<YOUR_SOFTWARE_TYPE>",
"project_key": "<YOUR_PROJECT_KEY>",
"base_url": "https://device.memfault.com",
"swupdate": {
"input_file": "/etc/swupdate.cfg",
"output_file": "/tmp/swupdate.cfg"
},
"reboot": {
"last_reboot_reason_file": "/media/last_reboot_reason"
},
"coredump": {
"coredump_max_size_kib": 96000,
"compression": "gzip",
"rate_limit_count": 5,
"rate_limit_duration_seconds": 3600
},
"http_server": {
"bind_address": "127.0.0.1:8787"
},
"fluent-bit": {
"extra_fluentd_attributes": [],
"bind_address": "127.0.0.1:5170",
"max_buffered_lines": 1000,
"max_connections": 4
},
"logs": {
"compression_level": 1,
"max_lines_per_minute": 500,
"rotate_size_kib": 10240,
"rotate_after_seconds": 3600
},
"mar": {
"mar_file_max_size_kib": 10240
}
}
```
The settings you add in `/etc/memfaultd.conf` extend the built-in
configuration file.
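Because your file only extends the defaults, it needs just the keys you want
to override. A minimal sketch (the version, type, and key are placeholders for
your project's values):

```json
{
  "enable_data_collection": true,
  "software_version": "1.4.2",
  "software_type": "main",
  "project_key": "<YOUR_PROJECT_KEY>",
  "persist_dir": "/media/memfault"
}
```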
## Top-level /etc/memfaultd.conf configuration
Field | Description | Recommended value |
---|---|---|
upload_interval_seconds | The period (in seconds) to flush the queue and post it to Memfault. Note that a recovery system with exponential back-off is in place for network failures. | 3600 |
heartbeat_interval_seconds | The period (in seconds) of metrics aggregation into heartbeat event. | 3600 |
enable_data_collection | Whether memfaultd should collect and post data to Memfault by default. Read more here. | false (ask for user consent first) |
enable_dev_mode | Enable or disable developer mode. Read more here. | false (only use in development) |
base_url | The base URL to Memfault's device API. | https://device.memfault.com |
software_version | The current version of your software. Read more here. | Project-specific |
software_type | The type of software running on the device. Read more here. | Project-specific |
project_key | A write-only key for your Project. Find yours in Project -> Settings in the app. | Project-specific |
persist_dir | A directory where memfaultd can store application data persistently (needs to survive firmware upgrades). Read more [here][docs-persist-dir]. | Project-specific |
tmp_dir | A directory where memfaultd can store temporary data. This can be a temporary filesystem. Read more [here][docs-tmp-dir]. | "" (will use persist_dir ) |
tmp_dir_min_headroom_kib | Minimum space to keep available on the tmp_dir filesystem. memfaultd will stop writing and will delete buffered data when free space goes below this value. | 10% of the filesystem space - or less if your application also uses this filesystem. |
tmp_dir_min_inodes | Minimum number of inodes to keep available on the tmp_dir filesystem. memfaultd will stop writing when free inode count goes below this value. | 10% of the filesystem inodes. |
tmp_dir_max_usage_kib | Maximum size of memfault data on the tmp_dir filesystem. Memfault will start deleting older data and stop writing when this limit is reached. | Project-specific. |
http_server | Configuration values for the built-in HTTP server. [Read more][docs-http-server]. | See http_server |
swupdate | Configuration values for the swupdate feature if enabled in memfaultd (default). Read more. | See swupdate |
reboot | Configuration values for the reboot feature. Read more. | See reboot |
coredump | Configuration values for the coredump feature if enabled in memfaultd (default). Read more. | See coredump |
### http_server

```json
{
  "http_server": {
    "bind_address": "127.0.0.1:8787"
  }
}
```
Field | Description | Recommended value |
---|---|---|
bind_address | Address (including port) that the server will bind to. | "127.0.0.1:8787" |
The HTTP server is currently only used to receive metrics from CollectD. See the metrics guide for more information.
### swupdate

```json
{
  "swupdate": {
    "input_file": "/etc/swupdate.cfg",
    "output_file": "/tmp/swupdate.cfg"
  }
}
```
Field | Description | Recommended value |
---|---|---|
input_file | Will be used as the base SWUpdate configuration when generating $output_file . May specify a suricatta section (gets merged with generated parameters). If an identify section exists it will get replaced. See upstream SWUpdate docs. | /etc/swupdate.cfg |
output_file | Generated by memfaultd using $input_file as a base. Needs to be passed to SWUpdate as its config file. See an example here. | /tmp/swupdate.cfg |
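As an illustration, an `input_file` might carry a partial `suricatta` section
that `memfaultd` merges with the parameters it generates. The values below are
placeholders, not required settings; consult the SWUpdate documentation for
the full syntax:

```
# /etc/swupdate.cfg (input_file) -- sketch
suricatta :
{
	tenant = "default";
	connection-timeout = 30;
};
```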
### reboot

```json
{
  "reboot": {
    "last_reboot_reason_file": "/media/last_reboot_reason"
  }
}
```
Field | Description | Recommended value |
---|---|---|
last_reboot_reason_file | The path where memfaultd 's reboot reason tracking feature will attempt to find the device-specific reboot reason. If the file does not exist, memfaultd will interpret this as "no device specific reason known". | Project-specific |
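As a sketch, your software can record a device-specific reason just before an
intentional restart. The path must match `last_reboot_reason_file` in
`/etc/memfaultd.conf` (`/tmp` is used here only so the example is
self-contained), and `"2"` is a placeholder value, not a documented code;
consult Memfault's reboot reason documentation for the accepted contents:

```shell
# Path must match last_reboot_reason_file; /tmp is used here for illustration.
REASON_FILE="/tmp/last_reboot_reason"

# Write the reason just before triggering the reboot. memfaultd will read this
# file on the next boot; if it is absent, no device-specific reason is known.
echo "2" > "$REASON_FILE"
```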
### coredump

```json
{
  "coredump": {
    "compression": "gzip",
    "coredump_max_size_kib": 96000,
    "rate_limit_count": 5,
    "rate_limit_duration_seconds": 3600
  }
}
```
To configure the location where coredumps are stored during processing, see
tmp_dir
.
Field | Description | Recommended value |
---|---|---|
compression | Compression to use when storing on disk and uploading to Memfault (none or gzip ). | gzip |
coredump_max_size_kib | The maximum size of a coredump that can be processed. | 96000 |
rate_limit_count * | The maximum amount of coredumps to process in a given period of rate_limit_duration_seconds . | 5* |
rate_limit_duration_seconds * | A window in which a maximum of rate_limit_count coredumps can be processed. | 3600* |
* Please consult with the Memfault team if you need to change rate-limiting settings for your integration, as the Memfault Web App will further enforce rate limiting rules.
When a program crashes, the kernel will attempt to produce a coredump for the
crashing process. When memfaultd
receives the coredump, it will first apply
the rate limiting policy, to limit the number of coredumps that can get
generated within a period of time. If the rate limit is exceeded, the coredump
is dropped.
Next, `memfaultd` will determine the maximum size that is allowed, based on
the `coredump_max_size_kib`, `tmp_dir_max_usage_kib` and
`tmp_dir_min_headroom_kib` configuration values and the amount of available
storage space.

Provided there is available storage, the coredump is written into a temporary
holding area inside the [tmp_dir][docs-tmp-dir].
Finally, the coredump is added to the upload queue. This queue is serviced
periodically (see the top-level upload_interval_seconds
).
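The rate-limiting step can be pictured as a sliding window: a coredump is
accepted only if fewer than `rate_limit_count` coredumps were accepted in the
last `rate_limit_duration_seconds`. A minimal Python illustration of that
policy (a sketch, not memfaultd's actual implementation):

```python
from collections import deque


class SlidingWindowRateLimiter:
    """Accept at most `limit` events per `duration_seconds` window."""

    def __init__(self, limit: int = 5, duration_seconds: int = 3600):
        self.limit = limit
        self.duration = duration_seconds
        self.accepted = deque()  # timestamps of accepted coredumps

    def try_accept(self, now: float) -> bool:
        # Forget coredumps that have aged out of the window.
        while self.accepted and now - self.accepted[0] >= self.duration:
            self.accepted.popleft()
        if len(self.accepted) < self.limit:
            self.accepted.append(now)
            return True
        return False  # rate limit exceeded: this coredump would be dropped


limiter = SlidingWindowRateLimiter(limit=5, duration_seconds=3600)
results = [limiter.try_accept(t) for t in [0, 1, 2, 3, 4, 5]]
# The first five crashes are accepted; the sixth, still in the window, is not.
```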
### fluent-bit

```json
{
  "fluent-bit": {
    "extra_fluentd_attributes": [],
    "bind_address": "127.0.0.1:5170",
    "max_buffered_lines": 1000,
    "max_connections": 4
  }
}
```
Field | Description | Recommended value |
---|---|---|
extra_fluentd_attributes | List of fluentd attributes to save beyond the defaults | [] |
bind_address | Address and port to bind the fluent-bit listener to. Replacing 127.0.0.1 by 0.0.0.0 will open the log collection service to the network (not recommended). | 127.0.0.1:5170 |
max_buffered_lines | Maximum number of lines to buffer in memory before applying backpressure to fluent-bit. | 1000 |
max_connections | Maximum number of simultaneous connected sockets with fluent-bit. | 4 |
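On the fluent-bit side, logs are typically forwarded to this address with the
`tcp` output plugin in `json_lines` format. A sketch (check your fluent-bit
version's documentation for the exact option names):

```
[OUTPUT]
    Name   tcp
    Match  *
    Host   127.0.0.1
    Port   5170
    Format json_lines
```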
### logs

```json
{
  "logs": {
    "compression_level": 1,
    "max_lines_per_minute": 500,
    "rotate_size_kib": 10240,
    "rotate_after_seconds": 3600
  }
}
```
Field | Description | Recommended value |
---|---|---|
compression_level | Compression level (0 - none, 1 - fast to 9 - best) | 1 - Fast |
max_lines_per_minute | Maximum number of lines to write per minute. | 500 |
rotate_after_seconds | Log files will be rotated when they reach this number of seconds. | 3600 |
rotate_size_kib | Log files will be rotated when they reach this size (in kibibytes). | 10240 (10 MiB) |
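As the two rotation descriptions above suggest, the settings combine with OR
semantics: a log file is rotated as soon as either threshold is reached. A
small sketch of that decision:

```python
def should_rotate(size_kib: int, age_seconds: int,
                  rotate_size_kib: int = 10240,
                  rotate_after_seconds: int = 3600) -> bool:
    """Rotate when the current log file hits either limit (size OR age)."""
    return size_kib >= rotate_size_kib or age_seconds >= rotate_after_seconds


should_rotate(size_kib=200, age_seconds=3600)  # age limit reached -> True
should_rotate(size_kib=200, age_seconds=120)   # neither limit -> False
```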
### mar

```json
{
  "mar": {
    "mar_file_max_size_kib": 10240
  }
}
```
Field | Description | Recommended value |
---|---|---|
mar_file_max_size_kib | Maximum size of one MAR ZIP file for upload. | 10240 (10 MiB) |
memfaultd
will transform log files and coredumps into MAR entries on disk and
keep them in the MAR staging area until they are uploaded.
To upload them, a ZIP file is generated "on the fly", grouping multiple MAR
entries together. The mar_file_max_size_kib
controls how large this file is
allowed to grow. If the individual entries are larger than this setting, they
will be uploaded one by one.
We recommend lowering this value if your Internet connection is slow or
unreliable.
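The grouping behavior can be sketched as a greedy pass over the staged
entries, where an oversized entry simply becomes a batch of its own. This is
an illustration of the documented behavior, not memfaultd's actual algorithm:

```python
def group_mar_entries(entry_sizes_kib, mar_file_max_size_kib=10240):
    """Greedily pack MAR entries into ZIP batches no larger than the cap.

    An entry larger than the cap still becomes its own batch, mirroring the
    documented behavior of uploading oversized entries one by one.
    """
    batches, current, current_size = [], [], 0
    for size in entry_sizes_kib:
        # Flush the current batch if adding this entry would exceed the cap.
        if current and current_size + size > mar_file_max_size_kib:
            batches.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        batches.append(current)
    return batches


group_mar_entries([4000, 4000, 4000, 20000], mar_file_max_size_kib=10240)
# -> [[4000, 4000], [4000], [20000]]
```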