This prevents wear and tear on the flash, and it is also faster in some
cases, since reading flash is a cheaper operation than erasing it. Some
library modules (such as esp_wifi) write to NVS upon every initialization
without first checking that the existing value is the same; this change
speeds up initialization of modules that make that design choice and moves
the check into a centralized place.
The comparison functions are based on the read-out functions of the same
name, with the memcpy(...) operations swapped for memcmp(...) operations.
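For illustration, a minimal sketch of the compare-before-write idea expressed
with the public C API (the helper name is hypothetical; the actual change does
this comparison inside the library, so callers do not need such a helper):

    #include "nvs.h"

    // Hypothetical helper: skip the flash write when the stored value is
    // already identical to the new one.
    static esp_err_t set_u32_if_changed(nvs_handle handle, const char *key, uint32_t value)
    {
        uint32_t current;
        esp_err_t err = nvs_get_u32(handle, key, &current);
        if (err == ESP_OK && current == value) {
            return ESP_OK;  // same value already stored: no erase/write needed
        }
        return nvs_set_u32(handle, key, value);
    }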
Signed-off-by: Tim Nordell <tim.nordell@nimbelink.com>
Earlier, the eraseItem function in the Storage class would do lazy cleanup
of multi-page blobs if called with type "ANY" instead of "BLOB": it deleted
only the BLOB data and left the index entry as-is. Any subsequent read would
delete the index entry as well. However, if the nvs_get_blob API was used
only to find the length (without reading the complete blob), a valid length
would still be returned without error. This change fixes that issue.
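The length-query pattern that exposed the stale index looks like this (key
name is illustrative; passing NULL for the output buffer makes nvs_get_blob
report only the required size):

    size_t required_size = 0;
    esp_err_t err = nvs_get_blob(handle, "my_blob", NULL, &required_size);
    // Before this fix, a blob whose data had been lazily erased could still
    // report a stale length here instead of returning ESP_ERR_NVS_NOT_FOUND.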
Closes https://github.com/espressif/esp-idf/issues/3255
1. Separate ROM include files and linker script into esp_rom
2. Modify "include rom/xxx.h" to "include esp32/rom/xxx.h"
3. Keep forward compatibility
4. Update mqtt
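As an illustration of items 2 and 3 (using ets_sys.h as an example header;
many other ROM headers are affected):

    // old, chip-agnostic path; kept working per item 3 ("forward compatibility")
    #include "rom/ets_sys.h"
    // new, chip-specific path provided by the esp_rom component
    #include "esp32/rom/ets_sys.h"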
Previously, when HashList was removing items, HashListBlocks were removed
lazily. This resulted in empty HashListBlocks dangling around in full pages,
even after all items had been erased. These blocks would only be deleted
when NVS was re-initialized (nvs_flash_deinit/nvs_flash_init).
This change performs eager cleanup instead, based on the code offered by
@negativekelvin in
https://github.com/espressif/esp-idf/issues/1642#issuecomment-367227994.
Closes https://github.com/espressif/esp-idf/issues/1642.
Currently, only an erase operation performed by the application leads to
the NVS key partition being detected as uninitialised. This change adds
additional checks to detect the partition as uninitialised when the device
boots for the first time, right after encryption by the bootloader.
Marking a page full does not exclude it from the page selection process,
and the same page might be returned again if there is no other page with
more unused entries. Added a check for this while storing blobs.
Fixes: https://github.com/espressif/esp-idf/issues/2313
This change adds a check, during NVS initialization, for compatibility
between the NVS version found on flash and the version assumed by the
running code. Any mismatch is reported to the user with the new error code
ESP_ERR_NVS_NEW_VERSION_FOUND.
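A typical way to handle this at startup, assuming the application is willing
to discard the old-format data by erasing the partition and re-initializing:

    esp_err_t err = nvs_flash_init();
    if (err == ESP_ERR_NVS_NEW_VERSION_FOUND) {
        // NVS data on flash was written in a newer, incompatible format:
        // erase the partition (discarding its contents) and retry.
        ESP_ERROR_CHECK(nvs_flash_erase());
        err = nvs_flash_init();
    }
    ESP_ERROR_CHECK(err);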
This change removes the earlier limitation of 1984 bytes for storing data blobs.
Blobs larger than the sector size are split and stored on multiple sectors.
For this purpose, two new datatypes (multi-page index and multi-page data) are
added for entries stored in the sectors. The underlying read, write, erase and find
operations are modified to support these large blobs. The change is transparent
to users of the library and no special APIs need to be used to store these large
blobs.
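Since no special APIs are involved, storing a large blob looks the same as
before; a minimal sketch (buffer size and key name are illustrative, and the
handle comes from nvs_open):

    static uint8_t cal_table[8192];   // larger than one 4096-byte sector
    esp_err_t err = nvs_set_blob(handle, "cal_table", cal_table, sizeof(cal_table));
    if (err == ESP_OK) {
        err = nvs_commit(handle);   // the multi-page split happens inside the library
    }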
Currently, when a page is being freed, items are individually moved from
the FREEING page to the ACTIVE page and erased. If a power-off happens
during the process, the remaining entries are moved to the ACTIVE page
during recovery. The problem with this approach is that there may not be
enough space on the ACTIVE page for all items if an item was partially
written before the power-off and erased during recovery. This change moves
all the items from the FREEING page to the ACTIVE page and only then erases
the FREEING page. If a power-off happens during the process, the ACTIVE page
is erased and the process is restarted.
The current code for recovery after power-off does not clean up partially
erased items on FULL pages. If the erasure was part of a modification
operation, this happens to get cleaned up thanks to the duplicate detection
logic; for an erase-only operation the problem still exists. This patch adds
the recovery for FULL pages as well.
Closes TW<20284>
When erasing a variable-length item with an incorrect CRC32, the span value
of the item cannot be trusted, so the item is erased with span = 1.
Subsequent entries contain the data of the variable-length item, and these
are treated as separate items. For each such entry the CRC32 is checked; the
check most likely fails (because the entry contains arbitrary data, not a
proper NVS item), and the entry is erased. The erase function assumed that
every item must be present in the cache, but this is not the case for
entries which are merely parts of another item's payload. This change allows
the item to be absent from the hashlist if the CRC32 check fails.
The current page selection algorithm selects a page for compaction based only on erased entry
counts and gives up when it does not find any page with an erased count greater than 0. This is
problematic, since the current allocation procedure skips the active page if there is not enough
room for the item on that page, leaving free chunks behind on the pages. This change modifies
the algorithm to consider both erased and free entry counts on the candidate pages.
Closes TW<20297>
Users need functions to count the number of free and used entries; usage of
the new functions is sketched after the list below.
1. `nvs_get_stats()`: returns a structure with usage statistics for the NVS
partition (fields: used_entries, free_entries, total_entries and namespace_count).
2. `nvs_get_used_entry_count()`: returns the number of entries in the
namespace identified by a handle.
3. Added unit tests.
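A minimal usage sketch of both functions (error checks omitted; the handle
comes from nvs_open, and a NULL partition name is assumed to select the
default partition):

    nvs_stats_t stats;
    nvs_get_stats(NULL, &stats);   // NULL: default NVS partition
    printf("Used %d, free %d, total %d entries; %d namespaces\n",
           (int) stats.used_entries, (int) stats.free_entries,
           (int) stats.total_entries, (int) stats.namespace_count);

    size_t used_in_namespace = 0;
    nvs_get_used_entry_count(handle, &used_in_namespace);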
Closes TW<12282>
* Change snprintf to strlcat: it does not trigger warnings with gcc 7.2.0
  and is safer (see the sketch below), thanks @projectgus
* Use proper quotes for character literals
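A sketch of the first item (names and buffer are hypothetical, not the actual
call site; assumes strlcat is available from <string.h>, as in newlib):

    const char *suffix = "mykey";   // hypothetical
    char key[16] = "ns_";
    // strlcat always NUL-terminates within sizeof(key) and avoids the
    // truncation warning gcc 7.2.0 emits for the equivalent snprintf call.
    strlcat(key, suffix, sizeof(key));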
Merges https://github.com/espressif/esp-idf/pull/1163
Previously, nvs_flash_init_custom would not be called if the Storage object
for a particular partition had already been created. This caused issues if
the first call to nvs_flash_init failed due to a Storage initialization
error, which showed up as random failures of the NVS CI test.
With this change, the Storage object is deleted (and not added to the
storage list) if initialization fails.
Previously, NVS checked CRC values of key-value pairs on the active page,
but the check for full pages was missing. This adds the necessary check and
a test for it.
This commit adds support for multiple NVS partitions. This gives applications the flexibility to have multiple NVS
partitions, such as a separate read-only partition with manufacturing data and a read-write partition with configuration.
Applications can also use this to separate their own configuration storage from system configuration.
This feature does not change any of the basic properties of the NVS subsystem. Same-named namespaces across partitions are
considered to be different namespaces. The original NVS API available to applications remains unchanged. The only
difference is that instead of the first NVS partition in the partition table, it now operates on the partition with label
"nvs" (which is the default in the IDF-provided partition table files). Additional APIs are provided to open a handle and
erase NVS with a partition name as a parameter.
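A minimal sketch of the additional partition-aware calls (partition label and
namespace name are illustrative):

    ESP_ERROR_CHECK(nvs_flash_init_partition("cfg_part"));   // non-default partition

    nvs_handle handle;
    ESP_ERROR_CHECK(nvs_open_from_partition("cfg_part", "storage",
                                            NVS_READWRITE, &handle));
    // ... nvs_set_*/nvs_get_* calls work exactly as with the default partition ...
    nvs_close(handle);

    // Erases only this partition; the default "nvs" partition is untouched.
    ESP_ERROR_CHECK(nvs_flash_erase_partition("cfg_part"));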
A test case is added to the host tests, and it is verified that all host tests pass. The nvs_rw_value app is also tested
with multiple partitions.
Signed-off-by: Amey Inamdar <amey.inamdar@gmail.com>
NVS is used to store PHY calibration data, WiFi configuration, and BT
configuration. Previously, BT examples did not call nvs_flash_init, relying
on the fact that it is called during PHY init. However, PHY init did not
handle possible NVS initialization errors.
This change moves the PHY init procedure into the application and adds
diagnostic messages to the BT config management routines if NVS is not
initialized.
Writing values longer than half of the page size (with the header taken into
account) causes fragmentation issues. Previously it was suggested on the
forum that using long values may cause issues, but this wasn't checked
in the library itself and wasn't documented. This change adds the necessary
checks and introduces a new error code.
Documentation is also fixed to reflect the fact that the maximum length
of the key is 15 characters, not 16.
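For illustration (key, value and handle are hypothetical; the half-page limit
assumes the default 4096-byte page):

    char key[] = "cfg_device_name";   // exactly 15 characters: the longest allowed key
    esp_err_t err = nvs_set_str(handle, key, long_text);
    // With this change, a value longer than roughly half a page (header included)
    // is rejected with the new error code instead of causing fragmentation later.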
Since a read cache was introduced at the page level, the search cache
became useless for reducing the number of flash read operations.
In addition, the search cache relied on the assumption that if pointers to
keys are identical, the keys themselves are identical, which was proven
wrong by applications that generate key names dynamically.
This change removes CachedFindInfo and all its uses. This comes at the
expense of a small number of extra CPU operations (looking up a value in the
read cache is slightly more expensive), but no extra flash read operations.
Ref TW12505
Ref https://github.com/espressif/arduino-esp32/issues/365
This change adds a check of the free page count to nvs_flash_init.
Under normal operation, NVS keeps at least one free page available, except
for transient states such as freeing up a new page. Due to external factors
(such as a reduction of the NVS partition size) this free page could be
lost, making NVS operation impossible. Previously this would cause an error
when performing any nvs_set operation or when opening a new namespace. With
this change, an error is returned from nvs_flash_init to indicate that the
NVS partition is in such a state.
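The startup handling mirrors the version-mismatch sketch shown earlier; the
error code name below is the one used by current ESP-IDF for this condition
and is an assumption relative to this commit text:

    esp_err_t err = nvs_flash_init();
    if (err == ESP_ERR_NVS_NO_FREE_PAGES) {
        // No spare page left: erase the partition (losing its contents) and retry.
        ESP_ERROR_CHECK(nvs_flash_erase());
        err = nvs_flash_init();
    }
    ESP_ERROR_CHECK(err);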
One common pattern of using the assert function looks as follows:
int ret = do_foo();
assert(ret == 0); // which reads as: “do_foo should never fail here, by design”
The problem with such code is that if 'assert' is removed by the preprocessor in a release build,
the variable ret is no longer used, and the compiler issues a warning about it.
Changing the assert definition in the way done here makes the variable used, from the language syntax perspective.
Semantically, the variable is still unused at run time (since sizeof can be evaluated at compile time), so the compiler
can optimize things away where possible.
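A hedged sketch of the technique described above (not necessarily the exact
definition used in the tree):

    #ifdef NDEBUG
    #undef assert
    // The expression appears inside sizeof, so every variable in it counts as
    // used for warning purposes, yet sizeof never evaluates it at run time.
    #define assert(__e) ((void) sizeof(__e))
    #endif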
When read caching was added, Page::findItem started modifying its itemIndex reference argument even if the item wasn't found.
Incidentally, Storage::findItem reused itemIndex when starting the search at the next page.
So, if
- the first page had a cached index (findItem had been called for that page), and it pointed to a non-zero index,
- the first page had a few empty items at the end (but was marked full),
- the next search looked up the item on the second page,
- and the index of the item on the second page was less than the cached index on the first page,
then the search would fail because the cached starting index was reused.
This change fixes both sides of the problem:
- Page::findItem should not modify its itemIndex argument if the item is not found
- Storage::findItem should not reuse itemIndex between pages
Two tests have been added.
Due to a previous flash write bug, it was possible to create multiple duplicate entries in a single page.
Recovery logic detected that case and bailed out with an assert.
This change adds graceful recovery from this condition.
Tests included.
Currently, a restart is required to recover a page from an invalid state.
The long-term solution is to detect such a condition and recover automatically (without a restart). This will be implemented in a separate change set.