By default, any job will run unless a filter is given; in that case, the filter
determines whether the job should run. Some jobs do not need to run by
default and should only be triggered via the bot. For such jobs,
BOT_NEEDS_TRIGGER_BY_NAME can be added to the environment variables.
1. Change all the gattc APIs and the bta gattc layer.
2. Debug the code and change the btc_ble_gattc_get_db method.
3. Change the gatt read API interface.
4. Restructure the BTA_gattc_cache code.
5. Change bluedroid_get_status back to a macro.
6. Change the gattc docs format.
7. Change the docs format.
8. Fix the read characteristic value bug.
9. Change the gattc_get_attr_count method.
10. Change the bta_gattc write CCC code back.
11. Change the gattc API docs format.
12. Change the gattc API docs.
13. Change the prepare write descriptor method to avoid the exception.
14. Modify the GATT client demo to use the new API (see the sketch after this list).
15. Change the p_src_data->read.p_value handling to avoid an exception.
16. Fix the gattc unregister-for-notify bug.
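For reference, a minimal sketch of walking the rebuilt attribute cache from application code. It assumes the esp_ble_gattc_get_attr_count / esp_ble_gattc_get_db signatures found in current ESP-IDF; gattc_if and conn_id are placeholders normally supplied by the GATTC registration and connection events:

```c
#include <stdio.h>
#include <stdlib.h>
#include "esp_gattc_api.h"

/* Dump the locally cached attribute table for one connection. */
static void dump_gattc_db(esp_gatt_if_t gattc_if, uint16_t conn_id)
{
    uint16_t count = 0;

    /* Count every attribute over the full handle range. */
    if (esp_ble_gattc_get_attr_count(gattc_if, conn_id, ESP_GATT_DB_ALL,
                                     0x0001, 0xFFFF,
                                     ESP_GATT_ILLEGAL_HANDLE, &count) != ESP_GATT_OK
        || count == 0) {
        return;
    }

    esp_gattc_db_elem_t *db = malloc(count * sizeof(*db));
    if (db == NULL) {
        return;
    }

    /* Fetch the cached database in one call. */
    if (esp_ble_gattc_get_db(gattc_if, conn_id, 0x0001, 0xFFFF,
                             db, &count) == ESP_GATT_OK) {
        for (uint16_t i = 0; i < count; i++) {
            printf("attr type %d, handle 0x%04x\n",
                   db[i].type, db[i].attribute_handle);
        }
    }
    free(db);
}
```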
component/bt: Added the search service result's start_handle & end_handle to the result.
component/bt: Added the bta_gattc_cache_write call when GATT discovery completes.
component/bt: Added the bta_gattc_cache_write declaration.
component/bt: Added comments for the esp_ble_gattc_cache_refresh API (a usage sketch follows).
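A short usage sketch for that API; the peer address bytes are placeholders:

```c
#include "esp_err.h"
#include "esp_gattc_api.h"

/* Drop the cached attribute table for one peer so that the next
 * service discovery rebuilds it from the device. */
static void refresh_peer_cache(void)
{
    esp_bd_addr_t peer_bda = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66}; /* placeholder */
    ESP_ERROR_CHECK(esp_ble_gattc_cache_refresh(peer_bda));
}
```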
component/bt: Fix spelling errors & some comment errors.
component/bt: Fix the bug in getting the gattc cache address list.
1. Changed esp_bluedroid_get_status to a macro;
2. Added malloc & free for the get_addr_list.
component/bt: Added an addr_info->ass_addr == NULL check to prevent crashes in the bta_gattc_co_cache_find_src_addr function.
component/bt: Fixed the following gattc cache bugs:
1. gattc could not refresh the gattc cache while in the GATT discovery state;
2. Removed the nvs_get_blob call from the cache address init function;
3. Added a check of the nvs_set_blob return value in the cache address save function;
4. Added a list_new call when ass_address is NULL (see the sketch after this list);
5. Changed the ass_address list removal method to fix the bug where an associated address could not be removed.
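A minimal sketch of the NULL guard from item 4, assuming bluedroid's osi list API (list_new / list_append); the surrounding structure and function are illustrative, not the real cache code:

```c
#include <stdbool.h>
#include "osi/list.h"

/* Illustrative container: the real field lives in the gattc cache code. */
typedef struct {
    list_t *ass_addr;   /* addresses associated with one cache entry */
} cache_addr_info_t;

static bool cache_addr_add_assoc(cache_addr_info_t *info, void *assoc_addr)
{
    if (info->ass_addr == NULL) {
        /* item 4: create the list lazily instead of dereferencing NULL */
        info->ass_addr = list_new(NULL);
        if (info->ass_addr == NULL) {
            return false;   /* out of memory */
        }
    }
    return list_append(info->ass_addr, assoc_addr);
}
```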
1. Fix beacon transmission issues caused by too many TX retries on other packets.
2. Do not scan on connect if an RC already exists.
3. Change the scan dwell time to a default of 120 ms for the root.
Make the bootloader modular so that users can redefine its functional parts.
- Refactored and moved functions into the bootloader_support component.
- Changed the function to `void bootloader_utility_load_image(...);` (an illustrative sketch follows).
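As an illustration of the intent only: a hypothetical user-defined boot flow built on the relocated functions. The helper names and the parameter list of bootloader_utility_load_image are assumptions for this sketch, since the commit elides them:

```c
#include "bootloader_utility.h"   /* now provided by bootloader_support */

/* Hypothetical custom boot flow: with the generic steps moved into
 * bootloader_support, a user bootloader can reorder or replace them. */
void custom_boot(void)
{
    bootloader_state_t bs = {0};

    if (!bootloader_utility_load_partition_table(&bs)) {    /* assumed helper */
        for (;;) { }   /* nothing to boot */
    }

    int index = bootloader_utility_get_selected_boot_partition(&bs); /* assumed helper */

    /* Now returns void: on failure it falls through to recovery internally
     * instead of reporting a status to the caller. */
    bootloader_utility_load_image(&bs, index);   /* parameters assumed */
}
```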
TW19596
The current code for recovery after power-off does not clean up partially
erased items on FULL pages. If the erasure was part of a modification
operation, this happens to get cleaned up thanks to the duplicate detection
logic. For an erase-only operation, the problem remains. This patch adds
the recovery for FULL pages as well (a simplified sketch follows).
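A simplified sketch of the added recovery pass; the page/entry model below is illustrative and is not the actual NVS data layout:

```c
#include <stdbool.h>
#include <stddef.h>

#define ENTRIES_PER_PAGE 126

typedef enum { PAGE_ACTIVE, PAGE_FULL } page_state_t;

typedef struct {
    page_state_t state;
    bool written[ENTRIES_PER_PAGE];     /* entry still holds data */
    bool item_intact[ENTRIES_PER_PAGE]; /* entry belongs to a consistent item */
} page_t;

/* On mount, also visit FULL pages and finish erases that a power cut
 * interrupted mid-item; previously only non-FULL pages got this pass. */
static void recover_full_page(page_t *p)
{
    if (p->state != PAGE_FULL) {
        return;
    }
    for (size_t i = 0; i < ENTRIES_PER_PAGE; i++) {
        if (p->written[i] && !p->item_intact[i]) {
            p->written[i] = false;  /* clean up the partially erased item */
        }
    }
}
```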
Closes TW<20284>
With V=0, the build process would print "including .../Makefile.projbuild"
lines, breaking the print_flash_cmd target. The issue was due to the way
macro expansion works in make: inside a macro expanded with $(eval $(call ...)),
the info function must be written with two dollar signs, $$(info ...), so
that its evaluation is delayed until the expanded block is executed rather
than happening when the macro is expanded.
A test for the print_flash_cmd target was added.
When erasing a variable length item with an incorrect CRC32, the span
value of the item cannot be trusted, so the item is erased with
span = 1. Subsequent entries hold the data of the variable length
item, and these will be treated as separate items. For each entry the
CRC32 is checked; the check most likely fails (because the entry
contains arbitrary data rather than a proper NVS item), and the entry
is erased. The erase function used to assume that every item is present
in the cache, but that is not the case for entries which are just parts
of an item's payload. This change allows an item to be absent from the
hashlist when its CRC32 check fails (a minimal illustration follows).
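A minimal model of the change, with made-up names and structures rather than the real NVS code:

```c
#include <stdbool.h>
#include <stddef.h>

/* The hash list caches one slot per *item*; entries that are only payload
 * of a corrupted variable-length item legitimately have no slot. */
typedef struct {
    size_t index[16];
    size_t count;
} hashlist_t;

static bool hashlist_erase(hashlist_t *hl, size_t entry_index)
{
    for (size_t i = 0; i < hl->count; i++) {
        if (hl->index[i] == entry_index) {
            hl->index[i] = hl->index[--hl->count];  /* swap-remove */
            return true;
        }
    }
    return false;   /* not cached */
}

/* Before the fix this path treated a cache miss as a fatal inconsistency;
 * now a miss is tolerated whenever the entry failed its CRC32 check. */
static int erase_item_entry(hashlist_t *hl, size_t entry_index, bool crc_ok)
{
    if (!hashlist_erase(hl, entry_index) && crc_ok) {
        return -1;  /* a valid item really must have been in the cache */
    }
    return 0;       /* erased with span = 1 */
}
```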
Currently, the build system uses `git` commands to check the IDF version
and the existence of submodules. However, there is a possible use case
where `git` is not installed on the system (for example, when IDF is
flattened into source form), and the build should still proceed without
any warnings.
Signed-off-by: Mahavir Jain <mahavir@espressif.com>
The current page selection algorithm picks a page for compaction based on erased counts
alone and gives up when it does not find any page with an erased count greater than 0.
This is problematic, since the current allocation procedure skips the active page when
there is not enough room for the item in that page, leaving free chunks behind on the
pages. This change modifies the algorithm to consider both the erased and the free
counts of the candidate pages (see the sketch below).
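A small sketch of the revised selection rule under these assumptions (the field and function names are made up for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative per-page statistics. */
typedef struct {
    uint32_t erased_count;  /* entries whose items were deleted */
    uint32_t free_count;    /* entries never written (skipped-over room) */
} page_stats_t;

/* Pick the page whose compaction reclaims the most entries, counting
 * both erased and never-used slots; previously only erased_count > 0
 * pages were considered. */
int select_page_for_compaction(const page_stats_t *pages, size_t n)
{
    int best = -1;
    uint32_t best_score = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t score = pages[i].erased_count + pages[i].free_count;
        if (score > best_score) {
            best_score = score;
            best = (int)i;
        }
    }
    return best;    /* -1 if no page would reclaim anything */
}
```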
Closes TW<20297>