Using CI dependencies can waste a lot of bandwidth for target test jobs, as
example binary artifacts are very large. Now we parse the required
artifacts first, then use the API to download only the required files from
those artifacts.
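A minimal sketch of the idea, assuming a GitLab-style CI; the project ID, job ID, token variable and file path below are placeholders, not values from the real pipeline:

```python
# Download only the files a target test job needs from another job's
# artifacts, instead of pulling the whole (very large) artifacts archive.
# All IDs, the token variable and the file path are hypothetical.
import os
import requests

GITLAB_API = "https://gitlab.example.com/api/v4"
PROJECT_ID = "123"      # placeholder
JOB_ID = "456789"       # placeholder


def download_artifact_file(path, dest):
    # GitLab's "download a single artifact file by job ID" endpoint.
    url = "{}/projects/{}/jobs/{}/artifacts/{}".format(GITLAB_API, PROJECT_ID, JOB_ID, path)
    resp = requests.get(url, headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]})
    resp.raise_for_status()
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    with open(dest, "wb") as f:
        f.write(resp.content)


for required in ["examples/get-started/hello_world/build/hello_world.bin"]:
    download_artifact_file(required, required)
```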
We should only load each module once.
If we load the same module twice, Python regards the object loaded the first time and the object loaded the second time as different objects.
This leads to strange errors, e.g. `isinstance(object, type_of_this_object)` returning False.
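A minimal sketch of the failure mode, assuming a hypothetical file my_module.py that defines `class Thing`:

```python
# The module file name and class name are hypothetical.
import importlib.util

import my_module  # the "normal" first load

# Load the same source file a second time under a different module name.
spec = importlib.util.spec_from_file_location("my_module_copy", "my_module.py")
mod_copy = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod_copy)

obj = my_module.Thing()
# Both classes come from the same source file, but they live in two
# distinct module objects, so they are different class objects:
print(my_module.Thing is mod_copy.Thing)  # False
print(isinstance(obj, mod_copy.Thing))    # False
```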
If we have multiple configs, we need to flash the DUT with different binaries. But if we don't close the DUT before applying a new config, the old DUT will be reused, so the new config name will not be applied.
Currently we use the config and test function as filters when assigning cases to a CI job. This is not necessary, as the runner can run tests with different configs / test functions. Now we try to assign as many cases to a job as possible, to reduce the number of jobs required.
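A minimal sketch of the packing idea (the function name and the per-job case limit are hypothetical, not the assign script's real interface):

```python
# Hypothetical sketch: fill each CI job up to a case limit, regardless of
# which config or test function each case uses.
def assign_cases_to_jobs(cases, max_cases_per_job):
    jobs = []
    for start in range(0, len(cases), max_cases_per_job):
        jobs.append(cases[start:start + max_cases_per_job])
    return jobs
```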
This commit adds a pair of scripts, find_apps.py and build_apps.py.
These scripts are intended to be used in various CI jobs, building
multiple applications with different configurations and targets.
The first script, find_apps.py, is used to prepare the list of builds:
1. It finds apps for the given build system.
2. For each app, it finds configurations (sdkconfig files) which need
to be built.
3. It filters out the apps and configurations which are not compatible
with the given target.
4. It outputs the list of builds to stdout or a file. Currently the
format is a list of lines, each line a JSON string (a rough sketch is
given below). In the future, the tool can be updated to output YAML
files.
The lists of builds can be concatenated and processed with standard
command line tools, like sed.
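As a rough illustration (the field names are assumptions, not find_apps.py's exact schema), such a list can be consumed like this:

```python
# Hypothetical consumer of the build list produced by find_apps.py.
import json

# A line might look like (assumed fields):
# {"app_dir": "examples/get-started/hello_world", "config": "default", "target": "esp32"}
with open("build_list.txt") as f:
    builds = [json.loads(line) for line in f if line.strip()]

for build in builds:
    print(build)
```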
The second script, build_apps.py, executes the builds from the list.
It can execute a subset of builds based on --parallel-count and
--parallel-index arguments.
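A rough sketch of how such a subset could be selected (the real script's chunking and argument semantics may differ):

```python
# Illustrative-only: split the build list into `parallel_count` contiguous
# chunks and return the chunk for CI job number `parallel_index` (1-based).
def select_subset(builds, parallel_count, parallel_index):
    chunk_size = (len(builds) + parallel_count - 1) // parallel_count
    start = (parallel_index - 1) * chunk_size
    return builds[start:start + chunk_size]

# e.g. 10 builds over 4 jobs -> job 2 gets builds 3..5
subset = select_subset(list(range(10)), parallel_count=4, parallel_index=2)
```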
These two scripts are intended to replace build_examples_make,
build_examples_cmake, and the custom unit-test-app logic (in the
Makefile and idf_ext.py).
Closes IDF-641
Sometimes, libphy.a calls phy_enter_critical() to protect access to
critical sections, such as when operating on I2C, but it may not take
effect when both CPU cores call it. This can cause I2C access to block,
and it cannot be recovered by esp_restart(), only by a HW reboot.
1. Fix WiFi scan leading to poor Bluetooth performance.
2. Improve WiFi connection success ratio when coexisting with Bluetooth.
3. Check whether WiFi is still connected when CSA or beacon timeout happens.
4. Add coex pre-init.
Basically, the portability layer checks whether the socket is
non-blocking, and if it is not, even the EAGAIN and EWOULDBLOCK errors
are diverted to a RECV error. This causes a problem for sockets with a
receive timeout set: when such a timeout is set, the non-blocking
condition isn't met and hence a hard error is returned.
Searching for EAGAIN and EWOULDBLOCK in lwip returns only 3 results
(accept, recvfrom, close), and all of them look to be genuine cases for
EWOULDBLOCK, so this check is removed to make receive timeouts with TLS
work.
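The underlying OS behaviour is easy to demonstrate outside of lwip/mbedTLS; a minimal sketch, assuming Linux, showing that a blocking socket with a receive timeout still reports EAGAIN/EWOULDBLOCK when the timeout expires:

```python
# A socket with SO_RCVTIMEO set stays in blocking mode, yet the kernel
# reports an expired timeout as EAGAIN/EWOULDBLOCK -- the case that the
# removed check used to turn into a hard RECV error.
import errno
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
# 500 ms receive timeout via setsockopt (struct timeval on Linux);
# unlike socket.settimeout(), this leaves the fd in blocking mode.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVTIMEO, struct.pack("ll", 0, 500000))

try:
    s.recv(16)
except OSError as e:
    print(e.errno in (errno.EAGAIN, errno.EWOULDBLOCK))  # True
```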