There used to be a dummy phase before the out phase in common command
transactions, which corrupted the data.
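For reference, a minimal sketch of a common-command transaction without
the dummy phase (the fix applied below); the descriptor and field names
are illustrative only, not the actual HAL types:

    #include <stdint.h>

    /* Illustrative transaction descriptor; field names are not the HAL's. */
    typedef struct {
        uint8_t        command;       /* command byte, e.g. 0x01 (WRSR) */
        uint8_t        dummy_bitlen;  /* dummy cycles between command and data */
        uint8_t        mosi_len;      /* bytes driven in the out phase */
        const uint8_t *mosi_data;
    } common_cmd_trans_t;

    /* The old layout inserted a non-zero dummy phase before the out phase, so
     * the flash sampled the out-phase bytes shifted by the dummy cycles and
     * the written data was corrupted. Common commands now carry no dummy
     * phase at all. */
    static const uint8_t sr_value = 0x02;
    static const common_cmd_trans_t write_status = {
        .command      = 0x01,   /* WRSR */
        .dummy_bitlen = 0,      /* no dummy phase before the out phase */
        .mosi_len     = 1,
        .mosi_data    = &sr_value,
    };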
The old code never actually operated on (cleared) the QE bit once it
found the QE bit set, which made it hard to check whether the QE
set/disable functions work properly.
This commit:
1. Removes the dummy phase.
2. Sets and clears the QE bit according to the chip settings, which makes
   the QE bit testable. However, for some chips (Winbond, for example),
   clearing the QE bit is not forced if the chip is unable to clear it.
3. Refactors the code so that chip_generic and the other chip drivers
   share the same code to read and write the QE bit, and so that common
   commands and read commands share configure_host_io_mode (see the
   sketch after this list).
4. Renames "read mode" to "io mode", since data may also be written in
   quad mode one day.
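Below is a rough sketch of the shared QE handling described in points 2
and 3, assuming hypothetical hook and function names (qe_ops_t,
flash_set_io_mode); the real driver's configure_host_io_mode and chip
hooks have different signatures:

    #include <stdbool.h>

    typedef int flash_err_t;            /* stands in for esp_err_t */
    #define FLASH_OK 0

    typedef struct flash_chip flash_chip_t;

    /* Hooks filled in by chip_generic or a vendor driver (e.g. Winbond),
     * so the QE read/write logic itself is shared. */
    typedef struct {
        flash_err_t (*read_qe)(flash_chip_t *chip, bool *qe);
        flash_err_t (*write_qe)(flash_chip_t *chip, bool qe);
        bool clear_qe_strict;           /* false for chips that may leave QE set */
    } qe_ops_t;

    /* Configure the requested I/O mode (renamed from "read mode": the same
     * path may serve quad writes one day). */
    flash_err_t flash_set_io_mode(flash_chip_t *chip, const qe_ops_t *ops,
                                  bool want_quad)
    {
        bool qe;
        flash_err_t err = ops->read_qe(chip, &qe);
        if (err != FLASH_OK) {
            return err;
        }
        if (qe == want_quad) {
            return FLASH_OK;            /* already in the desired state */
        }
        err = ops->write_qe(chip, want_quad);
        if (err != FLASH_OK && !want_quad && !ops->clear_qe_strict) {
            /* Not forced to clear QE on chips that cannot; tolerate the failure. */
            return FLASH_OK;
        }
        return err;
    }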
During coredump, the dangerous-area check should be disabled, and cache
disabling should be replaced by a safer version.
The dangerous-area check used to be in the HAL, but it fits better among
the OS functions, so it has been moved there. Interfaces are provided to
switch to coredump-safe OS functions during coredump.
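A rough sketch of the OS-function switch, with hypothetical type and
function names (the driver's actual os_functions structure and
registration interface differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical OS-function table: the dangerous-area check lives here
     * now, not in the HAL. */
    typedef struct {
        void (*start)(void);   /* e.g. take locks / disable cache */
        void (*end)(void);     /* restore cache / release locks */
        bool (*region_protected)(uint32_t addr, uint32_t len);
    } flash_os_functions_t;

    static void start_default(void) { /* disable cache, take lock */ }
    static void end_default(void)   { /* restore cache, release lock */ }

    static bool region_protected_default(uint32_t addr, uint32_t len)
    {
        /* Normal operation: reject operations that overlap regions in use
         * (running app, etc.). Placeholder check for the sketch. */
        return addr < 0x10000 && len != 0;
    }

    /* Coredump variants: no dangerous-area check, safer cache handling. */
    static void start_coredump(void) { /* safer cache-disable variant */ }
    static void end_coredump(void)   { }
    static bool region_protected_never(uint32_t addr, uint32_t len)
    {
        (void)addr; (void)len;
        return false;
    }

    static flash_os_functions_t s_os = {
        .start = start_default,
        .end = end_default,
        .region_protected = region_protected_default,
    };

    /* Interface used to switch to coredump-safe OS functions. */
    void flash_os_set_coredump_functions(void)
    {
        s_os.start = start_coredump;
        s_os.end = end_coredump;
        s_os.region_protected = region_protected_never;
    }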
When legacy mode is used, the coredump still fails during linking,
because "esp_flash_init_default_chip", "esp_flash_app_init" and
"esp_flash_default_chip" are not compiled and linked.
Instead of wrapping the callers in ``#if`` macros, these functions are
guarded by ``#if`` macros in the header and are not compiled in the
sources. The "esp_flash_default_chip" variable is still compiled, with a
safe default value.
Add support for getting the write protection status, and fix the
duplicated set_write_protection link.
All write-protection checks in the top layer are removed. The lower
(chip) level should ensure that write protection is disabled before an
operation starts.
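A sketch of the intended responsibility split at the chip layer; the hook
names below are illustrative rather than the real chip driver interface:

    #include <stdbool.h>
    #include <stdint.h>

    typedef int flash_err_t;            /* stands in for esp_err_t */
    #define FLASH_OK 0

    /* Illustrative chip object carrying its low-level hooks. */
    typedef struct flash_chip {
        flash_err_t (*get_write_protect)(struct flash_chip *chip, bool *wp);
        flash_err_t (*set_write_protect)(struct flash_chip *chip, bool wp);
        flash_err_t (*erase_sector_raw)(struct flash_chip *chip, uint32_t addr);
    } flash_chip_t;

    /* The top layer no longer checks write protection; the chip layer
     * clears it itself right before the operation starts. */
    flash_err_t chip_erase_sector(flash_chip_t *chip, uint32_t addr)
    {
        bool wp = true;
        flash_err_t err = chip->get_write_protect(chip, &wp);
        if (err != FLASH_OK) {
            return err;
        }
        if (wp) {
            err = chip->set_write_protect(chip, false);
            if (err != FLASH_OK) {
                return err;
            }
        }
        return chip->erase_sector_raw(chip, addr);
    }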