- Add a fallback URL to download the openssl tar.gz from the "old" archives
- Add URL_HASH to the openssl external project,
to avoid re-downloading the archive if it has already been downloaded
and its integrity is verified
- Update curl_certificate table to use the newer openssl API,
so that it builds.
Remove a level of indirection when configuring and building formulas.
This should simplify working with them and also remove some issues
encountered when trying to build on Windows.
- Update libarchive to build from source on Windows and macOS
- Update yara to build from source on macOS
- Update librdkafka to build from source on macOS
- Build librdkafka with SSL and SASL_SCRAM support on Linux
- Update librpm to 4.15.1 to support the newer openssl
- Update libxml2 to build from source on Windows and macOS
- Update lzma to build from source on Windows and macOS
- Use the ICU library not only for boost but for libxml2 too
- Implement a workaround to have Buck builds still compile
with the old openssl version
When a query triggers multiple xFilter calls
and there's an operation that has to work on the sum of the rows
resulting from all those calls, we trigger a use-after-free
when such an operation tries to access the row data.
This happens because on each xFilter call we clear the rows
resulting from the previous xFilter call, and because
when returning the value of a text column we don't copy it,
but return a pointer to it.
A contrived example of a query with the issue is:
SELECT path=count(*) FROM file WHERE path = '/' OR path = '1'
This changes the last sqlite3_result_text parameter
from SQLITE_STATIC to SQLITE_TRANSIENT.
Addresses https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20833
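The ownership distinction behind the fix can be sketched in isolation (an illustrative stand-in, not osquery's or SQLite's actual code): SQLITE_STATIC hands SQLite a borrowed pointer that must outlive the statement, while SQLITE_TRANSIENT makes SQLite copy the bytes immediately.

```cpp
#include <string>
#include <vector>

// Illustrative stand-in for the two ownership modes of sqlite3_result_text:
// "static" keeps the caller's pointer, "transient" copies the bytes.
struct ResultText {
  const char* borrowed = nullptr;  // what SQLITE_STATIC would hold on to
  std::string owned;               // what SQLITE_TRANSIENT stores
  bool copied = false;

  void set(const char* s, bool transient) {
    if (transient) {
      owned = s;  // copy made immediately, safe from later frees
      copied = true;
    } else {
      borrowed = s;  // dangles as soon as the source string is destroyed
      copied = false;
    }
  }

  std::string value() const {
    return copied ? owned : std::string(borrowed);
  }
};

// Simulates reading a column value after a second xFilter call has
// cleared and refilled the row storage.
std::string readAfterRefill(bool transient) {
  std::vector<std::string> rows = {"/"};
  ResultText r;
  r.set(rows[0].c_str(), transient);
  rows.assign(100, std::string(64, 'X'));  // previous rows are destroyed
  return r.value();  // with transient == false this is a use-after-free
}
```

Calling readAfterRefill(false) reads freed memory, which is the bug described above; with true the copy survives, mirroring the switch to SQLITE_TRANSIENT.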
Parsing a configuration file as a JSON document
which contains deeply nested elements can lead to a stack overflow
when using the recursive parser of RapidJSON.
Since the configuration isn't changed or parsed frequently,
use the slower iterative parser instead.
Copying the configuration JSON document
that contains deeply nested elements, using the CopyFrom API,
can lead to a stack overflow, due to the recursive nature
of the RapidJSON GenericValue construction.
Detect the depth/nesting level of a config document
and limit it to 32 levels.
Using an iterative parser, while it avoids stack overflows,
can cause memory exhaustion if the config is too big.
Limit the maximum config size, stripped of its comments, to 1MiB.
Addresses https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20779
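A minimal sketch of how such limits could be enforced (illustrative names and code, not osquery's actual implementation; RapidJSON's iterative mode itself is selected with its kParseIterativeFlag): scan the document to measure bracket nesting and check the size before parsing.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Illustrative depth counter: track '{'/'[' nesting while skipping
// string literals, so brackets inside strings are not counted.
int jsonDepth(const std::string& json) {
  int depth = 0;
  int max_depth = 0;
  bool in_string = false;
  bool escaped = false;
  for (char c : json) {
    if (escaped) {
      escaped = false;
      continue;
    }
    if (in_string) {
      if (c == '\\') {
        escaped = true;
      } else if (c == '"') {
        in_string = false;
      }
      continue;
    }
    switch (c) {
      case '"': in_string = true; break;
      case '{':
      case '[': max_depth = std::max(max_depth, ++depth); break;
      case '}':
      case ']': --depth; break;
    }
  }
  return max_depth;
}

constexpr int kMaxConfigDepth = 32;                  // nesting limit above
constexpr std::size_t kMaxConfigSize = 1024 * 1024;  // 1MiB, comments stripped

bool configWithinLimits(const std::string& json) {
  return json.size() <= kMaxConfigSize && jsonDepth(json) <= kMaxConfigDepth;
}
```

Checking these bounds before handing the document to the parser keeps both the recursive CopyFrom path and the iterative parse path within predictable stack and memory budgets.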
Removed the columns "script" and "match" from the test,
since they do not belong to the chrome_extensions table
but to chrome_extension_content_scripts.
Added the missing integration test for the table
chrome_extension_content_scripts.
Do not close the http server after 10s if requests are still coming in,
since some tests may take more than 10s to run.
Reset the timer each time a request is received by the server instead.
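The reset-on-request policy can be sketched as follows (illustrative code, not the actual test server):

```cpp
#include <chrono>

// Illustrative idle-timeout tracker: instead of a fixed 10s lifetime,
// record the time of the last request and only report expiry once the
// server has been idle for the whole limit.
class IdleTimeout {
 public:
  explicit IdleTimeout(std::chrono::seconds limit)
      : limit_(limit), last_(std::chrono::steady_clock::now()) {}

  // Called whenever the server receives a request; resets the countdown.
  void onRequest() {
    last_ = std::chrono::steady_clock::now();
  }

  // True only when no request has arrived for `limit_` seconds.
  bool expired() const {
    return std::chrono::steady_clock::now() - last_ >= limit_;
  }

 private:
  std::chrono::seconds limit_;
  std::chrono::steady_clock::time_point last_;
};
```

With this policy a long-running test keeps the server alive as long as it keeps issuing requests, while an idle server still shuts down after the limit.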
This new toolchain contains a newer LLVM version (9.0.1),
a fix for the scan-build scripts, and it keeps the LLVM static libraries,
which are necessary to implement the new BPF framework and tables.
The "decorators" configuration value must be a JSON object,
otherwise we try to search through its nonexistent members
and dereference a null pointer.
Also added a regression test.
Addresses https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=19274
In the past the Windows agent changed the path where Python2
was installed; special logic was added which was supposed to test
whether the path existed, but it wasn't correct when
the PowerShell script is configured to abort at the first error.
Since the old path should not be present anymore,
we simply remove the logic and use the path we expect to exist.
With the increasing size of the build and of the respective ccache
and sccache caches, disk space is sometimes insufficient
and the build fails.
This deletes the build folder as the last step, since it shouldn't
be necessary anymore.
When the batch script that implements the build step was
changed to stop the sccache server as its last command,
all build failures started being ignored, because the last command,
always succeeding, was clearing out the exit status.
Batch scripts do not have a global "exit on error" option,
so the error level has to be checked manually, exiting with that error code.
* Limit regex_match and regex_split regex size
Add a new HIDDEN_FLAG, regex_max_size, with a default of 256 bytes,
which limits the size of the regex that can be used
with regex_match and regex_split SQL functions.
This is done because it's possible to create a regex
which makes the std::regex destruction go into a stack overflow,
due to too many alternation states (|).
Add a couple of tests to verify that the limit is correctly respected.
Restore the test for regex_split that originally hung when using
boost.
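A sketch of the size guard (function and constant names here are illustrative; in osquery the limit comes from the regex_max_size flag):

```cpp
#include <cstddef>
#include <optional>
#include <regex>
#include <string>

// Default limit on pattern size: 256 bytes (mirrors the flag's default).
constexpr std::size_t kRegexMaxSize = 256;

// Refuse to compile oversized or invalid patterns instead of risking a
// stack overflow when a pathological std::regex is destroyed.
std::optional<std::regex> compilePattern(const std::string& pattern) {
  if (pattern.size() > kRegexMaxSize) {
    return std::nullopt;  // too large: reject up front
  }
  try {
    return std::regex(pattern);
  } catch (const std::regex_error&) {
    return std::nullopt;  // malformed pattern
  }
}

// Sketch of a regex_match-style SQL function built on the guard:
// an oversized or invalid pattern simply produces no match.
bool regexMatch(const std::string& input, const std::string& pattern) {
  auto re = compilePattern(pattern);
  return re.has_value() && std::regex_search(input, *re);
}
```

Rejecting the pattern before construction keeps adversarial inputs from ever creating the deep std::regex state graph whose teardown overflows the stack.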