Commit Graph

13 Commits

Author SHA1 Message Date
Mitchell Grenier
c8e116aa7d Reinstated the extra escape with changes
I put the original escape back in but redirected the call to a new function
that will escape characters in the form of \xNN when:

`byte < 0x20 || byte >= 0x80`

This leaves slashes alone and should fix this issue (a minimal sketch of the rule follows this entry).

UPDATE: Tests have also been added, including an English-text test to confirm the escape is a no-op on plain ASCII.
2015-03-23 10:49:28 -07:00
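
A minimal sketch of the rule described in the commit above, using a hypothetical helper name (illustrative only, not the actual function added to osquery): control bytes and bytes at or above 0x80 are rewritten as \xNN, while printable ASCII, including backslashes, passes through unchanged, so plain English text is left as-is.

```cpp
// Illustrative sketch only (not the actual osquery code): escape a string
// byte-by-byte, emitting \xNN for control bytes and bytes >= 0x80, while
// leaving printable ASCII, including backslashes, untouched.
#include <cstdio>
#include <string>

std::string escapeNonPrintable(const std::string& input) {
  std::string out;
  out.reserve(input.size());
  for (unsigned char byte : input) {
    if (byte < 0x20 || byte >= 0x80) {
      char buf[5];
      std::snprintf(buf, sizeof(buf), "\\x%02X", byte);
      out += buf;
    } else {
      out += static_cast<char>(byte);
    }
  }
  return out;
}

// Plain English input is a no-op: escapeNonPrintable("hello") == "hello".
// A control byte such as 0x01 becomes the four characters \x01.
```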
Mitchell Grenier
3d26cea88e [Fix #877] Removing an extra escape
I believe the cause of the problem was that an extraneous escape was happening
in the `addNewResults` function in query.cpp.

I believe this can be safely removed because its purpose is only to make things
JSON-safe. However, I don't think this function is ever called without a JSON
serialization happening later, which makes the extra escape unnecessary (see the
sketch after this entry).
2015-03-19 13:56:47 -07:00
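
To illustrate why an extra escape is redundant when a JSON serialization happens afterwards, here is a self-contained sketch with a hand-rolled stand-in for the serializer's string escaping (not osquery's actual serializer): escaping the same value twice turns a newline into the literal characters \\n in the final output instead of the intended \n.

```cpp
// Hypothetical illustration of the double-escaping problem: if results are
// pre-escaped and then escaped again by the JSON serializer, the output is wrong.
#include <iostream>
#include <string>

// Stand-in for the escaping a JSON serializer performs on string values.
std::string jsonEscape(const std::string& s) {
  std::string out;
  for (char c : s) {
    switch (c) {
      case '\\': out += "\\\\"; break;
      case '"':  out += "\\\""; break;
      case '\n': out += "\\n";  break;
      default:   out += c;
    }
  }
  return out;
}

int main() {
  std::string value = "line1\nline2";
  // Escaping once, at serialization time, yields the expected text: line1\nline2
  std::cout << jsonEscape(value) << std::endl;
  // A pre-escape gets escaped again by the serializer: line1\\nline2
  std::cout << jsonEscape(jsonEscape(value)) << std::endl;
  return 0;
}
```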
Bryan Eastes
93cb303abc Merge branch 'master' of github.com:facebook/osquery into 520_pt_json_workaround 2014-12-20 18:24:33 -08:00
Bryan Eastes
5ad8d3ec55 Changes from CR 2014-12-20 18:19:33 -08:00
mike@arpaia.co
b9f732c31f Updating the license comment to be the correct open source header
As per t5494224, all of the license headers in osquery needed to be updated
to reflect the correct open source header style.
2014-12-18 10:52:55 -08:00
Bryan Eastes
bd97cb501a First draft of workaround for #520 2014-12-10 00:15:27 -08:00
Teddy Reed
7c738c8497 Codemod to improve include search paths 2014-12-03 15:14:02 -08:00
mike@arpaia.co
b5ee19f49f Removing the osquery::db namespace 2014-09-21 14:27:09 -07:00
mike@arpaia.co
4a048db278 database namespace documentation 2014-09-15 17:13:22 -07:00
mike@arpaia.co
66a2a6fdec Fix performance issue with the disk serializer
This is the issue noted in #76. Keeping all historical results of
queries in the HistoricalQueryResults struct makes serializing and
deserializing those structs very, very slow as time goes on. By only
storing the last execution of the query, we keep the performance
constant, but we kill the feature where osquery can rebuild timelines
without accessing logs. After talking it over, we decided this isn't actually
that big of a deal because, if you really wanted to rebuild the old data, you
should be able to process the logs, similar to binlog replication in MySQL
(a sketch of the data-model change follows this entry).
2014-09-02 13:13:12 -07:00
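
A rough sketch of the data-model change described above, using hypothetical struct names rather than the actual osquery definitions: the "before" shape re-serializes the entire execution history on every write, so cost grows with time, while the "after" shape keeps only the most recent results and leaves historical reconstruction to log replay.

```cpp
// Hypothetical sketch of the trade-off described in the commit message.
#include <map>
#include <string>
#include <utility>
#include <vector>

using Row = std::map<std::string, std::string>;
using QueryData = std::vector<Row>;

// Before: every past execution is retained, so serialization and
// deserialization cost grows with the length of the history.
struct HistoricalQueryResultsSketch {
  std::vector<std::pair<int /* epoch */, QueryData>> executions;
};

// After: only the previous execution is stored (constant cost); rebuilding a
// timeline means replaying the logs, akin to MySQL binlog replication.
struct LatestQueryResultsSketch {
  QueryData mostRecent;
};
```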
mike@arpaia.co
e723306c13 Ran clang-format across the codebase 2014-08-15 12:29:51 -07:00
mike@arpaia.co
ec30260f37 core/status to status and header cleanup 2014-08-05 16:13:55 -07:00
mike@arpaia.co
73a32b7294 Initial commit 2014-07-30 17:35:19 -07:00